
Ontology Summit 2013 Track C: Building Ontologies to Meet Evaluation Criteria

Track Co-champions: Matthew West & Mike Bennett

Background

There are two approaches that can be taken to assuring the quality of an ontology:

1. Measure the quality of the result against the requirements that it should meet.

2. Use a process or methodology which will ensure the quality of the resultant ontology.

If you wait until the end of ontology development to measure quality, the cost of correcting any errors is likely to be high. Using a process or methodology that builds quality into an ontology as it is developed can therefore have significant benefits. At present, however, it is unclear whether any process or methodology, if followed, is sufficient to guarantee the quality of the resulting ontology, and most of those that do exist are relatively informal and tend to require expert support.

A consideration in evaluating ontologies is the different scenarios in which they are used. For example, one ontology might be used as a formal conceptual model to inform development, while another might be used directly in an ontology-based application. Both the evaluation criteria and the development methodologies employed may vary widely between such scenarios.

Mission

To investigate the state of the art in ontology development methodologies, including key achievements and key gaps that currently exist.

Objectives

1. Examine the explicit and implicit methodologies that are known to exist.

2. Understand the role that upper ontologies play in ontology development methodologies.

3. Understand the role of ontological patterns in ontology development methodologies.

4. Identify how to apply the intrinsic and extrinsic aspects of ontology evaluation identified by the other tracks within the applicable development methodologies.

5. Identify how to frame the applicable ontology development methodologies within the frameworks of established quality assurance regimes (such as ISO 9000 and CMMI) for industrial applications.


Synthesis & Track Input to the Communique

see: http://ontolog.cim3.net/forum/ontology-summit/2013-04/msg00000.html

Please find below the initial input from Track C for the Summit Communique.

For section B: Introduction

This section should answer mainly these questions:

(1) Why is ontology evaluation important?

  • Establishing requirements (agreed between the users and the developers of an ontology) that the ontology must meet in order to satisfy the needs of its application gives those developing the ontology a better chance of meeting those requirements (you cannot fail to meet unstated requirements, but neither can you deliberately meet them).
  • Confirming that an ontology meets the requirements should be part of the acceptance of an ontology in a wider systems development context. There may be several stages of development and maintenance with different levels of requirements at different stages.
  • When looking to reuse rather than reinvent an ontology, an evaluation of the ontology in terms of the requirements it meets will make it easier to identify an ontology that may appropriately be reused, in whole or in part, for some other purpose.

(2) What is the scope of this document?

In the communique we focus on the evaluation of ontologies under the following aspects:

  • Is the domain represented appropriately (given the requirements of the IT system)?
  • Is the ontology human-intelligible?
  • Is the ontology maintainable?
  • Does the query/reasoning capability and performance meet the requirements of the IT system? (A minimal check of this aspect is sketched after this list.)
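
To make the last aspect concrete, the sketch below shows one way a query-capability requirement could be checked mechanically: a competency question is expressed as a SPARQL ASK query and timed against the ontology. This is a minimal illustration only, assuming Python with the rdflib library; the file name pump-ontology.ttl, the example.org namespace, and the competency question itself are hypothetical.

  # Minimal sketch: evaluate one query-capability requirement by timing
  # a competency question (as a SPARQL ASK) against the ontology.
  # Assumes rdflib; the file and namespace below are hypothetical.
  import time
  from rdflib import Graph

  g = Graph()
  g.parse("pump-ontology.ttl")  # hypothetical ontology file

  # Hypothetical competency question: which system is a pump installed in?
  ASK = """
  PREFIX ex: <http://example.org/plant#>
  ASK { ?pump a ex:Pump ; ex:installedIn ?system . }
  """

  start = time.perf_counter()
  answered = g.query(ASK).askAnswer  # True if the pattern can be answered
  elapsed = time.perf_counter() - start
  print(f"competency question answered: {answered} ({elapsed:.3f}s)")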

For section C: The State of the Art of Ontology Evaluation

This section should cover these topics:

  • (1) The terminological distinctions that we use in the rest of the text. [all tracks]
  • (2) What are the desirable characteristics of ontologies and how are they measured? For each of the main kinds of ontology evaluation, it should highlight desirable characteristics of ontologies (e.g., reusability) and measurable metrics (e.g., natural language definitions of classes and relations) linked to them. This communique should not strive for an exhaustive list, but should focus on the most important characteristics. [track A, track B]
Track C noted that, for integrating ontologies, consistency is a critical property. Achieving consistency across large, and potentially geographically and culturally diverse, development and maintenance teams is a particular challenge in methodology development. (A minimal machine check of logical consistency is sketched below.)
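
As an illustration of how such a consistency requirement can be checked mechanically, the sketch below loads an integrated ontology and classifies it with a description logic reasoner. It assumes Python with the owlready2 library (whose bundled HermiT reasoner requires a Java runtime); the file path merged.owl is hypothetical, and a real integration pipeline would run a gate like this on every change.

  # Minimal consistency gate for an integrated ontology, assuming
  # owlready2 (its bundled HermiT reasoner needs Java). The file
  # path below is hypothetical.
  from owlready2 import (get_ontology, default_world, sync_reasoner,
                         OwlReadyInconsistentOntologyError)

  onto = get_ontology("file:///tmp/merged.owl").load()  # hypothetical path

  try:
      with onto:
          sync_reasoner()  # classify with HermiT; raises on inconsistency
  except OwlReadyInconsistentOntologyError:
      print("FAIL: the integrated ontology is logically inconsistent")
  else:
      # Unsatisfiable classes leave the ontology consistent overall,
      # but usually signal errors introduced during integration.
      unsat = list(default_world.inconsistent_classes())
      print("PASS: consistent" if not unsat
            else f"WARN: unsatisfiable classes: {unsat}")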
  • (3) What best practices should one adopt (across the whole ontology life cycle) to ensure that ontologies have the desired characteristics identified in C-2? Ideally this section should be organized by the characteristics mentioned in C-2; at a minimum there needs to be a clear correlation between the desirable characteristics and best practices. [track C]
The development process for an ontology needs a number of stages, just like the data model in a traditional information systems development process. Requirements likewise need to be identified at several levels: starting with the capabilities of the overall system of which the ontology is a component, down to the capabilities of the ontology itself in that setting, then to high-level requirements, such as consistency, and finally to detailed requirements, such as conforming to naming standards (a mechanical check of one such detailed requirement is sketched below). Ontology development needs to go through matching stages, equivalent to the conceptual, logical, and physical stages of data model development in information systems. There are architectural decisions to be made about the ontological commitments the ontology needs to make and does make, and there are choices of ontology language and implementation environment. There is little evidence of this in current practice, where ontology development often seems to start with someone writing some OWL or Common Logic.
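
By way of illustration, the sketch below shows how one detailed requirement of the kind mentioned above, a naming standard for classes, could be checked mechanically. It assumes Python with the rdflib library; the file name ontology.ttl and the UpperCamelCase convention are hypothetical choices, not a standard prescribed by the Summit.

  # Minimal sketch: check the detailed requirement that OWL class
  # names are UpperCamelCase. Assumes rdflib; the file name and the
  # convention itself are illustrative.
  import re
  from rdflib import Graph, RDF, OWL, URIRef

  CAMEL = re.compile(r"^[A-Z][A-Za-z0-9]*$")  # illustrative convention

  def local_name(uri):
      # Fragment after '#' if present, else the last path segment.
      return str(uri).rsplit("#", 1)[-1].rsplit("/", 1)[-1]

  g = Graph()
  g.parse("ontology.ttl")  # hypothetical ontology file

  violations = [local_name(c)
                for c in g.subjects(RDF.type, OWL.Class)
                if isinstance(c, URIRef)  # skip anonymous class expressions
                and not CAMEL.match(local_name(c))]
  print(f"{len(violations)} naming violation(s): {violations}")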
  • (4) What tool support is currently available to support the evaluation of the characteristics identified in C-2 and the best practices identified in C-3? Again, the point is not to give an exhaustive list of all available tools, but to draw an explicit connection between the results of the other tracks and the findings of the tool track. [track D]
There is little or no integrated tool support for multi-level/multi-stage ontology development, beyond some tools that directly support the development of ontologies at the physical level.

For section D: Future Steps

What needs must be addressed in order to improve the situation for ontology evaluation and thus, indirectly, to improve the quality of the ontologies that are produced? This includes theoretical contributions (e.g., a better understanding of the desirable characteristics or the development of better metrics) as well as filling gaps in tool support. [all tracks]

  • A better understanding of the relationships between requirements at different levels, and of how low-level requirements support higher-level ones.
  • Ontology development methodologies that align with information systems development and recognize similar distinct conceptual, logical, and physical stages, so that ontology development does not start at the physical level with the choice of an implementation language.
  • A clearer understanding of the architecture of ontology development and the different aspects of architecture that are relevant, from ontological commitments to language choices.

--

maintained by the Track-C champions: Matthew West & Mike Bennett ... please do not edit