Ontolog Forum

Revision as of 20:39, 18 December 2012. Last updated at 2012-01-31 18:01:14 by user MikeBennett.

OntologySummit2012: (X-Track-A1) "Ontology Quality and Large-Scale Systems" Community Input

Track Co-Champions: Dr. AmandaVizedom & Mr. MikeBennett

Mission Statement:

This cross-track aspect will focus on the evaluation of ontologies within the context of Big Systems applications. Whether creating, developing, using, reusing, or searching for ontologies for use in big systems, engineers, architects, designers, developers and project owners will encounter questions about ontology evaluation and quality. How should those questions be answered? How do we know whether an ontology is fit for use in (or on) a large-scale engineered system or a large-scale systems engineering effort? This cross-track aspect ties together the evaluation-related discussions that arise within the Summit Tracks and individual sessions, providing a context in which to take up and address the issues generally. Specific focus will evolve with recurring themes, potentially including such topics as ontology quality characteristics, fitness for purpose, requirements, metrics, evaluation methodologies and resources.   

see also: OntologySummit2012_Quality_Synthesis

General Discussion

2012.01.25, AmandaVizedom:

Some initial thoughts on the scope of this cross-track topic and potential threads within it:

Already, after the first events of OntologySummit2012, a variety of quality-related issues have come up. More are likely, in the judgement of your humble co-champions. Here, we begin to gather these issues under one umbrella.

  • Meta-topic: Questions about the Quality Cross-track
    • Question: Is this about the quality of ontologies for large-scale systems, or about the quality of such systems themselves?
      • Response (AmandaVizedom): This track is specifically focused on the quality of the ontologies. This does not rule out discussion of the quality of systems incorporating, or engineered using, ontologies, insofar as dependencies exist between the two. The focus, however, is on ontology quality specifically.
    • Question: Is it possible to say anything meaningful about ontology quality within the limits of the cross-track, given how much debate there is, and how little settlement, about ontology quality?
      • Response (AmandaVizedom): Indeed, the unsettled state of the question, in contrast to the significant effect ontology quality has on systems that incorporate ontologies, is precisely why the track was suggested. No argument, then, as to whether this is a reasonable question. Here's why I think we *can* make useful progress: because we are limited by the specific focus of the track, and the summit itself, on ontologies for large-scale systems and systems engineering. We are therefore obligated to confine ourselves to discussing ontology quality 'as it makes a difference to' large-scale systems and systems engineering. We will exercise some restraint, and focus within this motivating context. This practical focus takes a considerable amount of potential discussion out of scope. It also gives us an agreed reference direction for the discussions we do have: the direction of big systems and systems engineering use cases, and the ways in which characteristics of ontologies support or fail to support those use cases.
  • Topic: Elements, Dimensions, or Degrees of Ontology Quality. When we speak of ontology quality, even when focusing on what makes a difference to large-scale systems & systems engineering, we may be thinking of many different things. The importance of considering these different things separately has been raised. How to usefully frame and identify these quality-related things, however, is neither clear nor standardized. Within the summit discussion, at least these ways of analyzing ontology quality have been suggested:
    • in terms of "how much quality is needed,"
    • in terms of types or dimensions of ontology quality,
    • in terms of distinct characteristics (or features, or properties) that ontologies may have, fail to have, or have in degrees.

How should we understand ontology quality? What manner of breaking down this complex notion will be most useful?

  • Topic: Metrics and Measurement. What metrics are available for ontology quality? What methods of measurement? What's missing? For what elements of ontology quality are metrics and measurements needed but missing? How easily might such metrics and measurements be developed?
  • Topic: Ontology Evaluation. How are ontologies evaluated? How should they be evaluated? How much ontology evaluation is general (use-independent)? How much ontology evaluation is use-specific? Are currently used methods of ontology evaluation any good? Good enough?
  • Topic: Specification of Ontology Requirements. How are ontology requirements for systems specified? How should they be specified? What guidance or assistance is available for specifying ontology requirements? What guidance or assistance is needed?
  • Topic: Use Cases. What about past and present Big Systems that incorporate ontologies? How do (did) they manage ontology quality? How well does (did) that work? What do these use cases contain by way of issues, solutions, lessons learned, and challenges regarding ontology quality?
  • Topic: Relating Use Cases, Requirements, and Ontology Quality. What are the relationships between use cases, ontology requirements, and elements, dimensions, or degrees of ontology quality?
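The kinds of structural metrics the Metrics and Measurement topic asks about can be illustrated in a few lines. This is only a toy sketch with a hypothetical class hierarchy (a dict mapping each class to its parents), not any standard metric set:

```python
# Toy sketch of structural ontology metrics over a hypothetical
# class hierarchy, represented as {class: set of parent classes}.

def depth(hierarchy, cls):
    """Longest path from cls up to a root (a class with no parents)."""
    parents = hierarchy.get(cls, set())
    if not parents:
        return 0
    return 1 + max(depth(hierarchy, p) for p in parents)

def metrics(hierarchy):
    """A few simple structural measures: size, root count, taxonomy depth."""
    classes = set(hierarchy) | {p for ps in hierarchy.values() for p in ps}
    roots = {c for c in classes if not hierarchy.get(c)}
    return {
        "classes": len(classes),
        "roots": len(roots),
        "max_depth": max(depth(hierarchy, c) for c in classes),
    }

# Hypothetical hierarchy: Sedan -> Car -> Vehicle, Truck -> Vehicle
toy = {
    "Vehicle": set(),
    "Car": {"Vehicle"},
    "Truck": {"Vehicle"},
    "Sedan": {"Car"},
}
print(metrics(toy))  # {'classes': 4, 'roots': 1, 'max_depth': 2}
```

Structural measures like these are cheap to compute but say nothing about fitness for purpose on their own; they would need to be tied to requirements, as the later topics discuss.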

2012.01.31, MikeBennett:

  • Background: What is Quality?
    • There are really two distinct usages of the term 'Quality' in circulation:
      • The natural language sense: 'How good is this thing?'
      • The sense used in Quality Assurance: 'How is this thing shown to comply with the stated requirements?'
    • We could think of these as qualitative quality and quantitative quality (the Q word was not really a good choice for QA - it's really about having processes that demonstrate control over deliverables)
      • These are different. For example, McDonald's Golden Arches arguably have the best quality assurance of any restaurant chain, but they do not have the best burgers. It is the consistency of production that is monetizable for them.
      • There is some cross-over:
        • to the extent that you can quantify "what makes a good ontology?" you can build the answers to this into the formal specifications against which an ontology is verified and validated
        • you can also build into the development process the design reviews and other activities required to ensure that those who understand these intangibles are able to provide input to the process, and to correct or veto deliverables - just as in code development you would have design reviews, application of agreed design conventions, and so on.
  • Background: Applying Quality to Big Systems
    • Not all big systems are engineered, and not all engineering is big systems.
    • Here we are looking at ontology quality both for engineered and for non-engineered big systems
    • Non-engineered systems (big or otherwise) introduce a new dimension into what an ontology is required to do
      • and therefore, for quantitative quality, how one is to demonstrate that these requirements are met.
    • Example: Engineered systems are designed by intelligent agents (people) and so are designed to operate within clearly defined and formally specified parameters
      • for instance (simplifying wildly), a feedback control system is designed to operate within one stable quadrant of all the available behaviors of the system (the unstable quadrants are where you will find behaviors like unstable oscillations, things that fly apart etc.). The designer knows not to cross these mathematically defined boundaries
      • non-designed, i.e. emergent, systems are subject to no such constraints. Complex dynamic systems have no predefined bounds on their behavior
    • What does this mean for ontology?
      • An ontology for an engineered system has a reasonably well defined set of ontological commitments:
        • the scope, granularity, and so on of the terms within it match the scope of the engineering design and the granularity of the terms the engineer needed to use in designing it.
        • This should be reasonably amenable to formal quality controls.
      • For emergent systems (or any non-engineered system) the choices of ontological commitment must be made by the ontologist
        • What granularity of description of kinds of 'thing' is appropriate to adequately describe the system for whatever purposes it is to be described? (This depends on the use case for the ontology itself.)
        • What features or aspects of the system need to be described for the purposes to which the ontology is to be put?
        • What theoretical or descriptive framework is appropriate?
      • Much of this may be intangible, qualitative quality. How and to what extent can elements of this be quantified, or experts brought into the review process who understand these questions?
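The earlier point, that whatever can be quantified about "what makes a good ontology?" can be built into the formal specifications an ontology is verified against, can be sketched very simply. The requirement checked here ("every class has a label and a textual definition") and all names are hypothetical, chosen only to illustrate the pattern of turning stated requirements into automated checks:

```python
# Sketch of verifying an ontology against stated quality requirements.
# Classes are represented as a hypothetical {name: metadata} dict.

def check_requirements(classes):
    """Return a list of (class, requirement) violations for the
    example requirements: every class has a label and a definition."""
    violations = []
    for name, meta in classes.items():
        if not meta.get("label"):
            violations.append((name, "missing label"))
        if not meta.get("definition"):
            violations.append((name, "missing definition"))
    return violations

toy = {
    "Vehicle": {"label": "Vehicle", "definition": "A means of transport."},
    "Car": {"label": "Car", "definition": ""},
}
for cls, req in check_requirements(toy):
    print(f"{cls}: {req}")  # Car: missing definition
```

Checks like this cover only the quantifiable side; the intangible, qualitative side still needs expert review built into the process, as noted above.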

I made an attempt during today's call (Session 03, ConferenceCall_2012_01_26) to note some of the quality-related issues raised and remarks made. I'm sure I didn't get everything, or get everything quite as the speaker intended. Add and correct!

JackRing, slide 8:

  • Is the ontologists' goal (iteration stop rule) Proof of Correctness or Fit for Purpose or what?
  • How to ensure the continual integrity of any resulting ontology?

AnatolyLevenchuk, slide 5:

  • For ontology-based formalization of systems engineering, the ontologies used must not be "folk" or "common sense" ontologies, but must be counter-intuitive, based in "engineering state-of-the-art", "knowledge about things, not about descriptions of things."

AnatolyLevenchuk, slide 7:

  • The needed type of ontology, supporting "engineering artifact...processing... needs combined usage of terminology/semiotics and ontology."

AnatolyLevenchuk, slide 8:

  • The needed ontology must evolve, and include:
    • less formal semantics, more formal pragmatics
    • multi-agent belief revision theory
    • separation of administrative and ontology domains (units of ontology maintaining/editing/communication/library granularity and units of belief revision)

AnatolyLevenchuk, slide 10:

  • The needed ontology must support:
    • "knowledge computations"
    • induction, deduction, and abduction


  • Many examples of failure to understand the relations between concepts in different models, especially examples in which there was a failure to understand that the concepts do not refer to the same thing.
  • Need to incorporate both formal ontological theories and conceptual modeling principles