
NIST-Ontolog-NCOR Mini-Series: Ontology Measurement and Evaluation - Kick-off Session - Thu 19-Oct-2006

  • Topic: "NIST-Ontolog-NCOR Mini-Series: Ontology Measurement and Evaluation - Kick-off Session"

Conference Call Details

  • Date: Thursday, October 19, 2006
  • Start Time: 17:30 UTC / 6:30pm BST / 1:30pm EDT / 10:30am PDT (see world clock for other time zones)
    • Duration: 2.0 hours
  • Dial-in Number: +1-641-696-6600 (Iowa, USA)
    • Participant Access Code: "686564#"
  • Shared-screen support (VNC session) will be started 5 minutes before the call at: http://vnc2.cim3.net:5800/
    • view-only password: "ontolog"
    • if you plan to log into this shared-screen option (which the speaker may be navigating), and you are not familiar with the process, please try to call in 5 minutes before the start of the session so that we can work out the connection logistics. Help on this will generally not be available once the presentation starts.
    • people behind corporate firewalls may have difficulty accessing this. If that is the case, please download the slides below and run them locally. The speaker will prompt you to advance the slides during the talk.
  • An RSVP to peter.yim@cim3.com is appreciated, to allow us to prepare enough conferencing resources.
  • Please note that this session will be recorded, and the audio archive is expected to be made available as open content to our community membership and the public at-large under our prevailing open IPR policy.

Attendees

  • Also Expected (and probably joined us after the roll call):
    • ...(to register for participation, please add your name here or e-mail <peter.yim@cim3.com> so that we can reserve enough resources to support the session.)...

Background

This is the first event in a mini-series of talks and discussions revolving around the topic "Ontology Measurement and Evaluation", during which this community will explore the landscape, issues and solutions relating to the measurement, evaluation, quality and testing of ontologies.

This is a joint NIST-Ontolog-NCOR initiative. A planning meeting for this mini-series took place on 22-Aug-2006, during which the scope and plans for the program were discussed among members of the community. Dr. Steven Ray, Chief of NIST's Manufacturing Systems Integration Division (MSID), a long-time member of the Ontolog community, and the convener of NCOR's Ontology Evaluation Committee, was invited to champion the program. The series is expected to last about six months, during which invited-speaker and technical-discussion events will be featured (at the rate of about one event per month).

See also: OntologyMeasurementEvaluation (the 'project' homepage for this mini-series)

Agenda & Proceedings: "Ontology Measurement and Evaluation" - Mini-series Kick-off Session

This session will begin with opening remarks by representatives of the co-organizers of this mini-series: Professor Barry Smith (NCOR), Mr. Peter Yim (Ontolog) and Dr. Steven Ray (NIST). Dr. Ray (program lead) will kick off the program by giving us an overview of this "Ontology Measurement and Evaluation" series. He will then introduce our keynote speaker, Dr. Christopher Welty of IBM Research, who will give a talk on "Ontology Quality and the Semantic Web". This will be followed by a 30-minute Q & A session with the speakers and an open discussion on today's topic, as well as on what the community might want to hear or discuss during the rest of this series.

This mini-series will explore the landscape, issues and solutions relating to the measurement, evaluation, quality and testing of ontologies.

  • Pertinent Issues we might explore during this (and subsequent) session(s):
    • 1. Why do we need to care about ontology quality?
    • 2. What are objective means of classifying something as an ontology, taxonomy, data model, semantic network, tagged markup, etc.?
    • 3. How can ontologies be evaluated and measured?
    • 4. How can the quality of Ontology Design Tools be assessed?
    • ...(more to come)
  • Session Format: this will be a virtual session conducted over an augmented conference call
    • 1. we'll go around with self-introductions of participants (10~15 minutes) - we'll skip this if we have more than 20 participants (in which case, it will be best if members update their namesake pages on this wiki prior to the call so that everyone can get to know who's who more easily.)
    • 2. Opening by the co-organizers (BarrySmith, Peter P. Yim & SteveRay)
    • 3. Program kick-off (SteveRay - 15 minutes)
    • 4. Keynote Presentation (ChrisWelty - 45~60 minutes)
    • 5. Q & A (~10 minutes)
    • 6. Open discussion by all participants (15~20 minutes)
    • 7. Summary / Conclusion / Follow-up (SteveRay - 5 minutes)

Keynote Presentation: "Ontology Quality and the Semantic Web"

One of the guiding principles of the web, and of its machine-interpretable successor the semantic web, is to "let a million flowers bloom." HTML was based on technology (hypertext) that was nearly two decades old at the time, for which a research community concerned mainly with human-computer interaction had been investigating the "right way" to use hypertext for effective communication. The vast majority of early HTML pages completely ignored this, and yet the web thrived. Still, as the web became a serious medium for dissemination, institutions for whom effective communication was critical did begin to take this research seriously, and today's highly visible web pages are designed by people with experience and training in how to "do it right". The progress and evolution of the semantic web should follow the same path: the semantic web standards (RDF, OWL, and RIF) are based on decades-old technology from Knowledge Representation and Databases, and for about 15 years there has been a research community associated with this field that has studied the "right way" to use these systems. This field, which I will call "ontology engineering" for this talk, is concerned, among other things, with ontology quality and its impact.

In this talk I will discuss research on characterizing ontology quality and measuring the impact of quality on knowledge-based systems.
  • Speakers' presentations: (slides can be accessed by pointing your web browser to the respective "slide" links below) ... (note that slides will be available by the time of the session)

Questions, Answers & Discourse

  • If you want to speak or have questions for the panel, we appreciate your posting them as instructed below: (please identify yourself)
    • experimental: try using the queue management chat tool
    • point a separate browser window (or tab) to http://webconf.soaphub.org/conf/room and enter: Room: "ontolog_20061019" & My Name: e.g. "JaneDoe"
    • or point your browser to: http://webconf.soaphub.org/conf/room/ontolog_20061019
      • instructions: once you have access to the page, click on the "settings" button and identify yourself (by modifying the Name field). You can indicate that you want to ask a question verbally by clicking on the "hand" button and waiting for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.
    • For those who have further questions or remarks on the topic, please post them to the [ontolog-forum] so that everyone in the community can benefit from the discourse.
  • ... Questions, Comments:
    • Peter P. Yim: a lot of the confusion probably comes from things like thesauri, taxonomies, 'folksonomies', etc. all labeling themselves as "Ontologies" ... could we start axiomatizing each of them to rid everyone of the ambiguity, once and for all?
    • Steve Ray: they are all useful, for some purpose ... so how about creating classifications (like a Class-1 ontology meaning something that is of high quality, fully axiomatized, etc.; maybe Class-[>1] for folksonomies or what not)?
    • Leo Obrst: we need more than that ... we will need to characterize the kind of model too -- like in the case of people (illegitimately) converting thesauri into OWL by representing the broader-than/narrower-than relationship with a subclass relationship ... that's too strong! We need to define those things well, and provide people with guidelines too, if we want to help them do a quality job.
  • Input from the participants via the [webconf.soaphub.org/conf/room/ontolog_20061019] chat session:
    • Alan Ruttenberg: one part rule dependent on granularity? - Desktop computers used to have several boxes attached by wires - now sometimes all built into one, e.g. handhelds.
    • Alan Ruttenberg: I am confused about the example of rigidity. An entity that my answering machine recorded, based on knowing that the "there is a message" light is blinking. Possibly a person, possibly a fax machine calling the wrong number. So not necessarily a person.
    • Doug Holmes: Comment: Agreed that "Bad ontologies" are probably not ontologies, but they seem to be useful KA artifacts; perhaps they are "pre-ontologies"
    • Atilla Elci: Talking of ontology quality, shouldn't we address the "metric" issue? What metrics are there, say, to measure "absolute" or "relative" quality? Say, any measures perhaps like that of ONTOMETRIC (Tello & Perez) or of the Natural Language Application Metric (Gangemi et al.).
  • Session ended 2006.10.19 12:31 pm PDT

Audio Recording of this OntologyMeasurementEvaluation Chris Welty Session

(Thanks to Bob Smith and Peter P. Yim for their help with getting the session recorded. =ppy)

  • To download the audio recording of the session, click here
    • the playback of the audio file requires the proper setup and an MP3-compatible player on your computer.
  • Conference Date and Time: Oct. 19, 2006, 10:48am ~ 12:30pm Pacific Daylight Time
  • Duration of Recording: 1 Hour 39 Minutes
  • Recording File Size: 11.9 MB (in mp3 format)
  • Telephone Playback Expiration Date: Oct. 29, 2006 12:56 PM PDT
    • Prior to the above Expiration Date, one can call-in and hear the telephone playback of the session.
    • Playback Dial-in Number: 1-805-620-4002 (Ventura, CA)
    • Playback Access Code: 285313#
    • suggestions:
      • it's best that you listen to the session while having the presentation open in front of you. The speaker will prompt you to advance the slides.