
Ontology Summit 2013: Virtual Panel Session-02 - Thu 2013-01-24

Summit Theme: "Ontology Evaluation Across the Ontology Lifecycle"

Summit Track Title: Track-B: Extrinsic Aspects of Ontology Evaluation

Session Topic: Extrinsic Aspects of Ontology Evaluation: Finding the Scope

  • Session Co-chairs: Dr. ToddSchneider (Raytheon) and Mr. TerryLongstreth (Independent Consultant) - intro slides

Panelists / Briefings:

  • Dr. ToddSchneider (Raytheon) & Mr. TerryLongstreth (Independent Consultant) - "A Few Evaluation Dimensions" slides
  • Mr. HansPolzer (Lockheed Martin Fellow (ret.)) - "Dimensionality of Evaluation Context for Ontologies" slides
  • Ms. Mary Balboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle" slides
  • Ms. MeganKatsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies" slides

Archives

Abstract

OntologySummit2013 Session-02: "Extrinsic Aspects of Ontology Evaluation: Finding the Scope" - intro slides

This is our 8th Ontology Summit, a joint initiative by NIST, Ontolog, NCOR, NCBO, IAOA & NCO_NITRD with the support of our co-sponsors. The theme adopted for this Ontology Summit is: "Ontology Evaluation Across the Ontology Lifecycle."

Currently, there is no agreed methodology for the development of ontologies, and there are no universally agreed metrics for ontology evaluation. At the same time, everybody agrees that there are a lot of badly engineered ontologies out there, so people must be using -- at least implicitly -- some criteria for the evaluation of ontologies.

During this Ontology Summit, we seek to identify best practices for ontology development and evaluation. We will consider the entire lifecycle of an ontology -- from requirements gathering and analysis, through to design and implementation. In this endeavor, the Summit will seek collaboration with the software engineering and knowledge acquisition communities. Research in these fields has led to several mature models for the software lifecycle and the design of knowledge-based systems, and we expect that fruitful interaction among all participants will lead to a consensus for a methodology within ontological engineering. Following earlier Ontology Summit practice, the synthesized results of this season's discourse will be published as a Communiqué.

At the Launch Event on 17 Jan 2013, the organizing team provided an overview of the program, and how we will be framing the discourse around the theme of this OntologySummit. Today's session is one of the events planned.

As the area of ontology evaluation is still new, its boundaries and dimensions have yet to be defined. We propose to ask the community (panelists and participants alike) to provide input during this session on the dimensions of ontology evaluation and on methodologies that can be applied.

More details about this Ontology Summit are available at: OntologySummit2013 (homepage for this summit)

Briefings

  • Dr. ToddSchneider (Raytheon) & Mr. TerryLongstreth (Independent Consultant) - "A Few Evaluation Dimensions" slides
    • Abstract: ... The area of ontology evaluation is still new and its boundaries and dimensions have yet to be defined. We propose to ask the community to provide input for the dimensions of ontology evaluation.
  • Mr. HansPolzer (Lockheed Martin (ret.)) - "Dimensionality of Evaluation Context for Ontologies" slides
    • Abstract: ... Evaluation of anything, including ontologies, is done for some purpose within some context. Often much of that purpose and context is left implicit because it is assumed to be shared among the participants in the evaluation process. However, as the number and scope of the things being evaluated grows, and as the contexts in which they are evaluated become more diverse, implicit purpose and context dimensions become problematic. The appropriateness of any given set of evaluation attributes and their valuation depends significantly on evaluation purpose and context. This presentation draws on some past experiences with evaluation context issues in related domains to motivate attention to more explicit representation of evaluation context in ontology evaluation. It also suggests some important evaluation context dimensions for consideration by the ontology community as a starting point for further exploration and refinement by the community.
  • Ms. Mary Balboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle" slides
    • Abstract: ... One may approach ontology evaluation as testing in a Black Box paradigm (i.e., the ontology exists within said Black Box). What are some basic Black Box testing methods and applications in the Lifecycle? Can testing a large database be akin to testing an ontology? What are some interesting data points regarding large database Black Box testing, especially if they can relate to ontology testing? Are Security concerns already covered by Black Box testing? This paper broaches these subjects from an engineering point of view, to provoke thoughts and ideas on Black Box testing of a system that may include an ontology.
  • Ms. MeganKatsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies" slides
    • Abstract: ... The design and evaluation of first-order logic ontologies pose multiple challenges. If we consider the ontology lifecycle, two issues of critical importance are the specification of the intended models for the ontology's concepts (requirements) and the relationship between these models and the models of the ontology's axioms (verification). This talk presents a methodology in which automated reasoning plays a critical role for the development and verification of first-order logic ontologies. Its focus is on the verification of requirements (intended models), and how the results of this evaluation can be used to both revise the requirements and correct errors in the ontology. The methodology will be illustrated using examples from the Boxworld ontology (available in COLORE). While it is focused on the challenges of the development of first-order logic ontologies, this methodology may also be useful for ontology development in other logical languages.
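The following toy sketch (an editorial illustration, not part of the talk) shows the core refutation step behind this style of verification: a theory T entails a competency question Q exactly when T together with the negation of Q is unsatisfiable. It uses the Z3 prover's Python API; the leftOf relation and its axioms are invented for the example.

    # Hypothetical example: does a tiny boxworld-like axiom set entail a
    # competency question? T |= Q iff T plus not-Q is unsatisfiable.
    from z3 import (DeclareSort, Function, BoolSort, Const, ForAll,
                    Implies, And, Not, Solver, unsat)

    Obj = DeclareSort('Obj')                           # universe of discourse
    leftOf = Function('leftOf', Obj, Obj, BoolSort())  # invented relation

    x, y, z = Const('x', Obj), Const('y', Obj), Const('z', Obj)

    axioms = [
        # leftOf is transitive ...
        ForAll([x, y, z], Implies(And(leftOf(x, y), leftOf(y, z)), leftOf(x, z))),
        # ... and asymmetric
        ForAll([x, y], Implies(leftOf(x, y), Not(leftOf(y, x)))),
    ]

    # Competency question: is leftOf irreflexive?
    query = ForAll([x], Not(leftOf(x, x)))

    s = Solver()
    s.add(axioms)
    s.add(Not(query))        # refutation: assert the negated question
    res = s.check()
    print("entailed" if res == unsat else "not proved: %s" % res)

When such a check fails, any countermodel the solver produces is an unintended model, which is exactly the information used to revise either the axioms or the requirements.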

Agenda

OntologySummit2013 - Panel Session-02

  • Session Format: this is a virtual session conducted over an augmented conference call

Proceedings

Please refer to the above

IM Chat Transcript captured during the session

see raw transcript here.

(for better clarity, the version below is a re-organized and lightly edited chat-transcript.)

Participants are welcome to make light edits to their own contributions as they see fit.

-- begin in-session chat-transcript --

[09:03] Peter P. Yim: Welcome to the

Ontology Summit 2013: Virtual Panel Session-02 - Thu 2013-01-24

Summit Theme: Ontology Evaluation Across the Ontology Lifecycle

  • Summit Track Title: Track-B: Extrinsic Aspects of Ontology Evaluation

Session Topic: Extrinsic Aspects of Ontology Evaluation: Finding the Scope

Panelists / Briefings:

  • Mr. Hans Polzer (Lockheed Martin Fellow (ret.)) - "Dimensionality of Evaluation Context for Ontologies"
  • Ms. Mary Balboni et al. (Raytheon) - "Black Box Testing Paradigm in the Lifecycle"
  • Ms. Megan Katsumi (University of Toronto) - "A Methodology for the Development and Verification of Expressive Ontologies"

Logistics:

  • (if you haven't already done so) please click on "settings" (top center) and morph from "anonymous" to your RealName (in WikiWord format)
  • Mute control: *7 to un-mute ... *6 to mute
  • Can't find Skype Dial pad?
    • for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
    • for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) or on the earlier Skype versions 2.x; if the dial-pad button is not shown in the call window, press the "d" hotkey to enable it.

Attendees: Todd Schneider (co-chair), Terry Longstreth (co-chair), Alan Rector, Anatoly Levenchuk, Angela Locoro, Bob Schloss, Bobbin Teegarden, Carmen Chui, Dalia Varanka, Donghuan Tang, Fabian Neuhaus, Fran Lightsom, Frank Olken, GaryBergCross, Hans Polzer, Jack Ring, Joel Bender, John Bilmanis, Ken Baclawski, Laleh Jalali, Leo Obrst, MariCarmenSuarezFigueroa, Mary Balboni, Matthew West, Max Petrenko, Megan Katsumi, Michael Grüninger, Mike Dean, Mike Riben, Oliver Kutz, Pavithra Kenjige, QaisAlKhazraji, Peter P. Yim, Ram D. Sriram, Richard Martin, RosarioUcedaSosa, Steve Ray, Till Mossakowski, Torsten Hahmann, Trish Whetzel

Proceedings:

[08:57] anonymous morphed into Donghuan

[09:13] Donghuan morphed into PennState:Qais

[09:14] PennState:Qais morphed into PennState

[09:17] anonymous1 morphed into Max Petrenko

[09:21] anonymous2 morphed into Mary Balboni

[09:23] anonymous1 morphed into Carmen Chui

[09:23] anonymous1 morphed into Fabian Neuhaus

[09:24] PennState morphed into Donghuan

[09:24] Donghuan morphed into Qais

[09:24] Qais morphed into PennState

[09:26] PennState morphed into Qais_Donghuan

[09:24] anonymous morphed into Angela Locoro

[09:25] Angela Locoro morphed into Angela Locoro

[09:26] anonymous morphed into John Bilmanis

[09:26] anonymous morphed into Steve Ray

[09:27] Michael Grüninger morphed into Megan Katsumi

[09:29] Matthew West: Just a note, but the Session page shows the conference starting at 1630 UTC when it is actually 1730 UTC.

[09:55] Peter P. Yim: @MatthewWest - thank you for the prompt ... sorry, everyone, the session start-time should be: 9:30am PST / 12:30pm EST / 6:30pm CET / 17:30 GMT/UTC

[09:30] anonymous morphed into RosarioUcedaSosa

[09:31] anonymous morphed into Ram D. Sriram

[09:33] anonymous1 morphed into Torsten Hahmann

[09:55] anonymous morphed into laleh

[09:59] Peter P. Yim: @laleh - would you kindly provide your real name (in WikiWord format, if you please) and morph into that with "Settings" (button at top center of window)

[10:01] laleh morphed into Laleh Jalali

[10:03] Peter P. Yim: @LalehJalali - thank you, welcome to the session ... are you one of RameshJain's students at UCI?

[10:08] Laleh Jalali: Yes

[09:34] Peter P. Yim: == [0-Chair] Todd Schneider & Terry Longstreth (co-chairs) opening the session ...

[09:37] anonymous morphed into Frank Olken

[09:39] Peter P. Yim: == [2-Polzer] Hans Polzer presenting ...

[09:40] List of members: Alan Rector, Anatoly Levenchuk, Angela Locoro, Bobbin Teegarden, Bob Schloss, Carmen Chui, Dalia Varanka, Fabian Neuhaus, Frank Olken, Fran Lightsom, Hans Polzer, Joel Bender, John Bilmanis, Ken Baclawski, Leo Obrst, MariCarmenSuarezFigueroa, Mary Balboni, Matthew West, Max Petrenko, Megan Katsumi, Michael Grüninger, Mike Dean, Mike Riben, Oliver Kutz, Peter P. Yim, Qais_Donghuan, Ram D. Sriram, Richard Martin, RosarioUcedaSosa, Steve Ray, Terry Longstreth, Todd Schneider, Torsten Hahmann, vnc2

[09:42] anonymous morphed into Trish Whetzel

[09:44] Mike Riben: are we on slide 5?

[09:47] Jack Ring: Pls stop using "Next Slide" and say number of slide

[09:47] anonymous morphed into GaryBergCross

[09:52] Todd Schneider: Jack, Hans is on slide 7.

[09:45] Jack Ring: Is your Evaluation Context different from Ontology Context?

[09:56] Todd Schneider: Qais, if you have a question would you type it in the chat box?

[09:56] Peter P. Yim: @Qais_Donghuan - we will hold questions off till after the presentations are done, please post your questions on the chat-space (as a placeholder/reminder) for now

[09:55] Terry Longstreth: On slide 8, Hans mentions reasoners as an aspect of the ontology, but as Uschold has pointed out, the reasoner may be used as a test/evaluation tool

[09:57] Todd Schneider: Terry, the evaluation(s) may need to be redone if the reasoner is changed.

[10:03] Terry Longstreth: Sure. I was just pointing out that the reasoner may be a tool for extrinsic evaluation.

[10:04] Todd Schneider: Terry, yes a tool used in evaluation and the subject of evaluation itself (e.g., performance).

[10:08] Steve Ray: @Hans: It would help if you could provide some concrete examples that would bring your observations into focus.

[10:10] Michael Grüninger: @Hans: In what sense is ontology compatibility considered to be a rating?

[10:09] Peter P. Yim: == [1-Schneider] Todd Schneider presenting, and soliciting input on Ontology Evaluation dimensions ...

[10:01] Jack Ring: (ref. ToddSchneider's solicitation for input on dimensions) Reusefulness of an ontology or subset(s) thereof?

[10:08] Jack Ring: This is a good start toward an ontology of ontology evaluation but we have a loooong way to go.

[10:10] anonymous morphed into Pavithra Kenjige

[10:15] Jack Ring: In systems thinking the three basic dimensions are Quality, Parsimony, Beauty

[10:15] Todd Schneider: The URL for adding to the list of possible evaluation dimensions is http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013_Extrinsic_Aspects_Of_Ontology_Evaluation_CommunityInput

[10:15] MariCarmenSuarezFigueroa: In the legal part, maybe we should consider also license (and not only copyright)

[10:15] Terry Longstreth: Thanks Mari Carmen

[10:16] Fabian Neuhaus: @Todd, we need more than a list. We need definitions of the terms on your evaluation dimensions list, because they are not self-explanatory.

[10:16] Matthew West: Relevance, Clarity, Consistency, Accessibility, timeliness, completeness, accuracy, costs (development, maintenance), Benefits

[10:17] Matthew West: Provenance

[10:17] Todd Schneider: Fabian, yes we will need definitions, context, and possibly intent. But first I'd like to conduct a simple gathering exercise.

[10:18] Matthew West: Modularity

[10:17] Fabian Neuhaus: @Todd: it seems that your "evaluation dimensions" are very different from Hans' dimensions.

[10:20] Todd Schneider: Fabian, yes. Hans was talking about context. I'm thinking of things more directly related to evaluation criteria. Both Hans and I like metaphors from physics.

[10:48] Leo Obrst: @Todd: your second set of slides, re: slide 4: Precision, Recall, Coverage, Correctness and perhaps others will also be important for Track A Intrinsic Aspects of Ontology Evaluation. Perhaps your metrics will be: Precision With_Respect_To(domain D, requirement R), etc.? Just a thought.
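(Editor's illustration, not from the chat: one minimal way to read Leo's suggestion is that an intrinsic metric such as precision becomes extrinsic once it takes the domain/requirement as an argument. All names below are hypothetical.)

    # Hypothetical sketch: precision of an ontology's vocabulary with
    # respect to a reference vocabulary for a given domain D / requirement R.
    def precision_wrt(ontology_terms: set, reference_terms: set) -> float:
        """Fraction of the ontology's terms sanctioned by the reference
        vocabulary chosen for domain D / requirement R."""
        if not ontology_terms:
            return 1.0
        return len(ontology_terms & reference_terms) / len(ontology_terms)

    # the same ontology scores differently against different domain vocabularies
    print(precision_wrt({"Pump", "Valve", "Poem"}, {"Pump", "Valve", "Pipe"}))  # ~0.67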

[10:21] Peter P. Yim: == [3-Balboni] Mary Balboni presenting ...

[10:21] Terry Longstreth: Mary's term: CSCI - Computer Software Configuration Item - smallest unit of testing at some level (varies by customer: sometimes a module, sometimes a capability ...)

[10:23] Terry Longstreth: Current speaker - Mary Balboni - slides 3-Balboni

[10:27] Bobbin Teegarden: @Mary, slide 4 testing continuum -- may need to go one more step: 'critical testing' is in actual usage (step beyond beta) and that feedback loop that creates continual improvement. Might want to extend the thinking to 'usage as a test' and ongoing criteria in field usage?

[10:29] Terry Longstreth: @Bobbin - good point and note that in many cases, evaluation may not start until (years?) after the ontology has been put into continuous usage

[10:29] Till Mossakowski: how does it work that injection of bugs leads to finding more (real) bugs? Just because there is more overall debugging effort?

[10:30] Fabian Neuhaus: @Till: I think it allows you to evaluate the coverage of your tests.

[10:33] Jack Ring: It seems that your testing is focused on finding bugs as contrasted to discovering dynamic and integrity limits. Instead of "supports system conditions" it should be "discovers how ontology limits system envelope"

[10:35] Jack Ring: Once we understand how to examine a model for progress properties and integrity properties we no longer need to run a bunch of tests to determine ontology efficacy.

[10:29] Steve Ray: @Mary: Some of your testing examples look more like what we would call intrinsic evaluation. Specifically I'm thinking of your example of finding injected bugs.

[10:59] Mary Balboni: @SteveRay: Injected bugs - yes it is intrinsic to those that inject the defects, but would be extrinsic to the testers that are discovering defects ...

[11:01] Steve Ray: @Mary: I would agree with you provided that the testers are testing via blackbox methods such as performance given certain inputs, and not by examining the code for logical or structural bugs. Are we on the same page?

[11:03] Mary Balboni: @SteveRay - absolutely!

[10:49] Bobbin Teegarden: @JackRing Would 'effectiveness' fall under beauty? What criteria?

[10:58] Jack Ring: @Bobbin, Effect-iveness is a Quality factor. Beauty is in the eye of the beer-holder.

[10:37] Terry Longstreth: Example of business rule: ask bank for email when account drops below $200. Evaluate by cashing checks until balance below threshold.
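(Editor's illustration: Terry's example rendered as a minimal black-box test. The Account class is an invented stand-in for the system under test; the evaluator only drives inputs and observes alerts, never inspecting internals.)

    # Hypothetical black-box check of the business rule
    # "email the customer when the balance drops below $200".
    class Account:                       # stand-in for the system under test
        THRESHOLD = 200
        def __init__(self, balance):
            self.balance, self.alerts = balance, []
        def withdraw(self, amount):      # "cashing checks"
            self.balance -= amount
            if self.balance < self.THRESHOLD:
                self.alerts.append("email: balance below $200")

    acct = Account(balance=250)
    acct.withdraw(60)                    # crosses the threshold
    assert acct.alerts == ["email: balance below $200"]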

[10:36] Todd Schneider: Leo, have you cloned yourself?

[10:37] Leo Obrst: No, I had to reboot Firefox and it had some fun.

[10:41] Jack Ring: No one has mentioned the dimension of complexness. Because ontologies quickly become complex topologies, the response time becomes very important if implemented on a von Neumann architecture. Therefore the structure of the ontology for efficiency of response becomes an important dimension.

[10:42] Bobbin Teegarden: At DEC, we used an overlay on all engineering for RAMPSS -- Reliability, Availability, Maintainability, Performance, Scalability, and Security. Maybe these all apply for black box here? Mary has cited some of them...

[10:56] Mary Balboni: @BobbinTeegarden: re ongoing criteria in field usage - yes, during what we call sustainment after delivery, upgrades are sent out, acceptance tests are repeated, and depending on how much is changed, the testing may only be regression of specific areas in the system.

[10:43] Leo Obrst: @MaryBalboni: re: slide 14: back in the day, we would characterize 3 kinds of integrity: 1) domain integrity (think value domains in a column, i.e., char, int, etc.), 2) referential integrity (key relationships: primary/foreign), 3) semantic integrity (now called business rules). Ontologies do have these issues. On the ontology side, they can be handled slightly differently: e.g., referential integrity (really mostly structural integrity) will be handled differently based on Open World Assumption (e.g., in OWL) or Closed World Assumption (e.g., in Prolog), with the latter being enforced in general by integrity constraints.
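(Editor's illustration of Leo's point: under the Open World Assumption a "dangling" reference merely implies the individual exists, while a closed-world audit can flag it the way a database would. A rough sketch using the rdflib library; the file name is an assumption.)

    # Hypothetical CWA-style referential-integrity audit over an RDF graph:
    # report IRIs used as objects that are never described as subjects.
    # Under OWL's open-world semantics this is not an error, merely unknown.
    from rdflib import Graph, RDF, URIRef

    g = Graph()
    g.parse("my-ontology.ttl", format="turtle")   # assumed local file

    dangling = {o for (s, p, o) in g
                if p != RDF.type                  # ignore class references
                and isinstance(o, URIRef)
                and (o, None, None) not in g}     # never described anywhere
    print("objects never described as subjects:", dangling)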

[10:52] Mary Balboni: @LeoObrst - thanks for feedback - since I am not an expert in Ontology it is very nice to see that these testing paradigms are reusable - and tailorable.

[10:44] Peter P. Yim: == [4-Katsumi] Megan Katsumi presenting ...

[10:53] Leo Obrst: @Megan: Nicola Guarino for our upcoming (Mar. 7, 2013) Track A session will talk along the lines of your slides 8, etc.

[10:52] Till Mossakowski: Is it always clear what the intended models are? After all, initially you will have only an informal understanding of the domain, which will be refined during the process of formalisation. Only in this process, the class of intended models becomes clearer.

[10:54] Michael Grüninger: @Till: At any point in development, we are working with a specific set of intended models, which is why we call this verification. Validation is addressing the question of whether or not we have the right set of intended models.

[10:56] Michael Grüninger: We formalize the ontology's requirements as the set of intended models (or indirectly as a set of competency questions). It might not always be clear what the intended models are, but this is analogous to the case in software development when we are not clear as to what the requirements are.

[10:56] Till Mossakowski: @Michael: OK, that is similar to software validation and verification. But then validation should be mentioned, too.

[10:56] Todd Schneider: Michael, so there's a presumption that you have extensive explicit knowledge of the intended model(s), correct?

[10:58] Michael Grüninger: @Todd: since intended models are the formalization of the requirements, extensive explicit knowledge of intended models is equivalent to "extensive explicit knowledge about the requirements"

[10:57] Leo Obrst: @Till, Michael: one issue is the mapping of the "conceptualization" to the intended models, right? I guess Michael's requirements are in effect statements/notions of the conceptualization. Is that right?

[10:59] Michael Grüninger: @LeoObrst: I suppose there could be the case where someone incorrectly specified the intended models or competency questions that formalize a particular requirement (i.e. the conceptualization is wrong)

[10:59] Till Mossakowski: It seems that two axiomatisations (requirements and design) are compared with each other. The requirements describe the intended models. Is this correct?

[11:00] Michael Grüninger: @Till: We would say that the intended models describe the requirements.

[11:01] Michael Grüninger: @Till: The notion of comparing axiomatizations arises primarily when we use the models of some other ontology as a way of formalizing the intended models of the ontology we are evaluating.

[11:02] Till Mossakowski: @Michael: but you cannot give the set of intended models to a prover, only an axiomatisation of it. Hence it seems that you are testing two different axiomatisations against each other.

[11:00] Todd Schneider: All, due to a changing schedule I need to leave this session early. Cheers.

[11:02] MariCarmenSuarezFigueroa: We could also consider the verification of requirements (competency questions) using e.g. SPARQL queries.
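(Editor's illustration of this suggestion, using the rdflib library; the file name and IRIs are invented. Note that a plain SPARQL query sees only asserted triples, so inferred facts must be materialized first, or a reasoner used.)

    # Hypothetical check of a competency question via SPARQL ASK:
    # "is there any Manager not (asserted to be) an Employee?"
    from rdflib import Graph

    g = Graph()
    g.parse("my-ontology.ttl", format="turtle")   # assumed local file

    cq = """
    PREFIX ex: <http://example.org/onto#>
    ASK {
      ?m a ex:Manager .
      FILTER NOT EXISTS { ?m a ex:Employee . }
    }
    """
    violated = bool(g.query(cq).askAnswer)        # True means a counterexample exists
    print("competency question holds:", not violated)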

[11:04] Peter P. Yim: @MeganKatsumi - ref. your slide#4 ... would you see some "fine tuning" after the ontology has been committed to "Application" - adjustment to the "Requirements" and "Design" possibly?

[11:06] Terry Longstreth: Fabian suggests that Megan's characterization of semantic correctness is too strong...

[11:09] Michael Grüninger: @Till: Yes, when we use theorem proving, we need to use the axiomatization of another theory. However, there are also cases in which we verify an ontology directly in the metatheory. In terms of COLORE, we need to use this latter approach for the core ontologies.

[11:10] Torsten Hahmann: @Till: but you can give individual models to a theorem prover. It is a question of how to come up with a good set of models to evaluate the axiomatization.

[11:11] Till Mossakowski: OK, but this probably means that you have a set of intended models that is more exemplary than exhaustive.

[11:11] Fabian Neuhaus: @Till, Michael. It seems to me that Till has a good point. Especially if the ontology and the set of axioms that express the requirements both have exactly the same models, it seems that you just have two equivalent axiom sets (ontologies).

[11:12] Torsten Hahmann: Yes, of course, the same as with software verification.

[11:12] Till Mossakowski: indeed, but sometimes it might just be an implication

[11:15] Till Mossakowski: further dimensions: consistency; correctness w.r.t. intended models (as in Megan's talk), completeness in the sense of having intended logical consequences

[11:16] Megan Katsumi: @Leo: I'm not sure that I understand your question, can you give an example?

[11:03] Leo Obrst: @Megan: what if you have 2 or more requirements, e.g., going from a 2-D to a 3-D or 4-D world?

[11:17] Peter P. Yim: == Q&A and Open Discussion ... soliciting of additional thoughts on Evaluation Dimensions

[11:17] Bobbin Teegarden: It seems we have covered correctness, precision, meeting requirements, etc. well, but have we really addressed 'goodness' of an ontology? And we certainly haven't addressed an 'elegant' ontology, or do we care? Is this akin to Jack's 'beauty' assessment?

[11:17] Bob Schloss: Because of the analogy we heard with Database Security Blackbox Assessment, I wonder if there is an analogy to "normalization" (nth normal form) for database schemas. Are some evaluation criteria related to factoring, simplicity, minimalism, straightforwardness...?

[11:19] Torsten Hahmann: another requirement that I think hasn't been mentioned yet: granularity (level of detail)

[11:21] Leo Obrst: @Torsten: yes, that was my question, i.e., granularity.

[11:22] Torsten Hahmann: @Leo: I thought so.

[11:22] MariCarmenSuarezFigueroa: I also think granularity is a very important dimension....

[11:19] Bob Schloss: I am also thinking about issues of granularity and regularity ... If a program wants to remove one instance "entity" from a knowledge base, does this ontology make it very simple to just do the remove/delete, or is it so interconnected that removal requires a much more complicated syntax....

[11:24] Bob Schloss: Although this is driven by the domain, some indication of an ontology's rate of evolution or degree of stability or expected rate of change may be important to the organizations using it. If there are 2 ontologies, and one, by being very simple and universal, doesn't have as many specifics but will be stable for decades, whereas another, because it is very detailed and uses concepts related to current technologies and current business practices, may need to be updated every year or two... I'd like to know this.

[11:29] Matthew West: Yes, stability is an important criterion. For me that is about how much the existing ontology needs to change when you need to make an addition.

[11:24] MariCarmenSuarezFigueroa: Sorry I have to go (due to another commitment). Thank you very much for the interesting presentations. Best Regards

[11:28] Bob Schloss: Another analogy to the world of blackbox testing... the software engineers have ideas of Orthogonal Defect Classification and more generally, ways of estimating how many remaining bugs there are in some software based on the rates and kinds of discovery of new bugs that have happened over time up until the present moment. I wonder if there is something for an ontology... one that has a constant level of utilization, but which is having a decrease in reporting of errors.... can we guess how many other errors remain in the ontology? Again... this is an analogy.... some way of estimating "quality"...
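(Editor's note: the classic defect-seeding / capture-recapture arithmetic gives one such estimate, and connects back to the injected-bugs discussion above; the numbers below are invented.)

    # Hypothetical sketch: Lincoln-Petersen style estimate of remaining errors.
    # If the tests caught seeded_found of seeded injected defects, assume they
    # caught the same fraction of the real defects.
    def estimated_remaining(seeded: int, seeded_found: int, real_found: int) -> float:
        total_real = real_found * seeded / seeded_found   # estimated total real defects
        return total_real - real_found

    # 20 axioms deliberately broken, 16 of them flagged, 40 genuine errors found
    # => roughly 40 * 20/16 - 40 = 10 genuine errors estimated to remain.
    print(estimated_remaining(seeded=20, seeded_found=16, real_found=40))  # 10.0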

[11:27] Michael Grüninger: @Fabian: It would be great if we could also focus on criteria and techniques that people are already using in practice with real ontologies and applications.

[11:27] Steve Ray: @Michael: +1

[11:28] Fabian Neuhaus: @michael +1

[11:29] Leo Obrst: Perhaps the main difference between Intrinsic -> Extrinsic is that at least some of the Intrinsic predicates are also Extrinsic predicates with additional arguments, e.g., Domain, Requirement, etc.?

[11:30] Leo Obrst: Must go, thanks, all!

[11:31] Peter P. Yim: wonderful session ... really good talks ... thanks everyone!

[11:31] Peter P. Yim: -- session ended: 11:30 am PST --

[11:31] List of attendees: Alan Rector, Anatoly Levenchuk, Angela Locoro, Bob Schloss, Bobbin Teegarden, Carmen Chui, Dalia Varanka, Donghuan Tang, Fabian Neuhaus, Fran Lightsom, Frank Olken, GaryBergCross, Jack Ring, Joel Bender, John Bilmanis, Ken Baclawski, Laleh Jalali, Leo Obrst, MariCarmenSuarezFigueroa, Mary Balboni, Matthew West, Max Petrenko, Megan Katsumi, Michael Grüninger, Mike Dean, Mike Riben, Oliver Kutz, Pavithra Kenjige, QaisAlKhazraji, Peter P. Yim, Ram D. Sriram, Richard Martin, RosarioUcedaSosa, Steve Ray, Terry Longstreth, Till Mossakowski, Todd Schneider, Torsten Hahmann, Trish Whetzel, vnc2

-- end of in-session chat-transcript --

  • Further Question & Remarks - please post them to the [ ontology-summit ] listserv
    • all subscribers to the previous summit discussion, and all who responded to today's call will automatically be subscribed to the [ ontology-summit ] listserv
    • if you are already subscribed, post to <ontology-summit [at] ontolog.cim3.net>
    • (if you are not yet subscribed) you may subscribe yourself to the [ ontology-summit ] listserv, by sending a blank email to <ontology-summit-join [at] ontolog.cim3.net> from your subscribing email address, and then follow the instructions you receive back from the mailing list system.
    • (in case you aren't already a member) you may also want to join the ONTOLOG community and be subscribed to the [ ontolog-forum ] listserv, when general ontology-related topics (not specific to this year's Summit theme) are discussed. Please refer to Ontolog membership details at: http://ontolog.cim3.net/cgi-bin/wiki.pl?WikiHomePage#nid1J
      • kindly email <peter.yim@cim3.com> if you have any question.

Additional Resources


For the record ...

How To Join (while the session is in progress)

Conference Call Details

  • Date: Thursday, 24-Jan-2013
  • Start Time: 9:30am PST / 12:30pm EST / 6:30pm CET / 17:30 UTC
  • Expected Call Duration: ~2.0 hours
  • Dial-in:
    • Phone (US): +1 (206) 402-0100 ... (long distance cost may apply)
      • ... [ backup nbr: (415) 671-4335 ]
      • when prompted enter Conference ID: 141184#
    • Skype: joinconference (i.e. make a skype call to the contact with skypeID="joinconference") ... (generally free-of-charge, when connecting from your computer)
      • when prompted enter Conference ID: 141184#
      • Unfamiliar with how to do this on Skype? ...
        • Add the contact "joinconference" to your skype contact list first. To participate in the teleconference, make a skype call to "joinconference", then open the dial pad (see platform-specific instructions below) and enter the Conference ID: 141184# when prompted.
      • Can't find Skype Dial pad? ...
        • for Windows Skype users: it's under the "Call" dropdown menu as "Show Dial pad"
        • for Linux Skype users: please note that the dial-pad is only available on v4.1 (or later) or on the earlier Skype versions 2.x; if the dial-pad button is not shown in the call window you need to press the "d" hotkey to enable it. ... (ref.)
  • Shared-screen support (VNC session), if applicable, will be started 5 minutes before the call at: http://vnc2.cim3.net:5800/
    • view-only password: "ontolog"
    • if you plan to be logging into this shared-screen option (which the speaker may be navigating), and you are not familiar with the process, please try to call in 5 minutes before the start of the session so that we can work out the connection logistics. Help on this will generally not be available once the presentation starts.
    • people behind corporate firewalls may have difficulty accessing this. If that is the case, please download the slides above (where applicable) and run them locally. The speaker(s) will prompt you to advance the slides during the talk.
  • In-session chat-room url: http://webconf.soaphub.org/conf/room/summit_20130124
    • instructions: once you have access to the page, click on the "settings" button, and identify yourself (by modifying the Name field from "anonymous" to your real name, like "JaneDoe").
    • You can indicate that you want to ask a question verbally by clicking on the "hand" button, and wait for the moderator to call on you; or, type and send your question into the chat window at the bottom of the screen.
    • thanks to the soaphub.org folks, one can now use a jabber/xmpp client (e.g. gtalk) to join this chatroom. Just add the room as a buddy - (in our case here) summit_20130124@soaphub.org ... Handy for mobile devices!
  • Discussions and Q & A:
    • Nominally, when a presentation is in progress, the moderator will mute everyone, except for the speaker.
    • To un-mute, press "*7" ... To mute, press "*6" (please mute your phone, especially if you are in a noisy surrounding, or if you are introducing noise, echoes, etc. into the conference line.)
    • we will usually save all questions and discussions till after all presentations are through. You are encouraged to jot down questions onto the chat-area in the mean time (that way, they get documented; and you might even get some answers in the interim, through the chat.)
    • During the Q&A / discussion segment (when everyone is muted), If you want to speak or have questions or remarks to make, please raise your hand (virtually) by clicking on the "hand button" (lower right) on the chat session page. You may speak when acknowledged by the session moderator (again, press "*7" on your phone to un-mute). Test your voice and introduce yourself first before proceeding with your remarks, please. (Please remember to click on the "hand button" again (to lower your hand) and press "*6" on your phone to mute yourself after you are done speaking.)
  • An RSVP to peter.yim@cim3.com with your affiliation is appreciated, ... or simply add yourself to the "Expected Attendee" list below (if you are a member of the community already.)
  • Please note that this session may be recorded, and if so, the audio archive is expected to be made available as open content, along with the proceedings of the call to our community membership and the public at-large under our prevailing open IPR policy.

Attendees