
Session Planning

  • Duration: 1 hour
  • Date/Time: 3 October 2018, 16:00 GMT
    (9:00am PDT / 12:00pm EDT / 5:00pm BST / 6:00pm CEST)
  • Convener: KenBaclawski

Ontology Summit 2019 Planning

Agenda

Conference Call Information

  • Date: Wednesday, 3-October-2018
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1 hour
  • The Video Conference URL is https://zoom.us/j/689971575
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available: https://zoom.us/u/Iuuiouo
  • Chat Room

Attendees

Proceedings

[12:16] Ken Baclawski: Donna suggests that there be a project separate from the summit devoted to the unfinished work of the Ontology Summit 2007.

[12:17] Ken Baclawski: This project would be an open-ended working group that would report its findings to date at the Symposium, serving as the basis for a workshop there.

[12:28] Gary Berg-Cross: Common sense reasoning fits into General AI, but also into Explanations, or Understanding Explanations using common sense reasoning.

[12:29] Gary Berg-Cross: We had these as potential subtopics:

  • Natural language generation based on ontologies
  • Explaining machine learning models using ontologies
  • The role that the Open Knowledge Network could play in explanations
  • Explanation for industrial ontologies (e.g., IOF) and systems engineering (e.g., ST4SE)
  • Explanation of mathematical, statistical, and physical models

[12:33] MikeBennett: If we are looking at ontologies to explain things, we should have some component in which we explain ontologies.

[12:35] DonnaFritzsche: How Ontologies/Domain Models can Expand Machine Learning Systems and the Capability to Explain

[12:35] DonnaFritzsche: How Ontologies Expand the Capability of Chat Bots to Better Understand/Explain

[12:36] MikeBennett: My 12:33 comment comes with an "If"; maybe it would be a session or an intro, maybe a track, IF we did that topic.

[12:36] DonnaFritzsche: Note regarding wiki - last time I tried to get edit permissions, it did not work - I will try again

[12:37] ToddSchneider: Here's a URL for the IAOA Term list, but this should change: http://iaoaedu.cs.uct.ac.za/pmwiki.php?n=IAOAEdu.TermList

[12:39] Gary Berg-Cross: We will need to discuss the idea of Explanation itself before jumping in too deep.

[12:40] Ram D. Sriram: @Gary: One can also generate Ontologies from Natural Language

[12:41] DonnaFritzsche: +1 @Gary

[12:46] Gary Berg-Cross: Are we still on? I can't hear people.

[12:49] Ken Baclawski: @Gary: Yes, we are still on.

[12:50] Ken Baclawski: @Donna: If you can't log into the wiki, let me know and I can change your password.

[12:51] DonnaFritzsche: great example Janet (re: Surgeon)

[12:52] DonnaFritzsche: from Janet: Conceptual Interoperability as a possible track

[12:53] DonnaFritzsche: (diagram from Andreas Tok?)

[12:58] Gary Berg-Cross: Here is some background discussing Explanations:

Theoretical Work. There has also been some theoretical work on explanation. [Chajewska and Halpern, 1997] proposed a formal definition of explanation in general probabilistic systems, after examining two contemporary ideas and finding them incomplete. In expert systems, [Johnson and Johnson, 1993] presented a short survey of accounts of explanation in philosophy, psychology, and cognitive science and found that they fall into three categories: associations between antecedent and consequent; contrasts and differences; and causal mechanisms. In recommender systems, [Yetim, 2008] proposed a framework of justifications which uses existing models of argument to enumerate the components of a justification and provide a taxonomy of justification types. [Corfield, 2010] aims to formalize justifications for the accuracy of ML models by classifying them into four types of reasoning, two based on absolute performance and two rooted in Bayesian ideas.

More recently, [Doshi-Velez and Kim, 2017] considered how to evaluate human interpretability of machine learning models. They proposed a taxonomy of three approaches: application-grounded, which judges explanations based on how much they assist humans in performing a real task; human-grounded, which judges explanations based on human preference or ability to reason about a model from the explanation; and functionally-grounded, which judges explanations without human input, based on some formal proxy for interpretability. For this third approach, they hypothesized that matrix factorization of result data (quantized by domain and method) may be useful for identifying common latent factors that influence interpretability.
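
As a rough, hypothetical sketch of that last, functionally-grounded idea: factor a domain-by-method matrix of quantized interpretability scores and read the latent factors off the decomposition. The domains, methods, and scores below are invented placeholders, not data from any cited study, and truncated SVD is just one standard factorization choice; the cited paper does not prescribe a specific method.

    import numpy as np

    # Invented placeholder data: rows are application domains,
    # columns are explanation methods, entries are quantized
    # interpretability scores on a 1-5 scale.
    rng = np.random.default_rng(0)
    domains = ["medicine", "finance", "vision", "planning"]
    methods = ["saliency", "rules", "prototypes"]
    scores = rng.integers(1, 6, size=(len(domains), len(methods))).astype(float)

    # Rank-2 truncated SVD: one standard way to factor the matrix.
    U, s, Vt = np.linalg.svd(scores, full_matrices=False)
    k = 2
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    print("scores:\n", scores)
    print("rank-2 approximation:\n", approx.round(2))
    # U[:, :k] holds each domain's loading on the k latent factors;
    # Vt[:k, :] holds each method's loading on the same factors.

Large loadings shared across rows or columns would suggest a common latent factor influencing interpretability across domains or methods, which is the kind of pattern the hypothesis is after.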

[13:00] MikeBennett: Reading the above, it seems to speak to epistemology; are we making the (very useful) link from ontology to epistemology (meaning versus truth)?

[13:01] MikeBennett: Agree we need an intro session that sets the scene.

[13:01] Gary Berg-Cross: Common sense examples: "Who is taller, Prince William or his baby son Prince George? Can you make a salad out of a polyester shirt? If you stick a pin into a carrot, does it make a hole in the carrot or in the pin? These types of questions may seem silly, but many intelligent tasks, such as understanding texts, computer vision, planning, and scientific reasoning, require the same kinds of real-world knowledge and reasoning abilities. For instance, if you see a six-foot-tall person holding a two-foot-tall person in his arms, and you are told they are father and son, you do not have to ask which is which. If you need to make a salad for dinner and are out of lettuce, you do not waste time considering improvising by taking a shirt out of the closet and cutting it up. If you read the text, "I stuck a pin in a carrot; when I pulled the pin out, it had a hole," you need not consider the possibility that "it" refers to the pin."

Resources

Audio Recording

Previous Meetings

Next Meetings
