
Session: Symposium
Duration: 3 hours
Date/Time: 07 May 2019, 15:00 GMT (8:00am PDT / 11:00am EDT / 4:00pm BST / 5:00pm CEST)
Convener: Ken Baclawski

Ontology Summit 2019 Symposium Session 1

Agenda

  • The following is the schedule. All times are in US Eastern Time.
  • Track Session Video Recording
  • Panel Session Video Recording
    • 12:30pm to 2:00pm Panel Discussion
      • Panelists: John Sowa, Michael Grüninger, Amit Sheth, Sargur (Hari) Srihari
      • Can computer-based explanations be made as informative and useful as an explanation from a well-informed person?
      • Are there disconnects among researchers, industry and media, or between users and investors with respect to what constitutes an acceptable or successful explanation?
      • Is there general ignorance about what is already possible vs what is well beyond current capabilities?
      • What role do ontologies play?
      • How can one integrate ontological and statistical approaches?
      • What is required for an ontology to support explanation?

Conference Call Information

  • Date: Tuesday, 07-May-2019
  • Start Time: 8:00am PDT / 11:00am EDT / 5:00pm CEST / 4:00pm BST / 1500 UTC
  • Expected Call Duration: 3 hours
  • The Video Conference URL is https://zoom.us/j/689971575
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available: https://zoom.us/u/Iuuiouo
  • Chat Room

Attendees

Proceedings

Commonsense and Narrative Track Proceedings

[11:10] RaviSharma: Welcome, everyone, on behalf of Ken and all of us, to the first session of the Symposium today.

[11:17] RaviSharma: Why was it called the Turing Test? Does it have the same meaning as in computer science and math?

[11:24] RaviSharma: re Grosof - how do we understand provenance learning, which allows us to begin with what we have learned so far, in terms of software or solutions?

[11:26] RaviSharma: How do we partition what is generally known vs. what a set of users or a single user understands?

[11:28] RaviSharma: Benjamin Grosof's rule-based example for finance is an example of a cross-track item between commonsense rules and finance.

[11:29] RaviSharma: ERGO AI from Grosof

[11:31] RaviSharma: Niket Tandon's challenge ties in with last year's summit topic, Context and situational awareness.

[11:31] Ken Baclawski: @[11:17] RaviSharma: Nowadays, a "Turing Test" is any experiment where a person interacts with another entity. The entity is said to "pass the test" if the person cannot tell whether the entity is a human or a machine. This was not Turing's original test, which was an imitation game.

[11:32] RaviSharma: Ken thanks for answer

[11:34] RaviSharma: Gary - Niket's explanation of the continuity of existence of the ball or basket - does it not assume situational awareness?

[11:36] RaviSharma: Gary - we get an opportunity to ask Sargur Srihari tomorrow how to determine which of the distributions is relevant for a given type of situation: Laplace, Student's t, etc.

[11:39] Ken Baclawski: @[11:36] RaviSharma: We can ask Sargur today. The panel discussion is coming up today.

[11:40] RaviSharma: Yes Ken thanks

[11:40] RaviSharma: Gary - is there a notion of progressive learning, e.g., is this item edible, vegetarian or not?

[11:43] RaviSharma: and then move on to quality, healthiness, etc. - thus first filter, then progressively more refined. Is this built into ML today? And is there a solution that distinguishes the stages - commonsense knowledge, then visual identification with that knowledge, then probabilities that two or more objects are similar or not, etc. - up to actual AI reasoning?

[11:44] Gary: @Ravi, regarding provenance learning - it is more about understanding the provenance of the knowledge/information. Where did this info come from? Observation, a DB, etc.

[11:46] Gary: @Ravi, who asked "How do we partition what is generally known vs. what a set of users or a single user understands?" - we might have provenance info attached to some knowledge, like "my user believes that they will be late to the meeting."

[11:48] RaviSharma: Janet - it appears you also addressed a lot of logic and user preferences; how does that tie in with narratives? Am I reasonable in assuming that there is strong cross-track correlation and overlap between Commonsense and Narrative as related to Explanation and, more generally, all of AI?

[11:48] Gary: Reference on this Grice issue - Bickhard, Mark H. "The social-interactive ontology of language." Ecological Psychology 27.3 (2015): 265-277.

[11:50] RaviSharma: Janet - my favorite is concatenation, i.e., a narrative discourse embedded in another discourse. For example: this story told by A at time T1 is the same as the same story told by B at time T2, etc., and combinations thereof?

[11:52] RaviSharma: Janet - excellent Ontology Design Pattern from Giddi.

[11:52] Gary: @Ravi asked, "Niket's explanation of the continuity of existence of the ball or basket - does it not assume situational awareness?" Yes, he called it commonsense-aware and discussed it as understanding the agent, event, object and process - the ingredients of a situation.

[11:53] Gary: @Ravi asked "is there a notion of progressive learning, e.g., is this item edible, vegetarian or not?" The NELL system is an example of this.

[11:54] RaviSharma: Ken - what is beyond the proof in an explanation? If an explanation is complete and, by the way, also supported by a proof, does that mean explanations go beyond math?

[11:56] RaviSharma: Janet - Wulrich's view is more universal for all objects and concepts; people see it differently depending on context? Thanks, that was what tied us to last year's summit.

[11:58] RaviSharma: Janet included trust and information as well.

[12:00] Ken Baclawski: @[11:54] RaviSharma: I contrasted the view that a proof is an explanation (indeed, the "gold standard" for explanation) with the reverse: if one uses explanation logic, then the result is not only an explanation but also a rigorous proof. Proofs in the mathematics literature are far from being complete and rigorous. Their purpose is to convince other mathematicians in the same specialty (which could be quite small) that the theorem is probably true. In most cases, one could generate a complete and rigorous proof from the published argument, but it is very difficult to accomplish and rarely done. My talk proposed that by taking explanation as the goal, one can develop fully complete and rigorous proofs that can also be readable narratives.

[12:06] Mark Underwood: Sorry, Ken/Janet - I had to swap you out. Was there a question for me?

[12:08] RaviSharma: episodic vs progressive knowledge.

[12:10] Ken Baclawski: @[12:06] Mark Underwood: Yes, there was. Janet can explain it.

[12:11] RaviSharma: As new generations learn through audio-visual rather than NL-based material, and reach certain levels, say in a course, with grades and levels of explainability - where are we with machines and ontologies?

[12:12] Mark Underwood: RE Learner model @ ADL https://www.adlnet.gov/video-privacy-pt1

[12:16] RaviSharma: Mark thanks for explanations

[12:17] RaviSharma: Gary and Janet thanks for some answers

[12:18] RaviSharma: Ken thanks for proof related comments

[12:24] RaviSharma: Gary introduced situation, representation, user base, etc., also intelligent agents - that is where we are. Semantic resources. Janet: MOD/DOD systems to support decision making against threats. By the way, I worked on DOD/MOD knowledge bases, integrating the different services' databases into a single one for knowledge sharing among DOD components such as the Army, Navy, Special Forces, etc. Space Command would now be a new one. The problem with knowledge reuse was that drones do things differently than aircraft, etc., just to mention a few.

[12:25] Mark Underwood: http://sites.ieee.org/sagroups-1484-20-1/

[12:29] RaviSharma: Gary - Carnegie Mellon could be a site to visit for these areas of discussion, any URLs?

[12:31] RaviSharma: Mark - role-playing technology appears like ... Gary said Michael Gruninger has references to it, e.g., the Godfather movie scene?

[12:31] RaviSharma: Janet gave a reference to knowledge creation, gamiforms, and game chronotopes. Thanks, Janet.

[12:32] RaviSharma: Thanks, all.

[12:33] Mark Underwood: @Janet - Apologies for the weak response. Been away from that literature for a time. Thx for the notes, Ravi

Panel Proceedings

  • Can computer-based explanations be made as informative and useful as an explanation from a well-informed person?
    • Are there disconnects among researchers, industry and media, or between users and investors with respect to what constitutes an acceptable or successful explanation?
    • Is there general ignorance about what is already possible vs what is well beyond current capabilities?
  • What role do ontologies play?
    • How can one integrate ontological and statistical approaches?
    • What is required for an ontology to support explanation?

[12:41] Gary: @Ravi Michael Gruninger has a video ontology. Davis and Marcus refer to the Godfather scene: "To take another example, consider what happens when we watch a movie, putting together information about the motivations of fictional characters we have met only moments before. Anyone who has seen the unforgettable horse's head scene in The Godfather immediately realizes what is going on. It is not just that it is unusual to see a severed horse head; it is clear Tom Hagen is sending Jack Woltz a message: if I can decapitate your horse, I can decapitate you; cooperate, or else. For now, such inferences lie far beyond anything in artificial intelligence."

[12:45] RaviSharma: Panel discussion starts

[12:45] RaviSharma: Michael and Hari introduced

[12:46] RaviSharma: Hari - DL and mapping between "dark" (opaque) and human-understandable systems.

[12:47] RaviSharma: Q for Hari: how dark are the machine representations? Obviously there is some structure and sequence for inputs from accessed knowledge and filtering/modeling.

[12:49] RaviSharma: These are shown as some structure in DARPA models as well, so is there a mapping of those to how we understand as humans? Are there any pause points or stepped learning levels that are mutually natural equivalents?

[12:50] Gary: Coherent insight might involve the same framework/(micro)theory/model within which we reason.

[12:50] RaviSharma: The Ptolemaic model is complex but equivalent.

[12:51] Gary: Coherence BTW is an important concept in discourse understanding.

[12:51] TerryLongstreth: Michael: what is the relationship between insight and intuition informing the 3-year-old, and can that relationship provide a framework for a self-expanding and self-correcting, machine-generated ontology for use in preparing explanations?

[12:52] janet singer: @Mark - Glad I was able to locate your slide on the gamiforms and chronotopes. And good to see such innovative uses of narrative in tech development

[12:53] BobbinTeegarden: What part does metaphor play in discovering the patterns of how things fit together?

[12:55] Mark Underwood: Discourse coherence is maddeningly mingled with competency, domain specificity (not knowing or misunderstanding specialized terms), and learner journey (knowledge onboarding).

[12:55] TerryLongstreth: Can "unsatisfying" be a metric for an explanation system?

[12:57] ToddSchneider: Would the ability to generalize an explanation be a criterion for the 'quality' of an explanation?

[13:03] janet singer: Discourse coherence relates to sense-making and fusion as well as competence, domain specificity, learner journey, others?

[13:04] ToddSchneider: Terry? SCOPE??

[13:05] Mark Underwood: The references to modeling & simulation are apt here... Possibly a topic we should flesh out?

[13:06] TerryLongstreth: "The computer told me." is the modern equivalent of "I read it in the Times." I.e. explanation helps to establish reputation of the system.

[13:08] RaviSharma: Ram - I emailed you

[13:09] janet singer: Modeling and simulation, coherence, metaphor, dialog all good topics

[13:11] RaviSharma: Hari - you are able to see what AI systems can do today and tomorrow, but have you seen audio-visual-learning-based AI vs. NL-based? Does it still lead to the same optimism?

[13:14] Ram d. Sriram: Check this out. Harari talks about XAI in this video, I believe. https://www.ynharari.com/fei-fei-li-yuval-noah-harari-coming-ai-upheaval/

[13:15] RaviSharma: @Amit - AI systems assisting in improving human tasks and responses to near-real-time situations are where we are, but for many complex and slow decisions, such as global environmental strategies with currently incomplete knowledge, are we still miles away or not?

[13:20] Mark Underwood: RE simulation - DARPA's focus in this 2017 XAI Gunning deck was ModSim ArduPilot.org https://www.darpa.mil/attachments/XAIProgramUpdate.pdf

[13:21] RaviSharma: @Amit - are there solutions that ask the patient some preliminary questions to determine at least their knowledge level, so that appropriate explanations can be selected, either from a database/knowledge base or by a human doctor?

[13:25] RaviSharma: Hari - emphasis on how to build AI systems: Software 2.0 is the ML capability. The two extremes are letting it learn vs. Software 1.0, where we give it all the inputs.

[13:25] BobbinTeegarden: What is Software 2.0 -- characteristics?

[13:26] RaviSharma: We need Software 3.0, with self-correction of what is learned.
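For context on the Software 1.0 vs. Software 2.0 distinction invoked here (terminology popularized by Andrej Karpathy): in Software 1.0 a human writes the decision logic explicitly, while in Software 2.0 the behavior is induced from training data. A minimal Python sketch of the contrast, using an invented toy task (the temperatures, labels, and function names below are hypothetical, purely for illustration):

  # Software 1.0: a human writes the decision logic explicitly.
  def is_hot_v1(temp_c: float) -> bool:
      return temp_c > 30.0  # threshold chosen by the programmer

  # Software 2.0: the decision boundary is induced from labeled examples.
  # (Toy brute-force learner; real systems would fit a neural network.)
  def fit_threshold(examples: list[tuple[float, bool]]) -> float:
      best_t, best_err = 0.0, len(examples) + 1
      for t, _ in sorted(examples):
          err = sum((temp > t) != label for temp, label in examples)
          if err < best_err:
              best_t, best_err = t, err
      return best_t

  data = [(18.0, False), (24.0, False), (29.0, False), (31.0, True), (35.0, True)]
  learned_t = fit_threshold(data)  # 29.0 on this toy data

  def is_hot_v2(temp_c: float) -> bool:
      return temp_c > learned_t  # threshold chosen by the data

  print(is_hot_v1(32.0), is_hot_v2(32.0))  # True True

On this reading, the "Software 3.0" Ravi asks for would add a loop that monitors prediction errors and re-fits the learned parameters as new labeled observations arrive.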

[13:28] Gary: @Mike G do you think it useful to try to embed AI systems in the physical world to acquire commonsense knowledge (CSK), such as you are engineering into your PRAxIS suite?

[13:29] janet singer: Last question is good: What is required for an ontology to support explanation?

[13:29] janet singer: Can't remember who raised that, but they seemed to have some direction in mind.

[13:30] AmitSheth: Ravi - yes, I agree we are miles away, unless we are talking about a well-defined, simpler problem (e.g., labeling an image where the label is reasonably meaningful for most observers and is about the main objects or general information...).

[13:30] MichaelGruninger: @Gary: I agree. Further, there are already a lot of systems embedded in the world (e.g. IoT), and many of these are used with implicit commonsense knowledge that has not yet been made explicit in formally specified ontologies

[13:32] Gary: @MichaelGruninger Yes, but the IoT devices we are building and deploying have no learning or knowledge-retention ability built in.

[13:32] ToddSchneider: Janet, you're adding new requirements on an ontology.

[13:33] RaviSharma: Amit - thanks, and what about labeling the image based on the viewer's level of knowledge - for a radiologist, or for an uninformed patient, showing where the kidney is?

[13:34] janet singer: @Todd - or using this as an opportunity for revisiting and clarifying requirements on an ontology

[13:37] ToddSchneider: Michael, would 'explanation' be an ontology in addition to a domain ontology?

[13:38] Gary: To some degree human "perception" of the world is based on some "model" of the world we construct during our development.

[13:38] MichaelGruninger: Ontologies are so much more than WordNet and ImageNet

[13:38] RaviSharma: Amit and other panelists - are there examples/use cases with and without the use of an ontology, to explicitly see the value of the ontology? The reason is that richness in understanding relationships lets us understand and rank the predicates that are important in more effective AI solutions.

[13:38] RaviSharma: Michael G - Yes

[13:39] RaviSharma: Michael Gruninger - How do we leverage them?

[13:39] RaviSharma: into AI solutions

[13:41] MikeBennett: Aren't there 2 dimensions to the 'kind of ontology' problem: depth of abstraction (use of TLO etc.) for disambiguation, versus richness of axioms for reasoning? Both play a role in explanations but it seems to me these are different roles.

[13:41] RaviSharma: What happens when multiple ontologies, at multiple levels, are involved in AI? Are there well-understood integration concepts for putting such richness in ontologies to more beneficial use in AI solutions?

[13:43] RaviSharma: Mike - great observation, we need to distinguish them.

[13:43] janet singer: @Todd - what do you have in mind for explanation as a domain-invariant ontology? Would conversation be another?

[13:43] janet singer: Or maybe those two in one?

[13:44] ToddSchneider: Janet, a 'separate' ontology for Explanation would likely not be domain invariant. There would likely be dependencies among them.

[13:45] RaviSharma: Hari informed us about MPE (most probable explanation).
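For context, in a probabilistic graphical model the MPE (most probable explanation) is the single most probable joint assignment of all unobserved variables given the evidence. A minimal brute-force Python sketch over an invented two-variable network; the structure, names, and all probabilities are hypothetical, not taken from Hari's talk:

  from itertools import product

  # Hypothetical network: Disease -> Symptom, with invented probabilities.
  p_disease = {True: 0.1, False: 0.9}                  # P(D)
  p_symptom = {(True, True): 0.8, (True, False): 0.2,  # P(S | D)
               (False, True): 0.1, (False, False): 0.9}

  def joint(d: bool, s: bool) -> float:
      # P(D=d, S=s) by the chain rule of the network.
      return p_disease[d] * p_symptom[(d, s)]

  def mpe(evidence: dict) -> tuple[dict, float]:
      # Enumerate all assignments consistent with the evidence; keep the best.
      best, best_p = None, -1.0
      for d, s in product([True, False], repeat=2):
          assign = {"D": d, "S": s}
          if any(assign[k] != v for k, v in evidence.items()):
              continue
          if joint(d, s) > best_p:
              best, best_p = assign, joint(d, s)
      return best, best_p

  # Observing the symptom, the MPE is D=False (0.09 > 0.08): the prior
  # outweighs the likelihood - exactly the kind of result that calls
  # for an explanation.
  print(mpe({"S": True}))

Real systems compute MPE with max-product message passing or branch-and-bound rather than enumeration, since the joint space grows exponentially with the number of variables.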

[13:46] AmitSheth: Example of combining ontologies/knowledge graphs (three of them) with a probabilistic graphical model: http://ontologforum.org/index.php/ConferenceCall_2019_05_07

[13:47] RaviSharma: Amit - thanks for multiple ontology smart city example.

[13:48] janet singer: Ok, so how would an explanation ontology be different from a domain ontology? Is the idea that it would be at a meta level to the target ontology, so it would be about the use of the target?

[13:49] AmitSheth: presentation: http://www.knoesis.org/sites/default/files/aaai2016understandingtrafficdynamics-160218135325.pptx

[13:49] TerryLongstreth: Amit: does your paper show integration or incorporation of probabilistic sections with an ontology?

[13:49] AmitSheth: the ontologies we used were modeled using different representations, not all using classical logic

[13:50] AmitSheth: each of the "ontologies" already existed

[13:51] RaviSharma: Michael - typical examples for universal or general understanding are useful for creating (AI-based) recognition knowledge bases, and you also bring in statistical aspects.

[13:52] BobbinTeegarden: Is it possible that a 'contextual' ontology could be built on the fly, with just the things related in some way to the core of concentration/discourse, to help discover one or more explanations?

[13:53] janet singer: Using typical examples gets back to leveraging heuristics research.

[13:55] MikeBennett: Does the contextual ontology need to be shallow? Why not use a 'wire frame' TLO with just the high level abstractions germane to that context, thereby disambiguating from other things in other contexts? Of course you can make the context implicit (thereby a shallow ontology) but you need not?

[13:56] Gary: @Bobbin I would call this a KB with some ontology modules covering assumptions that are part of the interpretation of the discourse.

[13:56] TerryLongstreth: Can the computer invent or discover a concept? a conceptual model?

[13:57] BobbinTeegarden: @Mike: Could it be shallow to begin with, and morph towards better and better explanation? Does this tend towards Sowa's 'discourse'?

[13:57] ToddSchneider: Michael Gruninger, if logics other than First Order are used, or assumed, in the creation of an ontology, how would this impact ontology development? Ontological Analysis?

[13:57] Gary: @Terry That is what ML systems do.

[13:58] MikeBennett: @Bobbin I think so; that's the idea behind the 'wire frame' TLO approach - compatible but not complete.

[13:59] BobbinTeegarden: @Terry I'm working with a little startup, Pattern Computer, that claims they can discover new patterns. Stay tuned ... ;0)

[14:02] BobbinTeegarden: @Terry - and Jans Aasman, of AllegroGraph fame, does a demo that, given a few concepts, can scour the web and pull in a set of related concepts in a related ontology... an on-the-fly conceptual ontology?

[14:02] Gary: On this question of how many exceptions you allow, there is the idea of normally assimilating info into a current knowledge structure until we find something that affords an accommodation process.

[14:03] MikeBennett: @Bobbin aren't those words? How will it distinguish between 2 legitimate uses of words like Account or Asset?

[14:04] ToddSchneider: So, would the ability to create context free domain models/ontologies be useful, if they could be created?

[14:04] MichaelGruninger: @Todd: Yes, I agree

[14:06] Gary: Locality would be an important factor in building a useful contextualized KB on the fly. But background knowledge is important to determine relevance and meaning, in the sense of what is connected and how.

[14:06] TerryLongstreth: @Gary - I've run out of time, but the follow-up to my question was going to be about ML or deep learning systems discovering ontologies and assigning probabilistic values to accommodate (machine-generated) evolution of the ontology; the ontology could then be applied to the generation of explanation(s).

[14:06] AmitSheth: @Terry: not in that paper - learning from data represented in a probabilistic model was mapped to a knowledge structure/ontology; we are now integrating knowledge (components) into deep learning (manuscripts in progress, will go on arXiv).

[14:06] ToddSchneider: Local to what?

[14:07] TerryLongstreth: Bobbin- thanks for the info.

[14:07] TerryLongstreth: Bye all

[14:10] ToddSchneider: Ken, could you include the list of questions (that the panel was responding to) in the chat?

KenBaclawski: Yes, they are at the start of the chat proceedings above.

[14:11] RaviSharma: Amit, Hari, Michael, panelists, and colleagues -

[14:11] RaviSharma: thanks

Resources

  • Communiqué Draft in pdf
  • Communiqué Draft in docx

Previous Meetings

... further results

Next Meetings

... further results