Ontolog Forum

Session: Explainable AI Session 1
Duration: 1 hour
Date/Time: Feb 20 2019 17:00 GMT
9:00am PST / 12:00pm EST
5:00pm GMT / 6:00pm CET
Convener & Co-Champions: Ram D. Sriram and Ravi Sharma

Ontology Summit 2019 Explainable AI Session 1


Speaker: Dr. William Clancey

  • Title: "Explainable AI Past, Present, and Future—A Scientific Modeling Approach"
  • Presentation: Slides Video Recording
  • Bio: William J. Clancey, Ph.D.

Florida Institute for Human and Machine Cognition, Pensacola

Dr. William J. Clancey, a senior research scientist at IHMC, is a computer scientist whose research relates cognitive and social science in the study of work practices and the design of agent systems. He received a PhD in Computer Science at Stanford University (1979) and Mathematical Sciences BA at Rice University (1974). He has developed artificial intelligence applications for medicine, education, finance, robotics, and spaceflight systems. At the Institute for Research on Learning (1987-1997), he co-developed ethnographic methods for studying and modeling work systems. At NASA Ames Research Center as Chief Scientist of Human-Centered Computing, Intelligent Systems Division (1998-2013), his team automated file management between Johnson Space Center Mission Control and the International Space Station, receiving the JSC Exceptional Software Award. He is a Fellow of the American College of Medical Informatics, Association for Psychological Science, Association for Advancement of AI, and the National Academy of Inventors. His book Working on Mars: Voyages of Scientific Discovery with the Mars Exploration Rovers received the AIAA 2014 Gardner-Lasser Aerospace History Literature Award. He has presented invited lectures in over 20 countries.

Conference Call Information

  • Date: Wednesday, 20-February-2019
  • Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
  • Expected Call Duration: 1 hour
  • The Video Conference URL is
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available:
  • Chat Room



[12:07] RaviSharma: Ravi introduced Dr Bill Clancey and noted common interest areas such as human spaceflight.

[12:11] DavidWhitten: What was the name of the collaborator Bill mentioned that created the "herstory" ?

[12:12] DavidWhitten: i.e. the list of all rules that had been used in answering a query.

[12:14] DavidWhitten: Hmm. "Why" means "Why are you asking this question?", not "Why was this conclusion made?"
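
The "list of all rules that had been used in answering a query" mentioned above can be sketched as a minimal backward chainer that records its trace. This is an illustrative reconstruction, not MYCIN's actual code; the rule IDs, premises, and conclusions are invented.

```python
# Minimal sketch of MYCIN-style rule tracing. Rule names, premises,
# and conclusions are hypothetical; real MYCIN used certainty factors
# and a richer context tree, both omitted here.

class Rule:
    def __init__(self, rid, premises, conclusion):
        self.rid = rid
        self.premises = premises
        self.conclusion = conclusion

class Engine:
    def __init__(self, rules, facts):
        self.rules = rules
        self.facts = set(facts)
        self.trace = []          # every rule applied while answering a query
        self.support = {}        # conclusion -> rule that established it

    def prove(self, goal):
        """Backward-chain on the goal, recording which rules fire."""
        if goal in self.facts:
            return True
        for rule in self.rules:
            if rule.conclusion == goal and all(self.prove(p) for p in rule.premises):
                self.trace.append(rule.rid)
                self.support[goal] = rule
                self.facts.add(goal)
                return True
        return False

    def how(self, goal):
        """HOW-style explanation: which rule established this conclusion?"""
        rule = self.support.get(goal)
        return (f"{goal} by {rule.rid} from {rule.premises}"
                if rule else f"{goal} was given")

rules = [
    Rule("RULE037", ["gram-negative", "rod"], "enterobacteriaceae"),
    Rule("RULE085", ["enterobacteriaceae", "in-blood"], "bacteremia"),
]
engine = Engine(rules, ["gram-negative", "rod", "in-blood"])
engine.prove("bacteremia")
print(engine.trace)   # ['RULE037', 'RULE085'] -- the rules used for the query
```

The recorded trace answers HOW a conclusion was reached; answering WHY a question is being asked additionally requires the goal stack at the moment the question was posed, which a fuller sketch would also record.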

[12:15] DavidWhitten: Separated the time with the computer into the gathering stage, decision stage, and review stage.

[12:17] DavidWhitten: Ohio State was involved in AI in the early to mid 1970s. Does anyone know who was working on it?

[12:19] DavidWhitten: MYCIN had a rudimentary Disease Hierarchy of types of disease, their symptoms, etc.

[12:20] Mark Fox: I believe the OSU work was led by Chandrasekaran.

[12:21] BobbinTeegarden: The same Chandrasekaran who started the KADS research at Ohio State?

[12:21] RaviSharma: thanks Mark

[12:23] DavidWhitten: MYCIN included categories to refine the goal search implicit in a rule

[12:24] Mark Fox: Pople was at the University of Pittsburgh.

[12:24] DavidWhitten: Who made a set of operators for causal operations for diagnosis of diseases?

[12:24] DavidWhitten: Was that Pople ?

[12:25] DavidWhitten: From 1975-1985 he recognized how to model data (knowledge representation) and the teaching knowledge behind the rules and data.

[12:25] DavidWhitten: Got involved with 20 founders at Teknowledge.

[12:26] DavidWhitten: Investigated knowledge bases, to extract the implicit design in a set of rules (knowledge base)

[12:26] DavidWhitten: Categories such as a patient category of Compromised Host.

[12:27] DavidWhitten: The relevant structure of a rule is part of how you write rules, how you prove them relevant, create a disease ontology, and how you prove the observations match the rule.

[12:28] RaviSharma: Bill- here agent is bio organism that is identified to cause a disease or symptom?

[12:28] DavidWhitten: Bill Clancey was always thinking about how to lift the MYCIN and QA system's core into a general tool. This was EMYCIN.

[12:29] DavidWhitten: The abstraction process has different forms. 1) definitional mapping using numeric model.
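
The abstraction step referred to here is the data-abstraction rung of heuristic classification: raw numeric data is mapped to qualitative terms, then generalized up a category hierarchy to categories like "compromised host" (mentioned at 12:26). The thresholds and category names below are invented for illustration.

```python
# Sketch of data abstraction in heuristic classification.
# Thresholds and categories are made up; not clinical guidance.

def abstract_patient(findings):
    """Map raw findings to qualitative, then higher-level, categories."""
    abstractions = set()
    # 1) definitional abstraction: numeric value -> qualitative term
    if findings.get("wbc", 10000) < 2500:        # hypothetical cutoff
        abstractions.add("leukopenia")
    # 2) generalization up a category hierarchy
    if "leukopenia" in abstractions:
        abstractions.add("immunosuppressed")
    if "immunosuppressed" in abstractions or findings.get("alcoholic"):
        abstractions.add("compromised-host")
    return abstractions

print(abstract_patient({"wbc": 1800}))
# includes 'leukopenia', 'immunosuppressed', 'compromised-host'
```

The point of the ladder is that each rung is separately explainable: the system can report which definition or generalization licensed each step.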

[12:29] DavidWhitten: Knowledge Engineering is a new way of modeling science and engineering.

[12:30] DavidWhitten: apparently tools are MYCIN, EMYCIN, Neomycin, Heracles, and CASTER

[12:30] DavidWhitten: CASTER was used in Mechanical Engineering.

[12:31] BobbinTeegarden: This looks like the early KADS work...

[12:32] DavidWhitten: Chandrasekaran stated that there were ontological commitments about processes that allowed general models.

[12:33] DavidWhitten: Breaking down an ontology requires attention to categories used for teaching.

[12:35] DavidWhitten: Just before graduation, he worked on Therapy Explanation with a table-driven approach (1979) to regularize the therapy algorithm, with rules to distinguish which categories should be used and then to refine them into subcategories.

[12:35] DavidWhitten: Explanation was driven off a combination of rules and the tables used for data/observations.
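
A table-driven selection step of the kind described can be sketched as follows: a table carries the regular cases, rules handle refinements such as exclusions, and the explanation is assembled from both. The organisms, drugs, and rankings are invented for illustration.

```python
# Hedged sketch of table-driven therapy selection with an explanation
# built from both the table lookup and the refinement rules.
# All entries are fictitious; not medical advice.

THERAPY_TABLE = {
    "enterobacteriaceae": ["gentamicin", "ampicillin"],   # ranked choices
    "pseudomonas": ["carbenicillin", "gentamicin"],
}

def select_therapy(organism, allergies=()):
    ranked = THERAPY_TABLE.get(organism, [])
    explanation = [f"table lists {ranked} for {organism}"]
    for drug in ranked:
        if drug in allergies:                 # refinement rule
            explanation.append(f"ruled out {drug}: patient allergy")
            continue
        explanation.append(f"selected {drug}")
        return drug, explanation
    return None, explanation

drug, why = select_therapy("enterobacteriaceae", allergies=("gentamicin",))
print(drug)   # ampicillin
```

Because every table lookup and every rule application appends to the explanation list, the final recommendation can be justified step by step, which is the "regularized" explanation the table-driven approach was after.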

[12:35] RaviSharma: Applying MYCIN-based general domain understanding helped you get desired results in CASTER. Would that point to the possibility of metamodeling or knowledge patterns/graphs that would apply (as a template) to other domains? What would typify applicable domains?

[12:36] BobbinTeegarden: So is this a kind of 'process ontology'?

[12:37] DavidWhitten: Worked on GUIDON, a tutoring program. This program examined the records and its deduction trace, with input from the user, to increase internal data.

[12:38] DavidWhitten: Able to attach data to the particular rules used in inference.

[12:39] DavidWhitten: Explanation is a long-term part of medical diagnosis, intelligent tutoring systems, and general "assistant" programs.

[12:40] DavidWhitten: This is Symbolic Artificial Intelligence. Requires system models, formal language, and defined data; separating the models from explanations. Causal reasoning, generating stories and narrative. Beverly Woolf worked on this.

[12:41] DavidWhitten: Bewildered by today's programs and their lack of historical knowledge; they are recreating the past.

[12:42] DavidWhitten: MAPS does not have an interaction mode to examine alternate routes. No way to discuss advantages, just "5 minutes faster". Are the recommendations worth following? No reflection on DATA by the MAPS program (not program introspection).

[12:42] Gary: Bill's critique of current advice systems, like map systems, shows in part that these systems lack the type of commonsense knowledge needed for dialog.

[12:43] DavidWhitten: He did better than MAPS ten years ago.

[12:43] RaviSharma: Bill - one of the reasons is that bidirectional inputs imply understanding on the part of GPS routing systems; otherwise, today they are passive internal decision-option presenters, one at a time?

[12:44] BobbinTeegarden: Regarding heuristic matches, similarity is contextual... so highest 'match' is relative to the context

[12:44] DavidWhitten: He wants to question recommendations by Netflix, such as the highly recommended documentary on AlphaGo.

[12:44] DavidWhitten: GRIN. The DARPA Explainable AI slide shows up for third time in third talk.

[12:46] DavidWhitten: Dunning wants to reconfigure the learning process for machine learning/neural nets to be explainable. Note: this does not guarantee it will be an understandable model. No learner model. No user model. Dunning wants to focus on other aspects.

[12:47] Gary: It is David Gunning

[12:47] Gary:

[12:47] DavidWhitten: DARPA Explainable AI is really about generating tools for use in explainable AI.

[12:48] DavidWhitten: The understanding of the learning system is built up by looking at the neural net as it is created, to recognize general areas.
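
One common post-hoc way to "recognize general areas" of a learned model is perturbation-based sensitivity analysis: perturb each input and see how much the output moves. This is an illustrative technique, not the specific DARPA XAI method discussed in the talk; the stand-in `model` is a toy linear function.

```python
# Perturbation-based input sensitivity, a simple post-hoc explanation
# technique. The model here is a stand-in linear function, purely
# illustrative; real use would wrap a trained network's forward pass.

def model(x):
    weights = [0.1, 2.0, -0.5]        # hypothetical learned weights
    return sum(w * v for w, v in zip(weights, x))

def sensitivity(model, x, eps=1e-3):
    """Finite-difference importance score per input dimension."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        scores.append(abs(model(bumped) - base) / eps)
    return scores

print(sensitivity(model, [1.0, 1.0, 1.0]))
# the largest score marks the most influential input
```

Note the limitation raised in the chat: such scores describe the model's local behavior but carry no learner model or user model, so by themselves they do not make the model understandable to a person.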

[12:49] DavidWhitten: Work System Design Methodology (from 1992): a method to find out requirements, do participatory design, get fast turnaround on tool creation, and recognize the experimental nature of WSDM.

[12:49] BobbinTeegarden: Is participatory design always machine-human, or also machine-machine?

[12:49] Gary: Bill's ethnographic design is an important part of KE, but, alas, little used.

[12:49] DavidWhitten: How programs and models interact with people will be contextual and discoverable. Few programs use this method.

[12:52] RaviSharma: Systems thinking keeps coming back; also, interdomain work is important?

[12:53] RaviSharma: request participants to raise hand

[12:55] Mark Fox: Engineering the data that neural nets use to learn is analogous to engineering rules for expert systems

[12:55] Gary: Participatory design could be human to human.

[12:57] DouglasRMiles: So it is still possible to get funding and work on Explainable AI?

[12:58] DouglasRMiles: I didn't realize the govt was still interested

[13:00] DouglasRMiles: I suppose the software just has to pretend like it uses NNs ... the same way LarKC pretended to use RDF

[13:02] DouglasRMiles: (sorry if that sounds naughty)

[13:02] Mark Fox: Within the world of ontologies, Ontology Design Patterns are one way to transfer structures from one domain to another.
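
Mark Fox's point about Ontology Design Patterns transferring structure across domains can be sketched as a generic pattern instantiated twice. The "participation" pattern and both instantiations below are invented for illustration; real ODPs are typically expressed in OWL rather than code.

```python
# Sketch of an ontology design pattern reused across domains:
# a generic "participation" pattern (agent plays a role in an event),
# instantiated in a medical domain and a manufacturing domain.
# All names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Participation:
    agent: str    # the entity taking part
    role: str     # the role it plays
    event: str    # the event it takes part in

# Same structure, two domains:
med = Participation(agent="E. coli", role="causative-agent", event="bacteremia")
mfg = Participation(agent="welding-robot", role="performer", event="assembly-step")
```

The transfer is structural: the pattern fixes the relations, and each domain supplies its own vocabulary for the slots, which is what makes the pattern a template in the sense Ravi asked about earlier.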

[13:05] BobbinTeegarden: I saw earlier some work that bridged on the KADS patterns (Chandrasekaran, Esprit research), and you mentioned what's needed now is patterns. Are those patterns akin to sophisticated KADS patterns (like heuristic match applied to xxx)?

[13:07] Gary: I couldn't be heard, but my Q for Bill, whose early work I admire, concerns his observations on brittleness. We covered some of his work in our session on commonsense and this may be something to consider to address brittleness. DARPA also has a program on this. Is this also interacting with the XAI program?

[13:07] DouglasRMiles: A big question I have also have is why the CYC Software Application infrastructure hasn't been made available to the teams to hook up the algorithms

[13:11] Gary: A related Q for Bill is what he thinks of efforts like NELL.

[13:15] DouglasRMiles: There are ways to walk programs out of brittleness through the knowledge engineering process

[13:15] Gary: New AI area - FMEA

[13:16] ToddSchneider: There's an ISO specification for processes: Process Specification Language (PSL).

[13:17] ToddSchneider: PSL has a common logic representation.

[13:19] Gary: We use participatory design as part of our session to leverage domain vocabularies as understood by people in the field to create ontology design patterns.

[13:20] ToddSchneider: For medicine there are the ontologies from the OBO Foundry.

[13:22] John Sowa: Open Cyc is the freely available ontology + software. See

[13:23] Gary: One thing you run into when talking about connecting models is to develop bridging concepts between them.

[13:23] DouglasRMiles: since it has been down, a new port is being developed

[13:23] ToddSchneider: An ontological model may not be able to represent sufficient aspects of a dynamic model (e.g., fluid flow models).

[13:25] DouglasRMiles: the idea is to allow software written in many languages to use a common assert/query interface

[13:26] TerryLongstreth: Congenital problems in computer systems are called "design flaws"

[13:27] BobbinTeegarden: We wrote a book, Cognitive Patterns, based on our work modeling business using the KADS patterns, like Tinkertoys, similar to the MYCIN building logic. I have a UML plugin of KADS patterns; if anyone wants to play with it, let me know.

[13:34] BobbinTeegarden: JohnS: LINDA system? URL?

[13:39] Gary: Generative communication in Linda
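
Generative communication, the coordination model behind Linda, can be sketched as a shared tuple space with write, read, and remove operations matched by pattern. This is a minimal single-process sketch; real Linda implementations add blocking semantics and concurrency, omitted here.

```python
# Minimal sketch of a Linda-style tuple space. Patterns use None as a
# wildcard. Blocking 'in'/'rd' and multi-process coordination are
# omitted; this only shows the generative-communication idea.

class TupleSpace:
    def __init__(self):
        self.space = []

    def out(self, tup):
        """Write a tuple into the shared space."""
        self.space.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        """Read a matching tuple without removing it (None if absent)."""
        return next((t for t in self.space if self._match(pattern, t)), None)

    def inp(self, pattern):
        """Read and remove a matching tuple ('in' is a Python keyword)."""
        t = self.rd(pattern)
        if t is not None:
            self.space.remove(t)
        return t

ts = TupleSpace()
ts.out(("temp", "room1", 21.5))
print(ts.rd(("temp", "room1", None)))   # ('temp', 'room1', 21.5)
```

The "generative" part is that a tuple, once emitted, has an existence independent of its producer: any process can later read or consume it by pattern.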

[13:40] AlexShkotin:

[13:41] John Sowa: Thanks Alex, that's the original paper. I'll send a note with more references

[13:42] DouglasRMiles: BTW, with the LarKC/OpenCYC thing I am working on, I am enabling the Linda interactor that Tarau created

