Ontolog Forum

Session Overview of Commonsense Knowledge and Explanation
Duration 1 hour
Date/Time Dec 05 2018 17:00 GMT
9:00am PST/12:00pm EST
5:00pm GMT/6:00pm CET
Convener Gary Berg-Cross and Torsten Hahmann

Ontology Summit 2019 Overview of Commonsense Knowledge and Explanation

An early goal of AI was to teach or program computers with enough factual knowledge about the world that they could reason about it the way people do. The starting observation is that every ordinary person has "commonsense": basic knowledge about the real world that is common to all humans. Spatial and physical reasoning are good examples. This is the kind of knowledge we want to endow our machines with, for several reasons, including its role in conversation and understanding. For example, an understanding of human perceptual and memory limitations might be an important thing for a dialog system to have. For all these reasons it seems reasonable in turn to argue that:

  • If we want to design and implement intelligent agents that are truly capable of providing explanations to people, then it is reasonable to believe that models of how humans explain decisions and behavior to each other are a good starting point for analyzing the problem.

Early on (in the days of Good Old-Fashioned AI, or GOFAI), such a belief was described as giving systems a capacity for "commonsense". However, early AI demonstrated that the nature and scale of the problem made it difficult. People seemed to need a vast store of everyday knowledge for common tasks. For example, a wide variety of knowledge was needed to understand even the simplest children's story, a feat that children master by what seems a natural process. One resulting approach was an effort like Cyc to encode a broad range of human commonsense knowledge as a step toward understanding text, which would bootstrap further learning. Some believe that today this problem of scale can be addressed in new ways, including via modern machine learning. But these methods do not offer any obvious way to provide machine-generated explanations of what they "know." Since fruitful explanations appeal to people's understanding of the world, commonsense reasoning would be a significant part of any computer-generated explanation.

In light of AI experience, how hard is this to build into smart systems? We have made progress, but it is still hard. One difficult aspect of common sense is making sure that explanations can be presented at multiple levels of abstraction, i.e. from a not-too-detailed summary down to tracing exact justifications for each inference step.

Explanations also need to be tailored to the consumer of the explanation, including the context. For example, 'everyday' explanations are typically explanations of why particular facts (events, properties, decisions, etc.) occurred, as opposed to explanations of more general relationships, such as those seen in scientific explanation. All too often scientific explanations, while valid, are offered without taking a listener's or consumer's context and interests into account. We know, for example, that humans expect explanations to be contrastive at times. That is, an explanation of a phenomenon is not sought as a deep question of ultimate causes but in response to particular counterfactual cases that might have been possible: "Why did the treatment cause swelling there as opposed to in the liver? ... or no swelling at all (that was a surprise)?"

This session will give an overview of these issues and preview others to be addressed in later sessions, in light of current ML efforts and best practices for AI explanations.


We will begin with a briefing following this outline:

  • Why/how is Commonsense Knowledge & Reasoning relevant to Explanation?
    • Both are ubiquitous in everyday thinking, speaking and perceiving in our ordinary interaction with the world
  • Some history complementing Ram's AI Evolution slide, with a short historical AI perspective from McCarthy's early thoughts through production systems and naive physics to modular ontologies (like Cyc).
  • Connections to earlier Summits & talks
  • More recent Commonsense efforts
  • Examples of research and Issues
  • Preliminary schedule of sessions & speakers

The briefing will be followed by general discussion as time permits.

Conference Call Information

  • Date: Wednesday, 05-December-2018
  • Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
  • Expected Call Duration: 1 hour
  • The Video Conference URL is
    • iPhone one-tap :
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available:
  • Chat Room



[11:53] Gary Berg-Cross: Torsten Hahmann, my co-champion, had a family obligation come up in the last hour and may not be able to be online to present or answer Qs. In that case I will be doing the talk which was planned as being presented by both of us.

[12:10] DavidWhitten: Weren't Newell & Simon the proponents of Production Systems?

[12:11] DavidWhitten: Is fluidity much like fluency of a language?

[12:14] John Sowa: what is the url of the slides?

[12:15] Ken Baclawski: The slides are at

[12:19] RaviSharma: slide 4 asks for too much pre-work before any use of Common sense?

[12:22] RaviSharma: interesting to note that microtheories help us in ontology for context and commonsense in addition to subject/content knowledge ?

[12:25] John Sowa: Ravi, the amount of commonsense is immense. Cyc with 6000 microtheories can only scratch the surface.

[12:26] John Sowa: Cyc started to represent commonsense in 1984.

[12:28] John Sowa: After 34 years, they still have to develop a new microtheory for every new application.

[12:34] RaviSharma: slide 18 - meaningful information can also be extracted by

[12:34] RaviSharma: and grammar etc,?

[12:35] RaviSharma: i meant NLP and ...

[12:35] John Sowa: Reading is the most important source of common sense knowledge.

[12:36] John Sowa: That requires not just NL processing, but NL understanding -- NLU!

[12:40] TerryLongstreth: Per (I think it was) slide 21: does common sense reasoning imply an ability to argue? Would Asimov's Laws (or any similar legislative constraints) be achievable without such an ability?

[12:41] John Sowa: For more about NLU, see

[12:41] AlexShkotin: what about

[12:43] John Sowa: Alex, look at my cogmem.pdf slides. There are some serious problems with IBM Watson.

[12:43] John Sowa: Short answer: The team that built Watson did not get more cash for what they did. So they left.

[12:47] BruceBray:

[12:50] BruceBray: that is an example of the value of "world knowledge"

[12:56] Ram Sriram: On the work funded by Paul Allen's venture you should contact Vinay Chaudhri. The project kind of fizzled, but had a spin off.

[13:04] Mark Underwood: Maybe useful to stage a session on dual software / trained "operator" solutions

[13:06] BobbinTeegarden: @john if logic is the intermediate stage, what is the next or final stage?

[13:07] RaviSharma: Ram do you refer to vulcan, intended for...

[13:09] TerryLongstreth: @Bobbin: I want the car to decide whether to hit the child or the policeman

[13:11] BruceBray: thanks, great session

[13:15] RaviSharma: A question for John: if the subject-knowledge person at the terminal is replaced by an Expert System (both having the same domain knowledge expertise), can the process of follow-on questions be facilitated, or eventually successively eliminated, through automated AI with inbuilt reasoning?

[13:16] Ram Sriram: @Ravi: Yes that was Vulcan ventures. Also, we need to work on the speakers for our sessions. Will send you some thoughts on Friday

[13:19] RaviSharma: John and Gary - if we are trying to get a handle on some concept or new subject, we go through a learning process, e.g. thesaurus, dictionary, domain-relevant terms such as geology and basin or reservoir, etc., and eventually build at least a preliminary understanding; thus, using NLP, domain terms, and a filtering process, we partially reach the goal. Are there similar steps possible through a combination of NLP, domain knowledge, and reasoning, and by elimination of less relevant answers, to arrive at a recipe for knowledge extraction and answers to the question or concept queries?

[13:20] RaviSharma: Ram Yes I almost put that off on my calendar and look forward to any subsequent work on our track, thanks

[13:21] RaviSharma: Ken thanks for a few minutes extension and welcome Michael Gruninger.

