
Ontolog Forum

Session Communiqué
Duration 1 hour
Date/Time June 12 2019 16:00 UTC
9:00am PDT/12:00pm EDT
5:00pm BST/6:00pm CEST
Convener Ken Baclawski

Ontology Summit 2019 Communiqué

Agenda

In this session, we continue development of the Communiqué, especially the findings and challenges.

Conference Call Information

  • Date: Wednesday, 12-June-2019
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1 hour
  • The Video Conference URL is https://zoom.us/j/689971575
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available: https://zoom.us/u/Iuuiouo
  • Chat Room

Participants

Proceedings

Note: The Communiqué discussion was not recorded.

[13:03] RaviSharma: Ram proposed that speakers from the past 10 years of summit sessions, or others whom we determine to be active, be invited to speak on their views of how ontology has progressed over the years.

[13:04] RaviSharma: Janet: retrospective and prospective views.

[13:04] RaviSharma: Janet recalled the 2007 ontology and taxonomy discussions.

[13:05] RaviSharma: Ken received a paper relating to that

[13:06] RaviSharma: Janet: integrating the integrators.

[13:07] RaviSharma: Ram: what has happened in 10 years, e.g., Sowa.

[13:08] RaviSharma: Ram: Nicola Guarino.

[13:08] RaviSharma: Ram: Mark Musen.

[13:11] RaviSharma: Janet: juxtapose ML and ontology; at one time, data were the answer to everything. Ontology supports theory.

[13:12] RaviSharma: Ravi: data are needed, but so are terms and vocabularies for developing ontologies.

[13:12] Ken Baclawski: The quest for explanations highlights the fact that data and statistics are not enough; one must also have theory.

[13:12] Ken Baclawski: We will return to this discussion in 2 weeks.

[13:15] RaviSharma: Ravi: the next session will finalize the Communiqué, allow a few days for final short comments, and set a deadline after which it will be published.

Resources

Sowa on 11/14

Basic problems:

  • People talk and think in some version of natural language.
  • Precise definitions of system terms can't help if people don't know what to ask for.
  • Machine-learning systems are even worse: they learn a maze of numbers that have no words of any kind.

We need systems that can explain how, what, and why.

Ram on 11/28

For the future, we envision a fruitful marriage between classic logical approaches (ontologies) and statistical approaches, which may lead to context-adaptive systems (stochastic ontologies) that might work similarly to the human brain.

Perhaps one step further is linking probabilistic learning methods with large knowledge representations (ontologies) and logical approaches, thus making results re-traceable, explainable, and comprehensible on demand.

Derek Doran on 11/28

Symbolic systems can prove that a fact holds given a knowledge base of facts, and can show the user the inference sequence that establishes it.
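
As a minimal sketch of this idea (the facts and rules below are made up for illustration, not taken from the talk), forward chaining over a tiny knowledge base can record exactly such an inference sequence:

    # Forward chaining over hypothetical facts and rules; the trace
    # records the inference sequence that establishes a queried fact.
    facts = {"penguin(tweety)"}
    rules = [
        ({"penguin(tweety)"}, "bird(tweety)"),
        ({"bird(tweety)"}, "has_feathers(tweety)"),
    ]

    def prove(goal):
        derived, trace, changed = set(facts), [], True
        while changed and goal not in derived:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    trace.append((sorted(premises), conclusion))
                    changed = True
        return goal in derived, trace

    ok, steps = prove("has_feathers(tweety)")
    for premises, conclusion in steps:
        print(" & ".join(premises), "=>", conclusion)
    # penguin(tweety) => bird(tweety)
    # bird(tweety) => has_feathers(tweety)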

Sub-symbolic systems (statistical ML):

  • White-box: The model's mechanisms are simple enough that you can trace how inputs map to outputs
  • Grey-box: You have some visibility into the mechanisms, but parameters are numerous, decisions are probabilistic, or inputs get “lost” (transformed)
  • Black-box: The model is so complex, with so large a parameter space, that it is not easily decipherable

Types of sub-symbolic systems:

  • White Box: regression, decision trees, rule mining, linear SVMs
  • Grey Box: clustering, Bayesian nets, genetic algorithms, logic programming
  • Black Box: deep neural networks (DNNs), matrix factorizations, non-linear dimensionality reduction

The problem: Most white-box and some grey-box models provide quantitative explanations (“variance explained”) that are generally not useful to a layperson responsible for an action recommended by the model.
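
A minimal illustration of the white-box case (the toy data and feature names are hypothetical, and scikit-learn is assumed): a shallow decision tree whose decision path can be read off directly, rather than summarized as "variance explained".

    # A small, directly readable (white-box) model: the printed tree
    # shows exactly how inputs map to outputs.
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[30, 50], [60, 10], [35, 45], [80, 5]]  # toy [income, debt] rows
    y = ["deny", "approve", "deny", "approve"]
    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(clf, feature_names=["income", "debt"]))
    # e.g. |--- income <= 47.50 -> class: deny
    #      |--- income >  47.50 -> class: approve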

Gary and Torsten on 12/5

The conclusion is that commonsense and natural-language understanding together are key to a robust AI explanation system.

Proofs found by Automated Theorem Provers provide a map from inputs to outputs.

  • But do these make something clear?
  • They may know the “how” but not the “why.”
  • And scripts and rule-based systems became so complex, with many conditional paths, that the trace was hard to follow.

Commonsense reasoning is needed to deal with incomplete and uncertain information in dynamic environments responsively and appropriately.

So we support the thesis that explanation systems need to understand the context of the user, need to be able to communicate with a person, and need to be trusted.

Ontologies are needed to make explicit the structural, strategic, and support knowledge, which enhances the ability to understand and modify the system as well as to support suitable explanations.

Common sense is "perhaps the most significant barrier" between the focus of AI applications today, and the human-like systems we dream of.

Ken on 1/16

Challenges:

  • Automated generation of explanations based on an ontology (a minimal sketch follows this list)
  • Objective evaluation of the quality of explanations
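
To make the first challenge concrete, here is a hypothetical sketch of template-based explanation generation from ontology triples (the class names, relation names, and templates are all made up for illustration):

    # Verbalize ontology triples into a one-sentence explanation.
    triples = [
        ("Penguin", "subClassOf", "Bird"),
        ("Bird", "laysEggs", "Egg"),
    ]
    templates = {
        "subClassOf": "every {s} is a {o}",
        "laysEggs": "a {s} lays {o}s",
    }

    def explain(triples):
        clauses = [templates[p].format(s=s.lower(), o=o.lower())
                   for s, p, o in triples]
        return "Because " + " and ".join(clauses) + "."

    print(explain(triples))
    # Because every penguin is a bird and a bird lays eggs.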

Gary and Torsten on 1/16

Some Frequently Recurring Questions (but not specifically about explanation):

1. How can we leverage the best of various approaches to achieving commonsense?

2. How can we best inject commonsense knowledge into machine learning approaches?

3. How can we bridge formal knowledge representations (formal concepts and relations as axiomatized in logic) with NLP techniques and language disambiguation?

Michael Gruninger on 1/23

Challenge: How can we design and evaluate a software system that represents commonsense knowledge and that can support reasoning (such as deduction and explanation) in everyday tasks?

Discussion Questions

  • What does it mean to understand a set of instructions?
  • Is there a common ontology that we all share for representing and reasoning about the physical world?

Summary

  • The PRAxIS Ontologies are intended to support the representation of commonsense knowledge about the physical world – perception, objects, and processes.

Challenges:

  • How do we evaluate these ontologies?
  • How are these ontologies related to existing upper ontologies?

Ben Grosof on 1/23

Issues in the field of explanation today

Confusion about concepts

  • Especially among non-research industry and media
  • But needs to be addressed first in the research community

Mission creep, i.e., expansivity of task/aspect

  • Especially among researchers.

Ignorance of what’s already practical

Disconnect between users and investors

  • Users often perceive critical benefits/requirements for explanation
  • Investors (both venture and enterprise internal) often fail to perceive value of explanation

Mark Underwood on 2/6

Financial Explanation is harder than it seems

Call Center employees are knowledge workers but may not have the best tools available to them

Tools do not operate in real time, or systems do not interoperate

Explanation suitability is often context- and scenario-dependent

Compliance, retention and profit must be reconciled in decision support software

And explanations must follow suit, or at least help CSRs (customer service representatives) do so

Mike Bennett on 2/6

The ontology itself needs to be explained

  • Theory of Meaning
    • Grounded versus Correspondence theory (isomorphism) in ontologies
  • How this features in explaining the ontology
  • What each element of the ontology represents
  • Understanding each element of the ontology in relation to
    • Other elements in the ontology (grounded)
    • Contextual relations (isomorphic / correspondence)

Challenges in understanding regulations

  • Unstructured Text size
    • Federal Reserve System Final Rule 12 CFR Part 223 - 143 pages of text
    • Summary: 19 pages (comprehensive review)
  • Reference chains
    • Implements the Federal Reserve Act (references it and others)
  • Definitions to identify, delimit and flesh out
  • Complex sentences: Legalese and NL ambiguities
  • Exceptions/exemptions

Challenges in explaining ontologies

  • A difficult research problem
  • Issues:
    • Explaining to business SMEs: Context
    • Explaining to Implementers: Depth

Summary

  • Ontologies help in explanation in many financial scenarios (lending, credit, risk and exposures)
  • Reasoning with ontologies adds a further aspect that itself needs to be explained
  • Ontology concepts still need to be presented to end users in an explainable way
  • Not all things to be explained are first-order (rules, mathematical constructs, etc.)

Augie Turano on 2/13

Conclusion (about Medical AI, not specifically explanations):

AI in medical diagnostics is still a novelty; many clinicians remain to be convinced of its reliability and sensitivity, and of its ability to integrate into clinical practice without undermining clinical expertise.

The possibilities are endless (chatbots; interpretation of cell scans, slides, images, etc.; diagnosis from EHR data, including genotypes and phenotypes), and there is a huge push from venture capital investment firms.

A combination of ML and AI will probably yield the most useful outcomes; accuracy alone is not enough.

Ram on 2/13

Summary

  • Metrology for AI and AI for Metrology
  • Medical diagnosis requires multi-modal reasoning
  • Standards introduced at the right time will lead to innovation
  • Testing algorithms will increase trust in AI
  • Learning programs need to explain reasoning
  • Public-private partnerships may help accelerate progress
  • Challenge problems will improve performance

Clancey on 2/20

Common Interactive Systems Today Cannot Explain Behavior or Advice

  • Especially they cannot answer "why not" questions

Operations are dynamic and interactive in a system of people, technology, and the environment – not linear and placeless.

Explanation involves discourse, follow-up, and mutual learning; it requires a "user model" of the user's interests and knowledge.

"Explanation" is not a module – rather it drives the design process; needs are empirically discovered in prototype experiments.

Explanations might address shortcomings of symbolic AI

Identifying domain representations as "knowledge" obscured system-modeling methods and hence the domain-general scientific accomplishment

Ongoing tuning and extension required "Knowledge Engineers"

Brittle: boundaries not tested; system not reflective

Not integrated with legacy systems and work practice

Niket Tandon on 3/6

Challenges:

  • Robustness
    • Deep Learning systems are easily fooled
  • Commonsense
  • Natural Language
  • Adversaries
  • Training Sparsity
  • Unseen Situations

Underlying assumption: commonsense representation is DL-friendly

For these reasons, commonsense-aware models may:

  • help to create adversarial learning training data
  • generalize to other novel situations and domains
  • compensate for limited training data
  • be amenable to explanation, e.g., with intermediate structures

Challenges

  • ACQUISITION:
    • Implicit
    • Reporting bias
    • Multimodal
    • Contextual
  • REPRESENTATION:
    • Logical
    • Graph
    • Unstructured

Conclusion

  • Commonsense for Deep Learning can help overcome these challenges, making models more robust and more amenable to explanation

Tiddi on 3/13

Why do we need (systems generating) explanations?

  • to learn new knowledge
  • to find meaning (reconciling contradictions in our knowledge)
  • to socially interact (creating a shared meaning with the others)
  • ...and because the GDPR says so: users have a "right to explanation" for any decision made about them

Summary

  • Sharing and reuse are the key to explainable systems
    • Lots of data
    • Lots of theories (e.g. insights from the social/cognitive sciences [10])
  • (My) desiderata:
    • cross-disciplinary discussions
    • formalised common-sense knowledge (Web of entities, Web of actions)
    • links between data, allowing serendipitous knowledge discovery

Sargur Srihari on 4/10

Summary and Conclusion

  • Next wave of AI is probabilistic AI (Intel)
  • Most Probable Explanation (MPE) of PGMs is useful for XAI (see the sketch after this list)
  • Forensics demands explainability
  • Forensic Impression evidence lends itself to a combination of deep learning and probabilistic explanation
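
As a minimal illustration of MPE (the network, variables, and probabilities below are hypothetical): given evidence, MPE finds the most probable joint assignment of the unobserved variables, which can itself serve as an explanation of the evidence.

    # Brute-force MPE over a two-node Bayesian network Rain -> WetGrass.
    p_rain = {True: 0.2, False: 0.8}
    p_wet_given_rain = {(True, True): 0.9, (True, False): 0.2,
                        (False, True): 0.1, (False, False): 0.8}
    # key: (wet, rain)

    def joint(rain, wet):
        return p_rain[rain] * p_wet_given_rain[(wet, rain)]

    # Observe wet grass; the MPE is the value of Rain that maximizes
    # the joint probability with the evidence.
    best_rain = max((True, False), key=lambda r: joint(r, wet=True))
    print(f"MPE: rain={best_rain}, joint={joint(best_rain, wet=True):.3f}")
    # MPE: rain=True, joint=0.180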

Arash Shaban-Nejad on 4/17
