
Session: Medical Explanation
Duration: 1.5 hours
Date/Time: Apr 17 2019 16:00 GMT (9:00am PDT / 12:00pm EDT / 5:00pm BST / 6:00pm CEST)
Convener: Ken Baclawski

Ontology Summit 2019 Medical Explanation Session 3

Agenda

  • Synthesis Discussion
  • Arash Shaban-Nejad, Ph.D., MPH
    • Semantic Analytics for Global Health Surveillance
    • Assistant Professor,
    • UTHSC-ORNL Center for Biomedical Informatics
    • UTHSC Department of Pediatrics
    • Adjunct Faculty, The Bredesen Center for Interdisciplinary Research, The University of Tennessee, Knoxville.
    • Slides
    • Video Recording
  • Planning for the Symposium

Conference Call Information

  • Date: Wednesday, 17-April-2019
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1.5 hours
  • The Video Conference URL is https://zoom.us/j/689971575
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available: https://zoom.us/u/Iuuiouo
  • Chat Room

Attendees

Proceedings

[11:58] RaviSharma: Ken, will Ram or David introduce the speaker?

[11:59] Ken Baclawski: @Ravi: The speaker is probably going to be late today, so we can wait a little while before deciding this.

[12:08] Gary: You can see some examples of track summaries from previous years.

[12:09] Gary: Example of a track summary - see http://ontologforum.org/index.php/Blog:Contexts_in_the_Open_Knowledge_Network

[12:10] RaviSharma: David, please confirm that you will take the lead on the medical explainability track?

[12:15] Gary: Another track synthesis (from 2017) in slide format as a basis for input to the communique: https://s3.amazonaws.com/ontologforum/OntologySummit2017/Synthesis/Synthesis2-TrackA--GaryBergCross_20170426.pdf

[12:18] Mark Underwood: thx Gary

[12:23] Gary: Ram, might someone from DARPA get a panel of 4 people funded on XAI to be on for 2 hours? We have some of our sessions as background.

[12:24] Gary: Would it be too much to have some of our Communique "findings" as one discussion point??

[12:32] Gary: There are some publications coming out of the DARPA XAI work/task groups. We had a talk from Bill Clancey, who is part of some of this. As an example of the summaries, see:

Explanation in Human-AI Systems: A Literature Meta-Review. Synopsis of Key Ideas and Publications and Bibliography for Explainable AI, prepared by Task Area 2. Shane T. Mueller (Michigan Technological University); Robert R. Hoffman, William Clancey, Abigail Emrey (Institute for Human and Machine Cognition); Gary Klein (MacroCognition, LLC). DARPA XAI Program, February 2019.

[12:33] Gary: Report is at: https://arxiv.org/ftp/arxiv/papers/1902/1902.01876.pdf

[12:35] Gary: Ken, can you provide a link to the conf/workshop you mentioned participating in??

[12:36] Ken Baclawski: @Gary: cogsima2019.org

[12:38] Gary: Thanks Ken, I see that it included:

Tutorial Session 1: Conversational Explanations - Explainable AI through Human-Machine Conversation, Dave Braines, IBM Research UK

Abstract: Explainable AI has received significant focus within both the research community and the popular press. The tantalizing potential of artificial intelligence solutions may be undermined if the machine processes which produce these results are black boxes that are unable to offer any insight or explanation into the results, the processing, or the training data on which they are based. The ability to provide explanations can help to build user confidence, rapidly indicate the need for correction or retraining, as well as provide initial steps towards the mitigation of issues such as adversarial attacks or allegations of bias. In this tutorial we will explore the space of Explainable AI, but with a particular focus on the role of the human users within the human-machine hybrid team, and whether a conversational interaction style is useful for obtaining such explanations quickly and easily. The tutorial is broken down into three broad areas which are dealt with sequentially:

Explainable AI: What is it? Why do we need it? Where is the state of the art? Starting with the philosophical definition of explanations and the role they serve in human relationships, this will cover the core topic of explainable AI, looking into different techniques for different kinds of AI systems, different fundamental classifications of explanations (such as transparent, post-hoc and explanation by example) and the different roles that these may play with human users in a human-machine hybrid system. Examples of adversarial attacks and the role of explanations in mitigating these will be given, along with the need to defend against bias (either algorithmic or through training data issues).

Human roles in explanations: Building on the work reported in "Interpretable to Whom?" [Tomsett, R., Braines, D., Harborne, D., Preece, A., & Chakraborty, S. (2018). Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems, ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Jul 2018], this section examines the different roles that a human (or machine) user within the system may be fulfilling, and why the role has an important part to play in determining what kind of explanation may be required. In almost all current AI explanation-related research the role of the user is not a primary consideration, but we assert that the ability to create a meaningful explanation must take this into account. The goals of the users will vary depending on their role, and the explanations that will serve them in achieving these goals will also vary.

Conversational explanations: Conversational machine agents (such as Alexa, Siri and Google) are becoming increasingly commonplace, but the typical interactions that these agents fulfill are fairly simple. Conversational interactions can be especially useful in complex or evolving situations where the ability to design a rich and complete user interface in advance may not be possible. In our ongoing research we are investigating the role of a conversational interaction with AI explanations and will report the findings so far in this section. There will also be a live interactive demo for optional use by the audience during this session.

Intended Audience: The intended audience for this tutorial is researchers in any field where complex algorithms or processes can be used to inform human decision-making. The participants will be taken through a general overview of explanation in both human and machine contexts, and of how the role of the agent will have a significant impact on what kind of explanation might be useful. The workshop will then move into some ongoing research on the role of conversation as a tool to enable explanations in human-machine hybrid systems, along with an interactive demonstration of an early version of this capability.

[12:46] ToddSchneider: Ken, was there any resolution as to the format for the Symposium?

[12:49] RaviSharma: Todd - He said not to worry; he will try to bring it into a common format

[12:50] RaviSharma: Gary, thanks for the post; we will continue this in the synthesis and communique

[12:50] Ken Baclawski: @Todd: The suggestions were to have a panel of around 4 persons, to have a findings/challenges panel, and to have sessions for the tracks.

[12:51] ToddSchneider: Ken, would these occur over one or two days?

[12:51] RaviSharma: Ravi commented on an inter- and intra-track AI panel meeting, the possible types of participants (Fed. contractors, PIs), and 2 panels during the virtual Summit May 6-7.

[12:55] Ken Baclawski: For the synthesis, each track should produce a summary which should be sent to me and I will edit them and produce a preliminary overall synthesis. The summaries should be completed in two weeks. Gary: Commonsense track, Janet: Narrative track, Mark: Financial track, David: Medical track, Ram: XAI track.

[12:55] janet singer: Ken - Where did you say the 'Meaning is in the Machine' piece is?

[12:55] AndreaWesterinen: My apologies but I have another call at 1pm. Have to drop off the zoom ...

[12:59] RaviSharma: Arash's slide 19 clearly shows the query using SPARQL, and thus the semantic and ontology aspects of the malaria query.

[13:00] RaviSharma: Also slide 21
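The slides themselves are not reproduced in the chat. As a rough illustration of the kind of SPARQL query being described, the following Python sketch uses rdflib against a hypothetical malaria-surveillance ontology; every IRI, class, and property name here is an assumption for illustration, not taken from Arash's slides.

    # Illustrative only: the ontology IRI and all class/property names
    # below are hypothetical, not taken from the presentation.
    from rdflib import Graph

    g = Graph()
    g.parse("malaria_surveillance.ttl", format="turtle")  # hypothetical data file

    # Count confirmed malaria cases per region, the kind of semantic
    # query the slides are said to show.
    query = """
    PREFIX surv: <http://example.org/malaria-surveillance#>
    SELECT ?region (COUNT(?case) AS ?cases)
    WHERE {
      ?case a surv:MalariaCase ;
            surv:hasStatus surv:Confirmed ;
            surv:reportedIn ?region .
    }
    GROUP BY ?region
    """

    for row in g.query(query):
        print(row.region, row.cases)

Because the data is typed against ontology classes such as the (hypothetical) surv:MalariaCase, the same query pattern can be reused across datasets that commit to the same vocabulary, which is the semantic aspect Ravi points to.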

[13:01] RaviSharma: Arash - do mapping rules use more than vocabularies and RDBMS?

[13:04] janet singer: sorry I need to leave in a few minutes - will take lead on Narrative summary with findings/challenges

[13:07] BobbinTeegarden: can we get the slides?

[13:07] Gary: Q -what existing ontologies, if any, were used to develop the ontologies used here?

[13:08] RaviSharma: domain agnostic

[13:08] RaviSharma: can be reused

[13:08] RaviSharma: for similar diseases, e.g.

[13:09] Ken Baclawski: @janet: The article "The Meanings in the Machine" by Julie Jenson Bennett was in IEEE Edge, but originally appeared in IEEE Computer vol. 50, no. 9, 2017.

[13:12] RaviSharma: mapping is based on the knowledge engineer's expertise; the engineer creates the mapping based on the ontology
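The mapping rules themselves were not shared in the session. As a rough sketch of what such a hand-authored RDBMS-to-ontology mapping could look like in Python with rdflib (the relational schema, file names, and vocabulary terms are all hypothetical assumptions, not the mappings from the talk):

    # Illustrative only: the table, column, and vocabulary names are
    # hypothetical assumptions, not the mappings from the talk.
    import sqlite3
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    SURV = Namespace("http://example.org/malaria-surveillance#")

    g = Graph()
    g.bind("surv", SURV)

    conn = sqlite3.connect("cases.db")  # hypothetical RDBMS source
    for case_id, status, region in conn.execute(
            "SELECT case_id, status, region FROM malaria_cases"):
        case = SURV[f"case/{case_id}"]
        # One hand-written rule per column: the knowledge engineer decides
        # which ontology class and properties each field maps onto.
        g.add((case, RDF.type, SURV.MalariaCase))
        g.add((case, SURV.hasStatus, SURV[status]))
        g.add((case, SURV.reportedIn, Literal(region, datatype=XSD.string)))

    g.serialize("cases.ttl", format="turtle")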

[13:13] Gary: @Janet see https://pdfs.semanticscholar.org/56ba/835fc05ceedae56dfb12e77b0f30ad7aa999.pdf

[13:14] BobbinTeegarden: Where are they going with it?

[13:15] RaviSharma: thanks Arash

[13:15] TerryLongstreth: JICS - UT Joint ...

[13:18] BobbinTeegarden: Morning after 8am Pacific

[13:18] BobbinTeegarden: No, not Eastern, Pacific please

[13:19] BobbinTeegarden: West coast 8am is fine.

[13:21] BobbinTeegarden: Do Wednesday + Thursday?

[13:22] Gary: Doing Tuesday and Wed. May 7-8.

Resources

Previous Meetings


Next Meetings
