|Session||Overview of Explainable AI|
|Date/Time||Nov 28 2018 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET)|
|Convener||Ram D. Sriram and Ravi Sharma|
Ontology Summit 2019 Overview of Explainable AI
- Derek Doran "Okay but Really... What is Explainable AI? Notions and Conceptualizations of the Field"
- Slide Presentation
- Derek Doran is an Associate Professor of Computer Science and Engineering at Wright State University. His research interests are in developing statistical, deep learning, and topological data analysis (TDA) methods for the study of complex web and cyber-systems. He is currently interested in endowing and augmenting deep learning algorithms and TDA models to make them inherently explainable and comprehensible to users. His research is supported by NSF, AFRL, ORISE, and the Ohio Federal Research Network. Derek is the author of over 75 publications, most of which are in AI and ML venues. He is on the editorial board of Social Network Analysis and Mining and the International Journal of Web Engineering and Technologies. Derek is also a founding chair of the Neural-Symbolic Learning and Reasoning special interest group on Explainable AI (http://daselab.cs.wright.edu/nesy/sig-xai/). More information at: http://derk--.github.io
- The video recording of the session is available.
Conference Call Information
- Date: Wednesday, 28-November-2018
- Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
- ref: World Clock
- Expected Call Duration: 1 hour
- The Video Conference URL is https://zoom.us/j/689971575
- iPhone one-tap:
- US: +16699006833,,689971575# or +16465588665,,689971575#
- Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
- Meeting ID: 689 971 575
- International numbers available: https://zoom.us/u/Iuuiouo
- Chat Room
- Aaron Hammer
- Alex Shkotin
- Andrea Westerinen
- Andrew Dougherty
- Bruce Bray
- David Eddy
- Derek Doran
- Gary Berg-Cross
- Janet Singer
- John Sowa
- Ken Baclawski
- Mark Underwood
- Md Kamruzzaman Sarker
- Mike Bennett
- Mike Riben
- Ram D. Sriram
- Ravi Sharma
- Terry Longstreth
- Todd Schneider
- Tom Tinsley
[12:25] BruceBray: https://distill.pub/2017/feature-visualization/
[12:37] John Sowa: General principle: note that *every* explanation is an answer to a how or why question.
[12:39] BruceBray: Great presentation! Doctor Analogy resonates with me as a healthcare provider and informaticist
[12:39] John Sowa: That's a much simpler definition of XAI: (1) Can it answer how and why questions? and (2) Can it answer a follow-up question?
[12:43] Gary Berg-Cross: Much of what Derek covered relates to what Torsten and I will also talk about next week on the connection to commonsense (knowledge and reasoning).
[12:44] RaviSharma: Does the slide explanation in this example follow decision rules?
[12:52] John Sowa: Comment about question-answering systems: They only answer questions that begin with who, what, when, or where. An XAI system must answer how and why questions.
[12:53] John Sowa: Follow-up questions are *always* in the context of the current topic.
[12:56] Gary Berg-Cross: @John, yes and your point involves the idea that the AI will understand and keep a context for the conversation.
[12:57] Ram D. Sriram: @Gary: Without Context it will be hard to explain
[12:58] TerryLongstreth: @Gary B-C: background knowledge in human situations is negotiable between agents; essentially establishing correspondence points between or among incumbent individual contexts
[13:00] ToddSchneider: So potential explanations are additional competency questions?
[13:01] Gary Berg-Cross: The system will need to understand linguistic terms but reasoning might be more successful with formal terms.
[13:05] BruceBray: @Gary - agree, a great example of why ontology can collaboratively enhance AI applications
[13:09] Gary Berg-Cross: We will need to consider how an AI will learn from interactions with users.
[13:11] RaviSharma: Thanks, Derek, for a wonderful presentation.