
Ontolog Forum

Session: Medical
Duration: 1.5 hours
Date/Time: 27 March 2019, 16:00 GMT
9:00am PDT / 12:00pm EDT
4:00pm GMT / 5:00pm CET
Co-Champions: Ram D. Sriram and David Whitten

Ontology Summit 2019 Medical Explanation Session 2

Agenda

The speakers today are:

  • Ugur Kursuncu and Manas Gaur
    • Explainability of Medical AI through Domain Knowledge
    • Video Recording
    • Wright State University

Conference Call Information

  • Date: Wednesday, 27-March-2019
  • Start Time: 9:00am PDT / 12:00pm EDT / 5:00pm CET / 4:00pm GMT / 1600 UTC
  • Expected Call Duration: 1.5 hours
  • The Video Conference URL is https://zoom.us/j/689971575
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available: https://zoom.us/u/Iuuiouo
  • Chat Room

Attendees

Proceedings

[12:09] Ken Baclawski: The recording will be posted after the session, and an outline of the slide content is posted below.

[12:16] RaviSharma: With the help of the live chat I can now see his slides.

[12:18] ToddSchneider: How should we understand the notion 'concept based information'?

[12:22] RaviSharma: Ugur - what is the improvement in diagnosis from using all AI data (multimodal medical data) vs. only social media data?

[12:23] RaviSharma: or the improvement in probability from using only social-media-based data?

[12:26] ToddSchneider: Is the mapping of the 'personal data' into/with the medical knowledge based on natural language terms or phrases?

[12:26] RaviSharma: how does the openness of the patient on social media affect the result?

[12:28] RaviSharma: medical entity data?

[12:28] TerryLongstreth: At what point do the subjects (patients..?) know that their social media accounts are being scanned/extracted? Did the research control for intentional misdirection on the part of subjects after they learned? Or was the use of social media data covert/hidden from the subjects?

[12:32] ToddSchneider: Is there an assumption of the existence of social media data for a person?

[12:32] RaviSharma: if you limit the social interaction to among similar patients, what do you expect the result to be compared to social media data?

[12:36] ToddSchneider: Perhaps a better question is 'How much personal data is needed' (for the system to be 'useful')?

[12:50] Ken Baclawski: Arash Shaban-Nejad, "Semantic Analytics for Global Health Surveillance", will be speaking on April 17. Slides are available at http://bit.ly/2YvlHLK

[12:53] TerryLongstreth: PHQ-9 - questionnaire

[12:54] RaviSharma: thanks ken

[12:59] ToddSchneider: I have to get to another meeting. Thank you.

[13:02] RaviSharma: please upload speaker slides, thanks

[13:06] RaviSharma: thanks to speakers

Resources

The following is an outline of the slide content, not including the images.

1. Explainability of Medical AI through Domain Knowledge

  • Ugur Kursuncu and Manas Gaur, with Krishnaprasad Thirunarayan and Amit Sheth

  • Kno.e.sis Research Center
    • Department of Computer Science and Engineering
    • Wright State University, Dayton, Ohio USA

2. Why AI systems in Medical Systems

  • Growing need for clinical expertise
  • Need for rapid and accurate analysis of growing healthcare big data, including Patient Generated Health Data and Precision Medicine data
    • Improve productivity, efficiency, workflow, accuracy and speed, both for doctors and for patients
    • Patient empowerment through smart (actionable) health data

3. Why Explainability in Medical AI Systems

  • Trust in AI systems by clinicians and other stakeholders
  • Major healthcare consequences
  • Legal requirements; need to adhere to guidelines/protocols
  • More significant for some specific medical fields, such as mental health

4. Patient-Doctor Relationship

  • Cultural and political reasons for ownership of personal data
  • Privacy concerns for personal data:
    • Two stages for permission to use: model creation, personal health decision-making
    • Incomplete data due to privacy concerns
  • How would AI systems treat patients?
  • For personalized healthcare: researchers, analysts, or doctors need such personal data to provide explainable decisions supported by AI systems

5. How will AI assist humans in the medical domain?

  • Intelligent assistants through conversational AI (chatbots)
  • Multimodal personal data
    • Text, voice, image, sensors, demographics
  • Help reduce physician burnout
  • Legal implications

6. Challenges

  • Common ground and understanding between machines and humans.
    • Forming cognitive associations
  • Big Multimodal data
  • Ultimate goal: Recommending or Acting?

7. Problem: Reasoning over the outcome

  • How were the conclusions arrived at?
  • If some unintuitive/erroneous conclusions were obtained, how can we trace back and reason about them?

8. A Mental Health Use Case

  • Clinician
    • Previsit
    • In-Visit
    • Post-Visit
  • Patient
    • Recommendations on:
      • Cost
      • Location
      • Relevance to disease
  • Big multimodal data for humans!
    • Capacity
    • Performance
    • Efficiency
    • Explainability
  • Explainability is required to show how the data is relevant and significant with respect to the patient's situation

9. Explainability vs Interpretability

  • Explainability is the combination of interpretability and traceability via a knowledge graph

10. A Mental Health Use Case

  • From Patient to Social Media to Clinician to Healthcare

11. A Mental Health Use Case

  • From Patient to Clinician via a Black box AI system

12. A Mental Health Use Case

  • What is the severity level of suicide risk of a patient?
  • ML can be applied to a variety of input data: Text, image, network, sensor, knowledge

13. Explainability with Knowledge

  • Explainability through incorporation of knowledge graphs in machine learning processes
    • Knowledge enhancement before the model is trained
    • Knowledge harnessing after the model is trained
    • Knowledge infusion while the model is trained
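
A minimal illustrative sketch of these three entry points in Python (not from the slides): the toy MENTAL_HEALTH_KG dictionary, concept names, and weights below are hypothetical stand-ins for the Mental Health Ontology and the actual models.

    # Illustrative sketch only: three places a knowledge graph can enter an ML pipeline.
    MENTAL_HEALTH_KG = {
        "can't sleep": "Insomnia",
        "hopeless": "Hopelessness",
        "no way out": "SuicidalIdeation",
    }

    def enhance(text):
        """Knowledge enhancement (before training): expand the raw text with matched concepts."""
        concepts = [c for phrase, c in MENTAL_HEALTH_KG.items() if phrase in text.lower()]
        return text + " " + " ".join(concepts)

    def harness(prediction, text):
        """Knowledge harnessing (after training): attach matched concepts as supporting evidence."""
        evidence = [c for phrase, c in MENTAL_HEALTH_KG.items() if phrase in text.lower()]
        return {"prediction": prediction, "supporting_concepts": evidence}

    def infuse(data_loss, model_concepts, kg_concepts, alpha=0.1):
        """Knowledge infusion (while training): penalize predictions that ignore KG concepts."""
        missed = len(set(kg_concepts) - set(model_concepts))
        return data_loss + alpha * missed

    post = "I feel hopeless and can't sleep, there is no way out"
    print(enhance(post))
    print(harness("high suicide risk", post))
    print(infuse(0.42, ["Hopelessness"], ["Hopelessness", "Insomnia", "SuicidalIdeation"]))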

14. Explanation through knowledge enhancement

15. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)

16. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)

  • Explanation through word features created by the Semantic Encoding and Decoding Optimization technique
    • Semantic encoding of personal data into knowledge space
    • Semantic decoding of knowledge into personal data space
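
A minimal sketch of the encode/decode idea, not the exact Semantic Encoding and Decoding Optimization formulation from the CIKM 2018 paper; the vocabulary, concept list, and weight matrix W are toy values chosen for illustration.

    import numpy as np

    vocab = ["hopeless", "sleep", "exam", "alone"]
    concepts = ["Hopelessness", "Insomnia", "SocialIsolation"]
    # W[i, j]: association of concept i with vocabulary term j (toy weights)
    W = np.array([[0.9, 0.0, 0.0, 0.2],
                  [0.0, 0.8, 0.1, 0.0],
                  [0.1, 0.0, 0.0, 0.9]])

    def encode(term_counts):
        """Semantic encoding: project a bag-of-words vector into the concept (knowledge) space."""
        return W @ term_counts

    def decode(concept_scores, top_k=2):
        """Semantic decoding: recover the terms that best explain the active concepts."""
        term_scores = W.T @ concept_scores
        top = np.argsort(term_scores)[::-1][:top_k]
        return [vocab[i] for i in top]

    x = np.array([2, 1, 0, 1])   # toy post: "hopeless" x2, "sleep" x1, "alone" x1
    z = encode(x)
    print(dict(zip(concepts, z.round(2))))
    print(decode(z))             # word features that serve as the explanation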

17. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)

18. Relevant Research: Explaining the prediction of mental health disorders (CIKM 2018)

19. Relevant Research: Explaining the prediction of severity of suicide risk (WWW 2019)

20. Relevant Research: Explaining the prediction of severity of suicide risk (WWW 2019)

  • Progression of users through severity levels of suicide risk

21. Explanation through Knowledge Harnessing

22. Relevant Research: Explaining the prediction using the wisdom of the crowd (WebInt 2018)

23. Explanation through Knowledge Infusion

24. Explanation through Knowledge Infusion

  • Learning what specific medical knowledge is more important as the information is processed by the model
  • Measuring the importance of such infused knowledge
  • Specific functions and how they can be operationalized for explainability
    • Knowledge-Aware Loss Function (K-LF)
    • Knowledge-Modulation Function (K-MF)
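
One way such functions could be operationalized, shown as a hedged sketch; knowledge_aware_loss, knowledge_modulation, and all weights are hypothetical illustrations, not the speakers' K-LF/K-MF definitions.

    import numpy as np

    def knowledge_aware_loss(pred, target, hidden, knowledge_vec, lam=0.1):
        """K-LF sketch: task loss plus a penalty for drifting from the knowledge representation."""
        task_loss = (pred - target) ** 2                          # toy task loss
        knowledge_penalty = np.linalg.norm(hidden - knowledge_vec) ** 2
        return task_loss + lam * knowledge_penalty

    def knowledge_modulation(hidden, knowledge_vec, gamma=0.5):
        """K-MF sketch: blend a hidden representation with the knowledge representation."""
        return (1 - gamma) * hidden + gamma * knowledge_vec

    hidden = np.array([0.2, 0.7, 0.1])
    kg_vec = np.array([0.0, 1.0, 0.0])     # e.g., an ontology concept embedding
    print(knowledge_aware_loss(pred=0.8, target=1.0, hidden=hidden, knowledge_vec=kg_vec))
    print(knowledge_modulation(hidden, kg_vec))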

25. Evaluation

  • ROC & AUC
    • Assessment of true positive and false positive rates to properly measure feature importance
  • Inverse Probability Estimates
    • Estimate the counterfactual or potential outcome if all patients in the dataset were assigned either label or had close estimated probabilities
  • PRM: Perceived Risk Measure
    • The ratio of the disagreement between predicted and actual outcomes to the summed disagreements between annotators, multiplied by a reduction factor that reduces the penalty if the prediction matches any other annotator
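
A sketch of one possible reading of the PRM description above; the WWW 2019 paper's exact formula may differ, and the function name, reduction factor, and toy labels are illustrative assumptions only.

    def perceived_risk_measure(preds, gold, annotations, reduction=0.5):
        """Ratio of prediction/gold disagreement to inter-annotator disagreement,
        with a reduced penalty when the prediction matches some other annotator."""
        pred_disagreement = 0.0
        for p, g, anns in zip(preds, gold, annotations):
            penalty = abs(p - g)
            if p in anns:                  # prediction agrees with another annotator's label
                penalty *= reduction
            pred_disagreement += penalty
        annotator_disagreement = sum(max(a) - min(a) for a in annotations) or 1.0
        return pred_disagreement / annotator_disagreement

    preds = [3, 1, 2]                      # predicted severity levels
    gold = [2, 1, 0]                       # adjudicated (actual) severity levels
    annotations = [[2, 3], [1, 1], [0, 1]] # per-item labels from multiple annotators
    print(perceived_risk_measure(preds, gold, annotations))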

26. Evaluation

27. Mental Health Ontology

  • Extensively used in this research
  • Built based on DSM-5, the main diagnostic guideline used by psychiatrists
  • Includes: SNOMED-CT, Drug Abuse Ontology and Slang terms

28. Key Takeaways

  • Medical explainability is a necessity to build trust within the medical community
  • Three ways of achieving explainability with knowledge: enhancement, harnessing, and infusion
  • Interpretability and traceability are necessary and sufficient conditions for explainability
  • Infusing knowledge would further enhance the reasoning capabilities

29. Questions?
