
Session: Neuro-Symbolic Learning Ontologies
Duration: 1.5 hours
Date/Time: 07 Apr 2021 16:00 GMT
  9:00am PDT / 12:00pm EDT
  5:00pm BST / 6:00pm CEST
Convener: Ram D. Sriram
Track: C

Ontology Summit 2021 Neuro-Symbolic Learning Ontologies

Ontologies are rich and versatile constructs. They can be extracted, learned, modularized, interrelated, transformed, analyzed, and harmonized, as well as developed in a formal process. This summit will explore the many kinds of ontologies and how they can be manipulated. The goal is to acquaint both current and potential users of ontologies with the ways ontologies can be used to solve problems.

Agenda

  • 12:00 - 12:30 EDT Henry Kautz, National Science Foundation, Toward a Taxonomy of Neuro-Symbolic Systems
    • Abstract: Deep learning gains much of its power to handle ambiguity and to reason about similarity through its use of vector representations. In order to perform input and output, however, deep learning systems must convert between vector and more traditional symbolic representations (a toy sketch of this conversion appears after the agenda). Might symbols be employed not just at the periphery but within a deep learning system itself? We provide an overview of a number of different architectures for neuro-symbolic systems that have appeared in the literature, and discuss their potential advantages and limitations.
  • 12:30 - 13:00 EDT Amit Sheth, University of South Carolina, Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable?
    • Abstract: The recent series of innovations in deep learning have shown enormous potential to impact individuals and society, both positively and negatively. Deep learning models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the systems' interpretability and explainability. Furthermore, deep learning methods have not yet proven able to effectively utilize the relevant domain knowledge and experience critical to human understanding. This aspect is missing from early, data-focused approaches, which has necessitated knowledge-infused learning and other strategies for incorporating computational knowledge. Rapid advances in our ability to create and reuse structured knowledge as knowledge graphs make this task viable. In this talk, we will outline how knowledge, provided as a knowledge graph, is incorporated into deep learning methods using knowledge-infused learning (a toy sketch of one such infusion pattern appears after the agenda). We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches and illustrate it with examples relevant to a few domains.
    • Bio: Prof. Amit Sheth is an educator, researcher, and entrepreneur. He is the founding director of the university-wide AI Institute at the University of South Carolina. He is a Fellow of IEEE, AAAI, AAAS, and ACM. He has (co-)founded four companies, three of them by licensing his university research outcomes, including the first Semantic Search company in 1999, which pioneered technology similar to what is found today in Google Semantic Search and Knowledge Graph. He is particularly proud of the success of his 45 Ph.D. advisees and postdocs in academia, industrial research, and entrepreneurship.
    • Slides
  • 13:00 - 13:30 EDT Discussion
  • Video Recording
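
The first abstract turns on the conversion between symbolic and vector representations at a deep learning system's periphery. The sketch below is a minimal illustration written for this page, not code from the talk: it assumes a toy vocabulary and random embeddings, and shows symbol-to-vector encoding by table lookup and vector-to-symbol decoding by nearest neighbor.

```python
# Toy sketch (not from the talk): symbols enter a network by embedding
# lookup and are read back out by nearest-neighbor decoding. The vocabulary,
# dimensions, and values are all illustrative.
import numpy as np

rng = np.random.default_rng(0)
symbols = ["cat", "dog", "car"]                  # toy symbol vocabulary
embeddings = rng.normal(size=(len(symbols), 8))  # one 8-d vector per symbol

def encode(symbol):
    """Symbol -> vector: the lookup performed at a network's input."""
    return embeddings[symbols.index(symbol)]

def decode(vector):
    """Vector -> symbol: choose the nearest embedding by cosine similarity."""
    sims = (embeddings @ vector) / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(vector))
    return symbols[int(np.argmax(sims))]

v = encode("cat")
v = v + rng.normal(scale=0.1, size=v.shape)  # stand-in for hidden-layer processing
print(decode(v))  # small perturbations still decode to "cat"
```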
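The second abstract describes knowledge-infused learning, in which knowledge-graph content is incorporated into deep learning methods. The sketch below, also written for this page rather than taken from the talk, shows one simple infusion pattern under assumed names: entity vectors from a hypothetical knowledge graph are averaged and concatenated with text features before a downstream classifier would consume them.

```python
# Toy sketch (not from the talk): one knowledge-infusion pattern, where
# knowledge-graph entity vectors are concatenated with text features before
# classification. Entities, vectors, and the encoder are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(1)
kg_embedding = {                       # pretrained KG entity vectors (toy values)
    "aspirin": rng.normal(size=4),
    "headache": rng.normal(size=4),
}

def text_encoder(tokens):
    """Random stand-in for a trained neural text encoder's sentence vector."""
    return rng.normal(size=8)

def infused_features(tokens):
    """Concatenate text features with the mean vector of linked KG entities."""
    linked = [kg_embedding[t] for t in tokens if t in kg_embedding]
    kg_part = np.mean(linked, axis=0) if linked else np.zeros(4)
    return np.concatenate([text_encoder(tokens), kg_part])

x = infused_features(["aspirin", "relieves", "headache"])
print(x.shape)  # (12,): 8 text dimensions plus 4 knowledge dimensions
```

In a real system the encoder would be trained end to end and the entity vectors would come from a knowledge-graph embedding model; concatenation is only one of several points at which knowledge can be infused.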

Conference Call Information

  • Date: Wednesday, 07 Apr 2021
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1.5 hours
  • The Video Conference URL is https://bit.ly/3i1uPRl
    • iPhone one-tap:
      • +16465588656,,83077436914#,,,,,,0#,,822275# US (New York)
      • +13017158592,,83077436914#,,,,,,0#,,822275# US (Germantown)
  • Chat Room: https://bit.ly/39PzQJW
    • If the chat room is not available, then use the Zoom chat room.

Attendees

Discussion

[12:08] RaviSharma: Welcome Dr Kautz

[12:10] RaviSharma: Dr Kautz - hope you will share your slides with us

[12:10] RaviSharma: What is the criterion for shallow or deep learning?

[12:16] RaviSharma: Dr Kautz - concepts are often not well classified, or not complex enough to express the situation, so how is deep learning tied so strongly to concepts?

[12:17] RaviSharma: You are beginning to answer my question.

[12:26] RaviSharma: Is backpropagation simply iterative, or does the learning improve with each pass?

[12:28] RaviSharma: What is n? An integer?

[12:31] RaviSharma: Tensors also apply to multidimensional spaces; are neural structures amenable to multidimensional tensor analysis, as in quantum states or general relativity?

[12:31] RaviSharma: Welcome Amit Sheth

[12:33] RaviSharma: There is a huge difference between being aware and knowing, or between cognitive and conscious states, for humans.

[12:35] RaviSharma: What is the difference between mindfulness and conscious thinking?

[12:38] RaviSharma: Is parallel processing an advantage in such a schema (attention)?

[12:47] RaviSharma: Dr Kautz - maybe the question could be "intellect examining thoughts from the mind" versus "cognitive thinking"?

[12:53] RaviSharma: Is there a way to capture, in NL itself, aggregates of NL that represent NL understanding?

[12:54] RaviSharma: Please describe the transition between knowledge and understanding.

[12:56] Gary Berg-Cross: If we view the various alternatives as modular architectures with components, then we can imagine a neuro-cognitive system capable of using the various components - Neuro [symbolic] Neuro, symbolic Neuro (sub-symbolic), etc. They get evoked by cognitive situations.

[12:58] RaviSharma: In ML, "learning" is limited to the repeatability of learned entities, but in what you are presenting, what is the difference between learning and understanding?

[13:01] Gary Berg-Cross: @Ravi, there is no guarantee that labeled concepts like "understanding" are coherent enough to easily settle the question of difference. Additional concepts are needed to explain each phenomenon. It is easy, however, to posit that not all learning involves conscious understanding.

[13:02] Gary Berg-Cross: Have to leave for another meeting.

[13:04] RaviSharma: Amit - am I correct that you are addressing text (NL) and data-based systems to address knowledge as well as understanding, and how these end objectives are related?

[13:07] RaviSharma: In K-IL you depend on some aspects of knowledge being a node in the KG, but in reality KGs are usually networks of entities and relationships, not the aggregates that knowledge nodes require. So what else do you need to combine with the KG?

[13:11] RaviSharma: Amit - kindly share your slides with Ram / Ken for posting on the forum.

[13:14] RaviSharma: In relation to education-oriented (exam-oriented) learning, is there an easier use case compared to general learning?

[13:15] RaviSharma: feature cluster vs training set

[13:24] Sudha Ram: How can we incorporate the concept of analogies into the neuro-symbolic approach?

[13:40] RaviSharma: Thanks to Drs. Henry Kautz and Amit Sheth for the interesting talks.

Resources

Previous Meetings


Next Meetings
