ConferenceCall 2024 04 17
Ontolog Forum
| Session | Applications |
| --- | --- |
| Duration | 1 hour |
| Date/Time | 17 Apr 2024 16:00 GMT<br/>9:00am PDT/12:00pm EDT<br/>4:00pm GMT/6:00pm CEST |
| Convener | Ram D. Sriram |
Ontology Summit 2024 Applications
Agenda
- Amit Sheth: Forging Trust in Tomorrow’s AI: A Roadmap for Reliable, Explainable, and Safe NeuroSymbolic Systems
  - Video Recording: https://bit.ly/4aLDy5V
  - In Pedro Domingos's influential 2012 paper, the phrase "Data alone is not enough" emphasized a crucial point. I have long shared this belief, as is evident in our Semantic Search engine, commercialized in 2000 and detailed in a patent. We enhanced machine learning classifiers with a comprehensive WorldModel™, known today as a knowledge graph, to improve named entity extraction, relationship extraction, and semantic search. This early project highlighted the synergy between data-driven statistical learning and knowledge-supported symbolic AI methods, an idea I'll explore further in this talk. Despite the remarkable success of transformer-based models in numerous NLP tasks, purely data-driven approaches fall short in tasks requiring Natural Language Understanding (NLU). Understanding language (reasoning over it, generating user-friendly explanations, constraining outputs to prevent unsafe interactions, and enabling decision-centric outcomes) necessitates neurosymbolic pipelines that utilize both knowledge and data.
  - Problem: Inadequacy of LLMs for Reasoning. LLMs like GPT-4, while impressive in their ability to understand and generate human-like text, have limitations in reasoning. They excel at pattern recognition, language processing, and generating coherent text from input. However, their reasoning capabilities are limited by their lack of true understanding or awareness of concepts, contexts, or causal relationships beyond the statistical patterns in the data they were trained on. While they can perform certain types of reasoning tasks, such as simple logical deductions or basic arithmetic, they often struggle with more complex forms of reasoning that require deeper understanding, context awareness, or commonsense knowledge. They may produce responses that appear rational on the surface but lack genuine comprehension or logical consistency. Furthermore, their reasoning does not adapt well to the dynamicity of the environment, i.e., the changing environment in which the AI model operates (e.g., changing data and knowledge).
  - Solution: Neurosymbolic AI combined with Custom and Compact Models. Compact custom language models can be augmented with neurosymbolic methods and external knowledge sources while maintaining a small size; the intent is to support efficient adaptation to changing data and knowledge. By integrating neurosymbolic approaches, these models acquire a structured understanding of data, enhancing interpretability and reliability (e.g., through verifiability audits using reasoning traces). This structured understanding fosters safer and more consistent behavior and facilitates efficient adaptation to evolving information, ensuring agility in handling dynamic environments. Furthermore, incorporating external knowledge sources enriches the model's understanding and adaptability across diverse domains, bolstering its efficiency in tackling varied tasks. The small size of these models enables rapid deployment and contributes to computational efficiency, better management of constraints, and faster re-training, fine-tuning, and inference. (A toy sketch of such a knowledge-constrained pipeline appears after this list.)
  - About the Speaker: Professor Amit Sheth (Web, LinkedIn) is an Educator, Researcher, and Entrepreneur. As the founding director of the university-wide AI Institute at the University of South Carolina, he grew it to nearly 50 AI researchers. He is a fellow of IEEE, AAAI, AAAS, ACM, and AIAA. He has co-founded four companies, including Taalee/Semagix, which pioneered Semantic Search (founded 1999); ezDI, which supported knowledge-infused clinical NLP/NLU; and Cognovi Labs, an emotion AI company. He is proud of the success of the more than 45 Ph.D. advisees and postdocs he has advised and mentored.
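The following is a minimal, illustrative sketch (not from the talk) of the kind of knowledge-constrained pipeline the abstract describes: a stand-in for a compact neural extractor proposes relation triples, and a symbolic layer checks them against a small knowledge-graph schema while recording a reasoning trace for auditability. All entity names, relations, and functions here are hypothetical and chosen only for the example.

```python
# Toy neurosymbolic sketch: a (stubbed) neural extractor proposes triples,
# and a symbolic layer filters them against a knowledge-graph schema,
# keeping a reasoning trace. All data and names below are made up.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

# Hypothetical schema: which relations are allowed between which entity types.
SCHEMA = {
    ("Drug", "treats", "Disease"),
    ("Drug", "interactsWith", "Drug"),
}

# Hypothetical entity-type lookup standing in for a knowledge graph.
ENTITY_TYPES = {
    "aspirin": "Drug",
    "warfarin": "Drug",
    "headache": "Disease",
}

def neural_extractor(text: str) -> List[Triple]:
    """Stub for a compact language model's relation extractor."""
    # A real system would call a fine-tuned model here; we return fixed output,
    # including one deliberately nonsensical triple.
    return [("aspirin", "treats", "headache"),
            ("aspirin", "treats", "warfarin")]

def symbolic_filter(triples: List[Triple]) -> Tuple[List[Triple], List[str]]:
    """Keep only schema-consistent triples and record a reasoning trace."""
    accepted, trace = [], []
    for s, r, o in triples:
        s_type = ENTITY_TYPES.get(s, "Unknown")
        o_type = ENTITY_TYPES.get(o, "Unknown")
        if (s_type, r, o_type) in SCHEMA:
            accepted.append((s, r, o))
            trace.append(f"ACCEPT ({s}, {r}, {o}): {s_type}-{r}-{o_type} allowed")
        else:
            trace.append(f"REJECT ({s}, {r}, {o}): {s_type}-{r}-{o_type} not in schema")
    return accepted, trace

if __name__ == "__main__":
    triples = neural_extractor("Aspirin is commonly used to treat headaches.")
    kept, trace = symbolic_filter(triples)
    print("Accepted:", kept)
    print("\n".join(trace))
```

Running the sketch accepts the schema-consistent triple and rejects the nonsensical one, with the trace serving as a simple stand-in for the verifiability audits mentioned in the abstract.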
Conference Call Information
- Date: Wednesday, 17 April 2024
- Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
- ref: World Clock
- Expected Call Duration: 1 hour
- Video Conference URL: https://bit.ly/48lM0Ik
- Conference ID: 876 3045 3240
- Passcode: 464312
The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09
Participants
Discussion
Resources

- Video Recording: https://bit.ly/4aLDy5V
- YouTube Video: https://youtu.be/YbWyNT7O3Jk
Previous Meetings
| Session | |
| --- | --- |
| ConferenceCall 2024 04 10 | Synthesis |
| ConferenceCall 2024 04 03 | Synthesis |
| ConferenceCall 2024 03 27 | Foundations and Architectures |
| ... further results | |
Next Meetings
| Session | |
| --- | --- |
| ConferenceCall 2024 04 24 | Applications |
| ConferenceCall 2024 05 01 | Risks and Ethics |
| ConferenceCall 2024 05 08 | Risks and Ethics |
| ... further results | |