ConferenceCall 2023 05 31
Ontolog Forum
Session: GPT
Duration: 1.5 hours
Date/Time: 31 May 2023 16:00 GMT (9:00am PDT / 12:00pm EDT / 4:00pm GMT / 5:00pm CST)
Convener: Ken Baclawski
Special Session of the Ontolog Forum on GPT
Evaluating and Reasoning with and about GPT
Large Language Models (LLMs) are derived from large volumes of text stored on the WWW, supplemented by further text acquired as the systems are used. GPT and related systems use the mathematical methods of tensor calculus to process LLMs in a wide range of AI applications. But the LLM methods are purely verbal: their only connection to the world and to human ways of thinking and acting is through the texts that people produce. Although quite useful, LLMs by themselves cannot support important AI methods of perception, action, reasoning, and cognition. For more general, precise, and reliable methods, they must be integrated with a broader range of AI technology.
Agenda
- Part I by John Sowa
- Strengths and limitations of GPT
- Large Language Models (LLMs)
- From perception to cognition
- Slides
- Part II by Arun Majumdar: Permion technology and applications.
- Video Recording
Conference Call Information
- Date: Wednesday, 31 May 2023
- Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
- ref: World Clock
- Expected Call Duration: 1.5 hours
- Video Conference URL
- Conference ID: 837 8041 8377
- Passcode: 323309
- Chat Room
The unabbreviated URLs are:
- Conference URL: https://us02web.zoom.us/j/83780418377?pwd=MlRFTHJQclRBd3RhYVl3aG1rTHJOQT09
- Chat room: http://webconf.soaphub.org/conf/room/ontology_summit_2023
Participants
There were 40 attendees, including
- Anatol Reibold
- Arun Majumdar
- Bobbin Teegarden
- David Eddy
- Gary Berg-Cross
- John Sowa
- Jack Park
- James Overton
- Janet Singer
- Ken Baclawski
- Ludmila Malahov
- Marc-Antoine Parent
- Marcia Zeng
- Mike Bennett
- Phil Jackson
- Ram D. Sriram
- Todd Schneider
Discussion
Please note that the questions listed below were answered during the session and are available on the video recording.
Probabilistic Reasoning
[12:16] ToddSchneider: Does reasoning include probabilistic reasoning?
Linguistic Information
[12:24] ToddSchneider: What sorts of linguistic information?
[12:24] Anatol Reibold: What is your opinion about
- Reasoning Mining
- Argument generation
- Persuasive Communications?
Can it be the next hype?
Knowledge Graphs
[12:25] ToddSchneider: Don't transformers represent a type of knowledge graph?
[12:33] Ram D. Sriram: I presume GPT-like systems somehow generate KGs. Any thoughts on how Permion’s KGs integrate with GPT’s KGs?
Generation Capabilities of GPT
[12:28] Mike Bennett: I wonder if GPT would make a good way to generate human-readable glossaries from conventional OWL ontologies?
Emotions
[12:37] BobbinTeegarden: Why aren't emotions just another aspect of the context... maybe part of the 'why' in who what when where why how of context?
[12:38] Mike Bennett: +1. Anything can be an aspect of context.
[13:18] BobbinTeegarden: Isn't a 'viewpoint' a context (perspective = context?)?
Reasoning
[12:38] Phil Jackson: Some questions for John: Can GPT talk to itself? To what extent does GPT reason in NL or a formal language? Or, how does GPT approximate reasoning by just using neural nets?
[12:55] ToddSchneider: Phil, Are you asking if ChatGPT can ask itself questions (i.e. some sort of self-referentiality capability)?
[13:51] BobbinTeegarden: Is your reasoning then 'holonic'?
[13:53] BobbinTeegarden: Is your reasoning recursive, and if so, how do you unwind?
Intentions
[13:20] Gary Berg-Cross: As a test text it would be good to use something that involves agents with intentions.
Self-Awareness
[13:23] Phil Jackson: Arun, can you ask your system what it knows or thinks about itself?
[13:24] Phil Jackson: Todd, yes that would be one kind of question about ChatGPT.
[13:38] ToddSchneider: Is it important to avoid self-referentiality?
[13:39] BobbinTeegarden: Does your GPT have a sense of 'self'?
Common Logic
[13:29] Marc-Antoine Parent: Can you say more about graph embedding for common logic?
Foundational Ontologies
[13:29] ToddSchneider: Arun, can your system use as 'input' a foundational ontology?
Performance Metrics
[13:44] Gary Berg-Cross: It may be interesting to some to look back on NIST's Performance Metrics for Intelligent Systems workshops, which brought together researchers, developers, and users from disparate academic disciplines and domains of application to share ideas about how to tackle the multifaceted challenges of defining and measuring intelligence in artificial systems. The intelligent systems could take numerous forms: robots, factory or enterprise control systems, smart homes, decision support systems, etc. A community was formed, which evolved over the years. https://apps.dtic.mil/sti/pdfs/ADA515942.pdf
Meta-Reasoning
[13:49] Marc-Antoine Parent: Thank you, Arun. Thinking a lot of unification as meta-reasoning, but nowhere near where you're at.
Availability of Permion
[13:56] BobbinTeegarden: Can we get an interface to play with Permion?
Safety and Reliability of AI
[13:47] Ram D. Sriram: If ChatGPT-like systems are returning inaccurate results, then are people like Sam Altman worried about “stupid” AI triggering disasters (like destroying a nuclear reactor)? I see a lot of folks want research on AI to be stopped temporarily and want to look into regulating AI (just like nuclear reactors are regulated).
[14:00] Gary Berg-Cross: It would seem that the Permion experience would provide some high level requirements for a safe and responsive AI system.
Additional links
- https://www.ted.com/talks/yejin_choi_why_ai_is_incredibly_smart_and_shockingly_stupid/c?language=en
- Excellent read: "Too Big for a Single Mind: How the Greatest Generation of Physicists Uncovered the Quantum World," Tobias Hurter, 2021
- https://youtu.be/6K6F_zsQ264 (Video Recording on YouTube)
- https://permion.ai/permion_osiris.html (Permion Osiris Webpage)