Ontolog Forum

Session: GPT
Duration: 1.5 hours
Date/Time: 31 May 2023 16:00 GMT (9:00am PDT / 12:00pm EDT / 5:00pm BST / 6:00pm CEST)
Convener: Ken Baclawski

Special Session of the Ontolog Forum on GPT

Evaluating and Reasoning with and about GPT

Large Language Models (LLMs) are derived from the large volumes of text stored on the WWW, together with further text acquired as the systems are used. GPT and related systems use the mathematical methods of tensor calculus to apply LLMs in a wide range of AI applications. But LLM methods are purely verbal: their only connection to the world and to human ways of thinking and acting is through the texts that people produce. Although quite useful, LLMs by themselves cannot support important AI methods of perception, action, reasoning, and cognition. For more general, precise, and reliable methods, they must be integrated with a broader range of AI technology.
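
As a rough illustration of the kind of tensor operations these systems rely on, the sketch below implements scaled dot-product attention, the core operation of transformer-based LLMs, in plain NumPy. The array shapes and random values are invented for illustration only; this is a minimal sketch of the general technique, not the code of GPT or of any system discussed in the session.

  import numpy as np

  def scaled_dot_product_attention(Q, K, V):
      # Q, K, V: (sequence_length, d_k) arrays for a single attention head.
      d_k = Q.shape[-1]
      scores = Q @ K.T / np.sqrt(d_k)                   # token-to-token similarity scores
      weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
      weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
      return weights @ V                                # weighted mixture of value vectors

  # Toy example: 4 tokens with 8-dimensional embeddings (values are arbitrary).
  rng = np.random.default_rng(0)
  Q = rng.normal(size=(4, 8))
  K = rng.normal(size=(4, 8))
  V = rng.normal(size=(4, 8))
  print(scaled_dot_product_attention(Q, K, V).shape)    # -> (4, 8)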

Agenda

  1. Strengths and limitations of GPT
  2. Large Language Models (LLMs)
  3. From perception to cognition

Conference Call Information

  • Date: Wednesday, 31 May 2023
  • Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
  • Expected Call Duration: 1.5 hours
  • Video Conference URL
    • Conference ID: 837 8041 8377
    • Passcode: 323309
  • Chat Room

Participants

There were 40 attendees.

Discussion

Please note that the questions listed below were answered during the session; the answers are available in the video recording.

Probabilistic Reasoning

[12:16] ToddSchneider: Does reasoning include probabilistic reasoning?

Linguistic Information

[12:24] ToddSchneider: What sorts of linguistic information?

[12:24] Anatol Reibold: What is your opinion about

  1. Reasoning mining
  2. Argument generation
  3. Persuasive communication?

Could these be the next hype?

Knowledge Graphs

[12:25] ToddSchneider: Don't transformers represent a type of knowledge graph?

[12:33] Ram D. Sriram: I presume GPT-like systems somehow generate KGs. Any thoughts on how Permion’s KGs integrate with GPT’s KGs?

Generation Capabilities of GPT

[12:28] Mike Bennett: I wonder if GPT would be a good way to generate human-readable glossaries from conventional OWL ontologies?

Emotions

[12:37] BobbinTeegarden: Why aren't emotions just another aspect of the context... maybe part of the 'why' in the who, what, when, where, why, how of context?

[12:38] Mike Bennett: +1. Anything can be an aspect of context.

[13:18] BobbinTeegarden: Isn't a 'viewpoint' a context (perspective = context?)?

Reasoning

[12:38] Phil Jackson: Some questions for John: Can GPT talk to itself? To what extent does GPT reason in NL or a formal language? Or, how does GPT approximate reasoning by just using neural nets?

[12:55] ToddSchneider: Phil, are you asking if ChatGPT can ask itself questions (i.e. some sort of self-referentiality capability)?

[13:51] BobbinTeegarden: Is your reasoning then 'holonic'?

[13:53] BobbinTeegarden: Is your reasoning recursive, and if so, how do you unwind?

Intentions

[13:20] Gary Berg-Cross: As a test text, it would be good to use something that involves agents with intentions.

Self-Awareness

[13:23] Phil Jackson: Arun, can you ask your system what it knows or thinks about itself?

[13:24] Phil Jackson: Todd, yes that would be one kind of question about ChatGPT.

[13:38] ToddSchneider: Is it important to avoid self-referentiality?

[13:39] BobbinTeegarden: Does your GPT have a sense of 'self'?

Common Logic

[13:29] Marc-Antoine Parent: Can you say more about graph embedding for common logic?

Foundational Ontologies

[13:29] ToddSchneider: Arun, can your system use as 'input' a foundational ontology?

Performance Metrics

[13:44] Gary Berg-Cross: It may be interesting to some to look back on NIST's Performance Metrics for Intelligent Systems (PerMIS) workshops, which brought together researchers, developers, and users from disparate academic disciplines and domains of application to share ideas about how to tackle the multifaceted challenges of defining and measuring intelligence in artificial systems. The intelligent systems could take numerous forms: robots, factory or enterprise control systems, smart homes, decision support systems, etc. A community was formed, which evolved over the years. https://apps.dtic.mil/sti/pdfs/ADA515942.pdf

Meta-Reasoning

[13:49] Marc-Antoine Parent: Thank you, Arun. I've been thinking a lot about unification as meta-reasoning, but I'm nowhere near where you're at.

Availability of Permion

[13:56] BobbinTeegarden: Can we get an interface to play with Permion?

Safety and Reliability of AI

[13:47] Ram D. Sriram: If ChatGPT-like systems are returning inaccurate results, then are people like Sam Altman worried about “stupid” AI triggering disasters (like destroying a nuclear reactor)? I see a lot of folks who want research on AI to be stopped temporarily and who want to look into regulating AI (just as nuclear reactors are regulated).

[14:00] Gary Berg-Cross: It would seem that the Permion experience would provide some high-level requirements for a safe and responsive AI system.

Additional links

Resources