Ontolog Forum

Session: Broader thoughts
Duration: 1 hour
Date/Time: 8 Nov 2023 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET)
Convener: Andrea Westerinen and Mike Bennett

Ontology Summit 2024 Fall Series: Broader thoughts

Agenda

  • Anatoly Levenchuk, strategist and blogger at Laboratory Log
    • Hybrid Reasoning, the Scope of Knowledge, and What Is Beyond Ontologies?
    • This talk discusses styles of definitions for knowledge graphs (KGs) combined with large language models (LLMs). KG architectures and systems, drawn from the Ontolog Forum's 2020 Communiqué, are reviewed. A framework is proposed for a cognitive architecture that uses both LLMs and KGs for the evolution of knowledge during 4E (embodied, extended, embedded, enacted) cognition. In this framework, ontologies are understood as answers to the question "What is in the world?" and can be found in representations that vary across a spectrum of formality/rigor. An example is given of the use of ontology engineering training in management, where upper-level ontologies are given to students in the form of informal course texts (with the goal of obtaining a fine-tuned LLM within the "neural networks" of the students' brains), coupled with lower-level ontologies that are more formal (such as data schemas for databases and knowledge graphs).
    • Anatoly Levenchuk has worked as a strategy consultant for more than 30 years, helping many government agencies and large companies define their vision and strategy. He is now the science head of Aisystant, which serves as a school for engineering and management. His first machine learning project was in 1977, and his first ontology engineering project was in 1980. He is the author of several textbooks on systems thinking, methodology, systems engineering, systems management, natural and artificial intelligence, and education as "person engineering". His blog "Laboratory Log" (https://ailev.livejournal.com, in Russian) has more than 3,000 subscribers.
    • Slides
  • Arun Majumdar and John Sowa, Permion AI
    • Trustworthy Computation: Diagrammatic Reasoning With and About LLMs
    • Large Language Models (LLMs) were designed for machine translation (MT). Although LLM methods cannot do any reasoning by themselves, they can often find and apply reasoning patterns that occur in the vast resources of the WWW. For common problems, they frequently find a correct solution. For more complex problems, they may construct a solution that is partially correct for some applications, but disastrously wrong or even hallucinogenic for others. Systems developed by Permion use LLMs for what they do best, but combine them with precise and trusted methods of diagrammatic reasoning based on conceptual graphs (CGs). They take advantage of the full range of technology developed by 60+ years of AI, computer science, and computational linguistics. For any application, Permion methods derive an ontology tailored to the policies, rules, and specifications of the project or business. All programs and results they produce are guaranteed to be consistent with that ontology.
    • John's Slides
    • Arun's Slides
  • Video Recording

Conference Call Information

  • Date: Wednesday, 8 November 2023
  • Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
    • Note that Daylight Saving Time has ended in Europe, Canada and the US.
    • ref: World Clock
  • Expected Call Duration: 1 hour
  • Video Conference URL: https://bit.ly/48lM0Ik
    • Conference ID: 876 3045 3240
    • Passcode: 464312

The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09

Participants

A complete list of participants was not captured.

Discussion

  • Ravi Sharma: When I think about models, it is not necessarily a graph or even video, but something like a kind of vision- or mind-based understanding; and yes, I often express it in language, or express the math using language as well.
    • Anatoly Levenchuk: Yes, any form. But if it (the model) has patterns, then you can consider it as text -- semiotics tells us "everything is a text, with patterns as letters". Pattern languages treat behavior as a text (a chain of patterns). Mathematics is about patterns too.
  • Michael DeBellis: I don't understand what the speaker means by quantum/digital memory.
    • Anatoly Levenchuk: It is about exact copying of information; information is about difference, and 1 bit is the result of a measurement. For evolution to proceed, you need genes as exactly copied information about the results of previous evolution steps. If you do not have a digital/quantum/discrete form that permits exact copying, you cannot accumulate knowledge; the error in an analog form would be prohibitive for evolution. (See the analog-versus-digital copying sketch after this discussion.)
  • Ravi Sharma: Regarding the second bullet, can you expand on how cognition and quantum memory are understood?
    • Mike Bennett: That is the 2nd bullet of Slide 3 (for later reference in the Q&A)
    • Andrea Westerinen: This is discussed at the end of the talk, in the Q&A
  • Ravi Sharma: Generative and interactive models: how will these be integrated at different levels?
    • Anatoly Levenchuk: Generative and discriminative (not interactive) models. You can ask either kind of model, but you get different types of answers: possible-world descriptions from generative models and classification labels from discriminative models. With many nuances, sure ))) (A minimal sketch of this distinction appears after this discussion.)
    • Arun Majumdar: @Anatoly Levenchuk I think your distinction regarding "interactive" models is very important!!! See, for example, the interactionist paradigm of computing by Peter Wegner and Dina Goldin.
  • Ravi Sharma: Anatoly, would you call L4 contemplative?
    • Anatoly Levenchuk: No. All models are about activity (enactive cognition; activity is everywhere).
  • Ravi Sharma: Generation and differentiation are described; how do you integrate to get all the facts together, that is, knowledge?
    • Anatoly Levenchuk: I simply reject all guesses that cause problems in inference. All surviving guesses are integrated (i.e., they do not give errors in inference).
  • Ravi Sharma: Last question for Anatoly: how do you introduce value systems into these cognitive architectures, other than through social media, etc.?
  • Douglas Miles: The most interesting and confusing thing about LLMs is that we have no idea how to teach them any new skills, other than to fire a hose of data and text at them and just pray.
  • Amit Sheth: About passing the Turing Test: Counter Turing Test (CT^2): AI-Generated Text Detection is Not as Easy as You May Think -- Introducing AI Detectability Index, https://lnkd.in/gr6cizEZ
    • Arun Majumdar: The distinction between "human" and machine is being attacked on stylometric principles, but this is very hard, as GPT outputs are mixed with human inputs ...
  • Ravi Sharma: What efforts are likely to succeed in making ChatGPT more accurate?
    • Arun Majumdar: @Ravi Sharma Our approach is to use Conceptual Graphs as a formal knowledge graph to create a "surrogate" model that drives the LLM/GPTs.
  • Arun Majumdar: We (John and I) are not focused on the detection problem, because it is not related to our primary focus: knowledge graphs (Conceptual Graphs) as a formalism for symbolic AI combined with generative AI.
    • Arun Majumdar: I can share screenshots that compare what we do to what others do. Basically, zero hallucinations. We can look at a simple hallucination problem after John's talk if needed. Ken has my screenshots.
  • Ravi Sharma: Is math not derived from metaphysics?
    • Arun Majumdar: Math is one of the great abstractions of humankind: a formal but extremely open field for creativity. The key is that creativity in math is often understated. Closed World Models (CWMs), which include LLMs, are not capable of creating anything outside of what they have been given as input (in their ML training, even with reinforcement learning from human feedback).
    • Arun Majumdar: A tuned GPT or LLM is still a closed-world model.
    • Arun Majumdar: Semiotic models differ: the semiotics of Saussure are different from those of Peirce. One is dyadic, and the other is triadic. So the semiotics matter. These distinctions are not involved in GPT/LLM constructs, although they may be approachable in the future.
    • Arun Majumdar: The patterns in mathematics are not probabilistic. They are driven by rational principles. So these are different things.
    • Arun Majumdar: You can use probability, like a million monkeys typing with probabilistic bias, to get some pattern candidates, but there is no deep insight or principle driving those outputs.
  • Ravi Sharma: LLMs, cognitive scientists, and cognitive memory were well explained.
  • Janet Singer: Question for Anatoly re 4E cognition. The shift to 4E seems to be the key to appreciating and taking advantage of human-machine teaming opportunities after the LLM revolution. But do you see 4E as applicable to software AIs separated from humans? Or just to robots (arguably embodied and embedded, if in a limited sense)? Or is it that we are the 4E extensions of the software AIs?
  • Arun Majumdar: @Anatoly Levenchuk Behavior is not truly just a random process. There is "intention" in living beings, such as the need to survive and thrive. Stochastics are a way to study behavior and to mimic some behaviors, but goal-driven, intentional behavior is actually much more complex than probabilities.
  • Arun Majumdar: Permion uses tensor mathematics, but within the logical formalisms, so that predications and symbolic logic have a mapping to and from the tensorial structures. There are several emerging developments, including, for example, the exploration of alternative algebras, such as Clifford or geometric algebras, beyond the conventional real-valued Gibbs algebras used nowadays.
  • Douglas Miles: The earth may not be round enough to have a completely circular cord
    • Douglas Miles: It takes a person who has had experience beyond words to answer that.
    • Andrea Westerinen: This is a reference to a thought experiment discussed by John Sowa: take a circular cord around the earth's equator and add 1 yard to it. This raises the cord approximately 6 inches above the ground, everywhere. (The worked arithmetic appears after this discussion.)
  • Mike Bennett: For those of us not from the US, what is the capital of Alaska? It feels like this is a vital plot point.
    • Andrea Westerinen: This is a reference to a ChatGPT response in John's/Arun's presentation: Juneau is the capital of Alaska, but there have been attempts to move the capital to another city. The underlying issue is whether this information was in the provided training data/corpus (e.g., could you track provenance?).
  • Ravi Sharma: Arun, say more about scaffolding. Is it only E and R, with the details parsed later, but using what rules?
    • Arun Majumdar: Scaffolding is based on purely mathematical methods: the Zipf distribution laws of terminologies, the h-point of the Zipf distribution, and eigen-computation methods that identify the modules of graph structures from the text. Scaffolding provides a language-agnostic foundation. (A small h-point sketch appears after this discussion.)

  • Dale Fitch: Is your "logic" an ontology?
    • Arun Majumdar: Yes, we induce the "ontology" (and humans also update it), but the logic uses the ontology.
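
A minimal sketch (not from the session; all names and parameters are illustrative) of Levenchuk's point above about analog versus digital/discrete copying: an analog value drifts a little with every copy and the error is never removed, while a digital copy is thresholded back to exact symbols each generation, so errors do not accumulate.

```python
import random

random.seed(42)

def copy_analog(value, noise=0.01):
    # Every analog copy adds a small random error that is never removed.
    return value + random.gauss(0, noise)

def copy_digital(bits, noise=0.01, threshold=0.5):
    # A digital copy is perturbed by the same noise, but each symbol is
    # thresholded back to an exact 0/1, so the error is erased each time.
    return [1 if b + random.gauss(0, noise) > threshold else 0 for b in bits]

analog, digital = 1.0, [1, 0, 1, 1, 0, 0, 1, 0]
for _ in range(1000):
    analog = copy_analog(analog)
    digital = copy_digital(digital)

print(analog)   # has drifted away from 1.0 -- analog error accumulates
print(digital)  # still exactly [1, 0, 1, 1, 0, 0, 1, 0]
```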
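As a rough illustration of the generative/discriminative distinction in Levenchuk's answer above (a sketch only, assuming scikit-learn is available; the data and model choices were not part of the session), the code fits one model of each kind to the same data. The generative model learns a per-class description that could generate samples; the discriminative model only returns classification labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Two Gaussian classes in two dimensions.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Generative: models p(x | y) and p(y), i.e., a description of each class.
gen = GaussianNB().fit(X, y)
# Discriminative: models p(y | x) directly, i.e., labels only.
disc = LogisticRegression().fit(X, y)

x_new = np.array([[1.5, 1.5]])
print(gen.predict_proba(x_new), disc.predict_proba(x_new))

# Only the generative model carries a class description one could sample from:
print(gen.theta_[1], gen.var_[1])  # class-1 feature means and variances
```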
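The cord-around-the-equator puzzle referenced in the discussion above reduces to one line of arithmetic: with circumference C = 2πr, adding ΔC of cord raises it by Δr = ΔC / 2π everywhere, independent of the earth's size.

```python
import math

# C = 2*pi*r, so adding dC to the circumference raises the cord by
# dr = dC / (2*pi) -- no matter how large the sphere is.
delta_c = 36.0                      # 1 yard of extra cord, in inches
delta_r = delta_c / (2 * math.pi)
print(f"cord rises {delta_r:.2f} inches")   # ~5.73, i.e., roughly 6 inches
```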
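Permion's actual scaffolding pipeline is not public; the sketch below only illustrates the h-point that Arun mentions, i.e., the rank in a Zipf-style rank-frequency distribution at which a term's frequency falls to its rank (a common pivot between high-frequency function words and content terms). The function name and the whitespace tokenization are hypothetical.

```python
from collections import Counter

def h_point(text):
    """Return the h-point: the first rank r in the rank-frequency
    distribution where the r-th most frequent term's count falls to r."""
    freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
    for rank, freq in enumerate(freqs, start=1):
        if freq <= rank:
            return rank
    return len(freqs)

sample = "the cat sat on the mat and the dog sat on the log"
print(h_point(sample))  # terms above this rank are mostly function words
```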

Resources

Previous Meetings

  • ConferenceCall 2023 11 01: Demos of information extraction via hybrid systems
  • ConferenceCall 2023 10 25: A look across the industry, Part 2
  • ConferenceCall 2023 10 18: A look across the industry, Part 1
  • ... further results

Next Meetings

  • ConferenceCall 2023 11 15: Synthesis
  • ConferenceCall 2024 02 21: Overview
  • ConferenceCall 2024 02 28: Foundations and Architectures
  • ... further results