Ontolog Forum

Session: LLMs, Ontologies and KGs
Duration: 1 hour
Date/Time: 13 Mar 2024 16:00 GMT
9:00am PDT / 12:00pm EDT
4:00pm GMT / 5:00pm CET
Convener: Gary Berg-Cross

Ontology Summit 2024: LLMs, Ontologies and KGs


  • Fabian Neuhaus Ontologies in the era of large language models
    • Abstract: The potential of large language models (LLMs) has captured the imagination of the public and researchers alike. In contrast to previous generations of machine learning models, LLMs are general-purpose tools, which can communicate with humans. In particular, they are able to define terms and answer factual questions based on some internally represented knowledge. Thus, LLMs support functionalities that are closely related to ontologies. In this perspective article, I will discuss the consequences of the advent of LLMs in the field of applied ontology.
    • Bio: Fabian Neuhaus is a senior researcher at the faculty of computer science at the Otto von Guericke University in Magdeburg, Germany. His main research areas are ontology engineering and ontology languages. He contributed to the development of the Basic Formal Ontology, the Relationship Ontology, ISO Common Logic, and the OMG Distributed Ontology Modeling and Specification Language. Fabian Neuhaus is the vice-president of the International Association for Ontology and its Applications (IAOA). Currently, he is working on neurosymbolic methods for ontology extension and methods for integrating ontological knowledge in machine learning.
    • Video Recording

Conference Call Information

  • Date: Wednesday, 13 March 2024
  • Start Time: 9:00am PDT / 12:00pm EDT / 5:00pm CET / 4:00pm GMT / 1600 UTC
    • ref: World Clock
    • Note: The US and Canada are on Daylight Saving Time while Europe has not yet changed.
  • Expected Call Duration: 1 hour
  • Video Conference URL:
    • Conference ID: 876 3045 3240
    • Passcode: 464312

The unabbreviated URL is:



Ontology Commitments

[12:21] Dr Ravi Sharma: Fabian, we learned that an ontology is a collection of ontological commitments. How, then, are relationships differentiated in this model?

[12:36] Dr Ravi Sharma: Stochastic models are a reality of nature; so are all stochastic determinations set against ontology commitments?

Consensus and Interoperability

[12:27] Dr Ravi Sharma: Consensus among domain experts also needs experts in the interdomain space. How is consensus developed then?

[12:32] Alexandru Todor (DIN): One of the primary perspectives in the debate is that Large Language Models (LLMs) are incapable of generating consensus. However, this issue largely depends on the dataset used. If the LLM is trained on scientific papers, which inherently lack a pre-existing consensus, then naturally, it would struggle to create one. On the other hand, if it is trained on standards documents, which are the result of prior consensus-building processes, then the argument that LLMs cannot generate consensus may not hold.

[12:35] Gary Berg-Cross: @Alexandru - I disagree. The LLM may use pre-existing consensus (as an object) but not build consensus by a reasoned process.

[12:37] Mark Underwood: @Ravi In the IEEE WG on metadata standards, we are recommending a consensus-based standardization, similar to the observations by Fabian here. A central WG goal is to facilitate interoperability. See

Hallucinations and Failures of Logic

[12:34] Dr Ravi Sharma: It interpreted the even numbers very badly.

[12:34] Douglas R. Miles: Is this ChatGPT-4 or 3.5?

[12:36] Dan Brickley: FWIW here is a quick test in Claude (3/Opus)

[12:35] Michael DeBellis: One of the best integrations of knowledge graphs and LLMs is Retrieval-Augmented Generation (RAG). RAG eliminates hallucinations by using a set of documents as the only oracle of truth. If a RAG system can't get the answer from the documents, it says it doesn't know. RAG also eliminates black-box reasoning because it doesn't just return an answer; it returns objects (e.g., representing documents) that support the answer. AllegroGraph has some great integration with ChatGPT to implement RAG. I've used it (together with Nivedita Dutta) with the DrMO ontology to build a RAG system that answers clinician questions about dental materials and products.
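The RAG pattern Michael describes can be sketched minimally in Python. Everything below is illustrative: the retrieval step is a crude keyword-overlap scorer standing in for real vector or graph retrieval, and none of the names come from AllegroGraph or any vendor API.

```python
# Toy sketch of the RAG pattern: the document set is the only oracle of truth,
# every answer carries its supporting documents, and when retrieval finds
# nothing relevant the system says it doesn't know.

def score(query, doc):
    """Crude relevance score: count of shared lowercase terms."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def rag_answer(query, documents, min_score=2):
    """Return (answer, supporting_docs); refuse rather than hallucinate."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    support = [d for d in ranked if score(query, d) >= min_score]
    if not support:
        return ("I don't know.", [])
    # A real system would now prompt an LLM with `support` as context;
    # here we simply return the best-matching passage verbatim.
    return (support[0], support)

docs = [
    "Zirconia crowns resist fracture better than porcelain crowns.",
    "Fluoride varnish is applied twice yearly.",
]
answer, sources = rag_answer("Do zirconia crowns resist fracture?", docs)
```

Note how the refusal path and the returned `sources` list are what distinguish this from plain generation: the answer is auditable against the documents that produced it.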

[12:42] Douglas R. Miles: There is no ideal way or pathway to fix a mistake in an LLM.

[12:55] Bart Gajderowicz: Regarding the contradictory cave example, ChatGPT is trained to give you information that you would agree with. It’s very popular with marketers for that reason. Custom LLMs that you can train yourself do a better job of being consistent. This all assumes you give it data that is consistent with your beliefs. Starting with an ontology to contextualise input data goes a long way to remove inconsistencies.

[12:58] Dan Brickley: I updated the gist link with another response from Claude, I asked it

[13:04] Dr Ravi Sharma: Matthew, yes, that is a very useful line of thinking, a chain of prompts. Another comment: hacking the logic, or using inferences in the process to remove less logical answers.

[13:07] Gary Berg-Cross: When the App is incorrect, it is interesting to see if it can explain why it responded as it did.

LLMs Lack an Ontology

[12:36] Michael Gruninger: The claim that there is no ontology within LLMs (which I agree with) seems to support the idea that ontologies still play a role in AGI, since we want explanations and arguments that are logically consistent.

[13:09] Dan Brickley: these things don't have a single unified view, they contain multitudes, and that's their huge strength. You just have to go fishing to see what you can elicit :)

Prompted Ontology Generation

[12:40] Dr Ravi Sharma: Is there a case for prompted Generative AI with LLMs?

[12:46] Dan Brickley: Maybe the rise of LLMs means that text-valued properties in ontology design may lose some stigma? Historically it feels like there's social pressure to model out everything explicitly with types, enumerations, codes etc rather than use natural language, because natural language was notoriously hard to compute with. Maybe that's changing (typing this as my audio/video is a mess; thanks for an interesting talk!)
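Dan's contrast can be made concrete with a toy sketch, in Python rather than an ontology language: the enumerated style must anticipate every admissible value up front, while the text-valued style defers interpretation to whatever can now compute over natural language. All class and field names here are illustrative, not drawn from any published ontology.

```python
# Two modeling styles for the same fact about a building.
from dataclasses import dataclass
from enum import Enum

class RoofShape(Enum):
    """Classic style: model the value space out explicitly."""
    GABLED = "gabled"
    FLAT = "flat"

@dataclass
class HouseEnumerated:
    roof_shape: RoofShape   # machine-checkable, but the enum must list every value

@dataclass
class HouseTextual:
    roof_description: str   # free text: once hard to compute with, now LLM-queryable

a = HouseEnumerated(RoofShape.GABLED)
b = HouseTextual("steeply gabled slate roof with two dormers")
```

The textual variant captures detail ("slate", "two dormers") the enumeration never anticipated, which is exactly the trade-off Dan suggests LLMs may be shifting.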

[12:48] Dan Brickley: Prompts can be millions of tokens now

[12:49] Mike Bennett: If an LLM does the right thing as long as you tell it exactly what to do, then it has more or less the same role as paper.

[12:49] Michael DeBellis: The way to do it is not with prompt engineering but with RAG, which I described above. I also built a BFO assistant that works quite well with just a few BFO documents:

[12:54] ToddSchneider: If LLMs are updated (with additional data/information), there may be no guarantee that the same question/request will get the same response/answer.


[12:29] Dr Ravi Sharma: Asynchronous participation can be automated?

[12:31] ToddSchneider: LLMs can have a bias among different ‘views’ of a domain.

[12:50] Dan Brickley: (not to plug google stuff but...:

[12:59] Matthew Lange (IC-FOODS): Are the chats being captured/posted with recordings/slides? There's a lot in here.

Ken Baclawski: The chats and recordings have been captured and posted. Slides are posted when they become available.

[12:55] Till Mossakowski: I could talk about this next week...

[12:55] Till Mossakowski: I mean about the topic that Fabian just mentioned

[13:05] Douglas R. Miles: Excellent talk. Thank you!

[13:08] Dr Ravi Sharma: Cory Nice to see you.


Previous Meetings

ConferenceCall 2024 03 06 LLMs, Ontologies and KGs
ConferenceCall 2024 02 28 Foundations and Architectures
ConferenceCall 2024 02 21 Overview
... further results

Next Meetings

ConferenceCall 2024 03 20 Foundations and Architectures
ConferenceCall 2024 03 27 Foundations and Architectures
ConferenceCall 2024 04 03 Synthesis
... further results