ConferenceCall 2023 10 11

Ontolog Forum

{| class="wikitable"
! Session
| Setting the stage
|-
! Duration
| 1 hour
|-
! Date/Time
| 11 Oct 2023 16:00 GMT<br />9:00am PDT / 12:00pm EDT<br />4:00pm GMT / 5:00pm CST
|-
! Convener
| [[AndreaWesterinen|Andrea Westerinen]], [[MikeBennett|Mike Bennett]]
|}


= [[OntologySummit2024_FallSeries|Ontology Summit 2024 Fall Series]] {{#show:{{PAGENAME}}|?session}} =


== Agenda ==
'''[[DeborahMcGuinness|Deborah L. McGuinness]]'''<br />
Rensselaer Tetherless World Senior Constellation Chair<br />
Professor of Computer Science, Cognitive Science, and Industrial and Systems Engineering<br />

'''Title''': The Evolving Landscape: Generative AI, Ontologies, and Knowledge Graphs


'''Abstract''': AI is in the news with astonishing regularity and the variety of announcements is often dizzying. In this talk, we will explore some opportunities (as well as threats) from the world of generative AI with respect to semantic technologies. We will explore some questions worth pondering as we plan our ontology and knowledge graph directions and hopefully leave with some mutually beneficial synergies between large language models and classical Ontology Summit topics.
[https://bit.ly/3LYsfgh Slides]
[https://bit.ly/48PYojR Video Recording]


== Conference Call Information ==
* Date: Wednesday, 11 October 2023
* Start Time: 9:00am PDT / 12:00pm EDT / 6:00pm CEST / 5:00pm BST / 1600 UTC
* Expected Call Duration: 1 hour
* Video Conference URL: https://bit.ly/48lM0Ik
** Conference ID: 876 3045 3240
** Passcode: 464312

The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09


== Participants ==
* [[AndreaWesterinen|Andrea Westerinen]]
* Ayya Niyyanika Bhikkhuni
* [[BartGajderowicz|Bart Gajderowicz]]
* [[DeborahMcGuinness|Deborah McGuinness]]
* [[DouglasMiles|Douglas Miles]]
* [[GaryBergCross|Gary Berg-Cross]]
* John O'Gorman
* [[JohnSowa|John Sowa]]
* [[KenBaclawski|Ken Baclawski]]
* [[LeiaDickerson|Leia Dickerson]]
* [[MarkUnderwood|Mark Underwood (IS Innovation)]]
* [[MikeBennett|Mike Bennett]]
* Phil Jackson
* [[RaviSharma|Ravi Sharma]]
* [[SusanneVejdemo|Sus Vejdemo]]
* [[ToddSchneider|Todd Schneider]]


== Discussion ==
* [[RaviSharma|Ravi Sharma]] : Deborah, what if the training set is not representative? How do we know whether the errors are due to the lack of a good training set or to bad algorithms, such as a Bayes implementation?
* [[RaviSharma|Ravi Sharma]] : Would like to know how much richness beyond ontologies can be added to KGs?
** [[ToddSchneider|Todd Schneider]] : What about the converse: How can ontologies be leveraged to improve LLMs?
** [[ToddSchneider|Todd Schneider]] : Ontologies and knowledge graphs provide explicitness, in contrast to LLMs.
* [[BartGajderowicz|Bart Gajderowicz]] : Following this year’s FOIS in Canada, I submitted a proposal for a FOIS Working Group specifically for an ontology benchmark suite, in case anyone is interested in joining me in the effort.
** [[ToddSchneider|Todd Schneider]] : Bart, ‘ontology benchmark suite’??
** [[BartGajderowicz|Bart Gajderowicz]] : Yes, it's exactly what it sounds like. At FOIS there were a few presentations that used datasets and a few that found issues with published ontologies. It came from these works.
* [[RaviSharma|Ravi Sharma]] : Are LLMs aware of graphic ways of understanding, or even able to create image-like information from patterns?
* [[MikeBennett|Mike Bennett]] : I would expect a wine ontology to have 'Relative' concepts like terroir (land in wine context); vintage (time in wine context); varietal (grape in wine context) etc. i.e. contextually relevant concepts. The reason I mention it: how does any LLM recognize or relate to contextually relative concepts?
* [[RaviSharma|Ravi Sharma]] : Seems like Agile development type template
* [[RaviSharma|Ravi Sharma]] : In wine ontology, what would be impact of adding one or two more variables?
* [[SusanneVejdemo|Sus Vejdemo]] : I'm creating an ontology for the linguistics of temperature terms (how different language communities divide up the sensation of hot vs cold in different ways). I used chatGPT to help me with OWL syntax - but if I hadn't already had a very firm grasp of what I wanted to know, it would have led me pretty wrong.
* [[DouglasMiles|Douglas Miles]] : LLMs turn out to be the best version of a search engine at finding obscure data
** [[DouglasMiles|Douglas Miles]] : (not calling *this* obscure, but it is ideal for finding exactly the works we are looking for sometimes)
* [[SusanneVejdemo|Sus Vejdemo]] : Beware - chatGPT can sometimes delete classes without informing you if you run bigger ontologies through it.
** [[RaviSharma|Ravi Sharma]] : Sus, that is why context and prior knowledge are good filters?
* [[DouglasMiles|Douglas Miles]] : often i will ask "What information was just left out"
** [[DouglasMiles|Douglas Miles]] : and the LLM usually (with enough harassment) will supply me with what i wanted
* [[MikeBennett|Mike Bennett]] : I like ChatBS as a term. LLM is technically bullshit.
* Phil Jackson : Can LLM's perform 'self-talk' yet, e.g. to emulate artificial consciousness?
** [[BartGajderowicz|Bart Gajderowicz]] : There's a recent paper that evaluates whether the LLM knows it's hallucinating, which may be close to its internal "thinking" as it explores different text it wants to generate: Azaria, A., & Mitchell, T. (2023). The Internal State of an LLM Knows When It's Lying. http://arxiv.org/abs/2304.13734
** Phil Jackson : thanks for this reference
** [[SusanneVejdemo|Sus Vejdemo]] : We'd want it to have an "evidentiality" signal, like some natural languages do!
* [[DouglasMiles|Douglas Miles]] : Chat GPT3.5 or 4 there?
** [[DouglasMiles|Douglas Miles]] : hah forecasting
* [[MikeBennett|Mike Bennett]] : Mansplaining as a service
* [[JohnSowa|John Sowa]] : Summary of all these messages:  LLMs are flaky. If you're lucky, they're great. If not, you have no idea what went wrong.  That is not acceptable for any mission-critical applications.
* [[RaviSharma|Ravi Sharma]] : What would have the outcome been if you had experts key inputs to create a new exposures health ontology?
* [[GaryBergCross|Gary Berg-Cross]] : It will be good to look at these leverage and pain points again in a year
* John O'Gorman : @Semantium is using a faceted, foundational ontology
* [[DouglasMiles|Douglas Miles]] : GPT for Python code, it's kinda hit-or-miss.. mostly garbage. But with Prolog and Lisp, it's useful!.. Maybe it's because there's less bad code out there (only one place: the CMU AI archive).. There will be a big market for LLM curators
* [[ToddSchneider|Todd Schneider]] : We don’t even know what ‘context’ is.
** [[MikeBennett|Mike Bennett]] : Context is the nexus of time, place, role, event etc. i.e. a bunch of concepts in the ontology (or instances of these)
** John O'Gorman : @Todd Schneider - Context is the way language reduces ambiguity.
*** [[ToddSchneider|Todd Schneider]] : Yes, but that’s an application of [a] ‘context’.
** [[AndreaWesterinen|Andrea Westerinen]] : I have tried to provide ‘context’ in my prompts. For example, summarize this news article assuming it was written by a right-wing publication.
* [[DouglasMiles|Douglas Miles]] : You have to harass them over and over John .. such as "Give me three completely different translations to CLIF:
* [[AndreaWesterinen|Andrea Westerinen]] : My “sandbox” is translating NL to an ontology.  Will be talking about it on Nov 1.
** [[MarkUnderwood|Mark Underwood (IS Innovation)]] : Will try to make your talk.  May have possible uses for specialized, DSL-type ontologies that are constructed from emerging, fresh domains; e.g., AWS Lambda
* [[GaryBergCross|Gary Berg-Cross]] : Just as chatGPTs can do some programming they can express concepts in formal languages and thus help with Ontology population.
* [[DouglasMiles|Douglas Miles]] : "Would that translation you just gave translate back to the same English?"
* [[BartGajderowicz|Bart Gajderowicz]] : John Sowa: Speaking with those in classic literature, translation models do a poor job exactly because of the nuances in classics where meaning is lost due to the poetic styles
* [[DouglasMiles|Douglas Miles]] : I am in general agreement.. and admit there is no clear way to resolve LLM issues
* Ayya Niyyanika Bhikkhuni : The metrics are a concern even more when the “research” being done is about what something from an ancient language means and how to apply what it “means” in today’s world. So the pain point of competency metrics is not just the actual textual validity of the translation but also of “context.”
* [[SusanneVejdemo|Sus Vejdemo]] : What's a good way to get on the mailing list? I found this seminar from a Taxonomy LinkedIn group post. (I'm a linguist, semanticist)
** [[LeiaDickerson|Leia Dickerson]] : https://ontologforum.com/index.php/WikiHomePage
** [[LeiaDickerson|Leia Dickerson]] : Sign up and more info is here.  Also, if you are not on it, I suggest adding yourself to the KGC Slack channel. https://www.knowledgegraph.tech/community/
** [[AndreaWesterinen|Andrea Westerinen]] : Send email to membership@ontologforum.org
* [[GaryBergCross|Gary Berg-Cross]] : I've used chats for competency Qs but not use cases, which seems like an interesting possibility as part of KE sessions.
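
The remarks above about drafting OWL with ChatGPT and asking "would that translation translate back?" suggest a simple check that can be automated. The following sketch is purely illustrative and is not from the session: the rdflib-based approach, the namespace, and all class and label names are assumptions made for the example.

<syntaxhighlight lang="python">
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespace and class names, invented only for illustration.
TEMP = Namespace("http://example.org/temperature#")

g = Graph()
g.bind("temp", TEMP)
g.bind("owl", OWL)

# A tiny class hierarchy for temperature terms (hot-range vs. cold-range).
g.add((TEMP.TemperatureTerm, RDF.type, OWL.Class))
for cls, label in [(TEMP.HotTerm, "hot-range term"), (TEMP.ColdTerm, "cold-range term")]:
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.subClassOf, TEMP.TemperatureTerm))
    g.add((cls, RDFS.label, Literal(label, lang="en")))

# Round-trip check in the spirit of "would that translation translate back?":
# serialize to Turtle, re-parse, and confirm no triples were silently dropped,
# the failure mode warned about above when running larger ontologies through ChatGPT.
turtle = g.serialize(format="turtle")
g2 = Graph().parse(data=turtle, format="turtle")
assert set(g) == set(g2), "triples were lost in the round trip"
print(turtle)
</syntaxhighlight>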


== Resources ==
* [https://bit.ly/48PYojR Video Recording]
* [https://youtu.be/BHqEjqWWVaQ YouTube Video]


== Previous Meetings ==
* [[ConferenceCall 2023 10 04]] (Overview)

== Next Meetings ==
* [[ConferenceCall 2023 10 18]] (A look across the industry, Part 1)
* [[ConferenceCall 2023 10 25]] (A look across the industry, Part 2)
* [[ConferenceCall 2023 11 01]] (Demos of information extraction via hybrid systems)
* ... further results


[[Category:OntologySummit2024_FallSeries]]
[[Category:Icom_conf_Conference]]
[[Category:Occurrence| ]]
