ConferenceCall 2024 03 06
Ontolog Forum
Session | LLMs, Ontologies and KGs |
---|---|
Duration | 1 hour |
Date/Time | 6 Mar 2024 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET) |
Convener | Gary Berg-Cross |
Ontology Summit 2024 LLMs, Ontologies and KGs
Agenda
- 12:00 - 12:07 Gary Berg-Cross Summary of some issues from Session 1 and some foundational issues for LLMs vs Cognitive/Symbolic systems
- 12:07 - 12:37 Hamed Babaei Giglou Exploring LLMs for Ontology: Ontology Learning and Ontology Matching
Abstract: The Large Language Models for Ontology Learning (LLMs4OL) framework applies Large Language Models (LLMs) to Ontology Learning (OL). LLMs have driven significant advances in natural language processing and have demonstrated an ability to capture complex language patterns across knowledge domains. Within the LLMs4OL framework, we investigate the question: "Can LLMs effectively apply their language-pattern-capturing capability to OL, which involves automatically extracting and structuring knowledge from natural language text?"
To explore LLMs for OL, the first challenge is formulating the work, i.e. deciding where and how an LLM can suitably be applied to a given task. We conduct a comprehensive analysis using zero-shot prompting to evaluate nine different LLM model families on three main OL tasks (this is where our task formulation comes in): term typing, taxonomy discovery, and extraction of non-taxonomic relations. The evaluations additionally cover diverse genres of ontological knowledge, including lexicosemantic knowledge in WordNet, geographical knowledge in GeoNames, and medical knowledge in UMLS. (A minimal illustrative sketch of such a zero-shot prompt follows the agenda below.)
The empirical results show that foundation LLMs, on their own, are not sufficiently suitable for ontology construction, which demands a high degree of reasoning skill and domain expertise. Nevertheless, when effectively fine-tuned they may well serve as suitable assistants for ontology construction, alleviating the knowledge acquisition bottleneck.
The presentation will focus mainly on the LLM variants that were experimented with and on the analysis of the results for the OL tasks. The whole project is released to the community with detailed documentation at https://github.com/HamedBabaei/LLMs4OL
Bio: Hamed Babaei Giglou is a researcher at TIB and is currently involved in the Neural-Symbolic SCholarly InnovatioN EXTraction (SciNEXT) project, in collaboration with the Open Research Knowledge Graph (ORKG) project at TIB -- German National Library of Science and Technology. Hamed holds bachelor's and master's degrees in computer science and worked as an NLP researcher in industry for more than three years before joining TIB as a PhD candidate. He is currently pursuing a PhD in "Computer Science: NLP and Semantic Web Technologies" under the supervision of Dr. Jennifer D'Souza and Prof. Dr. Sören Auer. His current research focuses on employing LLMs in various ontology tasks, such as Ontology Learning and Ontology Matching.
- 12:37 - 13:00 Discussion
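As a rough illustration of the zero-shot prompting setup described in the abstract above, the sketch below poses a term-typing question to an arbitrary LLM backend and scores the answers against gold types. This is not code from the LLMs4OL repository: the prompt template, the `LLMBackend` stand-in, the toy data, and the lenient scoring rule are all assumptions made for this example.

```python
from typing import Callable

# Hypothetical stand-in for any chat/completion backend (an API client, a local
# model, ...): it takes a prompt string and returns the model's text completion.
LLMBackend = Callable[[str], str]

# Illustrative zero-shot prompt for the term-typing task (assumed wording).
ZERO_SHOT_TEMPLATE = (
    "Answer with a single word or short phrase.\n"
    "Question: In WordNet, the term '{term}' is a kind of what?\n"
    "Answer:"
)

def term_typing_accuracy(examples: list[tuple[str, str]], llm: LLMBackend) -> float:
    """Fraction of (term, gold_type) pairs whose completion mentions the gold type
    (a deliberately lenient string match, for illustration only)."""
    hits = 0
    for term, gold_type in examples:
        prediction = llm(ZERO_SHOT_TEMPLATE.format(term=term)).strip().lower()
        if gold_type.lower() in prediction:
            hits += 1
    return hits / len(examples)

if __name__ == "__main__":
    # Toy gold data and a dummy backend, just to show the flow end to end.
    toy_examples = [("poodle", "dog"), ("oak", "tree")]
    dummy_llm: LLMBackend = lambda prompt: "a kind of dog"  # always answers "dog"
    print(f"zero-shot term-typing accuracy = {term_typing_accuracy(toy_examples, dummy_llm):.2f}")
```

The LLMs4OL study itself evaluates nine LLM families on term typing, taxonomy discovery, and non-taxonomic relation extraction over WordNet, GeoNames, and UMLS; the sketch above only mirrors the first of these tasks, with toy data in place of those resources.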
Conference Call Information
- Date: Wednesday, 6 March 2024
- Start Time: 9:00am PST / 12:00pm EST / 6:00pm CET / 5:00pm GMT / 1700 UTC
- ref: World Clock
- Expected Call Duration: 1 hour
- Video Conference URL: https://bit.ly/48lM0Ik
- Conference ID: 876 3045 3240
- Passcode: 464312
The unabbreviated URL is: https://us02web.zoom.us/j/87630453240?pwd=YVYvZHRpelVqSkM5QlJ4aGJrbmZzQT09
Participants
Discussion
Resources
Previous Meetings
Session | Topic |
---|---|
ConferenceCall 2023 10 04 | Overview |
ConferenceCall 2023 10 11 | Setting the stage |
ConferenceCall 2023 10 18 | A look across the industry, Part 1 |
... further results |

Next Meetings