
Ontolog Forum


Session: Track 1
Duration: 1 hour
Date/Time: 29 Jan 2025 17:00 GMT (9:00am PST / 12:00pm EST / 5:00pm GMT / 6:00pm CET)
Convener: Gary Berg-Cross

Ontology Summit 2025 Track 1

Agenda

Giancarlo Guizzardi

  • Title: "Explanation, Semantics, and Ontology"
  • Abstract: It is well known by now that, of the so-called 4Vs of Big Data (Velocity, Volume, Variety, and Veracity), the bulk of the effort and challenge lies in the latter two: (1) data comes in a large variety of representations (from both a syntactic and a semantic point of view); (2) data can only be useful if it is truthful to the part of reality it is supposed to represent. Moreover, the most relevant questions we need answered in science, government, and organizations can only be answered if we put together data residing in different data silos, produced concurrently by different agents at different points in time and space. Thus, data is only useful in practice if it can (semantically) interoperate with other data. Every data schema represents a certain conceptualization, i.e., it makes an ontological commitment to a certain worldview. Issue (2) is about understanding the relation between data schemas and their underlying conceptualizations. Issue (1) is about safely connecting these different conceptualizations as represented in different schemas. To address both issues, we need to be able to properly explain these data schemas, i.e., to reveal the real-world semantics (or ontological commitments) behind them. In this talk, I discuss the strong relation between the notions of real-world semantics, ontology, and explanation. I will present a notion of explanation termed Ontological Unpacking, which aims at explaining symbolic representation artifacts (conceptual models connected to data schemas, knowledge graphs, logical specifications). I show that artifacts produced by Ontological Unpacking differ from their traditional counterparts not only in their expressivity but also in their nature: while the latter are typically merely descriptive, the former are explanatory. Moreover, I show that it is exactly this explanatory nature that is required for semantic interoperability.
I will also discuss the relation between Ontological Unpacking and other forms of explanation in philosophy and science, as well as in Artificial Intelligence. I will argue that the current trend in XAI (Explainable AI), in which “to explain is to produce a symbolic artifact” (e.g., a decision tree or a counterfactual description), is an incomplete project resting on a false assumption: these artifacts are not “inherently interpretable”, and they should be taken as the beginning of the road to explanation, not the end. This talk is based on the following paper: https://www.sciencedirect.com/science/article/pii/S0169023X24000491
  • Slides
  • Video Recording

Conference Call Information

Participants

Discussion

Resources

Previous Meetings

 Session
ConferenceCall 2025 01 22 (Keynote)
ConferenceCall 2025 01 15 (Overview)

Next Meetings

 Session
ConferenceCall 2025 02 05 (Track 1)
ConferenceCall 2025 02 12 (Track 1)
ConferenceCall 2025 02 19 (Track 1)
... further results