
Session: Sargur Srihari
Duration: 1 hour
Date/Time: 11 March 2020 16:00 GMT (9:00am PDT / 12:00pm EDT / 4:00pm GMT / 5:00pm CET)
Convener: KenBaclawski
Track: How

Ontology Summit 2020 Sargur Srihari

Knowledge graphs (KGs), closely related to ontologies and semantic networks, have emerged in recent years as an important semantic technology and research area. As structured representations of semantic knowledge stored in a graph, KGs are lightweight versions of semantic networks that scale to massive datasets such as the entire World Wide Web. Industry has devoted a great deal of effort to the development of knowledge graphs, which are now critical to the functioning of intelligent virtual assistants such as Siri and Alexa. Research communities where KGs are relevant include Ontologies, Big Data, Linked Data, Open Knowledge Network, Artificial Intelligence, Deep Learning, and many others.

Agenda

A knowledge graph is based on Subject-Predicate-Object (SPO) triples. The SPO triples are combined to form a graph in which nodes represent entities (E), drawn from the subjects and objects, while directed edges represent relationships (R).
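
As a minimal illustration, the Python sketch below assembles a small graph from SPO triples; the entity and relation names are illustrative assumptions, not examples from the talk.

  # Build a directed, edge-labeled graph from SPO triples.
  from collections import defaultdict

  triples = [
      ("Siri", "developedBy", "Apple"),
      ("Alexa", "developedBy", "Amazon"),
      ("Apple", "headquarteredIn", "Cupertino"),
  ]

  graph = defaultdict(list)          # subject -> [(predicate, object), ...]
  entities, relations = set(), set()
  for s, p, o in triples:
      graph[s].append((p, o))        # one directed, labeled edge per triple
      entities.update((s, o))        # E: subjects and objects
      relations.add(p)               # R: predicates

  print(sorted(entities))            # the entity set E
  print(sorted(relations))           # the relation set R
  print(graph["Siri"])               # outgoing edges of one node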

Knowledge Graphs typically adhere to some deterministic rules, such as type constraints and transitivity. They also include “softer” statistical patterns or regularities, which are not universally true but nevertheless have useful predictive power.
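
A deterministic rule such as transitivity can be applied directly to the triples. The sketch below uses a hypothetical partOf relation and repeatedly applies (a, partOf, b) and (b, partOf, c) implies (a, partOf, c) until no new triples appear.

  # Transitive closure of one relation over a set of SPO triples.
  def transitive_closure(triples, rel):
      closed = set(triples)
      changed = True
      while changed:
          changed = False
          pairs = {(s, o) for s, p, o in closed if p == rel}
          for a, b in pairs:
              for c, d in pairs:
                  if b == c and (a, rel, d) not in closed:
                      closed.add((a, rel, d))   # apply the rule
                      changed = True
      return closed

  triples = {("wheel", "partOf", "car"), ("car", "partOf", "fleet")}
  print(transitive_closure(triples, "partOf"))
  # adds ("wheel", "partOf", "fleet")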

Probabilistic knowledge graphs incorporate statistical models for relational data. The set of observed triples is assumed to be incomplete and noisy. The joint distribution is modeled from a subset D ⊆ E × R × E × {0,1} of observed, labeled triples.
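
In code, D can be represented as labeled quadruples: known-true triples labeled 1, plus corrupted triples labeled 0. The negative-sampling scheme below is a common convention, assumed here for illustration rather than taken from the talk.

  import random

  entities = ["Siri", "Alexa", "Apple", "Amazon"]
  positives = [("Siri", "developedBy", "Apple"),
               ("Alexa", "developedBy", "Amazon")]

  random.seed(0)
  D = [(s, p, o, 1) for s, p, o in positives]     # observed true triples
  for s, p, o in positives:                       # corrupt the object
      o_neg = random.choice([e for e in entities if e != o])
      D.append((s, p, o_neg, 0))                  # assumed-false triple

  for quad in D:
      print(quad)       # elements of D ⊆ E × R × E × {0,1}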

There are two main types of models: latent feature models and Markov random fields (MRFs). Latent feature models can be trained using deep learning. MRFs can be derived from Markov Logic Representations of facts in a database. The talk will describe learning and inference using probabilistic knowledge graphs.
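
To make the latent feature idea concrete, here is a minimal numpy sketch of a RESCAL-style bilinear model: each entity gets a latent vector, each relation a matrix, and a triple (s, p, o) is scored as a_s^T R_p a_o, squashed to a probability. The dimensions, random initialization, and sigmoid link are illustrative assumptions, not the talk's exact formulation.

  import numpy as np

  rng = np.random.default_rng(0)
  n_entities, n_relations, d = 4, 2, 3

  A = rng.normal(size=(n_entities, d))       # latent entity features
  R = rng.normal(size=(n_relations, d, d))   # one d x d matrix per relation

  def score(s, p, o):
      return A[s] @ R[p] @ A[o]              # bilinear score a_s^T R_p a_o

  def prob(s, p, o):
      return 1.0 / (1.0 + np.exp(-score(s, p, o)))   # sigmoid: P(triple true)

  print(prob(0, 1, 2))    # e.g., P(entity 0, relation 1, entity 2)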

Conference Call Information

  • Date: Wednesday, 11 March 2020
  • Start Time: 9:00am PDT / 12:00pm EDT / 5:00pm CET / 4:00pm GMT / 1600 UTC
    • Note that Canada and the US are on Summer Time while Europe is on Standard Time, so the session is one hour earlier than usual in Europe. It is also one hour earlier in many other countries.
    • For the time in your area please click here: World Clock
  • Expected Call Duration: 1 hour
  • The Video Conference URL is https://zoom.us/j/689971575
    • iPhone one-tap:
      • US: +16699006833,,689971575# or +16465588665,,689971575#
    • Telephone:
      • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 558 8665
      • Meeting ID: 689 971 575
      • International numbers available: https://zoom.us/u/Iuuiouo
  • Chat Room: http://bit.ly/2LkAbKj
    • If the chat room is not available, then use the Zoom chat room.

Attendees

Discussion

[12:09] RaviSharma: Sargur, please share your slides on the meeting page for download

[12:10] David Eddy: So... we still 100% ignore UN-NATURAL LANGUAGE? Un-Natural language simply doesn't exist?

[12:12] RaviSharma: Srihari - how many contexts do you generally need to parse to get the correct embedding in NLP?

[12:12] KenBaclawski: @RaviSharma: Hari will be sending me his slides after the talk and I will post them at that time.

[12:18] RaviSharma: Srihari - what is the size of the Google [knowledge graph] now?

[12:22] RaviSharma: Intuitively, how are tensorial elements reduced to vectors in a query operation?

[12:27] RaviSharma: Srihari - how do you link probability in the tensor representation, similar to error bars? What is a normal distribution in tensor-based or vector data? Is it like a circle of confusion in multidimensional space?

[12:31] RaviSharma: Srihari - does adjacency imply common nodes, but in a 3-way tensor?

[12:32] David Eddy: this elegant math reminds me of a conversation with Paul Samuelson in the context of the 2008 "kerfuffle"... "Well... we did train these fellows." The missing context here was that, since there are no measures / datasets about systems, that "variable" is "simply" left out of the equations.

[12:32] RaviSharma: Srihari - if tensor elements are sparse, then can diagonalization or assumption of orthogonal symmetry help?

[12:35] RaviSharma: Srihari - what is the meaning of the theta parameter, and how do you identify it? Perturbations?

[12:42] RaviSharma: RESCAL-ALS algorithm to compute the RESCAL tensor factorization. The solution is a matrix and a core tensor. RESCAL factors a (usually sparse) three-way tensor X such that each frontal slice X_k is factored into X_k = A * R_k * A^T

[12:43] RaviSharma: I searched the term
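
The quoted factorization can be illustrated with a short numpy sketch: each frontal slice X_k of the relational tensor is approximated as A * R_k * A^T. The shapes and random values below are illustrative assumptions; this is not the ALS solver itself.

  import numpy as np

  rng = np.random.default_rng(1)
  n, d, k = 5, 2, 3                  # entities, latent dimension, relations

  A = rng.normal(size=(n, d))        # shared entity factor matrix
  R = rng.normal(size=(k, d, d))     # core tensor, one slice per relation

  X_hat = np.stack([A @ R_k @ A.T for R_k in R])   # reconstruct each slice
  print(X_hat.shape)                 # (k, n, n): one n x n slice per relation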

[12:43] TerryLongstreth: Can we have the URL of your website posted here?

[12:47] KenBaclawski: Hari Srihari website: https://cedar.buffalo.edu/~srihari/

[12:51] David Eddy: As we would pester our high school calculus teacher, after he'd put up a particularly elegant proof... "Very nice... what's the practical value...?"

[12:52] janet singer: You said semantics and syntax are left behind. Are they implicitly present in the data being sufficiently clean, structured and consistent to begin with?

[12:53] RaviSharma: Thanks for showing how uncertainty is introduced; it seems similar to a Bayesian decision in a multivariate distribution?

[12:53] Mike Bennett: @David the potential is huge. Consider a triple store but instead of inert presence or absence of a relation you have the capacity to learn. Result is neural learning with explicit (discoverable) semantics.

[12:54] David Eddy: @Janet... if the semantics & syntax are stripped out... how useful can the "data" be?

[12:54] RaviSharma: mike - thanks

[12:55] David Eddy: @MikeB... but what's the value/relevance if (for instance) FIBO ignores the operational systems?

[12:57] Mike Bennett: @David completely different use cases. If I understand the math, this looks a lot like a brain. I see no overlap between what you can do with this and what you would do with a semantics standard like FIBO, which explicitly aims to lock down concepts for standardization purposes.

[12:59] RaviSharma: David - what is missing is the new data set for x-ray detectors

[13:01] TerryLongstreth: Is anyone exploring the Open World issues?

[13:04] ToddSchneider: Our speaker is back online.

[13:06] David Eddy: @Mike... one thing I'm trying to get closer to is how to "bridge the gap" between the elegance of clean ontologies & the messiness of decades of accumulated operational systems.

[13:12] RaviSharma1: thanks Srihari about a priori vs your description of not knowing the dataset till ML or NN tensor 3

[13:13] Mike Bennett: Is classification a matter of applying learning to the Generalization relation?

[13:13] janet singer: Judea Pearl shifted from a focus on correlational Bayesian networks to causal reasoning

[13:14] RaviSharma: Janet asked about correlational vs causal and about Bayesian criteria

[13:16] RaviSharma: Srihari said Markov could point us to causality?

[13:17] RaviSharma: the cause is gravity for the apple falling

[13:18] RaviSharma: Janet - can we discover a cause from something missing in datasets or KGs?

[13:19] RaviSharma: thanks Ken and others

[13:20] ToddSchneider: Meeting ends @13:20 EDT

Resources
