|Date/Time|Feb 22 2017 17:30 GMT|
|9:30am PST / 12:30pm EST|
|5:30pm GMT / 6:30pm CET|
Ontology Summit 2017 Launch
Video Teleconference: https://bluejeans.com/768423137
Meeting ID: 768423137
Chat room: http://bit.ly/2lRq4h5
There are many connections between ontologies, AI, machine learning and reasoning. The Ontology Summit will explore, identify and articulate the relationships between these areas, with special attention to the three track themes.
- Introduction Ken Baclawski Slides
- Track A
- Track B
- Co-Champions Mike Bennett and Andrea Westerinen Slides
- Blog: Using background knowledge to improve machine learning results
- Track C
- Alan Rector
- Aleksandra Sojic
- Alex Shkotin
- Beth DiGiulian
- Bobbin Teegarden
- Bob Schloss
- Christi Kapp
- Christopher Spottiswoode
- Dalia Varanka
- David Hay
- David Newman
- Debra Lina Ciriaco
- Donna Fritzsche
- Eric Chan
- Gary Berg-Cross
- Jack Ring
- Jim Disbrow
- Jim Logan
- Joel Bender
- Jose Parente de Olivia
- Ken Baclawski
- Kiki Hempelmann
- Laure Vieu
- Lavern Pritchard
- Mark Underwood
- Max Petrenko
- Mike Bennett
- Mike Bobak
- Mike Denny
- Mike Riben
- Ognyan Kulev
- Pat Cassidy
- Ram D. Sriram
- Ravi Sharma
- Terry Longstreth
- Todd Schneider
- Valerie Charron
- William Sweet
[12:42] MikeBennett: Is there a bit.ly URI for today's session page?
[12:46] KenBaclawski: @MikeBennett: Yes, this should work: http://bit.ly/2kV2PEr
[12:50] MikeBennett: Thanks
[12:52] ToddSchneider: Ken, can you capture the list of participants on the web meeting?
[12:59] KenBaclawski: @ToddSchneider: I will do that as soon as someone else shares the screen.
[13:09] TerryLongstreth: @Gary - in an example from NELL, you mention 'the mayor probably lives in the same city'. Is NELL actually aware of probabilities?
[13:12] gary berg-cross: Terry, I think that there is some. As they say, "NELL learns to acquire these two types of knowledge in a variety of ways. It learns free-form text patterns for extracting this knowledge from sentences on the web, it learns to extract this knowledge from semi-structured web data such as tables and lists, it learns morphological regularities of instances of categories, and it learns probabilistic Horn clause rules that enable it to infer new instances of relations from other relation instances that it has already learned."
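The probabilistic Horn clause rules Gary quotes can be sketched in miniature. The rule, the facts, and the 0.9 confidence below are made-up assumptions for illustration, not NELL's actual implementation:

```python
# Illustrative sketch of applying one probabilistic Horn clause rule of the
# kind NELL learns, e.g. mayorOf(X, C) -> livesIn(X, C) with confidence 0.9.
# The facts and the weight are invented for the example.

facts = {("mayorOf", "Alice", "Springfield")}

def apply_rule(facts, body_rel, head_rel, confidence):
    """Infer head_rel(X, Y) with the rule's confidence for every body_rel(X, Y) fact."""
    inferred = {}
    for rel, x, y in facts:
        if rel == body_rel:
            inferred[(head_rel, x, y)] = confidence
    return inferred

print(apply_rule(facts, "mayorOf", "livesIn", 0.9))
# {('livesIn', 'Alice', 'Springfield'): 0.9}
```

The confidence attached to the inferred fact is what lets a system like NELL rank candidate beliefs rather than treat every inference as certain.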
[13:12] ravisharma: great start on overview diagram
[13:14] MarkUnderwood: Offtopic: BlueJeans is definitely a net plus
[13:16] ravisharma: how will clusters be handled in ontologies, as these are one of the pattern techniques for machine learning?
[13:18] ravisharma: machine learning with a training set will likely be more successful, as it will define multidimensional boundaries within which the concept or term will be a likely match.
[13:19] ravisharma: What is FEIII?
[13:21] gary berg-cross: @Ravi A start on this is the Discourse Frame. So the sense of the word "buy" is clarified by knowing about the context of a commercial transfer that involves certain individuals, e.g. a seller, a buyer, goods, money, etc. Linguistic frames are referred to by sentences in text that describe different situations of the same type, i.e. frame occurrences. The words in a sentence "evoke" concepts as well as the perspective from which the situation expressed is viewed. Cluster patterns are built around these.
[13:24] MikeBennett: @Ravi I can't remember what it stands for, but this is the data challenge I described, taking data in SEC 10K filings and deducing relationships. It's part of an event whose name I can't remember, but please stay tuned for full details when we present on this.
[13:25] ravisharma: great Gary - this implies closely related or overlapping terms could form a cluster to be identified by machine learning, is that right?
[13:26] TerryLongstreth: Dial +1.408.740.7256 and enter 768423137#.
[13:32] MikeBennett: @Ravi here is a link to some information about the FEIII challenge and the DSMM workshop of which it will be a part.
[13:33] ravisharma: Ram - the last speaker gave an example of how to use ontologies for learning; your example is for using an ontology with a reasoner (inference). Where do the reasoning rules come from?
[13:33] ravisharma: do they come from domains and languages, or are they embedded in and extracted from ontologies?
[13:37] gary berg-cross: @ravi [13:25] Closely related info might be synonyms of parts of a common object or process etc. These may get handled differently but yes, they may be part of a cluster or a super cluster in some way.
[13:40] Donna Fritzsche: Hi folks, we are using a new virtual meeting system today. Please be patient with us!
[13:43] JackRing: Does Reasoning detect the limits of 'believed propositions?'
[13:46] JackRing: The issue today is whether the agenda addresses ontology refinement or drift over time, e.g. is Pluto a planet?
[13:46] Jim Disbrow: Inductive and deductive reasoning are being considered - but what about abductive reasoning?
[13:47] JackRing: Now that 'science' includes consensus then there is also abductive reasoning.
[13:48] ChristiKapp: As it relates to false texts on the internet: I started via mining of DNS data, Google keyword trends, and the hyperlinks between different entities publishing news. The resulting "network" of web sites (created mostly by non-profits) shows a strong connection between diverse entities. I was heading in the direction of querying the 501(c)(3)s in Wikipedia, but found that the data was not yet populated. So there could probably be some automation regarding this data combination to provide a score on the credibility of the text source.
[13:50] gary berg-cross: @Jack [13:46] to decide if Pluto is a "planet" we look at the definition of a planet. Has that been formalized so a system understands the constraints applied? If we don't agree on such a definition, then a system may do no better than people.
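Gary's point about formalizing the definition can be made concrete. A minimal sketch of encoding the three IAU planet criteria as explicit, checkable constraints follows; the `is_planet` helper and its boolean inputs are illustrative assumptions, not a real ontology encoding:

```python
# Illustrative sketch: the IAU planet definition as explicit constraints,
# so a system can classify a body and report which criterion fails.

def is_planet(orbits_sun, nearly_round, cleared_neighborhood):
    """Return (verdict, failed_criteria) under the three IAU criteria."""
    criteria = {
        "orbits the Sun": orbits_sun,
        "has sufficient mass to be nearly round": nearly_round,
        "has cleared the neighborhood around its orbit": cleared_neighborhood,
    }
    failed = [name for name, ok in criteria.items() if not ok]
    return (len(failed) == 0, failed)

# Pluto satisfies the first two criteria but not the third.
verdict, failed = is_planet(True, True, False)
print(verdict)   # False
print(failed)    # ['has cleared the neighborhood around its orbit']
```

Once the constraints are explicit like this, the Pluto question stops being a matter of drift and becomes a deterministic check, which is exactly what a formal definition buys a reasoner.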
[13:50] Donna Fritzsche: I need to drop off. Thanks all! Donna
[13:50] ravisharma: would you also apply reasoning to the ontology itself to discover, say, closely related terms or identify important relationships among things?
[13:50] MikeBennett: Sounds like we need to start on the meta-ontology early in the Summit process so we know what we are all talking about e.g. abductive reasoning.
[13:51] gary berg-cross: Well, Peirce finally made it to the chat.... Abductive reasoning (also called abduction, abductive inference or retroduction) is a form of logical inference which goes from an observation to a theory which accounts for the observation, ideally seeking to find the simplest and most likely explanation.
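The definition Gary gives can be sketched as a tiny program: given an observation, select the hypothesis that explains it and is most probable (a stand-in here for "simplest and most likely"). The hypotheses and their priors are invented for the example:

```python
# Illustrative sketch of abductive inference: from an observation, infer the
# best explanatory hypothesis. Hypotheses and priors are made-up assumptions.

def abduce(observation, hypotheses):
    """hypotheses maps hypothesis -> (set of observations it explains, prior)."""
    candidates = [(h, prior) for h, (explains, prior) in hypotheses.items()
                  if observation in explains]
    if not candidates:
        return None
    # Among hypotheses that account for the observation, prefer the most probable.
    return max(candidates, key=lambda hp: hp[1])[0]

hypotheses = {
    "it rained":       ({"wet grass", "wet street"}, 0.6),
    "sprinkler ran":   ({"wet grass"},               0.3),
    "street cleaning": ({"wet street"},              0.1),
}
print(abduce("wet grass", hypotheses))   # it rained
```

This is the inference direction that distinguishes abduction from deduction: rather than deriving consequences from rules, it works backwards from an effect to a plausible cause.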
[13:51] Jim Disbrow: Think of the word "phonon" - it is defined differently in a half-dozen professions. Learning these differences spreads and broadens the quickness. Learning these new meanings is how abductive learning works.
[13:52] ravisharma: mikeBennett - thanks for financial reference and SEC 10K example.
[13:54] JackRing: Now that this year addresses the triad, we are into algorithms, so how shall we ascertain the dynamic and integrity limits of any given construct?
[13:54] ravisharma: Jim - in acoustics and probably in literature there are different meanings of "phonon", but I presume the ontology context will be able to differentiate these for abductive reasoning?
[13:54] Ram D. Sriram: Logging off. Need to get to the next meeting
[13:54] ravisharma: great work in identifying speakers, Ram. Thanks.
[14:01] MarkUnderwood: Newbies... see you on Twitter @ontologysummit