
Session 1, Track A: Automation of Knowledge Extraction and Ontology Learning

Session Chair: Gary Berg-Cross

Abstract

Context: Building and maintaining knowledge bases and ontologies is hard work that could benefit from automated help.

Perception: Various parts of AI, such as NLP and machine learning, are developing rapidly and could offer such help.

Motivation: Bring together researchers to discuss the issues and the state of the art.

Sample Questions:

  • What is the range of methods used to extract knowledge and build ontologies and other knowledge structures?
  • How have these techniques been enhanced and expanded over time?
  • What issues of knowledge building and reuse have been noted?
  • Are there hybrid efforts?

Agenda and Speakers

  • Introduction: Gary Berg-Cross Slides

Speakers

  • Estevam Hruschka (Associate Professor at the Federal University of Sao Carlos (DC-UFSCar) & Adjunct Professor at Carnegie Mellon University) will speak on work growing out of Never-Ending Language Learning (NELL) Slides

Title: Never-Ending Learning Approach for Populating and Extending an Ontology

Abstract: NELL (Never-Ending Language Learner) is a computer system that runs 24/7, forever, learning to read the web and, as a result, populating and extending its own ontology. NELL has two main tasks to perform each day: i) extract (read) more facts from the web and integrate them into its growing ontology; and ii) learn to read better than yesterday, enabling it to go back to the text it read yesterday and today extract more facts, more accurately. This system has been running 24 hours a day for over seven years now. The result so far is an ontology with more than 100 million interconnected instances (e.g., servedWith(coffee, applePie), isA(applePie, bakedGood)) that NELL is considering at different levels of confidence, along with hundreds of thousands of learned phrasings, morphological features, and web page structures that NELL uses to extract beliefs from the web. The main motivation for building NELL is the belief that we will never really understand machine learning until we can build machines that learn many different things, over years, and become better learners over time.
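The two daily tasks described in the abstract can be made concrete with a toy loop. The Python below is illustrative only, not NELL's actual code; the extractor stub, facts, and confidence values are invented for the example.

    # Toy sketch of NELL's two daily tasks (illustration only, not NELL's code).

    def extract_facts(page):
        """Stand-in extractor; NELL couples many learned extractors here."""
        if "coffee" in page:
            return [(("servedWith", "coffee", "applePie"), 0.7)]
        return []

    kb = {("isA", "applePie", "bakedGood"): 0.9}   # seed ontology
    pages = ["coffee is often served with apple pie"]
    for day in range(3):                           # NELL never stops; we do
        # Task (i): extract (read) more facts; integrate them into the ontology.
        for page in pages:
            for fact, conf in extract_facts(page):
                kb[fact] = max(kb.get(fact, 0.0), conf)
        # Task (ii): learn to read better than yesterday (retrain the extractors
        # on the knowledge base's high-confidence facts) -- omitted in this toy.
    print(kb)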

Short bio Estevam R. Hruschka Jr. is co-leader of the Carnegie Mellon Read the Web project (http://rtw.ml.cmu.edu/rtw/) and the head of the Machine Learning Lab (MaLL) at the Federal University of Sao Carlos (UFSCar), in Brazil. He is also an adjunct professor in the Machine Learning Department at Carnegie Mellon University, USA, an associate professor at UFSCar, Brazil, and a member of the AI4Good Foundation (http://ai4good.org/) Steering Committee. Estevam has been a "young research fellow" at FAPESP (Sao Paulo state research agency, Brazil) and is currently a "research fellow" at CNPq (Brazilian research agency). His main research interests are never-ending learning, machine learning, probabilistic graphical models, and natural language understanding. He has been working on machine learning with many international research teams, collaborating with research groups from companies and universities.

  • Valentina Presutti (Researcher at the Semantic Technology Laboratory of the National Research Council (CNR), Rome) will speak on FRED, a machine reader for the Semantic Web

Abstract: A machine reader is a tool that transforms natural language text into formal structured knowledge so that the latter can be interpreted by machines according to a shared semantics. FRED is a machine reader for the Semantic Web: its output is an RDF/OWL graph whose design is based on frame semantics. Nevertheless, FRED's graphs are domain- and task-independent, making the tool suitable as semantic middleware for domain- or task-specific applications. To serve this purpose, it is available both as a REST service and as a Python library. In this talk I will give an overview of the method and principles behind FRED's implementation.
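Because FRED is offered as a REST service, it can be dropped into a pipeline with a plain HTTP request. The sketch below is a hedged illustration: the endpoint URL, parameter name, and credential are placeholders, not FRED's documented API; consult the service documentation for the real interface.

    # Sketch of calling a FRED-style machine-reader REST service.
    # Endpoint, parameter name, and token are placeholders, not FRED's
    # documented API.

    import requests

    FRED_ENDPOINT = "http://example.org/fred"        # placeholder URL
    resp = requests.get(
        FRED_ENDPOINT,
        params={"text": "Miles Davis was an American jazz musician."},
        headers={"Accept": "text/turtle",             # ask for the RDF/OWL graph
                 "Authorization": "Bearer <token>"},  # placeholder credential
    )
    resp.raise_for_status()
    print(resp.text)   # frame-semantics-based RDF graph, ready for a triple store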

Slides

Short bio Valentina Presutti is a researcher at the Semantic Technology Laboratory of the National Research Council (CNR) in Rome, and she is associated with LIPN (University Paris 13 and CNRS). She received her Ph.D. in Computer Science in 2006 at the University of Bologna (Italy). She contributes as a researcher to, and is the scientific lead at CNR for, EU- as well as nationally funded projects, e.g. MARIO (H2020) and IKS (FP7). She is a board member of the Association for Ontology Design and Patterns. She contributes to the Semantic Web community by serving in scientific event roles, e.g. Workshop Chair at ISWC 2017, Program Chair at ESWC 2013 and I-Semantics 2012. She has more than 90 publications in international journals/conferences/workshops on topics such as the Semantic Web, knowledge extraction, and ontology design. She developed her expertise in ontology design and knowledge extraction, also serving as a consultant for private as well as public organizations. Her research interests include Semantic Web and Linked Data, ontology quality, ontology design and patterns, assistive robotics, and natural language understanding.

  • Alessandro Oltramari (Research Scientist at Bosch) "From machines that learn to machines that know: the role of ontologies in machine intelligence" Slides

Abstract: Deep learning networks are getting better and better at recognizing visual and acoustic information, from pictures of cats to human faces, from gunshots to voices. But to make sense of a movie scene, or to interpret speech, machines need world models, namely semantic structures capable of connecting the dots from perception to knowledge. In my presentation I will talk about the state of the art in the integration of machine learning algorithms and ontologies. I will also briefly illustrate how this integration is nowadays a key requirement for enabling intelligent services in the Internet of Things.

Short bio Alessandro Oltramari is a Research Scientist and Project Leader at Bosch. Prior to this position, he was a Research Associate at Carnegie Mellon University. He received his Ph.D. from University of Trento (Italy) in Cognitive Science, in co-tutorship with the Institute for Cognitive Science and Technology of the Italian National Research Council (ISTC- CNR). He has held a research position at the Laboratory for Applied Ontology (ISTC-CNR) in Trento from 2000 to 2010. He has been a Visiting Research Associate at Princeton University (Cognitive Science Laboratory) in 2005 and 2006. His primary interests are centered on theoretical and applied research on knowledge representation and agent technologies. In particular, his research activity mainly deals with integrating ontologies and cognitive architectures for high-level reasoning in knowledge-intensive tasks.

Virtual Meeting Details

Date: Wed., 8-March-2017 Start Time: 9:30am PST / 12:30pm EST / 6:30pm CET / 5:30pm GMT / 1730 UTC

Expected Call Duration: ~90 minutes

Video Teleconference: https://bluejeans.com/768423137 • Meeting ID: 768423137 • Chat room: http://bit.ly/2lRq4h5

Proceedings

[12:01] KenBaclawski: Track A Session 1 will begin at 12:30pm EST.

[12:57] AndreaWesterinen: It seems that some of the learning is done similarly to Hearst patterns, where the predicates define the verbs in the pattern. Would this be an accurate statement?

[13:00] Max Petrenko: It does look similar to Hearst's patterns.

[13:00] Max Petrenko: Question: how large is the current inventory of extracted relations?

[13:00] mriben: did he say what the ontology is used for after it has "learned" the language?

[13:00] Max Petrenko: No

[13:02] AndreaWesterinen: @mriben It seems that the ontology is used to expand on the learning (i.e., keep learning as it keeps crawling new pages).

[13:03] Estevam: Max: yes, one of NELL's components uses Hearst patterns, but it keeps on learning more and more different features in different languages, and also from images, from morphology, from conversations (chatbots and Twitter), etc.
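For readers following the Hearst-pattern thread above: Hearst patterns are lexico-syntactic templates such as "X such as Y" that signal hyponymy (isA) relations. A minimal Python illustration (not NELL's implementation, which couples many more learned features):

    # Minimal Hearst-pattern matcher (illustration only).
    import re

    PATTERNS = [
        # "X such as Y"  =>  Y isA X
        (re.compile(r"(\w+) such as (\w+)"), lambda m: (m.group(2), m.group(1))),
        # "Y and other X"  =>  Y isA X
        (re.compile(r"(\w+) and other (\w+)"), lambda m: (m.group(1), m.group(2))),
    ]

    def hearst_facts(sentence):
        """Return (hyponym, hypernym) pairs suggested by the patterns."""
        return [fn(m) for pat, fn in PATTERNS for m in pat.finditer(sentence)]

    print(hearst_facts("bakedGoods such as applePie go well with coffee"))
    # -> [('applePie', 'bakedGoods')]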

[13:04] David Newman: Estevam, what are the NLP techniques you are using to identify relations between terms (e.g., LSA, word2vec, etc.)?

[13:04] Estevam: Max: our knowledge base has more than 100 million facts, and growing...

[13:05] TerryLongstreth: @Estevam: Do you have any statistics on what percentage of 'facts' discovered by NELL have to be corrected or augmented by a person?

[13:05] Estevam: Max: not all the facts are 100% correct, and the idea is that NELL can become a better learner every day, thus she should be able to correct wrong facts that were previously learned.

[13:07] Max Petrenko: Estevam: thanks, I was wondering about the number of relations, not facts/triples, but I think I get the picture.

[13:07] DaveWhitten: @Estevam, what mechanism does NELL use to know that it has learned wrong facts? Is there a human vote, or neural net example or ??

[13:07] gary berg-cross: Q from Michael Singer "Did NELL have "Chase is a Credit Union" with 100% confidence? Is that some other Chase Bank? https://www.quora.com/Is-Chase-a-credit-union Chase is not a credit union; it is a for-profit corporation. It is a commercial bank. A federal credit union is not for profit and is owned by its shareholders, and the profit goes back to its members in the form of low interest rates and dividends. ..."

[13:08] ToddSchneider: Estevam, you indicated that confidence values can change (over time/new data). Is there any data about when or if these confidence values converge? Are there any common characteristics of 'facts' for which the confidence values converge (assuming some do)?

[13:09] Estevam: @David Newman: NELL uses different NLP techniques for preprocessing, as well as for extracting named entities and relations. Among them we use POS tagging, dependency parsers, grammar-based parsers, etc. I can send you the basic papers that show all those NLP techniques, just send me an e-mail: estevam.hruschka@gmail.com

[13:10] ToddSchneider: Estevam, if the list of papers you referred to is short, could you post it into this chat?

[13:11] AndreaWesterinen: @Estevam I would be interested in the basic papers as well. Can you post links in this chat?

[13:14] DaveWhitten: Is FRED a form of NELL ?

[13:15] mriben: @AndreaWesterinen: it appears this ontology would be useful for more than just "learned knowledge", and useful as a knowledge base for real-world applications of some sort. @Estevam, are you collaborating with others to put the ontology into practical systems?

[13:16] Estevam: @TerryLongstreth: very good question. Over the years NELL has received (on average) 2.4 items of human feedback per category per month. It means that for one given category (let's say: city) NELL receives fewer than 3 items of human feedback each month. NELL's precision varies depending on the category: for "easy" categories (such as city, people, company, etc.) NELL has precision above 90%. For difficult categories, on the other hand, the precision can drop to 60%. On average, NELL's precision is around 75%. We do not correct all "wrong facts" because we want NELL to do it by itself, or by automatically and autonomously going to social networks and having conversations with people (what we call "conversing learning").

[13:16] DaveWhitten: Apparently a Box pattern can have n-ary predicates in it.

[13:18] Estevam: @TerryLongstreth: in summary, the "wrong" facts are key items that we would like NELL to get better at and correct in the future.

[13:19] AndreaWesterinen: @Estevam Maybe you could send the list of papers to Gary who could post them on the Track blog page?

[13:20] Alan Rector: Does NELL distinguish between learning the ontology - concepts and definitions - and facts, e.g. about Fred?

[13:22] Francesco Corcoglioniti: @Estevam, does NELL perform some form of entity linking (e.g., wrt. Wikipedia/DBpedia) or coreference resolution for recognized entities (e.g., differentiating between occurrences of "Amazon" as the company and "Amazon" as the forest, and associating extracted relations to the respective disambiguated entity)?

[13:23] Estevam: @Max: thanks Max! Regarding relations and categories, we started NELL with approx. 300 categories and 300 relations. Currently NELL has about 350 categories and more than 600 relations. 90% of the new predicates were learned by NELL in its Ontology Extension component. You can expect, however, that next month NELL should add about 10,000 new relations (created by a "verb-based" ontology extension).

[13:25] MarkUnderwood: Public service announcement: Twitter-philes @ontologysummit #ontologysummit

[13:28] Max Petrenko: Thanks, @Estevam, this helped.

[13:30] TerryLongstreth: Thanks, Estevam.

[13:32] Estevam: @DaveWhitten: thank you for asking! For most of the tasks NELL always applies the idea of multi-view, multi-task learning (meaning that it uses many different approaches for each task); this is also true for "defining confidence" (high-confidence facts tend to be right). But, in general, NELL counts on the evidence it can find during its continuous reading process. So, let's imagine that NELL learned fact1 (in iteration X, which happened in the past) which is wrong; if in the next iterations (X+1, X+2, ...) NELL reads more web pages about fact1, it should be capable of revising its confidence in fact1 so that it will not be considered "valid" anymore.
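Estevam's point that confidence is re-estimated as NELL keeps reading can be illustrated with a simple evidence-weighted update. This is a toy stand-in, not NELL's coupled multi-view scoring; the update rule and numbers are invented for the example.

    # Toy confidence revision from evidence accumulated across iterations.
    def revise(confidence, supports, contradicts, weight=0.2):
        """Nudge a fact's confidence toward the balance of new evidence."""
        total = supports + contradicts
        if total == 0:
            return confidence
        return (1 - weight) * confidence + weight * (supports / total)

    conf = 0.9                      # fact1 promoted in iteration X, but wrong
    for supports, contradicts in [(0, 3), (1, 4), (0, 5)]:  # iterations X+1, ...
        conf = revise(conf, supports, contradicts)
    print(round(conf, 3))           # 0.493 and falling: no longer "valid"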

[13:33] MikeBennett: Are the slides being advanced? I still see the title slide

[13:34] Matthew Lange (@mateolan): i only see the first slide

[13:34] DaveWhitten: I only see the first slide as well.

[13:36] Max Petrenko: Would it be possible to share the slide decks for today's talks?

[13:36] Ken Baclawski: @Max Petrenko: Yes, all of the slide decks are on the meeting page.

[13:37] Valentina Presutti: DaveWhitten: not exactly. But you can certainly think of using FRED as a tool in a pipeline such as NELL

[13:37] Valentina Presutti: Yes, DaveWhitten, there can be n-ary predicates in a Box. In fact boxes can get very complex :-)

[13:41] Valentina Presutti: @Alan Rector, I am not sure whether your message contained a question about FRED :-) Anyway, FRED distinguishes between facts and concepts/relations

[13:41] Valentina Presutti: @Estevam: how do you formalize learned facts?

[13:41] Estevam: @Michael Singer: that's a very interesting comment! You are right, NELL is wrong about that. And what is expected is that the more NELL reads, the more she will be capable of dropping the confidence of that fact. So, we do not want to simply delete (or manually correct) all "wrong" facts. But, if we wanted to keep the knowledge base free from "wrong facts", we would have deleted it. :-)

[13:42] Valentina Presutti: @Estevam: how do you manage internal alignment between the learned concepts?

[13:44] gary berg-cross: Alessandro has to leave for a meeting in about 15 minutes, so perhaps we will give him the first go at Qs if Valentina and Estevam can stay on a bit longer.

[13:44] Valentina Presutti: @Estevam: we used the NELL ontology (specifically relations) for evaluating an alignment task, and it turned out to be very hard to align to the NELL ontology; at that time we speculated that NELL labels looked a bit unnatural. But this was three years ago. Any idea whether this makes sense, or whether you noticed something like this and addressed the problem eventually? (http://content.iospress.com/articles/semantic-web/sw221)

[13:44] Valentina Presutti: @gary berg-cross: fine by me, no problem

[13:45] Alan Rector: Apologies. I have to leave unexpectedly, Alan

[13:45] Aldo Gangemi: (wave) to Alan

[13:46] Valentina Presutti: Hi Alan, thanks for attending!

[13:47] MarkUnderwood: @Valentina - I wonder how FRED would fare with human NLP when the human understood and accommodated some of the limitations (quirks? challenges?) of the system. In other words, can't the problem be partly addressed from the other side of the HCI equation?

[13:48] MarkUnderwood: *problems/issues

[13:48] MarkUnderwood: By the way, a nice tour de force of some NLP issues :)

[13:49] Valentina Presutti: @MarkUnderwood: thanks! Do you mean for example using human computation such as crowdsourcing?

[13:50] Valentina Presutti: I think this is a possible direction. Another idea I have is to try also with evolutionary algorithms, but this would require integrating FRED in a pipeline similar to NELL. What I mean is that there's a need to include the evolution of language in the game, and humans, yes!

[13:51] DaveWhitten: @Valentina since the box and graph model can be very visually complex, is there also a way to express the same information in some form of Controlled Natural Language ?

[13:52] Valentina Presutti: @DaveWhitten: the boxes are a graphical representation of a subset of FOL, hence you can represent them in FOL :-)

[13:53] Estevam: @ToddSchneider: for now NELL just tries to re-calculate the confidence of all facts in every iteration, thus not expecting convergence. But this is one (of the many) issues we need to work on to get a better solution. Identifying when convergence has been achieved is really very important for a never-ending learning system.

[13:53] DaveWhitten: @Valentina. What is the limit to your subset of FOL ?

[13:54] DaveWhitten: @Valentina, if I recall correctly, FRED also allows modal operations, which would be a superset of FOL, right?

[13:54] Estevam: @ToddSchneider: I'll also post that list here! :-) Thank you for asking.

[13:55] Valentina Presutti: @DaveWhitten: DRT is not a theory developed by myself or my group. However, in a DRS you only have binary relations; I think this is the main limit. You can find more precise details in H. Kamp, "A Theory of Truth and Semantic Representation", in: J. A. G. Groenendijk, T. M. V. Janssen, and M. B. J. Stokhof, editors, Formal Methods in the Study of Language, Part I, pages 277–322. Mathematisch Centrum, 1981.
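For readers following the DRT pointer: a DRS pairs a set of discourse referents with a set of (at most binary) conditions, and translates directly into first-order logic. The standard textbook example for "A farmer owns a donkey" is:

    K = ⟨{x, y}, {farmer(x), donkey(y), owns(x, y)}⟩
      ⟹  ∃x ∃y (farmer(x) ∧ donkey(y) ∧ owns(x, y))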

[13:56] Valentina Presutti: @DaveWhitten: about modal operations: no, we use a lightweight representation of them; they are not primitives, but we have a vocabulary for expressing them in the domain of discourse

[13:56] Valentina Presutti: @DaveWhitten: we use reification and express them as "qualities" of events
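Valentina's reification strategy, a modality expressed as a "quality" of an event rather than as a logical primitive, can be sketched in RDF. The namespace and property names below are illustrative inventions, not FRED's actual vocabulary:

    # Sketch: a modality reified as a "quality" of an event.
    # Namespace and property names are illustrative, not FRED's vocabulary.
    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/")
    g = Graph()
    # "John must leave": the leaving event carries a necessity quality,
    # instead of a modal operator outside first-order logic.
    g.add((EX.leave_1, RDF.type, EX.LeaveEvent))
    g.add((EX.leave_1, EX.hasAgent, EX.John))
    g.add((EX.leave_1, EX.hasQuality, EX.Necessary))   # reified modality
    print(g.serialize(format="turtle"))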

[13:57] ToddSchneider: Estevam, even if 'convergence' of confidence values is not expected, have any been observed? Also, could variations in confidence values reflect something about the ingested data?

[13:58] Estevam: @AndreaWesterinen: yes, I'll just set an initial list and also post here! :-) Thanks for asking!

[13:58] DaveWhitten: I have heard of DRS (https://en.wikipedia.org/wiki/Discourse_representation_theory). I believe it is used by the ACE controlled natural language system

[14:00] Valentina Presutti: DRSs are representations based on DRT

[14:00] ravisharma: On machine learning: other than the usual logical actions, how mature are ontologies that provide intelligent action, such as intervention?

[14:02] DaveWhitten: @Valentina, do you use a commonly available (Prolog/SQL/etc) Database to store your "knowledge/fact" tuples, or a specialized homegrown knowledge base system ?

[14:03] AndreaWesterinen: @Estevam Can you answer the same question as for Valentina, what is the backing store for NELL?

[14:05] DaveWhitten: I just realized there is the possibility of just using computer memory (possibly virtual) as a "backing store" (thanks @AndreaWesterinen for the phrase)

[14:06] Estevam: @mriben: yes, I agree, applying NELL's knowledge base to specific domains is a great idea. We have worked on some specific domains, trying to integrate NELL's knowledge base into real problems. One interesting point is that when using NELL with a focus on high precision and high recall, rather than on necessarily starting from very few labeled data, it would make lots of sense to have a very high number of initial examples for every category (one example would be starting NELL using DBpedia or Wikidata as the initial ontology), so that NELL could extend it. One issue that needs attention would be scalability, but that is very interesting.

[14:07] Valentina Presutti: @DaveWhitten we use Virtuoso (triple store)
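Since FRED's graphs live in a Virtuoso triple store, downstream applications can query them with standard SPARQL. A hedged sketch using the SPARQLWrapper library; the endpoint URL is a placeholder, not FRED's actual endpoint:

    # Sketch: querying a Virtuoso-style SPARQL endpoint (placeholder URL).
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/sparql")  # placeholder endpoint
    sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["s"]["value"], row["p"]["value"], row["o"]["value"])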

[14:07] gary berg-cross: Valentina, Estevam Thank you. Can you stay on for a few more minutes for Qs?

[14:08] gary berg-cross: Can Alessandro discuss the technologies used?

[14:08] Estevam: gary berg-cross: yes, count on me!

[14:09] Valentina Presutti: @Gary: yes

[14:11] MarkUnderwood: It's in S3, which works this week https://s3.amazonaws.com/ontologforum/OntologySummit2017/TrackA/RolesOfOntologiesInMachineIntelligence--AlessandroOltramari_20170308.pdf

[14:12] DaveWhitten: It seems that the process of doing NLP already involves some classification and categories used by the word choice and the implied metaphors and image schemas. Do either NELL or FRED explicitly create models of these?

[14:15] Russ Reinsch: Does this chat app automatically capture the text in the chat, at the end of the session?

[14:16] SteveRay: Yes.

[14:16] gary berg-cross: @Russ Ken captures the text and posts it as part of the session.

[14:16] MarkUnderwood: @Valentina, no I meant training a human "operator" in FRED's capabilities and constraining the process vs. trying to wrestle w/ all the NLP issues raised

[14:17] MarkUnderwood: @Valentina - Human "middleware"

[14:17] gary berg-cross: @Ken Let us know when we have to end the session. :-(

[14:18] KenBaclawski: @Russ: Yes, I capture the chat as the Proceedings in the meeting page. I edit it a little, of course.

[14:18] Russ Reinsch: Thanks Gary / Steve.

[14:19] Russ Reinsch: Ken I would like to get back on the email listserve. My email is russell.reinsch@gmail

[14:19] ToddSchneider: You can e-mail the contents of the chat. Click on the 'Actions' button at the top of the chat window.

[14:19] KenBaclawski: @gary berg-cross: If the participants are fine with it, then we can continue. There is no limitation in the conference calls or video.

[14:20] Russ Reinsch: Thanks Todd

[14:20] MarkUnderwood: @Russ - example from last week http://ontologforum.org/index.php/ConferenceCall_2017_03_01

[14:21] MarkUnderwood: I must leave, thanks everyone for attending. Keep the OntoFaith.

[14:22] gary berg-cross: Next week, same time: Track B with Co-Champions Mike Bennett and Andrea Westerinen. Topic: Using background knowledge to improve machine learning results

[14:22] ravisharma: The internal representation of learned concepts in the Theo framework is interesting; is there a URL to share?

[14:24] ravisharma: What about the patterns? Can they be used in Theo?

[14:24] KenBaclawski: @Russ Reinsch: I tried sending an invitation to you.

[14:25] Russ Reinsch: @Ken - received. graci

[14:26] ToddSchneider: Ken, I've been using Audacity for the audio recording. Do you want the 'project' files or the MP3 versions?

[14:27] KenBaclawski: @Russ Reinsch: Do you have an account on the Ontology Community wiki? If not, I can create one for you.

[14:27] ravisharma: That would be great, as some of the concepts are not in the chat.

[14:27] Valentina Presutti: @MarkUnderwood: in a sense this is what we do :-) the operators are us and we make continuous error analysis and try to identify new patterns and address them. Maybe with crowdsourcing this could become much more powerful

[14:28] KenBaclawski: @gary berg-cross: Participants are leaving, so it would be a good time to adjourn the meeting.

[14:29] gary berg-cross: Valentina, Estevam, Alessandro - thanks for turning this into a productive panel discussion! We will have to have a follow-up to see how the collaborations are going.

[14:31] AndreaWesterinen: Thanks, all! Have to leave for another meeting.

[14:32] ravisharma: Gary, Ken, Speakers great session. Thanks.

[14:35] Max Petrenko: Excellent talks today, thank you.

Attendees