Ontolog Forum


Number: 02
Duration: 1.5 hours
Date/Time: 21 December 2016, 17:00 GMT (9:00am PST / 12:00pm EST / 6:00pm CET)
Convener: KenBaclawski

OntologySummit Pre-Launch Planning Session III

Session Chair: Kenneth Baclawski (IAOA; Northeastern University)

Topic: Pre-Launch Planning Session III for OntologySummit2017

Conference Call Details

Please see the Connection Details

Attendees

Abstract

The OntologySummit is an annual series of events that involves the ontology community and the communities related to the theme chosen for each year's summit. The Ontology Summit was started by Ontolog and NIST, and the program has been co-organized by Ontolog, NIST, NCOR, NCBO, IAOA, and NCO_NITRD, along with the co-sponsorship of other organizations that are supportive of the Summit's goals and objectives.

Agenda

1. Ontology Summit 2017 Description

The topic for the summit is "AI, Learning, Reasoning and Ontologies". We still need a longer explanation of the topic. The following is a suggested summit logo: https://s3.amazonaws.com/ontologforum/OntologySummit2017/Pre-Launch/OntologySummit2017_Logo_v2.jpg

2. Ontology Summit 2017 Tracks

3. Other suggestions

Suggestion: It might be useful for each session to be tagged with the expected level, such as Beginner, Intermediate, Advanced, or possibly with some prerequisites.

4. AOB

5. Next Meeting

Proceedings

[12:13] KenBaclawski: http://ontologforum.org/index.php/ConferenceCall_2016_12_15

[12:17] AlexShkotin: do you think that the ontology approach is the opposite of the neural network approach?

[12:17] gary berg-cross: Peter Stone has this on his site: I am the founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin.

I am also the COO and co-founder of Cogitai, Inc.

My main research interest in AI is understanding how we can best create complete intelligent agents. I consider adaptation, interaction, and embodiment to be essential capabilities of such agents. Thus, my research focuses mainly on machine learning, multiagent systems, and robotics. To me, the most exciting research topics are those inspired by challenging real-world problems. I believe that complete successful research includes both precise, novel algorithms and fully implemented and rigorously evaluated applications. My application domains have included robot soccer, autonomous bidding agents, autonomous vehicles, autonomic computing, and social agents.

[12:21] gary berg-cross: Amit, in "2016 Reflections on Semantic Web, Linked Data and Smart Data": what has been happening is that AI, which is a far bigger field and has a lot more followers and practitioners, has recognized that background knowledge (again, both broad-based and domain-specific) is increasingly key to further improving machine learning and NLP. In other words, AI, with its much larger footprint in research and practice, has realized that knowledge will propel machine understanding of (diverse) content. In this context, while the Semantic Web standards are doing no better or worse compared to recent years, the core value proposition of the Semantic Web is being co-opted by or swallowed by the bigger area of AI.

[12:23] gary berg-cross: The IJCAI Workshop on Semantic Machine Learning (SML 2016) had a panel discussion titled "Challenges and Prospects of Integrating Semantics and Background Knowledge into Machine Learning", July 10, 2016, New York, USA. http://datam.i2r.a-star.edu.sg/sml16/

[12:30] KenBaclawski: Using machine learning to develop and improve ontologies (ML -> Ontology) Gary Berg-Cross

Improving machine learning using background knowledge (Ontology -> ML) Amit Sheth and Peter Stone

[12:31] KenBaclawski: Donna and Amit can help with Gary's track

[12:39] AlexShkotin: but we should consider different OWL2 profiles and reasoning in DL and beyond.

[12:43] AlexShkotin: do we have something on higher-order logic reasoning?

[12:47] KenBaclawski: Using machine learning to develop and improve ontologies (ML -> Ontology) Gary Berg-Cross, (Donna and Amit could also contribute)

Improving machine learning using background knowledge (Ontology -> ML) Amit Sheth and Peter Stone

Neurophysiological aspects of learning (Learning -> Reasoning) Tom Mitchell (CMU)

Reasoning with ontologies (Ontologies -> Reasoning) Michael Gruninger?, Leo Obrst?

[12:49] AlexShkotin: CASL

[12:50] AlexShkotin: HETS project

[12:50] KenBaclawski: For HETS see http://hets.eu/

[12:51] AlexShkotin: http://theo.cs.ovgu.de/Research/Hets.html

[12:52] AlexShkotin: Coq

[12:53] gary berg-cross: Example of a hybrid system discussion and reasoning: "Reasoning With Neural Tensor Networks for Knowledge Base Completion", Richard Socher, Danqi Chen, Christopher D. Manning, Andrew Y. Ng, Computer Science Department, Stanford University, Stanford, CA 94305, USA. richard@socher.org, {danqi,manning}@stanford.edu, ang@cs.stanford.edu

Abstract: Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the Sumatran tiger and Bengal tiger. Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively.

From the introduction: Ontologies and knowledge bases such as WordNet [1], Yago [2] or the Google Knowledge Graph are extremely useful resources for query expansion [3], coreference resolution [4], question answering (Siri), information retrieval or providing structured knowledge to users. However, they suffer from incompleteness and a lack of reasoning capability. Much work has focused on extending existing knowledge bases using patterns or classifiers applied to large text corpora. However, not all common knowledge that is obvious to people is expressed in text [5, 6, 2, 7]. We adopt here the complementary goal of predicting the likely truth of additional facts based on existing facts in the knowledge base. Such factual, common sense reasoning is available and useful to people. For instance, when told that a new species of monkeys has been discovered, a person does not need to find textual evidence to know that this new monkey, too, will have legs (a meronymic relationship inferred due to a hyponymic relation to monkeys in general). We introduce a model that can accurately predict additional true facts using only an existing database. This is achieved by representing each entity (i.e., each object or individual) in the database as a vector. These vectors can capture facts about that entity and how probable it is part of a certain relation. Each relation is defined through the parameters of a novel neural tensor network which can explicitly relate two entity vectors. The first contribution of this paper is the new neural tensor network (NTN), which generalizes several previous neural network models and provides a more powerful way to model relational information than a standard neural network layer.
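
To make the model concrete, the following is a minimal numpy sketch of the NTN scoring function described in the abstract above; the dimensions, random parameters and the helper name ntn_score are illustrative assumptions, not taken from the paper's released code.

 import numpy as np

 def ntn_score(e1, e2, W, V, b, u):
     """Score how plausibly a relation R holds between entity vectors e1 and e2."""
     # Bilinear tensor product: one scalar per tensor slice, giving a length-k vector.
     bilinear = np.einsum('i,kij,j->k', e1, W, e2)
     # Standard neural-network layer applied to the concatenated entity vectors.
     linear = V @ np.concatenate([e1, e2])
     hidden = np.tanh(bilinear + linear + b)   # nonlinearity over the combined terms
     return float(u @ hidden)                  # relation-specific output weights

 # Toy usage with random parameters for a single relation: d = 4, k = 2 slices.
 rng = np.random.default_rng(0)
 d, k = 4, 2
 e1, e2 = rng.normal(size=d), rng.normal(size=d)   # e.g. averaged word vectors
 W = rng.normal(size=(k, d, d))                    # relation tensor
 V = rng.normal(size=(k, 2 * d))                   # relation matrix
 b, u = rng.normal(size=k), rng.normal(size=k)     # bias and output weights
 print(ntn_score(e1, e2, W, V, b, u))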

[12:55] Donna Fritzsche: data lake

[12:55] AlexShkotin: Gary, do you have a URL for this paper?

[12:56] gary berg-cross: Some work by Biplav Srivastava, Research Staff Member and Master Inventor at IBM, on ABLE: Agent Building and Learning Environment.

The Agent Building and Learning Environment (ABLE) project at the IBM T.J. Watson research laboratory started in early 1999. The goals of the project were to produce a fast, re-usable and scalable toolkit for creating intelligent software applications. ABLE release 1.0 was posted on the IBM alphaWorks site in May 2000. The ABLE research team has delivered regular updates to alphaWorks, with early releases focused on the core framework and Swing-based tooling, moving on to several years of work on the ABLE rule language (ARL) and rule engines, followed by the ABLE distributed multi-agent platform and Eclipse-based tooling. The most recent work adds agent-based modeling and simulation extensions on top of the ABLE toolkit and enhanced Eclipse 3.7 support. ABLE software technology has been delivered in IBM products since 2002, and has been applied to areas such as autonomic computing, automotive diagnostics, communications trace analysis, system health monitoring, medical diagnostics, agent-based modeling and simulation, complex workload generation, business rules and policy, adding intelligence to pervasive computing devices, and healthcare payments and incentives simulations.

The Agent Building and Learning Environment (ABLE) is a Java-based framework, component library, and productivity toolkit for building intelligent agents that can use machine learning and reasoning. ABLE has several major components: a Java framework for building intelligent agents with support for lightweight messaging, event queuing, and asynchronous operations; machine learning agents for classification, prediction, forecasting, and clustering, supported by beans that provide file and database access and filtering; a reasoning component that includes a rule-based programming language known as ABLE Rule Language (ARL), a Java-like language with rule processing engines ranging from simple procedural scripting, to forward and backward chaining, to fuzzy systems, Prolog, and Rete inferencing techniques; a set of simulation classes and agents to support large-scale agent-based modeling and simulation studies; and an agent platform that facilitates agent management and communication across a distributed network of systems.

ABLE includes these tools: Agent Editor, a graphical editor to assemble and connect components (Eclipse plugin); Rule Editor, a text editor to develop and debug rulesets (Eclipse plugin); Simulation Editor, a graphical editor to create simulation models including agents and scenarios (Eclipse plugin); and Platform Console, an application for managing distributed agents (Eclipse Rich Client Platform).
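
As a rough illustration of the forward-chaining style of rule processing that ARL's engines provide, here is a generic Python sketch; it is not the ABLE/ARL API, and the rule encoding and example facts are invented for illustration (loosely echoing the automotive-diagnostics application area mentioned above).

 def forward_chain(facts, rules):
     """Fire rules whose conditions are all satisfied until no new facts are derived."""
     facts = set(facts)
     changed = True
     while changed:
         changed = False
         for conditions, conclusion in rules:
             if conditions <= facts and conclusion not in facts:
                 facts.add(conclusion)
                 changed = True
     return facts

 # Hypothetical toy rule base: (set of conditions, conclusion) pairs.
 rules = [
     (frozenset({"temperature_high", "coolant_low"}), "cooling_fault"),
     (frozenset({"cooling_fault"}), "schedule_service"),
 ]
 print(forward_chain({"temperature_high", "coolant_low"}, rules))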

[12:56] Donna Fritzsche: http://searchdatamanagement.techtarget.com/feature/Medical-technologist-drives-semantic-data-lake-development

[12:57] Donna Fritzsche: http://franz.com/about/press_room/

[12:57] Donna Fritzsche: Montefiore

[13:01] gary berg-cross: From last year, Parsa Mirhaji, "The Semantic Data Lake in Healthcare": We are entering the age of precision medicine and learning healthcare systems. These principles depend on robust evidence generation frameworks that synthesize genetic, clinical, behavioral and non-traditional data (e.g. patient-generated, environmental, socio-economic, and behavioral data) together with knowledge bases and open information from the public domain (linked open data). Such frameworks must account for reproducibility of research findings, empower patients by engaging them with their own health information and the analytic use of their data, provide precise contextual information about the meaning and quality of the underlying data, and support all aspects of regulatory compliance while enabling broader collaborations through data and information sharing. This will require informatics platforms that go beyond depositing data in repositories or data warehouses provided by traditional big-data platforms. We introduce a semantic approach to underpin evidence generation in healthcare. The Semantic Data Lake extends the state of the art in big-data management to account for large-scale integration of data, metadata, knowledge, and linked open data to support analytics across the spectrum of applications from precision medicine to accountable and learning healthcare systems.

[13:03] gary berg-cross: Here is one item I can leverage for my track: "The State of the Art in Ontology Learning: A Framework for Comparison", Mehrnoush Shamsfard and Ahmad Abdollahzadeh Barforoush, Intelligent Systems Laboratory, Computer Engineering Dept., Amir Kabir University of Technology, Hafez Ave., Tehran, Iran. Email: shams@pnu.ac.ir, ahmad@ce.aku.ac.ir

Abstract: In recent years there have been some efforts to automate the ontology acquisition and construction process. The proposed systems differ from each other in some distinguishing factors and have many features in common. This paper presents the state of the art in ontology learning (OL) and introduces a framework for classifying and comparing OL systems. The dimensions of the framework answer questions about what to learn, from where to learn and how to learn. They include features of the input, the methods of learning and knowledge acquisition, the elements learned, the resulting ontology and also the evaluation process. To extract the framework, over 50 OL systems or modules from recent workshops, conferences and published journals were studied, and the seven most prominent of them, with the most differences, were selected to be compared according to our framework. In this paper, after a brief description of the seven selected systems, we describe the framework dimensions. Then we place the representative ontology learning systems into our framework. Finally, we describe the differences, strengths and weaknesses of various values for our dimensions in order to present a guideline for researchers to choose the appropriate features (dimension values) to create or use an OL system for their own domain or application.
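
One way to picture the paper's comparison framework is as a simple record type whose fields mirror the dimensions listed in the abstract; the sketch below is an illustrative paraphrase, not the authors' schema, and the example entry is hypothetical.

 from dataclasses import dataclass, field
 from typing import List

 @dataclass
 class OLSystemProfile:
     """One row of the comparison: a profile of an ontology learning (OL) system."""
     name: str
     input_features: List[str] = field(default_factory=list)    # what the system learns from
     learning_methods: List[str] = field(default_factory=list)  # how it learns / acquires knowledge
     elements_learned: List[str] = field(default_factory=list)  # concepts, relations, axioms, ...
     resulting_ontology: str = ""                                # kind of ontology produced
     evaluation: str = ""                                        # how the output is evaluated

 # Hypothetical example entry (not one of the seven systems surveyed in the paper).
 example = OLSystemProfile(
     name="ExampleOL",
     input_features=["free-text corpus"],
     learning_methods=["clustering", "lexico-syntactic patterns"],
     elements_learned=["concepts", "taxonomic relations"],
     resulting_ontology="lightweight taxonomy",
     evaluation="manual expert review",
 )
 print(example)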

[13:04] AlexShkotin: what about CNL (controlled natural language) ontologies? do we have that somewhere?

[13:15] AlexShkotin: Gary, is the last paper from 2003?

[13:16] Donna Fritzsche: http://healthitanalytics.com/news/montefiore-semantic-data-lake-tackles-predictive-analytics

[13:18] gary berg-cross: Q for Ram: when in 2017 is the Symposium scheduled?

[13:28] AlexShkotin: OK, Merry Christmas and Happy New Year! C U:-)

[13:33] gary berg-cross: Bye

Previous Meetings