Cognitive Scaffolding

Summary of Background and Opportunities

The relations among machine learning (ML), reasoning and ontological knowledge, within an AI context, have been the focus of the 2017 Ontology Summit. Extracting information and building knowledge bases and ontologies using ML and intelligent NLP techniques, for example, was discussed in Track A. Machine learning has come a long way since Arthur Samuel's 1959 definition of ML as a sub-field of computer science that gives "computers the ability to learn without being explicitly programmed." In practice, this means developing computer programs that can make predictions from data, and much more and richer data has become available. Modern machine learning workflows often include routine tasks for problem evaluation, data exploration, data pre-processing, and model training followed by testing and deployment. One observation was that many of these techniques have become more cognitive, contextual and holistic rather than purely bottom-up, which opens the door to more intelligent processing and the use of more knowledge.
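
As an illustration only, the following is a minimal sketch of such a workflow using scikit-learn on a bundled toy dataset; the dataset, the model choice and the split parameters are illustrative assumptions made for this page, not anything discussed at the Summit.

```python
# Minimal sketch of the routine workflow steps named above:
# data exploration, pre-processing, model training, testing.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Data exploration: inspect shape and classes before modelling.
X, y = load_iris(return_X_y=True)
print(X.shape, set(y))

# Pre-processing and model training chained in a single pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),                    # pre-processing
    ("model", LogisticRegression(max_iter=1000)),   # training
])

# Hold out a test set so the model can be evaluated before deployment.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
pipeline.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, pipeline.predict(X_test)))
```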

Challenges

Some knowledge is needed to handle ambiguity for words such as "pen", which have many senses. One sense of "pen" is a writing instrument, but another is a small enclosure for holding animals or children, depending on context. But the context here isn't really a statistical one: understanding a sentence with "pen" in it often requires real-world knowledge about the relative sizes of boxes and pens. Even simple and less general tasks such as reading employ a knowledge starter for basic AI processes such as supervised or semi-supervised learning. An example of seeding knowledge for an intelligent process was illustrated in the NELL (Never Ending Language Learning) system for one important cognitive task - reading. The inputs to NELL include: (1) an initial ontology defining hundreds of conceptual categories (e.g., person, sportsTeam, fruit, emotion) and relations (e.g., playsOnTeam(athlete, sportsTeam), playsInstrument(musician, instrument)) that NELL was expected to read about, and (2) 10 to 15 seed instance examples of each category and relation.
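
For illustration, a hypothetical sketch of how such seed knowledge might be laid out as data follows; the particular seed instances and the Python layout are assumptions made here and are not NELL's actual input format.

```python
# Hypothetical layout for NELL-style seed knowledge: an initial ontology of
# categories and typed relations, plus a few seed instances of each.
from dataclasses import dataclass, field

@dataclass
class SeedOntology:
    # category name -> set of seed instances
    categories: dict = field(default_factory=dict)
    # relation name -> (domain category, range category, set of seed pairs)
    relations: dict = field(default_factory=dict)

seed = SeedOntology()
seed.categories["person"] = {"Yo-Yo Ma", "Serena Williams"}      # illustrative seeds
seed.categories["sportsTeam"] = {"Boston Celtics", "FC Barcelona"}
seed.categories["fruit"] = {"apple", "mango"}
seed.categories["emotion"] = {"joy", "anger"}
seed.relations["playsOnTeam"] = ("athlete", "sportsTeam", set())
seed.relations["playsInstrument"] = ("musician", "instrument", {("Yo-Yo Ma", "cello")})

# A reading system would bootstrap from seeds like these: find sentences that
# mention known instances, learn extraction patterns from them, and propose
# new candidate instances for the same categories and relations.
print(sorted(seed.categories))
```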

Seeding knowledge in this case was helped by the topical focus of NELL's reading task. A more general question, which remains an abiding one, is the ontological basis of the knowledge sufficient for an autonomous, intelligent agent (IA) that observes and acts on an environment in a directed way to achieve goals. No single architecture, technique or tool available to build or develop intelligent agents has proven adequate, for example, to address all the functionality needed for even relatively simple information agents such as those envisioned in the original DAML effort. Candidate approaches, however, abound from disciplines as diverse as Cognitive Development, Cognitive Science, Developmental Robotics and AI. There are, for example, many cognitive architectures proposed, including some from Cognitive Psychology such as Soar (Laird, Newell, & Rosenbloom, 1987; Newell, 1990; Wray & Laird, 2003) and ACT-R, as well as BDI architectures (Georgeff & Lansky, 1987; Huber, 1999; Rao & Georgeff, 1991).
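
As a rough illustration of the kind of perceive-deliberate-act control structure these architectures formalize, here is a minimal, generic belief-desire-intention style agent sketched around a toy thermostat; it is not Soar, ACT-R or any particular BDI framework, and all names in it are illustrative assumptions.

```python
# Toy BDI-flavoured agent: beliefs are revised from perception, a goal
# (intention) is chosen by deliberation, and the chosen intention drives action.
class ThermostatAgent:
    """Keep a room near a target temperature."""
    def __init__(self, target=21.0):
        self.beliefs = {"temperature": None}
        self.target = target
        self.intention = None

    def perceive(self, temperature):
        self.beliefs["temperature"] = temperature      # belief revision

    def deliberate(self):
        temp = self.beliefs["temperature"]
        if temp is None:
            self.intention = "sense"                   # nothing known yet
        elif temp < self.target - 0.5:
            self.intention = "heat"                    # commit to a goal
        elif temp > self.target + 0.5:
            self.intention = "cool"
        else:
            self.intention = "idle"

    def act(self):
        return self.intention                          # means-ends step is trivial here

# One cycle of the perceive-deliberate-act loop.
agent = ThermostatAgent()
agent.perceive(18.2)
agent.deliberate()
print(agent.act())   # -> "heat"
```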

But such agent systems have difficulty accommodating commonsense matters like diverse spatio-temporal information, including quantitative and qualitative assessments, within a single analytic context in a suitable period of time. Yet as part of the analytic process for understanding a situation, humans easily integrate both quantitative and qualitative assessments to arrive at analytic conclusions, and this happens before humans, for example, learn to read. That is, the seeding for something like NELL has already occurred. How is this? It seems reasonable to assume that some degree of innate structure is needed to develop a cognitive system and the relevant knowledge of these common things as part of an agent's experience. Such cognitive development, particularly in the context of general intelligence, is sometimes discussed in terms of an early scaffolding: a core set of cognitive abilities providing a temporary structure that affords organizing more general knowledge and learning during the progressive development into a richer cognitive skill system.

In cognitive science, knowledge is conceived as the main outcome of the process of understanding [Neisser, 1987; Albertazzi, 2000]: by interacting with the environment, intelligent agents are able to interpret and represent world facts, acting suitably to preserve themselves and to pursue specific goals accordingly. Representing knowledge is a necessary step for communication, but knowledge can be properly represented only insofar as world phenomena have previously been presented to humans, namely experienced through cognitive structures. Such a cognitive scaffolding can be understood as a starter set - a type of dynamic building block.

But to date, there are no accepted starter sets within domains, nor a general theory of what such a set is or what the first recognizable knowledge and reasoning components are. The Cognitive Linguistics Hypothesis of G. Lakoff & M. Johnson (The Body in the Mind), for example, suggests that common human experiences with the world are likely simple, limited and constrained. Given this, a core part of meaning is grounded in perception and action. This core meaning is represented in what they call "image schemata", which act as metaphorical frames and cognitive building blocks. Candidate image schemas include such familiar ontological foundation concepts as Objects, Process and Part-Whole relations, but also Motion, Full-Empty, Container, Blockage, Surface, Path, Link, Collection, Merging, Scale and Emerge [Oltramari, 2011]. Some ontology design pattern work and reference ontologies have leveraged these notions, such as work on containment, motion and path.
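
As an illustration of how an image schema such as Container might serve as a small, reusable building block in the spirit of an ontology design pattern, consider the following hypothetical sketch; the classes and the transitivity rule are assumptions made for this page, not a published pattern.

```python
# Toy encoding of the CONTAINER image schema and a containment rule.
from dataclasses import dataclass, field

@dataclass
class Container:
    name: str
    contents: set = field(default_factory=set)

    def put(self, item):
        self.contents.add(item)

def contains(container, item, containers):
    """Containment is transitive: a pen in a box in a room is in the room."""
    if item in container.contents:
        return True
    return any(contains(inner, item, containers)
               for inner in containers
               if inner.name in container.contents)

room, box = Container("room"), Container("box")
box.put("pen")
room.put("box")
print(contains(room, "pen", [room, box]))   # -> True
```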


  • One challenge is how to consider alternative candidates for a good set of knowledge that could provide adequate cognitive scaffolding. Alternative perspectives, such as those discussed above and from others, are possible.
  • Some capability statements may also give an idea of the scaffolding and experience needed, for example to handle the frame problem: deciding what is relevant when the world is constantly changing around us (see "Cognitive Wheels: The Frame Problem of AI," by Dan Dennett).
    • AI systems don't just need commonsense facts and knowledge about the world; they have to know what knowledge is relevant from one situation to the next.
  • As part of scaffolding, intelligent agents need knowledge of how to internally represent knowledge in a system so that it is accessible and usable. How is such meta-knowledge about representation learned?
  • As part of scaffolding, intelligent agents need control mechanisms to find relevant pieces of knowledge in particular contexts. How is this learned and what knowledge is involved?
  • Recognize existing patterns/entities, even with partial and/or noisy input. What is this?
  • Determine what existing categories a pattern belongs to (and how well it fits). What kind of thing is this?
  • Predict the remainder and/or continuations of a given partial pattern. What is next? (A minimal sketch of these three capabilities follows this list.)
  • Be able to learn new patterns/entities, and to categorize them.
  • Focus/selection/importance: Selecting pertinent information at the input level as well as during learning and cognition.
  • Be able to learn new skills, both mental and physical.
  • Be able to learn via a wide range of modes, including unsupervised, supervised, exploration, instruction, etc.
  • Support integrated long-term memory, i.e., its knowledge base must be immediately available to all other abilities.
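
As a minimal sketch of the recognize, categorize and predict capabilities listed above, the following uses nearest-prototype matching over toy numeric sequences; the prototypes, the distance measure and all names are illustrative assumptions, not a proposed cognitive mechanism.

```python
# Recognize a partial, noisy pattern, assign it to a category, and predict
# its continuation from the best-matching stored prototype.
import numpy as np

# Stored prototypes: category name -> characteristic pattern.
prototypes = {
    "rising":  np.array([1, 2, 3, 4, 5], dtype=float),
    "falling": np.array([5, 4, 3, 2, 1], dtype=float),
}

def categorize(partial):
    """Match a possibly noisy, possibly partial input against prototypes."""
    n = len(partial)
    scores = {name: np.linalg.norm(np.asarray(partial, dtype=float) - proto[:n])
              for name, proto in prototypes.items()}
    return min(scores, key=scores.get)            # best-fitting category

def predict(partial):
    """Predict the remainder of the pattern from the best-matching prototype."""
    best = categorize(partial)
    return prototypes[best][len(partial):]

noisy_input = [1.1, 1.9, 3.2]                     # partial, noisy observation
print(categorize(noisy_input))                    # -> "rising"
print(predict(noisy_input))                       # -> [4. 5.]
```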

Future Prospects

Work on ODPs (ontology design patterns), Cognitive Architectures, Developmental Cognitive Science and Artificial General Intelligence is relevant, as well as work on Bio-Inspired Intelligent Systems.

References

  • Albertazzi, L.: Which semantics? In: Albertazzi, L. (ed.) Meaning and cognition. A multidisciplinary approach, pp. 1–24. Benjamins Publishing Company, Amsterdam (2000)
  • Neisser, U.: From direct perception to conceptual structure. In: Concepts and Conceptual development, pp. 11–24. Cambridge University Press, Cambridge (1987)
  • Oltramari, A.: An introduction to hybrid semantics: The role of cognition in semantic resources. In: Modeling, Learning, and Processing of Text Technological Data Structures, pp. 97–109. Springer, Berlin Heidelberg (2011)