Ontolog Forum

Revision as of 14:28, 27 April 2018 by ToddSchneider (Corrected typos and missing verb.)

Purpose: This track discusses how one can harmonize diverse conceptualizations in multi-context systems engineering with ontologies.


Toward an Ontology of Context that enables system realization and efficacy assessment

Approximately one million of us strive to conceive and design systems that serve various stakeholders throughout society. The stakeholders are engaged in governance, commerce, industry, security, and education, among others. In their attempts to cope with non-deterministic situations, the stakeholders rely on us for systems that are increasingly trustworthy, even though we cannot predict what a system will encounter. The value proposition is significant. Our million-person success will determine the successes of 20 million developers and operators, and then of upwards of 200 million stakeholders throughout society.

Our ability to serve has been dwindling for more than two decades. Approximately 50% of our projects have failed to meet customer expectations. Those projects that have not failed typically experienced double-digit cost overruns and schedule slips. Meanwhile, stakeholder situations are increasing in extent, variety (temporal and semantic), and ambiguity, causing stakeholders to demand systems that are ever greater in scope, complexity, agility, and trustworthiness. This is leading us into the era of autonomous systems, ready or not.

Here is one perspective. In a typical engagement we must conceive and design a Problem Suppression System that, when deployed and operated, moderates the non-deterministic Problem System in ways that generate excellent Stakeholder Value. In each case the ‘we’ takes the form of a sociotechnical system ranging from two persons to thousands of persons, engaged in a development project lasting days to months to years (hopefully no longer decades), followed by years of corrective and evolutionary maintenance, all while avoiding unintended consequences. We must evolve our sociotechnical systems from performing a) prescient design of specific responses to predicted stimuli to b) permissive design that assures only acceptable responses to only authorized stimuli. This introduces the challenge of expressing the determinants of ‘acceptable’ and ‘authorized’ that ensure both trustworthy value and Do No Harm.
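The shift from prescient to permissive design can be illustrated as a pair of guards: rather than enumerating a specific response for each predicted stimulus, the system admits only authorized stimuli and emits only acceptable responses, however those candidates are proposed. The following is a minimal sketch under assumed, hypothetical rules; the function names, the channel whitelist, and the actuation bound are illustrations, not anything specified in the source.

```python
# Hypothetical sketch of permissive design: guards, not a stimulus-to-
# response lookup table, determine what the system may do.

def is_authorized(stimulus):
    # Illustrative rule (assumption): only stimuli arriving on known
    # channels count as authorized.
    return stimulus.get("channel") in {"operator", "sensor"}

def is_acceptable(response):
    # Illustrative rule (assumption): a response is acceptable if its
    # actuation stays within a safe bound (Do No Harm).
    return abs(response.get("actuation", 0)) <= 1.0

def permissive_step(stimulus, propose):
    """Process one stimulus under permissive design.

    `propose` may be any function mapping a stimulus to a candidate
    response; the guards decide what is actually allowed through.
    """
    if not is_authorized(stimulus):
        return {"action": "reject", "reason": "unauthorized stimulus"}
    response = propose(stimulus)
    if not is_acceptable(response):
        return {"action": "suppress", "reason": "unacceptable response"}
    return response

# Example: a naive proposer that scales a sensor reading into an actuation.
result = permissive_step({"channel": "sensor", "value": 0.4},
                         lambda s: {"actuation": s["value"] * 2})
print(result)
```

The point of the sketch is that the proposer can be arbitrarily sophisticated (or non-deterministic) without weakening the guarantee: only responses passing the acceptability rule ever leave the system, which is where the determinants of ‘acceptable’ and ‘authorized’ would have to be expressed.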

One clue is distinct. This is not so much about general systems theory and engineering practice as about general semantics and double-loop learning. Our success is becoming conditional on how well we system initializers, administrators, and operators understand one another, understand our context (notably the stakeholders), and understand the stakeholder context (notably the non-deterministic Problem System).

Fundamental to mutual understanding is a shared ontology that spans and harmonizes multiple languages (natural, logical, technological, computational, project, etc.), serves the multiple perspectives of the sociotechnical system participants, and supports expressing the kinds of rules that relate observables to ‘acceptable’ and ‘authorized’ conclusions.

Further, one master ontology will not serve, because it could never be learned. As suggested at www.starkermann.com, a shared ontology must instead facilitate mutual understanding throughout ‘N’ workgroups of 2 to 7 participants each.

Commentary

Model Based Engineering

Engineering, in particular systems engineering, is moving toward being 'model based'. The models can include both graphical forms (e.g., UML, SysML) and algorithmic forms or simulations. Any model, using a common English definition (e.g., a simplified description, especially a mathematical one, of a system or process), necessarily does not describe or represent everything, but only those 'elements' (entities or relations) that the model creator deems most relevant to the intended purpose and uses of the model(s). The process of deciding relevancy for inclusion in a model necessarily embeds the context(s) of the model creator and, possibly, the intent for the models. Unfortunately, these contexts are not made explicit, whether because the model creator may not be fully aware of them, may not consider them sufficiently important for inclusion, or may not have available tools that facilitate their representation or even their (natural language) description.