AIList Digest           Wednesday, 13 May 1987    Volume 5 : Issue 121 

Today's Topics:
Reports - NMSU Computer and Cognitive Science Abstracts (2 of 2)

----------------------------------------------------------------------

Date: Sun, 10 May 87 16:07:45 MDT
From: yorick%nmsu.csnet@RELAY.CS.NET
Subject: Computer and Cognitive Science Abstracts (2 of 2)


Computing Research Laboratory
New Mexico State University
Box 30001
Las Cruces, NM 88003.


Krueger, W. (1986)
Transverse Criticality and its Application to Image Processing,
MCCS-85-61.

The basis for investigation into visual recognition of objects is
their representation. One appealing approach begins by replacing the
objects themselves with their bounding surfaces, which are then smoothed
according to various prescriptions.
The resulting smoothed surfaces are subjected to geometric analysis
in an attempt to find critical events which correspond to
``landmarks'' that serve to define the original object.

Many vision researchers have used this outline, often incorporating
it into a larger one that uses the critical events as constraints in
surface generation programs. To deal with complex objects these
investigators have proposed a number of candidates for the notion of
critical event, most of which take the form of zero-crossings of some
differentially defined quantity associated with surfaces (e.g., Gaussian
curvature). Many of these require some a posteriori geometric
conditioning (e.g., planarity) in order to be visually significant.

In this report, we introduce the notion of a transverse critical line
of a smooth function defined on a smooth surface. Transverse
criticality attempts to capture the trough/crest behavior manifested
by quantities which are globally defined on surfaces (e.g. curvature
troughs and crests, irradiance troughs and crests). This notion can
be used to study both topographic and photometric surface behavior
and includes, as special cases, definitions proposed by other
authors, among them the regular edges of Phillips and Machuca [PM]
and the flutings of Marr [BPYA]. Applications are made to two classes
of surfaces which are important in computer vision: height surfaces
and generalized cones.
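
As a concrete aside: the zero-crossings of Gaussian curvature mentioned
above as one family of critical events are easy to locate numerically
on a height surface. The following Python sketch is our illustration
only; the function names and the test surface are not from the report:

  import numpy as np

  def gaussian_curvature(z, spacing=1.0):
      # Gaussian curvature of a height surface z = f(x, y) (Monge patch):
      # K = (f_xx * f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2
      zy, zx = np.gradient(z, spacing)
      zxy, zxx = np.gradient(zx, spacing)
      zyy, _ = np.gradient(zy, spacing)
      return (zxx * zyy - zxy ** 2) / (1.0 + zx ** 2 + zy ** 2) ** 2

  def zero_crossings(k):
      # Mark cells where K changes sign against the right or lower neighbour.
      s = np.sign(k)
      zc = np.zeros(k.shape, dtype=bool)
      zc[:, :-1] |= (s[:, :-1] * s[:, 1:]) < 0
      zc[:-1, :] |= (s[:-1, :] * s[1:, :]) < 0
      return zc

  y, x = np.mgrid[-2:2:80j, -2:2:80j]
  z = np.exp(-(x ** 2 + y ** 2))     # smooth bump: K > 0 at the top, K < 0 on the flanks
  K = gaussian_curvature(z, spacing=4.0 / 79)
  print("parabolic (K = 0) crossings:", int(zero_crossings(K).sum()))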


Graham, N. & Harary, F. (1986)
Packing and Mispacking Subcubes into Hypercubes,
MCCS-85-65.

A node-disjoint packing of a graph G into a larger graph H
is a largest collection of disjoint copies of G contained
in H; an edge-disjoint packing is defined similarly, except that no two
copies of G may have a common edge. Two packing numbers of G into H
are defined accordingly. It is easy to determine both of these numbers
when G is a subcube of a hypercube H.

A mispacking of G into H is a maximal collection of disjoint
copies of G whose removal from H leaves no subgraph isomorphic to G,
such that the cardinality of this collection is minimum. Two mispacking
numbers of G into H are defined analogously. Their exact determination
is quite difficult, but we obtain upper bounds.
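
The easy node-disjoint case can be made concrete: fixing the high n-k
coordinate bits of Q_n yields 2^(n-k) vertex-disjoint copies of Q_k that
together exhaust the vertex set. A small Python check of this standard
construction (ours, for illustration; not from the report):

  from itertools import combinations

  def subcube_partition(n, k):
      # Partition the 2**n vertices of Q_n into 2**(n - k) node-disjoint
      # copies of Q_k by fixing the high n - k coordinate bits.
      return [{(high << k) | low for low in range(2 ** k)}
              for high in range(2 ** (n - k))]

  copies = subcube_partition(5, 2)       # 8 disjoint copies of Q_2 inside Q_5
  assert len(copies) == 2 ** (5 - 2)
  assert all(a.isdisjoint(b) for a, b in combinations(copies, 2))
  print(len(copies), "disjoint copies of Q_2 cover all", 2 ** 5, "vertices of Q_5")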


Dietrich, E. & Fields, C. (1986),
Creative Problem Solving Using Wanton Inference:
It takes at least two to tango,
MCCS-85-70.

This paper introduces ``wanton inference'', a strategy for creative
problem solving. The central idea underlying wanton inference is
that creative solutions to problems are often generated by ignoring
boundaries between domains of knowledge and making new connections between
previously unassociated elements of one's knowledge base. The major
consequence of using the wanton inference strategy is that the size of search
spaces is greatly increased. Hence, the wanton inference strategy is
fundamentally at odds with the received view in AI that the essence of
intelligent problem solving is limiting the search for solutions. Our view
is that the problem of limiting search spaces is an artificial problem in AI,
resulting from ignoring both the nature of creative problem solving and the
social aspect of problem solving. We argue that this latter aspect of
problem solving provides the key to dealing with the large search spaces
generated by wanton inference.


Ballim, A. (1986),
The Subjective Ascription of Belief to Agents,
MCCS-85-74.

A computational model for determining an agent's beliefs from the viewpoint
of an agent known as the system is described. The model is based on the
earlier work of Wilks and Bien (1983), which argues for a method of dynamically
constructing nested points of view from the beliefs that the system holds.
This paper extends their work by examining problems involved in ascribing
beliefs about beliefs (meta-beliefs) to agents, and by developing a representation
to handle these problems. The representation is used in ViewGen, a
computer program which generates viewpoints.


Partridge, D. (1986), The Scope and Limitations of
First Generation Expert Systems, MCCS-85-43.

It is clear that expert systems technology is one of AI's
greatest successes so far. Currently we see an ever increasing
application of expert systems, with no obvious limits to their
applicability. Yet there are also a number of
well-recognized problems associated with this new technology.
I shall argue that these problems are not the puzzles of normal
science that will yield to advances within the current
technology; on the contrary, they are symptoms of severe inherent
limitations of this first generation technology. By reference
to these problems I shall outline some important aspects of the
scope and limitations of current expert systems technology.
The recognition of these limitations is a prerequisite of
overcoming them as well as of developing an awareness of the
scope of applicability of this new technology.


Gerber, M., Dearholt, D.W., Schvaneveldt, R.W., Sachania,
V. & Esposito, C. (1987), Documentation for PATHFINDER: A Program
to Generate PFNETs, MCCS-87-47.

This report provides both user and programmer documentation for
PATHFINDER, a program which generates PFNETs from symmetric distance
matrices representing various aspects of human knowledge. User
documentation includes instructions for input and output file formats,
instructions for compiling and running the program, adjustments to
incomplete or incompatible data sets, a general description of the
algorithm, and a glossary of terms. Programmer documentation includes a
detailed description of the algorithm with an explanation of each
function and procedure, and hand-execution examples of some of the more
difficult-to-read code. Examples of input and output files are included.
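
For readers unfamiliar with PFNETs: a link (i, j) survives in PFNET(q, r)
iff its direct distance does not exceed the weight of any path of at most
q links, where a path's weight is the Minkowski r-metric of its link
weights (r = infinity gives the maximum metric). The Python sketch below
implements that published definition; it is our illustration, not the
PATHFINDER program documented above:

  import numpy as np

  def pathfinder(d, q, r):
      # Minimal PFNET(q, r): after the loop, best[i, j] is the minimal
      # r-metric weight over all paths of at most q links from i to j.
      d = np.asarray(d, dtype=float)
      n = d.shape[0]
      best = d.copy()
      for _ in range(q - 1):                 # allow one more link per pass
          nxt = best.copy()
          for i in range(n):
              for j in range(n):
                  for k in range(n):
                      if np.isinf(r):
                          w = max(best[i, k], d[k, j])
                      else:
                          w = (best[i, k] ** r + d[k, j] ** r) ** (1.0 / r)
                      if w < nxt[i, j]:
                          nxt[i, j] = w
          best = nxt
      adj = d <= best + 1e-12                # a link survives iff no path beats it
      np.fill_diagonal(adj, False)
      return adj

  d = np.array([[0, 1, 4],
                [1, 0, 2],
                [4, 2, 0]], dtype=float)
  print(pathfinder(d, q=2, r=np.inf))        # the 0-2 link loses to the path 0-1-2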


Ballim, A. (1986)
Generating Points of View,
MCCS-85-68.

Modelling the beliefs of agents is normally done in a static manner.
This paper describes a more flexible dynamic approach to generating
nestings which represent what the system believes other agents
believe. Such nestings have been described in Wilks and Bien (1983)
as has their usefulness. The methods presented here are based upon
those described in Wilks and Bien (ibid) but have been augmented to
handle various problems. A system based on this paper is currently
being written in Prolog.


The Topological Cubical Dimension of a Graph
Frank Harary
MCCS-86-80

A cubical graph G is a subgraph of some hypercube Q_n. The
cubical dimension cd(G) is the smallest such n. We verify that the
complete graph K_p is homeomorphic to a cubical graph H contained in
Q_{p-1}. Hence every graph G has a subdivision which is a cubical
graph. This enables us to define the topological cubical dimension
tcd(G) as the minimum such n.

When G is a full binary tree, the value of tcd is already known.
Computer scientists, motivated by the use of the architecture of a
hypercube for massively parallel supercomputers, defined the dilation
of an edge e of G within a subdivision H of G as the length of the image
of e in H, and the dilation of G as the maximum dilation of an edge
of G. The two new invariants, tcd(G) and the minimum dilation of G
among all cubical subdivisions H of G, are studied.
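
When an embedding into Q_n routes each edge along a shortest path, the
dilation of an edge reduces to the Hamming distance between the labels
of its endpoints' images. A small Python illustration (the tree labelling
below is our example, not taken from the report):

  def dilation(edges, label):
      # Maximum dilation of an embedding into a hypercube: the image of
      # edge (u, v), routed along a shortest path, has length equal to the
      # Hamming distance between the endpoint labels.
      ham = lambda a, b: bin(a ^ b).count("1")
      return max(ham(label[u], label[v]) for u, v in edges)

  # Complete binary tree on 7 vertices, embedded in Q_3 (3-bit labels).
  tree = [(1, 2), (1, 3), (2, 4), (2, 5), (3, 6), (3, 7)]
  label = {1: 0b000, 2: 0b001, 3: 0b100, 4: 0b011, 5: 0b101, 6: 0b110, 7: 0b111}
  print(dilation(tree, label))   # 2: this tree has no dilation-1 embedding in Q_3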


CP: A Programming Environment for
Conceptual Interpreters
M.J. Coombs and R.T. Hartley
MCCS-87-82

A conceptual approach to problem-solving is explored which we
claim is much less brittle than logic-based methods. It also
promises to support effective user/system interaction when
applied to expert system design. Our approach is ``abductive'',
gaining its power from the generation of good hypotheses rather than
from deductive inference, and seeks to emulate the robust
cooperative problem-solving of multiple experts. Major
characteristics include:

(1) use of conceptual rather than
syntactic representation of knowledge;

(2) an empirical approach to reasoning by model generation and
evaluation called Model Generative Reasoning;

(3) dynamic composition of reasoning strategies from actors embedded
in the conceptual structures; and

(4) characterization of the reasoning cycle in terms of cooperating
agents.


Semantics and the Computational
Paradigm in Cognitive Psychology
Eric Dietrich
MCCS-87-83

There is a prevalent notion among cognitive scientists and philosophers of
mind that computers are merely formal symbol manipulators, performing the
actions they do solely on the basis of the syntactic properties of the
symbols they manipulate. This view of computers has allowed some
philosophers to divorce semantics from computational explanations. Semantic
content, then, becomes something one adds to computational explanations to
get psychological explanations. Other philosophers, such as Stephen Stich,
have taken a stronger view, advocating doing away with semantics entirely.
This paper argues that a correct account of computation requires us to
attribute content to computational processes in order to explain which
functions are being computed. This entails that computational psychology
must countenance mental representations. Since anti-semantic positions are
incompatible with computational psychology thus construed, they ought to be
rejected. Lastly, I argue that in an important sense, computers are not
formal symbol manipulators.


Problem Solving in Multiple Task Environments
Eric Dietrich and Chris Fields
MCCS-87-84

We summarize a formal theory of multi-domain problem solving
that provides a precise representation of the inferential dynamics
of problem solving in multiple task environments. We describe
a realization of the theory as an abstract virtual machine that
can be implemented on standard architectures. We show that
the behavior of such a machine can be described in terms of
formally-specified analogs of mental models, and present a necessary
condition for the use of analogical connections between such
models in problem solving.


An Automated Particulate Counting System for Cleanliness
Verification of Aerospace Test Hardware
Jeff Harris and Edward S. Plumer
MCCS-87-86

An automated, computerized particle counting system
has been developed to verify the cleanliness of aerospace test
hardware. This work was performed by the Computing Research
Laboratory at New Mexico State University (CRL) under a contract
with Lockheed Engineering and Management Services Company at the
NASA Johnson Space Center, White Sands Test Facility. Aerospace
components are thoroughly cleaned and residual particulate matter
remaining on the components is rinsed onto 47 mm diameter test filters. The
particulates on these filters are an indication of the
contamination remaining on the components. These filters are
examined under a microscope, and particles are sized and counted.
Previously, the examination was performed manually; this
operation has now been automated. Rather than purchasing a
dedicated particle analysis system, a flexible system utilizing
an IBM PC-AT was developed. The computer, combined with a
digitizing board for image acquisition, controls a
video-camera-equipped microscope and an X-Y stage to allow
automated filter positioning and scanning. The system provides
for complete analysis of each filter paper, generation of
statistical data on particle size and quantity, and archival
storage of this information for further evaluation. The system is
able to identify particles down to 5 micrometers in diameter and
discriminate between particles and fibers. A typical filter scan
takes approximately 5 minutes to complete. Immediate operator
feedback as to pass-fail for a particular cleanliness standard is
also a feature. The system was designed to be operated by
personnel working inside a class 100 clean room. Should it be
required, a mechanism for more sophisticated recognition of
particles based on shape and color may be implemented.
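
The abstract does not state the counting algorithm, but the conventional
approach, connected-component labelling of a thresholded image followed
by an elongation test to separate fibers from particles, is easy to
sketch in Python. All calibration values and thresholds below are our
assumptions, not figures from the report:

  import numpy as np
  from scipy import ndimage

  UM_PER_PIXEL = 2.5         # assumed microscope calibration (illustrative)

  def count_particles(binary, min_diameter_um=5.0, fiber_aspect=3.0):
      # Label connected regions, size them in micrometers via the pixel
      # scale, and call a region a fiber when its bounding box is elongated.
      labels, _ = ndimage.label(binary)
      particles, fibers = [], []
      for sl in ndimage.find_objects(labels):
          h = (sl[0].stop - sl[0].start) * UM_PER_PIXEL
          w = (sl[1].stop - sl[1].start) * UM_PER_PIXEL
          if max(h, w) < min_diameter_um:
              continue                       # below the 5 micrometer floor
          if max(h, w) / min(h, w) >= fiber_aspect:
              fibers.append((h, w))
          else:
              particles.append((h, w))
      return particles, fibers

  img = np.zeros((40, 40), dtype=bool)
  img[5:9, 5:9] = True                       # roughly round region: a particle
  img[20:22, 4:30] = True                    # long, thin region: a fiber
  p, f = count_particles(img)
  print(len(p), "particle(s),", len(f), "fiber(s)")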


Solving Problems by Expanding Search Graphs:
Mathematical Foundations for a Theory of Open-world Reasoning
Eric Dietrich and Chris Fields
MCCS-87-88

We summarize a mathematical theory describing a virtual machine
capable of expanding search graphs. This machine can, at least
sometimes, solve problems for which the space it must search cannot be
specified precisely and in detail in advance. The mechanism for
expansion is called wanton inference. The theory specifies which
wanton inferences have the greatest chance of producing solutions
to given problems. The machine, using wanton inference,
satisfies an intuitive definition of open-world reasoning.
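
As a toy rendering of expanding a search graph when the closed-world
graph fails (our sketch; the theory's actual machinery is far richer,
and the domains below are invented):

  from collections import deque

  def reachable(graph, start, goal):
      # Plain breadth-first search over the current graph.
      seen, frontier = {start}, deque([start])
      while frontier:
          u = frontier.popleft()
          if u == goal:
              return True
          for v in graph.get(u, set()):
              if v not in seen:
                  seen.add(v)
                  frontier.append(v)
      return False

  def solve(graph, start, goal, cross_domain_links):
      # Try the closed world first; if the goal is unreachable, wantonly
      # add links that cross domain boundaries and search the larger graph.
      if reachable(graph, start, goal):
          return "solved in the closed world"
      expanded = {u: set(vs) for u, vs in graph.items()}
      for u, v in cross_domain_links:
          expanded.setdefault(u, set()).add(v)
      if reachable(expanded, start, goal):
          return "solved after wanton inference"
      return "still open"

  graph = {"pump": {"valve"}, "valve": set(), "circuit": {"relay"}, "relay": set()}
  print(solve(graph, "pump", "relay", cross_domain_links=[("valve", "circuit")]))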


Software Engineering Constraints Imposed by
Unstructured Task Environments
Eric Dietrich and Chris Fields
MCCS-87-91

We describe a software engineering methodology for building
multi-domain (open-world) problem solvers which inhabit
unstructured task environments. This methodology is based on a
mathematical theory of such problem solving. When applied, the
methodology results in a specification of program behavior that
is independent of any architectural concerns. Thus the
methodology produces a specification prior to implementation
(unlike current AI software engineering methodology). The data
for the specification are derived from experiments run on human
experts.


Multiple Agents and the Heuristic Ascription of Belief.
Yorick Wilks and Afzal Ballim
MCCS-86-75

A method for heuristically generating nested beliefs (what some agent
believes that another agent believes ... about a topic) is described.
Such nested beliefs (points of view) are essential to many processes
such as discourse processing and reasoning about other agents' reasoning
processes. Particular interest is paid to the class of beliefs known as
``atypical beliefs'' and to intensional descriptions. The heuristic
methods described are embodied in a program called ViewGen which
generates nested viewpoints from a set of beliefs held by the system.
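
The core heuristic, ascription by default, is simple to state: the
system's view of an agent starts from the system's own beliefs, except
where an explicit (atypical) belief about that agent overrides them. A
minimal Python sketch of that idea only (ViewGen itself is a Prolog
program and far more elaborate):

  def ascribe(system_beliefs, explicit_about_agent):
      # Default ascription: copy the system's beliefs into the agent's
      # viewpoint, letting explicit beliefs about the agent take priority.
      view = dict(system_beliefs)
      view.update(explicit_about_agent)
      return view

  system = {"earth_is_round": True, "paris_is_in_france": True}
  john = {"earth_is_round": False}       # an atypical belief held by John
  print(ascribe(system, john))
  # {'earth_is_round': False, 'paris_is_in_france': True}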


An Algorithm for Open-world Reasoning
using Model Generation
M.J. Coombs, E. Dietrich & R.T. Hartley
MCCS-87-87

The closed-world assumption places an unacceptable constraint on a
problem-solver by imposing an a priori notion of relevance on
propositions in the knowledge-base. This accounts for much of the
brittleness of expert systems, and their inability to model natural
human reasoning in detail.

This paper presents an algorithm for an open-world problem-solver.
In this approach, termed Model Generative Reasoning, deductive inference
is replaced by a procedure based on the generation of alternative,
intensional domain descriptions (models) to cover the problem input;
these models are then evaluated against domain facts as alternative
explanations. We also give an illustration of the workings of the
algorithm using concepts from process control.
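
A toy rendering of the generate-and-evaluate cycle (our sketch; the
schemata, claims, and facts below are invented for illustration):

  from itertools import combinations

  def mgr(problem_input, schemata, facts):
      # Compose candidate models from domain schemata until they cover the
      # problem input, then keep those that do not contradict known facts.
      models = []
      for size in range(1, len(schemata) + 1):
          for combo in combinations(schemata, size):
              claims = set().union(*(s["claims"] for s in combo))
              covers = problem_input <= claims
              consistent = not any(("not " + c) in facts for c in claims)
              if covers and consistent:
                  models.append([s["name"] for s in combo])
      return models

  schemata = [
      {"name": "leak",     "claims": {"pressure low", "fluid loss"}},
      {"name": "blockage", "claims": {"pressure low", "flow zero"}},
  ]
  facts = {"not flow zero"}                      # the line is observed flowing
  print(mgr({"pressure low"}, schemata, facts))  # [['leak']]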


Pronouns in mind: quasi-indexicals and the ``language of thought''
Yorick Wilks, Afzal Ballim, & Eric Dietrich
MCCS-87-92

The paper examines the role of the natural-formal language
distinction in connection with the ``language of thought''
(LOT) issue. In particular, it distinguishes a
realist-uniform/attributist-uniform approach to LOT and seeks to link
that distinction to the issue of whether artificial
intelligence is fundamentally a science or engineering. In a
second section, we examine a particular aspect of natural
language in relation to LOT: pronouns/indexicals. The focus
there is Rapaport's claims about indexicals in belief
representations. We dispute these claims and argue that he
confuses claims about English sentences and truth
conditions, on the one hand, with claims about beliefs, on
the other. In a final section we defend the representational
capacity of the belief manipulation system of Wilks, Bien
and Ballim against Rapaport's published criticisms.

------------------------------

End of AIList Digest
********************
