AIList Digest Volume 2 Issue 067
AIList Digest             Friday, 1 Jun 1984       Volume 2 : Issue 67 

Today's Topics:
Natural Language - Request,
Expert Systems - KS300 Reference,
AI Literature - CSLI Report on Bolzano,
Scientific Method - Hardware Prototyping,
Perception - Identity,
Seminar - Perceptual Organization for Visual Recognition
----------------------------------------------------------------------

Date: 4 Jun 84 8:08:13-EDT (Mon)
From: ihnp4!houxm!houxz!vax135!ukc!west44!ellis @ Ucb-Vax.arpa
Subject: Pointers to natural language interfacing

Article-I.D.: west44.214

I am investigating the feasibility of writing a natural language interface for
the UNIX operating system, and need some pointers to good articles/papers/books
dealing with natural language interpreting. Any help would be greatly
appreciated as I am fairly 'green' in this area.

          mcvax
            |
 ukc!root44!west44!ellis
           / \
      vax135 hou3b
           \ /
          akgua

Mark Ellis, Westfield College, Univ. of London, England.


[In addition to any natural language references, you should certainly
see "Talking to UNIX in English: An Overview of an On-line UNIX
Consultant" by Robert Wilensky, The AI Magazine, Spring 1984, pp.
29-39. Elaine Rich also mentioned this work briefly in her introduction
to the May 1984 issue of IEEE Computer. -- KIL]
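[To give a concrete, if deliberately primitive, sense of the lowest rung of
this problem, here is a toy pattern-matching sketch in Python. It is nothing
like the Wilensky consultant cited above; the patterns and command templates
are illustrative assumptions only, meant to show how far simple keyword
matching gets before real natural language interpretation is needed. -- Ed.]

```python
import re

# Hypothetical pattern table: each entry pairs an English-request
# pattern with a UNIX command template.  Both sides are illustrative
# assumptions, not any real system's vocabulary.
PATTERNS = [
    (re.compile(r"(show|list) (me )?(the )?files", re.I), "ls -l"),
    (re.compile(r"who is logged (in|on)", re.I),          "who"),
    (re.compile(r"what time is it", re.I),                "date"),
]

def interpret(utterance: str) -> str:
    """Return a UNIX command for the utterance, or a fallback message."""
    for pattern, command in PATTERNS:
        if pattern.search(utterance):
            return command
    return "echo 'Sorry, I did not understand.'"
```

Anything beyond such canned phrases (anaphora, ellipsis, goal inference)
is exactly where the literature requested above becomes necessary.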

------------------------------

Date: 28 May 84 12:55:37-PDT (Mon)
From: hplabs!hao!seismo!cmcl2!floyd!vax135!cornell!jqj @ Ucb-Vax.arpa
Subject: Re: KS300 Question
Article-I.D.: cornell.195

KS300 is owned by (and a trademark of) Teknowledge, Inc. Although
it is largely based on Emycin, it was extensively reworked for
greater maintainability and reliability, particularly for Interlisp-D
environments (the Emycin it was based on ran only on DEC-20
Interlisp).

Teknowledge can be reached by phone (no net address, I think)
at (415) 327-6600.

------------------------------

Date: Wed 30 May 84 19:41:17-PDT
From: Dikran Karagueuzian <DIKRAN@SU-CSLI.ARPA>
Subject: CSLI Report

[Forwarded from the CSLI newsletter by Laws@SRI-AI.]

New CSLI-Report Available

``Lessons from Bolzano'' by Johan van Benthem, the latest CSLI-Report,
is now available. To obtain a copy of Report No. CSLI-84-6, contact
Dikran Karagueuzian at 497-1712 (Casita Hall, Room 40) or Dikran at SU-CSLI.

------------------------------

Date: Thu 31 May 84 11:15:35-PDT
From: Al Davis <ADavis at SRI-KL>
Subject: Hardware Prototyping


On the issue of the Stone-Shaw wars: I doubt that there really is
a viable "research paradigm shift" in the holistic sense. The main
problem that we face in the design of new AI architectures is that
there is a distinct possibility that we can't simply let existing ideas
evolve. If this is true, then the new systems will have to try
to incorporate a lot of new strategies, which creates a number of
complex problems, i.e.

  1. Each new area means that our experience may not be valid.

  2. Interactions between these areas may be the problem, rather
     than the individual design choices; namely, efficient
     consistency is a difficult thing to achieve.

In this light it will be hard to do true experiments where one factor
gets isolated and tested. Computer systems are complex beasts and the
problem is even harder to solve when there are few fundamental metrics
that can be applied microscopically to indicate success or failure.
Macroscopically there is always cost/performance for job X, or set of
tasks Y.

The experience will come at some point, but not soon in my opinion.
It will be important for people like Shaw to go out on a limb and
communicate the results to the extent that they are known. At some
point from all this chaos will emerge some real experience that will
help create the future systems which we need now. I for one refuse to
believe that an evolved Von Neumann architecture is all there is.

We need projects like DADO, Non-Von, the Connection Machine, ILLIAC,
STAR, Symbol, the Cosmic Cube, MU5, S1, .... this goes on for a long
time ..., --------------- if given the opportunity a lot can be
learned about alternative ways to do things. In my view the product
of research is knowledge about what to do next. Even at the commercial
level very interesting machines have failed miserably (cf. B1700 and
CDC STAR) and rather Ho-Hum Dingers (M68000, IBM 360 and the Prime
clones) have been tremendous successes.

I applaud Shaw and company for giving it a go along with countless
others. They will almost certainly fail to beat IBM in the
marketplace. Hopefully they aren't even trying. Every 7 seconds somebody
buys an IBM PC - if that isn't an inspiration for any budding architect
to do better then what is?

Additionally, the big debate over whether CS or AI is THE way is
absurd. CS has a lot to do with computers and little to do with
science, and AI has a lot to do with artificial and little to do with
intelligence. Both will and have given us something worthwhile, and a
lot of drivel too. The "drivel factor" could be radically reduced if
egotism and ambition were replaced with honesty and
responsibility.

Enough said.

Al Davis
FLAIR

------------------------------

Date: Mon, 28 May 84 14:28:32 PDT
From: Charlie Crummer <crummer@AEROSPACE>
Subject: Identity

The thing about sameness and difference is that humans create them; back
to the metaphor and simile question again. We say, "Oh, he's the same old
Bill.", and in some sense we know that Bill differs from "old Bill" in many
ways we cannot know. (He got a heart transplant, ...) We define by
declaration the context within which we organize the set of sensory perceptions
we call Bill and within that we recognize "the same old Bill" and think that
the sameness is an attribute of Bill! No wonder the eastern sages say that we
are asleep!

[Read Hubert Dreyfus' book "What Computers Can't Do".]

--Charlie

------------------------------

Date: Wed, 30 May 1984 16:15 EDT
From: MONTALVO%MIT-OZ@MIT-MC.ARPA
Subject: A restatement of the problem (phil/ai)

From: (Alan Wexelblat) decvax!ittvax!wxlvax!rlw @ Ucb-Vax

Suppose that, while touring through the grounds of a Hollywood movie
studio, I approach what, at first, I take to be a tree. As I come
near to it, I suddenly realize that what I have been approaching is,
in fact, not a tree at all but a cleverly constructed stage prop.

So, let me re-pose my original question: As I understand it, issues of
perception in AI today are taken to be issues of feature-recognition.
But since no set of features (including spatial and temporal ones) can
ever possibly uniquely identify an object across time, it seems to me
(us) that this approach is a priori doomed to failure.

Spatial and temporal features, and other properties of objects that
have to do with continuity and coherence in space and time, DO identify
objects in time. That's what motion, location, and speed detectors in
our brains do. Maybe they don't identify objects uniquely, but they
do a good enough job most of the time for us to make the INFERENCE of
object identity. In the example above, the visual features remained
largely the same or changed continuously --- color, texture normalized
by distance, certainly continuity of boundary and position. It was
the conceptual category that changed: from tree to stage prop. These
latter properties are conceptual, not particularly visual (although
presumably it was minute visual cues that revealed the identity in the
first place). The bug in the above example is that no distinction is
made between visual features and higher-level conceptual properties,
such as what a thing is for. Also, identity is seen to be this
unitary thing, which, I think, it is not. Similarities between
objects are relative to contexts. The above stage prop had
spatio-temporal continuity (i.e., identity) but not conceptual
continuity.
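[The "good enough most of the time" inference Montalvo describes can be
made concrete with a toy sketch. The greedy nearest-neighbour matcher
below is an illustrative assumption, not any vision system's actual
algorithm: it assigns each detection in the current frame the identity
of the closest detection in the previous frame, provided the displacement
is small enough to count as continuous motion. -- Ed.]

```python
import math

def assign_identities(prev, curr, max_jump=5.0):
    """Greedy spatio-temporal matching: each current detection inherits
    the identity of the nearest unused previous detection, as long as
    the displacement stays below max_jump (continuity assumption).
    prev, curr: dicts mapping detection id -> (x, y) position."""
    assignments = {}
    used = set()
    for cid, (cx, cy) in curr.items():
        best, best_d = None, max_jump
        for pid, (px, py) in prev.items():
            if pid in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = pid, d
        if best is not None:
            assignments[cid] = best
            used.add(best)
    return assignments
```

Note that nothing in this matcher looks at conceptual category; it would
happily track the tree through its unmasking as a stage prop, which is
exactly the distinction drawn above.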

Fanya Montalvo

------------------------------

Date: Wed, 30 May 84 09:18 EDT
From: Izchak Miller <Izchak%upenn.csnet@csnet-relay.arpa>
Subject: The experience of cross-time identity.

A follow-up to Rosenberg's reply [greetings, Jay]. Most
commentators on Alan's original statement of the problem have failed to
distinguish between two different (even if related) questions:
(a) what are the conditions for the cross-time (numerical) identity
of OBJECTS, and
(b) what are the features constitutive of our cross-time EXPERIENCE
of the (numerical) identity of objects.
The first is an ontological (metaphysical) question, the second is an
epistemological question--a question about the structure of cognition.
Most commentators addressed the first question, and Rosenberg suggests
a good answer to it. But it is the second question which is of importance to
AI. For, if AI is to simulate perception, it must first find out how
perception works. The reigning view is that the cross-time experience of the
(numerical) identity of objects is facilitated by PATTERN RECOGNITION.
However, while it does indeed play a role in the cognition of identity, there
are good grounds for doubting that pattern recognition can, by itself,
account for our cross-time PERCEPTUAL experience of the (numerical) sameness
of objects.
The reasons for this doubt originate from considerations of cases of
EXPERIENCE of misperception. Put briefly, two features are characteristic of
the EXPERIENCE of misperception: first, we undergo a "change of mind"
regarding the properties we attribute to the object; we end up attributing to it
properties *incompatible* with properties we attributed to it earlier. But--
and this is the second feature--despite this change we take the object to have
remained *numerically one and the same*.
Now, there do not seem to be constraints on our perceptual "change of
mind": we can take ourselves to have misperceived ANY (and any number) of the
object's properties -- including its spatio-temporal ones -- and still
experience the object to be numerically the same one we experienced all along.
The question is how do we maintain a conscious "fix" on the object across such
radical "changes of mind"? Clearly, "pattern recognition" does not seem a
good answer anymore since it is precisely the patterns of our expectations
regarding the attributes of the object which change radically, and
incompatibly, across the experience of misperception. It seems reasonable
to conclude that we maintain such a fix "demonstratively" (indexically),
that is, independently of whether or not the object satisfies the
attributive content (or "pattern") of our perception.
All this does not by itself spell doom (as Alan enthusiastically seems
to suggest) for AI, but it does suggest that insofar as "pattern recognition"
is the guiding principle of AI's research toward modeling perception, this
research is probably a dead end.

Izchak (Isaac) Miller
Dept. of Philosophy
University of Pennsylvania

------------------------------

Date: 24 May 84 9:04:56-PDT (Thu)
From: hplabs!sdcrdcf!sdcsvax!akgua!clyde!burl!ulysses!unc!mcnc!ncsu!uvacs!gmf
@ Ucb-Vax.arpa
Subject: Comment on Greek ship problem
Article-I.D.: uvacs.1317

Reading about the Greek ship problem reminded me of an old joke --
recorded in fact by one Hierocles, 5th century A.D. (Lord knows how
old it was then):

A foolish fellow who had a house to sell took a brick from one wall
to show as a sample.

Cf. Jay Rosenberg: "A board is a part of a ship *at a time*. Once it's
been removed and replaced, it no longer *is* a part of the ship. It
only once *was* a part of the ship."

Hierocles is referred to as a "new Platonist", so maybe he was a
philosopher. On the other hand, maybe he was a gag-writer. Another
by him:

During a storm, the passengers on board a vessel that appeared in
danger, seized different implements to aid them in swimming, and
one of them picked for this purpose the anchor.

Rosenberg's remark quoted above becomes even clearer if "board" is
replaced by "anchor" (due, no doubt, to the relative anonymity of
boards, as compared with anchors).

Gordon Fisher

------------------------------

Date: 4 Jun 84 7:47:08-EDT (Mon)
From: ihnp4!houxm!houxz!vax135!ukc!west44!gurr @ Ucb-Vax.arpa
Subject: Re: "I see", said the carpenter as he picked up his hammer and saw.
Article-I.D.: west44.211

The point being, if WE can't decide logically what constitutes a "REAL"
perception for ourselves (and I contend that there is no LOGICAL way out
of the subjectivist trap) how in the WORLD can we decide on a LOGICAL basis
if another human, not to mention a computer, has perception? We can't!!

Therefore we operate on a faith basis a la Turing and move forward on a
practical level and don't ask silly questions like, "Can Computers Think?".


For an in depth discussion on this, read "The Mind's I" by Douglas R.
Hofstadter and Daniel C. Dennett - this also brings in the idea that you can't
even prove that YOU, not to mention another human being, can have perception!

          mcvax
            |
 ukc!root44!west44!gurr
           / \
      vax135 hou3b
           \ /
          akgua


Dave Gurr, Westfield College, Univ. of London, England.

------------------------------

Date: Tue 29 May 84 08:44:42-PDT
From: Sharon Bergman <SHARON@SU-SCORE.ARPA>
Subject: Ph.D. Oral - Perceptual Organization for Visual Recognition

[Forwarded from the Stanford bboard by Laws@SRI-AI.]

Ph.D. Oral

Friday, June 1, 1984 at 2:15

Margaret Jacks Hall, Room 146

The Use of Perceptual Organization for Visual Recognition

By David Lowe (Stanford Univ., CS Dept.)


Perceptual organization refers to the capability of the human visual system
to spontaneously derive groupings and structures from an image without
higher-level knowledge of its contents. This capability is currently missing
from most computer vision systems. It will be shown that perceptual groupings
can play at least three important roles in visual recognition: 1) image
segmentation, 2) direct inference of three-space relations, and 3) indexing
world knowledge for subsequent matching. These functions are based upon the
expectation that image groupings reflect actual structure of the scene rather
than accidental alignment of image elements. A number of principles of
perceptual organization will be derived from this criterion of
non-accidentalness and from the need to limit computational complexity. The
use of perceptual groupings will be demonstrated for segmenting image curves
and for the direct inference of three-space properties from the image. These
methods will be compared and contrasted with the work on perceptual
organization done in Gestalt psychology.
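[The non-accidentalness criterion in the abstract can be illustrated with
a toy sketch. This is not Lowe's actual formulation, which derives
significance measures from the probability of accidental alignment; it is
an assumed simplification that flags pairs of line segments whose
near-parallelism and endpoint proximity are unlikely to be coincidental,
the flavor of grouping the thesis argues for. Thresholds are arbitrary. -- Ed.]

```python
import math

def nonaccidental_pairs(segments, angle_tol=0.05, gap_tol=2.0):
    """Flag pairs of segments whose collinearity looks non-accidental:
    nearly equal orientation AND a small gap between the first segment's
    end and the second's start.
    segments: list of ((x1, y1), (x2, y2)) endpoint pairs."""
    def orientation(seg):
        (x1, y1), (x2, y2) = seg
        # Undirected orientation, so lines are compared modulo pi.
        return math.atan2(y2 - y1, x2 - x1) % math.pi

    pairs = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            da = abs(orientation(segments[i]) - orientation(segments[j]))
            da = min(da, math.pi - da)          # wrap-around at pi
            gap = math.dist(segments[i][1], segments[j][0])
            if da < angle_tol and gap < gap_tol:
                pairs.append((i, j))
    return pairs
```

Two nearly collinear segments separated by a one-pixel gap would be
grouped; a segment at 45 degrees to them would not, since such an
alignment carries no evidence of shared scene structure.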

Much computer vision research has been based on the assumption that recognition
will proceed bottom-up from the image to an intermediate depth representation,
and subsequently to model-based recognition. While perceptual groupings can
contribute to this depth representation, they can also provide an alternate
pathway to recognition for those cases in which there is insufficient
information for bottom-up derivation of the depth representation. Methods will
be presented for using perceptual groupings to index world knowledge and for
subsequently matching three-dimensional models directly to the image for
verification. Examples will be given in which this alternate pathway seems to
be the only possible route to recognition.

------------------------------

End of AIList Digest
********************
