AIList Digest Volume 1 Issue 106
AIList Digest           Wednesday, 30 Nov 1983    Volume 1 : Issue 106 

Today's Topics:
Conference - Logic Conference Correction,
Intelligence - Definitions,
AI - Definitions & Research Methodology & Jargon,
Seminar - Naive Physics
----------------------------------------------------------------------

Date: Mon 28 Nov 83 22:32:29-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Correction

The ARPANET address in the announcement of the IEEE 1984 Logic Programming
Symposium should be PEREIRA@SRI-AI, not PERIERA@SRI-AI.

Fernando Pereira

[My apologies. I am the one who inserted Dr. Pereira's name incorrectly.
I was attempting to insert information from another version of the same
announcement that also reached the AIList mailbox. -- KIL]

------------------------------

Date: 21 Nov 83 6:04:05-PST (Mon)
From: decvax!mcvax!enea!ttds!alf @ Ucb-Vax
Subject: Re: Behavioristic definition of intelligence
Article-I.D.: ttds.137

Doesn't the concept "intelligence" have some characteristics in common with
a concept such as "traffic"? It seems obvious that one can measure such
entities as "traffic intensity" and the like, thereby gaining an indirect
understanding of the conditions that determine the "traffic", but it seems
very difficult to find a direct measure of "traffic" as such. Some may say
that "traffic" and "traffic intensity" are synonymous concepts, but I don't
agree. The common opinion among psychologists seems to be that
"intelligence" is that which is measured by an intelligence test. By
measuring a set of problem solving skills and weighing the results together
we get a value. Why not call it "intelligence"? The measure could be
applicable to machine intelligence also as soon as (if ever) we teach the
machines to pass intelligence tests. It should be quite clear that
"intelligence" is not the same as "humanness" which is measured by a Turing
test.

------------------------------

Date: Sat, 26 Nov 83 2:09:14 EST
From: A B Cooper III <abc@brl-bmd>
Subject: Where wise men fear to tread

Being nothing more than an amateur observer on the AI scene,
I hesitate to plunge in like a fool.

Nevertheless, the roundtable on what constitutes intelligence
seemed to cover many interesting hypotheses:

survivability
speed of solving problems
etc

but one. Being married to a professional educator, I've found
that the common working definition of intelligence is
the ability TO LEARN.

The more easily one learns new material, the
more intelligent one is said to be.

The more quickly one learns new material,
the more intelligent one is said to be.

One who can learn easily and quickly across a
broad spectrum of subjects is said to
be more intelligent than one whose
abilities are concentrated in one or
two areas.

One who learns only at an average rate, except
for one subject area in which he or she
excels far above the norms is thought
to be TALENTED rather than INTELLIGENT.

It seems to be believed that the most intelligent
folks learn easily and rapidly without
regard to the level of material. They
assimilate the difficult with the easy.


Since this discussion was motivated, at least in part, by the
desire to understand what an "intelligent" computer program should
do, I feel that we should re-visit some of our terminology.

In the earlier days of Computer Science, I seem to recall some
excitement about machines (computers) that could LEARN. Was this
the precursor of AI? I don't know.

If we build an EXPERT SYSTEM, have we built an intelligent machine
(can it assimilate new knowledge easily and quickly), or have we
produced a "dumb" expert? Indeed, aren't many of our AI or
knowledge-based or expert systems really something like "dumb"
experts?

------------------------

You might find the following interesting:

Siegler, Robert S, "How Knowledge Influences Learning,"
AMERICAN SCIENTIST, v71, Nov-Dec 1983.

In this reference, Siegler addresses the questions of how
children learn and what they know. He points out that
the main criticism of intelligence tests (that they measure
'knowledge' and not 'aptitude') may miss the mark--that
knowledge and learning may be linked, in humans anyway, in
ways that traditional views have not considered.

-------------------------

In any case, should we not be addressing as a primary research
objective, how to make our 'expert systems' into better learners?

Brint Cooper
abc@brl.arpa

------------------------------

Date: 23 Nov 83 11:27:34-PST (Wed)
From: dambrosi @ Ucb-Vax
Subject: Re: Intelligence
Article-I.D.: ucbvax.373

Hume once said that when a discussion or argument seems to be
interminable and without discernible progress, it is worthwhile
to attempt to produce a concrete visualisation of the concept
being argued about. Often, he claimed, this will be IMPOSSIBLE
to do, and this will be evidence that the word being argued
about is a ringer, and the discussion pointless. In more
modern parlance, these concepts are definitionally empty
for most of us.
I submit the following definition as the best presently available:
Intelligence consists of perception of the external environment
(e.g. vision), knowledge representation, problem solving, learning,
interaction with the external environment (e.g. robotics),
and communication with other intelligent agents (e.g. natural
language understanding). (note the conjunctive connector)
If you can't guess where this comes from, check AAAI83
proceedings table of contents.
bruce d'ambrosio
dambrosi%ucbernie@berkeley

------------------------------

Date: Tuesday, 29 Nov 1983 11:43-PST
From: narain@rand-unix
Subject: Re: AI Challenge


AI is advanced programming.

We need to solve complex problems involving reasoning and judgment, so
we develop appropriate computer techniques (mainly software) for that.
It is our responsibility to invent techniques that make intelligent
computer programs easier to develop, debug, extend, and modify. For this
purpose it is only sensible to learn whatever we can from
traditional computer science and apply it to the AI effort.

Tom Dietterich said:

>> Your view of "knowledge representations" as being identical with data
>> structures reveals a fundamental misunderstanding of the knowledge vs.
>> algorithms point. Most AI programs employ very simple data structures
>> (e.g., record structures, graphs, trees). Why, I'll bet there's not a
>> single AI program that uses leftist-trees or binomial queues! But, it
>> is the WAY that these data structures are employed that counts.

We at Rand have ROSS (Rule Oriented Simulation System), which has been
employed very successfully for developing two large-scale simulations
(one strategic and one tactical). One implementation of ROSS uses
leftist trees for maintaining event queues. Since these queues are in
the innermost loop of ROSS's operation, it was only sensible to make
them as efficient as possible. We think we are doing AI.
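[The digest does not show ROSS's implementation, and ROSS itself was
Lisp-based; purely as an illustration of the data structure Narain
mentions, a minimal leftist-heap event queue might look like this in
Python (all names invented here). Merging two heaps is O(log n), which
is what makes leftist trees attractive for a simulator's inner loop:

```python
class Node:
    """Node of a leftist heap keyed on event time."""
    def __init__(self, time, event):
        self.time, self.event = time, event
        self.left = self.right = None
        self.s = 1  # s-value: distance to the nearest missing child

def merge(a, b):
    """Merge two leftist heaps in O(log n); either may be None."""
    if a is None:
        return b
    if b is None:
        return a
    if b.time < a.time:          # keep the earlier event at the root
        a, b = b, a
    a.right = merge(a.right, b)
    # Restore the leftist property: left s-value >= right s-value.
    if a.left is None or a.left.s < a.right.s:
        a.left, a.right = a.right, a.left
    a.s = (a.right.s + 1) if a.right else 1
    return a

def push(heap, time, event):
    """Schedule an event; returns the new heap root."""
    return merge(heap, Node(time, event))

def pop(heap):
    """Remove and return ((time, event), new_heap) for the earliest event."""
    return (heap.time, heap.event), merge(heap.left, heap.right)

# Scheduling three events and draining them yields time order:
h = None
for t, e in [(5, "strike"), (1, "detect"), (3, "launch")]:
    h = push(h, t, e)
```

Ordinary binary heaps pop events just as fast, but merging two whole
queues cheaply is the leftist tree's distinctive advantage. -- KIL]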

Sanjai Narain
Rand Corp.

------------------------------

Date: Tue, 29 Nov 83 11:31:54 PST
From: Michael Dyer <dyer@UCLA-CS>
Subject: defining AI, AI research methodology, jargon in AI (long msg)

This is in three flaming parts: (I'll probably never get up the steam to
respond again, so I'd better get it all out at once.)

Part I. "Defining intelligence", "defining AI", and/or "responding to AI
challenges" considered harmful: (enough!)

Recently, I've started avoiding/ignoring AIList since, for the most
part, it's been an endless discussion on "defining AI" (or, most
recently) defending AI. If I spent my time trying to "define/defend"
AI or intelligence, I'd get nothing done. Instead, I spend my time
trying to figure out how to get computers to achieve some task -- exhibit
some behavior -- which might be called intelligent or human-like.
If/whenever I'm partially successful, I try to keep track of what's
systematic or insightful. Both failure points and partial success
points serve as guides for future directions. I don't spend my time
trying to "define" intelligence by BS-ing about it. The ENTIRE
enterprise of AI is the attempt to define intelligence.

Here's a positive suggestion for all you AIList-ers out there:

It'd be nice to see more discussion of SPECIFIC programs/cognitive
models: their Assumptions, their failures, ways to patch them, etc. --
along with contentful/critical/useful suggestions/reactions.

Personally, I find Prolog Digest much more worthwhile. The discussions
are sometimes low level, but they almost always address specific issues,
with people often offering specific problems, code, algorithms, and
analyses of them. I'm afraid AIList has been taken over by people who
spend so much time exchanging philosophical discussions that they've
chased away others who are very busy doing research and have a low BS
tolerance level.

Of course, if the BS is reduced, that means that the real AI world will
have to make up the slack. But a less frequent digest with real content
would be a big improvement. {This won't make me popular, but perhaps part
of the problem is that most of the contributors seem to be people who
are not actually doing AI, but who are just vaguely interested in it, so
their speculations are ill-informed and indulgent. There is a use for
this kind of thing, but an AI digest should really be discussing
research issues. This gets back to the original problem with this
digest -- i.e. that researchers are not using it to address specific
research issues which arise in their work.}

Anyway, here are some examples of task/domain topics that could be
addressed. Each can be considered to be of the form "How could we get
a computer to do X?":

Model Dear Abby.
Understand/engage in an argument.
Read an editorial and summarize/answer questions about it.
Build a daydreamer
Give legal advice.
Write a science fiction short story
...

{I'm an NLP/Cognitive modeling person -- that's why my list may look
bizarre to some people}

You researchers in robotics/vision/etc. could discuss, say, how to build
a robot that can:

climb stairs
...
recognize a moving object
...
etc.

People who participate in this digest are urged to: (1) select a
task/domain, (2) propose a SPECIFIC example which represents
PROTOTYPICAL problems in that task/domain, (3) explain (if needed) why
that specific example is prototypic of a class of problems, (4) propose
a (most likely partial) solution (with code, if at that stage), and (5)
solicit contentful, critical, useful, helpful reactions.

This is the way Prolog Digest is currently functioning, except at the
programming language level. AIList could serve a useful purpose if it
were composed of ongoing research discussions about SPECIFIC, EXEMPLARY
problems, along with approaches, their limitations, etc.

If people don't think a particular problem is the right one, then they
could argue about THAT. Either way, it would elevate the level of
discussion. Most of my students tell me that they no longer read
AIList. They're turned off by the constant attempts to "defend or
define AI".

Part II. Reply to R-Johnson

Some of R-Johnson's criticisms of AI seem to stem from viewing
AI strictly as a TOOLS-oriented science.

{I prefer to refer to STRUCTURE-oriented work (ie content-free) as
TOOLS-oriented work and CONTENT-oriented work as DOMAIN or
PROCESS-oriented. I'm referring to the distinction that was brought up
by Schank in "The Great Debate" with McCarthy at AAAI-83, Wash DC.}

In general, tools-oriented work seems more popular and accepted
than content/domain-oriented work. I think this is because:

1. Tools are domain independent, so everyone can talk about them
without having to know a specific domain -- kind of like bathroom
humor being more universally communicable than topical-political
humor.

2. Tools have nice properties: they're general (see #1 above);
they have weak semantics (e.g. 1st order logic, lambda-calculus)
so they're clean and relatively easy to understand.

3. No one who works on a tool need be worried about being accused
of "ad hocness".

4. Breakthroughs in tools-research happen rarely, but when one
does, the people associated with the breakthrough become
instantly famous because everyone can use their tool (e.g. Prolog).

In contrast, content or domain-oriented research and theories suffer
from the following ills:

1. They're "ad hoc" (i.e. referring to THIS specific thing or
other).

2. They have very complicated semantics, poorly understood,
hard to extend, fragile, etc. etc.

However, many of the most interesting problems pop up in trying
to solve a specific problem which, if solved, would yield insight
into intelligence. Tools, for the most part, are neutral with respect
to content-oriented research questions. What does Prolog or Lisp
have to say to me about building a "Dear Abby" natural language
understanding and personal advice-giving program? Not much.
The semantics of lisp or prolog says little about the semantics of the
programs which researchers are trying to discover/write in Prolog or Lisp.
Tools are tools. You take the best ones off the shelf you can find for
the task at hand. I love tools and keep an eye out for
tools-developments with as much interest as anyone else. But I don't
fool myself into thinking that the availability of a tool will solve my
research problems.

{Of course no theory is exclusively one or the other. Also, there are
LEVELS of tools & content for each theory. This levels aspect causes
great confusion.}

By and large, AIList discussions (when they get around to something
specific) center too much around TOOLS and not PROCESS MODELS (ie
SPECIFIC programs, predicates, rules, memory organizations, knowledge
constructs, etc.).

What distinguishes AI from compilers, OS, networking, or other aspects
of CS are the TASKS that AI-ers choose. I want computers that can read
"War and Peace" -- what problems have to be solved, and in what order,
to achieve this goal? Telling me "use logic" is like telling me
to "use lambda calculus" or "use production rules".

Part III. Use and abuse of jargon in AI.

Someone recently commented in this digest on the abuse of jargon in AI.
Since I'm from the Yale school, and since Yale commonly gets accused of
this, I'm going to say a few words about jargon.

Different jargon for the same tools is BAD policy. Different jargon
to distinguish tools from content is GOOD policy. What if Schank
had talked about "logic" instead of "Conceptual Dependencies"?
What a mistake that would have been! Schank was trying to specify
how specific meanings (about human actions) combine during story
comprehension. The fact that prolog could be used as a tool to
implement Schank's conceptual dependencies is neutral with respect
to what Schank was trying to do.

At IJCAI-83 I heard a paper (exercise for the reader to find it)
which went something like this:

The work of Dyer (and others) has too many made-up constructs.
There are affects, object primitives, goals, plans, scripts,
settings, themes, roles, etc. All this terminology is confusing
and unnecessary.

But if we look at every knowledge construct as a schema (frame,
whatever term you want here), then we can describe the problem much
more elegantly. All we have to consider are the problems of:
frame activation, frame deactivation, frame instantiation, frame
updating, etc.

Here, clearly we have a tools/content distinction. Wherever
possible I actually implemented everything using something like
frames-with-procedural-attachment (ie demons). I did it so that I
wouldn't have to change my code all the time. My real interest,
however, was at the CONTENT level. Is a setting the same as an emotion?
Does the task "Recall the last 5 restaurants you were at" evoke the
same search strategies as "Recall the last 5 times you accomplished x",
or "the last 5 times you felt gratitude."? Clearly, some classes of
frames are connected up to other classes of frames in different ways.
It would be nice if we could discover the relevant classes and it's
helpful to give them names (ie jargon). For example, it turns out that
many (but not all) emotions can be represented in terms of abstract goal
situations. Other emotions fall into a completely different class (e.g.
religious awe, admiration). In my program "love" was NOT treated as
(at the content level) an affect.
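[Dyer describes frames-with-procedural-attachment but gives no code.
The following Python toy sketches the tool-level idea only; the frame,
slot, and demon names are invented for illustration and are not Dyer's
actual representation:

```python
class Frame:
    """A frame: named slots plus attached 'demons' (if-added procedures)."""
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)
        self.demons = {}                 # slot name -> list of procedures

    def attach(self, slot, demon):
        """Attach a procedure to run whenever `slot` is filled."""
        self.demons.setdefault(slot, []).append(demon)

    def fill(self, slot, value):
        """Fill a slot; filling fires any attached demons."""
        self.slots[slot] = value
        for demon in self.demons.get(slot, []):
            demon(self, value)

# Invented content-level example: an emotion frame whose "cause" slot,
# when filled, records the abstract goal situation behind the emotion.
def note_goal_situation(frame, cause):
    frame.slots["goal-situation"] = ("GOAL-AIDED-BY-OTHER", cause)

gratitude = Frame("GRATITUDE", experiencer=None)
gratitude.attach("cause", note_goal_situation)
gratitude.fill("cause", "friend-lent-money")
```

The tool (slots plus demons) is trivial; Dyer's point is that the hard
questions live in the content level -- which demons, on which classes of
frames, connected how. -- KIL]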

When I was at Yale, at least once a year some tools-oriented person
would come through and give a talk of the form: "I can
represent/implement your Scripts/Conceptual-Dependency/
Themes/MOPs/what-have-you using my tool X" (where X = ATNs, Horn
clauses, etc.).

I noticed that first-year students usually liked such talks, but the
advanced students found them boring and pointless. Why? Because if
you're content-oriented you're trying to answer a different set of
questions, and discussion of the form "I can do what you've already
published in the literature using Prolog" simply means "consider Prolog
as a nice tool" but says nothing at the content level, which is usually
where the advanced students are doing their research.

I guess I'm done. That'll keep me for a year.

-- Michael Dyer

------------------------------

Date: Mon 28 Nov 83 08:59:57-PST
From: Doug Lenat <LENAT@SU-SCORE.ARPA>
Subject: CS Colloq 11/29: John Seely Brown

[Reprinted from the SU-SCORE bboard.]

Tues, Nov 29, 3:45 MJH refreshments; 4:15 Terman Aud (lecture)

A COMPUTATIONAL FRAMEWORK FOR A QUALITATIVE PHYSICS--
Giving computers "common-sense" knowledge about physical mechanisms

John Seely Brown
Cognitive Sciences
Xerox, Palo Alto Research Center

Humans appear to use a qualitative causal calculus in reasoning about
the behavior of their physical environment. Judging from the kinds
of explanations humans give, this calculus is quite different from
the classical physics taught in classrooms. This raises questions as
to what this (naive) physics is like, how it helps one to reason
about the physical world and how to construct a formal calculus that
captures this kind of reasoning. An analysis of this calculus, along
with a system, ENVISION, based on it, will be covered.

The goals for the qualitative physics are (i) to be far simpler than
classical physics and yet retain all the important distinctions
(e.g., state, oscillation, gain, momentum), (ii) to produce causal
accounts of physical mechanisms, and (iii) to provide a logic for
common-sense, causal reasoning for the next generation of expert
systems.

A new framework for examining causal accounts has been suggested
based on using collections of locally interacting processors to
represent physical mechanisms.
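[The abstract only sketches the calculus. As a loose illustration of
the sign-algebra style of reasoning qualitative physics uses -- not
ENVISION's actual formalism, and with all names invented -- a local
processor for a valve might propagate qualitative values like this:

```python
# Qualitative values: '+' (positive/increasing), '0', '-' (negative),
# and '?' (ambiguous: the signs alone do not determine the result).
def qadd(a, b):
    """Qualitative sum: determined only when signs agree or one is zero."""
    if a == '0':
        return b
    if b == '0':
        return a
    return a if a == b else '?'

def qmul(a, b):
    """Qualitative product of two signs."""
    if '0' in (a, b):
        return '0'
    return '+' if a == b else '-'

def valve_flow(dP_sign):
    """Toy local processor: flow through an open valve takes the sign of
    the pressure drop across it (conductance is positive)."""
    return qmul('+', dP_sign)
```

Note how ambiguity is explicit: adding a positive and a negative
influence yields '?', which is exactly the kind of causal account --
"the result depends on which influence dominates" -- that classical
numerical physics hides. -- KIL]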

------------------------------

End of AIList Digest
********************
