AIList Digest            Friday, 30 Sep 1983       Volume 1 : Issue 66 

Today's Topics:
Rational Psychology - Definition,
Halting Problem,
Natural Language Understanding
----------------------------------------------------------------------

Date: Tue 27 Sep 83 22:39:35-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Rational X

Oh dear! "Rational psychology" is no more about rational people than
"rational mechanics" is about rational rocks or "rational
thermodynamics" about rational hot air. "Rational X" is the
traditional name for the mathematical, axiomatic study of systems
inspired by and intuitively related to the systems studied by the
empirical science "X." Got it?

Fernando Pereira

------------------------------

Date: 27 Sep 83 11:57:24-PDT (Tue)
From: ihnp4!houxm!hogpc!houti!ariel!norm @ Ucb-Vax
Subject: Re: Rational Psychology
Article-I.D.: ariel.463

Actually, the word "rational" in "rational psychology" is merely
redundant. One would hope that psychology would be, like other
sciences, rational. This would in no way detract from its ability to
investigate the causes of human irrationality. No science really
should have to be prefaced with the word "rational", since we should
be able to assume that science is not "irrational". Anyone for
"Rational Chemistry"?

Please note that the scientist's "flash of insight", "intuition",
"creative leap" is heavily dependent upon the rational faculty, the
faculty of CONCEPT-FORMATION. We also rely upon the rational faculty
for verifying and for evaluating such insights and leaps.

--Norm Andrews, AT&T Information Systems, Holmdel, New Jersey

------------------------------

Date: 26 Sep 83 13:01:56-PDT (Mon)
From: ihnp4!drux3!drufl!samir @ Ucb-Vax
Subject: Rational Psychology
Article-I.D.: drufl.670

Norm,

Let me elaborate. Psychology, or logic of mind, involves BOTH
rational and emotional processes. To consider one exclusively defeats
the purpose of understanding.

I have not read the article we are talking about so I cannot
comment on that article, but an example of what I consider a "Rational
Psychology" theory is "Personal Construct Theory" by Kelly. It is an
attractive theory but, in my opinion, it falls far short of describing
"logic of mind" as it fails to integrate emotional aspects.

I consider learning, concept formation, and creativity to have BOTH
rational and emotional attributes; hence it would be better if we
studied them as such.

I may be creating a dichotomy where there is none (Rational vs.
Emotional). I want to point you to an interesting book, "Metaphors We
Live By" (I forget the names of the authors), which in addition to
discussing many other AI-related concepts (without mentioning AI)
discusses the question of Objective vs. Subjective, which is similar
to what we are talking about here, Rational vs. Emotional.

Thanks.

Samir Shah
AT&T Information Systems, Denver.
drufl!samir

------------------------------

Date: Tue, 27 Sep 1983 13:30 EDT
From: MINSKY@MIT-OZ
Subject: Re: Halting Problem

About learning: There is a lot about how to get out of loops in my
paper "Jokes and the Cognitive Unconscious". I can send it to whoever
wants, either over this net or by U.S. Snail.
-- minsky

------------------------------

Date: 26 Sep 83 10:31:31-PDT (Mon)
From: ihnp4!clyde!floyd!whuxlb!pyuxll!eisx!pd @ Ucb-Vax
Subject: the Halting problem.
Article-I.D.: eisx.607

There are two AI problems that I know about: the computing power
problem (combinatorial explosions, etc) and the "nature of thought"
problem (knowledge representation, reasoning process etc). This
article concerns the latter.

AI's method (call it "m") seems to be to model human information
processing mechanisms, say legal reasoning methods, and, once a
mechanism is understood clearly and a calculus exists for it, to
program it. This idea can be transferred to various problem domains,
and voila, we have programs for "thinking" about various little
cubbyholes of knowledge.

The next thing to tackle is: how do we model AI's method "m" that was
used to create all these cubbyhole programs? How did whoever thought
of Predicate Calculus, semantic networks, and block-world theories ad
nauseam come up with them? Let's understand that ("m"), formalize it,
and program it. This process (let's call it "m'") gives us a program
that creates cubbyhole programs. Yeah, it runs on a zillion acres of
CMOS, but who cares.

Since a human can do more than just "m", or "m'", we try to make
"m''", "m'''" et al. When does this stop? Evidently it cannot. The
problem is that the thought process which yields a model or simulation
of a thought process is necessarily distinct from the latter (this is
true of all scientific investigation of any kind of phenomenon, not
just thought processes). This distinction is one of the primary
paradigms of Western science.

Put naively, thinking "about" the mind is also done "with" the mind.
This identity of subject and object that ensues in the scientific
(dualistic) pursuit of more intelligent machine behavior - do you
folks see it too? Since scientific thought relies on the clear
separation of a theory/model from reality, is a
mathematical/scientific/engineering discipline inadequate for said
pursuit? Is there a system of thought that is self-describing? Is
there a non-dualistic calculus?

What we are talking about here is the ability to separate oneself from
the object/concept/process under study, understand it, model it,
program it... it being anything, including that ability itself. The
ability to recognize that a model is a representation within one's
mind of a reality outside of one's mind. Trying to model this ability
leads one to an infinite regress. What is this ability? Let's call it
consciousness. What we seem to be coming up with here is the
INABILITY of math/sci etc. to deal with this phenomenon, to codify it,
and to boldly program a computer that has consciousness. Does this
mean that the statement:

"CONSCIOUSNESS CAN, MUST, AND WILL ONLY COME TO EXISTENCE OF ITS OWN
ACCORD"

is true? "Consciousness" was used for lack of a better word. Replace
it by X, and you still have a significant statement. Consciousness
has already come into existence; and, according to the line of
reasoning above, it cannot be brought into existence by the methods
available.

If so, how can we "help" machines to achieve consciousness, as
benevolent if rather impotent observers? Should we just
mechanistically build larger and larger neural network simulators
until one says "ouch" when we shut a portion of it off and, better,
tries to deliberately modify(sic) its environment so that that doesn't
happen again? And maybe even can split infinitives?

As a parting shot, it's clear that such neural networks must have
tremendous power to come close to even a fraction of our level of
abstraction ability.

Baffled, but still thinking... References, suggestions, discussions,
pointers avidly sought.

Prem Devanbu

ATTIS Labs , South Plainfield.

------------------------------

Date: 27 Sep 83 05:20:08 EDT (Tue)
From: rlgvax!cal-unix!wise@SEISMO
Subject: Natural Language Analysis and looping


A side light to the discussions of the halting problem is "what then?"
What do we do when a loop is detected? Ignore the information?
Arbitrarily select some level as the *true* meaning?

In some cases, meaning is drawn from outside the language. As an
example, consider a person who tells you, "I don't know a secret".
The person may really know a secret but doesn't want you to know, or
may not know a secret and reason that you'll assume that nobody with a
secret would say something so suspicious ...

A reasonable assumption would be that if the person had said nothing,
you'd have no reason to think he knows a secret; so if that were the
assumption he wanted you to make, he would simply have kept quiet, and
you may conclude that the person knows no secret.

This rather simplistic example demonstrates one response to the loop:
when confronted with circular logic, we disregard it. Another
possibility is that we may use external information to help
disambiguate by selecting a level of the loop (e.g., this is a
three-year-old, who is sufficiently unsophisticated that he may say
the above when he does, in fact, know a secret).
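
To make the two responses concrete, here is a toy sketch in
present-day Python. It is entirely hypothetical (not from any working
system): it follows the nested "he thinks that I think ..." modelling
only to a fixed depth, discards the circular part, and lets external
knowledge about the speaker select a level of the loop.

    MAX_DEPTH = 3   # how many levels of "he thinks that I think ..." to follow

    def interpret(utterance, speaker_is_naive, depth=0):
        """Guess what to conclude from 'I don't know a secret'."""
        if speaker_is_naive:
            # External information selects a level of the loop: a
            # three-year-old is taken more or less at face value, and
            # volunteering the denial at all is itself suspicious.
            return "probably does know a secret"
        if depth >= MAX_DEPTH:
            # Circular logic detected: disregard it, as if nothing were said.
            return "no evidence either way"
        # Otherwise follow one more level of mutual modelling and recurse.
        return interpret(utterance, speaker_is_naive, depth + 1)

    print(interpret("I don't know a secret", speaker_is_naive=False))
    print(interpret("I don't know a secret", speaker_is_naive=True))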

This may support the study of cognition as an underpinning for NLP.
Certainly we can never expect a machine to react as we (who is 'we'?)
do unless we know how we react.

------------------------------

Date: 28 Sep 1983 1723-PDT
From: Jay <JAY@USC-ECLC>
Subject: NLP, Learning, and knowledge rep.

As an undergraduate student here at USC, I am required to pass a
Freshman Writing class. I have noticed in this class that one field
of the NL Problem is UNSOLVED even in humans. I am speaking of the
generation of prose.

In AI terms the problems are...

The selection of an area of the knowledge base which is small enough
to be written about in a few pages, and large enough that a paper can
be generated at all.

One of the solutions to this problem is called "clustering." In the
middle of a page one draws a circle around the topic. Then a directed
graph is built by connecting associated ideas to nodes in the graph.
Just free association does not seem to work very well, so it is
suggested that one ask a number of questions about the main idea, or
about any other node. Some of the questions are What, Where, When,
Why (and the rest of the "Journalistic" q's), can you RELATE an
incident about it, can you name its PARTS, can you describe a process
to MAKE or do it. Finally this smaller data base is reduced to a few
interesting areas. This solution is then a process of Q and A on the
data base to construct a smaller data base.
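
As a rough illustration (not part of the original description, and
using hypothetical names), the clustering step might be sketched in
Python as growing a directed graph by putting the stock questions to
each node, with the "ask" and "interesting" functions standing in for
the human doing the associating:

    from collections import defaultdict

    QUESTIONS = ["what?", "where?", "when?", "why?", "who?", "how?",
                 "relate an incident", "name its parts",
                 "how to make or do it"]

    def cluster(topic, ask, rounds=2):
        """Grow a directed graph of ideas around the topic.
        ask(question, node) returns a list of associated ideas."""
        graph = defaultdict(list)        # node -> [(question, idea), ...]
        frontier = [topic]
        for _ in range(rounds):
            next_frontier = []
            for node in frontier:
                for q in QUESTIONS:
                    for idea in ask(q, node):
                        graph[node].append((q, idea))
                        next_frontier.append(idea)
            frontier = next_frontier
        return graph

    def reduce_to_interesting(graph, interesting):
        """The final step: keep only the few areas judged interesting."""
        return {node: edges for node, edges in graph.items()
                if interesting(node)}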

Once a small data base has been selected, it needs to be given a
linear representation. That is, it must be organized into a new data
base that is suitable for prose. There are no solutions offered for
this step.

Finally the data base is coded into English prose. There are no
solutions offered for this step.

This prose is read back in, and compared to the original data base.
Ambiguities need to be removed, some areas elaborated on, and others
rewritten in a clearer style. There are no solutions offered for this
step, but there are some rules - things to do, and things not to do.

j'

------------------------------

Date: Tuesday, 27 September 1983 15:25:35 EDT
From: Robert.Frederking@CMU-CS-CAD
Subject: Re: NL argument between STLH and Pereira

Several points in the last message in this exchange seemed worthy of
comment. I think my basic sympathies lie with STLH, although he
overstates his case a bit.

While language is indeed a "fuzzy thing", there are different shades
of correctness, with some sentences being completely right, some with
one obvious *error*, which is noticed by the hearer and corrected,
while others are just a mess, with the hearer guessing the right
answer. This is similar in some ways to error-correcting codes, where
after enough errors, you can't be sure anymore which interpretation is
correct. This doesn't say much about whether the underlying ideal is
best expressed by a grammar. I don't think it is, for NL, but the
reason has more to do with the fact that the categories people use in
language seem to include semantics in a rather pervasive way, so that
making a major distinction between grammatical (language-specific,
arbitrary) and other knowledge (semantics) might not be the best
approach. I could go on at length about this (in fact I'm currently
working on a Tech Report discussing this idea), but I won't, unless
pressed.
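
The error-correcting-code analogy can be made concrete with a toy
repetition code (a hypothetical illustration in Python, not anything
from the message): one garbled symbol is noticed and corrected, but
past a certain point the "correction" itself goes wrong.

    def decode(received):
        """Majority-vote decoding of one bit sent three times."""
        return 1 if sum(received) >= 2 else 0

    print(decode((1, 1, 1)))   # 1 -- completely right
    print(decode((1, 0, 1)))   # 1 -- one obvious error, noticed and corrected
    print(decode((0, 0, 1)))   # 0 -- too many errors: the hearer guesses wrong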

As for ignoring human cognition, some AI people do ignore it, but
others (especially here at C-MU) take it very seriously. This seems
to be a major division in the field -- between those who think the
best search path is to go for what the machine seems best suited for,
and those who want to use the human set-up as a guide. It seems to me
that the best solution is to let both groups do their thing --
eventually we'll find out which path (or maybe both) was right.

I read with interest your description of your system -- I am currently
working on a semantic chart parser that sounds fairly similar to your
brief description, except that it is written in OPS5. Thus I was
surprised at the statement that OPS5 has "no capacity for the
parallelism" needed. OPS5 users suffer from the fact that there are
some fairly non-obvious but simple ways to build powerful data
structures in it, and these have not been documented. Fortunately, a
production system primer is currently being written by a group headed
by Elaine Kant. Anyway, I have an as-yet-unaccepted paper describing
my OPS5 parser available, if anyone is interested.
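
For readers who have not met chart parsing, a very compressed sketch
of the bottom-up idea over a toy grammar follows, written in
present-day Python. It is purely illustrative and bears no
resemblance to the OPS5 system described above.

    # Toy grammar in Chomsky normal form: right-hand side -> left-hand sides.
    GRAMMAR = {
        ("the",): {"Det"}, ("dog",): {"N"}, ("barks",): {"VP"},
        ("Det", "N"): {"NP"}, ("NP", "VP"): {"S"},
    }

    def parse(words):
        """Fill a triangular chart of categories spanning words[i:k]."""
        n = len(words)
        chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):                  # lexical edges
            chart[i][i + 1] |= GRAMMAR.get((w,), set())
        for span in range(2, n + 1):                   # build longer edges
            for i in range(n - span + 1):
                k = i + span
                for j in range(i + 1, k):
                    for b in chart[i][j]:
                        for c in chart[j][k]:
                            chart[i][k] |= GRAMMAR.get((b, c), set())
        return "S" in chart[0][n]

    print(parse("the dog barks".split()))   # True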

As for scientific "camps" in AI, part of the reason for this seems to
be the fact that AI is a very new science, and often none of the
warring factions has proved its point. The same thing happens in
other sciences, when a new theory comes out, until it is proven or
disproven.
quite excited. We could probably use a little more of the "both
schools of thought are probably partially correct" way of thinking,
but AI is not alone in this. We just don't have a solid base of
proven theory to anchor us (yet).

In regard to the call for a theory which explains all aspects of
language behavior, one could answer "any Turing-equivalent computer".
The real question is, how *specifically* do you get it to work? Any
claim like "my parser can easily be extended to do X" is more or less
moot, unless you've actually done it. My OPS5 parser is embedded in a
Turing-equivalent production system language. I can therefore
guarantee that if any computer can do language learning, so can my
program. The question is, how? The way linguists have often wanted
to answer "how" is to define grammars that are less than
Turing-equivalent which can do the job, which I suspect is futile when
you want to include semantics. In any event, un-implemented
extensions of current programs are probably always much harder than
they appear to be.

(As an aside about sentences as fundamental structures, there is a
two-prong answer: (1) Sentences exist in all human languages. They
appear to be the basic "frame" [I can hear nerves jarring all over the
place] or unit for human communication of packets of information. (2)
Some folks have actually tried to define grammars for dialogue
structures. I'll withhold comment.)

In short, I think that warring factions aren't that bad, as long as
they all admit that no one has proven anything yet (which is
definitely not always the case); that semantic chart parsing is the
way to go for NL; that theories which explain all of cognitive science
will be a long time in coming; and that no one should accept a claim
about AI that hasn't been implemented.

------------------------------

End of AIList Digest
********************
