AIList Digest           Thursday, 10 Apr 1986      Volume 4 : Issue 75 

Today's Topics:
Games - Game-Playing Programs,
Philosophy - Computer Consciousness & Wittgenstein and NL &
Reply to Lucas on Formal Systems

----------------------------------------------------------------------

Date: Wed, 09 Apr 86 11:54:48 -0500
From: lkramer@dewey.udel.EDU
Subject: Game-Playing Programs

Re: Allen Sherzer's request for information on AI game-playing
programs.
I wrote a program last year for an expert systems course that plays
the card game Spades (ESP -- Expert Spades Player). It is implemented
as a frame-based expert system written in minifrl (my revision of the
frame primitives in Winston and Horn's Lisp) on top of Franz. The
program is fairly simple-minded in that it doesn't learn from its mistakes
or deal well with novel situations, but it still is able to play a fairly
good game of Spades.
In addition, since it is written as an expert system, its rule-base is
easily modifiable.
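
To give a flavor of what "easily modifiable" means, here is a minimal
sketch of a rule-driven card chooser. It is an illustration only (written
in Python rather than the minifrl frames the real program uses), and all
the rule names and state fields below are invented:

    # Illustrative sketch only (not the actual ESP code, which is a
    # frame-based expert system in Franz Lisp): a tiny, easily edited
    # rule-base for choosing a card.  Cards are (rank, suit) pairs,
    # rank 2..14, suit one of "SHDC"; the rules are invented.

    def legal_plays(hand, suit_led):
        """Follow suit if possible; otherwise anything is legal."""
        following = [c for c in hand if c[1] == suit_led]
        return following or hand

    # Each rule is (name, condition, action); the first rule whose
    # condition holds picks the card.  Changing the player means
    # editing this list.
    RULES = [
        ("duck if partner is winning the trick",
         lambda st: st["partner_winning"],
         lambda st: min(legal_plays(st["hand"], st["suit_led"]))),
        ("trump with lowest spade when void in the led suit",
         lambda st: not any(c[1] == st["suit_led"] for c in st["hand"])
                    and any(c[1] == "S" for c in st["hand"]),
         lambda st: min(c for c in st["hand"] if c[1] == "S")),
        ("default: play lowest legal card",
         lambda st: True,
         lambda st: min(legal_plays(st["hand"], st["suit_led"]))),
    ]

    def choose_card(state):
        for name, condition, action in RULES:
            if condition(state):
                return action(state)

    # Example: void in hearts, partner not winning -> trump with the 2 of spades.
    state = {"hand": [(2, "S"), (9, "C"), (12, "D")],
             "suit_led": "H", "partner_winning": False}
    print(choose_card(state))    # (2, 'S')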

Mostow has written a (much more sophisticated) program that plays Hearts
and is able to operationalize from fairly general advice.
Mostow, D.J. (1983). Machine transformation of advice into a heuristic
search procedure. In R.S. Michalski, J. Carbonell, and T.M. Mitchell,
eds., Machine Learning: An Artificial Intelligence Approach. Tioga
Press.

------------------------------

Date: 9 Apr 86 08:55:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: computer consciousness


Thought I'd jump in here with a few points.

1. There's a MetaPhilosophers mailing list (don't ask me why the "meta")
where folks thrash on about this stuff constantly, so if you
care, listen in. Tune in to: MetaPhilosophers%MIT-OZ@MIT-MC.

2. There's a common problem with confusing epistemological questions
(what would constitute evidence for computer consciousness) and
ontological ones (so, is it *really* conscious). Those who
subscribe to various verificationist fallacies are especially
vulnerable, and indeed may argue that there is ultimately
no distinction. The point is debatable, obviously, but we
shouldn't just *assume* that the latter question (is it *really*
conscious) is meaningless unless tied to an operational definition.
After all, conscious experience is the classic case of a
*private* phenomenon (ie, no one else can directly "look" at your
experiences). If this means that consciousness fails a
verificationist criterion of meaningfulness, so much the worse
for verificationism.

3. Taking up the epistemological problem for the moment, it
isn't as obvious as many assume that even the most sophisticated
computer performance would constitute *decisive* evidence for
consciousness. Briefly, we believe other people are conscious
for TWO reasons: 1) they are capable of certain clever activities,
like holding English conversations in real-time, and 2) they
have brains, just like us, and each of us knows darn well that
he/she is conscious. Clearly the brain causes/supports
consciousness and external performance in ways we don't
understand. A conversational computer does *not* have a brain;
and so one of the two reasons we have for attributing
consciousness to others does not hold.

Analogy: suppose you know that cars can move, that they all have
X-type-engines, and that there's something called combustion
which depends on X-type-engines and which is instrumental in getting
the cars to move. Let's say you have a combustion-detector
which you tried out on one car and, sure enough, it had it, but
then you dropped your detector and broke it. You're still pretty
confident that the other cars have combustion. Now you see a
very different type of vehicle which can move, but which does
NOT have an X-type-engine - in fact you're not too sure whether
it's really an engine at all. Now, is it just obvious that this
other vehicle has combustion?? Don't we need to know: a) a good
definition of combustion, b) some details as to how X-type-engines
and combustion are related, c) some details as to how motion
depends on combustion, and d) in what respects the new "engine"
resembles/differs from X-type-engines, etc.? The point is
that motion (performance) isn't *decisive* evidence for combustion
(consciousness) in the absence of an X-type-engine (brain).

John Cugini <Cugini@NBS-VMS>

------------------------------

Date: 2 Apr 86 08:58:24 GMT
From: amdcad!cae780!leadsv!rtgvax!ramin@ucbvax.berkeley.edu (Pantagruel)
Subject: Natural Language processing


An issue that has cropped up now and again through my studies has been
the relation between current Natural Language/Linguistic research and
the works of Ludwig Wittgenstein (especially through the whole Vienna
School mess and later in his writings in "Philosophical Investigations").

It appears to me (in observing trends in such theories) and especially
after the big hoopla over Frames that AI/Cognitive Research has spent
the past 30 years experimenting through "Tractatus" and has just now warmed
up to "P.I." The works of the Vienna School's context-free language analyses
earlier in this century seems quite parallel to early context-free language
parsing efforts.

The later studies in P.I. with regard to the role of Natural Context and
the whole Picture-Theory rot seem to have been a direct result of the
failure of the context-free approach. Quite a few objections voiced nowadays
by researchers on the futility of context-free analysis seem very
similar to the early chapters in P.I.

I still haven't gone through Wittgenstein with as fine a comb as I
would like... especially this latter batch of his notes that I saw
a few weeks ago finally published and available publicly... But I still
think there is quite a bit of merit to this fellow's study of language
and cognition.

Any opinions on this...? Any references to works to the contrary?

I must be fair in warning that I hold Wittgenstein's works to contain
the answers to some of the biggest issues facing us now... Personally, I'm
holding out for someone to come up with some relevant questions...
I think Bertrand Russell was correct in assessing L.W.'s significance...

Please mail back to me for a livelier dialogue... The Net seems rather
hostile nowadays... (but post to net if you think it merits a public forum)...



"Pantagruel at his most vulgar..."

= = =
Alias: ramin firoozye | USps: Systems Control Inc.
uucp: ...!shasta \ | 1801 Page Mill Road
...!lll-lcc \ | Palo Alto, CA 94303
...!ihnp4 \...!ramin@rtgvax | ^G: (415) 494-1165 x-1777
= = =

------------------------------

Date: Fri, 4 Apr 86 13:34:54 est
From: Stanley Letovsky <letovsky@YALE.ARPA>
Subject: Reply to Lucas


At the conference on "AI and the Human Mind" held at Yale early in
March 1986, a paper was presented by the British philosopher John
Lucas. He claimed that AI could never succeed, that a machine was in
principle incapable of doing all that a mind can do. His argument went
like this. Any computing machine is essentially equivalent to a system
of formal logic. The famous Godel incompleteness theorem shows that for
any formal system powerful enough to be interesting, there are truths
which cannot be proved in that system. Since a person can see and
recognize these truths, the person can transcend the limitations of the
formal system. Since this is true of any formal system at all, a person
can always transcend a formal system, therefore a formal system can
never be a model of a person. Lucas has apparently been pushing this
argument for several decades.

Marvin Minsky gave the rebuttal to this; he said that formal
systems had nothing to do with AI or the mind, since formal systems
required perfect consistency, whereas what AI required was machines that
make mistakes, that guess, that learn and evolve. I was less sure of
that refutation; although I agreed with Minsky, I was worried that
because the algorithms for doing all that guessing and learning and
mistake making would run on a computer, there was still a level of
description at which the AI model must look like a consistent formal
system. This is equivalent to the statement that your theory of the
mind is a consistent theory. I was worried that Lucas could revive his
argument at that level, and I wanted a convincing refutation. I have
found one, which I will now present.

First, we need to clarify the relationship between a running
computer program and a system of formal logic. A running computer
program is a dynamic object; it has a history composed of a succession
of states of the machine. A formal system, by contrast, is timeless:
it has some defining axioms and rules of inference, and a space of
theorems and nontheorems implicitly defined by those axioms and rules.
For a formal system to model a dynamic process, it must describe in its
timeless manner the temporal behavior or history of the process. The
axioms of the formal system, therefore, will contain a time parameter.
They might look something like this:

if the process is in a state of type A at time t1,
it will be in a state of type B in the next instant.
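
To make the idea concrete, here is an illustration of my own (in Python
rather than logical notation, and with a made-up three-state machine):
each transition axiom derives the "state at t+1" fact from the "state at
t" fact, so the timeless description yields the process's history.

    # Illustration only: a time-indexed description of an invented process.
    # Facts have the form ("state", name, t); each transition axiom
    # derives the fact for t+1 from the fact for t, which is how a
    # timeless formal system can describe a temporal history.

    TRANSITIONS = {"A": "B", "B": "C", "C": "A"}    # made-up machine

    def history(initial_state, steps):
        facts = [("state", initial_state, 0)]
        for t in range(steps):
            _, current, _ = facts[-1]
            # Axiom schema: state(s, t) -> state(TRANSITIONS[s], t+1)
            facts.append(("state", TRANSITIONS[current], t + 1))
        return facts

    print(history("A", 3))
    # [('state', 'A', 0), ('state', 'B', 1), ('state', 'C', 2), ('state', 'A', 3)]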

A more complicated problem is how the interaction between the
computer program and the outside world is to be modelled within the
formal system. You cannot simulate input and output by adding axioms to
the formal system, because changing the axioms changes the identity of
the system. Moreover, input and output are events in the domain of the
running program; within the formal system they are just axioms or
theorems which assert that such and such an input or output event
occurred at such and such a time. The ideal solution to this problem is
to include within the formal system a theory of the physics of the world
as well as a theory of the mind. This means that you can't construct a
theory of the mind until you have a theory of the rest of the universe,
which seems like a harsh restriction. Of course, the theory of the rest
of the universe need not be correct or very detailed; an extremely
impoverished theory would simply be a set of assertions about sensory
data received at various instants. Alternatively, you could ignore I/O
completely and just concern yourself with a model of isolated thought;
if we debunk Lucas' argument for this case we can leave it to him to
decide whether to retreat to the high ground of embodied thinking
machines. Therefore I will ignore the I/O issue.

The next point concerns the type of program that an AI model of
the mind is likely to be. Again, ignoring sensory and motor processing
and special purpose subsystems like visual imagery or solid modelling,
we will consider a simple model of the mind as a process whose task is
belief fixation. That is, the job of the mind is to maintain a set of
beliefs about the world, using some kind of abductive inference
procedure: generate a bunch of hypotheses, evaluate their credibility
and consistency using a variety of heuristic rules of evidence, and, on
occasion, commit to believe a particular hypothesis.
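
A crude sketch of such a loop, again as an illustration only, with toy
placeholder heuristics rather than the machinery of any particular AI
system:

    import random

    # Toy belief-fixation loop: generate candidate hypotheses from the
    # observations, score each with a heuristic "rule of evidence", and
    # commit to the credible ones.  Nothing here guarantees that the
    # resulting belief set is consistent.

    def generate_hypotheses(observations):
        # Toy abduction: for each observation, hypothesize a cause.
        return [("cause_of", obs) for obs in observations]

    def credibility(hypothesis, beliefs):
        # Heuristic rule of evidence: a small bonus for cohering with
        # what is already believed, plus a noisy, fallible judgment.
        coherence = 0.3 if hypothesis in beliefs else 0.0
        return coherence + random.random()

    def fix_beliefs(observations, beliefs, threshold=0.5):
        for h in generate_hypotheses(observations):
            if credibility(h, beliefs) > threshold:
                beliefs.add(h)              # commit to believe h
        return beliefs

    beliefs = set()
    print(fix_beliefs(["smoke", "flame"], beliefs))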

It is important to understand that the set of beliefs maintained
by this program need not be consistent with each other. If we use the
notation
believes(Proposition,Instant)
to denote the fact that the system believes a particular proposition at
some instant, it is perfectly acceptable to have both
believes(p,i)
and
believes(not(p),i)
be theorems of the formal system which describes the program's behavior.
The formal system must be a consistent description of the behavior of
the program, or we do not have a coherent theory. The behavior of the
program must match Lucas' (or some other person's) behavior or we do not
have a correct theory. However the beliefs maintained by the program
need not be a consistent theory of anything, unless Lucas happens to
have some consistent beliefs about something.

For those more comfortable with technical jargon, the formal
system has a meta-level and an object level. The object level describes
Lucas' beliefs and is not necessarily consistent; the meta-level is our
theory of Lucas' belief fixation process and had better be consistent.
The object level is embedded in the meta-level using the modal operator
"believes".

What would it mean to formulate a Godel sentence for this system?
To begin with, we seem to have a choice about where to formulate the
Godel sentence: at the object level or the meta level. Formulating a
Godel sentence for the object level, that is, the level of Lucas'
beliefs, is clearly a waste of time, however. This level is not
required to be consistent, and so Godel's trick of forcing us to choose
between consistency and completeness fails: we have already rejected
consistency.

The more serious problem concerns a Godel sentence formulated for
the meta-level, which must be consistent. The general form of a Godel
sentence is
G: not(provable(G))
where "provable" is a predicate which you embed in the system in a
clever way, and which captures the notion of provability within the
system. The meaning of such a sentence is "This sentence is not a
theorem", and therein lies the Godelian dilemma: if the sentence is
true, the system is incomplete because not all statable truths are
theorems. If the sentence is false, then the system is inconsistent,
because G is both true and false. This dilemma holds for all
"sufficiently powerful" systems, and we assume that our model of Lucas
falls into this category, and that one can therefore write down a Godel
sentence for the model.

What is critical to realize, however, is that the Godel sentence
for our model of Lucas is not a belief of Lucas' according to the model.
The form of the Godel sentence
G: not(provable(G))
is syntactically distinct from the form of an assertion about Lucas'
beliefs,
believes(p,t)
Nothing stops us from having
believes(G,t)
be provable in the system, despite the fact that G is not itself
provable in the system. (Actually, the last sentence is incorrect,
since it is illegal to put G inside the scope of the "believes"
operator. G is a meta-level sentence, and only object level sentences
are permitted inside "believes". The object level and the meta level
are not allowed to share any symbols. If you want to talk about Lucas'
beliefs about the model of himself, you will have to embed Lucas' model
of the model of himself at the object level, but we can ignore this
technicality.)

This point is crucial: the Godel sentence for our theory of Lucas
as a belief-fixing machine is not a theorem ascribing any beliefs to
Lucas. Therefore the fact that Lucas can arrive at a belief that the
Godel sentence is true is perfectly compatible with the fact that the
system cannot prove G as a theorem. Lucas' argument depends on the
claim that if he believes G, he transcends the formal system: this is
his mistake. Lucas can believe whatever he wants about what sentences
can or can't be proved within the model of himself. The only way his
beliefs have any bearing on the correctness of the model is if the model
predicts that Lucas will believe something he doesn't, or disbelieve
something he believes. In other words, the usual criteria of science
apply to judging the correctness of the model, and no Godelian sophistry
can invalidate the model a priori.

Lucas' argument has a certain surface plausibility to it. Its
strength seems to depend on the unwarranted assumption that the theorems
of the formal system correspond directly to the beliefs of the mind
being modelled by that system. This is a naive and completely
fallacious assumption: it ignores the fact that minds are temporal
processes, and that they are capable of holding inconsistent beliefs.
When these issues are taken into account, Lucas' argument falls flat.

------------------------------

End of AIList Digest
********************
