AIList Digest            Tuesday, 11 Oct 1983      Volume 1 : Issue 73 

Today's Topics:
Halting Problem,
Consciousness,
Rational Psychology
----------------------------------------------------------------------

Date: Thu 6 Oct 83 18:57:04-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: Halting problem discussion

This discussion assumes that "human minds" are at least equivalent
to Universal Turing Machines. If they are restricted to computing
smaller classes of recursive functions, the question dissolves.

Sequential computers are idealized as having infinite memory because
that makes it easier to study asymptotic behavior mathematically. Of
course, we all know that a more accurate idealization of sequential
computers is the finite automaton (for which there is no halting
problem, of course!).
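
For a finite automaton, halting is in fact decidable by exhaustive
simulation. A minimal sketch in Python (the step-function encoding is
illustrative, not from any particular text): a deterministic machine
with finitely many configurations either halts or revisits a
configuration, and a revisited configuration means it loops forever.

    def fsm_halts(step, start):
        """Decide halting for a deterministic finite-state machine.

        step(state) returns the next state, or None when the machine
        halts. With finitely many states, a run either reaches None
        or repeats a state, and a repeat means an infinite loop.
        """
        seen = set()
        state = start
        while state is not None:
            if state in seen:
                return False    # revisited a configuration: loops forever
            seen.add(state)
            state = step(state)
        return True             # reached a halting configuration

    # A four-state counter that halts, and one that cycles:
    print(fsm_halts(lambda s: s + 1 if s < 3 else None, 0))  # True
    print(fsm_halts(lambda s: (s + 1) % 4, 0))               # False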

The discussion on this issue seemed to presuppose that "minds" are the
same kind of object as existing (finite!) computing devices. Accepting
this presupposition for a moment (I am agnostic on the matter), the
above argument applies and the discussion is shown to be vacuous.

Thus fall undecidability arguments in psychology and linguistics...

Fernando Pereira

PS. Any silliness about unlimited amounts of external memory
will be profitably avoided.

------------------------------

Date: 7 Oct 83 1317 EDT (Friday)
From: Robert.Frederking@CMU-CS-A (C410RF60)
Subject: AI halting problem

Actually, this isn't a problem, as far as I can see. The Halting
Problem's problem is: there cannot be a program for a Turing-equivalent
machine that can tell whether *any* arbitrary program for that machine will
halt. The easiest proof that a Halts(x) procedure can't exist is the
following program: (due to Jon Bentley, I believe)
if halts(x) then
while true do print("rats")
What happens when you start this program up, with itself as x? If
halts(x) returns true, it won't halt, and if halts(x) returns false, it
will halt. This is a contradiction, so halts(x) can't exist.
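
In modern notation the same construction can be sketched in Python.
The halts() stub below is hypothetical by necessity; the whole point
is that no correct body for it can exist.

    def halts(p):
        # Hypothetical oracle: True iff program p halts when run on
        # input p. No correct implementation exists; stub only.
        raise NotImplementedError

    def rats(p):
        if halts(p):        # oracle says "p halts on p"...
            while True:     # ...so loop forever
                print("rats")
        # oracle says "p loops on p", so fall through and halt

    # Consider rats(rats): if halts(rats) were True, rats(rats) would
    # loop forever; if it were False, rats(rats) would halt. Either
    # answer refutes the oracle, so no halts() can be written.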

My question is, what does this have to do with AI? Answer: not
much. There are lots of programs which always halt. You just can't
have a program which can tell you *for* *any* *program* whether it will
halt. Furthermore, human beings don't want to halt, i.e., die (this
isn't really a problem, since the question is whether their mental
subroutines halt).

So as long as the mind constructs only programs which will
definitely halt, it's safe. Beings which aren't careful about this
fail to breed, and are weeded out by evolution. (Serves them right.)
All of this seems to assume that people are Turing-equivalent (without
pencil and paper), which probably isn't true, and certainly hasn't been
proved. At least I can't simulate a PDP-10 in my head, can you? So
let's get back to real discussions.

------------------------------

Date: Fri, 7 Oct 83 13:05:16 CDT
From: Paul.Milazzo <milazzo.rice@Rand-Relay>
Subject: Looping in humans

Anyone who believes the human mind incapable of looping has probably
never watched anyone play Rogue :-). The success of Rogomatic (the
automatic Rogue-playing program by Mauldin et al.) demonstrates that
the game can be played by deriving one's next move from a simple
*fixed* set of operations on the current game state.

Even in the light of this demonstration, Rogue addicts sit hour after
hour mechanically striking keys, all thoughts of work, food, and sleep
forgotten, until forcibly removed by a girl- or boy-friend or system
crash. I claim that such behavior constitutes looping.

:-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

Paul Milazzo <milazzo.rice@Rand-Relay>
Dept. of Mathematical Sciences
Rice University, Houston, TX

P.S. A note to Rogue fans: I have played a few games myself, and
understand the appeal. One of the Rogomatic developers is a
former roommate of mine interested in part in overcoming the
addiction of rogue players everywhere. He, also, has played
a few games...

------------------------------

Date: 5 Oct 83 9:55:56-PDT (Wed)
From: hplabs!hao!seismo!philabs!cmcl2!floyd!clyde!akgua!emory!gatech!owens @ Ucb-Vax
Subject: Re: a definition of consciousness?
Article-I.D.: gatech.1379

I was doing required reading for a linguistics class when I
came across an interesting view of consciousness in "Foundations
of the Theory of Signs", by Charles Morris, section VI, subsection
12, about the 6th paragraph (it's also in the International
Encyclopedia of Unified Science, Otto Neurath, ed.).

To say that y experiences x is to define a relation E of which
y is the domain and x is the range. Thus, yEx says that it is true
that y experiences x. E does not follow the usual relational rules:
it is neither transitive nor symmetric (I can experience Joe, and Joe
can experience Fred, but it is not necessarily so that I thus
experience Fred). Morris goes on to state that yEx is a "conscious
experience" if yE(yEx) ALSO holds; otherwise it's an "unconscious
experience".
Interesting. Note that there is no infinite regress of
yE(yE(yE...)) of the kind usually postulated as a consequence of
computer consciousness. However the function that defines E is
defined, it only needs to have the POTENTIAL of fitting yEx as an x
in another yEx, where y is itself. Could the fact that the postulated
computer has the option of NOT doing the insertion be some basis for
free will??? Would a required infinite regress of yE(yE(yE...))
manifest some sort of compulsiveness that rules out free will?? (Not
to say that an addict of some sort has no free will, although it's
worth thinking about.)
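
Morris's definition is concrete enough to model directly. A toy
sketch in Python (the encoding and names are mine, not Morris's):
E is a set of (experiencer, experienced) pairs, an experience is
itself an object that can appear on the right-hand side of another
pair, and consciousness needs exactly one extra level, not a regress.

    E = set()

    def experiences(y, x):
        """Record that y experiences x; the experience is an object."""
        event = (y, x)
        E.add(event)
        return event

    def is_conscious(y, x):
        """yEx is a conscious experience iff yE(yEx) also holds."""
        return (y, x) in E and (y, (y, x)) in E

    joe_sees_red = experiences("joe", "red")   # joe E red
    experiences("joe", joe_sees_red)           # joe E (joe E red)
    experiences("joe", "itch")                 # joe E itch, unreflected

    print(is_conscious("joe", "red"))    # True: experience is experienced
    print(is_conscious("joe", "itch"))   # False: unconscious experience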
Question: Am I trivializing the problem by reducing the question of
whether consciousness exists to the ability to define the relation
E? Are there OTHER questions that I haven't considered that would
strengthen or weaken that supposition? No flames, please, since this
ain't a flame.

G. Owens
at gatech CSNET.

------------------------------

Date: 6 Oct 83 9:38:19-PDT (Thu)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: towards a calculus of the subjective
Article-I.D.: ihuxr.685

I posted some articles to net.philosophy a while back on this topic,
but I didn't get much of a rise out of anybody. Maybe this is a better
forum. (Then again, ...) I'm induced to try here by G. Owens' article,
"Re: a definition of consciousness?".

Instead of trying to formulate a general characteristic of conscious
experience, what about trying to characterize different types of subjective
experience in terms of their physical correlates? In particular, what's
the difference between seeing a color (say) and hearing a sound? Even
more particularly, what's the difference between seeing red and seeing blue?

I think the last question provides a potential experimental test of
dualism. If it could be shown that the subjective experience of a red
image was constituted by an internal set of "red" image cells, and similarly
for a blue image, I would regard this as a proof of dualism. This is
assuming the "red" and "blue" cells to be physically equivalent. The
choice of which were "red" and which were "blue" would have no
physical basis.

On the other hand, suppose there were some qualitative difference in
the firing patterns associated with seeing red versus seeing blue.
We would have a physical difference to hang our hat on, but we would
still be left with the problem of forming a calculus of the subjective.
That is, we would have to figure out a way to deduce the type of subjective
experience from its physical correlates.

A successful effort might show how to experience completely new colors,
for example. Maybe our restriction to a 3-d color space is due to
the restricted stimulation of subjective color space by three inputs.
Any acid heads care to comment?

These thoughts were inspired by Thomas Nagel's "What is it like to be a bat?"
in "The Mind's I". I think the whole subjective-objective problem is
given short shrift by radical AI advocates. Hofstadter's critique of
Nagel's article was interesting, but I don't think it addressed Nagel's
main point.

Lew Mammel, Jr. ihuxr!lew

------------------------------

Date: 6 Oct 83 10:06:54-PDT (Thu)
From: ihnp4!zehntel!tektronix!tekecs!orca!brucec @ Ucb-Vax
Subject: Re: Parallelism and Physiology
Article-I.D.: orca.179

Re the article posted by Rik Verstraete <rik@UCLA-CS>:

In general, I agree with your statements, and I like the direction of
your thinking. If we conclude that each level of organization in a
system (e.g. a conscious mind) is based in some way on the next lower
level, it seems reasonable to suppose that there is, in some sense, a
measure of detail (a density of organization, if you will) which must
exceed some lower limit at a given level before that level can support
the next. There would then be, in the same sense, a median density
across the levels of the system (mind), and a standard deviation, which
I conjecture would be bounded in any successful system (only the top
level is likely to be wildly different in density, and that lower than
the median).

    Maybe the distinction between the words learning and
    self-organization is only a matter of granularity too. (??)

I agree. I think that learning is simply a sophisticated form of
optimization of a self-organizing system in a *very* large state
space. Maybe I shouldn't have said "simply." Learning at the level of
human beings is hardly trivial.

    Certainly, there are not physically two types of memories, LTM
    and STM. The concept of LTM/STM is only a paradigm (no doubt a
    very useful one), but when it comes to implementing the concept,
    there is a large discrepancy between brains and machines.

Don't rush to decide that there aren't two mechanisms. The concepts of
LTM and STM were developed as a result of observation, not from theory.
There are fundamental functional differences between the two. They
*may* be manifestations of the same physical mechanism, but I don't
believe there is strong evidence to support that claim. I must admit
that my connection to neurophysiology is some years in the past
so I may be unaware of recent research. Does anyone out there have
references that would help in this discussion?

------------------------------

Date: 7 Oct 83 15:38:14-PDT (Fri)
From: harpo!floyd!vax135!ariel!norm @ Ucb-Vax
Subject: Re: life is but a dream
Article-I.D.: ariel.482

Re Michael Massimilla's idea (not original, of course) that consciousness
and self-awareness are ILLUSIONS: where did he get the concept of ILLUSION?
The stolen concept fallacy strikes again! This fallacy is that of using
a concept while denying its genetic roots. See back issues of The
Objectivist for a discussion of this fallacy.

--Norm on ariel, Holmdel, N.J.

------------------------------

Date: 7 Oct 83 11:17:36-PDT (Fri)
From: ihnp4!ihuxr!lew @ Ucb-Vax
Subject: life is but a dream
Article-I.D.: ihuxr.690

Michael Massimilla informs us that consciousness and self-awareness are
ILLUSIONS. This is like saying "It's all in your mind." As Nietzsche said,
"One sometimes remains faithful to a cause simply because its opponents
do not cease to be insipid."


Lew Mammel, Jr. ihuxr!lew

------------------------------

Date: 5 Oct 83 1:07:31-PDT (Wed)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Rational Psychology
Article-I.D.: ncsu.2357


Someone's recent attempt to make the meaning of "Rational Psychology" seem
trivial misses the point a number of people have made in commenting on the
odd nature of the name. The reasoning was something like this:
1) rational "X" means the same thing in spite of what "X" is.
2) => rational psychology is a clear and simple thing
3) wake up guys, you're being dumb.

Well, I think this line misses at least one point. The argument above
is probably sound provided one accepts the initial premise, which I do
not necessarily accept. Another example of the logic may help:
1) Brute force elaboration solves problems of set membership, e.g. just
look at the item and compare it with every member of the set. This
is a true statement for a wide range of possible sets.
2) Real numbers are a kind of set.
3) Wake up Cantor, you're wasting (or have wasted) your time.
It seems quite clear that in the latter example, the premise is naive and
simply fails to apply to sets of infinite proportions. (Or, more properly,
one must go to some effort to justify such use.)
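
The argument Cantor actually gave is worth recalling here; a brief
reconstruction in LaTeX (standard diagonalization, not from the
original posting): no enumeration r_1, r_2, r_3, ... of the reals in
(0,1) can be complete, so "compare with every member" can never
settle membership.

    % Write each listed real in decimal, r_i = 0.d_{i1} d_{i2} d_{i3} ...,
    % and build s = 0.s_1 s_2 s_3 ... by changing every diagonal digit:
    \[
      s_i =
      \begin{cases}
        5 & \text{if } d_{ii} \neq 5, \\
        6 & \text{if } d_{ii} = 5.
      \end{cases}
    \]
    % Then s differs from r_i at the i-th digit for every i, so s is a
    % real in (0,1) missing from the list: the reals are uncountable,
    % and no brute-force enumeration can decide membership in them.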

The same issue applies to the notion of Rational Psychology. Does it make
sense to attempt to apply techniques which may be completely inadequate?
Rational analysis may fail completely to explain the workings of the mind,
especially when we are looking at the "non-analytic" capabilities that are
implied by psychology. We are on the edge of a philosophical debate, with
terms like "dualism" and "physicalism" etc. marking out party lines.

It may be just as ridiculous to some people to propose a rational study
of psychology as it seems to most of us to use finite analysis to deal
with transfinite cardinalities, or as it seems to some people to propose
to explain the mind via physics alone. Clearly, the people who expect
rational analytic methods to be fruitful in the field of psychology are
welcome to coin a new name for themselves. But if they, or anyone else,
have really "got it now", please write a dissertation on the subject and
enter history alongside Kant, St. Thomas Aquinas, Kierkegaard ....
----GaryFostel----

------------------------------

Date: 4 Oct 83 8:54:09-PDT (Tue)
From: decvax!linus!philabs!seismo!rlgvax!cvl!umcp-cs!velu @ Ucb-Vax
Subject: Rational Psychology - Gary Fostel's message
Article-I.D.: umcp-cs.2953

Unfortunately, however, many pet theories in Physics have come about as
inspirations, and not from the "technical origins" as you have stated!
(What is a "technical origin", anyway????)

As I see it, in any science a pet theory is a combination of insight,
inspiration, and a knowledge of the laws governing that field. If we
just went by known facts, and did not dream on, we would not have
gotten anywhere!

- Velu
-----
Velu Sinha, U of MD, College Park
UUCP: {seismo,allegra,brl-bmd}!umcp-cs!velu
CSNet: velu@umcp-cs ARPA: velu.umcp-cs@UDel-Relay

------------------------------

Date: 6 Oct 83 12:00:15-PDT (Thu)
From: decvax!duke!unc!mcnc!ncsu!fostel @ Ucb-Vax
Subject: RE: Intuition in Physics
Article-I.D.: ncsu.2360


Some few days ago I suggested that there was something "different"
about psychology and tried to draw a distinction between the flash
of insight or the pet theory in physics as compared to psychology.

Well, someone else commented on the original in a way that suggested
I missed the mark in my original effort to make it clear. One more time:

I presume that at birth, one's mind is not predisposed to one or another
of several possible theories of heavy molecule collision (for example).
Further, I think it unlikely that personal or emotional interaction in
one's "pre-analytic" stage (see anything about developmental psych.) is
likely to bear upon one's opinions about those molecules. In fact I
find it hard to believe that anything BUT technical learning is likely
to bear on one's intuition about the molecules. One might want to argue
that one's personality might force one to lean towards "aggressive" or
overly complex theories, but I doubt that such effects will lead to the
creation of a theory: a rather mild predisposition at best.

In psychology it is entirely different. A person who is aggressive has
lots of reasons to assume everyone else is as well. Or paranoid, or
that rote learning is especially good or bad, or that large dogs are
dangerous, or a number of other things that bear directly on one's
theories of the mind. And these biases are acquired from the process
of living and are quite unavoidable. This is not technical learning.
The effect is that even in the face of considerable technical learning,
one's intuition or "pet theories" in psychology might be heavily
influenced, in creation of the theory as well as in selection, by one's
life experiences, possibly to the exclusion of one's technical opinions.
(Who knows what goes on in the subconscious.) While one does not
encounter heavy molecules often in one's everyday life or one's
childhood, one DOES encounter other people and, more significantly,
one's own mind.

It seems clear that intuition in physics is based upon a different sort
of knowledge than intuition about psychology. The latter is a combination
of technical AND everyday intuition while the former is not.
----GaryFostel----

------------------------------

End of AIList Digest
********************
