AIList Digest           Wednesday, 1 Jul 1987     Volume 5 : Issue 164 

Today's Topics:
Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 30 Jun 87 00:19:12 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:

> how about walking through what a machine might do in perceiving a chair?
> ...let a machine train its camera on that object. Now either it
> has a mechanical array of receptors and processors, like the layers
> of cells in a retina, or it does a functionally equivalent thing with
> sequential processing. What it has to do is compare the brightness of
> neighboring points to find places where there is contrast, find
> contrast in contiguous places so as to form an outline, and find
> closed outlines to form objects... Now the machine has the outline
> of an object in 2 dimensions, and maybe some clues to the 3rd
> dimension... inductively find a 3D form that would give rise to the
> 2D view the machine just saw... Then, if the object is really
> unfamiliar, let the machine walk around the chair, or pick it
> up and turn it around, to refine its hypothesis.

So far, apart from its understandable bias toward current engineering hardware
concepts, there is no particular objection to this description of a
stereoptic sensory receptor.
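
To make the quoted pipeline a little more concrete, here is a minimal sketch of
just its first two steps -- comparing the brightness of neighbouring points to
find contrast, and chaining contiguous contrast points into candidate outlines.
The image format, the threshold, and the function names are invented for
illustration; the 3D induction and the walking-around steps are not attempted.

    # Illustrative sketch only: brightness contrast -> contiguous outlines.
    # `image` is assumed to be a list of rows of floats in the range 0..1.

    def contrast_points(image, threshold=0.2):
        """Mark points whose brightness differs from a neighbour by more
        than `threshold`."""
        rows, cols = len(image), len(image[0])
        edges = set()
        for r in range(rows):
            for c in range(cols):
                for dr, dc in ((0, 1), (1, 0)):
                    nr, nc = r + dr, c + dc
                    if (nr < rows and nc < cols and
                            abs(image[r][c] - image[nr][nc]) > threshold):
                        edges.add((r, c))
        return edges

    def trace_outlines(edges):
        """Group contiguous contrast points into candidate outlines."""
        outlines, seen = [], set()
        for start in edges:
            if start in seen:
                continue
            stack, outline = [start], []
            while stack:
                r, c = stack.pop()
                if (r, c) in seen:
                    continue
                seen.add((r, c))
                outline.append((r, c))
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        if (r + dr, c + dc) in edges:
                            stack.append((r + dr, c + dc))
            outlines.append(outline)
        return outlines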

> Now the machine has a form. If the form is still unfamiliar,
> let it ask, "What's that, Daddy?" Daddy says, "That's a chair."
> The machine files that information away. Next time it sees a
> similar form it says "Chair, Daddy, chair!" It still has to
> learn about upholstered chairs, but give it time.

Now you've lost me completely. Having acknowledged the intricacies of
sensory transduction, you seem to think that the problem of categorization
is just a matter of filing information away and finding "similar forms."

> do you really want this machine to be so Totally Turing that it
> grows like a human, learns like a human, and not only learns new
> objects, but, like a human born at age zero, learns how to perceive
> objects? How much of its abilities do you want to have wired in,
> and how much learned?

That's an empirical question. All it needs to do is pass the Total
Turing Test -- i.e., exhibit performance capacities that are
indistinguishable from ours. If you can do it by building everything
in a priori, go ahead. I'm betting it'll need to learn -- or be able to
learn -- a lot.

> But back to the main question. I have skipped over a lot of
> detail, but I think the outline can in principle be filled in
> with technologies we can imagine even if we do not have them.
> How much agreement do we have with this scenario? What are
> the points of disagreement?

I think the main details are missing, such as how the successful
categorization is accomplished. Your account also sounds as if it
expects innate feature detectors to pick out objects for free, more or
less nonproblematically, and then serve as a front end for another
device (possibly a conventional symbol-cruncher a la standard AI?)
that will then do the cognitive heavy work. I think that the cognitive
heavy work begins with picking out objects, i.e., with categorization.
I think this is done nonsymbolically, on the sensory traces, and that it
involves learning and pattern recognition -- both sophisticated
cognitive activities. I also do not think this work ends, to be taken
over by another kind of work: symbolic processing. I think that ALL of
cognition can be seen as categorization. It begins nonsymbolically,
with sensory features used to sort objects according to their names on
the basis of category learning; then further sorting proceeds by symbolic
descriptions, based on combinations of those atomic names. This hybrid
nonsymbolic/symbolic categorizer is what we are; not a pair of modules,
one that picks out objects and the other that thinks and talks about them.
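
Read purely as an architecture, the claim might be sketched roughly as follows,
under the strong simplifying assumptions that "sensory features" can be treated
as numeric vectors, that category learning amounts to nearest-centroid
classification, and that a symbolic description is just a boolean combination
of atomic names. The composite category below is invented for illustration and
is not from the posting.

    # Nonsymbolic step: learn atomic categories from labelled sensory vectors.
    def learn_categories(labelled_examples):
        """Average the feature vectors seen under each atomic name."""
        sums, counts = {}, {}
        for name, features in labelled_examples:
            sums.setdefault(name, [0.0] * len(features))
            sums[name] = [s + f for s, f in zip(sums[name], features)]
            counts[name] = counts.get(name, 0) + 1
        return {name: [s / counts[name] for s in sums[name]] for name in sums}

    def atomic_name(features, centroids):
        """Sort a sensory trace under its nearest learned atomic name."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda name: dist(features, centroids[name]))

    # Symbolic step: further categories defined only as combinations of
    # atomic names already grounded by the step above.
    symbolic_definitions = {
        "upholstered chair": lambda names: "chair" in names and "upholstered" in names,
    }
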
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 30 Jun 87 20:52:21 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem

In reply to my statement that
>> the *semantic* meaning of a symbol is still left largely unconstrained
>> even after you take account of its "grounding" in perceptual
>> categorization. This is because what matters for intentional content
>> is not the objective property in the world that's being detected, but
>> rather how the subject *conceives* of that external property, a far
>> more slippery notion...

Stevan Harnad (harnad@mind.UUCP) writes:
>
> As to what people "conceive" themselves to be categorizing: My model
> is proposed in a framework of methodological epiphenomenalism. I'm
> interested in what's going on in people's heads only inasmuch as it is
> REALLY generating their performance, not just because they think or
> feel it is.

I regret the subjectivistic tone of my loose characterization; what people
can introspect is indeed not at issue. I was merely pointing out that the
*meaning* of a symbol is crucially dependent on the rest of the cognitive
system, as shown in the Churchland's example:

>> ... primitive people may be able to reliably
>> categorize certain large-scale atmospheric electrical discharges;
>> nevertheless, the semantic content of their corresponding states might
>> be "Angry gods nearby" or some such.
>>
> ... "Angry gods nearby" is not just an atomic label for
> "thunder" (otherwise it WOULD be equivalent to it in my model -- both
> labels would pick out approximately the same thing); in fact, it is
> decomposable, and hence has a different meaning in virtue of the
> meanings of "angry" and "gods." There should be corresponding internal
> representational differences (iconic, categorical and symbolic) that
> capture that difference.

"Angry gods nearby" is composite in *English*, but it need not be composite
in the native's language or, more to the point, in the supposed inner language of the
native's categorical mechanisms. They may have a single word, say "gog",
which we would want to translate as "god-noise" or some such. Perhaps they
train their children to detect gog in precisely the same way we train
children to detect thunder -- our internal thunder-detectors are identical.
Nevertheless, the output of their thunder-detector does not *mean* "thunder".

Let me try to clarify the point of these considerations. I am all for an
inquiry into the mechanisms underlying our categorization abilities. Anything
you can discover about these mechanisms would certainly be a major
contribution to psychology. My only concern is with semantics: I was piqued
by what seemed to be an ambitious claim about the significance of the
psychology of categorization for the problem of "intentionality" or intrinsic
meaningfulness. I merely want to emphasize that the former, interesting
though it is, hardly makes a dent in the latter.

As I said, there are two reasons why meaning resists explication by this kind
of psychology: (1) holism: the meaning of even a "grounded" symbol will
still depend on the rest of the cognitive system; and (2) normativity:
meaning is dependent upon a determination of what is a *correct* response,
and you can't simply read such a norm off from a description of how the
mechanism in fact performs.

I think these points, particularly (1), should be quite clear. The fact that
a subject's brain reliably asserts the symbol "foo" when and only when
thunder is presented in no way "fixes" the meaning of "foo". Of course it is
obviously a *constraint* on what "foo" may mean: it is in fact part of what
Quine called the "stimulus meaning" of "foo", his first constraint on
acceptable translation. Nevertheless, by itself it is still way too weak to
do the whole job, for in different contexts the positive output of a reliable
thunder-detector could mean "thunder", something co-extensive but
non-synonymous with "thunder", "god-noise", or just about anything else.
Indeed, it might not *mean* anything at all, if it were only part of a
mechanical thunder-detector which couldn't do anything else.

I wonder if you disagree with this?

As to normativity, the force of problem (2) is particularly acute when
talking about the supposed intentionality of animals, since there aren't any
obvious linguistic or intellectual norms that they are trying to adhere to.
Although the mechanics of a frog's prey-detector may be crystal clear, I am
convinced that we could easily get into an endless debate about what, if
anything, the output of this detector really *means*.

The normativity problem is germane in an interesting way to the problem of
human meanings as well. Note, for example, that in doing this sort of
psychology, we probably won't care about the difference between correctly
identifying a duck and mis-identifying a good decoy -- we're interested in
the perceptual mechanisms that are the same in both cases. In effect, we are
limiting our notion of "categorization" to something like "quick and largely
automatic classification by observation alone".

We pretty much *have* to restrict ourselves in this way, because, in the
general case, there's just no limit to the amount of cognitive activity that
might be required in order to positively classify something. Consider what
might go into deciding whether a dolphin ought to be classified as a fish,
whether a fetus ought to be classified as a person, etc. These decisions
potentially call for the full range of science and philosophy, and a
psychology which tries to encompass such decisions has just bitten off more
than it can chew: it would have to provide a comprehensive theory of
rationality, and such an ambitious theory has eluded philosophers for some
time now.

In short, we have to ignore some normative distinctions if we are to
circumscribe the area of inquiry to a theoretically tractable domain of
cognitive activity. (Indeed, in spite of some of your claims, we seem
committed to the notion that we are limiting ourselves to particular
*modules* as explained in Fodor's modularity book.) Unfortunately -- and
here's the rub -- these normative distinctions *are* significant for the
*meaning* of symbols. ("Duck" doesn't *mean* the same thing as "decoy").

It seems that, ultimately, the notion of *meaning* is intimately tied to
standards of rationality that cannot easily be reduced to simple features of
a cognitive mechanism. And this seems to be a deep reason why a descriptive
psychology of categorization barely touches the problem of intentionality.

Anders Weinstein
BBN Labs

------------------------------

Date: 30 Jun 87 19:02:28 GMT
From: teknowledge-vaxc!dgordon@beaver.cs.washington.edu (Dan Gordon)
Subject: Re: The symbol grounding problem: Against Rosch & Wittgenstein

In article <931@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>(And I must repeat: Whether or not we can introspectively report the features
>we are actually using is irrelevant. As long as reliable, consensual,
>all-or-none categorization performance is going on, there must be a set of
>underlying features governing it -- both with sensory and more

Is this so? There is no reliable, consensual all-or-none categorization
performance without a set of underlying features? That sounds like a
restatement of the categorization theorist's credo rather than a thing
that is so.

Dan Gordon

------------------------------

Date: 30 Jun 87 20:49:32 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <937@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> ...
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:
> > how about walking through what a machine might do in perceiving a chair?
> > ... (a few steps skipped here)
> > Now the machine has a form. If the form is still unfamiliar,
> > let it ask, "What's that, Daddy?" Daddy says, "That's a chair."
> > The machine files that information away. Next time it sees a
> > similar form it says "Chair, Daddy, chair!" ...
>
> Now you've lost me completely. Having acknowledged the intricacies of
> sensory transduction, you seem to think that the problem of categorization
> is just a matter of filing information away and finding "similar forms."

I think it is. We've found a set of lines, described in 3 dimensions,
that can be rotated to match the outline we derived from the view of a
real chair. We file it in association with the name "chair." A
"similar form" is some other outline that can be matched (to within
some fraction of its size) by rotating the same 3D description.
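
For what it is worth, that matching step could be sketched as follows, with
heavy simplifications: the stored form is a set of 3D points, the outline a set
of 2D points, only rotations about the vertical axis are tried, the size
tolerance is ignored, and the fit score is a crude nearest-point distance. The
names are invented for illustration.

    import math

    def rotate_y(points, angle):
        """Rotate 3D points about the vertical axis."""
        c, s = math.cos(angle), math.sin(angle)
        return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

    def project(points):
        """Orthographic projection onto the image plane (drop depth)."""
        return [(x, y) for x, y, z in points]

    def fit_score(outline, projected):
        """Sum of distances from outline points to nearest projected points."""
        return sum(min(math.hypot(ox - px, oy - py) for px, py in projected)
                   for ox, oy in outline)

    def best_rotation(outline, form, steps=36):
        """Try a range of rotations of the stored 3D form; return the best."""
        angles = [2 * math.pi * i / steps for i in range(steps)]
        return min(angles,
                   key=lambda a: fit_score(outline, project(rotate_y(form, a))))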

> I think the main details are missing, such as how the successful
> categorization is accomplished......

Are we having a problem with the word "categorization"? Is it the
process of picking discrete objects out of a pattern of light and
shade ("that's a thing"), or the process of naming the object ("that
thing is a chair")?

> ..... Your account also sounds as if it
> expects innate feature detectors to pick out objects for free, more or
> less nonproblematically.....

You left out the part where I referred to computer-aided-design
modules. I think we can find outlines by looking for contiguous
contrasts. If the outlines are straight we (the machine, maybe also
humans) can define the ends of the straight lines in the visual plane,
and hypothesize corresponding lines in space. If hard-coding this
capability gives an "innate feature detector" then that's what I want.
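
The "hypothesize corresponding lines in space" step could be sketched like
this, assuming a pinhole camera with focal length f: an endpoint in the visual
plane fixes only a ray in space, so each straight image line yields a family of
candidate 3D segments parameterised by the unknown depths of its ends. The
camera model and names are assumptions made here for illustration.

    def back_project(u, v, depth, f=1.0):
        """Map an image-plane point (u, v) at an assumed depth to a 3D point."""
        return (u * depth / f, v * depth / f, depth)

    def hypothesize_line(end1, end2, depth1, depth2, f=1.0):
        """One candidate 3D segment for an image segment, given guessed depths."""
        return (back_project(*end1, depth1, f),
                back_project(*end2, depth2, f))

    # e.g. hypothesize_line((0.1, 0.2), (0.1, 0.5), 2.0, 2.0) is the segment
    # obtained if both endpoints are assumed to lie 2 units from the camera.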

> ...... and then serve as a front end for another
> device (possibly a conventional symbol-cruncher a la standard AI?)
> that will then do the cognitive heavy work. I think that the cognitive
> heavy work begins with picking out objects, i.e., with categorization.

I think I find objects with no conscious knowledge of how I do it (is
that what you call "categorization")? Saying what kind of object it is
more often involves conscious symbol-processing (sometimes one forgets
the word and calls a perfectly familiar object "that thing").

> I think this is done nonsymbolically, on the sensory traces, and that it
> involves learning and pattern recognition -- both sophisticated
> cognitive activities.

If you're talking about finding objects in a field of light and shade, I
agree that it is done nonsymbolically, and everything else you just said.

> ..... I also do not think this work ends, to be taken
> over by another kind of work: symbolic processing.....

That's where I have trouble. Calling a penguin a bird seems to me
purely symbolic, just as calling a tomato a vegetable in one context,
and a fruit in another, is a symbolic process.

> ..... I think that ALL of
> cognition can be seen as categorization. It begins nonsymbolically,
> with sensory features used to sort objects according to their names on
> the basis of category learning; then further sorting proceeds by symbolic
> descriptions, based on combinations of those atomic names. This hybrid
> nonsymbolic/symbolic categorizer is what we are; not a pair of modules,
> one that picks out objects and the other that thinks and talks about them.

Now I don't understand what you said. If it begins nonsymbolically,
and proceeds symbolically, why can't it be done by linking a
nonsymbolic module to a symbolic module?

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 30 Jun 87 19:47:08 GMT
From: ihnp4!homxb!houdi!marty1@ucbvax.Berkeley.EDU (M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <937@mind.UUCP>, harnad@mind.UUCP (Stevan Harnad) writes:
> marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel asks:
> ....
> > do you really want this machine to be so Totally Turing that it
> > grows like a human, learns like a human, and not only learns new
> > objects, but, like a human born at age zero, learns how to perceive
> > objects? How much of its abilities do you want to have wired in,
> > and how much learned?
>
> That's an empirical question. All it needs to do is pass the Total
> Turing Test -- i.e., exhibit performance capacities that are
> indistinguishable from ours. If you can do it by building everything
> in a priori, go ahead. I'm betting it'll need to learn -- or be able to
> learn -- a lot.

To refine the question: how long do you imagine the Total Turing Test
will last? Science fiction stories have robots or aliens living in
human society as humans for periods of years, as long as they live with
strangers, but failing after a few hours trying to supplant a human and
fool his or her spouse.

By "performance capabilities," do you mean the capability to adapt as a
human does to the experiences of a lifetime? Or only enough learning
capability to pass a job interview?

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

End of AIList Digest
********************
