AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 147 

Today's Topics:
Theory - Symbol Grounding and Physical Invertibility

----------------------------------------------------------------------

Date: 11 Jun 87 21:15:31 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <828@mind.UUCP> Stevan Harnad <harnad@mind.UUCP> writes:
>
>> There's no [symbol] grounding problem, just the old
>> behavior-generating problem
> There is:
>(1) the behavior-generating problem (what I have referred to as the problem of
>devising a candidate that will pass the Total Turing Test), (2) the
>symbol-grounding problem (the problem of how to make formal symbols
>intrinsically meaningful, independent of our interpretations), and (3)
>the conjecture (based on the existing empirical evidence and on
>logical and methodological considerations) that (2) is responsible for
>the failure of the top-down symbolic approach to solve (1).

It seems to me that in different places, you are arguing the relation between
(1) and (2) in both directions, claiming both

(A) The symbols in a purely symbolic system will always be
ungrounded because such systems can't generate real performance;
and
(B) A purely symbolic system can't generate real performance because
its symbols will always be ungrounded.

That is, when I ask you why you think the symbolic approach won't work, one
of your reasons is always "because it can't solve the grounding problem", but
when I press you for why the symbolic approach can't solve the grounding
problem, it always turns out to be "because I think it won't work." I think
we should get straight on the priority here.

It seems to me that, contra (3), thesis (A) is the one that makes perfect
sense -- in fact, it's what I thought you were saying. I just don't
understand (B) at all.

To elaborate: I presume the "symbol-grounding" problem is a *philosophical*
question: what gives formal symbols original intentionality? I suppose the
only answer anybody knows is, in brief, that the symbols must be playing a
certain role in what Dennett calls an "intentional system", that is, a system
which is capable of producing complex, adaptive behavior in a rational way.

Since such a system must be able to respond to changes in its environment,
this answer has the interesting consequence that causal interaction with the
world is a *necessary* condition for original intentionality. It tells us
that symbols in a disconnected computer, without sense organs, could never be
"grounded" or intrinsically meaningful. But those in a machine that can
sense and react could be, provided the machine exhibited the requisite
rationality.

And this, as far as I can tell, is the end of what we learn from the "symbol
grounding" problem -- you've got to have sense organs. For a system that is
not causally isolated from the environment, the symbol-grounding problem now
just reduces to the old behavior-generating problem, for, if we could just
produce the behavior, there would be no question of the intentionality of the
symbols. In other words, once we've wised up enough to recognize that we must
include sensory systems (as symbolic AI has), we have completely disposed of
the "symbol grounding" problem, and all that's left to worry about is the
question of what kind of system can produce the requisite intelligent
behavior. That is, all that's left is the old behavior-generating problem.

Now as I've indicated, I think it's perfectly reasonable to suspect that the
symbolic approach is insufficient to produce full human performance. You
really don't have to issue any polemics on this point to me; such a suspicion
could well be justified by pointing out the triviality of AI's performance
achievements to date.

What I *don't* see is any more "principled" or "logical" or "methodological"
reason for such a suspicion; in particular, I don't understand how (B) could
provide such a reason. My system can't produce intelligent performance
because it doesn't make its symbols meaningful? This statement has just got
things backwards -- if I could produce the behavior, you'd have to admit that
its symbols had all the "grounding" they needed for original intentionality.

In sum, apart from the considerations that require causal embedding, I don't
see that there *is* any "symbol-grounding" problem, at least not any problem
that is any different from the old "total-performance generating" problem.
For this reason, I think your animadversions on symbol grounding are largely
irrelevant to your position -- the really substantial claims pertain only to
"what looks like it's likely to work" for generating intelligent behavior.

On a more specific issue:
>
>You've bypassed the three points I brought up in replying to your
>challenge to my invertibility criterion for an analog transform the
>last time: (1) the quantization in standard A/D is noninvertible,

Yes, but *my* point has been that since there isn't necessarily any more loss
here than there is in a typical A/A transformation, the "degree of
invertibility" criterion cross-cuts the intuitive A/D distinction.

Look, suppose we had a digitized image, A, which is of much higher resolution
than another analog one, B. A is more invertible since it contains more
detail from which to reconstruct the original signal, but B is more
"shape-preserving" in an intuitive sense. So, which do you regard as "more
analog"
? Which does your theory think is better suited to subserving our
categorization performance? If you say B, then invertibility is just not what
you're after.
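
A toy numerical sketch of the point (the signal values and the quantizer are
made up purely for illustration): quantization is a many-to-one map, and how
much is lost -- the "degree of invertibility" -- is a matter of resolution
rather than of the A/D distinction itself.

# Illustrative sketch only: quantization as a many-to-one map whose
# "degree of invertibility" depends on resolution, not on being digital per se.

def quantize(signal, levels):
    """Round each sample to the nearest of `levels` evenly spaced values in [0, 1]."""
    step = 1.0 / (levels - 1)
    return [round(x / step) * step for x in signal]

signal = [0.12, 0.37, 0.38, 0.90]

coarse = quantize(signal, levels=4)     # collapses 0.37 and 0.38: information lost
fine   = quantize(signal, levels=1024)  # nearly invertible in practice

coarse_error = max(abs(a - b) for a, b in zip(signal, coarse))
fine_error   = max(abs(a - b) for a, b in zip(signal, fine))

print(coarse, coarse_error)   # large round-trip error: less invertible
print(fine, fine_error)       # small round-trip error: more invertible

On that showing, a high-resolution digitized image can be "more invertible"
than a blurry analog one, which is just the A-versus-B comparison above.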

Anders Weinstein
BBN Labs

------------------------------

Date: 12 Jun 87 08:16:00 EST
From: cugini@icst-ecf.arpa
Reply-to: <cugini@icst-ecf.arpa>
Subject: symbol grounding and physical invertibility


S. Harnad replies:

> According to my view, invertibility (and perhaps inversion)
> captures just the relevant features of causation and resemblance that
> are needed to ground symbols. The relation is between the proximal
> projection (of a distal object) onto the sensory surfaces -- let's
> call it P -- and an invertible transformation of that projection [I(P)].
> The latter is what I call the "iconic representation." Note that the
> invertibility is with the sensory projection, *not* the distal object. I
> don't believe in distal magic. My grounding scheme begins at the
> sensory surfaces ("skin and in"). No "wider" metaphysical causality is
> involved, just narrow, local causality.

Well, OK, glad you clarified that - I think there are issues here
about the difference between grounding symbols in causation emanating
from distal objects vs. grounding them in proximal sensory surfaces -
(optical illusions, hallucinations, etc.) but let's pass over that
for now.

It still doesn't seem clear why invertibility should be necessary
for grounding (although it may be sufficient). Frinstance, suppose
we humans, or a robot, had four kinds of color receptors lurking
behind our retinas (retinae?), which responded to red, green,
blue and yellow wavelengths. And further suppose that stimulating
the yellow receptors alone produced the same iconic representation
as stimulating the red and green ones - i.e., both were experienced
as plain old yellow, nor could the experiencer in any way
distinguish between the yellows caused by the two different
stimulations. (A fortiori, the experiencer would certainly not
have more than one categorical representation, nor symbol for
such experiences.) In short, suppose that some information was
lost on the way in from the sensory surface, so we had a many
to one (hence non-invertible) mapping.

Would you then want to say that the symbol "yellow" was not grounded
for such a being?
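
To make the thought experiment concrete, here is a minimal sketch (the
receptor model and the labels are invented for illustration) of the
many-to-one mapping I have in mind:

# Toy illustration: two distinct patterns of receptor stimulation collapse
# onto one iconic representation, so the sensory mapping is many-to-one
# and hence non-invertible.

def iconic_representation(red, green, blue, yellow):
    """Collapse receptor activity into a perceived-hue label.
    Yellow receptors alone, or red+green together, both yield 'yellow'."""
    if yellow > 0 or (red > 0 and green > 0):
        return "yellow-experience"
    if red > 0:
        return "red-experience"
    if green > 0:
        return "green-experience"
    if blue > 0:
        return "blue-experience"
    return "no-color"

# Two different proximal stimulations...
a = iconic_representation(red=1, green=1, blue=0, yellow=0)
b = iconic_representation(red=0, green=0, blue=0, yellow=1)

# ...are indistinguishable from the inside: the mapping cannot be inverted.
assert a == b == "yellow-experience"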

John Cugini <Cugini@icst-ecf.arpa>

------------------------------

Date: 12 Jun 87 15:52:40 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


marty1@houdi.UUCP (M.BRILLIANT) of AT&T Bell Laboratories, Holmdel writes:

> Human visual processing is neither analog nor invertible.

Nor is it understood nearly well enough to draw either of those conclusions,
it seems to me. If you are taking the discreteness of neurons, the
all-or-none nature of the action potential, and the transformation of
stimulus intensity to firing frequency as your basis for concluding
that visual processing is "digital," the basis is weak, and the
analogy with electronic transduction strained.

As the (unresolved) discussion of the logical basis of the A/D distinction
last year indicated, nature itself may not be continuous, but
quantized. This would make continuity-based definitions of A/D moot.
If discrete photons strike discrete photoreceptors, then discontinuity
is being transformed into discontinuity. Yet the question can still be
asked: Is the transformation preserving physical properties such as
intensity and spatial relations by transforming them to physical
properties that are isomorphic to them (e.g., intensity to frequency,
and spatial adjacency to spatial adjacency) as opposed to merely
"standing for" them in some binary code?

There is also the question of postsynaptic potentials, which, unlike
the all-or-none action potentials, are graded (to within the
pharmacological quantum of a neurotransmitter packet). What if
significant aspects of vision are coded at that level as fields or
gradients and their interactions? Or at the level of local or distributed
patterns of connectivity? Or at the chemical level? We don't even know
how to match up the various resolution-levels or "grains" of the inputs and
transformations involved: light quanta, neural quanta, psychophysical
quanta. What is discrete and above-threshold at one level may become
blurred, "continuous" and below-threshold at another.

> what is the intrinsic meaning of "intrinsically meaningful"?
> The Turing test is an objectively verifiable criterion. How can
> we objectively verify intrinsic meaningfulness?

We cannot objectively verify intrinsic meaningfulness. The Turing test
is the only available criterion. Yet we can make inferences about it
(for example, that it is unlikely to be present in a thermostat or
lisp code running on a vax). And we have some direct (but subjective)
evidence that it exists in at least one case (namely, our own): We
know the difference between looking up a meaning in an English/English
dictionary versus a Chinese/Chinese dictionary (if we are nonspeakers
of Chinese): The former symbols are meaningful and the latter are
not. We also know that we could "ground" an understanding of Chinese
(by translation) in our prior understanding of English; and we assume
that our understanding of English is grounded in our prior perceptual
learning and understanding of categories in the real world of
objects. Objective evidence of this perceptual grounding is provided
by our ability to discriminate, manipulate, categorize, name and
describe real-world objects and our ability to produce and respond to
names and descriptions meaningfully (i.e., all Turing criteria).

So the empirical question becomes the following: Is a device that has
nothing but symbols and can only manipulate them on the basis of their
shape more likely to be like our own (intrinsically grounded) case, or
more like the Chinese/Chinese dictionary, whose meanings can only be
derived by the mediation of an intrinsically grounded system like our own?

But the issue is ultimately empirical. The logical and methodological
considerations can really only serve to motivate pursuing one empirical
hypothesis rather than another (e.g., top-down symbolic vs. bottom-up
hybrid). The final arbiter is the Total Turing Test. If a pure symbolic
module linked to transducers and effectors turns out to be able to
generate all of our performance capacity, then the grounding problem and
intrinsic intentionality were red herrings. As I make clear in the
paper "Minds, Machines and Searle," this is an empirical, not a
logical question. But on the evidence to date, this outcome looks
highly unlikely, and the obstacle seems to be the problem of bottom-up
grounding of symbols in nonsymbolic representations and in the real world
of objects.

> Using "analog" to mean "invertible" invites misunderstanding,
> which invites irrelevant criticism.

I have tried to capture with the invertibility criterion certain
features that may be important (perhaps even unique) to the case of
cognitive modeling -- features that fail to be captured by the
conventional electrical engineering criteria. I have acknowledged all
along that the physically invertible/noninvertible distinction may
turn out to be independent of the A/D distinction, although the
overlap looks significant. And I'm doing my best to sort out the
misunderstandings and irrelevant criticism...

> Human (in general, vertebrate) visual processing is a dedicated
> hardwired digital system. It employs data reduction to abstract such
> features as motion, edges, and orientation of edges. It then forms a
> map in which position is crudely analog to the visual plane, but
> quantized. This map is sufficiently similar to maps used in image
> processing machines so that I can almost imagine how symbols could be
> generated from it.

I am surprised that you state this with such confidence. In
particular, do you really think that vertebrate vision is well enough
understood functionally to draw such conclusions? And are you sure
that the current hardware and signal-analytic concepts from electrical
engineering are adequate to apply to what we do know of visual
neurobiology, rather than being prima facie metaphors?

> By the time it gets to perception, it is not invertible, except with
> respect to what is perceived. Noninvertibility is demonstrated in
> experiments in the identification of suspects. Witnesses can report
> what they perceive, but they don't always perceive enough to invert
> the perceived image and identify the object that gave rise to the
> perception. If you don't agree, please give a concrete, objectively
> verifiable definition of "invertibility" that can be used to refute my
> conclusion. If I am right, human intelligence itself relies on neither
> analog nor invertible symbol grounding, and therefore artificial
> intelligence does not require it.

I cannot follow your argument at all. Inability to categorize and identify
is indeed evidence of a form of noninvertibility. But my theory never laid
claim to complete invertibility throughout. (For the disadvantages of
"total invertibility," see Luria's "The Mind of a Mnemonist," or, for a more
literary depiction of the same problem, Borges's "Funes the Memorious." Both
are discussed in a chapter of mine entitled "Metaphor and Mental Duality"
in Simon & Sholes [eds], "Language, Mind and Brain," Academic Press 1978.
See also the literature on eidetic imagery.) Categorization and identification
itself *requires* selective non-invertibility: within-category differences
must be ignored and diminished, while between-category differences must
be selected and enhanced.
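
A minimal sketch of what I mean by selective non-invertibility (the category
boundary and the inputs are invented for illustration):

# Illustrative sketch: a categorical representation deliberately throws away
# within-category detail while preserving the between-category distinction.

def categorize(wavelength_nm):
    """Map a continuum of wavelengths onto two discrete color categories.
    Many distinct inputs within a band collapse onto one label."""
    return "red" if wavelength_nm >= 600 else "yellow"

# Within-category differences are ignored (non-invertible)...
assert categorize(610) == categorize(650) == "red"

# ...while the between-category difference is preserved and sharpened.
assert categorize(580) != categorize(610)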

Although I do my best, it is not always possible to get all the relevant
background material for these Net discussions onto the Net. Sometimes I
must reluctantly refer discussants to a fuller text elsewhere. In
principle, though, I'm prepared to re-present any particular piece of
relevant material here. This particular misunderstanding, though,
sounds like it would call for the exposition of my entire theory of
categorization, which I am reluctant to impose on the entire Net
without a wider demand. So let me just say that invertibility is my
provisional criterion for what counts as an analog transformation, and
that I have claimed that symbolic representations must be grounded in
nonsymbolic ones, which include both invertible (iconic) and
noninvertible (categorical) representations.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: Sun 14 Jun 87 16:42:34-PDT
From: Ken Laws <Laws@Stripe.SRI.Com>
Reply-to: AIList-Request@STRIPE.SRI.COM
Subject: [mind!harnad@princeton.edu (Stevan Harnad): Re: The symbol
grounding problem]

Date: 12 Jun 87 15:52:40 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)

If discrete photons strike discrete photoreceptors, then discontinuity
is being transformed into discontinuity. Yet the question can still be
asked: Is the transformation preserving physical properties such as
intensity and spatial relations by transforming them to physical
properties that are isomorphic to them (e.g., intensity to frequency,
and spatial adjacency to spatial adjacency) as opposed to merely
"standing for" them in some binary code?

This makes me uncomfortable. Consider a "hash transformation" that
maps a set of "intuitively meaningful" numeric symbols to a set of
seemingly random binary codes. Suppose that the transformation
can be computed by some [horrendous] information-preserving
mapping of the reals to the reals. Now, the hash function satisfies
my notion of an analog transformation (in the signal-processing sense).
When applied to my discrete input set, however, the mapping does not
seem to be analog (in the sense of preserving isomorphic relationships
between pairs -- or higher orders -- of symbolic codes). Since
information has not been lost, however, it should be possible to
define "relational functions" that are analogous to "adjacency" and
other properties in the original domain. Once this is done, surely
the binary codes must be viewed as isomorphic to the original symbols
rather than just "standing for them".

The "information" in a signal is a function of your methods for
extracting and interpreting the information. Likewise the "analog
nature"
of an information-preserving transformation is a function
of your methods for decoding the analog relationships.

We should also keep in mind that information theorists have advanced
a great deal since the days of Shannon. Perhaps they have too limited
(or general!) a view of information, but they have certainly considered
your problem of decoding signal shape (as opposed to detecting modulation
patterns). I regret that I am not familiar with their results, but
I am sure that methods for decoding both discrete and continuous
information in continuous signals are well studied. Not that all
the answers are in -- vision workers like myself are well aware that
there can be [obvious] information in a signal that is impossible to
extract without a good model of the generating process.

-- Ken

------------------------------

End of AIList Digest
********************
