AIList Digest            Tuesday, 16 Jun 1987     Volume 5 : Issue 148 

Today's Topics:
Theory - The Symbol Grounding Problem

----------------------------------------------------------------------

Date: 15 Jun 87 13:23:35 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem (Reply to Ken Laws on ailist)


Ken Laws <Laws@Stripe.SRI.Com> on ailist@Stripe.SRI.Com writes:

> Consider a "hash transformation" that maps a set of "intuitively
> meaningful"
numeric symbols to a set of seemingly random binary codes.
> Suppose that the transformation can be computed by some [horrendous]
> information-preserving mapping of the reals to the reals. Now, the
> hash function satisfies my notion of an analog transformation (in the
> signal-processing sense). When applied to my discrete input set,
> however, the mapping does not seem to be analog (in the sense of
> preserving isomorphic relationships between pairs -- or higher
> orders -- of symbolic codes). Since information has not been lost,
> however, it should be possible to define "relational functions" that
> are analogous to "adjacency" and other properties in the original
> domain. Once this is done, surely the binary codes must be viewed
> as isomorphic to the original symbols rather than just "standing for
> them"
.

I don't think I disagree with this. Don't forget that I bit the bullet
on some surprising consequences of taking my invertibility criterion
for an analog transform seriously. As long as the requisite
information-preserving mapping or "relational function" is in the head
of the human interpreter, you do not have an invertible (hence analog)
transformation. But as soon as the inverse function is wired in
physically, producing a dedicated invertible transformation, you do
have invertibility, even if a lot of the stuff in between is as
discrete, digital and binary as it can be.
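
A minimal Python sketch may make the point concrete. Everything in it is
illustrative: the keyed multiplication merely stands in for Laws's
"horrendous" hash, and the names (encode, decode, adjacent) are mine, not
anything proposed in this discussion.

    # Illustrative sketch: a "dedicated" invertible mapping whose intermediate
    # representation is as discrete and binary as it can be.
    KEY = 0b10110101                # fixed key, part of the hardwired system
    INV = pow(167, -1, 256)         # = 23, multiplicative inverse of 167 mod 256

    def encode(x):
        """Map a 'meaningful' symbol (0..255) to a seemingly random 8-bit code."""
        return ((x * 167) % 256) ^ KEY    # 167 is odd, so this map is a bijection

    def decode(c):
        """The physically wired-in inverse: recover the original symbol."""
        return ((c ^ KEY) * INV) % 256

    def adjacent(c1, c2):
        """A 'relational function' on the codes, defined via the wired-in
        inverse: two codes are adjacent iff their pre-images differ by 1."""
        return abs(decode(c1) - decode(c2)) == 1

    assert all(decode(encode(x)) == x for x in range(256))   # fully invertible
    assert adjacent(encode(41), encode(42))                  # adjacency recoverable

Once the inverse is wired in alongside the forward map, a relation such as
adjacency is recoverable from the binary codes themselves, with no appeal
to an outside interpreter.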

I'm not unaware of this counterintuitive property of the invertibility
criterion -- or even of the possibility that this property may ultimately
doom the criterion as an attempt to capture the essential feature of an analog transform in
general. Invertibility could fail to capture the standard A/D distinction,
but may be important in the special case of mind-modeling. Or it could
turn out not to be useful at all. (Although Ken Laws's point seems to
strengthen rather than weaken my criterion, unless I've misunderstood.)

Note, however, that what I've said about the grounding problem and the role
of nonsymbolic representations (analog and categorical) would stand
independently of my particular criterion for analog; substituting a more
standard one leaves just about all of the argument intact. Some of the prior
commentators (not Ken Laws) haven't noticed that, criticizing
invertibility as a criterion for analog and thinking that they were
criticizing the symbol grounding problem.

> The "information" in a signal is a function of your methods for
> extracting and interpreting the information. Likewise the "analog
> nature"
of an information-preserving transformation is a function
> of your methods for decoding the analog relationships.

I completely agree. But to get the requisite causality I'm looking
for, the information must be interpretation-independent. Physical
invertibility seems to give you that, even if it's generated by
hardwiring the encryption/decryption (encoding/decoding) scheme underlying
the interpretation into a dedicated system.

> Perhaps [information theorists] have too limited (or general!)
> a view of information, but they have certainly considered your
> problem of decoding signal shape (as opposed to detecting modulation
> patterns)... I am sure that methods for decoding both discrete and
> continuous information in continuous signals are well studied.

I would be interested to hear from those who are familiar with such work.
It may be that some of it is relevant to cognitive and neural modeling
and even the symbol grounding problems under discussion here.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 12 Jun 87 18:14:08 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein)
of BBN Laboratories, Inc., Cambridge, MA writes:

> [1] [The only thing] we learn from the "symbol grounding" problem [is
> that] you've got to have sense organs.
> [2] For a system that is not causally isolated from the environment,
> the symbol-grounding problem now just reduces to the old
> behavior-generating problem, for, if we could just produce the behavior,
> there would be no question of the intentionality of the symbols...
> [3] [But claiming that a] system can't produce intelligent
> performance *because* it doesn't make its symbols meaningful... has
> just got things backwards -- if I could produce the behavior, you'd
> have to admit that its symbols had all the "grounding" they needed
> for original intentionality.
> [4] For this reason, I think your animadversions on symbol
> grounding are largely irrelevant to your position -- the really
> substantial claims pertain only to "what looks like it's likely to
> work"
for generating intelligent behavior.

[1] No, we don't merely learn that you need sense organs from the symbol
grounding problem; we also learn that the nature of those sense organs,
and their functional inter-relation with whatever else is going on
downstream, may not be as simple as one might expect. The relation may
be non-modular. It may not be just a matter of a simple hookup between
autonomous systems -- sensory and symbolic -- as it is in current toy models.
I agree that the symbol grounding problem does not logically entail
this further conclusion, but it, together with the data, does suggest
it, and why it might be important for generating successful performance.

[2] I completely agree that a system that could pass the Total Turing
Test using nothing but an autonomous symbolic module hooked to simple
transducers would not be open to question about its "intrinsic
intentionality"
(at least not from groundedness considerations of the
kind I've been describing here). But there's nothing circular about
arguing that skepticism about the possibility of successfully passing
the Total Turing Test with such a system is dictated in part by
grounding considerations. The autonomy of the symbolic level can be
the culprit in both respects. It can be responsible for the performance
failures *and* for the lack of intrinsic intentionality.

[3] Nor is there anything "backwards" about blaming the lack of
intrinsic intentionality for performance failures. Rather, *you* may be
engaging in counterfactual conditionals here.

[4] The symbol grounding problem can hardly be irrelevant to my
substantive hypotheses about what may work, since it is not only the
motivation for them, but part of the explanation of why and how they
may work.

> since there isn't necessarily any more loss [in A/D] than there is
> in a typical A/A transformation, the "degree of invertibility"
> criterion cross-cuts the intuitive A/D distinction.... suppose we
> had a digitized image, A, which is of much higher resolution
> than another analog one, B. A is more invertible since it contains
> more detail from which to reconstruct the original signal, but B is
> more "shape-preserving" in an intuitive sense. So, which do you regard
> as "more analog"? Which does your theory think is better suited to
> subserving our categorization performance? If you say B, then
> invertibility is just not what you're after.

First, if A, the digital representation, is part of a dedicated
system, hardwired to inputs and outputs, and the input stimuli are
invertible, then, as I've said before, the whole system would be "analog"
according to my provisional criterion, perhaps even more analog than
B. If A is not part of a dedicated, physically invertible system, the
question is moot, since it's not analog at all. With equal
invertibility, it is an empirical question which is better suited to
subserve cognition in general, and probably depends on optimality and
capacity considerations. Finally, categorization performance in particular
calls for much more than invertibility, as I've indicated before. Only iconic
representations are invertible. Categorical representations require
selective *noninvertibility*. But that is another discussion...
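
The contrast can nonetheless be sketched in a few lines of Python. This is
only an illustrative toy under my own assumptions, not a model of the
representations under discussion: the "iconic" map is invertible, while the
"categorical" map is a many-to-one projection that deliberately discards
within-category variation.

    def iconic(signal):
        """Invertible 'analog' copy: scale by a known nonzero constant."""
        return [2.0 * s for s in signal]

    def iconic_inverse(icon):
        return [s / 2.0 for s in icon]

    def categorical(signal, threshold=0.5):
        """Selectively noninvertible: everything on the same side of the
        boundary collapses to the same label, so no inverse exists."""
        return [1 if s >= threshold else 0 for s in signal]

    stimulus = [0.12, 0.49, 0.51, 0.97]
    assert iconic_inverse(iconic(stimulus)) == stimulus   # round trip is exact
    print(categorical(stimulus))                          # [0, 0, 1, 1]

The categorical map preserves only the side of the boundary each input falls
on, which is the sense in which its noninvertibility is selective.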
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 12 Jun 87 21:36:13 GMT
From: ihnp4!homxb!houxm!houdi!marty1@ucbvax.Berkeley.EDU
(M.BRILLIANT)
Subject: Re: The symbol grounding problem

In article <6521@diamond.BBN.COM>, aweinste@Diamond.BBN.COM (Anders
Weinstein) writes:
> ....
> (A) The symbols in a purely symbolic system will always be
> ungrounded because such systems can't generate real performance;
> ...
> It seems to me that .... thesis (A) is the one that makes perfect
> sense ....
>
> ..... I think it's perfectly reasonable to suspect that the
> symbolic approach is insufficient to produce full human performance....

What exactly is this "purely" symbolic approach? What impure approach
might be necessary? "Purely symbolic" sounds like a straw man: a
system so purely abstract that it couldn't possibly relate to the real
world, one that nobody seriously trying to mimic human behavior would
even try to build.

To begin with, any attempt to "produce full human performance" must
involve sensors, effectors, and motivations. Does "purely symbolic"
preclude any of these? If not, what is it in the definition of a
"purely symbolic" approach that makes it inadequate to pull these
factors together?

(Why do I so casually include motivations? I'm an amateur actor. Not
even a human can mimic another human without knowing about motivations.)

M. B. Brilliant Marty
AT&T-BL HO 3D-520 (201)-949-1858
Holmdel, NJ 07733 ihnp4!houdi!marty1

------------------------------

Date: 12 Jun 87 22:19:48 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <837@mind.UUCP> Stevan Harnad (harnad@mind.UUCP) writes:
> But there's nothing circular about
>arguing that skepticism about the possibility of successfully passing
>the Total Turing Test with such a system is dictated in part by
>grounding considerations. The autonomy of the symbolic level can be
>the culprit in both respects. It can be responsible for the performance
>failures *and* for the lack of intrinsic intentionality.

I'm afraid I still don't understand this. You write here as if these are
somehow two *different* things. I don't see them that way, and hence find
circularity. That is, I view intentionality as a matter of rational
behavior. For me, the behavior is primary, and the notion of "symbol
grounding"
or "intrinsic intentionality" is conceptually derivative; and I
thought from your postings that you shared this frankly behavioristic
philosophy.

Baldly put, here is the only plausible theory I know of "symbol grounding":

X has intrinsic intentionality (is "grounded") iff X can pass the TTT.

If you have a better theory, I'd like to hear it, but until then I believe
that TTT-behavior is the very essence of intrinsic intentionality.

Note that since it's the behavior that has conceptual priority, it makes
sense to say that failure on the behavior front is, in a philosophical sense,
the *reason* for a failure to make intrinsic intentionality. But to say the
reverse is vacuous: failure to make intrinsic intentionality just *is the
same thing* as failure to produce full TTT performance. I don't see that
you can significantly distinguish these two failings.

So what could it come to to say that symbolic AI must inevitably choke on the
grounding problem? Since grounding == behavioral capability, all this claim
can mean is that symbolic AI won't be able to generate full TTT performance.

I think, incidentally, that you're probably right in this claim. However, I
also think that the supposed "symbol-grounding problem" *is* irrelevant. From
my point of view, it's just a fancy alternative name for the real issue, the
behavior-generating problem.

>[4] The symbol grounding problem can hardly be irrelevant to my
>substantive hypotheses about what may work, since it is not only the
>motivation for them, but part of the explanation of why and how they
>may work.

I still don't see how it explains anything. The grounding problem *reduces*
to the behavior problem, not the other way around. To say that your approach
is better grounded is only to say that it may work better (i.e. generate TTT
performance); there's just no independent content to the claim of
"groundedness". Or do you have some non-behavioral definition of intrinsic
intentionality that I haven't yet heard?

Anders Weinstein
BBN Labs

------------------------------

Date: 13 Jun 87 19:59:12 GMT
From: mind!harnad@princeton.edu (Stevan Harnad)
Subject: Re: The symbol grounding problem


aweinste@Diamond.BBN.COM (Anders Weinstein)
of BBN Laboratories, Inc., Cambridge, MA writes:

> X has intrinsic intentionality (is "grounded") iff X can pass the TTT.
> I thought from your postings that you shared this frankly behavioristic
> philosophy... So what could it come to to say that symbolic AI must
> inevitably choke on the grounding problem? Since grounding == behavioral
> capability, all this claim can mean is that symbolic AI won't be able
> to generate full TTT performance. I think, incidentally, that you're
> probably right in this claim. However,...To say that your approach
> is better grounded is only to say that it may work better (i.e.
> generate TTT performance); there's just no independent content to the
> claim of "groundedness". Or do you have some non-behavioral definition
> of intrinsic intentionality that I haven't yet heard?

I think that this discussion has become repetitious, so I'm going to
have to cut down on the words. Our disagreement is not substantive.
I am not a behaviorist. I am a methodological epiphenomenalist.
Intentionality and consciousness are not equivalent to behavioral
capacity, but behavioral capacity is our only objective basis for
inferring that they are present. Apart from behavioral considerations,
there are also functional considerations: What kinds of internal
processes (e.g., symbolic and nonsymbolic) look as if they might work?
and why? and how? The grounding problem accordingly has functional aspects
too. What are the right kinds of causal connections to ground a
system? Yes, the test of successful grounding is the TTT, but that
still leaves you with the problem of which kinds of connections are
going to work. I've argued that top-down symbol systems hooked to
transducers won't, and that certain hybrid bottom-up systems might. All
these functional considerations concern how to ground symbols; they are
distinct from (though ultimately, of course, dependent on) behavioral
success, and they do have independent content.
--

Stevan Harnad (609) - 921 7771
{bellcore, psuvax1, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet harnad@mind.Princeton.EDU

------------------------------

Date: 14 Jun 87 19:45:33 GMT
From: diamond.bbn.com!aweinste@husc6.harvard.edu (Anders Weinstein)
Subject: Re: The symbol grounding problem

In article <1163@houdi.UUCP> marty1@houdi.UUCP (M.BRILLIANT) writes:
>> (A) The symbols in a purely symbolic system ...
>
>What exactly is this "purely" symbolic approach? What impure approach
>might be necessary? "Purely symbolic" sounds like a straw man ...

The phrase "purely symbolic" was just my short label for the AI strategy that
Stevan Harnad has been criticizing. Yes this strategy *does* encompass the
use of sensors and effectors and (maybe) motivations. Sorry if the term was
misleading; I was only using it as a pointer. Consult Harnad's postings for a
fuller characterization.

------------------------------

End of AIList Digest
********************
