AIList Digest            Monday, 23 Feb 1987       Volume 5 : Issue 52 

Today's Topics:
Seminars - Boolean Concept Learning (CMU) &
Knowledge-Based CAD-CAM Software Integration (Rutgers) &
Parallel Techniques in Computer Algebra (SMU) &
A Picture Theory of Mental Images (SUNY) &
Minds, Machines, and Searle (Rutgers)

----------------------------------------------------------------------

Date: 20 Feb 87 11:41:42 EST
From: Marcella.Zaragoza@isl1.ri.cmu.edu
Subject: Seminar - Boolean Concept Learning (CMU)


THEORY SEMINAR

Lenny Pitt
Wednesday, 25 Feb.
3:30
WeH 5409


Recent results on Boolean concept learning.

Lenny Pitt
U. Illinois at Urbana-Champaign

In "A Theory of the Learnable" (Valiant, 1984), a new formal definition
for concept learning from examples was proposed. Since then a number
of interesting results have been obtained giving learnable classes of
concepts. After motivating and explaining Valiant's definition of
probabilistic and approximate learning, we show that even some
apparently simple types of concepts (e.g. Boolean trees, disjuncts
of two conjuncts) cannot be learned (assuming P not equal NP).
The reductions used suggest an interesting relationship between
learnability problems and the approximation of combinatorial optimization
problems.
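
In outline, Valiant's criterion is the following: a concept class C
over examples of size n is learnable if there is a polynomial-time
algorithm which, for every target concept c in C, every distribution D
over the examples, and every \epsilon, \delta in (0,1), when given
m = poly(1/\epsilon, 1/\delta, n) examples drawn from D and labeled by
c, outputs a hypothesis h satisfying

    \Pr\bigl[\,\Pr_{x \sim D}[h(x) \neq c(x)] \le \epsilon\,\bigr] \ge 1 - \delta,

where the outer probability is over the random sample.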

This is joint work with Leslie G. Valiant.

This talk will be of interest to both Theory and AI people. To schedule
an appointment to meet with him on Wednesday, send mail to stefanis@g.

------------------------------

Date: 18 Feb 87 13:01:42 EST
From: KALANTARI@RED.RUTGERS.EDU
Subject: Seminar - Knowledge-Based CAD-CAM Software Integration
(Rutgers)

RUTGERS COMPUTER SCIENCE AND RUTCOR COLLOQUIUM SCHEDULE - SPRING 1987

Computer Science Department Colloquium :

This talk was announced earlier without an abstract; the abstract is
given below.

---------------------------------------
DATE: Friday February 20, 1987

SPEAKER: Dr. Benjamin Cohen

AFFILIATION: RCA Princeton Labs.

TITLE: "Knowledge-Based CAD-CAM Software Integration."

TIME: 2:50 (Coffee and cookies will be set up at 2:30)

PLACE: Hill Center, Room 705

ABSTRACT

How to integrate large, distributed, heterogeneous CAD/CAM applications
to support data sharing and data integrity is a major software engineering
challenge. One of the key elements in a solution to the integration problem is
the use of knowledge-based techniques and AI languages. A tutorial overview
of the potential role of knowledge-based techniques in integrating distributed,
heterogeneous databases will be presented. We also illustrate the use of
knowledge-based techniques for process and data integration with a case study
of the CAPTEN [Computer Assisted Picture Tube Engineering] Project underway
at the David Sarnoff Research Center.
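
As a rough illustration of the kind of knowledge-based technique at
issue, here is a minimal sketch in Python (all schema names, mapping
rules, and values are hypothetical illustrations, not drawn from the
CAPTEN project): declarative mapping rules translate each source
database's field names and units into a shared schema, so that
applications can exchange data without knowing every source's layout.

# Hypothetical mapping knowledge for two CAD/CAM data sources. A rule
# is either a shared-schema field name or a (name, unit-conversion) pair.
MAPPING_RULES = {
    "cad": {"part_no": "part_id",
            "len_mm": ("length_m", lambda v: v / 1000)},
    "cam": {"PartNumber": "part_id",
            "LengthMeters": ("length_m", lambda v: v)},
}

def to_shared_schema(source, record):
    """Apply one source's mapping rules to produce a shared-schema record."""
    out = {}
    for field, rule in MAPPING_RULES[source].items():
        if field not in record:
            continue
        if isinstance(rule, tuple):
            name, convert = rule
            out[name] = convert(record[field])
        else:
            out[rule] = record[field]
    return out

print(to_shared_schema("cad", {"part_no": "T-100", "len_mm": 2500}))
# -> {'part_id': 'T-100', 'length_m': 2.5}

Keeping the mappings in declarative rules rather than in application
code means that integrity constraints and new data sources can be
added without modifying the applications themselves.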

------------------------------

Date: Sat, 21 Feb 1987 12:49 CST
From: Laurence L. Leff <E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Seminar - Parallel Techniques in Computer Algebra (SMU)


Seminar Announcement, Friday February 27, 1987, 315 SIC,
1:30 PM, Southern Methodist University

Stephen Watt
ABSTRACT: PARALLEL TECHNIQUES IN COMPUTER ALGEBRA

This talk presents techniques for exploiting parallel processing in
symbolic mathematical computation. We examine the use of high-level
parallelism when the number of processors is fixed and independent of
the problem size, as in existing multiprocessors.

Since seemingly small changes to the inputs can cause dramatic changes
in the execution times of many algorithms in computer algebra, it is
not generally useful to use static scheduling. We find it is possible,
however, to exploit the high-level parallelism in many computer
algebra problems using dynamic scheduling methods in which subproblems
are treated homogeneously. An OR-parallel algorithm for integer
factorization will be presented, along with AND-parallel algorithms
for the computation of multivariate polynomial GCDs and the
computation of Groebner bases.
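
To make the flavor of OR-parallelism concrete, here is a minimal
sketch in Python (ours, not the speaker's; Pollard's rho with
different seeds stands in for whatever subtasks the actual algorithm
uses): several homogeneous attempts to factor n race, and the first
nontrivial factor found answers for all of them.

import math
from concurrent.futures import ProcessPoolExecutor, as_completed

def pollard_rho(n, seed):
    # One attempt at a nontrivial factor of n; may fail for some seeds.
    x = y = seed
    d = 1
    while d == 1:
        x = (x * x + 1) % n
        y = (y * y + 1) % n
        y = (y * y + 1) % n
        d = math.gcd(abs(x - y), n)
    return d if d != n else None

def parallel_factor(n, seeds=(2, 3, 5, 7)):
    # OR-parallelism with dynamic scheduling: identical-looking
    # subproblems run concurrently and the first success wins.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(pollard_rho, n, s) for s in seeds]
        for fut in as_completed(futures):
            d = fut.result()
            if d is not None:
                return d
    return None

if __name__ == "__main__":
    print(parallel_factor(1403))  # 1403 = 23 * 61; prints one factor

A real OR-parallel scheduler would also cancel the losing attempts
once one succeeds; for brevity the sketch simply lets them finish.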

A portion of the talk will be used to present the design of a system
for running computer algebra programs on a multiprocessor. The system
is a version of Maple able to distribute processes over a local area
network. The fact that the multiprocessor is a local area network need
not be considered by the programmer.

------------------------------

Date: Thu, 19 Feb 87 09:57:35 EST
From: "William J. Rapaport" <rapaport%buffalo.csnet@RELAY.CS.NET>
Subject: Seminar - A Picture Theory of Mental Images (SUNY)


STATE UNIVERSITY OF NEW YORK AT BUFFALO

GRADUATE GROUP IN COGNITIVE SCIENCE

MICHAEL J. TYE

Department of Philosophy
Northern Illinois University

A PICTURE THEORY OF MENTAL IMAGES

The picture theory of mental images has become a subject of hot debate
in recent cognitive psychology. Some psychologists, notably Stephen
Kosslyn, have argued that the best explanation of a variety of
experiments on imagery is that mental images are pictorial. Although
Kosslyn has valiantly tried to explain just what the basic thesis of
the pictorial approach (as he accepts it) amounts to, his position
remains difficult to grasp. As a result, I believe, it has been badly
misunderstood, both by prominent philosophers and by prominent
cognitive scientists.

My aims in this paper are to present a clear statement of the picture
theory as it is understood by Kosslyn, to show that this theory presents
no threat to the dominant digital-computer model of the mind (contrary
to the claims of some well-known commentators), and to argue that the
issue of imagistic indeterminacy is more problematic for the opposing
linguistic or descriptional view of mental images than it is for the
picture theory.

Monday, March 9, 1987
3:30 P.M.
Park 280, Amherst Campus

Co-sponsored by: Department of Philosophy

Informal discussion at 8:00 P.M. at a place to be announced. Call Bill
Rapaport (Dept. of Computer Science, 636-3193 or 3181) or Gail Bruder
(Dept. of Psychology, 636-3676) for further information.

William J. Rapaport
Assistant Professor

Dept. of Computer Science, SUNY Buffalo, Buffalo, NY 14260

(716) 636-3193, 3180

uucp:
..!{allegra,boulder,decvax,mit-ems,nike,rocksanne,sbcs,watmath}!sunybcs!rapaport
csnet: rapaport@buffalo.csnet
bitnet: rapaport@sunybcs.bitnet

------------------------------

Date: 20 Feb 87 02:01:33 GMT
From: chandros@topaz.RUTGERS.EDU (Jonathan A. Chandross)
Subject: Seminar - Minds, Machines, and Searle (Rutgers)


USACS


is pleased to announce

a talk by Stevan Harnad on

Minds, Machines, and Searle

Tuesday, February 24th

Hill Center Room 705

at 5:30 PM



For those of you who aren't familiar with Stevan Harnad, he is the
editor of the journal Behavioral and Brain Sciences (where Searle's
Chinese Room argument first appeared), as well as a regular poster
to mod.ai.

If you would like to come to dinner with us, please send mail to:
rutgers!topaz!chandross. I need to know by Monday (2/23) at the
latest to make reservations. For further information, or a transcript
of the talk, send email.



SUMMARY AND CONCLUSIONS:

Searle's provocative "Chinese Room Argument" attempted to show that the
goals of "Strong AI" are unrealizable. Proponents of Strong AI are supposed
to believe that (i) the mind is a computer program, (ii) the brain is
irrelevant, and (iii) the Turing Test is decisive. Searle's point is that
since the programmed symbol-manipulating instructions of a computer capable of
passing the Turing Test for understanding Chinese could always be performed
instead by a person who could not understand Chinese, the computer can hardly
be said to understand Chinese. Such "simulated" understanding, Searle argues,
is not the same as real understanding, which can only be accomplished by
something that "duplicates" the "causal powers" of the brain. In the present
paper the following points have been made:

1. Simulation versus Implementation:
Searle fails to distinguish between the simulation of a mechanism, which is
only the formal testing of a theory, and the implementation of a mechanism,
which does duplicate causal powers. Searle's "simulation" only simulates
simulation rather than implementation. It can no more be expected to understand
than a simulated airplane can be expected to fly. Nevertheless, a successful
simulation must capture formally all the relevant functional properties of a
successful implementation.

2. Theory-Testing versus Turing-Testing:
Searle's argument conflates theory-testing and Turing-Testing. Computer
simulations formally encode and test models for human perceptuomotor and
cognitive performance capacities; they are the medium in which the empirical
and theoretical work is done. The Turing Test is an informal and open-ended
test of whether or not people can discriminate the performance of the
implemented simulation from that of a real human being. In a sense, we are
Turing-Testing one another all the time, in our everyday solutions to the
"other minds" problem.

3. The Convergence Argument:
Searle fails to take underdetermination into account. All scientific theories
are underdetermined by their data; i.e., the data are compatible with more
than one theory. But as the data domain grows, the degrees of freedom for
alternative (equiparametric) theories shrink. This "convergence" constraint
applies to AI's "toy" linguistic and robotic models as well, as they approach
the capacity to pass the Total (asymptotic) Turing Test. Toy models are not
modules.

4. Brain Modeling versus Mind Modeling:
Searle also fails to note that the brain itself can be understood only through
theoretical modeling, and that the boundary between brain performance and body
performance becomes arbitrary as one converges on an asymptotic model of total
human performance capacity.

5. The Modularity Assumption:
Searle implicitly adopts a strong, untested "modularity" assumption to the
effect that certain functional parts of human cognitive performance capacity
(such as language) can be successfully modeled independently of the rest
(such as perceptuomotor or "robotic" capacity). This assumption may be false
for models approaching the power and generality needed to pass the Total
Turing Test.

6. The Teletype versus the Robot Turing Test:
Foundational issues in cognitive science depend critically on the truth or
falsity of such modularity assumptions. For example, the "teletype"
(linguistic) version of the Turing Test could in principle (though not
necessarily in practice) be implemented by formal symbol-manipulation alone
(symbols in, symbols out), whereas the robot version necessarily calls for
full causal powers of interaction with the outside world (seeing, doing
AND linguistic understanding).

7. The Transducer/Effector Argument:
Prior "robot" replies to Searle have not been principled ones. They have added
on robotic requirements as an arbitrary extra constraint. A principled
"transducer/effector" counterargument, however, can be based on the logical
fact that transduction is necessarily nonsymbolic, drawing on analog and
analog-to-digital functions that can only be simulated, but not implemented,
symbolically.

8. Robotics and Causality:
Searle's argument hence fails logically for the robot version of the Turing
Test, for in simulating it he would either have to USE its transducers and
effectors (in which case he would not be simulating all of its functions) or
he would have to BE its transducers and effectors, in which case he would
indeed be duplicating their causal powers (of seeing and doing).

9. Symbolic Functionalism versus Robotic Functionalism:
If symbol-manipulation ("symbolic functionalism") cannot in principle
accomplish the functions of the transducer and effector surfaces, then there
is no reason why every function in between has to be symbolic either.
Nonsymbolic function may be essential to implementing minds and may be a
crucial constituent of the functional substrate of mental states ("robotic
functionalism"): In order to work as hypothesized, the functionalist's
"brain-in-a-vat" may have to be more than just an isolated symbolic
"understanding" module -- perhaps even hybrid analog/symbolic all the way
through, as the real brain is.

10. "Strong" versus "Weak" AI:
Finally, it is not at all clear that Searle's "Strong AI"/"Weak AI"
distinction captures all the possibilities, or is even representative of the
views of most cognitive scientists.

Hence, most of Searle's argument turns out to rest on unanswered questions
about the modularity of language and the scope of the symbolic approach to
modeling cognition. If the modularity assumption turns out to be false, then
a top-down symbol-manipulative approach to explaining the mind may be
completely misguided because its symbols (and their interpretations) remain
ungrounded -- not for Searle's reasons (since Searle's argument shares the
cognitive modularity assumption with "Strong AI"), but because of the
transducer/effector argument (and its ramifications for the kind of hybrid,
bottom-up processing that may then turn out to be optimal, or even essential,
in between transducers and effectors). What is undeniable is that a successful
theory of cognition will have to be computable (simulable), if not exclusively
computational (symbol-manipulative). Perhaps this is what Searle means (or
ought to mean) by "Weak AI."

------------------------------

End of AIList Digest
********************
