
AIList Digest            Tuesday, 3 Nov 1987      Volume 5 : Issue 257 

Today's Topics:
Methodology - Sharing Code & Critical Analysis and Reconstruction

----------------------------------------------------------------------

Date: 30 Oct 87 14:05:35 GMT
From: bruce@vanhalen.rutgers.edu (Shane Bruce)
Reply-to: bruce@vanhalen.rutgers.edu (Shane Bruce)
Subject: Re: Lenat's AM program


In article <774@orstcs.CS.ORST.EDU> tgd@ORSTCS.CS.ORST.EDU (Tom Dietterich)
writes:
>
>In the biological sciences, publication of an article reporting a new
>clone obligates the author to provide that clone to other researchers
>for non-commercial purposes. I think we need a similar policy in
>computer science. Publication of a description of a system should
>obligate the author to provide listings of the system (a running
>system is probably too much to ask for) to other researchers on a
>non-disclosure basis.
>

The policy which you are advocating, while admirable, is not practical. No
corporation involved in state-of-the-art AI research is going to allow
listings of its next product or internal tool to be made available to the
general scientific community, even on a non-disclosure basis. Why should
they give away what they intend to sell?

A more practical solution would be for all articles to include a section
on implementation which, while not providing listings, would at least provide
enough information that the project could be duplicated by another competent
researcher in the field.


--
Shane Bruce
HOME: (201) 613-1285 WORK: (201) 932-4714
ARPA: bruce@paul.rutgers.edu
UUCP: {ames, cbosgd, harvard, moss}!rutgers!paul.rutgers.edu!bruce

------------------------------

Date: 30 Oct 87 10:58:40 EST (Fri)
From: sas@bfly-vax.bbn.com
Subject: AIList V5 #254 - Gilding the Lemon

[Author's note: The following message has a bit more vituperation than
I had planned for; however, I agree with the basic points.]

While I agree that AI is in a very early stage and it is still
possible to just jump in and get right to the frontier, an incredible
number of people seem to jump in and instead of getting to the
frontier, spend an awful lot of time tromping around the campfire. It
seems like the journals are replete with wheels being reinvented -
it's as if the physics journals were full of papers realizing that the
same force that makes apples fall to the ground also moves the planets
about the sun. I'm not saying that there is no good research or that
the universal theory of gravitation is a bad idea, but as Newton
himself pointed out, he stood on the shoulders of giants. He read
other people's published results. He didn't spend his time trying to
figure out how a pendulum's period is related to its length - he read
Galileo.
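(For the record, the pendulum relation Seth alludes to is simple enough to
state outright: for small swings the period grows with the square root of
the length, T = 2*pi*sqrt(L/g). A minimal sketch, assuming the standard
surface value g = 9.81 m/s^2; the function name is my own illustration:

```python
import math

G = 9.81  # gravitational acceleration in m/s^2 (assumed Earth-surface value)

def pendulum_period(length_m):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / G)

# The square-root law Galileo established empirically:
# quadrupling the length doubles the period.
print(round(pendulum_period(4.0) / pendulum_period(1.0), 6))  # -> 2.0
```

Reading Galileo gets you this law in an afternoon; rediscovering it from
scratch is the kind of campfire-tromping complained about above.)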

Personally, I think everyone is entitled to come up with round things
that roll down hills every so often. As a matter of fact, I think
that this can form a very sound basis for learning just how things
work. Physicists realize this and force undergraduates to spend
countless tedious hours trying to fudge their results so they come out
just the way Faraday or Fermi said it would. This is an excellent
form of education - but it shouldn't be confused with research.
With education, the individual learns something; with research, the
scientific community learns something. All too much of what passes as
research nowadays is nothing more than education.

The current lack of reproducibility is appalling. We have a
generation of language researchers who have never had a chance to play
with the Blocks World or examine the limitations of TAILSPIN.
It's as if Elias Howe had to invent the sewing machine without access
to steel or gearing. There's a good chance he would have reinvented
the bone needle and the backstitch given the same access to the fruits
of the industrial revolution that most AI researchers have to the
fruits (lemons) of AI research. Anecdotal evidence, which is really
what this field seems to be based on, just doesn't make for good
science.
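(The Blocks World itself is small enough that anyone can reconstruct a toy
version in an afternoon, which is rather the point. A minimal sketch, in my
own illustrative formulation rather than code from any published system: a
state is a set of stacks read bottom-to-top, a move transfers the clear top
block of one stack onto another stack or the table, and breadth-first
search finds a shortest plan:

```python
from collections import deque

def canon(stacks):
    """Canonical state: drop empty stacks, sort (stack order is irrelevant)."""
    return tuple(sorted(tuple(s) for s in stacks if s))

def successors(state):
    """Yield (action, next_state) pairs: move the top block of one stack
    onto another stack, or onto the table (starting a new stack)."""
    stacks = [list(s) for s in state]
    n = len(stacks)
    for i in range(n):
        block = stacks[i][-1]
        for j in list(range(n)) + [n]:  # j == n means "the table"
            if j == i:
                continue
            nxt = [s[:] for s in stacks] + [[]]
            nxt[i].pop()
            nxt[j].append(block)
            where = "the table" if j == n else f"block {stacks[j][-1]}"
            yield (f"move {block} onto {where}", canon(nxt))

def plan(start, goal):
    """Breadth-first search for a shortest sequence of moves."""
    start, goal = canon(start), canon(goal)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

# Unstack c from a, then build the tower a-b-c (stacks read bottom-to-top).
steps = plan([["a", "c"], ["b"]], [["a", "b", "c"]])
print(steps)
```

Three moves suffice here: unstack c to the table, stack b on a, stack c on
b. The exercise is educational in exactly the sense above; publishing it
would be reinventing the wheel.)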

Wow, did I write that?
Seth

------------------------------

Date: Fri, 30 Oct 87 15:48:16 WET
From: Martin Merry <mjm%hplb.csnet@RELAY.CS.NET>
Reply-to: Martin Merry <mjm%hplb.csnet@RELAY.CS.NET>
Subject: Once a lemon, always a lemon


Ken Laws argues that critical reviews and reconstructions of existing AI
software are at the moment only peripheral to AI.


> An advisor who advocates duplicating prior work is cutting his
> students' chances of fame and fortune from the discovery of the
> one true path. It is always true that the published works can
> be improved upon, but the original developer has already gotten
> 80% of the benefit with 20% of the work. Why should the student
> butt his head against the same problems that stopped the original
> work (be they theoretical or practical problems) when he could
> attach his name to an entirely new approach?


I had hoped that Drew McDermott's "AI meets Natural Stupidity" had exploded
this view, but apparently not. Substantial, lasting progress in any field of
AI is *never* achievable within the scope of a single Ph.D thesis. Progress
follows from new work building upon existing work - standing on other
researchers' shoulders (instead of, as too often happens, their toes).

This is not an argument for us all to become theorists, working on obscure
extensions to non-standard logics. However, a nifty program which is hacked
together and then only described functionally (i.e. publications only tell you
what it does, with little detail of how it does it, and certainly no
information on the very specialised kluges which make it work in this
particular case) does not advance our knowledge of AI.

Too often in AI, early results from a particular approach may appear promising
and may yield great credit to the discoverer ("80% of the benefit") but don't
actually go beyond solving toy problems. There is a lot of work to do in going
beyond these first sketches ("80% of the work") but if we don't encourage
people to do this we will remain in the sandbox.

Martin Merry
HP Labs Bristol Research Centre
(Standard disclaimer on personal opinions applies.)

P.S. For those who haven't seen it, the Drew McDermott paper appears in SIGART
Newsletter 57 (Aug 1976) and is reprinted in "Mind Design" (ed Haugeland),
Bradford Books 1981. It should be required reading for anyone working in
AI....

------------------------------

Date: Fri, 30 Oct 1987 17:03 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList V5 #254 - AI Methodology

Hurrah for Ken Laws when he says that

>An advisor who advocates duplicating prior work is cutting his
>students' chances of fame and fortune from the discovery of the
>one true path.

AI is still in a great exploratory phase in which there is much to be
discovered. I would say that replicating and evaluating an older
experiment would be a suitable Master's degree topic. Replicating AM
and discovering how to extend its range would be a good doctoral topic
- but because of the latter rather than the former aspect.

As for those complaints about AI's fuzziness - and AI's very name -
those are still virtues at the moment. Many people who profess to be
working on AI recognize that what they are doing is to try to make
computers do things that we don't know yet how to make them do, so AI
is, in that sense, speculative computer research. Then, whenever
something becomes better understood, it is moved into a field with a
more specific type of name. No purpose would be served by trying to
make more precise the name of the exploratory activity - either for
the public consumers or for the explorers themselves.

In fact, I have a feeling that most of those who don't like the name
AI also feel uncomfortable when exploring domains that aren't yet
clearly enough defined for their tastes - and are thus disinclined to
work in those areas. If so, then maintaining the title which some of
us like and others don't may actually serve a useful function. It is
the same reason, I think, why the movement to retitle science fiction
as "speculative fiction" failed. The people who preferred the
seemingly more precise definition were not the ones who were best at
making, and at appreciating, the kinds of speculations under discussion.

Ken Laws went on to say that he would make an exception in his own
field of computer vision. I couldn't tell how much of that was irony.
But in fact I'm inclined to agree at the level of lower-level vision
processing. It seems to me, though, that progress in "high level" vision
has been somewhat sluggish since the late 60s and that this may be
because too many vision hackers tried to be too scientific - and have
accordingly not explored enough high level organizational ideas in
that domain.

- marvin minsky

------------------------------

Date: 1 Nov 87 23:37:01 GMT
From: tgd@orstcs.cs.orst.edu (Tom Dietterich)
Subject: Re: Gilding the Lemon


Ken Laws says:

> ...progress in AI is driven by the hackers and the graduate students
> who "don't know any better" than to attempt the unreasonable.

I disagree strongly. If you see who is winning the Best Paper awards
at conferences, it is not grad students attempting the unreasonable.
It is seasoned researchers who are making the solid contributions.

I'm not advocating that everyone do rational reconstructions. It
seems to me that AI research on a particular problem evolves through
several stages: (a) problem definition, (b) development of methods,
(c) careful definition and comparative study of the methods, (d)
identification of relationships among methods (e.g., tradeoffs, or
even understanding the entire space of methods relevant to a problem).

Different research methods are appropriate at different stages.
Problem definition (a) and initial method development (b) can be
accomplished by pursuing particular application problems, constructing
exploratory systems, etc. Rational reconstructions and empirical
comparisons are appropriate for (c). Mathematical analysis is
generally the best for (d). In my opinion, the graduate students of
the past two decades have already done a great deal of (a) and (b), so
that we have lots of problems and methods out there that need further
study and comparison. However, I'm sure there are other problems and
methods waiting to be discovered, so there is still a lot of room for
exploratory studies.

--Tom Dietterich

------------------------------

Date: 1 Nov 87 23:45:25 GMT
From: tgd@orstcs.cs.orst.edu (Tom Dietterich)
Subject: Re: Gilding the Lemon (part 2)


Just a couple more points on this subject.

Ken Laws also says:

> Progress also comes from applications -- very seldom from theory.

My description of research stages shows that progress comes from
different sources at different stages. Applications are primarily
useful for identifying problems and understanding the important
issues.

It is particularly revealing that Ken is "highly suspicious
of any youngster trying to solve all our problems [in computer vision]
by ignoring the accumulated knowledge of the last twenty years."

Evidently, he feels that there is no accumulated knowledge in AI.
If that is true, it is perhaps because researchers have not studied
the exploratory forays of the past to isolate and consolidate the
knowledge gained.

--Tom Dietterich

------------------------------

Date: Fri, 30 Oct 87 09:45:45 EST
From: Paul Fishwick <fishwick%fish.cis.ufl.edu@RELAY.CS.NET>
Subject: Gilding the Lemon


...From Ken Laws...
> Progress also comes from applications -- very seldom from theory.
> The "neats" have been worrying for years (centuries?) about temporal
> logics, but there has been more payoff from GPSS and SIMSCRIPT (and
> SPICE and other simulation systems) than from all the debates over
> consistent point and interval representations. The applied systems
> are ultimately limited by their ontologies, but they are useful up to
> a point. A distant point.

I'd like to make a couple of points here: both theory and practice are
essential to progress; however, too much of one without the other
creates an imbalance. As far as the allusion to temporal logics and
interval representations, I think that Ken has made a valuable point.
Too often an AI researcher will write on a subject without referencing
non-AI literature which has a direct bearing on the subject. A case in
point is the reference to temporal representations: if one really
wants to know what researchers have done with concepts
such as *time*, *process*, and *event* then one should seriously review work
in system modeling & control and simulation practice and theory. In doing
my own research I am actively involved in both systems/simulation
methodology and AI methods so I found Ken's reference to GPSS and SPICE
most gratifying.

What I am suggesting is that AI researchers should directly reference
(and build upon) related work that has "non-philosophical" origins. Note
that I am not against philosophical inquiry in principle -- where would
any of us be without it? The other direction is also important - namely,
that researchers in more established areas such as systems theory and
simulation should look at the AI work to see if "encoding a mental model"
might improve performance or model comprehensibility.

Paul Fishwick
University of Florida
INTERNET: fishwick@fish.cis.ufl.edu

------------------------------

Date: Mon, 02 Nov 87 17:06:33 EST
From: Mario O Bourgoin <mob@MEDIA-LAB.MEDIA.MIT.EDU>
Subject: Re: Gilding the Lemon


In article <12346288066.15.LAWS@KL.SRI.Com> Ken Laws wonders why a
student should cover the same ground as that of another's thesis and
face the problems that stopped the original work. His objection to
re-implementations is that they don't advance the field, they
consolidate it. He is quick to add that he does not object to
consolidation but that he feels that AI must cover more of its
intellectual territory before it can be done effectively.
I know of many good examples of significant progress achieved
in an area of AI through someone's efforts to re-implement and extend
the efforts of other researchers. Tom Dietterich mentioned one when
he talked about David Chapman's work on conjunctive planning. Work on
dependency-directed backtracking for search is another area. AM and
its relatives are good examples in the field of automated discovery.
Research in Prolog certainly deserves mention.
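Since dependency-directed backtracking keeps coming up, it is worth
noting how small its core idea is: when a line of search fails, record
which choices were actually responsible (a "nogood") and refuse any
future branch containing that combination, instead of blindly undoing
the most recent choice. A minimal sketch of nogood recording applied to
graph coloring follows; the problem, names, and interface are my own
illustration, not code from Stallman and Sussman's system:

```python
def solve(variables, domains, conflicts):
    """Backtracking search that records 'nogoods': sets of (variable, value)
    choices known to be jointly contradictory.  Any branch containing a
    recorded nogood is pruned without re-deriving the contradiction.
    `conflicts(var, val, assignment)` returns the set of earlier variables
    that rule out `val` (empty set means the value is consistent)."""
    nogoods = []  # each entry is a frozenset of (var, val) pairs

    def blocked(assignment):
        items = set(assignment.items())
        return any(ng <= items for ng in nogoods)

    def backtrack(assignment, i):
        if i == len(variables):
            return dict(assignment)
        var = variables[i]
        for val in domains[var]:
            culprits = conflicts(var, val, assignment)
            if culprits:
                # Record *why* the value failed: the culprit choices plus
                # this one form a nogood, pruning future branches.
                nogoods.append(frozenset({(v, assignment[v]) for v in culprits}
                                         | {(var, val)}))
                continue
            assignment[var] = val
            if not blocked(assignment):
                result = backtrack(assignment, i + 1)
                if result is not None:
                    return result
            del assignment[var]
        return None

    return backtrack({}, 0)

# Color a triangle so no two adjacent vertices share a color.
edges = {("A", "B"), ("B", "C"), ("A", "C")}

def adjacent_same(var, val, assignment):
    return {v for v in assignment
            if ((v, var) in edges or (var, v) in edges)
            and assignment[v] == val}

coloring = solve(["A", "B", "C"],
                 {v: ["red", "green", "blue"] for v in "ABC"},
                 adjacent_same)
print(coloring)  # -> {'A': 'red', 'B': 'green', 'C': 'blue'}
```

On a triangle the nogoods buy little; the payoff comes on larger
problems, where a recorded nogood spares the search from rediscovering
the same contradiction under every unrelated combination of other
choices.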
I believe that AI is more than just ready for consolidation: I
think it has been happening for a while, just not much and not obviously. I
love exploration and understand its place in development but it isn't
the blind stab in the dark that one might gather from Ken's article.
I think he agrees as he says:

A student studies the latest AI proceedings to get a
nifty idea, tries to solve all the world's problems
from his new viewpoint, and ultimately runs into
limitations.

The irresponsible researcher is little better than a random
generator who sometimes remembers what he has done. The repetitive
bureaucrat is less than a cow who rechews another's cud. The AI
researcher learns both by exploring to extend the limits of his
experience and consolidating to restructure what he already knows to
reflect what he has learned.
In other fields, Master's students emphasize consolidation and
Ph.D. students emphasize exploration (creativity). But at MIT, the AI
program is an interdisciplinary effort which offers only a doctorate,
and I don't know of an AI Master's elsewhere. This has left the job of
consolidation to accomplished researchers who are as interested in
exploration as their students. Maybe there would be a use for a more
conservative approach.

--Mario O. Bourgoin

To Ken: The best paraphrase isn't a quote since quoting communicates
that you are interested in what the other said but not what you
understand of it.

------------------------------

End of AIList Digest
********************
