Machine Learning List: Vol. 6 No. 27
Tuesday, October 18, 1994
Contents:
Results of Inductive Learning Competition
Yet another comment on a comment on a comment ... from Wolpert
Challenge for ML
Proben1 Neural Network benchmark collection
Second/Final Call for papers ECML-95
Workshop on knowledge level modeling and machine learning
CFP: AI applications in Geophysical Sciences
EPIA'95 - Conference CFP
Tutorial on applications of machine learning
The Machine Learning List is moderated. Contributions should be relevant to
the scientific study of machine learning. Mail contributions to ml@ics.uci.edu.
Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues
may be FTP'd from ics.uci.edu in pub/ml-list/V<X>/<N> or N.Z where X and N are
the volume and number of the issue; ID: anonymous PASSWORD: <your mail address>
URL- http://www.ics.uci.edu/AI/ML/Machine-Learning.html
----------------------------------------------------------------------
Date: Fri, 7 Oct 94 21:29:19 BST
From: David.Page@comlab.ox.ac.uk
Subject: Results of Inductive Learning Competition
RESULTS OF THE NEW EAST-WEST CHALLENGE
Donald Michie, Stephen Muggleton
David Page and Ashwin Srinivasan
Oxford University Computing Laboratory, UK.
The results now are in for the 3 Inductive Learning Competitions
announced here 2 months ago. They are available in a compressed
tar file, results.tar.Z, or in ordinary text files (copy all the
files with a .txt extension) at the following FTP site
ftp.comlab.ox.ac.uk
in the directory
pub/Packages/ILP
Many thanks to all those who entered.
Two files with further discussion will be added on Mon., Oct 10,
both as .txt files and as additions to the file results.tar.Z.
URL = ftp://ftp.comlab.ox.ac.uk/pub/Packages/ILP/results.tar.Z
FTP site = ftp.comlab.ox.ac.uk
FTP file = pub/Packages/ILP/results.tar.Z
------------------------------
Date: Sat, 8 Oct 94 16:22:38 PDT
From: Wray Buntine <wray@ptolemy-ethernet.arc.nasa.gov>
Subject: yet another comment on a comment on a comment ... from Wolpert
I realise I'm entering this too late (NASA keeps me busy these days...),
but priors are one of my favorite topics, and one I've done a lot of research
on.
Regarding Juergen Schmidhuber's comments on the NFL discussion pursued
by the inimitable Dr. Wolpert:
> An additional remark on the prior problem: With infinitely many (but
> enumerable) solution candidates, but without problem specific knowledge,
> it seems that we ought to be glad about a discrete enumerable prior that
> assigns to every solution candidate a probability at least as high as the
> one assigned by any other such prior P (ignoring a constant factor
> depending only on P). A remarkable property of the Solomonoff-Levin
> distribution (or universal prior) P_U is this: P_U dominates all discrete
> enumerable semimeasures P (including probability distributions) in the
> sense that for all P there is a constant c such that P_U(x) >= cP(x) for
> all x.
This is indeed a very good thing. Just a few more points about this:
* An easier version to get into is Rissanen's universal
  prior for integers. This has the property that it
  is, in some sense and approximately (I've seen no proof
  here), the slowest monotonically decreasing normalizable
  prior on integers. Basically, that means it dominates any
  other (again, I haven't seen the proof, so let's say "most other")
  monotonically decreasing proper prior in a similar sense to above.
"Why should this matter?", you ask. Well, it makes the prior
*robust* in the sense that an overly complex hypothesis will tend to
be avoided. Robustness of priors is an ongoing research topic, but
in general it's a useful feature.
==> this is a very nice prior on integers
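To make this concrete, here is a minimal sketch (my own illustration, not from the post) of Rissanen's universal prior on the positive integers, built from the iterated-logarithm code length. The function names are mine, and the normalizing constant of roughly 2.865 is the one Rissanen derives for this construction:

```python
import math

def log_star(n):
    # Iterated logarithm sum: log2(n) + log2(log2(n)) + ...,
    # keeping only the positive terms.
    total = 0.0
    x = float(n)
    while True:
        x = math.log2(x)
        if x <= 0:
            break
        total += x
    return total

C = 2.865064  # normalizer so the prior sums to 1 over n = 1, 2, 3, ...

def universal_prior(n):
    # Rissanen's universal prior: P(n) proportional to 2**(-log_star(n)).
    return 2.0 ** (-log_star(n)) / C

# Decreases monotonically, but far more slowly than any geometric prior.
print([universal_prior(n) for n in (1, 2, 10, 100)])
```

The point of the construction is visible numerically: the mass decays roughly like 1/(n * log n * ...), so it stays heavier-tailed than, say, P(n) proportional to 2**(-n), which is the sense in which it dominates faster-decreasing proper priors.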
* However, the Solomonoff-Levin distribution is only defined up to
  a particular universal Turing machine (or whatever, I'm no
  expert here). This means that if you have some data and do your
  calculation according to your version of the Solomonoff-Levin
  distribution, then I can always find another universal Turing
  machine that will give a very different answer. So what do
we do in practice? Well, we throw up our hands and make a
subjective choice of a particular universal Turing machine.
==> Solomonoff-Levin distribution doesn't give us an objective prior!
* There are many other "remarkable" properties of priors one can argue
for. So the Solomonoff-Levin distribution is hardly unique in
this respect. All the Rissanen/Solomonoff-Levin type stuff applies to
discrete domains. What about real-valued domains? Bernardo has developed
the "reference prior", which gives an information-theoretic definition of
the "least informative" prior that applies in real-valued domains.
Unfortunately, this really only works neatly in one real dimension, and many
people believe that this is a fundamental aspect of probabilistic reasoning.
Like the Solomonoff-Levin distribution, we can't avoid being subjective in
general, although we can avoid it in some special cases.
I'd highly recommend Bernardo and Smith's new book "Bayesian Theory",
Wiley, 1994 for discussion of priors, objectivity, robustness and the like.
This is rather technical, but well done, and echoes common sentiment
in the Bayesian community. See Sections 5.6 and 4.8.2.
Reference analysis is discussed in Section 5.4. While this may be too
technical for the general reader, flipping through the introductory remarks
would, I hope, make the perspective clear. Basically:
* No matter how much we wriggle and squirm, obfuscate,
appeal to multiple Russian names, or whatever,
there is no way *in general* of justifying an objective
prior or code length.
* Objectivity, and "letting the data speak for themselves,"
is only a realizable concept in the PAC limit, when the
  PAC/PAB style bounds are reached. In other cases,
  which occur more often than not, subjectivity is an
  unavoidable problem. Wolpert's NFL results support this.
* Objectivity can be resolved in a practical way using the
  notion of "inter-subjectivity", whereby a community of subjective
  scientists reaches consensus - a very nice exercise in
  distributed reasoning that not enough AI people know about.
* Arguments exist in some special cases: priors
on a single real value.
* "Nice" properties for some priors exist, such as Rissanen's
universal prior on integers.
Some other aspects of priors:
* In any real problem I have ever seen, some form of prior
knowledge exists. However, going from expert's prior
knowledge to a mathematical representation is clearly
a task fraught with danger, so the use of a prior in
practical problems requires care, for instance, using
notions such as robustness. At the moment, it is sometimes
better to use experimentally supported procedures such as
cross-validation instead of full Bayesian reasoning.
* On practical representations such as decision trees,
Bayesian networks, feed-forward networks, etc., the
development of "reasonable" priors is an ongoing
research topic. We have really come a long way here.
Some works discussing these issues are:
Trees: Quinlan, ML'94, see also Buntine in "AI Frontiers
in Statistics", D. Hand (Ed.), 1993, Chapman & Hall
Bayesian networks: Heckerman and Geiger, UAI'94, Buntine UAI'91
Feed-forward networks: Radford Neal (PhD. thesis, '94),
MacKay (various in Neural Comp.), Buntine and Weigend
Some entertaining reading on priors can be found in Jaynes's draft textbook
PROBABILITY THEORY: THE LOGIC OF SCIENCE, on the WWW at the URL:
http://omega.albany.edu:8008/JaynesBook.html
Wray Buntine
------------------------------
Date: Fri, 14 Oct 1994 10:08:34 GMT
From: Pavel Brazdil <pbrazdil@ncc.up.pt>
Subject: Challenge for ML
We would like to present the ML community with the following
challenge:
1. Can you beat the existing classification algorithms?
------------------------------------------------------
The first problem concerns trying your (or your
favourite) classification algorithm on some datasets
that were used in exhaustive comparative trials conducted under
the European Project StatLog. We would be particularly
interested to know if you can achieve better results than the
ones known to us!
Tests conducted under StatLog included 22 different
classification algorithms which could be divided into three
groups. The first one includes various decision tree and rule
learning algorithms. The second one various statistical
classification algorithms. This group includes both classical
ones and more modern methods. The third group includes various
neural networks classifiers.
The description of all classification algorithms appears in [D.
Michie et al., 1994]. The StatLog tests were conducted on 23
datasets of industrial interest. Ten of these are available
(from ftp.ncc.up.pt, directory /pub/statlog/datasets). The doc
file associated with each dataset shows a table of results
achieved so far, and also gives details about how the test was
conducted.
If you manage to obtain rather good results overall, we could
extend this information to include some post-StatLog
results. This would no doubt be of interest to potential future
users!
2. Interested in characterizing the applicability of different
classification algorithms?
----------------------------------------------------------------
The other problem involves characterizing the applicability of
different classification algorithms tested under StatLog on the
basis of existing test results and certain dataset
characteristics. We would be particularly interested to obtain
readable results (e.g. rules or equations) that would have
significantly better predictive power than the ones reported in
[Brazdil et al., 1994], or the ones we are currently evaluating.
If you are interested in learning about learning, you can
obtain the dataset containing the existing test results and
certain dataset characteristics from us (ftp.ncc.up.pt,
directory /pub/statlog/meta-data). This dataset contains 506
cases (learning examples). Each case corresponds to one test
of one particular classification algorithm on one
particular dataset. The data is arranged in the form of a
vector. The first 18 values describe various dataset
characteristics. The last value represents the normalized error
rate of the classification algorithm, which can be considered a
(continuous) class value. If you have any queries, please
contact us.
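As an illustration of working with this meta-data (the whitespace-separated layout assumed here is my guess; consult the documentation under /pub/statlog/meta-data for the actual format), each case can be parsed into its 18 dataset characteristics plus the continuous error-rate target:

```python
def parse_metadata(lines):
    """Parse StatLog-style meta-data cases: 18 dataset
    characteristics followed by a normalized error rate."""
    cases = []
    for line in lines:
        values = [float(v) for v in line.split()]
        if len(values) != 19:
            continue  # skip blank or malformed rows
        features, error_rate = values[:18], values[18]
        cases.append((features, error_rate))
    return cases

# Hypothetical row: 18 characteristics, then the error rate 0.31.
sample = ["1 0 2 " + "0.5 " * 15 + "0.31"]
for features, error_rate in parse_metadata(sample):
    print(len(features), error_rate)  # 18 features, target 0.31
```

From there any regression learner (rules, equations, trees) can be fitted with the error rate as the continuous class value, as the challenge describes.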
With best regards
P.B.Brazdil and J.Gama Tel.: +351 600 1672
LIACC, University of Porto Fax.: +351 600 3654
Rua Campo Alegre 823 Email: statlog-adm@ncc.up.pt
4100 Porto, Portugal
References:
D.Michie, D.Spiegelhalter, C.C.Taylor: Machine Learning, Neural
and Statistical Classification, Ellis Horwood, 1994.
P.Brazdil, J.Gama and B.Henery: Characterizing the applicability
of classification algorithms using meta-level learning, in
Machine Learning - ECML-94, F.Bergadano et al. (eds), Springer
Verlag, 1994.
------------------------------
Date: Thu, 13 Oct 1994 16:53:02 +0100
From: Lutz Prechelt <prechelt@ira.uka.de>
Subject: Proben1 Neural Network benchmark collection
URL: ftp://ftp.ira.uka.de/pub/papers/techreports/1994/1994-21.ps.Z
The technical report
Proben1 --- A Set of Neural Network Benchmark Problems
and Benchmarking Rules
is now available for anonymous ftp as
ftp.ira.uka.de /pub/papers/techreports/1994/1994-21.ps.Z
The report has 38 pages, the file is 158 Kb.
The report is the documentation of a benchmark collection that I have
prepared. This collection is the first closed and exactly documented
benchmark collection specifically made for neural network research.
All of its problems are 'real' problems in the sense that the data has
not been generated artificially. Most of the problems were taken from
the UCI machine learning databases archive.
Particular emphasis lies on achieving reproducibility of results,
which is difficult with most existing real world data benchmarks.
Here is the abstract:
Proben1 is a collection of problems for neural network learning in the
realm of pattern classification and function approximation plus a set
of rules and conventions for carrying out benchmark tests with these
or similar problems. Proben1 contains 15 data sets from 12 different
domains. All datasets represent realistic problems which could be
called diagnosis tasks and all but one consist of real world data. The
datasets are all presented in the same simple format, using an
attribute representation that can directly be used for neural network
training. Along with the datasets, Proben1 defines a set of rules for
how to conduct and how to document neural network benchmarking.
The purpose of the problem and rule collection is to give researchers
easy access to data for the evaluation of their algorithms and
networks and to make direct comparison of the published results feasible.
This report describes the datasets and the benchmarking rules. It
also gives some basic performance measures indicating the difficulty
of the various problems. These measures can be used as baselines for
comparison.
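The benchmarking rules themselves are defined in the report; purely as a hedged sketch of the kind of convention such rules standardize, here is one way to make a reproducible train/validation/test split (the 50/25/25 proportions, the fixed seed, and the function name are illustrative assumptions, not Proben1's actual rules, which ship fixed, documented partitionings):

```python
import random

def benchmark_split(examples, seed=0):
    # Reproducible 50/25/25 train/validation/test split: the fixed
    # seed makes the partition repeatable across runs, which is the
    # kind of reproducibility the benchmarking rules aim for.
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_train = len(shuffled) // 2
    n_val = len(shuffled) // 4
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = benchmark_split(range(100))
print(len(train), len(val), len(test))  # 50 25 25
```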
@techreport{Prechelt94c,
author = {Lutz Prechelt},
title = {{PROBEN1} --- {A} Set of Benchmarks and Benchmarking
Rules for Neural Network Training Algorithms},
institution = {Fakult\"at f\"ur Informatik, Universit\"at Karlsruhe},
year = {1994},
number = {21/94},
address = {D-76128 Karlsruhe, Germany},
month = sep,
note = {Anonymous FTP: /pub/pa\-pers/tech\-reports/1994/1994-21.ps.Z
on ftp.ira.uka.de},
}
The benchmark collection itself (including the report) is available
for anonymous ftp from
ftp://ftp.ira.uka.de/pub/neuron/proben1.tar.gz (ca. 2 Mb)
and also from ftp.cs.cmu.edu as
/afs/cs/project/connect/bench/contrib/prechelt/proben.tar.gz
Lutz
Lutz Prechelt (email: prechelt@ira.uka.de)
Institut fuer Programmstrukturen und Datenorganisation
Universitaet Karlsruhe; 76128 Karlsruhe; Germany
(Voice: ++49/721/608-4068, FAX: ++49/721/694092)
------------------------------
From: Nada.Lavrac@ijs.si
Subject: Second/Final Call for papers ECML-95
Date: Fri, 14 Oct 1994 15:26:05 +0100
ECML-95
8th EUROPEAN CONFERENCE ON MACHINE LEARNING
25-27 April 1995, Heraklion, Crete, Greece
Second Announcement and Final Call for Papers
General Information:
Continuing the tradition of previous EWSL and
ECML conferences, ECML-95 provides the major European
forum for presenting the latest advances in the area of
Machine Learning.
Program:
The scientific program will include invited talks,
presentations of accepted papers, poster and demo
sessions. ECML-95 will be followed by MLNet
familiarization workshops for which a separate call for
proposals will be published (contact
mlnet@computing-science.aberdeen.ac.uk).
Research areas:
Submissions are invited in all areas of Machine
Learning, including, but not limited to:
abduction analogy
applications of machine learning automated discovery
case-based learning comput. learning theory
explanation-based learning inductive learning
inductive logic programming genetic algorithms
learning and problem solving multistrategy learning
reinforcement learning representation change
revision and restructuring
Program Chairs:
Nada Lavrac (J. Stefan Institute, Ljubljana) and
Stefan Wrobel (GMD, Sankt Augustin).
Program Committee:
F. Bergadano (Italy) I. Bratko (Slovenia)
P. Brazdil (Portugal) W. Buntine (USA)
L. De Raedt (Belgium) W. Emde (Germany)
J.G. Ganascia (France) K. de Jong (USA)
Y. Kodratoff (France) I. Kononenko (Slovenia)
W. Maass (Austria) R. Lopez de Mantaras (Spain)
S. Matwin (Canada) K. Morik (Germany)
S. Muggleton (UK) E. Plaza (Spain)
L. Saitta (Italy) D. Sleeman (UK)
W. van de Velde (Belgium) G. Widmer (Austria)
R. Wirth (Germany)
Local chair :
Vassilis Moustakis, Institute of Computer Science,
Foundation of Research and Technology Hellas (FORTH),
P.O. Box 1385, 71110 Heraklion, Crete, Greece (E-mail
ecml-95@ics.forth.gr).
Submission of papers:
Paper submissions are limited to 5000 words. The title
page must contain the title, names and addresses of
authors, abstract of the paper, research area, a list
of keywords and demo request (yes/no). Full address,
including phone, fax and E-mail, must be given for the
first author (or the contact person). Title page must
also be sent by E-mail to ecml-95@gmd.de. If possible,
use the sample LaTeX title page available from
ftp.gmd.de, directory /ml-archive/general/ecml-95, also
accessible via the World-Wide Web ECML95 page at
ftp://ftp.gmd.de/ml-archive/general/ecml-95/ecml95.html.
Six (6) hard copies of the whole paper should be sent by
2 November 1994 to:
Nada Lavrac & Stefan Wrobel (ECML-95)
GMD, FIT.KI, Schloss Birlinghoven, 53754 Sankt Augustin,
Germany
Papers will be evaluated with respect to technical
soundness, significance, originality and clarity.
Papers will either be accepted as full papers (presented
at plenary sessions, published as full papers in the
proceedings) or posters (presented at poster sessions,
published as extended abstracts).
System and application exhibitions :
ECML-95 offers commercial and academic participants
an opportunity to demonstrate their systems and/or
applications. Please announce your intention to demo to
the local chair by 24 March 1995, specifying precisely
what type of hardware and software you need. We
strongly encourage authors of papers that describe
systems or applications to accompany their presentation
with a demo (please indicate on the title page).
Registration and further information:
Current conference information is available online on the
World-Wide Web as:
ftp://ftp.gmd.de/ml-archive/general/ecml-95/ecml95.html
For information about paper submission and program,
contact the program chairs (E-mail ecml-95@gmd.de).
For information about local arrangements or to request
a registration brochure, contact the local chair
(E-mail ecml-95@ics.forth.gr).
Important Dates:
Submission deadline : 2 November 1994
Notification of acceptance : 13 January 1995
Camera ready copy : 9 February 1995
Exhibition requests : 24 March 1995
Conference : 25 -- 27 April 1995
Dr. Nada Lavrac
J. Stefan Institute
Jamova 39
Ljubljana
Slovenia
Phone (+386) 61 1259 199
Fax (+386) 61 219 385
Email nada.lavrac@ijs.si
------------------------------
Date: Fri, 14 Oct 1994 22:05:21 +0100
From: Dieter Fensel <fensel@swi.psy.uva.nl>
Subject: Workshop on knowledge level modeling and machine learning
MLnet Sponsored Familiarization Workshop:
Knowledge Level Modelling and Machine Learning
Heraklion, Crete, Greece,
April 28-29, 1995
At first, knowledge acquisition and machine learning were
two very closely related research fields, but there is currently
little interaction between them. One reason for this
weakened relationship is a paradigm shift in
knowledge acquisition (cf. [DKS93]). Originally, knowledge
acquisition was viewed as a direct transfer of problem-
solving expertise from a human expert to a computer
program. The acquired knowledge was immediately
represented by a running prototype. That is, it was
immediately implemented using a knowledge representation
formalism. The underlying assumption was that frames or
production rules represent knowledge identical to the
cognitive foundation of human expertise. Machine learning
techniques could be included directly in the knowledge
acquisition process. The machine learning algorithms could
use the implemented knowledge as input which was
improved by their application. Meanwhile, knowledge
acquisition is no longer viewed as a process which directly
transfers knowledge from a human to an implemented
computer program, but rather as a modelling process. The
result of knowledge acquisition is no longer only a running
program but a set of complementary models. One of these
models, the so-called model of the expertise, represents
expertise in a manner which differs significantly both from
the cognitive base of human expertise and from the final
implementation. This model of expertise describes the task
which should be solved by the knowledge-based system and
the knowledge which is required to solve the task effectively
and efficiently. Both are described in an implementation-
independent manner. Both the human expert and the
implemented system are instantiations (i.e., specific
problem-solving agents) of this model. Three important
requirements are postulated for such a model by the
knowledge acquisition community.
First, the separation of the symbol level and the knowledge
level: At the knowledge level, the expertise is described in an
implementation-independent manner. It is described in terms
of goals, operations, and knowledge about the relationships
of goals and operations. At the symbol level, a specific
computational agent is implemented which carries out the
problem-solving process by means of a computer program.
In terms of software engineering: A knowledge level
description is a specification of the functionality of the
desired system and the required knowledge. A symbol level
description corresponds to an implementation or design
specification. Knowledge acquisition no longer produces
only a running prototype but also a description of the
knowledge which abstracts from its implementation.
Distinguishing between the knowledge and the symbol level
therefore reflects the distinction between specification and
design/implementation in software engineering and in information
system development. The difference lies in the fact that
knowledge acquisition is not only concerned with the desired
functionality of the system but also with acquiring
knowledge about how this functionality can be achieved.
Second, the use of generic problem-solving methods: The
problem-solving behavior of the system should be described
in a domain-independent and reusable manner by a problem-
solving method. Such a method defines the different
inferences, the different kinds of domain knowledge which
are required by the method, and knowledge about the control
flow between these inference steps. Such a method is generic
in the sense that it can be used to solve similar problems in
different domains. In contrast to general-purpose methods a
problem-solving method is restricted to a specific type of
problems (i.e., to a specific task). For example, problem-
solving methods for diagnostic tasks are decision tables,
heuristic classification, cover-and-differentiate, case-based
reasoning, model-based diagnosis etc. In addition to the kind
of task it is mainly the type of available knowledge which
determines the applicability of a problem-solving method to
a given problem.
Third, different modelling primitives are required for
epistemologically different types of knowledge: A model of
expertise contains different types of knowledge. Most
approaches distinguish between domain knowledge,
inference knowledge, and task-specific control knowledge.
A further type of knowledge concerns the use of domain
knowledge by the inference and control
knowledge. Therefore, a model of expertise must explicitly
distinguish between different types of knowledge, and several
modelling primitives must be defined for every type, since each
type again includes different knowledge entities.
A widespread approach (especially in Europe) to model-
based knowledge acquisition is the KADS project (KADS-I
and CommonKADS [SWB93]). The KADS model of
expertise allows an implementation-independent description
of the knowledge using several layers with pre-defined
modelling primitives. Up until now little work has been done
which examines possible improvements of the performance
and results of machine learning techniques when they are
applied in this type of a model-based framework. In fact,
there seems to be a kind of cultural barrier between people
working in machine learning and those working in model-
based knowledge acquisition. From a knowledge-level
modelling point of view, work in machine learning is viewed
as symbol-level stuff, while machine learning people view
knowledge-level modellers as producers of nice graphics
and natural-language descriptions
without a precise and running semantics. Exceptions are
[DoS94], [TLG93], and [RoA94] who use inference
structures to bias the learning process and [VeA92] and
[GrS93] who discuss the integration of machine learning and
knowledge-level modelling. [FGS93] shows the difficulties
which arise when applying machine learning techniques to
learn knowledge for a diagnostic task to be solved by the
problem-solving method heuristic classification. The goal of the
workshop is to overcome this barrier by discussing the new
role which machine learning can play in model-based
knowledge acquisition. In fact, we are concerned with topics
like:
How can the process of constructing a model of
expertise be supported by machine learning
techniques?
+ How can current machine learning systems be used and
integrated in practical software and knowledge
engineering? Will systems like MOBAL, ENIGME etc.
ever be used in daily life? If so, how? If not, why not?
+ The different types of knowledge (i.e., domain
knowledge, inference knowledge, control knowledge
and mapping knowledge) require different machine
learning procedures and different combinations of them.
How can one type of knowledge be used to guide the
automatic acquisition of other kinds of knowledge?
+ Problem-solving methods divide a complex reasoning
task into several subactivities. How can problem-
solving methods be used to improve the effect and
efficiency of the application of machine learning
procedures?
+ Machine learning techniques usually involve rather
simple problem solvers. But even simple tasks like
diagnosis require several specialized machine learning
techniques and their combination when a problem-
solving method like heuristic classification is used
instead of decision tables. AI research has also produced
a variety of reasoning methods and architectures. Are
there appropriate learning techniques to support this?
+ Knowledge acquisition involves very many types of
learning problems like the transformation between
representation languages. These languages can be
formal or executable but can also use diagrammatic,
natural, or structured text, etc. Do appropriate learning
techniques exist to support this?
+ In the mean time, a large number of formal and
executable knowledge specification languages have
been defined for a model of expertise. Languages like
DESIRE, KARL, KBSSF, and (ML)2 allow a
declarative description of the knowledge which
abstracts from implementational aspects. How can they
have a hand in integrating machine learning techniques
into model-based knowledge acquisition?
Can machine learning techniques be improved by
knowledge-level modelling?
+ How can the bias of a machine learning technique be
represented at the knowledge level? This seems to be a
very important criterion for the acceptance and usability
of these techniques for knowledge acquisition.
+ How can knowledge-level descriptions be used to support
the selection, modification, combination, and creation
of machine learning techniques related to given learning
tasks? Can the above-mentioned knowledge
specification languages be used for this purpose?
+ Besides being applied during the knowledge acquisition
process, machine learning procedures can also be
integrated into its product. The knowledge-based
system would then not only solve a given problem but
also improve its performance and adapt itself to
modifications of the task and the knowledge. Would
knowledge level descriptions of the learning techniques
be required to enable such a learning system to be
maintainable and to remain intelligible?
This list is not exhaustive and we are interested in different
and controversial points of view. The main purpose of this
workshop is to evaluate the possibilities and limitations for
the use of existing machine learning technology in the
context of knowledge acquisition (and related disciplines
such as information/software engineering) as well as the
application of knowledge acquisition for machine learning.
Furthermore, we are aiming at articulating research goals
which will help to increase these possibilities.
Format and Kind of Contributions
Contributions are invited which present theoretical or
practical results as well as position papers in the area of
model-based knowledge acquisition and machine learning.
Submitted papers must not exceed 15 pages, including
abstract and bibliography. Proceedings will be available at
the workshop.
Papers must be received by the workshop organisers no later
than February 1, 1995. The submission of an electronic
version (PostScript) of a paper is highly recommended.
Acceptance letters will be posted no later than March 7, 1995.
Final camera-ready versions of the paper must be received by
March 28, 1995. Participation without submitting a full
paper is possible but requires the submission of an abstract
(up to two pages) which clarifies the topics of interest.
Submissions and any questions should be sent to:
Dieter Fensel
Institute SWI, University of Amsterdam,
Roeterstraat 15,
NL-1018 WB Amsterdam, The Netherlands
Email: dieter@swi.psy.uva.nl
voice: +31-20/525-6791 (secretariat: 6789),
fax: +31-20/5256896,
The workshop will be hosted at the University of Heraklion,
Crete, Greece. It will take place on April 28-29, 1995,
immediately after the European Conference on Machine Learning
(ECML-95). Funding for travelling and living expenses
is available according to the rules of the ML network of
excellence. Confirmation letters for workshop participation
will be sent out by February 6, 1995 to enable early
registration for ECML-95.
Organisation Committee
Enric Plaza i Cervera
Centre d' Estudis Avancats de Blanes (IIIA/CSIC),
Cami de Santa Barbara
17300 Blanes, Catalunya, Spain
Email: plaza@ceab.es
Dieter Fensel
Department SWI,
University of Amsterdam, Roeterstraat 15,
NL-1018 WB Amsterdam, The Netherlands
Email: dieter@swi.psy.uva.nl
Jean-Gabriel Ganascia
LAFORIA-IBP, Universite Paris et M. Curie,
Tour 46-0, 4 place Jussieu,
75252 Paris Cedex, France
Email: ganascia@laforia.ibp.fr
Claire Nedellec
Equipe I & A/LRI,
Universite Paris-Sud,
Bat. 490; F-91405 Orsay Cedex, France
Email: cn@lri.fr
Celine Rouveirol
Equipe I & A/LRI,
Universite Paris-Sud,
Bat. 490; F-91405 Orsay Cedex, France
Email: celine@lri.fr
Maarten Van Someren
Department SWI,
University of Amsterdam, Roetersstraat 15,
NL-1018 WB Amsterdam, The Netherlands
Email: maarten@swi.psy.uva.nl
Walter Van De Velde
AI Laboratory, Vrije Universiteit Brussel,
Pleinlaan 2, 1050 Brussels, Belgium
Email: walter@arti.vub.ac.be
Program Committee
Agnar Aamodt, University of Trondheim, Norway
Patrick Albert, ILOG Gentilly, France
Klaus-Dieter Althoff, University of Kaiserslautern, Germany
Enric Plaza i Cervera, Centre d' Estudis Avancats de Blanes, Spain
Paul Compton, University of New South Wales, Australia
Werner Emde, GMD-Bonn, Germany
Ronen Feldman, Bar-Ilan University, Israel
Dieter Fensel, University of Karlsruhe, Germany
Jean-Gabriel Ganascia, Universite Paris et M. Curie, France
Philippe Laublet, ONERA, Chatillon Cedex, France
Reza Nakhaeizadeh, Daimler-Benz Research Ulm, Germany
Claire Nedellec, Universite Paris-Sud, France
F. Puppe, University of Wuerzburg, Germany
M. M. Richter, University of Kaiserslautern, Germany
Celine Rouveirol, Universite Paris-Sud, France
Franz Schmalhofer, DFKI-Kaiserslautern, Germany
Guus Schreiber, University of Amsterdam, The Netherlands
Nigel Shadbolt, Nottingham University, United Kingdom
Derek Sleeman, University of Aberdeen, United Kingdom
Maarten van Someren, University of Amsterdam, The Netherlands
Rudi Studer, University of Karlsruhe, Germany
Walter Van De Velde, Vrije Universiteit Brussel, Belgium
Bob Wielinga, University of Amsterdam, The Netherlands
Stefan Wrobel, GMD-Bonn, Germany
References
[DKS93] J.-M. David, J.-P. Krivine, and R. Simmons (eds.):
Second Generation Expert Systems, Springer-Verlag,
Berlin, 1993.
[DoS94] H. J. H. van Dompseler and M. W. van Someren:
Using Models of Problem Solving as Bias in Automated Knowledge
Acquisition. In Proceedings of the 11th European Conference on
Artificial Intelligence (ECAI'94), Amsterdam, August 8-12, 1994.
[FGS93] D. Fensel, U. Gappa, and S. Schewe: Applying a
Machine Learning Algorithm In a Knowledge
Acquisition Scenario. In Proceedings of the IJCAI-
Workshop Machine Learning and Knowledge
Acquisition: Common Issues, Contrasting Methods,
And Integrated Approaches, W16, Chambery, France,
August 29, 1993.
[GrS93] N. Graner and D. Sleeman: MUSKRAT: a
Multistrategy Knowledge Refinement and
Acquisition Toolbox. In Proceedings of the Second
International Workshop on Multistrategy Learning,
1993.
[RoA94] C. Rouveirol and P. Albert: Knowledge Level Model
of a Configurable Learning System. In Luc Steels et al. (eds.), A
Future for Knowledge Acquisition, 8th European Knowledge
Acquisition Workshop (EKAW'94), Hoegaarden, Belgium,
September 26-29, 1994, Lecture Notes in AI, no 867,
Springer-Verlag, Berlin, 1994.
[SWB93] G. Schreiber, B. Wielinga, and J. Breuker (eds.):
KADS. A Principled Approach to Knowledge-Based
System Development, Knowledge-Based Systems, vol
11, Academic Press, London, 1993.
[TLG93] J. Thomas, P. Laublet, and J.-G. Ganascia: A Machine
Learning Tool Designed for a Model-Based
Knowledge Acquisition Approach. In N. Aussenac et
al. (eds.), Knowledge Acquisition for Knowledge-
Based Systems, Proceedings of the 7th European
Workshop (EKAW'93, Toulouse, France, September
6-10, 1993), Lecture Notes in AI no 723, Springer-
Verlag, Berlin, 1993.
[VeA92] W. Van De Velde and A. Aamodt: Machine Learning
Issues in CommonKADS. Research report, ESPRIT
Project P5248 KADS-II, KADS-II/T2.4.3/TR/VUB/
002/3.0, Vrije Universiteit Brussel, January 1992.
------------------------------
From: Gagan Patnaik <gbp@cts.com>
Date: Fri, 7 Oct 1994 10:15:56 -0700
Subject: CFP: AI applications in Geophysical Sciences
Symposium on the
APPLICATION OF ARTIFICIAL INTELLIGENCE COMPUTING IN GEOPHYSICS
Jointly sponsored by the
Int'l Association of Seismology and Physics of the Earth's Interior (IASPEI)
and the Society of Exploration Geophysicists (SEG, USA)
to be held under the auspices of the XXI General Assembly of the
International Union of Geodesy and Geophysics (IUGG)
July 2 - 14, 1995 at Boulder, Colorado, USA
Hosted by the
U. S. National Academy of Sciences
Organized by the
American Geophysical Union (AGU)
and the University of Colorado at Boulder
Symposium Date: JULY 12, 1995 (Wednesday)
Abstract Submission Deadline: FEBRUARY 1, 1995
Papers in the form of ORAL or POSTER presentation are sought on all aspects
of Artificial Intelligence computing applications in Geophysical Sciences
including but not limited to,
NEURAL COMPUTING
FUZZY SET THEORY (SOFT COMPUTING)
EVOLUTIONARY COMPUTING (GENETIC ALGORITHMS)
AUTOMATED REASONING TECHNIQUES
KNOWLEDGE-BASED SYSTEMS
MACHINE LEARNING AND KNOWLEDGE ACQUISITION
DATABASE MINING AND KNOWLEDGE DISCOVERY.
This symposium is one of several geophysical symposia and workshops being
held during the general assembly of the IUGG associations. For this
interdisciplinary symposium, abstract submissions will be accepted for any
topic related to Geophysical Sciences *with* computing applications from the
above broad definition of Artificial Intelligence techniques. The emphasis
is on *applications* to Geophysical Problems related to the Earth and its
Environment.
The Geophysical topics of interest for this interdisciplinary symposium
include but are not limited to,
SOLID EARTH GEOPHYSICS, OCEAN SCIENCES, HYDROLOGY, METEOROLOGY
AND SPACE-BASED TECHNOLOGIES APPLIED TO THE EARTH AND ITS ENVIRONMENT
For example, some of the problem areas from Solid Earth Geophysics are,
Earthquake and Explosion Seismology, Petroleum Geophysics and Reservoir
Modeling (Resource Exploration and Production), Engineering Geophysics
(e.g., Seismic Hazard Assessment and Earthquake Engineering), Environmental
Geophysics (techniques utilized for subsurface investigations and
environmental remediation), and Mining Geophysics.
The common themes that bind all presentations in this symposium are the
Artificial Intelligence computing techniques applied to processing,
interpretation and management of scientific data from Geophysical
observation, simulation and modeling.
ABSTRACT SUBMISSION
Presentations of results on completed work, as well as work-in-progress,
are encouraged. At least one paper submitted by every author will be accepted
for either an oral or poster presentation. Additional papers from the same
author will also be considered.
One camera-ready copy and two additional copies of each abstract must be
submitted in the prescribed format, and received before the deadline
(February 1, 1995). Each abstract printed in the specified format should
clearly indicate the symposium code and title "Application of Artificial
Intelligence Computing in Geophysics". Abstract format, fees, and
instructions for Electronic submission will be announced shortly.
Mailing address to send (original + 2 copies):
IUGG XXI General Assembly
c/o American Geophysical Union
2000 Florida Avenue, N.W.
Washington, D.C. 20009
Please also send one copy directly to one of the conveners listed at the end
of this message (electronic mail preferred).
SOCIAL EVENTS and FREE CIRCULATION OF SCIENTISTS
The symposium, as part of the IUGG General Assembly, will be held on the
University of Colorado, Boulder, campus. Boulder, Colorado is located at the
base of the foothills of the Rocky Mountains. Many exciting social events and
geological field trips are being planned for participants and accompanying
persons. The past general assembly, IUGG-94, was held in Wellington,
New Zealand, and was attended by participants from more than 40 countries.
"The Organizing Committee fully supports the basic policy of nondiscrimination
and affirms the rights of scientists throughout the world to adhere to or
associate with international scientific activity without restrictions based on
nationality, race, color, age, religion, political philosophy, ethnic origin,
citizenship, language, or sex. The Committee affirms its support of the Int'l
Council of Scientific Unions (ICSU) principle of nondiscrimination and endorses
the guidelines by the ICSU Standing Committee on the Free Circulation of
Scientists".
REGISTRATION AND HOUSING
All participants are required to register and pay appropriate fees. There will
be reduced fees for students and doctoral candidates under the age of 30.
There will also be a charge for accompanying persons who are not attending the
scientific programs. Registration and fees information will be provided
in the next announcement.
There will be a number of hotels to select from, with room rates ranging from
$65 - $120 (U.S. dollars) per night (details in the next announcement). Campus
housing will also be available. The University of Colorado at Boulder has set
aside a large number of dormitory rooms. Campus lodging includes breakfast and
dinner and will cost roughly $50 (U.S. dollars) per night. Family housing
accommodation and campus parking will also be available.
FURTHER INFORMATION AND CONTACTS
Various announcements relating to IUGG-95 are being published in EOS
(a weekly publication of the American Geophysical Union; issues: April 5,
April 19, and August 23, 1994); excerpts from which are included in this
message. This message and additional information including the next
announcement will also be made available on the INTERNET.
Some information of general nature about the IUGG XXI General Assembly may
also be obtained by contacting the American Geophysical Union. (Telephone:
+1 202 462 6900, Fax: +1 202 328 0566, Email: iugg_xxiga@kosmos.agu.org).
For *this symposium* related matters, or for further assistance, please contact
one of the conveners:
Dr. Gagan B. Patnaik Dr. Fred Aminzadeh
Advanced Geocomputing Technologies UNOCAL Corporation
P.O.Box - 927477 5460 East La Palma
San Diego, CA 92192-7477 Anaheim, California 92817
Phone: +1 619 535 4840 Phone: +1 714 693 6990
Fax: +1 619 535 4890 Fax: +1 714 693 5824
Email: gpatnaik@aip.org Email: fred.aminzadeh@st.unocal.com
or, g.patnaik@ieee.org
------------------------------
Subject: EPIA'95 - Conference CFP
Date: Tue, 18 Oct 94 11:02:51 +0000
From: njm@cupido.inesc.pt
EPIA'95 - CALL FOR PAPERS
SEVENTH PORTUGUESE CONFERENCE
ON
ARTIFICIAL INTELLIGENCE
Funchal, Madeira Island, Portugal
October 3-6, 1995
(Under the auspices of the Portuguese Association for AI)
The Seventh Portuguese Conference on Artificial Intelligence
(EPIA'95) will be held at Funchal, Madeira Island, Portugal,
on October 3-6, 1995. As in previous editions ('89, '91, and
'93), EPIA'95 will be run as an international conference,
with English as the official language. The scientific program
encompasses tutorials, invited lectures, demonstrations, and
paper presentations. Five well known researchers will present
invited lectures. The conference is devoted to all areas of
Artificial Intelligence and will cover theoretical and
foundational issues as well as applications. Workshops on
Expert Systems, Fuzzy Logic and Neural Networks, and
Applications of A.I. to Robotics and Vision Systems will run
in parallel with the conference (see below).
INVITED LECTURERS
~~~~~~~~~~~~~~~~~
The following researchers have already confirmed their
participation, as guest speakers:
Marvin Minsky, MIT (USA)
Manuela Veloso, CMU (USA)
Luis Borges de Almeida, IST (Portugal)
Rodney Brooks, MIT (USA)
SUBMISSION OF PAPERS
~~~~~~~~~~~~~~~~~~~~
Authors must submit five (5) complete printed copies of their
papers to the "EPIA'95 submission address". Fax or electronic
submissions will not be accepted. Submissions must be printed
on A4 or 8 1/2"x11" paper using 12 point type. Each page must
have a maximum of 38 lines and an average of 75 characters
per line (corresponding to the LaTeX article-style, 12
point). Double-sided printing is strongly encouraged. The
body of submitted papers must be at most 12 pages, including
title, abstract, figures, tables, and diagrams, but excluding
the title page and bibliography.
ELECTRONIC ABSTRACT
~~~~~~~~~~~~~~~~~~~
In addition to submitting the paper copies, authors should
send to epia95-abstracts@inesc.pt a short (200 words)
electronic abstract of their paper to aid the reviewing
process. The electronic abstract must be in plain ASCII text
(no LaTeX) in the following format:
TITLE: <title of the paper>
FIRST AUTHOR: <last name, first name>
EMAIL: <email of the first author>
FIRST ADDRESS: <first author address>
COAUTHORS: <their names, if any>
KEYWORDS: <keywords separated by commas>
ABSTRACT: <text of the abstract>
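For authors preparing the abstract mechanically, the fields above could be assembled as in the following sketch. This is a hypothetical convenience helper, not part of the official instructions; the function name and argument names are assumptions, and the ASCII check mirrors the "plain ASCII text" requirement.

```python
# Hypothetical helper: assembles the plain-ASCII electronic abstract
# in the field order requested by the announcement.

def format_abstract(title, last, first, email, address,
                    coauthors="", keywords=(), abstract=""):
    """Return the submission body as a single plain-ASCII string."""
    body = "\n".join([
        "TITLE: " + title,
        "FIRST AUTHOR: %s, %s" % (last, first),
        "EMAIL: " + email,
        "FIRST ADDRESS: " + address,
        "COAUTHORS: " + coauthors,
        "KEYWORDS: " + ", ".join(keywords),
        "ABSTRACT: " + abstract,
    ])
    body.encode("ascii")  # fails loudly if any non-ASCII character slips in
    return body
```

The resulting string can be pasted directly into a mail body addressed to epia95-abstracts@inesc.pt.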
Authors are requested to select 1-3 appropriate keywords from
the list below. Authors are welcome to add additional
keyword descriptors as needed. Applications, agent-oriented
programming, automated reasoning, belief revision, case-based
reasoning, common sense reasoning, constraint satisfaction,
distributed AI, expert systems, genetic algorithms, knowledge
representation, logic programming, machine learning, natural
language understanding, nonmonotonic reasoning, planning,
qualitative reasoning, real-time systems, robotics, spatial
reasoning, theorem proving, theory of computation, tutoring
systems.
REVIEW OF PAPERS
~~~~~~~~~~~~~~~~
Submissions will be judged on significance, originality,
quality and clarity. Reviewing will be blind to the
identities of the authors. This requires that authors
exercise some care not to identify themselves in their
papers. Each copy of the paper must have a title page,
separated from the body of the paper, including the title of
the paper, the names and addresses of all authors, a list of
content areas (see above) and any acknowledgments. The second
page should include the same title, a short abstract of less
than 200 words, and the exact same content areas, but not
the names nor affiliations of the authors. This page may
include text of the paper. The references should include all
published literature relevant to the paper, including
previous works of the authors, but should not include
unpublished works of the authors. When referring to one's own
work, use the third person. For example, say "previously,
Peter [17] has shown that ...". Try to avoid including any
information in the body of the paper or references that would
identify the authors or their institutions. Such information
can be added to the final camera-ready version for
publication. Please do not staple the title page to the body
of the paper. Submitted papers must be unpublished.
PUBLICATION
~~~~~~~~~~~
The proceedings will be published by Springer-Verlag (lecture
notes in A.I. series). Authors will be required to transfer
copyright of their paper to Springer-Verlag.
ASSOCIATED WORKSHOPS
~~~~~~~~~~~~~~~~~~~~
In the framework of the conference three workshops will be
organized: Applications of Expert Systems, Fuzzy Logic and
Neural Networks in Engineering, and Applications of
Artificial Intelligence to Robotics and Vision Systems. Real
world applications, running systems, and demos are welcome.
CONFERENCE & PROGRAM CO-CHAIRS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Carlos Pinto-Ferreira Nuno Mamede
Instituto Superior Tecnico Instituto Superior Tecnico
ISR, Av. Rovisco Pais INESC, Apartado 13069
1000 Lisboa, Portugal 1000 Lisboa, Portugal
Voice: +351 (1) 8475105 Voice: +351 (1) 310-0234
Fax: +351 (1) 3523014 Fax: +351 (1) 525843
Email: cpf@kappa.ist.utl.pt Email: njm@inesc.pt
PROGRAM COMMITTEE
~~~~~~~~~~~~~~~~~
Antonio Porto (Portugal) Lauri Carlson (Finland)
Benjamin Kuipers (USA) Luc Steels (Belgium)
Bernhard Nebel (Germany) Luigia Aiello (Italy)
David Makinson (Germany) Luis Moniz Pereira (Portugal)
Erik Sandewall (Sweden) Luis Monteiro (Portugal)
Ernesto Costa (Portugal) Manuela Veloso (USA)
Helder Coelho (Portugal) Maria Cravo (Portugal)
Joao Martins (Portugal) Miguel Filgueiras (Portugal)
John Self (UK) Yoav Shoham (USA)
Jose Carmo (Portugal) Yves Kodratoff (France)
DEADLINES
~~~~~~~~~
Papers Submission: ................. March 20, 1995
Notification of acceptance: ........ May 15, 1995
Camera Ready Copies Due: ........... June 12, 1995
SUBMISSION & INQUIRIES ADDRESS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EPIA95
INESC, Apartado 13069
1000 Lisboa, Portugal
Voice: +351 (1) 310-0325
Fax: +351 (1) 525843
Email: epia95@inesc.pt
SUPPORTERS
~~~~~~~~~~
Banco Nacional Ultramarino Governo Regional da Madeira
Instituto Superior Tecnico INESC
CITMA IBM
TAP Air Portugal
PLANNING TO ATTEND
~~~~~~~~~~~~~~~~~~
People planning to submit a paper and/or to attend the
conference or a workshop are asked to complete and
return the following form (by fax or email) to the inquiries
address, stating their intention. This will help the conference
organizers to estimate the facilities needed for the
conference and will enable all interested people to receive
updated information.
+----------------------------------------------------------------+
| REGISTRATION OF INTEREST |
| |
| Title . . . . . Name . . . . . . . . . . . . . . . . . . . . |
| Institution . . . . . . . . . . . . . . . . . . . . . . . . . |
| Address1 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Address2 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Country . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Telephone. . . . . . . . . . . . . . . Fax . . . . . . . . . . |
| Email address. . . . . . . . . . . . . . . . . . . . . . . . . |
| I intend to submit a paper (yes/no). . . . . . . . . . . . . . |
| I intend to participate only (yes/no). . . . . . . . . . . . . |
| I will travel with ... guests |
+----------------------------------------------------------------+
------------------------------
Date: Fri, 14 Oct 94 22:46:56 -0700
From: Pat Langley <langley@flamingo.stanford.edu>
Subject: tutorial on applications of machine learning
TUTORIAL ON APPLICATIONS OF MACHINE LEARNING
Center for the Study of Language and Information
Stanford University, Stanford, California
Friday, November 18, 1994
TUTORIAL CONTENT
Machine learning is the study of computational methods for improving
performance based on experience. This field has long held potential
for automating the time-consuming and expensive process of knowledge
acquisition, but only in the last few years have techniques from
machine learning started to play a significant role in industry. This
one-day tutorial focuses on recent applications of machine learning
technology.
The topics covered will include neural networks, decision-tree induction,
genetic algorithms, case-based learning, grammar acquisition, and theory
revision, with the instructors illustrating each approach through examples
taken from real-world problems. Sample applications will include control
of chemical processes, classification of astronomical objects, prediction
of career choices, configuration of aircraft parts, automated form
completion, and maintenance of diagnostic knowledge bases.
The tutorial is designed to familiarize participants with the basic
techniques in machine learning and with recent successful applications
of these methods. The course will also draw some general lessons from
these examples and identify the steps involved in developing fielded
applications, including problem formulation, representation engineering,
data collection, and evaluation.
The material covered in the tutorial should be useful to anyone in
industry and business who is interested in reducing the time and cost
of developing and maintaining knowledge-based systems.
INSTRUCTORS
John Koza is a Consulting Professor in the Computer Science Department
at Stanford University. He has authored numerous papers on genetic
algorithms and he has recently published two books on genetic
programming. Dr. Koza's work has applied genetic search methods
to a variety of domains, including molecular biology.
Pat Langley is Director of the Institute for the Study of Learning and
Expertise. He has written many articles on machine discovery, rule
induction, and probabilistic learning, and his introductory text on
machine learning will appear shortly. Dr. Langley's recent work has
explored the use of theory revision in knowledge maintenance.
David Rumelhart is a Professor in the Psychology Department at Stanford
University. Dr. Rumelhart has published widely on learning in neural
networks, including a classic book on the topic and a survey of
fielded applications, and he has taught numerous courses in the area.
His recent research has used the backpropagation algorithm on a number
of challenging problems, including handwriting recognition.
Jeffrey Schlimmer is a Professor in the School of Electrical Engineering
and Computer Science at Washington State University. He is the author
of numerous articles on machine learning and its applications, and he
presented a tutorial on this topic at AAAI-94. Dr. Schlimmer's recent
work has used decision-tree induction and grammar acquisition in the
automated completion of forms.
REGISTRATION FORM
Name _____________________________________________________________
Department _______________________________________________________
Company __________________________________________________________
Street ___________________________________________________________
City_____________________________ State/Zip_______________________
Phone ___________________________ Fax ____________________________
Email ____________________________________________________________
To register by mail, please give the information requested above and
enclose check, money order, or credit card details.
If you are using a credit card, you may also register by telephone at
(415) 723-1224, by facsimile at (415) 723-0758, or through electronic
mail to mking@csli.stanford.edu.
The registration fee is $500 on site, $375 if received by November 1,
and $250 for students and faculty.
METHOD OF PAYMENT
___ VISA/Mastercard No. _________________________________________
Expiration Date _____________________________________________
Name ________________________________________________________
Signature ___________________________________________________
___ Check payable to Stanford University
___ Money order payable to Stanford University
Lunch is included in the registration fee.
------------------------------
End of ML-LIST (Digest format)
****************************************