--------------------------------------------------------------------


T H E   N E T W O R K   O B S E R V E R

VOLUME 2, NUMBER 8                                    AUGUST 1995

--------------------------------------------------------------------

"You have to organize, organize, organize, and build and
build, and train and train, so that there is a permanent,
vibrant structure of which people can be part."

-- Ralph Reed, Christian Coalition

--------------------------------------------------------------------

This month: Privacy and authoritarian culture
            The GII as a library
            Consumer guides on the Web
            The dynamics of HTML standards

--------------------------------------------------------------------

Welcome to TNO 2(8).

This month's issue features two articles by guest authors. Chris
Borgman's article is based on her powerful presentation at the
Conference on Society and the Future of Computing. Her thesis
is that the Global Information Infrastructure is best understood
as a library that raises all of the human and technical issues
of information indexing and retrieval that librarians have been
working with on a large scale every day forever.

Rich Lethin, who has long filtered the cypherpunks mailing list for
the Red Rock Eater, has contributed an article about the BBN Auto
Mechanics List, a voluntary cooperative venture in rating Boston
area auto mechanics. We all want to think that this successful
model for mutual assistance on the net can be generalized, but
it's not that simple. The central question for me concerns
critical integrity: what are the conditions that encourage people
to try in good faith to provide honest evaluations of things?
In a world of publicists, the answer to this question might be
depressing.

And I've written a brief article analyzing another argument
against a broad right to privacy. (I've already collected
a batch of such arguments in TNO 1(10).) A larger theme is
the return of authoritarian cultural forms. We know all about
authoritarian government, and we don't like it very much. But
I fear that we've forgotten about the seductions and oppressions
of authoritarian culture. For a crash course in the subject,
let me suggest Morris Shechtman's "Working Without a Net" (a
book for people whose angry authoritarian fathers have convinced
them that a steady diet of harsh criticism is a sign of love,
it comes highly recommended by the Speaker of the House) and Cal
Thomas' book "The Things That Matter Most" (with its completely
unabashed celebration of censorship). I'd like to suggest that
you: (a) figure out in detail what's wrong with the arguments in
these books; (b) figure out why decent, intelligent people might
nonetheless regard them as necessary responses to the world as it
is; and (c) write down what you've learned.

About the quote from Ralph Reed that has become TNO's permanent
motto: for the next several months I'm going to offer some brief
commentaries on it. I hope these commentaries don't seem too
didactic; it's just that I really want the full meaning of Reed's
statement to get across. This month let us notice that he is
talking about a "structure". He means a membership organization.
In his case, of course, it's the Christian Coalition, but the
underlying principle applies widely. He's not just talking about
getting everyone on the net. He's not just talking about getting
his views out to an abstraction called "the public". He's not
just talking about sending out political action alerts to the
ether and hoping that someone somewhere will act on them. He's
talking about building an organization. What does that mean?
It means having chapters and membership lists. It means giving
everyone a chance to discover their own strengths and passions
and the support to enact those things within the framework of the
organization. It means creating a sense of belonging, productive
activity, personal growth, successes, and shared goals. Have
you been involved in such an organization? Have bad experiences
convinced you that organizations are necessarily boring, static,
or oppressive? Have you ever had a chance to learn the skills
of working with others democratically within an organization?
These questions are good starting points for defining your vision
and deciding what you want to be remembered for when you die.

This issue of TNO brings the demise of "Company of the Month",
one of the original TNO departments. It has gotten to be more
hassle than it's worth. Maybe it'll be back from time to time.

A footnote. What's wrong with American culture these days that
the only funny comic strip in the newspaper is "Dilbert", which
concerns the intrinsically hysterical topics of computer nerds
and office politics? Do you suppose that Gary Larson would
start drawing "The Far Side" again if we organized a petition on
the Internet? In any event, we really must do something about
the lame strips that have replaced him. Maybe we can get Steve
Bell, who draws the extremely funny comic "If" for The Guardian,
to come to the United States. Newt Gingrich and Bill Clinton
have to be lots more fun to draw than John Major and Tony Blair.

Jerry Garcia 1942-1995 RIP.

--------------------------------------------------------------------

Privacy and authoritarian culture.

Lately I have been encountering an insidious argument for the
limitation of personal privacy. Here is the only written-down
version of it I've seen:

Modern Americans enjoy vastly more privacy than did their
forebears because ever and ever larger numbers of strangers
in our lives are legitimately denied access to our personal
affairs. ... Privacy, however, makes it difficult to form
reliable opinions of one another. Legitimately shielded
from one another's scrutiny, we are thereby more immune
to the routine monitoring that once formed the basis of
our individual reputations. Reputation ... is a necessary
and basic component of the trust that lies at the heart
of social order. To establish and maintain reputations
in the face of privacy, social mechanisms of *surveillance*
have been elaborated and developed. In particular, various
forms of credentials and modern ordeals produce reputations
that are widely accessible, impersonal, and portable from
one location to another. *A society of strangers is one
of immense personal privacy. Surveillance is the cost
of that privacy.* (Steven L. Nock, The Costs of Privacy:
Surveillance and Reputation in America, New York: Aldine
de Gruyter, 1993; page 1.)

So: privacy is a threat to social order; it must therefore be
constrained or restricted; external surveillance serves this
purpose; and the result of surveillance is a kind of objective
publicity that restores social order. (Note that this is
a stronger form of another common argument, that pervasive
surveillance effectively restores industrial society to the
condition of the agrarian village, where the social order was
maintained through everyone knowing everyone else's business.)

This argument contains numerous fallacies; let me just identify a
few of them. The first fallacy is the confusion between privacy
and secrecy: if people have lots of privacy, the argument goes,
then nobody will know anything about anybody else. But privacy
is not the same as secrecy; instead, I have privacy when I
control which matters are secret and which are disclosed, and
when, and how, and to whom. People may have total privacy and
still choose to tell everyone everything. In practice, privacy
permits people to disclose the things they wish to disclose.

The second fallacy is encapsulated in the phrase "one another",
which posits a symmetry and equality of individuals that does
not exist. The most serious issues of privacy in modern society
do not concern private individuals' dealings with one another
on an equal footing; those are regulated reasonably well through
individuals' right to disclose or conceal what they wish,
together with their right to choose whom they have dealings with.
Privacy problems arise, instead, in situations of gross asymmetry
or inequality in power relations. We don't worry terribly about
whether I must disclose my marital troubles to my neighbor, but
we do worry about whether I must disclose my marital troubles to
the government.

These fallacies combine to produce some serious and dangerous
conclusions: if individuals' ability to conceal and refusal to
disclose certain information about themselves is construed as
a threat to social order, then it follows that people must be
compelled to disclose this information. And if asymmetries and
inequalities within society are neglected, then surveillance --
the systematic coercion of disclosure that powerful institutions
exercise against individuals -- is legitimated and even morally
required.

Lurking within this argument are several subsidiary fallacies.
One of them is hidden in the term "reputation", which presupposes
a particular model of information: namely, that you are only able
to develop trust in me by gaining access to information about
me that is public -- i.e., accessible to everyone. If relations
of trust are held crucial to social order, and if relations of
trust are held to require access to publicly available personal
information, then it follows that society must compel public
disclosure of personal information -- not just disclosure to
particular parties, but *public* disclosure. But the second
step of this argument is clearly false: for you to trust me, you
don't need *everyone* to know anything about me; you simply need
to know it yourself. (And even *that* isn't clear.)

I could go on, but I won't. My basic point is that arguments
about privacy frequently encode, through their conflations and
omissions and ambiguities, an authoritarian model of culture in
which people must be actively controlled by outside institutions
in order for society to hold together. I think that libertarian
conservative arguments about the evils of government, whatever
their merits, have helped us to forget -- or, at least, are
not helping us to remember -- what an authoritarian culture is
like. It's a culture in which most people have been convinced
that everyone else must be monitored, regulated, and shamed to
maintain social order. Let's learn to recognize authoritarian
cultural forms, because they're coming back.

--------------------------------------------------------------------

The global information infrastructure as a digital library.

Christine L. Borgman
Department of Library and Information Science
University of California, Los Angeles
cborgman@ucla.edu

The National and Global Information Infrastructures (NII and
GII) offer the promise of creating a global digital library
in which anyone, connected anywhere on the network, can search
for information independent of time, place, or form. Public
discussions of the "information superhighway" suggest that the
global digital library nearly exists already, or that it soon
will be accomplished. Even technical and policy documents
suggest that we are close to achieving universal access to
information resources, such that anyone can find what they want
or need in the glut of information that exists already. While
such claims attract public support for building the computing
network, research funding for those proposing technical
solutions, and customers for computing network services, they
obscure the complexity of the information retrieval problem.
They also obscure the role of libraries in providing access to
distributed information resources. This short paper summarizes
four issues that need to be addressed if the GII is to serve as
the Digital Library of the Future. We discuss these issues in
more depth elsewhere (1).

1. The Global Information Infrastructure should be viewed as a
single Digital Library with access for all.

In the ideal case, the GII will be a decentralized, distributed
"virtual" library that interconnects all the databases and
other resources on the Internet and subsequent computing and
communications networks. The international library community
already has created an institutional framework for such a
system through shared cataloging databases, interlibrary loan
agreements, document delivery services, and other forms of
access to information resources held elsewhere. By utilizing
the technical framework of the GII and the institutional
framework of library cooperation, it should be possible to
search the GII as a single Digital Library to identify, locate,
and obtain information resources, no matter where or in what
form they exist. In theory, the global digital library could
increase international equity in access to information and offer
the freedom to read, a privilege often denied within individual
countries. However, freedom of information is not one of the
basic tenets of the GII policy proposals, despite the efforts of
various human rights groups.

2. The Digital Library should provide pointers to information
resources that exist in all media, whether online or offline.

Discussions of the Global Information Infrastructure and Digital
Libraries often implicitly assume that digital libraries consist
entirely of digitized content and that the full content of all
information resources soon will be online. The value of online
catalogs and indexes that point to offline materials receives
little recognition outside the library community, which we
attribute to misconceptions about the nature of communication
technologies, information resources, and information
organization. First is the misconception that digitized
information will supplant, rather than supplement, information
resources in other forms. New technologies create and fill new
niches, while prior technologies often continue to evolve and
fill other niches, as the history of communication has shown.
The centuries of human knowledge that are stored in non-digitized
formats will continue to be valuable, and only a very small
portion of these resources are likely to be converted to digital
form. Paper and other durable hard copy formats will complement
digitized formats.

Second is the misconception that the information that exists
on the Internet is an adequate substitute for the holdings
and services of libraries. The volume of information on the
Internet pales in comparison to the holdings of the world's
research libraries, most of which have been carefully selected.
Very little of the "free" information on the Internet has
passed through an authoritative review process -- much of it
is self-published or otherwise ephemeral in nature. The reader
or user of such information bears the burden of determining
what are accurate or credible sources, lacking the imprimatur
of reviewers, editors, and publishers, or the judgement of the
librarians who select the materials.

Third is the misconception that catalogs of information resources
lack value unless the full content exists online. In searching
for information, one must first identify the existence of
information resources and their location before they can be
obtained, whether online or offline. The greatest value of
the GII as a Digital Library will be to provide pointers --
catalogs, indexes, abstracts, document surrogates, and other
representations of content -- not only to online information
resources but to the centuries of information resources that will
continue to exist only offline.

3. Implementing the social policy to create the Digital Library
will be even more difficult than implementing the technology
policy.

Information technology will enable the global digital library --
it will not create it, or necessarily even promote it. A single
Digital Library with access for all will be realized only through
the efforts of individual countries, institutions, and people.
The slogan "think globally, act locally" applies to the Global
Information Infrastructure as much as it does to the environment.

The open systems and interoperability principles stated in
the NII and GII proposals are necessities for a global digital
library that provides access to information resources in all
formats, in all languages, and on systems operating on all
technical platforms. Setting, promoting, implementing, and
enforcing interoperability standards is very difficult even
within one country. The Internet achieved interoperability
through cooperation among the government, education, and
non-profit sectors; these sectors now must co-exist with
competitive commercial ventures. Creating interoperable systems
between countries with technology policies that rest on different
political, economic, social, and cultural traditions is even more
difficult. Telecommunications connectivity is far lower in most
parts of the world than in the United States, policies for access
and usage vary widely, and the range of hardware and software
platforms varies even more. The principles of open, unmediated
access to information and the freedom to read that Americans
take for granted do not apply in all countries that are connected
to the Internet. We must account for conflicting standards and
policies in creating a global network, for what works in the
United States does not necessarily work elsewhere.

4. Information retrieval is a hard problem.

The paradox of information retrieval is that a person
must describe the information that he or she does not have.
Claims that we are close to solving this paradox rest on an
incomplete understanding of two issues: the information retrieval
process and the scaling problem.

a. The information retrieval process.

Information retrieval rarely is a single act of formulating a
query; rather, it usually is a process that begins with some
vaguely-felt need of wanting to know something and gradually
evolves to the point where one can describe some attributes.
Once the need can be phrased sufficiently to begin searching, the
question itself may change through multiple iterations of finding
and using information resources. Thus people usually approach an
information retrieval system with a partially-formed query to be
negotiated.

When searching for information, a person is seeking knowledge
or meaning (e.g., what? why? how?) but must formulate a query
in terms of the content (e.g., words, numbers, symbols) of
extant information entities (e.g., documents, objects). As
an information retrieval system, the GII can deal directly with
information only as entities with content; meaning must be left
to the interpretation of each searcher.

Historically, information retrieval research has focused on the
most easily computable aspects of the process -- starting with a
well-formed query and matching that query against the content of
information entities -- ignoring the information-seeking process
and the context in which the question is asked. Information
retrieval systems are effective only to the extent that they
can assist in answering questions, rather than the extent to
which they can match queries. Query matching is a process that
intelligent agents can accomplish; true information retrieval
is not. Query-matching systems were designed for highly skilled
searchers, usually librarians -- the original intelligent agents.
In contrast, the global digital library must serve a population
of information seekers that is heterogeneous in terms of age,
language, culture, subject expertise, and computing expertise,
most of whom will be perpetual novices at information retrieval.
The easy part of the retrieval process may be nearly solved; we
have barely begun research on the hard part.

b. The scaling problem in information retrieval.

The ease of finding information is a function of heterogeneity
and size of the database, as well as the ability to articulate
the question in searchable terms. Finding information is
simplest in small databases with homogeneous content because
the meaning of symbols (terms, images, etc.) is constrained
and the amount of "noise" in retrieval is tolerable. As the
heterogeneity of the database(s) searched increases, the variety
of ways in which each concept might be described increases, the
variety of meanings for each symbol increases, and the number
of irrelevant matches (noise) increases. The global digital
library must support searching of information resources in
multiple languages, multiple character sets, and multiple media,
not just mono-lingual text, further increasing the complexity of
the searching process. The keyword full-text search tools now
appearing on the Internet are being applied to relatively small
databases, by library standards, and already are encountering
all the content control problems well known to librarians
-- variant word endings (e.g., index, indexes, indexing),
indefinite references (e.g., it, that, which), synonyms (e.g.,
heat, thermal), homonyms (e.g., Paris, France; plaster of Paris),
indirect references (e.g., "the matter we discussed yesterday"),
concepts for which no explicit term appears in the document
(e.g., history, democracy, social effects, strategy, statistics),
and difficulties in determining the relative emphasis on each
concept. The old programming slogan GIGO applies -- "garbage
in, garbage out." Information either can be organized as it is
entered into the system to simplify later retrieval, or it can be
organized on the way out -- leaving to the searcher the burden of
sorting through masses of irrelevant information.
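
As a toy illustration of the first two problems on that list (the
documents and queries here are invented for illustration, not drawn
from the paper or any real system), a strict keyword matcher in
Python misses both variant word endings and synonyms:

    # Naive exact-keyword matching over a few made-up "documents".
    documents = {
        1: "A guide to indexing large databases",
        2: "How the indexes were built",
        3: "Thermal properties of building materials",
    }

    def keyword_search(query, docs):
        """Return the ids of documents containing the exact query word."""
        q = query.lower()
        return [doc_id for doc_id, text in docs.items()
                if q in text.lower().split()]

    print(keyword_search("index", documents))  # [] -- misses "indexing", "indexes"
    print(keyword_search("heat", documents))   # [] -- misses the synonym "thermal"

Stemming and thesauri are the standard remedies for these two gaps;
the remaining problems on the list (homonyms, indirect references,
implicit concepts) are much harder.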

Conclusions

The technology and policy of the Global Information
Infrastructure offers unprecedented opportunities -- and
challenges -- for creating the Digital Library of the future.
If we view the GII as a single global digital library, it should
be possible to identify, locate, and obtain information resources
no matter where or in what form they exist, online or offline.
To accomplish this goal, we must tackle the fundamental paradox
of information retrieval -- describing the information that
the information seeker does not have -- by assisting the user
in articulating the question. We have made a start on these
questions in small and homogeneous databases with skilled
searchers, but now must address them in the context of
information resources and user populations that are very large
and heterogeneous. The technical problems may be easier to
solve than the social problems, given the vast range of economic,
cultural, and linguistic boundaries crossed by the Global
Information Infrastructure.

Note

1. A series of papers and a book are forthcoming from this
research. The following papers are in print or in press as of
this writing:

Borgman, C.L. International issues in access to information,
or Can the Internet bring democracy to closed societies with few
telephones or computers? Proceedings of the Computers, Freedom,
& Privacy Conference, March 1995, Burlingame, CA, pp. 66-70.
New York: Association for Computing Machinery.

Borgman, C.L. (in press). Information Retrieval Or Information
Morass? Implications Of Library Automation And Computing
Networks In Central And Eastern Europe For The Creation Of A
Global Information Infrastructure. Proceedings of the Annual
Meeting of the American Society for Information Science, Chicago,
October 9-12, 1995. Medford, NJ: Learned Information.

Borgman, C.L. (in press). Will the Global Information
Infrastructure be the Library of the Future? Central and
Eastern Europe as a Case Example. 61st International Federation
of Library Associations General Conference, Istanbul, Turkey,
20-26 August, 1995: Libraries of the Future. The Hague,
Netherlands: International Federation of Library Associations
and Institutions, POB 95312, 2059 CH. IFLA.HQ@IFLA.NL

--------------------------------------------------------------------

Empowering the consumer: The BBN Auto Mechanics List.

Rich Lethin
lethin@ai.mit.edu

Do you worry about marketing wizards using their databases
of intimate personal information to manipulate you? Maybe
prices will change in the supermarket aisles as you walk by;
maybe highway billboards will customize a tailored pitch as you
round a turn. If you find these possibilities spooky, there's
a technological way to fight back: developing databases to
help consumers make informed choices. The general concept is
illustrated by the BBN Auto Mechanics List.

The list is a few pages on the world wide web. It rates auto
repair and body shops in the Boston area based on feedback that
users email to John Bowe, who maintains the page. Each shop has
an entry with a grade, from A+ to F, and a few tersely edited
comments describing the service and satisfaction that the user
got at the shop. Here are two of the 160 entries:

[A] : ABJ FOREIGN MOTORS
91 Marshall St, Somerville - 625-6632
1/95: 1988 Subaru GL wagon. Sears said CV joints needed.
ABJ said no, just protective booties, saving lots of
money. Friendly and busy, yet quick.
8/91: friendly, honest, know their stuff. Had to keep the
pressure on them to get [the work] done by the end of the
day. Still.. [an] "A".
12/90: The guys there are smart, reliable, honest, excellent
mechanics. What more can one ask?

[D-] : CENTURY TIRE
Beacon St, Cambridge
2/95: "Rude and sleazy". Put on cheaper tires than were
paid for. Hesitant to credit visa for difference, tried
to push overpriced accessories instead.
2/95: "Classic bait-and-switch game". Quoted good price for
what he wanted over phone (Nokia Hakipolita Snows for BMW),
but pushed junk on him in person. Slow service.

I was familiar with these shops, and the descriptions matched
my experiences. So this past winter, for repair of damage from
a small accident, I consulted the mechanics list for a good auto
body shop. I chose this one:

[A] : MIKE'S AUTO BODY
Malden
4/95: Rave review, and a better than expected price (and
below Dick's). Happy to drop customer at the T. Even cleaned
road grime. Mike's is mildly associated with a local Porsche
car owner club, so Porsches are a specialty.
8/94: Suspension and related work on Porsche 911 Turbo.
Excellent work. (The Porsche folks in Germany would be proud.)
Genuinely interested in satisfying customer.
8/91: Very good quality, and price is right. Priced job WAY
below Dick's.

and saved about $300 versus the competitor's quote. The 4/95
entry is mine.

How much impact is the list having on Boston repair shops? John
helpfully supplied me with the list's access log, to allow some
rough estimating. Last month, 1300 unique machines accessed
the page, up from 550 the month before. Some of those were from
foreign countries and should be disregarded; to quickly estimate
the proportion of accesses that were local, I looked at the 1000
easily-classified accesses from the EDU domains. Half were from
local schools (most from MIT). So, roughly at most 650 owners
used the list to help choose a repair shop last month.

The list includes 160 shops. If each shop services 20 customers
a day, then these shops serviced 96000 customers in a month.
If each of the 650 accesses to the list resulted in a customer
choosing one of the shops on the list, then less than 1% of all
customers were list-informed. Thirty-nine of the shops on the list
got an A; if each of the 650 browsers went to one of those A-rated
shops, those shops' business increased by roughly 3% during the
month. So, the
impact on the shops is pretty small now, though at least one is
aware of the list.
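
For readers who want to vary these assumptions, here is a small
Python sketch of the same back-of-envelope arithmetic; all of the
numbers are the estimates quoted above, not measurements.

    # Back-of-envelope estimate of the list's impact, using the
    # article's assumed figures (not measured data).
    unique_visitors = 1300        # machines that fetched the page last month
    local_fraction = 0.5          # share of .edu accesses from local schools
    local_visitors = unique_visitors * local_fraction      # about 650

    shops = 160
    customers_per_shop_per_day = 20
    days = 30
    total_customers = shops * customers_per_shop_per_day * days   # 96,000

    informed_share = local_visitors / total_customers       # under 1%

    a_shops = 39
    a_shop_customers = a_shops * customers_per_shop_per_day * days
    a_shop_boost = local_visitors / a_shop_customers         # roughly 3%

    print("list-informed customers: %.1f%%" % (100 * informed_share))
    print("extra business for A-rated shops: %.1f%%" % (100 * a_shop_boost))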

But users who do use the list can save a bunch of money.

John told me that he maintains the list because he had found it
useful a few times when he worked at BBN; he sees it as a way
to contribute back to the net. It doesn't take much of his time
(he only receives about 4 or 5 messages a week), and he doesn't
have any plans to expand it or form a company around it. He's
busy with his real job.

The mechanics list resembles a restaurant guide book which
gathers its data from cards mailed by selected diners. What
distinguishes it, and how has technology enabled it? One key is
that the net has reduced the cost and work of distributing and
collating the data. In contrast to restaurant guides, there was
no established market for mechanic lists, so a publisher would
take a risk investing in it. Publishing the page on the web is
effectively free, though: email is free and the incoming text
is handy for incorporation into the web page. The electronic
distribution also changes the character of the list. The
mechanics list gets updated regularly, which improves information
quality (users can give quick pointers to inaccuracy) and also
makes it fun to send in information and see it incorporated.

Some characteristics specific to the Boston auto repair market
probably help make the list work. Boston seems to be the right
size: big enough to make finding a good mechanic a challenge, but
small enough that the number of shops on the list is manageable.
Lots of people are on-line in Boston, and they're a relatively
homogeneous bunch of students, engineers and computer scientists,
so their expectations and experiences of auto repair are likely
to correlate. The student population is particularly transient
and thus unfamiliar with the Boston mechanics.

Can the service work with more Bostonians on-line? Would it
work in other markets? Maintaining the list will become more
than the small distraction it is for John Bowe right now. Maybe
volunteers will help or perhaps the list would be supported
by donations from happy users. Or, perhaps this process could
be automated to scale into a general net-based consumer voting
scheme over companies, products, and manufacturers.

There ARE problems with scaling. Setting aside questions of
"efficiency", such as the dislocation and obsolescence of real
people, I'll focus on the problems in the workings of such a
system. There's lots of room for esoteric economic
models and vigorous hand-waving here, so I'll use the BBN Auto
Mechanics List to try to ground my comments.

The list is vulnerable to abuse. There's nothing to prevent
a garage from contributing bogus raves about itself, slamming
competitors, or hiring an advertising agency to do this for them.
John hasn't noticed any bogus reviews coming in and the content
seems accurate so things seem to be working now -- probably
because of the list's obscurity. But I'm skeptical that it
can continue. In other net forums, such as those discussing
new musical groups, people are being paid to hype specific
artists. Investment newsgroups have had shills promoting penny
stocks. Similar things could happen to the BBN list -- though the
relative permanence of mechanics (compared to the musical scene
or stock markets) makes manipulation a bit more difficult.

Is libel a problem? It looks like the list owner is protected
now, with his disclaimer about passing the information on with
no guarantees about accuracy. However, the recent Prodigy case
provides a precedent for considering small bits of editing to
confer responsibility.

Scaling leads to potential for inadvertent degeneracy. It
might avalanche toward extremes with bad reviews influencing the
objectivity of later reviews, or move to irrelevance with a large
variance in perceptions decaying most of the shops' grades toward
"average". There are many, many other ways for information
exchanges of this sort to fail. (One aspect of the list that
fights these trends is that the ratings of garages are not
limited to a single grade. The well-edited descriptions help
readers make their choices in a more informed manner. The
mechanics list occupies a nice position in the representational
spectrum, with letter grades available but more descriptive data
also available.) Can systems methodologies for gathering this
information be designed which fight degeneration?

Why scale up services like this? Recently (6/30/95) the New York
Times profiled Providian Bancorp, which provides credit cards to
consumers. Providian mails credit solicitations with unspecified
interest rates. If a consumer responds, Providian can access
their credit history and use statistical techniques to tailor
the highest possible interest rate. The techniques might notice that
the consumer has been insensitive to interest rate in carrying
a large balance. In this negotiation, the consumer is being
put at a disadvantage by the records kept of his past behavior.
However, if the consumer could access a database of credit
card companies, interest rates, and background on the consumer-
unfriendly practices of Providian, they'd be comparably leveraged
in the negotiation, and would probably get a better rate.
Informed consumers can make better decisions, and this principle
works for many other misleadingly advertised products.

How can automated tools be structured? Are there systems and
algorithms that can be developed and deployed to increase the
quality of this type of information, and to protect against
abuse? Maybe. Game theory might be employed to design voting
mechanisms that are robust against shills, backed by authentication
schemes such as digital signatures. Privacy
of respondents needs to be protected, and the technologies for
anonymity on the net that have recently been developed seem a
good starting point. Representational schemes for agents are
in development; these might be used as a language of reputation.
Finally, learning models and prediction mechanisms such as
automatic collaborative filtering might be used to better-tailor
preferences for the consumer. It's a complicated system and it
would be a cool experiment. Let me know what you think.
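
To make one piece of this concrete, here is a minimal sketch --
not anything the BBN list actually does, just an illustration of
the idea -- of how a scaled-up rating service might blunt the
influence of shills: accept at most one review per authenticated
contributor and trim outliers before averaging. The signature
check is a stub; a real system would verify a digital signature
against the contributor's public key.

    from statistics import mean

    def signature_is_valid(review):
        # Stub: a real system would verify a digital signature here.
        return review.get("signature") is not None

    def trimmed_mean(values, trim=1):
        """Drop the `trim` highest and lowest ratings before averaging."""
        values = sorted(values)
        if len(values) > 2 * trim:
            values = values[trim:-trim]
        return mean(values)

    def aggregate(reviews):
        """One rating per verified contributor; later reviews replace earlier ones."""
        latest = {}
        for r in reviews:
            if signature_is_valid(r):
                latest[r["contributor"]] = r["rating"]
        return trimmed_mean(list(latest.values())) if latest else None

    reviews = [
        {"contributor": "alice", "rating": 4.0, "signature": "sig1"},
        {"contributor": "bob",   "rating": 3.5, "signature": "sig2"},
        {"contributor": "carol", "rating": 4.5, "signature": "sig3"},
        {"contributor": "shill", "rating": 5.0, "signature": "sig4"},
    ]
    print(aggregate(reviews))  # 4.25 -- the outlying 5.0 and the low 3.5 are trimmed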

However, it's probably not necessary to wait for the deployment of
a huge automated system to have an impact right now. The BBN
Auto Mechanics List demonstrates that a small effort can do a lot
of good.

--

The BBN Auto Mechanics List is at
http://web1.osf.org:8001/faq/bbn-auto-mech.html

--------------------------------------------------------------------

Wish list.

Strange and instructive things are happening these days in the
world of the WorldWide Web. Take Netscape. Having given away
eleven bazillion copies of its version 1.0 web client and stocked
its coffers with IPO cash that it didn't really need, it's time
for the Netscape company to start making money. But why should
Netscape make any money? After all, the client they've given
away for free is perfectly good. The answer is that the client
they've given away only works with a certain set of features.
Let's focus on HTML. Everyone knows that HTML is not a great
programming language, particularly if you want to create complex
things like tables. So HTML is going to get some more features.
And as the creators of web pages start using those features, the
old web clients will slowly stop working.

Now, this process could happen in one of two ways: (1) the W3C
(the WorldWide Web Consortium) could define some new standards
for HTML and everyone could then go out and support them; or (2)
companies like Netscape could start defining their own features
without any regard for the W3C process. Of course, these two
things will both happen, and they will probably interact with
one another as well. The dynamics of the process will be shaped
by the interesting special properties of HTML and the Web, but
they will also exemplify the larger market dynamics of technical
standards. The most interesting of HTML's special properties
is that the source code for Web pages is public. If you like the
look and feel of someone's Web page, just pull down a menu, grab
their source code, and modify it to insert your own content and
produce the look and feel you want for yourself. Copy-and-modify
programming is important throughout the computer world, but the
Web has taken it to new heights.

But here's the catch: if the page you copied uses non-standard,
Netscape-specific features, and if you use Netscape yourself,
you're unlikely to find out about the problem until much later.
If you really like the non-standard feature then you may not even
care about the problem, figuring that most people use Netscape
anyway and other browsers probably won't crash too badly. HTML
features and programming cliches can travel like viruses through
this dynamic, copied from one neat page to another. It's a good
dynamic in many ways. It lets people get up to speed in HTML
programming very fast, since they never have to start writing
code from a blank screen. On the other hand, it might cause
some havoc for the process of defining standards. Standards are
good because, among other things, they keep people from being
locked in to a particular supplier's products. Suppose that a
Netscape-specific dialect of HTML somehow arose and became widely
used. And let's say that other Web browser companies develop
their own distinct dialects. Then the Web will slowly break
into separate regions, each with its own dialect of the language.
Someone who wanted to use a certain subspace of Web pages would
have to acquire the Web client that can read those pages. The
result would be a fragmented market, with each supplier receiving
a high margin but with the total market greatly depressed because
the benefits to buyers of entering the market would be much
less. Some argue that companies have an incentive to encourage
this situation; if Web features are completely standardized then
barriers to entry in the market for Web clients will be low, and
profits will be low accordingly.

What can we do to prevent such a situation? I would suggest that
the W3C, or some other friend of standards, produce a Web crawler
that checks pages for compliance with the standards. The pages
are all public and various search tools already sweep over them
on a regular basis, so nobody should mind a standards-checking
tool doing the same thing. Every site and every user could set
a switch indicating whether they wish to receive an automated
commentary on their HTML style. (The switch would be set to "no"
by default.) Also, it would be possible to ask for a commentary
on a page right away by feeding a URL to a site which has an HTML
commentary demon running full-time. The commentary might include
things like "warning: <SO-AND-SO> is a Netscape-specific feature
and not part of the standard" or "please note: <SUCH-AND-SUCH>
is part of the draft next version of the standard, but is not yet
officially a standard" and so on.
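
As a rough sketch of the per-page check such a crawler might run
(the tag lists below are illustrative stand-ins, not the actual
1995 HTML specification, and a real tool would also examine
attributes and nesting), something along these lines would produce
the kind of commentary described above:

    from html.parser import HTMLParser

    # Illustrative tag lists only; not the official standard.
    STANDARD_TAGS = {"html", "head", "title", "body", "h1", "h2", "h3",
                     "p", "a", "ul", "ol", "li", "pre", "em", "strong",
                     "img", "br", "hr", "blockquote"}
    KNOWN_EXTENSIONS = {"center": "a Netscape-specific feature",
                        "blink": "a Netscape-specific feature",
                        "table": "part of a draft standard, not yet official"}

    class StyleChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.warnings = []

        def handle_starttag(self, tag, attrs):
            if tag in STANDARD_TAGS:
                return
            note = KNOWN_EXTENSIONS.get(tag, "not part of the standard")
            self.warnings.append("warning: <%s> is %s" % (tag.upper(), note))

    checker = StyleChecker()
    checker.feed("<html><body><center><blink>Hi!</blink></center></body></html>")
    for warning in checker.warnings:
        print(warning)

A full crawler would simply fetch each consenting page and run a
check like this, sending the commentary only where the site's
switch is set to "yes".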

Such an HTML style crawler might have a variety of uses. For
example, it could track the spread of new features, producing
statistics that would be available on Web pages for anyone to
read. Some difficult design decisions would also be necessary.
For example, does one crawl the Web from the same starting-points
as the standard Web search tools, or from a different set of
points? Does one crawl only those pages that can be reached
through links in pages already crawled over, or does one also
thread the search through other pages in the same directory as
pages that are crawled over? Could Web client producers create
their own crawlers that search for features that are specific to
their competitors' clients, and then send direct mail to those
pages' authors suggesting features that are more compatible with
the standard?

--------------------------------------------------------------------

This month's recommendations.

H. Landis Gabel, ed, Product Standardization and Competitive
Strategy, Amsterdam: North-Holland, 1987. The computer industry
doesn't come close to obeying the laws of supply and demand from
neoclassical economics. Why? Because issues of compatibility
generate all kinds of strange incentives and strategies. This
book contains the best account I've seen of these phenomena.
The first chapter, by Joseph Farrell and Garth Saloner, could
have been the textbook for Bill Gates' march to eleven-figure
wealth -- not by making better products, but by getting software to
market quickly and then leveraging various compatibility effects
to consolidate and expand his market dominance. We'll see a
lot more of this sort of thing, and I think it's important for
everyone to learn more about it.

--------------------------------------------------------------------

Follow-up.

Robert Putnam <rputnam@cfia.harvard.edu>, whose article about the
decline of associational ties in the United States I discussed in
TNO 2(3), wrote to register some disagreements and clarifications.
First of all, he was a little disgruntled that I referred to the
Journal of Democracy as "generally conservative", lest anyone
think that *he* is conservative, which he is not. He accurately
points out that an interest in associational ties is a bipartisan
matter at the moment. We went back and forth about his concept
of "social capital", which I had asserted isn't really a kind of
capital. The issue is complicated, because a narrow construal
of the term -- roughly, the stock of relationships of trust that
I have built up -- is indeed a kind of capital, and by summing
up the relationship capital possessed by individuals in a region
we can come to something that deserves to be called "social
capital" and ascribed to a region rather than to an individual
or organization. But his concept goes beyond this to speak
of a general climate of trust and a general assumption of
trustworthiness (something that can be found in northern Italy,
as he argues in his book "Making Democracy Work", but not in
southern Italy), and this is the part that doesn't seem like
capital to me. He also expressed skepticism that computer
networks can support "communities" of the sort required by
his argument -- networks of trusting relationships that form
the basis of a healthy and vigorous civic and economic life.
I don't much care about this myself, though, since I'm much
more interested in viewing computer networks as integrated
with regional and professional communities, supporting their
existing dynamics and perhaps helping to change those dynamics.
Although I probably didn't make this clear in TNO 2(3), see my
longer discussion of the matter in, for example, TNO 1(5).

Web picks:

The Internet Engineering Task Force, which sets standards for the
Internet, is on the Web at: http://www.ietf.cnri.reston.va.us

The Institute for the Study of Civic Values has a good collection
of resources on community-building at
http://libertynet.org/~edcivic/iscvhome.html

The Londoners who are defending themselves against charges
of having libeled McDonalds by handing out a leaflet that
was critical of the company's environmental practices and
the nutritional value of its food have a web page with their
original leaflet and other information on the case. The URL
is http://anthfirst.san.ed.ac.uk/McLibelTopPage.html

Negativland's intellectual property web site is at
http://sunsite.unc.edu/wxyc/legal.html

--------------------------------------------------------------------
Phil Agre, editor                                pagre@ucla.edu
Department of Information Studies
University of California, Los Angeles            +1 (310) 825-7154
Los Angeles, California 90095-1520               FAX 206-4460
USA
--------------------------------------------------------------------
Copyright 1995 by the editor. You may forward this issue of The
Network Observer electronically to anyone for any non-commercial
purpose. Comments and suggestions are always appreciated.
--------------------------------------------------------------------
