¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
::ÆÆÆ[www.blackhat.cx]ÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆ::
____________
--)-----------|____________|
,' ,'
-)------======== ,' ____ ,'
`. `. ,' ,'__ ,'
`. `. ,' ,'
`. `._,'_______,'________________[ vol.1 <=> issue#2 ]
__________.____ _____ _________ ____ __.__________.___ _______ ___________
\______ \ | / _ \ \_ ___ \| |/ _______ /| |\ \ \_ _____/
| | _/ | / /_\ \/ \ \/| < / / | |/ | \ | __)_
| | \ |___/ | \___| | \__/ /__| | \_| \
|______ ________ ____|__ /\______ _____|__ __________\___\____|_______________/
\/ \/ \/ \/ \/ @logicfive.net
.-"" |=========================== ______________ |---------------------------------
"-...l_______________________ | |' || |_]__ |
[`-.|__________ll_| |----- www.blackhat.cx --------
,' ,' `. `. | (c) The BlackHat Project |
,' ,' `. ____`. -------------------------------
-)---------======== `. `.____`.
__ `. `.
/ /\ `.________`.
_ / / \ --)-------------|___________|
,-- / /\/ / \ -,
| ,/ / \/ / |----> the table of contents
,---| \ \ / |---------------------------------------------------------------,
| `-- \ \ / -----' "It is not that I'm so smart, it's just that I stay with
| `\`*_' problems longer." ~ Albert Einstein
| \__________________________________________________________________________'
|
|:0x01 - Welcome.................................................................STAFF
| > Introduction
| > About BlackHat
|:0x02 - 0x69....................................................................STAFF
| > #etcpub TOP 10 0x**
|:0x03 - Network scanning techniques.............................................^sysq
| > TCP Scanning
| > Half-Open Scanning: Scanning with the SYN flag set
| > FIN scanning
| > Stealth Scanning
| > TCP reverse ident scanning
| > UDP Scanning
| > Ftp Bounce Attack
| > ICMP Echo scanning
| > IP Fragmentation
|:0x04 - Modifying the Kernel......................................................lkm
| > Upgrading and Installing New Kernel Software
| > Compiling the Kernel from Source Code
| > Where to Find Kernel Sources
| > Using New Kernel Sources
| > Adding Drivers to the Kernel
| > Upgrading Libraries
| > Debugging and Profiling Options
| > Debugging gcc Programs with gdb
| > Summary
|:0x05 - Basics about a BUFFER OVERFLOW.........................................sXcrap
| > Background
| > Fatal return
|:0x06 - About Rootkits..........................................................Geert
| > Intro
| > Wot is a rootkit?
| > Wot do they do?
| > Wot are they used for?
| > Examples of a few very known rootkits
| > How to detect the presence of a rootkit
| > Tools to detect a rootkit
| > Defense : intrusion detection
| > Defense : Prevention
| >
|:0x07 -
|
`-------------------------------------------------------------------------------------'
::ÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆÆ[www.blackhat.cx]ÆÆÆ::

¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
#=> 0x01 Welcome
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬

#=[ Introduction. ]=--->

The blackhat community has always been more or less underground. That is only natural, given
the nature of the scene. A misunderstood and mysterious group of people is often prejudged in
the world-wide media. Blackhat Magazine, or e-zine, tries to bring the community closer to common
people, and it tries to break the usual stereotypes made of "hackers". Not every blackhat is evil;
many just don't want to live in this, in many ways cruel, society.
For them, cyberspace and coding are the way to make a difference. Don't judge. Understand.




#=[ About BlackHat. ]=--->

Bzine is for people: from the blackhat scene to everyone who is interested in ethics and
security-related articles.

"White or Greyhat, it just don't matter
Sucker dive for your life when my shotgun scatters" -Anonymous @ http://phrack.efnet.ru



¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
#=> 0x02 0x69
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬

#=[ #etcpub TOP 10 0x** ]=--->

10) 0x50 a hard days night - been talking to sller

09) 0x121 how to rip - tuxtendo

08) 0x87 how to wank - farmer

07) 0x104 teatime troubles when im on irc - apache

06) 0x49 how will i buy entire SUN company via diplum - Malcolmx

05) 0x150 how to code non working cgi scanner - guilefool

04) 0x119 using linux professionally - rigian

03) 0x116 how to compile stacheldraht - timeless

02) 0x106 i like americans - angelsz

01) 0x68 I still think ulogin.c's 10 lines of code have a backdoor - dave dittrich


¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
#=> 0x03 Network scanning techniques
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬

#=[ TCP Scanning ]=--->

A port is a number which identifies a service: each service (or client) listens for IP
packets and picks up those intended for it (the ones carrying the open port as their
destination port). It is very interesting to know which ports are open on a machine,
since they represent the services that are available, and attackable, on it.
The first classic scan is the TCP port scan. Its principle is simple: in a network
communication over TCP/IP, the client program requests a connection to the host, which
answers if a server program is listening on the port.
So to scan a machine, you simply request a connection on each port you want to probe.
This is done with the function connect(socket, struct sockaddr *, int).
The code that does this is:
--------------------------------------------------------------------------------------------
sock = socket(AF_INET, SOCK_STREAM, 0);                 /* create a TCP socket           */
addr.sin_family = AF_INET;                              /* IPv4                          */
addr.sin_port = htons(port);                            /* target port, network order    */
rc = connect(sock, (struct sockaddr *)&addr, sizeof(addr)); /* rc == 0 => port is open   */
close(sock);                                            /* tear the connection down      */
--------------------------------------------------------------------------------------------
This scan is very simple and can be implemented on any machine without any special
privileges, but it has problems of speed and of anonymity (a single connect to an open,
listening port is enough to get you spotted and logged).
While making a separate connect() call for every targeted port in a linear fashion would take ages over a slow connection,
you can hasten the scan by using many sockets in parallel. Using non-blocking I/O allows you to set a low time-out period
and watch all the sockets at once. This is the fastest scanning method supported by nmap,
and is available with the -t (TCP) option. The big downside is that this sort of scan is easily detectable and filterable.
The target host's logs will show a bunch of connection and error messages for the services
which accept the connection only to have it immediately shut down.
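As a rough illustration of the parallel trick, here is a minimal sketch (my own code, not
the zine's, with error handling trimmed): start many non-blocking connect() calls, then,
once select() reports a socket writable, ask the kernel whether the connect succeeded.
--------------------------------------------------------------------------------------------
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/select.h>
#include <sys/socket.h>

/* fire off one non-blocking probe; connect() returns at once with EINPROGRESS */
int probe_start(const char *ip, int port) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(s, F_SETFL, O_NONBLOCK);
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    sa.sin_addr.s_addr = inet_addr(ip);
    connect(s, (struct sockaddr *)&sa, sizeof(sa));
    return s;                      /* caller select()s on many of these at once */
}

/* after select() marks the socket writable, check how the connect ended up */
int probe_result(int s) {
    int err = 0;
    socklen_t len = sizeof(err);
    getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len);
    close(s);
    return err == 0;               /* 0 => the port accepted the connection */
}
--------------------------------------------------------------------------------------------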

#=[ Half-Open Scanning: Scanning with the SYN flag set ]=--->

SYN scanning is undoubtedly the most used and the fastest scan,
though surely not the most discreet.
It is still far more discreet than full-connect scanning, but with the disadvantage that you
have to implement it by building your own TCP packets, and you need root rights to run it
(under Windows, the WinSock implementation does not allow the creation of RAW sockets, but it
is possible to write your own library to send completely hand-built IP packets).
A TCP connection is established in three steps.
First comes the connection request, made by sending a packet with the SYN flag set.
Then comes the host's answer: if the port is open, a packet with the SYN and ACK flags;
if it is closed, a packet with the RST flag.
The third step, which opens the connection, is a packet sent by the client with the ACK flag.
During a SYN scan you send a SYN packet and wait for either a SYN+ACK or a RST.
That way you never open a connection, and thus never have to close one.
A word on the construction of a TCP packet.
A TCP packet is laid out as follows:
 0                               15|16                               31
|----------------------------------|----------------------------------|
|       source port (16 bits)      |    destination port (16 bits)    |
|---------------------------------------------------------------------|
|                     sequence number (32 bits)                       |
|---------------------------------------------------------------------|
|                  acknowledgment number (32 bits)                    |
|---------------------------------------------------------------------|
| hdr len (4) | reserved (6) | flags (6) |   window size (16 bits)    |
|---------------------------------------------------------------------|
|        checksum (16 bits)        |    urgent pointer (16 bits)      |
|---------------------------------------------------------------------|
|                          options (if any)                           |
|---------------------------------------------------------------------|
|                           data (if any)                             |
|---------------------------------------------------------------------|
The flags are, in order: URG, ACK, PSH, RST, SYN, FIN.
URG indicates that the urgent pointer is valid (it marks the end of the urgent data in the
packet).
ACK indicates that the acknowledgment number is valid (the sequence number expected in the
next packet).
PSH asks the network stack to pass the data to the application as quickly as possible.
RST resets the connection.
SYN is the synchronization signal for the sequence numbers.
FIN ends the connection.
So, roughly: you send a packet with the source port, the destination port and the SYN flag
set, and you wait for a packet with the RST flag, or with the ACK and SYN flags (and the
right port numbers).
Of course, the fastest approach is not to wait for a reply between two sends: you fire a SYN
packet at the machine, check without blocking whether a reply has arrived, and repeat until
all the SYN packets have been sent; then all that remains is to listen to the packets
arriving for a short while.
So a RST is indicative of a non-listener. If a SYN|ACK is received,
you immediately send a RST to tear down the connection (actually the kernel does this for us).
The primary advantage of this scanning technique is that fewer sites will log it.
Unfortunately you need root privileges to build these custom SYN packets. SYN scanning is the -s option of nmap.
This scan remains detectable, however: it is enough to watch the returning packets and check
that you are not dealing with a scan.
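For a concrete picture, here is a minimal, hedged sketch (mine, not nmap's code) of sending
one SYN probe through a Linux raw socket: root is required, the addresses and port numbers
are placeholder assumptions, and reading the SYN|ACK or RST back is left out. The TCP
checksum is computed over a pseudo-header followed by the TCP header.
--------------------------------------------------------------------------------------------
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* standard one's-complement Internet checksum */
static unsigned short cksum(const unsigned short *buf, int len) {
    unsigned long sum = 0;
    while (len > 1) { sum += *buf++; len -= 2; }
    if (len) sum += *(const unsigned char *)buf;
    sum = (sum >> 16) + (sum & 0xffff);
    sum += (sum >> 16);
    return (unsigned short)~sum;
}

struct pseudo {                       /* pseudo-header for the TCP checksum */
    unsigned int src, dst;
    unsigned char zero, proto;
    unsigned short len;
};

/* src_ip must be the address the kernel will actually put in the IP header,
   or the checksum will be wrong and the probe silently ignored */
int send_syn(const char *src_ip, const char *dst_ip, int dport) {
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);  /* kernel builds the IP header */
    if (s < 0) { perror("socket (need root)"); return -1; }

    struct { struct pseudo ph; struct tcphdr th; } pkt;
    memset(&pkt, 0, sizeof(pkt));

    pkt.th.source = htons(31337);     /* arbitrary source port */
    pkt.th.dest   = htons(dport);
    pkt.th.seq    = htonl(12345);     /* arbitrary initial sequence number */
    pkt.th.doff   = 5;                /* header length: 5 * 4 = 20 bytes */
    pkt.th.syn    = 1;                /* the SYN flag: this is the whole point */
    pkt.th.window = htons(4096);

    pkt.ph.src    = inet_addr(src_ip);
    pkt.ph.dst    = inet_addr(dst_ip);
    pkt.ph.proto  = IPPROTO_TCP;
    pkt.ph.len    = htons(sizeof(struct tcphdr));
    pkt.th.check  = cksum((unsigned short *)&pkt, sizeof(pkt));

    struct sockaddr_in to;
    memset(&to, 0, sizeof(to));
    to.sin_family = AF_INET;
    to.sin_addr.s_addr = inet_addr(dst_ip);

    /* send only the TCP header; the answer (SYN|ACK = open, RST = closed)
       would be read back from the raw socket or with a sniffer */
    ssize_t n = sendto(s, &pkt.th, sizeof(pkt.th), 0,
                       (struct sockaddr *)&to, sizeof(to));
    close(s);
    return n < 0 ? -1 : 0;
}
--------------------------------------------------------------------------------------------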

#=[ FIN scanning ]=--->

Sometimes SYN scanning isn't clandestine enough. Some firewalls and packet filters watch for SYNs to an unallowed port,
and programs like synlogger and Courtney are available to detect these scans. FIN packets, on the other hand,
may be able to pass through unmolested. This scanning technique was featured in detail by Uriel Maimon in Phrack 49,
article 15. The idea is that closed ports tend to reply to your FIN packet with the proper RST. Open ports,
on the other hand, tend to ignore the packet in question.
This is a bug in TCP implementations and so it isn't 100% reliable (some systems, notably Micro$oft boxes,
seem to be immune). It works well on most other systems I've tried. FIN scanning is the -U (Uriel) option of nmap.

#=[ Stealth Scanning ]=--->

Stealth scanning is TCP port scanning that exploits bugs in TCP implementations.
The great advantage of these methods is that they are not easily detectable and that they
pass through several kinds of firewall. Their great disadvantage is that they do not work
everywhere (it depends on the target system).
The first method is to send a packet with the FIN flag. If the port is closed,
a packet with the RST flag comes back; otherwise the packet is ignored.
This method works on the majority of systems.
The second method is to send a packet with the ACK flag and wait for a packet with the RST
flag. If the window field of that packet is non-zero, or if its TTL field is low (<= 64),
then the port is probably open. This bug mainly works on old BSD implementations.
A third method is to send a packet with no flags at all (Null scan), and a fourth to send a
packet with the FIN, URG and PSH flags (Xmas scan). The behaviour is then the same as for
the FIN scan.

#=[ TCP reverse ident scanning ]=--->

As Dave Goldsmith noted in a 1996 Bugtraq post,
the ident protocol (rfc1413) allows for the disclosure of the username of the owner of any process connected via TCP,
even if that process didn't initiate the connection. So you can, for example,
connect to the http port and then use identd to find out whether the server is running as root.
This can only be done with a full TCP connection to the target port (i.e. the -t option).
nmap's -i option queries identd for the owner of all listen()ing ports.
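A minimal sketch of such a query (assuming you already hold a full TCP connection to the
target, and with a function name I made up): per rfc1413 you connect to port 113 and send
the two port numbers of that existing connection, and identd answers with the owner.
--------------------------------------------------------------------------------------------
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* ask the target's identd who owns the target-side end of an existing connection:
   remote_port = the scanned port on the target, local_port = our side of it */
int ident_query(struct in_addr target, int remote_port, int local_port) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(113);                 /* identd */
    sa.sin_addr   = target;
    if (connect(s, (struct sockaddr *)&sa, sizeof(sa)) < 0) return -1;

    char buf[256];
    /* rfc1413 query is "server-port , client-port" from the server's view */
    int n = snprintf(buf, sizeof(buf), "%d , %d\r\n", remote_port, local_port);
    write(s, buf, n);
    n = read(s, buf, sizeof(buf) - 1);          /* the reply ends with the username */
    if (n > 0) { buf[n] = 0; printf("identd says: %s", buf); }
    close(s);
    return 0;
}
--------------------------------------------------------------------------------------------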

#=[ UDP Scanning ]=--->

UDP is a shitty protocol (no acknowledgment of received packets)
but it remains useful in certain cases.
To scan for open UDP ports, the principle is very dumb:
you send a UDP packet to the port, and if you receive an ICMP Port Unreachable back, the
port is closed; otherwise it is open. You are not even obliged to build the packets
yourself: it is enough to test the function recvfrom(), which returns ECONNREFUSED if the
port is closed... Of course, it is preferable to build your own packets to speed up the scan.
This scanning technique is slow because of compensation for machines that took RFC 1812 section 4.3.2.8 to heart
and limit ICMP error message rate. The Linux kernel (in net/ipv4/icmp.h) limits destination unreachable message generation
to 80 per 4 seconds, with a 1/4 second penalty if that is exceeded.
At some point I will add a better algorithm to nmap for detecting this.
Also, you will need to be root for access to the raw ICMP socket necessary for reading the port unreachable.
The -u (UDP) option of nmap implements this scanning method for root users.
Some people think UDP scanning is lame and pointless. I usually remind them of the recent Solaris rpcbind hole.
Rpcbind can be found hiding on an undocumented UDP port somewhere above 32770.
So it doesn't matter that 111 is blocked by the firewall.
But can you find which of the more than 30,000 high ports it is listening on?
With a UDP scanner you can!

- UDP recvfrom() and write() scanning:

While non-root users can't read port unreachable errors directly,
Linux is cool enough to inform the user indirectly when they have been received.
For example, a second write() call to a closed port will usually fail.
A lot of scanners, such as netcat and Pluvius' pscan.c, do this.
I have also noticed that recvfrom() on non-blocking UDP sockets
usually returns EAGAIN ("Try again", errno 11) if the ICMP error hasn't been received,
and ECONNREFUSED ("Connection refused", errno 111) if it has.
This is the technique used for determining open ports when non-root users scan with -u (UDP).
Root users can also use the -l (lamer UDP scan) option to force this, but it is a really dumb idea.
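A minimal sketch of that non-root trick (the function name and the timing are my own
assumptions): connect() the UDP socket so the ICMP error gets queued on it, write once,
wait briefly, and see whether a second write() fails with ECONNREFUSED.
--------------------------------------------------------------------------------------------
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int udp_probe(const char *ip, int port) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    sa.sin_addr.s_addr = inet_addr(ip);
    connect(s, (struct sockaddr *)&sa, sizeof(sa)); /* just sets the peer address */

    write(s, "x", 1);
    sleep(1);                       /* give the ICMP error time to come back */
    if (write(s, "x", 1) < 0 && errno == ECONNREFUSED) {
        close(s);
        return 0;                   /* closed: the error was queued on the socket */
    }
    close(s);
    return 1;                       /* open (or filtered): no error seen */
}
--------------------------------------------------------------------------------------------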

#=[ Ftp Bounce Attack ]=--->

This attack makes it possible to use an FTP server as a "proxy" for port scanning (and for
other things too, which do not interest us here). If our target host is listening on the
specified port, the transfer will be successful (generating a 150 and a 226 response).
Otherwise we will get "425 Can't build data connection: Connection refused."
Then we issue another PORT command to try the next port on the target host.
The advantages to this approach are obvious (harder to trace, potential to bypass
firewalls). The main disadvantages are that it is slow, and that some
FTP servers have finally got a clue and disabled the proxy "feature".
The FTP protocol provides for server-to-server transfers, with a separate client
controlling the transfer. In practice this is done by specifying the address of another
server with the PORT command. For example, on an FTP server, before sending a command, one
issues:
PORT 172,52,36,4,10,1
which says to use port 2561 (= 10*256+1) on machine 172.52.36.4 for the data transfer.
If you then send the LIST command to the FTP server, it returns the result to port 2561 of
machine 172.52.36.4. So to run a TCP port scan, FTP bounce style, it is enough
to connect to an FTP server (an unmodified one, which accepts whatever IP you specify).
Then, if a.b.c.d is the IP to scan, you send the commands "PORT a,b,c,d,p1,p2",
where p1 and p2 encode the port, followed by "LIST". The server answers "425 Can't build data
connection: Connection refused." if the port is closed; otherwise the transfer succeeds
(reply 150 or 226).
The advantage is, of course, "greater anonymity" for the scan (more difficult to trace). It
also makes it possible to get around firewalls (a kind of spoofing). The disadvantage of
this scan is its slowness (full connection, data transfer...).
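As a tiny worked example of the p1/p2 arithmetic (the helper name is mine, not from any
tool): the port splits as p1 = port / 256 and p2 = port % 256.
--------------------------------------------------------------------------------------------
#include <stdio.h>

/* print the two FTP commands that probe a.b.c.d:port through a bounce server */
void bounce_probe(int a, int b, int c, int d, int port) {
    /* port = p1*256 + p2; e.g. 2561 -> "10,1" */
    printf("PORT %d,%d,%d,%d,%d,%d\r\n", a, b, c, d, port / 256, port % 256);
    printf("LIST\r\n");   /* 150/226 reply => port open, 425 => closed */
}
--------------------------------------------------------------------------------------------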

#=[ ICMP Echo scanning ]=--->

ICMP (Internet Control Message Protocol) makes it possible to send control messages (in
particular, when a route is down, it is ICMP that tells us). I could go on at length here
about the ICMP protocol, but that is not the subject, and there are very good articles on it
(notably the late CoD4's in NoRoute 3, and Sneakie's 'ICMP My Friend').
Here I will simply explain the echo and echo reply messages.
 0              7|8             15|16                               31
|---------------------------------------------------------------------|
|  type (0 or 8) |    code (0)    |         checksum (16 bits)        |
|---------------------------------------------------------------------|
|       identifier (16 bits)      |     sequence number (16 bits)     |
|---------------------------------------------------------------------|
|                      optional data (if any)                         |
|---------------------------------------------------------------------|
An ICMP Echo request message is thus identified by type 8 and code 0. The identifier is
there to identify the echo exchange, and the sequence number says which packet this is (it
is the icmp_seq field you see during a ping). An echo message is normally answered by an
echo reply message, identified by type 0 and code 0, with the same fields as the echo
(identifier and sequence number).
IP scanning with ICMP Echo messages:

The principle is simple: for each IP you send an echo packet and wait for an echo reply
with the same fields. An ICMP unreachable packet comes back (destination unreachable) if
the host does not exist, and a timeout if it is unreachable.
This way you can scan a series of IPs rather quickly.
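A minimal sketch of building one echo request on a Linux raw socket (mine, not any tool's
code; root is needed, the identifier and sequence values are arbitrary placeholders, and
reading the reply is omitted):
--------------------------------------------------------------------------------------------
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/ip_icmp.h>
#include <sys/socket.h>

/* same one's-complement checksum as for TCP, here over the ICMP header */
static unsigned short icksum(const unsigned short *buf, int len) {
    unsigned long sum = 0;
    while (len > 1) { sum += *buf++; len -= 2; }
    if (len) sum += *(const unsigned char *)buf;
    sum = (sum >> 16) + (sum & 0xffff);
    sum += (sum >> 16);
    return (unsigned short)~sum;
}

int send_echo(const char *dst_ip) {
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);   /* root required */
    if (s < 0) return -1;

    struct icmphdr h;
    memset(&h, 0, sizeof(h));
    h.type = ICMP_ECHO;                    /* type 8, code 0: echo request */
    h.un.echo.id = htons(1234);            /* identifies this echo exchange */
    h.un.echo.sequence = htons(1);         /* the icmp_seq you see in ping */
    h.checksum = icksum((unsigned short *)&h, sizeof(h));

    struct sockaddr_in to;
    memset(&to, 0, sizeof(to));
    to.sin_family = AF_INET;
    to.sin_addr.s_addr = inet_addr(dst_ip);

    /* a type 0 (echo reply) with the same id/seq means the host is up */
    sendto(s, &h, sizeof(h), 0, (struct sockaddr *)&to, sizeof(to));
    close(s);
    return 0;
}
--------------------------------------------------------------------------------------------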
Using TCP instead:

In fact, ICMP Echo is not strictly necessary to find out whether a machine answers; it is
enough to use any packet that must necessarily get an answer, which includes TCP SYN (and
connect()) and TCP ACK.

ICMP doesn't have a port abstraction, but it is sometimes useful to determine which hosts
in a network are up by pinging them all. nmap's -P option does this. You might also want to
adjust the PING_TIMEOUT #define if you are scanning a large network. nmap supports a
host/bitmask notation to make this sort of thing easier. For example 'nmap -P cert.org/24
152.148.0.0/16' would scan CERT's class C network and whatever class B entity 152.148.*
represents. Host/26 is useful for 6-bit subnets within an organization.

#=[ IP Fragmentation ]=--->

IP fragmentation of the scan packets makes it possible to pass through some firewalls by
preventing analysis of the packets.
The traditional IPv4 header is as follows:
 0                               15|16                               31
|----------------------------------|----------------------------------|
| version (4) | hdr len (4) | TOS 8|      total length (16 bits)      |
|---------------------------------------------------------------------|
|     identification (16 bits)     | flags (3) | fragment offset (13) |
|---------------------------------------------------------------------|
|  TTL (8 bits)  |  protocol (8)   |        checksum (16 bits)        |
|---------------------------------------------------------------------|
|                    source IP address (32 bits)                      |
|---------------------------------------------------------------------|
|                 destination IP address (32 bits)                    |
|---------------------------------------------------------------------|
where the flags are (in order): Reserved, Don't Fragment, More Fragments.
During fragmentation (which applies only to the data), the packet is cut into blocks whose
size is a multiple of 8. Every resulting packet (except the last one) has its More
Fragments flag set to 1. The fragment offset field must contain the offset (divided by 8),
in bytes, of the start of the current fragment relative to the start of the packet that was
fragmented. The size of each fragment (in the total length field) is recomputed, and so is
the checksum.
For example, take an IP packet containing 45 bytes of data (data in the sense of the IP
protocol). If we fragment it into 16-byte pieces, there will be three packets:
* the first containing the first 16 bytes of data, the More Fragments flag set to 1,
the fragment offset at 0, and the total length field at 36 (20 of them for the IP header);
* the second with the next 16 bytes, More Fragments at 1, the fragment offset at 2
(2*8 = 16), and the total length at 36;
* the third with the last 13 bytes, More Fragments at 0, the fragment offset at 4
(4*8 = 32), and the total length at 33.
Note: although the IP protocol provides for fragmentation on 8-byte boundaries, I then got
an "Operation not permitted" error (just like fyodor during the development of nmap). I
therefore set the FRAGMENT_SIZE constant of rtcscan to 2 (i.e. 2*8 = 16 bytes).
This is a modification of other techniques. Instead of just sending the probe packet, you
break it into a couple of small IP fragments. You are splitting up the TCP header over
several packets to make it harder for packet filters and so forth to detect what you are
doing. Be careful with this! Some programs have trouble handling these tiny packets. My
favorite sniffer segmentation faulted immediately upon receiving the first 36-byte
fragment. After that comes a 24-byte one! While this method won't get by packet filters and
firewalls that queue all IP fragments (like the CONFIG_IP_ALWAYS_DEFRAG option in Linux), a
lot of networks can't afford the performance hit this causes. This feature is rather unique
to scanners (at least I haven't seen any others that do this). Thanks to daemon9 for
suggesting it. The -f option instructs the specified SYN or FIN scan to use tiny fragmented
packets.

¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
#=> 0x04 Modifying the Kernel
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬

Usually you will want to leave the Linux kernel alone except when performing a major upgrade, installing a new
networking component (such as NFS or NIS), or installing a new device driver that has special kernel requirements. The
details of the process used to install the kernel drivers are usually supplied with the software. Because this isn't always
the case, though, this chapter should give you a good idea of the general process for working with the kernel.

Don't modify the kernel without knowing what you are doing. If you damage the source code or configuration
information, your kernel may be unusable, and in the worst cases, your filesystem may be affected. Take care and follow
instructions carefully. Keep in mind that this chapter only covers the basics of kernel manipulation.

The several versions of Linux in common use have a few inconsistencies between them. For that reason, the exact
instructions supplied in the following sections may not work with your version of Linux. The general approach is the
same, however, and only the directory or utility names may be different. Most versions of Linux supply documentation
that lists the recompilation process and the locations of the source code and compiled programs.

Before doing anything with the kernel or utilities, make sure you have a good set of emergency boot disks and a
complete backup on tape or floppy disk. Although the process of modifying the kernel is not difficult, it does
cause problems every now and again that can leave you stranded without a working system. Boot disks are the
best way to recover, so make at least one extra set.

Because the kernel is compiled with the C compiler supplied as part of Linux, the latter part of this chapter looks at the
C compiler and its flags and how you can use it to your advantage. This information isn't meant to be a complete
reference to the C system, of course, but it should be useful for some basic manipulations you may require when
modifying the kernel (or any other source code compiled by C).


#=[ Upgrading and Installing New Kernel Software ]=--->

Linux is a dynamic operating system. New releases of the kernel or parts of the operating system that can be linked into
the kernel are made available at regular intervals to users. Whether you want to upgrade to the new releases usually
depends on the features or bug fixes that the new release offers. You will probably have to relink the kernel when you
add new software, unless the software is loaded as a utility or device driver.

Avoid upgrading your system with every new release, for a couple of reasons. The most common problem with constant
upgrades is that you may be stuck with a new software package that causes backward compatibility problems with your
existing system or that has a major problem with it. Most new releases of software wipe out existing configuration
information, so you will have to reconfigure the packages that are being installed from scratch. Also, the frequency with
which new releases are made available is so high that you can probably spend more time loading and recompiling kernels
and utilities than using the system. Read the release notes carefully to ensure that the release is worth the installation
time and trouble. Remember that few installations proceed smoothly!

The best advice is to upgrade only once or twice a year, and only when there is a new feature or enhancement that will
make a significant difference to the way you use Linux. It's tempting to always have the latest and newest versions of the
operating system, but there is a lot to be said for having a stable, functioning operating system, too.

If you do upgrade to a new release, bear in mind that you don't have to upgrade everything. The last few Linux releases
have changed only about five percent of the operating system with each new major package upgrade. Instead of
replacing the entire system, just install those parts that will have a definite effect, such as the kernel, compilers and
their libraries, and frequently used utilities. This method saves time and reconfiguration.


#=[ Compiling the Kernel from Source Code ]=--->

Upgrading, replacing, or adding new code to the kernel is usually a simple process. You obtain the source for the kernel,
make any configuration changes, compile it, and then place it in the proper location on the filesystem to run the system
properly. The process is often automated for you by a shell script or installation program, and some upgrades are
completely automated with no need to do anything more than start the upgrade utility.


#=[ Where to Find Kernel Sources ]=--->

Kernel sources for new releases of Linux are available from CD-ROM distributions, FTP sites, user groups, and many
other locations. Most kernel versions are numbered with a version and a patch level, so you see kernel names like
1.12.123 where 1 is the major release, 12 is the minor version release, and 123 is the patch number. Most kernel source
sites maintain several versions simultaneously, so check through the source directories for the
latest version of the kernel.

Patch releases are sometimes numbered differently, and do not require the entire source of the kernel to install. In most
cases, the patch overlays a section of existing source code, and you only need to recompile the kernel to install the patch.
Patches are released quite frequently.

Most kernel source programs are maintained as a gzipped tar file. Unpack the files into a subdirectory of /usr/src, which
is where most of the source code is kept for Linux. Some versions of Linux keep other directories for the kernel source,
so you may want to check any documentation supplied with the system or look for a README file in the /usr/src
directory for more instructions.


#=[ Using New Kernel Sources ]=--->

Often, unpacking the gzipped tar file in /usr/src creates a subdirectory called /usr/src/linux, which can overwrite your
last version of the kernel source. Before starting the unpacking process, rename or copy any existing /usr/src/linux (or
whatever name is used with the new kernel) file so you have a backup version in case of problems.

After unpacking the kernel source, you need to create two symbolic links to the /usr/include directory (if they are not
created already or set by the installation procedure). Usually, the link commands required are the following:

ln -sf /usr/src/linux/include/linux /usr/include/linux

ln -sf /usr/src/linux/include/asm /usr/include/asm

If the directory names are different with your version of Linux, substitute them for /usr/src/linux. Without these links,
the upgrade or installation of a new kernel cannot proceed.

After ungzipping and untarring the source code and establishing the links, you can begin the compilation process. You
must have a version of gcc or g++ (the GNU C and C++ compilers) or some other compatible compiler available for the
compilation. You may have to check with the source code documentation to make sure you have the correct versions of
the compilers; occasionally new kernel features are added that are not supported by older versions of gcc or g++.

Check the file /usr/src/linux/Makefile (or whatever path Makefile is in with your source distribution). This file has a line
that defines the ROOT_DEV, the device that is used as the root filesystem when Linux boots. Usually the line looks like
the following:


ROOT_DEV = CURRENT
If you have any other value, make sure it is correct for your filesystem configuration. If the Makefile has no value, set it
as shown in the preceding line.

The compilation process begins with you changing to the /usr/src/linux directory and issuing the command


make config
which invokes the make utility for the C compiler. The process may be slightly different for some versions of Linux, so
check any release or installation notes supplied with the source code.

The config program issues a series of questions and prompts you to answer to indicate any configuration issues that need
to be completed before the compilation begins. These questions may be about the type of disk drive you are using, the
CPU, any partitions, or other devices like CD-ROMs. Answer the questions as well as you can. If you are unsure,
choose the default values or the one that makes the most sense. The worst case is that you will have to redo the process
if the system doesn't run properly. (You do have an emergency boot disk ready, don't you?)

Next, you have to set all the source dependencies. This step is commonly skipped and can cause a lot of problems if it is
not performed for each software release. Issue the following command:


make dep
If the software you are installing does not have a dep file, check the release or installation notes to ensure that the
dependencies are correctly handled by the other steps.

Now you can finally compile the new kernel. The command to start the process is


make Image
which compiles the source code and leaves the new kernel image file in the current directory (usually /usr/src/linux).
If you want to create a compressed kernel image, you can use the following command:


make zImage
Not all releases or upgrades to the kernel support compressed image compilation.
The last step in the process is to copy the new kernel image file to the boot device or a boot floppy disk.

To place the kernel on a floppy disk, use the following command:


cp Image /dev/fd0
Use a different device driver as necessary to place the kernel elsewhere on the hard drive filesystem. Alternatively, if you
plan to use LILO to boot the operating system, you can install the new kernel by running a setup program or the utility
/usr/lilo/lilo. Don't copy the new kernel over your old boot disk's kernel. If the new kernel doesn't boot, you may have to
use the older boot disk to restart your system.

Now all that remains is to reboot the system and see whether the new kernel loads properly. If you have any problems,
boot from a floppy disk, restore the old kernel, and start the process again. Check documentation supplied with the
release source code for any information about problems you may encounter or steps that may have been added to the
process.


#=[ Adding Drivers to the Kernel ]=--->

You may want to link in new device drivers or special software to the kernel without going through the upgrade process
of the kernel itself. This procedure is often necessary when you add a new device like a multiport board or an optical
drive that should be loaded during the boot process. Alternatively, you may be adding special security software that must
be linked into the kernel.

Add-in kernel software usually has installation instructions provided, but the general process is to locate the source in
a directory that the kernel recompilation process can find (such as /usr/src). Instructing the make utility to add the new
code to the kernel may require modifications to the Makefile. Either you or an installation script can make these
modifications. Some software has its own Makefile supplied for this reason.

Then, begin the kernel recompilation with the new software added in to the load. The process is the same as shown in
the preceding section, with the kernel installed in the boot location or set by LILO. Typically, the entire process takes
about 10 minutes and is quite troublefree unless the vendor of the kernel modification did a sloppy job. Make sure that
the source code provided for the modification works with your version of the Linux kernel.


#=[ Upgrading Libraries ]=--->

Most of the software on a Linux system is set to use shared libraries (a set of subroutines used by many programs).
When the message


Incompatible library version

appears on-screen after you upgrade the system and you try to execute a utility, it means that the libraries have been
updated and need to be recompiled. Most libraries are backwards-compatible, so existing software should work
properly even after a library upgrade.
Library upgrades are less frequent than kernel upgrades and can be found in the same places. There are usually
documents that guide you to the latest version of a library, or there may be a file explaining which libraries are necessary
with new versions of the operating system kernel. Most library upgrades are gzipped tar files, and the process for
unpacking them is the same as for kernel source code except the target directories are usually /lib, /usr/lib, and
/usr/include. Usually, any files that have the extension .a or .aa go in the /usr/lib directory. Shared library image files,
which have the format libc.so.version, are installed into /lib.

You may have to change symbolic links within the filesystem to point to the latest version of the library. For example, if
you were running library version libc.so.4.4.1 and upgraded to libc.so.4.4.2, you must alter the symbolic link set in /lib
to this file. The command would be


ln -sf /lib/libc.so.4.4.2 /lib/libc.so.4
where the first name in the link command is the name of the new library file in /lib. Your library name may be different,
so check the directory and release or installation notes first.
You will also have to change the symbolic link for the file libm.so.version in the same manner. Do not delete the symbolic
links; if you do, all programs that depend on the shared library (including ls) will be unable to function.


#=[ Debugging and Profiling Options ]=--->

The gcc compiler supports several debugging and profiling options. Of these options, the two that you are most likely to
use are the -g option and the -pg option.
The -g option tells GCC to produce debugging information that the GNU debugger (gdb) can use to help you to debug
your program. The gcc program provides a feature that many other C compilers do not have. With gcc, you can use the
-g option in conjunction with the -O option (which generates optimized code). This feature can be very useful if you are
trying to debug code that is as close as possible to what will exist in the final product. When you are using these two
options together, be aware that gcc will probably change some of the code that you have written when gcc optimizes the
code.

The -pg option tells gcc to add extra code to your program that, when executed, generates profile information that the
gprof program uses to display timing information about your program.
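For example, assuming a source file named myprog.c (a name made up for this illustration), a
compile, run, and profile cycle might look like the following:


gcc -g -pg -O -o myprog myprog.c
./myprog
gprof myprog gmon.out

The -pg run writes its timing data to a file named gmon.out in the current directory, which
gprof then reads alongside the executable.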


#=[ Debugging gcc Programs with gdb ]=--->

Linux includes the GNU debugging program called gdb. You can use the gdb debugger to debug C and C++ programs. It
enables you to see the internal structure of the memory that a program is using while it is executing. This debugging
program enables you to perform the following functions:
Monitor the value of variables that your program contains
Set breakpoints that stop the program at a specific line of code
Step through the code line by line
When you start gdb, you can specify a number of options on the command line. You will probably run gdb most often
with this command:


gdb filename
When you invoke gdb in this way, you are specifying the executable file that you want to debug. You can also tell gdb to
inspect a core file that was created by the executable file being examined or attach gdb to a currently running process. To
get a listing and brief description of each of these other options, refer to the gdb man page or type gdb -h at the
command line.
To get gdb to work properly, you must compile your programs so that the compiler generates debugging information.
The debugging information that is generated contains the types for each of the variables in your program as well as the
mapping between the addresses in the executable program and the line numbers in the source code. The gdb debugging
program uses this information to relate the executable code to the source code. To compile a program with the
debugging information turned on, use the -g compiler option.
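As a short illustration (the file and variable names here are invented), you might compile
with debugging information and then step through the program like this:


gcc -g -o myprog myprog.c
gdb myprog
(gdb) break main
(gdb) run
(gdb) next
(gdb) print myvar
(gdb) quit

break sets a breakpoint at the named function, run starts the program, next executes one
line of source at a time, and print displays the current value of a variable.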


#=[ Summary ]=--->

Recompiling the kernel source and adding new features to the kernel proceeds smoothly as long as you know what you
are doing. Don't let the process scare you, but always keep boot disks on hand. Follow instructions wherever available
as most new software has special requirements for linking into the kernel or replacing existing systems.

¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
#=> 0x05 Basics about a BUFFER OVERFLOW
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬

#=[ Background ]=--->

How does a buffer overflow come about, and how can it actually be used for an attack? As
the name already says, with this kind of programming error a buffer is made to overflow.
Usually that buffer is a storage area in which input is stored for further processing. The
input comes either from command-line arguments passed when a program is called, from dialog
input, or from network protocols. In the error case, an input value is longer than the
program expects. It thereby overwrites the storage area of a variable and the values that
follow it in memory. A precondition, naturally, is that the programmer forgot to check the
maximum permissible length of the input values. Classic examples of such errors appeared in
Web servers that took as input a very long URL path sent by the requesting browser.

This alone, however, provides no way to break in, since on nearly every processor
architecture the executable program is stored separately from the variables. To break into
the machine, the attacker does not want to simply overwrite the value of a variable, but to
deliver his own program code, which is then also executed.

The risky overflow problems arise with local variables. Local variables are valid only
within a subroutine or function, and are therefore stored together with the function's
return address on the so-called stack. The stack is a storage area which grows like a pile:
values can be pushed onto the top of the pile and popped off it again.
In this way the return addresses of many hierarchically nested subroutines can be
managed very elegantly: the processor pushes the current address onto the stack as the
return address when a subroutine is called. If the subroutine in turn calls a further
subroutine, the current address is again pushed onto the stack; the stack grows. After a
subroutine terminates, the address at which execution continues in the calling program is
on top of the stack. "Above" on the stack actually means a lower storage address, since the
stack grows from higher storage addresses towards lower ones.

            __________
0xffffffff |  stack   | local variables
           |    |     |
           |    v     |
           |          |
           |   heap   | global variables
           |__________|
           |   code   |
           | segment  | code
0x0        |__________|


Local variables, which are valid only within a subroutine, can likewise be stored elegantly
on the stack. When the subroutine is left, the processor automatically releases their
storage again. If the length of a character string that is to be stored into a local
variable is larger than the space reserved for it, then storing this value unchecked
overwrites not only other local variables, but possibly also the return address of the
current subroutine.


Certain functions of the programming language C favour this; they are used in innumerable
programs. This concerns in particular the string-processing function strcpy(). It has two
parameters: a source and a destination address, at each of which the function expects
storage for a string. It copies the source string character by character into the
destination string, and stops only when it meets the character \0 (ASCII value 0) that
terminates the source string.

The null byte as string terminator is quite usual in C. If a programmer assumes that a
certain input value is at most 80 characters long, and copies this value with strcpy() into
a buffer for which he reserved 100 bytes, the function strcpy() does not check this
maximum. If the source string only contains a null after 200 characters, then the function
simply overwrites further storage areas. If the destination string is a local variable,
this procedure can also overwrite the return address of the current function on the stack.
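A minimal sketch of exactly this pattern (the names are invented for illustration):
--------------------------------------------------------------------------------------------
#include <string.h>

/* the classic mistake: 100 bytes reserved, but no length check on the input */
void handle_input(const char *input) {
    char buf[100];        /* local variable, lives on the stack */
    strcpy(buf, input);   /* anything past 99 chars + \0 spills over neighbouring
                             variables and, eventually, the saved return address */
}
--------------------------------------------------------------------------------------------
A length-checked call such as strncpy(buf, input, sizeof(buf) - 1), followed by explicitly
null-terminating buf, would avoid the overflow (at the price of silent truncation).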

#=[ Fatal return ]=--->

When the processor wants to continue with the calling program after terminating the current
function, it reads part of the copied text from the memory location of the return address
and interprets it as an address. Since with unintentional overruns there is usually no
meaningful program at this address, this condition frequently leads to a crash with the
message "segmentation fault" (a memory access protection error). If, however, functional
machine code stands at the new address, it is executed.

Exactly this is the goal of the attacker. He looks for places in programs or network
services which process input values with functions such as strcpy(), and then tries to
deliver machine code in the input value and, at the same time, to overwrite the return
address in such a manner that it points to the code just delivered.

¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
#=> 0x06 About Rootkits
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬

#=[ Intro ]=--->

Hi all,

I joined this ezine about a week ago. I've known rza for quite a while now, and he asked whether I knew something interesting to write about in connection with the
ezine that he was doing together with lkm. I don't know whether it interests a lot of people, but I guess it should... My topics will be security, rootkits,
DDoS, and I guess from time to time we'll touch on stuff like linux virii and worms.

It's pretty hard to write something useful on security, especially when there are numerous sites claiming to be the number one in internet security. I'd say, if you
want your box to be untouched, keep it off the net :o)


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


/* Let's talk about rootkits.
What are they, what do they do, how to get rid of 'em, how to protect yourself...
*/

#=[ Wot is a rootkit? ]=--->

To keep it simple : a rootkit is a set of programs which patch and Trojan existing execution paths within the system.
This process violates the integrity of the trusted
computing base. In other words, a rootkit is something which inserts backdoors into existing programs,
and patches or breaks the existing security system.

#=[ Wot do they do? ]=--->

- A rootkit may disable auditing when a certain user is logged on.
- A rootkit could allow anyone to log in if a certain backdoor password is used.
- A rootkit could patch the kernel itself, allowing anyone to run privileged code if they use a special filename.

#=[ Wot are they used for? ]=--->

- Taking part in a distributed denial of service attack
- Taking part in a distributed application environment
- Misappropriation of data for monetary gain
- Misappropriation of resources for personal gain

No need to explain the first goal, I guess. Everybody slightly interested in security knows what a DDoS attack is. More on this later.
The second use of rootkits touches the distributed application environment. In human language this means that many machines do some huge number crunching work (the same
target) to crack a program, to find security codes, etc etc...
The third goal would be cash :P The attacker finds him/herself a way onto a server which contains sensitive data and abuses that data. The last goal would for example be
the defacing of a website. (see also http://attrition.org/mirror/attrition)

#=[ Examples of a few very known rootkits ]=--->


- knark
- t0rn
- cancerserver
- adore
- lrk

#=[ How to detect the presence of a rootkit ]=--->

There are many indications that might reveal the presence of an installed rootkit on your box. I made a list :

a) User Indications

- Failed log-in attempts
- Log-ins to accounts that have not been used for an extended period of time
- Log-ins during hours other than normal working hours
- The presence of new user accounts that were not created by the system administrator
- su entries or logins from strange places, as well as repeated failed attempts

b) System Indications

- Modifications to system software and configuration files
- Gaps in system accounting that indicate that no activity has occurred for a long period of time
- Unusually slow system performance
- System crashes or reboots
- Short or incomplete logs
- Logs containing strange timestamps
- Logs with incorrect permissions or ownership
- Missing logs
- Abnormal system performance
- Unfamiliar processes
- Unusual graphic displays or text messages.


c) File System Indications

- The presence of new, unfamiliar files or programs
- Changes in file permissions
- Unexplained changes in file size. Be sure to analyze all your system files, including those in your $HOME/ directory, such as $HOME/.bashrc for modified $PATH entries, as well as changes in system configuration files in /etc
- Rogue suid and sgid files on the system that do not correspond to your master list of suid and sgid files
- Unfamiliar file names in directories
- Missing files

d) Network Indications

- Repeated probes of the available services on your machines
- Connections from unusual locations
- Repeated login attempts from remote hosts
- Arbitrary log data in log files, indicating an attempt at a denial of service or at crashing a service

#=[ Tools to detect a rootkit ]=--->

You can of course try the standard linux/unix tools (described a bit further on), but normally every clever rootkit replaces those binaries with patched ones. In other
words, ones that disguise the rootkit's presence on the system (there is a short usage example right after the list below).

- ps (show processes running on the system)
- top (display the top CPU processes)
- vmstat (report virtual memory status)
- netstat (show network statistics, sockets, ports, ...)
- du (Disk Usage)
- md5sums
- rpm (RPM can check the following :
S - file size changed
M - file mode changed
5 - MD5 checksum failed
U - file owner changed
G - group changed)
- SUID/SGID checkers
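
For example (the binaries named here are just illustrations), you could verify every installed package and spot-check a couple of suspect binaries like this:

rpm -Va
md5sum /bin/ps /bin/netstat

rpm -Va verifies all installed packages against the RPM database, printing the S/M/5/U/G flags listed above for anything that changed. The md5sum output is only useful
if you can compare it against a list of checksums taken right after installation and stored somewhere the intruder can't reach.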


Tools that are also widely used to detect (sometimes even remove) rootkits, include the following :
- chkrootkit http://www.chkrootkit.org
- tripwire http://www.tripwire.org
- rkscan (kernel-based rootkit scanner) found on http://www.hsc.fr/ressources/outils/rkscan/index.html.en
- checkps http://sourceforge.net/projects/checkps/

#=[ Defense : intrusion detection ]=--->

Intrusion detection systems/packages are used to detect (not prevent) an attack from the outside. We'll talk about prevention
later. It's quite important to install packages like this right after you installed your brand new distro :)

There are a few different types of intrusion detection such as :

- Network Based Intrusion Detection - These mechanisms typically consist of a black box that sits on the network in promiscuous
mode, listening for patterns indicative of an intrusion.

- Host Based Intrusion Detection - These mechanisms typically include auditing for specific events that occur on a specific host.
These are not as common, due to the overhead they incur by having to monitor each system event.

- Log File Monitoring - These mechanisms are typically programs that parse log files after an event has already occurred, such as
failed login attempts, etc.

- File Integrity Checking - These mechanisms typically check for trojan horses, or files that have otherwise been modified,
indicating an intruder has already been there. The Red Hat Package Manager, RPM, has this capability, as does the well-known
Tripwire package.

#=[ Defense : Prevention ]=--->

Like I said, the ultimate protection would be to stay offline, but that's not really practical :)
Other tips are of course to keep your
packages up-to-date, especially your daemons. Those are usually the ones that get exploited. Good examples are bind, apache,
sshd, ....




Links:
======
http://www.cotse.com/ # Security News, tools, ...
http://packetstormsecurity.nl/ # Security News, tools, ...
http://project.honeynet.org/ # How the not-so-elite get monitored
http://www.incidents.org/detect/rating.html # Rating of a "hacker"
http://www.sans.org/y2k/t0rn.htm # Analysis of the t0rn kit
http://www.ists.dartmouth.edu/IRIA/knowledge_base/tools/ramenfind.html # Tool to detect the ramen worm
http://www.nwo.net/security/tools.html
http://www.securityportal.com/topnews/linuxnews.html # Good security portal
http://www.snort.org/








¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
-------------------------------------------------------------------------------------::
GreetZ :- blackhat@IRCnet !!, #darknet@EFnet , aurecom , ack- , moogz , bajkero
henray , izik, I-Busy , omen1x , quiksand , hegemoOn , slashvar , sts
hypno , pen , sXcrap , ^sysq , ofer , shev >! PRIV8SECURITY !<, SMILE CREW!!!
Special Thankz goes to [Psycho] & tonberi ;)



::################################################################[www.blackhat.cx###::

Brought to you by
:::::::. ::: :::. .,-::::: ::: . :: .: :::. ::::::::::::
;;;'';;' ;;; ;;`;; ,;;;'````' ;;; .;;,. ,;; ;;, ;;`;; ;;;;;;;;''''
[[[__[[\. [[[ ,[[ '[[, [[[ [[[[[/' ,[[[,,,[[[ ,[[ '[[, [[
$$""""Y$$ $$' c$$$cc$$$c $$$ _$$$$, "$$$"""$$$ c$$$cc$$$c $$
_88o,,od8Po88oo,.__ 888 888,`88bo,__,o, "888"88o, 888 "88o 888 888, 88,
""YUMMMP" """"YUMMM YMM ""` "YUMMMMMP" MMM "MMP" MMM YMM YMM ""` MMM
#BlackHat@IRCnet <-> www.BlackHat.cx
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬

