
Intrusion Detection Systems: Part II

Number 0x02: 15/02/2007


[ --- The Bug! Magazine 

_____ _ ___ _
/__ \ |__ ___ / __\_ _ __ _ / \
/ /\/ '_ \ / _ \ /__\// | | |/ _` |/ /
/ / | | | | __/ / \/ \ |_| | (_| /\_/
\/ |_| |_|\___| \_____/\__,_|\__, \/
|___/

[ M . A . G . A . Z . I . N . E ]


[ Number 0x02 <---> Edition 0x02 <---> Article 0x08 ]



.> February 14, 2007,
.> The Bug! Magazine < staff [at] thebugmagazine [dot] org >


+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 Intrusion Detection Systems: Part II
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


.> February 10, 2007,
.> y0Rk < gfm [at] rfdslabs [dot] com [dot] br >

"It is of the highest importance in the art of detection to be able to recognize, out of a number of facts, which are incidental and which vital. Otherwise your energy and attention must be dissipated instead of being concentrated."

-Sherlock Holmes

Index

  1. First words
  2. Positioning: Where on my network?
    • 2.1. Before the Firewall
    • 2.2. After the Firewall
    • 2.3. Additional Locations

  3. Rules and Filters
    • 3.1. The Filter Policy
      • 3.1.1 Deny all!
      • 3.1.2 Accept all!

    • 3.2. Signatures
      • 3.2.1 The signature filter
      • 3.2.2 Make efficient filters

  4. Different types of filters
    • 4.1. Exploits
    • 4.2. Protocols

  5. Some evasion techniques and concepts
    • 5.1. Unicode
      • 5.1.1. Problems

    • 5.2. Polymorphism
    • 5.3. DDoS
    • 5.4. Fragmentation and Reassembly

  6. References

1. First Words

We have reached the second part. Now that we know the principles and basic terminology of an Intrusion Detection System, let's look a little at how it works, with some implementation tips.

In this part, the logic of the steps is very important. Pay close attention, because the subjects follow a very linear progression.

The logic will be the following:

[Location]--[Rules/Filters/Signatures]--[Evasion]

Where,

  • Location: You will be presented with options of network locations, depending on your needs.
  • Filters: Some important filter tips to consider.
  • Evasion: After filters, what can go wrong?

The IDS adopted in some examples is Snort, which, besides having great documentation, is available to everyone.

2. Positioning: Where on my network?

As you all know, a NIDS depends on a sensor to detect anything at all and to do its job. Moreover, a well-placed sensor will work even better.

However, before discussing the placement and positioning of the sensor on the network, we need to analyze it: questions about internal systems and the critical network infrastructure, such as databases for example, must be considered.

It is important to realize that whatever choice we make will bring advantages and disadvantages, which must be weighed so as to maximize the sensor's effectiveness.


Map:

2.1

                 [Sensor]
                    |
[Internet]----------+---------[Fw]-------[Network]

2.2

[Internet]-----[Fw]------+-----------[Network]
                         |
                      [Sensor]


2.1 Before the Firewall

Usually sensors are placed in the DMZ -- outside the firewall. In this position the sensor picks up all attacks coming from the Internet, including those the firewall would eventually block; therefore, the number of false positives is much higher. Also, in this position, if an attacker finds the sensor he can attack it, compromising its audit trail.

2.2 After the Firewall

With the sensor after the firewall, it will only detect attacks that pass through the firewall rules (from the outside in), decreasing false positives.

It is also possible to identify attacks coming from within the network itself, since the sensor sits on it. Furthermore, several evasion techniques can be neutralized (anomalous packets, for example), since the packets analyzed are, as said before, only those the firewall let through. Other techniques fail because the three-way handshake will never be completed.

2.3 Additional Locations

Generally speaking, the most common placement is before the firewall. But it is certainly not the only one that can benefit your network. So, as stated before, you need to analyze internal issues and the network infrastructure.

For example:

  • Consider placing a sensor in a partner network with which you have direct connections.
  • Consider sharing issues: shared network resources (file systems, devices, application servers, etc.).

Remember that the network can have multiple entry points. The right thing is to analyze them all.

3. Rules and Filters

The effectiveness of an IDS depends not only on its location in the network but also, very much, on the quality of its filters. The design, configuration, and management of these filters are crucial when setting up an IDS.

3.1 Filter Policy

Define a filter policy that fits your network design and needs. This is very important. The policies are set up, and can be edited later, according to the demands and needs of the network.

Note: Not all IDSes support this filter flexibility. ISS, for example, does not allow users to change its filters.

A good way to start the policy definition work is to follow one of the policies below:

3.1.1 Deny everything!

The reading is simple: deny everything that is not specifically permitted. That is, everything is denied until a need appears. This can lead to problems -- if your IDS resets connections, for example, applications may stop working (VPNs, spam filters, etc.).

3.1.2 Accept it all!

It's exactly the opposite of the above policy: Accept everything that is not specifically denied, and then deny the services considered unacceptable. Some networks require freedom and flexibility (e.g. universities, laboratories), so don't always look down on this policy.

3.2 Signatures
3.2.1 The Signature Filter

Let's build some reasoning:
An attack usually has some particularity that identifies it. A signature filter looks for that feature in the data stream provided by a sensor, and the signature can be described as a boolean expression called a rule.
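
The reasoning above can be sketched as a tiny rule matcher. The signature, port, and rule name here are purely illustrative assumptions, not taken from any real rule set:

```python
# Minimal sketch of a signature filter: each rule is a boolean
# conjunction over packet attributes. Signature and names are
# illustrative, not taken from a real rule set.

SIGNATURES = [
    {"name": "phf access attempt", "dst_port": 80, "content": b"/cgi-bin/phf"},
]

def match(dst_port, payload):
    """Return the name of every rule whose boolean expression holds."""
    hits = []
    for sig in SIGNATURES:
        # The rule: (destination port matches) AND (content present).
        if dst_port == sig["dst_port"] and sig["content"] in payload:
            hits.append(sig["name"])
    return hits

print(match(80, b"GET /cgi-bin/phf?Qalias=x HTTP/1.0"))
```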

Sounds easy, right?

But it is not all plain sailing, and signature filters have limitations. They can only recognize an attack when there is a signature for this attack. This is why managing these filters is so important.

This limitation essentially applies when a new threat comes out over the net (0days, worms, etc). If you do not have access to the source code of this threat, or at least understand how it works, it is very difficult to design a filter that prevents attacks.

Most people however wait for someone to develop a signature for this new threat, which sometimes takes days, and in some cases ends up being too late.

3.2.2 Make Efficient Filters

What can be considered an efficient filter? The idea is that it is a filter that understands the data it analyzes and generates relevant alerts with a minimum of false positives.

The first step on this path is to clearly define and delimit the filters' analysis area. It is also important to design filters that allow the integration of an anomaly detection module into the protocols.

Remember: Make filters that reduce the amount of false positives and generate alerts with a high level of information.

4. Different Types of Filters

  • Exploits
  • Protocol anomaly

4.1 Exploits

Picking up the reasoning built in section 3.2.1: an easy, though not fully effective, way to write a filter for exploits is to use, for example, a distinctive string containing the machine instructions that are passed directly to the target computer once the overflow succeeds.

Let's use an illustrative example: Windows Vista RPC Buffer Overflow: (Fake)

EB 12 5E 31 C9 81 E9 88 FF FF FF 81 36 80 BF 32 81 94 FC EE FF FF FF 
E2 F2 EB F5 E8 E2 FF FF FF 03 53 06 1F 74 57 75 91 80 BF BB 95 7F 89
5A 1A CE B1 DE 7C E1 BE 31

If a filter were designed for this specific exploit, this string would be our filtering target. The disadvantage of such a filter is that it is limited to one particular exploit, or to a few others that use the same shellcode. The danger is that if a different piece of code exploits the same vulnerability, the filter becomes useless. But we will get to that later.
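
A sketch of such a filter, keyed to the first bytes of the (fake) string above; the matching is nothing more than a substring search:

```python
# Sketch: filter keyed to the distinctive byte string of one exploit.
# The signature below is the beginning of the fake shellcode shown
# above; matching is a plain substring search over the payload.

SHELLCODE_SIG = bytes.fromhex("EB125E31C981E988FFFFFF813680BF32")

def carries_exploit(payload):
    # Brittle by design: a re-encoded shellcode for the same hole
    # will not contain this exact byte sequence and will slip past.
    return SHELLCODE_SIG in payload

print(carries_exploit(b"\x00\x01" + SHELLCODE_SIG + b"\x90\x90"))
```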

4.2 Protocol Filters

To develop a protocol filter, you have to divide the procedure into a few steps:

  1. Read all the documentation (RFC included!) about the protocol.
  2. Identify all the points that should be checked (headers, fields, etc.)
  3. Get a list of all known attacks using this protocol.

Most vendors reinforce the idea that the more RFC-compliant the protocol, the safer it is from attacks. However, this idea does not apply to all protocols, nor to all cases.

Legitimate applications producing traffic can yield a large number of false positives. Recognizing a web-based attack using an anomaly scheme is quite difficult: an attacker can supply a malicious value in a request that compromises the webserver without violating the HTTP protocol specification.

For example, the "phf" attack does not violate the Uniform Resource Locator (URL) specification, RFC 1738. For an I{P|D}S to detect it via anomaly, it would need to know what exactly phf was being fed in the Qalias variable.

http://127.0.0.1/cgi-bin/phf?Qalias=x%0a/bin/cat%20/etc/passwd

Note: The vulnerability allows arbitrary shell commands, not just /bin/cat /etc/passwd.

It would take understanding how the web-based application handles its input; hence it is extremely important to use specific filters (in this case web-based ones, built from the survey in steps 2 and 3) to reduce the chance of an attack going unnoticed simply because it does not violate the RFC.

Taking it a step further, let's look at the Simple Mail Transfer Protocol (SMTP):

Every SMTP command described in RFC 821 (MAIL, RCPT, HELO, VRFY, EXPN, and HELP) has, at one time or another, been vulnerable to buffer overflows in some implementation.

In order to be as SMTP-compliant as possible, an I{P|D}S could look for inappropriate values in the argument of commands, thus avoiding some old buffer overflow attacks. That is, it would look for strange arguments in each command used in a session.

In this case the anomaly filter detects circumstances that are necessary for the attack to succeed and that never occur in normal traffic (as in the SMTP example).
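
The check described above can be sketched as follows. The 256-byte threshold is an illustrative assumption, not a value taken from the RFC:

```python
# Sketch of an SMTP protocol-anomaly check: flag RFC 821 commands
# whose argument is far larger than anything seen in normal traffic.
# MAX_ARG_LEN is an assumed threshold, not a value from the RFC.

SMTP_COMMANDS = {"MAIL", "RCPT", "HELO", "VRFY", "EXPN", "HELP"}
MAX_ARG_LEN = 256

def anomalous(line):
    # Split the command line into verb and argument, then apply the
    # anomaly rule: known command AND oversized argument.
    verb, _, arg = line.strip().partition(" ")
    return verb.upper() in SMTP_COMMANDS and len(arg) > MAX_ARG_LEN

print(anomalous("HELO example.org"))    # normal argument
print(anomalous("VRFY " + "A" * 1000))  # oversized argument, flagged
```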

Under these ideal circumstances, a protocol anomaly filter is an effective generalization of a vulnerability filter. However, its effectiveness is diminished if the traffic is legitimate (in the web-based example).

5. Some evasion techniques and concepts

In the real world, evasion attacks are mostly not that easy to exploit. Usually an attacker doesn't have the "luxury" of injecting arbitrary code into streams, for example.

Here, I will show some concepts of evasion that will serve as a starting point for deeper studies.

5.1 Unicode

Unicode is the character encoding standard developed by the Unicode Consortium. Character encoding has always been problematic due to the existence of different standards (ASCII and its many national variants, etc.) and the incompatibility between them, caused by differing interpretations of, for example, special and accented characters.

The Unicode character set has various forms of representation, such as UTF-8, UTF-16 and UTF-32. Unicode is required by many standards including Java, LDAP and XML.

5.1.1 Problems

One example is the character "\". Under the original UTF-8 definition, it could be represented as hex 5C, C1 9C, or E0 81 9C. Many older applications that support UTF-8 will accept all three values and translate each of them into a backslash.

A clear problem with multiple representations is the Microsoft IIS 4/5 Extended Unicode Directory Traversal Vulnerability: IIS checks for directory traversal before decoding the UTF-8.

The attack was simple (older readers may remember): the goal was to reach http://victim/../../winnt/system32/cmd.exe, and to do so the attacker encodes the "/" of each "../" in overlong UTF-8, sending "..%C1%9C.." instead.

This is also called String Obfuscation/Manipulation.
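
To see why the overlong form C1 9C turns into a backslash, here is a sketch of what a sloppy decoder does, next to what a strict one does:

```python
# A two-byte UTF-8 sequence 110xxxxx 10yyyyyy encodes the code point
# xxxxxyyyyyy. C1 9C therefore yields 0x5C -- the backslash -- even
# though UTF-8 forbids this form (the canonical encoding of 0x5C is
# the single byte 5C).

def naive_utf8_pair(b1, b2):
    # Masks off the prefix bits and concatenates the payload bits,
    # without checking for overlong forms.
    return ((b1 & 0x1F) << 6) | (b2 & 0x3F)

print(chr(naive_utf8_pair(0xC1, 0x9C)))  # "\"

# A strict decoder, like Python's, rejects the sequence outright:
try:
    b"\xc1\x9c".decode("utf-8")
except UnicodeDecodeError:
    print("rejected as invalid UTF-8")
```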

5.2 Polymorphic Codes - ShellCode

This is a good tactic, and IDSes do not defend against it easily.

Basically, a polymorphic shellcode is one whose payload carries code able to modify itself when executed on the target machine.

Its evasion approach was modeled on what viruses do to anti-virus engines, which makes it especially dangerous against signature filters, since there is no single detectable signature. This goes back to the story in section 3.2.1.

One way to deny this attack its success is to look for a sizable run of no-op instructions, or to make the filter as comprehensive as possible.

Note: Snort users can check out the spp_fnord preprocessor, developed by Dragos Ruiu for exactly these occasions.
To be honest, I have never tested this preprocessor, so I can't say how efficient it is.
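
The no-op heuristic mentioned above can be sketched like this. The threshold is an assumption, and real preprocessors such as fnord also match "NOP-equivalent" instructions, which this sketch does not:

```python
# Sketch of the no-op heuristic: flag payloads containing a long run
# of x86 NOP bytes (0x90), the classic shellcode landing pad.

NOP = 0x90
MIN_RUN = 32  # assumed threshold

def has_nop_sled(payload):
    # Count the longest current run of NOP bytes; any run reaching
    # MIN_RUN is treated as a probable sled.
    run = 0
    for byte in payload:
        run = run + 1 if byte == NOP else 0
        if run >= MIN_RUN:
            return True
    return False

print(has_nop_sled(b"\x90" * 64 + b"shellcode"))  # True
print(has_nop_sled(b"\x41\x90" * 64))             # no long run: False
```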

For more information, great material in Portuguese is the monograph by Rodrigo Rubira Branco a.k.a. BSDaemon <http://www.kernelhacking.com/rodrigo>.

5.3 DDoS

A DDoS attack is essentially an attempt to make a system's resources unavailable, and it is arguably the least elegant form of evasion.

There are several tools (or do it yourself, in a test environment) that do the following:

  • Consume processing power, memory, or bandwidth;
  • Fill the disk;
  • Generate excessive alerts;
  • Crash the device.

Questions to think about:

On the CPU consumption issue, a good example is when Snort writes its log output to a database: every event logged triggers an INSERT into the database, consuming a lot of CPU resources.

To generate excessive alerts, one can produce many false-positive or simply irrelevant events and, in the middle of them, a genuinely dangerous one, which ends up as a false negative in that context. This makes the operator's analysis difficult, since he will see thousands of legitimate events with only one illegitimate event among them.

In the case of disk overflow: if an excessive number of logs is generated and the partition holding them (assuming you keep the logs on the same machine as the IDS) reaches 100% usage, nothing more can be logged, including real attacks.

5.4 Fragmentation and Reassembly

I consider this to be the most "low-level" of all the techniques presented.

Let's understand:

When a packet exceeds the maximum size limit (MTU), a mechanism used by IP (and other protocols) is fragmentation and reassembly. The router negotiates with the network peripherals and with the next router the maximum size that can be used on that subnet. Larger datagrams must then be fragmented.

The fragmentation operation consists of breaking the datagram's data into transportable units, copying the header into each of them, and sending them off.

But fragments can arrive at their destination out of order, even when transmitted in order, so each packet carries a number indicating its place within the stream. This is called the sequence number.


Back to the subject:

To find out what is happening on a given TCP connection, the IDS performs an exact reconstruction of the traffic generated over that connection.

Order of arrival ----------------->
[1] [2] [3] [4] [5]
[A] [C] [H] [!] [K]

<----------------- Output order
[1] [2] [3] [4] [5]
[H] [A] [C] [K] [!]
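
The reassembly in the diagram can be sketched in a few lines: fragments arrive out of order, and sorting by sequence number restores the original stream.

```python
# Sketch of stream reassembly: each fragment carries a sequence
# number giving its place in the original stream.

def reassemble(fragments):
    """fragments: list of (sequence_number, data) in arrival order."""
    ordered = sorted(fragments, key=lambda frag: frag[0])
    return b"".join(data for _, data in ordered)

# Arrival order from the diagram: A C H ! K.
arrived = [(2, b"A"), (3, b"C"), (1, b"H"), (5, b"!"), (4, b"K")]
print(reassemble(arrived))  # b'HACK!'
```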

One of the things that makes this reconstruction difficult is tracking the exact sequence numbers of the packets. If an IDS misses too many packets, it may miss some sequence numbers as well, leaving the connection out of sync and making packet-loss recovery and resynchronization very hard.

So, until the IDS resynchronizes the connection, an attack may already have happened. In other words, the attack lives exactly in the way the fragments are reordered and reassembled at the end of fragmentation.

Another point is that if some fragments are lost, the datagram cannot be reassembled. Whoever receives the fragments starts a reassembly timer when the first fragment arrives. If the timer finishes before all fragments arrive, the receiver discards the remaining fragments without processing the datagram. Thus the probability of datagram loss grows when fragmentation occurs, because the loss of a single fragment results in the loss of the entire datagram.

6. References

  1. Guidelines for a Long Term Competitive Intrusion Detection System - Erwan Lemonnier
  2. The Science of Vulnerability Filters: A Virtual Software Patch
  3. Network Intrusion Detection: An Analyst's Handbook - Stephen Northcutt
  4. Ataques Polimorficos - Rodrigo Rubira Branco
  5. Wikipedia, Google
