A Methodology for Evaluating

Wireless Network Security Protocols

Presented on: December 10, 2004

David Rager and Kandaraj Piamrat

CS386M: Communication Networks - Fall 2004

Table of Contents

1. Introduction

2. Explanation of Terms

3. Methodology

  1. Authentication capability
  2. Encryption strength
  3. Integrity guarantees
  4. Prevention of attacks
  5. Identity protection
  6. Ease and cost of implementation
  7. Power consumption
  8. Novel idea

4. Analysis of Protocols

  1. WEP
  2. WPA
  3. RSN
  4. VPN

5. Conclusion

6. References

7. Appendix

  1. Comparison of categorical performance
  2. Main contributors to each protocol’s success
  3. Derivation of points in concrete form

Introduction

Wireless networks are deployed everywhere in today’s Internet, which forces everyone to think about their security. Unfortunately, wireless networks have many properties that attackers can exploit to mount an attack. These properties include, for example, dynamicity (wireless networks are mobile, so their topology changes more frequently than that of a wired network), power constraints (mobile nodes are limited in power consumption by their batteries), and agent-based operation (wireless networks usually use agents such as caches and proxies to enhance their performance).

Wireless network security has two broad approaches. The first can be called the “first line of defense” [7], which includes prevention mechanisms such as authentication, authorization, and encryption. The second line of defense is the intrusion detection and response approach, used to detect an attack or to respond after an attack occurs. In this paper, we consider the first line of defense.

While significant progress has already been made on fixing the problems with the current line of defense, there is no clear, metric-based methodology for evaluating the efficacy of new protocols. We will categorize the different security requirements of a protocol, such as authentication capability, encryption strength, integrity guarantees, identity protection, and the ease and cost of implementation. After enumerating how we can measure a protocol by these properties, we will analyze WEP, WPA, and the complete 802.11i in terms of these measurements.

Explanation of Terms [17]

RC4 is a symmetric stream cipher with an arbitrary key size. It is used in many applications such as WEP, TLS, and TKIP. It is not patented, but it is a trade secret of RSA Security. An exportable variant of RC4 that uses a 40-bit key also existed; it is vulnerable to brute-force attack.
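To make the stream-cipher terminology concrete, the following minimal Python sketch implements the textbook RC4 key-scheduling and keystream-generation steps. It is purely illustrative of the cipher described above and should not be used in place of a vetted cryptographic library.

    def rc4_keystream(key: bytes, n: int) -> bytes:
        """Generate n RC4 keystream bytes (textbook illustration only)."""
        # Key-scheduling algorithm (KSA): permute S under the key
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA): emit keystream bytes
        out = bytearray()
        i = j = 0
        for _ in range(n):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    # A stream cipher encrypts by XORing the keystream with the plaintext;
    # the 5-byte key below corresponds to the weak 40-bit exportable variant.
    plaintext = b"hello"
    keystream = rc4_keystream(b"\x01\x02\x03\x04\x05", len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))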

EAP (Extensible Authentication Protocol) [RFC 2284] is a general protocol for PPP authentication that supports multiple authentication mechanisms. It provides an infrastructure that enables clients to authenticate via a central authentication server. EAP does not select a specific authentication mechanism during the link control phase but rather postpones this until the authentication phase; this enables the authenticator to request more information before determining the specific authentication mechanism to use.

802.1X is an IEEE standard for EAP encapsulation in wired and wireless networks. It defines three roles: the supplicant (the user or client requesting authentication), the authentication server (the server providing authentication), and the authenticator (the device to which the supplicant requests access and which in turn requests access from the authentication server, usually the wireless access point).

TKIP (Temporal Key Integrity Protocol) uses an RC4 stream cipher with 128-bit keys for encryption and 64-bit keys for authentication. It has a per-packet key mixing function to de-correlate the public initialization vectors (IV) from weak keys and also has a rekeying mechanism to provide fresh encryption and integrity keys. As a result, it is resistant to cryptographic attacks based on key reuse.

AES (Advanced Encryption Standard) is a symmetric cipher which is faster than asymmetric ciphers, but its requirements for key exchange make it difficult to use. It also requires more hardware on the network card than exists on current-day devices.

ICV (Integrity Check Value) is a checksum intended to detect modification of transmitted data.

MIC (Message Integrity Check) is part of the 802.11i standard. It is an additional 8-byte field placed between the data portion of an 802.11 frame and the 4-byte ICV (Integrity Check Value). The MIC is very similar to the older ICV, but instead of protecting only the packet payload, it also protects the header. The algorithm that implements the MIC is known as Michael, and it also implements a frame counter, which discourages replay attacks.

CCMP (Counter mode with Cipher block Chaining Message authentication code Protocol) is the integrity mechanism in the 802.11i standard. It is based on the CCM mode of the AES encryption algorithm. It uses 128-bit keys, with a 48-bit initialization vector for replay avoidance. It has two components. The first is Counter Mode (CM), which provides data privacy, and the second is Cipher Block Chaining Message Authentication Code (CBC-MAC), which provides data integrity and authentication. CCMP is mandatory for anyone implementing RSN (Robust Secure Network). CCMP has the disadvantage that it cannot be used on machines that do not have enough CPU power.
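As a hedged illustration of the counter-mode privacy plus CBC-MAC integrity idea behind CCMP, the sketch below uses the AES-CCM primitive from the third-party Python cryptography package. It does not reproduce the exact 802.11i nonce and header construction (CCMP builds its nonce from the 48-bit packet number and addresses); the random nonce and the field names here are our simplification.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    key = AESCCM.generate_key(bit_length=128)   # 128-bit key, as in CCMP
    aead = AESCCM(key)

    nonce = os.urandom(13)        # stand-in for the packet-number-based nonce
    header = b"frame-header"      # authenticated but sent in the clear
    payload = b"frame-payload"    # authenticated and encrypted

    # Counter mode encrypts the payload; CBC-MAC covers header and payload.
    ciphertext = aead.encrypt(nonce, payload, header)
    assert aead.decrypt(nonce, ciphertext, header) == payload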

RADIUS (Remote Authentication Dial In User Service) is a protocol for remote user authentication and accounting. It enables centralized management of authentication data, such as usernames and passwords. It utilizes the MD5 algorithm for secure password hashing. Communications between the client and server are authenticated and encrypted using a shared secret which is not transmitted over the network. The RADIUS server is an excellent choice for keeping track of every user’s access, because it is a centralized authentication server. The disadvantage is that since everything resides on the RADIUS server, an attacker who compromises it obtains everything.

IV (Initialization Vector) is a block of bits that is combined with the first block of data in any of several modes of a block cipher. In some cryptosystems, it is random and is sent with the cipher text.

Handshaking in data communication is a sequence of events governed by hardware or software, requiring mutual agreement of the state of the operational mode before information exchange. An n-way handshake uses n messages to establish the connection.

Per-Packet Key Mixing is a function used to derive a per-packet encryption key. It takes the base key, the transmitter MAC address, and the packet sequence number as inputs and outputs a new per-packet WEP key.
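The sketch below illustrates the inputs and the effect of per-packet key mixing: every packet gets a distinct key derived from the base key, the transmitter MAC address, and the sequence number. Real TKIP uses a specific two-phase S-box construction; the HMAC substitution here is our simplification for illustration only.

    import hashlib
    import hmac

    def per_packet_key(base_key: bytes, transmitter_mac: bytes, seq_num: int) -> bytes:
        """Mix base key, MAC address, and sequence number into a fresh
        per-packet key (simplified stand-in for the TKIP mixing function)."""
        material = transmitter_mac + seq_num.to_bytes(6, "big")
        return hmac.new(base_key, material, hashlib.sha256).digest()[:16]

    mac = b"\xaa\xbb\xcc\xdd\xee\xff"
    k1 = per_packet_key(b"\x00" * 16, mac, 1)
    k2 = per_packet_key(b"\x00" * 16, mac, 2)
    assert k1 != k2   # consecutive packets never reuse a key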

Methodology

In this paper, we considered four main approaches, cited in chronological order as follows: WEP, WPA, 802.11i / RSN, and VPN. The first three approaches are derived chronologically from each other. This means that each more recent approach tries to solve the problems found in the earlier ones. In this paper we look through all of the approaches to identify the techniques that they use for security and then evaluate these techniques separately from the approaches. At the end of the evaluation we will be able to measure the performance of each approach depending on the purpose of the network.

In order to evaluate each approach, we need to define the metrics that we are going to use. We therefore decided to use the following metrics:

Authentication Capability

When a user wants to use the network, the network devices decide how much authentication is required to allow a new user on the network. A protocol can be trivially set up to allow anyone onto the network anonymously, protecting the identity of a client. A protocol can perform authentication via challenge-response messages, requiring knowledge of a group key. A protocol may require the hardware of the user to meet certain specifications (like a MAC address). Finally, a protocol may authenticate a user based upon his/her own credentials, perhaps through a password verified via an internal server.

The authentication protocol must not be prone to man-in-the-middle attacks, and all exchanged passwords must be securely transmitted. Also, in the event that an intruder can capture an authentication server, the greater the redundancy between servers and the more synchronized their decisions, the better. Creating a Byzantine agreement protocol is computationally complex and expensive in terms of network efficiency, so points should be deducted under ease of use according to the number of messages exchanged.

It can be seen that in order to be efficient in authentication, we should consider several parameters. Table 1 explains what should be considered:

Consideration / 0(bad) / 1(fair) / 2(good)
Type of authentication / Key with challenge response / Key with challenge response and MAC address / Credentials based
Number of authentication servers / One / Three / 3 × (# faults permitted) + 1
Use of new authentication mechanisms / None / - / Use of EAP (802.1X) [17]
Known MITM attacks / One or more / - / None

Table 1: Authentication capability
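As a minimal sketch of how the 0/1/2 scores in Table 1 (and in the later tables) might be tallied for a given protocol, consider the following; the dictionary keys and the example scores are ours, chosen purely for illustration.

    # Hypothetical scores for one protocol against the Table 1 considerations.
    authentication_scores = {
        "type of authentication": 2,               # credentials based
        "number of authentication servers": 0,     # a single server
        "use of new authentication mechanisms": 2, # EAP (802.1X)
        "known MITM attacks": 2,                   # none known
    }

    def category_total(scores: dict) -> tuple:
        """Return (points earned, points possible) for one category."""
        return sum(scores.values()), 2 * len(scores)

    earned, possible = category_total(authentication_scores)
    print(f"Authentication capability: {earned}/{possible}")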

Encryption Strength

A good protocol must choose an encryption scheme that is secure under a probabilistic polynomial time model. Additionally, the protocol must apply the encryption scheme in a way that does not open a good encryption scheme to vulnerability. A good protocol should have a key management mechanism so the user does not have to worry about manually generating and installing new keys.

In order to assess whether a protocol meets these requirements, we consider the parameters below:

Consideration / 0(bad) / 1(fair) / 2(good)
Key type / Static key / - / Dynamic key
Cipher / RC4 / - / AES
Key length / 40- or 104-bit encryption / 128-bit encryption / 128-bit encryption + 64-bit authentication
Key lifetime / 24-bit IV / 48-bit IV / 48-bit IV
Time needed to crack / A few hours / A few days / Centuries
Encrypted packets needed to crack / A few million / - / A few billion
Can be recovered by cryptanalysis / Yes / - / No
Key management used / None / Static / EAP

Table 2: Encryption Strength

Integrity Guarantees

Giving a recipient a means to check a message’s integrity is a well-known step for preventing message tampering. A good integrity scheme will compute a hash involving every bit in the message. This hash should be a one-way hash, such that the message cannot be reverse-engineered from it, and changing a single bit in the message should result in a large change in the hash value.

If a hash function that does not meet these requirements is used, then the integrity value should be transmitted under encryption of a fresh or well-protected key. Therefore, it may be good to use a public/private key scheme to communicate the integrity value securely.

In order to ensure integrity, we should guarantee two things: the integrity of the message header and the integrity of the data itself. For example, it is known that the CRC checksum called the Integrity Check Value is not secure, and packets can be modified without detection. So a good protocol will not rely on this mechanism. On the other hand, CCM is a long-term solution and should be deployed when possible.

Consideration / 0(bad) / 1(fair) / 2(good)
Integrity of message header / None / Michael / CCM[4]
Integrity of the data / CRC-32 / Michael / CCM

Table 3: Integrity Guarantees
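A minimal sketch of a keyed integrity check that covers both the header and the data, in contrast to the CRC-32 based ICV criticized above, is shown below. HMAC-SHA256 is used here only as a stand-in; it is neither the Michael algorithm nor the CBC-MAC of CCM.

    import hashlib
    import hmac

    def compute_mic(integrity_key: bytes, header: bytes, payload: bytes) -> bytes:
        """Keyed integrity value over header and payload (illustrative)."""
        return hmac.new(integrity_key, header + payload, hashlib.sha256).digest()[:8]

    def verify(integrity_key: bytes, header: bytes, payload: bytes, mic: bytes) -> bool:
        return hmac.compare_digest(compute_mic(integrity_key, header, payload), mic)

    key = b"k" * 16
    mic = compute_mic(key, b"header", b"payload")
    assert verify(key, b"header", b"payload", mic)
    assert not verify(key, b"header", b"tampered", mic)   # any modification is caught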

Prevention of Attacks

When a key is discovered by attackers, it is important that the discovered key is rotated out soon. Therefore, a protocol that provides a fresh key frequently is more secure than a protocol that uses the same key until user intervention. Additionally, the next key derived should be independent of all previous keys, satisfying a requirement called “forward secrecy.”
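The sketch below illustrates the key-independence property described above: each epoch’s key is derived from a long-term secret and a fresh counter, so recovering one epoch key reveals nothing about the next. The HMAC-based derivation is our own illustration, not the key hierarchy of any particular standard, and it does not by itself protect past keys if the master secret is compromised.

    import hashlib
    import hmac

    def epoch_key(master_secret: bytes, epoch: int) -> bytes:
        """Derive an independent key for each rekeying epoch (illustrative)."""
        label = b"rekey|" + epoch.to_bytes(8, "big")
        return hmac.new(master_secret, label, hashlib.sha256).digest()[:16]

    keys = [epoch_key(b"master-secret", e) for e in range(3)]
    assert len(set(keys)) == 3   # a fresh, unrelated key every epoch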

Replay attacks

A replay attack involves two users communicating and a third user later reusing one of the messages communicated to gain some advantage he would not have otherwise. As a good example, suppose Alice and Bob are communicating, and Eve is listening. Alice needs to authenticate herself to Bob, so Alice sends Bob an encrypted version of her password. Later, Eve can pose as Alice, because when Bob asks for Alice’s password, all Eve needs to do is replay the message she captured earlier. Bob will accept that authentication and will begin communicating with Eve under the assumption that Eve is actually Alice. Thus Eve will have access to all the same information that Alice does, perhaps even the ability to change her password.

Once included, prevention of a replay attack is actually quite simple. Bob sends Alice a fresh nonce, a newly generated random number, to act as a session identifier. Alice appends this nonce to the password and then encrypts the result. Since each message to encrypt is now different, because each session will have a different nonce, the encrypted version of the password cannot be replayed. If Eve tries to act as Alice, she will receive a new nonce from the server, and since Eve does not know the encryption key, she will be unable to create a new password message.

It is interesting to note that the nonce is transmitted in the clear. So long as Bob sends a fresh nonce whenever a new session or IP address is encountered, the actor posing as Alice will always have to know the key to fake an encryption.
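The exchange just described can be sketched as follows; the use of an HMAC over the nonce in place of “encrypting the password,” and the one-time nonce set, are our simplifications.

    import hashlib
    import hmac
    import secrets

    shared_key = b"alice-and-bob-shared-key"
    issued_nonces = set()          # Bob remembers which nonces are outstanding

    def bob_new_challenge() -> bytes:
        nonce = secrets.token_bytes(16)   # sent in the clear, fresh per session
        issued_nonces.add(nonce)
        return nonce

    def alice_response(nonce: bytes) -> bytes:
        # Alice proves knowledge of the shared key, bound to this nonce
        return hmac.new(shared_key, nonce, hashlib.sha256).digest()

    def bob_verify(nonce: bytes, response: bytes) -> bool:
        if nonce not in issued_nonces:    # unknown or already-used nonce
            return False
        issued_nonces.discard(nonce)      # one-time use: replays are rejected
        return hmac.compare_digest(alice_response(nonce), response)

    n = bob_new_challenge()
    r = alice_response(n)
    assert bob_verify(n, r)       # first use succeeds
    assert not bob_verify(n, r)   # replaying the captured message fails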

We say a protocol is secure from a replay attack if it provides a sense of freshness for each packet, which would keep an intruder from replaying that packet. The use of an initialization vector is a start towards this, but the space must be large enough such that collisions are rare.

DoS

Denial of Service (DoS) takes on many meanings. At its core, DoS attacks involve preventing a client from receiving a service from the network that it would be able to receive under more friendly conditions.

In wireless security, DoS attacks take various forms. As briefly explained in the survey paper and more completely explained in Bellardo and Savage’s work, it is possible to deny a client service to an access point by sending as few as 30 messages per second at the link layer. It is also possible to deny wireless networks service by jamming the relevant frequencies, an exploitation of the physical layer. It is also possible for another client to pose as a wireless station, confusing other wireless clients and effectively denying them service. This exploit involves acting as a DHCP server and Internet gateway and is hence a layer-three attack. A good protocol is robust against attacks on all the layers it uses.

One well-known method for preventing some DoS attacks is to use a “cookie.” A cookie usually contains a hash, under a personal key, of the source address of the initiator, any session identifiers, and something that the responder knows and can remember across many sessions without setting up state for a specific session. A cookie is usually involved in at least a four-way handshake and works like this:

  1. The initiator sends the responder a request to establish a session
  2. The responder sends back an acknowledgement and a cookie
  3. If the initiator sent the request from its real source IP address in step one, it will receive the message in step two and can reply with the cookie to set up the session
  4. If the responder receives the correct cookie, it will also set up the session.

So the cookie works because the responder will not set up any state until he has verified the source address of the initiator. Since many DoS attacks rely upon spoofing a source address, this is a reasonably effective method of prevention.
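The cookie in the four-step exchange above can be sketched as a stateless hash under a long-lived responder secret; the field choices and sizes below are illustrative.

    import hashlib
    import hmac

    RESPONDER_SECRET = b"long-lived-responder-secret"   # kept across sessions

    def make_cookie(source_ip: str, session_id: bytes) -> bytes:
        """Step 2: derive the cookie from the claimed source and session id."""
        msg = source_ip.encode() + b"|" + session_id
        return hmac.new(RESPONDER_SECRET, msg, hashlib.sha256).digest()[:16]

    def check_cookie(source_ip: str, session_id: bytes, cookie: bytes) -> bool:
        """Step 4: only set up state if the cookie matches the source address."""
        return hmac.compare_digest(make_cookie(source_ip, session_id), cookie)

    cookie = make_cookie("192.0.2.10", b"session-42")
    assert check_cookie("192.0.2.10", b"session-42", cookie)         # genuine initiator
    assert not check_cookie("203.0.113.7", b"session-42", cookie)    # spoofed source fails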

Consideration / 0(bad) / 1(fair) / 2(good)
Replay attack prevention / None / - / IV sequence, per-packet key mixing
DoS cookie / No / - / Yes
Number of known attacks prevented / None / Some of them / All of them

Table 4: Prevention of attacks

Identity Protection

A good protocol is one that only reveals identity to the intended parties. Preservation of identity keeps attackers from narrowing their search for a given user’s information. At some point, identity or group identity must be communicated if authentication is to make progress. A protocol which reveals identities in plain-text has the worst identity protection, while a protocol that reveals identity under a strong form of encryption with a fresh key has the best type of identity protection.

It is also better to use a basic form of authentication like source IP address validation before revealing identity.

Consideration / 0(bad) / 1(fair) / 2(good)
Group identity revealed to / Entire network / All parties / Specific parties
Specific identity revealed to / Entire network / All parties / Specific parties

Table 5: Identity protection

Ease and Cost of Implementation

Since the computational costs of setting up an anonymous connection are close to none, we will use this as the highest standard. The “ease” of implementation is a subjective measure, which requires some knowledge of technology already in existence. One concrete measure of a protocol could be the number of gates it would require in a client’s hardware device. Another concrete measure could be the number of lines of code required to implement it. The complexity of the protocol can also be measured by the number of actors involved and the number of messages exchanged.

The network utilization efficiency can be measured by the number of handshakes and parties involved in establishing a user’s identity. In other words, if one protocol requires four parties to authenticate a user while another requires three, and both establish identity to the same degree of security, the protocol requiring four parties is less efficient.

A new protocol should be relatively easy to deploy, in that it does not require a complete overhaul of a network in order to function. We currently do not know of a protocol that cannot be implemented completely incrementally, but we can imagine that such a protocol could be created.

Consideration / 0(bad) / 1(fair) / 2(good)
Computation cost / High / Medium / Low
Incremental installation / No / - / Yes
Number of messages exchanged / 300 / 30 / 3
Number of actors involved / Many actors / - / Few actors
Packet key / Mixing function / Concatenated / No need
Additional server hardware / Yes / - / No
Additional network infrastructure / Yes / - / No
Number of gates in client device / High / - / Low
Lines of Code / High / - / Low

Table 6: Ease and cost of implementation

Power Consumption

Most devices that use wireless connections run from a battery. As a result, it is only fair to include power consumption in our evaluation of wireless protocols. Power consumption is best measured in a relative manner between protocols. For example, a protocol which uses AES will use more power than one that uses RC4.

Additionally, when a client observes attack-like behavior, it would be good for a protocol to specify a means to detect the attack and conserve power. This is especially important for networks that are seldom recharged, like sensor networks. A wireless protocol evaluation methodology would be incomplete without considering power. It should be noted that implementing AES in hardware instead of running it from software in ROM cuts the power cost significantly [5].