MISST Project 6

LM4 Information Security Analysis

Chad Daniels, Phil Laube, Dorothy Skowrunski,

Sarah Stottsberry, Thomas Vaughn

October 19, 2008

Many organizations have already taken their first steps toward securing their valuables and sensitive data. Most have implemented some solutions to reduce the threat of hackers, thieves, dishonest employees, viruses, bug-infested illegal software, or the myriad other dangers of the Internet. However, the most forward-looking organizations no longer regard IT security as just a necessary evil, a mere preventive measure to protect their business information. They now acknowledge it as a means of improving productivity and enabling the technology of the future, both of which represent measurably increased profitability and genuine business advantage.

Following are five security technologies that we believe would provide the most benefit to ODNR, based on the results of the risk analysis we conducted and attached to this document as a separate file. The risk analysis is organized into five categories: Physical Threats, Logical Threats, Technical Threats, Infrastructure Threats, and Human Error. Its results pointed to these five technologies: Encryption; Firewall Technology and the development of Unified Threat Management; Vulnerability Scanners; Identity and Access Management; and Virtualization in Disaster Recovery.

Encryption

Encryption, or information scrambling, technology is an important security tool. Properly applied, it can provide a secure communication channel even when the underlying system and network infrastructure is not secure. This is particularly important when data passes through shared systems or network segments where multiple people may have access to the information. In these situations, sensitive data--and especially passwords--should be encrypted in order to protect it from unintended disclosure or modification.

Encryption is a procedure that involves a mathematical transformation of information into scrambled nonreadable text, called "cipher text." The computational process (an algorithm) uses a key--actually just a big number associated with a password or pass phrase--to compute or convert plain text into cipher text with numbers or strings of characters. The resulting encrypted text is decipherable only by the holder of the corresponding key. This deciphering process is also called decryption[1].

A simple way of understanding encryption is to imagine sending mail through the postal system in a clear envelope. Anyone with access to it can see the data. If it looks valuable, they might take it or change it. Encryption scrambles the data, essentially creating an envelope for message privacy[2].
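
To make the key-and-cipher-text relationship concrete, the following minimal sketch (in Python, using the third-party cryptography package's Fernet recipe) encrypts and then decrypts a short record. The sample data value is invented for illustration; this is a sketch of the idea, not a recommended configuration for ODNR.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the shared secret: "just a big number"
    cipher = Fernet(key)

    plain_text = b"SSN: 000-00-0000"   # illustrative sensitive record
    cipher_text = cipher.encrypt(plain_text)    # scrambled, nonreadable bytes

    # Only a holder of the same key can reverse the transformation.
    recovered = Fernet(key).decrypt(cipher_text)
    assert recovered == plain_text

Anyone who intercepts cipher_text without the key sees only scrambled bytes, which is exactly the opaque-envelope effect described above.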

For ODNR, encryption could be an important security technology because the agency is divided into so many different divisions. Some information is inevitably exchanged between divisions, and this information could be vulnerable to hackers. Encryption would render this information useless if it were intercepted.

Data that some ODNR divisions store on laptops, and that is not necessarily backed up to main servers, is another major vulnerability. If a laptop is stolen or lost, having all pertinent information encrypted would render it useless to anyone who does not have the appropriate password.

Private Key / Public Key

Two different cryptographic methods are being applied to computer security problems: private-key and public-key encryption. In private-key encryption, the sender and receiver of information share a secret--a key that is used for both encryption and decryption. In public-key encryption, two different mathematically related keys (a key pair) are used to encrypt and decrypt data. Information encrypted with one key may only be decrypted by using the other half of the key pair.

  • With private key encryption, there's one key for each trunk. The same key is used to lock and unlock it. Therefore, Alice and Bob have to have the same key. If Alice wants to send a message securely to Bob, and he doesn't have the key, she needs to send him the key first. That's a problem: to deliver the key securely, she would already need a secure channel, which is exactly what the key is supposed to provide.
  • With public key encryption, there are two keys for each trunk. If one key is used to lock the trunk, it can't be used to unlock the trunk; only the other key can. The two keys complement each other. If Alice wants to send a message securely to Bob, she first gets his public key, perhaps from his web page or a public-key server. She then locks the trunk with the public key and sends it to Bob. He then unlocks it with his private key (a code sketch of this exchange follows this list).
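
The following minimal Python sketch (again using the cryptography package, here with RSA) mirrors the Alice-and-Bob trunk exchange: Bob publishes one key and keeps the other, and whatever the public key locks, only the private key can unlock. The message and key size are illustrative.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Bob generates a key pair; the public half can be posted anywhere.
    bob_private_key = rsa.generate_private_key(public_exponent=65537,
                                               key_size=2048)
    bob_public_key = bob_private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Alice "locks the trunk" with Bob's public key...
    cipher_text = bob_public_key.encrypt(b"Meet at noon", oaep)

    # ...and only Bob's private key can unlock it.
    assert bob_private_key.decrypt(cipher_text, oaep) == b"Meet at noon"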

When using a public-key system for personal authentication or secure messaging, you keep one key secret. The second (public) key can then be distributed to anyone. Some people put their public key on their personal Web page; it might also be stored on a public-key server. The secret (or private) key in a public-key cryptographic system is never transmitted or shared. For example, when using this method for client-side authentication, the server sends some data to your client program. The client uses your private key to encrypt that data. Using your public key, the server will attempt to decrypt the returned data, and, if successful, know that it has established communication with "the real you."
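
The sketch below illustrates that challenge-response round trip. One caveat: modern libraries express "encrypt with the private key, decrypt with the public key" as signing and verifying, so that is the vocabulary used here; the challenge and keys are generated on the fly purely for illustration.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    client_private_key = rsa.generate_private_key(public_exponent=65537,
                                                  key_size=2048)
    client_public_key = client_private_key.public_key()  # known to the server

    challenge = os.urandom(32)   # "the server sends some data to your client"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # The client answers the challenge using its never-shared private key.
    signature = client_private_key.sign(challenge, pss, hashes.SHA256())

    # The server checks the answer against the client's public key.
    try:
        client_public_key.verify(signature, challenge, pss, hashes.SHA256())
        print("established communication with 'the real you'")
    except InvalidSignature:
        print("authentication failed")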

One of the most common uses of public-key technology is to provide a secure communication channel between computer programs. An example of current state of the art public-key technology is the SSL protocol often used to protect information sent between Web browsers and Web servers. SSL stands for Secure Sockets Layer. This protocol, designed by Netscape Communications Corp., is used to send encrypted HTTP (Web) transactions.

Seeing "https" in the URL box on your browser means SSL is being used to encrypt data as it travels from your browser to the server. This helps protect sensitive information--social security and credit card numbers, bank account balances, and other personal information--as it is sent, for ODNR this could be used to protect people’s names, addresses’ and any other information that may be either on portable laptops or shared between divisions.

The SSL protocol was originally developed by Netscape to ensure the security of data transported and routed through HTTP, LDAP, or POP3 application layers. SSL is designed to use TCP as a communication layer to provide a reliable, end-to-end, secure, and authenticated connection between two points over a network (for example, between the service client and the server). Although SSL can be used to protect data in transit for any network service, it is used mostly in HTTP server and client applications. Today, almost every available HTTP server can support an SSL session.

Figure 1: SSL between application protocols and TCP/IP

What problems does SSL target? The main objectives of SSL are:

  • Authenticating the client and server to each other: the SSL protocol supports the use of standard public-key cryptographic techniques to authenticate the communicating parties to each other. Though the most frequent application is authenticating the server to the client on the basis of a certificate, SSL may also use the same methods to authenticate the client.
  • Ensuring data integrity: during a session, data cannot be either intentionally or unintentionally tampered with.
  • Securing data privacy: data in transit between the client and the server must be protected from interception and be readable only by the intended recipient. This prerequisite is necessary both for the data associated with the protocol itself (securing traffic during negotiations) and for the application data sent during the session.

SSL is in fact not a single protocol but rather a set of protocols that can be further divided into two layers:
  • The protocol to ensure data security and integrity: this layer is composed of the SSL Record Protocol,
  • The protocols that are designed to establish an SSL connection: three protocols are used in this layer: the SSL Handshake Protocol, the SSL Change Cipher Spec Protocol, and the SSL Alert Protocol.

SSL uses these protocols to address the tasks as described above. The SSL record protocol is responsible for data encryption and integrity. The other three protocols cover the areas of session management, cryptographic parameter management and transfer of SSL messages between the client and the server[3].

Figure 2: SSL Protocol Stack
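
As a small client-side illustration of the protocols just described, the sketch below uses Python's standard-library ssl module. The host name is a placeholder; the handshake (key negotiation), certificate-based server authentication, and record encryption all happen inside the two wrapped calls.

    import socket
    import ssl

    context = ssl.create_default_context()  # validates the server certificate

    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock,
                                 server_hostname="www.example.com") as tls:
            # The SSL/TLS handshake has completed at this point; everything
            # sent from here on is encrypted on the wire.
            print(tls.version())   # e.g., "TLSv1.3"
            tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
            print(tls.recv(1024)[:80])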

Due to the volume of information and the possibility of information being shared between divisions and outside sources such as universities, our recommendation is for ODNR to use encryption to safeguard any information it deems important. This would alleviate the dangers associated with having information on laptops out in the field, as it would render the information useless to anyone who does not have the proper key to unlock the encryption.

Firewalls and Unified Threat Management

A firewall is a piece of software or hardware that filters all network traffic between any computer, home network, or company network and the Internet. As network traffic passes through the firewall, the firewall decides which traffic to forward and which traffic not to forward, based on defined rules. Normally a firewall is installed where an internal network connects to the Internet. Although larger organizations may also place firewalls between different parts of their own network that require different levels of security[4], most firewalls screen traffic passing between an internal network and the Internet.[5] Firewalls have continued to evolve to meet increasingly complex threats. The latest development is the Unified Threat Management device.

There are several types of firewalls. Each can be appropriate in different situations or it may be appropriate to employ multiple types of firewalls:

  • Packet filter – The simplest type of firewall. Controls access to packets on the basis of the packet's address or the specific transport protocol (e.g., HTTP web traffic). A toy sketch of this idea appears after this list.
  • Stateful inspection – Maintains “state” or “context” information from one packet to another in the input stream, making sure that only the intended stream of packets goes through and unwanted packets don't “slip in”.
  • Application proxies – Simulates the (proper) effects of an application so that the application receives only legitimate requests. It does this by inspecting the entire packet against pre-set rules and then determining whether the traffic should pass.
  • Guards – A sophisticated firewall. Functions like a proxy, but with greater functionality. Decides what services to perform on the user's behalf in accordance with its available knowledge, such as what it knows about the (outside) user's identity, previous interactions, etc.
  • Personal – An application program that runs on a workstation to block unwanted traffic[6].
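
As promised above, here is a toy Python sketch of the packet-filter idea: an ordered rule table is matched against each packet's source address and destination port, with a default-deny fallback. The addresses and rules are invented for illustration and are far simpler than a production rule set.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int

    # (source prefix, destination port, action) -- first matching rule wins.
    RULES = [
        ("10.0.0.", 80,  "ALLOW"),   # internal hosts may browse the web
        ("10.0.0.", 443, "ALLOW"),
        ("",        23,  "DENY"),    # block telnet from anywhere
    ]
    DEFAULT = "DENY"                 # default-deny posture

    def filter_packet(pkt: Packet) -> str:
        for src_prefix, port, action in RULES:
            if pkt.src_ip.startswith(src_prefix) and pkt.dst_port == port:
                return action
        return DEFAULT

    print(filter_packet(Packet("10.0.0.5", "198.51.100.7", 80)))  # ALLOW
    print(filter_packet(Packet("203.0.113.9", "10.0.0.5", 23)))   # DENY

Note that the decision uses only header fields; nothing inside the packet's payload is examined, which is what separates this class of firewall from the more sophisticated ones described below.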

Many firewalls have abilities from several different types, and most vendors build extra features into their products. Which one, or which combination, is appropriate depends on the level of communication with the outside (usually through the Internet).

A good way to understand the functionality of firewalls is to look at the Open Systems Interconnection (OSI) model for networking[7]. OSI is a standard reference model for communication between two end users in a network, used in developing products and understanding networks. Each layer in this model builds on the previous layer. OSI divides telecommunication into seven layers, grouped in two. The upper four layers are used whenever a message passes from or to a user. The lower three layers are used when any message passes through the host computer. Messages intended for that computer pass up to the upper layers; messages destined for some other host are not passed up but are forwarded to another host.[8]

Firewalls operate at different levels in this model, which determines their level of sophistication. For example, a packet filter firewall operates at the network layer. At this layer, the filter is only concerned with the address on the outside (beginning) of the packet. As one moves up the layers, it becomes necessary to look farther into the packet, and then at the stream of packets that ultimately forms the application. A guard-type firewall would interact at the application level. The guard must look inside and interpret the string of packets to determine whether it is genuine and should be passed through. This can even include scanning incoming files for viruses. An example: a company wants to allow its employees to fetch files via FTP, but to prevent the introduction of viruses, it first passes all incoming files through a virus scanner. Even though many of these files will be nonexecutable text or graphics (which should pass), the company administrator thinks that the expense of scanning them will be negligible.[9]

The level of sophistication does not necessarily equate to effectiveness. Simple firewalls can be very difficult to circumvent, which in certain situations can increase their effectiveness. So rather than focus on the category, it is important to look at the capabilities of the product being evaluated. Another author suggests breaking firewalls into just two categories:

  • Network layer firewalls
  • Application layer firewalls

“Network layer firewalls generally make their decisions based on the source address, destination address and ports in individual IP packets. A simple router is the traditional network layer firewall, since it is not able to make particularly complicated decisions about what a packet is actually talking to or where it actually came from. Modern network layer firewalls have become increasingly more sophisticated, and now maintain internal information about the state of connections passing through them at any time. One important difference between network layer and application layer firewalls is that many network layer firewalls route traffic directly through them; no logging or evaluation is done, and traffic is passed through solely based on the IP address … Network layer firewalls tend to be very fast and almost transparent to their users.

“Application layer firewalls generally are hosts running proxy servers, which permit no traffic directly between networks, and which perform elaborate logging and examination of traffic passing through them. Since proxy applications are simply software running on the firewall, it is a good place to do lots of logging and access control. Application layer firewalls can be used as network address translators, since traffic goes in one side and out the other, after having passed through an application that effectively masks the origin of the initiating connection.

Having an application in the way in some cases may impact performance and may make the firewall less transparent.” [10]

This author suggests that the future of firewall technology seems to be somewhere in between these two layers, with network layer firewalls looking at the traffic passing through and application layer firewalls becoming more transparent to the user.

One technology that has assisted in the effectiveness of application layer firewalls is Deep Packet Inspection, sometimes also referred to as Dynamic Packet Filtering. In this process, the firewall looks within the packet of information, relative to the context of the stream, and makes a decision on the significance of the data. This requires significant processing speed[11].
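
A toy contrast with the packet filter sketched earlier: a deep-packet-inspection rule examines the payload itself rather than just the header fields. The byte-pattern signatures here are invented for illustration; real DPI engines match far richer, context-aware patterns at line speed.

    # Known-bad byte patterns (illustrative only).
    SIGNATURES = [b"<script>", b"cmd.exe", b"' OR 1=1"]

    def inspect(payload: bytes) -> str:
        """Drop a packet whose payload contains a known-bad pattern."""
        for sig in SIGNATURES:
            if sig in payload:
                return "DROP"
        return "PASS"

    print(inspect(b"GET /index.html HTTP/1.1"))           # PASS
    print(inspect(b"GET /?q=<script>alert(1)</script>"))  # DROP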

Combining this type of technology with other firewall technologies, vendors have built this functionality into one product. The newer concept of Unified Threat Management (UTM) combines content filtering, spam filtering, intrusion detection and anti-virus duties traditionally handled by multiple systems into one firewall appliance.

Advantages of these include:

  • Reduced complexity
  • Ease of deployment
  • Integration capabilities
  • The black box approach – separate from the rest of the systems.
  • Troubleshooting ease – allowing for a swap of a troublesome unit rather than attempting to pinpoint a problem.[13]

These offer simplicity to small to medium sized organizations. Also, larger organizations may have many small locations in a diverse area, such as several ODNR offices. A UTM offers a simple, easy to install and manage, solution. But the UTM even has applicability to larger organizations. Viruses and other malicious attacks are becoming increasing complex and now attack in a multifaceted approach to exploit the standalone nature of current defenses. “The threat environment changed with blended attacks. These metastasized attack methods exploit small gaps between various security layers. Security has responded by combining security layers into a cohesive package.”[14]Prices of a UTM device range from $2,500 to $8,000 and up per unit. This concept does not have to be a single product solution, but can also be looked as a platform to offer consistent security across all of these areas. In the same article, Kolodgy suggests that UTM’s will “… evolve into an eXtensible Threat Management (XTM) platform. XTM platforms will take security appliances beyond traditional boundaries by vastly expanding security features, networking capabilities and management flexibility. Future XTM appliances should provide automated processes – such as logging, reputation-based protections, event correlation, network access control and vulnerability management. Adding to the networking capabilities will be management of network bandwidth, traffic shaping, throughput, latency and other features, including unified communications.”[15]