Physical Unclonable Function (PUF) Technology
We will describe four different technologies:
- Digital private keys
- Physical Unclonable Functions (PUFs)
- Controlled PUFs (CPUFs), also referred to as programmable PUFs or processor-PUFs
- Physically Obfuscated Keys (POKs)
We assume the reader is familiar with public key cryptography and symmetric key cryptography. For a tutorial see
1. Digital Private Keys
Digital private keys (typically 512 or 1024 bits) can be stored in a chip. These keys are non-volatile and are essentially determined by the presence or absence of a connection. Therefore, if an adversary is willing to strip away the surrounding layers then he can discover this key and clone the device that contains the key. The document
http://csg.lcs.mit.edu/~devadas/pubs/AttacksAgainstSmartCards.pdf describes a slew of possible attacks to discover secret keys.
To put different private keys in different chips requires customization in the manufacturing process. The deeper the private key is embedded in silicon, the more secure it is; however, the greater the cost of customization. Customization at the top layers, i.e., in metal, is cheaper but is not really an option, because the key is then easily extracted through de-packaging and microscopy.
The manufacturer will know the private key that is embedded in each device, and has to be trusted to forget it.
2. Physical Unclonable Functions (PUFs)
A PUF is simply a particular circuit fabricated on a silicon integrated circuit chip to create an unclonable identity for the chip. On their own, PUFs can only be used for identification or authentication applications.
The PUF is a circuit built out of standard logic primitives used in semiconductor technology. The inputs to the PUF are digital inputs (0/1) and the outputs are digital outputs (0/1). However, the function implemented by the PUF depends on the timing of the individual gates and wires.
Circuits are fabricated on silicon by creating layout masks for the circuit. When the PUF is fabricated on many silicon chips (using the same set of layout masks for each chip), the delays of gates and wires vary from chip to chip due to statistical variation in the manufacturing process. The PUF functionality is such that a digital output is produced based on the relative delays of paths in the PUF. Therefore, when the same inputs (challenges) are applied to different chips, the chips can produce different outputs (responses). In our experiments, two randomly selected PUFs differed in 22 out of 100 responses when presented with the same set of 100 inputs. We have calculated that if we have a billion PUFs, and we pick a random pair of PUFs and apply 400 challenges to them, these challenges will fail to differentiate between the two with a probability of 1 in a billion.
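The last claim can be checked with a short union-bound calculation, assuming (as an idealization) that a PUF's responses to distinct challenges are independent; the 22% difference rate, the 400 challenges, and the billion-PUF population come from the text:

```python
# Probability that two independent PUFs agree on one challenge, given that
# randomly selected PUF pairs differ on ~22% of responses.
p_agree = 1.0 - 0.22

# Probability that a given pair agrees on ALL 400 challenges, i.e., that the
# 400 challenges fail to differentiate the pair.
p_indistinguishable = p_agree ** 400

# Union bound over every pair drawn from a population of one billion PUFs.
n = 10 ** 9
num_pairs = n * (n - 1) // 2
p_any_collision = num_pairs * p_indistinguishable

print(f"P(a fixed pair is indistinguishable): {p_indistinguishable:.2e}")
print(f"union bound over all pairs:           {p_any_collision:.2e}")
```

Under this idealized model the bound comes out many orders of magnitude below 1 in a billion, so the figure quoted above is conservative.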
For authentication applications, e.g., credit cards or key cards, once a PUF is fabricated, the ``lock'' has to be created. The lock simply corresponds to a database of challenge response pairs. In an authentication application, a server that stores the lock database queries the PUF with challenges. If the PUF responds correctly, the lock opens.
The main engineering issue with the PUF is reliability under environmental variation. Temperature and voltage affect the absolute delays of gates and wires in any circuit. PUFs are relatively robust to environmental variation because we make relative delay measurements as opposed to absolute delay measurements. Therefore, with high probability the response of a PUF to a challenge is the same even when the temperature varies significantly. However, for some challenges the PUF may produce random or noisy responses rather than deterministic ones. That is, for a particular challenge, a PUF may produce a 1 at one time and a 0 at a different time due to changes in the environment. For the PUFs we have experimented with, the noise level in a stable environment is < 1%. For large changes in temperature (50 C, or 90 F), the noise level increases to 4-5%.
In an authentication application, we have three ways of dealing with noise:
1. We can simply ignore noise, since the noise level (5%) is substantially less than the variation across PUFs (22%). If Alice produces 95 out of 100 (or more) correct responses as compared to what is stored in the lock database for Alice, we authenticate her; otherwise we do not.
2. We can create the lock only out of challenges that are robust, i.e., not noisy, and then require that Alice produce 100% correct responses.
3. We can use error correction to correct Alice's response to a challenge if it is close to what is in the database. Again, we require that Alice produce 100% correct responses after error correction.
For authentication applications, Methods 1 and 2 are most convenient and work very well. Method 3 is useful for controlled/programmable/processor PUFs. Methods 2 and 3 are useful for POKs.
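Method 1 (threshold matching) can be sketched as below. The 5% noise and 22% inter-chip difference figures come from the text; the function and variable names, and the use of exactly 5 and 22 flipped bits, are illustrative:

```python
import random

def matches(expected, observed):
    """Count positions where the observed responses equal the stored ones."""
    return sum(e == o for e, o in zip(expected, observed))

def authenticate(lock_responses, observed_responses, threshold=95):
    """Method 1: accept iff at least `threshold` of the 100 responses match."""
    return matches(lock_responses, observed_responses) >= threshold

random.seed(0)

# Enrollment: Alice's responses to 100 challenges, stored in the lock database.
alice_lock = [random.randint(0, 1) for _ in range(100)]

# A genuine reading: the same PUF, with ~5% of responses flipped by noise.
genuine = alice_lock.copy()
for i in random.sample(range(100), 5):
    genuine[i] ^= 1

# An impostor PUF, differing from Alice's on ~22% of responses.
impostor = alice_lock.copy()
for i in random.sample(range(100), 22):
    impostor[i] ^= 1

print(authenticate(alice_lock, genuine))   # True: 95 matches meets the threshold
print(authenticate(alice_lock, impostor))  # False: only 78 matches
```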
To break an authentication application, an adversary can try to break the lock or break the key. If we assume that the lock is stored in a secure location and challenges are not repeated, then the adversary cannot break the lock.
Assume that the adversary has physical access to a particular PUF chip and wishes to clone it. Since the PUF has 128 inputs, exhaustively enumerating all 2^128 possible challenges to determine their responses is infeasible.
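As a rough sanity check, assume an optimistic adversary who can apply a billion challenges per second (the rate is an assumption, not from the text):

```python
# 128 challenge inputs give 2**128 distinct challenges to enumerate.
challenges = 2 ** 128
rate = 10 ** 9                        # assumed: one billion challenges/second
seconds_per_year = 60 * 60 * 24 * 365
years = challenges / (rate * seconds_per_year)
print(f"time to enumerate all challenges: {years:.2e} years")  # ~1e22 years
```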
The adversary can try to probe the PUF chip to determine delays of individual wires and gates. The delays depend on the characteristics (i.e., width, length, thickness) of many layers of metal as well as doped silicon. Probing the PUF to try and determine delays will damage the PUF and change the delays. Further, delay is not just a static number, but is dependent on dynamic coupling between wires and neighboring transistors so the adversary’s job is very difficult.
The most likely attack on PUFs is a software model-building attack. Here the adversary attempts to simulate a PUF's behavior by applying a very large number of challenges and observing the responses. If the PUF is a very simple linear circuit, this attack has a reasonable chance of success: that is, the adversary can produce a software clone or model of the PUF that predicts the PUF's responses to within 5%. This means that the adversary is predicting the response to within environmental variation, and this breaks the authentication application. However, making the PUF a nonlinear circuit, in which the delays are complex functions of the challenge, will thwart this attack. The more complicated the connections in the PUF circuit, the harder it is to model. This attack also does not work against controlled PUFs, since controlled PUFs obfuscate their responses, and the adversary cannot create a model.
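To illustrate why a purely linear PUF is vulnerable, the sketch below builds an idealized arbiter-PUF-style linear delay model and has the "adversary" fit it with a plain perceptron from observed challenge/response pairs. All parameters (16 stages, 2000 training pairs, 50 epochs) are illustrative choices, not from the text:

```python
import random

random.seed(1)
N_STAGES = 16  # small illustrative size; a real PUF might have 64-128 stages

# An idealized linear "PUF": the response is the sign of a weighted sum of
# parity features derived from the challenge.
true_w = [random.gauss(0, 1) for _ in range(N_STAGES + 1)]

def features(challenge):
    # Parity transform: phi[i] = product of (1 - 2*c[j]) for j >= i.
    phi, prod = [], 1
    for c in reversed(challenge):
        prod *= 1 - 2 * c
        phi.append(prod)
    phi.reverse()
    return phi + [1]  # bias term

def puf_response(challenge, w):
    s = sum(wi * xi for wi, xi in zip(w, features(challenge)))
    return 1 if s > 0 else 0

# Adversary: collect challenge/response pairs and fit a perceptron model.
train = [[random.randint(0, 1) for _ in range(N_STAGES)] for _ in range(2000)]
model_w = [0.0] * (N_STAGES + 1)
for _ in range(50):  # perceptron training epochs
    for c in train:
        phi = features(c)
        pred = 1 if sum(wi * xi for wi, xi in zip(model_w, phi)) > 0 else 0
        err = puf_response(c, true_w) - pred
        if err:
            model_w = [wi + err * xi for wi, xi in zip(model_w, phi)]

# Evaluate the software clone on fresh, unseen challenges.
test_set = [[random.randint(0, 1) for _ in range(N_STAGES)] for _ in range(1000)]
acc = sum(puf_response(c, model_w) == puf_response(c, true_w)
          for c in test_set) / 1000
print(f"software clone accuracy on unseen challenges: {acc:.1%}")
```

Because the target function is linear in the parity features, a simple learner recovers it almost perfectly; a nonlinear PUF circuit removes this structure.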
To summarize, PUFs are a means of authenticating chips. They can be made to be reliable and secure by carefully designing the lock in the authentication application and choosing a complicated enough circuit design for the PUF.
3. Controlled PUFs (CPUFs)
A controlled PUF is simply a PUF with control logic that restricts access to the inputs of the PUF (where challenges are presented) and which obfuscates the outputs (where responses are produced). To enable many applications, the most flexible approach may be to simply use a programmable processor for the control logic. So we will refer to a controlled PUF as a programmable PUF or a processor-PUF. Such a programmable processor can be implemented using ~10K gates. Each application will have different software (typically firmware) running on the processor. Alternately, custom logic can be designed and optimized for a specific application. This logic or software corresponds to 1) protocols for CPUF access and 2) error correction.
For a DRM application, the content provider will need to know a challenge response pair for the controlled PUF that no one else knows. He then encrypts his media content using a secret extracted from the challenge response pair. This content can only be decrypted by the controlled PUF. Typically, to ensure that content does not need to be encrypted differently for each different controlled PUF over and over, the content is encrypted using a master key, and then the master key is encrypted using the secret extracted from the challenge response pair for the particular CPUF. (The master key is encrypted differently for each CPUF.)
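The master-key scheme above can be sketched as follows. The SHA-256 counter-mode "cipher" is a toy stand-in for illustration only (a real system would use an authenticated cipher such as AES-GCM), and all key and response values are invented:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode stream cipher; for illustration only."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# One master key encrypts the content once...
master_key = b"\x13" * 32
content = b"some media content"
encrypted_content = keystream_xor(master_key, content)

# ...and the master key is wrapped separately for each CPUF, using a secret
# derived from that CPUF's challenge response pair.
def wrap_master_key(cpuf_response: bytes) -> bytes:
    secret = hashlib.sha256(b"kdf|" + cpuf_response).digest()
    return keystream_xor(secret, master_key)

response_cpuf_a = b"\xa7" * 16            # hypothetical response of CPUF A
wrapped_for_a = wrap_master_key(response_cpuf_a)

# CPUF A regenerates its response internally, unwraps the master key, and
# decrypts the content; the master key never leaves the chip.
secret_a = hashlib.sha256(b"kdf|" + response_cpuf_a).digest()
recovered_key = keystream_xor(secret_a, wrapped_for_a)
print(keystream_xor(recovered_key, encrypted_content))  # -> original content
```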
If you expose the decrypted master key at the pins of the chip, or expose unencrypted content, then the DRM scheme is broken. For example, if you send digital audio out to speakers, CPUFs cannot help you if an adversary taps the speakers; he can always re-digitize analog output as well. You can force him to do that by making the CPUF decrypt the master key, decrypt the content, and pass it through a D-to-A converter before sending it out of the chip. In this case, the adversary has to probe a running chip to discover the master key or unencrypted digital content. This is very difficult to do, and virtually impossible without damaging or destroying the CPUF.
A challenge response pair management software infrastructure is required to deploy a DRM application using CPUFs. Before a CPUF is sent to a consumer, a challenge response pair is generated in a secure location. Only the manufacturer, content provider, or a rights administrator knows this challenge response pair. Knowing a challenge response pair that no one else knows allows you to create a secure channel to a remote CPUF device. In order for someone to eavesdrop on this channel, they would have to break an encryption algorithm such as AES. This is similar to the situation with PKI: if a user's private key is indeed private (known only to VeriSign and the user), then anyone can securely communicate with the user using the user's public key, and an adversary cannot eavesdrop unless he breaks RSA.
For each technology there is a bootstrapping phase: for CPUFs it is the generation of the first challenge response pair, for conventional digital private keys and POKs (see the POK section) it is placing the private key into the chip. Every person who knows a challenge response pair for a CPUF that nobody else knows can communicate securely with a remote CPUF and do DRM. Once bootstrapping is done, the generated challenge response pair can be used to generate new challenge response pairs even when the CPUF is in a remote location, and these new pairs can, in turn, be sent to other content providers. This secure distribution of challenge response pairs is the software infrastructure that needs to be developed for CPUFs. The software infrastructure takes the place of public-key infrastructure (PKI).
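One way the renewal step could look, heavily simplified: the holder of an existing challenge response pair uses it to authenticate a request for a new pair. This sketch omits nonces and replay protection, a hash stands in for the PUF, and all values are invented:

```python
import hashlib
import hmac

# A challenge response pair known only to the rights administrator.
old_response = b"\x5d" * 16          # what the CPUF would recompute on-chip
shared = hashlib.sha256(old_response).digest()   # secret shared with the CPUF

# Administrator asks the remote CPUF for a new pair, authenticating the
# request with a MAC keyed by the shared secret.
new_challenge = b"c1"
tag = hmac.new(shared, b"getCRP|" + new_challenge, hashlib.sha256).digest()

def cpuf_handle(challenge, tag):
    # On-chip: verify the MAC, evaluate the PUF on the new challenge, and
    # return the new response encrypted (XORed) under the shared secret.
    expect = hmac.new(shared, b"getCRP|" + challenge, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expect)
    new_response = hashlib.sha256(b"puf|" + challenge).digest()[:16]  # stand-in
    return bytes(a ^ b for a, b in zip(new_response, shared))

enc = cpuf_handle(new_challenge, tag)
new_response = bytes(a ^ b for a, b in zip(enc, shared))  # administrator side
```

The administrator can now hand `new_response` (with its challenge) to a content provider without revealing the original pair.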
An advantage of CPUFs is that the manufacturer and the rights administrator can be different entities. In contrast, when a digital private key is embedded in a chip, the manufacturer will always be able to eavesdrop and decrypt the message/content. The POK case, described in the next section, is in between storing a conventional digital (0/1) private key on the chip and the CPUF case.
4. Physically Obfuscated Keys (POKs)
Physically obfuscated keys (POKs) are an alternative to CPUFs. POKs are essentially PUFs with a fixed challenge presented to the PUF, and where the response is not exposed. POKs may have lower security than CPUFs, though this has not been proven. The main advantage of POKs is that one can, in effect, store the private key of a private/public key pair on a chip in a volatile fashion: the key is only generated when the chip is powered up. Therefore, POKs are more secure than non-volatile digital keys stored on a chip. To break a POK, an adversary has to determine the voltages on a large number of internal signals buried inside the chip while the chip is running. The POK can be designed so that these signals are never steady for a long period of time.
The advantage of storing a private key is that existing public key infrastructure (PKI) can be used to distribute content in a DRM application. If one can store a private key on a chip, then knowing the corresponding public key is enough for a content provider to send encrypted content to the chip that can only be decrypted by that chip. The infrastructure required is standard PKI, which has been deployed for many applications. Existing infrastructure can thus be leveraged.
For a fixed challenge, the different PUFs that implement POKs will produce different responses across different chips. It is possible that none of these responses will correspond to private keys, which need to satisfy a particular mathematical property. However, if a particular private key is to be generated in a particular chip one can set a fuse appropriately, such that the fuse setting in combination with the PUF’s response to the fixed challenge will produce the required private key. If necessary, the same private key can be placed in multiple devices. This will help in reducing the number of differently encrypted media files that need to be created and stored. For a DRM application, this fuse programming has to be done in a secure location before the POK-equipped device is shipped to a consumer. (This is similar to the bootstrapping phase for CPUFs.)
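The fuse mechanism can be sketched as an XOR mask. The response value below is invented, and a real design would also apply error correction (Methods 2 or 3 above), since the regenerated response is noisy:

```python
import secrets

# Hypothetical 256-bit PUF response to the POK's fixed challenge; on a real
# chip this is regenerated from silicon at each power-up, never stored.
puf_response = secrets.token_bytes(32)

# The key this particular chip must hold (e.g. a suitably encoded private key).
desired_key = bytes.fromhex("9f" * 32)

# Secure facility: burn fuses so that fuses XOR response = desired key.
fuses = bytes(r ^ k for r, k in zip(puf_response, desired_key))

# In the field: the chip re-derives the key at power-up, in volatile form.
regenerated = bytes(r ^ f for r, f in zip(puf_response, fuses))
assert regenerated == desired_key
```

Because the fuses alone reveal nothing without the chip's own response, and the same `desired_key` can be programmed into many chips by burning different fuse patterns, this supports the shared-key deployments described below.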
With POKs the bootstrapping does not have to be done by the manufacturer. The manufacturer may just create the POK device with “un-burnt” fuses, and then the satellite company (say) will burn the fuses appropriately for a given POK device. However, the entity that does the burn-in will always know the key, even after the POK device is sold to a content provider or a third party.
For satellite TV access cards, one can use POKs to produce the same group key in many different access cards. This group key is typically a symmetric key and not a private key in a private/public key pair. Content encrypted using a content key is sent to the set-top box from a satellite, and in addition, the content key is encrypted using the group key (and perhaps other group keys) and sent to the set-top box. In one scenario, the encrypted content key is sent to the access card, which decrypts it and sends it back to the set-top box, which can now decrypt the content.
For a PDA application, one can imagine an enterprise carrying out the burn-in of an enterprise private key or a shared symmetric secret key into a set of PDAs that will be provided to all their employees.