Angie McKeown


Physical Unclonable Functions

Security, Current Vulnerabilities & Future Directions

If you are after a primer on Physical Unclonable Functions, I wrote a quick paper on it… but fair warning that this is an introductory technical paper rather than my usual magazine style.

“Physical Unclonable Functions (PUFs) allow highly complex random numbers to be generated reliably from hardware substrates and then used as cryptographic keys or identifiers.” The idea is that you can build and link your encryption directly into the device hardware itself, making it harder to crack.

 

Abstract—This paper looks at the fundamentals of Physical Unclonable Functions, the current state of PUF security models and protocols, and possible future research directions.

Index Terms—Physical Unclonable Function, PUF security

I. Introduction

Physical Unclonable Functions (PUFs) allow highly complex random numbers to be generated reliably from hardware substrates and then used as cryptographic keys or identifiers.

These highly specialised systems came into common use in the early 2000s and have grown steadily more popular with both researchers and commercial vendors ever since.

They represent a significant step forward in hardware security, particularly because a successful attack compromises only that individual unit, not the whole range of that hardware model.

This paper briefly covers the fundamental concepts of PUFs, whether they are secure, some known vulnerabilities and attack models, and finally, emerging research directions and where the future is heading for PUFs.

II. Background

A Physical Unclonable Function (PUF) is based on a physical system and behaves like a ‘random’ function, in that it generates random output values. The value it produces is so complex that it is unpredictable, even for an attacker with physical access to the system, and it is irreproducible (or ‘unclonable’) on another copy of the same physical system, even when the functionality of the algorithm itself is known.

There are many types: silicon-based PUFs can be based on memory properties, delays inherent in circuits, or analog electronics. Other physical systems can also be used, such as the random surface structure of paper or other materials, acoustic or magnetic properties, or the optical refractions of special coatings.

PUFs are especially useful as they allow integrated circuits (ICs), and the many minuscule variations inherent in their manufacture, to be used for unique cryptographic key generation [Fig1]. This output can then be integrated directly into computational operations. It also means they can provide affordable security for low-power ‘Internet of Things’ devices by generating a complex secret key intrinsic to the device itself, avoiding the explicit on-board storage of critical security information required by a traditional ‘secret key’.

Fig1. Using SRAM to generate a cryptographic key.

PUFs allow for the authentication of components in high-security scenarios via a challenge-response mechanism, where the physical probing of the structure constitutes the ‘challenge’ and the reaction generated by the PUF is the expected complex ‘response’. There may be many of these Challenge-Response Pairs (CRP) for one PUF.

C_i => [PUF] => R_i

It is this ability to generate reproducible yet unique true randomness that makes PUFs useful for cryptographic purposes.
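
To make the flow concrete, here is a minimal sketch in Python of CRP-based authentication. Everything in it is a hypothetical stand-in: puf_query represents physically challenging the device, and the table size and noise tolerance are illustrative values rather than parameters from any particular protocol.

```python
# Minimal sketch of CRP-based authentication, assuming a hypothetical
# puf_query(challenge) callable that stands in for physically challenging
# the device. Table size and noise tolerance are illustrative only.
import secrets

def hamming_distance(a: bytes, b: bytes) -> int:
    """Count the differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

class Verifier:
    def __init__(self, noise_tolerance_bits: int = 10):
        self.crp_table = {}                  # challenge -> expected response
        self.noise_tolerance = noise_tolerance_bits

    def enrol(self, puf_query, num_crps: int = 1000, challenge_len: int = 16):
        """Record CRPs while the device is still in a trusted environment."""
        for _ in range(num_crps):
            challenge = secrets.token_bytes(challenge_len)
            self.crp_table[challenge] = puf_query(challenge)

    def authenticate(self, puf_query) -> bool:
        """Spend one stored CRP; accept the device if the response is close enough."""
        challenge, expected = self.crp_table.popitem()
        response = puf_query(challenge)
        return hamming_distance(response, expected) <= self.noise_tolerance
```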

III. Common PUF Examples

There are many types of PUF and new ones are still being created, but some commonly used electronic examples include:

Fig2. The core of an SRAM cell with two CMOS inverters, cross-coupled.

The Static Random Access Memory PUF (SRAM PUF) uses the power-on values of SRAM cells [Fig 2].
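
As a small illustration of how such noisy power-on values might be stabilised in software, the sketch below assumes a hypothetical read_powerup_sram() routine that returns the raw power-on bits and combines several power cycles with a per-bit majority vote.

```python
# Illustrative only: stabilising a noisy SRAM power-up fingerprint by majority
# voting per bit across several power cycles. read_powerup_sram() is a
# hypothetical stand-in for reading the raw power-on contents of the array.
from typing import Callable, List

def sram_fingerprint(read_powerup_sram: Callable[[], List[int]],
                     power_cycles: int = 5) -> List[int]:
    readings = [read_powerup_sram() for _ in range(power_cycles)]
    n = len(readings)
    # A bit is 1 in the fingerprint if it powered up as 1 in most readings.
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*readings)]
```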

Fig3. CMOS Arbiter PUF

The Arbiter PUF (APUF) exploits circuit delays to create a race condition. The two delay paths are symmetrical, and an input signal is introduced into both simultaneously. The signals then race through the circuit, and whichever reaches the flip-flop at the end first determines the output bit [Fig3].
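
Arbiter PUF behaviour is commonly described with an additive linear delay model. The sketch below simulates one such model; the randomly drawn weights stand in for manufacturing variation, so this is a toy model rather than a real device.

```python
# Simulation sketch of the standard additive delay model of an arbiter PUF:
# the challenge configures the paths, each stage adds a delay difference,
# and the sign of the total difference decides which signal wins the race.
import random

class SimulatedArbiterPUF:
    def __init__(self, num_stages=64, seed=None):
        rng = random.Random(seed)
        # One delay-difference weight per stage, plus one for the arbiter itself,
        # drawn at random to mimic manufacturing variation.
        self.weights = [rng.gauss(0.0, 1.0) for _ in range(num_stages + 1)]

    def response(self, challenge):
        # Standard +/-1 "parity" transform of the challenge bits.
        phi = []
        for i in range(len(challenge)):
            prod = 1
            for c in challenge[i:]:
                prod *= 1 - 2 * c          # bit 0 -> +1, bit 1 -> -1
            phi.append(prod)
        phi.append(1)
        delay_difference = sum(w * x for w, x in zip(self.weights, phi))
        return 1 if delay_difference > 0 else 0
```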

Fig4. Ring Oscillator PUF

The Ring Oscillator PUF (ROPUF) again features a circuit with many inbuilt delay loops, each of which oscillates at its own frequency according to minor manufacturing variations. The loops in turn drive counters [Fig 4]. ROPUFs are better suited to FPGA ICs, as they lack the symmetry requirements of APUFs.
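
One common way of turning the counter values into response bits is to compare oscillators in pairs, with each comparison yielding one bit. The short sketch below illustrates the idea with made-up frequency values.

```python
# Sketch of deriving response bits from ring-oscillator counter values by
# comparing disjoint pairs of oscillators; each comparison yields one bit.
def ro_puf_bits(frequencies):
    bits = []
    for i in range(0, len(frequencies) - 1, 2):
        bits.append(1 if frequencies[i] > frequencies[i + 1] else 0)
    return bits

# Example with eight made-up frequencies (MHz): yields a 4-bit response.
print(ro_puf_bits([101.2, 99.8, 100.4, 100.9, 98.7, 99.1, 100.0, 99.9]))
```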

IV. PUF Applications

PUFs are used in random number generators, remote attestation, device authentication, and protecting intellectual property [1] [2].

They can even be used for counterfeit protection, as an IC PUF can provide a binary string identifier intrinsic to the device’s construction and extremely difficult to spoof or clone. This also allows a device to be identified, and its integrity verified, even after the system has been built and shipped.

Embedded PUF IDs in sensors may also have digital forensics applications. Active Hardware Metering allows chip designers to have devices manufactured but only activated later, preventing unauthorised parties from producing knock-offs and lapsed customers from accessing services, a significant step forward against hardware piracy.

New Public PUFs (PPUFs) can now outperform classical public-key cryptography owing to their simpler, computationally cheaper designs, some executing in less than one clock cycle, and may also be less susceptible to side-channel attacks [3].

Potkonjak and Goudar ably demonstrate the advantages and drawbacks of many PPUF designs, showing that even with low gate counts they can be highly secure, very low-energy options for public-key cryptography [4]. PPUFs are also useful for location authentication [5].

V. State of PUF Security

Despite their initial promise, PUFs are not infallible. In fact, although they are called unclonable, modelling and cloning attacks have been carried out successfully against some PUFs, and others have reliability issues under environmental pressures.

Stability and Reliability

It is important for PUF behaviour to be stable in a wide range of environmental operating conditions. Noise (or cell instability) can be caused by sensitivity to thermal variation, power supply voltage variations, electromagnetic interference and the effects of aging.

SRAM PUFs generally tolerate unusual temperature and voltage values well, and these effects, along with silicon aging, have now been reasonably well researched and can be counteracted with artificial anti-aging measures during manufacture [6].

However, due to limitations of traditional substrates, PUFs are also inherently noisy and not uniformly random. For example, an SRAM PUF response requires some post-processing in order to be used reliably for key generation. This compensates for the noise of the PUF and derives the same cryptographic key each time it is required. A wide range of environmental conditions over time may contribute to noise, especially as the silicon ages.

In most applications the SRAM PUF noise will be lower than 10%, and in worst-case scenarios it might rise to 18%. Furthermore, most of this noise comes from a minority of SRAM cells that flip frequently. Hence, from a reliability point of view, SRAM PUFs are very well suited as a secure storage medium for a cryptographic key [6].

When used for key generation, therefore, mechanisms are required to correct for noise and extract randomness from the PUF response. ‘Fuzzy extractors’ are used to post-process the PUF data [7] before use, ensuring the same cryptographic key can be reliably extracted each time.
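
To show the enrol/reconstruct pattern this implies, here is a toy code-offset fuzzy extractor built on a simple repetition code. Real designs use stronger error-correcting codes (e.g. BCH) and a proper randomness extractor, so this is purely illustrative.

```python
# Toy code-offset fuzzy extractor: enrolment hides a random key under the PUF
# response and publishes non-secret "helper data"; reconstruction removes the
# noise with a repetition code and re-derives the same key. Illustrative only.
import hashlib
import secrets

REP = 5  # each key bit is repeated 5 times; corrects up to 2 flipped bits per group

def _encode(bits):
    return [b for b in bits for _ in range(REP)]

def _decode(bits):
    return [1 if sum(bits[i:i + REP]) * 2 > REP else 0
            for i in range(0, len(bits), REP)]

def enrol(puf_bits):
    """Run once, in a trusted setting: returns (key, public helper data)."""
    key_bits = [secrets.randbelow(2) for _ in range(len(puf_bits) // REP)]
    helper = [p ^ c for p, c in zip(puf_bits, _encode(key_bits))]
    return hashlib.sha256(bytes(key_bits)).digest(), helper

def reconstruct(noisy_puf_bits, helper):
    """Run in the field: recovers the same key from a noisy PUF re-reading."""
    key_bits = _decode([p ^ h for p, h in zip(noisy_puf_bits, helper)])
    return hashlib.sha256(bytes(key_bits)).digest()
```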

There are various ways to test new PUF designs, including measuring the Hamming distance between response bit strings to test uniqueness, and testing against the NIST “Statistical Test Suite for Random and Pseudorandom Number Generators” [8], although there are questions in parts of the PUF community about whether it is really suitable for all PUF scenarios, as the actual distribution of CRPs is unknown [9].
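
The uniqueness check can be sketched as follows: the fractional Hamming distance between responses of different devices to the same challenge should sit close to 50%, while repeated readings from the same device should stay close to 0%.

```python
# Sketch of the uniqueness metric: average fractional Hamming distance over all
# pairs of devices, computed on their responses to the same challenge.
from itertools import combinations

def fractional_hd(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

def mean_inter_chip_hd(responses):
    """responses: one response bit-list per device, all to the same challenge."""
    pairs = list(combinations(responses, 2))
    return sum(fractional_hd(a, b) for a, b in pairs) / len(pairs)

# Ideal PUFs give a value near 0.5 here (and near 0.0 for repeated readings
# of a single device, the corresponding reliability metric).
```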

Weak and Strong PUFs

Bauer and Hamlet’s excellent 2014 Primer [1] provides a very accessible overview of PUF security and its inherent challenges.

PUFs are categorised by the number of CRPs available, with those having only one or a few CRPs classed as Weak – typically SRAM, butterfly and coating PUFs [5]. Weak PUFs can still be used for secret key generation, secure booting, or as the seeds of True Random Number Generators.

PUFs with a very large CRP space are termed Strong PUFs, and can be used for applications such as low-cost authentication, RFID and key exchange, with each CRP used only once. Typical examples are optical PUFs and XOR Arbiter PUFs [5].

In addition to this, PUFs confer some tamper-resistance advantages, in that the PUF is only queryable when the device is powered on [10], and damage to the substrate will change the response, rendering any future results invalid. The system is also stateless, in that no data is stored between queries, further mitigating tampering attacks.

Attack Models

It is important to understand that attackers may have physical access to the PUF, including power cycles, signal timing, radiation and other on-board information. Simple encryption may not be enough, and it is important to test for robustness with these factors in mind.

A growing body of research defines security models for the Good PUF, the Bad PUF and PUF re-use [11]:

  • In the Good PUF model it is assumed that the PUF experiences only one isolated protocol execution and that the hardware is faithfully generated and never manipulated. While this ideal is theoretically possible, it seems inadvisable to use it as the norm when testing for security.
  • The PUF re-use model assumes adversaries will have uncovered a way to carry out CRP measurement on the PUF and will be able to re-use previous Challenge-Response Pairs. CL-PUFs may exhibit these features in the wild via a non-volatile memory module which logs the CRPs for later retrieval [11].
  • In the Bad PUF model, adversaries are assumed to have manipulated the hardware so that the PUF has extra properties which allow cheating within the model. A Bad PUF will, however, look and function exactly like a Good PUF, and may or may not be detectable upon opening the device, at which point the PUF would be destroyed anyway.

Testing against these models rather than a Good PUF ideal allows for more robust and realistic theoretical attack scenarios.

Research testing PUF protocols for Oblivious Transfer (OT), Bit Commitment (BC), and Key Exchange (KE) found that although they are secure in a Good PUF model they all fail under these new, more robust attack models [11], illustrating the importance of considering practical rather than theoretical security tests.

Two suggested countermeasures are erasable and certifiable PUFs [12]: with the former, individual responses can be erased from the PUF; with the latter, a PUF can be certified as Good by authenticating against an online source. Both are relatively onerous to implement, however.

Machine Learning (ML) attacks against Strong PUFs are published ever more frequently, primarily because these PUFs expose very large collections of CRPs, often through an easily accessible interface. What were once considered unpredictable and unclonable functions are now toppling under the sheer brute-force capabilities of ML. If a numerical model of the PUF can be derived, an ML algorithm is trained on a subset of known CRPs, obtained by eavesdropping or direct access, and then used to predict the responses to further challenges.
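
The sketch below illustrates such a modelling attack against a simulated arbiter PUF (the same additive delay model sketched in Section III), using scikit-learn’s logistic regression as the machine learner. The training-set sizes are arbitrary, and the target is a software model rather than real hardware.

```python
# Modelling-attack sketch: train a logistic regression on eavesdropped CRPs
# from a *simulated* arbiter PUF and predict responses to unseen challenges.
# Accuracy well above 50% means the function has effectively been cloned.
import random
from sklearn.linear_model import LogisticRegression

def parity_features(challenge):
    """The +/-1 parity transform that makes the arbiter delay model linear."""
    phi = []
    for i in range(len(challenge)):
        prod = 1
        for c in challenge[i:]:
            prod *= 1 - 2 * c
        phi.append(prod)
    phi.append(1)
    return phi

rng = random.Random(1)
weights = [rng.gauss(0.0, 1.0) for _ in range(65)]     # the "device" being attacked

def puf_response(challenge):
    return 1 if sum(w * x for w, x in zip(weights, parity_features(challenge))) > 0 else 0

# "Eavesdropped" CRPs: 5000 for training the attacker's model, 1000 held out.
challenges = [[rng.randint(0, 1) for _ in range(64)] for _ in range(6000)]
X = [parity_features(c) for c in challenges]
y = [puf_response(c) for c in challenges]

attacker = LogisticRegression(max_iter=2000).fit(X[:5000], y[:5000])
print("accuracy on unseen challenges:", attacker.score(X[5000:], y[5000:]))
```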

Newer reconfigurable PUFs (RPUFs), for which the CRP mapping is changeable either periodically or after a certain number of CRPs have been evaluated, are designed to render any compromised CRPs useless, thus defeating machine learning attacks [9]. Complex XOR APUFs have also been found effective against current ML attacks, but come at some performance cost.

Weak PUFs such as SRAM PUFs, by contrast, tend more often to be victims of cloning attacks, as they have less complicated internals [13].

Side-channel attacks exploit measurable physical properties of the system. For example, a successful SRAM PUF exploit obtained the power-on state via the near-infra-red photonic emissions of the circuit, and then built an identical clone using focussed ion beam (FIB) circuit editing to produce exactly the same start-up outputs [14]. This editing technique can also be used against delay-based circuits such as ROPUFs. The use of fuzzy extractors may also increase the attack surface for cloning and side-channel attacks [1].

Combined power-tracing and modelling attacks have been shown to be very effective even against strong XOR APUFs; however, countermeasures can be as simple as using two symmetric, inverted output signals with two latches [13], showing there is clearly much value in continued research and many valuable discoveries yet to be made.

VI. Future Directions

Recent research in polysilicon and other areas of materials science has shown that gains can be made in PUF reliability [15]. These techniques allow the selective enhancement of randomness in the material without any loss of yield, improving PUF performance without impacting the rest of the device. Improving reliability also reduces the need for fuzzy extractors, thereby reducing the attack surface for modelling and side-channel attacks.

Research has also shown significant gains in noise reduction can be made by intelligent matching of voltage ramp-up time to ambient temperature [16].

It is yet to be seen what effect aging will have on these materials, but advancements in materials science will likely continue to deliver increases in PUF performance and reliability for the foreseeable future.

New PUF designs continue to be invented. A new Carbon NanoTube PUF (CNT PUF) design based on the Lorenz chaotic system claims to suffer only 55% of the usual ML-attack bit-wise prediction rate, owing to the strong de-correlation between challenge and response [17].

New quantum mechanical PUFs derive their security from the Heisenberg uncertainty principle, and therefore should be impossible to simulate in a reasonable timescale [5]; however, they will be harder to integrate into existing technologies as they use quantum states for their challenge and response.

VII. Conclusion

In this paper we looked at how PUFs work, how secure they were initially assumed to be, and how many vulnerabilities were later discovered by researchers. New advancements in materials science are improving their performance and reliability all the time, and the prospect of quantum computing and nanomaterials should improve the situation even further, but ultimately anything which needs to interface with the rest of the world may end up introducing vulnerabilities.

Advances in machine learning continue to break down the security barriers that a large CRP space provides, but new nanotechnology substrates offer interesting material advantages, and new PUF designs continue to be invented which frustrate attackers.

In general usage, PUFs remain valuable and strong hardware security mechanisms, and with the growing use of sensor networks and digital forensics they will only become more important.

For now, many of the nanoelectronic designs remain lab-based. It will be some time before the new nanotechnology-based PUF models can be robustly tested, as few outside high-tech settings have the necessary equipment or the ability to integrate them into the supply chain. This leaves the problem in the realm of theorists and state actors, so it may be some time before we know their true flaws under load and in the wild.

VIII. References

[1] J. Hamlet and T. Bauer, “Physical Unclonable Functions: A Primer,” IEEE Security & Privacy, vol. 12, no. 6, pp. 97-101, 2014.
[2] M. N. Aman, K. C. Chua and B. Sikdar, “Mutual Authentication in IoT Systems Using Physical Unclonable Functions,” IEEE Internet Of Things Journal, vol. 4, no. 5, pp. 1327-1340, 2017.
[3] A. J. Menezes, S. A. Vanstone and P. C. van Oorschot, Handbook of Applied Cryptography, CRC Press, 2001.
[4] M. Potkonjak and V. Goudar, “Public Physical Unclonable Functions,” Proceedings of the IEEE, vol. 102, no. 8, pp. 1142-1156, 2014.
[5] Y. Gao, D. C. Ranasinghe, S. F. Al-Sarawi, O. Kavehei and D. Abbott, “Emerging Physical Unclonable Functions With Nanotechnology,” IEEE Access, vol. 4, pp. 61-80, 2016.
[6] Intrinsic ID, “The Reliability of SRAM PUF,” 2017.
[7] Y. Dodis, R. Ostrovsky, L. Reyzin and A. Smith, “Fuzzy extractors: How to generate strong keys from biometrics and other noisy data,” SIAM Journal on Computing, vol. 38, no. 1, pp. 97-139, 2008 (earlier version in Eurocrypt 2004).
[8] NIST Computer Security Resource Center, “A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications,” April 2010. [Online]. Available: https://csrc.nist.gov/publications/detail/sp/800-22/rev-1a/final.
[9] C. Chang, Y. Zheng and L. Zhang, “A Retrospective and a Look Forward: Fifteen Years of Physical Unclonable Function Advancement,” IEEE Circuits and Systems Magazine, vol. 17, no. 3, pp. 32-62, 2017.
[10] C. Herder, L. Ren, M. v. Dijk, M.-D. Yu and S. Devadas, “Trapdoor Computational Fuzzy Extractors and Stateless Cryptographically-Secure Physical Unclonable Functions,” IEEE Transactions on Dependable and Secure Computing, vol. 14, no. 1, pp. 65-82, 2017.
[11] U. Ruhrmair and M. v. Dijk, “PUFs in Security Protocols: Attack Models and Security Evaluations,” in IEEE Symposium on Security and Privacy, 2013.
[12] M. v. Dijk and U. Ruhrmair, “Protocol Attacks on Advanced PUF Protocols and Countermeasures,” in Design, Automation and Test in Europe Conference and Exhibition, Dresden, Germany, 2014.
[13] A. Mahmoud, U. Rührmair, M. Majzoobi and F. Koushanfar, “Combined Modeling and Side Channel Attacks on Strong PUFs,” Cryptology ePrint Archive, vol. 632, 2013.
[14] C. Helfmeier, C. Boit, D. Nedospasov and J.-P. Seifert, “Cloning Physically Unclonable Functions,” in IEEE International Symposium Hardware-Oriented Security and Trust, Austin, Texas, 2013.
[15] H. Shen, F. Rahman, B. Shakya, X. Xu, M. Tehranipoor and D. Forte, “Poly-Si-Based Physical Unclonable Functions,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 25, no. 11, pp. 3207-3217, 2017.
[16] M. Cortez, S. Hamdioui, V. v. d. Leest, R. Maes and G.-J. Schrijen, “Noise Reduction on Memory-based PUFs,” in Hardware Oriented Security and Trust, Austin, Texas, 2013.
[17] L. Liu, H. Huang and S. Hu, “Lorenz Chaotic System Based Carbon Nanotube Physical Unclonable Functions,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018.
[18] G. E. Suh and S. Devadas, “Physical unclonable functions for device authentication and secret key generation,” in Proceedings of the 44th ACM/IEEE Design Automation Conference, 2007.

 

IX. Figures

[Fig1] Generating a unique key from an SRAM PUF, from “Quiddikey” by Intrinsic ID – Available: https://www.intrinsic-id.com/products/quiddikey/

[Fig2] SRAM Core from “The Reliability of SRAM PUF” white paper, 2017, Intrinsic ID – Available: https://www.intrinsic-id.com/wp-content/uploads/2017/08/White-Paper-The-reliability-of-SRAM-PUF.pdf

[Fig3] CMOS Arbiter PUF from “A strong arbiter PUF using resistive RAM” – Available: https://ieeexplore.ieee.org/document/7818358/

[Fig4] Ring Oscillator PUF from “Ring Oscillator PUF Design and Results” paper – Available: http://class.ece.iastate.edu/cpre583/project_presentations/PUFs_report.pdf
