I have recently read "End-to-end Design of a PUF-based Privacy Preserving Authentication Protocol" by Aysu et al. which was published at CHES 2015.
They have designed and implemented a complete protocol for authenticating lightweight devices, and they include a security proof.
I would like to start by pointing out their definition of privacy:
> A trusted server and a set of [...] deployed devices will authenticate each other where devices require anonymous authentication. [...] Within this hostile environment, the server and the devices will authenticate each other such that the privacy of the devices is preserved against the adversary.

So they are protecting the privacy of the authenticating devices against an adversary, but not against the server to which they are authenticating! If your background is in public-key crypto, or if you think of internet technologies, this might seem surprising. After all, can't we just establish a secure channel to the server, so that an eavesdropper or man-in-the-middle never learns anything?
Of course, it turns out the situation is not so simple:
- Even if you use public-key crypto you have to be very careful. In 2014 a paper by Degabriele et al. showed how to fingerprint Australian health care cards using only ciphertexts encrypted under a card-specific public key. Here is an explanation of the issue. The problem is that they used RSA, which does not offer key privacy. Key privacy is the property that, given a ciphertext and a set of public keys, it should not be possible to determine which key was used for encryption. A small sketch of this fingerprinting idea follows after this list.
- For RFID tags it is actually often assumed that public key crypto is not possible at all. It is either too expensive (in terms of hardware) or too slow. One standard for e-ticketing, for example, requires a response time of less than 300 ms [source].
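
To make the missing key privacy concrete, here is a small Python sketch. It is my own toy illustration, not the attack from the Degabriele et al. paper: an RSA ciphertext is always an integer smaller than the modulus it was produced under, so an observer who sees enough ciphertexts from the same card can rule out candidate public keys whose moduli are too small.

```python
# Toy illustration of RSA's missing key privacy (my own example, not the
# attack from the paper): a ciphertext is always an integer in [0, N), so
# any observed ciphertext c >= N_i rules out modulus N_i as the card's key.
import random

def fake_modulus(bits):
    # Stand-in for an RSA modulus of the given bit length; a real system
    # would of course use a product of two primes from a crypto library.
    return random.getrandbits(bits) | (1 << (bits - 1)) | 1

def encrypt(n):
    # Textbook RSA with exponent 3 on a fully padded (modulus-sized) message,
    # so ciphertexts are spread over the whole range [0, n).
    m = random.randrange(n)
    return pow(m, 3, n)

candidates = {"card_A": fake_modulus(2048), "card_B": fake_modulus(2046)}
true_card = "card_A"

possible = set(candidates)
for _ in range(200):
    c = encrypt(candidates[true_card])  # eavesdropped ciphertext
    # Eliminate every candidate modulus that is too small for this ciphertext.
    possible = {name for name in possible if c < candidates[name]}

print("remaining candidates:", possible)  # quickly narrows down to card_A
```

The real attack is more involved, but the underlying observation is the same: the distribution of the ciphertexts depends on the public key, so the ciphertexts alone identify the card.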
Their adversarial model reads:

> The malicious adversary can control all communication between the server and (multiple) devices. Moreover, the adversary can obtain the authentication result from both parties and any data stored in the non-volatile memory of the devices.

Of course, if you want to use symmetric crypto you need to make sure that an adversary cannot just extract the shared key from a device.
And that is what the authors are tackling in this paper, and where their adversarial model comes from!
Their solution to the problem involves another interesting invention, so-called physically unclonable functions (PUFs). These use properties of the hardware to create a function which can only be evaluated on one particular device. Usually this is done by exploiting more or less random physical properties, like the initial contents of a memory chip after it is powered on, which are then post-processed so that the outputs of different devices look random while the output of the same device is reasonably consistent.
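
To get a feel for how such post-processing can work, here is a toy Python simulation. It is my own sketch, not the construction from the paper (which uses a proper fuzzy extractor with a stronger error-correcting code): the PUF is modeled as a device-specific bit string with slightly noisy readouts, and a simple code-offset construction with a repetition code turns those readouts into a stable secret using public helper data.

```python
# Toy model of a PUF plus post-processing (my own sketch, not the paper's
# fuzzy extractor): readouts of the same device differ in a few bits, and a
# code-offset construction with a repetition code recovers a stable secret.
import secrets
import random

REP = 7          # repetition factor of the error-correcting code
KEY_BITS = 128   # length of the derived secret

def puf_readout(device_seed, noise=0.02):
    # Device-specific "silicon fingerprint": fixed per device, but each
    # readout flips roughly 2% of the bits to model measurement noise.
    rng = random.Random(device_seed)
    stable = [rng.randrange(2) for _ in range(KEY_BITS * REP)]
    return [b ^ (random.random() < noise) for b in stable]

def enroll(reference):
    # Pick a random secret, encode it with the repetition code and mask it
    # with the reference readout. The helper data can be stored publicly.
    secret = [secrets.randbelow(2) for _ in range(KEY_BITS)]
    codeword = [bit for bit in secret for _ in range(REP)]
    helper = [r ^ c for r, c in zip(reference, codeword)]
    return secret, helper

def reproduce(readout, helper):
    # Unmask with the (noisy) readout and decode by majority vote per block.
    noisy_codeword = [r ^ h for r, h in zip(readout, helper)]
    return [int(sum(noisy_codeword[i:i + REP]) > REP // 2)
            for i in range(0, len(noisy_codeword), REP)]

# Enrollment uses an averaged (noise-free) reference readout.
reference = puf_readout("device-1", noise=0.0)
secret, helper = enroll(reference)

# Later, a fresh noisy readout plus the helper data recovers the same secret.
print(reproduce(puf_readout("device-1"), helper) == secret)  # True (w.h.p.)

# A different device cannot reproduce the secret from the helper data alone.
print(reproduce(puf_readout("device-2"), helper) == secret)  # False
```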
With the help of physically unclonable functions the device and the server can derive a new shared secret for each session but the secret cannot be extracted from the device! Awesome.
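
Very roughly, an authentication session can then look like the following sketch. This is not the authors' actual protocol, and it ignores the noise handling from the previous example; the physical PUF is simulated with an HMAC keyed by a value that stands in for the device's hardware. The point is that the server only stores challenge/response pairs collected at enrollment, each session consumes a fresh challenge, and the device never keeps a long-term key in its non-volatile memory.

```python
# Very simplified sketch of PUF-based session secrets (not the protocol from
# the paper): the server keeps challenge/response pairs from enrollment and
# each session uses up a fresh challenge, so the device stores no key at all.
import hmac
import hashlib
import os

def device_puf(device_hw, challenge):
    # Stand-in for the physical PUF: in reality this mapping exists only in
    # the device's hardware and cannot be copied; here we fake it with HMAC.
    return hmac.new(device_hw, challenge, hashlib.sha256).digest()

# --- Enrollment (in a trusted environment, once per device) ----------------
device_hw = os.urandom(32)             # models the "physics" of the device
crp_store = []                         # server-side challenge/response pairs
for _ in range(3):
    ch = os.urandom(16)
    crp_store.append((ch, device_puf(device_hw, ch)))

# --- One authentication session ---------------------------------------------
challenge, expected = crp_store.pop()  # server picks an unused pair
nonce = os.urandom(16)

# Server side: session secret derived from the stored response.
server_key = hashlib.sha256(expected + nonce).digest()

# Device side: re-evaluates its PUF on the challenge, derives the same secret.
device_key = hashlib.sha256(device_puf(device_hw, challenge) + nonce).digest()

print(server_key == device_key)        # True: a fresh shared secret per session
```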