Entropy in Cloud Computing Applications

Entropy, as it pertains to computer science and cryptography, is one of those topics that most of us (myself included) largely take for granted these days. In this context, entropy is a source of randomness that is typically collected by the operating system and made available to applications via a pseudorandom number generator (PRNG). We tend to implicitly trust that our applications have a source of entropy that is sufficiently random to ensure that our cryptographic techniques — SSL handshakes, SSH key generation, and the wide variety of other operations in modern public-facing applications that rely upon a pseudorandom number generator — are as strong, algorithmically speaking, as expected. But what happens when that source of entropy is not as strong as we think it is?
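The distinction between an ordinary PRNG and an OS-seeded cryptographic one is easy to demonstrate. The sketch below, in Python, shows that a deterministically seeded generator is fully reproducible by anyone who knows (or can guess) the seed, while the `secrets` module draws from the kernel's entropy pool:

```python
import random
import secrets

# A seeded Mersenne Twister is deterministic: anyone who learns or guesses
# the seed can replay every "random" value it will ever emit.
predictable = random.Random(42)
replayed = random.Random(42)
assert ([predictable.getrandbits(128) for _ in range(4)]
        == [replayed.getrandbits(128) for _ in range(4)])

# For security-sensitive values, draw from the OS entropy pool instead
# (os.urandom / the secrets module), which is seeded from hardware events.
session_token = secrets.token_hex(16)  # 128 bits from the kernel CSPRNG
print("session token:", session_token)
```

The point is not that `random.Random` is buggy; it is that its output is only as unpredictable as its seed, which is exactly the property an attacker exploits when the seed space collapses.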

An excellent case study of what happens when a source of entropy is not as random as expected can be found in the weakness that was introduced into the Debian Linux package of the OpenSSL library in 2006 and disclosed in May of 2008 (see http://www.debian.org/security/2008/dsa-1571). A small change was made to the open source OpenSSL package in order to silence warnings from Purify and Valgrind about the use of uninitialized memory as part of the build and test sequence. This minor change had a side effect: it caused the pseudorandom number generator within the OpenSSL library to be predictable, because it tightly constrained the number of possible seed values — essentially only the process ID remained as an input, allowing at most 32,768 distinct seeds. Consequently, any cryptographic keys generated using this source of entropy could be guessed within a relatively short period of time using a brute force attack over that small set of possible seed values. The issue was addressed quickly once it was discovered, but it illustrates an excellent point about entropy in software applications: a reduction in the quality of a source of entropy can be very difficult to detect if you are not specifically looking for it.
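To see why a constrained seed space is fatal, consider a hypothetical stand-in for the flawed generator (this is not the actual Debian OpenSSL code — `weak_key` and the victim PID are invented for illustration). If the only entropy reaching the PRNG is a process ID, an attacker can enumerate the entire key space in seconds:

```python
import hashlib
import random

def weak_key(pid: int) -> bytes:
    # Hypothetical stand-in for a broken generator: the only entropy
    # reaching the PRNG is the process ID, so at most 32,768 keys exist.
    rng = random.Random(pid)
    return hashlib.sha256(rng.getrandbits(256).to_bytes(32, "big")).digest()

victim_key = weak_key(31337)  # a key generated on some victim machine

# The attacker simply enumerates the entire seed space and compares.
recovered_pid = next(pid for pid in range(32768)
                     if weak_key(pid) == victim_key)
assert recovered_pid == 31337
```

In the real incident the attack was carried out by pre-generating all possible SSH and SSL keys for the affected versions, which is the same enumeration idea applied at scale.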

So what does any of this have to do with cloud computing? The current “best practice” for the collection of entropy by an operating system is to gather keyboard timings, mouse movements, network interrupts, disk drive head seeks, and other operating system events that are collectively random and can be processed to generate a stream of randomness to seed the pseudorandom number generator. This works reasonably well for a desktop or laptop that has a keyboard and a mouse and is being used interactively in an arbitrary fashion by a human. It also can be made to work for server hardware, although the rate of entropy generation is slower (and thus activities like key generation are slower) when a human with a keyboard and a mouse is not actively involved, since the technique relies more heavily on unattended events like network and disk use. And this is where a potential problem arises for entropy in cloud computing: a set of virtual machine instances running within a cloud-based virtualization service could potentially share a source of entropy from the underlying hardware. If the instances all share a single piece of underlying physical hardware, then they also all share the same set of network and disk events, and a clever attacker might therefore be able to predict the stream of entropy used by an application on one of those instances.
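On Linux you can inspect the kernel's own estimate of the entropy it has gathered, which is one quick way to spot an entropy-starved headless VM. A minimal sketch, assuming a Linux system with procfs (note that kernels 5.18 and later switched to a CSPRNG design and report a fixed value here, so a low number is mostly a symptom on older kernels):

```python
import pathlib

# The kernel exposes its entropy estimate (in bits) via procfs.
ENTROPY_AVAIL = pathlib.Path("/proc/sys/kernel/random/entropy_avail")

def entropy_estimate():
    # Returns None on platforms where the procfs counter does not exist.
    if ENTROPY_AVAIL.exists():
        return int(ENTROPY_AVAIL.read_text())
    return None

est = entropy_estimate()
print(f"kernel entropy estimate: {est} bits" if est is not None
      else "no procfs entropy counter on this platform")
```

A freshly booted, headless virtual machine that has seen little I/O is exactly the environment where this estimate tends to start low.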

There are other techniques for entropy generation (e.g., hardware entropy generators, software techniques involving samples from a microphone or webcam, and entropy services available via the internet) that can be employed to attenuate or eliminate the potential threat of shared entropy sources in cloud computing environments, and as cloud computing environments continue to mature, there will undoubtedly be advances in this area to address the issue. In the interim, however, we should all take a closer look at the use of entropy within our cloud-based applications to ensure that we haven’t introduced a “side effect” that will have serious security implications.
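A common way to combine such additional sources is to mix them through a cryptographic hash, since the result is at least as unpredictable as the strongest single input. A minimal sketch of the idea (the particular sources mixed in here are illustrative choices, not a recommendation for any specific platform):

```python
import hashlib
import os
import time

def mixed_seed(*sources: bytes) -> bytes:
    # Hash-combining sources: a weak or attacker-observed source cannot
    # reduce the unpredictability contributed by the other inputs.
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(4, "big"))  # length-prefix each input
        h.update(s)
    return h.digest()

seed = mixed_seed(
    os.urandom(32),                          # kernel CSPRNG
    time.monotonic_ns().to_bytes(8, "big"),  # high-resolution timer reading
    # samples from a hardware RNG, microphone/webcam, or a remote entropy
    # service could be mixed in here as additional, independent sources
)
assert len(seed) == 32
```

The length-prefixing matters: without it, two different lists of inputs could concatenate to the same byte stream and produce the same seed.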