Here’s a security update to haunt your dreams, and to make the FBI’s quest for un-exploitable cryptographic backdoors look all the more absurd: a team of Israeli researchers has now shown that the sounds made by a computer’s fan can be analyzed to extract everything from usernames and passwords to full encryption keys. It’s not really a huge programming feat, as we’ll discuss below, but from a conceptual standpoint it shows how wily modern cyber attackers can be — and why the weakest link in any security system still involves the human element.
In hacking, there’s a term called “phreaking” that originally referred to phone hacking via automated touch-tone systems, but which today colloquially refers to any kind of system investigation or manipulation that uses sound as its main mechanism of action. Phone phreakers used to make free long-distance calls by playing the correct series of tones into a phone receiver — but phreaks can listen to sounds just as easily as they can produce them, often with even greater effect.
That’s because sound has the potential to get around one of the most powerful and widely used methods in high-level computer security: air-gapping, or the separation of a system from any externally connected network an attack might be able to use for entry. (The term pre-dates wireless internet, and a Wi-Fi-connected computer is not air-gapped, despite the literal gap of air around it.)
So how do you hack your way into an air-gapped computer? Use something that moves easily through the air, and which all computers are creating to one extent or another: Sound.
One favorite worry of paranoiacs is something called Van Eck phreaking, in which you capture the electromagnetic emissions of a device to derive something about what the device is doing; in extreme cases, it’s alleged that an attacker can recreate the image on the screen of a CRT monitor from its leaked radiation alone. Another, more recent side-channel victory was genuinely acoustic: researchers showed that it is possible to break RSA encryption with a full copy of the encrypted message — and an audio recording of the processor as it goes through the normal, authorized decryption process.
Note that in order to do any of this, you have to get physically close enough to your target to put a microphone within listening range. If your target system is inside CIA Headquarters, or Google X, you’re almost certainly going to need an agent on the inside to make that happen — and if you’ve got one of those available, you can probably use them to do a lot more than place microphones in places. On the other hand, once placed, this microphone’s security hole won’t be detectable in the system logs, since it’s not actually interacting with the system in any way, just hoovering up incidental leakage of information.
This new fan attack actually requires even more specialized access, since you have to not only get a mic close to the machine, but also infect the machine with fan-exploiting malware. The catch is that most security software actively looks for anything that might be unusual or harmful behavior, from sending out packets of data over the internet to making centrifuges spin up and down more quickly. Security researchers might have enough foresight to watch fan activity from a safety perspective, making sure no malware turns the fans off and melts the computer — but will they be searching for data leaks in such an out-of-the-way part of the machine? After this paper, the answer is: “You’d better hope so.”
The team used two fan speeds to represent the 1s and 0s of their code (1,000 and 1,600 RPM, respectively) and listened to the sequence of fan-whines to keep track. Their maximum “bandwidth” is about 1,200 bits an hour, or roughly 0.15 kilobytes. That might not sound like a lot, but 0.15KB of sensitive, identifying information can be crippling, especially if it’s something like a password that grants further access. You can fit about 150 eight-bit characters into that space — that’s a whole lot of passwords to lose in a single hour.
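To make the scheme concrete, here’s a minimal sketch of the encoding described above. The two RPM values come from the article; everything else (the helper names, the assumption of plain 8-bit ASCII characters, the idea that the receiver can perfectly distinguish the two speeds) is illustrative, not a reconstruction of the researchers’ actual implementation.

```python
# Hypothetical sketch of a fan-speed covert channel like the one
# described above. The RPM values (1,000 and 1,600) are from the
# article; the encoding details are assumptions for illustration.

BIT_TO_RPM = {1: 1000, 0: 1600}  # fan speeds standing in for binary digits
RPM_TO_BIT = {v: k for k, v in BIT_TO_RPM.items()}

def encode(message: str) -> list:
    """Turn an ASCII message into the sequence of fan speeds a
    compromised machine would cycle through (MSB first per byte)."""
    bits = [(byte >> i) & 1
            for byte in message.encode("ascii")
            for i in range(7, -1, -1)]
    return [BIT_TO_RPM[b] for b in bits]

def decode(rpms: list) -> str:
    """Recover the message from a recorded sequence of fan speeds,
    assuming the listener can tell the two RPM levels apart."""
    bits = [RPM_TO_BIT[r] for r in rpms]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode("ascii")

# The article's throughput math: ~1,200 bits per hour...
bits_per_hour = 1200
print(bits_per_hour // 8)         # ...is 150 bytes, i.e. ~0.15KB

print(decode(encode("hunter2")))  # a short password round-trips fine
```

At 1,200 bits an hour, the 56 bits of an eight-character password like the one above would take under three minutes to leak — slow by network standards, devastating by air-gap standards.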
There is simply no way to make any system immune to infiltration. You can limit the points of vulnerability, then supplement those points with other measures — that’s what air-gapping is, condensing the vulnerabilities down to physical access to the machine, then shoring that up with big locked metal doors, security cameras, and armed guards.
But if Iran can’t keep its nuclear program safe, and the US can’t keep its energy infrastructure safe, and Angela Merkel can’t keep her cell phone safe — how likely are the world’s law enforcement agencies to be able to ask a bunch of software companies to keep millions of diverse and security-ignorant customers safe, with one figurative hand tied behind their backs?
At the same time, this story also illustrates the laziness of the claim that the FBI can’t develop ways to hack these phones on its own — a reality that is equally distressing in its own way. The FBI has bragged that it’s getting better at such attacks “every day,” meaning that the only things protecting you from successful attacks against your phone are the research resources available to the FBI, and the access to your phone that the FBI can rely on having, for instance by seizing it.
Nobody should be campaigning to make digital security weaker, to any extent, for any reason — as this story shows, our most sensitive information is already more than vulnerable enough as it is.