
Volkswagen's lesson on encryption software

You can’t insist on achieving national security using methods that just aren’t working for any single industry.

Nadim Kobeissi
30 October 2015

1961 VW ad. Flickr/John Lloyd. Some rights reserved.

The story of Volkswagen rigging cars with custom software to fool regulators testing for harmful emissions has been all over the news, and with good reason. Quoting Jim Dwyer from the New York Times:

The cars’ software turned on the pollution-control equipment only during inspections […] The software could silently deduce that an inspection was taking place based on the position of the steering wheel (cars hooked up to emissions meters don’t make turns) […] When the test was done and the car was on the road, the pollution controls shut off automatically, apparently giving the car more pep, better fuel mileage or both, but letting it spew up to 35 times the legal limit of nitrogen oxide.

This is huge. Volkswagen is the world’s second-largest car manufacturer. Their admission to the rigging accusations didn’t come easily: it followed months of denial and coincided with the resignation of Volkswagen’s CEO. Let me repeat this: the world’s second-largest car manufacturer used custom software to give their cars, in effect, an A.I. that could trick regulators into approving cars that emit thirty-five times the legal limit of nitrogen oxide.

This story is like some weird reversal of another debate we’re having about software and regulators: the one over encryption backdoors, smartphone security, messaging security and so on. In the smartphone encryption scenario, regulators are asking the industry for backdoors as a matter of regulatory necessity, while in the Volkswagen case regulators found the industry exploiting that same backdoor logic to evade regulation.

This role reversal, with regulators suddenly on the receiving end of hidden backdoors and deceived by their supposed guarantees, is stark additional proof that shipping any system that is closed to inspection, or that implements special access controls, simply doesn’t work when you’re trying to deliver sophisticated guarantees[1]. From cars that emit thirty-five times the amount of poison gas they’re supposed to, to elevators that unexpectedly drop three stories due to a programming error, to smartphone encryption that equips law enforcement with a master decryption key: modern industry keeps proving, time and time again, that such designs run against fundamental laws of engineering that simply cannot be circumvented in good practice.

A Volkswagen backdoor in a messaging client

Let’s recap how Volkswagen’s backdoor worked so that we can see how the exact same logic could easily be transposed into backdooring a supposedly secure messaging application[2]. Volkswagen cars came with software that could figure out when the car was being tested for polluting emissions. The software deduced this from the car’s steering activity, engine running time and barometric pressure, among other factors.

  1. Volkswagen programmed the cars to enable their anti-pollution controls only when the car realized it was being tested for pollution emissions. Otherwise, the car would disable these controls so that it could achieve better performance and make the driver happier with their purchase (a rough sketch of this decision logic follows this list).
  2. Volkswagen made sure that this special software was well hidden and that regulators couldn’t inspect it, making the backdoor difficult to detect.
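
The general shape of that decision logic is easy to picture. The sketch below is purely illustrative: Volkswagen’s actual control software has never been published, and every name and number in it (isEmissionsTestLikely, steeringWheelAngle, TEST_CYCLE_DURATION and so on) is invented for this example.

// Hypothetical reconstruction of defeat-device logic; every name and number here is invented.
// Regulatory test cycles involve no steering input, run for a known duration and take place
// at lab-stable barometric pressure, so a test can be guessed from those signals.
const TEST_CYCLE_DURATION = 1800 // seconds; a made-up figure

function isEmissionsTestLikely(car) {
    return car.steeringWheelAngle() === 0 &&       // cars hooked up to emissions meters don't make turns
           car.engineRunningTime() < TEST_CYCLE_DURATION &&
           car.barometricPressureIsStable()
}

function controlLoop(car) {
    if (isEmissionsTestLikely(car)) {
        car.enablePollutionControls()    // look clean while the meters are attached
    } else {
        car.disablePollutionControls()   // more pep and better mileage, far more nitrogen oxide
    }
}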

Sound familiar yet? Here’s that exact same logic transposed into a hypothetical backdoor for encrypted communications on your smartphone:

  1. Your smartphone comes with a messaging client that can figure out, based on a list of suspicious phone numbers that it quietly fetches on a regular basis, whether you are communicating with a suspicious third party. Your smartphone is also able to detect when a forensics expert is running tests on it.[3] (A sketch of how these two checks might look follows this list.)
  2. Your smartphone is then programmed to enable your encryption and privacy settings only when you’re messaging someone who’s not on that list of suspicious numbers. Otherwise, message encryption is disabled. If the software detects that a researcher is testing your smartphone for encryption compliance, it still encrypts your messages but generates your new key material using a malicious random number generator in order to insert a surreptitious backdoor anyway.
  3. Your smartphone manufacturer makes sure that this special software is well-hidden and that researchers can’t inspect it, making this backdoor difficult to detect.
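
To make the snippet later in this post self-contained, here is one way those two checks could look. This is a hypothetical sketch, not code from any real product; the update URL, the refresh interval and the detection heuristics are all invented for illustration.

// Hypothetical support code for the snippet further below; every name and URL is invented.
let suspiciousNumbers = new Set()

// Quietly refresh the list of "suspicious" numbers in the background, once a day.
async function refreshBlacklist() {
    const res = await fetch('https://updates.example.invalid/blacklist.json')
    suspiciousNumbers = new Set(await res.json())
}
setInterval(refreshBlacklist, 24 * 60 * 60 * 1000)

function isInBlacklist(phoneNumber) {
    return suspiciousNumbers.has(phoneNumber)
}

// Crude stand-ins for "detecting a forensics expert": real spyware might look for an attached
// debugger, developer mode or known analysis tooling; here they are plain stubs.
function debuggerAttached() { return false }
function developerModeEnabled() { return false }

function forensicsDetected() {
    return debuggerAttached() || developerModeEnabled()
}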

The parallels are very real, and not just because the two schemes sound and work the same: both of them actually happened. The backdoored random number generator in my example isn’t fiction; it is drawn directly from a recent NIST-approved cryptographic standard. Meanwhile, almost all of China’s most popular messaging applications will trigger censorship and surveillance automatically if you send messages containing certain keywords.

This is why it’s critical to build software so that it can always be inspected, especially software running things we bet our lives on, like our cars and sometimes our phones: otherwise, a simple snippet like the following is all it would take to break any and all guarantees, because you could never examine it:

// Pick the source of randomness for the ephemeral key, depending on who the user is
// talking to and whether an inspection appears to be under way.
let ephPrivKey
if (isInBlacklist(phoneNumber) || forensicsDetected()) {
    ephPrivKey = DUAL_EC(32)     // backdoored generator: output predictable to whoever holds the trapdoor
} else {
    ephPrivKey = DevUrandom(32)  // honest generator: 32 random bytes from the operating system
}
const ephPubKey = DH25519(g, ephPrivKey)
// Encrypt the message under the ephemeral key pair and send it to the recipient.
sendEncryptedMessage([msg, ephPrivKey, ephPubKey], phoneNumber)

That piece of code is entirely realistic; if I were confident you couldn’t inspect my software’s code, it’s all I would need to insert into a messaging application. If I obfuscated it just a little before inserting it, I’m willing to bet it would avoid detection for a fairly long time. We already have research showing that compound Poisson distributions can be used to estimate how long it will take before a piece of software shows its bugs; why not use the same statistics to estimate how long before a backdoor, disguised as a bug, will surface?
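
To make that last point concrete, here is a toy back-of-the-envelope calculation. It is not the compound Poisson model from the research I mention, just the simplest possible Poisson assumption, and every number in it is made up.

// Toy model, much simpler than the compound Poisson models in the literature: assume
// independent reviewers each have a small chance per year of spotting the disguised flaw.
const reviewersPerYear = 20        // invented: people who seriously read this code path each year
const perReviewerFindRate = 0.02   // invented: chance that any one such reviewer spots the "bug"

const lambda = reviewersPerYear * perReviewerFindRate // expected discoveries per year

// Under a Poisson process, the waiting time to the first discovery is exponential.
const expectedYearsUndetected = 1 / lambda            // 2.5 years with these made-up numbers
const probSurvivesFiveYears = Math.exp(-lambda * 5)   // about 0.14: a 14% chance of lasting five years

console.log(expectedYearsUndetected, probSurvivesFiveYears)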

It almost feels pointless to keep buttressing this post with examples.[4] TSA-compliant luggage padlocks, which are supposed to keep your luggage safe from strangers but accessible to airport security regulators, recently had their blueprints leaked, allowing anyone with a 3D printer to print master keys and unlock everyone’s luggage. A recent publication by Collin Anderson et al. showed that a Korean government-mandated smartphone monitoring application had been vulnerable to mass compromise for three years; the application, intended for parental monitoring, ended up leaking children’s personal information to everyone in their wireless vicinity. This is seriously getting old.

Volkswagen’s emissions fraud was only possible because they had a way to ship safety-critical software that was impervious to inspection. Regulators were the ones on the receiving end this time, and they felt the sting. It doesn’t make sense, in light of such a universal set of examples, to still think that opposing backdoors is unreasonable or paranoid. National security can be your top priority, sure — but you can’t insist on obtaining it using methods that just aren’t working for any single industry.


[1]  Especially when those guarantees are meant for a general public.

[2] This same backdoor logic is already on the table in some law enforcement circles.

[3] If you think detecting a forensics expert is hard, look into machine learning.

[4] I could write a random backdoor-gone-wrong generator and it would generate examples that actually happened probably half of the time.

This article was originally published on the author's blog on 25 September 2015. Thanks go to the author for allowing us to republish it here.

