There was a time when the U.S. government thought it could control encryption, or rather, it thought it could stop strong encryption. Beyond a certain key length, government agencies considered distribution of encryption software a violation of federal weapons export laws and attempted to prosecute Philip Zimmermann, author of the famous PGP (Pretty Good Privacy) software, under that theory.
The rationale behind the prosecution was that strong encryption is too hard to crack. To quote one U.S. intelligence official at the time, the “ability of just about everybody to encrypt their messages is rapidly outrunning our ability to decode them.”
The government eventually dropped its prosecution of Zimmermann, along with the notion that it could put the encryption toothpaste back in the tube. But the unnamed intelligence official was right. As a practical matter, modern encryption is so hard to crack that it's generally not worth the effort.
So you don’t crack it. You work around it. Effective encryption presupposes a certain level of security on the part of the people concealing the data: Private keys need to be kept secure, unencrypted copies of the data must not be left accessible, and keys should not be reused or easily guessed (e.g., using “password” as the password). Sloppy practice by data security personnel can, and often does, allow clever hackers to gain access to the data without actually defeating the encryption algorithms.
A recent academic paper, “Encryption Workarounds,” explores this principle thoroughly. The authors are two of my favorite writers: Orin S. Kerr, a professor at the University of Southern California Gould School of Law and famed blogger on Fourth Amendment issues, and Bruce Schneier, adjunct lecturer in public policy at Harvard University’s Berkman Klein Center for Internet and Society and probably the most famous cryptographer around.
Kerr and Schneier define the six basic encryption workarounds:
- Find the key
- Guess the key
- Compel the key
- Exploit a flaw in the encryption scheme
- Access the plaintext when the device is in use
- Locate a plaintext copy
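The second workaround, guessing the key, is often practical precisely because of the sloppy password practice described above. As a minimal illustrative sketch (the password list, salt, and iteration count here are invented for the example), a dictionary attack against a key derived from a weak password might look like this:

```python
import hashlib

# Hypothetical scenario: data was encrypted with a key derived from a
# user-chosen password via PBKDF2. The user chose a weak password.
def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = b"example-salt"                       # assumed known to the attacker
target_key = derive_key("password", salt)    # the weak key being attacked

# "Guess the key": try a small dictionary of common passwords.
dictionary = ["123456", "letmein", "qwerty", "password", "hunter2"]

recovered = None
for guess in dictionary:
    if derive_key(guess, salt) == target_key:
        recovered = guess
        break

print(recovered)  # -> password
```

The point is not the code but the economics: no amount of key length helps when the key itself sits in a five-entry dictionary.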
The paper then identifies some lessons, starting with the obvious one: No workaround works all the time, but they all work some of the time. It then considers the often-ambiguous legal status of workarounds.
The authors are specifically concerned with government action, generally when law enforcement wants to gain access to a plaintext version of content that it has (presumably) legally seized. One famous example followed the December 2015 terrorist attack in San Bernardino, California, after which the FBI seized an iPhone 5C used by one of the attackers. Its inability to crack the phone’s encryption led to a hot dispute with Apple, discussed in more detail below.
Whether a person is obliged to provide a password or other key to law enforcement in such cases is an unresolved question. But from the standpoint of defending our own encrypted systems against attack, the same workarounds are worth considering.