Punk is resistance. During the 1980s and ’90s, the subculture was resistance of a special kind: heavy on fashion and light on politics. Punk generated eccentric hairstyles, tattoos, boots and leather outfits, drug habits, and hard-core music that oozed being against stuff. Yet fashion trumped direct action. Punk was aesthetic anarchy.

When computers and networks were added to the mix, cyberpunk was born. The 1990s were a time of extraordinary hope. The decade came barging right through Brandenburg Gate, with the Berlin Wall crashing down in the background. The end of the Cold War and the peaceful collapse of the Soviet Union released an intoxicating sense of optimism, at least in the West. Washington debated the “end of history,” with liberal market economies coming out triumphant. In the Persian Gulf War of 1991, perhaps America’s shortest and most successful ground war operation to date, the Pentagon overcame the mighty Iraqi army — and with it the lingering Vietnam hangover. Silicon Valley and America’s technology startup scene, still bathing in the crisp utopian afterglow of the 1980s, watched the rise of the New Economy, with vertigo-inducing growth rates. Entrepreneurs rubbed their hands in anticipation. Intellectuals were inebriated by the simultaneous emergence of two revolutionary forces: personal computers and the internet. More and more PC owners connected their machines to the fast-growing global computer network, first with clunky, screeching modems, then with faster and faster broadband connections.
But amid the hype and a slowly but steadily growing economic bubble, it dawned on a number of users that something was missing: privacy and secure communications. History, thankfully, was gracious. Even more than that: nature itself was generous to humans in front of plastic keyboards. Unrelated to either PCs or the internet, cryptographers had made a third and no less far-reaching discovery in the 1970s. They didn’t just invent a technology; more like explorers than innovators, they discovered an algorithm based on a beautiful mathematical truth. That truly revolutionary technology was finally unleashed for widespread public use in June 1991: asymmetric encryption, also known as public-key cryptography.
When free crypto was added to the computer underground, “crypto anarchy” emerged. Now people with mirror shades, modems, and PCs could be against stuff. And even better, despite the decade’s spirit of unrestrained optimism, they had found something concrete to be against: the government’s attempts to regulate ciphers. And so cypherpunk was born, a pun on “cyberpunk.” The ideology was powerful — far more powerful and durable than those whimsical and short-lived names implied.
Cryptography is the art of secret communication. Diplomats and military commanders began using secret keys to encrypt their missives thousands of years ago, long before the invention of computers or even the telegraph. To establish secret communication, participants must first have the secret key. Thus arises the problem of key distribution — how to share a secret key with all participants of a secure conversation before the conversation starts. For centuries, key distribution gave large organizations a big advantage. The more resourceful a state’s military and intelligence establishment, the more easily it could manage the logistics of key distribution.
Perhaps the single most significant invention in the history of cryptography came to be in 1973: public-key encryption, or “nonsecret” encryption, as its inventors called it. It is probably the only mathematical algorithm that spawned its own political philosophy. Ironically, “nonsecret” encryption was first discovered in secret at the British eavesdropping agency Government Communications Headquarters (GCHQ). And it was kept secret for many years.
Public-key encryption was revolutionary for a simple reason. It solved the age-old security problem of key distribution. Sharing a secret key had previously required a secure communication channel. If Alice wanted to send Bob a secret message, she would first need to share the secret key with him. But a secret could not be shared on an insecure channel. Suppose Alice sent Bob a letter containing the secret key and asking him to use it to scramble their subsequent correspondence — say, by replacing every letter with a specified alternative letter. Eve (cryptographers like to call the supposed evil eavesdropper “Eve”) could simply intercept the letter and make a copy of Alice’s secret key en route to Bob. Eve would then be able to read all future messages encrypted with this key.
By the 1960s, the British military had started worrying. Tactical radio had become more widespread, along with computers and telecommunication technology, making the problem of key distribution worse. “The management of vast quantities of key material needed for secure communication was a headache for the armed forces,” recalled one of the British government’s leading cryptographers at the time, James Ellis. Ellis first believed, as was generally assumed then, that no secret communication was possible without a secret key first being shared. His view changed with the chance discovery of a World War II report, “Final Report on Project C-43,” by a Bell technician, Walter Koenig, prepared under a National Defense Research Committee (NDRC) contract.
Back in October 1944, Koenig had suggested a theoretical way of securing a telephone call by having the recipient of a call add noise to the signal and then subtract it afterward. Only Bob could subtract the noise, because only he knew what he had added in the first place. An eavesdropper, Eve, simply would not know how to modify the noise, because she wouldn’t have access to the noise that had been added to the phone conversation in the first place.
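Koenig’s idea can be caricatured in a few lines of code. This is only an illustrative sketch, not his actual telephone system: integers stand in for signal samples, and the “noise” is a list of random numbers that only Bob, the recipient, keeps.

```python
import random

random.seed(7)  # fixed seed so the example is deterministic

signal = [3, 1, 4, 1, 5, 9]                       # Alice's "speech" samples
noise = [random.randrange(1000) for _ in signal]  # generated by Bob, the recipient
masked = [s + k for s, k in zip(signal, noise)]   # what Eve overhears on the line

# Only Bob can subtract the noise, because only he knows what he added.
recovered = [m - k for m, k in zip(masked, noise)]
assert recovered == signal
```

The point Ellis took from the report is visible even in this toy: the secrecy comes from something the recipient contributes, not from a key the sender had to share in advance.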
The system was impractical at the time. But Ellis got the decisive and entirely counterintuitive cue: there was no need to assume that only the sender could modify the message; the recipient could have a role as well. “The noise which had been added,” Ellis wrote in 1970, “had been generated by the recipient and is not known to the sender or anyone else.” The recipient, therefore, “takes an active part in the encipherment process.” In theory, at least, Ellis seemed close to solving the age-old key distribution problem.
Now the secret British cryptographers needed to find a mathematical way to enable the recipient to take part in ordinary encryption and decryption. “The unthinkable was actually possible,” Ellis recalled. But because he was not a mathematician, he could not solve the underlying challenge of finding a suitable one-way function, a mathematical operation that could be performed in only one direction — something that could be done but not undone.
Three years after Ellis’s thought experiment, in 1973, a young Cambridge mathematician, Clifford Cocks, joined the spy agency in Cheltenham. Six weeks into his job, a supervisor casually told Cocks about Ellis’s “really whacky idea.” Cocks understood that finding a suitable one-way function had been the problem. The 22-year-old had worked on number theory before, and the problem of factoring — finding how a number could be divided into other numbers — was familiar to him. “If you wanted a function that couldn’t be inverted,” he remembered, “it seemed very natural to me to think of the concept of multiplying quite large prime numbers together.”
Multiplying two large primes is easy, even if they are more than a hundred digits long. Recovering the two primes from the much larger product is hard — very hard. It took the freshly recruited spy about thirty minutes to come up with this prime solution. “From start to finish, it took me no more than half an hour. I was quite pleased with myself. I thought, ‘Ooh, that’s nice. I’ve been given a problem, and I’ve solved it.’ ”
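The asymmetry Cocks seized on is easy to demonstrate. A toy sketch, with primes far smaller than the hundred-digit ones real systems use: the forward direction is a single multiplication, while even this naive trial-division search must grind through tens of thousands of candidates.

```python
p, q = 104729, 1299709          # two primes (toy-sized for illustration)
n = p * q                       # the easy direction: one multiplication

def factor(n):
    """Naive trial division; its cost grows with the smaller prime factor."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1                 # n itself is prime

assert factor(n) == (p, q)
```

For hundred-digit primes the product is computed just as quickly, but no known classical algorithm can reverse the step in any useful amount of time.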
Cocks didn’t grasp the implications of what he had just done. But soon colleagues started approaching the wunderkind from Cheltenham in admiration. The young mathematician’s discovery seemed immediately applicable to military communications, and it would become one of GCHQ’s most prized secrets. The only person Cocks could tell was Gill, his wife, who also worked for the spy agency. GCHQ called its discovery “nonsecret encryption.”
But there was a problem. In the mid-1970s, room-sized mainframe computers were not yet sufficiently powerful to crunch large primes into a secure one-way function fast enough. Neither GCHQ nor the National Security Agency (NSA) turned the theoretical possibility of nonsecret encryption into a practical algorithm or crypto product that could actually be used to secure communications. Computers, ironically, were one of the main reasons why cryptographers in the shadows neglected the magic of public keys — and why those in the open discovered this magic.
Meanwhile, a few public academics kept working hard on solving the puzzle of how to exchange a shared secret on a nonprivate channel. Unsurprisingly, the breakthrough happened in the San Francisco Bay Area of the mid-1970s, with its inspiring mix of counterculture and tech entrepreneurship. These pioneers were Whitfield Diffie and Martin Hellman of Stanford University, and Ralph Merkle of UC Berkeley. Their discovery resembled what the British spy agency had already found in secret.
In November 1976, a history-changing article appeared in an obscure journal, IEEE Transactions on Information Theory. It was titled “New Directions in Cryptography.” Diffie and Hellman knew that computers would be coming to the people, as Stewart Brand had just reported from the mesmerized Spacewar players on their own campus. And they knew that these computers would be networked. “The development of computer controlled communication networks promises effortless and inexpensive contact between people or computers on opposite sides of the world,” they wrote in the introduction to their landmark paper. Computer networks, the two cryptographers believed, would be “replacing most mail and many excursions.” Going digital posed a new security problem.
A good old-fashioned paper contract could be signed, sealed, and mailed reasonably securely. Anyone could easily recognize a handwritten signature as authentic, but no one other than the legitimate signer could easily produce it. “This paper instrument,” they wrote, needed to be digitally reproduced. The task was hard. The simple paper system didn’t just work; it worked on a very large scale, and it worked cheaply. Their answer was a public-key cryptosystem, in which “enciphering and deciphering are governed by distinct keys.”
Diffie and Hellman had now suggested a theoretical solution, but much like Ellis at GCHQ four years earlier, they had not found a practical mathematical function that actually implemented this cunning scheme. Yet they inspired dozens of other cryptographers to try. It took about four months.
The solution emerged on April 3, 1977 — on Passover. Ron Rivest, Adi Shamir, and Leonard Adleman, three MIT academics, discovered an actual and elegant method for public-key cryptosystems, after experimenting with more than forty mathematical functions. Rivest, then a 29-year-old assistant professor, was the driving force. That evening, after returning home from a seder with friends that included Adleman and Shamir, he had the eureka moment while sitting on a sofa after midnight, eyes closed: the desired one-way function could be based on very large, randomly chosen prime numbers, over a hundred digits long.
Rivest’s idea exploited the same curious one-way property of large primes that Cocks had discovered in secret. But the academic trio moved right to implementation. The numbers are easily multiplied, but it is nearly impossible to reverse the step and find the two primes that were used to generate the product. Multiplication took seconds; factoring would take millions of years, even with the most powerful computers. The algorithm that Rivest, Shamir, and Adleman suggested took advantage of this asymmetric factorization problem. The public encryption key would contain the product; the private decryption key would contain the two primes. It was safe to share the public key on an insecure channel because the factorization problem was so hard that it was, in effect, already encrypted, scrambled by a one-way function that was easy to perform but nearly impossible to reverse. It was magic.
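The scheme can be sketched with textbook RSA and deliberately tiny primes. This is a pedagogical toy, not secure practice: real keys use primes hundreds of digits long plus padding schemes, and the modular inverse via `pow(e, -1, phi)` needs Python 3.8 or later.

```python
p, q = 61, 53                 # the private primes (toy-sized)
n = p * q                     # 3233, published as part of the public key
phi = (p - 1) * (q - 1)       # 3120, derivable only if you know p and q
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent, kept secret

message = 65
ciphertext = pow(message, e, n)   # anyone can encrypt with the public (e, n)
plaintext = pow(ciphertext, d, n) # only the holder of d can decrypt
assert plaintext == message
```

Eve can copy the public key `(e, n)` off the wire; without factoring `n` back into `p` and `q`, it does her no good.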
In April 1977, the trio drafted a technical memo that would soon send shivers down the spine of the NSA. The memo remained obscure at first. After typing it up, Rivest mailed it out for informal review to colleagues, addressed simply as the Computer Science Lab at MIT, 545 Technology Square, Cambridge, MA. One of the recipients of this first draft was Martin Gardner, a columnist at Scientific American. Gardner saw the idea’s potential and mentioned Rivest’s work in the popular “Mathematical Games” column in August 1977.
Gardner announced a “new kind of cipher that would take millions of years to break.” He didn’t have space for technical details, so the column referred to the memo that was “free to anyone who writes Rivest at the above address enclosing a self-addressed, 9-by-12-inch clasp envelope with 35 cents in postage.” The column also mentioned that the National Science Foundation and the Pentagon, more specifically the Office of Naval Research, had funded this remarkable crypto work.
The response was overwhelming. Seven thousand letters came flooding in. They came from all over the world. “Some were from foreign governments,” Rivest recalled. They all wanted to get their hands on Rivest’s revolutionary encryption algorithm in Technical Memo Number 82.
NSA employees also read Gardner’s column in Scientific American. What had been Cocks’s innocent secret discovery in Cheltenham now put the NSA on the defensive, and the agency overreacted. Inside the Triple Fence the story sounded not as if somebody was simply making an important discovery, but as if academics were stealing a secret that America’s spies already possessed and guarded closely. That was a problem. The powerful Fort Meade machinery sprang into action.
The public spread of cryptographic knowledge needed to stop, so the spies looked into changing legislation. The agency put pressure on academic publishers. NSA employees warned cryptographers that presenting and publishing their research could have legal consequences. They issued gag orders. Fort Meade tried to censor the National Science Foundation and to take over funding crypto research directly. Vice Admiral Bobby Inman, then the director of America’s most secretive agency, even tried a softer approach: he gave the first public interview ever in the NSA’s history to Science magazine.
“One motive I have in this first public interview is to find a way into some thoughtful discussion of what can be done between the two extremes of ‘that’s classified’ and ‘that’s academic freedom,’” Inman told the magazine. The Texas-born naval career officer said he was deeply concerned about the “burgeoning” academic interest in this field, although he did not explicitly mention public-key encryption. He doubled down and gave a speech on the growing interest in “public cryptography” five months later, in March 1979, to the Armed Forces Communications and Electronics Association:
There is a very real and critical danger that unrestrained public discussion of cryptologic matters will seriously damage the ability of this government to conduct signals intelligence and the ability of this government to protect national security information from hostile exploitation.
The academic discovery meant that sources would soon go dark, the NSA feared.
GCHQ’s tightly guarded secret was no longer a secret. The NSA would try everything it could to stop strong crypto from going public, and it would continue to try over the next two decades: cutting government funding of cryptographic research, or taking over the funding; vetting papers before publication; threatening scholars with criminal proceedings, or trying to convince them that publication damaged the national interest. The agency’s attempts to stop crypto were ham-handed. Even its most potent tool, classifying encryption as a weapon under the International Traffic in Arms Regulations, would ultimately fail.
The NSA’s attempts to rein in crypto in the late 1970s foreshadowed a trend: the government’s endeavors to counter the rise of strong encryption confirmed the worldview of those who were inclined to distrust Washington’s secret machinations. The leaked Pentagon Papers and the ensuing Watergate affair earlier in the decade had eroded trust in the federal government, especially on the libertarian left. Resistance was brewing.
Rivest, Shamir, and Adleman’s motivation was a conservative one. They wanted to preserve the status quo, not topple it: “The era of ‘electronic mail’ may soon be upon us,” the trio suspected, correctly. It was therefore the task of cryptographers to “ensure that two important properties of the current ‘paper mail’ system are preserved”: privacy and authentication — that messages remained confidential and they could be signed.
Public-key cryptography made it possible to keep a message private: The sender would scramble the clear text with a key that the recipient had “publicly revealed.” Then the recipient, and only the recipient, could use the matching private key to unscramble the message’s ciphertext. But the new technique could do even more. Public-key cryptography made it possible to “sign” a message electronically, by doing exactly the opposite: having the sender encipher a signature with a privately held encryption key, thus enabling the recipient to verify the message’s origin by deciphering that signature with the sender’s publicly revealed key, thereby proving that only one party, the legitimate sender, could have scrambled the message’s signature. Everybody could decipher and read the signature, but in only one way: with the sender’s public key.
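Signing is the same mathematics run in reverse. A minimal sketch with the same kind of toy textbook-RSA key (in real systems one signs a hash of the message, and padding is essential):

```python
p, q = 61, 53                    # toy primes; real keys are vastly larger
n, phi = p * q, (p - 1) * (q - 1)
e = 17                           # public key, revealed to everybody
d = pow(e, -1, phi)              # private key, held only by the sender

digest = 123                     # stand-in for a hash of the message
signature = pow(digest, d, n)    # only the private-key holder can produce this

# Anyone can verify by deciphering the signature with the public key:
assert pow(signature, e, n) == digest
```

Encrypting with the private key and decrypting with the public one is exactly the “opposite” operation the paragraph above describes: everybody can read the signature, but only one party could have produced it.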
This form of authentication was like a handwritten signature on steroids: signatures could be verified by everybody and forged by nobody. Electronic mail could now be even better than old-fashioned snail mail, with sealed envelopes that only the intended recipient could open and signatures that were impossible to fake, guaranteeing confidentiality and authenticity.
Perhaps the best part was that the “public” in public-key encryption really had two meanings: the key was public and, equally important, the method was simple enough for widespread public use. That was because the cryptographic breakthrough came just at the right time. It coincided with the advent of the mass-market personal computer, the PC, and soon the spread of the internet. Strong crypto was becoming a public good, no longer a privilege of governments and companies. And what GCHQ had called nonsecret encryption was about to inspire an entire set of ideas — some realistic, some utopian — that would come to shape the 21st century.
The mix was potent: computers, networks, and public keys clearly would have a huge impact. But exactly what kind of impact it would be wasn’t obvious. A few scholars who were tracking the pulse of recent technical developments started exploring these possibilities. One of them was David Chaum.
Throughout the 1980s, Chaum was torn between fear and hope. The Berkeley graduate looked like a cliché: gray beard, full mane of hair tied back in a ponytail, and Birkenstocks. Chaum was concerned that “automation of the way we pay for goods and services” was advancing in large strides. He shuddered at the prospect of somebody else connecting the dots of his life. Chaum knew that an irritatingly detailed picture could be pieced together from hotel bookings, transportation, restaurant visits, movie rentals, theater visits, lectures, dues, and purchases of food, pharmaceuticals, alcohol, books, news, religious and political material. “Computerization,” he lamented in 1985, “is robbing individuals of the ability to monitor and control the ways information about them is used.”
Individuals in both the private and public sectors would routinely exchange such personal information about consumers and citizens. The individual user, Chaum was concerned, would lose control and visibility; there was no way to tell whether the information collected in bulk was accurate, obsolete, or inappropriate. “The foundation is being laid for a dossier society, in which computers could be used to infer individuals’ life-styles, habits, whereabouts, and associations from data collected in ordinary consumer transactions.” Such an outcome, Chaum suspected, would be unacceptable to many.
Thankfully, public-key encryption had emerged just in time to save privacy from automation, computerization, and data-hungry corporations and governments. So Chaum started working on concrete solutions: untraceable electronic mail, digital pseudonyms, anonymous credentials, and general protection of privacy. Chaum is best known for yet another revolutionary cryptographic discovery: blind signatures.
The nondigital equivalent to a blind signature would be using carbon paper to sign a letter that is already in an envelope, without having read the letter first. A signature, in short, is blind when the content of a message is disguised before the signature is added. This signature can then be used to verify the undisguised message.
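The carbon-paper trick has a neat realization in textbook RSA, in the spirit of Chaum’s original RSA-based construction. A toy sketch: Alice multiplies her message by a random blinding factor raised to the public exponent, the signer signs the blinded blob without learning the message, and Alice then divides the blinding factor back out, leaving an ordinary signature.

```python
p, q = 61, 53                    # signer's toy RSA key (textbook, insecure)
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

m = 99                           # Alice's message, e.g. a coin's serial number
r = 101                          # Alice's secret blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n        # the signer sees only this "envelope"
blind_sig = pow(blinded, d, n)          # signing through the envelope
sig = (blind_sig * pow(r, -1, n)) % n   # Alice strips off the blinding factor

assert sig == pow(m, d, n)       # identical to an ordinary signature on m
assert pow(sig, e, n) == m       # and anyone can verify it with the public key
```

Because the signer only ever saw the blinded value, it cannot later link the unblinded signature back to the envelope it signed: the unlinkability Chaum needed for anonymous ballots and untraceable cash.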
Chaum had two situations in mind where blind signatures could be put to use. One was digital voting. Alice might want to prove that she cast a vote in an election while keeping the vote itself anonymous. Chaum’s sophisticated blind signature scheme made this possible: an election authority could certify Alice’s ballot without ever seeing the vote it contained. It became possible to confirm all of this electronically: Alice votes anonymously, Bob sends her a blind receipt, and Eve sees none of it.
But Chaum’s true passion was another purpose for using blind signatures: digital cash. In 1983, he suggested a “fundamentally new kind of cryptography” that would enable a better form of money: third parties could not determine the payee or the time or amount of the payment. Individual privacy and anonymity were guaranteed, as when paying cash at a gas station or in a drugstore. At the same time, individuals could provide proof of payment, and they could invalidate payments if someone stole their medium of payment, as when using old-fashioned credit cards.
Chaum combined the best of both worlds: the anonymity of cash and the security of plastic. The article that spelled out the idea became one of his most influential papers, “Numbers Can Be a Better Form of Cash Than Paper.” But using this improved form of cash was not only about convenience and security. If crypto cash would not be adopted widely, Chaum feared, “invisible mass surveillance” would be inevitable, “perhaps irreversible.”
Chaum’s idea was magically simple and powerful. Steven Levy, a perceptive chronicler of the grand cryptography debate of the 1990s, called him the “Houdini of crypto.” So powerful were Chaum’s ideas that an entire movement arose. That movement believed crypto was en route to making the state as we know it obsolete.
Many of these early cryptographers had been exposed to a powerful streak of American culture: civil libertarianism with its deep-seated distrust of the federal government — or of any government. Counterculture, with its focus on free speech, drugs, and sexual liberation, was constantly pushing the boundary of what was legal. Meanwhile, the NSA’s hysterical reaction to basic crypto scholarship amplified this hostility toward government in the emerging computer underground of the 1980s. So it was no coincidence that Bay Area cryptographers unearthed what would become one of the most potent political ideas of the early 21st century.
One of the intellectual founding fathers of the nascent crypto movement was Timothy May. The son of a naval officer, May grew up in a suburb of San Diego. When Tim was 12, his father was posted to Washington and the family made the move to the East Coast. Young Tim, not even a teenager, joined a local gun club. A fascination with firearms would stay with him. He later owned a .22 revolver, a .357 Magnum, an AR-15 assault rifle, a Ruger, a pair of SIG Sauers, and other weapons. Holding a pleasantly heavy, cold metallic firearm felt liberating and empowering. So did reading Ayn Rand, the queen of youthfully aggressive libertarianism.
May was a voracious reader of fiction as well as nonfiction. Crypto was so new and so radical in its implications that inspiration simply couldn’t come from science, he thought; it could only come from science fiction. Vernor Vinge’s novella “True Names” came to May’s attention in 1986. “You need to read this,” a friend told him, giving him a dog-eared Xerox copy of the entire short story. Vinge feared total identification and transparency: “It occurred to me that a true name is like a serial number in a large database,” the science fiction writer recalled later. The names could serve as identifiers, connecting otherwise disparate information, or what intelligence officers call “selectors.” Whoever had access to a true-names database would have power over the objects in the database.
Vinge’s 1981 novella spelled out the very same tension that was driving Chaum’s fear of the “dossier society” at the very same moment. May was “riveted,” he said later. He thought the story articulated a number of themes that were swirling around in “computer circles” at the time — notably, the role of digital money, anonymity, pseudonyms and reputations, and countering the government’s interest to impose control “in cyberspace.”
Cyberspace was a familiar notion to May, even before it was articulated under that name. May keenly followed science and technology trends, including Jaron Lanier’s early work on virtual reality. The September 1984 issue of Scientific American, the software issue, had on its cover a visualization of Mandala, Lanier’s visual programming language. May, then working at Intel, had also contributed an illustration to that issue: a blue-and-green scan of an electron micrograph showing a small part of an Intel 80186 microprocessor. One day that September, May ran into Lanier at Printers Inc., an independent bookstore in Palo Alto and a gathering spot for San Francisco Peninsula intellectuals, not far from Stanford University. Lanier was sitting two stools over, and they struck up a conversation about cyberspace.
“Encryption makes it easy and even safe to ignore most local laws about what can be done in cyberspace,” May later argued. For May and many other crypto anarchy pioneers, this change was a first-order opportunity. In true cyberpunk fashion, May took the space idea literally. “There is no reason to expect that this capability won’t be a major reason to at least partly move into cyberspace,” May wrote at the time. The nostalgic frontiersman expected that the World Wide Web’s explosive growth, secure communication, and the coming availability of digital money would accelerate the “long-awaited colonization of cyberspace.”
In mid-1988, 10 years after Rivest’s pathbreaking discovery and two years after reading Vinge’s “True Names,” May penned the “Crypto Anarchist Manifesto,” whimsically modeled on another famous manifesto with revolutionary ambitions: “The technology for this revolution — and it surely will be both a social and economical revolution — has existed in theory for the past decade,” May wrote, “but only recently have computer networks and personal computers attained sufficient speed to make the ideas practically realizable.”
The possibilities were extraordinary. “Two persons may exchange messages, conduct business, and negotiate electronic contracts without ever knowing the True Name, or legal identity, of the other,” May wrote in his manifesto, using capital letters in honor of his favorite science fiction author. Then May mobilized that most powerful American myth, the Frontier. Barbed wire, a seemingly minor technical invention, had enabled the fencing off of vast ranches and farms in the open rangeland of the West. Barbs on wire had altered forever the concepts of land and property rights in the frontier states, and had caused the Fence Cutting Wars a century earlier. May sided with the wire-clipping cattlemen and cowboys. On the electronic open range, the barbed wire need not be accepted as immutable fact.
The comparison was odd, but it sounded powerful: Crypto was a game changer. It also emerged as a seemingly minor technical invention at first, from some obscure branch of mathematics. But this time, technology worked for freedom and liberty, and against those who wanted to build fences around their property. For May, crypto was like “the wire clippers” that would dismantle the illegitimate fences around intellectual property. The federal government, he observed with horror, wanted to slow or halt the spread of this technology, and Washington justified the clampdown with vague references to national security. And yes, just as in “True Names,” criminals would abuse it and take advantage of the renewed liberties. But none of this would stop the rise of crypto anarchy, May knew. He ended his pamphlet with this battle cry: “Arise, you have nothing to lose but your barbed wire fences.”
The “Crypto Anarchist Manifesto” already contained the seeds of what would become a potent political ideology: technology itself, not humans, would make violence obsolete. May distributed his pamphlet electronically and in print among like-minded activists at the Crypto ’88 conference in Santa Barbara and again at the Hackers Conference that year. But something was missing. The message didn’t quite get out.
By early 1992, Timothy May and a friend, Eric Hughes, were becoming annoyed with the glacial progress of actual cryptographic technologies that could be used by normal people. Yes, Phil Zimmermann had just released his home-brewed PGP (for “Pretty Good Privacy”) 1.0 to the public. This was a significant step. Zimmermann, in violation of export control regulations as well as patent law, gave public-key encryption to the people. And the uptake of his rogue app, as well as the uproar, was big. But the first version of PGP was buggy and clunky to use. Much more was possible, and May and Hughes knew it.
May had a taste for rugged frontier individualism, often sporting a wide-brimmed Stetson hat — a true crypto cowboy. The curious physicist had recently retired from Intel as a self-made man at forty, now independently wealthy and living on a self-sufficient ranch in the Santa Cruz Mountains. Back in 1970, young May had read the Whole Earth Catalog, and he later subscribed to the Whole Earth Review. He also was a former member of the Homebrew Computer Club.
Hughes was just under 30, with blond hair halfway down his back and a long wispy beard. He had studied math at Berkeley. In May 1992, Hughes came down from Berkeley to Santa Cruz to hunt for a house. But the two men were drawn into their common passion: crypto anarchy. “We spent three intense days talking about math, protocols, domain specific languages, secure anonymous systems,” May said later. “Man, it was fun.” Like so many crypto rebels, both had been inspired by Martin Gardner’s famous 1977 Scientific American column. “Wow, this is really mind-blowing,” May had thought when he first read the piece, 15 years earlier.
May and Hughes began to rope in others. A group of 16 people started meeting every Saturday in an office building near Palo Alto full of small tech startups. The room had a conference table and corporate-gray carpeting. Stewart Brand was at one of the first meetings, as were Kevin Kelly and Steven Levy, the two Wired writers. They were all united by that unique Bay Area blend: passionate about technology, steeped in counterculture, and unswervingly libertarian.
The crypto group also shared one other thing: a frustration with the slow pace of cryptographic progress. Chaum’s ideas were 10 years old, yet there was still no digital cash, no anonymous remailers, no privacy, and no security built into the emerging cyberspace. They played games, sometimes for up to four hours, using envelopes to stand in for encrypted messages and role-playing to see how cryptographically guaranteed anonymity would play out in the marketplace. Simply by passing envelopes, the group simulated signatures, trust and reputation systems, and even online black markets.
But already at these first meetings, some were concerned that anonymity could be abused. “Seems like the perfect thing for ransom notes, extortion threats, bribes, blackmail, insider trading and terrorism,” Kelly said to May during an interview in Santa Cruz in the fall of 1992, referring to these early ideas for black markets. Brand shared this skeptical view and had already banned anonymity on the WELL for similar reasons. May was unfazed. “Well, what about selling information that isn’t viewed as legal, say about pot-growing, do-it-yourself abortion?” the self-described anarchist responded to Kelly. “What about the anonymity wanted for whistleblowers, confessionals, and dating personals?”
Most activists sided with May. In September 1992, a few crypto pioneers decided to do the obvious: take their meetings from the Palo Alto office into cyberspace and organize themselves in a mailing list. People were just beginning to use e-mail accounts, so an email list seemed the best and most open way to network the group. Unlike on the WELL up in Sausalito, no membership fees were required and, more important, everyone could sign up anonymously. The list was open; no finger files wanted. Everybody could subscribe by simply emailing firstname.lastname@example.org. The list was hosted on a machine owned by John Gilmore, an early San Francisco–based crypto activist with a long mane, a flimsy beard, and a keen interest in recreational drugs. Gilmore was one of five original employees at Sun Microsystems, and like May, he was independently wealthy at a relatively young age.
Cypherpunk hailed from science fiction. The name was a giveaway, of course. “You guys are just a bunch of cypherpunks,” Jude Milhon exclaimed at one of the group’s first meetings. Then an editor at Mondo 2000, Milhon was better known as St. Jude, a boisterous feminist hacker remembered for demanding that “girls need modems.” St. Jude’s play on words combined the then piping-hot science fiction trend with the British spelling of “cypher.” The would-be anarchists loved their new hipster nickname. “The ‘cyberpunk’ genre of science fiction often deals with issues of cyberspace and computer security,” May explained later, “so the link is natural.” May went up to Berkeley to a couple of Mondo parties hosted by Ken Goffman and his successor as the magazine’s editor in chief, Alison Bailey Kennedy (a.k.a. Queen Mu).
In November 1992, St. Jude, herself a WELL member, ran one of the first stories on the nascent crypto movement in her ultimate cyberpunk magazine. Milhon related the appropriately cryptic anecdote about meeting two masked cypherpunks at a Screaming Meemees concert in the Black Hole, a Bay Area club.
“Actually, unmasking your real identity could be the ultimate collateral,” one of the masked punks told her. “Your killable, torturable body. Even without kids, you’ve got a hostage to fortune — your own meat.”
“AAIEEeeee,” Milhon responded. “That’s great covert gear you got there, guys.”
“The revolutionists can be contacted at email@example.com,” she added.
Crypto anarchy spread. Soon, local chapters popped up in London, Boston, and Washington. Like any reputable subculture phenomenon, the cypherpunks had their own jargon: pseudonyms and anonymous handles became “nyms,” for instance, and they called themselves simply “c-punks.”
Steven Levy and Kevin Kelly attended some of the first Palo Alto c-punk meetings. Levy portrayed the new movement in a famous cover story for Wired, in the magazine’s second issue, which came out in May 1993. On the cover were Eric Hughes, Tim May, and John Gilmore, holding up an American flag, their faces hidden behind white plastic masks; Gilmore even sported an EFF T-shirt complete with the internet address of the then newly founded Electronic Frontier Foundation. The geeky rebels had their PGP fingerprints written on the foreheads of the masks.
The same year, in the summer of 1993, Kelly published a long story about the crypto anarchists in the anniversary issue of the Whole Earth Review, guest-edited by its founder, Stewart Brand. Earlier that year, Mosaic 1.0 had been released, the world’s first browser that could display graphics and text on the same page. The software, distributed for free, brought the web to life with color and images. Traffic exploded. The Whole Earth Review pointed out that the blooming network made encryption ever more necessary.
By November 1992, when Mondo first mentioned the list, it had about 100 members, including journalists and even a few people with .mil addresses. Radical libertarians dominated the list, along with “some anarcho-capitalists and even a few socialists.” Many had a technical background from working with computers; some were political scientists, classical scholars, or lawyers. Two years into cypherpunk, the list had about five hundred people on it, after outages on the host machine had knocked the list back from more than seven hundred subscribers. The opinions didn’t easily separate according to the political left and right, so the founding cypherpunks advised members not to rant on hot-button issues, like abortion or guns.
The list, and indeed the group, had no formal leadership. “No ruler = no head = an arch = anarchy,” May clarified, and he recommended looking up the etymology of the word “anarchy,” just to be sure. Eric Hughes administered the list for the first years. The emerging movement had no budget, no voting, no leaders. Yet the community remained active for many years. John Gilmore did the math: from Dec. 1, 1996, to March 1, 1999, the list processed 24,575 messages. That’s approximately 30 messages each day for more than 800 days.
But science fiction wasn’t just a namesake. Fiction stoked the cypherpunk movement’s utopian ideas. On the list, as well as in articles and FAQs, May recommended to “read the sources.” Those sources were not scientific articles on encryption, nor pamphlets of libertarian or anarchist political thought. No, the recommended sources were novels — namely, George Orwell’s “1984,” John Brunner’s “The Shockwave Rider,” Ayn Rand’s “Atlas Shrugged,” and especially Vernor Vinge’s “True Names.” In fact, Vinge’s work is referenced about 20 times in the “Cyphernomicon,” a sprawling 300-page compendium that is perhaps the closest thing the movement has to a canonical document, organized as an appropriately messy and never-ending list of frequently asked questions. The only nonfiction source recommended in the document was Chaum’s classic 1985 article “Security without Identification.”
Inspired by fiction, the activists debated what was “holding up the walls of cyberspace,” as May put it. Science fiction writer William Gibson had famously described the new virtual realm of cyberspace as a consensual hallucination. The crypto activists didn’t like that phrase. Something merely consensual wouldn’t cut it. The new frontier couldn’t just be hallucination. Their idea of cyberspace was rock solid, sturdy enough to withstand the assembled might of the US government, FBI agents and NSA spooks included. This explains why “True Names” held much more appeal for the cypherpunks than “Neuromancer” did.
Vinge’s heroic protagonist — Roger Pollack, a.k.a. Mr. Slippery — was a successful middle-class professional with a suburban house, garden, and car. But on the Other Plane, the Feds suddenly appeared powerless, at the mercy of anonymous hackers with superior power, and in need of Pollack’s help. Gibson’s drug-ravaged hustler Henry Dorsett Case, in “Neuromancer,” made for the starkest possible contrast. Cyberspace couldn’t just be a collectively imagined illusion; the cypherpunks preferred to see the vast online frontier as a new territory that could be as rugged and dangerous to outsiders as the high plains of the Rocky Mountains. The question of what held up the walls of cyberspace was therefore a pertinent one. May tackled it in one of his longer essays:
What keeps these worlds from collapsing, from crumbling into cyberdust as users poke around, as hackers try to penetrate systems? The virtual gates and doors and stone walls described in True Names are persistent, robust data structures, not flimsy constructs ready to collapse.
The answer, by now, was obvious. Cryptography provided the “ontological support for these cyberspatial worlds,” he understood. The astounding mathematical power of very large prime numbers guaranteed enduring structures in the vastness of a new space that could now be safely “colonized,” May believed. Owning a particular “chunk of cyberspace,” he explained, meant running software on specific machines and networks. And the owners of such virtual properties made the rules. They set access policies and determined the structure of whatever was to happen there: “My house, my rules.” Anybody who didn’t like the rules in a particular virtual world would be welcome to stay away. And, May was convinced, anybody who wanted to call in old-fashioned governments to force a change of the rules would face an uphill battle.
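The “large primes” May had in mind are the foundation of RSA-style public-key encryption, the mathematics Gardner’s column had popularized. A toy sketch with deliberately tiny primes shows the mechanics; real keys use primes hundreds of digits long, which is what makes the walls “sturdy”:

```python
# Toy RSA with tiny primes -- illustration only, trivially breakable.
p, q = 61, 53                  # two secret primes
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient: 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
plaintext = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert plaintext == message
```

Anyone can encrypt with the public pair (e, n); only the holder of d can decrypt, and recovering d from n requires factoring n back into p and q, which is infeasible at realistic sizes.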
For the libertarian minded, crypto anarchy meant that “men with guns” could not be brought in to interfere with transactions that all participants mutually agreed on. Taking violence out of the equation had two wide-reaching consequences. Two types of men with guns would find crypto hard to cope with. The first were the police and agents of federal law enforcement. No longer would they be able to trace and find those who refused to declare income or deal in illegal goods. The state, in short, would lose a good deal of its coercive power. If financial transactions became untraceable, enforcing taxation would be impossible. And that, of course, was a good thing. “One thing is for sure,” May told Kevin Kelly of the Whole Earth Review already in late 1992, “long-term, this stuff nukes tax collection.”
But crypto wouldn’t affect only the government and the rule of law. The other kind of men with guns were criminals. And the same applied to them. Criminals would also lose their power to coerce others with threats of physical violence. If the buyers of drugs, for instance, were untraceable not just by the Feds but also by gangs, then markets chronically plagued by violence and abuse would turn nonviolent. Anonymously ordering LSD online was much less risky than going to dodgy street corners and talking up shady pushers.
Strong crypto, made widely available, enabled totally anonymous, unlinkable, and untraceable exchanges between parties who had never met and who would never meet. The anarchists saw it as a logical consequence that these interactions would always be voluntary: since communications were untraceable and unknown, nobody could be coerced into involuntary behavior. “This has profound implications for the conventional approach of using the threat of force,” May argued in the Cyphernomicon. It didn’t matter if the threat of force would come from governments or from criminals or even from companies: “Threats of force will fail.”
Crypto anarchy would not just take force out of chronically violent markets; it would also be a shot in the arm for dysfunctional ones. One of May’s favorite analogies was the guilds. Medieval guilds had monopolized information — for instance, how to make leather or silver. When independent entrepreneurs tried to produce these goods outside the guilds, “the King’s men came in and pounded on them because the guild paid a levy to the King.” The police, tax collectors, and corporate interests joined forces.
Printing broke the oppressive system, May argued. Suddenly someone could publish and distribute a treatise on tanning leather, and the king would be unable to stop the knowledge from spreading like wildfire. But even in the age of printing, May lamented, some firms retained a firm grip on specialized technologies, such as gunsmithing. This era would now come to an end. Encryption, he reasoned, would liberate expertise and proprietary knowledge. “Corporations won’t be able to keep secrets because of how easy it will be to sell information on the nets,” he said, in the dated language of the 1990s. All kinds of transactions would become possible, without restrictions.
The “Cyphernomicon” felt raw, unedited, and self-absorbed in style. The pamphlet was clear: there was a great divide; it was either privacy or compliance with laws. Both at the same time, it implied, was impossible. The gun debate offered the template of self-protection versus protection by law and police — “crypto = guns,” as May put it. Both enabled the individual to have “preemptive protection.” Some of the most potent cypherpunk slogans were simply copied and pasted from America’s great gun debate: “If crypto is outlawed, only outlaws will have crypto.” One of the movement’s most popular slogans became an adaptation of those famous five words coined so dramatically by the NRA’s Charlton Heston: now it was not guns but crypto being pried “from my cold, dead hands.”
The National Security Agency. Source: Reuters
From the start, cypherpunk was about getting stuff done, not just debate and organization for debate and organization’s sake. In the summer of 1992, John Gilmore, one of the three original crypto rebels on the Wired cover, had made a bold move against the NSA that catapulted the movement into the national spotlight.
Gilmore had come across two cryptography books that piqued his interest: The first, “Military Cryptanalysis,” by the NSA cryptographer William Friedman, was a four-volume text published while World War II was still raging. The second, “Cryptanalytics,” in six volumes published between 1956 and 1977, was coauthored by Friedman and one of his students, Lambros Callimahos. In each case, the first two volumes had been declassified, which is why Gilmore knew about them.
In early July 1992, Gilmore filed a Freedom of Information Act request with the NSA, asking the agency to declassify the remaining Friedman volumes. Gilmore wasn’t requesting some coffee-table crypto book. Friedman was a legendary cryptographer: he had cofounded the US Army’s Signal Intelligence Service in the 1930s, a direct predecessor to the NSA. The National Cryptologic Museum idolizes him as the “Dean of American Cryptology.” Friedman himself had coined the term “cryptanalysis,” and he even had a 500-person lecture theater at Fort Meade named after him.
Without knowing it, Gilmore had requested one of the NSA’s founding documents. The agency, as is common, simply dragged out its response. Gilmore has a soft voice and the appearance of a peaceful hippie, with John Lennon glasses and a serene smile. Not one to be easily deterred, however, he considered filing an appeal.
Then a cypherpunk from the East Coast got in touch with Gilmore. “You know, I think I saw something like that at a library,” he told him. After Friedman’s death in 1969, his personal papers had gone to a public library on the campus of the Virginia Military Institute in Lexington, Virginia. The papers included the unpublished manuscript of the coveted book. The cypherpunk simply went there, Xeroxed the book’s page proofs, complete with Friedman’s annotations, and sent a thick packet to Gilmore in California. In early October, a week after the packet landed with a thud on Gilmore’s doorstep, a second letter from the East Coast arrived in his mailbox. It was from the NSA.
The agency had written to let Gilmore know that it would not release the books to him. In its response, the NSA referred to a statute — 18 U.S.C., Section 798. That statute made it a federal crime to publish classified cryptologic information. That was when it dawned on Gilmore that he had a problem. The documents in his possession were classified, and disseminating them — even showing them to experts — would be a crime.
But the NSA didn’t know he had the documents already. Gilmore carefully considered his options. He decided to submit copies of the classified documents to a federal district court under seal. The activists had limited trust in the judicial process. They trusted math, not the law. So, before Gilmore filed the sealed document with the judge, he made several copies of the classified book and hid them in extremely unlikely places.
For a while it looked as if the situation could escalate. A Justice Department lawyer representing the NSA demanded that Gilmore surrender his illegal copies and threatened that the NSA might send its own operatives, or FBI agents, to seize Friedman’s proofs from Gilmore. The cypherpunks were worried — especially Gilmore’s lawyer, Lee Tien. They were worried not just for their own personal freedom, but for the freedom of the country. At academic crypto conferences, rumors were making the rounds that the NSA had already raided the New York Public Library and reclassified documents that used to be public, and that in 1983 they had already removed Friedman’s personal correspondence from public access. An early court case seemed to affirm the government’s right to snatch any given document. “If they could do a black-bag job on everyone who had it,” Gilmore recalled in a café in Haight-Ashbury, “then they could classify anything.”
By now the anarchists were beginning to understand what the NSA feared. The c-punks had seen that the spooks at Fort Meade could tweak the law and play politics. But the activists also knew the secret agency hated publicity. So Gilmore started calling some of the technology reporters he knew through the cypherpunks list. One of the best-known journalists in San Francisco at the time was John Markoff, from The New York Times. Gilmore reached out to him. The Times later ran the story. “In Retreat, U.S. Spy Agency Shrugs at Found Secret Data,” the headline read.
Gilmore’s plan worked as predicted: the NSA shunned the light. The agency declassified the Friedman documents in response to the high-profile publicity. The NSA’s lawyers didn’t call Gilmore’s lawyer to tell him that they had finally yielded and would declassify the Friedman books; they told Markoff straightaway. “I heard it from Markoff,” Gilmore recalled. “They wanted to kill the story.”
Friedman’s actual book was useless for the cypherpunks. There was nothing in its pages that made any difference for the kinds of cryptographic tools the activists were developing. But the episode was a psychological success. It reaffirmed two views: that “the NSA was the enemy,” as Gilmore put it — and that this enemy wasn’t almighty. Even a few long-haired hippie activists could score a win against the mighty military machine. And it was only late 1992. The cypherpunks were just getting started.
“Cypherpunks write code,” was the mantra. Naturally, that was a statement of principle and a bit of an exaggeration. Not everybody on the list wrote code. In fact, only 10 percent of the cypherpunks wrote code, and only 5 percent worked on encryption-related projects. Early on, the activists had set their sights on a fundamental privacy problem in the digital age that was, at least then, unrelated to encryption: anonymity.
Making strong encryption available to John Doe was a big step forward for privacy. But it didn’t even begin to solve a fundamental problem: scrambling plaintext to ciphertext beautifully protected the letter inside an envelope. Whoever opened the envelope could not read what was inside. That was great. But the encryption available at the time didn’t conceal what was on the envelope: the sender’s address, the receiver’s address, and some other information about when and how the letter was sent. The correspondents’ identity was openly revealed. Encryption protected the content of packets but not the headers — what later would be called “metadata.” The now publicly available PGP protocol left metadata unprotected. PGP on its own, in short, created confidentiality, not anonymity. The cypherpunks wanted a solution to this problem.
Remailers were the solution. These were dedicated machines programmed to take scissors to the envelopes, encrypted or not, to physically cut out the sender’s address and then forward the email to the recipient. Remailers, in other words, were servers that automatically stripped emails of information that could identify the sender: the code running on the remailer would cut out the metadata, removing the sender’s address, replace it with a nonexistent placeholder such as firstname.lastname@example.org, and forward it to the intended recipient. It was like writing a letter with no sender’s address, or like calling somebody from a public phone with a distorted voice. Remailers could also be chained, to increase security, just in case one remailer kept a log file that could identify the sender. Court orders or lawsuits were ineffective against machines that automatically forgot data. But integrating PGP into remailers was a problem, at least initially.
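The envelope-scissoring logic of a remailer can be sketched in a few lines. This is an illustrative toy, not the historical Perl or C software; the field names and the placeholder address are invented for the example:

```python
# Minimal sketch of what a remailer does to an email's metadata.
ANONYMOUS = "nobody@remailer.invalid"  # illustrative placeholder sender

def remail(message: dict) -> dict:
    """Strip identifying metadata and forward to the recipient."""
    return {
        "From": ANONYMOUS,        # the real sender is cut out
        "To": message["To"],      # the destination is preserved
        "Body": message["Body"],  # the content passes through untouched
        # no Received/Date trail is kept -- the machine "forgets"
    }

def chain(message: dict, hops: int = 3) -> dict:
    # Chaining: each remailer sees only the previous hop, so a single
    # compromised log file cannot identify the original sender.
    for _ in range(hops):
        message = remail(message)
    return message

out = chain({"From": "alice@example.org",
             "To": "list@example.org",
             "Body": "sensitive material"})
assert out["From"] == ANONYMOUS and "alice" not in str(out)
```

In practice each hop would also decrypt one layer of a nested PGP envelope, so that no single machine ever sees both the sender and the final recipient in the clear.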
Eric Hughes and Hal Finney wrote the first such remailers in 1992, in the programming languages Perl and C. By 1996, several dozen remailing machines would be operational. They had many uses. It became possible, for instance, to publish sensitive information simply by emailing it to a publicly accessible email list, because nobody could trace the email back to its sender. In this way the remailers were used to “liberate” ciphers that had not been published before, to spill a few government secrets, and to reveal secrets of the Church of Scientology.
By late 1992, things had started to move. The use of PGP was on the rise, and the first remailers were coming online. On Dec. 1, 1992, two days after Gilmore had scored his symbolic victory against the NSA, John Perry Barlow addressed a meeting of national-security and intelligence officials in McLean, Virginia: “I believe you folks in the Intelligence Community are going to [be] challenged by these issues as directly as anyone,” he told the spooks. The EFF cofounder knew that intelligence agencies were working under strict guidelines separating the domestic from the foreign. “You’re not supposed to be conducting domestic surveillance,” Barlow lectured the gathered officials. “Well, in Cyberspace, the difference between domestic and foreign, in fact the difference between any country and any other country, the difference between us and them, is extremely blurry. If it exists at all.”
For Barlow and the cypherpunks, this Vingean prophecy of a borderless cyberspace secured through the power of large primes was a shimmering pacific dream slowly inching into reality; for national security–minded government officials, it was a sinister, threatening nightmare. So in April 1993, the White House under Bill Clinton tried a new approach: if they couldn’t stop the spread of crypto, they could perhaps control it. The government proposed a new federal standard for encryption.
The proposal was officially named the Escrowed Encryption Standard, or EES. It was designed to enable encrypted telecommunication, especially voice transmission on mobile phones — but with a twist. The standard encompassed an entire family of cryptographic processors, collectively and popularly known as “Clipper chips.” The government expertise for designing such a system, naturally, resided in the country’s mighty signal intelligence agency, the NSA. The proposal was then to be implemented through NIST, the National Institute of Standards and Technology.
The system’s basic feature was simple in theory: when two devices established a secure connection, law enforcement agencies would still be able to access the key that was used to encrypt the data. In short, communication was protected, but the FBI could read the mail or listen in when needed. The technical implementation of that simple idea turned out to be more difficult than expected.
Matt Blaze (r.) at a Passcode panel conversation on encryption at the 2016 South by Southwest Interactive festival. Photo by Ann Hermes/The Christian Science Monitor
NSA engineers came up with what they thought was a neat trick. To make a secure phone call, two phones would first establish a so-called session key to encrypt the conversation. That much was a given. The session key would unlock the ciphertext and reveal the plaintext. So the NSA needed to find a way to make the session key accessible to law enforcement without compromising the phone’s security. To do that, they created a so-called Law Enforcement Access Field, abbreviated LEAF. The LEAF would retain a copy of the session key. That retained session key, of course, was sensitive, and was itself encrypted with a device-specific key, called the “unit key.” This unit key was assigned at the time the Clipper chip was manufactured and hardwired into the device. Unit keys were held in “escrow” by two government agencies. The Feds, in short, had a spare key for encrypted traffic. The White House argued that Clipper would achieve twin goals: the chip would provide Americans with secure telecommunications, and it would not compromise law enforcement agencies in their ability to do legal, warranted wiretaps.
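The escrow arrangement can be sketched as follows. This is a conceptual toy, not the real system: the classified Skipjack cipher is stood in for by a simple XOR, and the key sizes and the two-way escrow split are illustrative only.

```python
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR stand-in for Skipjack; XOR is its own inverse, so the same
    # function both encrypts and decrypts.
    keystream = key * (len(data) // len(key) + 1)
    return bytes(a ^ b for a, b in zip(data, keystream))

# At manufacture: a unit key is burned into the chip, and its halves
# are deposited with two separate escrow agencies.
unit_key = os.urandom(16)
escrow_share_1, escrow_share_2 = unit_key[:8], unit_key[8:]

# Per call: the phones agree on a session key; the chip also emits a
# LEAF carrying that session key encrypted under the unit key.
session_key = os.urandom(16)
leaf = toy_cipher(unit_key, session_key)
wiretap = toy_cipher(session_key, b"hello, this call is 'secure'")

# With a warrant, law enforcement reassembles the unit key from the
# two escrow shares, opens the LEAF, recovers the session key, and
# reads the intercepted traffic.
recovered_unit_key = escrow_share_1 + escrow_share_2
recovered_session = toy_cipher(recovered_unit_key, leaf)
assert toy_cipher(recovered_session, wiretap) == b"hello, this call is 'secure'"
```

The design choice that enraged the cypherpunks is visible in the sketch: the security of every conversation rests on a per-device key that someone else holds in escrow.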
The cypherpunks, predictably, called BS on Clipper. The chip wasn’t just controversial; it was a bombshell. The tiny chip was the big cause the movement was waiting for. The very idea that a government, whatever its constitutional form, should be allowed to hold a copy of all secret keys was simply absurd to the growing number of crypto activists. “Crypto = guns” now meant that the Clinton administration faced the combined rage of First Amendment and Second Amendment activists, of those in favor of free speech and armed self-defense: Berkeley academics = NRA types. “Would Hitler and Himmler have used ‘key recovery’ to determine who the Jews were communicating with so they could all be rounded up and killed?” May asked on the list, rhetorically.
Graffiti sprayers took up the theme: “Stop Clipper — F*** the NSA” appeared on a garage door at the corner of 16th and Harrison St. in San Francisco, in March and April 1994. Realist-minded privacy advocates pointed out that any key escrow database would be a juicy target for aggressive intelligence agencies. The activists saw the LEAF and Clipper as a government-mandated “back door” into secure systems. Key escrow effectively meant “key surrender,” the EFF argued.
One famous anti-NSA slogan of the crypto wars of the early 1990s was the chip-mocking “Big Brother Inside,” a play on the famous tagline of one of the world’s leading chip makers: “Intel Inside.” Others designed a logo for T-shirts and pins, “Fight the Clipper,” with the old tagline from May’s “Crypto Anarchist Manifesto”: “Arise, you have nothing to lose but your barbed wire fences!” John Perry Barlow had one of the most powerful lines: “You can have my encryption algorithm,” he thundered, yet again using that favorite line, “when you pry my cold dead fingers from my private key.”
Graffiti and punch lines didn’t kill the chip. A hack did. This final blow was expertly administered by one of the cypherpunks, Matt Blaze, who then worked for AT&T. The Clipper’s cipher algorithm, known as Skipjack, remained classified. The government would provide the cipher algorithm only in preimplemented, tamper-resistant modules. And it would provide the hardware only through vetted vendors. AT&T was one of those vendors, and it was there that Blaze was able to test the chip.
Blaze found that the LEAF, the spare key, was flawed and indeed vulnerable to tampering. He published his results in a now-famous paper in August 1994. Blaze’s finding meant, ironically, that the Clipper could be fixed by breaking it — at least from the point of view of privacy activists: by subverting the law enforcement access field, the encryption remained intact but law enforcement no longer had access. Soon the Clipper was axed, and the cypherpunks bagged another victory against the government.
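The gist of Blaze’s attack can be illustrated with a toy model. The real LEAF carried a 16-bit checksum; the checksum function below is an invented stand-in, but the arithmetic is the point: 16 bits leaves only 65,536 possibilities, few enough to defeat by trial and error.

```python
import os
import zlib

def leaf_checksum(leaf: bytes) -> int:
    # Stand-in for the LEAF's 16-bit authenticator; the real checksum
    # algorithm was classified.
    return zlib.crc32(leaf) & 0xFFFF

# A receiving device only verifies the 16-bit checksum, not the LEAF's
# contents. So a rogue sender can substitute random garbage for the
# real LEAF and retry until the checksum happens to match.
target = leaf_checksum(os.urandom(16))  # checksum a valid LEAF would carry

tries = 0
while True:
    bogus_leaf = os.urandom(16)  # garbage instead of the real spare key
    tries += 1
    if leaf_checksum(bogus_leaf) == target:
        break

# On average about 65,536 tries: minutes of work. The forged LEAF
# passes verification, but the "spare key" inside it is junk, so the
# encryption stands and law enforcement access fails.
```

That asymmetry is exactly what the activists celebrated: a small brute-force effort by the sender quietly removed the government’s back door while leaving the conversation encrypted.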
Several crypto activists also battled the government in court, with varying degrees of success. Three notable cases concerned the status of computer code and machine language as free speech. In the first case, filed in 1994, Phil Karn, a former Bell Labs engineer and cypherpunk, unsuccessfully brought a suit against the Department of State claiming that restricting the export of a diskette with Bruce Schneier’s book Applied Cryptography on it infringed on his First Amendment rights. In 1996, cypherpunk list member Peter Junger, a professor at the Case Western Reserve University School of Law, filed a suit against the Commerce Department for enforcing export regulations against him because he was teaching a computer law class in the United States — ultimately also without success.
Most influential would be a bundle of court cases brought by Daniel Bernstein, known as Bernstein v. US. The young mathematician, represented by the EFF, challenged the State Department for requiring him to get a license to be an arms dealer before he could publish a small encryption program. Then, by the end of the century, on May 6, 1999, the US Court of Appeals for the Ninth Circuit would rule, in a first, that software code was constitutionally protected speech, eventually ushering in the end of the hated cryptographic export control regime.
Street art by Banksy. Photo by Brendan McDermid/Reuters
Emboldened by success and mainstream media coverage, crypto anarchy became not less extreme but more so. Stewart Brand, for one, remained highly skeptical of anonymity online. On April 1, 1994, when the Clipper debate reached its fever pitch, a “nobody” reported on the newsgroup that Phil Zimmermann had been arrested and that the crypto pioneer was being held on bail of $1 million. Brand was about to sit on a panel with Zimmermann and was not amused. He responded to the cypherpunks two days later: “The Zimmerman[n] prank,” he wrote, “hardens my line further against anonymity online. At its best, as here, it is an unholy nuisance.” Brand commanded tremendous authority among subscribers, so his snide comment stung; the founders didn’t appreciate it.
“You can’t get rid of anonymity,” Hughes responded the next morning. He pointed out that there is no clear difference between saying something anonymously and saying it by using a pseudonym. “The first use of a pseudonym is as good as anonymous, because it has no past history,” he wrote. This was a clever move. The activists were fond of pointing out that cypherpunks had even been among the nation’s founding fathers. Alexander Hamilton, James Madison, and John Jay had chosen the shared handle Publius to publish the Federalist Papers. The cypherpunks even compared themselves to the founding fathers: Gilmore explained that May was the Thomas Jefferson, “the essayist,” while Hughes was more the Benjamin Franklin, “the coder.” Basically, Hughes responded to Brand that opposing anonymity was un-American.
But Brand was right. Several influential ideas emerged that polarized crypto anarchy and drove the uncompromising and self-defined founding fathers of online anonymity further to the fringe. One such idea, and perhaps the most prophetic one, was BlackNet.
BlackNet was the anti-WELL. It was the machine of loathing and disgrace: completely anonymous, without physical location, and amoral by design. An anonymous voice out of the emptiness of cyberspace (it had to be!) introduced the idea on August 18, 1993. The “Introduction to BlackNet” came as an email to the cypherpunks list through a remailer. The message started ominously, with a hint of irony and faux exaggeration that was obvious to most members on the list:
Your name has come to our attention. We have reason to believe you may be interested in the products and services our new organization, BlackNet, has to offer. BlackNet is in the business of buying, selling, trading, and otherwise dealing with *information* in all its many forms.
The anonymous author made clear that BlackNet would use public-key cryptosystems to guarantee total and perfect security for customers. The marketplace in cyberspace, the anonymous author wrote, would have no way of identifying its own customers, “unless you tell us who you are (please don’t!).”
BlackNet was also elusive. “Our location in physical space is unimportant,” the message read: “Our location in cyberspace is all that matters.” The mysterious anonymous voice gave a fictional address: “BlackNet&lt;nowhere@cyberspace.nil&gt;.” That was obvious geek irony: .nil didn’t exist as a top-level domain. But the email then ominously added, “We can be contacted (preferably through a chain of anonymous remailers) by encrypting a message to our public key (contained below) and depositing this message in one of the several locations in cyberspace we monitor.” These locations were two Usenet groups — alt.extropians and alt.fan.david-sternlight — and of course the cypherpunks list itself. This didn’t look like irony.
The idea for BlackNet was to remain “nominally” nonideological. But the author of the ominous passage made clear that he considered nation-states, export laws, patent laws, and national security “relics of the precyberspace era.” These things served a nefarious purpose: expanding the state’s power and furthering what the BlackNet pioneers called “imperialist, colonialist state fascism.”
The curious email pamphlet then made clear that BlackNet was currently building up its “information inventory,” and that it was interested in acquiring a range of commercial secrets. “Any other juicy stuff is always welcome,” the anonymous voice added. BlackNet was specifically interested in buying trade secrets (“semiconductors”); production methods in nanotechnology (“the Merkle sleeve bearing”); chemical manufacturing (“fullerines and protein folding”); and design plans for things ranging from children’s toys to cruise missiles (“3DO”). Oh, and BlackNet was also interested in general business intelligence — for instance, on mergers and buyouts. “Join us in this revolutionary — and profitable — venture,” the message concluded.
May soon claimed credit for the letter. He had come up with BlackNet in the summer of 1993, as an example of what could be done, “an exercise in guerrilla ontology,” as he called it. The goal was to find a way to ensure fully anonymous, untraceable, two-way exchanges of information. The core idea was to use a public message pool as the main channel. The sender would use a chain of remailers to deposit a note in the pool anonymously and untraceably. The sender would have encrypted this message with the intended recipient’s public key. The intended recipient, and only the intended recipient, could then simply download, decrypt, and read the public message — and respond in the same way.
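The mechanics May describes can be sketched in a few lines. The following toy, with hypothetical users Alice and Bob and textbook RSA over deliberately tiny primes (illustrative only, not remotely secure), shows the core property of a message pool: everyone can download every ciphertext, but only the holder of the matching private key recovers a readable message.

```python
# Toy sketch of May's BlackNet-style message pool: senders deposit
# ciphertexts in one shared, public pool; recipients scan everything and
# keep only what decrypts under their own key. Textbook RSA with tiny
# primes -- for illustration only, NOT secure.

TAG = 7  # known marker so a key holder can recognize messages meant for her

def encrypt(msg, n, e):
    """Encrypt TAG followed by the message, one character per RSA block."""
    return [pow(m, e, n) for m in [TAG] + [ord(ch) for ch in msg]]

def try_decrypt(entry, n, d):
    """Decrypt a pool entry; return text only if the marker checks out."""
    nums = [pow(c, d, n) for c in entry]
    if nums[0] != TAG:
        return None  # not addressed to this key; decryption is gibberish
    return "".join(chr(c) for c in nums[1:])

# Two toy keypairs (modulus, exponent). Alice: p=61, q=53, d=2753.
alice_pub = (3233, 17)
bob_pub, bob_priv = (2773, 17), (2773, 157)  # p=47, q=59

# Anyone can deposit; the pool itself carries no addressing information.
pool = [encrypt("hi alice", *alice_pub),
        encrypt("hi bob", *bob_pub)]

# Bob downloads the whole pool and filters out his own mail.
mine = [m for m in (try_decrypt(entry, *bob_priv) for entry in pool) if m]
print(mine)  # -> ['hi bob']
```

Because recognizing one’s own mail is done purely by trial decryption, the pool reveals nothing about who is talking to whom, which is exactly what made the scheme so hard to police.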
The hoax was too good not to be true. May claimed that he had come up with BlackNet just for the purposes of education and research. But the interests he listed — in particular, the commercial secrets — seemed a bit too detailed for a prank. It was credible. What was even more remarkable is that May’s overall idea for BlackNet sounded like a business plan. It could actually work.
Al Billings, then an anthropology student at the University of Washington, responded on the list just a few hours after the invitation went online: “It had to happen,” he said. “Even if it isn’t real, it will happen soon enough. I’m all for it.”
“I think it’s not real, or at least wasn’t intended to be,” another cypherpunk responded, picking up on the subtle irony. “My best guess is that it’s all a joke, but that the author will soon start receiving genuine replies; it may yet turn into the real thing.” They were both spot-on.
The master cypherpunk had created a “proof-of-concept implementation of an information trading business with cryptographically protected anonymity of the traders,” in the words of Paul Leyland, an Oxford University number theorist and cryptographer.
The editors at Wired apparently didn’t spot the bluff. The November 1993 issue announced BlackNet as if it were real: “Sent to us anonymously (of course),” Wired wrote and then quoted at length from May’s faux pamphlet. A few months later, in February 1994, Lance Detweiler, a former computer science student from Colorado who had become a cypherpunk troll, posted the BlackNet announcement to more than 20 newsgroups and lists, pushing it out to many thousands of recipients.
Some copies made their way into sensitive networks. Oak Ridge National Laboratory issued an advisory to employees and recommended reporting any contacts with BlackNet to supervisors. Indeed, May soon did receive genuine replies and a number of strange propositions, including an offer to sell information about how the CIA was blackmailing diplomats of certain African nations in Washington and New York. May says he decrypted the message with BlackNet’s private key and then put it away and never responded.
One of the main motivations for BlackNet, as May had outlined, was economic espionage. By the spring of 1994, the idea had made the rounds on forums, newsgroups, and the then novel World Wide Web, a browsable interface with clickable links that has become synonymous with the internet for most users. The FBI began taking the threat of BlackNet-organized industrial espionage seriously and started investigating. One anonymous post on the list claimed that two federal agents had interrogated Detweiler in Denver about BlackNet, and even correctly identified one of the agents.
The Feds also contacted May and other cypherpunks. The crypto cowboy was so concerned about the unwanted attention, and about his idea’s obvious potential for abuse, that he emphasized again and again that he wasn’t the person who had posted the initial message on the cypherpunks list. But BlackNet was never actually used as a message pool to sell information. One of the main ingredients was missing: digital cash, which existed only as an idea then.
Soon, an even nastier suggestion appeared on the list: a public hit list for politicians, set up as a portal for contract killing. It also came from a former Intel engineer, Jim Bell. He was one of the most radical minds on the cypherpunks list (Bell and May met only once, in the early 1980s). In August 1992, Bell had read an article by David Chaum in Scientific American titled “Achieving Electronic Privacy.” Chaum, then the head of the cryptography group at CWI, the Dutch national research institute for mathematics and computer science in Amsterdam, painted a dark picture: making a phone call, using a credit card, subscribing to a magazine, paying taxes, all these bits and pieces of information could be collected and combined into “a single dossier on your life — not only your medical and financial history but also what you buy, where you travel and whom you communicate with.”
A scary idea, Bell thought when he read this. But Chaum’s suggestion for achieving electronic privacy inspired Bell in unexpected ways. Chaum suggested a method for using as identifiers not real names or Social Security numbers, but “digital pseudonyms.” Such pseudonyms would make it much harder to connect various bits and pieces of information back to the same actual individual, just like in the good old days when cash payments were the norm.
“A few months ago, I had a truly and quite literally ‘revolutionary’ idea,” Bell wrote in April 1995, still inspired by Chaum’s ideas, “and I jokingly called it ‘Assassination Politics.’” Except Bell wasn’t joking. Assassination Politics was the title of a sincere 10-part essay that was to become highly influential. To Bell, Chaum had it the wrong way around: The Amsterdam cryptographer was asking how the freedoms of ordinary life could be reproduced on the internet. For Bell, the more exciting question was, again, the opposite: “How can we translate the freedom afforded by the Internet to ordinary life?”
Bell’s basic concerns seemed innocent enough: keeping the government from banning encryption and digital cash, the new technologies of freedom. But Bell reversed the cypherpunks’ classic motivation: he didn’t just want to ensure that well-established brick-and-mortar freedoms would apply on the new internet; he wanted to translate new internet freedom to the established social norms of ordinary life. Chaum suggested shaping cyberspace in the image of the real world; Bell suggested shaping the real world in the image of cyberspace. This reversal enabled a revolutionary outlook.
Bell suggested a market for assassinations. The MIT graduate had the idea to set up a legal organization that would announce a cash prize. The winner of that cash prize would be whoever correctly “predicted” the death of a specific person. The person on the hit list, Bell clarified, would be a violator of rights already, “usually either government employees, officeholders, or appointees.”
The fact that the suggested system was targeting the government made all the difference: Bell didn’t claim that a person who hires a hit man is not guilty of murder. Obviously, the victim could be innocent. But he was not dealing with innocents. He wouldn’t even initiate the use of force. The government, after all, held a monopoly on violence, and taxing citizens as well as enforcing the law represented a use of force already. By “taking a paycheck of stolen tax dollars” and being tied to police agents, a government employee had already violated the “nonaggression principle.” Therefore, “any acts against him” — the government employee — “are not the initiation of force under libertarian principles.” Killing government employees, Bell argued, was a legitimate form of self-defense. Crypto was, indeed, like guns.
Strong crypto made legitimate hits possible — at least in Bell’s mind. At the heart of his suggested assassination politics was a wish hit list administered by an organization. The list was to be made public. It had two columns: one with a government employee’s name, and one with money pledged for that person’s “predicted” death. (Bell always put that “predicted” in quotation marks, for somebody had to make the prediction come true). Anybody holding a grudge against a particular politician or government agent could bet a small, or large, amount of money on that person’s life. “If only 0.1% of the population, or one person in a thousand, was willing to pay $1 to see some government slimeball dead, that would be, in effect, a $250,000 bounty on his head,” Bell explained. The bounty, once large enough, would be a market-driven incentive for assassins.
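Bell’s arithmetic checks out under an assumption implicit in his figures, a US population of roughly 250 million in the mid-1990s:

```python
# Sanity check of Bell's $250,000 bounty figure. The population value is
# an assumption (roughly the US population in the mid-1990s); Bell's
# quoted sentence implies it but does not state it.
population = 250_000_000
willing = population // 1_000   # "one person in a thousand"
bounty = willing * 1            # each pledges $1
print(bounty)  # -> 250000
```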
Bell’s suggested system worked something like this: “Guessers” would create a file with their “guess” — that is, the politician or bureaucrat’s name and the time stamp of his assassination. The person making the “prediction” would then encrypt that file with a private key, so that nobody else could read or edit this information without having access to the private key. Next, the “guesser” would put the sealed envelope and some digital cash in a second envelope, which would be encrypted with the organization’s public key, so that only the prize-giving organization could open the envelope. The money was needed to avoid a large number of random guesses, Bell reasoned.
Once the hit had been made, the victim’s death would become publicly known through press reports. The winner could then send an encrypted envelope to the organization that contained two things: a private key and a public key. The organization could use the private key to verify that the winner had correctly predicted the hit by opening the previously submitted bet with this private key. But how did the winner get paid? The second element in the envelope was a public key to which only the winner possessed the private key. The public key would effectively be used to “transfer” the prize cash to the winner — by publishing it online, so that everybody could see it. The winner, and only the winner, could then download the encrypted cash and unlock it with the matching private key. All this would be entirely untraceable. “Perfect anonymity, perfect secrecy, and perfect security,” Bell boasted.
Bell’s vision was truly revolutionary. Nothing, he believed, would stay the same:
Just how would this change politics in America? It would take far less time to answer, “What would remain the same?” No longer would we be electing people who will turn around and tax us to death, regulate us to death, or for that matter sent [sic] hired thugs to kill us when we oppose their wishes.
Bell, predictably, became a divisive figure. Then–IRS inspector Jeff Gordon compared him to terrorist Timothy McVeigh, who bombed a federal building in downtown Oklahoma City in April 1995, killing 168 people. On the other end of the spectrum was John Young, an architect and cypherpunk who would found the first whistleblowing portal in 1996: Cryptome. Young nominated Bell for a Chrysler Design Award for creating an “Information Design for Governmental Accountability.”
Crypto anarchy, some successes notwithstanding, seemed to meander toward the fringe. But curiously, the ideology lacked a proper book-length treatment. There was graffiti, as well as a deluge of rambling emails, magazine stories, interviews, and the messy and disorganized Cyphernomicon. The cypherpunks wrote code, but not books. May, the movement’s self-styled essayist, had tried and failed.
Then, in 1997, Simon & Schuster, one of the big New York publishing houses, published “The Sovereign Individual.” It was a strange book, full of apocalyptic yet optimistic predictions. The two authors, inspired by the political philosophy of cypherpunk, left out the jargon and the arcane crypto discussions, yet kept the boldness: cyberspace was about to kill the nation-state, they argued.
Lord William Rees-Mogg was a prominent, albeit sharply controversial, figure in British public life. From 1967 to 1981, the owlish Rees-Mogg was editor of the Times; he later served as chairman of the Arts Council of Great Britain and as the BBC’s vice-chairman. In 1988 he was made a life peer in the House of Lords, as Baron Rees-Mogg of Hinton Blewett in the County of Avon. His coauthor was James Dale Davidson, a conservative American financial commentator and founder of the National Taxpayers Union, an advocacy group.
“As ever more economic activity is drawn into cyberspace, the value of the state’s monopoly power within borders will shrink,” Rees-Mogg and Davidson predicted. “Bandwidth is destined to trump the territorial state.” To back up their futurology, the two pundits called on the acid-dropping former cattle rancher from Wyoming, John Perry Barlow. He had it right, they said: “Antisovereign and unregulatable, the Internet calls into question the very idea of a nation-state.”
Echoing May and the cypherpunks, they argued that the state’s threats of coercion would simply be ineffective online, shielded by strong crypto. “The virtual reality of cyberspace,” they wrote, “will be as far beyond the reach of bullies as imagination can take it.” The advantage of large-scale violence, of police or military force, would be far lower than it had been at any time since the French Revolution. Individuals would no longer need, or tolerate, sovereign states above them. The age of violence was over. The individual would now become the sovereign, effectively taking over from the state. Soon, most of the world’s commerce would be absorbed “into cyberspace,” a novel realm where the governments of old would have “no more dominion” than they exercised over the bottom of the sea or indeed the solar system’s outer planets. “In cyberspace, the threats of physical violence that have been the alpha and omega of politics since time immemorial will vanish.”
One big reason for this coming revolution was digital money. “Cybercash” would slash the state’s ability to control its citizens. In the near future, any commercial transaction would happen over the “World Wide Web,” paid for in untraceable digital cash. Taxation would become difficult, if not impossible, thus cutting the state back to size, if not destroying it entirely. As the two conservative authors put it in a twisted reference to Lennon and McCartney: “Cyberspace is the ultimate off-shore jurisdiction. An economy with no taxes. Bermuda in the sky with diamonds.”
The authors didn’t use the phrase “sovereign individual” as mere slogan. “One bizarre genius” could achieve the same impact in cyberwar as a nation-state, they argued confidently. The Pentagon was no more powerful than some teenage whiz kid. Technology had truly leveled the playing field in future confrontations: “The meek and the mighty will meet on equal terms.” The consequences were profound: “Nation-states will have to be reconfigured to reduce their vulnerability to computer viruses, logic bombs, infected wires, and trapdoor programs that could be monitored by the U.S. National Security Agency, or some teenage hackers,” Rees-Mogg and Davidson predicted.
An advance Kirkus review called the best-selling book “astonishing” and “penetrating.” The Vancouver Sun thought it was “must reading”; Toronto’s Financial Post described it as “sobering.” Reviews appeared in the Guardian and the Wall Street Journal. Predictably, the cypherpunks loved it. “The Sovereign Individual discusses many of the issues discussed on Cypherpunks,” one list member wrote. “Strongly suggested for any cpunk,” added Jim Choate, who had written one of the first remailers. Some were skeptical. The book lacked precision and was full of bold overstatements. And although the two authors never referred to crypto anarchy or the c-punks, the publication gave wider currency to an emerging political philosophy. But the book’s success was short-lived — sharing this fate with the utopian ideology from which it sprang.
Inspired by Vinge’s story and the cypherpunks list, a few entrepreneurs took the idea of the sovereign individual rather literally in those enthusiastic years before the crash of the New Economy. One of them was Ryan Lackey. Digital cash had fascinated Lackey since he was 15 years old. He had already started an e-money startup on Anguilla, a loosely regulated island, but he had run into trouble with the ruling family. An avid c-punk, he had hosted the list archives on an MIT server when he was a student in Boston. Lackey even looked like a textbook cypherpunk: head shaved bald, pale skin from spending too much time in front of screens, rimmed glasses with a black frame, and usually dressed all in black. Privacy and internet freedom, as he saw it, were under siege: laws everywhere, particularly in the United States, were getting more and more restrictive and authoritarian.
Several crypto anarchists had long been looking for an offshore jurisdiction to run the automats of freedom: remailers, racks of servers dishing out encryption, and machines minting digital cash. Anguilla initially seemed a good option, as did Tonga. Then, in June 1999, Lackey and Sean Hastings discovered a curious place in a book, How to Start Your Own Country, by Erwin Strauss. That place was the Principality of Sealand, a tiny artificial island on a World War II antiaircraft platform in the North Sea known as Roughs Tower. The rust-covered 550-square-meter platform sat on two giant hollow pontoons, 60 feet above the brown-green waves of the harsh North Sea, 7½ miles off Felixstowe, on the Suffolk coast.
On September 2, 1967, Roy Bates, a retired army major and World War II veteran, had declared the rig independent from Britain, bestowing the title “princess” on his wife (it was her birthday). Britain refused to accept the principality’s sovereignty, as did the US, the UN, and all other international organizations. Hastings and Lackey, however, had perfect timing. In 1999, Prince Roy was battling Alzheimer’s and his health was deteriorating. The “royal” family considered leaving Sealand, making it available for other uses. In November that year, Hastings visited the platform for the first time. He already had experience in offshore financing and online gambling in Anguilla, where he had met Lackey. After inspecting Sealand that November, the entrepreneurs were inspired. They decided to move forward.
“The biggest inspiration was Vernor Vinge, ‘True Names,’ ” recalled Lackey. The vision was to have individuals acting in the Other Plane, able to “live on hardware and transact stuff on their own and not have to be under any government,” Lackey recounted later in Palo Alto. These sovereign individuals were to be “the first class citizens,” he said. The principality was the weakest country they could find in terms of jurisdictional challenges. The deserted antiaircraft platform in the North Sea had simply no legal system, no police, and no law enforcement. Sealand seemed like the perfect place to start for would-be sovereign individuals.
HavenCo Ltd. was to be the world’s first real data haven, “physically secure against any legal actions,” the business plan promised. The idea was to combine the best of the “first world” — high-quality infrastructure — with the best of the “third world”: hosting data and running businesses “free of unnecessary regulation and taxation.” “Sealand is located less than three milliseconds (by light over fiber) from London,” the business plan promised, in language indeed reminiscent of “True Names.”
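The latency claim is at least plausible. A rough check, with both inputs assumed rather than taken from the business plan:

```python
# Plausibility check of the "less than three milliseconds from London"
# claim. Both numbers are assumptions, not figures from HavenCo's plan:
# a fiber route on the order of 300 km, and light in fiber traveling at
# about 200,000 km/s (roughly two-thirds of c).
route_km = 300
fiber_km_per_s = 200_000
one_way_ms = 1_000 * route_km / fiber_km_per_s
print(one_way_ms)  # -> 1.5
```

Even with a generous detour in the cable route, a one-way delay well under three milliseconds was physically realistic.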
Lackey, then 20 years old, became the chief engineer and moved to the barren platform. He spent the better part of two years on the rig. He was there mostly on his own, maintaining HavenCo’s operations, with others coming just when journalists were visiting by boat. The media loved the story of Sealand.
No doubt, the startup was a hot idea. It was so hot that Wired magazine had commissioned a cover story on it even before HavenCo had done anything. The magazine’s reporter, Simson Garfinkel, was literally on Lackey’s heels, in the same boat, when HavenCo’s chief engineer visited Sealand for the very first time, in January 2000. Lackey recalls that the Wired reporter was “very credulous.”
“The Ultimate Offshore Startup” was the magazine’s cover story in July 2000. “Meet the high-seas adventurers on a multibillion-dollar quest to build a fat-pipe data haven that answers to nobody.” On Wired’s cover was the rusty antiaircraft platform with helicopter landing pad. Assembled on the platform was a fictional team of nine people (there were only four). The entire rig was not just sticking out of the ocean; it was gargantuan, reaching through the clouds and the atmosphere all the way into space, above the entire round, blue planet, Whole Earth Catalog style. The magazine presented the North Sea rig as a veritable Bermuda in the skies with diamonds. What a public launch.
After the Wired story came out, managing media requests and frequent visits by journalists took up more time than getting actual business operations up and running. Meanwhile, other things didn’t go so well. Lackey had hired Winstar Communications to run a fiber-optic line from London to the shore. Winstar was one of the poster children of the late-1990s internet boom. But the overexposed company, with revenue of $445 million in 1999, went belly-up in 2001. The fiber connection to the coast was never built, let alone the “fat pipe” to Sealand that Wired had announced so credulously. There was only a wireless link.
HavenCo’s “crazy plan,” Lackey said, was to lay a repeaterless fiber cable from London to the Netherlands via Sealand. That cable also remained a fat pipe dream. The ambitious internet startup had to make do with a data transfer speed of just 10 megabits per second. That meant that downloading just one regular 1.5-GB movie would have taken nearly twenty minutes, clogging the pipe. “Low bandwidth killed the economics,” Lackey recalled.
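The movie figure is easy to verify, assuming a decimal 1.5 GB file and the full 10 Mbit/s link with no protocol overhead:

```python
# Back-of-the-envelope check of the download time quoted above.
size_bits = 1_500_000_000 * 8      # 1.5 GB expressed in bits (decimal GB)
link_bps = 10_000_000              # 10 megabits per second
minutes = size_bits / link_bps / 60
print(minutes)  # -> 20.0
```

In practice, overhead and a shared link would have made it slower still, which is the point: the whole data haven ran on less capacity than a single customer could saturate.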
Life on the c-punk rig was rough. Initially, the small HavenCo team had a nice spot in the edgy and hipster docklands in London. Living in East London was much more pleasant than on a deserted antiaircraft rig in the wintry sea. But money was in short supply, and living on Sealand was the cheapest option. Lackey moved there for good, now dining out of a pantry filled with canned food. Sean Hastings and his wife, Jo, brought a dog to Sealand from the Netherlands. But the black lab annoyed Lackey.
Then there was Colin, an English janitor in his 60s. Lackey didn’t get along with him either. To avoid running into Colin in the claustrophobic 5,000 square feet of interior space, Lackey lived on San Francisco time, even though he was off the coast of England — sleeping during the day, awake at night. He would practically hide in his room, or in the data center. “It was pretty boring,” he recalled. He would stay on the rusty rig for five to six months in one stretch, getting used to the permanent smell of diesel fuel. But he was busy tinkering with computer gear that he had smuggled in from the United States. “It really didn’t matter too much,” Lackey says. He was living in the Other Plane.
The rough life reflected HavenCo’s business situation. The company had five sturdy gray relay racks with blue plugs at the top, with space for 45 servers. But it managed to put in and rent out only a dozen machines. The company never successfully raised sufficient seed money, not even in the bullish market of the New Economy before the crash. And the budget quickly ran thin. One of HavenCo’s main investors, Avi Friedman, was worried about the Y2K problem, so he withdrew about $2 million in cash, in $100 bills, and kept the cash at home. He doled out $1,500 at a time, to make minimum payments. Lackey started using his own credit cards, spending ever more money that he didn’t have.
Businesses did not flock to the data haven as expected. By the summer of 2000, two of the three founders had jumped ship and left the startup. A year later, Lackey managed to keep the company afloat with about 10 customers, primarily casinos. A true cypherpunk at heart, Lackey ran a Mixmaster Type II anonymous remailer on the rig. That felt like the right thing to do, but it didn’t make the company any money. HavenCo’s business plan foresaw $25 million in profits in year three; Lackey ended up losing $220,000 after three years. One day in early 2001, Lackey was standing on the platform overlooking the wide North Sea horizon, when his phone rang. It was Google, offering him an engineering job. But he still believed the cypherpunk data haven could take off. He turned the offer down.
By the end of the decade, crypto anarchy had a mixed record of success. On the one hand, many of the cypherpunk projects had flopped: the mailing list was in decline, and many of the projects that the activists had promoted with such youthful optimism—remailers, PGP, message pools, digital cash, offshore hosting—remained on the fringe, or they had failed outright. The cypherpunks were looking for individual sovereignty, a Bermuda in the sky with diamonds; what they found was a lone geek on a rusty rig in the North Sea without cash. Yet the ideology of crypto anarchy would become spectacularly successful, even without a nonfiction best seller to spread the gospel.
Like many other libertarians, May was fascinated by Friedrich Nietzsche’s philosophy; he even called his cat Nietzsche. “Crypto is not going to enable the bottom 90 percent,” May was sure. “Crypto enables the Übermensch,” he believed. Just like a generation before him, some tech pioneers believed that cyborg technology would enable the superhuman. May ended up making a “huge amount” of money, he says, by investing in promising ideas and companies that he first learned about through the list.
“This may smack of elitism,” May realized, “but I have very little faith in democracy.” Instead, the anarchists put all their faith in technology. The power of large primes trumped the power of large institutions. Math decided, not man. Even under the most adverse conditions, technology was trustworthy — even if laws, even if society, and even if corrupt governments could not be trusted. May’s vision was nothing less than one of an automated political order.
Crypto anarchy embodied the unshakable cybernetic faith in the machine. It combined Wiener’s hubristic vision of the rise of the machines with Brand’s unflinching belief that computers and networked communities would make the world a better place. A direct line connects the techno-utopianism of Timothy Leary to the techno-utopianism of Timothy May, cyberpunk to cypherpunk. Leary felt empowered by the personal computer. For May, just one ingredient was missing: the power of prime numbers. “Cryptography provides for ‘personal empowerment,’” he wrote in 1999.
The cypherpunks had not a trace of doubt that crypto itself was libertarian, that increasing its use would steadily increase degrees of freedom available to the individual. “This is just an inevitable consequence of technology,” May said. To the disciples, servers running remailers and encryption services were libertarian automata, subversive political machines. Whatever their input, their output was freedom.
Sometime in 1999, May looked back on the momentous changes of the previous two decades:
The full-blown, immersive virtual reality of ‘True Names’ may still be far off, but the technologies of cryptography, digital signatures, remailers, message pools, and data havens make many of the most important aspects of ‘True Names’ realizable today, now, on the Net. Mr. Slippery is already here and, as Vernor [Vinge] predicted, the Feds are already trying to track him down.
May need not have spoken in oblique science fiction metaphors. He didn’t know it, but he was right. As he wrote these lines, the Feds were actually busy tracking down a real-world Mr. Slippery in the vast networks of the US military establishment. That Mr. Slippery was doing exactly what May himself had predicted a few years earlier: stealing vast amounts of commercial and military secrets, encrypting them, relaying scrambled versions of these files on machines in third countries, and then exfiltrating them to machines that seemed beyond the reach of the government. And, as in Vinge’s story, the Feds were unable to stop the data theft. That metaphorical Mr. Slippery was not a freedom-loving American citizen. The FBI, after a long and painstaking investigation, was able to determine the culprit in the Other Plane: an intelligence agency of one of the most resourceful rivals of the US.