Recently, Americans had to deal with two expensive failures of infrastructure that millions of people use every day. In one of these failures, the blackout that struck much of the Northeast, the mass media quickly turned from a description of the blackout itself to a discussion of faults in the infrastructure, namely, the power grid. In the other, the one-two-three punch of Blaster, Welchia, and SoBig.F, hardly anyone in the mainstream is discussing the weakness in the corresponding infrastructure, Microsoft Windows, which has helped the worms spread so rapidly.
(A welcome exception is this Washington Post column, brought to me courtesy of Kevin Drum, who also demonstrates why experienced Windows users will put off “critical” patches to their operating systems as long as they have something better to do with their time, e.g., root-canal work.)
Linux and Mac fans, of course, have noticed this media tunnel-vision every time a Windows exploit makes the front pages, all the way back to the Melissa virus. Every time it happens, we grind our teeth, mutter imprecations against Bill Gates, and wonder when reality will catch up with Microsoft’s marketing budget. I would suggest, however, that the way we geeks talk about security affects the way suits (er, normal people) think about security, in a way that discourages them from seeking alternatives to Microsoft.
Geeks of every OS religion share a vocabulary of computer security, and most of the terms in that vocabulary have military connotations. Passwords are cracked, and systems are penetrated, by exploits. Hosts on a network that need to be exposed to the public, but don’t have sensitive information on them, are put in a DMZ. Some operating systems have back doors that bypass whatever security their administrators have set up; in other cases, a program might accept an input that smashes the stack. Crackers find vulnerable dialup lines through war dialing and find vulnerable wireless Internet connections through war driving. Disgruntled employees set logic bombs to take revenge on their managers. A file that was advertised as a Britney Spears picture turns out to be a Trojan horse. You get the idea.
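For readers who have only met “smashing the stack” in news stories, here is a minimal C sketch of the kind of bug the phrase describes (the function is invented for illustration; the mistake is the classic one):

    #include <stdio.h>
    #include <string.h>

    /* The buffer holds 16 bytes, but strcpy() copies however many bytes
     * the caller supplies.  An input longer than the buffer overruns it and
     * overwrites whatever the program had stored next to it on the stack,
     * including the address it intended to return to. */
    void greet(const char *name)
    {
        char buffer[16];
        strcpy(buffer, name);   /* no length check: the classic mistake */
        printf("Hello, %s\n", buffer);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);     /* argv[1] is chosen by whoever runs the program */
        return 0;
    }

Feed this program an argument longer than its buffer and it misbehaves; feed it a carefully crafted one and it can be tricked into running code of the attacker’s choosing.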
People who are familiar with computer security understand where the dramatic metaphor ends and where prosaic reality begins. If I have a physical firewall around my computer, and someone lights a physical fire outside of it, the safety of my computer depends on the resources of the arsonist: with the right chemicals, any firewall can be turned to rubble. If I have an electronic “firewall” between my computer and the public Internet, and the firewall is configured to block all incoming traffic, the world’s most brilliant network engineers with the world’s most powerful computers will not be able to override the firewall simply by sending packets to it over the Internet.
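To make the contrast concrete, here is a toy packet filter written in C. It is purely illustrative, not modeled on any real firewall’s internals, but it captures the policy described above: every packet arriving from the public Internet is dropped before its contents are even looked at.

    #include <stdio.h>

    enum verdict { DROP, ACCEPT };

    struct packet {
        int from_internet;            /* nonzero if it arrived on the public side */
        const unsigned char *payload; /* bytes chosen by the sender */
        unsigned long length;
    };

    /* The verdict depends only on which side the packet came from,
     * never on the sender-controlled payload. */
    static enum verdict filter(const struct packet *p)
    {
        if (p->from_internet)
            return DROP;              /* block ALL incoming traffic */
        return ACCEPT;                /* traffic from the inside may pass */
    }

    int main(void)
    {
        struct packet evil = { 1, (const unsigned char *)"\xde\xad\xbe\xef", 4 };
        printf("verdict: %s\n", filter(&evil) == DROP ? "DROP" : "ACCEPT");
        return 0;
    }

Because the decision is made before the payload is ever inspected, there is no sequence of bytes a remote attacker can send that changes the answer; the only way past such a rule is to change the rule itself.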
But try to think like someone who doesn’t know much about computer security, doesn’t have the time or inclination to learn, and doesn’t know how to interpret the metaphors. Microsoft is the largest and wealthiest software company in the world, and Windows and Office are their flagship products. Surely, if they are vulnerable to computer viruses, then any comparable products from any competitor must be at least as vulnerable. Any claim that an operating system written by a bunch of volunteers is more secure than Windows doesn’t deserve a moment’s serious consideration — you might as well say that instead of using the United States Army to restore order to Iraq, we’d be better off sending in a few high-school marching bands.
When people talk about computer worms and viruses, or say their server is infected, they are of course using a biological metaphor instead of a military one. Many geeks also assert that the Windows “monoculture” in the computer world makes it easier for worms to propagate. But they, too, are confusing the metaphor with the facts.
Why is biodiversity such a good defense against biological pathogens? Beyond a certain point, you can be too effective at defending yourself against bacteria — so effective that you starve to death, or suffer allergic reactions to all the available food, instead of eating something that might infect you. Every organism has to trade off a need to protect itself against infection with a need to eat, and breathe, and otherwise interact with a disease-filled world. For a species whose reproductive cycle is a few thousand times slower than a bacterium’s, part of that trade-off involves making your body chemistry as distinct as possible from your neighbors’, to minimize the chance that a bug that infected your neighbors will be able to infect you. By the same token, a pathogen that can infect a wide range of hosts is paying a metabolic cost for its ability, and risks losing the evolutionary competition to a strain that can attack a smaller range of hosts with a lower energy expenditure.
Thus, in nature, we see pathogens that specialize and hosts that individualize. Diseases that cross from one species to another are the exception rather than the rule, and even for extremely virulent diseases (e.g., smallpox and Ebola), some hosts in the target species are naturally immune.
But these budget constraints, so to speak, do not apply to computers or computer “viruses”. They are artifacts constrained by human desire and skill, not organisms constrained by natural selection. A computer needs electricity to run, not a fresh supply of executable software from sources that its owners cannot trust. If the authors of SoBig.F had designed it to attack Linux as well as Windows systems, the worm would have taken up a few more kilobytes of hard-disk space and taken a few more milliseconds to travel from one host to another, but this burden would not have saved a single Windows machine from infection.
This is how to explain computer security to the general public:
Disguise, not force, is the essence of the confidence game. A criminal masquerades as a bank examiner, or a roll of dollar bills is switched with a stack of worthless paper. Likewise, in order to subvert a computer system without physically touching it, the attacker must pretend to have privileges that he or she does not legitimately have.
Money and power can insulate a person or corporation against physical attacks, but not against cons. Consider, for example, the phoner toner scam: an office worker buys “discount” printer toner from someone who pretends to be the company’s regular supplier, and receives a case of poor-quality toner and an overinflated bill. The owner of a small company with low turnover can give all of its employees corporate credit cards, tell everyone to be careful, and trust that losses to this kind of scam will be rare and controllable. The directors of a large company can limit which employees have the power to spend the company’s money and impose rules on how it may be spent (for example, by only allowing them to get office supplies from authorized vendors). But a large company that doesn’t impose such controls is going to leak money — from “phoner toner” and similar scams, from employees who are careless about spending their employer’s funds, and from deliberate fraud by insiders.
Imagine a company that starts out by treating its assets as a pool from which every worker can draw, grows into a huge firm without changing this attitude, and suffers humiliating losses from one crook after another. The directors add financial controls, but employees, even the honest ones, bitterly resist them. Why? Over the years, cliques within the firm have developed informal systems for sharing and exchanging resources. These systems have served the company and its customers well, but they are incompatible with the new regime. To avoid outright revolt, the directors quickly scale back their plans, trying to minimize the imposition on their employees, and institute piecemeal reforms. But because of the company’s great wealth, the large number of employees in positions of financial responsibility, and its reputation as an easy mark, other criminals continue to probe the weaknesses in the system, and the losses continue.
That company is a metaphor for Microsoft Windows. Since its pre-Internet days, Windows has been designed to make it easy for programs to share information. If you’re a programmer who writes applications for Windows, or who is adding features to the operating system itself, this is very convenient. If, however, you’re trying to secure a Windows machine, or fix a Windows security hole without breaking something else, the reverse is true.
For example, after installing the recent patches to shield themselves from the Blaster worm, some users report that they can no longer connect to remote Microsoft Exchange email servers. By contrast, your humble author found out about an SSH security hole while composing this message, and upgraded the machine that serves this Web page without even breaking his laptop’s connection to it.
Even with the best tools, it is hard to write robust software, and Microsoft’s popularity makes it an especially attractive target for vandals. But these two facts are not enough to explain the high cost of Windows security holes. This cost is a by-product of the design of Microsoft Windows, and until Microsoft’s customers demand some fundamental changes in that design, they will continue to pay that cost.
Before open-source fans get too smug, I should remind them that BIND and sendmail, two venerable open-source packages, have long been poster children for bad security. Judging from what others have written about them, they suffer from the same design weaknesses as Windows, on a smaller scale. Fortunately, both of them have a variety of open-source competitors that provide almost all of the same functionality.