Cracking the system:

How do intruders break into computers?

How do the big systems stop them?

by Joe Flower


A version of this article appears in New Scientist, September 1, 1995

International Copyright 1995 New Scientist


  1. The Break-In
  2. Anti-hero of the Nineties
  3. How the system cracker works
  4. Once inside
  5. Up against the firewall
  6. Box: Why do crackers crack?

The Break-In

It was 2 o'clock in the afternoon on Christmas Day, 1994, at a cottage near the beach in San Diego, where California abruptly meets Mexico. Most humans were at Christmas dinner, or relaxing with friends. Tsutomu Shimomura, the owner of the cottage, was away in San Francisco. There was no sound in the cabin but the traffic outside, the distant booming of the surf, and the fans of Shimomura's three computers.

Computers have an exquisite sense of time, but no sense of holidays, and at this precise moment Shimomura's computers suddenly began working very hard on an unusual project: downloading much of their contents to a stranger somewhere out there in the electronic void.

These were no ordinary computers, and Shimomura no ordinary home-computer maven. Tsutomu Shimomura, in fact, is a name to conjure with, in the tight land of computer security. A 30-year-old computational physicist, Shimomura works for the federally-funded San Diego Supercomputer Center (SDSC). His three computers were connected to the private SDSC network, and through that network, to the Internet. Two were hefty Sun SPARC workstations, each powerful enough to run a major commercial computer network alone. The third held thousands of documents and programs about computer hacking, and defenses against it. In moments copies of all of them would be spirited away into the ether of cyberspace.

It started at nine minutes after two, when the three computers began receiving a series of commands that requested information. Someone in cyberspace was trying to discover the details of the relationship between the three. This only took three minutes.


14:09:32 toad.com# finger -l @target
14:10:21 toad.com# finger -l @server
14:10:50 toad.com# finger -l root@server
14:11:07 toad.com# finger -l @x-terminal
14:11:38 toad.com# showmount -e x-terminal
14:11:49 toad.com# rpcinfo -p x-terminal
14:12:05 toad.com# finger -l root@x-terminal

One of the two SPARC workstations was the "server," the central switching point for the local network. After a pause of six minutes, suddenly a request for connection arrived at the server from an address out in the Internet.


14:18:22.516699 130.92.6.97.600 > server.login: S 1382726960:1382726960(0) win 4096

The address was forged and meaningless, but the server had no way of knowing that. It responded automatically, by sending an acknowledgment, and waiting to hear back, with its "login" port, its entryway, half open. It didn't hear back. Instead, it received another request for connection, then another and another, thirty in all, in only three seconds. The server acknowledged the first eight requests and sat waiting to hear back. The rest it couldn't answer. It would, in fact, ignore any other messages, like a mailbox stuffed too full for even one more letter. No one would be able to do anything to interfere with that computer while the outsider completed his attack.
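The mechanism is easy to model. A toy sketch, in Python, of the server's small queue of half-open connections -- not real networking code; the queue length of eight matches the behaviour described above, and everything else is invented for illustration:

BACKLOG = 8                 # half-open connections the listener will hold (as above)

half_open = []              # connection attempts still waiting for their final handshake step

def syn_received(source_addr):
    """Handle one incoming connection request (a TCP "SYN") -- toy model only."""
    if len(half_open) >= BACKLOG:
        return "ignored"    # queue full: the port is effectively deaf
    half_open.append(source_addr)
    return "acknowledged"   # acknowledgment sent to the (forged) address; no reply will come

# Thirty requests arrive from a forged, unreachable address in a few seconds:
results = [syn_received("forged-host") for _ in range(30)]
print(results.count("acknowledged"), "acknowledged;", results.count("ignored"), "ignored")
# -> 8 acknowledged; 22 ignored -- and legitimate callers are now locked out too.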

The other SPARC station was used as an "X-terminal," its software configured as a graphical, windowing environment (the X Window System) made for such large machines. A fraction of a second later, the X-terminal received a request for connection from a real address, then another, then another, 20 in all within 10 seconds, all from the same address.


14:18:25.906002 apollo.it.luc.edu.1000 > x-terminal.shell: S 1382726990:1382726990(0) win 4096
14:18:26.094731 x-terminal.shell > apollo.it.luc.edu.1000: S 2021824000:2021824000(0) ack 1382726991 win 4096
14:18:26.172394 apollo.it.luc.edu.1000 > x-terminal.shell: R 1382726991:1382726991(0) win 0

The X-terminal acknowledged each one, but got a "reset" message in return, as if the caller had changed his mind. In fact, the caller did not really want a connection -- he only wanted to see the pattern of the "sequence numbers" that the station gave out with each response. A computer connected to the Internet will give out a new sequence number for each new connection, and it will only accept a response that gives back that sequence number, plus one -- that's how it keeps many different conversations straight. When you contact a computer through the Internet, it gives your call, for example, the number 10,000. The response from your end must be numbered 10,001 -- or the computer will not recognize it as a response. But suppose your message is wearing a disguise -- suppose it carries the return address of some other computer, some computer that your target trusts implicitly? The acknowledgment, with the sequence number, will go to whoever you are pretending to be, not to you. You will never see the sequence number, but you have to be able to fake the right response. The way around this problem is to find the pattern in the sequence numbers, and respond to the next one in the pattern.

The mysterious faceless caller to the X-terminal found the pattern that he was looking for: each new response took a sequence number exactly 128,000 greater than the one before. Within half a second, another message requesting connection arrived at the X-terminal, but this one did not carry a return address out in the Internet. This one claimed to be from the server, the X-terminal's brother SPARC machine. It was a forgery, but the X-terminal did not know that. It sent an acknowledgement to the real server, which ignored the message, because it had its hands full. But within another quarter second, a forged reply with the proper sequence number arrived from the intruder, and the X-terminal accepted the connection, believing that the mysterious stranger was its own server.
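The arithmetic of the forgery is simple once the pattern is known, as a short Python sketch shows (the sample numbers are invented, but chosen so that the prediction lands on the acknowledgment number that appears in the log excerpt below):

# Sketch of the sequence-number prediction. The probes showed the X-terminal's
# initial sequence numbers climbing by exactly 128,000 per connection. The sample
# values below are invented, but chosen so that the prediction lands on the
# acknowledgment number (2024384001) visible in the forged session logged below.

samples = [2024000000, 2024128000, 2024256000]   # sequence numbers seen in probe replies

step = samples[-1] - samples[-2]                 # 128000: the pattern the probes revealed
assert all(b - a == step for a, b in zip(samples, samples[1:]))

predicted = samples[-1] + step    # the number the X-terminal will hand out next
forged_ack = predicted + 1        # "that sequence number, plus one"

print(predicted, forged_ack)      # 2024384000 2024384001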


14:18:36.245045 server.login > x-terminal.shell: S 1382727010:1382727010(0) win 4096
14:18:36.755522 server.login > x-terminal.shell: . ack 2024384001 win 4096

Now the intruder had one foot in the door -- a connection, but one that only went one way. The intruder could put files and lines of code into Shimomura's X-terminal, but everything the terminal sent back would go to the return address on those messages: the real server on the other SPARC machine. So the intruder added a crucial line to one file, which told the X-terminal, in effect, to "let anyone in, from anywhere, without challenge."
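On Unix machines of this era, the natural home for such a line is an rhosts-style trust file, where a single wildcard entry tells the "r" services (rlogin and rsh) to admit any user from any host. A sketch of the idea, in Python -- the file path and the "+ +" wildcard are standard BSD conventions, offered as an illustration rather than a reconstruction of what the intruder actually sent:

# Illustration only: appending a wildcard trust entry to an rhosts-style file.
# To the BSD "r" services, a line reading "+ +" means "any user, from any host,"
# so rlogin and rsh will afterwards accept connections without a password.
# The path is illustrative; this is not a reconstruction of the intruder's command.

with open("/root/.rhosts", "a") as trust_file:
    trust_file.write("+ +\n")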

Now the door would open all the way. So the intruder shut down the one-way connection and logged on again with a full two-way connection. Now he could start his real work, ransacking Shimomura's files.

He might have gotten away with it, except for a few details. The first was a tiny program that he did not notice, which was dutifully logging every detail of the transaction and sending the log to a safe place elsewhere on the Internet. The second detail was the two taunting messages that appeared on Shimomura's voice mail days later, their strange whimpers and cries punctuated by computer-modified voices repeating such phrases as: "My technique is the best. Don't you know who I am? Me and my friends, we'll get you. My style is the best. Your technique will be defeated. Your technique is no good."

Tsutomu Shimomura, it turned out, was not a good man for this intruder to taunt. Shimomura, in fact, may have been the one person best equipped to track down the intruder, and he decided to do just that. A month later, when the purloined files showed up on the Well on-line system, he and a pick-up team of experts, backed by the Well's Hua-Pei Chen, set up shop with their laptops in a spare office at the system's Sausalito, California, headquarters to set traps and lie in wait for him. When someone using similar techniques and addresses invaded the computers of the San-Jose-based Netcom On-Line Communication Services and copied 20,000 credit card numbers, Shimomura moved south to the heart of Silicon Valley, and tracked the intruder as he copied files from Apple Computer, Motorola, and other companies. Within days, Shimomura was across the continent, cruising the streets of Raleigh, North Carolina, with a technician of the local cellular phone company, zeroing in on the hijacked cellular signal that was the intruder's lifeline to the net.

At 2 o'clock in the morning on February 15, FBI agents pounded on door number 202 in the Player's Club apartment complex in Duraleigh Hills, just outside Raleigh, and arrested a man with wavy brown hair pulled back into a pony tail. It was Kevin Mitnick, a notorious 31-year-old veteran net intruder, who had already served a year in federal prison for cracking computer nets to steal telephone codes, and who had gone missing almost three years before.

Anti-hero of the Nineties

Despite his rhetoric about "my friends," Mitnick worked alone. Yet he is only the latest and most brazen example of the anti-hero of the Nineties. Reports of computer break-ins have continued to skyrocket, from 1,334 in 1993 to 2,341 in 1994. The Federal Law Enforcement Training Center near Brunswick, Georgia, now trains police officers in cybersleuthing, and the FBI has set up a National Computer Crime Squad. Computer intruders show up in movies and in headlines, and litter the language with new words like "foo," and new meanings for words like "spoof," "virus," "worm," "tunnel," "trust," and "letter bomb." Nothing is safe, it seems, from this cyberspace Ninja, this shape-shifter intruding in private systems, taking files, planting viruses, trashing systems, snooping.

The public has come to call these computer intruders "hackers." Many computer network experts object to the term. This, they argue, is like mixing up burglars and locksmiths. To them, a "hacker" is someone with skill who attempts to exceed limitations -- not someone who breaks into systems.

By whatever name, cybersneaks have entered the public consciousness. How close are the image and the reality? How do these electronic ghosts actually crack systems? What can a big system do to stop them?

How the system cracker works

An intruder in cyberspace works much like an intruder in the physical world. There are lots of fancy ways to get into a building, from lock-picking to the human-fly techniques of climbing drainpipes. But most burglars, most of the time, try the easy way first: they twist doorknobs and push up on windows to find one nobody locked.

Despite the mounting fears about security in cyberspace, a surprising number of systems do the equivalent of leaving their doors unlocked: leaving active password files in directories open to the public, keeping accounts with, for instance, a login name "customer" and a password "service," or allowing employees to use their last names, or their children's names, as passwords.

The Internet was not built for security. The designers of many of the protocols and programs that are the basic building blocks of cyberspace never thought about people with criminal intent and technical skill bent on breaking in -- so the designs are littered with little problems that can be exploited. For instance, on one proprietary system, with hundreds of discussion groups, the volunteer moderator of any group could create a special piece of software (called an "rc file") that would do certain tasks automatically for anyone who joined the discussion -- bringing them a menu, for instance. The problem? The software did the tasks under the user's account name, and it was written in "C" code, which cannot be read as plain English. So an intruder who cracked any of the hundreds of moderators' accounts could write code into the "rc file" that would temporarily commandeer the account of anyone who came to that discussion group, issuing instructions such as "send your password," or "copy all this account's email to another address" -- all in language that no one could find without analyzing thousands of lines of code in hundreds of files.

"Trust" is the biggest word in cybernet security and, on the Net if not in daily life, trust is "transitive." Follow the bouncing balls: suppose System A allows access to anyone on System B, perhaps because they are different branches of the same company, without any login or authentication procedure. We say that System A "trusts" System B. But what if System B trusts System C? Then System A trusts system C, too, whether it knows it or not. Furthermore, it trusts anyone System C trusts. Now, maybe A, B, and C are all tightly-run ships with no open hatches -- no unencrypted password files, no password-less "guest" accounts. But somewhere down this line of trust, maybe on System G, there's a hole. If a hacker finds that one hole, he has access to the entire string of trust, and he can begin working his way back along it, puncturing the walls of one system after another.

It's long, lonely work, hour after hour at the monitor, and according to most observers, it is almost entirely a young male activity. As a crime, this is not like sticking up a store. It's more like safe-cracking: it takes a lot of equipment, skill, and time.

Many system crackers try short-cuts. "Most of these guys don't do their own research," says net guru Matisse Enzer of Internet Literacy Associates. "They just hear about the security hole from their friends, and go try it out."

An intruder might penetrate a company's system through the biggest open door, going right in the front, and getting an account from the system administrator by methods called "social engineering," engaging the "wetware" -- the people who run the system -- rather than the software. Armed with no more than a legitimate employee's name, he'll try a phone call something like this:

"Hi, my name's John Barleycorn, I'm a new hire in Accounting. Elsie Farrell has asked me to help her with her computer, and she's having trouble making her password work. It would help if you would go in and manually change it to `Ragu'. That's the name of her cat."

Then there is "dumpster diving," going through corporate trash looking for lists of passwords, system diagrams, organization charts, hardware and software descriptions, anything that might guide the cracker.

Some system crackers find any contact with humans or trash cans rather crude and uninteresting, almost a kind of cheating. They will start, instead, with a program that will twist doorknobs and rattle windows on scores of systems, looking for the one hole they need. These holes take several forms.

The Internet is largely built of a series of relatively small programs, most distributed freely on the net, that perform specific functions. A number of these allow outsiders, without passwords, some limited types of access. Several programs, such as "sendmail," handle email. "File transfer protocol" ("ftp"), "gopher" and "Wide Area Information Servers" ("WAIS") allow outsiders to download programs and information that you have placed in special directories for the purpose. The "hypertext transfer protocol" ("http") allows anonymous strangers access to Web pages. "Network news transfer protocol" ("nntp") allows Usenet discussions into the system. Other programs, such as "telnet," "remote login" ("rlogin"), and "remote shell" ("rsh"), allow access, but can ask for a password.

Many of these programs contain subtle flaws that can be exploited. For instance, it is possible to use the headers of email messages sent through old versions of sendmail to deliver executable programs. In some cases, it is possible to use ftp to deliver programs into a system for the gopher software to execute. It is sometimes possible to "tunnel" into systems by encapsulating one kind of message (say, an ftp request) inside another (say, email).

Programs such as ISS (the Internet Security Scanner), Icepick, and SATAN (the Security Administrator Tool for Analyzing Networks) bring these door-rattling methods together in neat packages. SATAN, for instance, can scan hundreds of systems at high speed for a number of common, known vulnerabilities, including for instance: Can an outside user rewrite files in the ftp directory? Is the sendmail version in use an old one with known bugs? Can an outside user grab the password file?

These programs have great power in showing system administrators where the holes are in their own systems -- and at the same time they are powerful tools for hackers themselves. SATAN's authors released the program freely on the Internet last April, prompting a fever of activity by system administrators around the globe, plugging security holes, updating software, and changing passwords. Some firms immediately came out with SATAN detectors (including one named "Gabriel"), software that would report SATAN's peculiar pattern of rapid-fire multiple probes.
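Underneath the packaging, the door-rattling itself is not exotic. A stripped-down sketch, in Python, of the first thing such a scanner does -- trying a handful of well-known service ports on a host and noting which ones answer (the hostname is a placeholder; real scanners go on to probe each responding service for specific, known flaws):

# Minimal "doorknob-twisting" sketch: try to open a TCP connection to a few
# well-known service ports and report which ones answer. Real scanners then
# probe each responding service for specific, known bugs.

import socket

SERVICES = {21: "ftp", 23: "telnet", 25: "smtp (sendmail)", 79: "finger", 111: "rpcbind"}

def rattle(host, timeout=2.0):
    open_ports = []
    for port, name in SERVICES.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append((port, name))
        except OSError:
            pass                       # closed, filtered, or unreachable
    return open_ports

# The hostname below is a placeholder, not a real target.
for port, name in rattle("target.example.com"):
    print("port", port, "(" + name + ") is listening")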

Once inside

Once he finds his way inside a system, the intruder will set out to make himself its master. First he will cover his tracks by modifying the file that logs entries. Then he'll set out to find other passwords, by digging up a genuine password file (if he doesn't already have it). He'll crack the file by running a "dictionary attack," which tries every word in the dictionary, and every known name, against the file's encrypted entries, along with lists of passwords cribbed from elsewhere on the Net -- an approach which often works simply because people tend to re-use their passwords in one system after another. Or he may set up "Trojan horses," tiny substitutes for legitimate programs. Trojan horses buried in such programs as "login," "telnet," and "ftp," which often require passwords, can record all the passwords in a file, and periodically mail the file to the intruder. Other Trojan horses can open up new back doors in the system, so that the intruder can get back in if his original path is discovered and closed.
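The dictionary attack works because a stolen password file holds only one-way encrypted forms of the passwords: the cracker cannot decode them directly, but he can encrypt every candidate word the same way and look for matches. A minimal sketch, in Python (Unix systems of the period used the crypt(3) algorithm; this illustration substitutes a generic hash, and the accounts and passwords are invented):

# Sketch of a dictionary attack on a stolen password file. Real Unix password
# files of the era stored crypt(3) hashes with a two-character salt; this
# illustration substitutes SHA-256, and every account and password is invented.

import hashlib

def hashed(word):
    return hashlib.sha256(word.encode()).hexdigest()

password_file = {                      # account name -> stored (hashed) password
    "efarrell": hashed("ragu"),        # the cat's name, as in the phone call above
    "customer": hashed("service"),
    "jsmith":   hashed("x9!Tq#27vLp"), # a strong password the dictionary will never hit
}

dictionary = ["password", "service", "secret", "ragu", "letmein"]

for account, stored in password_file.items():
    for guess in dictionary:
        if hashed(guess) == stored:
            print(account, "uses the password", repr(guess))
            break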

Once he has compromised a system, the hacker will try to set himself up with "root" privileges -- that is, as if he is the system administrator, able to do absolutely anything to the system.

If he cracks a system that serves as a node on the Internet, passing messages on to other systems, the intruder may insert a "sniffer," a program that will collect all passwords that come through. Sniffers are named for Network General's legitimate Sniffer, a monitoring device that is useful to network administrators, but hackers' tiny sniffer programs are put to a truly nefarious use. If you "telnet" over the Internet to another system, and that system asks for your password, your reply carrying your password (without any encryption hiding it) will zigzag from node to node through the Internet. If any node along the way has been cracked, the sniffer may copy the password, squirrel the copy away in a file, and mail the file full of passwords to a hacker somewhere, so that the intruder can come visit your system -- on your account.
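Planting a sniffer takes privileged access to the compromised node, but the harvesting it does is trivial. A sketch, in Python, of what a sniffer does with cleartext it has copied from a passing telnet session (the captured text is invented for illustration):

# Sketch of the harvesting step only. Assume the cracked node has already copied
# the raw bytes of a cleartext telnet login passing through it; what remains is
# to pull out whatever followed the prompts. The captured text is invented.

captured = (
    "login: efarrell\r\n"
    "Password: ragu\r\n"
    "Last login: Mon Dec 26 09:14:02 from apollo\r\n"
)

harvest = {}
for line in captured.splitlines():
    stripped = line.strip()
    if stripped.startswith("login:"):
        harvest["user"] = stripped.split(":", 1)[1].strip()
    elif stripped.startswith("Password:"):
        harvest["password"] = stripped.split(":", 1)[1].strip()

print(harvest)   # {'user': 'efarrell', 'password': 'ragu'}
# A real sniffer squirrels entries like this into a file and mails the file out later.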

Finally, the intruder will try to discover what systems trust the one he has broken into. First he will search for the system's /etc/hosts.equiv file and the users' .rhosts files. These files list the other systems that this system trusts, and the trust often goes both ways.

Then, before passing on to another system, he can do whatever he likes, from reading mail and copying files (as trophies to show to other hackers) to erasing everything on the machine. Usually, though, he does little more than snoop: "Crackers rarely do damage, except by accident," says Enzer.

Up against the firewall

There is an easy way to defend an organization's network against all intruders: put a wall around it like a medieval city, cut all connections to the outside, unplug the modems, sever all ties to the Internet. But the Internet, with its email and its ease of access to information, is turning out to be a powerful and popular tool of commerce. How can you bring that power to every desktop in your organization, without allowing intruders the run of the place?

The walls of a medieval city were useless unless they had gateways. Private computer networks also have gateways to the outside world, and security-conscious organizations are increasingly building heavily-fortified gateways called "firewalls." A firewall is a set of computers that use various filters to allow only authorized messages to pass through.

Some filters are based on a series of rules that evaluate the source of the message and its type. Some of the rules might say, in effect: "If a message claims to be from a computer inside the network, but it arrived on an outside line, don't accept it -- it's a forgery." Or: "Don't accept any messages from wacko.com, we've had too many problems with them." Or: "Let through all messages headed for Port 25, the mail port." A careful screen of these rules can make the intruder's task much more difficult.

Large systems with complex firewalls often route all contact through a series of machines, each one blocking certain kinds of access and allowing others. The "outside" gateway, connected to the Internet, can only reach one machine inside the firewall, the "inside" gateway -- which doesn't trust the outside one, and only provides it certain limited services. In such a system, messages from outside may pass first to a "firewall router," a computer that does nothing but apply the filter and pass along the ones that survive -- it takes no messages itself, so it cannot be compromised. The messages then go to the outside gateway, which can perform such functions as distributing email to the internal network. But to get to the internal machines, the messages have to pass through an internal router or gateway, which applies its own set of filters, just in case someone has commandeered the gateway machine.
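The rule-based screening is compact enough to sketch. Here is a minimal first-match filter, in Python, implementing the three example rules quoted above (the inside address range, the packet fields, and the blacklisted domain are illustrative, not taken from any real firewall):

# Sketch of a first-match packet filter applying the three rules quoted above.
# Each rule is a (test, verdict) pair; the first test that matches decides.
# The inside address range, field names, and the blacklisted domain are illustrative.

from ipaddress import ip_address, ip_network

INSIDE = ip_network("192.168.0.0/16")   # addresses belonging to the private network

RULES = [
    # A packet claiming an inside source but arriving on the outside line is a forgery.
    (lambda p: p["iface"] == "outside" and ip_address(p["src"]) in INSIDE, "drop"),
    # We've had too many problems with wacko.com.
    (lambda p: p["src_host"].endswith("wacko.com"), "drop"),
    # Let through all messages headed for port 25, the mail port.
    (lambda p: p["dst_port"] == 25, "accept"),
]

def screen(packet, default="drop"):
    for test, verdict in RULES:
        if test(packet):
            return verdict
    return default                      # anything not explicitly allowed is refused

print(screen({"iface": "outside", "src": "192.168.4.7",
              "src_host": "server.inside.example", "dst_port": 25}))    # drop: forged source
print(screen({"iface": "outside", "src": "10.1.2.3",
              "src_host": "relay.elsewhere.example", "dst_port": 25}))  # accept: ordinary mail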

Used this way, firewalls hide the identity and routes of the machines on the inside network. If you send email to anyone in the organization, it goes to the gateway machine, which passes it on to another machine inside the firewall, which looks up the host machine handling email for that person. The gateway machine, which is more vulnerable to outside tinkering, will not have any map of the whole system in its files.

Services that allow people to deposit files or pick them up (such as email and ftp) can be handled entirely outside the "firewall," so that no outsider gets to make a connection straight through from the "outside" world to the organization's "inside" network.

"Application level" firewalls, which use special software rules for each application allowed through, get their fingers into the content of messages. For instance, a "gopher" server on a gateway, designed to accept files from the outside, could specify that the incoming file be in a special format such as "uuencode," which requires a filename, and then turn away all files that have suspect filenames such as ".rhosts."

Poor firewalls leak: they may, for instance, trust an outside computer that the company doesn't control, one in an employee's home or a consultant's office. They may allow services that require and accept incoming calls to random ports or, in hundreds of other ways, allow outsiders to do more to the system than is strictly required.

A good firewall can be so secure that nobody can get into it to alter its system programs -- not even the administrator who built it. According to William Cheswick, a senior researcher at AT&T Bell Laboratories who helped build the firewall for that company's internal networks, "If I want to work on the system files of the firewall machine, I have to physically walk over to it, turn it off, and re-boot it from the floppy disks." He and Steven Bellovin, co-authors of Firewalls and Internet Security: Repelling the Wily Hacker, claim, "We have never had an undetected illegal entry through our firewall."




Box: Why do crackers crack?

System cracking is hard, boring work. Even with newer software tools that automate much of the job, it involves endless hours in front of a computer screen, trying one technique after another. Why do cybersneaks bother? Burglars at least end up with booty they can sell. "There are certainly commercial hackers," says Cheswick, "industrial spies, you might call them, paid by someone to ferret out company secrets. It has happened to us." But commercial system crackers -- called "samurai" in the hacker community -- are still quite rare. Most system crackers do it on their own.

Over the years, some crackers have claimed to see themselves as daring, skilled, moral -- and necessary. In Hackers, Steven Levy described the "Hacker Ethic" as: "a philosophy of sharing, openness, decentralization, and getting your hands on machines at any cost -- to improve the machines, and to improve the world."

The legendary online "Jargon File," edited by Eric S. Raymond and Guy L. Steele Jr. from the contributions of thousands of tech-heads and net wizards over more than two decades, displays the net experts' ambivalent attitude toward system cracking. At one point it says, "Though crackers often like to describe themselves as hackers, most true hackers consider them a separate and lower form of life." Yet it describes one tenet of "the Hacker Ethic" as: "The belief that system-cracking for fun and exploration is ethically OK as long as the cracker commits no theft, vandalism, or breach of confidentiality."

Some have pictured themselves as fighters for the "freedom of information," arguing that the very concept of private information is evil. As one participant in an online discussion of hacking put it: "The ownership of information is repugnant to me." Another participant in the same discussion wrote: "Information belongs in the hands of the people."

Others claim that crackers serve the legitimate social function of causing others to beef up their security. In cyberspace, they say, you need someone to remind you not to leave the family jewels out on the lawn -- and system crackers will often do this in detail. They will "crack root" (gain total control of a system) and then send email to the system operator from the root account describing exactly how they did it. Matisse Enzer of Internet Literacy Associates (and formerly head of system support for the Well online system) dismisses this claim: "That's like saying murder is good because it causes people to meditate on their mortality."

But Cheswick and others who deal regularly with hackers doubt that real, everyday hackers operate from any ethic at all: "They are obviously not doing this for me. They're doing it because it is exciting, it's surreptitious, and it's a challenge."

Bruce Sterling, author of The Hacker Crackdown, is more blunt: "It's the thrill of getting into someplace you're not supposed to be. It's a teenage male crime, the sort of thing that young men have been doing through the millennia at the windows of married women. They're cranky little guys. The real computer geniuses couldn't be bothered with this kind of stuff. This is as distant from real mastery of computers as raiding a girl's dorm is from building a dorm."

Some do it for very particular and personal reasons. One man "socially engineered" an account on the Well using a woman's name, just so he could eavesdrop on the conversations in the private Women on the Well conference.

"These guys cause people to spend a lot of time and worry," says Enzer, speaking of a hacker who "cracked root" on the Well. "The truly appropriate response would have been to take him out in an alleyway and spank him severely."

