Inkwell: Authors and Artists
Ted Newcomb (tcn) Tue 28 Aug 12 04:57
Inkwell welcomes our own Bruce Schneier, prolific author and security technologist, to discuss his newest book, Liars and Outliers, about the role of trust in society and business. Leading the interview is our own Jon Lebkowsky.

Bruce is an internationally renowned security technologist who studies the human side of security. A prolific author, he has written hundreds of articles, essays, and academic papers, as well as eleven books that together have sold more than 400,000 copies. He has testified before Congress, is a frequent guest on television and radio, and is regularly quoted in the press. His blog and monthly newsletter at http://www.schneier.com reach over 250,000 readers worldwide.

Jon Lebkowsky is an author, activist, journalist, and blogger who writes about the future of the Internet, digital culture, media, and society. He's been associated with various projects and organizations, including Fringeware, WholeEarth, WorldChanging, Mondo 2000, bOING bOING, Factsheet Five, The WELL, the Austin Chronicle, EFF-Austin, Society of Participatory Medicine, Extreme Democracy, Digital Convergence Initiative, Plutopia Productions, Polycot Consulting, Social Web Strategies, Solar Austin, Well Aware, Project VRM, and currently Reality Augmented Blog. He is also a web strategist and developer via Polycot Associates.
Jon Lebkowsky (jonl) Tue 28 Aug 12 05:52
You're a computer scientist, cryptographer, and security expert, and your father was a Supreme Court judge in Brooklyn. Has his background in law influenced your thinking about security and trust?
Bruce Schneier (bruceschneier) Tue 28 Aug 12 11:09
Bruce Schneier (bruceschneier) Tue 28 Aug 12 13:08
My initial answer is "no," but perhaps it would be better to say "not directly." Throughout history, the legal profession has been one that is inherently focused on issues relating to security -- fairness, trust, deceit, reciprocity -- both between individuals, and between individuals and government. Quite a lot of legal scholarship relates to security, and I found a lot of very useful material in law journals while researching _Liars and Outliers_.

But I didn't come to security by way of the law. I came to the law, and all the other social sciences that touch on security issues, through mathematics, computer science, and technology in general. I tend to be a meta meta meta guy: always interested in the context and environment of the problems I'm currently working on. And in security, that inevitably led me to people and their interrelationships -- and the sciences that study them. _Liars and Outliers_ relies more on sociology, psychology, and economics (and law as well) than it does on computer science and cryptography.

Let me reprint something for those in this conversation who have not read the book. This is a summary I wrote for John Scalzi's blog. He has a feature about new books he calls "The Big Idea," where he asks each author to describe the big idea behind their book. http://whatever.scalzi.com/2012/02/16/the-big-idea-bruce-schneier/

***************

My big idea is a big question. Every cooperative system contains parasites. How do we ensure that society's parasites don't destroy society's systems?

It's all about trust, really. Not the intimate trust we have in our close friends and relatives, but the more impersonal trust we have in the various people and systems we interact with in society. I trust airline pilots, hotel clerks, ATMs, restaurant kitchens, and the company that built the computer I'm writing this short essay on. I trust that they have acted and will act in the ways I expect them to. 
This type of trust is more a matter of consistency or predictability than of intimacy.

Of course, all of these systems contain parasites. Most people are naturally trustworthy, but some are not. There are hotel clerks who will steal your credit card information. There are ATMs that have been hacked by criminals. Some restaurant kitchens serve tainted food. There was even an airline pilot who deliberately crashed his Boeing 767 into the Atlantic Ocean in 1999.

My central metaphor is the Prisoner's Dilemma, which nicely exposes the tension between group interest and self-interest. And the dilemma even gives us a terminology to use: cooperators act in the group interest, and defectors act in their own selfish interest, to the detriment of the group. Too many defectors, and everyone suffers -- often catastrophically.

The Prisoner's Dilemma is not only useful in describing the problem, but also serves as a way to organize solutions. We humans have developed four basic mechanisms for limiting defectors: what I call societal pressure. We use morals, reputation, laws, and security systems. It's all coercion, really, although we don't call it that. I'll spare you the details; it would require a book to explain. And it did.

This book marks another chapter in my career's endless series of generalizations: from mathematical security -- cryptography -- to computer and network security; from there to security technology in general; then to the economics of security and the psychology of security; and now to -- I suppose -- the sociology of security. The more I try to understand how security works, the more of the world I need to encompass within my model.

When I started out writing this book, I thought I'd be talking a lot about the global financial crisis of 2008. It's an excellent example of group interest vs. self-interest, and how a small minority of parasites almost destroyed the planet's financial system. 
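[Editor's note: the cooperator/defector tension described above can be made concrete with a small sketch of the classic Prisoner's Dilemma payoff structure. The numbers below are the conventional illustrative values from the game-theory literature, not figures from the book.]

```python
# A minimal sketch of one-shot Prisoner's Dilemma payoffs.
# Each player chooses to cooperate ("C") or defect ("D").
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do well
    ("C", "D"): (0, 5),  # the defector exploits the cooperator
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: everyone suffers
}

def play(a, b):
    """Return the (player_a, player_b) payoffs for choices a and b."""
    return PAYOFFS[(a, b)]

# Defecting always pays more for the individual, whatever the other does...
assert play("D", "C")[0] > play("C", "C")[0]
assert play("D", "D")[0] > play("C", "D")[0]
# ...yet mutual cooperation beats mutual defection for the group as a whole.
assert sum(play("C", "C")) > sum(play("D", "D"))
```

The assertions capture exactly the tension Schneier describes: each player's selfish incentive points to defection, but if everyone follows it, everyone ends up worse off.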
I even had a great quote by former Federal Reserve Chairman Alan Greenspan, where he admitted a "flaw" in his worldview. The exchange, which took place when he was being questioned by Congressman Henry Waxman at a 2008 Congressional hearing, was once the opening paragraphs of my book. I called the defectors "the dishonest minority," which was my original title.

That unifying example eventually faded into the background, to be replaced by a lot of separate examples. I talk about overfishing, childhood immunizations, paying taxes, voting, stealing, airplane security, gay marriage, and a whole lot of other things. I dumped the phrase "dishonest minority" entirely, partly because I didn't need it and partly because a vocal few early readers were reading it not as "the small percentage of us that are dishonest" but as "the minority group that is dishonest" -- not at all the meaning I was trying to convey.

I didn't even realize I was talking about trust until most of the way through. It was a couple of early readers who -- coincidentally, on the same day -- told me my book wasn't about security, it was about trust. More specifically, it was about how different societal pressures, security included, induce trust. This interplay between cooperators and defectors, trust and security, compliance and coercion, affects everything having to do with people.

In the book, I wander through a dizzying array of academic disciplines: experimental psychology, evolutionary psychology, sociology, economics, behavioral economics, evolutionary biology, neuroscience, game theory, systems dynamics, anthropology, archeology, history, political science, law, philosophy, theology, cognitive science, and computer security. It sometimes felt as if I were blundering through a university, kicking down doors and demanding answers. "You anthropologists: what can you tell me about early human transgressions and punishments?" "Okay neuroscientists, what's the brain chemistry of cooperation? 
And you evolutionary psychologists, how can you explain that?" "Hey philosophers, what have you got?" I downloaded thousands -- literally thousands -- of academic papers. In pre-Internet days I would have had to move into an academic library.

What's really interesting to me is what this all means for the future. We've never been able to eliminate defections. No matter how much societal pressure we bring to bear, we can't bring the murder rate in society to zero. We'll never see the end of bad corporate behavior, or embezzlement, or rude people who make cell phone calls in movie theaters. That's fine, but it starts getting interesting when technology makes each individual defection more dangerous. That is, fishermen will survive even if a few of them defect and overfish -- until defectors can deploy driftnets and single-handedly collapse the fishing stock. The occasional terrorist with a machine gun isn't a problem for society in the overall scheme of things; but a terrorist with a nuclear weapon could be.

Also -- and this is the final kicker -- not all defectors are bad. If you think about the notions of cooperating and defecting, they're defined in terms of the societal norm. Cooperators are people who follow the formal or informal rules of society. Defectors are people who, for whatever reason, break the rules. That definition says nothing about the absolute morality of the society or its rules. When society is in the wrong, it's defectors who are in the vanguard for change. So it was defectors who helped escaped slaves in the antebellum American South. It's defectors who are agitating to overthrow repressive regimes in the Middle East. And it's defectors who are fueling the Occupy Wall Street movement. Without defectors, society stagnates.

We simultaneously need more societal pressure to deal with the effects of technology, and less societal pressure to ensure an open, free, and evolving society. This is our big challenge for the coming decade. 
************* The book's website is http://www.schneier.com/book-lo.html. It has links to the complete Chapter 1 -- and parts of three other chapters -- a bunch of text, audio, and video interviews about the book, every review of the book I have stumbled upon, a way you can buy a signed copy directly from me, links to several on-line bookstores, and a bunch of other things besides. I look forward to the conversation.
Jon Lebkowsky (jonl) Tue 28 Aug 12 14:33
<scribbled by jonl Wed 29 Aug 12 04:45>
Jon Lebkowsky (jonl) Wed 29 Aug 12 04:47
You say a little above about how the book evolved as you were writing it. Stepping back, can you say a little about the history of the project - how you decided to write this specific book, what your goals were for the project in the beginning?
Bruce Schneier (bruceschneier) Wed 29 Aug 12 10:07
For me, the process of writing a book is the process of understanding my topic. Initially, my goal was to understand security very generally. At its core, security involves two people. The first person wants to behave in a certain way: view the computer file, hide his identity, go through the door, take the pile of money, etc. The second person wants that first person to behave in a different way: not view the computer file, expose his identity, not go through the door, not take the pile of money, etc. Security is how that second person imposes his will on the first person. This dynamic led me to think about that second person as a group of people. And that led me to thinking about group interest versus self-interest. And then on into trust. My initial hope was a grand philosophical theory of security. I don't think I went that far, but I do believe I have a framework for looking at the world that has some valuable explanatory power.
Jon Lebkowsky (jonl) Wed 29 Aug 12 11:31
You discuss game theory and the prisoner's dilemma, and say a lot about the evolution of cooperation, which is also the name of a book by Robert Axelrod. To what extent was Axelrod an influence?
Bruce Schneier (bruceschneier) Wed 29 Aug 12 14:26
Directly, not very much. I read his book when it first came out in the 1980s, and I skimmed it again while writing this book. Chapter 3 of my book is even called "The Evolution of Cooperation." But while Axelrod's book was groundbreaking, and certainly affected my background thinking, most of what I drew on in that chapter was both more recent and more biological. Game theory provides a good theoretical basis for looking at cooperation and how it evolved, but there's much better evidence from natural history. Indirectly, I think Axelrod's work, specifically his analysis of the repeated Prisoner's Dilemma and the "tit for tat" strategy, has been a strong influence on everyone who has analyzed this problem since his work. Pretty much everyone cites it. And amazingly, just this year a paper was published that both generalizes and improves on Axelrod's strategy. http://www.newscientist.com/article/mg21428663.900-a-dirty-twist-on-beating-the-prisoners-dilemma.html
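[Editor's note: the "tit for tat" strategy Schneier mentions is simple enough to sketch in a few lines. This is an illustrative toy simulation, not Axelrod's actual tournament code; the payoff numbers are the conventional ones from the literature.]

```python
# Repeated Prisoner's Dilemma with "tit for tat": cooperate on the first
# round, then copy whatever the opponent did on the previous round.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def match(strat_a, strat_b, rounds=10):
    """Play two strategies against each other and return total scores."""
    hist_a, hist_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(match(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is sustained
print(match(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Two tit-for-tat players cooperate every round; against a pure defector, tit for tat loses only the opening round and then defects in kind, which is the property that made it so robust in Axelrod's tournaments.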
Jon Lebkowsky (jonl) Wed 29 Aug 12 21:37
Can you say more about natural history vs. game theory? What are your thoughts about how cooperation evolved and how it works now?
Bruce Schneier (bruceschneier) Thu 30 Aug 12 08:13
Theories on the evolution of cooperation are surprisingly contentious. It's easy to explain how cooperation evolved between genetically related individuals, and you can see examples of this in species ranging from insects to mammals. Cooperation between non-kin is much harder to explain, and much rarer in the natural world.

Humans cooperate on a scale that far exceeds any other species on the planet. And it's something we do naturally. When we get together as strangers, we naturally cooperate. Even under dangerous and stressful situations, we're largely cooperative. It works so well that we don't even think about it. (Right now, I am writing this in the middle of a conference session. There are about 200 people in a room, all sitting quietly and listening to a single speaker. No one is running around and shouting. And no one is attacking the person sitting next to him or her. Even among a roomful of chimpanzees, this group behavior would be impossible.)

Cooperation is based on a recognition that it is in our long-term self-interest to act in the short-term group interest. That recognition requires that we trust the other people in the group. This sort of trust is as old as society. Certainly it's as old as division of labor. If both hunting and gathering are required for survival, I am not able to specialize in gathering unless I trust that you will do the hunting. And vice versa.

Of course, not everyone is trustworthy. As soon as a group of people builds a social system based on trust, it needs to ensure that enough people are sufficiently trustworthy for the system to work. And that's where security comes in. This is the central focus of my book.
Ari Davidow (ari) Thu 30 Aug 12 09:22
I think the contention that I felt least sure of in the book is exactly that--that humans cooperate more than any other creature. At one point in the book I thought I read you as saying that humans are the only species in which groups can work together. As I read that, I kept thinking of an image burned into my mind back in grade school, of two elephants helping a wounded or infirm third walk. Certainly we are better at cooperation than our nearest cousins by far, but is "conscious" cooperation so very rare throughout the animal kingdom?
Administrivia (jonl) Thu 30 Aug 12 10:35
Short url for linking this discussion is http://bit.ly/schneier-well If you're not a member of the WELL, but you have a comment or question you'd like us to post in this discussion, scroll to the bottom of the page and click the link at "Non-members: Submit a comment or question," or send via email to inkwell at well.com.
Bruce Schneier (bruceschneier) Thu 30 Aug 12 11:26
Jef Poskanzer (jef) Thu 30 Aug 12 11:32
Before the Prisoner's Dilemma got so popular, The Tragedy of the Commons got similar mindshare. As a metaphor it expresses something that the Prisoner's Dilemma misses: potential defectors seeing lots of other folks defecting and deciding to get their share before it runs out.
Ted Newcomb (tcn) Thu 30 Aug 12 13:31
I'm thinking of three things which maybe you could talk about in respect to trust, reputation, and the Prisoner's Dilemma. In the United States the present political quagmire in Washington is reflected in the lowest public confidence ratings in our history. The economic meltdown of 2008 and present attitudes towards Wall Street seem to reflect the belief that parasites are ruling the day. And in Europe there's great concern over whether or not an economic union can be sustained. All of these dilemmas seem to me to be examples of where trust has broken down. First, am I off the mark, or are these valid examples? And, if so, I'm wondering what you think needs to take place to reestablish trust and find ways to move forward.
Bruce Schneier (bruceschneier) Thu 30 Aug 12 13:36
Bruce Schneier (bruceschneier) Thu 30 Aug 12 15:08
In response to <ari>'s <11>: Those three elephants were almost certainly related to each other. In the animal kingdom, observed cooperation is among kin. Older papers rely on observations to make this claim, but newer research uses genetic testing to determine kinship -- and the results are even more unequivocal.

There is some non-kin cooperation in the animal world. Unrelated individuals hunt in packs. They deliberately limit their own aggressive behavior against unrelated members of their own species (e.g., ritualized mating fights). These simple forms of cooperation can easily be modeled, and make evolutionary sense, even though they're rare.

The sort of cooperation that humans regularly engage in, based on reciprocal altruism, is much harder to explain. There are models, and I talk about them in my book, but nothing is universally accepted in the scientific community. I spend a lot of time on something called the Hawk-Dove Game, another game theory experiment that illustrates the tension between cooperating and following your own self-interest. That explains human cooperation somewhat, but I think you need to add the very human traits of altruism, fairness, and trust. It's actually kind of amazing that it works so well.
Bruce Schneier (bruceschneier) Thu 30 Aug 12 15:10
In response to <jef>'s <14>: They're both different ways of describing the same problem. The Tragedy of the Commons is really just a multiparty Prisoner's Dilemma. (Similarly, the Hawk-Dove game is yet another way of describing the problem.) And, yes, social norms play a large part in enforcing compliance...and encouraging defection. This is a security mechanism that is often missed, and a very important one. People are more likely to pay their taxes when they think everyone else is paying their taxes, and more likely to cheat when they think that others are getting away with cheating. There are some clever experiments that tease this behavior out, and some equally clever security systems that take advantage of this very human desire for fairness.
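[Editor's note: the "multiparty Prisoner's Dilemma" reading of the Tragedy of the Commons can be sketched as a small n-player game. The payoff function and all its numbers here are illustrative inventions, not from the book.]

```python
# An illustrative n-player "commons" game: each player either restrains
# harvesting ("C") or overharvests ("D"). Each defector gains a private
# bonus, but also degrades the shared stock that everyone divides.
def payoffs(choices, stock=100.0, defector_bonus=10.0, damage=15.0):
    """Return each player's payoff given a list of 'C'/'D' choices."""
    n = len(choices)
    remaining = max(stock - damage * choices.count("D"), 0.0)
    share = remaining / n
    return [share + (defector_bonus if c == "D" else 0.0) for c in choices]

all_coop = payoffs(["C"] * 4)             # everyone gets 25.0
one_defects = payoffs(["D", "C", "C", "C"])
all_defect = payoffs(["D"] * 4)

# The lone defector does better than he would have by cooperating...
assert one_defects[0] > all_coop[0]       # 31.25 > 25.0
# ...but if everyone reasons the same way, everyone ends up worse off.
assert all_defect[0] < all_coop[0]        # 20.0 < 25.0
```

The structure is the same as the two-player dilemma: defection dominates individually, mutual defection is collectively worst. It also captures Jef's point above: the more defectors there are, the smaller the remaining shared stock, so seeing others defect strengthens the incentive to grab a share before it runs out.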
Bruce Schneier (bruceschneier) Thu 30 Aug 12 15:13
In response to <tcn>'s <15>: They are examples of trust breaking down. And it's not without cause; our trust in government and corporations is at historic lows because these organizations behave in an untrustworthy manner. That truth points the way to reestablish trust. We can't just pretend the world is different than it is and suddenly start trusting these organizations. They need to be trustworthy, and then they will be trusted. To prime the pump, we need to punish defecting organizations -- both political and corporate -- to make the very public statement that we will not tolerate untrustworthy behavior. If people believe that crooked politicians and corporate CEOs will be punished, they will start trusting those organizations. I don't think we have the political will to do any of that right now, but that's the only positive way forward I see.
Jon Lebkowsky (jonl) Fri 31 Aug 12 00:26
I think you're talking about using social pressure to push us back in the direction of trustworthiness, and the will required to create those pressures. Can that kind of pressure emerge and be driven from the bottom up? How are real-world manifestations of the kinds of game-concepts you describe relevant to the distribution of power, and to the concept of democracy?
Ari Davidow (ari) Fri 31 Aug 12 07:44
Not to digress too far, but I think that the Tragedy of the Commons is different from a multi-party Prisoner's Dilemma. In one of the business books I was reading a few years ago it was described convincingly as what happens when the results of one's actions are too delayed (I'll have to go home and look up the description, because I am missing some of the point the author was making in trying my own summary.) Having said that, one of the things that rings most worrisome in your discussion of reputation in the book is the extent to which our political process is frayed. Both sides have taken to outright lying (see, for instance, the Fox News blog article on lies Paul Ryan told at the just-ended Republican convention). What to do? Given that people remember the negative more vividly, is there hope for our political process? In the book you seem to punt the question to a suggestion that current governmental systems are based on thinking that is now centuries old, and it's time for us to figure out something new.
Jef Poskanzer (jef) Fri 31 Aug 12 08:57
The political issue is interesting. One wild-card possibility in the medium term is ubiquitous surveillance - cameras in all public spaces and enough compute power to monitor them. Ignoring for the moment the potential for evil, it seems like this kind of system could prevent even the petty anti-social behaviors that we currently just accept as part of the cost of living in a modern society. Drop a cigarette butt? Fail to pick up your dog's crap? Roll through a stop sign? The cameras see you and mail you a ticket. But here's the thing. Our political process already operates under this kind of surveillance. Our politicians' every public moment is captured on video and analyzed by an army of pundits and bloggers. And yet they still lie all the time. Well, the Republicans do anyway. I'm not going to make the usual "they're both bad" false equivalence - the Republicans really are much much worse. So if heavy surveillance doesn't make politicians into cooperators, maybe it won't work on the general population either.
Susan Sarandon, tractors, etc. (rocket) Fri 31 Aug 12 09:31
Bruce Schneier (bruceschneier) Fri 31 Aug 12 12:04
In response to <jonl>'s <20>: I use the more general term "societal pressure," and traditionally it is all bottom up. Let me take a step back.

There are four basic societal pressures that we as society use to induce cooperation. The first is morals, which I mean to be anything inside your head: guilt, shame, altruism, a sense of fairness, etc. The second is reputation, by which I mean anything that involves what other people think of you. The third is institutional rules: basically, laws. And the fourth is technological measures like door locks and burglar alarms. They all work together.

Think of someone deciding whether or not to steal. Most people don't steal because they know it's wrong. Most others don't steal because of what other people would think. Some others don't steal because it's illegal and they'll go to jail. And the few remaining don't steal because of anti-theft technology. Of course, it's more complicated than this -- and I spend much of the book trying to understand how these four different societal pressures work against individuals and organizations.

Power differentials matter a lot. Throughout most of history, the cohort in power has been able to do pretty much whatever it wanted to the populace. And regularly, major change has been driven from the bottom up. Even today in the U.S., there are popular groundswells that force change. These changes have largely been around the edges, but they have the potential to be much greater. That's the paradox of democracy: individually, we have no power, but collectively, we have an amazing amount of power.

If I have any advice, or "solution," it is to organize and to protest. It's the only hope we have to make our institutions trustworthy again.
Bruce Schneier (bruceschneier) Fri 31 Aug 12 12:09
In response to <ari>'s <21>: The Prisoner's Dilemma and the Tragedy of the Commons are both specific examples of a more general dilemma. Both force the player to choose between his own self-interest and the group interest. Both give maximum advantage to the player if he chooses his own self-interest and everyone else chooses the group interest. And both have the worst possible outcome if everyone chooses his or her own self-interest. Yes, there are differences, but they're much more important to game theorists than they are to me.

And yes, I punt on a general discussion of government systems. I punt on a lot of questions. One of the most difficult things I found when writing this book was where to stop. My topics are so general, so expansive, that I could have written forever. I did the best I could putting bounds around the book, and that necessarily left a lot out.

I do believe that our current representative democracy is the best form of government mid-18th-century technology could invent. It was based on the facts that travel and communications were hard, and that geographical groups needed to choose someone amongst themselves to make laws in their name. The assumptions that made that the best idea no longer apply. But while that's a fascinating topic, and probably the subject of a book in its own right, it's not what my book is about. My book is about how to think about the more general group security and trust issues, and would inform any discussions of a 21st-century governmental system.