Cupido, Ergo Denego (robertflink) Fri 19 Oct 07 05:35
A problem with some decision-making models is that implementation is often seen as the work of minions and too petty a consideration for the "deciders" (a la Iraq -- need I say more?) who move on to other "important" decisions appropriate to their considerable "wisdom". One of the reasons that implementation problems are discounted is that they interfere with the abstract thinking that experts enjoy so much. Such is the continual lure of Plato's idea of rule by philosopher-kings.
Jon Lebkowsky (jonl) Fri 19 Oct 07 07:01
Yes, that's one part of the problem. What I'm interested in is how we would make valid and timely decisions with the philosopher-kings out of the picture. One aspect of that question is how you gather and grasp knowledge, and how you separate information from noise. In the context explored by _Intervention_, the relevant question is how you create realistic and somewhat authoritative risk assessments and incorporate them into the policy process. Consider the 1998 attempt to get the FDA to perform mandatory safety testing on transgenic foods. Documents from within the FDA showed that there was serious concern among experts within the agency about the risks of transgenic foods, yet the agency had taken an official position that there were no such risks. Policy ignored relevant knowledge (or at least relevant concern). So part of the question is how you gather and assess information for a decision; another is how you factor that information into policy. Can we assume that a deliberative body would give more weight to perceived risk to the community, and be influenced less by commercial interests? That seems largely a factor of determining who the relevant stakeholders are and who has authoritative knowledge relevant to the decision...
Denise Caruso (denisecaruso) Fri 19 Oct 07 08:51
Nope. What you can assume is that the commercial interests will be at the table representing their interests and perspectives with authority, as will representatives of the community, who have similar authority and knowledge in their domains. And between them (and with the help of other experts and stakeholders at the table) they will be able to hash out a *real* cost-benefit analysis -- that is, who really pays and who really benefits, is there a benefit at all, is the issue under consideration even really a problem, are there alternatives that are less risky, etc. But depending on whether or not the problem being deliberated requires a new law or regulation, the 'factor that information into policy' question is, as you both have noted, the key issue if the present situation is going to change. Jon mentioned earlier the studies where consultants were polled, etc. It's actually even worse than that. The studies were *commissioned* by the regulatory agencies, and they weren't (necessarily) polling consultants -- the committee members, creme de la creme in their fields, were gathering peer-reviewed evidence. And still their recommendations were ignored. I don't have the answer to this, at least not yet. I have some suggestions, though, and one of them is to bring back an organization like the late and much-mourned Office of Technology Assessment, as an unbiased and authoritative place that (a) can conduct these kinds of risk assessments and (b) has a direct line to Congress, rather than to the regulators. An OTA charged with conducting these kinds of transparent processes has at least a chance that the advice won't be ignored by the regulator-industry cabal.
Gail Williams (gail) Fri 19 Oct 07 10:12
> And still their recommendations were ignored. That's the drumbeat in almost every modern crisis.
Jon Lebkowsky (jonl) Fri 19 Oct 07 11:43
Has a chance, assuming it's not staffed solely by members of the regulator-industry cabal. Whups, I'm sounding cynical. In recent discussions with a group advocating democratic practice, it dawned on me that the first step toward a more participatory system is education -- not just about specific subjects, but something more meta, about how to become informed, focusing primarily on media literacy. What are your thoughts about creating better-informed stakeholders?
Denise Caruso (denisecaruso) Fri 19 Oct 07 14:10
I feel you. But honestly, given the state of education and critical thinking in this country, I think that the only realistic path to creating better-informed stakeholders is one issue/subject at a time. And that's *another* benefit of this process. It's an opportunity to educate people about the issues that affect them directly, and in a context where the strongest available science or the perspective with the biggest mouth or most dismissive attitude will not be able to win the day just by being a bully. My suspicion is that over time, there would be a trickle-down empowerment effect. People would begin to see that hey! my question *is* valid and actually it's a very good question -- even though in the world outside I'd been told that it wasn't anything I needed to worry my pretty head about. Which then encourages me to ask more questions and demand respectful answers -- and to be far more attuned to wondering why when I'm being dissed or disregarded. I think of it as being like the jury system when it's at its best.
Cupido, Ergo Denego (robertflink) Fri 19 Oct 07 19:58
Since practical decisions lead to real risks, both known and unknown, are the stakeholders that make the decisions willing to take responsibility for any damaging consequences of their decisions? I think that anyone that wants to have a substantial impact on important decisions should be willing to accept a substantial part of the blame for any bad side-effects of such decisions. One of the difficulties with group decision-making is that groups tend not to own up to the mistakes that will inevitably occur. Often the implementers are used as scapegoats. BTW, I don't see how a regulatory agency can delegate any responsibility to an advisory committee beyond giving advice.
Jamais Cascio (cascio) Fri 19 Oct 07 20:33
Hi Denise! How early in the life of a new technology should these deliberative processes start? Would it make sense to begin before the details of how a new tech works are fully developed (such as, say, synthetic biology), or even while the technology is still at a proposal stage (such as molecular manufacturing)?
Tell your piteous heart there's no harm done. (krome) Sat 20 Oct 07 00:43
IMHO, the way to create more and better-informed stakeholders is to get them to read 1984 in all Jr. Highs while watching video of the Bush admin. I think this is what the Daily Show and Colbert have been trying to do. You have to try to get past the denial phase when people are saying, "there's nothing that can be done". The first thing to do is to ask questions constantly. Ask where the money is. I mean, I haven't spent a lot of time on it, but I'm still not sure where the money is that made Bush veto SCHIP, and even Stewart and Colbert don't seem to know who the real controlling villains are. Political/Public Health/Technology decisions are *never* made in a vacuum. Never. There is always someone going to make a buck on the solution/implementation. We have seen it on a grand scale in Iraq, and so long as these salesmen have more time and money and tenacity than you do, they win. Which is why so many CEOs started out wealthy and with time on their hands.
Denise Caruso (denisecaruso) Sat 20 Oct 07 06:49
Great question about accepting responsibility for decisions, Cupido. One of the great myths about the public is that they demand zero risk from their technologies. Even if you set aside the big obvious factor of side effects of prescription drugs, which we accept by the billions every day, people are constantly making practical decisions that weigh risk against benefit. We understand instinctively that we can't know everything in advance of making a decision, and that there might be consequences we didn't anticipate -- or that we did anticipate, but we accept because the benefit is worth it to us. The same holds true for collaborative risk deliberations, only more so. And your question about taking responsibility is actually one of the most important reasons to use these processes, because the "to us" part is put on the table. Even under the worst of circumstances, you'd be hard pressed to blame someone else if you were part of a group process that spent some considerable amount of time openly and freely considering risk and benefit. You'd have no credibility whatsoever if you tried to pass the buck. IMHO, the consensus result is a completely overlooked and tragically underutilized boon of doing these risk processes, for regulators as well as for companies or industries with a new technology that they want to sell to the public. What better way to (a) look for flaws or blind spots in your own risk assessments (which will come back to bite you later, sometimes to the tune of billions of dollars), and (b) ensure an enthusiastic future market and terrific credibility, than to ask for help from the experts who are interested in your technology and stakeholders who will be affected by the product?
As for regulatory agencies delegating responsibility to advisors: this is always a tricky question.* My concern is how to get accountability into the existing system -- how do you make regulators accountable for ignoring advice, as they so often do today and which often results in tragedy and disaster down the road (Katrina, DES ... the history of science is littered with these examples)? The late, great Office of Technology Assessment did a terrific job brokering these kinds of situations, because they were perceived by both political parties as neutral. Abolishing this place of neutrality contributed tremendously to the politicization of regulation in this country. * Although these deliberative advisory groups do seem to directly inform policy making in Northern European countries like Sweden and Denmark, where they are frequently used for big hairy socio-scientific issues like disposal of nuclear waste.
Denise Caruso (denisecaruso) Sat 20 Oct 07 07:11
Hey Jamais, thanks for joining us! At what point the deliberations should start depends on where you stick your pin in the map of the problem. What I mean is, there is a tremendous benefit to researchers to do it as soon as possible -- as soon as there's a relatively solid foundation for the concept. Synthetic biology is a particularly good example, because there is virtually nothing but a concept, yet already there is an industry salivating for product to fund and push to market. For those who are unfamiliar, synthetic biology (also sometimes referred to as DNA 2.0 or extreme genetic engineering) is the design and animation of living artificial organisms, building them from scratch from man-made biological parts. With only one patent filed -- which I suspect is largely for show -- there are already more than 60 commercial companies pursuing synthetic biology applications, financed by at least five venture capital funds. There is even a high-profile international competition for undergraduates, sponsored by biotech companies and venture funds from around the world, to build genetically engineered machines and add them to a Registry of Standard Biological Parts. And there is virtually no conversation about risk. The folks doing this work are spinning it like a top, despite the potential for abuse (creating and releasing virulent life forms) and just plain mistakes to be made based on the tremendous scientific uncertainties that still exist. I'm working with someone now to start this conversation; I'll keep you posted. But since we aren't likely to get to this stage of anticipation any time soon, in the case of a new technology or scientific innovation, I think it's more realistic to think about starting the deliberative process as soon as the patent is filed.
That way, the idea is protected and, considering the twitchy trigger finger that most companies have when it comes to filing patents, you could be fairly certain that the product wouldn't be too far into the development cycle. That's important because part of the problem here, at least in the assessment of innovations, is that once the investments have been made the entire regulatory process supports forward motion to market, not reflection. Official assessments aside, I think it is completely feasible and important to deploy these risk assessment processes right away, independently of the regulatory framework, to look at innovations like molecular manufacturing. With the right participants who could help raise the visibility of the results, it would be a terrific way to start a global conversation -- whether the researchers or regulators want to or not.
Denise Caruso (denisecaruso) Sat 20 Oct 07 07:25
I am pretty cynical about most things in this country, and so I have totally given up on the idea of the "informed citizen." I know that krome said "informed stakeholder," but that's what I haven't given up on. I really believe that the only way to jump-start critical thinking and better decision making in this country is to find a way to institutionalize some kind of stakeholder process like the one I wrote about. I think I mentioned earlier that the only way I think you can get informed stakeholders is by getting them involved in one problem at a time. And yes, a really important benefit to doing this kind of work is that you give people the freedom -- and the responsibility -- to ask questions. That's how the deliberation works. An analyst might say, "Here's the data we have on the ability for herbicide-resistant soy to spread to related plants and create superweeds." All the experts and stakeholders in the room would be expected to ask questions. If the unofficial conversations about transgenics are any indication, what would happen is that the people involved in the process would start to pick apart the data. They'd challenge the assumptions that were used to model the problem, ask who funded the study, ask for proof of the benefits of engineering herbicide resistance into a crop plant, explore what the findings didn't look at and what research hasn't yet been done, what the consequences would be if the findings turned out to be wrong, etc. etc. My own favorite question, which started me on this whole quest, was, "How do you know that?"
Tell your piteous heart there's no harm done. (krome) Sat 20 Oct 07 07:39
That's the best question to ask of somebody's model. It is our scientific modeling failures (and the assumptions that go into them) that have gotten us in the most trouble in the last 50 years. It's one of the main reasons I'm not too hot to get back into biochem. I find it *very* difficult not to ask impertinent questions like that one of people who don't want to hear it and will make your life hell if they hear it too often. What I have often wanted to say to some scientist making bold assumptions is simply, "Remember the Thalidomide babies".
Jon Lebkowsky (jonl) Sat 20 Oct 07 07:52
In the book you suggest that many scientists don't know what they don't know. How does this affect policy about science & technology?
Jamais Cascio (cascio) Sat 20 Oct 07 12:49
(Just as a disclaimer, I recently signed on as the Director of Impacts Analysis at the Center for Responsible Nanotechnology, which looks at the risks and potential surrounding molecular manufacturing.) To what degree do you think it's important for "informed stakeholders" to help look for ways to mitigate the risks, not just identify them? And a parallel question: to what degree should the deliberative process take into account the potential benefits of a new technology in assessing the overall risk profile?
Denise Caruso (denisecaruso) Sat 20 Oct 07 13:36
Scientists not knowing what they don't know isn't so much the problem as the fact that they don't acknowledge the unknown, or the uncertainty in their knowledge, when they make statements about "evidence" of risk -- or of benefit, for that matter, which I'll get into in a moment. Traditional risk analyses are based on data, evidence that's framed in a model of the problem that's in turn bounded by assumptions -- the primary assumption being, "This is the correct model." Which is positively ludicrous in the context of technologies (read: products) based on scientific breakthroughs where there is a scarcity of historical data, and no way to see the problem in a larger context. As for mitigating risks ... in the case of nanotech, that's very important, since there are already hundreds of nanotech products on the market that have not been subject to any regulatory scrutiny whatsoever. Still, I think the issues become clearer if the risk deliberation is done first. And any decision that's taken about risk has to take into account the potential benefits. For me, that's what an overall risk profile is -- risk and benefit. In that regard, one issue we're faced with today is that proponents (scientists or investors) present us with the benefits of a new technology as though they were fact, but they are not required to provide evidence for those benefits in the same way that the skeptical are required to present evidence of risk. I find this stunningly irresponsible. (And congrats on the new position, Jamais -- let me know if I can help.)
Jamais Cascio (cascio) Sat 20 Oct 07 14:17
(Thanks, Denise!) I like the idea of the potential benefits and potential risks of a new technology being held to the same evidence standards. That would require both advocates and detractors to examine the panoply of results, not just preferred outcomes. Hmmm. I don't recall you talking about this notion in INTERVENTION, but it has been some months since I read it -- have you gone into more detail on the idea?
paralyzed by a question like that (debunix) Sat 20 Oct 07 14:30
I'm still working my way through the book, as I've had to stop and argue with it here and there, which has slowed my reading quite a bit. Although I am no longer working primarily in molecular biology, I still identify myself as a scientist as much as anything else. I have worked with transgenic organisms in biomedical research and see them as a key to better medicine for me and my patients. At the same time, I am as frustrated by the unthinking confidence of some biotechnology proponents in the benignity of their work as I am by the unthinking fears of some members of the public. It's so important to get past the paranoia and overconfidence both to the more nuanced and, in the long run, more important consequences of what we're doing. The discussion that was linked above about the unforeseen effects of transgenic crops expressing Bt toxin on aquatic wildlife is a great example. While I am not worried at all about eating Bt corn myself -- I am not worrying that it's going to give me cancer or mutagenize my offspring -- I am very concerned about the potential effects on the ecosystem. This study emphasizes the potential for collateral damage to non-targeted bystander organisms: in this case, aquatic insects that consume pollen from the corn and grow slower or have decreased reproductive success -- insects that are no danger to the crops, but are harmed in a way that then has cascading effects on the other invertebrates, fish, and birds that may depend on those insects for food. I want to help reframe the overall public debate away from the rote formula of "GM foods are going to turn you into a mutant" vs "GM foods are perfectly safe for you," where the participants talk past each other, to the larger question of the effects on our ecosystem as a whole.
I think this is the kind of discussion you're trying to foster via the strategies discussed so far, bringing together people with diverse expertise and different stakes in the outcome, to help bring out risks and benefits that are not so obvious. But once you get the discussion going, how do you get the wider public interested in the results? Hard as it is to get your local congresscritter to pay attention, how do we tackle the harder job of getting John Q. Public to listen too, without resorting to fear-mongering and sloganeering?
Tell your piteous heart there's no harm done. (krome) Sat 20 Oct 07 19:29
"In that regard, one issue we're faced with today is that proponents (scientists or investors) present us with the benefits of a new technology as though they were fact, but they are not required to provide evidence for those benefits in the same way that the skeptical are required to present evidence of risk. I find this stunningly irresponsible." Well said. The risks seem to bear an onus of proof so large compared to theoretical benefits that irresponsibility is allowed free rein until the onus is met. I worked only one biotech lab job before I went into instrument sales and service, so both these examples are from there, but I have no doubt that they are widely applicable. This place was a start-up with 12 people when I started and 80 when I left after a year, growing very rapidly with injections of hundreds of millions at a time from the VC community, from which our president and CEO had come. They had an idea that by connecting two of the molecules, or analogues thereof, to which the target bacterium had become almost immune (or is staph a virus?), they would have a new product that just might work until staph was developing immunity to it, at which point, presumably, they would have something else ready. Talk about shortsighted and irresponsible. So, in order for any RBA to come about (at the FDA) for their proposed product(s), they had to make something(s). Making these somethings uses a surprisingly large amount of resources and produces a surprising amount of waste before you ever get around to seeing which will not kill the rats and rabbits. Mounds and mounds of glassware that cannot be recycled for the contamination. I remember seeing that the test tube/HPLC fraction collection tubes we went through by the hundreds a day were made in some third-world country. And then the large streams of solvents flowing into 50-gallon barrels to be incinerated. Much of this is not the pure solvent you started the reaction with, so you don't necessarily know what it is.
Now imagine every now-deserted lab in South San Francisco doing this, all before anybody has any idea what they will come up with. The biotech bubble was only slightly less profligate than that of the dotcom. And the only authority overseeing this insanity was the SSF safety commission or whatever it was, which made sure that once the barrels of waste left the building they went straight and only to the waste 'shed', a concrete bunker designed to explode upward rather than out. One of our chemists once dropped a bottle of cyclohexane on the floor and nobody knew where the barrier sponges and/or absorbent materials were. I went to the office of our 'safety officer', where I found him eagerly involved in a bottle of scotch with a new chemist they were recruiting. He was of no help. (Continued -- for length, and because Firefox has been acting weird today.)
Tell your piteous heart there's no harm done. (krome) Sat 20 Oct 07 19:48
You know where they were. Anyway, the second example, from a concrete picnic table outside the first building this company was in. Having lunch with two PhD chemists (I am only a BS) and, essentially, touching on debunix's argument: You simply cannot, out of hand, dismiss the effects of some technology on some 'minor' species. Just about everything is connected, and I for one am almost certain that the waste streams noted above will come back to haunt us all, and given that the goal of these exercises wasn't eradication of the problem, it was bound to be repeated again and again. I spent 20 minutes or so on my butterfly/hurricane spiel (from here known as the BHS) and one of the chemists thanked me in a way that led me to believe that all this had simply never occurred to him. It is at this point that I equate Citizen and Stakeholder. We are all stakeholders in what we, as a species, make. Acknowledgement of one's stakeholderness makes one a true citizen.
Denise Caruso (denisecaruso) Sat 20 Oct 07 21:12
I did talk about the lack of rigor in declaring benefits, Jamais. Lemme do a quick check ... here it is, Chapter 8, 'The Promise of Transgenics,' page 111. It's the chapter where the National Academy study director couldn't get a 'benefits of transgenics' study off the ground because the biotech industry wouldn't submit the data for peer review. Sigh.
Denise Caruso (denisecaruso) Sat 20 Oct 07 21:31
Nice post on reframing the public debate, debunix. Before I answer your question, I want to ask you one (and it isn't a presupposing or 'I've already made up my mind' question): Why do you exclude yourself and your patients from the pool of organisms that might be affected by the unintended consequences of transgenics? You asked, 'But once you get the discussion going, how do you get the wider public interested in the results?' In my perfect world, the discussion is an official risk assessment, done by some governmental or government-affiliated organization, as a prerequisite to passing a regulation. So John and Jane Q don't have to be interested in the results per se. I'm more concerned with using the proper methods that yield the proper protection than I am with getting the public at large interested in the results. I mean, I would like them to be interested, and I think they *will* be interested, because (a) it's interesting, and (b) it's relevant to our health and safety. The transgenics conversation certainly is plenty interesting to people outside of the U.S., where it is not considered a done deal and where countries continue to ban transgenics (I think Italy was the most recent). And I know from talking to people who've read the book that they are horrified by the state of affairs, and tend to buy copies of the book in bulk to give to their friends. (And I do love them for that.) But in the U.S., as you have no doubt noticed, the media reports this like a soap opera -- Monsanto v. the activists. Or as you say, fear-mongering and sloganeering. We're so used to this kind of crappy, content-free, disrespectful public 'conversation' that we just turn it off. Why should we pay attention? We figure if it were dangerous, the FDA would have told us so. Never knowing, of course, how or why the FDA made its decision in the first place -- even though that information is readily available, knowable, and, at least to me, shocking.
Denise Caruso (denisecaruso) Sat 20 Oct 07 21:47
That is a remarkable story, krome. The saddest thing about it to me is that your buddies at the picnic table were so surprised. Scientists just aren't trained to think about what happens outside the bounds of the things they can measure at the lab bench. Oddly, it makes me think about 'Four Arguments for the Elimination of Television,' one of my favorite books. One of the discussions has to do with how we were never taught to think about the fact that there is an entire world operating outside the bounds of the frame that could radically alter our interpretation of what we think we are seeing. This subsequently radically confines our ability to experience reality, which in turn makes us much easier to manipulate by those who control the framing.
paralyzed by a question like that (debunix) Sat 20 Oct 07 22:04
I don't mean to exclude myself from the ecosystem that is at risk from any human endeavor; but my stomach is not the same as the insects that Bt affects. When I consume Bt corn, the Bt is another bit of protein that I digest. It does not injure my gut or cause diarrhea or cause DNA damage that leads to cancer. And a lot of the debate about GM foods has been at this level of anxiety -- the idea that consuming a bit of Bt toxin is directly harmful to the human who eats it. I don't think the eating, itself, is harmful. I am much more concerned about the effects at the level of the ecosystem, but it's a lot harder to talk about those effects, which are more serious, pervasive, and subtle.
Denise Caruso (denisecaruso) Sat 20 Oct 07 23:15
All but a handful of the tests for the safety of eating transgenic food were conducted using synthetic DNA dissolved in simulated gastric juices -- not by having an actual animal eat the actual transgenic plant material and actually digest it. So it can't really be said with any rigorously scientific conviction that eating it has no effect on our guts. Yet because those so-called "feeding tests" showed no negative results, all commercial transgenics to date have been dubbed "substantially equivalent" to their traditionally bred counterparts. And therefore no one is officially or unofficially monitoring these crops for potential health effects. They may very well be absolutely safe to eat. But then again, there are plenty of health effects that don't rise to the level of 'acute,' as diarrhea or cancer do, yet the industry and regulators already have laid down the law in their declarations of safety. By your comments so far, I think you know that I am not trying to cause anxiety by saying these things. I just think that not to acknowledge the degree to which scientists and regulators have failed to fully investigate the potential hazards of transgenics -- on humans as part of the ecosystem, as well as on other elements of the ecosystem -- is to be in denial.