Last summer, the geneticist Craig Venter, whose private company sequenced the full human genome in 2001, made a startling announcement. His lab, the J. Craig Venter Institute, had devised genome transplantation methods that allowed it literally to transform one species of bacterium into another, simply by transplanting its genomic DNA (bacteria have no nucleus; the genome sits directly in the cell). In Venter’s words:
Now we know we can boot up a chromosome system. It doesn’t matter if the DNA is chemically made in a cell or made in a test tube [and injected into a cell]. This is a major advance in the field of synthetic genomics. We now know we can create a synthetic organism. It’s not a question of ‘if’, or ‘how’, but ‘when’, and in this regard, think weeks and months, not years.
This development, not surprisingly, has led to renewed hand-wringing among ethicists: we shouldn’t be “playing God”, and by tinkering directly with the source code of life we will begin a slide down a slippery slope towards genocide, eugenics or some other disaster.
The well-known counterargument, by Venter and others, is that human beings have been “playing God” for thousands of years, through selective breeding of plants and animals for agriculture, animal labor, companionship and entertainment. Thus the work of Venter’s lab and the dozens of others working in this field is a huge leap forward, but it’s a leap in a direction we’ve been headed since the beginning of civilization.
Yet another defense for Venter’s work can be found in the history of other “dangerous ideas” in science. As Matt Mason argues in 2008’s The Pirate’s Dilemma, ambitious entrepreneurs have rarely been discouraged by the mere inconvenience of laws and regulations. If a market exists, and a new technology offers a way to make money from it, people will pursue it, regardless of whether the technology is banned, regulated or stigmatized.
Whether we like it or not, whether we’re even vaguely comfortable with it, designing and redesigning living things to suit our own purposes — synthetic biology — is a capability that’s rapidly developing towards the same point digital computing and software design have reached today. In a discussion with Venter and others at the annual Edge retreat in 2007, the physicist Freeman Dyson introduced the idea of “domesticated biotechnology”:
[We now have] personal computers of all kinds. Digital cameras. And the GPS navigation system. All those wonders of technology, which have suddenly descended from the sky to the earth. They have become domesticated. That has been a tremendous change, something we never predicted. I remember when [John] von Neumann was developing the first programmable computer at Princeton [in the 1950s]. I happened to be there, and he talked a lot about the future of computing, and he thought of computers as getting bigger and bigger and more and more expensive, so they belonged to big corporations and governments and big research labs. He never in his wildest dreams imagined computers being owned by three-year-olds, and being part of the normal upbringing of children. It’s said that somebody asked him at one point, how many computers would the United States need? How large would the market be? And he answered, eighteen.
…My prediction or prognostication is that the same thing is going to happen to biotech in the next 50 years, perhaps 20 years; that it’s going to be domesticated. And I take the example of the flower show in Philadelphia and the reptile show in San Diego, at both of which I saw demonstrations of the enormous market there is for people who are skilled breeders of plants and animals. And they’re itching to get their hands on this new technology. As soon as it’s available I believe it’s going to catch fire, the way computers did when they became available to people like you.
It’s essentially writing and reading DNA. Breeding new kinds of plants and trees and bushes by writing the genomes at home on your personal machine. Just a little DNA reader and a little DNA writer on your desk, and you play the game with seeds and eggs instead of with pictures on the screen. That’s all.
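Dyson’s image of a DNA reader and writer on every desk rests on a claim worth making concrete: DNA is digital information, and software can already read and manipulate it as strings. The toy Python sketch below is illustrative only, not real bioinformatics tooling; the codon table is a three-entry subset of the actual genetic code, chosen just to show the “read and execute” analogy.

```python
# Toy illustration of DNA as digital information.
# The codon table here is a tiny, hypothetical subset of the real
# 64-codon genetic code -- just enough to run the example below.

CODON_TABLE = {
    "ATG": "M",  # methionine (the usual "start" signal)
    "GCT": "A",  # alanine
    "TAA": "*",  # stop signal
}

def reverse_complement(seq: str) -> str:
    """'Read' the opposite strand: complement each base, then reverse."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(seq: str) -> str:
    """'Execute' a gene: walk the sequence three bases at a time,
    emitting one amino acid per codon until a stop signal appears."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        amino_acid = CODON_TABLE.get(seq[i:i + 3], "?")
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGGCTTAA"))           # a two-amino-acid "program": MA
print(reverse_complement("ATGGCTTAA"))  # TTAAGCCAT
```

The point is not the biology, which is vastly more complicated than string handling, but the framing: once a genome is a file, the gap between editing pictures on a screen and editing seeds and eggs is one of tooling, not of kind.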
This is already happening today in the laboratory. Biotechnologists at MIT, Harvard and around the world are engineering new species of bacteria that excrete, as their waste products, diesel, methane and other high-value fuels and materials. And it’s leaking out of the lab and into society: George Church at Harvard Medical School has begun the Personal Genome Project to sequence the individual genomes of 100,000 volunteers. The slippery slope is already upon us — and it could be the path that saves us from fossil-fuel dependency and from death by cancer.
So the question becomes, as Venter put it, not “if” or “how” we will gain complete design control over the rest of life on Earth, but “when” — and maybe more importantly, “by whom” and “what for”. This is a huge, unanswered, and largely unasked question in the design community. But as ID’s dean Patrick Whitney likes to say, when industry has learned “how” to make anything, it turns to design for the tools to decide “what” to make. This theory takes on a new and even more urgent meaning when the “things” being made are living creatures.
I think designers need to start a conversation about this, soon, with each other and with our colleagues in the sciences and engineering. Perhaps no other group is as well equipped to guide synthetic biology and similar developing capabilities in a direction that helps humanity, rather than threatens it.
The questions designers will wrestle with here are the epitome of “big hairy problems”. Like: What kind of living things should we make, and to solve what kinds of problems? What kinds of problems are living entities “good at” solving, and which are they not? Can we list the forms of life we are ethically willing to tinker with, here and now? Will this list expand over time with our capabilities — bacteria today, plants tomorrow, mammals next year? What are these different life forms differentially good at? Perhaps most importantly, what happens to the concept of “human-centered” design when the things humans are designing are increasingly alive and intelligent themselves? Or to “user-centered” design, when the object being designed is itself a self-aware stakeholder in the process? Will the primary guiding principle for designers, in fact, become “life-centered”?
A fictional case study
Consider yourself in the following scenario seven years from now:
You have been hired to do a design research and planning project for the world’s largest biotech company. The company recently went public on Wall Street, with a market cap already twice as big as Google’s. The company is moving from pure R&D and industrial solutions, into retail offerings based on their numerous biotech and genetic patents. Your design brief is to plan out the company’s — the world’s — first integrated portfolio of biotechnology products and services for household use. The goal is to provide everything from entry-level to state-of-the-art, pro-am solutions in the area of bio-design and small-scale fabrication — “a little DNA reader and little DNA writer on your desk, with seeds and eggs instead of pictures on a screen”, as Dyson put it.
This is analogous to how personal computing was defined by Apple and Microsoft/Intel in the 1980s, as a platform on which hundreds of thousands of products and services could be based, from desktop publishing to CAD to, eventually, the Internet, Google and cloud computing. It is clearly an exercise in thinking and designing as broadly and systemically as possible around an emerging, enabling technology platform. Only instead of working and playing with bits and bytes of computerized cultural, social and economic information — resulting in new combinations of words, pictures, equations, conversations — you are designing the tools and systems people will use to manipulate bits and bytes of physical, reproductive and metabolic information — resulting in new body types, metabolic by-products, species, evolutionary pathways.
The potential is vast; the mandate is even vaster; and the uncertainty about even what categories of solutions we should be aiming for is vaster still. This is a perfect challenge for designers. Are we ready?