For the first installment of “What is ID?”, we asked Chuck Owen, Distinguished Professor Emeritus of ID, for his thoughts on the place of artificial intelligence in design. As with everything we publish here on the New Idiom, the thoughts and ideas expressed are those of the authors, and not the official position of ID. Many thanks to Professor Owen for his response.
In his book “What Computers Can’t Do,” Hubert Dreyfus asserts that computers are incapable of reproducing certain human behaviors. In a seminal work of systems theory, “General System Theory,” Ludwig von Bertalanffy argues that simple parts can produce sophisticated - and unexpected - results when they act in concert. How can designers avoid what some might call the “irrational exuberance” of mid-century computer scientists, who expected to build thinking machines, while at the same time remaining aware of the potential to catalyze radical changes in complex systems?
Ancient (1960s) history had it that computers would become “intelligent” within 10 years. In the 1970s, that became the 1980s. In the 80s, the experts guessed the 90s, and so on. True artificial intelligence always seems to be just about ten years away. Brains are rather sophisticated and, while they may not be as fast as computers at some things, they seem to have marvelous ways of dealing with ambiguity, ill-defined situations and what Horst Rittel called “wicked” problems. The grail for designers shouldn’t be automatic design or thinking machines as such, but using computers to do what they do best in support of human designers, and in managing systems that support human activity.
Conceived from the beginning as partnering processes, computer-supported design tools can greatly extend design capabilities. And I don’t mean just better computer graphics. One of our failures as a profession is that we haven’t seriously built computer-supported toolmaking into our spectrum of design research. A wealth of opportunities exists for designers who can program to develop computer-supported tools. Among the many areas that need attention are problem finding, context analysis, dynamic diagramming, simulation, form generation and group decision making, to name just a few that come immediately to mind. In fact, just about any aspect of the design process is fair game even now, almost 50 years into the information age. Our profession is well behind the curve on this one.
On the “thinking machine” side of the question, a good homemade example of how designers should respond comes from our “House of the Future” project of several years ago. An important goal of that project was to integrate computer power into the home (the project was commissioned by IEEE). One such application was a “robotic kitchen.” That opportunity could have been the green light for a totally automatic food preparation system, but that is not what we did. A lot of people actually like to cook! Our system gave members of the family options to cook on their own, have the robotic chef cook up breakfast (or other meals) when everyone was in a hurry, or share the load: the human chef doing the tricky, fun things (the soufflé) and the robotic chef doing the mashed potatoes. The point is, good design thinking in this case transforms the problem from a robotic “thinking machine” into a partnering system in the service of human values.
From where we stand now, it is pretty clear that molecular nanotechnology is going to create a massive technological revolution in this century. In 1993 the Institute of Design took on this future as a project (you can see an illustrated report here). Nanotechnology combines materials technology with computer science, biology and engineering, scaling up problems and opportunities to levels vastly exceeding what we worried about with the early possibilities of computer power. Imagine, for example, the possibilities of supercomputers smaller than a blood cell and too inexpensive to worry about except in giga-quantities… As we got into the project, which was to envision what might be possible in the home, it became more and more obvious that, for design, all the old rules were not going to apply in the not-too-distant future.
Today, as in the past, we operate and are constrained pretty much at the human scale, and we design within the limits of materials and technologies that apply at that scale. Tomorrow, much of that disappears: materials constructed atom by atom can have billions of embedded computers controlling billions of embedded sensors and actuators that can literally morph forms and process materials, information and energy with efficiency and functionality heretofore unimaginable. (See the April 2009 issue of Discover magazine for nanotechnology now being applied to control the path of light: invisibility is one outcome, optical microscopes able to distinguish objects at angstrom-level resolution are another, and these are just a beginning.)
The great insight from the project for us was that we need to find new sources for the principles, guidelines and limitations that shape our goal-setting and planning as designers. If we can do anything—material properties and technological limitations no longer being constraints—where do we look for paths and boundaries? The best answer we had was to turn to our most fundamental values: what is good for humanity and the environment. An interesting problem for the profession and design education!