Mike King: "The
Art of Artificial Life"
This essay was written at the request of the organiser of Kunstig Liv,
an exhibition and symposium held in Stavanger, Norway, in August 2003.
Its theme is Artificial Life.
For many years I have worked as a polymath, researching across the
fields of art, science and the spiritual. I like to preserve what
is distinct about each field, yet at the same time try and allow each
to inform the other. Some twenty years ago, while undertaking a Masters
degree in software engineering at Oxford University, I became interested,
if not to say obsessed with artificial life. For some years previous
to that I had been listening to the music of Gary Numan. There were
plenty of better popular singers around at the time, but both a certain
quality of Numans musical aesthetic, and the lyrics themselves
suggested that he was singing about artificial life actually
being a robot. One song in particular, called I dream of wires
seemed to hauntingly capture either the lives of humans who had become
robots, or robots who were semi-human. Newly exposed to computer systems,
writing software, and thinking about software and hardware night and
day (intensive programming affects your dreaming) while on the course
at Oxford, I underwent a strange experience: I imaginatively entered
into the life of being an android. The experience was brief, but so
intense, that I started to write a science-fiction novel about it.
The most bizarre part was that a phrase kept coming into my mind:
"the only way out is up", and, like the character in Spielberg's
Close Encounters of the Third Kind, who became obsessed
with making mountains out of shaving foam, I kept returning to that
little phrase. I found myself typing it into keyboards when I should
have been programming or writing essays. Writing the novel became
my way of dealing with it, and, allowing the characters and scenario
that formed in the novel to take on a life of their own, I eventually
found a resolution to the little phrase. (The novel remains unpublished,
but is online at http://web.ukonline.co.uk/mr.king/writings/scifi.)
Ever since that period, I have been reading about artificial life,
artificial intelligence, artificial consciousness and so on. It seemed
to me that these fields raise two issues: firstly, that collectively
we seem determined to build the artificial human; and secondly, that
this raises profound questions about identity. On the first question,
it is clear that no laboratory or research programme around the world
admits to aiming to build the artificial human, but that collectively
all the components are in place: robotics, cognitive systems, artificial
intelligence, and many other elements, all researched separately and
without coordination. On the second question, the nature of human identity,
I have looked further afield, including philosophy, neuroscience,
and Eastern philosophy, including Buddhism and Hinduism. It has been
an abiding belief that when we do pull all the strands together and
build the artificial human, we will need to draw on all possible sources
to answer the questions raised about our human identity. Is our
human biological identity, our consciousness, our place in the universe,
special? Or are we just a pack of neurons, a carbon-based, rule-based
system that can be reverse engineered and implemented in any suitable
substrate?
We will look at how Eastern thought and neuroscience seem to curiously
agree on this question, but for now I want to look at how artists
have responded to these ideas. Artists, according to Leonard Shlain
in his excellent book Art and Physics, always anticipate the findings
of the scientists. Whether this is true or not, the remarkable ability
of artists is to sense what is "in the air", culturally speaking,
and to respond to it intuitively, seemingly regardless of whether
it is political, scientific, or cultural. It is as though artists
are equipped with invisible antennae that pick up what is in the
ether and allow it to work through into their creations, allowing
them to work things out much faster than through the rational approaches
of other disciplines. Perhaps both Gary Numan and I were doing
that, Numan in music and I in science fiction. Fine artists
have done the same, and so has Hollywood, responding to the issues
of the day, and in this case, the potential for artificial life.
In fact what the general public know about artificial
life is almost invariably shaped by Hollywood productions: Tron
or Terminator; or confused with virtual reality scenarios such as
in Lawnmower Man or The Matrix. The truth is of course that the
science, despite what I said earlier, is decades, possibly centuries
away from building the artificial human.
Artificial Life (AL) as a science bears no relation to this
popular view. It is in fact a branch of the biological sciences that
uses computer simulations to explore theories about life, based on
a bottom-up approach. Whereas the Artificial Intelligence (AI) community
starts at the end of evolution, trying to build computer systems
that mimic the function of a human brain, the AL community takes the
cell as the starting point and allows it to evolve towards complexity.
This difference, between the megabucks, glamorous, Hollywood-style top-down
approach of AI and the small-scale bottom-up approach of AL, is instructive:
the first can be seen as autocratic and modernist, the second as
distributive and postmodern. The former runs on hype, while the latter
runs on humility.
So what is the problem with the megabucks artificial intelligence
project, the project that secretly wants to build the artificial
human, the project that could be described as suffering from Frankenstein
Syndrome? My answer is summed up in the phrase "real-soon-nowism".
Real-soon-nowism is the mistaking of science fiction for science
fact, an optimism about the scientific project itself, a kind of
Tomorrow's World approach to today's world.
I first came across its manifestation in a book on robots, which
must have been published 20 or 30 years ago. It had in it an essay
by a prominent professor of robotics who said: "real soon now
we will have robots that will do the housework." He (it was
a he, of course) has probably long since retired and collected his
pension, leaving a new generation of robotics professors to say:
"real soon now we will have robots that will do the housework."
A typical example was a recent televised Royal Society lecture led
by Professor Kevin Warwick from Reading University, who assembled
the world's most sophisticated robots to impress us. Only Professor
Warwick seemed unfazed by the complete shambles that followed,
a performance that reminded me of pet owners whose dogs won't
perform their trick in front of the camera. The fact is that you
could bring a robot worth tens of millions of dollars to my small
first-floor flat, and it wouldn't even get from the kitchen
to my living room, let alone do any useful cleaning. I could leave
it in the morning, and I would bet any money you care to name that by
the time I returned it would have tumbled down the stairs, greeting
me at the front door with pitiful twitchings.
The strange thing is that during the period at Oxford, one of the
qualities of the android mind, as I imaginatively and obsessively allowed
it to grip my psyche, was that of service. Artificial Life, as practised
by megacorporations, will be produced, though not "real soon
now" at all, to do the stuff we don't want to do, like housework.
Effectively these entities will be slaves, confronting us with a moral
dilemma only resolvable eventually, and after enormous political upheaval,
by giving them equal rights: this was the theme of my novel. But this
future-gazing is all very well; more interesting is what it tells
us about our human psyches right now. At a simple level it tells us
that Mr Robotics Professor doesn't want to do anything as demeaning
as housework, a distaste for the menial work that is at the core of
Western exploitation of other cultures. As an educated person he has
a sense of identity that is related to his intelligence,
never having examined the idea that the haptic intelligence of manual
work, the ability to negotiate and clean my little flat for example,
is an intelligence the miracle of which would "stagger sextillions
of infidels", as Walt Whitman would say. Archaeological evidence
now suggests that the human brain evolved after, and as a response
to, the development of our hands. We got smart because we had hands,
not the other way round. Haptic intelligence is also at the core
of the artist's remarkable range of abilities. It doesn't
matter if the art is minimalist or even conceptual: artists engage
with the intransigent stuff of materiality.
All the artists in the Kunstig Liv show so far, regardless
of the concept, its relation to artificial life, and its realisation,
have had to struggle with the unforgiving materiality of creating
an art exhibition. Morten Kvamme's 37 Degrees can be thought
of as rule-based art with a single rule: keep a space as close to
37 degrees Celsius (body temperature) as possible. Dan Mihaltianu's
installation in a huge unused brewing cylinder represents an encounter
between a concept, its initial realisation, and then the preparation
for the exhibition as a site-specific artwork: how do the pieces
fit? Jane Prophet's work, 3D digital interventions in photographic
landscape, involves the use of custom-written software systems. One
might ask: where is the haptic challenge in that? My answer is that
not only does the question of exhibition arise as usual (shall I
black out the window with tape or black emulsion paint, did I bring
a change of work clothes?) but also the unforgiving nature of computer
programming. Having written hundreds of thousands of lines of code
myself, I know just what kind of discipline is involved: the "soft"
in software makes it harder than non-programmers can imagine.
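To make the point concrete, here is a minimal sketch of a single-rule system in the spirit of Kvamme's 37 Degrees, written in Python purely for illustration; the room model and heater numbers are invented so that the loop can run at all, since the actual installation obviously works with real air and real heat.

# A minimal sketch of a single-rule system: hold a space at body temperature.
# The room model below is invented for illustration; a real installation
# would read a real sensor and switch a real heater.
TARGET = 37.0  # degrees Celsius, body temperature

def run(room_temp=20.0, outside_temp=15.0, steps=50):
    for step in range(steps):
        heater_on = room_temp < TARGET            # the single rule
        if heater_on:
            room_temp += 0.8                      # heater warms the space
        room_temp -= 0.05 * (room_temp - outside_temp)  # heat leaks outside
        print(f"step {step:2d}: {room_temp:5.1f} C, heater {'on' if heater_on else 'off'}")

run()
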
But what of the broader response from artists to artificial life?
We have to understand this in the broader context of artistic intuition
about a range of related issues: virtual reality, cyberspace, cybernetic
organisms (cyborgs), robotics, AI, AL, emergence, chaos theory,
complexity, and so on. These all deal with our relationship with
technology, and in particular with what the computer offers us,
either viewed through real-soon-nowism (also referred
to as techno-optimism) or through techno-pessimism: apocalyptic visions
of a dystopian future. One of the greatest explorers of the human-machine
interface in recent years is Australian artist Stelarc. His robotic
performances, where he dances semi-naked but enveloped in electrodes
and electro-mechanical prostheses, have left his audiences with a
deep impression. The vulnerability of the flesh, many of its functions
handed over to electrical signals that override the human will,
is deliberately contrasted with the visual overkill of machinery
and deafening amplification of its servo-motors. Stelarc asks the
specific question for us: what does it mean to surrender our will
to a machine?
Artificial Life, practised as an art form, follows a different route,
but asks related questions of our identity. AL as art is historically
traceable to rule-based musical composition and painting. All early
computer art, starting with the pioneers in the field, Herbert Franke
in Germany and Ben Laposky in the US, was created on an algorithmic
basis, i.e. using rule-based systems. (We can date the origins of
these early experiments to around 1956.) One extraordinary pioneer,
who pushed the rule-based idea away from the conventionally algorithmic,
and into artificial intelligence, was British painter Harold Cohen.
His system, called AARON (capital letters from the days when we
felt we had to shout at computers), embodies his own
creative rule-set as a painter. It is fashionable for today's
art students to be required to articulate the rules of their creative
practice, but Cohen pushed this further than anyone by actually
surrendering his creativity to the machine. Yet AARON represents,
in the terms of this discussion, the top-down approach of artificial
intelligence. In contrast, the computer artists who have been working
with the bottom-up technologies of artificial life, including Stephen
Bell, Karl Sims, William Latham, John McCormack, Richard Brown,
Kenneth E. Rinaldo, and others, have all ultimately derived their
ideas from Conway's Game of Life, a programme based
on cellular automata. This is the simplest visual instantiation
of an artificial life scenario comprising cells that live, breed,
and die, producing as a result higher-order behaviours from lower-order
rules. This phenomenon is called emergence, and is at the heart
of the fascination with AL.
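Because Conway's rules are so compact, a sketch of the Game of Life takes only a few lines. The Python below is purely illustrative (the grid size and the "glider" seed are arbitrary choices), but it is enough to watch higher-order behaviour emerge from lower-order rules.

# A minimal sketch of Conway's Game of Life: cells on a small toroidal
# grid live, breed and die according to two rules, and higher-order
# behaviour (here, a glider crawling across the grid) emerges from them.

def step(grid):
    """Apply the Game of Life rules once to a grid of 0s and 1s."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around the edges.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # Survival with 2 or 3 neighbours; birth with exactly 3.
            new[r][c] = 1 if (grid[r][c] and n in (2, 3)) or (not grid[r][c] and n == 3) else 0
    return new

# Seed a glider and watch it move over a few generations.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for generation in range(8):
    print(f"generation {generation}")
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), "\n")
    grid = step(grid)
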
The Artificial Life artist must build a system, and then stand
back as it cycles through generations, each loop allowing
for complexity to emerge. This is the basis of evolutionary art,
and has posed difficult technical questions about the fitness function:
what is the criterion by which the artificial entities should
survive and pass on their characteristics to the next generation?
For William Latham and others this might be purely aesthetic, the
artist acting as a kind of gardener, condemning the varieties that
displease him or her to the scrapheap. Others attempt to automate
this process, thus allowing for a world to evolve in which no God
intervenes with externally imposed teleologies. Igor Aleksander, one
of the contributors to Biotica, Richard Brown's book on his AL
project, comments that artists drive their artistic expression by
mastering the bridge between the cellular and the emergent.
Aleksander is Professor of Neural Systems Engineering at Imperial
College. He explains how emergence was not a phenomenon
originally welcomed by the computer science community; indeed, we might
say that it counters the hierarchical, top-down, monolithic science
of old. The challenge of emergence, as Aleksander points out, is how
to relate the low-level workings of the cellular automata, i.e. how
to construct their internal organisation and rules of interaction,
to the emergent behaviour that is going to be interesting.
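The gardener metaphor, and the place of the fitness function, can be made concrete in a short sketch. The generic genetic loop below is not any particular artist's actual system: genomes are just lists of numbers standing in for form-generating parameters, and the automated fitness function is an invented stand-in. Swapping that function for the artist's own eye turns the loop into Latham-style aesthetic selection.

# A toy evolutionary-art loop, purely illustrative: "fitness" is whatever
# judge we plug in, an automated rule or the artist's own judgement.
import random

def mutate(genome, rate=0.2):
    """Return a slightly varied copy of a genome."""
    return [g + random.gauss(0, rate) for g in genome]

def automated_fitness(genome):
    """A stand-in for an externally imposed teleology: prefer balance around zero."""
    return -abs(sum(genome))

def evolve(judge=automated_fitness, generations=50, pop_size=20, length=8):
    pool = [[random.uniform(-1, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pool.sort(key=judge, reverse=True)        # rank by the chosen judge
        survivors = pool[: pop_size // 2]         # the gardener's cull
        # Breed the next generation from the survivors, with variation.
        pool = survivors + [mutate(random.choice(survivors))
                            for _ in range(pop_size - len(survivors))]
    return pool[0]

print(evolve())   # the fittest genome under the automated teleology
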
Let us return to the theme of just what it is that AL and related
technologies do for us in terms of challenging our sense of identity.
Stelarc surrenders muscle control to a machine (or to another human
being a thousand miles away via the Internet), Cohen surrenders his
artistic painterly creativity. Rule-based AL systems demonstrate emergent
behaviours that share qualities of life with us, humans.
Could we ourselves be understood solely in terms of rule-based systems?
The materialist neuroscientists of today certainly think so. It is
hard sometimes to grasp how seriously scientists take their materialism,
how determined they are to have only half the Cartesian divide: the
extended stuff. But Francis Crick, co-discoverer of the structure of DNA, is a
good example. He tells us that he was always so convinced an atheist
that he sought out the two scientific questions that would do the
most damage to religious belief: DNA as the fundamental rule-base
of life, and, since that triumph, consciousness as a neuronal activity.
His work on the neuronal basis of consciousness is driven by the realisation
that some neuronal activity is more related to conscious experience
than others, so ultimately it must be possible to find the neural
correlate of consciousness itself. He is not alone: brain science
as a whole, while getting closer and closer to the workings of the
biological substance, finds less and less reason to credit us with
a non-material mind. There is no "self" to be
found in the brain; there is no one in there.
Strangely, the Buddha came to the same conclusion 2,500 years ago,
formulating it in his famous doctrine of Anatman, meaning
literally "no self" or "no soul". This is where I find the investigations
of the great spiritual geniuses just as interesting and as challenging
as the practitioners of science or art. What all three have in common
is a practice, one that pits human longings and imagination against
the unforgiving pre-existing structures of the world. The scientist
operates at the totally objective level, denying anything but matter,
while the religionist operates at the totally subjective level,
denying anything that cannot be directly known. Artists operate
in between, in a fluid, undogmatic space, sometimes a hall of mirrors
that clouds and befogs us, at other times with brilliant lucidity
and penetrating insight. The ultimately objective science is physics,
and as such is remote, abstract and inhuman. The ultimately subjective
inquiry is mysticism, and as such is remote, abstract, and
we cannot say inhuman, but perhaps unhuman. Most artists encountering
physics will have no problem with the first assertion, even finding
scientists themselves alarmingly constrained by their materialist
paradigm. Few in today's society would agree with my assertion
about the mystics, but then it all depends on whether they have
a spiritual practice or not. For now I want to put forward this
model of art, science and the spiritual as a spectrum, with physics
at one end, mysticism at the other, and art in between, acting
as a bridge between the ultimately objective and the ultimately
subjective. Postmodernism of course denies that there are ultimate
truths, only the relative, but I am going to stick my neck out and
say, okay for the bulk of the middle bit, but not at the two ends.
(This is despite the theories of Thomas Kuhn for physics, and Derrida
for religion). I am going to suggest that at the two extremes, physics
and mysticism, we hear a similar message about personal identity:
it is a mirage.
This is the key issue that artificial life presents us, and for
my money it is artists who are best equipped to probe it, using
their chaotic, intuitive and ultimately so human
mix of objective and subjective methods. But isn't this really
a question for philosophy, you may ask? Is it not, after all, the
domain of all domains, the one discipline that can ask questions
across the whole range of human experience? And here is the crux
of my own approach: I prefer to be grounded in a practice, or range
of practices. Philosophy, as practised in the West, is ungrounded;
that is, it has become a system of speculative investigation. There
was of course no divergence between art, science and philosophy
in the time of the Scholastics: all three served the purpose of
religion. But since the 17th century they have all gone their separate
ways. Alarmingly, not a single philosopher of the Enlightenment
seems to have understood the very catalyst of Rationalism: the physics
to which Newton's Opticks and Principia gave birth. Hume and Kant
showed their disdain for the new science, and the pattern was set:
philosophers did not engage in scientific practice. Likewise philosophy
disengaged itself from spiritual practice, the closest it got
being Husserl's and Merleau-Ponty's phenomenology. Artists,
on the other hand, have a practice; what is more, as suggested above,
it brings into play not just intellectual ability, but haptic intelligence
and the full spectrum of creative human forces (even including the
destructive). What's more, like the philosopher, they are given
licence by the wider community to roam over all domains of human
experience. Hence we find them in this context exploring artificial
life.
However, the philosopher of course has an important role; I just
want it to be seen a little more in proportion. We can take an example
related to artificial life / artificial intelligence to conclude
with. The American philosopher John Searle has made a famous contribution
to the debate with his "Chinese Room" scenario (demonstrating
what philosophers are good at, in my opinion). Searle's argument
is as follows: imagine a room with only one window through which
messages are passed in and out. Searle is sitting inside receiving
the messages, which happen to be written in Chinese, as a series
of symbols. He knows nothing of Chinese, but is able to look up
each symbol in an instruction manual, written in English, and as
a result he selects another symbol to pass out of the room. The
entire scenario is a metaphor for a computer programme, and Searle's
point is that the programme, or the computer as a whole, has no
sense of meaning in its symbol manipulation, any more than Searle
would have in the Chinese Room. The computer takes input, processes
it according to a set of rules, and produces an output. The process
is devoid of all that which makes human experience unique and special:
meaning, cognition, awareness, consciousness itself. No, argues
Margaret Boden, Professor of Philosophy and Psychology at the University
of Sussex. Technically, Searle's argument hinges on a lack
of semantics in the computer programme, and this is where he gets
it wrong, says Boden: if the programme effects change in the external
world then it has a "toehold in semantics".
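To see exactly what is at stake, it helps to write the Room down as the programme it is a metaphor for. The sketch below is a deliberate caricature, not Searle's or Boden's formulation, and the lookup-table entries are invented placeholders: symbols go in, symbols come out, and nothing in the procedure knows what any of them mean.

# A caricature of the Chinese Room as a programme: a pure lookup table
# mapping incoming symbol strings to outgoing ones. The entries are
# invented placeholders; the point is that the procedure manipulates
# symbols without any grasp of their meaning.
RULE_BOOK = {
    "你好": "你好",
    "你是谁": "我是一个房间",
}
DEFAULT_REPLY = "请再说一遍"   # just another opaque symbol to the operator

def chinese_room(message):
    """Pass a message in through the window; look it up; pass a reply out."""
    return RULE_BOOK.get(message, DEFAULT_REPLY)

print(chinese_room("你好"))
print(chinese_room("今天天气怎么样"))
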
I find this debate fascinating, because Searle assumes that humans
don't do exactly what the programme does: process symbols that
in themselves have no a priori meaning. The Buddha, basing his conclusion
on exhaustive meditation practice, effectively concluded that we
are just like the computer. He did not have this metaphor at his
disposal of course, neither would he have realised that this symbol
processing is done at such speed and complexity that emergent properties
would arise. But his language of "co-conditioned origination"
is not that dissimilar. More recent Indian proponents of Enlightenment
in the Advaita (non-dual) tradition, have availed themselves of
the metaphor of the computer to illustrate what the Buddha referred
to as Anatman. These include spiritual teachers like
Nisargadatta Maharaj and Ramesh Balsekar (who incidentally was President
of the Bank of India for many years). They were not the first to
use Western technology as metaphors in this way: over a hundred
years previously the famous mystic Ramakrishna compared the self
to a railway engine, with God as the driver. Is this not the surrender
that Stelarc and Cohen are exploring, albeit in a different,
artistic and secular fashion? Doesn't it show that we are
fascinated by the whole issue of who is in charge? Maybe no one
at all?
Would my house-cleaning robot, when it is finally delivered to me
(several lifetimes away I am inclined to think) just be a glorified
processor of symbols? This is where Boden's idea, so simple in
its utterance, takes on meaning for me. The robot will have a toehold
in semantics if it can actually do anything as mundane as clean
my flat. Yes of course it will be a rule-based artificial lifeform,
processing symbols at an unimaginable rate, symbols that in themselves
don't mean anything. But taken as a whole the entity will possess
that extraordinary haptic intelligence needed to negotiate and make
effective change in the world, in this case a cleaner flat. If I come
home pleased with the android's work, that is semantics; if I
come home and find a broken vase, I'll get angry: that is semantics.
Its internal rule-set will demand a response to my response; my internal
rule-set (I like a clean flat and dislike broken property) provides
the response in the first place. The world has meaning because
we interact: this is the fundamental lesson of artificial life.
And yet. The brain scientist may be happy to prove that there is no one
in there, and the mystics may have come to the same conclusion
through direct meditational experience. But in the world that lies
between these two extremes we find that personality abounds. In this
middle world, where the artist reigns supreme, we don't concern ourselves
with the ultimate, but with the messy, contingent, quotidian stuff
of daily life, a stuff that is inevitably anthropomorphic. It may
be delightfully so or horribly so; it does not matter. In this anthropomorphism
lies our fascination with Rinaldo's flocking machines, or New
Mexico-born Chico MacMurtrie's Tumbling Man. This
anthropomorphism also inevitably dictates that my flat-cleaning robot,
with its rule-based non-biological substrate, will not merely have
a toehold in semantics: it will be a person. And somehow,
this perception, undoubtedly unconscious in its source, led Gary Numan
to ask "Are 'Friends' Electric?"
Dr. Mike King
Department of Art, Media and Design, London Metropolitan University