History [of complexity definitions]
There have been terms for complexity
in everyday language since antiquity. But the idea of
treating complexity as a coherent scientific concept
potentially amenable to explicit definition is quite
new: indeed this became popular only in the late
1980s--in part as a result of my own
efforts. That what one would usually call complexity can
be present in mathematical systems was for example
already noted in the 1890s by Henri
Poincaré in connection with the three-body problem
(see page 972).
And in the 1920s the issue of quantifying the complexity
of simple mathematical formulas had come up in work on
assessing statistical models (compare page 1083).
By the 1940s general comments about biological, social
and occasionally other systems being characterized by
high complexity were common, particularly in connection
with the cybernetics movement. Most often complexity
seems to have been thought of as associated with the
presence of large numbers of components with different
types or behavior, and typically also with the presence
of extensive interconnections or interdependencies. But
occasionally--especially in some areas of
social science--complexity was instead
thought of as being characterized by somehow going
beyond what human minds can handle. In the 1950s there
was some discussion in pure mathematics of notions of
complexity associated variously with sizes of axioms for
logical theories, and with numbers of ways to satisfy
such axioms. The development of information theory in
the late 1940s--followed by the discovery
of the structure of DNA in 1953--led to the
idea that perhaps complexity might be related to
information content. And when the notion of algorithmic
information content as the length of a shortest program
(see page 1067)
emerged in the 1960s it was suggested that this might be
an appropriate definition for complexity. Several other
definitions used in specific fields in the 1960s and
1970s were also based on sizes of descriptions: examples
were optimal orders of models in systems theory, lengths
of logic expressions for circuit and program design, and
numbers of factors in Krohn-Rhodes decompositions of
semigroups. Beginning in the 1970s computational
complexity theory took a somewhat different direction,
defining what it called complexity in terms of resources
needed to perform computational tasks. Starting in the
1980s with the rise of complex systems research (see
page 862)
it was considered important by many physicists to find a
definition that would provide some kind of numerical
measure of complexity. It was noted that both very
ordered and very disordered systems normally seem to be
of low complexity, and much was made of the observation
that systems on the border between these
extremes--particularly class 4 cellular
automata--seem to have higher complexity.
In addition, the presence of some kind of hierarchy was
often taken to indicate higher complexity, as was
evidence of computational capabilities. It was also
usually assumed that living systems should have the
highest complexity--perhaps as a result of
their long evolutionary history. And this made informal
definitions of complexity often include all sorts of
detailed features of life (see page 1178).
One attempt at an abstract definition was what Charles
Bennett called logical depth: the number of
computational steps needed to reproduce something from
its shortest description. Many simpler definitions of
complexity were proposed in the 1980s. Quite a few were
based just on changing p_i Log[p_i] in the definition of entropy
to a quantity vanishing for both ordered and disordered
p_i. Many others
were based on looking at correlations and mutual
information measures--and using the fact
that in a system with many interdependent and
potentially hierarchical parts this should go on
changing as one looks at more and more. Some were based
purely on fractal dimensions or dimensions associated
with strange attractors. Following my 1984 study of
minimal sizes of finite automata capable of reproducing
states in cellular automaton evolution (see page 276),
a whole series of definitions was developed based on
minimal sizes of descriptions in terms of deterministic
and probabilistic finite automata (see page 1084).
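As a minimal illustration of the entropy-style measures mentioned above, one simple way to obtain a quantity that vanishes for both ordered and disordered p_i is to weight the entropy by a factor that vanishes for the uniform distribution. The following Wolfram Language sketch is an assumed construction of this general kind, given for illustration only; it is not one of the specific proposals cited here.

entropy[p_List] := -Total[If[# == 0, 0, # Log[#]] & /@ p]   (* Shannon entropy; zero for a fully ordered p *)
disequilibrium[p_List] := Total[(p - 1/Length[p])^2]        (* zero for the uniform, fully disordered p *)
measure[p_List] := entropy[p] * disequilibrium[p]           (* vanishes at both extremes *)

measure[{1, 0, 0, 0}]            (* 0 for complete order *)
measure[{1/4, 1/4, 1/4, 1/4}]    (* 0 for complete disorder *)
measure[{1/2, 1/4, 1/8, 1/8}]    (* positive for intermediate distributions *)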
In general it is possible to imagine setting up all
sorts of definitions for quantities that one chooses to
call complexity. But what is most relevant for my
purposes in this book is instead to find ways to capture
everyday notions of complexity--and then to
see how systems can produce these. (Note that since the
1980s there has been interest in finding measures of
complexity that instead for example allow
maintainability and robustness of software and
management systems to be assessed. Sometimes these have
been based on observations of humans trying to
understand or verify systems, but more often they have
just been based for example on simple properties of
networks that define the flow of control or
data--or in some cases on the length of
documentation needed.) (The kind of complexity discussed
here has nothing directly to do with complex numbers
such as Sqrt[-1]
introduced into mathematics since the 1600s.)