Key Concepts

This is the landing page for the key concepts of complexity. Each of the concepts below is explored in its own entry:

  • Tipping Points
  • Self-Organized Criticality
  • Self-Organization
  • Scale-Free
  • Rules
  • Power Laws
  • Path Dependency
  • Open / Dissipative
  • Networks
  • Iterations
  • Information
  • Fitness
  • Feedback
  • Far From Equilibrium
  • Degrees of Freedom
  • Cybernetics
  • Attractor States

Tipping Points

A tipping point (often referred to as a 'critical point') is a threshold within a system where the system shifts from manifesting one set of qualities to another.

Complex systems do not follow linear, predictable chains of cause and effect. Instead, system trajectories can diverge wildly into entirely different regimes.


Most of us are familiar with the phrase 'tipping point'. We tend to associate it with moments of no return: when overfishing crosses a threshold that causes fish stocks to collapse or when social unrest reaches a breaking point resulting in riot or revolution. The concept is often associated with an extreme shift, brought about by what seems to be a slight variance in what had been incremental change. A system that seemed stable is pushed until it reaches a breaking point, at which point a small additional push results in a dramatic shift in outcomes.

While the phrase 'tipping point' tends to connote a destructive shift, the phrase 'critical point' (which also refers to a large shift in outcomes due to what appears to be a small shift of the system context) does not carry such value-laden implications. Complex systems tend to move into different kinds of regimes of behavior, and the shift from one behavior to another can be quite abrupt: indicating that the system has passed through a critical point.

Example:

Water molecules respond to two critical points: zero degrees Celsius, when they shift from fluid to solid state, and one hundred degrees, when they shift from fluid to vapor state. The kind of behavior that water molecules obey is context dependent: they maintain fluid behaviors within, and only within, a certain temperature range. If we examine why the behavior of the water changes, we realize that fluid behavior within the zero to one hundred degree range is the behavior that involves the least possible energy expenditure on the part of the water molecules given their environmental context. Once this context shifts - becoming too cold or too hot - that particular behavioral mode is no longer the one that best conserves energy. Water molecules have the capacity to enact three different behavioral modes - frozen, fluid, or vapor - and which mode comes to be enacted depends on whichever involves the least energy expenditure within a given context.

Minimizing Processes:

Another way to think about this, using a complex systems perspective, is that the global behavioral dynamics are moving from one of the system's Attractor States to another. When the context changes, the water molecules are forced into a different "basin of attraction" (another term for an attractor state), and this triggers a switch in their mode.

In all complex systems, this switch from one basin of attraction to another is simply the result of a system leaving a regime of behavior that, up until a certain point, minimized energy expenditure. Beyond that point (the tipping point) another behavioral regime encounters less resistance, conserving energy given the shifted context.

A tipping point, or critical point, is one where a system moves from one regime of 'fit' behavior into another. We can imagine the point above as a water molecule poised at zero degrees - with the capacity to manifest either a fluid or a frozen state.

Of course, what we mean by 'conserving energy' is highly context-dependent. For example, even though the individual members of a political uprising are very different actors from individual water molecules in a fluid medium, the dynamics at play are in fact very similar. Up until a certain critical mass is obtained, resisting a government or a policy involves encountering a great deal of resistance. The effort might feel futile - 'a waste of energy'. But when a movement begins to gain momentum, there can be a sense that the force of the movement is stronger than the institutions that it opposes. Being 'carried along' with the movement (joining an uprising), is in fact the course of action that is most in alignment with the forces being unleashed.

Further, once a critical mass is reached, a movement will tend to accelerate its pace due to positive feedback. This can have both positive and negative societal consequences: mass movements such as lynch mobs or bank runs show us the downside of tipping points that push beyond a threshold and then spiral out of control.

That said, understanding that critical points may exist in a system (beyond which new kinds of behavior become feasible) can help us move outside of 'ruts' or taken-for-granted scenarios. In the North American context, smoking was once an acceptable social practice in public space. Over time, societal norms pushed public smoking beyond a threshold of acceptability, at which point smoking went from being a normative behavior to something that, while tolerated, is stigmatized in the public realm.

What other kinds of activities might we wish to encourage and discourage? If we realize that a behavioral norm is close to a critical point, then perhaps with minimal effort we can provide that additional 'push' that moves it over the edge.

Shifting Environmental Context:

Of course these examples are somewhat metaphoric in nature, but the point being made is that there can be changes in physical dynamics and changes in cultural dynamics that cause different kinds of behaviors to become more (or less) viable within the constraints of the surrounding context.

Returning to physical systems, slime mould is a unique organism that has the capacity to operate either as a collective unit or as a collection of individual cells, depending on the inputs provided by the environmental context. As long as food sources are readily available, the mould operates as single cells. When food becomes scarce, however, a critical point is reached at which cells agglomerate to form a collective body with differentiated functions. This new body has capacities for movement and food detection not available at the individual cell level, as well as new reproductive capacities.

Accordingly, we cannot think about the behavior of a complex system without considering the context within which it is embedded. The system may have different capacities depending on how the environment interacts with and 'triggers' it. It is therefore important to be aware of the environmental coupling of a system. What might appear to be stable behavior might in fact be behavior that relies on certain environmental features being present - change these features and entirely new kinds of behaviors might manifest.

This is to say that tipping points might be triggered both by intrinsic forces and by extrinsic forces (also termed endogenous vs exogenous factors). A shift might be due to dynamics at play within the system that push it beyond a critical threshold, or it may be due to dynamics external to the system that alter the system's context or inputs in such a way that a particular behavior can no longer be maintained and the system is pushed into a new regime. When the forces are external, we can think of this as a shift in the Fitness Landscape, where a particular mode of operation is no longer viable due to differences in the environmental context.

Back to {{key-concepts}}

Back to {{complexity}}


 


Self-Organized Criticality

CAS tend to organize to a 'critical state' where, regardless of the scale of a given input, the scale of the corresponding output follows a power-law distribution.

Strike a match and drop it in the forest. How big will the resulting fire be? The forest is dry but not overly so... vegetation is relatively thick. Will the fire burn a few trees and then flame out, or will it jump from branch to branch, burning thousands of acres to the ground?


Weirdly uncorrelated cause and effect:

We might think that the scale of an event is relative to the scale of its cause, and in some instances this is indeed the case. But in the context of complex systems we find an interesting phenomenon. These systems appear to 'tune' themselves to a point whereby system inputs of identical intensities (two matches lit on two different days, under otherwise identical conditions) result in outputs that diverge wildly (a small fire; a massive fire event). The frequency distribution of intense system outputs (relative to equivalent system inputs) follows power-law regularities.

According to Per Bak, a variety of systems naturally 'tune' themselves to operate at a threshold where such dynamics occur. He defined this 'tuning' as Self-Organized Criticality. A feature of critical states is that, once reached, system components become highly correlated or linked to other system components. That said, the links are exactly balanced: the system elements are linked just tightly enough that an input at any point can cascade through the entire system, but just loosely enough that no redundant links are needed to make this occur.

Example:

One might think about this like an array of domino-like entities that, instead of being rectangular, are vertical cylinders able to topple in any direction. The dominos, instead of being arranged in rows, are arranged in a field, with gaps between some cylinders. Accordingly, when a cylinder falls it might strike a gap in the field, with no additional cylinders toppling. Alternately, it might strike an adjacent neighbor, in which case this neighbor will also fall in a particular direction, potentially striking another or potentially dying out. The analogy is made stronger if we imagine an arrangement whereby, regardless of the direction from which a cylinder is struck, it will wobble and can then fall in any direction. When a system is self-critical, it has reached a state where we can randomly choose any domino to topple and the impact on the overall field will vary according to a power-law distribution. That is to say, some disturbances will affect only a small number of surrounding dominos, while others will propagate throughout the entire system, causing all cylinders to fall. The frequency of these large-scale versus small-scale cascades follows Power Laws distributions.

Sand Piles and Avalanches

We can imagine that it would be very difficult to, from the top down, create a specific arrangement where such dynamics occur. What is surprising, and what Bak and his colleagues showed, is that natural systems will independently 'tune' themselves to such arrangements. Bak famously provides us with the 'sand pile' model as an example of self-organized criticality:

Imagine that we begin to drop a steady stream of grains of sand onto a surface. The sand begins to pile up, forming a cone shape. As more sand is added, the height of the sand cone grows, and there begins to be a series of competing forces: the force of gravity that tends to drag grains of sand downwards, the friction between grains of sand that tends to hold them in place, and the input of new sand grains that tends to put pressure on both of these forces.

What Bak demonstrates is that, as grains are added, sand will dislodge itself from the pile, cascading downwards. What is amazing is that it is impossible to predict whether dropping an individual sand grain will result in a tiny dislodgment of sand or a massive sand avalanche. That said, it is possible to predict the ratio of cascade events over time - which follows a power-law distribution.

What this suggests is that the sand grains cease to respond independently to forces; instead, their response is highly correlated with that of the other sand grains. We no longer have a collection of grains acting independently, but a system of grains displaying system-wide behaviors. Accordingly, an input that affects one element in the system might die out then and there, or, because of the correlation amongst all elements, create a chain reaction.
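To make this concrete, here is a minimal sketch (in Python, with an arbitrarily chosen grid size and number of grains) of the Bak-Tang-Wiesenfeld sandpile: grains are dropped on a grid, any cell holding four or more grains topples and passes one grain to each neighbor, and the size of each resulting avalanche is recorded. Tallying the avalanche sizes from such a run shows many tiny cascades and rare huge ones.

```python
import random
from collections import Counter

SIZE = 50          # grid dimension (arbitrary choice)
THRESHOLD = 4      # a cell topples when it holds this many grains
GRAINS = 20000     # number of grains to drop

grid = [[0] * SIZE for _ in range(SIZE)]
avalanche_sizes = []

def drop_grain():
    """Drop one grain at a random site, relax the pile, and return the avalanche size."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    toppled = 0
    unstable = [(x, y)] if grid[x][y] >= THRESHOLD else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        toppled += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < SIZE and 0 <= nj < SIZE:   # grains falling off the edge are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    return toppled

for _ in range(GRAINS):
    avalanche_sizes.append(drop_grain())

# Tally how often avalanches of each size occurred: many tiny ones, rare huge ones.
print(Counter(s for s in avalanche_sizes if s > 0).most_common(10))
```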

Information Transfer

It remains unclear exactly how such system-wide correlations emerge, but we do know something about the nature of these correlations - they are tuned to the point where information is able to propagate through the system with maximum efficiency. In other words, a message or input at one node in the system (a grain of sand, a burning tree, a toppling cylinder) has the capacity to reach all other nodes, but with the least redundancy possible. There are gaps in the system, which means that a majority of inputs ultimately die out, but not so many gaps that it is impossible for an input to reach all elements of the system.

Coming back to our original example, when we strike a match in a forest, if the forest has achieved a 'self-critical' state, then we cannot know whether the resulting fire will spread only to a few trees, a large cluster of trees, or cascade through the entire forest. The only thing that we can know is that the largest scale events will happen with diminishing frequency in comparison to the small scale events.

One possible way of understanding why self-organized criticality occurs is to position it as a process that emerges in systems that are affected both by a pressure to have elements couple with one another (sand-grains becoming interlocked by friction or 'sticky') and some mechanism that acts upon the system to loosen such couplings (the force of gravity pulling grains apart). The feedback between these two pressures 'tunes' the system to a critical state.

Complex systems that exhibit power-laws would seem to exhibit such interactions between two competing and unbalanced forces.


 


Self-Organization

Self-organization refers to processes whereby coordinated patterns or behaviors manifest in a system without the need for top-down control.

A system is considered to be self-organizing when the behavior of elements in the system can, together, arrive at a globally more optimal functional regime compared to if each system element behaved independently. This occurs without the benefit of any controller or director of action. Instead, the system contains elements acting in parallel that gradually manifest organized, correlated behaviors: Emergence.


Emergent  behaviors become organized into a regular form or pattern. Furthermore, this pattern has properties that do not exist at the level of the independent elements - that is, there is a degree of unexpectedness or novelty in what manifests at the group level as opposed to what occurs at the individual level.

An example of an emergent phenomenon generated by self-organization is flock behavior, where the flock manifests an overall identity distinct from that of any individual bird.

Characterizing 'the self' in 'Self'-organization

Let us begin by disambiguating self-organizing emergence from other kinds of processes that might also lead to global, collective outcomes.

Example - Back to School:

Imagine you are a school teacher, telling your students to form a line leading to their classroom. After a bit of chaos and jostling you will see a linear pattern form that is composed of individual students. At this point, 'the line' has a collective identity that transcends that of any given individual: it is a collective manifestation with an intrinsic identity (don't cut in the line!). The line is created by individual components and expresses new global properties, but its appearance is not the result of self-organization; it is the result of a top-down control mechanism.

Clearly 'selves' organize in this example, but not in ways that are 'self-organizing'.

Now imagine instead that you are a school teacher wanting the same group of students to play a game of tug-of-war in the school gym. Beginning with a blended room of classmates, you ask the students to pick teams. The room quickly partitions into two collectives: one composed entirely of girls and the other entirely of boys. As a teacher, you might not appreciate this self-organization, and attempt to exert top-down control in an effort to balance team gender. What is interesting about this case is that it does not require any one boy calling out 'all the boys on this side', or any one girl doing the same: the phenomenon of 'boys versus girls' self-organizes.

In the example above, we can well imagine the reasons why school teams might tend to partition into 'girls vs boys' even without explicit coordination (of course these dynamics don't always appear, but I am sure the reader can imagine lots of situations where they do).

Here, there are slight preferences (we can think of these as differentials) that generate a tendency for the elements of the system to adjust their behaviors one way versus another. In the case of the school children, the tendency of girls to cluster with girls manifests due to tacit practices: friends cluster near friends, and as clusters appear students switch sides to be nearer those most 'like' them. Even if an individual child within this group has no strong preference - is equally friends with girls and boys - the pressures of patterns formed by the collective will tend to tip the balance. One girl alone in a team of boys will register that her behavior is non-conforming and feel pressured to switch sides, even if this is not explicitly stated.

Here there are 'selves' with individual preferences, but global behaviors are tipped into uniformity by virtue of slight system differences that tend to coordinate action.

Conscious vs unconscious self-organization:

While the gym example should be pretty intuitive, what is interesting is that there are many physical systems that produce this same kind of pattern formation without requiring social cues or other forms of intentional volition. Instead, self-organization occurs naturally in a host of processes. Whether we are talking about schools of fish, ripples of wind-blown sand, or water molecules freezing into snowflakes, self-organization leading to emergent global features is a ubiquitous phenomenon.

While the features of self-organization manifest differently depending on the nature of the system, there are common dynamics at play regardless of the system. Agents in the system participate in a shared context wherein there exists some form of differential. The agents adjust their behaviors in accordance with slight biases in their shared context, and these adjustments, though initially minor, are then amplified through reinforcing feedback that cascades through the system. Finally, an emergent phenomenon can be recognized.

Sync!

Let us consider the sound of cicadas chirping:

cicadas chirping in sync

The cicadas chirp in a regular rhythm. There is no conductor to orchestrate the beat of the rhythm, no head cicada leading the chorus, no one in charge. The process by which the rhythm of sound (an emergent phenomena) manifests is governed purely by the mechanism of self-organization. Let us break down the system:

  1. Agents: Chirping Cicadas
  2. Shared Context: the acoustic environment shared by all cicadas
  3. Differential: the timing of the chirps
  4. Agent Bias: adjust chirp to minimize timing differences with nearby chirps
  5. Feedback: As more agents begin to chirp in more regular rhythms, this reinforces a rhythmic tendency, further syncing chirping rhythms.
  6. Emergent Phenomena: Regular chirping rhythm.

Even if all agents in the system start off with completely different  (random) behaviors, the system dynamics will lead to the coordination of chirping behaviors.
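A minimal sketch of this kind of synchronization, assuming a simple phase-adjustment rule rather than any real cicada physiology: each agent has a phase in its chirp cycle and nudges that phase toward the average of the chorus it 'hears'. The population size, coupling strength, and step size below are arbitrary.

```python
import math
import random

N = 50            # number of cicadas (arbitrary)
COUPLING = 0.2    # how strongly each agent adjusts toward the chorus (arbitrary)
STEPS = 500

# Each agent starts at a random point in its chirp cycle (a phase, in radians).
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order(phs):
    """Length of the mean phase vector: near 0 = uncoordinated, near 1 = perfect sync."""
    x = sum(math.cos(p) for p in phs) / len(phs)
    y = sum(math.sin(p) for p in phs) / len(phs)
    return math.hypot(x, y)

for step in range(STEPS):
    mean_x = sum(math.cos(p) for p in phases) / N
    mean_y = sum(math.sin(p) for p in phases) / N
    mean_phase = math.atan2(mean_y, mean_x)
    # Each agent advances its own cycle and nudges itself toward the chorus it hears.
    phases = [(p + 0.1 + COUPLING * math.sin(mean_phase - p)) % (2 * math.pi) for p in phases]
    if step % 100 == 0:
        print(f"step {step}: order = {order(phases):.2f}")   # climbs toward 1.0
```

Starting from random phases, the order parameter printed above climbs toward 1.0: coordinated rhythm emerges with no conductor.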

For another example of the power of self-organization, consider this proposition: You are tasked with getting one thousand people to walk across a bridge, with their movements coordinated so that their steps are aligned in perfect rhythm. You must achieve this feat on the first try (with a group of strangers of all ages who have never met one another).

It is difficult to imagine this top-down directive ending in anything other than an uncoordinated mess. But place people on the Millennium bridge in London for its grand opening and this is precisely what we get:

as the video progresses, watch the movement of people fall into sync

There are a variety of mechanisms that permit such self-organization to occur. In the Millennium Bridge video, the bridge provides the shared context or environment for the walkers (who are the agents in the system). As this shared context sways slightly (differential) it throws each agent just a little bit off balance (feedback). Each individual then slightly adjusts their stance and weight to counteract this sway (agent bias), which serves only to reinforce the collective sway direction. Over time, as the bridge sways ever more violently, people are forced to move in a coordinated collective motion (emergence) in order to traverse the bridge.

What is important to note in this example is that we do not require the agents to agree with one another in order for self-organization to occur. In our earlier example - that of school children forming teams - we can imagine that a variety of factors are at work that have to do with active volition on the part of the children. But in the example above, the coordinated walking behavior has nothing to do with individual movement preferences. Instead, the agents have become entangled with their context (which is partially formed of other agents) in ways that constrain their movement options.

Enslaved Behavior

Accordingly, in self-organizing systems agents that might initially possess a high number of possible states they are able to enact (see also Degrees of Freedom) find their range of freedom becoming increasingly limited, until only a narrow band of behavior remains possible.

Further, while the shared context of the agents might initially be the source of difference in the system (with difference gradually being amplified over time), in reality the context for each agent is a combination of two aspects: the broader shared context (the bridge) and the emerging behaviors of all the other agents within that context. This is to say that once a global behavior emerges, the subsequent self-organization of the agents is constrained by the emergent context that the agents themselves are part of.

Back to {{key-concepts}}

Back to {{complexity}}


 


Scale-Free

'Scale-free' networks are ones in which identical system structure is observed for any level of network magnification.

Complex systems tend towards scale-free, nested hierarchies. By 'Scale-free', we mean to say that we can zoom in on the system at any level of magnification, and observe the same kind of structural relations.


If we look at visualizations of the world wide web, we see a few instances of highly connected nodes (youtube), many instances of weakly connected nodes (your mom's cooking blog), as well as a mid-range of intermediate nodes falling somewhere in between. The weakly connected nodes greatly outnumber the highly connected nodes, but the overall statistical distribution of strongly versus weakly connected nodes follows a power-law distribution. Thus, if we 'zoom in' on any part of the network (at different levels of magnification), we see similar, repeated patterns.
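A quick way to see this structure is to grow a network by preferential attachment and inspect its degree distribution. The sketch below uses the networkx library's Barabási-Albert generator; the network size and attachment parameter are arbitrary choices.

```python
import networkx as nx
from collections import Counter

# Grow a network of 10,000 nodes where each new node attaches to 2 existing
# nodes, preferring already well-connected ones (preferential attachment).
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)

degree_counts = Counter(dict(G.degree()).values())

# Print how many nodes have each degree: a handful of hubs, a long tail of
# weakly connected nodes - the same shape at whatever range we 'zoom' into.
for degree in sorted(degree_counts)[:10]:
    print(f"degree {degree}: {degree_counts[degree]} nodes")
print("maximum degree (largest hub):", max(degree_counts))
```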

'Scale-free' entities are therefore sometimes fractal-like, although there are scale-free systems that are more about the scaling of connections or flows than about the scaling of pictorial imagery (which is what we associate with Fractals or objects that exhibit Self Similarity). Accordingly, a pictorial representation of links in the world wide web does not exactly 'look' like a fractal, but its distribution of connections observes mathematical regularities consistent with what we observe in fractals (that is to say, {{power-laws}}).

A good example here is the fractal features of a leaf:

We can think of the capillary network as the minimum structure required to reach the maximum surface area.

Nature's Optimizing Algorithm

Here, the scale-free structure of the capillary network allows the most efficient transport of nutrients to all parts of the leaf surface within the overall shortest capillary path length. This 'shortest overall path length'  is one of the reasons that we might often see scale-free features in nature: this may well be the natural outcome of nature 'solving' the problem of how to best economize flow networks.

minimum global path length to reach all nodes

The two images serve to illustrate the idea of shortest overall path length. If we wish to get resources from a central node to 16 nodes distributed along a surrounding boundary, we can either trace a direct path to each point from the center, or we can partition the path into splitting segments that gradually work their way towards the boundary. While each individual pathway from the center to an individual node is longer in the right hand image, the total aggregate of all pathways to reach all nodes from the center is shorter. Thus the image on the right (which shows scale-free characteristics), is the more efficient delivery network.

Example - Street Networks:

We should therefore expect to see such scale-free dynamics in other, non-natural systems that carry and distribute flows. If we think of the size distribution of road networks in a city, we would expect a small number of key expressways carrying large traffic flows, followed by a moderate number of mid-scaled arteries carrying mid-scale flows, then a large number of neighborhood streets carrying smaller flows, and finally a very high number of extremely small alleys and roads that each carry very small flows to their respective destinations.

mud fractals and street networks

Fractals, scale-free networks, self-similar entities and power-law distributions are concepts that can be difficult to disambiguate. Not all scale-free networks look like fractals, but all fractals and scale-free networks follow power-laws. There are also many power-law distributions that neither 'look' like fractals nor have scale-free network characteristics: if we take a frozen potato and smash it on the ground, then classify the size of each piece, we would find that the distribution of smashed potato pieces follows a power law (but is not nearly as pretty as a fractal!). Finally, self-similar entities (like the romanesco broccoli shown below) are fractal-like (you can zoom in and see similar structure at different scales), but are not as mathematically precise as a fractal.

credit: Wikimedia commons  (Jon Sullivan)


Back to {{key-concepts}}

Back to {{complexity}}


 


Rules

Complex systems are composed of agents governed by simple input/output rules that determine their behaviors.

One of the intriguing characteristics of complex systems is that highly sophisticated emergent phenomena can be generated by seemingly simple agents. These agents follow very simple rules - with dramatic results.


Simple Rules - Complex Outcomes

How does one replicate the efficiencies of the Tokyo subway map? Simple - enlist slime mould and let it discover them! Results such as these are highly counterintuitive: when we see complicated phenomena, we expect the causal structure at work to be similarly complex. However, in complex systems this is not the case. Even if the agents in a complex system are very simple, the interactions generated amongst them can have the capacity to yield highly complex phenomena.

Slime mold forming the Tokyo subway map

Take it in Context

We can conceptualize  bottom-up agents as simple entities with limited action possibilities. The decision of which action possibility to deploy is regulated by basic rules that pertain to the context in which the agents find themselves. Another way to think of 'rules' is therefore to relate them to the idea of a simple set of input/output criteria.

An agent exists within a particular context that contains a series of factors considered as relevant inputs: one input might pertain to the agent's previous state (moving left or right); one might pertain to some differential in the agent's context (more or less light); and one might relate to the state of surrounding agents (greater or fewer). An agent processes these inputs and, according to a particular rule set, generates an output: 'stay the course', 'shift left', 'back up'.

input/output rule factoring three variables
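As a sketch of what such a rule might look like in code - the input names, thresholds, and outputs below are invented purely for illustration:

```python
def agent_rule(previous_move: str, light_ahead: float, neighbors_nearby: int) -> str:
    """A toy input/output rule factoring three contextual inputs into one action."""
    if neighbors_nearby > 5:       # too crowded: give way
        return "back up"
    if light_ahead < 0.3:          # too little light ahead: change heading
        return "shift left"
    return previous_move           # otherwise: stay the course

print(agent_rule("moving right", light_ahead=0.8, neighbors_nearby=2))  # -> "moving right"
```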

In complex adaptive systems, an aspect of this 'context' must include the output behaviors generated by surrounding agents. Further, while for natural systems the agent's context might include all kinds of factors that serve as relevant inputs, in artificial complex systems novel emergent behavior can manifest even if the only thing informing the context is surrounding agent behaviors.

Example:

Early complexity models focused precisely on the generative capacity of simple rules within a context composed purely of other agents. For example, John Conway's 'Game of Life' is a prime example of how a very basic rule set can generate a host of complex phenomena. Starting from agents arranged on a cellular grid, with fixed rules for being either 'on' or 'off' depending on the status of the agents in neighboring cells, we see the generation of a host of rich forms. The game unfolds using only four rules that govern whether an agent is 'on' (alive) or 'off' (dead). For every iteration:
  1. 'Off' cells turn 'On' IF they have three 'alive' neighbors;
  2. 'On' cells stay 'On' IF they have two or three 'alive' neighbors;
  3. 'On' cells turn 'Off' IF they have one or fewer 'alive' neighbors;
  4. 'On' cells turn 'Off' IF they have four or more 'alive' neighbors.
The resulting behavior has an 'alive' quality: agents flash on and off over multiple iterations, seem to converge, move along the grid, swallow other forms, duplicate, and reproduce.

Conway's Game of Life
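A minimal implementation of these four rules, using numpy and a wrap-around grid (the wrap-around edges and the random initial density are choices of convenience, not part of Conway's definition):

```python
import numpy as np

def step(grid):
    """Apply Conway's four rules once to a 2D array of 0s ('off') and 1s ('on')."""
    # Count each cell's eight neighbors by summing shifted copies of the grid
    # (np.roll wraps around the edges, so the board behaves like a torus).
    neighbors = sum(
        np.roll(np.roll(grid, di, axis=0), dj, axis=1)
        for di in (-1, 0, 1) for dj in (-1, 0, 1)
        if (di, dj) != (0, 0)
    )
    birth = (grid == 0) & (neighbors == 3)                          # rule 1: off -> on with exactly 3 live neighbors
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))   # rule 2: on stays on with 2 or 3
    # Rules 3 and 4 (death by isolation or overcrowding) are every remaining case.
    return (birth | survive).astype(int)

rng = np.random.default_rng(0)
grid = (rng.random((40, 40)) < 0.3).astype(int)   # ~30% of cells start 'alive'
for generation in range(100):
    grid = step(grid)
print("live cells after 100 generations:", grid.sum())
```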

Principle: One agent's output is another agent's input!

As we can see from the Game of Life, starting with very basic agents who rely only on other agents' outputs as their inputs, a basic rule set can nonetheless generate extremely rich outputs.

While the Game of Life is an artificial complex system (modeled in a computer), we can, in all real-world examples of complexity, observe that the agents of the system are both responders to inputs from their environmental context, as well as shapers of that same environmental context. This means that the behaviors of all agents necessarily become entangled - entering into feedback loops with one another.

Adjusting rules to targets

It is intriguing to observe that, simply by virtue of simple rule protocols that are pre-set by the programmer and played out over multiple iterations, complex emergent behavior can be produced. Here we observe the 'fact' of emergence from simple rules. But we can also imagine natural complex systems where agent rules shift over time. While this could happen arbitrarily, it makes sense from an evolutionary perspective when some agent rules are more 'fit' than others. This results in a kind of selection pressure, determining which rule protocols are preserved and maintained. Here, the discovery of simple rule sets that yield better enacted results exemplifies the 'function' of emergence.

When we couple the notion of 'rules' with context, we are therefore stating that we are not interested in just any rule set that can generate emergent outcomes, but in specific rule sets that generate emergent outcomes that are in some way 'better' with respect to a given context. Successful rule systems imply a fit between the rules the agents are employing and how well these rules assist the agents (as a collective) in achieving a particular goal within a given setting.

As a general principle we can think of successful rules as ones that minimize agent effort (energy output) to resolve a given task. That said, in complex systems we need to go a step further and analyze the collective energy output. Thus the best rules will be the ones that result in minimal energy output for the system as a whole to resolve a given task. This may require 'sacrifice' on the part of an individual agent, but this sacrifice (from a game theory perspective) is still worth it at the overall system level.

As agents in a complex system enact particular rule sets, rules might be revised based on how quickly or effectively they succeed at reaching a particular target.

When targets are achieved - 'food found!' - this information becomes a relevant system input.  Agents that receive this input may have a rule that advises them to persist in the behavior that led to the input, whereas agents that fail to achieve this input may have a rule that demands they revise their rule set!

Agents are therefore not only conditioned by a set of pre-established inputs and outputs but are also able to revise their rules. This requires them to gain feedback about the success of their rules and to test modifications. A way of thinking about this is captured in the idea of an agent-held {{schemata}} about their behavior relative to their context, which can be updated over time so as to better align the two. Further, if multiple agents test different rule regimes simultaneously, then there may be other 'rules' that help agents learn from one another. If a particular rule leads agents to food, on average, in ten steps, and another rule leads agents to food, on average, in six steps, then agents adopting the second rule should have the capacity to disseminate their rule set to other agents, eventually suppressing the first, weaker rule. This process of dissemination requires some form of communication or steering, which is often done via the use of Stigmergy.

Enacted 'rules' are therefore provisional tests of how well an output protocol achieves  a given goal. The test results then become another form of input:

bad test results also become an agent input,  telling the agent to: "generate a rule mutation as part of your next enacted output".

Novel Rule formation:

Rules might be altered in different ways. At the level of the individual -

  • an agent might choose to revise how it values or factors inputs in a new way;
  • an agent might choose to revise the nature of its outputs in a new way.

In the first instance, the impact or value assigned to particular inputs (needed to trigger an output) might change based on how successful previous input weighting strategies were in reaching a target goal. In order for this to occur, the agent must have the capacity to assign new 'weights' (the value or significance) to an input in different ways.

In the second instance, the agent requires enough inherent flexibility or 'Degrees of Freedom' to generate more than one kind of output. For example, if an agent can only be in one of two states, it has very little ability to realign its outputs. But if an agent has the capacity to deploy itself in multiple ways, then there is more flexibility in the kinds of rules it can formulate. This ties back to the idea of {{adaptive-capacity}}.

Rules might also be revised through processes occurring at the group level. Here, even if agents are unable to alter their performance at the individual level, there may still be mechanisms operating at the level of the group which result in better rules propagating. In this case, we would have a population of agents, each with specific rule sets that vary amongst them. Even if each individual agent has no ability to revise their particular rules, at the level of the population -

  • poor rules result in agent death - there is no internal recalibration - but agents with bad rules simply cease to exist;
  • 'good' rules can be reproduced - there is no internal recalibration - but agents with good rules persist and reproduce.

We can imagine that the two means of rule revision - those working at the individual level and those at the population level - might work in tandem. While all of this should not seem new (it is analogous to General Darwinism), since complex systems are not always biological ones, it can be helpful to consider how the processes of system adaptation (evolution) can instead be thought of as a process of rule revision.
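A toy sketch of the population-level route, with entirely made-up numbers: each agent carries a single 'rule' (here just a probability of stepping toward a food source), agents whose rule finds food in fewer steps reproduce, and the rest are removed. No individual agent ever recalibrates; better rules simply come to dominate the population.

```python
import random

POP = 200            # population size (arbitrary)
GENERATIONS = 30
FOOD_DISTANCE = 20   # how many steps of net progress reach the food

def steps_to_food(move_toward_food_prob, max_steps=500):
    """Simulate one agent: its single 'rule' is the probability of stepping toward food."""
    position, steps = 0, 0
    while position < FOOD_DISTANCE and steps < max_steps:
        if random.random() < move_toward_food_prob:
            position += 1      # step toward the food
        elif position > 0:
            position -= 1      # wander away from it
        steps += 1
    return steps

# Each agent carries one rule parameter; start with a diverse population.
population = [random.uniform(0.1, 0.9) for _ in range(POP)]

for _ in range(GENERATIONS):
    ranked = sorted(population, key=steps_to_food)        # fewer steps = fitter rule
    survivors = ranked[: POP // 2]                        # poor rules simply cease to exist
    # Good rules persist and reproduce; copying noise stands in for mutation.
    population = survivors + [
        min(0.99, max(0.01, rule + random.gauss(0, 0.02))) for rule in survivors
    ]

# The average rule parameter should drift toward high values: rules that head for food win out.
print(f"average rule after selection: {sum(population) / POP:.2f}")
```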

Through agent-to-agent interaction, over multiple iterations, weaker protocols are filtered out, and stronger protocols are maintained and grow. That said, the way in which rules are revised is not entirely predictable - there are many ways in which rules might be revised, and more than one kind of revision may prove successful (as the saying goes, there is more than one way to skin a cat). Accordingly, the trajectory of these systems is always contingent and subject to historical conditions.

Fixed Rules with thresholds of enactment

Not all complex adaptive behaviors require that rules be revised. We began with artificial systems - cellular automata - where the agent rules are fixed but we still see complex behaviors. There are also examples of natural complex systems where rules are fixed, yet still drive complex behaviors. These rules, rather than being the result of a computer programmer arbitrarily determining an input/output protocol, are the result of fundamental laws (or rules) of physics or chemistry.

One particularly beautiful example of non-programmed natural rules resulting in complex behaviors is the Belousov-Zhabotinsky (BZ) chemical oscillator.  Here, fixed chemical interaction rules lead to complex form generation:

BZ chemical oscillator

In this particular reaction, as in other chemical oscillators, there are two interacting chemicals, or two 'agent populations', which react in ways that are auto-catalytic. The output generated by the production of one of the chemicals becomes the input needed for the generation of the other chemical. Each chemical is associated with a particular color, which appears only when that chemical is present in sufficient concentration. The concentrations of these chemicals augment and diminish at different reaction speeds, leading to shifting concentrations of the coupled pair. As concentrations rise and fall, we see emergent and oscillating color arrays.
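The BZ reaction itself involves a fairly elaborate reaction network, but the logic of two coupled concentrations driving each other can be sketched with a simpler textbook chemical oscillator, the Brusselator, integrated here with a basic Euler step. The rate constants A and B are arbitrary choices that place the system in its oscillating regime; this is an illustration of the coupled-oscillation idea, not a model of the BZ chemistry.

```python
# Brusselator: a minimal two-species chemical oscillator.
#   dx/dt = A - (B + 1) * x + x^2 * y
#   dy/dt = B * x - x^2 * y
# When B > 1 + A^2 the steady state is unstable and the concentrations oscillate.

A, B = 1.0, 3.0          # feed rates (chosen so that B > 1 + A^2)
x, y = 1.0, 1.0          # initial concentrations of the two species
dt = 0.01

for step in range(200_000):
    dx = A - (B + 1) * x + x * x * y
    dy = B * x - x * x * y
    x += dx * dt
    y += dy * dt
    if step % 20_000 == 0:
        # Print the rising-and-falling concentrations of the coupled pair.
        print(f"t = {step * dt:7.1f}   x = {x:5.2f}   y = {y:5.2f}")
```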

Back to {{key-concepts}}

Back to {{complexity}}


 


Power Laws

Complex System behaviors often exhibit power-laws: with a small number of system features wielding a large amount of system impact.

Power laws are particular mathematical distributions that appear in contexts where a very small number of system events or entities, while rare, are highly impactful, alongside a very large number of system events or entities that, while plentiful, have very little impact.


Power laws arise in both natural and social systems, in contexts as diverse as earthquake intensities, city population sizes, and word frequency use.

'Normal' vs 'Power Law' Distributions

Complex systems are often characterized by power-law distributions. A power law is a kind of mathematical distribution that we see in many different kinds of systems. It has different properties from the well-known 'bell curve' (the 'normal' or 'Gaussian' distribution).

 Let's look at the two here:

Power-law (left) vs Bell-curve (right)

Most people likely remember the bell curve from high school. The fat middle (highlighted) is the 'norm' and the two sides or edges represent the extremes. Accordingly, a bell curve can illustrate things like people's heights - with 'typical' heights being distributed around a large cluster at the middle, and extreme heights (both very tall and very short people) being represented by much smaller numbers at the extremes. There are many, many phenomena that can be graphed using a bell curve. It is suitable for depicting systems that hover around a normative 'middle' and for systems where there are no driving correlations amongst members of the set. That is to say: the height of one person in a classroom is not constrained or affected by the heights of other people.

Power-law distributions are likely as common as bell-curve distributions, but for some reason people are not as familiar with them. They occur in systems where there is no normative middle around which most phenomena occur. Furthermore, entities within a power-law set enjoy some kind of calibrating feedback relation amongst them - meaning that the size of one entity in the system is in some way correlated with (or has an impact on) the size and frequency of other entities. These systems are characterized by a small percentage of phenomena or entities accounting for a great deal of influence or system impact.

This small percent is shown on the far left hand side of the diagram (highlighted), where the 'y' axis (vertical) indicates intensity or impact (of some phenomena), and the 'x' axis indicates the frequency of events, actors, or components associated with the impact. The left hand side of the diagram is sometimes called the 'fat head', and as we move along to the right hand side of the diagram, we see what is called 'the long tail'. Like the bell curve, which we can use to chart phenomena such as housing prices, heights, test scores, or household water consumption, the power law distribution can illustrate many different kinds of things. 

Occasionally, we can illustrate the same phenomena using bell curves and power law distributions, while simultaneously highlighting different aspects of the same phenomena.

Example:

Let's say we chart income levels on a bell curve. The majority of people earn a moderate income, and smaller numbers of people earn the very high and very low incomes at the extreme sides. Showing this data, we get a chart that looks like the one below:

Wealth in the USA plotted as a bell curve (source: pseudoerasmus)

But we can think of income distribution another way - in terms of the impact or intensity of incomes. Consider this fact of wealth distribution: in the US, if we look at the right side of the bell curve above (the wealthiest people, who make up a small fraction, roughly 1%, of the population), these few people control around 45% of all US wealth. Clearly, the bell curve does not capture the importance of this small fraction of extreme wealth holders.

Imagine that instead of plotting the number of people in different income brackets we were to instead plot the intensities of incomes themselves. In this case we would generate a plot showing:

  • 1% (a few people) controlling 45% (a large chunk) of total wealth;
  • 19% (a moderate number of people) controlling 35% (a moderate chunk) of total wealth;
  • 80% (the bulk of the population) controlling 20% (a small fraction) of total wealth.

These ratios plot as a power law, with approximately 20% of the people controlling 80% of the wealth resource.

80/20 Rule

These numbers, while not precisely aligned with US statistics, are not that far off, and they align with what is referred to as the '80/20' rule: where 20 percent of a system's components are responsible for 80 percent of the system's key functions or impacts. This phenomenon was first noted by {{Pareto}}, and is also referred to as a Pareto Distribution. We find Pareto distributions in many different kinds of phenomena, where the distributions might apply to aspects such as quantities, frequencies, or intensities. Thus:

  • 20% of our wardrobe is worn 80% of the time;
  • 20% of all English words are used 80% of the time;
  • 20% of all roads attract 80% of all traffic;
  • 20% of all grocery items account for 80% of all grocery sales;

Finally, if we smash a frozen potato against a wall and sort out the resulting broken chunks:

  • 20% of the potato chunks will account for 80% of the total smashed potato.

Such ratios are so common that if you are unsure of a statistic then - provided it follows the 80/20 rule - you are likely safe to make it up! (the frozen potato being a case in point :))
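As a rough illustration, we can draw synthetic 'incomes' from a Pareto distribution and check what share of the total the top 20% of earners hold. A Pareto shape parameter of roughly 1.16 is the value conventionally associated with the 80/20 rule; the sample size here is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw synthetic incomes from a Pareto distribution.
# numpy's pareto() returns the Lomax form, so we add 1 to get the classical Pareto (minimum income = 1).
shape = 1.16                      # ~ the value associated with the 80/20 rule
incomes = 1 + rng.pareto(shape, size=1_000_000)

incomes.sort()
top_20_percent = incomes[int(0.8 * incomes.size):]
share = top_20_percent.sum() / incomes.sum()
print(f"share of total income held by the top 20%: {share:.0%}")   # roughly 80%
```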

Source: themediaconsortium.org

Rank Order

Another way to help understand how power law distributions work is to consider systems in terms of what is called their 'rank order'.  We can illustrate this with language. Consider a few words from English:

  • 'The' is the most commonly used word in the English language -
    • We rank it first, and it accounts for about 7% of all word use (rank 1).
  • 'Of' is the second most commonly used word -
    • We rank it second, and it accounts for about 3.5% of all word use (1/2 of the rank 1 word).

If we were to continue, say looking at the seventh most frequently used word, we would expect to see it used 1/7th as frequently as the most commonly used word. And in fact -

  • 'For' is the seventh most commonly used word -
    • We rank it seventh, and it accounts for about 1% of all word use (1/7th of the rank 1 word).

This power-law phenomenon is known as 'Zipf's Law', after George Kingsley Zipf, the man who first identified it. Zipf's law indicates that if, for example, you have 100 items in a group, the 99th item will occur 1/99th as frequently as the first item. For any element in the group, you simply need to know its rank in the order - 1st, 3rd, 25th - to understand its frequency (relative to the top-ranked item in the group).

The constant in Zipf's law is '1/n', where the 'nth' ranked word in a list is used 1/nth as often as the most popular word.

Were all power-laws to follow a Zipf's law then:

  • the 20th largest city would be 1/20th the size of the largest;
  • the 10th most popular child's name would be used 1/10 of the time compared to the most popular;
  • the 3rd largest earthquake in California in 100 years would be 1/3 of the size of the largest;
  • the 50th most popular product would sell 1/50th as often as the most popular.

This is a very easy and neat set, and it represents perhaps the most straightforward power law. That said, there can be other power-law ratios between elements which, while remaining constant, are not such a 'clean' constant. These follow the same principle but are just more difficult to express (and calculate). For example:

'1/n^1.07' would be a power law where the 'nth' ranked word in a list is used 1/n^1.07 times as often as the most popular word.
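In general form (the exponent symbol s is my own notation here, not the author's):

```latex
f(n) = \frac{f(1)}{n^{s}}, \qquad
\begin{cases}
s = 1 & \text{the 'clean' Zipf case: rank } n \text{ occurs } 1/n \text{ as often as rank 1,}\\
s \approx 1.07 & \text{a less tidy, but still constant, power law.}
\end{cases}
```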

Pretty in Pink

Clearly '1/n^1.07' is a less satisfying ratio than 1/n. In fact, the 1/n ratio is so pleasing that it has a few different names. 1/n is mathematically equivalent to the 1/f ratio, where instead of highlighting the rank in a list, 1/f highlights the frequency of an element in a list (the format is different but the meaning is the same).

'1/f' is also described as 'pink noise' - a statistical pattern distinct from 'brown' or 'white' noise. Each class of 'noise' pertains to a different kind of randomness in a system. In other words, while many systems exhibit random behaviors, some random behaviors differ from others. We can think of 'pink', 'white', and 'brownian' noise as being different 'flavors' of randomness. Without getting into too much detail here, 1/f noise seems to occur frequently in natural systems, and can be associated with beauty. In non-mathematical terms, pink noise involves a frequency ratio of component distributions such that there is just enough correlation between elements to provide a sense of unity, and just enough unexpectedness to provide variety. The human mind seems to enjoy this balance between the two, which is why pink noise can be found in music or artworks that we find beautiful. White noise is too random (no correlation) and brownian noise is too correlated (no unexpected interest).

Dynamics generating Power-laws

Power laws distributions have been identified in many complex system behaviors, such as:

  • earthquake size and frequency
  • neuron activity
  • stock prices
  • web site popularity
  • academic citation network structure
  • city sizes
  • word use frequency
  • ....and much more!

Much time and energy has gone into identifying where these distributions occur and also trying to understand why they occur.


Growing Riches

A strong contender for explaining the presence of power-law dynamics is that they may arise in systems that involve both growth and Preferential Attachment. Understood colloquially as 'the rich get richer', preferential attachment is the idea that popular things tend to attract more attention, thereby becoming more popular. Similarly, wealth begets wealth. The combination of growth and preferential attachment is therefore associated with positive feedback. It can be used to explain the presence of power-law distributions in the size and number of cities (bigger cities attract more industry, thereby attracting more people...), the distribution of citations in academic publishing (highly cited authors are read more, thereby attracting more citations), and the accumulation of wealth (rich people can make more investments, thereby attracting more wealth).


Push forward and Push back

Further, power-laws might be understood as phenomena that occur in systems that involve both positive and negative feedback interactions as co-evolving drivers of the entities within the system. Such systems involve feedback dynamics that are out of balance: some dynamics ({{positive-feedback}}) amplify certain system features, while other dynamics ({{negative-feedback}}) simultaneously 'dampen' or constrain these same features. At the same time there is a correlation between these push and pull dynamics - the greater the push forward, the more it generates a pull back, and vice versa. The imbalance in this interplay between interacting forces creates feedback loops that lead to power-law features.

An example of this would be that of a reproducing species in an ecosystem with limited carrying capacity. Plentiful food would tend to amplify reproduction and survival rates (positive feedback), but as the population expands this begins to put pressure on the food resources, leading to a push back (lower survival rates), and consequently a drop in population levels. The two driving factors in the system - growing population and dwindling food - are causally intertwined with one another and are not necessarily in balance. If the system achieves a perfect balance then it will find an equilibrium - the reproduction rate will settle to a point where it matches the carrying capacity. But if there are forces that drive the system out of balance, or if there is a lag time between how the 'push' and 'pull' (amplifying and constraining) dynamics interact, then the system cannot reach equilibrium and instead keeps oscillating between states (see {{Bifurcations}}).
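The logistic map is the standard toy model of exactly this push/pull interplay: growth proportional to the current population, pulled back by how close the population is to the carrying capacity. Whether the system settles to equilibrium or keeps oscillating depends only on the growth rate r; the values below are illustrative choices.

```python
def logistic_trajectory(r, x0=0.2, steps=60):
    """Iterate x -> r * x * (1 - x): growth (push) damped by crowding (pull)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Settle vs. oscillate for different growth rates.
for r in (2.8, 3.2, 3.5):
    tail = [logistic_trajectory(r, steps=100 + i) for i in range(4)]  # four successive long-run states
    print(f"r = {r}: long-run states ~ {[round(v, 3) for v in tail]}")
# r = 2.8 converges to a single value; r = 3.2 flips between two values;
# r = 3.5 cycles among four - the push and pull never balance into one equilibrium.
```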


Example: What's in a Name?

It has been shown that the frequency of baby name occurrences follows a power-law distribution. In this example, what is the push/pull interplay that might lead to the emergence of this regularity?

While each set of parents chooses their child's name independently, they do so within a system where their choices are somewhat driven or constrained by the choices being made by parents around them. Suppose there is a name that, for some reason, has become prevalent in popular consciousness - perhaps a character name in a popular book or TV series. It is not necessary to know the precise reasons why this particular name becomes popular, but we can imagine that certain names seem to resonate in popular consciousness or 'the zeitgeist'. Take the name 'Jennifer'. An obscure name in the 1930s, it became the most popular girl's name in the 1970s. During that time, if you were one of the approximately 16 million girls born in the US, there was a 2.8% chance you would be named Jennifer! And yet, the name had plummeted back to 1940s levels by the time we reach 2012.

the rise and fall of Jennifer

But how can the rise and fall of 'Jennifer' be described using push and pull forces? We can imagine a popular name being like a contagion, where a given name catches on in popular consciousness. During its initial spread, the name is highlighted even further, potentially expanding its appeal. At the same time, the very fact that the name is popular creates a tendency toward resistance - if Jennifer is on a short list of possible baby names, but a sibling or close friend names their child 'Jennifer', this has an impact on your naming choice. In fact, the more popular the name becomes, the more pullback we can expect. As more and more people tap into the popularity of a name, it becomes more and more commonplace, leading to a sense of overuse and a search for new novelty. The interactions of push and pull cause the name to both rise and fall. In a system of names, Jennifer had an expansion rate driven by rising-popularity feedback, and then a decay rate caused by overuse and loss of freshness.


The Long Tail

An additional feature of power-law distributions that should not be overlooked is what is sometimes called the "power of the long tail". While power-law systems have a few strongly performing elements in the upper 20%, there are still many important actors in the remaining 80% of the distribution. One recent feature of information technologies is that it is easier to "find" the specificity of this 80%. If we think about bookstores from only a decade ago, they needed to carry only the "best-sellers": if your reading interests fell outside of the norm then it would be difficult to find books that would serve as the right "fit" or "niche" for your reading interests. Today, with information flows having become so inexpensive, online bookstores are not limited by the number of titles they can carry, so people can find the niche books they actually want to read rather than having to compromise around the average. In some ways this echoes ecosystems, where there can be a few top players, but where there also exist many viable micro-niches that can be populated. There are many domains where accessing this "long tail" will lead to more choice and precision in complex systems.


Proviso

While power-laws are often pointed to as 'the fingerprint of complexity', it should be noted that their reported ubiquity is not without controversy. While many studies highlight the presence of these mathematical regularities in a host of diverse systems, others argue that the statistics upon which these findings are based are often skewed, and that power-laws may not be as common as is frequently stated. Researchers looking to affirm the existence of these patterns may ignore results where they do not occur, and attribute their presence to systems that may or may not actually hold these properties.


Back to {{key-concepts}}

Back to {{complexity}}



 


Path Dependency

'Path-dependent' systems are ones where the system's history matters - the present state is contingent upon random factors that governed system unfolding, and that could have easily resulted in other viable trajectories.

Complex systems can follow many potential trajectories: the actualization of any given trajectory can be dependent on small variables, or "changes to initial conditions" that are actually pretty trivial. Accordingly, if we truly wish to understand system dynamics, we need to pay attention to all system pathways (or the system's phase space) rather than the pathway that happened to unfold.


Inherent vs Contingent causality

Why is one academic cited more than another, one song more popular than another, or one city more populated than another? We tend to imagine that the reason must have to do with inherent differences between academics, songs or cities. While this may be the case, the dynamics of complex systems may lead one to doubt such seemingly common-sense assumptions.

We describe complex systems as being non-linear - this means that small changes in the system can have cascading large effects (think of the butterfly effect) - but what it also implies is that history, in a very real way, matters. If we were to play out the identical system with very slight changes, the specific history of each system would play a tangible role in what we perceive to be significant or insignificant.

Think about a cat video going viral. Why this video? Why this particular cat? If on a given day 100 new cat videos are uploaded, what is to say that the one going viral is inherently cuter than the other 99 out there? Perhaps this particular cat video really is more special. But a complexity perspective might counter with the idea of path-dependency: amongst many potentially viral cat videos, a particular one happened to play this potentiality out - an accident of a specific historical trajectory, rather than a statement about the cuteness of this particular cat.

Butterfly Effects:

The reason for this returns to the Path Dependency of the system - the fact that it is Sensitive to Initial Conditions. Suppose we have six cat videos that are of inherently equal entertainment value. All are posted at the same time. We now roll a six-sided die to determine which of these gets an initial 'like'. This initial roll now causes subsequent rolls to be slightly weighted: whatever received an initial 'like' has a fractionally larger chance of being highlighted in subsequent video feeds. Let us assume that subsequent rolls reinforce, in a non-linear manner, the first 'like'. Over time, like begets like, the rich get richer, and we see one video going viral.

If we were to play out the identical scenario in a parallel universe, with the first random toss of the die falling differently, then an entirely different trajectory would unfold. Such is the notion of 'path-dependency'. Of course, it is normal to assume that, given the choice of two pathways into an unknown future, the path we take matters and will change outcomes. But in complex systems this constitutes an inherent part of the dynamics, and a 'choice' is not something that one actively elects to make so much as something that arises due to random system fluctuations.
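
A minimal sketch of this dynamic, assuming a simple 'rich get richer' weighting rule (the number of videos, number of likes, and seeds are invented for illustration): each new 'like' goes to a video with probability proportional to the likes it already has, so which video 'wins' depends entirely on the early random draws.

```python
import random

def simulate_viral_race(n_videos=6, n_likes=10_000, seed=None):
    """Each new 'like' goes to a video with probability proportional
    to its current likes - a simple preferential-attachment rule."""
    rng = random.Random(seed)
    likes = [1] * n_videos          # every video starts with one 'seed' like
    for _ in range(n_likes):
        winner = rng.choices(range(n_videos), weights=likes)[0]
        likes[winner] += 1
    return likes

if __name__ == "__main__":
    # Two 'parallel universes' with identical videos but different early luck
    print(simulate_viral_race(seed=1))
    print(simulate_viral_race(seed=2))
```

Each run typically produces one runaway winner, and which of the six identical videos wins depends on the early rolls of the die (the seed), not on any inherent difference between them.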

Another way to think about this is with regards to the concept of Phase Space. Any complex system has a broad space of potential trajectories (its phase space), and the actualization of any given trajectory is subject to historical conditions. Thus, if we want to understand the dynamics of the system, we should not only attune to the path that actually unfolded; rather, we should consider the trajectories of all possible pathways. This is because the actual unfolding of any given pathway within a system is not inherently more important than all of the other pathways that could equally have unfolded.

One of the reasons that computer modeling is popular in understanding complex systems has to do with this notion of phase space and path dependency. A computer model allows us to 'explore the phase space' of a complex system: seeing if system trajectories are inherently stable and repeat themselves consistently, or if they are inherently unstable and might manifest in quite different ways.

Sometimes we can imagine that a system unfolds differently in phase space, but that this unfolding tends towards particular behaviors. We call these system tendencies Attractor States. One of the features of complex systems is that they often have multiple attractors, and it is only by allowing the system to unfold that we are able to determine which attractor the system ultimately converges towards. It would be a mistake, however, to regard a particular attractor as more important than another based only upon one given instance of a system unfolding.

Another feature of path dependency is that, once a particular path is enacted, it can be very difficult to move the system away from that pathway, even if better alternatives exist.

A great example of path dependency is the battle between VHS and BETA as competing video formats. According to most analysts, BETA was the superior format, but due to factors involving path dependency, VHS was able to take over the market and squeeze out its superior competitor.

Another example is that of the QWERTY keyboard. While initially a solution to the problem of keys jamming when pressed too quickly on a manual typewriter, the arrangement actually slows down the process of typing. However, even though we have long since moved to electronic and digital keyboards where jamming is not a factor, we are 'stuck' in the attractor space that is the QWERTY system. This is partially due to the historical trajectory of the system, but also to all of the reinforcing feedback that works to maintain QWERTY: once people have learned to type on one system, it is difficult to instigate change. One way of saying this is to refer to the system as being 'locked-in', or to refer to "Enslaved States".

An urban example may also be instructive: in Holland people bike as a normal mode of transport; in North America they drive. We can make arguments that there are inherent differences between North American and Dutch cultures that create these differences, but a complexity argument might propose, instead, that such differences are due to path-dependency. Perhaps any initial Dutch preference for biking was only random. That being said, over time, infrastructure has been created in the Netherlands that incentivizes biking (routes everywhere) and disincentivizes driving (many streets closed to traffic, lack of parking, inconvenient, slow commutes). In North America, we have created infrastructure that incentivizes driving: big streets, huge parking areas close to where we work, and a lack of other transport alternatives. We then arrive at a situation where the Dutch bike and North Americans drive. But place a North American in Holland and they will soon find themselves happily biking, and place a Dutch person in the USA and they will soon find themselves purchasing a vehicle to drive along with everyone else. Neither driving nor biking is inherently 'better' insofar as the commuter is concerned (although there may be more environmental and health benefits associated with one versus the other), but the pathways each country has taken wind up mattering, and reinforcing behaviors through feedback systems.

If we are able to better understand how to break out of ill-suited path-dependency, we may be able to solve a variety of problems that seem to be 'inherent' or 'natural' choices or preferences.

Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Open / Dissipative

Open & dissipative systems, while 'bounded' by internal dynamics,  nonetheless exchange energy with their external environment.

A system is considered to be open and dissipative when energy or inputs can be absorbed into the system, and 'waste' discharged. Here, system inputs like heat, energy, food, etc., can traverse the open boundaries of the system and ‘drive’ it towards order: seemingly in violation of the second law of thermodynamics.


Complex OutLaws!

For those who haven't dusted off their high school science textbooks recently, it is worth a quick refresher on the 2nd law of thermodynamics. Initially formulated by Sadi Carnot in 1824 (he was studying the flow of heat in steam engines), the law has been expressed in various technically precise ways. For our purposes, the important characteristic of these definitions is the idea of loss of order. Any ordered system will eventually move towards disorder. There is no way of getting around it. Things get messy over time - that's the Second Law. Everything ultimately decays. You, me, the world, the universe.

We can contemplate the metaphysical implications of this (the 2nd law is a bit of a downer) over a cup of coffee, while watching this video. We see illustrated the sad, inevitable decrease in the cream's order as it meets the coffee (it's pretty relaxing, actually):

Cream dis-ordering as it enters coffee

What the 2nd Law states is that something is ultimately lost in every interaction, and because of that, more and more disorder is ultimately created. We can ask heat to do work in driving a steam engine, but some of the heat will always be lost in translation, so that even if we are able to produce localized work or order, more disorder has ultimately been created in the universe as a whole. We call this inevitable increase in disorder 'entropy'.

But wait - you say - there is order all around us! While this may appear true, it is because what appear to be violations of the 2nd Law are achieved within the boundaries of a particular system. While a particular system can gain order, it is only because its disorder is simultaneously being dissipated into the surrounding context. Local order (within the system) is thus maintained at the expense of global disorder (within the environment). Were the system to be fully closed from its context, it would be unable to maintain this local order.

Thus, the ability to increase order in apparent violation of the 2nd Law is called Negentropy - and one of the ways in which negentropy can be generated is by creating a system that is 'open and dissipative': meaning that an energy source can flow in to drive order, and waste can flow out to dissipate disorder.

Example:

A famous example of this dynamic is Benard/Rayleigh convection rolls (a phenomenon studied by {{ilya-prigogine-isabelle-stengers}} as an example of self-organizing behavior). In this example, we have fluid in a small Petri dish, heated by a source placed under the dish. The behavior of the fluid is the system that we wish to observe, but this system is not closed: it is open to the input of heat that traverses the boundary of the Petri dish. Further, while heat can 'get into' the system, it can also be lost to the air above as the fluid cools. Note that the overall system clearly has a defined 'inside' (the fluid in the Petri dish) and a defined 'outside' (the surrounding environment and the heat acting upon the Petri dish), but there is not full closure between the inside and outside. This is what is meant when we say that complex systems are Open / Dissipative. We understand them as bounded (with relations primarily internal to that boundary), but nonetheless interacting in some way with their surroundings. Were the boundary fully closed, no increase in order could occur.

Let us turn now to the flows driving the system. As heat is increased, the energy of this heat is transferred to the fluid, and the temperature differential between the top and the bottom of the liquid causes heated molecules to be driven upwards. At the same time, the force of gravity causes the cooler molecules in the fluid to be driven downwards. Finally, the drag forces acting between rising and falling molecules cause their behaviors to become coordinated, resulting in the 'roll' patterns associated with Benard convection.

Rayleigh/Benard Convection (fluid of oil/ silver paint)

The roll patterns that we observe are a global structure that emerges from the interactions of many agitated molecules without being 'coordinated' by them. What helps drive this coordination is the dynamics of the interacting forces that the molecules are subjected to (driving heat flows and counteracting gravity pressures), as well as how the independent molecular responses to these pressures feed back to reinforce one another (through the drag forces exerted between molecules). That said, the fluid molecules do nothing on their own absent the input of heat. Instead, heat is the flow that drives the system behavior. Further, as the intensity of this flow is amplified (more heat added), the behavior of the fluid shifts from regular roll patterns to more turbulent patterns.


Setting boundaries

{{ilya-prigogine-isabelle-stengers}} were the first to highlight the importance of open dissipative structures in generating complexity dynamics. Earlier works in General Systems Theory ({{ludwig-v-bertalanffy}}) attuned to the complex dynamics at work within an internal structure, but did not make a distinction between open and closed structures. Closed structures, in contrast to open structures, do not process new inputs, and are therefore unable to generate novelty.

At the same time, systems need some sort of boundary or structure so as to hold together components with enough collective identity that they can work in tandem to process flows. It is therefore important to determine the appropriate boundary of any complex system under study, and what kinds of flows are relevant in terms of crossing those boundaries.

Often, complexity involves multiple overlapping systems, each with their own internal dynamics and external flows, but systems can become entangled as one system's exports become another's inputs. In order to simplify these dynamics, it is perhaps helpful to try to identify which groups of agents in a system belong to a particular class that shares a common driving flow, and then examine the dynamics with respect to only those flows and behaviors. Systems can then be layered onto systems to build a more complete understanding of the dynamics at play.


Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Networks

Network theory allows us to think about how the dynamics of agent interactions in a complex system can affect the performance of that system.

Network theory is a huge topic in and of itself, and can be looked at on its own, or in relation to complex systems. There are various formal, mathematical ways of studying networks, as well as looser, more fluid ways of understanding how networks can serve as a structuring mechanism.


Why Networks?

We can think of networks in fairly simple terms: imagine, for example, a network of aircraft traveling between hubs and terminals, or a network of people working together in an office. Network analysis operates under the premise that, by looking at the structure of the network alone, we can deduce something about how the network will function, as well as something about particular nodes within the network. For example, the image below could illustrate many different kinds of networks: perhaps it is an Amazon delivery network, or a social network, or an academic citation network. What is interesting is that, even without knowing anything about the kind of network it is, we can still say some things about how it is structured. The network below has some pretty big hubs - around six of them - that are well connected to other nodes, but not strongly connected to one another. What would be the dynamics of this network if it were a social network, or the network of a company?

What might we learn from the network?

By looking at the diagram we might learn about how information or control is exerted, about which entities are isolated, and about how protracted communication channels might be. A work network in which I need to talk to my superior, who in turn talks to his boss, who in turn is one of three bosses who only talk to each other, creates very different dynamics than a network where I have connections to everyone, or where there is only one chain of command rather than three.

Network theory attempts to understand how different network structures might lead to different kinds of system performances. The field uses domain specific language - speaking of nodes, edges, degree centrality, etc. - with much of this detail falling outside of the scope of this website.

What is important is that complex systems are made up of individual entities and, accordingly, the ways in which these entities relate to one another matter in terms of how the whole is structured. Networks in complex adaptive systems are composed of individual agents, and the relationships between these agents tend to evolve in ways that lead to power law distributions between highly and weakly connected agents. This is due to the dynamics of Preferential Attachment whereby 'the rich get richer'.
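
As a rough illustration of how 'the rich get richer' produces a skewed degree distribution, the sketch below grows a network by preferential attachment (a simplified, Barabasi-Albert-style procedure; the network size and number of links per new node are chosen arbitrarily for illustration):

```python
import random
from collections import Counter

def grow_network(n_nodes=1000, links_per_new_node=2, seed=42):
    """Grow a network where each new node attaches to existing nodes
    with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start with a small fully connected seed network of three nodes
    edges = [(0, 1), (1, 2), (0, 2)]
    # 'attachment_pool' lists each node once per edge endpoint,
    # so sampling from it is sampling proportional to degree.
    attachment_pool = [n for edge in edges for n in edge]
    for new_node in range(3, n_nodes):
        targets = set()
        while len(targets) < links_per_new_node:
            targets.add(rng.choice(attachment_pool))
        for t in targets:
            edges.append((new_node, t))
            attachment_pool.extend([new_node, t])
    return edges

if __name__ == "__main__":
    degrees = Counter(n for edge in grow_network() for n in edge)
    print("most connected nodes (node, degree):", degrees.most_common(5))
```

A handful of early nodes typically end up with degrees far above the average of roughly four, while most nodes keep only a few links - the long-tailed distribution described above.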

At its most extreme, network theory advances the idea that relationships between objects can have primacy over the objects themselves. Here, the causal chain is flipped: rather than considering objects or entities as the primary causal figures that structure relationships, we instead explore how relationships might in fact be the primary drivers that act to structure objects or entities.

Generalizing Network Knowledge

In the social sciences, Systems Theory (developed by Ludwig V. Bertalanffy) was the first to endeavor to examine how networks could play a key structuring role in how a range of entities function. Systems theory positioned itself as a meta-framework that could be applied in disparate domains - including physics, biology, and the social sciences - and it attracted a wide following. Rather than focusing upon the atomistic properties of the things that make up the system, systems theory instead attuned to the relationships that joined entities, and how these relationships were structured.

Gregory Bateson illustrates this point nicely when he considers the notion of a hand: he asks, what kind of entity are we looking at when considering a hand? The answer depends on one's perspective. We can say we are looking at five digits, and this is perhaps the most common answer (or four fingers and a thumb). If we look at the components of the hand in this manner, we remain focused on the nature of the parts - we might look at the properties of each finger and how these are structured. However, we can answer the question another way: instead of seeing five digits we can say that we see four relationships. Bateson's point was that the way in which the genome of an organism understands or structures the entity 'hand' is more closely aligned with the notion of relationships rather than that of digits or objects. Accordingly, if we are to better understand natural entities we should begin to examine them from the perspective of relations rather than objects.

“You have probably been taught that you have five fingers. That is, on the whole, incorrect. It is the way language subdivides things into things. Probably the biological truth is that in the growth of this thing – in your embryology, which you scarcely remember – what was important was not five, but four relations between pairs of fingers.” - Gregory Bateson

In a similar vein, {{Alan-Turing}} (father of the computer!) tried to analyze the range of fur patterns seen on animals (spots, patches, or lines) as different manifestations of a common driving mechanism - where shifting the timing and intensities of the relationships within the driving mechanism results in shifts in which pattern manifests. Rather than thinking of these distinctive markings as things 'in and of themselves', Turing wanted to understand how they might simply be different manifestations of more fundamental driving relationships.

Turing based his ideas on a reaction/diffusion model showing how shifting intensities of chemical relationships could create different distinct patterns.
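
A highly simplified sketch of this idea is shown below: a one-dimensional Gray-Scott-style reaction-diffusion model (a later variant of the kind of mechanism Turing proposed, not Turing's original equations; the feed and kill rates used are arbitrary illustrative values). Changing those two rates changes whether, and how, concentration patterns emerge from an almost uniform starting state.

```python
import numpy as np

def reaction_diffusion_1d(n=200, steps=10_000, feed=0.035, kill=0.06,
                          diff_u=0.16, diff_v=0.08):
    """One-dimensional Gray-Scott reaction-diffusion: two 'chemicals'
    u and v diffuse and react; varying feed/kill shifts the resulting pattern."""
    u = np.ones(n)
    v = np.zeros(n)
    # Perturb a small region so patterns have something to grow from
    u[n // 2 - 5 : n // 2 + 5] = 0.5
    v[n // 2 - 5 : n // 2 + 5] = 0.5
    for _ in range(steps):
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # discrete Laplacian (periodic)
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        reaction = u * v * v
        u += diff_u * lap_u - reaction + feed * (1 - u)
        v += diff_v * lap_v + reaction - (feed + kill) * v
    return u, v

if __name__ == "__main__":
    u, v = reaction_diffusion_1d()
    # Crude text rendering: '#' marks where chemical v has concentrated
    print("".join("#" if x > 0.2 else "." for x in v))
```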


Networks in Complexity Theory

Network theory is important in complexity thinking because of how the structure of the network can affect the way in which emergence occurs: certain dynamics manifest in more tightly or loosely bound networks, and information, a key driver of complex processes, moves differently depending on the nature of a given network.

Small Worlds, Growth & Preferential Attachment, Boolean Networks

Key works of network theorists include those of:

  • {{steven-strogatz}}, who developed small world networks, where information can move quickly across the network;
  • {{Albert-laszlo-barabasi}}, who showed how networks that obey {{power-laws}} can be generated by following rules involving both 'growth' and '{{preferential-attachment}}';
  • {{Stuart-Kauffman}}, who developed the theory of 'boolean' networks, where any series of linked nodes will ultimately move into regular regimes or cycles of behavior over multiple iterations in time.

Philosophical Interpretations

Alongside these more technical ways of understanding networks, an appreciation of the more fundamental role of networks in structuring reality has also gained prominence. Networks imply that functionality is something that is distributed, non-centralized, and shifting. In the social sciences, Actor Network Theory considers how agents' power can be formed through network interactions. For the philosopher Gilles Deleuze, the world is composed of what he terms {{rhizomes}}, a concept that parallels that of a network in the sense of being non-centralized, shifting, and entangled.


Historic Roots

The origins of network theory stretch back to the earlier 'graph theory', a branch of mathematics developed by Leonhard Euler and made famous by his use of graph theory to solve the "Konigsberg bridge problem". For a quick intro watch the video here:



This kind of graph analysis was considered a relatively minor sub-field of mathematics, and only resurged when Barabasi reinvigorated the field (and transformed it into network theory) with his network analysis work. Barabasi's work gained prominence as he was able to show how network theory could be applied to understanding the structural and functional properties of things like the world-wide-web. Today, network analysis is used in a huge array of disciplines in order to try to understand how the structure of relationships affects the functioning of a given entity - both at the level of the entire structure, and at the level of individual nodes (people, roads, websites, etc.) within the network.

Limitations?

It is perhaps worth noting that, along with computational modeling, network analysis is one of the central ways in which complexity dynamics are explored in many fields. While this kind of analysis can potentially be very helpful, the ubiquity of this strategy may have overshadowed some of its potential shortcomings. Network analysis can be very effective at demonstrating how {{driving-flows}} can move through a system, and how {{information-theory}} that steers the system can be relayed, but the precise configuration of networks often has surprisingly little to do with "classic" complex systems that we observe in the natural world.

If we are interested in the dynamics that form ripples in sand dunes, roll patterns in Benard cells, murmurations of starlings, or even the emergence of ordered entities in Conway's Game of Life, then network structures do not appear to be playing any particular role. It is not as though graphing relationships between individual grains of sand on a dune will help us unravel the dynamics that form the emergent ripples. While network analysis often tries to pinpoint distinct actors in a system, very often agents in a complex system do not behave in distinctive ways. It is therefore somewhat surprising that network analysis has garnered so much strength as a key tool in complex systems research. Again, this is not to say that networks do not matter - certainly some complex systems, like the internet, have key nodes (such as Wikipedia) that, once entrenched, help steer the system dynamics. It is just that there are many other features of complexity dynamics that may be overlooked if our primary focus is only on network relationships in a system.


Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Iterations

Complex adaptive systems unfold over time, with agents continuously adjusting behaviors in response to feedback. Each iteration moves the system towards more coordinated, complex behaviors.

The concept of interactive, incremental shifts in a system might seem innocuous - but with enough agents and enough increments we are able to tap into something incredibly powerful. Evolutionary change proceeds in incremental steps - and with enough of these steps, accompanied by feedback at each step, we can achieve fit outcomes. Any strategy for increasing the frequency of these iterations will further drive the effectiveness of this iterative search.


One of the keys to enabling complex adaptation to manifest in a given system is the ability for these systems to unfold, with system complexity or fitness being enhanced with each ensuing step.

That said, the kinds of outcomes we see being derived from iterative unfolding differ somewhat in kind: some iterative processes lead to 'fitness' with respect to a given context, whereas other kinds of iterative processes generate pattern, but not necessarily fitness as the term would more generally be understood. Whether or not patterns might fulfill some other kind of fitness criteria is a more nebulous question, which we will get into later.

Differences and Repetitions 

Prior to that, let us first clarify what we mean by an iteration. We can think of an iteration in two distinct ways: the first involving sequential iterations, and the second involving parallel iterations. Thus we can imagine a system that unfolds over the course of 100 generations, or we can imagine a system that has 100 components. Each generation can undergo a mutation, testing a different strategy of performance, or, in the case of the simultaneous system, each component of the system can have a slightly different performance strategy. Thus, while we tend to think of iterations as sequential 'versions' of a class of elements, in essence we can also have multiple 'versions' that operate in parallel rather than sequentially. If we recall the example of ants searching for food, we have many ants performing search in parallel - many iterations of ant behavior proceeding simultaneously.

That said, the notion of sequence is important, because it implies the possibility of feedback: each version of action can be assessed and modified at every time step based on feedback - has a particular strategy moved closer to, or further from, a goal?

Example

Let's start with 100 ants and give them 10 seconds to scurry around a table where we have placed one tasty peanut butter sandwich. Let's say only one ant finds this big cache of food on the first go - victory! This particular food-locating strategy played out successfully for this particular ant in what we can call 'Version 1.0'. What now needs to propagate through the colony as a whole is how to repeat this success - and this is where feedback enters the picture. As part of Version 1.0, the victorious ant has gleefully deposited a bunch of pheromone traces en route to the cache. In 'Version 2.0', a couple of ants pick up on that trail, find the food, and pump up the pheromone signal, and so forth: through an iterative sequence of time steps - seeking, finding, and signaling - more and more ants are drawn to the yummy sandwich.

It is worth noting that even in round one, all ants had an equal capacity to find food - the fact that one ant rather than another was successful was effectively random. Thus what needed to propagate through the system was not some unique new superpower that this particular ant had (like extra peanut-butter receptors); instead, what needed to be replicated was the way in which the ants' random search strategies were directed.
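
A very stripped-down sketch of this iterative propagation is shown below (a toy model: the grid size, evaporation rate, and pheromone weighting are all invented). Each 'round', every ant chooses a cell to search; cells where food was previously found accumulate pheromone, so over successive iterations the colony's searching concentrates around the food.

```python
import random

GRID = 20          # the table modeled as 20 cells in a row
FOOD_CELL = 13     # where the sandwich sits (arbitrary)

def run_colony(n_ants=100, rounds=30, evaporation=0.9, seed=0):
    rng = random.Random(seed)
    pheromone = [1.0] * GRID          # uniform: no information yet
    found_per_round = []
    for _ in range(rounds):
        found = 0
        for _ in range(n_ants):
            # Ants are biased towards cells with more pheromone
            cell = rng.choices(range(GRID), weights=pheromone)[0]
            if cell == FOOD_CELL:
                found += 1
                pheromone[cell] += 1.0                     # signal: food here!
        pheromone = [p * evaporation for p in pheromone]   # trails fade over time
        pheromone = [max(p, 0.05) for p in pheromone]      # keep a little exploration
        found_per_round.append(found)
    return found_per_round

if __name__ == "__main__":
    print(run_colony())   # successes per round climb as the signal builds
```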

We can think of other examples of sequential iterations where the dynamics differ slightly in terms of what is being iterated. For example, we can state that the pathway from the first PC to today's smartphone also proceeded by iteration, but the feedback driving each iteration entailed incremental system enhancements: a step-by-step learning from the previous model (feedback), followed by adjustments and improvements in the next round.

There is thus a subtle difference between iterations involving feedback that is generative - modifying or enhancing the inherent nature of the agents in the system - and feedback that is more about propagating a behavior that is already available to all the agents in the system (but only randomly enacted by some agents and not others).

Many of the dynamics observed in complex systems have more to do with the iterative propagation of a particular behavioral regime. One form of propagation dynamics involves relaying a particular strategy that helps deliver a given resource or energy source to the group as a whole (patterns emerging that help direct slime mould or ants to viable food sources). Another propagation strategy involves driving a system towards regimes that minimize the frictions or energy expenditures the system encounters: water molecules coalescing into movement patterns that reduce internal drag differentials (generated by processing heat in Benard rolls), or metronomes synching so as to minimize the movement frictions produced by their differentials.

With each iteration of these systems, the overall performance gets just a little bit better: energy sources that fuel a group become easier to find and access, and energy expenditures demanded of a group (due to the forces imposed by an external driver) are modified so as to process these drives in a more frictionless, smooth manner. In both cases we can think of the system as trying to enter into regimes that minimize global effort.

 

Iterations for Fitness:

If a system can exist in many different kinds of states, with some states being more 'fit' than others, it is helpful if that system has an opportunity to explore different state possibilities. The faster it can explore the possibilities, the more likely it is to chance upon a state that is more productive or useful than another. This is why it is useful if a complex system has either a lot of agents, a lot of generations of agents, or both.

If we imagine a complex system as being capable of existing in many different kinds of states, then we can think of iterations as ways in which this {{phase-space}} of system possibilities is explored. It is therefore useful to think about whether a given group of agents in a system is being offered enough iterative capacity to explore this phase space quickly enough to learn anything of use. An ant colony of only 10 ants might do a very poor job of finding food - exhausting itself to death before it succeeds in finding nourishment. In principle, nothing is wrong with the ants (agents), the driving flows (food), or the signaling (pheromones). There is simply not enough iterative capacity in the system to learn.


Iterations for Pattern

Fractals:

The examples described above pertain to how iterations combined with feedback can steer a system towards effective behavior. But there is another way in which iterations are explored, in terms of their capacity to produce emergent pattern through simple step by step rules.

Here we would describe the nature of {{fractals-1}} generation, and how only a few rule steps, repeated over and over, can generate complex form that might be described as "emergent". Fractals like the Koch Curve or Sierpinski Triangle are generated by simple geometric steps (which we can call iterations), and more complex fractals like the Mandelbrot set can be created using a simple formula that proceeds in recursive steps.
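
As an illustration of how a trivially simple iterated rule can yield an intricate emergent pattern, the sketch below draws a rough Sierpinski triangle using the so-called 'chaos game' (one of several ways to generate this fractal; the grid size and iteration count are arbitrary): start from any point, repeatedly jump halfway towards a randomly chosen corner of a triangle, and mark each spot visited.

```python
import random

def chaos_game(iterations=50_000, size=60, seed=1):
    """Sierpinski triangle via the 'chaos game': repeatedly jump halfway
    towards a randomly chosen corner of a triangle and mark the spot."""
    rng = random.Random(seed)
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.25, 0.5                     # arbitrary starting point
    grid = [[" "] * size for _ in range(size // 2)]
    for i in range(iterations):
        cx, cy = rng.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2
        if i > 10:                       # skip the first few transient points
            grid[int(y * (size // 2 - 1))][int(x * (size - 1))] = "*"
    for row in reversed(grid):           # print with the apex at the top
        print("".join(row))

if __name__ == "__main__":
    chaos_game()
```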

While these processes are iterative, and the patterns produced are emergent in that their spectacular aesthetic and harmonious qualities are not self-evident from their generative rules, these kinds of phenomena cannot be seen to be 'learning' or becoming more 'fit' in the same way as described above.


Automata

Similar in terms of pattern generation, Conway's Game of Life (see Rules) is a prime example of complexity generated by simple rules that, repeated over multiple time-step iterations, yield highly complex behaviors.
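
For reference, the complete rule set is small enough to state in a few lines of code. The sketch below is a minimal implementation on a wrapped grid (the 'glider' used as a starting pattern is just one well-known example): at each time step, a live cell survives with two or three live neighbours, and a dead cell is born with exactly three.

```python
import numpy as np

def life_step(grid):
    """One iteration of Conway's Game of Life on a toroidal (wrapped) grid."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Born with exactly 3 neighbours; survive if alive with 2 or 3
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

if __name__ == "__main__":
    grid = np.zeros((12, 12), dtype=int)
    # A 'glider' - one of the simple emergent creatures that travels across the grid
    grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
    for _ in range(4):
        grid = life_step(grid)
    print(grid)   # after four steps the glider has shifted one cell diagonally
```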

Returning to the question of fitness and learning, this famous example of emergent complexity is entitled 'the Game of Life' - but is it really life? While the emergent outcomes of the automata are rich in variety, can we say that the system adapts, learns, or becomes more fit? One feature of the output is that some of the 'creatures' generated in the game are able to enter into iterative loops, meaning that once these forms emerge they continuously reproduce versions of themselves. If proliferation within the grid of the game is thus considered a form of higher evolution (or Fitness), then perhaps this could be seen as a form of learning. That said, the Game of Life does not seem to 'learn' in the ways we would normally associate with the word.

Game of Life from Wikimedia Commons


Explorations

Returning to notions of fitness in the more traditional sense, it is helpful to think of iterations as the way in which a complex system explores the scope of possibility within a {{fitness-landscape}}. As described in more detail elsewhere, a fitness landscape represents the differential structure of possibilities within a space of all possible behaviors ({{phase-space}}), where more successful strategies within that space are conceptualized as peaks. Agent iterations can then be seen as processes of stepping around the fitness landscape, testing to see which steps take us up to higher peaks. In terms of these exploratory journeys, some agents may incrementally modify whatever strategy they initially stumble upon (making small modifications and testing whether those modifications move them higher or lower), while other strategies involve a more random 'jumping': abandoning a given set of strategies to test an altogether different set of alternatives. These jumps can be productive if they land agents on what are fundamentally higher peaks. These dynamics are unpacked in more detail on the pages referenced, but what is important to note is that the size of a step (or iteration) can vary between small local steps and big global jumps.
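
The contrast between small local steps and occasional big jumps can be sketched as a simple search procedure over a toy 'fitness landscape' with two peaks, one higher than the other (the landscape function, step size, and jump probability are invented for illustration). An agent that only takes small steps can get stuck on the lower peak; occasional random jumps give it a chance of landing near the higher one.

```python
import random

def fitness(x):
    """Toy landscape on [0, 10]: a low peak at x=2 (height 3) and a higher peak at x=8 (height 5)."""
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 5 - (x - 8) ** 2)

def search(jump_prob, steps=500, seed=3):
    rng = random.Random(seed)
    x = 2.0                               # start the agent on the lower peak deliberately
    for _ in range(steps):
        if rng.random() < jump_prob:
            candidate = rng.uniform(0, 10)                                # big global jump
        else:
            candidate = min(10.0, max(0.0, x + rng.uniform(-0.2, 0.2)))   # small local step
        if fitness(candidate) >= fitness(x):                              # keep moves that climb (or tie)
            x = candidate
    return round(x, 2), round(fitness(x), 2)

if __name__ == "__main__":
    print("local steps only:      ", search(jump_prob=0.0))    # typically stays stuck near fitness 3
    print("with occasional jumps: ", search(jump_prob=0.05))   # typically ends near fitness 5
```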


 

Governing Features ↑

Information

What drives complexity? The answer involves a kind of sorting of the differences the system must navigate. These differences can be understood as flows of energy or information.

In order to be responsive to a world consisting of different kinds of inputs, complex systems tune themselves to states holding just enough variety to be interesting (keeping responsive) and just enough homogeneity to remain organized (keeping stable). To understand how this works, we need to understand flows of information in complex systems, and what "information" means.


Complex systems are ones that would appear to violate the second law of thermodynamics: that is to say, order manifests out of disorder. Another way to state this is that, within the boundary of the system, order ({{negentropy}}) increases over time. This appears counter to the second law of thermodynamics, which states that, left to its own devices, a system's disorder (entropy) will increase. Thus, we expect that, over time, buildings break down, and a stream of cream poured into a cup of coffee will dissipate. We don't expect a building to rise from the dust, nor a creamy cup of coffee to partition itself into distinct layers of cream and coffee.

Yet similar forms of unexpected order arise in complex systems. The reason this can occur is that complex systems are not fully bounded - they are {{open-dissipative}} structures that are subject to some form of energy entering from the outside, and within these "loose" boundaries we see glimpses of temporary order. Disorder is, however, still being ejected outside of these same boundaries - stuff comes in, stuff goes out - in some other form. It is only within the boundaries that we see temporary pockets of order. In order to get a better grasp on how these pockets of temporary order appear, we need to understand the relationship between entropy (disorder, or randomness) and information.

PART I: Understanding Information

Shannonian Information

An important way of thinking about this increase in order relates to concepts based in information theory.  Information theory, as developed by Claude Shannon, evaluates systems based upon the amount of information or 'bits' required to describe them.

Shannon might ask, what is the amount of information required to know where a specific molecule of cream is located in a cup of coffee? Further, in what kinds of situations would we require more or less information to specify a location?

Example:

In a mixed, creamy cup of coffee, any location is equally probable for any molecule of cream. We therefore have maximum uncertainty about location:  the situation has high entropy, high uncertainty, and requires high information content to specify a location.  By contrast, if the cream and coffee were to be separated (say in two equal layers with the cream at the top) we would now have a more limited range of locations where a particular bit of cream might be placed. Our degree of uncertainty about the cream's location has been reduced by half, since we now know that any bit of cream has to be located somewhere in the upper half of the cup - all locations at the bottom of the cup can be safely ignored.

Information vs Knowledge

Counterintuitively, the more Shannonian information required to describe a system, the less structured or "orderly" it appears to us. Thus, as a system becomes more differentiated and orderly - or as emergent features arise - its level of Shannon information diminishes.

This, in a way, is unfortunate: our colloquial understanding of 'having a lot of information' pertains to us knowing more about something. Thus, seeing a cup of coffee divided into cream and coffee layers, we perceive something with more structure, more logic, and we might assume it should follow that this conveys more information to us (at least in our normal ways of thinking about information - in this case, that coffee and cream are different things!). A second, stirred cup appears more homogenous - it has less structure or organization. And yet, it requires more Shannon information to describe it.

A difficulty thus lies in how we tend to intuitively consider the words 'disorder' and 'information'. We associate disorder with lack of structure (and therefore low amounts of information), and order with more knowledge (and therefore more information).

While intuitively appealing, unfortunately this is not how things work from the perspective of information and communication signals - which is what Shannon was concerned with when formulating his ideas. Shannon (who worked for Bell Laboratories) was trying to understand the number of bits of information needed to relay the state of a system (or a signal).

Example:

Imagine I have an extremely messy dresser and I am looking for my favorite shirt. I open my dresser drawers and see a jumble of miscellaneous clothes: socks, shirts, shorts, underwear. I rifle through each drawer, examining each item to see if it is indeed the shirt I am seeking. To find the shirt I want (which could be anywhere in the dresser), I require maximum information, since the dresser is in a state of maximum disorder.
Thankfully I spend the weekend sorting through my clothes. I divide the dresser by category, with separate socks, shirts, shorts, and underwear drawers. Now, if I wish to find my shirt, my uncertainty about its location has been reduced to one quarter of what it was (assuming four drawers in the dresser). To discover the shirt in the dresser's more ordered state requires less information: I can limit myself to looking in one drawer only.

Let us take the above example a little further:

Imagine that I love this particular shirt so much that I buy 100 copies of it, so many that they now fill my entire dresser. The following morning, upon waking, I don't even bother to turn on the lights. I reach into a drawer (any drawer will do), and pull out my favorite shirt!

My former, messy dresser had maximum disorder (high entropy), and required a maximum amount of Shannon information ('bits' needed to find a particular shirt). By contrast, the dresser of identical shirts has maximum order (negentropy), and requires a minimal amount of Shannon information (bits) to find the desired shirt.
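
The contrast can be made quantitative with Shannon's entropy formula, H = -Σ p·log2(p). The sketch below (using made-up probability distributions for the 'messy', 'sorted', and 'all identical shirts' dressers) shows how the number of bits of uncertainty drops as the dresser becomes more ordered:

```python
import math

def shannon_entropy(probabilities):
    """Bits of uncertainty: H = -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Where might my shirt be? (illustrative distributions over 16 dresser 'slots')
messy   = [1 / 16] * 16           # equally likely anywhere: maximum uncertainty
sorted_ = [1 / 4] * 4 + [0] * 12  # shirts confined to one of four drawers
identical = [1.0] + [0] * 15      # 100 identical shirts: any slot will do, no uncertainty

print(f"messy dresser:    {shannon_entropy(messy):.1f} bits")      # 4.0 bits
print(f"sorted dresser:   {shannon_entropy(sorted_):.1f} bits")    # 2.0 bits
print(f"identical shirts: {shannon_entropy(identical):.1f} bits")  # 0.0 bits
```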

Interesting information:  States that matter!

It should be noted that the two extreme states illustrated above are both pretty uninteresting. A fully random dresser (maximum information) is pretty meaningless, but so is a dresser filled with identical shirts (minimum information). While each is described by a contrasting state of Shannonian information, neither maximum nor minimum information systems appear very interesting.

One might also imagine that neither the random nor the homogeneous systems are all that functional. A dresser filled with identical shirts does not do a very good job of meeting my diverse requirements for dressing (clothing for different occasions or different body parts), but my random dresser, while meeting these needs, can't function well because it takes me forever to sort through.

Similarly, systems with too much order cannot respond to a world filled with different kinds of situations. Furthermore, they are more vulnerable to system disruption. If you have a forest filled with a single tree species, one destructive insect infestation might have the capacity to wipe out the entire system. If I own 100 identical shirts and that shirt goes out of style, I suddenly have nothing to wear.

Meanwhile, if everything is distributed at random then functional differences can't arise: a mature forest eco-system has collections of species that work together, processing environmental inputs in ways that syphon resources effectively - certain species are needed more so than others. In my dresser, I need to find the right balance between shirts, socks, and shorts: some things are worn more than others, and I will run into shortages of some, and excesses of others, if I am not careful.

PART II:  Information Sorting in Complex Systems

Between Order and Disorder

What is interesting in Complexity, is that it appears that, in order to be responsive to a world that consists of different kinds of inputs, complex systems tune themselves to information states involving just enough variety (lots of different kinds of clothes/lots of different tree species) and just enough homogeneity (clusters of appropriately scaled groups of clothing or species). While within their boundaries these systems violate the second-law of thermodynamics (gaining order), they do not gain so much order as to become homogenous. The phrase 'poised at the edge of order and chaos' seems to capture this dynamic.

Tuning a complex system -  decreasing uncertainty

Imagine we have a system looking to optimize a particular behavior - say an ant colony seeking food. We place an assortment of various-sized bread crumbs on a kitchen table, and leave our kitchen window open overnight. Ants march in through the window, along the floor, and up the leg of the table.

Which way should they go?

From the ants' perspective, there is maximum uncertainty about the situation - or maximum Shannonian information. The ants spread out in all directions, seeking food at random. Suddenly, one ant finds food, and joyfully secretes some pheromones as it carries it away. The terrain of the table is no longer totally random: there is a signal - food here! Nearby ants pick up the pheromone signal and, rather than moving at random, slightly adjust their trajectories. The ants' level of uncertainty about the situation has been reduced or, put another way, the pheromone trail represents a compression of informational uncertainty - going from 'maximum information required' (search every space) to 'reduced information required' (search only spaces near the pheromone trace).

If all ants had to independently search every square inch of tabletop to find food, each would require maximum information about all table states. If, instead, they can be steered by signals (see {{stigmergy}}) deployed by other ants, they can limit their search to only some table states. By virtue of the collective, the table has become more 'organized' in that it requires less information to navigate towards food. There is a reduction of uncertainty, or a reduction of the 'information bits' required by each ant to find the location of 'food bits'. Accordingly, these are more easily discovered. It is worth noting that in this particular system, the "food bits" are effectively the {{driving-flows}} that energize the system and thereby help fuel the localized order. The second law is preserved, since the ants will ultimately dissipate this order (through heat generated in their movements, through ant defecation as they process food, and ultimately through death and decay).

Reduce information | Reduce effort

Suppose we are playing 20 questions. I am thinking of the concept 'gold', and you are required to go through all lists of persons, places, and things in order to eventually identify 'gold' as the correct entity. Out of a million possible entities that I might be thinking of, how long would it take to find the right one in a sequential manner? Clearly, this would involve a huge length of time. The system has maximum uncertainty (a million equally likely possibilities), and each sequential random guess eliminates only one possibility (999,999 to go after the first guess!). While I might 'strike gold' at any point, the odds are low!

From an information perspective, we can greatly reduce the time it takes to guess the correct answer if we structure our questions so as to minimize our uncertainty at every step. Thus, if I have 1,000,000 possible answers in the game 'twenty questions', I am looking for questions that will reduce these possibilities to the greatest extent at each step. If, with every question, I can cut the remaining possibilities in half (a binary search), then within twenty questions I can generally arrive at the solution, since 2^20 is just over a million. Computerized versions of the game home in on the answer remarkably quickly for the same reason: well-chosen questions eliminate far more than one possibility at a time. With each such question, the degree of uncertainty regarding the correct answer (or the amount of Shannonian information required) is reduced by one bit.
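
A sketch of this halving logic (with the million 'entities' reduced to integers for simplicity; the secret number is arbitrary) shows that roughly log2(1,000,000) ≈ 20 yes/no questions suffice, where sequential guessing would take half a million attempts on average:

```python
import math

def guess_by_halving(secret, low=0, high=1_000_000):
    """Find `secret` by asking 'is it above the midpoint?' - each question
    halves the remaining possibilities (removes one bit of uncertainty)."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if secret > mid:
            low = mid + 1
        else:
            high = mid
    return low, questions

if __name__ == "__main__":
    found, asked = guess_by_halving(secret=271_828)
    print(f"found {found} in {asked} questions")
    print(f"log2(1,000,000) ≈ {math.log2(1_000_000):.1f} bits of initial uncertainty")
```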

Sorting a system so there is less to sort

From a Complex Systems standpoint, this information sorting by agents within a system will allow it to channel resources more effectively – that is, focus on work (or questions) that move towards success while engaging in less wasted effort.

To illustrate:  imagine that I wish to move to a new city to find a job. I can choose one of ten cities, but other than their names, I know nothing about them, including their populations. I relocate at random and find myself in a city of 50 people, with no job postings. My next random choice might bring me to a bigger center, but, without any information, I need to keep re-locating until I land in a place where I can find work.

If, instead, the only piece of information that I have is the city populations, I can make a judgement: if I start off my job hunt in larger centers, then there is a better chance that jobs matching my skills will be on offer. I use the population sizes as a way to filter certain cities out of my search - perhaps with a 'rule' stating that I won't consider relocating to cities with fewer than 1 million inhabitants. This rule might cross out six cities from my search list, and this 'crossing out' is equivalent to reducing the information bits required to find a job: I can decide that my efforts are better spent focusing on a job search in only four cities instead of ten (this may also be the reason why, in studying cities as complex systems, we often observe the phenomena of growth and preferential attachment, which manifest as {{power-laws}} in population distributions).

By now it should have become clear that this is equivalent to my looking for a given cream molecule in only half the coffee cup, or ants looking for food only on some parts of the table, or my search in 20 questions being limited only to items in the 'mineral' category.

All these processes involve a kind of information sorting that gives rise to order, which in turn makes things go smoother: from random cities to differentiated cities; from random words to differentiated categories of words.

What complex systems are able to do is take a context that is initially undifferentiated and sort it, such that the agents in the system can navigate through it more efficiently. This always involves a local violation of the second law of thermodynamics, since the amount of Shannonian information (the entropy or disorder of the system) is always being reduced. That said, this can only occur if there is some inherent difference in the system - 'something to sort' in the first place. If a context is truly homogeneous (going back to our dresser of identical shirts), then no amount of rearranging can make it easier to navigate. Note that an undifferentiated system is different from a homogeneous system: a random string of letters is undifferentiated; a string composed solely of the letter 'A' is homogeneous.

Accordingly, complex systems need to operate in a context where some kind of differential (in the form of {{driving-flows}}) is present. The system then has something to work with, in terms of sorting through the kinds of differences that might be relevant.

One thing to be very aware of in the above examples is how difficult it is to disambiguate information from orderliness. As our knowledge of probable system states becomes more orderly, Shannonian information is reduced. This is a frustrating aspect of the term 'information', and can lead to a lot of confusion.

This Christmas Story illustrates how binary search can quickly identify an entity


Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Fitness

Complex Adaptive Systems become more 'fit' over time. Depending on the system, Fitness can take many forms,  but all involve states that achieve more while expending less energy.

What do we mean when we speak of Fitness? For ants, fitness might be discovering a source of food that is abundant and easy to reach. For a city, fitness might be moving the maximum number of people in the minimum amount of time. But fitness criteria can also vary - what might be fit for one agent isn't necessarily fit for all.


Getting Fit!

The idea of fitness in any complex system is not necessarily a fixed point. There can be many different kinds of fitness, and we need to examine each specific system to determine what factors are at play. For example, what makes a hotel room 'fit'? Is it location, or price, or cleanliness, or amenities, or all of the above? For different people, these various factors or parameters have different 'weights'. For a backpacker traveling through Europe, maybe the price is the only thing worth worrying about, whereas for a wealthy business person it may not factor in at all.

Despite these variations, there are certain principles that remain somewhat consistent, and these pertain to the idea of minimizing processes. We can imagine that certain behaviors in a system require more or less energy to perform. Agents in a system are always trying to minimize energy expenditure, but what entails a high energy expenditure for one agent might be a low energy expenditure for another (depending on what forms of energy each has available to them). If an ant wants to find food, it prefers a source that takes less time to get to than one that is further away. Further, a bigger source of food is better than a smaller source, as more ants in the colony can benefit. Complex systems therefore generally gravitate towards regimes that in some way minimize the energy expended to achieve a particular goal. However, this energy rationing depends both on the nature of the goal and on the resources available to reach it.

Example:

Returning to the example of finding a hotel room, consider the popular website Airbnb as a complex adaptive system. Here, two sets of bottom-up agents (room providers and room seekers) coordinate their actions in order for useful room-occupancy patterns to emerge. Some of these patterns might be unexpected. For example, a particular district in Paris might emerge as a very popular neighborhood for travelers to stay in, even though it is not in the center of the city. Perhaps it is just at a 'sweet spot' in terms of price, amenities, and access to transport to the center. This is an example of an emergent phenomenon that might not be predictable but nonetheless emerges over the course of time. In that case, rooms in that district might be more 'fit' than in another, because the factors listed (the relevant parameter settings in that particular zone) are highly appealing to a broad swath of room-seekers.

So in what way is the above example 'energy minimizing'? We can think of the room seekers as having different packages of energy rations they are willing to expend over the course of their holiday. One package might hold money, one might hold time, and one might hold patience for dealing with irritations (noisy neighbors that keep them from sleeping, or willingness to tolerate a dirty bathroom...). Each agent in the system is trying to manage these packets of energy in the most effective way possible to minimize discomfort and maximize holiday pleasure. So if a room is close to the center of the city, it might preserve time energy, but this needs to be balanced with preserving money energy.

We can begin to see that fitness is not going to come in a 'one size fits all' form. Some agents will have more energy resources available to spend on time, and others will have more energy resources to allocate in the form of money. Further, an agent in the system might be willing to spend much more money if it results in much more time being saved, or vice versa. We can imagine that an agent might reach a decision point where two equally viable trajectories are placed in front of them. The choice of time or money might be likened to the flipping of a coin, but the resulting 'fit' regimes might appear very different.
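
One rough way to picture this is as a weighted score, where each room-seeker weights the same room attributes differently according to the 'energy' they can least afford to spend. The attributes, weights, and scoring rule below are all invented for illustration - this is not how Airbnb actually ranks listings.

```python
# Toy illustration: the same rooms, scored by agents with different 'energy budgets'.
rooms = {
    "central & pricey": {"price": 0.2, "location": 0.9, "quiet": 0.5},
    "outer sweet-spot": {"price": 0.7, "location": 0.6, "quiet": 0.8},
    "cheap & far":      {"price": 0.9, "location": 0.2, "quiet": 0.6},
}

# Each agent weights attributes according to what they can least afford to 'spend'
backpacker    = {"price": 0.7, "location": 0.1, "quiet": 0.2}   # money is scarce
business_trip = {"price": 0.1, "location": 0.7, "quiet": 0.2}   # time is scarce

def fitness(room, weights):
    """Weighted sum: higher means a better fit for this particular agent."""
    return sum(weights[k] * room[k] for k in weights)

for agent_name, weights in [("backpacker", backpacker), ("business traveler", business_trip)]:
    best = max(rooms, key=lambda name: fitness(rooms[name], weights))
    print(f"{agent_name}: best fit is '{best}'")
```

Running this, the two agents pick different rooms from the identical set of options: the same landscape of choices yields different 'fit' outcomes depending on each agent's weights.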

In order to better understand these dynamics, two features of CAS - the Fitness Landscape and ideas surrounding Bifurcations - clarify how CAS can unfold along multiple fit trajectories, while the underlying principle of energy minimization nonetheless holds true.

Avoiding Work and the Prisoner's Dilemma

In the above example the agents (room seekers), employ cognitive decision-making processes to determine what a 'fit' regime is. But physical systems will all naturally gravitate to these energy minimizing regimes.

Example: 

When molecules in a soap solution are blown through a soap wand, nobody tells them to form a bubble, and the molecules themselves don't consider this outcome. Instead, the bubble is the soap mixture's solution to the problem of finding a form that minimizes surface area, and therefore frictions. The soap bubble can therefore be considered an energy-minimizing emergent phenomenon (for a detailed explanation, follow this link to an article on the subject: note the phrase, 'a bubble's surface will minimize until the force of the air pressures within is equal to the "pull" of the soap film'). We can also think of a sphere as being the natural Attractor State of a soap solution: absorbing maximum air with minimum surface - doing the most with the least.

We can derive from these examples that one way to examine complex systems is to equate 'fitness' with avoiding unnecessary work or effort. While this is important for individual agents (specific birds in a flock, or specific fish in a school), what is also interesting in systems exhibiting {{self-organization}} (bird flocks and fish schools) is that this principle extends to the group level. Thus the system as a whole finds a regime that expends the minimum effort to achieve a goal on the part of the group rather than on the part of the individual. This might involve individual sacrifices in order to enable overall group behavior to succeed.

These kinds of dynamics, involving individual sacrifices (or trade-offs) where group performance ultimately matters, are the subject of game theory. The Prisoner's Dilemma, for example, is a classic case where the most 'fit' long-term strategy is for both players to sacrifice some potential individual gain in favor of longer-term collective gain. Fit strategies differ depending on whether the game is played once or multiple times, so natural systems with ongoing interactions between agents face different fitness incentives than non-repeating scenarios.
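
A compact sketch of the iterated version (using the standard payoff values of 3, 0, 5, and 1; the strategies shown are the classic 'always defect' and 'tit-for-tat') illustrates why cooperation can be the more 'fit' long-run strategy even though defection pays better in any single round:

```python
# Payoffs (my_points, their_points) given (my_move, their_move); 'C' = cooperate, 'D' = defect
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    return history[-1] if history else "C"   # cooperate first, then copy the opponent's last move

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []                   # what each player has seen the *other* do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    print("two defectors:      ", play(always_defect, always_defect))   # (100, 100)
    print("two tit-for-tatters:", play(tit_for_tat, tit_for_tat))       # (300, 300)
    print("defector vs TFT:    ", play(always_defect, tit_for_tat))     # (104, 99)
```

Over repeated rounds, mutual cooperation dramatically outperforms mutual defection, even though the defector still edges out tit-for-tat in a head-to-head match - which is why the incentives differ between one-off and ongoing interactions.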


Short Explanation of the Prisoner's Dilemma:



Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Feedback

Feedback loops occur in systems where an environmental input guides system behavior, but the system behavior (the output) in turn alters the environmental context.

This coupling between input affecting output - thereby affecting input - creates unique dynamics and interdependencies between the two.


There are two kinds of feedback that are important in our study of complex systems: {{positive-feedback}}  and {{negative-feedback}}. Despite the value-laden connotations of these designations, there is no inherent value judgement regarding 'positive' (good) versus 'negative' (bad) feedback. Instead, the terms can more accurately be described as referring to reinforcing deviation (positive) versus suppressing or dampening deviation (negative). Reinforcing feedback thus amplifies slight tendencies in a system's behavior, whereas dampening feedback works to restrain any changes to system behavior.

Negative Feedback

We can think of a thermostat and temperature regulation as a classic example of dampening (negative) feedback at work. The thermostat has a desired state that it wishes to maintain, and it is constantly monitoring an input about whether or not it is achieving that target. If the temperature exceeds the target, the thermostat activates a cooling mechanism; if the temperature falls short of the target, the thermostat activates a heating mechanism. The thermostat is therefore situated within an environment (acted upon by outside forces) but is simultaneously helping create this environment (by being one of the environmental activating forces). It is able to respond to the input of the environment by activating an output that suppresses any deviation from the goal state (the goldilocks temperature).
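
As a minimal sketch of this dampening loop (the target temperature, response strength, and outside 'drift' below are all illustrative values), we can express the thermostat as a rule that always pushes against whatever deviation it senses:

```python
# A toy thermostat: sense the deviation from the target, output a correction
# that opposes it. The 'drift' term stands in for outside environmental forces.

def thermostat_step(temperature, target=21.0, response=0.5, drift=0.0):
    """One time step: the output counteracts the deviation between input and target."""
    deviation = temperature - target
    correction = -response * deviation   # negative feedback: push against the deviation
    return temperature + correction + drift

temp = 30.0   # start well above the target
for step in range(10):
    temp = thermostat_step(temp, drift=0.3)  # a constant warming influence from outside
    print(f"step {step}: {temp:.2f} C")
# The temperature settles close to the target despite the ongoing outside drift,
# because every deviation triggers a counteracting output.
```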

Because Negative Feedback helps maintain a particular status quo, it is an important dimension of life, underpinning {{homeostasis}}. Our body's ability to maintain a somewhat steady state is something we often take for granted, but it is worth pausing to reflect upon the amount of constant adjustment required to keep things like our temperature or glucose levels steady in light of extreme environmental fluctuations.

Maintaining our own body temperatures within a narrow, healthy range requires three aspects: an input (temperature), a sensor (the brain), and a viable output (shivering to raise temperature if cold; sweating to lower temperature if hot). While this is somewhat similar to the thermostat example, there are some slight differences: even though the act of sweating or shivering does in fact have a minute impact on the environment we are located within, these outputs do not have a significant enough impact on the environment to alter the input.


Cybernetics 

While homeostasis refers specifically to biological systems that are able to maintain themselves, any system whose goal is to avoid deviance - to maintain a steady state or goal for some given target such as temperature - needs these same three elements: inputs, sensors, and outputs.

{{Cybernetics}} is a field dedicated to understanding a whole host of systems from different disciplines in light of these characteristics, in order to better understand the means of self-regulation in entities that seek to maintain a particular target behavior. The field emerged in the 1940s and, along with general systems theory (which shares many similarities with complexity research but deals with closed rather than open systems), is in many ways a precursor to complex adaptive systems thinking.

In many cybernetic systems, the dynamics become quite interesting, in that an output can flow back into the system as an input, in ways that we can think of as more directly 'self-regulating'. A fly-ball governor is one such self-regulating mechanism (described on the {{cybernetics}} page), where the self-regulating dynamics of the mechanism cause it to slow down when it exceeds a particular speed. Another such self-regulating or self-governing dynamic can be observed in eco-systems, where if a population of animals increases beyond the environment's carrying capacity, that environment ceases to sustain those high numbers, resulting in a die-off of excess animals. Similarly, if population numbers drop significantly, then those remaining will have a high availability of food, and any offspring will thrive, leading to population growth. These two competing forces - population growth and carrying capacity - work in tandem to dampen the fluctuation of population numbers, preventing them from getting too high or too low.

Another classic example is the idea of an oarsman on a boat, trying to reach an island, and constantly adjusting the movement of the oar to compensate for the deviations caused by environmental factors (water currents, wind, etc.).

Cybernetic systems differ from complex adaptive systems in that CAS features such as {{emergence}} are typically associated with amplifying (positive) feedback, whereas cybernetic systems work to maintain a stable state.


Positive Feedback

If negative feedback relies on an input, a sensor, and an output, then positive feedback operates in an equivalent way: the difference being that the output does not counteract the input, but instead builds upon, or reinforces, it in some way.

We can observe this in many systems driven by simple rules, such as {{Fractals-1}}: over iterative sequences of graphic generation, they become differentiated, with more detail, more variation, and more pattern becoming apparent.

But fractals are a specific class of entity, limited to the domain of mathematics. The same kinds of positive feedback can exist in a wide range of non-mathematical domains, with the same principles at work.

Viral Orders:

In discussing the {{non-linear}} nature of Complex Systems, we used the example of a cat video going viral in order to illustrate how a small, early amplification of a system preference can cause a massive shift in system outcome. Following the analogy of 'the rich get richer', cat videos that initially get a few more clicks are recommended more frequently, leading to more views, leading to more recommendations, and so forth. This illustrates the power of positive feedback to amplify a particular aspect of a system such that it grows in importance in a non-linear way.

Another example comes from Network Theory, which examines how networks characterized by {{power-laws}} can be generated when the network is constantly growing, and when new nodes can be added anywhere at random but affix preferentially to nodes that are already highly linked. This phenomenon of 'growth and preferential attachment' is again an example of positive feedback, and such dynamics are thought to explain things like the scaling patterns seen among different cities within a given region.
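
A minimal sketch of 'growth and preferential attachment' might look as follows (the network size and random seed are illustrative; this is a toy version of the dynamic, not a model of any particular real network):

```python
import random
from collections import Counter

random.seed(1)
links = [0, 1]          # a flat list of link 'endpoints'; start with nodes 0 and 1 linked
for new_node in range(2, 10000):
    # Picking a target from the list of endpoints is equivalent to picking a node
    # with probability proportional to how many links it already has.
    target = random.choice(links)
    links.extend([new_node, target])

degrees = Counter(links)
print("most-linked nodes:", degrees.most_common(5))
print("median degree:", sorted(degrees.values())[len(degrees) // 2])
# A handful of early nodes accumulate a disproportionate share of the links,
# while the typical node has only one or two - the heavy-tailed signature of
# positive feedback reinforcing small initial advantages.
```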

Network examples are of interest because, while behaviors are initially random, the nature of positive feedback means that these random intensities come to be reinforced over time, leading to an increase in structure and pattern. The situation is almost the opposite of that of fractals: in fractal growth a very simple formula ultimately leads to greater and greater visual complexity; in complex systems steered by positive feedback, an initially random distribution of entities (human habitations, websites, cat videos, cricket chirping) gradually becomes more ordered and organized, with a few dominant entities emerging and thereafter constraining the performance, behaviors, or success of other entities in the system. We can say that the system, after a time, moves into {{enslaved-states}}, with only a few behavioral regimes succeeding following feedback.

In these instances, small variations in an initially random situation are amplified, to the extent that an initially arbitrary factor or actor becomes dominant and now steers the system. It is worth pointing out that this would seem to muddy our earlier contrast of 'amplifying' versus 'restraining': once a particular behavior is amplified, it in turn winds up constraining the system, as deviance from that behavior is now more difficult. To illustrate: Wikipedia became the default encyclopedic website due to positive feedback; now that it exists, it is stabilized and resists being disrupted. Its amplified strength as a website is part of what now gives it stability, dampening further disruptions. This is a characteristic of {{Emergence}}, in that emergent systems like schools of fish or flocks of birds are driven into being through positive feedback, but then exert a kind of top-down resistance to future change.


Dynamics of systems subject to both Positive & Negative Feedback:

Some very interesting complex systems are governed by a combination of both positive and negative feedback. 

For instance, in the example of animal population fluctuations described above, we can imagine that rather than settling into one steady-state population, a particular species might oscillate between two regimes - booms and busts in population as the carrying capacity of the environment undergoes stress and then recovery. When we examine the system more closely, we realize that there are actually both kinds of feedback at play: reproduction rate is an example of amplifying feedback - if every two rabbits that reproduce make four rabbits, and those four rabbits go on to make eight rabbits (and so forth), then we have the kind of accelerating growth associated with positive feedback. This drive towards amplification is then suppressed by resistance (the carrying capacity), which works to counterbalance the growth. So if we start with eight garden rows of carrots, and at every generation of new rabbits the rate of carrot row consumption proceeds faster and faster, pretty quickly all the carrots are done (and by extension, all the unfed hungry rabbits are done too).

In a way, the terminology can be muddy, in that our definitions of positive and negative rely on what is considered to be the 'amplifying' feedback. If we shift the lens, we could think of carrots as the agents in the system (rather than rabbits), and we could state that, due to the positive feedback in their environment (rabbit reproduction), the rate at which carrots are being consumed is increasing (even as the number of carrots is diminishing). Accordingly, what we mean by 'positive' and 'negative' is often context dependent, and can shift depending on how we frame 'amplification' or 'suppression'.

What is nonetheless very interesting is that we can have systems that involve competing forces of feedback - one that drives the system forward, the other that resists or suppresses this drive (as in the case of rabbit reproduction and dwindling carrot supplies). Depending on the extent to which these co-evolving system features are out of sync in terms of their respective rates (rate of rabbit reproduction vs rate of carrot growth), the system can begin to oscillate in irregular ways. These kinds of irregular oscillations can be observed in the logistic map (often visualized as a bifurcation diagram), which illustrates how systems can cycle between many different behavioral states - with extremes arising, being dampened, and then arising again (to greater and lesser degrees). Many interesting complex systems are therefore neither entirely steered towards stability (like cybernetic systems), nor steered towards unified amplification (like crickets chirping in sync), but instead ride cascading waves between different states.
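
The logistic map itself is simple enough to sketch in a few lines of Python (the growth rates chosen below are illustrative): a single rule couples an amplifying term with a dampening term, and changing their relative strength shifts the system from a steady state, to regular boom-and-bust oscillation, to irregular cycling.

```python
# The logistic map: x -> r * x * (1 - x).  The 'r * x' term is amplifying
# (reproduction), the '(1 - x)' term is dampening (carrying capacity).

def long_run(r, x=0.2, warmup=1000, keep=8):
    """Iterate the map past its transient, then return the next few values."""
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 3))
    return tail

print("r = 2.8:", long_run(2.8))   # settles onto a single steady value
print("r = 3.2:", long_run(3.2))   # oscillates between two values (boom / bust)
print("r = 3.9:", long_run(3.9))   # irregular cycling with no repeating pattern
```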

The characteristics of how feedback is moving through the system, and whether or not the system is subject to one or more interdependent feedback loops, are therefore at the heart of some of the most complex dynamics we observe in complex systems, and explain why systems composed of seemingly simple agents can nonetheless produce very complex dynamics (the complexity is in the nature of the feedback, rather than in the inherent characteristics of the system).

As a final thought on this, we can observe the double pendulum experiment, where the irregular motion of the pendulum is subject to interwoven feedback from competing sources - while the system is simple, the patterns it traces exhibit complex dynamics:

(image source: Wikipedia)



Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Far From Equilibrium

Left to themselves, systems tend towards regimes that become increasingly homogeneous or neutral: complex systems differ - channeling continuous energy flows, gaining structure, and thereby operating far from equilibrium.

The Second Law of Thermodynamics is typically at play in most systems - shattered glasses don't reconstitute themselves and pencils don't stay balanced on their tips. But Complex Systems exhibit some pretty strange behaviors that violate these norms...


Equilibrium

In order to appreciate what we mean by 'far from equilibrium' we first need to start by understanding what is meant by 'equilibrium'. We can understand equilibrium using two examples: that of a pendulum, and that of a glass containing ice cubes and water.

If we set a pendulum in motion, it will oscillate back and forth, slowing down gradually, and coming 'to rest' in a position where it hangs vertically downwards. We would not expect the pendulum to rest sideways, nor to stand vertically from its fulcrum point.

We understand that the pendulum has expended its energy and now finds itself in the position where there is no energy - no competing force - left to be expended. The force exerted upon it is that of gravity, and this causes the weight to hang low. The pendulum has arrived at the point where all acting forces have been cancelled out: equilibrium.

Similarly, if we place ice cubes in a glass of water, we initially have a system (ice and water) where the water molecules within the system have very different states (solid and liquid). Over time, the water will cool slightly, while the ice will warm slightly (beginning to melt), and gradually we will arrive at a point in time when all the differences in the system will have cancelled out. Ignoring the temperature of the external environment, we can consider that all water molecules in the glass will come to be of the same temperature.

Again, we have a system where competing differences in the system are gradually smoothed out, until such time as the system arrives at a state where no change can occur: equilibrium.

In a complex system, we see very different dynamics: part of the strangeness of emergence arises from the idea that we might see ice spontaneously manifesting out of a glass of water! This is what we mean by 'far from equilibrium': systems that are constantly being driven away from the most neutral state (which would follow the second law of thermodynamics), towards states that are more complex or improbable. In order to understand how this can occur, we need to look at the flows that drive the system, and how these offer an ongoing input source that pushes the system away from equilibrium.

Example:

Let's take a look at one of our favorite examples, an ant colony seeking food. Let's start 100 ants off on a kitchen table (we left them there earlier when we were looking at {{driving-flows}}). The ants begin to wander around the table, moving at random, looking for food. If there are crumbs on the table, then some ants will find them, and direct the colony towards food sources through the intermediary signal of pheromones. As we see trails form (a clear line emerging out of randomness, like an ice cube forming itself out of a glass of water!), we observe the system moving far from equilibrium. But imagine instead that there is no food. The ants just keep moving at random. No emergence, nothing of statistical interest happening. When we remove the driving external flow (food) that is outside of the ant system itself, the ants become like our molecules of water in a glass: moving around in neutral, random configurations. Eventually, without food, the ants will die - arriving at an even more extreme form of equilibrium (and then decay)!

Origins

The phrase "far from equilibrium" was originally coined by Ilya Prigogine, and was used to characterize such phenomena as Benard Rolls (see also {{ilya-prigogine-isabelle-stengers}}). Prigogine and Stengers were interested in how system that were driven by external inputs could gain order (as exhibited by the rolls), and how the increase in these external inputs could in turn drive order in increasingly interesting ways. 

Another way to say this is that systems in equilibrium lack energy inputs needing processing, whereas systems far from equilibrium are characterized by having some kind of energy driver or differential at play.


Muddying the Waters

While the above should now be somewhat clear, it is also true that complex systems, while indeed operating "far from equilibrium" can exhibit behaviors that imply a different kind of equilibrium: one that is not part of the domain of physics or chemistry but rather that of Game Theory (and economics). 

There are various multi-actor systems examined by Game Theorists and Economists, where actors (or agents) use competing strategies to see which will yield (or 'win') some form of allocation. Such games might be played once, to show optimum game choices, or multiple times, to see what occurs when past strategies play a role in current strategies. Depending on how multiple agents deploy their strategies, games might produce win/win outcomes (where multiple agents gain allocations), win/lose outcomes (where my win results in your loss or vice versa), or lose/lose scenarios (where, in efforts to outcompete one another, all agents wind up leaving empty-handed). Game Theory can examine the kinds of strategies most viable for an individual agent in the system, but it can also analyze which strategies are most viable not solely for an individual agent, but for the collective gain of all agents in the system.

Such 'collective benefit' outcomes are described as being "Pareto efficient": they occur when no agent can improve its own position without making some other agent worse off. Another way to frame this is in terms of what would constitute a Pareto improvement: a change in system behavior that makes at least one agent better off, without making any other agent worse off.

Example:

Imagine we are placing 100 trash cans in a park. We don't know where they should go, so we distribute them at random, but we add a few special features:

1. Each trash can has a sensor that can track how quickly it is filled

2. Each is also able to receive and relay a signal to its nearest neighbors, indicating its rate of trash accumulation

3. Each is set on a rolling platform, allowing it to navigate to a new location in the park.

Accordingly: the agent in the system is the trash can; the fitness criteria is gathering trash; the adaptive capacity is the ability to relocate; and the differential driving flows are the variable intensities of trash generation.

We can imagine this system to be driven by simple rules: each trash can monitors, broadcasts, and receives information about its own rate of trash acquisition, as well as that of its nearest neighbors. At various time steps it makes a decision: remain in place or move - with the direction of movement weighted towards more successful neighbors. Each movement entails a Pareto improvement.

It should be relatively intuitive that, over time, trash cans will move until all cans are collecting identical amounts. At that point, the system has arrived at a Pareto Optimum, where movement cannot occur without a reduction in overall system fitness (it should be noted that this state may only be a local optimum). The system has calibrated itself to perform in the most effective way possible, restricted only by the scope of state spaces it was able to explore.*

* One proviso regarding this example is that the system may be trapped in a local optimum (see {{fitness-landscape}}). As a result, the system above will function more effectively if individual agents occasionally engage in random search regardless of neighboring states. This allows potential untapped domains of trash production to be discovered and then recruited for.
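
A toy sketch of this example might look as follows (the zone layout, trash rates, and movement rule are all illustrative simplifications - here each can relocates whenever a candidate spot improves its own intake, which is a cruder rule than the strict Pareto improvements described above):

```python
import random

random.seed(0)
zones = [random.randint(1, 20) for _ in range(30)]     # trash produced per zone
cans = [random.randrange(30) for _ in range(10)]       # each can starts in a random zone

def intake(cans, zones):
    """Trash in a zone is split evenly among the cans currently sitting in it."""
    counts = {z: cans.count(z) for z in cans}
    return [zones[z] / counts[z] for z in cans]

for step in range(200):
    rates = intake(cans, zones)
    i = random.randrange(len(cans))        # one can reconsiders its position
    candidate = random.randrange(30)       # a candidate location (simplified to 'anywhere')
    trial = cans.copy()
    trial[i] = candidate
    if intake(trial, zones)[i] > rates[i]: # move only if it improves this can's own intake
        cans = trial

print("final intake per can:", [round(r, 1) for r in intake(cans, zones)])
# The system moves towards a configuration in which no single can is likely to
# improve its own intake by relocating - a rough stand-in for the calibrated
# balance described above.
```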

The reason it is worth pointing out this particular dynamic is that game theory often discusses such optimizing strategies as finding "Equilibria". Accordingly, we have the famous "Nash Equilibrium" as a kind of game theory state (see the Prisoner's Dilemma Game), as well as other game theory protocols that use the term "Equilibrium" to refer to end-state strategies. While we normally speak of "Pareto efficient" or "Pareto Optimum" rather than "Pareto Equilibrium", there is a notional slipperiness at work here, meaning that it is easy to think of complex systems as arriving at a kind of steady state where the system has found a poised balance (as in the trash cans above). This kind of calibration and balancing act within their environment might be described as existing in a state of ecological equilibrium (rather than being far from it).

The muddiness comes from how the term is technically applied in physics or chemistry versus how it is used in economics and game theory. 


Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Degrees of Freedom

'Degrees of freedom' is the way to describe the potential range of behaviors available within a given system. Without some freedom for a system to change its state, no complex adaptation can occur.

Understanding the degrees of freedom available within a complex system is important because it helps us understand the overall scope of potential ways in which a system can unfold. We can imagine that a given complex system is subject to a variety of inputs (many of which are unknown), but then we must ask, what is the system's range of possible outputs?


The notion of degrees of freedom comes to us from physics and statistics, where it describes the number of possible states a system can occupy. For example, a swinging pendulum is constrained to a fixed number of 'states' (positions in space) that the pendulum can occupy. We can imagine that it is possible to map out all the potential locations of the pendulum's swing, and therefore the limits of all its behaviors. The degrees of freedom thus tell us something about what a system is capable of doing: its potential. The system cannot act outside the boundaries of this action potential.

For example, the maximum capacity of motion for a three-dimensional object in space can be described using just six degrees of freedom, which together define changes in orientation (rotation, via the 'roll', 'yaw' and 'pitch' motions) and changes related to displacement in space (through the 'up/down', 'back/forward' and 'left/right' parameters). We can see that all potentialities of movement are covered within this framework.

If we were to eliminate any of these parameters - for example the 'up/down' potential - then we would have fewer degrees of freedom, and certain types of movement would no longer be possible. Phase Space thereby captures the sum total of all potential behaviors - sometimes referred to as a system's 'possibility space'.

(image courtesy of Wikimedia commons)

In addition to there being a range of phase space potentials, there may also be particular behaviors in phase space that are more likely to occur. Accordingly, if we were to map all of the potential states of a pendulum's behavior from any given starting position in phase space, we would have what is known as a 'phase portrait' of that pendulum. This is to say that there are particular trajectories that the pendulum will follow within phase space. Different systems might have phase portraits that highlight certain special regions of phase space as being {{attractor-states}} (basins) towards which a system will tend to gravitate.


Human Systems

So far we have been speaking about physical degrees of freedom, but we might also imagine degrees of freedom in relation to behavioral possibilities.

Example:

Imagine we want to stay at an Airbnb. We could think of each Airbnb option as an agent in a complex system, competing to win us over by broadcasting its 'fitness' for our stay. Each Airbnb would be able to adjust a number of parameters that one might consider important in choosing accommodation. These parameters could include cost, cleanliness, distance to center, size, and quietness. Different people might value (or weigh) these parameters differently, and choose their Airbnb accordingly. At the same time, we can imagine that each Airbnb has a capacity to adjust its 'state' to different degrees. Location is clearly a limited parameter: a given Airbnb has no capacity to simply change its location. But it does have the capacity to adjust its price point. Size is also difficult to alter. But cleanliness might have more flexibility. Thus certain categories have more range in terms of their degrees of freedom than others. If Airbnbs are considered as agents in a complex system, each competing to find patrons who wish to stay at their location, then they each have to operate within their particular bounds of freedom in terms of how they adjust to align themselves better with user needs. Thus if they can't compete on the basis of location, they can attempt to compete on the basis of cost.

More Than Three Dimensions - No Problem!

The Airbnb case should also serve to illustrate that, in many scenarios, the degrees of freedom available to an agent in a complex system cannot be easily plotted in three-dimensional space (that is, a 'space' bounded by an x, y, and z axis). But just because we can't easily draw a graph of all these potential parameters doesn't mean we have to bend our minds to imagine more than three degrees of freedom: we can simply picture different priorities as parameter bars with weights. We then have a multi-parameter space within which the agent is calibrated.
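
A minimal sketch of such a multi-parameter space (the listings, parameter values, and guest weights below are all hypothetical) might simply score each agent as a weighted sum across its parameter settings:

```python
# Each listing is a point in a multi-dimensional 'possibility space'; higher
# numbers mean 'better' on that parameter (cheaper, cleaner, closer, quieter).
listings = {
    "loft":    {"cost": 0.8, "cleanliness": 0.9, "distance": 0.3, "quietness": 0.4},
    "cottage": {"cost": 0.4, "cleanliness": 0.7, "distance": 0.9, "quietness": 0.9},
    "room":    {"cost": 0.2, "cleanliness": 0.5, "distance": 0.6, "quietness": 0.5},
}

guest_weights = {"cost": 0.4, "cleanliness": 0.2, "distance": 0.3, "quietness": 0.1}

def fitness(listing, weights):
    """Weighted sum across every parameter the listing can present."""
    return sum(listing[p] * w for p, w in weights.items())

for name, params in listings.items():
    print(name, round(fitness(params, guest_weights), 2))
# Some parameters (cost, cleanliness) can be adjusted by the host; others
# (distance) are effectively frozen - they offer no degree of freedom to compete on.
```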

Multiple Degrees of Freedom thought of as sliding parameter bars: 

Requisite Variety

Analyzing agents in a complex system according to their degrees of freedom can thus be thought of as examining their range of possible parameter settings. This can be an extremely helpful way of thinking about the {{adaptive-capacity}} of the system: what it can and cannot do in response to environmental changes or fluctuations. Another way to describe this is the idea of a system's {{large-number-elements}}, or requisite variety: a phrase coined by Ross Ashby to highlight the amount of variability a system can enact. According to Ashby, a system needs to have a variety of responses commensurate with the variety of inputs it receives. This responsive capacity can be defined more precisely by defining the agent's degrees of freedom.




Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Cybernetics

Cybernetics is the study of systems that self-regulate: adjusting their own performance to keep aligned with a pre-determined outcome, using processes of negative feedback to help self-correct.

The word Cybernetics comes from the Greek 'Kybernetes', meaning 'steersman' or 'oarsman'. It is the etymological root of the English 'Governor'. Cybernetics is related to an interest in dynamics that lead to internal rather than external governing.


Cybernetic thought is an important early precursor to Complex Systems thinking.

Imagine a ship sailing towards a target (say, an island). There are various forces (wind and currents) that act upon the ship to push it away from its trajectory. In order to maintain a trajectory towards the island, the steersman need not be aware of the speed or direction of the wind, or the velocity of the waves. Instead, he or she just needs to keep an eye on the target, and keep adjusting the rudder of the ship to correct for any deviations from the route.

In a sense, we have here a complete system that works to correct for any disturbances. The system is composed of the target, any and all forces pushing the ship away from the target, the steersman registering the amount of deviation, and the rudder through which the steersman subsequently counterbalances this deviation.

While it is true that the steersman is the agent that 'activates' the rudder, it is also true that the amount of deviation from the target 'activates' the steersman. Finally, the forces acting upon the ship are what activate the deviation. We thus have a complete cybernetic system, where the forces at work form a continuous loop, and where the loop, in turn, is able to self-regulate.
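
A minimal sketch of this loop (with illustrative numbers) makes the point that the steersman only ever needs to sense and counteract the deviation, never the underlying causes:

```python
import random

random.seed(3)
heading, target = 0.0, 90.0    # degrees; the island lies at a bearing of 90
for step in range(12):
    disturbance = random.uniform(-8, 8)   # wind and current push the bow around
    deviation = heading - target          # the only thing the steersman perceives
    rudder = -0.6 * deviation             # correct against the deviation, not its cause
    heading += rudder + disturbance
    print(f"step {step}: heading {heading:6.1f}")
# Despite never modelling the forces at work, the loop keeps the heading
# hovering around the target bearing.
```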

A cybernetic system works to dampen any disturbances or amplifying feedback that would move the trajectory away from a given optimum range. Thermostats work on cybernetic principles, with temperature fluctuations dampened.

Like CAS, Cybernetics is concerned with how a system interacts with its environment. However, Cybernetics focuses on systems subject to negative feedback: ones self-regulating to maintain regimes of stable equilibrium, where disruptions (or Perturbations) are dampened.

Macy Conferences

Control

Stafford Beer, an early proponent of Cybernetics, discusses the Watt Flyball Regulator




 

Governing Features ↑

Attractor States

Complex Systems can unfold in multiple trajectories. However, there may be trajectories that are more stable or 'fit'. Such states are considered 'attractor states'.

Complex Adaptive Systems do not obey predictable, linear trajectories. They are "Sensitive to Initial Conditions", such that small changes in these conditions can lead the system to unfold in unexpected ways. That said, in some systems, particular 'potential unfoldings' are more likely to occur than others. We can think of these as 'attractor states' to which a system will tend to gravitate.


What's so Attractive?

Often a system has the capacity to unfold in many different ways - it has a large 'possibility space'. That being said, there can be regions of this possibility space that exert more force or 'attraction' for the system.

In some kinds of systems these zones of attraction exist because of pre-determined energy minimizing characteristics of these regimes. For example, if we blow a soap bubble it 'wants' to become a sphere: this is the state that encloses the most volume for the least surface area, and therefore also the best configuration for soapy molecules, locating them in their lowest energy state - one that best balances the competing forces at play: the expansion forces of the air pushing the system outwards, and the resistance forces of the soapy solution not wanting to waste any surface area. The spherical shape of the bubble is thus a kind of pre-given, and when we blow a bubble it is this shape - rather than a cube or a conical form - that we can safely anticipate the bubble will take.

Similarly, if we toss a marble in a vortex sphere at a science museum we know it will spin around the surface, but then ultimately make its way down to the bottom: this is the state of minimum resistance to the forces of gravity acting upon it.

It is this 'minimizing behavior' that is characteristic of attractor states: of all possible states within a given system's {{phase-space}} (the space of all possibilities), some regions require less energy expenditure to move towards than others. We will see that there can also be systems that have more than one such minimizing regime.

Lock In!

While the two physical systems described above have natural attractors, there are also social system dynamics that can cause similar attractor dynamics to arise.

In these scenarios, attractor states are not necessarily pre-determined by natural forces, but can instead emerge over time, as the system evolves, in light of {{feedback-loops}}. That said, once present they can reinforce themselves by constraining the subsequent actions of the agents forming the system.

Example:

We can think of Silicon Valley as an emergent attractor for tech firms that has, over time, reinforced its position. What is interesting about this example is that, even though it comes from the social sciences rather than the physical sciences, in some way the same minimizing principle applies - it is just a different form of minimization, one that has to do not with the laws of nature but with the social laws of human interaction.

To put this another way, once Silicon Valley established itself as the main tech hub, any new entrants to the tech field could, in principle, have chosen to locate themselves elsewhere - there were multiple locational possibilities within {{phase-space}}. However, if they were to choose these other locations, they would be far more likely to encounter additional "resistance" or frictions that would inhibit success. This is because these non-Silicon Valley sites would lack factors such as supporting infrastructure, abundant knowledge spillovers, and experienced, readily available workers. In a sense, the smoothest, least resistant course of business action for a technology firm is to locate where these kinds of external inputs are most easily accessed: a 'state of least resistance' - which in this case equates to Silicon Valley.

The emergence of such clusters of expertise is not limited to Silicon Valley. We often see that groupings of similar businesses co-locate in space (referred to as agglomerations), rather than distributing themselves evenly across a region. In a particular city we will see groupings - jewelry stores, cell phone service providers, bridal salons - tending to coalesce in co-located clusters.

The precise locations of these groupings are not established in advance in the way that the spherical shape of the soap bubble is. Instead, in these instances it is the process of {{feedback-loops}} that, over time, reinforces minor locational advantages, such that the kinds of spill-over advantages discussed in the Silicon Valley example give businesses that co-locate a better chance of success compared to their far-flung competitors.

Once these kinds of concentrations of expertise have coalesced in a particular region, they then attract new entrants to the field, in the same way that the spherical form attracts the soap molecules. Any system that enters into this kind of regime, where new behavior is directed according to what has occurred before in ways that are constraining and directing, can be considered to have entered into an "Enslaved State" (a term popularized by Hermann Haken). The concept of 'Enslavement' captures the notion that certain attractor states can emerge from agent interaction and, once present, will constrain the future action of these agents and all that come after them. The same idea is referred to as 'Lock-in' in the field of Evolutionary Economic Geography.


Shake it Up

We can see that in the example of the soap bubble and the example of Silicon Valley we have two very different kinds of system that are nonetheless both trying to limit unnecessary energy expenditure. For the soap, the concern is minimizing surface tensions or stress; for the business owner, it is minimizing the tensions and stresses involved with finding good employees, access to good internet, and so on. In this way the dynamics, while at first glance completely different, nonetheless run parallel. What is different is that in the human system the 'laws' at play are not stable over time. What might be best practice at one instant is not necessarily best practice at a later time. This is the risk of Lock-in: that systems begin to perpetuate themselves beyond the point where they are helpful (the QWERTY keyboard, designed to slow down typists to ensure that the mechanical typing hammers would not jam, is a great example of this kind of lock-in).

In these kinds of lock-in systems not governed by physical laws, it is occasionally worth 'shaking the system up' in order to see if it can be dislodged from a weak regime and encouraged to explore alternative behaviors. This is described as introducing a system {{perturbation}}: a disturbance intended to jostle a system so that we can see what it settles back into.

Example

For much of human history, the most effective way for individuals to access goods was to converge on a central market-place. This was the arena for trade, and by being centralized and co-located, efforts to find goods could be minimized on the part of the consumer, and efforts to find customers could be minimized on the part of the seller. This was the most "fit" way of achieving the goal of acquiring and dispersing goods.

In recent decades, this model has been turned on its head. With the advent of information technologies, combined with innovations in transport logistics, it has become increasingly viable for companies to deliver goods directly to the homes of consumers. Rather than coming to a central market-place, goods are able to move directly from manufacturer to consumer. Frictions about what is needed where have been reduced, and the costs and energy associated with physical markets (versus virtual markets) have been similarly reduced.

We can think of each of these regimes of behavior as two separate attractor basins within a variegated possibility space of goods acquisition and dispersal strategies. With changes in technology, one basin of attraction has, over time, become more viable (and therefore deeper), while the other has shrunk back in relevance and depth. We seem to have arrived at a tipping point today, where the minimizing forces favoring e-commerce versus physical commerce have shifted. That said, the legacy system tends to persist (old habits die hard).

Enter a global pandemic: this is a great example of a system perturbation, which shakes up standard patterns of behavior. Indeed, Covid caused many people who had never shopped online to try this behavior, and to realize that it does, indeed, minimize effort in new ways. This kind of system disturbance has moved many people out of their taken-for-granted regimes of behavior, and caused them to move into new regimes.

We can see from this example how a system perturbation can act as a kind of productive 'shock' that, if large enough, is able to move a system out of a prior attractor state and potentially into a new regime.


Multiple Attractors

In discussing the example above, we slipped in the idea that a system may have more than one 'well' or basin of attraction. It is worth exploring this a bit more, since we can imagine different kinds of possibility spaces - some that have only one deep well to which everything will ultimately tumble (a single attractor, like the one the pendulum moves towards), others that have multiple attractors, some deeper, some shallower, with a system able to explore multiple regimes of behavior within the space.

Further, complex systems can sometimes oscillate between attractor states, both of which are equally viable. This can be described as a system having Multiple Equilibria. The example of Benard Rolls is a case in point - liquid is heated from below, and forces churn the water molecules so as to cause them to minimize resistance by moving into a "roll" pattern. That said, the direction of the roll - cascading left or cascading right - represents two equally viable minimizing behaviors, either of which the liquid can move into. The system therefore has multiple equilibria.

In addition, we can have systems that oscillate between attractors, rather than settling into a specific regime. An example would be a predator/prey system, where the population numbers of each species rise and crash in recurring patterns over multiple generations. In this case, two attractors are coupled: as one intensifies (the prey reproduces a lot), it generates a counterbalancing response in another part of the system (the predator finds a ready food source and is able to reproduce a lot). This creates a back and forth oscillation between high prey and high predator numbers, with each regime counterbalancing the other.
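
A crude sketch of such coupled oscillation (a simple Euler-stepped, Lotka-Volterra style predator/prey model with illustrative parameter values) shows the recurring booms and crashes:

```python
def step(prey, predators, dt=0.01,
         birth=1.0, predation=0.1, efficiency=0.075, death=1.0):
    """One crude Euler step: prey growth is amplifying, predation counterbalances it."""
    d_prey = birth * prey - predation * prey * predators
    d_predators = efficiency * prey * predators - death * predators
    return prey + d_prey * dt, predators + d_predators * dt

prey, predators = 10.0, 5.0
for t in range(2500):
    prey, predators = step(prey, predators)
    if t % 250 == 0:
        print(f"t = {t * 0.01:5.2f}   prey = {prey:5.1f}   predators = {predators:5.1f}")
# Neither population settles: prey booms are followed by predator booms, which
# knock the prey back down, and the cycle recurs.
```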

The same dynamics can be seen in what are known as chemical oscillators, where we have a phenomenon of multiple attractors described as follows:

  • a reaction intensifies certain chemical behaviors;
  • beyond a certain threshold  these behaviors catalyze a new, counter behavior;
  • this counter behavior intensifies...;
  • beyond a certain threshold this counter behavior catalyzes the first behavior;
  • etc. 

The result of these reactions can be quite surprising, as seen below!

Check out the Multiple Attractors in the Briggs Rauscher chemical oscillator.


Back to {{key-concepts}}

Back to {{complexity}}



 

Governing Features ↑

 

Hello There

This is a nice home page for this section, not sure what goes here.

26:26 - Non-Linearity
Related
Concepts - 218 93 212 
Fields - 11 14 19 15 12 18 20 

23:23 - Nested Orders
Related
Concepts - 64 217 66 
Fields - 11 16 14 

24:24 - Emergence
Related
Concepts - 214 59 72 
Fields - 11 16 28 13 12 18 20 

25:25 - Driving Flows
Related
Concepts - 84 75 73 
Fields - 28 17 19 10 15 12 18 20 

22:22 - Bottom-up Agents
Related
Concepts - 213 56 
Fields - 11 16 14 10 13 12 18 

21:21 - Adaptive Capacity
Related
Concepts - 88 78 
Fields - 11 16 17 10 15 13 12 

 

Non-Linearity

Non-linear systems are ones where the scale or size of effects is not correlated with the scale of causes, making  them very difficult to predict.

Non-linear systems are ones in which a small change to initial conditions can result in a large scale change to the system's behavior over the course of time. This is due to the fact that such systems are subject to cascading feedback loops, that amplify slight changes. The notion has been popularized in the concept of 'the butterfly effect'. This effect - the idea that the beating of a butterfly's wings in Brazil, might set off a Tornado in Texas - is counterintuitive because of the scale difference. We tend to think that big effects are the result of big causes. Non-linear systems do not work that way, and instead a very small shift in initial conditions can result in massive system change.


This is because the behavior of non-linear systems is governed by what is known as Positive Feedback, which amplifies slight variations in ways that are counterintuitive. It therefore becomes very difficult to determine how an input or change will affect the system, with small actions inadvertently leading to big, unforeseen consequences.

Clarifying Terminology: Positive feedback does not imply a value judgement, with 'positive' being equated with 'good'! Urban decay is an example of a situation where positive feedback may lead to negative outcomes. A cycle of feedback might involve people disinvesting in a neighborhood, such that the quality of the housing stock goes down, leading to dropping property values at neighboring sites, further dis-incentivizing improvements, leading to further disinvestment, and so on.

History Matters!

The non-linearity of complex systems make them very difficult to predict, and instead we may think of complex adaptive systems as needing to unfold. Hence, History Matters, since slight variances in a system's history can lead to very different system behaviors.

Example:

A good example of this is comparing the nature of a regular pendulum to a double pendulum. In the case of a regular pendulum,  regardless of how we start the pendulum swinging, it will stabilize into a regular oscillating pattern. The history of how, precisely, the pendulum starts off swinging does not really affect the ultimate system behavior. The pendulum will stabilize in a regular pattern regardless of the starting point, a behavior that can be replicated over multiple trials.

The situation changes dramatically when we move to a double pendulum (a pendulum attached to another pendulum with a hinge point). When we start the pendulum moving, the system will display erratic swinging behaviors - looping over itself and spinning in unpredictable sequences. If we were to restart the pendulum swinging one hundred times, we would see one hundred different patterns of behavior, with no particular sequence repeating itself. Hence, we cannot predict the pendulum's behavior; we can only watch the swinging system unfold. At best, we might observe that the system has certain tendencies, but we cannot outline the exact trajectory of the system's behavior without allowing it to 'play out' in time:

watch the double pendulum!
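
Simulating the double pendulum itself takes a page of equations, so here is a simpler, hedged stand-in for the same point (the chaotic rule and starting values are illustrative): two runs of an irregular system that begin almost identically soon bear no resemblance to one another.

```python
# The logistic map in its irregular regime (r = 3.9), run twice from starting
# points that differ by one part in a million.

def run(x, r=3.9, steps=40):
    history = []
    for _ in range(steps):
        x = r * x * (1 - x)
        history.append(x)
    return history

a = run(0.200000)
b = run(0.200001)   # a nearly identical starting condition
for step in (1, 10, 20, 30, 39):
    print(f"step {step:2d}:  {a[step]:.6f}  vs  {b[step]:.6f}")
# The two histories track each other closely at first, then drift apart until
# they bear no resemblance - slight variance in history, very different behavior.
```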

We can think of the difference between this non-linear behavior and linear systems: if we wish to know the behavior of a billiard ball being shot into a corner pocket, we can calculate the angle and speed of the shot, and reliably determine the trajectory of the ball. A slight change in the angle of the shot leads to only a slight change in the ball's trajectory.  Accordingly, people are able to master the game of pool based on practicing their shots! If the behavior of a billiard ball on a pool table were like that of a complex system, it would be impossible to master: with even the most minute variation in our initial shot trajectory, the balls would find their ways to completely different positions on the table with every shot.

System Tendencies

That said, a non-linear system might still exhibit certain tendencies. If we allow a complex system to unfold many times (say in a computer simulation), while each simulation yields a different outcome (and some yield highly divergent outcomes), the system may have a tendency to gravitate towards particular regimes. Such regimes of behavior are known as Attractor States. Returning to the pendulum: in our single pendulum experiment the system always goes to the same attractor, oscillating back and forth. But a complex system features multiple attractors, and the 'decision' of which attractor the system tends towards varies according to the initial conditions.

Complex systems can be very difficult to understand due to this non-linearity. We cannot know if a 'big effect' is due to an inherent 'big cause' or if it is something that simply plays out due to reinforcing feedback loops. Such loops amplify small behaviors in ways that can be misleading.

Example:

If a particular scholar is cited frequently, does this necessarily mean that their work has more intrinsic value than that of another scholar with far fewer citations?

Where is this all going?!

Intuitively we would expect that a high level of citations is correlated with a high quality of research output, but some studies have suggested that scholarly impact might also be attributed to the dynamics of {{positive-feedback}}: a scholar who is randomly cited slightly more often than another scholar of equal merit will have a tendency to attract more attention, which then attracts more citations, which attracts more attention, and so on. Had the scholarly system unfolded in a slightly different manner (with a different scholar initially receiving a few additional citations), the dynamics of the system could have led to a completely divergent outcome - citation networks may be subject to historical {{contingency}} that could have played out differently, with different scholars assuming primary positions in the citation hierarchy. Thus, when we say that complex systems are "Sensitive to Initial Conditions", this is effectively another way of speaking about the non-linearity of the system, and how slight, seemingly innocuous variations in the history of the system can have a dramatic impact on how things ultimately unfold.

Another way of thinking about this is to describe a system as path dependent: a key concept linked to the idea of non-linearity, indicating that we need to follow the sequence of the system's unfolding to see what is going to happen. Tied to the idea of a path that needs to be followed is the idea of a {{tipping-point}}: a kind of 'point of no return' where a system veers from one trajectory to another, thereby closing off other potential pathways. A tipping point can be a system poised at a juncture between two states (either of which could viably unfold - VHS or BETA, for instance), or it can be a moment where the pressure on the system is such that it can no longer continue to operate in a mode that, until that point, was viable. At that juncture the system needs to move into a different kind of behavioral regime. Water turning to ice or to steam is a tipping point of this latter kind, where the water molecules move beyond a certain threshold of agitation and can no longer maintain their previous state.


Implications

In many domains of complexity, computer models are the primary tool used to understand these systems. Computers are very effective at emulating the step by step, rule based processes undertaken by multiple agents in parallel, that can result in emergent, unexpected outcomes. There are reasons why this can be very helpful, particularly if the system being modeled can be shown to have a tendency to move towards particular regimes, despite their non-linear features (these system tendencies can be thought of as 'attractors' for the system).

That said, many complex systems do not have specific attractors, or have attractors that change in unexpected ways depending on the environmental context at play. Real-world complex systems will gravitate towards 'fit' behaviors, but fitness changes with context, there can be multiple, divergent fit 'solutions', and the variables governing a system's unfolding can change.

Because of the non-linear nature of complex systems, predictive models are, in principle, not going to be an effective means to gain insight into ultimate system trajectories. This is not to say that we can't learn from the dynamics that unfold in simulations, only that it is hard to consider them as predictive tools given the inherent uncertainty of these systems.

So what do we do? One answer is that we accept our lack of ability to predict specific outcomes, and try something else. This 'something else' has to do with learning from complexity dynamics so as to gain the tools to enact complexity dynamics:

Enacting vs Predicting.

What if we could set up systems that hold the ability to unfold in ways that lead towards fit behaviors? Rather than build a complex system in a model, what if we could make real things in the world modeled on complexity dynamics? We would have to accept a kind of uncertainty - we won't know what the systems will ultimately look like - but we might still be able to know how the systems will behave. And if we design these systems correctly, they will behave in ways that ensure that the energy or resources fueling the system are processed effectively, and that individual agents are gradually steered into regimes of behavior that maximize the fitness of all agents as a whole.

While the precise form such systems take will be subject to contingent, non-linear dynamics, the performance of the system is something that we can rely upon to serve a given purpose.




 

Nested Orders

Complex Systems tend to organize themselves into systems of nested orders, where new features emerge at each level of order: cells forming organs, organs forming bodies, bodies forming societies.

Complex systems exhibit important scalar dynamics from two perspectives. First, they are often built up from nested sub-systems, which themselves may be complex systems. Second, at a given scale of inquiry within the system, there will be a tendency for the system to exhibit Power Law (or scale-free) dynamics in terms of how the system operates. This simply means that there will be a tendency for a small number of elements within the system to dominate: this domination can manifest in different ways, such as intensity (earthquakes), frequency (citations), or physical size (road networks). In all cases a small ratio of system components (earthquakes, citations, or roads) carries a large ratio of system impact. Understanding how and why this operates is important to the study of complexity.


Nested Orders

To understand what we mean by 'nested', we can think of the human body. At one level of magnification we can regard it as a collection of cells, at another as a collection of organs, at another as a complete body. Further, each body is itself part of a larger collection - perhaps a family, a clan or a tribe - and these, in turn, may be part of other, even larger wholes: cities or nations. In complex systems we constantly think of both parts and wholes, with the whole (at one level of magnification) becoming just a part (at another level of magnification). While we always need to select a scale to focus upon, it is important to note that complex systems are open - so they are affected by what occurs at other scales of inquiry. When trying to understand any given system within this hierarchy, the impact of subsystems typically occurs near adjacent scales. Thus, while a society can be understood as being composed of humans, composed of bodies, composed of organs, composed of cells, we do not tend to consider the role that cells play in affecting societies. Instead, we attune to understanding interactions between the relevant scales of whatever system we are examining. Depending on the level of inquiry we choose, we may look at the same entity (for example a single human being) and consider it to be an emergent 'whole', or simply a component part (or agent) within a larger emergent entity (one being within a complex society).

Various definitions of complexity try to capture this shifting nature of agent versus whole, and how this alters depending on the scale of inquiry. Definitions thus point to complex adaptive systems as being hierarchical, or as operating at micro, meso, and macro levels. In his seminal article The Architecture of Complexity, Herbert Simon describes such systems as 'composed of interrelated sub-systems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem'.

Why is this the case? And why does it matter?

Simon argues that, by partitioning systems into nested hierarchies, wholes are more apt to remain robust. They maintain their integrity even if parts of the system are compromised. He provides the example of two watchmakers, each of whom builds watches made up of one thousand parts. One watchmaker organizes the watch's components as independent entities - each of which needs to be integrated into the whole in order for the watch to hold together as a stable entity. If one piece is disturbed in the course of the watchmaking, the whole disintegrates, and the watchmaking process needs to start anew. The second watchmaker organizes the watch parts into hierarchical sub-assemblies: ten individual parts make one unit, ten units make one component, and ten components make one watch. For the second watchmaker, each sub-assembly holds together as a stable, integrated entity, so if work is disrupted in the course of making an assembly, the disruption affects only that component (meaning a maximum of ten assembly steps are lost). The remainder of the assembled components remain intact.
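
A small calculation in the spirit of Simon's argument (the interruption probability here is an illustrative assumption) shows why the partitioning matters so much:

```python
p = 0.01                      # assumed chance of being interrupted at any single step
flat = (1 - p) ** 1000        # the first watchmaker must survive 1000 consecutive steps
nested = (1 - p) ** 10        # the second only ever risks one 10-step sub-assembly at a time

print(f"chance of finishing the whole watch in one go: {flat:.6f}")   # ~0.000043
print(f"chance of finishing a ten-part sub-assembly:   {nested:.3f}") # ~0.904
# The flat watchmaker almost never gets a watch out the door, while the
# hierarchical watchmaker loses at most ten steps of work to any interruption.
```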

If Simon is correct, then natural systems may preserve robustness by creating sub-assemblies that each operate as wholes. Accordingly, it is worth considering how human systems might benefit from similar strategies.

Nested System Partitioning

Simon's watchmaker is a top-down operator who organizes his work flow into parts and wholes to keep the watch components partitioned and robust, creating a more efficient watch-making process. What is noteworthy is that self-organizing, bottom-up systems also seem to have inherent dynamics that appear to push systems towards such partitioning, and that this partitioning holds specific structural properties related to mathematical regularities.

A host of complex systems thus exhibit what is known as Self Similarity - meaning that we can 'zoom in' at any level of magnification and find repeated, nested scales. These scale-free hierarchies follow the mathematical regularities of Power Law distributions. These distributions are so common in complex systems that they are often referred to as 'the fingerprint of self-organization' (see Ricardo Solé). We find power-law distributions in systems as diverse as the frequency and magnitude of earthquakes, the structure of academic citation networks, the prices of stocks, and the structure of the World Wide Web.


Scalar Events 

Further, complex systems tend to 'tune' themselves to what is referred to as Self-Organized Criticality: a state at which the scale or scope of a system's response to any given input will follow a power-law distribution, regardless of the intensity (or scope) of the input. Imagine a pile of sand, to which one grain is added to the top, then another, then another. There is a moment when the pile reaches a certain threshold, at which adding a grain will cause a kind of small 'collapse': the added grain will dislodge an existing one, which cascades downwards off the pile. When sand piles (or other complex systems) are in the 'critical' state, we cannot predict the impact of that singular grain of sand: whether it will dislodge one or two grains, or whether it will set off an avalanche of several hundred grains. If the addition of one grain causes a massive avalanche, we might think that the avalanche was the 'result' of a major 'cause'. But this is an error (see {{non-linearity}}). That single grain could just as easily have set off any size of avalanche, and the frequency with which avalanches of different sizes occur follows a power law (see also {{per-bak}}).
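For readers who want to see this dynamic concretely, the snippet below is a minimal sketch of a sandpile in the spirit of Per Bak's model (grid size, toppling threshold, and number of grains dropped are arbitrary illustrative choices, not canonical values). Grains are added one at a time, and the size of the cascade that each single grain triggers is recorded: most grains trigger nothing, while a few trigger very large avalanches, with the frequency of sizes falling off in a heavy-tailed way.

```python
import random
from collections import Counter

# A minimal, illustrative sandpile sketch (assumed parameters, not canonical ones).
SIZE, THRESHOLD = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple(x, y):
    """Relax site (x, y) and any sites it pushes over threshold; return the avalanche size."""
    unstable, toppled = [(x, y)], 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= 4
        toppled += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < SIZE and 0 <= nj < SIZE:      # grains falling off the edge are simply lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
        if grid[i][j] >= THRESHOLD:                    # a site can topple more than once
            unstable.append((i, j))
    return toppled

avalanche_sizes = Counter()
for _ in range(50_000):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1                        # add one grain at a random site
    avalanche_sizes[topple(x, y)] += 1     # record how big a cascade that single grain caused

# Most added grains cause no avalanche at all; a few cause very large cascades.
for size in sorted(avalanche_sizes)[:10]:
    print(size, avalanche_sizes[size])
```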

While not fully understood, it is believed that systems gravitate towards these critical states because it is within these regimes that systems are able to maximize performance while simultaneously using the minimum amount of available energy. When systems are poised at this state they also have maximum connectivity with the minimum amount of redundancy. It is also believed that they are the most effective information processors when poised within this critical regime.

Why Nested and not Hierarchical?

The attentive surfer of this website may notice that in the various definitions of complexity being circulated, the term 'hierarchical' is used to describe what we call here 'nested orders'. We have avoided using this term as it holds several connotations that appear unhelpful. First, a hierarchy generally assumes a kind of priority, with 'upper' levels being more significant than lower. Second, it implies control emanating from the top down. Neither of these connotations are appropriate when speaking about complex systems. Each level of nested orders is both a part and a whole, and causality flows both ways as the emergent order is generated by its constituent parts, and steered by those parts as much as it steers (or constrains) its parts once present. We hope that the idea of 'nested orders' is more neutral vis-a-vis notions of primacy and control, but still captures the idea of systems embedded within systems of different scales.


Implications

When considering the design of a system for which we are hoping to achieve complex dynamic unfolding, it is therefore important to think about two aspects.

The first is to consider how we might partition systems into different sub-units of similar components, that can operate as a unit without doing damage to units operating either at a higher or lower level. To take an urban example, we might think about the furnishings that operate together to form the unit of a room, rooms that together form the unit of a building, and buildings that operate together to form the unit of a block. Each level operates with respect to the levels above and below, but can be thought of as systems on their own. 

But this is not all - there is a dialogue between levels, such that it is not simply a hierarchy that runs from the block down through the building and into the furniture. Instead, each level emerges from the level below, is stabilized over time, and in turn constrains what happens at the scale below. Units emerge from units and then constrain those same units, while also forming the {{building-blocks}} of what happens above.

The second is to be careful about how we interpret extreme events: if we look at large sand pile avalanches as somehow fundamentally different from small sand pile cascades, we are unlikely to understand that the same kind of cause set off both effects. The same dynamics may be at play for many phenomena, so we should be aware of how much emphasis we place on causal factors in 'extreme' events, if the event is one taking place within a complex system that may be in the critical regime.

To put it another way, if we wish to know why a particular cat video went viral, it might not be that productive to look into the details of the cat, its actions, or the quality of the video. That particular video might simply be the sand grain of cat videos - setting off a chain of viewing that would have eventually cascaded simply due to the number of cat videos poised to go viral at any given moment. While it is true that this example does not exactly parallel the sand-pile case, it expresses the same basic premise: that extreme events may simply be one scale of event in a system that is poised to unfold at all potential scales.








 

Emergence

Complex Adaptive Systems display emergent global features: ones transcending those of the system's individual elements.

Emergence refers to the unexpected manifestation of unique phenomena appearing in a complex system in the absence of top-down control. Emergent, integrated wholes are able to manifest through self-organizing, bottom-up processes, with these wholes exhibiting clear, functional structures. These phenomena are intriguing in part due to their unexpectedness. Coordinated behaviors yield an emergent pattern or synchronized outcome that holds properties distinct from those of the individual agents in the system. Emergence can refer both to these novel global phenomena themselves (such as ant trails, Benard rolls or traffic jams) or to the mathematical regularities - such as power-laws - associated with them.


Starling Murmuration - an emergent phenomenon

When we see flocks of birds or schools of fish, they appear to operate as integrated wholes, yet the whole is somehow produced without any specific bird or fish being 'in charge'. The processes leading to such phenomena are driven by networks of interactions that, because of feedback mechanisms, gradually impose constraints or limits upon the members of the system (see Degrees of Freedom). Recursive feedback between these members (or 'agents') takes what was initially 'free' behavior and gradually constrains or enslaves it into coordinated regimes.

These coordinated, emergent regimes generally feature new behavioral or operational capacities that are not available to the individual elements of the system. In addition, emergent systems often exhibit mathematical pattern regularities (in the form of {{power-laws}}) pertaining to the intensity of the emergent phenomena. These intensities tend to be observed in aspects such as spatial, topological or temporal distributions of the emergent features. For example, there are pattern regularities associated with earthquake magnitudes (across time), city sizes (across space), and website popularity (across links, or 'topologically').

Quite a lot of research in complexity is interested in the emergence of these mathematical regularities, and sometimes it is difficult to decipher which feature of complexity is more important - what the emergent phenomena do (in and of themselves), versus the structural patterns or regularities that these emergent phenomena manifest.

Relation to Self-Organization:

Closely linked to the idea of emergence is that of self-organization, although there are some instances where emergence and self-organization occur in isolation from one another.

Example:

One interesting case of emergence without self-organization is associated with the so-called 'wisdom of crowds'. A classic example of the phenomenon (described in the book of the same name) involves estimating the weight of a cow at a county fair. Experts as well as non-experts are asked to estimate the cow's weight. Fair attendees are given the chance to guess a weight and put their guess into a box. None of the attendees are aware of the estimates being made by others. Nonetheless, when all the guesses from the attendees are tallied and averaged, the weight collectively determined by the 'crowd' is closer to the cow's true weight than the estimates made by experts. The correct weight of the cow 'emerges' from the collective, but no self-organizing processes are involved - simply independent guesses.
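As a toy illustration of why this works (the numbers below are invented for the sketch, not taken from the original anecdote), many independent, error-prone guesses average out close to the true value even though no single guess is particularly good:

```python
import random
import statistics

# Hypothetical 'wisdom of crowds' sketch: all figures are assumptions for illustration.
TRUE_WEIGHT = 543                       # assumed true weight of the cow, in kg
random.seed(1)

# 800 independent guesses, each noisy by roughly +/- 80 kg.
guesses = [random.gauss(TRUE_WEIGHT, 80) for _ in range(800)]

print("typical individual error:",
      round(statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses), 1))
print("error of the crowd's average:",
      round(abs(statistics.mean(guesses) - TRUE_WEIGHT), 1))
```

The individual errors stay large, while the error of the averaged guess shrinks dramatically; the 'emergent' accuracy comes from aggregation alone, not from any interaction amongst the guessers.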

Despite there being examples of emergence without self-organization (as well as self-organization without emergence), in the case of Complex Adaptive Systems these two concepts are highly linked, making it difficult to speak about one without the other. If there is a meaningful distinction, it is that Self-Organization focuses on the character of interactions occurring amongst the Bottom-up Agents of a complex system, whereas Emergence highlights the global phenomena that appear in light of these interactions.

Enslavement:

At the same time, the concepts are interwoven, since emergent properties of a system tend to constrain the behaviors of the agents forming that system. Hermann Haken frames this through the idea of an Enslaved State, where agents in a system come to be constrained as a result of phenomena they themselves created.

Example:

An interesting illustration of the phenomenon of 'enslavement' can be found in ant-trail formation. Ants, which initially explore food sources at random, gradually have their random explorations constrained due to the signals provided by pheromones (which are deployed by ants that randomly discover food). The ants, responding in a bottom-up manner to these signals, gradually self-organize their search and generate a trail. The trail is the emergent phenomenon, and self-organization - as a collective dynamic that is distributed across the colony - 'works' to steer individual ant behavior. That said, once a trail emerges, it acts as a kind of 'top-down' device that constrains subsequent ant trajectories.

Emergence poses ontological questions concerning where agency is located - that is, what is acting upon what. The source of agency becomes muddy as phenomena arising from agent behaviors (the local level) give rise to emergent manifestations (the global level) which subsequently constrain further agent behaviors (and so forth). This is of particular interest to those exploring the philosophical implications of complexity.

There is a very tight coupling in these dynamics between a system's components and the environment that the components act within. Thus, a specific characteristic of the environment is that it also consists of system elements. Consequently, as elements shift in response to their environmental context, they are, in turn, helping to produce a new environmental context for themselves. This results in the system's components and the system environment forming a kind of closed loop of interactions. These kinds of loops of behaviors, which lead to forms of self-regulation, were the object of study for early Cybernetics thinkers.

Urban Interpretations:

The concept of Emergence has become increasingly popular in urban discourses. While some urban features come about through top-down planning (for example, the decision to build a park), other kinds of urban phenomena seem to arise through bottom-up emergent processes (for example a particular park becoming the site of drug deals). It should be noted that not all emergent phenomena are positive! In some cases, we may wish to help steward along emergent characteristics that we deem to be positive for urban health, while in other cases we may wish to try to dismantle the kinds of feedback mechanisms that create spirals of decay or crime.

The concept of emergence can be approached very differently depending on the aims of a particular discourse. For example, Urban Modeling often highlights the emergence of Power Laws in the ratio of different kinds of urban phenomena. A classic example is the presence of power-law distributions in city sizes, which looks at how the populations of cities in a country follow a power-law distribution; but one can also examine power-law distributions within rather than between cities, examining such characteristics as road systems, restaurants, or other civic amenities.
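To make this concrete, the sketch below generates synthetic 'city sizes' from a heavy-tailed (Pareto) distribution - the data are invented, not real census figures - and prints the resulting rank-size relationship. Under a Zipf-style power law the largest city is roughly twice the size of the second-ranked city, five times the fifth-ranked, and so on; a random draw only approximates this, but the pattern is recognizable.

```python
import random

# Synthetic rank-size sketch: all values are randomly generated for illustration.
random.seed(0)
sizes = sorted((random.paretovariate(1.0) * 10_000 for _ in range(200)), reverse=True)

for rank in (1, 2, 5, 10, 50, 100, 200):
    print(f"rank {rank:>3}: population ~ {int(sizes[rank - 1]):>10,}")
# Plotted on log-log axes, rank versus size would fall roughly on a straight line -
# the visual signature of a power-law distribution.
```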

Others, such as those engaged in the field of Evolutionary Economic Geography (EEG), are intrigued by different kinds of physical patterns of organization. EEG attunes to how 'clusters' of firms or 'agglomerations' appear in various settings, in the absence of top-down coordination. They try to unpack the mechanisms whereby firms are able to self-organize to create these clusters, rather than looking at any particular mathematical regularities or power-law attributes associated with such clusters.

Still other urban discourses, including Relational Geography and Assemblage Geography, are focused on how agents come together to create new structures or entities: which might be buildings, institutions, building plans, etc. These discourses tend to focus on the coordination mechanisms and flows that steer how such entities come to emerge.

Accordingly, different discourses attune to very different aspects of complexity.

Proviso:

While this entry provides a general introduction to emergence (and self-organization), there are other interpretations of these phenomena that disambiguate these concepts with reference to Information theory. These interpretations focus upon the amount of information (in a Shannonian sense) required to describe self-organizing versus emergent dynamics.

While these definitions can be instructive, they remain somewhat controversial. There is no absolute consensus about how complexity can be defined using mathematical measures (for an excellent review of various measures, check the FEED for Ladyman, Lambert and Weisner, 2012). Often, an appeal is made to the idea of 'somewhere between order and randomness'. But this only tells us what complexity is not, rather than what it is. The explanation provided here is intended to outline the terminology in a more intuitive way that, while not mathematically precise, makes the concepts workable.





 

Driving Flows

Complex Systems exchange energy and information  with their surroundings. These input flows help structure the system.

Complex systems, while operating as bounded 'wholes', are not entirely bounded. They remain open to the environment, which, in some fashion, 'feeds' or 'drives' the system: providing energy that can be used by the system to build and retain structure. Thus complex systems appear to defy the second law of thermodynamics in that, rather than tending towards disorder (entropy), they are pushed towards order (negentropy). This would not be possible in the absence of some external source of input, with disorder ultimately exported back to the surroundings. This input can be thought of as the "fuel" for the agents within the system, which could be in the form of food for ants, clicks for a website, or trades for a stock market.


According to the second law of thermodynamics a system, left to its own devices, will eventually lose order: hot coffee poured into cold will dissipate its heat until all the coffee in the cup is of the same temperature; matter breaks down over time when exposed to the elements; and systems lose structure and differentiation. The same is not true for complex systems. They gain order and structure over time.

What constitutes a flow?

In general, we can conceptualize flows as some form of energy that helps drive or push the system. But what do we mean by energy? And what kinds of energy flows should we pay attention to in the context of complexity?

In some cases, the source of system energy aligns with a strictly technical definition of what we think of when we say 'energy'. Such is the case in the classic example of 'Benard rolls' (see Open / Dissipative for a video of this phenomenon). Here, a coherent, emergent 'roll' pattern is generated by exciting water molecules by means of a heat source. It becomes relatively straightforward to identify thermal energy as the flow driving the system: heat enters the water system from below, dissipates to the environment above, and drives emergent water roll activity in between.

But there are a host of different kinds of complex systems where we see all kinds of driving flows that do not necessarily have their dynamics directed in accordance with this strict conception of 'energy'.

Example:

In an academic citation network, citations could be perceived as the 'energy' or flow that drives the system towards self-organization. As more citations are gathered, a scholar's reputation is enhanced, and more citations flow towards that scholar. A pattern of scholarly achievement emerges (one that follows a {{power-law}} distribution), due to the way in which the 'energy flows' of scholarly recognition (citations) are distributed within the system. While we tend to think that citations are based on merit, a number of studies have been able to replicate patterns that echo citation distribution ratios using only the kinds of mechanisms we would expect to see within a complex system - with no inherent merit required (see also Preferential Attachment!).
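A minimal sketch of such a merit-free mechanism, in the spirit of preferential attachment (all parameters and the seeding of the network are arbitrary assumptions), is shown below: each new paper cites an existing paper with probability proportional to the citations that paper has already accumulated, and a small number of papers end up capturing most of the citations.

```python
import random
from collections import Counter

# Illustrative preferential-attachment sketch; not a calibrated citation model.
random.seed(42)
citation_targets = [0, 1]          # a multiset: each entry represents one chance of being cited
citations = Counter({0: 0, 1: 0})  # citation counts per paper

for new_paper in range(2, 5000):
    cited = random.choice(citation_targets)   # pick proportionally to prior visibility
    citations[cited] += 1
    citation_targets.append(cited)            # the cited paper becomes even more visible
    citation_targets.append(new_paper)        # each new paper gets one baseline entry so it can be found at all
    citations.setdefault(new_paper, 0)

print("most-cited papers:", citations.most_common(5))
print("papers never cited:", sum(1 for c in citations.values() if c == 0))
```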
Similarly, the stock market can be considered as a complex adaptive system, with stock prices forming the flow which helps to steer system behavior; the world wide web can be considered as a complex adaptive system, with the number of website clicks serving as a key flow; the way in which Netflix organizes recommendations can be considered as a complex adaptive system, with movies watched serving as the flow that directs the system towards new recommendations.

Clearly, it is helpful to understand the specific nature of the driving flows within any given complex system, as this is what helps push the system along a particular trajectory. For ants, (who form emergent trails), food is the energy driving the system. The ants adjust their behaviors in order to gain access to differential flows (or sources) of food in the most effective way possible given the knowledge of the colony. In this case, the global caloric value of food stocks found is a good way to track the effectiveness of ant behavior.

If we look at different systems, we should be able to somehow 'count' how flow is directed and processed: citation counts, stock prices, website clicks, movies watched.

Multiple Flows:

Often complex systems are subject to more than one kind of flow that steers dynamics. For example, we can look at the complex population dynamics of a species within an ecosystem with a limited carrying capacity. Here, two flows are of interest: the intensity of reproduction (or the flow of new entrants into the environmental context), and the flow of food supplies (that limits how much population can be sustained). Here one flow rate drives the system (reproductive rate), while another flow rate chokes the system (carrying capacity). This interaction between two input flows (one driving and the other constraining) produces very interesting emergent dynamics that lead the system to oscillate or move periodically from one 'state' (or Attractor States) to another. A more colloquial way of thinking about this periodic cycling is captured in the idea of 'boom' and 'bust' cycles, although there are other kinds of cycles that involve moving between not just two, but many additional cycling regimes (see Bifurcations for more!).
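The logistic map is a standard toy model of exactly this interplay between a driving flow (reproduction) and a choking flow (carrying capacity). The sketch below (initial values and rates chosen purely for illustration) shows the population settling to a single value, a two-state boom/bust cycle, a four-state cycle, or apparently irregular behavior as the reproductive rate r is turned up.

```python
# Logistic map sketch: x is the population as a fraction of carrying capacity,
# r is the reproductive rate; the parameter values below are illustrative only.
def trajectory(r, x0=0.4, warmup=200, keep=8):
    x = x0
    for _ in range(warmup):          # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):            # record where the system ends up cycling
        x = r * x * (1 - x)
        out.append(round(x, 3))
    return out

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {trajectory(r)}")
# r = 2.8 -> a single stable value; r = 3.2 -> a two-state boom/bust cycle;
# r = 3.5 -> a four-state cycle; r = 3.9 -> apparently irregular (chaotic) values.
```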

Go with the flow:

Flow is the source of energy that drives self-organizing processes. A complex system is a collection of agents that are operating within a kind of loose or Open / Dissipative boundary, and flow is what comes in from the outside and is then processed by these agents. Food is not part of the ant colony system, but it is what drives colony dynamics. The magic of self-organization is that, rather than each agent needing to independently figure out how best to access and optimize this external flow, each agent can learn from what its neighbors are doing.

Accordingly, there are two kinds of flows in a complex system - the external flow that needs to be internalized and processed, and the internal flows amongst agents that help signal the best way to perform within a given environment (and thereby process these external flows). The act of generating these signals is what Pierre Paul Grasse describes as Stigmergy - a process that in some way marks or alters the shared environment of all agents in ways that can thereby steer agent behavior. For example, ants depositing pheromones on a path leading to food is an example of a stigmergic signal.

An environment characterized by stigmergic signals is no longer neutral - it has areas or zones of intensity that affect all agents in the system that are in proximity to these signals. Thus, although agents may function  in random ways, stigmergy shifts the probability that agents in a system will behave in one way versus another:  the more intensity a particular zone of stigmergy has, the more likely agents will be drawn into the behavior directed by that zone.

Using stigmergic signals to help direct the processing of flows, agents gradually move into regimes that process these flows with minimal energy requirements: through Positive Feedback they draw other agents along into similar regimes of behavior, making the system, as a whole, an efficient energy processor.

It's all about difference:

Every complex system channels its own specific form of driving flow.

In every case, it is important to look beyond technical definitions of energy flows in complex systems, to instead understand these as the differences that matter to the agents in the system, or as Gregory Bateson states 'the difference that makes a difference'. All complex systems involve some sort of differential, and this differential is regulated by an imbalance of flows, that thereby steers subsequent agent actions.  As the system realigns itself through  attuning to these differentials, new behaviors or patterns emerge that, in some way, optimize behaviors.

Inherent Laziness: It's everywhere!

A nice way to think about this is to imagine that everything in the world is essentially trying to do the least possible work - particularly when being pushed around by some outside force. The Driving Flows are that outside force, which basically comes into the agents' territory.

Responsive Agents, Differential Flows:

Sometimes, all the agents really care about is basically shaking off the disturbance: the liquid molecules being heated in the Benard Rolls were happily drifting about, only to have some annoying heat energy come along that they now need to contend with. At that point, the regime that allows the heat to pass through the system and rise to the top is for the molecules to get into neater alignments of rolls that allow these currents to go through with less overall disruption. The same is true in the action of sand grains forming ridges in response to the driving flows of the winds. In both cases, the agents, left to themselves, do not generate driving flows of their own.

Active Agents, Differential Flows:

At other times, the agents are themselves a kind of driving force that needs an external driving flow to achieve a goal (eat, reproduce, etc.), but they are trying to figure out how to claim the prize without wasted effort. Unlike the agitated fluid, or the disturbed sand, the ants will move to seek out the driving flow whether or not it is present (ultimately running out of steam if it is not). We can see here that the ants are different from the sand grains, because the sand grains are passive without the external input, whereas the ant behavior actively seeks out the external input. A tree growing does the same thing - its roots look for nutrients, its branches and leaves extend towards the sun - the environment and the agent work together to minimize the effort of the growing tree to get what it needs without expending unnecessary resources.

Flowing Agents, Differential Context

A final example inverts the situation entirely, where the driving flow is coming strictly from an agent in an environment. Imagine I want to walk up a hill. My drive is to get to the top, but I want to do so expending the least amount of energy in terms of the parameters of both time and effort. I can charge directly upwards - using the principle that the shortest distance between two points is a straight line. But while this might initially appear to be a good solution, I quickly discover that the effort of the direct vertical path takes a toll on my energy level. Instead, if I extend the distance of travel but reduce the slope (thereby moving at a lateral incline), the energy of each step is reduced. That said, the more I reduce the energy of movement, the longer the lateral inclines - meaning that the time needed to get to the top is extended. Our bodies make a balanced calculation to find the zig-zagging path up the hill that is able to account for both the time parameter and the energy parameter. The path is an emergent outcome of this calculation: the best solution for reaching the top while expending the minimum amount of resources (as a function of both time and energy). It is worth noting that this activity is still happening in an environment with a differential - the differential this time being the slope of the terrain - but this differential is not one that is being produced by a flow moving into the system (like the heat differential in Benard Rolls); it is instead that we have an agent trying to flow through a differential environment.


 

Bottom-up Agents

Complex Adaptive Systems are comprised of multiple, parallel agents, whose coordinated behaviors lead to emergent global outcomes.

CAS are composed of populations of discrete elements - be they water molecules, ants, neurons, etc. - that nonetheless behave as a group. At the group level, novel forms of global order arise strictly due to simple interactions occurring at the level of the elements. Accordingly, CAS are described as "Bottom-up": global order is generated from below rather than coordinated from above. That said, once global features have manifested they stabilize - spurring a recursive loop that alters the environment within which the elements operate and constrains subsequent system performance.


What might an Agent 'B'?

Complex systems are composed of populations of independent entities that nonetheless form a particular 'class' of entities sharing common features. Agents might be ants, or stocks, or websites. Furthermore, they might be Bikes, Barber shops, Beer glasses, or Benches (what I will refer to  below as the 'B' list). We can ask what an agent is but we could equally ask what an agent is not!

Defining an agent is not so much about focusing on a particular kind of entity, but instead about defining a particular kind of performance within a given system and within that system's context. Many elements of day-to-day life might be thought of as agents, but to do so, we need to first ask how agency is operationalized.


Example:

Imagine that I have a collection of 1000 bicycles that I wish to make available for rent across a city. Could I conceive of a self-organizing system where bikes are agents - where the best bike distributions and locations emerge, with bikes helping each other 'learn' where the best flow of consumers is? If a bike's 'destiny' is to be ridden as much as possible, and some rental locations are more likely to enable bikes to fulfill this destiny than others, how could the bikes distribute themselves so as to maximize fulfillment of their collective destiny?

What if I have 50 barber-shops in a town of 500 000 inhabitants - should the shops be placed in a row next to one another? Placed equidistant apart? Distributed in clusters of varying sizes and distances apart (maybe following power laws?). Might the barber shops be conceptualized as agents competing for flows of customers in a civic context, and trying to maximize gains while learning from their competitors?

And what about beer glasses: if I have a street festival where I want all beer glasses to wind up being recycled and not littering the ground, what mechanisms would I need to put into place in order to encourage the beer glasses to act as agents - ones that are more 'fit' if they find their way into recycling depots? How could I operationalize the beer glasses so that they co-opt their consumers to assist in ensuring that this occurs? What would a 'fit' beer glass be like in this case (hint: a high-priced deposit?).

Finally, who is to say where the best place is to put a park bench? If a bench is an agent, and 100 benches in a park are a system, could benches self-organize to position themselves where they are most 'fit'?

The examples above are somewhat fanciful but they are being used to illustrate a point: there is no inherent constraint on the kinds of entities we might position as agents within a complex system. Instead, we need to look at how we frame the system, and do so in ways where entities can be operationalized as agents.

Operational Characteristics:

The agents above can each move into more fit behavioral regimes provided that certain operational dynamics are in place: 

  • having a common {{fitness}} criterion shared amongst agents (with some performances being better than others);
  • having an ability to exchange {{information-theory}} with other agents, which helps direct and constrain how each agent behaves (getting to better performance faster);
  • having an ability to shift performance, or {{adaptive-processes}} (see also Requisite Variety);
  • operating in an environment where there is a meaningful difference available that drives behavior (see Driving Flows).


Thought Experiment:

Let's take just one of the examples above: the location of bikes (you can also find another example, concerning park benches, in the {{principles}} page text).

Let's begin by co-opting a number of parking spaces in a city as temporary bike rental stations. Bikes are affixed to a small rolling platform in a vacant parking stall that holds 4 locked bikes. These bike stations are then distributed at random around a neighborhood. Individuals subscribe to a service that allows them to use bikes in exchange for money or bike credits.

  • Let us assume that the ultimate 'destiny' of a bike is to be ridden. Then the frequency at which this destiny is manifested would be considered its measure of fitness. For purposes of this thought experiment let's assume that each bike can measure this fitness: it has a sensor that detects ridership.
  • Let us then assume that each bike station is equipped to receive signals from the bike stations in its vicinity, indicating if bikes at those stations are being borrowed or not. With this information a bike station can calibrate which of its nearest neighbors are most readily fulfilling their destiny of being utilized.
  • Let us then assume that the bike platforms are given a bit of 'smart' functionality - they are connected to an app, that those subscribing to the rental service have on their phone. If a bike station is under-performing in comparison to its neighbors, it will offer a credit to any user of the service who will hitch up a bike to the rolling bike station, and move it to the nearest location of higher use.  This gives the bike stations the ability to shift location, providing adaptive capacity.
  • Finally, let us assume that enough people are using the app, such that variations in use frequency provide enough data to mark trends or be useful. These usage flows then mark trends within the bike rental system, with certain bike station locations being popular, others not so. As people rent or do not rent bikes, a source of difference enters the system, with certain bikes receiving more or fewer flows of users.

It should be rather intuitive to imagine what would happen in this system. Some bike stations will capture more flows of people than others - the reasons for this might not be clear, and may vary from day to day depending on different conditions. The reasons do not necessarily matter. From the perspective of the bike stations (as the agents in the system) the reason why a particular location is better or worse is not important; what matters is that bikes that are underutilized will gradually readjust their position in the city so as to better capture the flows they crave. Over time, sites that have a high usage demand will achieve consolidations of bike stations, with each station adjusting its position based on information gathered from its nearest neighbors. This will continue until all stations are positioned in ways where they are all capturing an equal number of usage flows, with none able to move to a better location. A kind of system equilibrium has been reached. Other equilibrium states may also exist, and so it is helpful if bike stations occasionally abandon this stable state to randomly explore other potential, unoccupied sites that may in fact harbor unharnessed flows of bike ridership. It should be noted that the density of the emerging bike hubs can vary dramatically. There may be areas where 10 stations, 20 stations, or only 1 station is viable. The point is that the agents in the system can distribute themselves, over time, to service this differential need without any need for top-down control. Here we have an example of a kind of 'swarm' urbanism.
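A very rough sketch of this thought experiment is given below (the site demands, neighbourhood layout, and relocation rule are all invented for illustration, and sites are arranged on a simple ring): each station compares its own per-station usage with what nearby sites currently offer and relocates towards busier spots, so the population of stations drifts towards the high-demand locations with no central coordinator.

```python
import random

# Hypothetical 'swarm' bike-station sketch; every parameter here is an assumption.
random.seed(3)
N_SITES, N_STATIONS, STEPS = 40, 10, 200
demand = [random.random() for _ in range(N_SITES)]          # hidden attractiveness of each site
stations = [random.randrange(N_SITES) for _ in range(N_STATIONS)]

def usage(site):
    """Current rides per station at a site: the site's demand shared by the stations parked there."""
    crowd = stations.count(site)
    return demand[site] / crowd if crowd else demand[site]

for _ in range(STEPS):
    s = random.randrange(N_STATIONS)                 # a randomly chosen station reconsiders its position
    here = stations[s]
    neighbours = [(here + d) % N_SITES for d in (-2, -1, 1, 2)]   # nearby sites on the ring
    best = max(neighbours, key=usage)
    # Simplification: the comparison ignores the extra crowding the move itself would add.
    if usage(best) > usage(here):
        stations[s] = best

print("final station positions:", sorted(stations))
print("demand at those sites:", [round(demand[p], 2) for p in sorted(set(stations))])
```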

This example is not typical of those given in complex adaptive systems theory, but it helps illustrate how it is possible, at the most basic level, to conceptualize a system of complex unfolding using only the notions of Agents, {{fitness}}, {{adaptive-processes}}, {{driving-flows}} and {{information-theory}}. There are further nuances, but any of the systems listed above (the bicycles, barber shops, or beer glasses) could be made to function using the same basic strategies.


'Classic' Agents

The list of potential agentic entities offered above - the 'B' list - is somewhat odd. We begin with them so as to avoid limiting the scope of what may or may not be an agent. That said, this collection of potential agents is not part of what might be thought of as the 'canonical' Agent examples - what we might call the 'A' list - within complexity theory. Let us turn to these now:

Those drawn to the study of complex systems are often compelled to explore agent dynamics because of certain examples that demonstrate highly unexpected emergent aspects. These include 'the classics' (described elsewhere on this website) such as: emergent ant trails, coordinated by individual ants; emergent convection patterns, coordinated by water molecules in Benard/Rayleigh convection; and emergent higher thought processes, coordinated by individual neurons firing.

In each case, we see natural systems composed of a multitude of entities (agents) that, without any level of higher control, are able to work together to coalesce into something that has characteristics that go above and beyond the properties of the individual agents. But if we consider the operational characteristics at play, they are no different from the more counter-intuitive examples listed above. Take ants as an example. Each ant is an agent that has:

  • a common fitness criterion shared amongst agents (getting food);
  • the adaptive capacity to shift performance (searching a different place);
  • an ability to exchange information with other agents (deploying/detecting pheromones);
  • an environment where there is a meaningful difference that drives behavior (presence of food sources/flows).

Ant trails emerge as a result of ant interaction, but the agents in the system are not actively striving to achieve any predetermined 'global' structure or pattern: they are simply behaving in ways that involve an optimization of their own performance within a given context, with that context including the signals or information gleaned from other agents pursuing similar performance goals. Since all agents pursue identical goals, coordination amongst agents leads to a faster discovery of fit performance regimes. What is unexpected is that, taken as a collective, the coordinated regime has global, novel features. This is the case in ALL complex systems, regardless of the kinds of agents involved.

Finally, once emergent states appear, they constrain subsequent agent behavior, which then tends to replicate itself.  Useful here are {{Humberto-maturana-francisco-varela}}'s notion of autopoiesis as well as Hermann Haken's concept of Enslaved States. Global order or patterns (that emerge through random behaviors conditioned by feedback) tend to stabilize and self-maintain.

Modeling Agents:

While the agents that inspired interest in complexity operate in the real world, scientists quickly realized that computers provided a perfect medium with which to explore the kind of agent behaviors we see operating. Computers are ideal for exploring agent behavior since many 'real world' agents obey very simple rules or behavioral protocols, and because the emergence of complexity occurs as a step by step (iterative) process.  At each time step each agent takes stock of its context, and adjusts its next action or movement based on feedback from its last move and from the last moves of its neighbors.

Computers are an ideal format to mimic these processes since, with code, it is straightforward to replicate a vast population of agents and run simulations that enable each individual agent to adjust its strategy at every time step. Investigations into such 'automata' informed the research of early computer scientists, including such luminaries as {{josh-epstein-and-rob-aztell}}, {{Von-Neumann}}, {{stephen-wolfram}}, {{john-conway}} and others (for more on their contributions see also {{key-thinkers}} on the upper right).

In the most basic versions of these automata, agents are considered as cells on an infinite grid, and cell behavior can be either 'on' or 'off' depending on a rule set that uses neighboring cell states as the input source.

Conway's Game of Life: a classic cellular automaton

These early simulations employed Cellular Automata (CA), and later moved on to Agent-Based Models (ABM) which were able to create more heterogeneous collections of agents with more diverse rule sets. Both CA and ABM aimed to discover if patterns of global agent behaviors would emerge through interactions carried out over multiple iterations at the local level. These experiments successfully demonstrated how order does emerge through simple agent rules, and simulations have become, by far, the most common way of engaging with complexity sciences.
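The sketch below is a minimal Conway-style cellular automaton, written to make the basic loop concrete (it uses a small grid with wrap-around edges rather than the idealized infinite lattice, and the starting density is an arbitrary choice). Each cell is 'on' or 'off', and at every iteration its next state is computed purely from the states of its eight neighbours.

```python
import random

# Minimal Game-of-Life-style cellular automaton; grid size and density are illustrative.
random.seed(4)
SIZE, STEPS = 12, 5
grid = {(x, y): random.random() < 0.3 for x in range(SIZE) for y in range(SIZE)}

def live_neighbours(x, y):
    """Count live cells among the eight neighbours, wrapping around the grid edges."""
    return sum(grid[((x + dx) % SIZE, (y + dy) % SIZE)]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))

def show():
    for y in range(SIZE):
        print("".join("#" if grid[(x, y)] else "." for x in range(SIZE)))
    print()

show()
for _ in range(STEPS):
    # Conway's rules: a live cell survives with 2 or 3 live neighbours;
    # a dead cell becomes live with exactly 3 live neighbours.
    grid = {(x, y): live_neighbours(x, y) == 3 or (grid[(x, y)] and live_neighbours(x, y) == 2)
            for x in range(SIZE) for y in range(SIZE)}
show()
```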

While these models can be quite dramatic, they are just one tool for exploring the field and should not be confused with the field itself. Models are very good at helping us understand certain aspects of complexity, but less effective in helping us operationalize complexity dynamics in real-world settings. Further, while CA and ABM demonstrate how emergent, complex features can arise from simple rules, the rule sets involved are established by the programmer and do not evolve within the program.


Agent Learning

A further exploration of agents in CAS incorporates the ways in which bottom-up agents might independently evolve rules in response to feedback. Here, agents test various Rules/Schemata over the course of multiple iterations. Through this trial and error process, involving Time/Iterations, they are able to assess their success through Feedback and retain useful patterns that increase Fitness. This is at the root of machine learning, with strategies such as genetic algorithms mimicking evolutionary trial and error in light of a given task.

competing agents are more fit as they walk faster!

John Holland describes how agents, each independently exploring suitable schema, actions, or rules, can be viewed as adopting General Darwinian processes involving Adaptive processes to carry out 'search' algorithms. In order for this search to proceed in a viable manner, agents need to possess what {{Ross-Ashby}} dubs Requisite Variety: sufficient heterogeneity to test multiple scenarios or rule enactment strategies. Without this variety, little can occur. It follows that we should always examine the range of capacities agents have to respond to their context, and determine if that capacity is sufficient to deal with the flows and forces they are likely to encounter.

Further, we can speed up the discovery of 'fit' strategies if we have one of two things: more agents testing (parallel populations of agents) or more sequential iterations of tests. Finally, we benefit if improvements achieved by one agent can propagate (be reproduced), within the broader population of the general agents.
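As a bare-bones sketch of this kind of variation-selection-retention search (the target rule, population size, and mutation rate below are all invented for illustration), a simple genetic algorithm evolves a population of bit-string 'rules' towards a fitness target; more agents or more generations both speed up the discovery.

```python
import random

# Toy genetic algorithm; the 'rule' being searched for and all parameters are assumptions.
random.seed(7)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]                 # an arbitrary 'fit' rule to discover
POP, GENERATIONS, MUTATION = 30, 40, 0.05

def fitness(rule):
    return sum(a == b for a, b in zip(rule, TARGET))    # how many bits match the target

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]   # initial Variation

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)          # Selection: rank agents by fitness
    survivors = population[: POP // 2]                  # Retention: keep the better half
    children = []
    while len(survivors) + len(children) < POP:
        parent = random.choice(survivors)
        child = [1 - b if random.random() < MUTATION else b for b in parent]  # copy with mutation
        children.append(child)
    population = survivors + children

best = max(population, key=fitness)
print("best rule found:", best, "fitness:", fitness(best), "/", len(TARGET))
```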


 

Adaptive Capacity

Complex systems adjust behaviors in response to inputs. This allows them to achieve a better 'fit' within their context.

We are all familiar with the concept of adaptation as it relates to evolution, with Darwin outlining how species diversity is made possible by mutations that enhance a species' capacity to survive and thereby reproduce. Over time, mutations that are well-adapted to a given context will survive, and ill-adapted ones will perish. Through this simple process - repeated in parallel over multiple generations - species are generated that are supremely tuned to their environmental context. While originating in biological realms, a more 'general' Darwinism looks to processes outside this context to examine how similar mechanisms may be at play in a broad range of systems. Accordingly, ANY system - biological or not - that has the capacity for Variation, Selection, and Retention (VSR), is able to adapt and become more 'fit'.


Eye on the target - Identifying what is being adapted for:

All complex systems involve channeling flows in the most efficient way possible - achieving the maximum gain for the minimum effort - and 'discovering' this efficiency can be thought of as achieving a 'fit' behavior. When looking at a system's adaptive behavior, one therefore needs to first understand how fit regimes are operationalized, by considering:

  1. What constitutes a 'fit' outcome;
  2. How the system registers behaviors that move closer to this outcome (see Feedback and Stigmergy);
  3. The capacity of agents in the system to adjust their behaviors so as to better align with strategies moving closer to the 'fit' goal.

It is this third point, pertaining to the 'adaptive capacity' of agents, that we wish to examine in more depth.

Variation, Selection, Retention (VSR):

If we consider the example of ant trail formation, behaviors that lead to the discovery of food would be those that ants wish to select for as more 'fit'. Using the lens of Variation, Selection and Retention, the system unfolds as follows:

  1. A collection of agents (ants), seek food (environmental differential) following random trajectories (Variation).
  2. Ants that randomly stumble upon food leave a pheromone signal in the vicinity. This pheromone signal indicates to other ants that certain trajectories within their random search are more viable than others (Selection).
  3. Ants adjust their random trajectories according to the pheromone traces, reinforcing successful food pathways and broadcasting these to surrounding members of the colony (Retention).

What emerges from this adaptive process is an ant trail: a self-organizing phenomenon that has been steered by the adaptive dynamics of the system seeking to minimize the global system energy expended in finding food. What is important to note is that the adaptation occurs at the level of the entire group, or system. The colony as a whole coordinates its behavior to achieve overall fitness, with food availability (the source of fitness) being the differential input that drives the system. The ants help steer one another and, overall, the behavior of the colony is adaptive. Individual ants might still veer off track and deplete energy looking for food, but this is actually helpful in the long run - as it remains a useful strategy in cases where existing food sources become depleted. Transfer of information about successful strategies is critical to ensuring that more effective variants of behavior propagate throughout the colony.
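The toy model below sketches this logic with two candidate routes to a single food source (the pheromone amounts, evaporation rate, and route lengths are illustrative assumptions, not measured values): ants choose routes in proportion to the pheromone present, shorter trips reinforce their route at a higher effective rate, pheromone slowly fades, and the colony's traffic converges on the shorter path.

```python
import random

# Two-route pheromone sketch; all quantities are illustrative assumptions.
random.seed(5)
pheromone = {"short": 1.0, "long": 1.0}     # start with no preference between routes
LENGTH = {"short": 1, "long": 3}            # the long route costs three times as much per trip
EVAPORATION = 0.02

for _ in range(2000):
    total = pheromone["short"] + pheromone["long"]
    route = "short" if random.random() < pheromone["short"] / total else "long"  # Variation
    pheromone[route] += 1.0 / LENGTH[route]   # Selection: cheaper trips reinforce their route faster
    for r in pheromone:                       # Retention is imperfect: unused trails evaporate
        pheromone[r] *= (1 - EVAPORATION)

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"share of pheromone on the short route after 2000 trips: {share:.2f}")
```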

None of this is meant to imply that, if the ants follow this protocol, they will find the most abundant food source available. Complexity does not necessarily result in perfect emergent outcomes. What it does result in is outcomes that are 'satisficing' and that allocate system resources as effectively as possible within the constraints of limited knowledge. Further, the system can change over time, meaning that other, more optimum performance regimes may be discovered as time unfolds.

What is also noteworthy about this example is that it employs Darwinian processes of variation, selection and retention, but not by means of genetic mutation. Instead, the ants themselves, each with their own strategy, are operating as ongoing mutations of behavior, in terms of their individual random search trajectories. Unlike in natural selection, agents in the system are able to broadcast successful strategies: not through a reproduction of their genes, but through an environmental signal that solicits a reproduction of their actions.

Capacity to Change:

An agent's ability to vary its behavior, select for behaviors that bring it closer to a goal, and then retain (or reproduce), these behaviors, is what makes agents in a complex system 'adaptive'. If agents do not possess the capacity to change their outputs in response to environmental inputs, then no adaptive processes can occur.

While this might at first seem self-evident, this basic concept can often be overlooked. In particular, it is easy to think about a system composed of diverse components as being 'complex' without considering whether or not the elements within the system have some inherent ability to adjust in relation to this complex context.

Example:

Consider an airplane. It is a system comprised of a host of components, and together these components interact in ways that make flight possible. That said, each component is not imbued with the inherent ability to adjust its behavior in response to shifting environmental inputs. The range of behaviors available to the plane's components is fixed according to pre-determined design specifications. The machine components are not intended to learn how to fly better (adjusting how they operate) in response to feedback they receive over the course of every flight.

If we try to understand an airplane as a complex system, and identify 'flying better' (using less energy to go further) as our measure of fitness, then would it be meaningful to speak about the system adapting? If the agents in the plane's system are the individual components, are they capable of variation, selection, and retention? Even if we were to model system behavior from the top down, to test design variations in components, the system itself would not be 'self-organizing': without external tinkering nothing would happen.

'Seeking' fitness without volition:

Does it follow that inanimate objects are incapable of self-organization without top-down control? From the example of the airplane, we might think so, but in reality it depends on the nature of the system.

It is reasonably easy to understand adaptation within a system where the agents possess some form of volition. What is intriguing is that many complex systems move towards fit regimes, regardless of whether or not the agents of the system have any sort of 'agency' or awareness regarding what they do or do not do.

Example: Coordination of Metronomes:

In the video below, we see a group of metronomes gradually coordinating their behaviors so as to synchronize to a regular rhythm and direction of motion. While this is an emergent outcome, it is initially unclear how one might see this as a kind of 'adaptation'. But if we look to the principles of VSR, we see how this occurs. First we observe a series of agents (metronomes) displaying a high degree of variety in how they beat (in relation to one another). The system has a shared environmental context (the plank upon which the metronomes sit), which acts as a subtle means of signal transfer between the metronomes. The plank moves parallel to the direction of metronome motion, creating resistance or 'drag' in relation to the oscillation of the metronomes on its surface. Some metronome movements encounter more resistance in relation to this environment (the sliding plank), while others encounter less (a more efficient use of energy). These differentials in drag lead to ever so slight alterations in rhythm. Over time, these alterations bring all the metronomes into sync.

Watch the metronomes go into sync!

Considered as VSR, we observe the following:

  1. There is a Variation in the metronome movements, with certain oscillatory trajectories encountering more friction and resistance than others;
  2. The physics of these resistance forces creates a Selection mechanism, whereby each metronome alters its oscillatory patterns in response to how much resistance it encounters.
  3. As more metronomes enter into coordinated oscillating regimes, this in turn generates more resistance force being exerted on any outliers, gradually pushing them into sync. Once tuned to this synchronized behavior,  the system as a whole optimizes its energy expenditure, and the behavior persists (Retention).
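One standard mathematical caricature of this kind of shared-environment coupling is the Kuramoto model of coupled oscillators; the sketch below (the coupling strength, frequency spread, and step sizes are arbitrary illustrative choices) plays the role of the metronomes-on-a-plank system, with each oscillator's phase nudged by the average pull of the others until the group's coherence rises towards 1.

```python
import math
import random

# Kuramoto-style coupled oscillators as a stand-in for metronomes on a plank;
# all parameters are illustrative assumptions.
random.seed(2)
N, COUPLING, DT, STEPS = 10, 1.5, 0.05, 2000
freqs = [1.0 + random.uniform(-0.1, 0.1) for _ in range(N)]     # slightly different natural tempos
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def coherence(ph):
    """Order parameter r: 0 means scattered phases, 1 means perfect synchrony."""
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

print("coherence before:", round(coherence(phases), 2))
for _ in range(STEPS):
    new = []
    for i, p in enumerate(phases):
        pull = sum(math.sin(q - p) for q in phases) / N          # the shared 'plank' coupling
        new.append(p + DT * (freqs[i] + COUPLING * pull))
    phases = new
print("coherence after: ", round(coherence(phases), 2))
```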

Keep it to a minimum!:

The system adapts to the point where overall resistance to motion is minimized. The metronomes 'achieve' the most for the least effort: a kind of fitness within their context.

While the form of 'minimization' varies, all complex systems involve seeking out behaviors that conserve energy - where the system, as a whole,  processes the flows it is encountering using the least possible redundant energy. While this cannot always be perfectly achieved, it is this minimizing trajectory that helps steer the system dynamics.

Agent Options:

What is perhaps surprising in this example is the lack of volition on the part of the metronomes. They are not trying to get together as part of a harmonious consensus in a metronome universe of peace and unity. They are simply subject to a shared environment, where the behavior of any given metronome in the system has an impact on the behavior of all others. This is an interesting characteristic of all complex systems - they are in fact a system, where agents cannot operate in isolation. What is equally important is the fact that agents in the system have a behavior that can, in some way, be altered: a metronome moves, and this movement can be altered when affected by an external input (in this case friction and drag forces). We could imagine metronomes of a different design, where movement is timed precisely to a clock and where, once set, nothing can change how the metronome behaves. So for a complex system we need agents that have a certain degree of adaptive capacity - something about them that can change, or respond to, an environment. The change might be very subtle, but it is important to identify what kind of adaptive capacity each complex system contains, in order to better understand and steer its behavior.


 


 

Fields Galore!

This is a nice home page for this section, not sure what goes here.

11:11 - Urban Modeling
Related

217, 213, 56, 88, 72, 
26, 23, 24, 22, 

16:16 - Urban Informalities
Related

213, 66, 56, 88, 
23, 24, 22, 21, 

28:28 - Urban Datascapes
Related

218, 66, 73, 59, 72, 
24, 25, 22, 

17:17 - Tactical Urbanism
Related

218, 
25, 21, 

14:14 - Resilient Urbanism
Related

218, 59, 
26, 23, 22, 

19:19 - Relational Geography
Related

218, 93, 84, 75, 
26, 25, 

10:10 - Parametric Urbanism
Related

213, 75, 78, 
25, 22, 21, 

15:15 - Landscape Urbanism
Related

93, 56, 88, 
26, 25, 21, 

13:13 - Incremental Urbanism
Related

56, 59, 88, 
24, 21, 

12:12 - Evolutionary Geography
Related

218, 93, 73, 59, 88, 72, 
26, 24, 25, 22, 21, 

18:18 - Communicative Planning
Related

75, 73, 
24, 25, 22, 

20:20 - Assemblage Geography
Related

93, 84, 
26, 24, 25, 

 

Urban Modeling

Cellular Automata & Agent-Based Models offer city simulations whose behaviors we learn from. What are the strengths & weaknesses of this mode of engaging urban complexity?


There is a large body of research that employs computational techniques - in particular agent-based modeling (ABM) and cellular automata (CA) - to understand complex urban dynamics. This strategy looks at how rule-based systems yield emergent structures.


Creating computer models is one of the most common ways to integrate complexity ideas into many fields - so much so that this methodological approach is often confused with the domain of knowledge itself. This is largely the case in urban discourses, where the construction of simulation models - either agent-based or cellular automata - is perhaps the most frequently employed strategy to try to grapple with complexity (though other communicative and relational approaches in planning have recently been gaining increased traction). It is therefore important to understand how these models work, and what aspects of complexity they highlight.

Cellular Automata

Early investigations into the dynamics underlying complex systems came via computational models, which illustrated how simple program rules could produce unexpectedly rich (or complex) results. John Conway's Game of Life (from 1970) was amongst the first of these models, composed of computer cells on a two-dimensional lattice that could each be in an 'on' or 'off' mode. An initial random state launches the model, after which each cell updates its status depending on the state of directly neighboring cells (the model is described in detail under Bottom-up Agents). Conway was able to demonstrate that, despite the simplicity of the model rules, unexpected explosions of pattern and emergent orders were produced as the model proceeded through ongoing iterations.

At around the same time, economist Thomas Schelling developed his segregation model, using a cellular lattice to explore the amount of bias it would require for "neighborhoods" of cells to become increasingly segregated. Cities in the US, in particular, had been experiencing physical segregation by race, with the assumption being that such spatial divisions were the result of strong biases amongst residents. With his model, Schelling demonstrated that, in effect, total segregation could occur even when agent 'rules' were only slightly biased towards maintaining neighborhood homogeneity. While the model does not explain why spatial segregation occurs in real-world settings, it does shed light on the idea that strong racial preferences are not, by necessity, the only reason why spatial partitioning may occur.
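A compact sketch of the model follows (the grid size, vacancy rate, number of rounds, and tolerance threshold are illustrative settings, not Schelling's original ones): an agent is content if even a modest share of its neighbours matches its own type, discontented agents relocate to random empty cells, and the average share of like neighbours typically climbs well above the tolerance threshold.

```python
import random

# Schelling-style segregation sketch; all settings below are illustrative assumptions.
random.seed(11)
SIZE, TOLERANCE, ROUNDS = 30, 0.35, 60
cells = ["A"] * 380 + ["B"] * 380 + [None] * (SIZE * SIZE - 760)   # two groups plus empty cells
random.shuffle(cells)
grid = {(x, y): cells[x * SIZE + y] for x in range(SIZE) for y in range(SIZE)}

def like_share(pos):
    """Fraction of occupied neighbouring cells holding the same type as the cell at pos."""
    kind, (x, y) = grid[pos], pos
    nbrs = [grid[((x + dx) % SIZE, (y + dy) % SIZE)]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    occupied = [n for n in nbrs if n is not None]
    return sum(n == kind for n in occupied) / len(occupied) if occupied else 1.0

def average_similarity():
    shares = [like_share(p) for p, k in grid.items() if k is not None]
    return sum(shares) / len(shares)

print("average share of like neighbours before:", round(average_similarity(), 2))
for _ in range(ROUNDS):
    movers = [p for p, k in grid.items() if k is not None and like_share(p) < TOLERANCE]
    empties = [p for p, k in grid.items() if k is None]
    random.shuffle(movers)
    for p in movers:
        if not empties:
            break
        new_home = empties.pop(random.randrange(len(empties)))   # relocate to a random empty cell
        grid[new_home], grid[p] = grid[p], None
        empties.append(p)
print("average share of like neighbours after: ", round(average_similarity(), 2))
```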

Because of the implicitly spatial qualities of models like Conway's and Schelling's, both computer programmers and urban thinkers began to wonder if models might help explain the kinds of spatial and formal patterns seen in urban development. If so, then by testing different rule sets one might be able to predict how iterative, distributed (or bottom-up) decision-making ultimately affects city form.

This is a unique direction for planning, in that most urban strategies focus on generating broad, top-down master-plans, where the details are ultimately filled in at a lower level. Here, the strategy is inverted. Models place decision-making at the level of the individual cell in a lattice, and it is through interacting populations of these cells that some form of organization is achieved. Models were able to demonstrate that, depending on the nature of the interaction rules, the formal characteristics of this emergent order can differ dramatically.

Ultimately, by running multiple models, and observing what kinds of rule-sets generate particular, recurrent kinds of pattern and form, modelers are able to speculate on what policy-decisions around planning are most likely to achieve forms deemed 'desirable' (on the assumption that the models are capturing the most salient feature of the real world conditions, which is not always the easiest assumption to make!).

Agent Based Models

Cellular Automata simulations are formulated within a lattice-type framework, but clearly this has its limits. The assumption of the model is that populations of cells within the lattice have inter-changeable rule sets, and that emergent features are derived from interactions amongst these identical populations. Clearly the range of players within a real-world urban context is quite variable, and populations of uniformly behaving cells do not capture this variance. Accordingly, with the growth in computing power, a new kind of "agent-based model" was able to liberate cells (or agents) from their lattices, as well as enabling programmers to provide differing rule-sets for multiple, differing agents.

In such models, we might have two sets of agents (predator/prey), or agents moving in a non-static environment (flocking birds/schools of fish). Simple rule sets are then tested and calibrated to see if behaviors emerge within the models that emulate real-world observations. These models then demonstrate how different populations of actors or 'agents' with differing goals and rule sets interact.

Models that are straightforward to code (NetLogo is a good example, and can be deployed either as a CA or an agent-based model) showcase how different populations of agents within a model interact, producing unexpected results. Rules of interaction can be easily varied, according to a limited number of defined parameters.

That said, depending on how variables are calibrated, very different kinds of global behaviors or patterns emerge.

Urban Applications:

All of this is of great interest to urban computational geographers, who attempt to employ computer models as stand-ins for real-world situations. From an urban standpoint, an agent might be a resident, a business owner, a shop-keeper, etc. Depending on the rules for growth, purchase pricing, development restrictions, or formal (physical) attributes, these agents can be programmed to interact upon an urban field, with multiple urban simulations (that use the same rule sets) serving to probe the 'field of possibilities' to see if any regularities emerge across different scenarios or iterations. If such patterns are observed, then the rules can be altered - in an attempt to derive which rule characteristics are the most salient in terms of generating either favorable or unfavorable spatial conditions (again, with the proviso that the interpretation of 'favorability' might well be contested).

Such models, for example, might attempt to show the impact of a new roadway on traffic patterns, with various rules around time, destination, starting position, etc. By running various tests of road locations, a modeler might attempt to determine the 'best' location for a new road - with the 'fitness' of this selection tying in to pre-determined policy parameters, such as land costs associated with location, reduction of congestion/travel times, or other factors. The promise of these models is very powerful: to simulate real-world conditions within a computer and then build multiple test 'worlds' prior to real-life implementation. This allows modelers to minimize the policy risk of unknown consequences, since such consequences can be teased out in simulation first.
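
The sketch below is not drawn from any specific planning model; it simply illustrates, with invented numbers, how a modeler might rank alternative test 'worlds' (here, hypothetical road alignments) by a fitness score weighted according to policy parameters such as land cost and travel time.

```python
# Hypothetical candidate road alignments with invented simulation outputs.
candidate_roads = {
    "alignment_north":   {"land_cost": 12.0, "avg_travel_time": 23.5},
    "alignment_central": {"land_cost": 20.0, "avg_travel_time": 18.0},
    "alignment_south":   {"land_cost": 9.0,  "avg_travel_time": 27.0},
}

# Policy weights (also invented): both land cost and travel time are penalized.
WEIGHTS = {"land_cost": -1.0, "avg_travel_time": -2.0}

def fitness(metrics):
    """Collapse simulated metrics into a single policy-weighted score (higher is better)."""
    return sum(WEIGHTS[key] * value for key, value in metrics.items())

# Rank the test 'worlds' by fitness before committing to a real-world intervention.
for name, metrics in sorted(candidate_roads.items(), key=lambda kv: fitness(kv[1]), reverse=True):
    print(f"{name}: fitness = {fitness(metrics):.1f}")
```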


Inherent Risks

That said, in practice there is always the concern of what models do not include: are the assumptions of the model in fact in alignment with the real world? To alleviate this, modelers attempt to calibrate their models to real-world conditions by using data sets wherever possible, but they remain limited by which data types are available to them. Furthermore, the fact that a given data-set is available for use and calibration purposes does not necessarily mean that the features the data captures are in fact related to the most salient indicators or features of the real-world system.

Models can often be seen as 'objective' or 'scientific', since once the code has been written, the models provide reliable, quantitative results. But this does not mean that the internal consistency of the model corresponds to the real-world conditions being modeled. The model is still subject to the biases of the coding, the decisions of the modeler, and ideas around what to include and what to disregard as unimportant.

In an effort to include more and more potential factors (and again, with rising computing power), agent-based models have become increasingly sophisticated, integrating additional real-world conditions. However, as the models grow to contain more conditions, actors, and rules, their relationship to complex adaptive systems perspectives has become increasingly tenuous. Scientists originally interested in the dynamics of complex systems were struck by the fact that simple systems with simple rules could generate complex orders. It should not, however, be surprising that complex models, with increasingly complex rule sets, can generate complex orders; the effort going into the creation of such models, their calibration, and their interpretation (in terms of how they guide policy) seems to have moved increasingly far away from the underpinnings of their inspiration.

What seems to have been preserved from complexity theory - rather than the simplicity of complex systems dynamics - are three ideas. The first is "bottom-up" rather than top-down logic, whereby the order of the system emerges without need for top-down control. The second is "emergence": interacting agents within the model can generate novel global patterns or behaviors that have not been explicitly programmed into the system. Finally, at the individual agent level, the rules can still retain a simplicity.

While many individual researchers and research clusters investigate urban form through modeling, it is worth making special note of CASA - the Centre for Advanced Spatial Analysis at the Bartlett in London - a group led by Professor Mike Batty.

Model Attributes: Fractals and Power Laws

Of interest to urban modelers is not just the emergent patterns found in simulations, but also the ways in which these patterns correspond to features associated with complex systems. For example, many models display {{fractals-1}} qualities. The illustration below (taken from an article by Mike Batty) shows variants of how CA rules generate settlement decisions, with fractal patterns emerging in each case. Different initial conditions/constraints yield different kinds of fractal behavior (except in starting condition B).

Example of Emergent Fractal spatial characteristics, 'A digital breeder for designing cities' (2009)

Similarly, models often exhibit {{power-laws}} in their emergent characteristics - whether this be in factors such as the population distributions of cities in a model, or the distributions of various physical attributes within a given city. For example, an analysis of internal city road networks might reveal that road use frequency in a given city follows a power-law distribution; another analysis might reveal that cities within a given country can be ordered by size, and that the populations of these cities follow a power-law distribution (this characteristic seems to hold for cities that together form part of a relatively interdependent network - for example the grouping of all cities in the USA, or in France, but not groupings of all cities in the world, suggesting that these are not part of the same system).

Example of power-law distribution of city populations in Nigeria, which closely follow Zipf's law: Image from the Scientific Reports article "There is More than a Power Law in Zipf" by Cristelli, Batty and Pietronero (2012)

Many academic papers from the urban modeling world stress these attributes, which are not planned for and which are often characterized as being the 'fingerprint of complexity'.
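
As a rough illustration of how such a 'fingerprint' might be checked, the sketch below fits a rank-size (Zipf-style) relationship to a set of synthetic city populations; the numbers are invented stand-ins, chosen only to show the straight-line signature that a power law produces in log-log space.

```python
import numpy as np

# Synthetic city populations (invented), sorted from largest to smallest.
populations = np.array([9_000_000, 4_400_000, 3_100_000, 2_300_000, 1_900_000,
                        1_500_000, 1_300_000, 1_150_000, 1_000_000, 900_000])
ranks = np.arange(1, len(populations) + 1)

# Zipf's law predicts population ~ rank^(-alpha), with alpha close to 1 -
# i.e. a straight line of slope -alpha when both axes are logged.
slope, intercept = np.polyfit(np.log(ranks), np.log(populations), 1)
print(f"estimated Zipf exponent: {-slope:.2f}")
```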


Model Dynamics: Tipping Points & Contingency

Alongside these observed attributes of models - power laws and fractals - modelers are also interested in how models unfold over time. One of the interesting aspects of models is that, while the overall characteristics of emergent features might be similar across different models, the specificity of these characteristics will vary.

For example, a model might wish to consider the locational choices of individuals within a region, including populations of agents representing such categories as 'job opportunities', 'job seekers', and 'land rent rates'. In such a scenario, what begins as a neutral lattice of agent populations will ultimately begin to partition and differentiate, with varying areas of population intensity (cities, towns) emerging. The size of these various population centers might then follow a power-law. If we repeat the simulation with the same rules in place, we would expect to see similar power-law population patterns emerge, but the specificity of exactly where these centers are located is contingent - varying from simulation to simulation.

This raises the question of the true nature of cities and population dynamics: for example, the fact that Chicago is a larger urban hub than St. Louis might be taken as a given - the result of some 'natural' advantage that St. Louis does not have. But model simulations might suggest otherwise - that the emergence of Chicago as the major midwestern hub is a contingent, emergent phenomenon: and that history could have played out differently.

Models therefore allow geographers to explore alternative histories, and to consider how what might seem like a 'natural' outcome, seen as part of a clear causal chain, is in fact a much more tenuous and contingent phenomenon. Had the rules played out just a little differently, from a slightly different starting point, a completely different result might have ensued. Here, we are left with the realization that {{history}}, and that {{contingency}} plays a key role in the make-up of our lived world.

Another way this can be thought of is through the idea of Tipping Points: whether Chicago or St. Louis became the major urban center was the result of a small variable that pushed the system towards one regime, even though another, completely different regime was equally viable.

Tipping Points are discussed elsewhere on this site, but it is important to state that they can be thought of in two different ways. The first is this idea of a minor fluctuation that launches a given system along one particular path versus another, due to reinforcing feedback. The second looks at how an incremental increase in the stress or input to a system, once it moves beyond a certain threshold, can push the system into an entirely new form of behavior.

This second idea becomes important in modeling the amount of stress or input a given urban system can tolerate (or absorb) before one behavioral regime shifts to another. For example, incrementally rising fuel prices might reach a point where people opt to take public transit. Or a certain critical mass of successful business ventures might eventually result in a new neighborhood hub, at which point rents increase substantially. What is interesting about these points is that the shift is often abrupt, as people recalibrate their expectations and behaviors around a new set of parameters that have exceeded a particular threshold. Models can display these abrupt shifts, or Phase Transitions, where certain patterns disappear only to be replaced by others.

A sketch outlining some of the ideas and individuals associated with urban modeling



Back to {{urbanism}}

Back to {{complexity}}


 

Urban Informalities

Many cities around the world self-build without top-down control. What do these processes have in common with complexity?

Governing Features ↑

Cities around the world are growing without the capacity for top-down control. Informal urbanism is an example of bottom-up processes that shape the city. Can these processes be harnessed in ways that make them more effective and productive?


Self-Built Settlements

Across the globe there are many areas where urban planning plays only the most minimal of roles. Instead, people themselves are responsible for creating their own homes, and the aggregate actions of these individuals result in what are known as 'informal settlements' or 'urban informalities'. These are in contrast to the 'planned' areas of housing and neighborhoods in cities that are controlled from the top down. For a long time, such settlements were overlooked or pushed to the sidelines, considered to be chaotic and disorderly. They were characterized as 'slums' in need of clean up or retrofitting.

Only over time have planners begun to recognize that such informalities may offer valuable lessons: that their bottom-up organization results in unexpected order, and that robust patterns emerge despite the seeming lack of coordination between individuals in these settlements. Urban thinkers interested in complexity have begun to look at these settlements for signs of order, efficiency, and resilience, and to try to understand how coordinated patterns emerge over time, through iterative modifications.

As part of this, thinkers have looked to older settlement patterns that yielded emergent order: settlements that pre-date controlled planning but are characterized by a kind of organic 'fit' between the environment and its settlers. An early contribution to this effort, a book called 'Architecture without Architects' (1972) by Bernard Rudofsky, did not reference complexity explicitly,  but did note how harmonious patterns emerge within such settlements despite the fact that there is no central control.

This area of research can therefore be divided into two parts: urban thinkers who aim to learn from traditional settlements - built slowly and incrementally over generations, and achieving harmonious, coherent features - and those interested in how much faster-paced settlements - built in the face of population shifts that have drawn people en masse into cities - nonetheless display emergent structure.

Finally, a number of researchers have attempted to draw from both these areas to see how new planning policies might apply 'lessons learnt' from these examples of bottom-up settlements, in order to infuse more vitality - but also autonomy - into new developments.

Rule-Based Settlements

Today, urban development is typically regulated by various planning rules and codes, which set limits and constraints around what can and cannot happen: areas of limited function (zoning), limitations on built form (building set-backs, height restrictions, etc.), mandatory ancillary requirements (parking spaces per dwelling unit), and much more.

One key characteristic of these constraints and limits is that they are determined by planners and then 'set' for a particular area or building type. Rules are imposed from the planner's office and do not vary to accommodate emerging conditions on the ground.

By contrast, much older rules came in another form: relational rules - codes of building behavior that were much more context dependent. Effectively, what could be built hinged, in part, on what had already been built around you. This local, unfolding history steered what was built - what the 'next step' was in terms of urban growth. Each construction, in turn, placed constraints on what could happen next.

If this sounds familiar it should, as it echoes, in many ways, the manner in which cellular automata models unfold over time. There is a rule set, but it is a rule-set that is deployed in a relational context. Unlike in master zoning plans, there are no 'rules' stating that if a cell is located in a specific position on the lattice it needs to observe certain behaviors associated with that square. Instead, cell behaviors are constrained only by the emerging neighboring context, which is never set or pre-determined.

Example: if we look at this image of a Greek village, we can note that the street character is unified and holistic, despite the fact that there are many individual properties. In his book 'Mediterranean Urbanism', {{Besim-Hakim}} discusses this unity in terms of a series of urban 'rules' that constrain what neighbors can and cannot do (or their {{degrees-of-freedom}}).

What is noteworthy in this study is that, unlike in contemporary planning, the nature of these rules is contextual. A rule might pertain to where a door or window can be placed, but only insofar as this has an impact on doors and windows pre-existing in the neighboring context. In this way, building specificity proceeds iteratively. These locally codified {{rule-based}} constraints are then supplemented with tacit rules around the means of construction. By using local building methods and materials, ones proven successful over countless generations, each individual builder constrains their material and construction choices in accordance with local practices. For most of human history there was no need to make such rules explicit, as construction technologies were quite regional. As a result, construction practices can be said to have been tested over time, and thereby 'evolved' to produce a coherent fit within their context.

In a similar vein, Mustafa {{ben-hamouche}} analyzes the emergence of Muslim cities. He states that urban structure is the result of a number of tacit rules that, while not necessarily codified, provided a general normative understanding around the ethos of construction. In addition to the kinds of relational rules explored by Hakim, Hamouche points to how the nature of inheritance practices served to divide building sites. Alongside this, Islamic law gave those holding neighboring properties a kind of 'right of first refusal' should adjacent property become available. This resulted in an ongoing process of both disaggregation (inheritance divisions) and aggregation (adjacent property fusions). Iterated over each passing generation, these dynamics resulted in certain global morphological characteristics that seem to exhibit {{fractals-1}} in structure.

The resulting geometries are complex, particularly since subdivided properties needed to maintain functionality - with the need for additional arrays of lanes and access points. Finally, due to the limits on space, adjacent owners often became intertwined in various kinds of complex property infringement agreements - for example, one owner offering access to a rooftop, with the other offering access through their garden to the first's entry. In this way, singular properties became intertwined in a variety of ways, resulting in a more organic, holistic spatial organization.

Here, the city gains structure from the bottom-up actions of individuals, taking specific iterative steps that give form to their dwellings - all with reference to how these steps ultimately impact their neighbors. These localized, incremental actions are therefore not entirely independent, but rather locally constrained in such a way that, over time, a collective, coherent urban form could emerge. These cities gain long-term adaptive fitness due to iterative adjustments made over time, allowing them to take on a complex natural order responding to the needs of their inhabitants.

Informal Settlements

In addition to these traditional settlements, today we can point to innumerable regions characterized by unplanned, informal settlements. The rural-to-urban migration trend has long since passed the threshold where more people live in cities than in the countryside, and housing cannot keep pace. Accordingly, people are forced to build their own houses in an effort to settle in areas where they can gain access to employment opportunities. These settlements are seen as problematic, due to a host of issues including lack of sanitation, safety concerns, infrastructural and transport issues, etc.

That said, there are many ways in which we can learn from informalities. While the characteristics of urban informalities vary, many of them have been quite successful in achieving vibrant, livable communities. Furthermore, these settlements are often the source of a great deal of civic creativity and ingenuity. While there is always the risk of romanticizing these locales, for those interested in bottom-up self-organization they would seem to offer a prime case-study for how effective solutions can be achieved without need for top-down control.

The character of these settlements changes incrementally in two key ways:  morphologically and materially. Initially, a dwelling will be built using the bare minimum size and construction required in order to satisfy the need for shelter from the elements. Construction is speedy and may rely on assistance from other family or community members. Once a given zone of habitation has been carved out, two modifications will tend to occur: the material quality will be improved/replaced as resources become available, and/or extensions may be added. Living spaces may also be extended to incorporate outdoor surroundings, which may include the appropriation of air space (balconies) or rooftops. Over time, as primary needs of housing are met, an informal settlement will begin to see other forms of basic functions crop up: including shops, repair, or other service infrastructures. 

The quality of informal settlements is often contingent upon whether or not occupants feel secure in their land tenure. In Turkey, for example, where land tenure is relatively secure for those who have settled informally (due to particular aspects of Ottoman Law), the processes described above (incremental expansion, alongside material replacement and gradual functional support services) mean that many environments that appear to have been planned parts of the city are in fact examples of robust, evolved informalities.

In addition to the physical characteristics of these matured informalities, they also often develop their own internal social and governance structures, which help ensure safety, resolve disputes, and relay knowledge. Within a settlement, networks of individuals develop who assist others in navigating through uncertain situations, with knowledge and experience relayed throughout the group. Thus, in addition to the hard, material infrastructure of the physical settlement itself, there are less tangible, but equally important {{network-topology}} of community that develop. When these settlements are intervened upon by outside actors - 'cut down' or razed to the ground in order to make way for more progressive, controlled, and top-down housing developments - this accretion of knowledge and organization is lost. Areas that are developing towards these self-organized structures are stripped of the opportunity to go through the processes of incremental succession that can lead to quite successful communities.

Informalities of this nature are studied by many researchers, including Hesam Kamalipour, {{Kim-Dovey}}, and {{Juval-Portugali}}. Each draws links between informalities and the dynamics of complex adaptive systems.


Learning From Informalities: Urban Experiments in Self-Organization 

Much of the research on informalities centers around efforts to better understand and steward their functioning (rather than simply destroying and replacing them). That said, planners working within the more normative development context have begun to ask whether it is possible to apply this kind of rule-based, incremental, and context-dependent approach to planning in European or North American contexts.

There is perhaps no better example of this than the case of Almere Oosterwold, a project designed by the architecture and urban design group MVRDV in the Netherlands. The project employs a series of conditional rules that allow individuals to purchase plots, and then constrains how these plots are developed by reference to a number of rules that must be preserved for the development as a whole. At the same time, certain characteristics of each plot development hinge on the site conditions of surrounding neighboring plots, reducing the {{degrees-of-freedom}} available for subsequent development.

Individuals are responsible for the provision of a number of personal and site infrastructures, and are otherwise left to their own devices in terms of determining how, precisely, to go about developing their own site. The project is an interesting example of bottom-up self-organization in planning that incorporates both rule-based thinking and bottom-up agents. Furthermore, the project has no pre-determined end-vision. Instead, depending on the nature of the non-linear process of land acquisition and development, a whole range of outcomes may be possible. Rather than being prescribed in advance by a vision or master-plan, the intent is for the settlement pattern to be one characterized by {{emergence}} over time.

Back to {{urbanism}}

Back to {{complexity}}



 

Urban Datascapes

Increasingly, data is guiding how cities are built and managed. 'Datascapes' are derived from our actions, but can also steer them. How do humans and data interact in complex ways?

Governing Features ↑

More and more, the proliferation of data is leading to new opportunities in how we inhabit space. How might a data-steered environment operate as a complex system?


In the long history of urbanization, infrastructural elements have been critical in defining the nature of settlement. Be it the river-routes that formed trade channels constraining settlements, the rail-lines defining where frontier towns would be situated, or the freeways marking a shift from urbanization to sub-urbanization, different infrastructural regimes have played a key role in determining where and how we live. Further infrastructural layers made new modes of life possible: the power-grid shifted daily rhythms so as to extend the workday into the night hours; telecommunication lines enabled physically distant transactions to occur with ease; highway and sewage infrastructures helped spur massive suburban expansion. These infrastructures - carrying people, goods, and ultimately ideas - have formed the skeletal framework upon which lifestyles and livelihoods are anchored.

As we move into an age increasingly mediated by digital infrastructures and the flows they channel, we ask the question:  what kind of worlds will these new regimes make possible, and how will these be steered to ensure ‘fit’ urban practices? What does ‘fit’ even mean within this context? Whether through driverless cars, the internet of things, or digitally enabled access economies, cities are poised to afford new kinds of behaviors and lifestyle options.

From Bell Curves to Power Laws

To date, individuals have been expected to live their civic lives in ways that cater largely to the average needs of the population, rather than to particular, exceptional requirements. Cities meet standards. This, despite the fact that needs differ, and may differ both across individuals and for the same individual across time. Nonetheless, we tend to relegate our urban systems to supporting a narrow range of options that remain relatively fixed. Historically, this has made sense, because individuated needs that shift or differ from norms are too variable and have, until now, been difficult if not impossible to track and accommodate.

While norms remain important (and, if assumed to be governed by a power-law distribution, would align with the small number of urban offerings (20%) that meet the greatest proportion of urban needs (80%)), this leaves the long tail of more particular and finely tuned needs - served by the remaining 80% of possible offerings - largely unharnessed.

Chris Anderson (2004) describes this full breadth of differential offerings - the non-impactful 80% - as 'the long tail': the huge scope of ongoing (but small) demand that is not part of the "fat head" of the power-law distribution. Anderson argues that highly tuned niche offerings in this long tail are viable but, until now, have not been fully tapped due to the difficulties in pinpointing where and when they exist.

Today, new information technologies are changing all this, providing detailed access to the long tail of highly tuned offerings that may appeal only to the very few or for a very brief time, but would nonetheless be viable if there were a way to match needs to offerings. Anderson writes that, ‘many of our assumptions about popular taste are actually artifacts of poor supply-and-demand matching — a market response to inefficient distribution’.  Mass supply of standard urban environments or infrastructures may appeal to the norm but, in the end, no one is actually getting precisely what they want, when they want it. Instead, they are getting what the market has the capacity to supply with its coarse information availability.
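
A small numerical sketch of this 'fat head / long tail' intuition is given below; the number of offerings and the power-law exponent are arbitrary assumptions, used only to show that a small head of offerings can capture most demand while the aggregated tail still represents a substantial, and now reachable, share.

```python
import numpy as np

# 1,000 hypothetical offerings whose demand falls off as a power law of rank.
n_offerings = 1000
ranks = np.arange(1, n_offerings + 1)
demand = ranks ** -1.0            # Zipf-like demand per offering (exponent is illustrative)
share = demand / demand.sum()

head = share[:n_offerings // 5].sum()   # the 'fat head': top 20% of offerings
tail = share[n_offerings // 5:].sum()   # the 'long tail': remaining 80% of offerings
print(f"head captures {head:.0%} of demand; tail captures {tail:.0%}")
# With this exponent the head captures roughly 80%, while the tail's remaining share
# is spread over a vast number of niches that only become reachable with better matching.
```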

Furthermore, they are getting what would seem to be viable given notions of "economy of scale". But these perspectives can shift when information coordination becomes more efficient: instead of economies of scale, we can begin to activate access economies, which enable the pooling of diverse resources that can be accessed by individuals on an as-needed basis. Economies of Scale suggest Mass Transit Systems; Access Economies suggest Uber. One is fine-tuned to individual needs, the other is not.

Fine Tuning: An Example

Consider the rise of Airbnb. Big hotel chains are based on a model that offers accommodations appealing to the widest possible demographics within a certain price point. Accordingly, when making comparisons within a given price category, rooms offered by large chains appear generic and interchangeable. Airbnb changed this (and dramatically altered the accommodation industry) by providing a platform able to match highly specified needs with highly specified offerings. If I am looking for a vegan and pet-friendly one-bedroom apartment with a bicycle in the 16th arrondissement in Paris, I am now able to identify this niche with surprising speed and accuracy. The capacity for Airbnb to offer highly specific information, tuned to individual preferences, that is also deemed reliable (because of reviews), allows individuals to stay in accommodation tailored to their personal requirements rather than generic ones.

Airbnb's success is based, in part, on how it is able to illuminate this broad array of atypical and variable niches – the long tail. This long tail shifts. Accordingly, when I travel I may wish to stay in the normative Holiday Inn 50% of the time, a quaint bed and breakfast 49% of the time, and a vegan glamping yurt only 1% of the time. Until now, it has been very difficult to enact the behaviors desired only 1% of the time. But these niches, if made visible and accessible, are in fact viable.

Today's data technologies now illuminate these.

From the Standard to the Particular

Airbnb is a classic example of how information technologies are making previously invisible urban assets more tangible and accessible for people. But such technologies are also changing how we perceive the urban environments around us. If hotel locations could previously be mapped and located according to their proximity to normative assets (for example major highway interchanges, major business centers, or major entertainment facilities), then today's data of occupied Airbnb sites might reveal a host of other locational preferences - ones that are irrelevant at the macro-scale, but of interest to individuals at the micro-scale. We can imagine a new kind of mapping of these urban niches as having a more nuanced and variegated quality - one capturing and relaying multiple kinds of urban flows and revealing latent flows not previously channeled.

Consider a host of other urban assets: when do people use particular roads, or trains, or bike routes? What routes are the fastest at a given hour of the day? Or perhaps speed is not important - what routes then are the quietest? Or the prettiest?

Or, consider the new potentials of the Access Economy. Here, it becomes less important that I have constant, physical possession of an urban asset (a car, for example), and more important that I have easy, on-demand, and customized access to this asset (any make of car I want in a given instant; any video I want to watch on Netflix). The Access Economy does not mean that all cars (from a car-sharing service) or all videos (from a streaming service) will be accessed in identical ways: certain cars and videos will be part of the fat head of the power law. But the long tail is now on offer as well.

If previous city planning strategies only had the power to attune to normative needs (the fastest road), today we can construct civic Datascapes tuned to individuated desires. In a sense, data allows us to increase the city's Degrees of Freedom. Thus, if a standardized bus route was, at one point, the most effective way to transport people along "common" routes from A to B, then Uber offers a way for individuals to construct their own specified routes from E to Z. We can think of this shift as being one that moves us from mass-standardization to mass-customization, all of which is discovered and made tangible through individual data: our preferences when we call an Uber, or stay in an AirBnB.  At the same time data-scapes emerge on the other side of this: pleasant bike routes that are crowd-sourced and then promoted; quirky accommodation options rise to star status; pop-up events are made visible through social media posts.

This is a different kind of city: one viewed primarily through intensities of data, which can be curated so as to be viewed and filtered according to individual needs. Accordingly, my teenage daughter's view of the city is informed and highlighted by pathways, infrastructures and gathering places, all of which constitute data points that are most salient to her; my tech colleague's perspective of the city will have its own matrix of data points. Neither will ride the same bus, nor stay in the same hotel, nor gather in the same meet-up spots. The "central square" will no longer be centralized. But there will be niches of localized interests and intensities that emerge, over time.

Data-scapes:

This is what we mean when we introduce the idea of "data-scapes". The term is used here to capture a range of interests that are still in nascent form - not yet emerged as a clear line of urban enquiry - but that are "in the air" in various ways. Some of the Smart City discourses touch upon it, but the emphasis there is more on big-data collection for optimization. Speculations around the Internet of Things relate to this area, as do investigations around the Access Economy.

What binds these research themes is a common awareness that information is now able to help steer how we experience the city, with material conditions being supplemented by informational conditions that alter the ways in which we engage with the material world. Apps on cell phones become the tools we use to navigate these scapes, with the city no longer seen primarily as a fixed pattern, but rather as something that can be activated and drawn from in unique ways.

Complexity How?

Bottom-up:

One of the ways in which these dynamics of civic activation and appropriation differ from current models is that the ways in which common needs or goods come to the forefront need no longer be driven from the top down. There are far greater opportunities for special niches to emerge from the collective actions of Bottom-up Agents, with novel and surprising features gaining prominence. In a civic data-scape, a particular club might gain prominence on social media on a particular evening - going 'viral' in the same way that a cat video might - and thereby gaining prominence in the shared Datascape of club-goers.

Contingency and Non-Linearity:

We see as well from the club example that some of the dynamics that generate points of prominence in data-scapes may in fact be caused by initial random fluctuations that gradually self-perpetuate (as is seen in systems phenomena governed by growth and Preferential Attachment). For example, in the data-scape of accommodation or restaurants, small changes in initial conditions may have a disproportionate impact on system performance: with certain sites gaining prominence in the Datascape even though they are not inherently superior to others.
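
A toy sketch of this 'growth plus preferential attachment' dynamic is given below; the venue names, visitor count, and random seed are all invented. Starting from identical conditions, whichever venue happens to pull ahead early attracts a disproportionate share of later visits.

```python
import random

# Four hypothetical venues, all starting with a single 'visit' so none is privileged.
venues = {"club_a": 1, "club_b": 1, "club_c": 1, "club_d": 1}

random.seed(3)
for _ in range(10_000):
    # Each new visitor picks a venue with probability proportional to its visits so far.
    pick = random.choices(list(venues), weights=list(venues.values()))[0]
    venues[pick] += 1

# Early random fluctuations self-reinforce: one or two venues typically end up
# dominating the datascape, despite no inherent superiority.
print(sorted(venues.items(), key=lambda kv: -kv[1]))
```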

Driven by Flows

We often think of civic form as coming first - that we put in a road and then the road directs flows. Traffic engineers might look at a city plan and make decisions about location not because of existing flows, but instead because of existing cheap real-estate upon which to build a corridor. Datascapes flip this relationship, by first determining flows and then allowing these Driving Flows to direct civic infrastructure. The simplest example of this is comparing 20 Uber passengers with 20 bus passengers. The bus forces people to conform to its pre-determined course of navigation, whereas the flows of Ubers are instead driven by their passengers' desires. What is of interest is that, once this relationship is flipped, we may observe new patterns of flows that are consistent and coherent, but previously invisible. This is also why the phrase 'data-scape' is invoked: what emerges in tracing the pathways of 1000 Uber rides (in contrast to 1000 bus rides) is a new kind of mapping of cities not evident before.

Thought Experiment:

For more insights into how IoT technologies might combine with complexity principles to reveal data-scapes of fit urban conditions, check out the "Urban Lemna" student project in the InDepth "Resources" tab to the right.

Sections of this text were extracted and modified from an earlier paper by S Wohl and R Revariah: "Fluid Urbanism: How Information Steered Architecture Might Reshape the Dynamics of Civic Dwelling", published 2018 in The Plan Journal. See also "Sensing the City: Legibility in the Context of Mediated Spatial Terrains", published in 2018 in Space and Culture.


 

Tactical Urbanism

Tactical interventions are light, quick and cheap - but if deployed using a complexity lens, could they be a generative learning tool that helps make our cities more fit?

Governing Features ↑

Tactical Urbanism is a branch of urban thinking that tries to understand the role of grassroots, bottom-up initiatives in creating meaningful urban space. While not associating itself directly with complexity theory, many of the tools it employs - particularly its way of 'learning by doing' - tie in with adaptive and emergent concepts from complexity.


Tactical Urbanism is an approach to urban intervention which removes the need for prediction: rather than attempting to forecast what might work in a given environment, tactical strategies instead simply enact various small, short-term interventions. Examples might include: putting temporary barricades up on a street to allow for a festival; temporarily allowing a traffic lane to become a bike lane; shifting parking stalls to be pocket parks or outdoor cafe tables; etc. With many of these kinds of interventions beginning to crop up in cities around the world, the term "Tactical Urbanism" was introduced by {{mike-lydon-and-anthony-garcia}} to capture these kinds of activities.

These kinds of short-term tactics can enliven public space, while avoiding the red-tape of more permanent interventions. They are thus easier to implement given their quick and temporary scope. They often are the result of grass-roots community activism, and are typically described in the context of community empowerment.

At the same time, these kinds of interventions can be related to complexity thinking if they are conceived not as "one-offs", but instead as strategic tests that serve as a kind of environmental probe. The nature of such interventions is that they are "light, quick, and cheap", meaning that they are also {{safe-to-fail}}. Because of their temporary and "light" nature, they can quickly be mobilized on different sites, on different days. This means that they have the inherent ability to provide quick and adaptive {{timeiterations}} that can support urban 'learning'.

How the City Learns

In what way might a city learn? Urban designers often depict renderings of lovely civic interventions: bike paths filled with happy cyclists; amphitheaters enlivened by performers and audiences; sidewalk cafes brimming with smiling people. But are these projections accurate? Too often, once spaces are built, they fail to perform in the ways anticipated - but at that point it is too late. Too much capital has been sunk into the project to rip it up and start over again, so we are left with dysfunctional environments.

We can therefore think about tactical approaches as a way to increase the number of functional {{variables}} a particular urban environment can explore. One iteration might involve populating a street with a market, another might be about partially closing it for a bike path, another might test turning sections into pop-up parks. Each of these can be considered a potentially viable urban function seeking the right "fit" within a given context - one looking for a supportive niche. It is therefore possible to see tactical interventions as "fitness" probes used to explore the {{fitness-landscape}} of an urban environment. Given that different urban environments are subject to different underlying dynamics (or {{driving-flows}}), the success of a particular test probe can tell us something about what constitute suitable niches for longer-term interventions.

Example: Play me I'm Yours

Play me I'm Yours began in 2008 as an artist installation by Luke Jerram, who placed pianos in various locations in a city. The project gained international traction and has since been replicated globally. Musicians find pianos in unexpected locations and are able to animate the surrounding environment by playing music. While the project is compelling in and of itself, it is also interesting to position it not merely as an artistic intervention, but also as an experiment in probing the city for viable music locations. Each piano, in a sense, could be thought of as a sensor, monitoring how often it is activated by players. Together, all pianos thereby gain data about the underlying capacity or propensity for music performance in a section of the city. If we think of each piano as an agent in a complex system, and we think of "being played" as a measure of that agent's fitness, then the pianos can, in a sense, monitor which positions best serve to gather their relevant input (piano-playing individuals). Here, the civic environment carries these driving resource flows in differential ways (with some locations being richer in flows than others). These are thereby more "fit" locations.

While this example has its limits, it can be extended to imagine other, similar kinds of civic systems. For example, imagine that we create a temporary pop-up playground set, capable of being easily dismantled and assembled, and then deployed to different vacant lots in the city. We could then imagine equipping this set with sensors to determine where and when it is activated and used. This would not involve the top-down monitoring of individual kids (a risk often associated with big data collection), but instead would simply involve the monitoring of the equipment itself: do the swings swing, are the slides being slid upon, etc. We can think of each of these activities as a measure of 'fitness' for the playground equipment. A slide, for example, as an agent within this complex system, aims to fulfill its 'destiny' by being used for sliding: sensors monitoring the frequency of its use can then be used as a measure of its fitness. The various pop-up locations are different niches, each of which provides the slides with differential flows of a particular resource - in this case the energy of sliding children - that the slides are hungry to gather. The deployments of the playground equipment can then be seen as explorations of the fitness landscape, {{timeiterations}} through which the slide gathers {{feedback-loops}} about locational success.
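
A minimal sketch of how such sensor logs might be aggregated is shown below; the site names, equipment elements, and log entries are entirely hypothetical, and the 'fitness' measure is simply a count of activations per deployment niche.

```python
from collections import defaultdict

# Hypothetical sensor log: which piece of equipment was activated at which deployment site.
sensor_log = [
    ("vacant_lot_1", "slide"), ("vacant_lot_1", "swing"), ("vacant_lot_1", "slide"),
    ("vacant_lot_2", "sandbox"),
    ("vacant_lot_3", "slide"), ("vacant_lot_3", "slide"), ("vacant_lot_3", "swing"),
    ("vacant_lot_3", "sandbox"), ("vacant_lot_3", "slide"),
]

site_fitness = defaultdict(int)       # fitness of the playground as a whole, per niche
element_fitness = defaultdict(int)    # fitness of nested sub-elements (slide, swing, sandbox)
for site, element in sensor_log:
    site_fitness[site] += 1
    element_fitness[(site, element)] += 1

# The niche that gathers the most activations is, so far, the most 'fit' location.
best_site = max(site_fitness, key=site_fitness.get)
print("most 'fit' deployment site so far:", best_site)
```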

It should be apparent that this is a system capable of learning, with each tactical mutation of {{variable}} serving as a test of fit strategies. Furthermore, the system can be thought of as made up of {{nested-orders}} of components: the fitness of the playground as a whole can be assessed, but we can also examine the fitness of the different sub-elements making up the park - how much a sandbox, a swing-set, or a slide is each activated as part of that whole.

Tactical Strategies as a Method of Deploying Complexity on the Ground

Tactical strategies are most typically lauded as a way to gain grass-roots advocacy, but they are presented here in relationship to complexity, as a tangible, operational way to employ complexity thinking in real-world situations. These strategies, alongside the idea of {{urban-datascapes}}, are a way of gathering meaningful data about the differential needs and functional requirements of the city. This information gathering can be done using high-tech sensors (leveraging the power of the Internet of Things), simple observation strategies (does a pop-up market look busy or dead?), or by figuring out how success can leave an environmental trace ({{stigmergy}}).

In the case of stigmergic signals, we need to think about how the environment is structured in ways where it is capable of collecting signals. For example, if we wish to take a tactical approach to placing pathways in a park, rather than setting these in stone we might instead simply plant grass. Grass, as a medium, is capable of collecting traces of differential flows of footsteps - recording the {{driving-flows}} where routes converge. In this way, what are known as 'desire lines' manifest on the grass as an emergent phenomenon, revealing bottom-up flows rather than imposed flows. If the "fitness" of a sidewalk paving stone pertains to where it best gathers footfalls, then desire lines reveal the optimum location to place these stones.
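
As a rough sketch of this stigmergic logic, the toy model below treats a lawn as a grid that simply accumulates footfall counts; the walkers follow arbitrary random routes between two corners (a stand-in for real observed movement), and the most worn cells are read off as candidate locations for paving.

```python
import numpy as np

rng = np.random.default_rng(1)
lawn = np.zeros((30, 30), dtype=int)   # each cell tallies the footfalls it receives

for _ in range(500):                   # 500 hypothetical crossings of the lawn
    r, c = 0, 0
    while (r, c) != (29, 29):
        lawn[r, c] += 1                # the grass 'records' each footfall as wear
        if r == 29:
            c += 1
        elif c == 29:
            r += 1
        else:
            r, c = (r + 1, c) if rng.random() < 0.5 else (r, c + 1)
    lawn[29, 29] += 1

# The most heavily worn cells are the emergent 'desire lines' - candidate spots for paving.
worn = np.argwhere(lawn > np.percentile(lawn, 90))
print(f"{len(worn)} heavily worn cells suggest where paths might be laid")
```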

We can, of course, force these flows into other regimes that will become well-trodden: if there is only one way to go then people will go that way. But just because we have locked people in to a given behavior by forcing them into this conformance does not mean that it is best. We can think of the QWERTY keyboard as imposing a limit on more effective ways of typing: just because lots of people use this keyboard does not make it the most fit of all possible keyboards.

Tactical Urbanism can therefore be seen as a useful tool for designers thinking about how they might explore the underlying fitness landscapes of the city - shaped by different flows and potentials. The challenges are in learning how to conceptualize material artifacts in the city - ranging from movable chairs in parks, to movable buses on self-organizing bus routes - in more tactical ways. 


 

Resilient Urbanism

How can our cities adapt and evolve in the face of change? Can complexity theory help us provide our cities with more adaptive capacity to respond to uncertain circumstances?

Governing Features ↑

Increasingly, we are becoming concerned with how we can make cities capable of responding to change and stress. Resilient urbanism takes guidance from some complexity principles with regards to how the urban fabric can adapt to change.


Urban resilience refers to the ability of an urban system - and all its constituent socio-ecological and socio-technical networks across temporal and spatial scales - to maintain or rapidly return to desired functions in the face of a disturbance, to adapt to change, and to quickly transform systems that limit current or future adaptive capacity (Meerow et al., 2015, Landscape and Urban Planning)

MORE COMING SOON!


Back to {{urbanism}}

Back to {{complexity}}



 

Relational Geography

If geography is not composed of places, but rather places are the result of relations, then how can an understanding of complex flows and network dynamics help us unravel the nature of place?

Governing Features ↑

Relational Geographers examine how particular places are constituted by forces and flows that operate at a distance. They recognize that flows of energy, people, resources and materials are what activate place, and focus their attention upon understanding the nature of these flows.


Networked Space:

Which two cities are closer together - London and Blackpool or London and New York? From a strictly metric geographic sense, we would answer that London and Blackpool are closer, and for a long time that would be how geographers would respond. But in recent decades geographers have become increasingly interested in how places are constituted not so much according to fixed, metric qualities, but in terms of how different kinds of flows tie spaces together. These spaces might be quite far from one another in a geographic sense, but quite close together in terms of how they relate: hence relational geography.

Looking at these three cities from a relational perspective, we would consider the kinds of flows that move between them - flows that might be constituted of people, ideas, money, resources, etc. From this perspective, we could reasonably argue that London and New York have far greater intensities of flows, drawing them closer together than the Blackpool counterpart situated in the UK.

Relational geography is thus interested both in the kinds of {{network-topology}} that exist between places, as well as the {{driving-flows}} that these networks carry. Rather than seeing places as primary and the relations between places as a secondary outcome of these primary "things", relational geography flips this relationship on its head: arguing that we need to look at the relational flows first, with particular places then being constituted by the nature of how these flows come to be grounded or moored in particular settings (see for example the work of {{John-urry}}). It employs network theory to help think about how the dynamics of agent interactions - the flows moving between them - affect the performance of complex geographical systems.


Complexity and Relational Geography

Given these interests, it stands to reason that geographers interested in thinking through this orientation would notice similarities to complexity theory. Relational geographers thus began to draw inspiration from complexity dynamics, particularly as it pertains to such phenomena as {{emergence}}, {{non-linearity}}, and {{driving-flows}}. Relational geographers are not particularly engaged with the nature of self-similar or nested orders in complex systems, and when they do focus on individual agents, these are most often thought of not at the scale of humans in cities, but as cities themselves acting as agents in a global network.

Relational Geography attunes in particular to how network structure may have an effect on the kinds of urbanization patterns that emerge; how present-day patterns of habitation are not necessarily 'natural' outgrowths of previous patterns in a clear, logical chain, but instead how {{history}} and {{contingency}} play a key role. They may employ the language of complexity, using terms like {{bifurcations}} to try to capture the contingent, non-linear dynamics at play.

Thus, what makes a "world class" city versus a local city, and what are the driving forces at play that weave a city into global versus local networks of influence? How can cities that may be at the fringes move to steer more driving flows of resources and people into their sphere of influence? Which geographical regions are left behind? For example, how does the location of a particular rail line, and its stations, change the dynamics of proximity in ways that may privilege certain regions, while marginalizing others that are left with poorer access to these flows of mobility?

These kinds of questions slide up alongside many of the terms and concepts used in complexity thinking.

map of global airline routes - Wikimedia commons






Back to {{urbanism}}

Back to {{complexity}}


 

Parametric Urbanism

New ways of modeling the physical shape of cities allow us to shape-shift at the touch of a keystroke. Can this ability to generate a multiplicity of possible future urbanities help make better cities?

Governing Features ↑

Parametric approaches to urban design are based on creating responsive models of urban contexts that are programmed to change form according to how inputs are varied. Rather than the architect creating a final product, they instead create a space of possibilities ({{phase-space}}) that is activated according to how various flow variables - economic, environmental, or social - are tweaked. This architectural form-making approach holds similarities to complex systems in terms of how entities are framed: less as objects in and of themselves, and more as responsive, adaptive agents, activated by differential inputs.


More Coming Soon! In the meantime, check out the tutorial under the "Resources" section. 

Relates to topology;

Relates to variations;

Relates to differentials

Back to {{urbanism}}

Back to {{complexity}}


 

Landscape Urbanism

Landscape Urbanists are interested in adaptation, processes, and flows, with their work often drawing from the lexicon of the complexity sciences.

Governing Features ↑

A large body of contemporary landscape design thinking tries to understand how designs can be less about making things, and more about stewarding processes that create a 'fit' between the intervention and the context. Landscape Urbanists advancing these techniques draw concepts and vocabulary from complex adaptive systems theory.


“Landscape Urbanism” (LU) is a phrase coined by theorist {{Charles-Waldheim}} to describe a new sensibility towards space that emerged in the late 1980s and early 1990s. Its roots trace back to a number of key theorists and practitioners based at the University of Pennsylvania, the Harvard Graduate School of Design, and the University of Illinois, Chicago. Their writings became mainstream in the late 90s and mid 2000s, being circulated in two highly influential texts - Recovering Landscape (1999) and The Landscape Urbanism Reader. These helped disseminate key ideas within the discourse, as well as highlighting seminal projects advancing the movement's ideas in the form of competition entries as well as built works.

These texts and projects positioned LU as a break from traditional landscape interests, which tended to focus on the sceno-graphic or pictorial qualities of space. Instead, Landscape Urbanism attunes to the nature of landscape performance in an unfolding context. LU practitioners and theorists are thereby less attentive to the physical dimensions of plans (how they look), and more to the performative aspects of plans and how these come to be enacted over time. Here, practitioners acknowledge the limits to their foresight, and instead try to work with {{contingency}}. They accept the role of {{history}} in terms of the specifics of how places will come to emerge.

The movement recognizes that prediction is impossible, allowing for sites that are not so much constructed as performed in space and time, by means of differential forces engaging with the site. This performance takes place within a spatial arena that is structured so as to not only permit but also afford a broad range of site potentials – different manners in which the site might be “played” or from which different variations of performance can be extracted. To prime these mutable settings, LU practitioners speak of ‘seeding’ an area, ‘irrigating’ a territory, or ‘staging’ the ground - all alluding to an active and catalyzing engagement with the site that anticipates and prepares the ground for possibility, while still maintaining an open-endedness in terms of which future possibilities are enacted (see {{James-Corner}}). This idea of creating a flexible framework that can be activated in different ways is described as creating {{open-scaffolds}} in landscape, but can be tied back to the idea of setting up {{variables}} that are then activated so as to support different {{driving-flows}}.

Thus, LU does not just leave a space ‘open’, but instead aims to increase a physical environment’s capacity to foster the emergence of contingent events: ones constituted on territories where these flows coalesce. Here, the concept of ‘staging’ or creating {{affordances}} is key. Affordance is the term coined by James Gibson to describe the capacity of objects or environments to invite multiple kinds of appropriations that, in turn, manifest as different ‘states’ aligned with different kinds of user needs or requirements. The choice of which ‘afforded’ state manifests is contingent upon the kinds of imbricated relationships activated by users. That said, not all sites offer equal affordances to shift into different regimes of behavior - if too specific, territories do not have the plasticity required; if too open-ended, they become neutral, with little capacity to meaningfully afford or support programmatic specificity.

By creating a range of affordances that support programmatic potential, Landscape Urbanists accept the future as non-linear, open-ended and contingent, but still act to curate meaningful material territories that can be appropriated and modified when and where contingent forces coalesce.

This notion of {{affordances}} is closely aligned to that of {{phase-space}}. Both concepts engage the idea (central to both complexity and {{assemblage-geography}}) that material entities have certain capacities that exist within {{the-virtual}} and remain contingent, and that these are activated and manifested only under particular circumstances. That said, material affordances are not completely open-ended - there are still limits, and the way in which the capacities of material form are 'called forth' is through practices that integrate the {{driving-flows}} of agency present in a given situation.

This emerging body of work integrates an acceptance of process, evolution, and unknown site dynamics, with the actualization of site features occurring in accordance with non-linear interactions. Strategies involve the creation of multiple enabling sites (or niches) within the territory of the city that permit different kinds of programs to find their best 'fit' in response to evolving relationships.

For a more in-depth look at Landscape Urbanism approaches, including examples of projects and their relationship to complexity thinking, please watch the tutorial featured in the "In Depth" resources.


Back to {{urbanism}}

Back to {{complexity}}




 

Incremental Urbanism

Cities traditionally evolved over time, shifting to meet user needs. How might complexity theory help us emulate such processes to generate 'fit' cities?


This branch of urban thinking considers how the morphologic characteristics of the built environment factor into its ability to evolve over time. Here, we study the ways in which the built fabric can be designed to support incremental evolution.


Typically, designers see the "masterplan" as the foremost solution to urban planning. Often these masterplans are characterized by large-scale, hierarchical, high-capital, inflexible, and centralized ways of city planning. Such masterplans fail to integrate the complex and rich dynamics of cities, with the importance of architectural forms and visions overshadowing ongoing social, economic, and political characteristics.

Incremental Urbanism, by contrast, considers the complexity of these variables and instead aims to support a city that can grow and evolve over time. Here, individual occupants or builders respond to the constantly changing environment and resources around them. The city is built piece by piece as individuals get more information, develop more aspirations, and better identify their own needs and capacities.

At the same time, people's ability to modify the city is also tied to the nature of its underlying morphologic conditions. Certain characteristics enable evolution to proceed incrementally over time, whereas other conditions resist change, and alterations require more radical processes of destruction and reconstruction - impeding the ability for iterative learning. Thus, the inherent flexibility of the floor plates of the canal houses of Amsterdam enables these to host a wide array of functions - be it warehousing, housing, restaurants, offices, or shops - whereas other kinds of spaces resist such flexibility of appropriation.


Example:

Consider the images below. In the upper set of images, functions are built with a morphological specificity that resists easy conversion. While it is possible to swap these functions into the other spaces, it is unlikely. Accordingly, if one function ceases to be fit, mutations towards new functions are not easily enabled.
In contrast, if we look at the canal buildings in Amsterdam, we see that the built characteristics allow changes in programming to easily take place, allowing new kinds of behaviors to be activated and supported by the identical built fabric.


 


Modularity

This branch of urban thinking considers time and evolution key to generating fit urban spaces. {{jeremy-till-t-schneider}}, in their book "Flexible Housing", discuss how housing units can be developed by means of {{modular}}, allowing projects to evolve incrementally over time and create larger spaces only as needed. The ultimate building scale may involve additions to structures such as an additional story, the expansion of a room, or an additional detached small unit. This type of development happens constantly and gradually over time, resulting in no large disruption to the neighborhood. Each new modification respects the existing context so that, as growth and change happen, features of the original character remain.

Incremental development can therefore happen at many scales (or at differing 'grains' of urban fabric). The designer's goal is to generate effective spaces that can range from single-family homes to large apartment complexes or even office buildings. This wide spectrum of spaces evolves over time by adding more modules together to create a more fit urban space.

Iterations

We can think about this kind of incrementalism as being consistent with the iterative nature of complex systems, built as a series of {{patterns-of-interactions}} that is steered by the collective behaviors of {{bottom-up-agents}} in the form of occupants. That said, these occupants need to inhabit spaces that are capable of being modified in this incremental manner - a built fabric that has the {{adaptive-processes}} to respond to shifting needs and forces.

In "What is the Incremental City", Julia King writes, "the incremental city achieves what the 'natural city' achieves as it is developed in a piece-meal way responding to local conditions, desires, and aspirations." This flexibility allows developments to freely react to new variables - the {{driving-flows}} of urban conditions that continuously establish an array of new possible system states. We can think of these reactions as {{feedback-loops}}, with the built environment self-regulating and organizing over time. King states that incrementalism encourages individuals to shape and affect their environments. They activate incremental improvements, additions, or modifications in the face of novel inputs - instilling a bottom-up personal agency not typical of top-down master-planned projects.


{{Patterns-of-Interactions}}

Many of the dynamics we see at play in incremental approaches depend on what is occurring in the surrounding context. Thus, similar to agent-based simulation models where cells shift their states based on the performance of neighboring cells, in an incremental approach there are morphological components at play, with variations in morphological conditions being influenced and constrained by what is happening on neighboring sites.


Example:

Aspects of Incremental Urbanism can be demonstrated in the game of Carcassonne. The game consists of a set of tiles that display sections of grass, roads, or cities. As tiles are iteratively placed, they must progressively adapt to adjacent predetermined conditions to keep roads and cities correctly matched together. Incremental cities and Carcassonne develop unpredictable and diverse landscapes formed by means of such small, incremental steps that are constrained by surrounding decision-making. Traditional civic growth also follows this model - evolving naturally and organically with little or no planning - while modern practices attempt to foresee civic development ahead of time.

While incremental shifts can occur in any setting, cities designed using top-down strategies tend to have a slower pace of incremental development because of the pre-imposed limits already in place. It takes longer for agents within these cities to evolve and shape their environments, as they are already locked in to a predetermined form. On the other hand, traditionally developed cities emerge through {{patterns-of-interactions}}, shaped via incremental changes that are a product of the needs of the agents within the system (more on traditional civic growth can be found on the {{informal-urbanism}} page).



Text adapted from a contribution by Samantha Barger, Michael Gehl, Shivang Patel, Kevin Tokarczyk; Iowa State University, 2021

Back to {{urbanism}}

Back to {{complexity}}


 

Evolutionary Geography

Across the globe we find spatial clusters of similar economic activity. How does complexity help us understand the path-dependent emergence of these economic clusters?


Evolutionary Economic Geography (EEG) tries to understand how economic agglomerations or clusters emerge from the bottom-up. This branch of economics draws significantly from principles of complexity and emergence, seeing the rise of particular regions as path-dependent, and looking to understand the forces that drive change for firms - seen as the agents evolving within an economic environment.


Evolutionary Economic Geography is a branch of economics that tries to understand how the same kinds of processes observed in evolution can be applied to geographically situated economic clusters. It shares some similarities with {{Relational-Geography}} in that it sees the specificity of the physical environment as something that arises due to networks of driving flows. Where it differs is partially in terms of its specific focus - that of economic actors situated in urban contexts (that is, firms with particular expertise and economic output) - rather than the broader multiplicity of actors found within cities. Further, the field forefronts more of the dynamics of complexity than relational geography: attuning in particular to the {{bottom-up-agents}} (in the form of firms) that make up these economic systems, as well as the dynamics underlying their {{adaptive-processes}} to become more fit. Accordingly, the field employs what is known as "General Darwinism": using the principles of variation, selection, and retention (VSR) that we see in organic evolving systems, and applying these same principles to non-organic systems.

Examples of the kinds of geographic phenomena that these evolutionary geographers might consider of interest would be the rise of Silicon Valley as a tech hub, Holland's tulip-growing fields, or Taiwan's orchid-growing sector (see video below). These kinds of regions of specialized intensification are called "agglomerations", and are described as arising in ways that conceptually correspond with {{emergence}}. Thus, these kinds of intensities of expertise were not necessarily pre-planned from the top down, but instead arose due to processes that are more akin to the evolutionary dynamics we see in nature. Furthermore, the ways in which these dynamics unfold are tied to how {{bottom-up-agents}} in complex systems are steered towards fitness. Here, individual firms are seen as "agents" in an economic system, all of which are competing to find niches for success. These firms are steered not only by the {{feedback-loops}} gathered from monitoring the success of their own actions, but also by the signals gathered from attuning to the actions of their nearest competitors.

Spill-overs and Negentropy

These signals help steer individual firm success, due to the benefits of what are known as "spill-over" effects. Another way to think about this is that, left to their own independent devices, each firm needs to navigate the economic landscape with maximum uncertainty about how best to proceed in order to "harness" the {{driving-flows}} of monetary gain. By co-locating near similar agents, the amount of uncertainty involved can be reduced (see {{information-theory}}). Uncertainty in this case might pertain to industry "best practices" that are coming to the fore, personnel that are knowledgeable and available in the region to be hired, and synergetic support businesses present and able to carry out aspects of the delivery model. Thus, the backdrop of Silicon Valley provides expertise and support "in the air" that gives businesses in the region a competitive edge over others located in more isolated regions.

Intensifying Flows & Feedback

Some of the dynamics pertaining to why a particular economic agglomeration emerges involve the kinds of network effects seen in conditions of growth and {{preferential-attachment}}. As certain business sectors begin - potentially at random - to co-locate in a particular region, other support services become attracted to that area, which then attract further businesses, and so on. We see again the mechanism of {{positive-feedback}} reinforcing particular patterns, which then take hold as {{attractor-statesbasins}} for agents in the system. 
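To make the mechanism concrete, the sketch below (in Python) simulates a toy version of preferential attachment, where each new firm chooses a region with probability proportional to the number of firms already located there. The number of regions, the number of firms, and the '+1' base weight given to empty regions are illustrative assumptions, not values drawn from the discussion above.

```python
import random

def simulate_agglomeration(n_firms=1000, n_regions=20, seed=42):
    """Toy preferential attachment: each new firm picks a region with
    probability proportional to (1 + firms already located there).
    The '+1' gives empty regions a small chance of being chosen."""
    rng = random.Random(seed)
    weights = [1] * n_regions
    for _ in range(n_firms):
        chosen = rng.choices(range(n_regions), weights=weights)[0]
        weights[chosen] += 1
    return [w - 1 for w in weights]  # firms per region

if __name__ == "__main__":
    counts = simulate_agglomeration()
    print(sorted(counts, reverse=True))  # a few large clusters, many small ones
```

Even though every region starts out identical, a handful of regions end up hosting most of the firms - a simple stand-in for the agglomeration dynamic described above.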

Fitness

We can therefore consider firms in a region as competing {{bottom-up-agents}}, each trying to tweak the {{variables}} of their business models so as to outcompete their neighbors. Yet even though they are engaged in competition, they nonetheless have some reliance on their competitors: it is through their co-presence that many simultaneous business protocol {{timeiterations}} can be tested in parallel, with the overall expertise of the co-located enterprises being enhanced. Accordingly, agglomerations of these co-located competing firms are more likely to increase their {{fitness}} than firms operating at a distance.

Enslavement or "Lock-in"

It becomes very difficult to disrupt an agglomeration once it has emerged. Too many of the flows related to a particular sector become concentrated in a given geographic region, meaning that massive structural shifts are required to rearrange these flows. This is not to say that this can never occur. Detroit, for example, was for many years the powerhouse of automotive manufacturing. It was only with the advent of major underlying shifts of flows - tied to such aspects as wages, access to cheaper workers, and lowered shipping costs - that these flows gradually reconstituted themselves in new geographic locations off-shore. But such major shifts are rare, with regions of expertise reproducing themselves over time, even in the face of other underlying disruptions. Such systems can be described as being in {{enslaved-states}}, or what Evolutionary Economic Geographers call "lock-in".


The video below outlines an example of an emergent agglomeration: that of Orchid growing in Taiwan.




Back to {{urbanism}}

Back to {{complexity}}


 

Communicative Planning

Communicative planning broadens the scope of voices engaged in planning processes. How does complexity help us understand the productive capacity of these diverse agents?


A growing number of spatial planners are realizing that they need to harness many voices in order to navigate the complexities of the planning process. Communicative strategies aim to move from a top-down approach of planning, to one that engages many voices from the bottom up.


Backdrop

Communicative planning is a specific strategic approach to developing plans in cooperation with a broader range of actors. If master plans relied on the expertise of the top-down planner, then communicative approaches aim to broaden the number of voices engaged in the process, include more perspectives, and garner more wisdom from harnessing the bottom-up "wisdom of crowds". 

Here, planning is positioned as a "wicked problem": one with poor boundaries, many diverging and overlapping concerns, and no direct pathway to problem "solutions". It is therefore seen as a problem in "complexity" - the term largely adopted so as to refer to the messiness of the problem domain. In this reading, agents in the system are considered as individual stakeholders, each of which has personal interests that need to be resolved or addressed. At issue is how best to 'strategically navigate' amongst these players, so that an "emergent" solution can be reached.

Within this reading, it is helpful to consider the differential power that each stakeholder wields, so as to better balance dynamics that might lead to unfair planning solutions. Such situations arise, for example, when a particular party (such as a developer) holds inequitable resources with which to influence planning decision-making. Accordingly, communicative planners try to understand the relative flows of agency available within the process, and then channel these in more equitable, balanced ways.


Relation to Complexity

Planners with these interests are often drawn to principles from complexity, not least because one of the key thinkers in the domain, {{patsy-healey}}, authored a seminal book titled "Urban Complexity and Spatial Strategies". The approach does indeed relate to complexity in that it emphasizes a bottom-up process by which a consensual strategy for planning emerges. Here, the use of the word "complexity" may be more metaphorical than technical (if we assume that in this context it is simply suggesting that planning is 'complicated'). Similarly, there are other aspects of complexity theory that are appropriated in this discourse, some in more direct, others in more metaphorical, manners.

Networks

Communicative Planners have a strong interest in how the nature of the actor {{network-topology}} affects how decision-making takes place (and whose voices dominate the network). There is a strong link between communicative approaches and Actor Network Theory (ANT), which examines network dynamics as what ultimately constitutes certain forms or protocols previously accepted as 'givens'. Here, similar to the approach of relational geography, we see the relations constituting a given entity as being more fundamental than the entities themselves.

Agents

Part of the objective of network analysis is to understand which nodes in the network hold more power, tracing which agents in the system play a larger causal role in driving it forward. Communicative Planners consider how {{bottom-up-agents}}, in the form of diverse stakeholders, steer the process, and where differences in agency lie. While consensus can "emerge" from many kinds of bottom-up agent interactions, such emergence can be subject to inequitable steering depending on how stakeholders are empowered or disempowered in the process. The concern for agents here is thus less about "rule-based" decision-making, or how such agents adapt, and more about how so-called bottom-up dynamics need to be facilitated so as to ensure that the meaningful input of all agents can be garnered in discovering a planning solution. The concern is that some processes leave agents out - unable to contribute to the emergent characteristics of a given planning strategy.

Emergence

For communicative planners, the concept of emergence is again used more as a metaphorical tool than in a technical manner. To illustrate: even though diverse ants in a complex system form an emergent trail, they do not do so by sitting around together in a colony deliberating and weighing which course of action to take. Emergence in the more technical sense relates to actions that are performed in an environment, where the agents involved - be they sand grains or ants - need not be consciously cooperating. By the same token, ants need not compromise their own needs for the sake of the colony. This is not to say that the communicative approach towards emergent consensus is not of value, only that it is probably not of the same kind as what we would see in natural complex systems.

That said, the language and terms drawn from complexity seem to offer communicative planners a useful set of concepts: able to convey something meaningful about developing a more contingent, open-ended, bottom-up, and relational approach to decision-making.




Back to {{urbanism}}

Back to {{complexity}}


 

Assemblage Geography

Might the world we live in be made up of contingent, emergent 'assemblages'? If so, how might complexity theory help us understand such assemblages?


Assemblage geographers consider space in ways similar to relational geographers. However, they focus more on the temporary and contingent ways in which forces and flows come together to form stable entities. Thus, they are less attuned to the mechanics of how specific relations coalesce, and more to the contingent and agentic aspects of the assemblages that manifest.



Assemblage thinking draws from the work of Gilles Deleuze, who coined the term 'agencement' (translated as "assemblage" in English), which in the original French refers both to 'coming together' and to 'agency'. The philosophy draws attention to the contingency of material things as well as their agentic power: emphasizing that things retain both virtual capacities, which remain latent, and capacities that are actualized when entering into relation with other forces or actors.

Example:

Consider the power of a Mongol warrior. Here, three separate entities come together: the individual warrior, the horse that he rides, and the stirrup that enables him to stand with his weapon while in motion. None of these separate aspects can conquer a territory on its own, but together the three can enter into an assemblage with additional agentic power to have a major effect. Such an assemblage can 'stabilize' into this configuration, while each component still maintains its own identity. Assemblage provides a way to speak about such entities, but also about how certain capacities can remain latent within entities until they are forged together in contingent, temporary assemblages.

Relation to Complexity

Assemblage theorists adopt the concept of Emergence, but engage with it in a much more philosophical manner. Following the works of the philosophers Gilles Deleuze and Felix Guattari, they describe concrete urban entities as emergent, indeterminate, and historically contingent Stabilized Assemblages. Assemblages are configurations of inter-meshed forces and distributed agencies - human/non-human, local/non-local, material, technical, social, etc. - that are stabilized at particular moments. Once in place, assemblages - like emergent features - may have unique properties or capacities not associated with their constituent elements, and may thereupon exert agency in structuring further events. 'Assemblage' ideas therefore echo those of Emergence: something is produced from constituent agents that is able to act in novel ways. This conceptual overlap has led geographer {{Kim-dovey}} to suggest that the phrase 'Complex Adaptive Assemblage' be used in place of 'Complex Adaptive System' in the spatial disciplines.

Agents in a particular assemblage have particular capacities which one might see as analogous to Degrees of Freedom, but how these capacities manifest is subject to Contingency: predicated on the nature of flows, forces, or the Patterns of Interactions at play in a given situation. Assemblage geographers thus import the language of {{non-linearity}} and Bifurcations: trying to understand the chance events that determine the trajectory of urban systems which are sensitive to historical unfolding.

This sense that {{history}} matters runs counter to the historical determinism that previously dominated geographical investigations, where a coherent, logical chain of cause and effect was seen as the primary driver of geographical difference. For assemblage thinkers, history does indeed matter, but only insofar as one particular trajectory is realized versus another. Manuel de Landa, for example, argues that in order to properly conceptualize the importance of any given actualized geographical space, it is necessary to see this space as but a single manifestation - situated within the broader Phase Space of The Virtual - with all its unrealized potentials. This emphasis on the role of history situates urban systems as subject to Contingency, with the actual unfolding representing only one possible trajectory of broader system potential.

Assemblage Geography thus engages with many concepts present in Complex Adaptive Systems Theory, but primarily focuses on the nature of contingent, causal flows (including both human and non-human flows) and how these come to be realized in particular physical manifestations.


Accordingly, the field is less attuned to aspects of complexity surrounding, for example, rule-based systems, mathematical regularities, or the adaptive capacities of bottom-up agents.


Back to {{urbanism}}

Back to {{complexity}}


 


 

Explore - Navigating Complexity


Tipping Points

A tipping point (often referred to as a 'critical point') is a threshold within a system where the system shifts from manifesting one set of qualities to another.

Complex systems do not follow linear, predictable chains of cause and effect. Instead, system trajectories can diverge wildly into entirely different regimes.


Most of us are familiar with the phrase 'tipping point'. We tend to associate it with moments of no return: when overfishing crosses a threshold that causes fish stocks to collapse or when social unrest reaches a breaking point resulting in riot or revolution. The concept is often associated with an extreme shift, brought about by what seems to be a slight variance in what had been incremental change. A system that seemed stable is pushed until it reaches a breaking point, at which point a small additional push results in a dramatic shift in outcomes.

While the phrase 'tipping point' tends to connote a destructive shift, the phrase 'critical point' (which also refers to a large shift in outcomes due to what appears to be a small shift of the system context) does not carry such value-laden implications. Complex systems tend to move into different kinds of regimes of behavior, and the shift from one behavior to another can be quite abrupt: indicating that the system has passed through a critical point.

Example:

Water molecules respond to two critical points: zero degrees, when they shifts from fluid to solid state; and one hundred degrees, when they shift from fluid to vapor state. We see that the kinds of behavior that water molecules will obey is context dependent:  they maintain fluid behaviors within, and only within, the context of a certain temperature range. If we examine why the behavior of the water changes, we realize that fluid behavior within the zero to 100 range is the behavior that involves the least possible energy expenditure on the part of the water molecules given their environmental context. Once this context shifts - becoming too cold or too hot - a particular behavioral mode is no longer that which best conserves energy. Water molecules have the capacity to enact three different kinds of behavioral modes - frozen, fluid, or vapor - and the way these modes come to be enacted is subject to whichever mode involved the least energy expenditure within a given context.

Minimizing Processes:

Another way to think about this, using a complex systems perspective, is that the global behavioral dynamics are moving from one Attractor States to another. When the context changes, the water molecules are forced into a different "basin of attraction" (another word for an attractor state), and this triggers a switch in their mode.

In all complex systems this switch from one basin of attraction to another is simply the result of a system moving from a regime of behavior that, up until a certain point, involved a minimized energy expenditure. Beyond that point (the tipping point) another kind of behavioral regime encounters less resistance, conserving energy expenditures given a shifting context.

A tipping point, or critical point, is one where a system moves from one regime of 'fit' behavior into another. We can imagine the point above as a water molecule poised at zero degrees - with the capacity to manifest either in a fluid or frozen energy state.

Of course, what we mean by 'conserving energy' is highly context-dependent. For example, even though the individual members of a political uprising are very different actors from individual water molecules in a fluid medium, the dynamics at play are in fact very similar. Up until a certain critical mass is obtained, resisting a government or a policy involves encountering a great deal of resistance. The effort might feel futile - 'a waste of energy'. But when a movement begins to gain momentum, there can be a sense that the force of the movement is stronger than the institutions that it opposes. Being 'carried along' with the movement (joining an uprising) is in fact the course of action that is most in alignment with the forces being unleashed.

Further, once a critical mass is reached, a movement will tend to accelerate its pace due to positive feedback. This can have both positive and negative societal consequences: some mass movements, such as lynch mobs or bank runs, show us the downside of tipping points that move beyond a threshold and then spiral out of control.
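One way to make this 'critical mass' dynamic concrete is with a simple threshold model (in the spirit of Granovetter's threshold models of collective behavior). The sketch below is a hedged illustration only: the Gaussian distribution of personal thresholds and the two seed sizes are assumptions chosen to show a tipping point, not values taken from the text.

```python
import random

def cascade(seed_frac, n=10_000, mean=0.25, sd=0.05, rng_seed=1):
    """Threshold-model sketch: each agent joins once the fraction already
    participating exceeds its personal threshold. Thresholds follow a
    Gaussian; a 'seed' group participates unconditionally."""
    rng = random.Random(rng_seed)
    thresholds = [max(0.0, rng.gauss(mean, sd)) for _ in range(n)]
    for i in range(int(seed_frac * n)):
        thresholds[i] = 0.0                 # unconditional joiners
    participating = 0
    while True:
        frac = participating / n
        now = sum(t <= frac for t in thresholds)
        if now == participating:            # no new joiners: fixed point
            return frac
        participating = now

if __name__ == "__main__":
    for s in (0.10, 0.20):                  # a slightly bigger initial 'push'
        print(f"seed {s:.0%} -> final participation {cascade(s):.0%}")
```

With a 10% seed the movement stalls near its starting size, while a 20% seed tips the whole population into participating - a small additional push producing a dramatically different outcome.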

That said, understanding that critical points may exist in the system (beyond which new kinds of behavior become feasible) can help us move outside of 'ruts' or 'taken for granted' scenarios. In the North American context, smoking was once an acceptable social practice in public space. Over time, societal norms pushed public smoking beyond a threshold of acceptability, at which point smoking went from being a normative behavior to something that, while tolerated, is ostracized in the public realm.

What other kinds of activities might we wish to encourage and discourage? If we realize that a behavioral norm is close to a critical point, then perhaps with minimal effort we can provide that additional 'push' that moves it over the edge.

Shifting Environmental Context:

Of course these examples are somewhat metaphoric in nature, but the point being made is that there can be changes in physical dynamics and changes in cultural dynamics that cause different kinds of behaviors to become more (or less) viable within the constraints of the surrounding context.

Returning to physical systems, slime mould is a unique organism that has the capacity to operate either as a collective unit or as a collection of individual cells, depending on the inputs provided by the environmental context. As long as food sources are readily available, the mould operates as single cells. However, when food becomes scarce, a critical point is reached at which cells agglomerate to form a collective body with differentiated functions. This new body has capacities for movement and food detection not available at the individual cell level, as well as other kinds of reproductive capacities.

Accordingly, we cannot think about the behavior of a complex system without considering the context within which it is embedded. The system may have different kinds of capacities depending on how the environment interacts with and 'triggers' the system. It is therefore important to be very aware of the environmental coupling of a system. What might appear to be stable behavior might in fact be behavior that relies on certain environmental features being present - change these features and entirely new kinds of behaviors might manifest.

This is to say that tipping points might be triggered both by intrinsic and by extrinsic forces (also termed endogenous versus exogenous factors). A shift might be due to dynamics at play within the system that push it beyond a critical threshold, or it may be due to dynamics external to the system that alter the system's context or inputs in such a way that a particular behavior can no longer be maintained and the system is pushed into a new regime. When the forces are external, we can think of this as a shift in the Fitness Landscape, where a particular mode of operation is no longer viable due to differences in the environmental context.

Back to {{key-concepts}}

Back to {{complexity}}


 


Self-Organized Criticality

CAS tend to organize to a 'critical state' where, regardless of the scale of a given input, the scale of the corresponding output observes a power-law distribution.

Strike a match and drop it in the forest. How big will the resulting fire be? The forest is dry but not overly so... vegetation is relatively thick. Will the fire burn a few trees and then flame out, or will it jump from branch to branch, burning thousands of acres to the ground?


Weirdly uncorrelated cause and effect:

We might think that the scale of an event is relative to the scale of its cause, and in some instances this is indeed the case. But in the context of complex systems, we find an interesting phenomenon. These systems appear to 'tune' themselves to a point whereby system inputs of identical intensities (two matches lit on two different days, under otherwise identical conditions) result in outputs that diverge wildly (a small fire; a massive fire event). The frequency distribution of intense system outputs (relative to equivalent system inputs) follows power-law regularities.

According to Per Bak, a variety of systems naturally 'tune' themselves to operate at a threshold where such dynamics occur. He defined this 'tuning' as Self-Organized Criticality. A feature of critical states is that, once reached, system components become highly correlated or linked to other system components. That said, the links are exactly balanced: the system elements are linked just tightly enough so that an input at any point can cascade through the entire system, but just loosely enough that there are no redundant links needed to make this occur.

Example:

One might think about this like an array of domino-like entities that, instead of being rectangular, are vertical cylinders: able to topple in any direction. The dominos, instead of being arranged in rows, are arranged in a field, with gaps between some cylinders. Accordingly, when a cylinder falls it might strike a gap in the field, with no additional cylinders toppling. Alternately, it might strike an adjacent neighbor, in which case this neighbor will also fall in a particular direction, potentially striking another or potentially dying out. The analogy is made stronger if we imagine an arrangement whereby, regardless of the direction from which a cylinder is struck, it will wobble and can then fall in any direction. When a system is self-critical, it has reached a state where we can randomly choose any domino to topple and the impact on the overall field will vary according to a power-law distribution. That is to say, some disturbances will affect only a small number of surrounding dominos, while others will propagate throughout the entire system, causing all cylinders to fall. The occurrence of these large-scale versus small-scale cascades follows power-law distributions.

Sand Piles and Avalanches

We can imagine that it would be very difficult to, from the top down, create a specific arrangement where such dynamics occur. What is surprising, and what Bak and his colleagues showed, is that natural systems will independently 'tune' themselves to such arrangements. Bak famously provides us with the 'sand pile' model as an example of self-organized criticality:

Imagine that we begin to drop a steady stream of grains of sand onto a surface. The sand begins to pile up, forming a cone shape. As more sand is added, the height of the sand cone grows, and a series of competing forces comes into play: the force of gravity that tends to drag grains of sand downwards, the friction between grains of sand that tends to hold them in place, and the input of new sand grains that puts pressure on both of these forces.

What Bak demonstrates is that, as grains are added, sand will dislodge itself from the pile, cascading downwards. What is amazing is that it is impossible to predict whether dropping an individual sand grain will result in a tiny dislodgment of sand or a massive avalanche. That said, it is possible to predict the ratio of cascade events over time - which follows a power-law distribution.
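A minimal sketch of this sand pile dynamic is given below, using the standard toppling rule from Bak's model (a cell holding four or more grains topples, passing one grain to each neighbor). The grid size and number of dropped grains are arbitrary choices for illustration.

```python
import random

def sandpile_avalanches(size=20, drops=20_000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile: drop grains one at a time on a grid;
    any cell holding 4 or more grains topples, sending one grain to each
    neighbor (grains falling off the edge are lost). The avalanche size
    is the number of topplings triggered by a single dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(drops):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[x][y] += 1
        topplings = 0
        unstable = [(x, y)]
        while unstable:
            i, j = unstable.pop()
            while grid[i][j] >= 4:
                grid[i][j] -= 4
                topplings += 1
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < size and 0 <= nj < size:
                        grid[ni][nj] += 1
                        if grid[ni][nj] >= 4:
                            unstable.append((ni, nj))
        sizes.append(topplings)
    return sizes

if __name__ == "__main__":
    sizes = sandpile_avalanches()
    for threshold in (1, 10, 100):
        print(f"drops causing >= {threshold:3d} topplings:",
              sum(s >= threshold for s in sizes))
    print("largest avalanche:", max(sizes))
```

Every input is identical - one grain - yet the resulting avalanches range from nothing at all to cascades involving hundreds of topplings, with large events far rarer than small ones.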

What this suggests is that the sand grains cease to respond independently to forces; instead, their response is highly correlated with that of the other sand grains. We no longer have a collection of grains acting independently, but a system of grains displaying system-wide behaviors. Accordingly, an input that affects one element in the system might die out then and there, or, because of the correlation amongst all elements, create a chain reaction.

Information Transfer

It remains unclear exactly how such system-wide correlations emerge, but we do know something about the nature of these correlations - they are tuned to the point where information is able to propagate through the system with maximum efficiency. A message or input at one node in the system (a grain of sand, burning tree, or toppling cylinder) has the capacity to reach all other nodes, but with the least redundancy possible. In other words, there are gaps in the system, which means that a majority of inputs ultimately die out, but not so many gaps that it is impossible for an input to reach all elements of the system.

Coming back to our original example, when we strike a match in a forest, if the forest has achieved a 'self-critical' state, then we cannot know whether the resulting fire will spread only to a few trees, a large cluster of trees, or cascade through the entire forest. The only thing that we can know is that the largest scale events will happen with diminishing frequency in comparison to the small scale events.

One possible way of understanding why self-organized criticality occurs is to position it as a process that emerges in systems that are affected both by a pressure to have elements couple with one another (sand-grains becoming interlocked by friction or 'sticky') and some mechanism that acts upon the system to loosen such couplings (the force of gravity pulling grains apart). The feedback between these two pressures 'tunes' the system to a critical state.

Complex systems that exhibit power laws would seem to involve such interactions between two competing and unbalanced forces.


 


Self-Organization

Self-organization refers to processes whereby coordinated patterns or behaviors manifest in a system without the need for top-down control.

A system is considered to be self-organizing when the behavior of elements in the system can, together, arrive at a globally more optimal functional regime than if each system element behaved independently. This occurs without the benefit of any controller or director of action. Instead, the system contains elements acting in parallel that gradually manifest organized, correlated behaviors: Emergence.


Emergent behaviors become organized into a regular form or pattern. Furthermore, this pattern has properties that do not exist at the level of the independent elements - that is, there is a degree of unexpectedness or novelty in what manifests at the group level as opposed to what occurs at the individual level.

An example of an emergent phenomenon generated by self-organization is flock behavior, where the flock manifests an overall identity distinct from that of any individual bird.

Characterizing 'the self' in 'Self'-organization

Let us begin by disambiguating self-organizing emergence from other kinds of processes that might also lead to global, collective outcomes.

Example - Back to School:

Imagine you are a school teacher, telling your students to form a line leading to their classroom. After a bit of chaos and jostling you will see a linear pattern form that is composed of individual students. At this point, 'the line' has a collective identity that transcends that of any given individual: it is a collective manifestation with an intrinsic identity (don't cut in the line!). The line is created by individual components and expresses new global properties, but its appearance is not the result of self-organization; it is the result of a top-down control mechanism.

Clearly 'selves' organize in this example, but not in ways that are 'self-organizing'.

Now imagine instead that you are a school teacher wanting the same group of students to play a game of tug-of-war in the school gym. Beginning with a blended room of classmates, you ask the students to pick teams. The room quickly partitions into two collectives: one composed entirely of girls and the other entirely of boys. As a teacher, you might not appreciate this self-organization, and attempt to exert top-down control in an effort to balance team gender. What is interesting about this case is that it does not require any one boy calling out 'all the boys on this side', or any one girl doing the same: the phenomenon of 'boys versus girls' self-organizes.

In the example above, we can well imagine the reasons why school teams might tend to partition into 'girls vs boys' even without explicit coordination (of course these dynamics don't always appear, but I am sure the reader can imagine lots of situations where they do).

Here, there are slight preferences (we can think of these as differentials) that generate a tendency for the elements of the system to adjust their behaviors one way versus another. In the case of the school children, the tendency of girls to cluster with girls manifests due to tacit practices: friends cluster near friends, and as clusters appear students switch sides to be nearer those most 'like' them. Even if an individual child within this group has no strong preference - is equally friends with girls and boys - the pressures of patterns formed by the collective will tend to tip the balance. One girl alone in a team of boys will register that her behavior is non-conforming and feel pressured to switch sides, even if this is not explicitly stated.

Here there are 'selves' with individual preferences, but global behaviors are tipped into uniformity by virtue of slight system differences that tend to coordinate action.

Conscious vs unconscious self-organization:

While the gym example should be pretty intuitive, what is interesting is that there are many physical systems that produce this same kind of pattern formation but that do not require social cues or other forms of intentional volition. Instead, self-organization occurs naturally in a host of processes. Whether we are talking about schools of fish, ripples of wind-blown sand, or water molecules freezing into snowflakes, self-organization leading to emergent global features is a ubiquitous phenomenon.

While the features of self-organization manifest differently depending on the nature of the system, there are common dynamics at play regardless of system. Agents in the system participate in a shared context wherein there exists some form of differential. The agents in the system adjust their behaviors in accordance with slight biases in their shared context, and these adjustments, though initially minor, are then amplified through reinforcing feedback that cascades through the system. Finally, an emergent phenomenon can be recognized.

Sync!

Let us consider the sound of cicadas chirping:

cicadas chirping in sync

The cicadas chirp in a regular rhythm. There is no conductor to orchestrate the beat of the rhythm, no head cicada leading the chorus, no one in charge. The process by which the rhythm of sound (an emergent phenomena) manifests is governed purely by the mechanism of self-organization. Let us break down the system:

  1. Agents: Chirping Cicadas
  2. Shared Context: the acoustic environment shared by all cicadas
  3. Differential: the timing of the chirps
  4. Agent Bias: adjust chirp to minimize timing differences with nearby chirps
  5. Feedback: As more agents begin to chirp in more regular rhythms, this reinforces a rhythmic tendency, further syncing chirping rhythms.
  6. Emergent Phenomena: Regular chirping rhythm.

Even if all agents in the system start off with completely different (random) behaviors, the system dynamics will lead to the coordination of chirping behaviors.
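The text above does not specify a formal model, but a Kuramoto-style phase-coupling sketch captures the listed ingredients: each 'cicada' has a phase in its chirp cycle (the differential), nudges that phase toward the average of what it hears (the agent bias), and this mutual adjustment reinforces itself (feedback) until a common rhythm emerges. The coupling strength and other parameters below are illustrative assumptions.

```python
import math
import random

def simulate_sync(n=50, steps=2000, coupling=0.2, dt=0.05, seed=3):
    """Kuramoto-style sketch: each 'cicada' has a phase in its chirp cycle
    and a similar natural frequency; at every step it nudges its phase
    toward the mean phase it 'hears' (mean-field coupling)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [1.0 + rng.gauss(0, 0.01) for _ in range(n)]

    def coherence(ph):
        # 1.0 = perfectly in sync, near 0 = completely scattered
        re = sum(math.cos(p) for p in ph) / len(ph)
        im = sum(math.sin(p) for p in ph) / len(ph)
        return math.hypot(re, im)

    print("initial coherence:", round(coherence(phases), 2))
    for _ in range(steps):
        re = sum(math.cos(p) for p in phases) / n
        im = sum(math.sin(p) for p in phases) / n
        r, mean_phase = math.hypot(re, im), math.atan2(im, re)
        phases = [
            (p + dt * (w + coupling * r * math.sin(mean_phase - p))) % (2 * math.pi)
            for p, w in zip(phases, freqs)
        ]
    print("final coherence:  ", round(coherence(phases), 2))

if __name__ == "__main__":
    simulate_sync()
```

Starting from scattered phases (low coherence), the agents settle into a shared rhythm (coherence near 1.0) with no conductor orchestrating the beat.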

For another example of the power of self-organization, consider this proposition: You are tasked with getting one thousand people to walk across a bridge, with their movements coordinated so that their steps are aligned in perfect rhythm. You must achieve this feat on the first try (with a group of strangers of all ages who have never met one another).

It is difficult to imagine this top-down directive ending in anything other than an uncoordinated mess. But place people on the Millennium bridge in London for its grand opening and this is precisely what we get:

as the video progresses, watch the movement of people fall into sync

There are a variety of mechanisms that permit such self-organization to occur. In the Millennium Bridge video, the bridge provides the shared context or environment for the walkers (who are the agents in the system). As this shared context sways slightly (differential), it throws each agent just a little bit off balance (feedback). Each individual then slightly adjusts their stance and weight to counteract this sway (agent bias), which serves only to reinforce the collective sway direction. Over time, as the bridge sways ever more violently, people are forced to move in a coordinated collective motion (emergence) in order to traverse the bridge.

What is important to note in this example is that we do not require the agents to agree with one another in order for self-organization to occur. In our earlier example - that of school children forming teams - we can imagine that a variety of factors are at work that have to do with active volition on the part of the children. But in the example above, the observed walking behavior has nothing to do with individual volition or preferences. Instead, the agents have become entangled with their context (which is partially formed of other agents) in ways that constrain their movement options.

Enslaved Behavior

Accordingly, in self-organizing systems, agents that might initially possess a high number of possible states they are able to enact (see also Degrees of Freedom) find their range of freedom becoming increasingly limited, until only a narrow band of behavior remains possible.

Further, while the shared context of the agents might initially be the source of difference in the system (with difference gradually being amplified over time), in reality the context for each agent is a combination of two aspects: the broader shared context (the bridge) and the emerging behaviors of all the other agents within that context. This is to say that once a global behavior emerges, subsequent self-organization of the agents is constrained by the emergent context that the agents are themselves a part of.

Back to {{key-concepts}}

Back to {{complexity}}


 


Scale-Free

'Scale-free' networks are ones in which identical system structure is observed for any level of network magnification.

Complex systems tend towards scale-free, nested hierarchies. By 'Scale-free', we mean to say that we can zoom in on the system at any level of magnification, and observe the same kind of structural relations.


If we look at visualizations of the world wide web, we see a few instances of highly connected nodes (youtube), many instances of weakly connected nodes (your mom's cooking blog), as well as a mid-range of intermediate nodes falling somewhere in between. The weakly connected nodes greatly outnumber the highly connected nodes, but the overall statistical distribution of strongly versus weakly connected nodes follows a power-law distribution. Thus, if we 'zoom in' on any part of the network (at different levels of magnification), we see similar, repeated patterns.

'Scale-free' entities are therefore sometimes fractal-like, although there are scale-free systems that are more about the scaling of connections or flows than the scaling of pictorial imagery (which is what we associate with Fractals or objects that exhibit Self Similarity). Accordingly, a pictorial representation of links in the world wide web does not exactly 'look' like a fractal, but its distribution of connections observes mathematical regularities consistent with what we observe in fractals (that is to say, {{power-laws}}).

A good example here is the fractal features of a leaf:

We can think of the capillary network as the minimum structure required to reach the maximum surface area.

Nature's Optimizing Algorithm

Here, the scale-free structure of the capillary network allows the most efficient transport of nutrients to all parts of the leaf surface with the shortest overall capillary path length. This 'shortest overall path length' is one of the reasons that we might often see scale-free features in nature: it may well be the natural outcome of nature 'solving' the problem of how to best economize flow networks.

minimum global path length to reach all nodes

The two images serve to illustrate the idea of shortest overall path length. If we wish to get resources from a central node to 16 nodes distributed along a surrounding boundary, we can either trace a direct path to each point from the center, or we can partition the path into splitting segments that gradually work their way towards the boundary. While each individual pathway from the center to an individual node is longer in the right-hand image, the total aggregate of all pathways to reach all nodes from the center is shorter. Thus the image on the right (which shows scale-free characteristics) is the more efficient delivery network.
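The following sketch makes the same comparison numerically: reaching 16 points spread around a unit circle either directly from the centre (a 'star') or via four intermediate hubs placed halfway out (a simple two-level branching tree). The hub layout is an illustrative assumption rather than an optimal design, yet the branching version already yields a shorter total network length.

```python
import math

def star_vs_branching(n_leaves=16):
    """Total path length to reach n_leaves points spread evenly on a unit
    circle, either directly from the centre (star) or via 4 hubs placed
    halfway out (a simple two-level branching tree)."""
    def ang_dist(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    leaves = [2 * math.pi * k / n_leaves for k in range(n_leaves)]
    star_total = n_leaves * 1.0                       # n spokes of length 1

    hubs = [math.pi / 4 + k * math.pi / 2 for k in range(4)]
    tree_total = 4 * 0.5                              # centre -> hubs
    for leaf in leaves:
        hub = min(hubs, key=lambda h: ang_dist(h, leaf))
        # hub (radius 0.5) to leaf (radius 1.0), via the law of cosines
        tree_total += math.sqrt(0.25 + 1.0 - math.cos(hub - leaf))
    return star_total, tree_total

star, tree = star_vs_branching()
print(f"star (direct spokes): {star:.2f}")
print(f"branching tree:       {tree:.2f}")            # shorter in total
```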

Example - Street Networks:

We should therefore expect to see such forms of scale-free dynamics in other non-natural systems that carry and distribute flows: thus, if we think of the size distribution of road networks in a city, we would expect a small number of key expressways carrying large traffic flows, followed by a moderate number of mid-scaled arteries carrying mid-scale flows, then a large number of neighborhood streets carrying moderate flows, and finally a very high number of extremely small alleys and roads that each carry very small flows to their respective destinations.

mud fractals and street networks

Fractals, scale-free networks, self-similar entities, and power-law distributions are concepts that can be difficult to disambiguate. Not all scale-free networks look like fractals, but all fractals and scale-free networks follow power laws. Further, there are many power-law distributions that neither 'look' like fractals nor follow scale-free network characteristics: if we take a frozen potato and smash it on the ground, then classify the size of each piece, we would find that the distribution of smashed potato pieces follows a power law (but is not nearly as pretty as a fractal!). Finally, self-similar entities (like the romanesco broccoli shown below) are fractal-like (you can zoom in and see similar structure at different scales), but are not as mathematically precise as a fractal.

credit: Wikimedia commons  (Jon Sullivan)


Back to {{key-concepts}}

Back to {{complexity}}


 


Rules

Complex systems are composed of agents governed by simple input/output rules that determine their behaviors.

One of the intriguing characteristics of complex systems is that highly sophisticated emergent phenomena can be generated by seemingly simple agents. These agents follow very simple rules - with dramatic results.


Simple Rules - Complex Outcomes

How does one replicate the efficiencies of the Tokyo subway map? Simple - enlist slime mould and let them discover it! Results such as these are highly counterintuitive: when we see complicated phenomena, we expect the causal structure at work to be similarly complex. However, in complex systems this is not the case. Even if the agents in a complex system are very simple, the interactions generated amongst them can have the capacity to yield highly complex phenomena.

Slime mold forming the Tokyo subway map

Take it in Context

We can conceptualize bottom-up agents as simple entities with limited action possibilities. The decision of which action possibility to deploy is regulated by basic rules that pertain to the context in which the agents find themselves. Another way to think of 'rules' is therefore to relate them to the idea of a simple set of input/output criteria.

An agent exists within a particular context that contains a series of factors considered as relevant inputs: one input might pertain to the agent's previous state (moving left or right); one might pertain to some differential in the agent's context (more or less light); and one might relate to the state of surrounding agents (greater or fewer). An agent processes these inputs and, according to a particular rule set, generates an output: 'stay the course', 'shift left', 'back up'.

input/output rule factoring three variables
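A rule of this kind can be made concrete as a small input/output function. The thresholds, input names, and outputs below are purely illustrative - a sketch of the idea rather than any particular system's rules.

```python
def agent_rule(prev_direction, light_gradient, n_neighbors):
    """Toy input/output rule using the three kinds of inputs described
    above; thresholds and outputs are illustrative only."""
    if n_neighbors > 4:          # crowded: give way
        return "back up"
    if light_gradient < 0:       # getting darker: adjust course
        return "shift left"
    return f"stay the course ({prev_direction})"

# one agent, three inputs, one output:
print(agent_rule(prev_direction="right", light_gradient=-0.2, n_neighbors=2))
```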

In complex adaptive systems, an aspect of this 'context' must include the output behaviors generated by surrounding agents. Further, while for natural systems the agent's context might include all kinds of factors that serve as relevant inputs, in artificial complex systems novel emergent behavior can manifest even if the only thing informing the context is surrounding agent behaviors.

Example:

Early complexity models focused precisely on the generative capacity of simple rules within a context composed purely of other agents. For example, John Conway's 'Game of Life' is a prime example of how a very basic rule set can generate a host of complex phenomena. Starting from agents arranged on a cellular grid, with fixed rules of being either 'on' or 'off' depending on the status of the agents in neighboring cells, we see the generation of a host of rich forms. The game unfolds using only four rules that govern whether an agent is 'on' (alive) or 'off' (dead). For every iteration:
  1. 'Off' cells turn 'On' IF they have three 'alive' neighbors;
  2. 'On' cells stay 'On' IF they have two or three 'alive' neighbors;
  3. 'On' cells turn 'Off' IF they have one or fewer 'alive' neighbors;
  4. 'On' cells turn 'Off' IF they have four or more 'alive' neighbors.
The resulting behavior has an 'alive' quality: agents flash on and off over multiple iterations, seem to converge, move along the grid, swallow other forms, duplicate, and reproduce.
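The four rules above translate almost directly into code. The sketch below assumes an unbounded grid represented as a set of 'on' cells, and steps a 'glider' (a well-known Game of Life pattern) forward four iterations, after which the same shape reappears shifted one cell diagonally.

```python
from collections import Counter

def life_step(alive):
    """One iteration of the four rules above. 'alive' is the set of
    (x, y) cells that are currently 'on'; the grid is unbounded."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # a cell is 'on' next step if it has exactly 3 live neighbors,
    # or if it is already 'on' and has exactly 2 live neighbors
    return {cell for cell, count in neighbor_counts.items()
            if count == 3 or (count == 2 and cell in alive)}

# a 'glider': after four iterations the same shape reappears, shifted diagonally
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = life_step(cells)
print(sorted(cells))
```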

Conway's Game of Life

Principle: One agent's output is another agent's input!

As we can see from the Game of Life, starting with very basic agents, who rely only on other agents' outputs as their input, a basic rule set can nonetheless generate extremely rich outputs.

While the Game of Life is an artificial complex system (modeled in a computer), we can, in all real-world examples of complexity, observe that the agents of the system are both responders to inputs from their environmental context, as well as shapers of that same environmental context. This means that the behaviors of all agents necessarily become entangled - entering into feedback loops with one another.

Adjusting rules to targets

It is intriguing to observe that, simply by virtue of simple rule protocols that are pre-set by the programmer and played out over multiple iterations, complex emergent behavior can be produced. Here we observe the 'fact' of emergence from simple rules. But we can also imagine natural complex systems where agent rules shift over time. While this could happen arbitrarily, it makes sense from an evolutionary perspective when some agent rules are more 'fit' than others. This results in a kind of selection pressure, determining which rule protocols are preserved and maintained. Here, the discovery of simple rule sets that yield better enacted results exemplifies the 'function' of emergence.

When we couple the notion of 'rules' with context, we are therefore stating that we are not interested in just any rule set that can generate emergent outcomes, but in specific rule sets that generate emergent outcomes that are in some way 'better' with respect to a given context. Successful rule systems imply a fit between the rules the agents are employing and how well these rules assist agents (as a collective) in achieving a particular goal within a given setting.

As a general principle we can think of successful rules as ones that minimize agent effort (energy output) to resolve a given task. That said, in complex systems we need to go a step further and analyze the collective energy output. Thus the best rules will be the ones that result in minimal energy output for the system as a whole to resolve a given task. This may require 'sacrifice' on the part of an individual agent, but this sacrifice (from a game theory perspective) is still worth it at the overall system level.

As agents in a complex system enact particular rule sets, rules might be revised based on how quickly or effectively they succeed at reaching a particular target.

When targets are achieved - 'food found!' - this information becomes a relevant system input.  Agents that receive this input may have a rule that advises them to persist in the behavior that led to the input, whereas agents that fail to achieve this input may have a rule that demands they revise their rule set!

Agents are therefore not only conditioned by a set of pre-established inputs and outputs but are also able to revise their rules. This requires them to gain feedback about the success of their rules and to test modifications. One way of thinking about this is that each agent holds a {{schemata}} about its behavior relative to its context, which can be updated over time so as to better align the two. Further, if multiple agents test different rule regimes simultaneously, then there may be other 'rules' that help agents learn from one another. If one rule leads agents to food in an average of ten steps, and another leads them to food in an average of six steps, then agents adopting the second rule should have the capacity to disseminate their rule set to other agents, eventually suppressing the first, weaker rule. This process of dissemination requires some form of communication or steering, which is often achieved via the use of Stigmergy.
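As a rough sketch of this dissemination dynamic (the rules, step counts, and population size below are invented purely for illustration), imagine two rules characterized only by the average number of steps each takes to reach food. Agents that encounter a better-performing agent copy its rule, so the stronger rule gradually suppresses the weaker one:

```python
import random

# hypothetical rules: average number of steps each takes to reach food (assumed values)
RULES = {"rule_A": 10, "rule_B": 6}
agents = ["rule_A"] * 90 + ["rule_B"] * 10   # the better rule starts out rare

for t in range(20):
    for i in range(len(agents)):
        j = random.randrange(len(agents))          # meet a random other agent
        if RULES[agents[j]] < RULES[agents[i]]:    # adopt the rule that finds food faster
            agents[i] = agents[j]
    print(t, agents.count("rule_B"))               # 'rule_B' takes over the population
```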

Enacted 'rules' are therefore provisional tests of how well an output protocol achieves a given goal. The test results then become another form of input:

Bad test results also become an agent input, telling the agent to: "generate a rule mutation as part of your next enacted output".

Novel Rule formation:

Rules might be altered in different ways. At the level of the individual -

  • an agent might choose to revise how it values or factors inputs in a new way;
  • an agent might choose to revise the nature of its outputs in a new way.

In the first instance, the impact or value assigned to particular inputs (needed to trigger an output) might change based on how successful previous input weighting strategies were in reaching a target goal. In order for this to occur, the agent must have the capacity to assign new 'weights' (the value or significance) to an input in different ways.

In the second instance, the agent requires enough inherent flexibility or 'Degrees of Freedom' to generate more than one kind of output. For example, if an agent can only be in one of two states, it has very little ability to realign outputs. But if an agent has the capacity to deploy itself in multiple ways, then there is more flexibility in the character of the rules it can formulate. This ties back to the idea of {{adaptive-capacity}}.

Rules might also be revised through processes occurring at the group level. Here, even if agents are unable to alter their performance at the individual level, there may still be mechanisms operating at the level of the group which result in better rules propagating. In this case, we would have a population of agents, each with specific rule sets that vary amongst them. Even if each individual agent has no ability to revise their particular rules, at the level of the population -

  • poor rules result in agent death - there is no internal recalibration - but agents with bad rules simply cease to exist;
  • 'good' rules can be reproduced - there is no internal recalibration - but agents with good rules persist and reproduce.

We can imagine that the two means of rule revision - those working at the individual level and those at the population level - might work in tandem. While none of this should seem new (it is analogous to General Darwinism), since complex systems are not always biological ones it can be helpful to consider how the processes of system adaptation (evolution) can instead be thought of as processes of rule revision.

Through agent-to-agent interaction, over multiple iterations, weaker protocols are filtered out, and stronger protocols are maintained and grow. That said, the ways in which rules are revised are not entirely predictable - there are many ways in which rules might be revised, and more than one kind of revision may prove successful (as the saying goes - there is more than one way to skin a cat). Accordingly, the trajectory of these systems is always contingent and subject to historical conditions.

Fixed Rules with thresholds of enactment

Not all complex adaptive behaviors require that rules be revised. We began with artificial systems - cellular automata - where the agent rules are fixed but we still see complex behaviors. There are also examples of natural complex systems where rules are fixed but still drive complex behaviors. These rules, rather than being the result of a computer programmer arbitrarily determining an input/output protocol, are the result of fundamental laws (or rules) of physics or chemistry.

One particularly beautiful example of non-programmed natural rules resulting in complex behaviors is the Belousov-Zhabotinsky (BZ) chemical oscillator.  Here, fixed chemical interaction rules lead to complex form generation:

BZ chemical oscillator

In this particular reaction, as in other chemical oscillators, there are two interacting chemicals, or two 'agent populations', which react in ways that are auto-catalytic. The output generated by the production of one of the chemicals becomes the input needed for the generation of the other chemical. Each chemical is associated with a particular color, which appears only when that chemical is present in sufficient concentrations. The concentrations of these chemicals rise and fall at different reaction speeds, leading to shifting concentrations of the coupled pair. As concentrations rise and fall, we see emergent and oscillating color arrays.

Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Power Laws

Complex System behaviors often exhibit power-laws: with a small number of system features wielding a large amount of system impact.

Power laws are particular mathematical distributions that appear in contexts where a very small number of system events or entities, while rare, are highly impactful, alongside a very large number of system events or entities that, while plentiful, have very little impact.


Power laws arise in both natural and social systems, in contexts as diverse as earthquake intensities, city population sizes, and word-use frequencies.

'Normal' vs 'Power Law' Distributions

Complex systems are often characterized by power law distributions. A power law is a kind of mathematical distribution that we see in many different kinds of systems. It has different properties from the well-known 'bell curve' (also called the 'normal' or 'Gaussian') distribution.

 Let's look at the two here:

Power-law (left) vs Bell-curve (right)

Most people likely remember the bell curve from high school. The fat middle (highlighted) is the 'norm' and the two sides or edges represent the extremes. Accordingly, a bell curve can illustrate things like people's heights - with 'typical' heights being distributed around a large cluster at the middle, and extreme heights (both very tall and very short people) being represented by much smaller numbers at the extremes. There are many, many phenomena that can be graphed using a bell curve. It is suitable for depicting systems that hover around a normative 'middle' and for systems where there are no driving correlations amongst members of the set. That is to say: the height of one person in a classroom is not constrained or affected by the heights of other people.

Power-law distributions are likely as common as bell-curve distributions, but for some reason people are not as familiar with them. They occur in systems where there is no normative middle around which most phenomena cluster. Furthermore, entities within a power-law set are linked by some kind of calibrating feedback relation - meaning that the size of one entity in the system is in some way correlated with (or has an impact on) the size and frequency of other entities. These systems are characterized by a small percentage of phenomena or entities accounting for a great deal of the system's influence or impact.

This small percent is shown on the far left hand side of the diagram (highlighted), where the 'y' axis (vertical) indicates intensity or impact (of some phenomena), and the 'x' axis indicates the frequency of events, actors, or components associated with the impact. The left hand side of the diagram is sometimes called the 'fat head', and as we move along to the right hand side of the diagram, we see what is called 'the long tail'. Like the bell curve, which we can use to chart phenomena such as housing prices, heights, test scores, or household water consumption, the power law distribution can illustrate many different kinds of things. 

Occasionally, the same phenomenon can be illustrated using both a bell curve and a power-law distribution, with each highlighting different aspects of the data.

Example:

Let's say we chart income levels on a bell curve. The majority of people earn a moderate income, while smaller numbers of people earn the very high and very low incomes at the extremes. Plotting this data, we get a chart that looks like the one below:

Wealth in the USA plotted as a bell curve (source: pseudoerasmus)

But we can think of income distribution another way - in terms of the impact or intensity of incomes. Consider this fact of wealth distribution: in the US, if we look at the right side of the bell curve above (the wealthiest people, who make up a small fraction - roughly 1% - of the population), these few people control around 45% of the entire US wealth. Clearly, the bell curve does not capture the importance of this small fraction of extreme wealth holders.

Imagine that instead of plotting the number of people in different income brackets we were to instead plot the intensities of incomes themselves. In this case we would generate a plot showing:

  • 1% (a few people) controlling 45% (a large chunk) of total wealth;
  • 19% (a moderate number of people) controlling 35% (a moderate chunk) of total wealth;
  • 80% (the bulk of the population) controlling 20% (a small fraction) of total wealth.

These ratios plot as a power law, with approximately 20% of the people controlling 80% of the wealth resource.

80/20 Rule

These numbers, while not precisely aligning with US statistics, are not that far off, and they align with what is referred to as the '80/20' rule: where 20 percent of a system's components are responsible for 80 percent of the system's key functions or impacts. This phenomenon was first noted by {{Pareto}}, and is also referred to as a Pareto Distribution. We can find Pareto distributions in many different kinds of phenomena, where the distributions might be applied to aspects such as quantities, frequencies, or intensities. Thus:

  • 20% of our wardrobe is worn 80% of the time;
  • 20% of all English words are used 80% of the time;
  • 20% of all roads attract 80% of all traffic;
  • 20% of all grocery items account for 80% of all grocery sales;

Finally, if we smash a frozen potato against a wall and sort out the resulting broken chunks:

  • 20% of the potato chunks will account for 80% of the total smashed potato.

Such ratios are so common that if you are unsure of a statistic then - provided it follows the 80/20 rule - you are likely safe to make it up! (the frozen potato being a case in point :))

Source: themediaconsortium.org

Rank Order

Another way to help understand how power law distributions work is to consider systems in terms of what is called their 'rank order'.  We can illustrate this with language. Consider a few words from English:

  • 'The' is the most commonly used word in the English language -
    • We rank it 'first' and it accounts for 7% of all word use (rank 1) .
  • "Of" is the second most commonly used word -
    • We rank it 'second' and it accounts for 3.5% of all word use (1/2 of the rank 1 word)

If we were to continue, say looking at the 7th most frequently used word, we would expect to see it used 1/7th as frequently as the most commonly used word. And in fact -

  • 'For' is the seventh most commonly used word,
    • We rank it seventh and it accounts for 1% of all word use (1/7 of the rank 1 word).

This power-law phenomenon is known as 'Zipf's Law' after George Kingsley Zipf, the man who first identified it. Zipf's law indicates that if, for example, you have 100 items in a group, the 99th item will occur 1/99th as frequently as the first item. For any element in the group, you simply need to know its rank in the order - 1st, 3rd, 25th - to understand its frequency (relative to the top-ranked item in the group).

The constant in Zipf's law is '1/n' , where the 'nth' ranked word in a list is used 1/nth as often as the most popular word.
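A few lines of Python are enough to see what the 1/n constant implies. The 7% share for the top-ranked word is taken from the figures above; the lower ranks are simply computed from the rule:

```python
top_share = 0.07                      # 'the' accounts for roughly 7% of all word use (rank 1)
for rank in (1, 2, 7, 50, 100):
    share = top_share / rank          # Zipf's 1/n rule
    print(rank, round(share * 100, 2), "% of all word use")
# rank 2 -> 3.5%, rank 7 -> 1.0%, matching 'of' and 'for' above
```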

Were all power-laws to follow a Zipf's law then:

  • the 20th largest city would be 1/20th the size of the largest;
  • the 10th most popular child's name would be used 1/10 of the time compared to the most popular;
  • the 3rd largest earthquake in California in 100 years would be 1/3 of the size of the largest;
  • the 50th most popular product would sell 1/50th as often as the most popular.

This is a very easy and neat set, and it represents perhaps the most straightforward power law. That said, there can be other power-law ratios between elements which, while remaining constant, are not such a 'clean' constant. These follow the same principle but are just more difficult to express (and calculate). For example:

'1/n^1.07' would be a power law where the 'nth' ranked word in a list is used 1/n^1.07 times as often as the most popular word.

Pretty in Pink

Clearly, an exponent like 1.07 makes for a less satisfying ratio than a clean 1/n. In fact, the 1/n ratio is so pleasing that it has a few different names. 1/n is mathematically equivalent to the 1/f ratio, but instead of highlighting an element's rank in the list, 1/f highlights its frequency (the format is different but the meaning is the same).

'1/f' is also described as 'pink noise' - a statistical pattern distinct from 'brown' or 'white' noise. Each class of 'noise' pertains to a different kind of randomness in a system. In other words, while many systems exhibit random behaviors, some random behaviors differ from others. We can think of 'pink', 'white', and 'brownian' noise as being different 'flavors' of randomness. Without getting into too much detail here, 1/f noise seems to occur frequently in natural systems, and can be associated with beauty. In non-mathematical terms, pink noise involves a frequency ratio of component distributions such that there is just enough correlation between elements to provide a sense of unity, and just enough unexpectedness to provide variety. The human mind seems to enjoy this balance between the two, which is why pink noise can be found in music or artworks that we find beautiful. White noise is too random (no correlation) and brownian noise is too correlated (no unexpected interest).

Dynamics generating Power-laws

Power laws distributions have been identified in many complex system behaviors, such as:

  • earthquake size and frequency
  • neuron activity
  • stock prices
  • web site popularity
  • academic citation network structure
  • city sizes
  • word use frequency
  • ....and much more!

Much time and energy has gone into identifying where these distributions occur and also trying to understand why they occur.


Growing Riches

A strong contender for explaining the presence of power-law dynamics is that they arise in systems that involve both growth and Preferential Attachment. Understood colloquially as 'the rich get richer', preferential attachment is the idea that popular things tend to attract more attention, thereby becoming more popular. Similarly, wealth begets wealth. The combination of growth and preferential attachment is therefore associated with positive feedback. It can be used to explain the presence of power-law distributions in the size and number of cities (bigger cities attract more industry, thereby attracting more people...), the distribution of citations in academic publishing (highly cited authors are read more, thereby attracting more citations), and the accumulation of wealth (rich people can make more investments, thereby attracting more wealth).
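A minimal sketch of growth plus preferential attachment (the counts below are arbitrary) looks like this: each newcomer attaches to an existing entity with probability proportional to the attachments that entity already holds, and after many steps a few entities hold most of the links while the majority hold very few.

```python
import random

attachments = [1, 1]                              # two seed entities
for _ in range(2000):
    # 'the rich get richer': pick a target in proportion to its existing attachments
    target = random.choices(range(len(attachments)), weights=attachments)[0]
    attachments[target] += 1
    attachments.append(1)                         # growth: a newcomer arrives with one link

print(sorted(attachments, reverse=True)[:5])      # a handful of very large hubs
print(sorted(attachments)[len(attachments) // 2]) # ...while the median entity stays tiny
```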


Push forward and Push back

Further, power-laws might be understood as phenomena that occur in systems involving both positive and negative feedback interactions as co-evolving drivers of the entities within the system. Such systems involve feedback dynamics that are out of balance: some dynamics ({{positive-feedback}}) amplify certain system features, while other dynamics ({{negative-feedback}}) simultaneously 'dampen' or constrain these same features. At the same time there is a correlation between these push and pull dynamics - the greater the push forward, the more it generates a pull back, and vice versa. The imbalance in this interplay between interacting forces creates feedback loops that lead to power-law features.

An example of this would be that of reproducing species in an eco-system with limited carrying capacity. Plentiful food would tend to amplify reproduction and survival rates (positive feedback), but as population expands this begins to put pressure on the food resources, leading to a push back (lower survival rates), and consequently a drop in population levels. The two driving factors in the system -  growing population and dwindling food - are causally intertwined with one another and are not necessarily in balance. If the system achieves a perfect balance then the system will find an equilibrium - the reproduction rate will settle to a point where it matches the carrying capacity. But if there are forces that drive the system out of balance, or if there is a lag time between how the two 'push' and 'pull' (amplifying and constraining) dynamics interact, then the system cannot reach equilibrium and instead keeps oscillating between states (see {{Bifurcations}} ). 
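One classic toy model of such push/pull interplay - used here purely as an illustration, not taken from the text - is the logistic map, where a growth term is checked by a carrying-capacity term. At low growth rates the population settles to a single equilibrium; at higher rates it never settles, oscillating between two and then four states:

```python
def iterate(r, x=0.2, steps=60):
    """Run the logistic map x -> r * x * (1 - x) and return the final value."""
    for _ in range(steps):
        x = r * x * (1 - x)   # growth pushed forward, pulled back by the (1 - x) term
    return x

for r in (2.8, 3.2, 3.5):
    print(r, [round(iterate(r, steps=s), 3) for s in (60, 61, 62, 63)])
# r = 2.8 settles on one value; r = 3.2 flips between two; r = 3.5 cycles among four
```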


Example: What's in a Name?

It has been shown that the frequency of baby name occurrences follows a power-law distribution. In this example, what is the push/pull interplay that might lead to the emergence of this regularity?

While each set of parents chooses their child's name independently, they do so within a system where their choices are somewhat driven or constrained by the choices being made by parents around them. Suppose there is a name that, for some reason, has become prevalent in popular consciousness - perhaps a character name in a popular book or TV series. It is not necessary to know the precise reasons why this particular name becomes popular, but we can imagine that certain names seem to resonate in popular consciousness or 'the zeitgeist'. Let us take the name 'Jennifer'. An obscure name in the 1930s, it became the most popular girl's name in the 1970s. During that time, if you were one of the approximately 16 million girls born in the US, there was a 2.8% chance you would be named Jennifer! And yet, the name had plummeted back to 1940s levels by 2012.

the rise and fall of Jennifer

But how can the rise and fall of 'Jennifer' be described using push and pull forces? We can imagine a popular name being like a contagion, where a given name catches on in popular consciousness. During its initial spread, the name is highlighted even further in popular consciousness, potentially expanding its appeal.  At the same time, the very fact that the name is popular causes a tendency for resistance - if Jennifer is on a short list of possible baby names, but a sibling or close friend names their child 'Jennifer', this has an impact on your naming choice. In fact, the more popular the name becomes, the more pullback we can expect. As more and more people tap into the popularity of a name, it becomes more and more commonplace, leading to a sense of overuse, leading to a search for new novelty. The interactions of push and pull cause the name to both rise and fall. In a system of names, Jennifer is a name that had an expansion rate caused by rising popularity feedback, but then a decay rate caused by overuse and loss of freshness.


The Long Tail

An additional feature of power law distributions that should not be overlooked is what is sometimes called the "power of the long tail". While power-law systems have a few strongly performing elements in the upper 20%, there are still many important actors in the remaining 80% of the distribution. One recent feature of information technologies is that it has become much easier to "find" the specific offerings within this 80%. If we think about bookstores from only a decade ago, they needed to carry the "best-sellers": if your reading interests fell outside of the norm then it would be difficult to find books that would serve as the right "fit" or "niche" for your reading interests. Today, with information flows having become so inexpensive, online bookstores are not limited by the number of titles they can carry, so people can find the niche books they actually want to read rather than having to compromise around the average. In some ways this echoes eco-systems, where there can be a few top players, but where there also exist many viable micro-niches that can be populated. There are many domains where accessing this "long tail" will lead to more choice and precision in complex systems.


Proviso

While power-laws are often pointed to as 'the fingerprint of complexity', it should be noted that their recent ubiquity is not without controversy. While many studies highlight the presence of these mathematical regularities in a host of diverse systems, others argue that the statistics upon which these findings are based are often skewed, and that power-laws may not be as common as is frequently stated. Researchers looking to affirm the existence of these patterns may ignore results where they do not occur, and attribute their presence to systems that may or may not actually hold these properties.


Back to {{key-concepts}}

Back to {{complexity}}



 

Governing Features ↑

Path Dependency

'Path-dependent' systems are ones where the system's history matters - the present state is contingent upon random factors that governed system unfolding, and that could have easily resulted in other viable trajectories.

Complex systems can follow many potential trajectories: the actualization of any given trajectory can be dependent on small variables, or "changes to initial conditions" that are actually pretty trivial. Accordingly, if we truly wish to understand system dynamics, we need to pay attention to all system pathways (or the system's phase space) rather than the pathway that happened to unfold.


Inherent vs Contingent causality

Why is one academic cited more than another, one song more popular than another, or one city more populated than another? We tend to imagine that the reason must have to do with inherent differences between academics, songs or cities. While this may be the case, the dynamics of complex systems may lead one to doubt such seemingly common-sense assumptions.

We describe complex systems as being non-linear - this means that small changes in the system can have cascading, large effects (think of the butterfly effect) - but it also implies that history, in a very real way, matters. If we were to play out the identical system with very slight changes, the specific history of each system would play a tangible role in what we perceive to be significant or insignificant.

Think about a cat video going viral. Why this video? Why this particular cat? If on a given day 100 new cat videos are uploaded, what is to say that the one going viral is inherently cuter than the other 99 out there? Perhaps this particular cat video really is more special. But a complexity perspective might counter with the idea of path-dependency: that amongst many potentially viral cat videos, a particular one played this potentiality out - an accident of a specific historical trajectory, rather than a statement about the cuteness of this particular cat.

Butterfly Effects:

The reason for this returns to the path-dependent nature of the system - the fact that it is Sensitive to Initial Conditions. Suppose we have six cat videos that are of inherently equal entertainment value. All are posted at the same time. We now roll a six-sided die to determine which of these gets an initial 'like'. This initial roll of the die causes subsequent rolls to be slightly weighted - whatever received an initial 'like' has a fractionally larger chance of being highlighted in subsequent video feeds. Let us assume that subsequent rolls reinforce, in a non-linear manner, the first 'like'. Over time, like begets like, the rich get richer, and we see one video going viral.

If we were to play out the identical scenario in a parallel universe, with the first random toss of the die falling differently, then an entirely different trajectory would unfold. Such is the notion of 'path-dependency'. Of course, it is normal to assume that, given the choice of two pathways into an unknown future, the path we take matters and will change outcomes. But in complex systems this constitutes an inherent part of the dynamics, and a 'choice' is not something that one actively elects to make as much as something that arises due to random system fluctuations.
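The thought experiment can be sketched in a few lines of code (the reinforcement exponent and the number of 'likes' are arbitrary assumptions): six equally 'cute' videos, with each new like allocated with a probability weighted - non-linearly - by the likes already received. Re-running the identical simulation tends to crown a different winner each time.

```python
import random

def run():
    likes = [1] * 6                              # six inherently identical videos
    for _ in range(5000):
        # non-linear (here quadratic) reinforcement: early random luck gets locked in
        winner = random.choices(range(6), weights=[n ** 2 for n in likes])[0]
        likes[winner] += 1
    return likes

for trial in range(3):
    print(run())   # in each run, a different video tends to end up dominating
```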

Another way to think about this is with regards to the concept of Phase Space. Any complex system has a broad space of potential trajectories (its phase space), and the actualization of any given trajectory is subject to historical conditions. Thus, if we want to understand the dynamics of the system, we should not only attune to the path that actually unfolded - rather, we should consider the trajectories of all possible pathways. This is because the actual unfolding of any given pathway within a system is not inherently more important than all of the other pathways that could equally have unfolded.

One of the reasons that computer modeling is popular in understanding complex systems has to do with this notion of phase space and path dependency. A computer model allows us to 'explore the phase space' of a complex system: seeing if system trajectories are inherently stable and repeat themselves consistently, or if they are inherently unstable and might manifest in quite different ways.

Sometimes we can imagine that a system unfolds differently in phase space, but that this unfolding tends towards particular behaviors. We call these system tendencies Attractor States. One of the features of complex systems is that they often have multiple attractors, and it is only by allowing the system to unfold that we are able to determine which attractor the system ultimately converges towards. It would be a mistake, however, to regard a particular attractor as more important than another based only upon one given instance of a system unfolding.

Another feature of path dependency is that, once a particular path is enacted, it can be very difficult to move the system away from that pathway, even if better alternatives exist.

A great example of path dependency is the battle between VHS and BETA as competing video formats. According to most analysts, BETA was the superior format, but due to factors involving path dependency, VHS was able to take over the market and squeeze out its superior competitor.

Another example is that of the QWERTY keyboard. While initially a solution to the problem of keys jamming when pressed too quickly on a manual typewriter, the solution actually slows down the process of typing. However, even though we have long since moved to electronic and digital keyboards where jamming is not a factor, we are 'stuck' in the attractor space that is the QWERTY system. This is partially due to the historical trajectory of the system, but also to all of the reinforcing feedback that works to maintain QWERTY: once people have learnt to type on one system, it is difficult to instigate change. One way of saying this is to refer to the system being 'locked-in', or to refer to "Enslaved States".

An urban example may also be instructive: in Holland people bike as a normal mode of transport; in North America they drive. We can make arguments that there are inherent differences between North American and Dutch cultures that create these differences, but a complexity argument might propose, instead, that such differences are due to path-dependency. Perhaps any preferences the Dutch have for biking were originally only random. That being said, over time, infrastructure has been created in the Netherlands that incentivizes biking (routes everywhere) and disincentivizes driving (many streets closed to traffic, lack of parking, inconvenient, slow commutes). In North America, we have created infrastructure that incentivizes driving: big streets, huge parking areas close to where we work, and a lack of other transport alternatives. We then arrive at a situation where the Dutch bike and the North Americans drive. But place a North American in Holland and they will soon find themselves happily biking, and place a Dutchman in the USA and they will soon find themselves purchasing a vehicle to drive along with everyone else. Neither driving nor biking is inherently 'better' in so far as the commuter is concerned (although there may be more environmental and health benefits associated with one versus the other), but the pathways each country has taken wind up mattering, and reinforcing behaviors through feedback systems.

If we are able to better understand how to break out of ill-suited path-dependency, we may be able to solve a variety of problems that seem to be 'inherent' or 'natural' choices or preferences.

Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Open / Dissipative

Open & dissipative systems, while 'bounded' by internal dynamics,  nonetheless exchange energy with their external environment.

A system is considered to be open and dissipative when energy or inputs can be absorbed into the system, and 'waste' discharged. Here, system inputs like heat, energy, food, etc., can traverse the open boundaries of the system and ‘drive’ it towards order: seemingly in violation of the second law of thermodynamics.


Complex OutLaws!

For those who haven't dusted off their high school science textbooks recently, it is worth a quick refresher on the 2nd law of thermodynamics. Initially formulated by Sadi Carnot in 1824 (he was looking at the flow of heat in steam engines), the law has been expressed in various technically precise ways. For our purposes, the important characteristic of these definitions is the idea of loss of order. Any ordered system will eventually move towards disorder. There is no way of getting around it. Things get messy over time - that's the Second Law. Everything ultimately decays. You, me, the world, the universe.

We can contemplate the metaphysical implications of this (the 2nd law is a bit of a downer) over a cup of coffee, while watching this video. We see illustrated the sad,  inevitable decrease in the cream's order as it meets with coffee (it's pretty relaxing actually):

Cream dis-ordering as it enters coffee

What the 2nd Law states is that something is ultimately lost in every interaction, and because of that, more and more disorder is ultimately created. We can ask heat to do work in driving a steam engine, but some of the heat will always be lost in translation, so that even if we are able to produce localized work or order, more disorder has ultimately been created in the universe as a whole. We call this inevitable increase in disorder 'entropy'.

But wait - you say - there is order all around us! While this may appear true, it is because what appear to be violations of the 2nd Law are achieved within the boundaries of a particular system. While a particular system can gain order, it is only because its disorder is simultaneously being dissipated into the surrounding context. Local order (within the system) is thus maintained at the expense of global disorder (within the environment). Were the system to be fully closed from its context, it would be unable to maintain this local order.

Thus, the ability to increase order in apparent violation of the 2nd Law is called Negentropy - and one of the ways in which negentropy can be generated is by creating a system that is 'open and dissipative': meaning that an energy source can flow in to drive order, and waste can flow out to dissipate disorder.

Example:

A famous example of this dynamic is in Benard/Rayleigh convection rolls (a phenomenon studied by {{ilya-prigogine-isabelle-stengers}} as an example of self-organizing behavior). In this example, we have fluid in a small Petri dish, heated by a source placed under the dish. The behavior of the fluid is the system that we wish to observe, but this system is not closed: it is open to the input of heat that traverses the boundary of the Petri dish. Further, while heat can 'get into' the system, it can also be lost to the air above as the fluid cools. Note that the overall system clearly has a defined 'inside' (the fluid in the Petri dish) and a defined 'outside' (the surrounding environment and the heat acting upon the Petri dish), but there is not full closure between the inside and outside. This is what is meant when we say that complex systems are Open / Dissipative. We understand them as bounded (with relations primarily internal to that boundary), but nonetheless interacting in some way with their surroundings. Were the boundary fully closed, no increase in order could occur.

Let us turn now to the flows driving the system. As heat is increased, the energy of this heat is transferred to the fluid, and the temperature differential between the top and the bottom of the liquid causes heated molecules to be driven upwards. At the same time, the force of gravity causes the cooler molecules in the fluid to be driven downwards. Finally, the drag forces acting between rising and falling molecules cause their behaviors to become coordinated, resulting in the 'roll' patterns associated with Benard convection.

Rayleigh/Benard Convection (fluid of oil/ silver paint)

The roll patterns that we observe are a pattern: a global structure that emerges from the interactions of many agitated molecules without being 'coordinated' by them. What helps drive this coordination is the dynamics of the interacting forces that the molecules are subjected to (driving heat flows and counteracting gravity pressures), as well as how the independent molecular responses to these pressures feed back to reinforce one another (through the drag forces exerted between molecules). That said, the fluid molecules do nothing on their own absent the input of heat. Instead, heat is the flow that drives the system behavior. Further, as the intensity of this flow is amplified (more heat added), the behavior of the fluid shifts from regular roll patterns to more turbulent patterns.


Setting boundaries

{{ilya-prigogine-isabelle-stengers}} were the first to highlight the importance of open, dissipative structures in generating complexity dynamics. Earlier works in General Systems Theory ({{ludwig-v-bertalanffy}}) attuned to the complex dynamics at work within an internal structure, but did not make a distinction between open and closed structures. Closed structures, in contrast to open structures, do not process new inputs and are therefore unable to generate novelty.

At the same time, systems need some sort of boundary or structure so as to hold together components with enough collective identity that they can work in tandem to process flows. It is therefore important to determine the appropriate boundary of any complex system under study, and what kinds of flows are relevant in terms of crossing that boundary.

Often, complexity involves multiple overlapping systems, each with their own internal dynamics and external flows, but systems can become entangled as one system's exports become another's inputs. In order to simplify these dynamics, it is perhaps helpful to try to identify which groups of agents in a system belong to a particular class that shares a common driving flow, and then examine the dynamics with respect to only those flows and behaviors. Systems can then be layered onto systems to build a more complete understanding of the dynamics at play.


Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Networks

Network theory allows us to think about how the dynamics of agent interactions in a complex system can affect the performance of that system.

Network theory is a huge topic in and of itself, and can be looked at on its own, or in relation to complex systems. There are various formal, mathematical ways of studying networks, as well as looser, more fluid ways of understanding how networks can serve as a structuring mechanism.


Why Networks?

We can think of networks in fairly simple terms: imagine, for example, a network of aircraft traveling between hubs and terminals, or a network of people working together in an office. Network analysis operates under the premise that, by looking at the structure of the network alone, we can deduce something about how the network will function, as well as information about particular nodes within the network. For example, the image below could illustrate many different kinds of networks: perhaps it is an Amazon delivery network, or a social network, or an academic citation network. What is interesting is that, even without knowing anything about the kind of network it is, we can still say some things about how it is structured. The network below has some pretty big hubs - around six of them that are well connected to other nodes, but not strongly connected to one another. What would be the dynamics of this network if it were a social network, or the network of a company?

What might we learn from the network?

By looking at the diagram we might learn about how information or control is exerted, about which entities are isolated, and about how protracted communication channels might be. A work network in which I need to talk to my superior, who in turn talks to his boss, who in turn is one of three bosses who only talk to each other, creates very different dynamics than a network where I have connections to everyone, or where there is only one chain of command rather than three.

Network theory attempts to understand how different network structures might lead to different kinds of system performances. The field uses domain specific language - speaking of nodes, edges, degree centrality, etc. - with much of this detail falling outside of the scope of this website.

What is important is that complex systems are made up of individual entities and, accordingly, the ways in which these entities relate to one another matter in terms of how the whole is structured. Networks in complex adaptive systems are composed of individual agents, and the relationships between these agents tend to evolve in ways that lead to power law distributions between highly and weakly connected agents. This is due to the dynamics of Preferential Attachment whereby 'the rich get richer'.
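Assuming the third-party networkx library is available (the node counts below are arbitrary), a short sketch can grow such a network through preferential attachment and show how unevenly connections end up being distributed:

```python
import networkx as nx

# grow a network node by node, each newcomer linking preferentially to well-connected nodes
G = nx.barabasi_albert_graph(n=1000, m=2)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("most connected hubs:", degrees[:5])                 # a few heavily connected nodes
print("median connectivity:", degrees[len(degrees) // 2])  # most nodes sit in the long tail
```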

At its most extreme, network theory advances the idea that relationships between objects can have primacy over the objects themselves. Here, the causal chain is flipped from considering objects or entities as being the primary causal figure that structures relationships, to instead exploring how relationships might in fact be the primary driver that act to structure objects or entities.

Generalizing Network Knowledge

In the social sciences, Systems Theory (developed by Ludwig V. Bertalanffy), was the first to endeavor to examine how networks could play a key structuring role in how a range of entities function. Systems theory positioned itself as a meta-framework that could be applied in disparate domains - including  physics, biology, and the social sciences - and it attracted a wide following. Rather than focusing upon the atomistic properties of the things that make up the system, systems theory instead attuned to the relationships that joined entities, and how these relationships were structured.

Gregory Bateson illustrates this point nicely when he considers the notion of a hand: he asks, what kind of entity are we looking at when considering a hand? The answer depends on one's perspective. We can say we are looking at five digits, and this is perhaps the most common answer (or four fingers and a thumb). If we look at the components of the hand in this manner, we remain focused on the nature of the parts - we might look at the properties of each finger and how these are structured. However, we can answer the question another way: instead of seeing five digits we can say that we see four relationships. Bateson's point was that the way in which the genome of an organism understands or structures the entity 'hand' is more closely aligned with the notion of relationships rather than that of digits or objects. Accordingly, if we are to better understand natural entities we should begin to examine them from the perspective of relations rather than objects.

“You have probably been taught that you have five fingers. That is, on the whole, incorrect. It is the way language subdivides things into things. Probably the biological truth is that in the growth of this thing – in your embryology, which you scarcely remember – what was important was not five, but four relations between pairs of fingers.” - Gregory Bateson

In a similar vein, {{Alan-Turing}} (father of the computer!) tried to analyze the range of fur patterns seen on animals (spots, patches or lines) as being different manifestations of a common driving mechanism - where shifting the timing and intensities of the relationships within the driving mechanism would result in shifts in which pattern manifests. Rather than thinking of these distinctive markings as things 'in and of themselves', Turing wanted to understand how they might simply be different manifestations of more fundamental driving relationships.

Turing based his ideas on a reaction/diffusion model showing how shifting intensities of chemical relationships could create different distinct patterns.
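As a hedged illustration only - the code below uses the commonly cited Gray-Scott form of a reaction-diffusion system rather than Turing's original equations, with parameter values chosen purely for demonstration - a one-dimensional sketch shows two diffusing, reacting chemicals producing spatial structure from a nearly uniform start. Shifting the feed and kill parameters shifts the kind of pattern that emerges.

```python
import numpy as np

n, steps = 200, 5000
Du, Dv, feed, kill = 0.16, 0.08, 0.035, 0.060   # illustrative parameter choices
U = np.ones(n)
V = np.zeros(n)
V[n // 2 - 5 : n // 2 + 5] = 0.5                # a small perturbation in the middle

def laplacian(x):
    # discrete diffusion on a ring: each point exchanges with its two neighbours
    return np.roll(x, 1) - 2 * x + np.roll(x, -1)

for _ in range(steps):
    uvv = U * V * V                              # the two chemicals react with one another
    U += Du * laplacian(U) - uvv + feed * (1 - U)
    V += Dv * laplacian(V) + uvv - (feed + kill) * V

print(np.round(V[::20], 2))   # non-uniform structure has emerged from the near-uniform start
```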


Networks in Complexity Theory

Network theory is important in complexity thinking because of how the structure of the network can affect the way in which emergence occurs: certain dynamics manifest in more tightly or loosely bound networks, and information, a key driver of complex processes, moves differently depending on the nature of a given network.

Small Worlds, Growth & Preferential Attachment, Boolean Networks

Key work of network theorists include that of:

  • {{steven-strogatz}} who developed small world networks where information can move quickly across the network; 
  • {{Albert-laszlo-barabasi}} who showed how networks that observe {{power-laws}} can be generated, following the rules involving both 'growth' and '{{preferential-attachment}}'.
  • {{Stuart-Kauffman}} who developed the theory of 'boolean' networks, where any series of linked nodes will ultimately move into regular regimes or cycles of behavior over multiple iterations in time.

Philosophical Interpretations

Alongside these more technical ways of understanding networks, an appreciation of the more fundamental role of networks in structuring reality has also gained prominence. Networks imply that functionality is something that is distributed, non-centralized, and shifting. In the social sciences, Actor Network Theory considers how agents' power can be formed through network interactions. For the philosopher Gilles Deleuze, the world is composed of what he terms {{rhizomes}}, a concept that parallels that of a network in the sense of being non-centralized, shifting, and entangled.


Historic Roots

The origins of network theory stretch back to an earlier 'graph theory', a branch of mathematics developed by Leonhard Euler and made famous by his use of graph theory to solve the "Konigsberg bridge problem". For a quick intro watch the video here:



This kind of graph analysis was considered as a relatively minor sub-field of mathematics, and only resurged when Barabasi reinvigorated the field (and transformed it into Network theory), with his network analysis work. Barabasi's work gained prominence as he was able to show how network theory could be applied to understanding the structural and functional properties of things like the world-wide-web. Today, network analysis is used in a huge array of disciplines in order to try to understand how the structure of relationships affects the functioning of a given entity - both at the level of the entire structure, and at the level of individual nodes (people, roads, websites, etc.), within the network.

Limitations?

It is perhaps worth noting that, along with computational modeling, network analysis is one of the central ways in which complexity dynamics are explored in many fields. While this kind of analysis can potentially be very helpful, the ubiquity of this strategy may have overshadowed some of its potential shortcomings. Network analysis can be very effective at demonstrating how {{driving-flows}} can move through a system, and how {{information-theory}} that steers the system can be relayed, but the precise configuration of networks often has surprisingly little to do with "classic" complex systems that we observe in the natural world.

If we are interested in the dynamics that form ripples in sand dunes, roll patterns in Benard cells, murmurations of starlings, or even the emergence of ordered entities in Conway's Game of Life, then network structures do not appear to be playing any particular role. It is not as though graphing relationships between individual grains of sand on a dune will help us unravel the dynamics that form the emergent ripples. While network analysis often tries to pinpoint distinct actors in a system, very often agents in a complex system do not behave in distinctive ways. It is therefore somewhat surprising that network analysis has garnered so much strength as a key tool in complex systems research. Again, this is not to say that networks do not matter - certainly some complex systems, like the internet, have key nodes (like "wikipedia") that, once entrenched, help steer the system dynamics. It is just that there are many other features of complexity dynamics that may be overlooked if our primary focus is only on network relationships in a system.


Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Iterations

CAS systems unfold over time, with agents continuously adjusting behaviors in response to feedback. Each iteration moves the system towards more coordinated, complex behaviors.

The concept of iterative, incremental shifts in a system might seem innocuous - but with enough agents and enough increments we are able to tap into something incredibly powerful. Evolutionary change proceeds in incremental steps - and with enough of these steps, accompanied by feedback at each step, we can achieve fit outcomes. Any strategy for increasing the frequency of these iterations will further drive the effectiveness of this iterative search.


One of the keys to enabling complex adaptation to manifest in a given system is the ability for these systems to unfold, with system complexity or fitness being enhanced with each ensuing step.

That said, the kinds of outcomes we see being derived from iterative unfolding differ somewhat in kind: some iterative processes lead to 'fitness' with respect to a given context, whereas other kinds of iterative processes generate pattern, but not necessarily fitness as the term would more generally be understood. Whether or not patterns might fulfill some other kind of fitness criteria is a more nebulous question, which we will get into later.

Differences and Repetitions 

Prior to that, let us first clarify what we mean by an iteration. We can think of an iteration in two distinct ways: the first involving sequential iterations, and the second involving parallel iterations. Thus we can imagine a system that unfolds over the course of 100 generations, or we can imagine a system that has 100 components. Each generation can undergo a mutation, testing a different strategy of performance, or, in the case of the simultaneous system, each component of the system can have a slightly different performance strategy. Thus, while we tend to think of iterations as sequential 'versions' of a class of elements, in essence we can also have multiple 'versions' that operate in parallel rather than sequentially. If we recall the example of ants searching for food, we have many ants performing search in parallel - many iterations of ant behavior proceeding simultaneously.

That said, the notion of sequence is important, because it implies the possibility of feedback: each version of action can be assessed, and undergo a modification at every time step based on feedback - has a particular strategy moved closer to or further from a goal?

Example

Let's start with 100 ants and give them 10 seconds to scurry around a table where we have placed one tasty peanut butter sandwich. Let's say only one ant finds this big cache of food on the first go - Victory! This particular food-locating strategy played out successfully for this particular ant in what we can call 'Version 1.0'. What now needs to propagate through the colony as a whole is how to repeat this success - and this is where feedback enters into the picture. As part of Version 1.0 the victorious ant has gleefully deposited a bunch of pheromone traces en route to the cache. "Version 2.0" has a couple of ants pick up on that trail, find the food, and pump up the pheromone signal, and so forth: through an iterative sequence of time steps - seeking, finding, and signaling - more and more ants are drawn to the yummy sandwich.

It is worth noting that even in round one, all ants had an equal capacity to find food - the fact that one ant rather than another was successful was effectively random. Thus what needed to propagate through the system was not some unique new superpower that this particular ant had (like extra peanut butter receptors); instead, what needed to be replicated was the way in which the ants' random search strategies were directed.
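The example can be sketched as follows (the number of spots, the colony size, and the pheromone increments are all made-up values): ants choose where to search with probabilities weighted by a shared pheromone signal, and each success reinforces that signal, so successive iterations funnel more and more ants toward the food.

```python
import random

positions = list(range(20))                 # twenty spots on the table
FOOD = 7                                    # the sandwich sits at spot 7
pheromone = {p: 1.0 for p in positions}     # initially every spot is equally attractive

for iteration in range(10):
    found = 0
    for ant in range(100):
        spot = random.choices(positions, weights=[pheromone[p] for p in positions])[0]
        if spot == FOOD:
            pheromone[FOOD] += 1.0          # a successful ant strengthens the trail
            found += 1
    print(iteration, found)                 # round by round, more ants converge on spot 7
```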

We can think of other examples of sequential iterations where the dynamics differ slightly in terms of what is being iterated. For example, the pathway from the first PC to today's smartphone also proceeded by iteration, but the feedback driving each iteration entailed incremental system enhancements - a step-by-step learning from the previous model (feedback), followed by adjustments and improvements in the next round.

There is thus a subtle difference between iterations that involve feedback that is generative in terms of modifying or enhancing the inherent nature of the agents in the system and feedback that is more about propagating a behavior that is already available to all the agents in the system (but only randomly enacted by some and not other agents).  

Many of the dynamics observed in complex systems have more to do with the iterative propagation of a particular behavioral regime. One form of propagation dynamics involves relaying a particular strategy that helps deliver a given resource or energy source to the group as a whole (patterns emerging that help direct slime mould or ants to viable food sources). Another involves driving a system towards regimes that minimize the frictions or energy expenditures the system is encountering: water molecules coalescing into movement patterns that reduce internal drag differentials (generated by processing heat in Benard rolls), or metronomes synching so as to minimize the movement frictions produced by their differentials.

With each iteration of these systems, the overall performance gets just a little bit better: energy sources that fuel a group become easier to find and access, and energy expenditures demanded of a group (due to the forces imposed by an external driver) are modified so as to process these drives in a more frictionless, smooth manner. In both cases we can think of the system as trying to enter into regimes that minimize global effort.

 

Iterations for Fitness:

If a system can exist in many different kinds of states, with some states being more 'fit' than others, it is helpful if that system has an opportunity to explore different state possibilities. The faster it can explore the possibilities, the more likely it is to chance upon a state that is more productive or useful than another. This is why it is useful if a complex system has either a lot of agents, a lot of generations of agents, or both.

If we imagine a complex system as being capable of existing in many different kinds of states, then we can think of iterations as ways in which this {{phase-space}} of system possibilities is explored. It is therefore useful to think about whether a given group of agents in a system is being offered enough iterative capacity to explore this phase space quickly enough to learn anything of use. An ant colony of only 10 ants might do a very poor job of finding food - exhausting itself to death before it succeeds in finding nourishment. In principle, nothing is wrong with the ants (agents), the driving flows (food), or the signaling (pheromones). There is simply not enough iterative capacity in the system for it to learn.


Iterations for Pattern

Fractals:

The examples described above pertain to how iterations combined with feedback can steer a system towards effective behavior. But there is another way in which iterations are explored, in terms of their capacity to produce emergent pattern through simple step by step rules.

Here we would describe the nature of {{fractals-1}} generation, and how only a few rule steps, repeated over and over, can generate complex form that might be described as "emergent". Fractals like the Koch Curve or Sierpinski Triangle are generated by simple geometric steps (which we can call iterations), and more complex fractals like the Mandelbrot set can be created by a simple formula that proceeds in recursive steps.
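As a small illustration, the Koch curve can be written as a string-rewriting rule in which every iteration replaces each segment 'F' with four smaller segments ('+' and '-' stand for 60-degree turns). The rule never changes, yet the complexity of the resulting form grows with every pass:

```python
def koch(iterations):
    s = "F"                                  # start with a single segment
    for _ in range(iterations):
        s = s.replace("F", "F+F--F+F")       # every segment becomes four smaller ones
    return s

for n in range(4):
    print(n, "segments:", koch(n).count("F"))   # 1, 4, 16, 64 ... growth at every iteration
```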

While these processes are iterative, and the patterns produced are emergent in that their spectacular aesthetic and harmonious qualities are not self-evident from their generative rules, these kinds of phenomena cannot be seen to be 'learning' or becoming more 'fit' in the same way as described above.


Automata

Similar in terms of pattern generation, Conway's Game of Life (see Rules) is a prime example of complexity generated by simple rules that, repeated over multiple time-step iterations, yield highly complex behaviors.

Returning to the question of fitness and learning, this famous example of emergent complexity is entitled 'the Game of Life', but is it really life? While the emergent outcomes of the automata are rich in variety, can we say that the system adapts, learns, or becomes more fit? One feature of the output is that some of the 'creatures' generated in the game are able to enter into iterative loops, meaning that once these forms emerge they continuously reproduce versions of themselves. If proliferation within the grid of the game is considered to be a form of higher evolution (or Fitness), then perhaps this could be seen as a form of learning. That said, the Game of Life does not seem to 'learn' in the ways we would normally associate with the word.

Game of Life from Wikimedia Commons


Explorations

Returning to notions of fitness in the more traditional sense, it is helpful to think of iterations as the way in which a complex system explores the scope of possibility within a {{fitness-landscape}}. As described in more detail elsewhere, a fitness landscape represents the differential structure of possibilities within a space of all possible behaviors ({{phase-space}}), where more successful strategies within that space are conceptualized as peaks. Agent iterations can then be seen as processes of stepping around the fitness landscape, testing to see which steps take us up to higher peaks. In terms of these exploratory journeys, some agents may choose to incrementally modify whatever strategy they initially stumble upon (making small modifications and testing to see whether or not those modifications moved them higher or lower), while other strategies involve a more random 'jumping': abandoning a given set of strategies to test an altogether different set of alternatives. These jumps can be productive if they land agents on what are fundamentally higher peaks. These dynamics are unpacked in more detail on the pages referenced, but what is important to note is that the size of a step (or iteration) can vary between small local steps and big global jumps.
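A minimal sketch of this kind of search, on a made-up one-dimensional landscape, contrasts small local steps with occasional random jumps; neither the landscape nor the step sizes come from the text, they simply illustrate the two kinds of moves.

```python
import math
import random

def fitness(x):
    # an invented rugged landscape with several peaks of different heights
    return math.sin(3 * x) + 0.5 * math.sin(7 * x)

x = random.uniform(0, 3)
best = fitness(x)
for _ in range(1000):
    if random.random() < 0.05:
        candidate = random.uniform(0, 3)                               # rare global jump
    else:
        candidate = min(3, max(0, x + random.uniform(-0.05, 0.05)))    # small local step
    if fitness(candidate) > best:                                      # keep only uphill moves
        x, best = candidate, fitness(candidate)

print(round(x, 2), round(best, 2))   # where the agent ended up, and how 'fit' that spot is
```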


 

Governing Features ↑

Information

What drives complexity? The answer involves a kind of sorting of the differences the system must navigate. These differences can be understood as flows of energy or information.

In order to be responsive to a world consisting of different kinds of inputs, complex systems tune themselves to states holding just enough variety to be interesting (keeping responsive) and just enough homogeneity to remain organized (keeping stable). To understand how this works, we need to understand flows of information in complex systems, and what "information" means.


Complex Systems are ones that would appear to violate the second law of thermodynamics: that is to say, order manifests out of disorder. Another way to state this is that, within the boundary of the system, order ({{negentropy}}) increases over time. This appears counter to the second law of thermodynamics, which states that, left to its own devices, a system's disorder (entropy) will increase. Thus, we expect that, over time, buildings break down, and a stream of cream poured into a cup of coffee will dissipate. We don’t expect a building to rise from the dust, nor a creamy cup of coffee to partition itself into distinct layers of cream and coffee.

Yet similar forms of unexpected order arise in complex systems. The reason this can occur is that complex systems are not fully bounded - they are {{open-dissipative}} structures that are subject to some form of energy entering from the outside, and within these "loose" boundaries, we see glimpses of temporary order. Disorder is, however, still being ejected outside of these same boundaries - stuff comes in, stuff goes out - in some other form. It is only within the boundaries that we see temporary pockets of order. In order to get a better grasp on how these pockets of temporary order appear, we need to understand the relationship between entropy (disorder, or randomness) and information.

PART I: Understanding Information

Shannonian Information

An important way of thinking about this increase in order relates to concepts based in information theory.  Information theory, as developed by Claude Shannon, evaluates systems based upon the amount of information or 'bits' required to describe them.

Shannon might ask, what is the amount of information required to know where a specific molecule of cream is located in a cup of coffee? Further, in what kinds of situations would we require more or less information to specify a location?

Example:

In a mixed, creamy cup of coffee, any location is equally probable for any molecule of cream. We therefore have maximum uncertainty about location:  the situation has high entropy, high uncertainty, and requires high information content to specify a location.  By contrast, if the cream and coffee were to be separated (say in two equal layers with the cream at the top) we would now have a more limited range of locations where a particular bit of cream might be placed. Our degree of uncertainty about the cream's location has been reduced by half, since we now know that any bit of cream has to be located somewhere in the upper half of the cup - all locations at the bottom of the cup can be safely ignored.
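As a rough worked version of the cup example, assuming the cup is divided into some number of equally likely cells (1,024 is an arbitrary choice), the uncertainty in bits is just the base-2 logarithm of the number of possible locations, and halving the possibilities removes exactly one bit:

```python
# A minimal sketch of the bit-counting idea, assuming equally likely cells;
# the cell count is an arbitrary illustrative choice.
import math

cells = 1024                            # possible locations for a cream molecule
h_mixed = math.log2(cells)              # fully mixed: every cell equally likely
h_layered = math.log2(cells // 2)       # layered: only the top half is possible

print(h_mixed, h_layered)               # 10.0 bits vs 9.0 bits
# Halving the possible locations removes exactly one bit of uncertainty.
```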

Information vs Knowledge

Counterintuitively, the more Shannonian information required to describe a system, the less structured or "orderly" it appears to us. Thus, as a system becomes more differentiated and orderly - or as emergent features arise - its level of Shannon information diminishes.

This, in a way, is unfortunate: our colloquial understanding of 'having a lot of information' pertains to us knowing more about something. Thus, seeing a cup of coffee divided into cream and coffee layers, we perceive something with more structure, more logic, and we might assume it should follow that this conveys more information to us (at least in our normal ways of thinking about information - in this case, that coffee and cream are different things!). A second, stirred cup appears more homogenous – it has less structure or organization. And yet, it requires more Shannon information to describe it.

A difficulty thus lies in how we tend to intuitively consider the words ‘disorder’ and ‘information’. We associate disorder with lack of structure (and therefore low amounts of information), and order with more knowledge (and, therefore, more information).

While intuitively appealing, this is unfortunately not how things work from the perspective of information and communication signals - which is what Shannon was concerned with when formulating his ideas. Shannon (who worked for Bell Laboratories) was trying to understand the bits of information required to relay the state of a system (or a signal).

Example:

Imagine I have an extremely messy dresser and I am looking for my favorite shirt. I open my dresser drawers and see a jumble of miscellaneous clothes: socks, shirts, shorts, underwear. I rifle through each drawer examining each item to see if it, indeed, is the shirt I am seeking. To find the shirt I want (which could be anywhere in the dresser), I require maximum information, since the dresser is in a state of maximum disorder.
Thankfully I spend the weekend sorting through my clothes. I divide the dresser by category, with separate socks, shirts, shorts, and underwear drawers. Now, if I wish to find my shirt, my uncertainty about its location has been reduced to a quarter of what it was (assuming four drawers in the dresser). To discover the shirt in the dresser's more ordered state requires less information: I can limit myself to looking in one drawer only.

Let us take the above example a little further:

Imagine that I love this particular shirt so much that I buy 100 copies of it, so many that they now fill my entire dresser. The following morning, upon waking, I don't even bother to turn on the lights. I reach into a drawer (any drawer will do), and pull out my favorite shirt!

My former, messy dresser had maximum disorder (high entropy), and required a maximum amount of Shannon Information ('bits' of information to find a particular shirt). By contrast, the dresser of identical shirts has maximum order (negentropy), and requires a minimal amount of Shannon Information (bits) to find the desired shirt.
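For readers who want the general formula, Shannon entropy is H = −Σ p·log₂(p), summed over the probabilities of the possible states. A minimal sketch applying it to three hypothetical dressers (the item counts are invented purely for illustration):

```python
# A minimal sketch of Shannon entropy for three hypothetical dressers.
import math

def entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

messy = [1 / 16] * 16          # any of 16 item types equally likely anywhere
sorted_drawers = [1 / 4] * 4   # uncertainty reduced to one of four drawers
identical = [1.0]              # every item is the same shirt

print(entropy(messy), entropy(sorted_drawers), entropy(identical))  # 4.0, 2.0, 0.0
```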

Interesting information:  States that matter!

It should be noted that the two extreme states illustrated above are both pretty uninteresting. A fully random dresser (maximum information) is pretty meaningless, but so is a dresser filled with identical shirts (minimum information). While each is described by a contrasting state of Shannonian information, neither maximum nor minimum information systems appear very interesting.

One might also imagine that neither the random nor the homogeneous systems are all that functional. A dresser filled with identical shirts does not do a very good job of meeting my diverse requirements for dressing (clothing for different occasions or different body parts), but my random dresser, while meeting these needs, can't function well because it takes me forever to sort through.

Similarly, systems with too much order cannot respond to a world filled with different kinds of situations. Furthermore, they are more vulnerable to system disruption. If you have a forest filled with identical tree species, one destructive insect infestation might have the capacity to wipe out the entire system. If I own 100 identical shirts and that shirt goes out of style, I suddenly have nothing to wear.

Meanwhile, if everything is distributed at random then functional differences can't arise: a mature forest ecosystem has collections of species that work together, processing environmental inputs in ways that siphon resources effectively - certain species are needed more so than others. In my dresser, I need to find the right balance between shirts, socks, and shorts: some things are worn more than others, and I will run into shortages of some, and excesses of others, if I am not careful.

PART II:  Information Sorting in Complex Systems

Between Order and Disorder

What is interesting in Complexity is that, in order to be responsive to a world that consists of different kinds of inputs, complex systems tune themselves to information states involving just enough variety (lots of different kinds of clothes / lots of different tree species) and just enough homogeneity (clusters of appropriately scaled groups of clothing or species). While within their boundaries these systems violate the second law of thermodynamics (gaining order), they do not gain so much order as to become homogenous. The phrase 'poised at the edge of order and chaos' seems to capture this dynamic.

Tuning a complex system -  decreasing uncertainty

Imagine we have a system looking to optimize a particular behavior - say an ant colony seeking food. We place an assortment of various-sized bread crumbs on a kitchen table, and leave our kitchen window open overnight. Ants march in through the window, along the floor, and up the leg of the table.

Which way should they go?

From the ants' perspective, there is maximum uncertainty about the situation: or maximum Shannonian information. The ants spread out in all directions, seeking food at random. Suddenly, one ant finds food, and joyfully secretes some pheromones as it carries it away. The terrain of the table is no longer totally random: there is a signal - food here! Nearby ants pick up the pheromone signal and, rather than moving at random, they slightly adjust their trajectories. The ants' level of uncertainty about the situation has been reduced or, put another way, the pheromone trail represents a compression of informational uncertainty - going from 'maximum information required' (search every space), to 'reduced information required' (search only spaces near the pheromone trace).

If all ants had to independently search every square inch of tabletop to find food, each would require maximum information about all table states. If, instead, they can be steered by signals (see {{stigmergy}}) deployed by other ants, they can then limit their search to only some table states. By virtue of the collective, the table has become more ‘organized’ in that it requires less information to navigate towards food. There is a reduction of uncertainty, or reduction of 'information bits', required by each ant to find the location of 'food bits'. Accordingly, these are more easily discovered. It is worth noting that in this particular system, the "food bits" are effectively the {{driving-flows}} that energize the system and thereby help fuel the localized order. The second law is preserved, since the ants will ultimately dissipate this order (through heat generated in their movements, through ant defecation as they process food, and ultimately through death and decay).

Reduce information | Reduce effort

Suppose we are playing 20 questions. I am thinking of the concept ‘gold’, and you are required to go through all lists of persons, places, and things in order to eventually identify ‘gold’ as the correct entity. Out of a million possible entities that I might be thinking of, how long would it take to find the right one in a sequential manner? Clearly, this would involve a huge length of time. The system has maximum uncertainty (a million equally likely possibilities), and each sequential random guess eliminates only one of those possibilities (999,999 to go after the first guess!). While I might 'strike gold' at any point, the odds are low!

From an information perspective, we can greatly reduce the time it takes to guess the correct answer if we structure our questions so as to minimize our uncertainty at every step. Thus if I have 1,000,000 possible answers in the game ‘twenty questions’, I am looking for questions that will reduce these possibilities to the greatest extent at each step. If, with every question, I can cut the possible answers in half (binary search) then, within 20 questions, I can generally arrive at the solution: 2^20 is just over one million, so twenty well-chosen halving questions are enough to pin down any one of a million entities. With each guess, the degree of uncertainty regarding the correct answer (or the amount of Shannonian information required) is reduced.
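A quick worked check of the halving argument; the result follows directly from the arithmetic rather than from any particular implementation of the game:

```python
# A minimal sketch of why ~20 halving questions suffice for a million possibilities.
import math

candidates = 1_000_000
questions = 0
while candidates > 1:
    candidates = math.ceil(candidates / 2)   # each good question halves the field
    questions += 1

print(questions)        # 20, since 2**20 = 1,048,576 > 1,000,000
```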

Sorting a system so there is less to sort

From a Complex Systems standpoint, this information sorting by agents within a system will allow it to channel resources more effectively – that is, focus on work (or questions) that move towards success while engaging in less wasted effort.

To illustrate:  imagine that I wish to move to a new city to find a job. I can choose one of ten cities, but other than their names, I know nothing about them, including their populations. I relocate at random and find myself in a city of 50 people, with no job postings. My next random choice might bring me to a bigger center, but, without any information, I need to keep re-locating until I land in a place where I can find work.

If, instead, the only piece of information that I have is the city populations, I can make a judgement: if I start off my job hunt in larger centers then there is a better chance that jobs matching my skills will be on offer. I use the population sizes as a way to filter out certain cities from my search - perhaps with a 'rule' stating that I won't consider relocating to cities with fewer than 1 million inhabitants. This rule might cross out six cities from my search list, and this 'crossing out' is equivalent to reducing the information bits required to find a job: I can decide that my efforts are better spent focusing on a job search in only four cities instead of ten (this may also be the reason why, in studying cities as complex systems, we often observe the phenomenon of growth and preferential attachment, which manifests as {{power-laws}} in population distributions).

By now it should have become clear that this is equivalent to my looking for a given cream molecule in only half the coffee cup, or ants looking for food only on some parts of the table, or my search in 20 questions being limited only to items in the 'mineral' category.

All these processes involve a kind of information sorting that gives rise to order, which in turn makes things go more smoothly: from random cities to differentiated cities; from random words to differentiated categories of words.

What complex systems are able to do is take a context that is initially undifferentiated and sort it, such that the agents in the system can navigate through it more efficiently. This always involves a local violation of the second law of thermodynamics, since the amount of Shannonian information (the entropy or disorder of the system) is always being reduced. That said, this can only occur if there is some inherent difference in the system, or 'something to sort' in the first place. If a context is truly homogeneous (going back to our dresser of identical shirts), then no amount of system rearranging can make it easier to navigate. Note that an undifferentiated system is different from a homogeneous system: a random string of letters is undifferentiated; a string composed solely of the letter 'A' is homogeneous.

Accordingly, complex systems need to operate in a context where some kind of differential (in the form of {{driving-flows}}) is present. The system then has something to work with, in terms of sorting through the kinds of differences that might be relevant.

One thing to be very aware of in the above examples is how difficult it is to disambiguate information from orderliness. As our knowledge of probable system states becomes more orderly, Shannonian information is reduced. This is a frustrating aspect of the term ‘information’, and can lead to a lot of confusion.

This Christmas Story illustrates how binary search can quickly identify an entity


Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Fitness

Complex Adaptive Systems become more 'fit' over time. Depending on the system, Fitness can take many forms,  but all involve states that achieve more while expending less energy.

What do we mean when we speak of Fitness? For ants, fitness might be discovering a source of food that is abundant and easy to reach. For a city, fitness might be moving the maximum number of people in the minimum amount of time. But fitness criteria can also vary - what might be fit for one agent isn't necessarily fit for all.


Getting Fit!

The idea of fitness in any complex system is not necessarily a fixed point. There can be many different kinds of fitness, and we need to examine each specific system to determine what factors are at play. For example, what makes a hotel room 'fit'? Is it location, or price, or cleanliness, or amenities, or all of the above? For different people, these various factors or parameters have different 'weights'. For a backpacker traveling through Europe, maybe the price is the only thing worth worrying about, whereas for a wealthy business person it may not factor in at all.

Despite these variations, there are certain principles that remain somewhat consistent, and these pertain to the idea of minimizing processes. We can imagine that certain behaviors in a system require more or less energy to perform. Agents in a system are always trying to minimize energy expenditure, but what might entail a high energy expenditure for one agent might be a low energy expenditure for another (depending on what forms of energy they each have available to them). If an ant wants to find food, it prefers to find a source that takes less time to get to than one that is further away. Further, a bigger source of food is better than a smaller source of food, as more ants in the colony can benefit. Complex systems therefore generally gravitate towards regimes that in some way minimize the energy expended to achieve a particular goal. However, this energy rationing depends both on the nature of the goal and the resources available to reach it.

Example:

Returning to the example of finding a hotel room, consider the popular website 'Airbnb' as a complex adaptive system. Here, two sets of bottom-up agents (room providers and room seekers) coordinate their actions in order for useful room occupancy patterns to emerge. Some of these patterns might be unexpected. For example, a particular district in Paris might emerge as a very popular neighborhood for travelers to stay in, even though it is not in the center of the city. Perhaps it is just at a 'sweet spot' in terms of price, amenities, and access to transport to the center. This is an example of an emergent phenomenon that might not be predictable but nonetheless emerges over the course of time. In that case, rooms in that district might be more 'fit' than in another, because the factors listed (its relevant parameter settings in that particular zone) are highly appealing to a broad swath of room-seekers.

So in what way is the above example 'energy minimizing'? We can think of the room seekers as having different packages of energy rations they are willing to expend over the course of their holiday. One package might hold money, one might hold time, and one might hold patience for dealing with irritations (noisy neighbors that keep them from sleeping, or willingness to tolerate a dirty bathroom...). Each agent in the system is trying to manage these packets of energy in the most effective way possible to minimize discomfort and maximize holiday pleasure. So if a room is close to the center of the city, it might preserve time energy, but this needs to be balanced with preserving money energy.

We can begin to see that fitness is not going to come in a 'one size fits all' form. Some agents will have more energy resources available to spend on time, and others will have more energy resources to allocate in the form of money. Further, an agent in the system might be willing to spend much more money if it results in much more time being saved, or vice versa. We can imagine that an agent might reach a decision point where two equally viable trajectories are placed in front of them. The choice of time or money might be likened to the flipping of a coin, but the resulting 'fit' regimes might look very different.

In order to better understand these dynamics, two features of CAS - the Fitness Landscape and ideas surrounding Bifurcations - help clarify how a CAS can unfold along multiple fit trajectories while, despite these differences, the underlying principle of energy minimization holds true.

Avoiding Work and the Prisoner's Dilemma

In the above example the agents (room seekers) employ cognitive decision-making processes to determine what a 'fit' regime is. But physical systems will all naturally gravitate to these energy minimizing regimes.

Example: 

When molecules in a soap bubble solution are blown through a soap wand, nobody tells them to form a bubble, and the molecules themselves don't consider this outcome. Instead, the bubble is the soap mixture's solution to the problem of finding a form that minimizes surface area (and therefore surface energy). The soap bubble can therefore be considered an energy minimizing emergent phenomenon (for a detailed explanation, follow this link to an article on the subject: note the phrase, 'a bubble's surface will minimize until the force of the air pressures within is equal to the 'pull' of the soap film'). We can also think of the sphere as being the natural Attractor State of a soap solution: seeking to absorb maximum air with minimum surface - or doing the most with the least.

We can derive from these examples that one way we can examine complex systems is to equate 'fitness' with avoiding unnecessary work or effort. While this is important for individual agents (specific birds in a flock, or specific fish in a school), what is also interesting in systems exhibiting {{self-organization}} (bird flocks and fish schools) is that this principle is extended to include the group level. Thus the system, as a whole, finds a regime that expends the minimum effort to achieve a goal on the part of the group rather than on the part of the individual. This might involve individual sacrifices in order to enable overall group behavior to succeed.

These kinds of dynamics involving individual sacrifices (or trade-offs), where group performance ultimately matters, are the subject of game theory. The Prisoner's Dilemma, for example, is a classic case where the most 'fit' long-term strategy is for both players to sacrifice some potential individual gain in favor of longer-term collective gain. Fit strategies differ depending on whether the game is played once or multiple times, so natural systems with ongoing interactions between agents face different fitness incentives than non-repeating scenarios.


Short Explanation of the Prisoner's Dilemma:



Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Feedback

Feedback loops occur in systems where an environmental input guides system behavior, but the system behavior (the output) in turn alters the environmental context.

This coupling between input affecting output - thereby affecting input - creates unique dynamics and interdependencies between the two.


There are two kinds of feedback that are important in our study of complex systems: {{positive-feedback}}  and {{negative-feedback}}. Despite the value-laden connotations of these designations, there is no inherent value judgement regarding 'positive' (good) versus 'negative' (bad) feedback. Instead, the terms can more accurately be described as referring to reinforcing deviation (positive) versus suppressing or dampening deviation (negative). Reinforcing feedback thus amplifies slight tendencies in a system's behavior, whereas dampening feedback works to restrain any changes to system behavior.

Negative Feedback

We can think of a thermostat and temperature regulation as a classic example of dampening (negative) feedback at work. The thermostat has a desired state that it wishes to maintain, and it is constantly monitoring an input to determine whether or not it is achieving that target. If the temperature exceeds the target, then the thermostat activates a cooling mechanism; if the temperature falls short of the target then the thermostat activates a heating mechanism. The thermostat is therefore situated within an environment (acted upon by outside forces) but is simultaneously helping create this environment (by being one of the environmental activating forces). It is able to respond to the input of the environment by activating an output that suppresses any deviation from the goal state (the goldilocks temperature).
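A minimal sketch of this dampening loop in Python; the target temperature, response strength, and disturbances below are all invented for illustration:

```python
# A minimal sketch of dampening (negative) feedback: a thermostat nudging a room
# back toward a target temperature despite small outside disturbances.
target = 20.0
room = 14.0
for hour in range(12):
    error = target - room
    room += 0.5 * error                     # heat if too cold, cool if too warm
    room += 0.3 if hour % 2 else -0.3       # small outside disturbances
    print(hour, round(room, 2))
# Deviations shrink each step: the output counteracts whatever pushed the input away.
```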

Because of how Negative Feedback helps maintain a particular status quo, it is an important dimension of life, or {{homeostasis}}. Our body's ability to maintain a somewhat steady state is something we often take for granted, but it is worth pausing to reflect upon the amount of constant adjustments that are required in order to keep things like our temperature or glucose levels steady in light of extreme environmental fluctuations. 

Maintaining our own body temperatures within a narrow, healthy range requires three aspects: an input (temperature), a sensor (or brain), and a viable output (shiver to raise temperature if cold; sweat to lower temperature if hot). While this is somewhat similar to the thermostat example, there are some slight differences: even though the act of sweating or shivering does in fact have a minute impact on the environment we are located within, these outputs do not have a significant enough impact on the environment to alter the input.


Cybernetics 

While homeostasis refers specifically to biological systems able to maintain themselves, in fact for any system where the goal is to avoid deviation - to maintain a steady state or goal for some given target such as temperature - these same three elements - inputs, sensors, and outputs - need to be present.

{{Cybernetics}} is a field dedicated to understanding a whole host of systems from different disciplines in light of these characteristics, to better understand the means of self-regulation in entities that seek to maintain a particular target behavior. The field emerged in the 1940s and, along with general systems theory (which shares many similarities with complexity research but deals with closed rather than open systems), is in many ways a precursor to complex adaptive systems thinking.

In many cybernetic systems, the dynamics become quite interesting, in that an output can flow back into the system as an input, in ways that we can think of as more directly 'self-regulating'. A fly-ball governor is one such self-regulating mechanism (described on the {{cybernetics}} page), where the self-regulating dynamics of the mechanism cause it to slow down when it exceeds a particular speed. Another such self-regulating or self-governing dynamic can be observed in ecosystems: if a population of animals increases beyond the environment's carrying capacity, that environment ceases to sustain those high numbers, resulting in a die-off of excess animals. Similarly, if population numbers drop significantly, then those remaining will have a high availability of food, and any offspring will thrive, leading to population growth. These two competing forces - population growth and carrying capacity - work in tandem to dampen the fluctuation of population numbers, preventing them from getting too high or too low.

Another classic example is the idea of an oarsman on a boat, trying to reach an island, and constantly adjusting the movement of the oar to compensate for the deviations caused by environmental factors (water currents, wind, etc.).

Cybernetic systems differ from complex adaptive systems in that CAS features such as {{emergence}} are typically associated with amplifying (positive) feedback, whereas cybernetic systems work primarily to maintain a stable state.


Positive Feedback

If negative feedback relies on an input, a sensor, and an output, then positive feedback operates in an equivalent way: the difference being that the output does not counteract the input, but instead builds upon, or reinforces, it in some way.

We can observe this in many systems driven by simple rules, such as {{Fractals-1}}, which over iterative sequences of graphic generation become differentiated: more detail, more variation, and more pattern become apparent.

But Fractals are a specific class of entity that is limited to the domain of mathematics. Again, these kinds of positive feedback systems can exist in a wide range of non-mathematical domains, with the same principles at work.

Viral Orders:

In discussing the {{non-linear}} behavior of Complex Systems, we used the example of a cat video going viral in order to illustrate how a small, early amplification of a system preference can cause a massive shift in system outcome. Using the analogy of 'the rich get richer', cat videos that initially get a few more clicks are recommended more frequently, leading to more views, leading to more recommendations, and so forth. This illustrates the power of positive feedback to amplify a particular aspect of a system such that it grows in importance in a non-linear way.

Another example of this comes from Network Theory, which examines how networks characterized by {{power-laws}} can be generated when the network is constantly growing and when new nodes can be added anywhere at random, but will affix preferentially to nodes in the network that are already highly linked. This phenomenon of "growth and preferential attachment" is again an example of positive feedback, and such dynamics are thought to explain things like the scaling patterns seen in different cities within a given region.
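A minimal sketch of growth with preferential attachment, in the spirit of the Barabási–Albert model: each new node links to one existing node chosen with probability proportional to that node's current degree. The network size below is an arbitrary choice.

```python
# A minimal sketch of 'growth and preferential attachment': new nodes link to
# existing nodes with probability proportional to their current degree.
import random

degrees = [1, 1]                 # start with two linked nodes
targets = [0, 1]                 # one entry per 'end' of every existing link

for new_node in range(2, 1000):
    chosen = random.choice(targets)      # high-degree nodes appear more often here
    degrees.append(1)
    degrees[chosen] += 1
    targets.extend([chosen, new_node])

print(max(degrees), sorted(degrees)[len(degrees) // 2])
# A few early nodes end up with very many links; the typical node has very few.
```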

Network examples are of interest because, while behaviors are initially random, the nature of positive feedback means these random intensities come to be reinforced over time, leading to an increase in structure and pattern. The situation is almost the opposite of that of fractals: in fractal growth a very simple formula ultimately leads to greater and greater visual complexity; in complex systems steered by positive feedback, an initially random distribution of entities (human habitations, websites, cat videos, cricket chirping) gradually becomes more ordered and organized, with a few dominant entities emerging and thereafter constraining the performance, behaviors, or success of other entities in the system. We can say that the system, after a time, moves into {{enslaved-states}}, with only a few behavioral regimes succeeding following feedback.

In these instances, an initially random situation has small variations amplified, to the extent that  an initially random factor or actor becomes dominant, now steering the system. I do wish to point out that this would seem to muddy our earlier contrast of 'amplifying' vs 'restraining': once a particular behavior is amplified, it in turn winds up constraining the system, as deviance from that behavior is now more difficult. To illustrate: Wikipedia has become the default encyclopedic website due to positive feedback: now that it exists, it is in fact stabilized and resists being disrupted. Its amplified strength as a website is part of what now gives it stability, dampening further disruptions. This is a characteristic of {{Emergence}}, in that emergent systems like schools of fish or flocks of birds are driven into being through positive feedback, but then exert a kind of top down resistance to future change.


Dynamics of systems subject to both Positive & Negative Feedback:

Some very interesting complex systems are governed by a combination of both positive and negative feedback. 

For instance, in the example of animal population fluctuations described above, we can imagine that rather than settling into one steady-state population, a particular species might oscillate between two regimes - booms and busts in population as the carrying capacity of the environment undergoes stress and then recovery. When we examine the system more closely, we realize that there are actually both kinds of feedback at play: reproduction rate is an example of amplifying feedback - if every two rabbits that reproduce make four rabbits, and those four rabbits go on to make eight rabbits (and so forth), then we have the kind of accelerating growth associated with positive feedback. What then happens is that this drive towards amplification is suppressed by resistance (the carrying capacity), which works to counterbalance the growth. So if we start with eight garden rows of carrots, and at every generation of new rabbits the rate of carrot row consumption proceeds faster and faster, pretty quickly all the carrots are done (and by extension, all the unfed hungry rabbits are done too). In a way, the terminology can be muddy, in that our definitions of positive and negative rely on what is considered to be the "amplifying" feedback. If we shift the lens, we could think of carrots as the agents in the system (rather than rabbits), and we could state that, due to the positive feedback in their environment (rabbit reproduction), the rate at which carrots are being consumed is increasing (even as the number of carrots is diminishing). Accordingly, what we mean by 'positive' and 'negative' is often context dependent, and can shift depending on how we describe the features of 'amplification' or 'suppression'.

What is nonetheless very interesting is that we can have systems that involve competing forces of feedback - one that drives the system forward, the other that resists or suppresses this drive (as in the case of rabbit reproduction and dwindling carrot supplies). Depending on the extent to which these co-evolving system features are out of sync in terms of their respective rates (rate of rabbit reproduction vs rate of carrot growth), the system can begin to oscillate in irregular ways. These kinds of irregular oscillations can be observed in the logistic map (whose long-run behavior is often summarized in a bifurcation diagram), which illustrates how systems can cycle between many different behavioral states - with extremes arising, being dampened, and then arising again (to greater and lesser degrees). Many interesting complex systems are therefore neither entirely steered towards stability (like cybernetic systems), nor steered towards unified amplification (like crickets chirping in sync), but instead ride cascading waves between different states.
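A minimal sketch of the logistic map, x → r·x·(1−x), a standard toy model of exactly this tension between growth and a carrying capacity; the three growth rates below are illustrative choices showing a steady state, a two-cycle, and irregular oscillation.

```python
# A minimal sketch of the logistic map as a boom-and-bust toy model.
def trajectory(r, x=0.5, skip=200, keep=8):
    for _ in range(skip):            # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 3))
    return out

print(trajectory(2.8))   # settles to a single steady value
print(trajectory(3.2))   # oscillates between two values (boom / bust)
print(trajectory(3.9))   # irregular, never-repeating fluctuations
```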

The characteristics of how feedback moves through the system, and whether or not the system is subject to one or more interdependent feedback loops, are therefore at the heart of some of the most complex dynamics we observe in complex systems, and explain why systems composed of seemingly simple agents can nonetheless produce very complex dynamics (the complexity is in the nature of the feedback, rather than in the inherent characteristics of the system).

As a final thought on this, we can observe the double pendulum experiment, where the motion of the pendulum is subject to interwoven feedback from competing sources - while a simple system, the patterns it traces exhibit complex dynamics:

source, wikipedia



Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Far From Equilibrium

Left to themselves, systems tend towards regimes that become increasingly homogenous or neutral: complex systems differ - channeling continuous energy flows, gaining structure, and thereby operating far from equilibrium.

The Second Law of Thermodynamics is typically at play in most systems - shattered glasses don't reconstitute themselves and pencils don't stay balanced on their tips. But Complex Systems exhibit some pretty strange behaviors that violate these norms...


Equilibrium

In order to appreciate what we mean by 'far from equilibrium', we first need to understand what is meant by 'equilibrium'. We can do so using two examples: that of a pendulum, and that of a glass containing ice cubes and water.

If we set a pendulum in motion, it will oscillate back and forth, slowing down gradually, and coming 'to rest' in a position where it hangs vertically downwards. We would not expect the pendulum to rest sideways, nor to stand vertically from its fulcrum point.

We understand that the pendulum has expended its energy and now finds itself in the position where there is no energy - or competing forces - left to be expended. The only force exerted upon it is that of gravity, and this causes the weight to hang low. The pendulum has arrived at the point where all acting forces have been canceled out: equilibrium.

Similarly, if we place ice cubes in a glass of water, we initially have a system (ice and water) where the water molecules within the system have very different states (solid and liquid). Over time, the water will cool slightly, while the ice will warm slightly (beginning to melt), and gradually we will arrive at a point in time when all the differences in the system will have cancelled out. Ignoring the temperature of the external environment, we can consider that all water molecules in the glass will come to be of the same temperature.

Again, we have a system where competing differences in the system are gradually smoothed out, until such time as the system arrives at a state where no change can occur: equilibrium.

In a complex system, we see very different dynamics: part of the strangeness of emergence arises from the idea that we might see ice spontaneously manifesting out of a glass of water! This is what we mean by 'far from equilibrium': systems that are constantly being driven away from the most neutral state (which would follow the second law of thermodynamics), towards states that are more complex or improbable. In order to understand how this can occur, we need to look at the flows that drive the system, and how these offer an ongoing input source that pushes the system away from equilibrium.

Example:

Let's take a look at one of our favorite examples, an ant colony seeking food. Let's start 100 ants off on a kitchen table (we left them there earlier when we were looking at {{driving-flows}}). The ants begin to wander around the table, moving at random, looking for food. If there are crumbs on the table, then some ants will find them, and direct the colony towards food sources through the intermediary signal of pheromones. As we see trails form (a clear line forming out of randomness, like an ice cube fusing itself out of a glass of water!), we observe the system moving far from equilibrium. But imagine instead that there is no food. The ants just keep moving at random. No emergence, nothing of statistical interest happening. When we remove the driving external flow (food) that comes from outside the ant system itself, the ants become like our molecules of water in a glass, moving around in neutral, random configurations. Eventually, without food, the ants will die - arriving at an even more extreme form of equilibrium (and then decay)!

Origins

The phrase "far from equilibrium" was originally coined by Ilya Prigogine, and was used to characterize such phenomena as Benard Rolls (see also {{ilya-prigogine-isabelle-stengers}}). Prigogine and Stengers were interested in how system that were driven by external inputs could gain order (as exhibited by the rolls), and how the increase in these external inputs could in turn drive order in increasingly interesting ways. 

Another way to say this is that systems in equilibrium lack energy inputs needing processing, whereas systems far from equilibrium are characterized by having some kind of energy driver or differential at play.


Muddying the Waters

While the above should now be somewhat clear, it is also true that complex systems, while indeed operating "far from equilibrium", can exhibit behaviors that imply a different kind of equilibrium: one that is not part of the domain of physics or chemistry but rather that of Game Theory (and economics).

There are various multi-actor systems examined by Game Theorists and Economists, where actors (or agents) use competing strategies to see which will yield (or 'win') some form of allocation. Such games might be played once, to show optimum game choices, or multiple times, to see what occurs when past strategies play a role in current strategies. Depending on how multiple agents deploy their strategies, games might produce win/win outcomes (where multiple agents gain allocations), win/lose outcomes (where my win results in your loss or vice versa), or lose/lose scenarios (where, in efforts to outcompete one another, all agents wind up leaving empty-handed). Game Theory can examine the kinds of strategies most viable for an individual agent in the system, but it can also analyze what strategies are most viable not solely for an individual agent, but for the collective gain of all agents in the system.

Such 'collective benefit' systems are described as being "Pareto efficient": they occur in instances where no agent can be made better off without making at least one other agent worse off. Another way to frame this is in terms of what would constitute a Pareto improvement: a change in system behavior such that at least one agent is made better off, and no agent is made worse off by the change.

Example:

Imagine we are placing 100 trash cans in a park. We don't know where they should go, so we distribute them at random, but we add a few special features:

1. Each trash can has a sensor that can track how quickly it is filled

2. Each is also able to receive and relay a signal to its nearest neighbors - indicating its rate of trash accumulation

3. Each is set on a rolling platform, allowing it to navigate to a new location in the park.

Accordingly: the agent in the system is the trash can; the fitness criteria is gathering trash; the adaptive capacity is the ability to relocate; and the differential driving flows are the variable intensities of trash generation.

We can imagine this system to be driven by simple rules: each trash can monitors, broadcasts, and receives information about its own rate of trash acquisition, as well as that of its nearest neighbors. At various time steps it makes a decision: remain in place or move - with movement direction weighted in accordance with the success of its neighbors. Each movement entails a Pareto improvement.

It should be relatively intuitive to note that, over time, trash cans will move until such point as all cans are collecting identical amounts. At that point, the system has arrived at a Pareto Optimum, where movement cannot occur without a reduction in overall system fitness (it should be noted that this state may only be a local optimum). The system has calibrated itself to perform in the most effective way possible, restricted only by the scope of state spaces it was able to explore.*

* One proviso regarding this example is that the system may be trapped in a local optimum (see {{fitness-landscape}}). As a result, the system above will function more effectively if individual agents occasionally engage in random search regardless of neighboring states. This allows potential untapped domains of trash production to be discovered and then recruited for.
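A minimal simulation sketch of the thought experiment, under simplifying assumptions: a one-dimensional park, a hypothetical trash 'hot spot' function, and cans that take a small step toward the currently most successful can only if doing so does not reduce their own collection rate (so no kept move makes any can worse off). All numbers and rules are invented for illustration.

```python
# A minimal sketch of cans relocating toward trash hot spots via non-worsening moves.
import random

def fill_rate(x):
    # Hypothetical trash generation: busier near positions 25 and 70 of a 0-100 park.
    return max(0.0, 10 - abs(x - 25)) + max(0.0, 15 - abs(x - 70))

positions = [random.uniform(0, 100) for _ in range(20)]   # 20 cans placed at random

for _ in range(2000):
    i = random.randrange(len(positions))
    best = max(positions, key=fill_rate)                  # the most successful can
    trial = positions[i] + (0.5 if best > positions[i] else -0.5)
    if fill_rate(trial) >= fill_rate(positions[i]):       # keep only non-worsening moves
        positions[i] = trial

print(sorted(round(p) for p in positions))  # cans end up clustered around the hot spots
```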

The reason it is worth pointing out this particular dynamic is that game theory often discusses such optimizing strategies as finding "Equilibria". Accordingly, we have the famous "Nash Equilibrium" as a kind of game theory state (see the Prisoner's Dilemma Game), as well as other game theory protocols that use the term "Equilibrium" to refer to end-state strategies. While we normally speak of "Pareto efficient" or "Pareto Optimum" rather than "Pareto Equilibrium", there is a notional slipperiness at work here, meaning that it is easy to think of complex systems as arriving at a kind of steady state where the system has found a kind of poised balance (as in the trash cans above). This kind of calibration and balancing within their environment might be described as existing in a state of ecological equilibrium (rather than being far from it).

The muddiness comes from how the term is technically applied in physics or chemistry versus how it is used in economics and game theory. 


Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Degrees of Freedom

'Degrees of freedom' is a way to describe the potential range of behaviors available within a given system. Without some freedom for a system to change its state, no complex adaptation can occur.

Understanding the degrees of freedom available within a complex system is important because it helps us understand the overall scope of potential ways in which a system can unfold. We can imagine that a given complex system is subject to a variety of inputs (many of which are unknown), but then we must ask, what is the system's range of possible outputs?


The notion of degrees of freedom comes to us from physics and statistics, where it describes the number of possible states a system can occupy. For example, a swinging pendulum is constrained to a fixed number of 'states' (positions in space) that the pendulum can occupy. We can imagine that it is possible to map out all the potential locations of the pendulum's swing, and therefore the limits of all its behaviors. The degrees of freedom thus tell us something about what a system is capable of doing: its potential. The system cannot act outside of the boundaries of this action potential.

For example, if we wanted to describe the maximum capacity of motion for a three-dimensional object in space, this can be captured using just six degrees of freedom, which together define changes in orientation (rotation via the 'roll', 'yaw', and 'pitch' motions) and changes related to displacement in space (translation via the 'up/down', 'back/forward', and 'left/right' parameters). We can see that all potentialities of movement are covered within this framework.

If we were to eliminate any of these parameters - for example the 'up/down' potential - then we would have fewer degrees of freedom, and certain types of movement would no longer be possible. Phase Space thereby captures the sum total of all potential behaviors, and is sometimes referred to as a system's 'possibility space'.

(image courtesy of Wikimedia commons)

In addition to there being a range of phase space potentials, there may also be particular behaviors in phase space that are more likely to occur. Accordingly, if we were to map all of the potential states of a pendulum's behavior from any given starting position in phase space, we  would have what is known as a 'phase portrait' of that pendulum. This is to say that there are particular trajectories that the pendulum will follow within phase space. Different systems might have phase portraits that highlight certain special regions of phase space as being {{attractor-statesbasins}} which a system will tend to gravitate towards.


Human Systems

So far we have been speaking about physical degrees of freedom, but we might also imagine degrees of freedom in relation to behavioral possibilities.

Example:

Imagine we want to stay at an Airbnb. We could think of each Airbnb option as being an agent in a complex system, competing to win us over by broadcasting its 'fitness' for our stay. Each Airbnb would be able to adjust a number of parameters that one might consider as important in choosing accommodation. These parameters could include cost, cleanliness, distance to center, size, and quietness. Different people might value (or weigh) these parameters differently, and choose their Airbnb accordingly. At the same time, we can imagine that each Airbnb has a capacity to adjust its 'state' to different degrees. Location is clearly a limited parameter: a given Airbnb has no capacity to simply change its location. But it does have the capacity to adjust its price point. Size is also difficult to alter. But cleanliness might have more flexibility. Thus certain categories have more range in terms of their degrees of freedom than others. If Airbnbs are considered as agents in a complex system, each competing to find patrons who wish to stay at their location, then they each have to operate within their particular bounds of freedom in terms of how they adjust to align themselves better with user needs. Thus if they can't compete on the basis of location then they can attempt to compete on the basis of cost.

More Than Three Dimensions - No Problem!

The Airbnb case should also serve to illustrate that, in many scenarios, the degrees of freedom available to an agent in a complex system cannot be easily plotted in three-dimensional space (that is, a 'space' bounded by an x, y, and z axis). That said, just because phase space cannot always be easily visualized in three-dimensional space, it doesn't mean we have to bend our minds to imagine more than three degrees of freedom. Even though we can't easily draw a graph of all these potential parameters, we can certainly imagine sorting different priorities simply as parameter bars with weights. We then have a multi-parameter space that the agent is calibrated within.

Multiple Degrees of Freedom thought of as sliding parameter bars: 
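A minimal sketch of such a multi-parameter space, with two hypothetical listings and two hypothetical travellers; every parameter is scored so that higher is better, and all numbers are invented for illustration:

```python
# Each listing is a point in a four-parameter possibility space (higher = better).
listings = {
    "loft":   {"affordability": 0.3, "cleanliness": 0.9, "centrality": 0.8, "quietness": 0.4},
    "studio": {"affordability": 0.9, "cleanliness": 0.6, "centrality": 0.3, "quietness": 0.7},
}

# Different travellers weight the same parameters differently.
backpacker = {"affordability": 0.7, "cleanliness": 0.1, "centrality": 0.1, "quietness": 0.1}
executive  = {"affordability": 0.1, "cleanliness": 0.4, "centrality": 0.4, "quietness": 0.1}

def score(listing, weights):
    return sum(listing[k] * w for k, w in weights.items())

for name, listing in listings.items():
    print(name, round(score(listing, backpacker), 2), round(score(listing, executive), 2))
# The same parameter settings rank differently depending on the weights applied to them.
```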

Requisite Variety

Analyzing agents in a complex system according to their degrees of freedom can thus be thought of as examining their range of possible parameter settings. This can be an extremely helpful way of thinking about the {{adaptive-capacity}} of the system: what it can and cannot do in response to environmental changes or fluctuations. Another way to describe this is the idea of a system's {{large-number-elements}}: a phrase coined by Ross Ashby to highlight the amount of variability a system can enact. According to Ashby, a system needs to have a variety of responses commensurate with the variety of inputs. This responsive capacity can be defined more precisely by means of defining the agent's degrees of freedom.




Back to {{key-concepts}}

Back to {{complexity}}


 

Governing Features ↑

Cybernetics

Cybernetics is the study of systems that self-regulate: adjusting their own performance to keep aligned with a pre-determined outcome, using processes of negative feedback to help self-correct.

The word Cybernetics comes from the Greek 'Kybernetes', meaning 'steersman' or 'oarsman'. It is the etymological root of the English 'Governor'. Cybernetics is related to an interest in dynamics that lead to internal rather than external governing.


Cybernetic thought is an important early precursor to Complex Systems thinking.

Imagine a ship sailing towards a target (say an island). There are various forces (wind and currents) that act upon the ship to push it away from its trajectory. In order to maintain a trajectory towards the island, the steersman need not be aware of the speed or direction of the wind, or the velocity of the waves. Instead, they just need to keep their eye on the target, and keep adjusting the rudder of the ship to correct for any deviations from the route.

In a sense, we have here a complete system that works to correct for any disturbances. The system comprises the target, any and all forces pushing the ship away from the target, and the steersman registering the amount of deviation and subsequently counterbalancing it through interaction with the rudder.

While it is true that the steersman is the agent that 'activates' the rudder, it is also true that the amount of deviation from the target also 'activates' the steersman. Finally, the forces acting upon the ship are what activate the deviation. We thus have a complete cybernetic system, where the forces at work form a continuous loop, and where the loop, in turn, is able to self-regulate.

A cybernetic system works to dampen  any disturbances or amplifying feedback that would move the trajectory away from a given optimum range. Thermostats work on cybernetic principles, where temperature fluctuations are dampened.

Like CAS, Cybernetics is concerned with how a system interacts with its environment. However, Cybernetics focuses on systems subject to negative feedback: ones self-regulating to maintain regimes of stable equilibrium where disruptions (or Perturbations) are dampened.

Macy Conferences

Control

Stafford Beer, an early proponent of Cybernetics, discusses the Watt Flyball Regulator




 

Governing Features ↑

Attractor States

Complex Systems can unfold in multiple trajectories. However, there may be trajectories that are more stable or 'fit'. Such states are considered 'attractor states'.

Complex Adaptive Systems do not obey predictable, linear trajectories. They are "Sensitive to Initial Conditions", such that small changes in these conditions can lead the system to unfold in unexpected ways. That said, in some systems, particular 'potential unfoldings' are more likely to occur than others. We can think of these as 'attractor states' to which a system will tend to gravitate.


What's so Attractive?

Often a system has the capacity to unfold in many different ways - it has a large 'possibility space'. That being said, there can be regions of this possibility space that exert more force or 'attraction' for the system.

In some kinds of systems these zones of attraction exist because of pre-determined energy minimizing characteristics of these regimes. For example, if we blow a soap bubble it 'wants' to become a sphere: this is the state where there is the most volume for the least surface area, and therefore also the best configuration for soapy molecules, given that it locates them in their lowest energy state: one that best balances the competing forces of the air pushing the system outwards and the resistance of the soapy solution not wanting to waste any surface area. The spherical shape of the bubble is thus a kind of pre-given, and when we blow a bubble it is this shape - rather than a cube or a conical form - that we can safely anticipate the bubble will take.
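A quick worked check of the 'most volume for least surface' claim: for the same enclosed volume, a sphere needs noticeably less surface than a cube (the formulas below are the standard ones for sphere and cube).

```python
# Comparing the surface area of a sphere and a cube enclosing the same volume.
import math

volume = 1.0
r = (3 * volume / (4 * math.pi)) ** (1 / 3)      # sphere radius for this volume
sphere_surface = 4 * math.pi * r ** 2
cube_surface = 6 * volume ** (2 / 3)             # cube side length = volume ** (1/3)

print(round(sphere_surface, 3), round(cube_surface, 3))   # ~4.836 vs 6.0
```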

Similarly, if we toss a marble in a vortex sphere at a science museum we know it will spin around the surface, but then ultimately make its way down to the bottom: this is the state of minimum resistance to the forces of gravity acting upon it.

It is this 'minimizing behavior' that is characteristic of attractor states: of all possible states within a given system's {{phase-space}} (the space of all possibilities), some regions may require less energy expenditure to move towards than others. We will see that there can also be systems that have more than one such minimizing regime.

Lock In!

While the two physical systems described above have natural attractors, there are also social system dynamics that can cause similar attractor dynamics to arise.

In these scenarios, attractor states are not necessarily pre-determined by natural forces, but can instead emerge over time, as the system evolves, in light of {{feedback-loops}}. That said, once present they can reinforce themselves by constraining the subsequent actions of the agents forming the system.

Example:

We can think of Silicon Valley as being an emergent attractor for tech firms that has, over time, reinforced its position. What is interesting about this example is that even though it comes from the social sciences rather than the physical sciences, in some way the same minimizing principle applies - it is just a different form of minimization, one that has to do not with the laws of nature, but instead with the social laws of human interaction.

To put this another way, once Silicon Valley established itself as the main tech hub, any new entrants to the tech field could, in principle, have chosen to locate themselves elsewhere - there were multiple locational possibilities within {{phase-space}}. However, if they were to choose these other locations, they would be far more likely to encounter additional "resistance" or frictions that would inhibit success. This is because these non-Silicon Valley sites would lack factors such as supporting infrastructure, abundant knowledge spillovers, experienced and readily available workers, etc. In a sense, the smoothest, least resistant course of business action for a technology firm is thus to locate where these kinds of external inputs are most easily accessed: a 'state of least resistance' - which in this case equates to Silicon Valley.

The emergence of such clusters of expertise is not limited to Silicon Valley. We often see groupings of similar businesses co-locating in space (referred to as agglomerations), rather than distributing themselves evenly across a region. Within a particular city, jewelry stores, cell phone service providers, or bridal salons all tend to coalesce in co-located groupings.

The precise locations of these groupings are not established in advance in the way that the spherical shape of the soap bubble is. Instead, it is the processes of {{feedback-loops}} that, over time, reinforce minor locational advantages, such that the kind of spill-over advantages discussed in the Silicon Valley example give co-located businesses a better chance of success compared to their far-flung competitors.

Once these kinds of concentrations of expertise have coalesced in a particular region, they attract new entrants to the field, in the same way that the spherical form attracts the soap molecules. Any system that enters into this kind of regime - where new behavior is directed according to what has occurred before, in ways that constrain and direct - can be considered to have entered an "Enslaved State" (a term popularized by Hermann Haken). The concept of 'enslavement' captures the notion that certain attractor states can emerge from agent interaction and, once present, will constrain the future action of those agents and all that come after them. The same idea is referred to as 'Lock-in' in the field of Evolutionary Economic Geography.


Shake it Up

We can see that in the example of the soap bubble and the example of Silicon Valley we have two very different kinds of system that are nonetheless both trying to limit unnecessary energy expenditure. For the soap, the concern is minimizing surface tension or stress; for the business owner, minimizing the tensions and stresses involved in finding good employees, securing access to good internet, and so on. In this way the dynamics, while at first appearing completely different, nonetheless run parallel. What is different is that in the human system the 'laws' at play are not stable over time. What is best practice at one instant is not necessarily best practice at a later time. This is the risk of Lock-in: that systems begin to perpetuate themselves beyond the point at which they are helpful (the QWERTY keyboard, designed to slow down typists so that the mechanical typing hammers would not jam, is a classic example of this kind of lock-in).

In these kinds of lock-in systems not governed by physical laws, it is occasionally worth 'shaking the system up' in order to see if it can be dislodged from a weak regime and encouraged to explore alternative behaviors. This is described as introducing a system {{perturbation}}, a disturbance intended to jostle a system and then see what it settles back into.

Example

For much of human history, the most effective way for individuals to access goods was to converge on a central market-place. This was the arena for trade and, by being centralized and co-located, it minimized the consumer's effort to find goods and the seller's effort to find customers. This was the most "fit" way of achieving the goal of acquiring and dispersing goods.

In recent decades, this model has been turned on its head. With the advent of information technologies, combined with innovations in transport logistics, it has become increasingly viable for companies to deliver goods directly to the homes of consumers. Rather than coming to a central market-place, goods move directly from manufacturer to consumer. Frictions about what is needed where have been reduced, and the costs and energy associated with virtual markets, compared to physical ones, have been similarly reduced.

We can think of each of these regimes of behavior as two separate attractor basins within a variegated possibility space of goods acquisition and dispersal strategies. With changes in technology, one basin of attraction has, over time, become more viable (and therefore deeper), while the other has shrunk back in relevance and depth. We seem to have arrived at a tipping point today, where the minimizing forces now favor e-commerce over physical commerce. That said, the legacy system tends to persist (old habits die hard).

Enter a global pandemic: a great example of a system perturbation that shakes up standard patterns of behavior. Indeed, Covid caused many people who had never shopped online to try this behavior, and to realize that it does, indeed, minimize effort in new ways. This kind of system disturbance moved many people out of their taken-for-granted regime of behavior and into new regimes.

We can see from this example how a system perturbation can act as a kind of productive 'shock' that, if large enough, is able to move a system out of a prior attractor state and potentially into a new regime.


Multiple Attractors

In discussing the example above, we slipped in the idea that a system may have more than one 'well' or basin of attraction. This is worth exploring a bit more, since we can imagine different kinds of possibility spaces: some have only one deep well into which everything will ultimately tumble (a single attractor, like the one a pendulum moves towards), while others have multiple attractors - some deeper, some shallower - with the system able to explore multiple regimes of behavior within the space.

Further, complex systems can sometimes settle into one of several attractor states, each of which is equally viable. This can be described as a system having Multiple Equilibria. The example of Benard Rolls is a case in point: liquid is heated from below, and the resulting forces churn the water molecules so as to minimize resistance by moving into a "roll" pattern. That said, the direction of the roll - cascading left or cascading right - represents two equally viable minimizing behaviors, either of which the liquid can move into. The system therefore has multiple equilibria.

In addition, we can have systems that oscillate between attractors, rather than settling into a specific regime. An example would be a predator/prey system, where the population numbers of each species rise and crash in recurring patterns over multiple generations. In this case, two attractors are coupled, such that as one intensifies (the prey reproduces a lot), it generates a counterbalancing response in another part of the system (the predator finds a ready food source and is able to reproduce a lot). This creates a back-and-forth oscillation between high prey and high predator numbers, with each regime counterbalancing the other.
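To make this oscillation concrete, here is a minimal numerical sketch of the classic Lotka-Volterra predator/prey equations. The parameter values and starting populations are purely illustrative, not drawn from any real ecosystem:

```python
# Minimal predator-prey (Lotka-Volterra) sketch: two coupled regimes pushing
# against one another produce sustained boom-and-bust oscillations.
# All parameter values are illustrative only.

def lotka_volterra(prey=10.0, predators=5.0, steps=200_000, dt=0.001,
                   birth=1.0, predation=0.1, efficiency=0.075, death=0.5):
    history = []
    for i in range(steps):
        d_prey = (birth * prey - predation * prey * predators) * dt
        d_pred = (efficiency * prey * predators - death * predators) * dt
        prey += d_prey
        predators += d_pred
        if i % 5000 == 0:                      # sample the populations occasionally
            history.append((prey, predators))
    return history

for prey, pred in lotka_volterra():
    print(f"prey: {prey:8.2f}   predators: {pred:8.2f}")
```

Running this, the two populations chase each other up and down rather than converging on any single value: neither regime 'wins', and the oscillation itself is the stable pattern.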

The same dynamics can be seen in what are known as chemical oscillators, where multiple attractors arise as follows:

  • a reaction intensifies certain chemical behaviors;
  • beyond a certain threshold these behaviors catalyze a new, counter behavior;
  • this counter behavior intensifies...;
  • beyond a certain threshold this counter behavior catalyzes the first behavior;
  • etc. 

The result of these reactions can be quite surprising, as seen below!

Check out the Multiple Attractors in the Briggs-Rauscher chemical oscillator.


Back to {{key-concepts}}

Back to {{complexity}}



 


 

Hello There

This is a nice home page for this section, not sure what goes here.

26:26 - Non-Linearity
Related
Concepts - 218 93 212 
Fields - 11 14 19 15 12 18 20 

23:23 - Nested Orders
Related
Concepts - 64 217 66 
Fields - 11 16 14 

24:24 - Emergence
Related
Concepts - 214 59 72 
Fields - 11 16 28 13 12 18 20 

25:25 - Driving Flows
Related
Concepts - 84 75 73 
Fields - 28 17 19 10 15 12 18 20 

22:22 - Bottom-up Agents
Related
Concepts - 213 56 
Fields - 11 16 14 10 13 12 18 

21:21 - Adaptive Capacity
Related
Concepts - 88 78 
Fields - 11 16 17 10 15 13 12 

 

Non-Linearity

Non-linear systems are ones where the scale or size of effects is not correlated with the scale of causes, making them very difficult to predict.

Non-linear systems are ones in which a small change to initial conditions can result in a large-scale change to the system's behavior over the course of time. This is because such systems are subject to cascading feedback loops that amplify slight changes. The notion has been popularized as 'the butterfly effect': the idea that the beating of a butterfly's wings in Brazil might set off a tornado in Texas. The effect is counterintuitive because of the scale difference - we tend to think that big effects are the result of big causes, but non-linear systems do not work that way.


This is because the behavior of non-linear systems is governed by what is known as Positive Feedback, which amplifies very small shifts in initial conditions into massive system change. It therefore becomes very difficult to determine how an input or change will affect the system, with small actions inadvertently leading to big, unforeseen consequences.

Clarifying Terminology: Positive feedback does not imply a value judgement, with 'positive' being equated with 'good'! Urban decay is an example of a situation where positive feedback may lead to negative outcomes. A cycle of feedback might involve people disinvesting in a neighborhood, such that the quality of the housing stock goes down, leading to dropping property values at neighboring sites, further dis-incentivizing improvements, leading to further disinvestment, etc.

History Matters!

The non-linearity of complex systems makes them very difficult to predict; instead, we may think of complex adaptive systems as needing to unfold. Hence, History Matters: slight variances in a system's history can lead to very different system behaviors.

Example:

A good example of this is comparing the nature of a regular pendulum to a double pendulum. In the case of a regular pendulum,  regardless of how we start the pendulum swinging, it will stabilize into a regular oscillating pattern. The history of how, precisely, the pendulum starts off swinging does not really affect the ultimate system behavior. The pendulum will stabilize in a regular pattern regardless of the starting point, a behavior that can be replicated over multiple trials.

The situation changes dramatically when we move to a double pendulum (a pendulum attached to another pendulum with a hinge point). When we start the pendulum moving, the system will display erratic swinging behaviors - looping over itself and spinning in unpredictable sequences. If we were to restart the pendulum swinging one hundred times, we would see one hundred different patterns of behavior, with no particular sequence repeating itself. Hence, we cannot predict the pendulum's behavior; we can only watch the swinging system unfold. At best, we might observe that the system has certain tendencies, but we cannot outline the exact trajectory of the system's behavior without allowing it to 'play out' in time:

watch the double pendulum!

We can think of the difference between this non-linear behavior and linear systems: if we wish to know the behavior of a billiard ball being shot into a corner pocket, we can calculate the angle and speed of the shot, and reliably determine the trajectory of the ball. A slight change in the angle of the shot leads to only a slight change in the ball's trajectory. Accordingly, people are able to master the game of pool by practicing their shots! If the behavior of a billiard ball on a pool table were like that of a complex system, the game would be impossible to master: even the most minute variation in our initial shot would send the balls to completely different positions on the table with every shot.
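The double pendulum itself takes a fair amount of physics to simulate, but the underlying point - that near-identical starting conditions diverge completely - can be sketched with something much simpler. The snippet below uses the logistic map as an illustrative stand-in; it is not the pendulum's actual dynamics:

```python
# Sensitivity to initial conditions, sketched with the logistic map
# (x -> r*x*(1-x) in its chaotic regime, r = 4.0). Two trajectories that start
# one part in a billion apart soon bear no resemblance to one another.

r = 4.0
a, b = 0.300000000, 0.300000001   # nearly identical starting conditions

for step in range(1, 51):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}:  a = {a:.6f}   b = {b:.6f}   gap = {abs(a - b):.6f}")
```

By around step 30 the two trajectories have fully decoupled, even though a measurement of the starting conditions would have struggled to tell them apart.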

System Tendencies

That said, a non-linear system might still exhibit certain tendencies. If we allow a complex system to unfold many times (say, in a computer simulation), each run yields a different outcome (and some yield highly divergent outcomes), yet the system may still tend to gravitate towards particular regimes. Such regimes of behavior are known as Attractor States. Returning to the pendulum: in our single pendulum experiment the system always moves to the same attractor, oscillating back and forth. But a complex system can feature multiple attractors, and the 'decision' of which attractor the system tends towards varies according to the initial conditions.

Complex systems can be very difficult to understand due to this non-linearity. We cannot know if a 'big effect' is due to an inherent 'big cause' or if it is something that simply plays out due to reinforcing feedback loops. Such loops amplify small behaviors in ways that can be misleading.

Example:

If a particular scholar is cited frequently, does this necessarily mean that their work has more intrinsic value than that of another scholar with far fewer citations?

Where is this all going?!

Intuitively we would expect a high level of citations to be correlated with a high quality of research output, but some studies have suggested that scholarly impact might also be attributed to the dynamics of {{positive-feedback}}: a scholar who is randomly cited slightly more often than another scholar of equal merit will tend to attract more attention, which then attracts more citations, which attracts more attention, and so on. Had the scholarly system unfolded in a slightly different manner (with a different scholar initially receiving a few additional citations), the dynamics of the system could have led to a completely divergent outcome - citation networks may be subject to historical {{contingency}}, and could have played out differently, with different scholars assuming the primary positions in the citation hierarchy. Thus, when we say that complex systems are "Sensitive to Initial Conditions", this is effectively another way of speaking about the non-linearity of the system, and how slight, seemingly innocuous variations in the history of the system can have a dramatic impact on how things ultimately unfold.
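A small simulation of this 'rich get richer' dynamic is sketched below. The rule is a simple preferential-attachment scheme invented for illustration: every new citation goes to a scholar with probability proportional to the citations they already hold (plus a small base rate), even though all scholars start with identical 'merit':

```python
# Cumulative advantage in citations: each new citation goes to a scholar with
# probability proportional to citations already held (plus a small base rate).
# Repeated runs produce very different "winners" despite identical starting merit.

import random

def run(scholars=50, new_citations=5000, base=1.0, seed=None):
    rng = random.Random(seed)
    counts = [0] * scholars
    for _ in range(new_citations):
        weights = [base + c for c in counts]          # a slight lead exerts a bigger pull
        winner = rng.choices(range(scholars), weights=weights, k=1)[0]
        counts[winner] += 1
    return sorted(counts, reverse=True)

for trial in range(3):
    print(f"trial {trial}: top five citation counts = {run(seed=trial)[:5]}")
```

Each trial produces a heavily skewed distribution, but which scholars end up at the top differs from run to run - an outcome driven by contingency and feedback rather than merit.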

Another way of thinking about this is to describe a system's {{non-linear}}: a key concept linked to the idea of non-linearity, indicating that we need to follow the sequence of the system's unfolding to see what is going to happen. Tied to the idea of a system's path that needs to be followed is the idea of a {{tipping-point}}, a kind of 'point of no return' where a system veers from one trajectory to another, thereby closing off other potential pathways. A tipping point can be a system poised at a juncture between two states, either of which could viably unfold (i.e. VHS or Beta), or it can be a moment where the pressure on the system is such that it can no longer continue to operate in a mode that, until that point, was viable. At that juncture the system needs to move into a different kind of behavioral regime. Water turning to ice or to steam is a tipping point of this latter kind: the water molecules move beyond a certain threshold of agitation and can no longer maintain their previous state of solid, liquid, or vapor.


Implications

In many domains of complexity, computer models are the primary tool used to understand these systems. Computers are very effective at emulating the step-by-step, rule-based processes undertaken by multiple agents in parallel that can result in emergent, unexpected outcomes. This can be very helpful, particularly if the system being modeled can be shown to have a tendency to move towards particular regimes despite its non-linear features (these system tendencies can be thought of as 'attractors' for the system).

That said, many complex systems do not have specific attractors, or have attractors that change in unexpected ways depending on the environmental context at play. Real-world complex systems will gravitate towards 'fit' behaviors, but fitness changes with context, there can be multiple, divergent 'fit' solutions, and the variables governing a system's unfolding can themselves change.

Because of the non-linear nature of complex systems, predictive models are, in principle, not going to be an effective means of gaining insight into ultimate system trajectories. This is not to say that we can't learn from the dynamics that unfold in simulations, only that it is hard to treat them as predictive tools given the inherent uncertainty of these systems.

So what do we do? One answer is that we accept our lack of ability to predict specific outcomes, and try something else. This 'something else' has to do with learning from complexity dynamics so as to gain the tools to enact complexity dynamics:

Enacting vs Predicting.

What if we could set up systems that hold the ability to unfold in ways that lead towards fit behaviors? Rather than build a complex system in a model, what if we could make real things in the world modeled on complexity dynamics? We would have to accept a kind of uncertainty - we won't know what the systems will ultimately look like, but we might still be able to know how they will behave. And if we design these systems correctly, they will behave in ways that ensure that the energy or resources fueling the system are processed effectively, and that individual agents are gradually steered into regimes of behavior that maximize the fitness of all agents as a whole.

While the precise form such systems take will be subject to contingent, non-linear dynamics, the performance of the system will be something we can rely upon to serve a given purpose.




 

Nested Orders

Complex Systems tend to organize themselves into systems of nested orders, where new features emerge at each level of order: cells forming organs, organs forming bodies, bodies forming societies.

Complex systems exhibit important scalar dynamics from two perspectives. First, they are often built up from nested sub-systems, which may themselves be complex systems. Second, at a given scale of inquiry within the system, there will be a tendency for the system to exhibit Power Law (or scale-free) dynamics in terms of how it operates. This simply means that a small number of elements within the system will tend to dominate: this domination can manifest in different ways, such as intensity (earthquakes), frequency (citations), or physical size (road networks). In all cases a small ratio of system components (earthquakes, citations, or roads) carries a large ratio of system impact. Understanding how and why this operates is important to the study of complexity.


Nested Orders

To understand what we mean by 'nested', we can think of the human body. At one level of magnification we can regard it as a collection of cells, at another as a collection of organs, at another as a complete body. Further, each body is itself part of a larger collection - perhaps a family, a clan or a tribe - and these, in turn, may be part of other, even larger wholes: cities or nations. In complex systems we constantly think of both parts and wholes, with the whole (at one level of magnification) becoming just a part (at another level of magnification). While we always need to select a scale to focus upon, it is important to note that complex systems are open - so they are affected by what occurs at other scales of inquiry. When trying to understand any given system within this hierarchy, the impact of sub-systems is typically felt at adjacent scales. Thus, while a society can be understood as being composed of humans, composed of bodies, composed of organs, composed of cells, we do not tend to consider the role that cells play in affecting societies. Instead, we attune to the interactions between the relevant scales of whatever system we are examining. Depending on the level of inquiry we choose, we may look at the same entity (for example a single human being) and consider it to be an emergent 'whole', or simply a component part (or agent) within a larger emergent entity (one being within a complex society).

Various definitions of complexity try to capture this shifting nature of agent versus whole, and how it alters depending on the scale of inquiry. Definitions thus point to complex adaptive systems as being hierarchical, or as operating at micro, meso, and macro levels. In his seminal article The Architecture of Complexity, Herbert Simon describes such systems as 'composed of interrelated sub-systems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem'.

Why is this the case? And why does it matter?

Simon argues that, by partitioning systems into nested hierarchies, wholes are more apt to remain robust: they maintain their integrity even if parts of the system are compromised. He provides the example of two watchmakers, each of whom builds watches made up of one thousand parts. One watchmaker organizes the watch's components as independent entities, each of which needs to be integrated into the whole in order for the watch to hold together as a stable entity. If one piece is disturbed in the course of the watchmaking, the whole disintegrates, and the watchmaking process needs to start anew. The second watchmaker organizes the watch parts into hierarchical sub-assemblies: ten individual parts make one unit, ten units make one component, and ten components make one watch. For the second watchmaker, each sub-assembly holds together as a stable, integrated entity, so if work is disrupted in the course of making an assembly, the disruption affects only that component (meaning a maximum of ten assembly steps are lost). The remainder of the assembled components remain intact.
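As a rough, back-of-the-envelope version of Simon's argument, we can compare the expected effort of the two watchmakers. The sketch below assumes that each added part carries a small, fixed probability of interruption, and that an interruption scraps only the sub-assembly currently in progress; the specific interruption probability is an arbitrary choice:

```python
# A rough version of Simon's watchmaker argument. Assume each added part has a
# small probability p of interruption, and an interruption scraps only the
# sub-assembly currently being built. The expected number of additions needed to
# complete a run of s uninterrupted steps is ((1-p)**-s - 1) / p.

def expected_steps(parts, p):
    return ((1 - p) ** -parts - 1) / p

p = 0.01                                            # illustrative interruption rate
flat = expected_steps(1000, p)                      # one monolithic 1000-part assembly
nested = 111 * expected_steps(10, p)                # 100 + 10 + 1 ten-part sub-assemblies

print(f"flat watchmaker:   ~{flat:,.0f} expected steps")
print(f"nested watchmaker: ~{nested:,.0f} expected steps")
print(f"advantage of nesting: ~{flat / nested:,.0f}x")
```

Under these assumptions the nested watchmaker finishes watches thousands of times faster than the flat one: partitioning the work into stable sub-wholes is what keeps disruptions from propagating.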

If Simon is correct, then natural systems may preserve robustness by creating sub-assemblies that each operate as wholes. Accordingly, it is worth considering how human systems might benefit from similar strategies.

Nested System Partitioning

Simon's watchmaker is a top-down operator who organizes his workflow into parts and wholes to keep the watch components partitioned and robust, creating a more efficient watch-making process. What is noteworthy is that self-organizing, bottom-up systems also have inherent dynamics that appear to push them towards such partitioning, and that this partitioning holds specific structural properties related to mathematical regularities.

A host of complex systems thus exhibit what is known as Self Similarity - meaning that we can 'zoom in' at any level of magnification and find repeated, nested scales. These scale-free hierarchies follow the mathematical regularities of Power Law distributions. These distributions are so common in complex systems that they are often referred to as 'the fingerprint of self-organization' (see Ricard Solé). We find power-law distributions in systems as diverse as the frequency and magnitude of earthquakes, the structure of academic citation networks, the prices of stocks, and the structure of the World Wide Web.


Scalar Events 

Further, complex systems tend to 'tune' themselves to what is referred to as Self-Organized Criticality: a state at which the scale or scope of a system's response to any given input will follow a power-law distribution, regardless of the intensity (or scope) of the input. Imagine a pile of sand, to which one grain is added to the top, then another, then another. There is a moment when the pile reaches a certain threshold, where adding a grain causes the pile to endure a small 'collapse': an added grain dislodges an existing one, which cascades down off the pile. When sand piles (or other complex systems) are in this 'critical' state, we cannot predict the impact of that singular grain of sand: whether it will dislodge one or two grains, or whether it will set off an avalanche of several hundred. If the addition of one grain causes a massive avalanche, we might think that the avalanche was the 'result' of a major 'cause'. But this is an error (see {{non-linearity}}). That single grain could just as easily have set off an avalanche of any size, and the frequency with which avalanches of different sizes occur follows a power law (see also {{per-bak}}).
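A minimal sketch of this kind of sandpile model is given below. The grid size and number of grains are arbitrary; the point is simply that one identical cause (adding a single grain) produces avalanches spanning many scales:

```python
# A minimal Bak-Tang-Wiesenfeld-style sandpile sketch. Grains are dropped one at
# a time on a small grid; any cell holding 4 or more grains "topples", sending
# one grain to each neighbour (grains falling off the edge are lost). The sizes
# of the resulting avalanches span many scales.

import random
from collections import Counter

N = 20
grid = [[0] * N for _ in range(N)]

def drop_grain():
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1
    toppled = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:                    # may already have toppled
            continue
        grid[i][j] -= 4
        toppled += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return toppled

sizes = Counter(drop_grain() for _ in range(50_000))
for size in sorted(s for s in sizes if s > 0)[:12]:
    print(f"avalanches of size {size:3d}: {sizes[size]}")
```

Once the pile has 'tuned' itself to the critical state, small cascades are common and large ones rare, with the frequencies falling off in the characteristic heavy-tailed pattern.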

While not fully understood, it is believed that systems gravitate towards these critical states because it is within these regimes that they are able to maximize performance while using the minimum amount of available energy. When systems are poised at this state they also have maximum connectivity with the minimum amount of redundancy. It is also believed that they are the most effective information processors when poised within this critical regime.

Why Nested and not Hierarchical?

The attentive surfer of this website may notice that in the various definitions of complexity in circulation, the term 'hierarchical' is used to describe what we here call 'nested orders'. We have avoided this term as it holds several connotations that appear unhelpful. First, a hierarchy generally assumes a kind of priority, with 'upper' levels being more significant than lower ones. Second, it implies control emanating from the top down. Neither connotation is appropriate when speaking about complex systems. Each level of a nested order is both a part and a whole, and causality flows both ways: the emergent order is generated by its constituent parts, and steered by those parts as much as it steers (or constrains) them once present. We hope that the idea of 'nested orders' is more neutral vis-a-vis notions of primacy and control, while still capturing the idea of systems embedded within systems at different scales.


Implications

When considering the design of a system for which we are hoping to achieve complex dynamic unfolding, it is therefore important to think about two aspects.

The first is to consider how we might partition systems into different sub-units of similar components, that can operate as a unit without doing damage to units operating either at a higher or lower level. To take an urban example, we might think about the furnishings that operate together to form the unit of a room, rooms that together form the unit of a building, and buildings that operate together to form the unit of a block. Each level operates with respect to the levels above and below, but can be thought of as systems on their own. 

But this is not all - there is a dialogue between levels, such that it is not simply a hierarchy running from the block down through the building and into the furniture. Instead, each level emerges from the level below, is stabilized over time, and in turn constrains what happens at the scale below it: units emerging from units, then constraining these same units, while also forming the {{building-blocks}} of what happens above.

The second is to be careful about how we interpret extreme events: if we look at large sandpile avalanches as somehow fundamentally different from small sandpile cascades, we are unlikely to appreciate that the same kind of cause tripped off both effects. The same dynamics may be at play for many phenomena, so we should be careful about how much emphasis we place on causal factors in 'extreme' events, if the event is one taking place within a complex system that may be in the critical regime.

To put it another way, if we wish to know why a particular cat video went viral, it might not be that productive to look into the details of the cat, its actions, or the quality of the video. That particular video might simply be the sand grain of cat videos - setting off a chain of viewing that would eventually have cascaded simply due to the number of cat videos poised to go viral at any given moment. While this example does not exactly parallel the sandpile case, it expresses the same basic premise: extreme events may simply be one scale of event in a system that is poised to unfold at all potential scales.








 

Emergence

Complex Adaptive Systems display emergent global features: ones transcending that of the system's individual elements.

Emergence refers to the unexpected manifestation of unique phenomena in a complex system in the absence of top-down control. Emergent, integrated wholes manifest through self-organizing, bottom-up processes, and these wholes exhibit clear, functional structures. Such phenomena are intriguing in part because of their unexpectedness: coordinated behaviors yield an emergent pattern or synchronized outcome holding properties distinct from those of the individual agents in the system. Emergence can refer both to these novel global phenomena themselves (such as ant trails, Benard rolls, or traffic jams) and to the mathematical regularities - such as power laws - associated with them.


Starling Murmuration - an emergent phenomenon

When we see flocks of birds or schools of fish, they appear to operate as integrated wholes, yet the whole is somehow produced without any specific bird or fish being 'in charge'. The processes leading to such phenomena are driven by networks of interactions that, because of feedback mechanisms, gradually impose constraints or limits upon the members of the system (see Degrees of Freedom). Recursive feedback between these members (or 'agents') takes what was initially 'free' behavior and gradually constrains or enslaves it into coordinated regimes.

These coordinated, emergent regimes generally feature new behavioral or operational capacities that are not available to the individual elements of the system. In addition, emergent systems often exhibit mathematical pattern regularities (in the form of {{power-laws}}) pertaining to the intensity of the emergent phenomena. These intensities tend to be observed in aspects such as the spatial, topological, or temporal distributions of the emergent features. For example, there are pattern regularities associated with earthquake magnitudes (across time), city sizes (across space), and website popularity (across links, or 'topologically').

Quite a lot of research in complexity is interested in the emergence of these mathematical regularities, and sometimes it is difficult to decipher which feature of complexity is more important - what the emergent phenomena do (in and of themselves), versus the structural patterns or regularities that these emergent phenomena manifest.

Relation to Self-Organization:

Closely linked to the idea of emergence is that of self-organization, although there are some instances where emergence and self-organization occur in isolation from one another.

Example:

One interesting case of emergence without self-organization is associated with the so-called 'wisdom of crowds'. A classic example of the phenomenon (described in the book of the same name) involves estimating the weight of a cow at a county fair. Both experts and non-experts are asked to estimate the cow's weight: fair attendees are given the chance to guess a weight and put their guess into a box, with none of the attendees aware of the estimates being made by others. Nonetheless, when all the guesses from the attendees are tallied and averaged, the weight the 'crowd' collectively determines is closer to the true weight than the estimates made by the experts. The correct weight of the cow 'emerges' from the collective, but no self-organizing processes are involved - simply independent guesses.
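A toy numerical version of this story is sketched below. All the numbers are invented, and the effect relies on the guesses being independent and roughly unbiased, but it shows how averaging washes out individual error:

```python
# A toy version of the 'wisdom of crowds' story: many independent, noisy guesses,
# once averaged, land closer to the true value than the typical individual does.
# All numbers here are invented for illustration.

import random

true_weight = 543                      # kg, the "actual" weight of the cow
random.seed(1)

guesses = [random.gauss(true_weight, 80) for _ in range(800)]   # independent guesses
crowd_estimate = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - true_weight) for g in guesses) / len(guesses)

print(f"crowd estimate:           {crowd_estimate:.1f} kg "
      f"(error {abs(crowd_estimate - true_weight):.1f} kg)")
print(f"average individual error: {typical_individual_error:.1f} kg")
```

The crowd's error is a small fraction of the typical individual's error, even though no guesser communicated with any other - emergence without self-organization.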

Despite there being examples of emergence without self-organization (as well as self-organization without emergence), in the case of Complex Adaptive Systems these two concepts are highly linked, making it difficult to speak about one without the other. If there is a meaningful distinction, it is that Self-Organization focuses on the character of interactions occurring amongst the Bottom-up Agents of a complex system, whereas Emergence highlights the global phenomena that appear in light of these interactions.

Enslavement:

At the same time, the concepts are interwoven, since emergent properties of a system tend to constrain the behaviors of the agents forming that system. Hermann Haken frames this through the idea of an Enslaved State, where agents in a system come to be constrained as a result of phenomena they themselves created.

Example:

An interesting illustration of the phenomenon of 'enslavement' can be found in ant-trail formation. Ants, which initially explore food sources at random, gradually have their random explorations constrained by the signals provided by pheromones (which are deployed by ants that randomly discover food). The ants, responding in a bottom-up manner to these signals, gradually self-organize their search and generate a trail. The trail is the emergent phenomenon, and self-organization - as a collective dynamic distributed across the colony - 'works' to steer individual ant behavior. That said, once a trail emerges, it acts as a kind of 'top-down' device that constrains subsequent ant trajectories.
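The sketch below is a bare-bones version of this trail dynamic, loosely modeled on the classic 'double bridge' setup: ants choose between two equally good paths in proportion to (the square of) the pheromone already deposited on each. The squared response is one simple way of capturing how small early leads get amplified; the specific numbers are arbitrary:

```python
# Stigmergy in miniature: ants choose between two equally good paths according to
# the pheromone already on each. Early random fluctuations are amplified by
# feedback until the colony is effectively 'enslaved' to a single trail.

import random

def run_colony(ants=2000, seed=None):
    rng = random.Random(seed)
    pheromone = {"path A": 1.0, "path B": 1.0}       # equal, negligible initial traces
    for _ in range(ants):
        # a nonlinear (squared) response to pheromone amplifies small early leads
        weight_a = pheromone["path A"] ** 2
        weight_b = pheromone["path B"] ** 2
        choice = "path A" if rng.random() < weight_a / (weight_a + weight_b) else "path B"
        pheromone[choice] += 1.0                      # each crossing deposits more pheromone
    return {path: round(level) for path, level in pheromone.items()}

for trial in range(4):
    print(f"trial {trial}: {run_colony(seed=trial)}")
```

In each run the colony locks in to one path almost exclusively, but which path wins differs from run to run: the emergent trail both results from and then constrains individual ant behavior.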

Emergence poses ontological questions concerning where agency is located - that is, what is acting upon what. The source of agency becomes muddy as phenomena arising from agent behaviors (the local level) give rise to emergent manifestations (the global level), which subsequently constrain further agent behaviors (and so forth). This is of interest to those drawn to the philosophical implications of complexity.

There is a very tight coupling in these dynamics between a system's components and the environment within which those components act. A specific characteristic of the environment is that it also consists of system elements. Consequently, as elements shift in response to their environmental context, they are, in turn, helping to produce a new environmental context for themselves. The system's components and the system's environment thereby form a kind of closed loop of interactions. These kinds of behavioral loops, which lead to forms of self-regulation, were the object of study for early Cybernetics thinkers.

Urban Interpretations:

The concept of Emergence has become increasingly popular in urban discourses. While some urban features come about through top-down planning (for example, the decision to build a park), other kinds of urban phenomena seem to arise through bottom-up emergent processes (for example, a particular park becoming the site of drug deals). It should be noted that not all emergent phenomena are positive! In some cases we may wish to help steward along emergent characteristics that we deem positive for urban health, while in others we may wish to dismantle the kinds of feedback mechanisms that create spirals of decay or crime.

The concept of emergence can be approached very differently depending on the aims of a particular discourse. For example, Urban Modeling often highlights the emergence of Power Laws in the ratios of different kinds of urban phenomena. A classic example is the presence of power-law distributions in city sizes, which looks at how the populations of cities in a country follow a power-law distribution; but one can also examine power-law distributions within rather than between cities, looking at characteristics such as road systems, restaurants, or other civic amenities.

Others, such as those engaged in the field of Evolutionary Economic Geography (EEG), are intrigued by different kinds of physical patterns of organization. EEG attunes to how 'clusters' of firms or 'agglomerations' appear in various settings in the absence of top-down coordination. It tries to unpack the mechanisms whereby firms are able to self-organize to create these clusters, rather than looking at any particular mathematical regularities or power-law attributes associated with such clusters.

Still other urban discourses, including Relational Geography and Assemblage Geography, focus on how agents come together to create new structures or entities - which might be buildings, institutions, building plans, etc. These discourses tend to focus on the coordination mechanisms and flows that steer how such entities come to emerge.

Accordingly, different discourses attune to very different aspects of complexity.

Proviso:

While this entry provides a general introduction to emergence (and self-organization), there are other interpretations of these phenomena that disambiguate the concepts with reference to Information theory. These interpretations focus upon the amount of information (in a Shannonian sense) required to describe self-organizing versus emergent dynamics.

While these definitions can be instructive, they remain somewhat controversial. There is no absolute consensus about how complexity can be defined using mathematical measures (for an excellent review of various measures, check the FEED for Ladyman, Lambert and Wiesner, 2012). Often, an appeal is made to the idea of 'somewhere between order and randomness'. But this only tells us what complexity is not, rather than what it is. The explanation provided here is intended to outline the terminology in a more intuitive way that, while not mathematically precise, makes the concepts workable.





 

Driving Flows

Complex Systems exchange energy and information with their surroundings. These input flows help structure the system.

Complex systems, while operating as bounded 'wholes', are not entirely bounded. They remain open to the environment, which, in some fashion, 'feeds' or 'drives' the system: providing energy that can be used by the system to build and retain structure. Thus complex systems seem to violate the second law of thermodynamics in that, rather than tending towards disorder (entropy), they are pushed towards order (negentropy). This would not be possible in the absence of some external source of input. This input can be thought of as the "fuel" for the agents within the system: food for ants, clicks for a website, or trades for a stock market.


According to the second law of thermodynamics, a system left to its own devices will eventually lose order: hot coffee poured into cold dissipates its heat until all the coffee in the cup is the same temperature; matter breaks down over time when exposed to the elements; systems lose structure and differentiation. The same is not true for complex systems: they gain order and structure over time.

What constitutes a flow?

In general, we can conceptualize flows as some form of energy that helps drive or push the system. But what do we mean by energy? And what kinds of energy flows should we pay attention to in the context of complexity?

In some cases, the source of system energy aligns with a strictly technical definition of what we think of when we say 'energy'. Such is the case in the classic example of 'Benard rolls' (see Open / Dissipative for a video of this phenomenon). Here, a coherent, emergent 'roll' pattern is generated by exciting water molecules by means of a heat source. It becomes relatively straightforward to identify thermal energy as the flow driving the system: heat enters the water system from below, dissipates to the environment above, and drives emergent roll activity in between.

But there are a host of different kinds of complex systems whose driving flows do not necessarily align with this strict conception of 'energy'.

Example:

In an academic citation network, citations can be perceived as the 'energy' or flow that drives the system towards self-organization. As more citations are gathered, a scholar's reputation is enhanced, and more citations flow towards that scholar. A pattern of scholarly achievement emerges (one that follows a {{power-law}} distribution), due to the way in which the 'energy flows' of scholarly recognition (citations) are distributed within the system. While we tend to think that citations are based on merit, a number of studies have been able to replicate patterns that echo real citation distributions using only the kinds of mechanisms we would expect to see within a complex system - with no inherent merit required (see also Preferential Attachment!).
Similarly, the stock market can be considered as a complex adaptive system, with stock prices forming the flow that helps steer system behavior; the world wide web can be considered as a complex adaptive system, with the number of website clicks serving as a key flow; and the way Netflix organizes recommendations can be considered as a complex adaptive system, with movies watched serving as the flow that directs the system towards new recommendations.

Clearly, it is helpful to understand the specific nature of the driving flows within any given complex system, as this is what helps push the system along a particular trajectory. For ants (who form emergent trails), food is the energy driving the system. The ants adjust their behaviors in order to gain access to differential flows (or sources) of food in the most effective way possible given the knowledge of the colony. In this case, the global caloric value of food stocks found is a good way to track the effectiveness of ant behavior.

If we look at different systems, we should be able to somehow 'count' how flow is directed and processed: citation counts, stock prices, website clicks, movies watched.

Multiple Flows:

Often complex systems are subject to more than one kind of flow that steers their dynamics. For example, we can look at the complex population dynamics of a species within an ecosystem with a limited carrying capacity. Here, two flows are of interest: the intensity of reproduction (the flow of new entrants into the environmental context), and the flow of food supplies (which limits how much population can be sustained). One flow rate drives the system (reproductive rate), while another flow rate chokes it (carrying capacity). The interaction between these two input flows (one driving and the other constraining) produces very interesting emergent dynamics that lead the system to oscillate, or move periodically from one state (or Attractor State) to another. A more colloquial way of thinking about this periodic cycling is captured in the idea of 'boom' and 'bust' cycles, although there are other kinds of cycles that involve moving between not just two but many additional cycling regimes (see Bifurcations for more!).
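One very compact way of sketching this 'driving flow versus choking flow' interplay is the discrete logistic model, where a growth term pushes the population up and a crowding term pulls it back down. The parameter values below are purely illustrative:

```python
# Two flows at once: a reproduction rate that drives the population up, and a
# carrying-capacity (crowding) term that chokes growth. In the discrete logistic
# model x -> r*x*(1-x), higher r values produce the 'boom and bust' cycling
# described above. All values are illustrative.

def population_series(r, x=0.2, steps=40):
    series = []
    for _ in range(steps):
        x = r * x * (1 - x)        # growth term r*x, choked by the (1-x) crowding term
        series.append(round(x, 3))
    return series

print("r = 2.8 (settles to one level):  ", population_series(2.8)[-6:])
print("r = 3.2 (boom/bust, two levels): ", population_series(3.2)[-6:])
print("r = 3.5 (cycles among four):     ", population_series(3.5)[-6:])
```

As the driving rate is turned up, the system shifts from a single stable level to a two-state boom/bust cycle and then to cycles among more states - the kind of regime change explored further under Bifurcations.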

Go with the flow:

Flow is the source of energy that drives self-organizing processes. A complex system is a collection of agents operating within a kind of loose or Open / Dissipative boundary, and flow is what comes in from the outside and is then processed by these agents. Food is not part of the ant colony system, but it is what drives colony dynamics. The magic of self-organization is that, rather than each agent needing to independently figure out how best to access and optimize this external flow, each agent can learn from what its neighbors are doing.

Accordingly, there are two kinds of flows in a complex system: the external flow that needs to be internalized and processed, and the internal flows amongst agents that help signal the best way to perform within a given environment (and thereby process these external flows). The act of generating these signals is what Pierre-Paul Grassé describes as Stigmergy - a process that in some way marks or alters the shared environment of all agents in ways that can thereby steer agent behavior. Ants depositing pheromones on a path leading to food is one example of a stigmergic signal.

An environment characterized by stigmergic signals is no longer neutral - it has areas or zones of intensity that affect all agents in the system that are in proximity to these signals. Thus, although agents may function in random ways, stigmergy shifts the probability that agents will behave in one way versus another: the more intensity a particular zone of stigmergy has, the more likely agents are to be drawn into the behavior directed by that zone.

Using stigmergic signals to help direct the processing of flows, agents gradually move into regimes that process these flows with minimal energy requirements: through Positive Feedback they draw other agents along into similar regimes of behavior, making the system, as a whole, an efficient energy processor.

It's all about difference:

Every complex system channels its own specific form of driving flow.

In every case, it is important to look beyond technical definitions of energy flows in complex systems, and to instead understand these as the differences that matter to the agents in the system - or, as Gregory Bateson states, 'the difference that makes a difference'. All complex systems involve some sort of differential, and this differential is regulated by an imbalance of flows that steers subsequent agent actions. As the system realigns itself by attuning to these differentials, new behaviors or patterns emerge that, in some way, optimize behaviors.

Inherent Laziness: It's everywhere!

A nice way to think about this is to imagine that everything in the world is essentially trying to do the least possible work - particularly when being pushed around by some outside force. The Driving Flows are that outside force, which basically comes into the agents' territory.

Responsive Agents, Differential Flows:

Sometimes, all the agents really care about is shaking off the disturbance: the liquid molecules being heated in the Benard Rolls were happily drifting about, only to have some annoying heat energy come along that they now need to contend with. At that point, the regime that allows the heat to pass through the system and rise to the top is for the molecules to align into neat rolls that let these currents go through with less overall disruption. The same is true of sand grains forming ridges in response to the driving flows of the winds. In both cases, the agents, left to themselves, do not generate driving flows in and of themselves.

Active Agents, Differential Flows:

At other times, the agents are themselves a kind of driving force that needs an external driving flow to achieve a goal (eat, reproduce, etc.), but they are trying to figure out how to claim the prize without wasted effort. Unlike the agitated fluid or the disturbed sand, ants will move to seek out the driving flow whether or not it is present, ultimately running out of steam if it never appears. We can see here that the ants are different from the sand grains, because the sand grains are passive without the external input, whereas ant behavior actively seeks the external input out. A growing tree does the same thing - its roots look for nutrients, its branches and leaves extend towards the sun - the environment and the agent work together to minimize the effort the growing tree expends to get what it needs without wasting resources.

Flowing Agents, Differential Context

A final example inverts the situation entirely, where the driving flow comes strictly from an agent in an environment. Imagine I want to walk up a hill. My drive is to get to the top, but I want to do so expending the least amount of energy in terms of both time and effort. I can charge directly upwards - using the principle that the shortest distance between two points is a straight line. While this might initially appear to be a good solution, I quickly discover that the effort of the direct vertical path takes a toll on my energy level. Instead, if I extend the distance of travel but reduce the slope (moving at a lateral incline), the energy of each step is reduced. That said, the more I reduce the energy of movement, the longer the lateral inclines become - meaning more time is needed to get to the top. Our bodies make a balanced calculation to find the zig-zagging path up the hill that accounts for both the time parameter and the energy parameter. The path is an emergent outcome of this calculation: the best solution for reaching the top while expending the minimum amount of resources (as a function of both time and energy). It is worth noting that this activity still happens in an environment with a differential - the differential this time being the slope of the terrain - but this differential is not produced by a flow moving into the system (like the heat differential in Benard Rolls); instead we have an agent trying to flow through a differential environment.


 

Bottom-up Agents

Complex Adaptive Systems are comprised of multiple, parallel agents, whose coordinated behaviors lead to emergent global outcomes.

CAS are composed of populations of discrete elements - be they water molecules, ants, neurons, etc. - that nonetheless behave as a group. At the group level, novel forms of global order arise strictly due to simple interactions occurring at the level of the elements. Accordingly, CAS are described as "Bottom-up": global order is generated from below rather than coordinated from above. That said, once global features have manifested they stabilize - spurring a recursive loop that alters the environment within which the elements operate and constrains subsequent system performance.


What might an Agent 'B'?

Complex systems are composed of populations of independent entities that nonetheless form a particular 'class' of entities sharing common features. Agents might be ants, or stocks, or websites. Furthermore, they might be Bikes, Barber shops, Beer glasses, or Benches (what I will refer to below as the 'B' list). We can ask what an agent is, but we could equally ask what an agent is not!

Defining an agent is not so much about focusing on a particular kind of entity, but about defining a particular kind of performance within a given system and that system's context. Many elements of day-to-day life might be thought of as agents, but to do so, we need to first ask how agency is operationalized.


Example:

Imagine that I have a collection of 1000 bicycles that I wish to make available for rent across a city. Could I conceive of a self-organizing system where bikes are agents - where the best bike distributions and locations emerge, with bikes helping each other 'learn' where the best flow of consumers is? If a bike's 'destiny' is to be ridden as much as possible, and some rental locations are more likely to enable bikes to fulfill this destiny than others, how could the bikes distribute themselves so as to maximize fulfillment of their collective destiny?

What if I have 50 barber-shops in a town of 500,000 inhabitants - should the shops be placed in a row next to one another? Placed equidistant apart? Distributed in clusters of varying sizes and distances (perhaps following power laws)? Might the barber shops be conceptualized as agents competing for flows of customers in a civic context, trying to maximize gains while learning from their competitors?

And what about beer glasses: if I have a street festival where I want all beer glasses to wind up being recycled rather than littering the ground, what mechanisms would I need to put in place to encourage the beer glasses to act as agents - ones that are more 'fit' if they find their way into recycling depots? How could I operationalize the beer glasses so that they co-opt their consumers to help ensure this occurs? What would a 'fit' beer glass be like in this case (hint: a high-priced deposit)?

Finally, who is to say where the best place is to put a park bench? If a bench is an agent, and 100 benches in a park are a system, could benches self-organize to position themselves where they are most 'fit'?

The examples above are somewhat fanciful, but they illustrate a point: there is no inherent constraint on the kinds of entities we might position as agents within a complex system. Instead, we need to look at how we frame the system, and do so in ways where entities can be operationalized as agents.

Operational Characteristics:

The agents above can each move into more fit behavioral regimes provided that certain operational dynamics are in place: 

  • having a common {{fitness}} criterion shared amongst agents (with some performances being better than others);
  • having an ability to exchange {{information-theory}} amongst agents, which helps direct and constrain how each agent behaves (getting to better performance faster);
  • having an ability to shift performance, or {{adaptive-processes}} (see also Requisite Variety);
  • operating in an environment where there is a meaningful difference available that drives behavior (see Driving Flows).


Thought Experiment:

Let's take just one of the examples above: the location of bikes (you can also find the example of the park benches worked through on the {{principles}} page).

Let's begin by co-opting a number of parking spaces in a city as temporary bike rental stations. Bikes are affixed to a small rolling platform in a vacant parking stall that holds four locked bikes. These bike stations are then distributed at random around a neighborhood. Individuals subscribe to a service that allows them to use bikes in exchange for money or bike credits.

  • Let us assume that the ultimate 'destiny' of a bike is to be ridden. The frequency at which this destiny is manifested would then be considered its measure of fitness. For the purposes of this thought experiment, let's assume that each bike can measure this fitness: it has a sensor that detects ridership.
  • Let us then assume that each bike station is equipped to receive signals from the bike stations in its vicinity, indicating if bikes at those stations are being borrowed or not. With this information a bike station can calibrate which of its nearest neighbors are most readily fulfilling their destiny of being utilized.
  • Let us then assume that the bike platforms are given a bit of 'smart' functionality - they are connected to an app, that those subscribing to the rental service have on their phone. If a bike station is under-performing in comparison to its neighbors, it will offer a credit to any user of the service who will hitch up a bike to the rolling bike station, and move it to the nearest location of higher use.  This gives the bike stations the ability to shift location, providing adaptive capacity.
  • Finally, let us assume that enough people are using the app that variations in use frequency provide enough data to be useful. These usage flows then mark trends within the bike rental system, with certain bike station locations being popular and others not. As people rent or do not rent bikes, a source of difference enters the system, with certain bikes receiving greater or lesser flows of users.

It should be rather intuitive to imagine what would happen in this system. Some bike stations will capture more flows of people than others - the reasons for this might not be clear, and may vary from day to day depending on different conditions. The reasons do not necessarily matter. From the perspective of the bike stations (as the agents in the system), the reason why a particular location is better or worse is not important; what matters is that underutilized bikes will gradually readjust their position in the city so as to better capture the flows they crave. Over time, sites with high usage demand will accumulate consolidations of bike stations, with each station adjusting its position based on information gathered from its nearest neighbors. This will continue until all stations are positioned in ways where they are capturing an equal number of usage flows, with none able to move to a better location. A kind of system equilibrium has been reached. Other equilibrium states may also exist, and so it is helpful if bike stations occasionally abandon this stable state to randomly explore other potential, unoccupied sites that may in fact harbor unharnessed flows of bike ridership. It should be noted that the density of the emerging bike hubs can vary dramatically: there may be areas where 10, 20, or only 1 station is viable. The point is that the agents in the system can distribute themselves, over time, to service this differential need without any need for top-down control. Here we have an example of a kind of 'swarm' urbanism.
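The sketch below is one highly simplified way of rendering this thought experiment in code. Demand at each location is hidden from the stations; each station only compares its own rentals with those of nearby stations and drifts one step toward its busiest neighbour. Every name and number here is invented for illustration:

```python
# A bare-bones sketch of the bike-station thought experiment. Each location has a
# hidden daily demand; stations only observe their own rentals and those of nearby
# stations, and an under-performing station relocates one step toward its busiest
# neighbour. No station knows (or needs to know) *why* a spot is good.

import random

random.seed(42)
LOCATIONS = 30                                                  # spots along one street
demand = [random.randint(0, 10) for _ in range(LOCATIONS)]      # hidden from the stations
stations = random.sample(range(LOCATIONS), 10)                  # initial random placement

def rentals(spot):
    """Daily rentals: hidden demand at the spot, shared if stations co-locate."""
    sharing = stations.count(spot)
    return random.randint(0, demand[spot]) / sharing

for day in range(200):
    performance = [rentals(spot) for spot in stations]
    for i, spot in enumerate(stations):
        # compare with stations within 5 spots; drift toward the best-performing one
        neighbours = [(performance[j], stations[j]) for j in range(len(stations))
                      if j != i and abs(stations[j] - spot) <= 5]
        if neighbours:
            best_perf, best_spot = max(neighbours)
            if best_perf > performance[i]:
                stations[i] = spot + (1 if best_spot > spot else -1)

print("hidden demand:  ", demand)
print("final stations: ", sorted(stations))
```

After a few hundred 'days' the stations have clustered around the high-demand spots, even though no station (and no central planner) ever saw the demand figures directly.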

This example is not typical of those given in complex adaptive systems theory, but it helps illustrate how it is possible, at the most basic level, to conceptualize a system of complex unfolding by using only the notions of Agents, {{fitness}}, {{adaptive-processes}}, {{driving-flows}} and {{information-theory}}. There are further nuances, but any of the systems listed above (the bicycles, barber shops, or beer glasses) could be made to function using the same basic strategies.


'Classic' Agents

The list of potential agentic entities offered above - the 'B' list - is somewhat odd. We begin with them so as to avoid limiting the scope of what may or may not be an agent. That said, this collection of potential agents is not part of what might be thought of as the 'canonical' Agent examples - what we might call the 'A' list - within complexity theory. Let us turn to these now:

Those drawn to the study of complex systems are often compelled to explore agent dynamics because of certain examples that demonstrate highly unexpected emergent aspects. These include 'the classics' (described elsewhere on this website) such as: emergent ant trails, coordinated by individual ants; emergent convection patterns, coordinated by water molecules in Rayleigh-Bénard convection; and emergent higher thought processes, coordinated by individual neurons firing.

In each case, we see natural systems composed of a multitude of entities (agents) that, without any level of higher control, are able to work together to coalesce into something that has characteristics that go above and beyond the properties of the individual agents. But if we consider the operational characteristics at play, they are no different from the more counter-intuitive examples listed above. Take ants as an example. Each ant is an agent that has:

  • a common fitness criterion shared amongst agents (getting food),
  • the adaptive capacity to shift performance (searching a different place)
  • an ability to exchange information with other agents (deploying/detecting pheromones)
  • an environment where there is a meaningful difference that drives behavior (presence of food sources/flows)

Ant trails emerge as a result of ant interaction, but the agents in the system are not actively striving to achieve any predetermined 'global' structure or pattern: they are simply behaving in ways that involve an optimization of their own performance within a given context, with that context including the signals or information gleaned from other agents pursuing similar performance goals. Since all agents pursue identical goals, coordination amongst agents leads to a faster discovery of fit performance regimes. What is unexpected is that, taken as a collective, the coordinated regime has global, novel features. This is the case in ALL complex systems, regardless of the kinds of agents involved.

Finally, once emergent states appear, they constrain subsequent agent behavior, which then tends to replicate itself.  Useful here are {{Humberto-maturana-francisco-varela}}'s notion of autopoiesis as well as Hermann Haken's concept of Enslaved States. Global order or patterns (that emerge through random behaviors conditioned by feedback) tend to stabilize and self-maintain.

Modeling Agents:

While the agents that inspired interest in complexity operate in the real world, scientists quickly realized that computers provided a perfect medium with which to explore the kind of agent behaviors we see operating. Computers are ideal for exploring agent behavior since many 'real world' agents obey very simple rules or behavioral protocols, and because the emergence of complexity occurs as a step by step (iterative) process.  At each time step each agent takes stock of its context, and adjusts its next action or movement based on feedback from its last move and from the last moves of its neighbors.

Computers are an ideal format to mimic these processes since, with code, it is straightforward to replicate a vast population of agents and run simulations that enable each individual agent to adjust its strategy at every time step. Investigations into such 'automata' informed the research of early computer scientists, including such luminaries as {{josh-epstein-and-rob-aztell}}, {{Von-Neumann}}, {{stephen-wolfram}}, {{john-conway}} and others (for more on their contributions see also {{key-thinkers}} on the upper right).

In the most basic versions of these automata, agents are considered as cells on an infinite grid, and cell behavior can be either 'on' or 'off' depending on a rule set that uses neighboring cell states as the input source.

Conway's Game of Life: A classic cellular automaton

These early simulations employed Cellular Automata (CA); research later moved on to Agent-Based Models (ABM), which were able to create more heterogeneous collections of agents with more diverse rule sets. Both CA and ABM aimed to discover if patterns of global agent behaviors would emerge through interactions carried out over multiple iterations at the local level. These experiments successfully demonstrated how order does emerge through simple agent rules, and simulations have become, by far, the most common way of engaging with the complexity sciences.
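For readers who want to see how little is actually required, here is a minimal Game-of-Life-style sketch in Python (a small wrapping grid rather than Conway's infinite lattice). Only the local rule is specified; any larger patterns arise solely through iteration.

```python
import random

# Minimal Game-of-Life-style cellular automaton on a small wrapping grid.
# Each cell is 'on' (1) or 'off' (0); its next state depends only on the
# states of its eight neighbours - no global pattern is ever specified.

N = 20
random.seed(0)
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

def live_neighbours(g, r, c):
    return sum(
        g[(r + dr) % N][(c + dc) % N]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )

def step(g):
    new = [[0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            n = live_neighbours(g, r, c)
            # Conway's rules: a live cell survives with 2-3 neighbours,
            # a dead cell is born with exactly 3
            new[r][c] = 1 if (g[r][c] and n in (2, 3)) or (not g[r][c] and n == 3) else 0
    return new

for _ in range(50):
    grid = step(grid)

print(sum(map(sum, grid)), "cells alive after 50 iterations")
```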

While these models can be quite dramatic, they are just one tool for exploring the field and should not be confused with the field itself. Models are very good at helping us understand certain aspects of complexity, but less effective in helping us operationalize complexity dynamics in real-world settings. Further, while CA and ABM demonstrate how emergent, complex features can arise from simple rules, the rule sets involved are established by the programmer and do not evolve within the program.


Agent Learning

A further exploration of agents in CAS incorporates the ways in which bottom-up agents might independently evolve rules in response to feedback. Here, agents test various Rules/Schemata over the course of multiple iterations. Through this trial and error process, involving Time/Iterations, they are able to assess their success through Feedback and retain useful patterns that increase Fitness. This is at the root of machine learning, with strategies such as genetic algorithms mimicking evolutionary trial and error in light of a given task.

competing agents are more fit as they walk faster!

John Holland describes how agents, each independently exploring suitable schema, actions, or rules, can be viewed as adopting General Darwinian processes involving Adaptive processes to carry out 'search' algorithms. In order for this search to proceed in a viable manner, agents need to possess what {{Ross-Ashby}} dubs Requisite Variety: sufficient heterogeneity to test multiple scenarios or rule enactment strategies. Without this variety, little can occur. It follows that we should always examine the range of capacities agents have to respond to their context, and determine if that capacity is sufficient to deal with the flows and forces they are likely to encounter.

Further, we can speed up the discovery of 'fit' strategies if we have one of two things: more agents testing (parallel populations of agents) or more sequential iterations of tests. Finally, we benefit if improvements achieved by one agent can propagate (be reproduced) within the broader population of agents.
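As a rough illustration of this kind of search, the sketch below implements a toy genetic algorithm in Python. The 'task' (matching an arbitrary target bit-string) and every parameter are invented for demonstration; the point is only to show variation, selection and retention operating together over parallel agents and sequential iterations.

```python
import random

# Toy genetic algorithm: a population of bit-string 'agents' searches for an
# arbitrary target pattern. Variation (mutation), selection (keep the fitter
# half) and retention (copy survivors forward) drive the search.

random.seed(2)
TARGET = [1] * 20                        # the 'fit' rule to be discovered
POP, GENS = 30, 60

def fitness(agent):
    return sum(a == t for a, t in zip(agent, TARGET))

def mutate(agent, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in agent]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]

for gen in range(GENS):
    population.sort(key=fitness, reverse=True)       # selection
    survivors = population[: POP // 2]                # retention of fit schemata
    offspring = [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]
    population = survivors + offspring                # variation re-enters the pool

print("best fitness after", GENS, "generations:", fitness(population[0]))
```

Increasing the population size or the number of generations - the two levers mentioned above - both speed up the discovery of fit strategies, as does copying (propagating) successful agents more often.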


 

Adaptive Capacity

Complex systems adjust behaviors in response to inputs. This allows them to achieve a better 'fit' within their context.

We are all familiar with the concept of adaptation as it relates to evolution, with Darwin outlining how species diversity is made possible by mutations that enhance a species' capacity to survive and thereby reproduce. Over time, mutations that are well-adapted to a given context will survive, and ill-adapted ones will perish. Through this simple process - repeated in parallel over multiple generations - species are generated that are supremely tuned to their environmental context. While originating in biological realms, a more 'general' Darwinism looks to processes outside this context to examine how similar mechanisms may be at play in a broad range of systems. Accordingly, ANY system - biological or not - that has the capacity for Variation, Selection, and Retention (VSR), is able to adapt and become more 'fit'.


Eye on the target - Identifying what is being adapted for:

All complex systems involve channeling flows in the most efficient way possible - achieving the maximum gain for the minimum expenditure - and 'discovering' this efficiency can be thought of as achieving a 'fit' behavior. When looking at a system's adaptive behavior, one therefore needs to first understand how fit regimes are operationalized, by considering:

  1. What constitutes a 'fit' outcome;
  2. How the system registers behaviors that move closer to this outcome (see Feedback and Stigmergy);
  3. The capacity of agents in the system to adjust their behaviors so as to better align with strategies moving closer to the 'fit' goal.

It is this third point, pertaining to the 'adaptive capacity' of agents, that we wish to examine in more depth.

Variation, Selection, Retention (VSR):

If we consider the example of ant trail formation, behaviors that lead to the discovery of food would be those that ants wish to select for as more 'fit'. Using the lens of Variation, Selection and Retention, the system unfolds as follows:

  1. A collection of agents (ants) seeks food (an environmental differential) following random trajectories (Variation).
  2. Ants that randomly stumble upon food leave a pheromone signal in the vicinity. This pheromone signal indicates to other ants that certain trajectories within their random search are more viable than others (Selection).
  3. Ants adjust their random trajectories according to the pheromone traces, reinforcing successful food pathways and broadcasting these to surrounding members of the colony (Retention).

What emerges from this adaptive process is an ant trail: a self-organizing phenomenon that has been steered by the adaptive dynamics of the system seeking to minimize the global system energy expended in finding food. What is important to note is that the adaptation occurs at the level of the entire group, or system. The colony as a whole coordinates its behavior to achieve overall fitness, with food availability (the source of fitness) being the differential input that drives the system. The ants help steer one another and, overall, the behavior of the colony is adaptive. Individual ants might still veer off track and deplete energy looking for food, but this is actually helpful in the long run - as it remains a useful strategy in cases where existing food sources become depleted. Transfer of information about successful strategies is critical to ensuring that more effective variants of behavior propagate throughout the colony.
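A minimal sketch of these VSR dynamics might look like the following (a simplified two-path setup, loosely in the spirit of classic 'double bridge' ant experiments; the deposit and evaporation values are arbitrary):

```python
import random

# Toy ant-trail model: two paths to food of different length. Ants choose a
# path in proportion to its pheromone level (selection), faster round trips
# deposit pheromone more heavily, and the reinforced trail persists
# (retention) while evaporation keeps it revisable.

random.seed(3)
pheromone = {"short": 1.0, "long": 1.0}     # initially both paths equally likely
LENGTH = {"short": 1, "long": 3}            # the long path takes 3x as long
EVAPORATION = 0.02

for tick in range(2000):
    total = sum(pheromone.values())
    path = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[path] += 1.0 / LENGTH[path]   # shorter trips reinforce more per tick
    for p in pheromone:
        pheromone[p] *= (1 - EVAPORATION)   # old information slowly decays

print({p: round(v, 2) for p, v in pheromone.items()})   # the short path dominates
```

Evaporation matters here: it is what allows the colony to abandon a trail if the food source dries up, echoing the point above about stray ants remaining useful in the long run.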

None of this is meant to imply that, if the ants follow this protocol, they will find the most abundant food source available. Complexity does not necessarily result in perfect emergent outcomes. What it does result in is outcomes that are 'satisficing' and that allocate system resources as effectively as possible within the constraints of limited knowledge. Further, the system can change over time, meaning that other, more optimal performance regimes may be discovered as time unfolds.

What is also noteworthy about this example is that it employs Darwinian processes of variation, selection and retention, but not by means of genetic mutation. Instead, the ants themselves, each with their own strategy, are operating as ongoing mutations of behavior, in terms of their individual random search trajectories. Unlike in natural selection, agents in the system are able to broadcast successful strategies: not through a reproduction of their genes, but through an environmental signal that solicits a reproduction of their actions.

Capacity to Change:

An agent's ability to vary its behavior, select for behaviors that bring it closer to a goal, and then retain (or reproduce) these behaviors is what makes agents in a complex system 'adaptive'. If agents do not possess the capacity to change their outputs in response to environmental inputs, then no adaptive processes can occur.

While this might at first seem self-evident, this basic concept can often be overlooked. In particular, it is easy to think about a system composed of diverse components as being 'complex' without considering whether or not the elements within the system have some inherent ability to adjust in relation to this complex context.

Example:

Consider an airplane. It is a system comprised of a host of components, and together these components interact in ways that make flight possible. That said, each component is not imbued with the inherent ability to adjust its behavior in response to shifting environmental inputs. The range of behaviors available to the plane's components is fixed according to pre-determined design specifications. The machine components are not intended to learn how to fly better (adjusting how they operate) in response to feedback they receive over the course of every flight.

If we try to understand an airplane as a complex system, and identify 'flying better' (using less energy to go further) as our measure of fitness, then would it be meaningful to speak about the system adapting? If the agents in the plane's system are the individual components, are they capable of variation, selection, and retention? Even if we were to model system behavior from the top down, to test design variations in components, the system itself would not be 'self-organizing': without external tinkering nothing would happen.

'Seeking' fitness without volition:

Does it follow that inanimate objects are incapable of self-organization without top-down control? From the example of the airplane we might be tempted to conclude so, but in reality it depends on the nature of the system.

It is reasonably easy to understand adaptation within a system where the agents possess some form of volition. What is intriguing is that many complex systems move towards fit regimes regardless of whether or not the agents of the system have any sort of 'agency' or awareness regarding what they do or do not do.

Example: Coordination of Metronomes:

In the video below, we see a group of metronomes gradually coordinating their behaviors so as to synchronize to a regular rhythm and direction of motion. While this is an emergent outcome, it is initially unclear how one might see this as a kind of 'adaptation'. But if we look to the principles of VSR, we see how this occurs. First we observe a series of agents (metronomes) displaying a high degree of variety in how they beat (in relation to one another). The system has a shared environmental context (the plank upon which the metronomes sit), which acts as a subtle means of signal transfer between the metronomes. The plank moves parallel to the direction of metronome motion, creating resistance or 'drag' in relation to the oscillation of the metronomes on its surface. Some metronome movements encounter more resistance in relation to this environment (the sliding plank), while others encounter less (a more efficient use of energy). These differentials cause each metronome to encounter drag, leading to ever so slight alterations in rhythm. Over time, these alterations lead all metronomes to move into sync.

Watch the metronomes go into sync!

Considered as VSR, we observe the following:

  1. There is a Variation in the metronome movements, with certain oscillatory trajectories encountering more friction and resistance than others;
  2. The physics of these resistance forces creates a Selection mechanism, whereby each metronome alters its oscillatory patterns in response to how much resistance it encounters.
  3. As more metronomes enter into coordinated oscillating regimes, this in turn generates more resistance force being exerted on any outliers, gradually pushing them into sync. Once tuned to this synchronized behavior,  the system as a whole optimizes its energy expenditure, and the behavior persists (Retention).
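One way to get a feel for these dynamics is with a toy model of coupled oscillators - a Kuramoto-style sketch rather than a physical simulation of metronomes on a plank. The coupling term stands in for the drag forces described above, and all values are arbitrary.

```python
import math
import random

# Kuramoto-style sketch: oscillators with slightly different natural
# frequencies are each nudged toward the group's mean phase (the 'plank').
# With sufficient coupling the phases lock and the group synchronizes.

random.seed(4)
N, K, DT = 10, 1.5, 0.01
freqs = [2 * math.pi * (1.0 + random.uniform(-0.05, 0.05)) for _ in range(N)]  # variation
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(ph):
    """r near 1 means synchrony; r near 0 means incoherence."""
    x = sum(math.cos(p) for p in ph) / len(ph)
    y = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(x, y)

for step in range(20000):
    mean_phase = math.atan2(sum(math.sin(p) for p in phases),
                            sum(math.cos(p) for p in phases))
    # each oscillator is pulled toward the mean phase (the shared 'drag')
    phases = [p + DT * (w + K * math.sin(mean_phase - p))
              for p, w in zip(phases, freqs)]

print("synchrony r =", round(order_parameter(phases), 3))   # close to 1 when locked
```

Setting the coupling K to zero leaves the oscillators drifting independently - the equivalent of metronomes sitting on a rigid table that transmits no signal between them.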

Keep it to a minimum:

The system adapts to the point where overall resistance to motion is minimized. The metronomes 'achieve' the most for the least effort: a kind of fitness within their context.

While the form of 'minimization' varies, all complex systems involve seeking out behaviors that conserve energy - where the system, as a whole,  processes the flows it is encountering using the least possible redundant energy. While this cannot always be perfectly achieved, it is this minimizing trajectory that helps steer the system dynamics.

Agent Options:

What is perhaps surprising in this example is the lack of volition on the part of the metronomes. They are not trying to get together as part of a harmonious consensus in a metronome universe of peace and unity. They are simply subject to a shared environment, where the behavior of any given metronome in the system has an impact on the behavior of all others. This is an interesting characteristic of all complex systems - they are in fact a system, where agents cannot operate in isolation. What is equally important is the fact that agents in the system have a behavior that can, in some way, be altered: a metronome moves, and this movement has the capacity to be altered if affected by an external input (in this case friction and drag forces). We could imagine metronomes of a different design, where movement is timed precisely to a clock and where, once set, nothing can change how the metronome behaves. So for a complex system we need to have agents that have a certain degree of adaptive capacity - something about them that can change, or respond to an environment. The change might be very subtle, but it is important to identify what kind of adaptive capacity each complex system contains, in order to be able to better understand and steer its behavior.


 


 

Fields Galore!

This is a nice home page for this section, not sure what goes here.

11:11 - Urban Modeling
Related

217, 213, 56, 88, 72, 
26, 23, 24, 22, 

16:16 - Urban Informalities
Related

213, 66, 56, 88, 
23, 24, 22, 21, 

28:28 - Urban Datascapes
Related

218, 66, 73, 59, 72, 
24, 25, 22, 

17:17 - Tactical Urbanism
Related

218, 
25, 21, 

14:14 - Resilient Urbanism
Related

218, 59, 
26, 23, 22, 

19:19 - Relational Geography
Related

218, 93, 84, 75, 
26, 25, 

10:10 - Parametric Urbanism
Related

213, 75, 78, 
25, 22, 21, 

15:15 - Landscape Urbanism
Related

93, 56, 88, 
26, 25, 21, 

13:13 - Incremental Urbanism
Related

56, 59, 88, 
24, 21, 

12:12 - Evolutionary Geography
Related

218, 93, 73, 59, 88, 72, 
26, 24, 25, 22, 21, 

18:18 - Communicative Planning
Related

75, 73, 
24, 25, 22, 

20:20 - Assemblage Geography
Related

93, 84, 
26, 24, 25, 

 

Urban Modeling

Cellular Automata & Agent-Based Models offer city simulations whose behaviors we learn from. What are the strengths & weaknesses of this mode of engaging urban complexity?

Governing Features ↑

There is a large body of research that employs computational techniques - in particular agent-based modeling (ABM) and cellular automata (CA) - to understand complex urban dynamics. This strategy looks at how rule-based systems yield emergent structures.


Creating computer models is one of the most common ways to integrate complexity ideas into many fields - so much so that this methodological approach is often confused with the domain of knowledge itself. This is largely the case in urban discourses, where the construction of simulation models - either agent-based or cellular automata - is perhaps the most frequently employed strategy to try to grapple with complexity (though other communicative and relational approaches in planning have recently been gaining increased traction). It is therefore important to understand how these models work, and what aspects of complexity they highlight.

Cellular Automata

Early investigations into the dynamics underlying complex systems came via early computational models, which illustrated how simple program rules could produce unexpectedly rich (or complex) results. John Conway's Game of Life (from 1970) was amongst the first of these models, composed of computer cells on a two-dimensional lattice that could either be in an 'on' or 'off' mode. An initial random state launches the model, after which each cell updates its status depending on the state of directly neighboring cells (the model is described in detail under Bottom-up Agents). Conway was able to demonstrate that, despite the simplicity of the model rules, unexpected explosions of pattern and emergent orders were produced as the model proceeded through ongoing iterations.

At around the same time, the economist Thomas Schelling developed his segregation model, using a cellular lattice to explore the amount of bias it would require for "neighborhoods" of cells to become increasingly segregated. Cities in the US, in particular, had been experiencing physical segregation by race, with the assumption being that such spatial divisions were the result of strong biases amongst residents. With his model, Schelling demonstrated that, in effect, total segregation could occur even when agent 'rules' were only slightly biased towards maintaining neighborhood homogeneity. While the model does not explain why spatial segregation occurs in real-world settings, it does shed light on the idea that strong racial preferences are not, by necessity, the only reason why spatial partitioning may occur.
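A minimal Schelling-style sketch, with invented parameters, looks something like this; even a mild preference threshold (here 30%) tends to produce sharply clustered neighborhoods after a few sweeps.

```python
import random

# Minimal Schelling-style segregation model: two groups ('A'/'B') plus empty
# sites on a wrapping grid. A cell is 'unhappy' if fewer than THRESHOLD of
# its occupied neighbours share its group; unhappy cells move to random
# empty sites.

random.seed(5)
N, THRESHOLD = 30, 0.3
cells = ['A'] * 400 + ['B'] * 400 + [None] * (N * N - 800)
random.shuffle(cells)
grid = [cells[i * N:(i + 1) * N] for i in range(N)]

def unhappy(r, c):
    me = grid[r][c]
    if me is None:
        return False
    same = other = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            n = grid[(r + dr) % N][(c + dc) % N]
            if n == me:
                same += 1
            elif n is not None:
                other += 1
    return (same + other) > 0 and same / (same + other) < THRESHOLD

for sweep in range(50):
    movers = [(r, c) for r in range(N) for c in range(N) if unhappy(r, c)]
    empties = [(r, c) for r in range(N) for c in range(N) if grid[r][c] is None]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], None

print("unhappy cells remaining:",
      sum(unhappy(r, c) for r in range(N) for c in range(N)))
```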

Because of the inherently spatial qualities of models like Conway's and Schelling's, both computer programmers and urban thinkers began to wonder if models might help explain the kinds of spatial and formal patterns seen in urban development. If so, then by testing different rule sets one might be able to predict how iterative, distributed (or bottom-up) decision-making ultimately affects city form.

This is a unique direction for planning, in that most urban strategies focus on generating broad, top-down master-plans, where the details are ultimately filled in at a lower level. Here, the strategy is inverted. Models place decision-making at the level of the individual cell in a lattice, and it is through interacting populations of these cells that some form of organization is achieved. Models were able to demonstrate that, depending on the nature of the interaction rules, the formal characteristics of this emergent order can differ dramatically.

Ultimately, by running multiple models, and observing what kinds of rule-sets generate particular, recurrent kinds of pattern and form, modelers are able to speculate on what policy-decisions around planning are most likely to achieve forms deemed 'desirable' (on the assumption that the models are capturing the most salient feature of the real world conditions, which is not always the easiest assumption to make!).

Agent Based Models

Cellular Automata simulations are formulated within a lattice-type framework, but clearly this has its limits. The assumption of the model is that populations of cells within the lattice have interchangeable rule sets, and that emergent features are derived from interactions amongst these identical populations. Clearly the range of players within a real-world urban context is quite variable, and populations of uniformly behaving cells do not capture this variance. Accordingly, with the growth in computing power, a new kind of "agent-based model" was able to liberate cells (or agents) from their lattices, as well as enabling programmers to provide differing rule sets for multiple, differing agents.

In such models, we might have two sets of agents (predator/prey), or agents moving in non-static environments (flocking birds/schools of fish). Simple rule sets are then tested and calibrated to see if behaviors emerge within the models that emulate real-world observations. These models then demonstrate how different populations of actors or 'agents' with differing goals and rule sets interact.

Models that are straightforward to code (NetLogo is a good example of a platform that supports both CA and agent-based approaches) showcase how different populations/agents within a model interact, producing unexpected results. Rules of interaction can be easily varied, according to a limited number of defined parameters.

That said, depending on how variables are calibrated, very different kinds of global behaviors or patterns emerge.
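The sketch below gives a flavor of this kind of model: a bare-bones, hypothetical predator/prey setup written from scratch (not a NetLogo model), in which every parameter is arbitrary. Small changes to the reproduction, energy, or starvation values produce very different global population dynamics.

```python
import random

# Bare-bones two-population ABM: prey and predators random-walk on a wrapping
# grid; a predator that lands on a prey eats it and gains energy, predators
# starve without food, and prey occasionally reproduce.

random.seed(6)
N = 25
prey = {(random.randrange(N), random.randrange(N)) for _ in range(60)}
predators = [[random.randrange(N), random.randrange(N), 8] for _ in range(15)]  # x, y, energy

def walk(x, y):
    dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    return (x + dx) % N, (y + dy) % N

for tick in range(200):
    new_prey = set()
    for (x, y) in prey:                       # prey move and sometimes reproduce
        new_prey.add(walk(x, y))
        if random.random() < 0.05:
            new_prey.add((x, y))              # offspring stays behind
    prey = new_prey

    next_predators = []
    for x, y, e in predators:                 # predators move, eat, starve, breed
        x, y = walk(x, y)
        if (x, y) in prey:
            prey.discard((x, y))
            e += 5
        e -= 1
        if e > 0:
            next_predators.append([x, y, e])
            if e > 12 and random.random() < 0.1:
                next_predators.append([x, y, e // 2])
    predators = next_predators

print("prey:", len(prey), "predators:", len(predators))
```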

Urban Applications:

All of this is of great interest to urban computational geographers, who attempt to employ computer models as stand-ins for real world situations. From an urban standpoint, an agent might be a resident, a business owner, a shop-keeper, etc. Depending on the rules for growth, purchase pricing, development restrictions, or formal (physical) attributes, these agents can be programmed to interact upon an urban field, with multiple urban simulations (that use the same rule sets), serving to probe the 'field of possibilities' to see if any regularities emerge across different scenarios or iterations. If such patterns are observed, then the rules can be altered - in an attempt to derive which rule characteristics are the most salient in terms of generating either favorable or unfavorable spatial conditions (again, with the proviso that the interpretation of 'favorability' might well be contested).

Such models, for example, might attempt to show the impact of a new roadway on traffic patterns, with various rules around time, destination, starting position, etc. By running various tests of road locations, a modeler might attempt to determine the 'best' location for a new road - with the 'fitness' of this selection tying in to pre-determined policy parameters, such as land costs associated with location, reduction of congestion/travel times, or other factors. The promise of these models is very powerful: to simulate real-world conditions within a computer and then build multiple test 'worlds' prior to real-life implementation. This allows modelers to reduce the policy risks of unknown consequences, by teasing these out in simulations first.


Inherent Risks

That said, in practice there is always the concern of what models do not include: are the assumptions of the model in fact in alignment with the real world? To alleviate this concern, modelers attempt to calibrate their models to real-world conditions by using data sets wherever possible, but they remain limited by which data types are available to them. Furthermore, the fact that a given data set is available for use/calibration purposes does not necessarily mean that the features the data captures are in fact related to the most salient indicators or features of the real-world system.

Models can often be seen as 'objective' or 'scientific' since, once the code has been written, they provide reliable, quantitative results. But internal consistency does not mean that the model is consistent with the real-world conditions being modeled. The model is still subject to the biases of the coding, the decisions of the modeler, and ideas around what to include and what to disregard as unimportant.

In an effort to include more and more potential factors (and again, with rising computer power) agent-based models have become increasingly sophisticated, integrating additional real-world conditions.  However, as the models grow to contain more and more conditions, actors, and rules, their relationship to complex adaptive systems perspectives has become increasingly tenuous. Scientists originally interested in the dynamics of complex systems were struck by the fact that simple systems with simple rules could generate complex orders. It should not, however, be surprising that complex models, with increasingly complex rule sets can generate complex orders, but the effort going into the creation of such models, their calibration, and their interpretation (in terms of how they guide policy), seems to have moved increasingly far away from the underpinnings of their inspiration.

What seems to have been preserved from complexity theory - rather than the simplicity of complex systems dynamics - are three ideas. The first is that of "bottom-up" rather than top-down logic, whereby the order of the system emerges without need for top-down control. The second is the idea of "emergence": that interacting agents within the model can generate novel global patterns or behaviors that have not been explicitly programmed into the system. Finally, at the individual agent level, the rules can still retain a certain simplicity.

While many individual researchers and research clusters investigate urban form through modeling, it is worth making special note of CASA - The Center for Advanced Spatial Analysis at the Bartlett in London, a group led by Professor Mike Batty.

Model Attributes: Fractals and Power Laws

Of interest to Urban Modelers is not just the emergent patterns found in simulations, but also the ways in which these patterns correspond to features associated with complex systems. For example, many models display {{fractals-1}} qualities. The illustration below (taken from an article by Mike Batty) shows variants of how CA rules generate settlement decisions, with fractal patterns emerging in each case. Different initial conditions/constraints yield different kinds of fractal behavior (except in starting condition B).

Example of Emergent Fractal spatial characteristics, 'A digital breeder for designing cities' (2009)

Similarly, models often exhibit {{power-laws}} in their emergent characteristics  - whether this be factors such as population distributions of cities in a model, or distributions of various physical attributes within a given city. For example, an analysis of internal city road networks might reveal that road use frequency in a given city follows a power-law distribution; another analysis might reveal that cities within a given country can be ordered by size, and that populations between cities follow a power law distribution (this characteristic seems to hold for cities that together form part of a relatively interdependent network - for example the grouping of all cities in the USA, or France, but not groupings of all cities in the world, suggesting that these are not part of the same system).

Example of power-law distribution of city populations in Nigeria, which closely follow Zipf's law: Image from the Scientific Reports article "There is More than a Power Law in Zipf" by Cristelli, Batty and Pietronero (2012)

Many academic papers from the urban modeling world stress these attributes, which are not planned for and which are often characterized as being the 'fingerprint of complexity'.
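As a rough illustration of what checking for such a 'fingerprint' involves, the sketch below fits a rank-size (Zipf) slope to synthetic city populations. In practice one would feed in real census data or model output rather than the invented numbers used here.

```python
import math
import random

# Illustrative rank-size (Zipf) check on synthetic 'city populations'.
# Under Zipf's law the k-th largest city has a population roughly
# proportional to 1/k, so log(population) vs log(rank) has slope near -1.

random.seed(7)
cities = sorted((random.paretovariate(1.0) * 10_000 for _ in range(200)), reverse=True)

logs = [(math.log(rank), math.log(pop)) for rank, pop in enumerate(cities, start=1)]

# least-squares slope of log(population) against log(rank)
n = len(logs)
mx = sum(x for x, _ in logs) / n
my = sum(y for _, y in logs) / n
slope = sum((x - mx) * (y - my) for x, y in logs) / sum((x - mx) ** 2 for x, _ in logs)

print("rank-size slope:", round(slope, 2), "(Zipf's law predicts roughly -1)")
```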


Model Dynamics: Tipping Points & Contingency

Alongside these observed attributes of models - power laws and fractals - modelers are also interested in how models unfold over time. One of the interesting aspects of models is that, while the overall characteristics of emergent features might be similar across different models, the specificity of these characteristics will vary.

For example, a model might wish to consider the locational choices of individuals within a region, including populations of agents in such categories as 'job opportunities', 'job seekers', and 'land rent rates'. In such a scenario, what begins as a neutral lattice of agent populations will ultimately begin to partition and differentiate, with varying areas of population intensity (cities, towns) emerging. The size of these various population centers might then follow a power law. If we repeat the simulation with the same rules in place, we would expect to see similar power-law population patterns emerge, but the specificity of exactly where these centers are located is contingent - varying from simulation to simulation.

This raises the question of the true nature of cities and population dynamics: for example, the fact that Chicago is a larger urban hub than St. Louis might be taken as a given - the result of some 'natural' advantage that St. Louis does not have. But model simulations might suggest otherwise - that the emergence of Chicago as the major mid-west hub is a contingent, emergent phenomenon: and that history could have played out differently.

Models therefore allow geographers to understand alternative histories, and consider how what might seem like a 'natural' outcome, seen as part of a clear causal chain, is in fact a much more tenuous and contingent phenomenon. Had the rules played out just a little differently, from a slightly different starting point, a completely different result might have ensued. Here, we are left with the realization that {{history}} matters, and that {{contingency}} plays a key role in the make-up of our lived world.

Another way this can be thought of is through the idea of Tipping Points: that whether Chicago or St. Louis became the major urban center was the result of a small variable that pushed the system towards one regime, whereas another, completely different regime was equally viable.

Tipping Points are discussed elsewhere on this site, but it is important to state that they can be thought of in two different ways: the first is this idea of a minor fluctuation that launches a given system along one particular path versus another, due to reinforcing feedback. The second looks at how an incremental increase in the stress or input to a system, once moved beyond a certain threshold,  can push a system into an entirely new form of behavior.

This second idea becomes important in modeling the amount of stress or inputs a given urban system can tolerate (or absorb) before one behavioral regime shifts to another. For example incrementally rising fuel prices might reach a point where people opt to take public transit. Or a certain critical mass of successful business ventures might eventually result in a new neighborhood hub, at which point rents increase substantially. What is interesting about these points is that the shift is often abrupt, as people recalibrate their expectations and behaviors around a new set of parameters that have exceeded a particular threshold. Models can display these abrupt shifts, or Phase Transitions, where certain patterns disappear only to be replaced by others.

A sketch outlining some of the ideas and individuals associated with urban modeling



Back to {{urbanism}}

Back to {{complexity}}


 

Urban Informalities

Many cities around the world self-build without top-down control. What do these processes have in common with complexity?

Governing Features ↑

Cities around the world are growing without the capacity for top-down control. Informal urbanism is an example of bottom-up processes that shape the city. Can these processes be harnessed in ways that make them more effective and productive?


Self-Built Settlements

Across the globe there are many areas where urban planning plays only the most minimal of roles. Instead, people themselves are responsible for creating their own homes, and the aggregate actions of these individuals result in what are known as 'informal settlements' or 'urban informalities'. These are in contrast to the 'planned' areas of housing and neighborhoods in cities that are controlled from the top down. For a long time, such settlements were overlooked or pushed to the sidelines, considered to be chaotic and disorderly. They were characterized as 'slums' in need of clean up or retrofitting.

Only over time have planners begun to recognize that such informalities may offer valuable lessons: that their bottom-up organization results in unexpected order, and that robust patterns emerge despite the seeming lack of coordination between individuals in these settlements. Urban thinkers interested in complexity have begun to look at these settlements for signs of order, efficiency, and resilience, and to try to understand how coordinated patterns emerge over time, in iterative modifications.

As part of this, thinkers have looked to older settlement patterns that yielded emergent order: settlements that pre-date controlled planning but are characterized by a kind of organic 'fit' between the environment and its settlers. An early contribution to this effort, a book called 'Architecture without Architects' (1972) by Bernard Rudofsky, did not reference complexity explicitly,  but did note how harmonious patterns emerge within such settlements despite the fact that there is no central control.

This area of research can therefore be divided into two parts: urban thinkers who aim to learn from traditional settlements, built slowly and incrementally over generations, that achieve harmonious, coherent features; and those interested in how much faster-paced settlements - built in the face of population shifts that have drawn people en masse into cities - nonetheless display emergent structure.

Finally, a number of researchers have attempted to draw from both these areas to see how new planning policies might apply 'lessons learnt' from these examples of bottom-up settlements, in order to infuse more vitality - but also autonomy - into new developments.

Rule-Based Settlements

Today, urban development is typically regulated by various planning rules and codes, which set limits and constraints around what can and cannot happen: areas of limited function (zoning), limitations on built form (building set-backs, height restrictions, etc.), mandatory ancillary requirements (parking spaces per dwelling unit), and much more.

One key characteristic of these constraints and limits is that they are determined by planners and then 'set' for a particular area or building type. Rules are imposed from the planner's office and do not vary to accommodate emerging conditions on the ground.

By contrast, much older rules came in another form: relational rules - codes of building behavior that were much more context-dependent. Effectively, what could be built hinged, somewhat, on what had already been built around you. This local, unfolding history steered what was built - what the 'next step' was in terms of urban growth. Each construction, in turn, placed constraints on what could happen next.

If this sounds familiar it should, as it echoes, in many ways, the manner in which cellular automata models unfold over time. There is a rule set, but it is a rule set that is deployed in a relational context. Unlike in master zoning plans, there are no 'rules' stating that, if a cell is located in a specific position on the lattice, it needs to observe certain behaviors associated with that square. Instead, cell behaviors are constrained only by the emerging neighboring context, which is never set or pre-determined.

Example: if we look at this image of a Greek village, we can note that the street character is unified and holistic, despite the fact that there are many individual properties. In his book "Mediterranean Urbanism", {{Besim-Hakim}} discusses this unity in terms of a series of urban 'rules' that constrain what neighbors can and cannot do (or their {{degrees-of-freedom}}).

What is noteworthy in this study is that, unlike in contemporary planning, the nature of these rules is contextual. A rule might pertain to where a door or window can be placed, but only insofar as this has an impact on doors and windows pre-existing in the neighboring context. In this way, building specificity proceeds iteratively. These locally codified {{rule-based}} constraints are then supplemented with tacit rules around the means of construction. By using local building methods and materials, ones proven successful over countless generations, each individual builder constrains their material and construction choices in accordance with local practices. For most of human history there was no need to make such rules explicit, as construction technologies were quite regional. As a result, construction practices can be said to have been tested over time, and thereby 'evolved' to produce a coherent fit within their context.
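To see how relational constraints of this kind differ from fixed zoning, here is a toy sketch. The specific rule - each new building's height staying within one storey of its already-built neighbors - is invented for illustration and is not drawn from Hakim's codes; the point is only that local, contextual constraints can produce streetscapes that vary yet cohere.

```python
import random

# Toy 'relational rule' streetscape: plots along a street are built one at a
# time in random order, and each new building's height must stay within one
# storey of the mean of its already-built neighbours. No overall streetscape
# is planned, yet heights drift smoothly rather than jumping arbitrarily.

random.seed(8)
PLOTS = 30
heights = [None] * PLOTS

for plot in random.sample(range(PLOTS), PLOTS):          # random building order
    built = [heights[n] for n in (plot - 1, plot + 1)
             if 0 <= n < PLOTS and heights[n] is not None]
    if built:
        anchor = sum(built) / len(built)
        choice = round(anchor) + random.choice([-1, 0, 1])   # relational constraint
    else:
        choice = random.randint(2, 6)                        # unconstrained first-comers
    heights[plot] = max(1, choice)

print(heights)   # locally varied, but no building towers over its neighbours
```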

In a similar vein, Mustafa {{ben-hamouche}} analyzes the emergence of Muslim cities. He states that urban structure is the result of a number of tacit rules that, while not necessarily codified, provided a general normative understanding around the ethos of construction. In addition to the kinds of relational rules explored by Hakim, Hamouche points to how the nature of inheritance practices served to divide building sites. Alongside this, Islamic law gave those holding neighboring properties a kind of 'right of first refusal' should adjacent property become available. This resulted in an ongoing process of both disaggregation (inheritance divisions) and aggregation (adjacent property fusions). Iterated over each passing generation, these dynamics resulted in certain global morphological characteristics that seem to exhibit {{fractals-1}} in their structure.

The resulting geometries are complex, particularly since subdivided properties needed to maintain functionality - with the need for additional arrays of lanes and access points. Finally, due to the limits on space, adjacent owners often became intertwined in various kinds of complex property infringement agreements - for example, one owner offering the other access to a rooftop, with the other reciprocating with access through their garden to the first's entry. In this way, singular properties became intertwined in a variety of ways, resulting in a more organic, holistic spatial organization.

Here, the city gains structure from the bottom-up actions of individuals, taking specific iterative steps that give form to their dwellings - all with reference to how these steps ultimately impact their neighbors. These localized, incremental actions are therefore not entirely independent, but rather locally constrained in such a way that, over time, a collective, coherent urban form could emerge. These cities gain long-term adaptive fitness due to iterative adjustments made over time, allowing them to take on a complex natural order responding to the needs of their inhabitants.

Informal Settlements

In addition to these traditional settlements, today we can point to innumerable regions characterized by unplanned, informal settlements. The trend of rural-to-urban migration has long since passed the threshold where more people live in cities than in the countryside, and housing cannot keep pace with this trend. Accordingly, people are forced to build their own houses in an effort to settle in areas where they can gain access to employment opportunities. These settlements are seen as problematic due to a host of issues, including lack of sanitation, safety concerns, infrastructural and transport issues, etc.

That said, there are many ways in which we can, nonetheless, learn from informalities.  While the characteristics of urban informalities vary, many of them have been quite successful in achieving vibrant, livable communities. Furthermore, these settlements are often the source of a great deal of civic creativity and ingenuity. While there is always the risk of romanticizing these locales, for those interested in bottom-up self-organization, they would seem to offer a prime case-study for how effective solutions can be achieved without need for top-down control.

The character of these settlements changes incrementally in two key ways:  morphologically and materially. Initially, a dwelling will be built using the bare minimum size and construction required in order to satisfy the need for shelter from the elements. Construction is speedy and may rely on assistance from other family or community members. Once a given zone of habitation has been carved out, two modifications will tend to occur: the material quality will be improved/replaced as resources become available, and/or extensions may be added. Living spaces may also be extended to incorporate outdoor surroundings, which may include the appropriation of air space (balconies) or rooftops. Over time, as primary needs of housing are met, an informal settlement will begin to see other forms of basic functions crop up: including shops, repair, or other service infrastructures. 

The quality of informal settlements is often contingent upon whether or not occupants feel secure in their land tenure. In Turkey, for example, where land tenure is relatively secure for those who have settled informally (due to particular aspects of Ottoman Law), the processes described above (incremental expansion, alongside of material replacement, gradual functional support services), mean that many environments that appear to have been planned parts of the city are in fact examples of robust, evolved, informalities.

In addition to the physical characteristics of these matured informalities, they also often develop their own internal social and governance structures, which help ensure safety, resolve disputes, and relay knowledge. Within a settlement, networks of individuals develop who assist others in navigating through uncertain situations, with knowledge and experience relayed throughout the group. Thus, in addition to the hard, material infrastructure of the physical settlement itself, there are less tangible, but equally important, {{network-topology}} of community that develop. When these settlements are intervened upon by outside actors - 'cut down' or razed to the ground in order to make way for more progressive, controlled, and top-down housing developments - this accretion of knowledge and organization is lost. Areas that are developing towards these self-organized structures are stripped of the opportunity to go through the processes of incremental succession that can lead to quite successful communities.

Informalities of this nature are studied by many researchers, including Hesam Kamalipour, {{Kim-Dovey}}, and {{Juval-Portugali}}. Each draws links between informalities and the dynamics of complex adaptive systems.


Learning From Informalities: Urban Experiments in Self-Organization 

Much of the research on informalities centers around efforts to better understand and steward their functioning (rather than simply destroying and replacing them). That said, planners working within more normative development contexts have begun to ask if it is possible to apply this kind of rule-based, incremental, and context-dependent approach to planning in European or North American contexts.

There is perhaps no better example of this than the case of Almere Oosterwold, a project designed by the architecture and urban design group MVRDV in the Netherlands. The project employs a series of conditional rules that allow individuals to purchase plots, and then constrains how these plots are developed by reference to a number of rules that must be preserved for the development as a whole. At the same time, certain characteristics of each plot development hinge on the site conditions of surrounding neighboring plots, reducing the {{degrees-of-freedom}} available for subsequent development.

Individuals are responsible for the provision of a number of personal and site infrastructures, and are otherwise left to their own devices in terms of determining how, precisely, to go about developing their own site. The project is an interesting example of bottom-up self-organization in planning that incorporates both rule-based thinking and bottom-up agents. Furthermore, the project has no pre-determined end-vision. Instead, depending on the nature of the non-linear process of land acquisition and development, a whole range of outcomes may be possible. Rather than being prescribed in advance by a vision or master-plan, the intent is for the settlement pattern to be one characterized by {{emergence}} over time.

Back to {{urbanism}}

Back to {{complexity}}



 

Urban Datascapes

Increasingly, data is guiding how cities are built and managed. 'Datascapes' are derived from our actions but can also steer them. How do humans and data interact in complex ways?

Governing Features ↑

More and more, the proliferation of data is leading to new opportunities in how we inhabit space. How might a data-steered environment operate as a complex system?


In the long history of urbanization, infrastructural elements have been critical in defining the nature of settlement. Be it the river-routes that formed trade channels constraining settlements, the rail-lines defining where frontier towns would be situated, or the freeways marking a shift from urbanization to sub-urbanization, different infrastructural regimes have played a key role in determining where and how we live. Further infrastructural layers made new modes of life possible: the power-grid shifted daily rhythms so as to extend the workday into the night hours; telecommunication lines enabled physically distant transactions to occur with ease; highway and sewage infrastructures helped spur massive suburban expansion. These infrastructures - carrying people, goods, and ultimately ideas - have formed the skeletal framework upon which lifestyles and livelihoods are anchored.

As we move into an age increasingly mediated by digital infrastructures and the flows they channel, we ask the question:  what kind of worlds will these new regimes make possible, and how will these be steered to ensure ‘fit’ urban practices? What does ‘fit’ even mean within this context? Whether through driverless cars, the internet of things, or digitally enabled access economies, cities are poised to afford new kinds of behaviors and lifestyle options.

From Bell Curves to Power Laws

To date, individuals have been expected to live their civic lives in ways that cater largely to the average needs of the population, rather than to particular, exceptional requirements. Cities meet standards. This, despite the fact that needs differ, and may differ both across individuals and for the same individual across time. Nonetheless, we tend to restrict our urban systems to supporting a narrow range of options that remain relatively fixed. Historically, this has made sense, because individuated needs that shift or differ from norms are too variable and have, until now, been difficult if not impossible to track and accommodate.

While norms remain important (and, if assumed to be governed by a power-law distribution, would align with the small number of urban offerings (20%) that meet the greatest proportion of urban needs (80%)), this leaves a long tail of more particular and finely tuned needs unharnessed.

Chris Anderson (2004) describes this full breadth of differential offerings - the less impactful 80% - as 'the long tail': the huge scope of ongoing (but small) demand that is not part of the "fat head" of the power-law distribution. Anderson argues that highly tuned niche offerings in this long tail are viable but, until now, have not been fully tapped due to the difficulties in pinpointing where and when they exist.

Today, new information technologies are changing all this, providing detailed access to the long tail of highly tuned offerings that may appeal only to the very few or for a very brief time, but would nonetheless be viable if there were a way to match needs to offerings. Anderson writes that, ‘many of our assumptions about popular taste are actually artifacts of poor supply-and-demand matching — a market response to inefficient distribution’.  Mass supply of standard urban environments or infrastructures may appeal to the norm but, in the end, no one is actually getting precisely what they want, when they want it. Instead, they are getting what the market has the capacity to supply with its coarse information availability.

Furthermore, they are getting what would seem to be viable given notions of "economy of scale". But these perspectives can shift when information coordination becomes more efficient: instead of economies of scale, we can begin to activate access economies, which enable the pooling of diverse resources that can be accessed by individuals on an as-needed basis. Economies of Scale suggest Mass Transit Systems; Access Economies suggest Uber. One is finely tuned to individual needs, the other is not.

Fine Tuning: An Example

Consider the rise of Airbnb. Big hotel chains are based on a model that offers accommodations appealing to the widest possible demographics within a certain price point. Accordingly, when making comparisons within a given price category, rooms offered by large chains appear generic and interchangeable. Airbnb changed this (and dramatically altered the accommodation industry) by providing a platform able to match highly specified needs with highly specified offerings. If I am looking for a vegan and pet-friendly one-bedroom apartment with a bicycle in the 16th arrondissement in Paris, I am now able to identify this niche with surprising speed and accuracy. The capacity of Airbnb to offer highly specific information, tuned to individual preferences, that is also deemed reliable (because of reviews), allows individuals to stay in accommodation tailored to their personal requirements rather than generic ones.

Airbnb's success is based, in part, on how it is able to illuminate this broad array of atypical and variable niches – the long tail. This long tail shifts. Accordingly, when I travel I may wish to stay in the normative Holiday Inn 50% of the time, a quaint bed and breakfast 49% of the time, and a vegan glamping yurt only 1% of the time. Until now, it has been very difficult to enact the behaviors desired only 1% of the time. But these niches, if made visible and accessible, are in fact viable.

Today's data technologies now illuminate these.

From the Standard to the Particular

Airbnb is a classic example of how information technologies are making previously invisible urban assets more tangible and accessible for people. But such technologies are also changing how we perceive the urban environments around us. If hotel locations could previously be mapped and located according to their proximity to normative assets (for example major highway interchanges, major business centers, or major entertainment facilities), then today's data of occupied Airbnb sites might reveal a host of other locational preferences - ones that are irrelevant at the macro-scale, but of interest to individuals at the micro-scale. We can imagine a new kind of mapping of these urban niches as having a more nuanced and variegated quality - one capturing and relaying multiple kinds of urban flows and revealing latent flows not previously channeled.

Consider a host of other urban assets: when do people use particular roads, or trains, or bike routes? What routes are the fastest at a given hour of the day? Or perhaps speed is not important - what routes then are the quietest? Or the prettiest?

Or, consider the new potentials of the Access Economy. Here, it becomes less important that I have constant, physical possession of an urban asset (a car for example), and more important that I have easy, on-demand, and customized access to this asset (any make of car I want in a given instant; any video I want to watch on Netflix). The Access Economy does not mean that all cars (from a car-sharing service) or all videos (from a streaming service) will be accessed in identical ways: certain cars and videos will be part of the fat head of the power law. But the long tail is now on offer as well.

If previous city planning strategies only had the power to attune to normative needs (the fastest road), today we can construct civic Datascapes tuned to individuated desires. In a sense, data allows us to increase the city's Degrees of Freedom. Thus, if a standardized bus route was, at one point, the most effective way to transport people along "common" routes from A to B, then Uber offers a way for individuals to construct their own specified routes from E to Z. We can think of this shift as being one that moves us from mass-standardization to mass-customization, all of which is discovered and made tangible through individual data: our preferences when we call an Uber, or stay in an AirBnB.  At the same time data-scapes emerge on the other side of this: pleasant bike routes that are crowd-sourced and then promoted; quirky accommodation options rise to star status; pop-up events are made visible through social media posts.

This is a different kind of city: one viewed primarily through intensities of data, that can be curated so as to be viewed and filtered according to individual needs. Accordingly, my teenage daughter's view of the city is informed and highlighted by pathways, infrastructures and gathering places all of which constitute data points that are most salient to her: my tech colleague's perspective of the city will have its own matrix of data points. Neither will ride the same bus, nor stay in the same hotel, nor gather in the same meet-up spots. The "central square" will no longer be centralized. But there will be niches of localized interests and intensities that emerge, over time.

Data-scapes:

This is what we mean when we introduce the idea of "data-scapes". The term is used here to capture a range of interests which are still in nascent form - not quite yet emerged as a clear line of urban enquiry - but which are "in the air" in various ways. Some of the Smart City discourses touch upon it, though their emphasis is more on big-data collection for optimization. Speculations around the Internet of Things relate to this area, as do investigations around the Access Economy.

What binds these research themes is a common awareness that information is now able to help steer how we experience the data-scape of the city, with material conditions being supplemented by informational conditions that alter the ways in which we engage with the material world. Apps on cell phones become the tools we use to navigate these scapes, with the city no longer seen primarily as a fixed pattern, but rather as something that can be activated and drawn from in unique ways.

Complexity How?

Bottom-up:

One of the ways in which these dynamics of civic activation and appropriation differ from current models is that the ways in which common needs or goods come to the forefront need no longer be driven from the top down. There are far greater opportunities for special niches to emerge from the collective actions of Bottom-up Agents, with novel and surprising features gaining prominence. In a civic data-scape, a particular club might gain prominence on social media on a particular evening - going 'viral' in the same way that a cat video might, and thereby gaining prominence in the shared Datascape of club-goers.

Contingency and Non-Linearity:

We see as well from the club example that some of the dynamics that generate points of prominence in data-scapes may in fact be caused by initial random fluctuations that gradually self-perpetuate (as is seen in systems governed by growth and Preferential Attachment). For example, in the data-scape of accommodation or restaurants, small changes in initial conditions may have a disproportionate impact on system performance: with certain sites gaining prominence in the Datascape even though they are not inherently superior to others.
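A minimal sketch of this dynamic is given below, with invented numbers: identical venues each start with a single visit, and every new visitor chooses a venue with probability proportional to the visits it has already received. Early random differences compound, and a few venues come to dominate without being inherently better.

```python
# Illustrative sketch (hypothetical parameters): preferential attachment among
# identical venues, where early random fluctuations self-perpetuate.

import random

def simulate_attention(n_venues: int = 10, n_visits: int = 10_000, seed: int = 42) -> list[int]:
    random.seed(seed)
    visits = [1] * n_venues  # identical venues, each seeded with a single visit
    for _ in range(n_visits):
        # each new visitor picks a venue with probability proportional to prior visits
        chosen = random.choices(range(n_venues), weights=visits)[0]
        visits[chosen] += 1
    return visits

print(sorted(simulate_attention(), reverse=True))
# a few venues dominate, even though none is inherently "better" than the others
```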

Driven by Flows

We often think of civic form as coming first - that we put in a road and then the road directs flows. Traffic engineers might look at a city plan and make decisions about location not because of existing flows, but instead because of existing cheap real-estate upon which to build a corridor. Datascapes flip this relationship, by first determining flows and then allowing these Driving Flows to direct civic infrastructure. The simplest example is comparing 20 Uber passengers with 20 bus passengers. The bus forces people to conform to its pre-determined course of navigation, whereas the flows of the Ubers are instead driven by passenger desires. What is of interest is that, once this relationship is flipped, we may observe new patterns of flows that are consistent and coherent, but previously invisible. This is also why the phrase 'data-scape' is invoked: what emerges in tracing the pathways of 1000 Uber rides (in contrast to 1000 bus rides) is a new kind of mapping of the city not evident before.

Thought Experiment:

For more insights into how IoT technologies might combine with complexity principles to reveal data-scapes of fit urban conditions, check out the "Urban Lemna" student project in the InDepth "Resources" tab to the right.

Sections of this text were extracted and modified from an earlier paper by S. Wohl and R. Revariah, "Fluid Urbanism: How Information Steered Architecture Might Reshape the Dynamics of Civic Dwelling," published 2018 in The Plan Journal. See also "Sensing the City: Legibility in the Context of Mediated Spatial Terrains," published in 2018 in Space and Culture.


 

Tactical Urbanism

Tactical interventions are light, quick and cheap - but if deployed using a complexity lens, could they be a generative learning tool that helps make our cities more fit?

Governing Features ↑

Tactical Urbanism is a branch of urban thinking that tries to understand the role of grassroots, bottom-up initiatives in creating meaningful urban space. While not associating itself directly with complexity theory, many of the tools it employs - particularly its way of 'learning by doing' - tie in with adaptive and emergent concepts from complexity.


Tactical Urbanism is an approach to urban intervention which removes the need for prediction: rather than attempting to forecast what might work in a given environment, tactical strategies instead simply enact various small, short-term interventions. Examples might include: putting temporary barricades up on a street to allow for a festival; temporarily allowing a traffic lane to become a bike lane; shifting parking stalls to become pocket parks or outdoor cafe tables; etc. With many of these kinds of interventions beginning to crop up in cities around the world, the term "Tactical Urbanism" was introduced by {{mike-lydon-and-anthony-garcia}} to capture these kinds of activities.

These kinds of short-term tactics can enliven public space, while avoiding the red-tape of more permanent interventions. They are thus easier to implement given their quick and temporary scope. They often are the result of grass-roots community activism, and are typically described in the context of community empowerment.

At the same time, these kinds of interventions can be related to complexity thinking if they are conceived not as "one-offs", but instead as strategic tests that serve as a kind of environmental probe. Such interventions are "light, quick, and cheap", meaning that they are also {{safe-to-fail}}. Because of their temporary and "light" nature, they can quickly be mobilized on different sites, on different days. This means that they have the inherent ability to provide quick and adaptive {{timeiterations}} that can support urban 'learning'.

How the City Learns

In what way might a city learn? Urban Designers often depict renderings of lovely civic interventions: bike paths filled with happy cyclists; amphitheaters enlivened by performers and audiences; sidewalk cafes brimming with smiling people. But are these projections accurate? Too often, once spaces are built, they fail to perform in the ways anticipated - but at that point it is too late. Too much capital has been sunk into the project to rip it up and start over again, so we are left with dysfunctional environments.

We can therefore think about tactical approaches as a way to increase the number of functional {{variables}} a particular urban environment can explore. One iteration might involve populating a street with a market, another might be about partially closing it for a bike path, another might test turning sections into pop-up parks. Each of these can be considered a potentially viable urban function seeking the right "fit" within a given context - one looking for a supportive niche. It is therefore possible to see tactical interventions as "fitness" probes used to explore the {{fitness-landscape}} of an urban environment. Given that different urban environments are subject to different underlying dynamics (or {{driving-flows}}), the success of a particular test probe can tell us something about which niches are suitable for longer-term interventions.

Example: Play Me, I'm Yours

Play Me, I'm Yours began in 2008 as an art installation by Luke Jerram, placing pianos in various locations in a city. The project gained international traction and has since been replicated globally. Musicians find pianos in unexpected locations and are able to animate the surrounding environment by playing music. While the project is compelling in and of itself, it is also interesting to position it not merely as an artistic intervention, but also as an experiment in probing the city for viable music locations. Each piano, in a sense, could be thought of as a sensor, monitoring how often it is activated by players. Together, the pianos thereby gather data about the underlying capacity or propensity for music performance in a section of the city. If we think of each piano as an agent in a complex system, and we think of "being played" as a measure of that agent's fitness, then the pianos can, in a sense, monitor which positions best serve to gather their relevant input (piano-playing individuals). Here, the civic environment carries these driving resource flows in differential ways (with some locations being richer in flows than others). These are thereby more "fit" locations.

While this example has its limits, it can be extended to imagine other, similar kinds of civic systems. For example, imagine that we create a temporary pop-up playground set, capable of being easily dismantled and assembled, and then deployed to different vacant lots in the city. We could then imagine equipping this set with sensors, to determine where and when it is activated and used. This would not involve the top-down monitoring of individual kids (a risk often associated with big data collection), but instead would simply involve the monitoring of the equipment itself: do the swings swing, are the slides being slid down, etc. We can think of each of these activities as a measure of 'fitness' for the playground equipment. A slide, for example, as an agent within this complex system, aims to fulfill its 'destiny' by being used for sliding: sensors monitoring the frequency of its use can then serve as a measure of its fitness. The various pop-up locations are different niches, each of which provides the slide with differential flows of a particular resource - in this case the energy of sliding children - that the slide is hungry to gather. The deployments of the playground equipment can then be seen as explorations of the fitness landscape, {{timeiterations}} through which the slide gathers {{feedback-loops}} about locational success.
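One way to make this concrete is sketched below. The site names, activation rates, and the epsilon-greedy exploration rule are all my own invented assumptions, not part of the project described above: each deployment acts as a fitness probe, sensor counts estimate each site's underlying "activation rate", and the equipment is steered toward the sites where it is used most, while still occasionally probing elsewhere.

```python
# Illustrative sketch (hypothetical sites and rates): pop-up deployments as
# fitness probes, with an epsilon-greedy rule balancing exploration of new
# sites against exploitation of the best-known one.

import random

SITES = {"vacant_lot_A": 0.15, "schoolyard_B": 0.55, "plaza_C": 0.30}  # true, hidden rates

def deploy(site: str) -> int:
    """Simulate one day of sensor counts at a site (12 observed hours)."""
    return sum(random.random() < SITES[site] for _ in range(12))

def explore(days: int = 60, epsilon: float = 0.2, seed: int = 1) -> dict[str, float]:
    random.seed(seed)
    counts = {s: 0 for s in SITES}   # deployments per site
    totals = {s: 0 for s in SITES}   # summed activations per site
    for _ in range(days):
        if random.random() < epsilon or not any(counts.values()):
            site = random.choice(list(SITES))  # probe a site at random
        else:
            # otherwise redeploy at the site with the best average so far
            site = max(SITES, key=lambda s: totals[s] / counts[s] if counts[s] else 0)
        totals[site] += deploy(site)
        counts[site] += 1
    return {s: (totals[s] / counts[s] if counts[s] else 0.0) for s in SITES}

print(explore())  # average daily activations per site: an estimate of each site's fitness
```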

It should be apparent that this is a system capable of learning, with each tactical mutation of a {{variable}} serving as a test of fit strategies. Furthermore, the system can be thought of as made up of {{nested-orders}} of components: the fitness of the playground as a whole can be assessed, but we can also examine the fitness of the different sub-elements making up the park - how much a sandbox, a swing-set, or a slide are each activated as part of that whole.

Tactical Strategies as a Method of Deploying Complexity on the Ground

Tactical strategies are most typically lauded as a way to gain grass-roots advocacy, but they are presented here, in relationship to complexity, as a tangible, operational way to employ complexity thinking in real-world situations. These strategies, alongside the idea of {{urban-datascapes}}, are a way of gathering meaningful data about the differential needs and functional requirements of the city. This information gathering can be done using high-tech sensors (leveraging the power of the Internet of Things), simple observation strategies (does a pop-up market look busy or dead?), or by figuring out how success can leave an environmental trace ({{stigmergy}}).

In the case of stigmergic signals, we need to think about how the environment is structured in ways that make it capable of collecting signals. For example, if we wish to take a tactical approach to placing pathways in a park, rather than setting these in stone, we might instead simply plant grass. Grass, as a medium, is capable of collecting traces of differential flows of footsteps - recording the {{driving-flows}} where routes converge. In this way, what are known as 'desire lines' manifest on the grass as an emergent phenomenon, revealing bottom-up flows rather than imposed flows. If the "fitness" of a sidewalk paving stone pertains to where it best gathers footfalls, then desire lines reveal the optimum location to place these stones.
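The sketch below is a toy version of this stigmergic recording, with an invented grid, entrances, and walker behaviour: walkers cross the "grass" between entrances, each step leaves a trace, and the most-trodden cells are where desire lines emerge and where paving would be most fit.

```python
# Illustrative sketch (hypothetical grid and entrances): grass as a stigmergic
# medium, accumulating footfall traces that reveal emergent desire lines.

import random

WIDTH, HEIGHT = 12, 8
wear = [[0] * WIDTH for _ in range(HEIGHT)]   # footfall traces per grass cell

def walk(start: tuple, goal: tuple) -> None:
    """Step toward the goal, leaving a trace on every cell crossed."""
    x, y = start
    while (x, y) != goal:
        wear[y][x] += 1
        x += (goal[0] > x) - (goal[0] < x)    # move one step toward the goal in x...
        y += (goal[1] > y) - (goal[1] < y)    # ...and in y
    wear[goal[1]][goal[0]] += 1

random.seed(0)
entrances = [(0, 0), (0, HEIGHT - 1), (WIDTH - 1, 3)]
for _ in range(500):
    a, b = random.sample(entrances, 2)        # each walker picks two distinct entrances
    walk(a, b)

for row in wear:                              # darker symbols mark emergent desire lines
    print("".join(" .:*#"[min(4, c // 100)] for c in row))
```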

We can, of course, force these flows into other regimes that will become well-trodden: if there is only one way to go, then people will go that way. But just because we have locked people in to a given behavior by forcing them into this conformance does not mean that it is best. We can think of the QWERTY keyboard as imposing a limit on more effective ways of typing: just because lots of people use this keyboard does not make it the most fit of all possible keyboards.

Tactical Urbanism can therefore be seen as a useful tool for designers thinking about how they might explore the underlying fitness landscapes of the city - shaped by different flows and potentials. The challenges are in learning how to conceptualize material artifacts in the city - ranging from movable chairs in parks, to movable buses on self-organizing bus routes - in more tactical ways. 


 

Resilient Urbanism

How can our cities adapt and evolve in the face of change? Can complexity theory help us provide our cities with more adaptive capacity to respond to uncertain circumstances?

Governing Features ↑

Increasingly, we are becoming concerned with how we can make cities capable of responding to change and stress. Resilient urbanism takes guidance from some complexity principles with regards to how the urban fabric can adapt to change.


Urban resilience refers to the ability of an urban system - and all its constituent socio-ecological and socio-technical networks across temporal and spatial scales - to maintain or rapidly return to desired functions in the face of a disturbance, to adapt to change, and to quickly transform systems that limit current or future adaptive capacity (Meerow et al., 2015, Landscape and Urban Planning).

MORE COMING SOON!


Back to {{urbanism}}

Back to {{complexity}}



 

Relational Geography

If geography is not composed of places, but rather places are the result of relations, then how can an understanding of complex flows and network dynamics help us unravel the nature of place?

Governing Features ↑

Relational Geographers examine how particular places are constituted by forces and flows that operate at a distance. They recognize that flows of energy, people, resources and materials are what activate place, and focus their attention upon understanding the nature of these flows.


Networked Space:

Which two cities are closer together - London and Blackpool or London and New York? From a strictly metric geographic sense, we would answer that London and Blackpool are closer, and for a long time that would be how geographers would respond. But in recent decades geographers have become increasingly interested in how places are constituted not so much according to fixed, metric qualities, but in terms of how different kinds of flows tie spaces together. These spaces might be quite far from one another in a geographic sense, but quite close together in terms of how they relate: hence relational geography.

Looking at these cities from a relational perspective, we would consider the kinds of flows that move between them - flows that might be constituted of people, ideas, money, resources, etc. From this perspective, we could reasonably argue that London and New York share far greater intensities of flows, drawing them closer together than London and Blackpool, even though the latter pair both sit within the UK.

Relational geography is thus interested both in the kinds of {{network-topology}} that exist between places, and in the {{driving-flows}} that these networks carry. Rather than seeing places as primary and the relations between places as a secondary outcome of these primary "things", relational geography flips this relationship on its head: arguing that we need to look at the relational flows first, with particular places then being constituted by the nature of how these flows come to be grounded or moored in particular settings (see for example the work of {{John-urry}}). It employs network theory to help think about how the dynamics of agent interactions - the flows moving between them - affect the performance of complex geographical systems.


Complexity and Relational Geography

Given these interests, it stands to reason that geographers thinking through this orientation would notice similarities to complexity theory. Relational geographers thus began to draw inspiration from complexity dynamics, particularly as they pertain to such phenomena as {{emergence}}, {{non-linearity}}, and {{driving-flows}}. Relational geographers are not particularly engaged with the nature of self-similar or nested orders in complex systems, and when they do focus on individual agents, these are most often thought of not at the scale of humans in cities, but as cities themselves acting as agents in a global network.

Relational Geography attunes in particular to how network structure may have an effect on the kinds of urbanization patterns that emerge; how present-day patterns of habitation are not necessarily 'natural' outgrowths of previous patterns in a clear, logical chain, but instead how {{history}} and {{contingency}} play a key role. Relational geographers may employ the language of complexity, using terms like {{bifurcations}} to try to capture the contingent, non-linear dynamics at play.

Thus, what makes a "world class" city versus a local city, and what are the driving forces that weave a city into global versus local networks of influence? How can cities at the fringes move to steer more driving flows of resources and people into their sphere of influence? What geographical regions are left behind? For example, how does the location of a particular rail line, and its stations, change the dynamics of proximity in ways that may privilege certain regions, while marginalizing others that are left with poorer access to these flows of mobility?

These kinds of questions sit naturally alongside many of the terms and concepts used in complexity thinking.

Map of global airline routes (Wikimedia Commons)






Back to {{urbanism}}

Back to {{complexity}}


 

Parametric Urbanism

New ways of modeling the physical shape of cities allows us to shape-shift at the touch of a keystroke.  Can this ability to generate a multiplicity of possible future urbanities help make better cities?

Governing Features ↑

Parametric approaches to urban design are based on creating responsive models of urban contexts that are programmed to change form according to how inputs are varied. Rather than the architect creating a final product, they instead create a space of possibilities ({{phase-space}}) that is activated according to how various flow variables - economic, environmental, or social - are tweaked. This architectural form-making approach holds similarities to complex systems in terms of how entities are framed: less as objects in and of themselves, and more as responsive, adaptive agents, activated by differential inputs.


More Coming Soon! In the meantime, check out the tutorial under the "Resources" section. 

Relates to topology;

Relates to variations;

Relates to differentials

Back to {{urbanism}}

Back to {{complexity}}


 

Landscape Urbanism

Landscape Urbanists are interested in adaptation, processes, and flows: with their work often drawing from the lexicon of complexity sciences.

Governing Features ↑

A large body of contemporary landscape design thinking tries to understand how designs can be less about making things, and more about stewarding processes that create a 'fit' between the intervention and the context. Landscape Urbanists advancing these techniques draw concepts and vocabulary from complex adaptive systems theory.


“Landscape Urbanism” (LU) is a phrase coined by theorist {{Charles-Waldheim}} to describe a new sensibility towards space that emerged in the late 1980s and early 1990s. Its roots trace back to a number of key theorists and practitioners based at the University of Pennsylvania, the Harvard Graduate School of Design, and the University of Illinois, Chicago. Their writings became mainstream in the late 1990s and mid 2000s, circulated in two highly influential texts - Recovering Landscape (1999) and The Landscape Urbanism Reader (2006). These helped disseminate key ideas within the discourse, as well as highlighting seminal projects advancing the movement's ideas in the form of competition entries and built works.

These texts and projects positioned LU as a break from traditional landscape interests, which tended to focus on the scenographic or pictorial qualities of space. Instead, Landscape Urbanism attunes to the nature of landscape performance in an unfolding context. LU practitioners and theorists are thereby less attentive to the physical dimensions of plans (how they look), and more to the performative aspects of plans and how these come to be enacted over time. Here, practitioners acknowledge the limits to their foresight, and instead try to work with {{contingency}}. They accept that {{history}} matters in terms of the specifics of how places will come to emerge.

The movement recognizes that prediction is impossible, allowing for sites that are not so much constructed as performed in space and time, by means of differential forces engaging with the site. This performance takes place within a spatial arena that is structured so as to not only permit but also afford a broad range of site potentials - different manners in which the site might be "played", or from which different variations of performance can be extracted. To prime these mutable settings, LU practitioners speak of ‘seeding’ an area, ‘irrigating’ a territory, or ‘staging’ the ground - all alluding to an active and catalyzing engagement with the site that anticipates and prepares the ground for possibility, while still maintaining an open-endedness in terms of which future possibilities are enacted (see {{James-Corner}}). This idea of creating a flexible framework that can be activated in different ways is described as creating {{open-scaffolds}} in landscape, but can be tied back to the idea of setting up {{variables}} that are then activated so as to support different {{driving-flows}}.

Thus, LU does not just leave a space ‘open’, but instead aims to increase a physical environment’s capacity to foster the emergence of contingent events: ones constituted on territories where these flows coalesce. Here, the concept of ‘staging’ or creating {{affordances}} is key. Affordance is a term coined by James Gibson to describe the capacity of objects or environments to invite multiple kinds of appropriation, which in turn manifest as different ‘states’ that align with different kinds of user needs or requirements. The choice of which ‘afforded’ state manifests is contingent upon the kinds of imbricated relationships activated by users. That said, not all sites offer equal affordances to shift into different regimes of behavior: if too specific, territories do not have the plasticity required; if too open-ended, they become neutral - with little capacity to meaningfully afford or support programmatic specificity.

By creating a range of affordances that support programmatic potential, Landscape Urbanists accept the future as non-linear, open-ended and contingent, but still act to curate meaningful material territories that can be appropriated and modified when and where contingent forces coalesce.

This notion of {{affordances}} is closely aligned to that of {{phase-space}}. Both concepts engage the idea (central to both complexity and {{assemblage-geography}}) that material entities have certain capacities that exist within {{the-virtual}} and remain contingent, and that these are activated and manifested only under particular circumstances. That said, material affordances are not completely open-ended - there are still limits - and the way in which the capacities of material form are ‘called forth’ is through practices that integrate the {{driving-flows}} of agency present in a given situation.

This emerging body of work integrates an acceptance of process, evolution, and unknown site dynamics, with the actualization of site features occurring in accordance with non-linear interactions. Strategies involve the creation of multiple enabling sites (or niches) within the territory of the city that permit different kinds of programs to find their best ‘fit’ in response to evolving relationships.

For a more-in depth look at Landscape Urbanism approaches, including examples of projects and their relationship to complexity thinking, please watch the tutorial featured in the "In Depth" resources.


Back to {{urbanism}}

Back to {{complexity}}




 

Incremental Urbanism

Cities traditionally evolved over time, shifting to meet user needs. How might complexity theory help us emulate such processes to generate 'fit' cities?

Governing Features ↑

This branch of urban thinking considers how the morphologic characteristics of the built environment factor into its ability to evolve over time. Here, we study the ways in which the built fabric can be designed to support incremental evolution.


Typically, designers see the "masterplan" as the foremost solution to urban planning. Often these masterplans are characterized by large-scale, hierarchical, high-capital, inflexible, and centralized ways of city planning. Such masterplans fail to integrate the complex and rich dynamics of cities, with the importance of architectural forms and visions overshadowing ongoing social, economic, and political characteristics.

Incremental Urbanism, by contrast, considers the complexity of these variables and instead aims to support a city that can grow and evolve over time. Here, individual occupants or builders respond to the constantly changing environment and resources around them. The city is built piece by piece as individuals get more information, develop more aspirations,  and better identify their own needs and capacities.

At the same time, people's ability to modify the city is also tied to the nature of its underlying morphologic conditions. Certain characteristics enable evolution to proceed incrementally over time, whereas other conditions resist change, such that alterations require more radical processes of destruction and reconstruction - impeding the ability for iterative learning. Thus, the inherent flexibility of the floor plates of Amsterdam's canal houses enables them to host a wide array of functions - be it warehousing, housing, restaurants, offices, or shops - whereas other kinds of spaces resist such flexibility of appropriation.


Example:

Consider the images below. In the upper set of images, functions are built with a morphological specificity that resists easy conversion. While it is possible to swap out these functions into the other spaces, it is unlikely. Accordingly, if one function ceases to be fit, mutations for new functions are not easily enabled.
In contrast, if we look at the canal buildings in Amsterdam, we see that the built characteristics allow for change in programming to easily take place, allowing new kinds of behaviors to be activated and supported by the identical built fabric. 


 


Modularity

This branch of urban thinking considers time and evolution key to generating fit urban spaces. {{jeremy-till-t-schneider}}, in their book "Flexible Housing", discuss how housing units can be developed by means of {{modular}}, allowing projects to evolve incrementally over time and create larger spaces only as needed. The ultimate building scale may involve additions to structures such as an additional story, the expansion of a room, or an additional detached small unit. This type of development happens constantly and gradually over time, resulting in no large disruption to the neighborhood. Each new modification respects the existing context so that, as growth and change happen, features of the original character remain.

Incremental development can therefore happen at many scales (or at differing 'grains' of urban fabric). The designer's goal is to generate effective spaces that can range from single-family homes to large apartment complexes or even office buildings. This wide spectrum of spaces evolves over time by adding more modules together to create a more fit urban space.

Iterations

We can think about this kind of incrementalism as being consistent with the iterative nature of complex systems, built as a series of {{patterns-of-interactions}} steered by the collective behaviors of {{bottom-up-agents}} in the form of occupants. That said, these occupants need to inhabit spaces that are capable of being modified in this incremental manner - a built fabric that has the {{adaptive-processes}} to respond to shifting needs and forces.

In Julia King's "What is the Incremental City" she writes, "the incremental city achieves what the ‘natural city’ achieves as it is developed in a piece-meal way responding to local conditions, desires, and aspirations.” This flexibility allows developments to freely react to new variables - the {{driving-flows}} of urban conditions that continuously establish an array of new possible system states. We can think of these reactions as {{feedback-loops}}, with the built environment self-regulating and organizing over time. King states that incrementalism encourages individuals to shape and affect their environments: they activate incremental improvements, additions, or modifications in the face of novel inputs - instilling a bottom-up personal agency not typical in top-down master-planned projects.


{{Patterns-of-Interactions}}

Many of the dynamics we see at play in incremental approaches depend on what is occurring in the surrounding context. Thus, similar to agent-based simulation models where cells shift their states based on the performance of neighboring cells, in an incremental approach the morphological components at play - and the variations on those morphological conditions - are influenced and constrained by what is happening on neighboring sites.


Example:

Aspects of Incremental Urbanism can be demonstrated in the game of Carcassonne. The game consists of a set of tiles that display sections of grass, roads, or cities. As tiles are iteratively placed, they must progressively adapt to adjacent, predetermined conditions to keep roads and cities correctly matched together. Incremental cities and Carcassonne alike develop unpredictable and diverse landscapes formed by means of small, incremental steps that are constrained by surrounding decision-making (a minimal sketch of this neighbor-constrained placement follows below). Traditional civic growth also follows this model - evolving naturally and organically with little or no planning - while modern practices attempt to plan civic development ahead of time.
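The sketch below is a toy model in the spirit of Carcassonne, not the actual game: tiles carry an edge type on each side, and a new tile can only be placed where its edges match all already-placed neighbors. The tile set and growth rule are invented for illustration; the point is that a coherent but unplanned fabric emerges from purely local constraints.

```python
# Illustrative sketch (invented tile set): incremental, neighbor-constrained
# growth, where each placement must match the edges of tiles already in place.

import random

# each tile lists edge types in order (north, east, south, west)
TILES = [
    ("grass", "road", "grass", "road"),    # straight road
    ("road", "road", "grass", "grass"),    # curved road
    ("city", "grass", "city", "grass"),    # city strip
    ("grass", "grass", "grass", "grass"),  # open field
]

OPPOSITE = {0: 2, 1: 3, 2: 0, 3: 1}                 # north<->south, east<->west
NEIGHBOURS = [(0, -1), (1, 0), (0, 1), (-1, 0)]     # (x, y) offset for each edge index

def fits(grid, x, y, tile):
    """A tile fits if every occupied neighbour shows a matching edge type."""
    for edge, (dx, dy) in enumerate(NEIGHBOURS):
        neighbour = grid.get((x + dx, y + dy))
        if neighbour and neighbour[OPPOSITE[edge]] != tile[edge]:
            return False
    return True

def grow(steps=40, seed=3):
    random.seed(seed)
    grid = {(0, 0): random.choice(TILES)}
    for _ in range(steps):
        # candidate sites: empty cells adjacent to the existing fabric
        frontier = {(x + dx, y + dy) for (x, y) in grid for dx, dy in NEIGHBOURS} - set(grid)
        options = [(site, t) for site in frontier for t in TILES if fits(grid, *site, t)]
        if not options:
            break
        site, tile = random.choice(options)        # a local decision, with no master plan
        grid[site] = tile
    return grid

print(f"{len(grow())} tiles placed, each constrained only by its neighbours")
```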

While incremental shifts can occur in any setting, cities designed using top-down strategies tend to have a slower pace of incremental development because of the pre-imposed limits already in place. It takes longer for agents within these cities to evolve and shape their environments, as they are already locked in to a predetermined form. On the other hand, cities that grow in the traditional manner emerge through {{patterns-of-interactions}}, shaped via incremental changes that are a product of the needs of the agents within the system (more on traditional civic growth can be found on the {{informal-urbanism}} page).



Text adapted from a contribution by Samantha Barger, Michael Gehl, Shivang Patel, Kevin Tokarczyk; Iowa State University, 2021

Back to {{urbanism}}

Back to {{complexity}}


 

Evolutionary Geography

Across the globe we find spatial clusters of similar economic activity. How does complexity help us understand the path-dependent emergence of these economic clusters?

Governing Features ↑

Evolutionary Economic Geography (EEG) tries to understand how economic agglomerations or clusters emerge from the bottom-up. This branch of economics draws significantly from principles of complexity and emergence, seeing the rise of particular regions as path-dependent, and looking to understand the forces that drive change for firms - seen as the agents evolving within an economic environment.


Evolutionary Economic Geography is a branch of economics that tries to understand how the same kinds of processes observed in evolution can be applied to geographically situated economic clusters. It shares some similarities with {{Relational-Geography}} in that it sees the specificity of the physical environment as something that arises due to networks of driving flows. Where it differs is partially in terms of its specific focus - that of economic actors situated in urban contexts (that is, firms with particular expertise and economic output) - rather than the broader multiplicity of actors found within cities. Further, the field foregrounds more of the dynamics of complexity than relational geography: attuning in particular to the {{bottom-up-agents}} (in the form of firms) that make up these economic systems, as well as the dynamics underlying their {{adaptive-processes}} to become more fit. Accordingly, the field employs what is known as "General Darwinism": taking the principles of variation, selection and retention (VSR) that we see in organic evolving systems, and applying these same principles to non-organic systems.

Examples of the kinds of geographic phenomena that these evolutionary geographers might consider of interest would be the rise of Silicon Valley as a tech hub, Holland's tulip-growing fields, or Taiwan's orchid-growing sector (see video below). These kinds of regions of specialized intensification are called "agglomerations", and are described as arising in ways that conceptually correspond with {{emergence}}. Thus, these kinds of intensities of expertise were not necessarily pre-planned from the top down, but instead arose due to processes that are more akin to the evolutionary dynamics we see in nature. Furthermore, the ways in which these dynamics unfold are tied to how {{bottom-up-agents}} in complex systems are steered towards fitness. Here, individual firms are seen as "agents" in an economic system, all of which are competing to find niches for success. These firms are steered not only by the {{feedback-loops}} gathered from monitoring the success of their own actions, but also by the signals gathered by attuning to the actions of their nearest competitors.
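A bare-bones sketch of such a variation-selection-retention loop is given below, with firms as agents whose business 'routines' are imitated, varied, and selected against a fitness measure. The fitness function and all parameters are invented for illustration; real evolutionary economic models are far richer.

```python
# Illustrative sketch (hypothetical parameters): a minimal variation-selection-
# retention (VSR) loop, with firms imitating fitter competitors plus noise.

import random

TARGET = 0.8  # a hidden 'best practice' value that the market currently rewards

def fitness(routine: float) -> float:
    return 1.0 - abs(routine - TARGET)          # closer to best practice = more fit

def vsr(n_firms: int = 30, generations: int = 50, seed: int = 7) -> float:
    random.seed(seed)
    firms = [random.random() for _ in range(n_firms)]   # each firm = one routine value
    for _ in range(generations):
        # selection: less fit firms exit; the fitter half is retained
        firms.sort(key=fitness, reverse=True)
        survivors = firms[: n_firms // 2]
        # retention + variation: exits are replaced by imitating a survivor, with noise
        entrants = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                    for _ in range(n_firms - len(survivors))]
        firms = survivors + entrants
    return sum(fitness(f) for f in firms) / n_firms

print(f"mean fitness after selection and imitation: {vsr():.2f}")
```

Co-location matters in this picture because imitation is only possible when competitors' routines are visible: the "spill-over" effects described below are what make the variation step cheap.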

Spill-overs and Negentropy

These signals help steer individual firm success, due to the benefits of what are known as "spill-over" effects. Another way to think about this is that, left to their own independent devices, each firm needs to navigate the economic landscape with maximum uncertainty about how best to proceed in order to "harness" the {{driving-flows}} of monetary gain. By co-locating near similar agents, this uncertainty can be reduced (see {{information-theory}}). Uncertainty in this case might pertain to the industry "best practices" that are coming to the fore, the personnel knowledgeable and available in the region to be hired, and the synergetic support businesses present and able to carry out aspects of the delivery model. Thus, the backdrop of Silicon Valley provides expertise and support "in the air" that gives businesses in the region a competitive edge over others located in more isolated regions.

Intensifying Flows & Feedback

Some of the dynamics pertaining to why a particular economic agglomeration emerges involve the kinds of network effects seen in conditions of growth and {{preferential-attachment}}. As certain business sectors begin - potentially at random - to co-locate in a particular region, other support services become attracted to that area, which then attract further businesses, and so on. We see again the mechanism of {{positive-feedback}} reinforcing particular patterns, which then take hold as {{attractor-statesbasins}} for agents in the system. 

Fitness

We can therefore consider firms in a region as competing {{bottom-up-agents}}, each trying to tweak the {{variables}} of their business models so as to outcompete their neighbors. Yet even though they are engaged in competition, they nonetheless have some reliance on their competitors: it is through their co-presence that many simultaneous business protocol {{timeiterations}} can be tested in parallel, with the overall expertise of the co-located enterprises being enhanced. Accordingly, agglomerations of these co-located competing firms are more likely to increase their {{fitness}} than firms operating at a distance.

Enslavement or "Lock-in"

It becomes very difficult to disrupt an agglomeration once it has emerged. Too many of the flows related to a particular sector become concentrated in the geographic region, meaning that massive structural shifts are required to rearrange these flows. This is not to say that this can never occur. Detroit, for example, was for many years the powerhouse of automotive manufacturing. It was only with the advent of major underlying shifts of flows - tied to such aspects as wages, access to cheaper workers, and lowered shipping costs - that these flows gradually reconstituted themselves in new geographic locations off-shore. But such major shifts are rare, with regions of expertise reproducing themselves over time, even in the face of other underlying disruptions. Such systems can be described as being in {{enslaved-states}}, or what Evolutionary Economic Geographers call "lock-in".


The video below outlines an example of an emergent agglomeration: that of Orchid growing in Taiwan.




Back to {{urbanism}}

Back to {{complexity}}


 

Communicative Planning

Communicative planning broadens the scope of voices engaged in planning processes. How does complexity help us understand the productive capacity of these diverse agents?

Governing Features ↑

A growing number of spatial planners are realizing that they need to harness many voices in order to navigate the complexities of the planning process. Communicative strategies aim to move from a top-down approach of planning, to one that engages many voices from the bottom up.


Backdrop

Communicative planning is a specific strategic approach to developing plans in cooperation with a broader range of actors. If master plans relied on the expertise of the top-down planner, then communicative approaches aim to broaden the number of voices engaged in the process, include more perspectives, and garner more wisdom from harnessing the bottom-up "wisdom of crowds". 

Here, planning is positioned as a "wicked problem": one with poor boundaries, many diverging and overlapping concerns, and no direct pathway to problem "solutions". It is therefore seen as a problem in "complexity" - with the term largely adopted to refer to the messiness of the problem domain. In this reading, agents in the system are considered as individual stakeholders, each of whom has personal interests that need to be resolved or addressed. At issue is how best to 'strategically navigate' amongst these players, so that an "emergent" solution can be reached.

Within this reading, it is helpful to consider the differential power that each stakeholder wields, so as to better balance dynamics that might lead to unfair planning solutions. Such situations arise, for example, when a particular party (such as a developer) holds disproportionate resources with which to influence planning decision-making. Accordingly, communicative planners try to understand the relative flows of agency available within the process, and then channel these in more equitable, balanced ways.


Relation to Complexity

Planners with these interests are often drawn to principles from complexity, not least because one of the key thinkers in the domain, {{patsy-healey}}, wrote a seminal book titled "Urban Complexity and Spatial Strategies". The approach does indeed relate to complexity in that it emphasizes a bottom-up process by which a consensual strategy for planning emerges. Here, the use of the word "complexity" may be more metaphorical than technical (if we assume that in this context it is simply suggesting that planning is 'complicated'). Similarly, there are other aspects of complexity theory that are appropriated in this discourse, some in more direct, others in more metaphorical, manners.

Networks

Communicative Planners have a strong interest in how the nature of the actor {{network-topology}} affects how decision-making takes place (and whose voices dominate the network). There is a strong link between communicative approaches and Actor Network Theory (ANT), which examines network dynamics as what ultimately constitutes certain forms or protocols previously accepted as 'givens'. Here, similar to the approach of relational geography, the relations constituting a given entity are seen as more fundamental than the entity itself.

Agents

Part of the objective of network analysis is to understand which nodes in the network hold more power, tracing which agents in the system play a larger causal role in driving it forward. Communicative Planners consider how {{bottom-up-agents}}, in the form of diverse stakeholders, steer the process, and where differences in agency lie. While consensus can "emerge" from many kinds of bottom-up agent interactions, such emergence can be subject to inequitable steering depending on how stakeholders are empowered or disempowered in the process. The concern for agents here is thus less about "rule-based" decision-making, or how such agents adapt, and more about how so-called bottom-up dynamics need to be facilitated so as to ensure that the meaningful input of all agents can be garnered in discovering a planning solution. The concern is that some processes leave agents out - unable to contribute to the emergent characteristics of a given planning strategy.

Emergence

For communicative planners, the concept of emergence is again used more as a metaphorical tool than in a technical manner. To illustrate: even though diverse ants in a complex system form an emergent trail, they do not do so by sitting around together in a colony deliberating and weighing which course of action to take. Emergence in the more technical sense relates to actions that are performed in an environment, where the agents involved - be they sand grains or ants - need not be consciously cooperating. By the same token, ants need not compromise their own needs on behalf of the colony. This is not to say that the communicative approach towards emergent consensus is not of value, only that it is probably not of the same kind as what we would see in natural complex systems.

That said, the language and terms drawn from complexity seem to offer communicative planners a useful set of concepts: able to convey something meaningful about developing a more contingent, open-ended, bottom-up, and relational approach to decision-making.




Back to {{urbanism}}

Back to {{complexity}}


 

Assemblage Geography

Might the world we live in be made up of contingent, emergent 'assemblages'? If so, how might complexity theory help us understand such assemblages?

Governing Features ↑

Assemblage geographers consider space in ways similar to relational geographers. However, they focus more on the temporary and contingent ways in which forces and flows come together to form stable entities. Thus, they are less attuned to the mechanics of how specific relations coalesce, and more to the contingent and agentic aspects of the assemblages that manifest.



Assemblage draws from the work of Gilles Deleuze, who coined the term 'agencement' (translated to "assemblage" in English), which in the original French refers both to 'coming together' and to 'agency'. The philosophy draws attention to the contingency of material things as well as their agentic power: emphasizing that things retain both virtual capacities, which remain latent, and capacities that are actualized when entering into relation with other forces or actors.

Example:

Consider the power of a Mongol warrior. Here three separate entities come together: the individual warrior, the horse that he rides, and the stirrup that enables him to stand with his weapon while in motion. Each of these separate elements cannot conquer a territory on its own, but together the three can enter into an assemblage that has the additional agentic power to have a major effect. Such an assemblage can 'stabilize' into this configuration, while each component still maintains its own identity. Assemblage provides a way to speak about such entities, but also about how certain capacities can be latent within entities until they are forged together in contingent, temporary assemblages.

Relation to Complexity

Assemblage theorists adopt the concept of Emergence, but engage with it in a much more philosophical manner. Following the works of the philosophers Gilles Deleuze and Felix Guattari, they describe concrete urban entities as emergent, indeterminate, and historically contingent Stabilized Assemblages. Assemblages are configurations of inter-meshed forces and distributed agencies - human/non-human, local/non-local, material, technical, social, etc. - that are stabilized at particular moments. Once in place, assemblages - like emergent features - may have unique properties or capacities not associated with their constituent elements, and thereupon exert agency in structuring further events. 'Assemblage' ideas therefore echo those of Emergence: something is produced from constituent agents that is able to act in novel ways. This conceptual overlap has led geographer {{Kim-dovey}} to suggest that the phrase 'Complex Adaptive Assemblage' be used in place of 'Complex Adaptive System' in the spatial disciplines.

Agents in a particular assemblage have particular capacities which one might see as analogous to Degrees of Freedom, but how these capacities manifest is subject to Contingency: predicated on the nature of flows, forces, or the Patterns of Interactions at play in a given situation. Assemblage geographers thus import the language of {{non-linearity}} and Bifurcations: trying to understand the chance events that determine the trajectory of urban systems which are sensitive to historical unfolding.

This sense that {{history}} matters runs counter to the historical determinism that previously dominated geographical investigations, where a coherent, logical chain of cause and effect was seen as the primary driver of geographical difference. For assemblage thinkers, history does indeed matter, but only insofar as one particular trajectory is realized versus another. Manuel de Landa, for example, argues that in order to properly conceptualize the importance of any given actualized geographical space, it is necessary to see this space as but a single manifestation - situated within the broader Phase Space of The Virtual - with all its unrealized potentials. This emphasis on the role of history situates urban systems as subject to Contingency, with the actual unfolding representing only one possible trajectory of broader system potential.

Assemblage Geography thus engages with many concepts present in Complex Adaptive Systems Theory, but primarily focuses on the nature of contingent, causal flows (including both human and non-human flows) and how these come to be realized in particular physical manifestations.


Accordingly, the field is less attuned to aspects of complexity surrounding, for example, rule-based systems, mathematical regularities, or the adaptive capacities of bottom-up agents.


Back to {{urbanism}}

Back to {{complexity}}


 


 


Navigating Complexity © 2015-2024 Sharon Wohl, all rights reserved. Developed by Sean Wittmeyer

