
Bottom-up Agents

Complex Adaptive Systems are made up of multiple, parallel agents whose coordinated behaviors lead to emergent global outcomes.

CAS are composed of populations of discrete elements - be they water molecules, ants, neurons, etc. - that nonetheless behave as a group. At the group level, novel forms of global order arise strictly due to simple interactions occurring at the level of the elements. Accordingly, CAS are described as "Bottom-up": global order is generated from below rather than coordinated from above. That said, once global features have manifested, they stabilize - spurring a recursive loop that alters the environment within which the elements operate and constrains subsequent system performance.

What is an Agent?

Bikes, Barber shops, Beer glasses, Benches. We can ask what an agent is, but we could equally ask what an agent is not!

Defining an agent is not so much about focusing on a particular kind of entity, but instead about defining a particular kind of performance within a given system and that system's context. Many elements of day-to-day life might be thought of as agents, but to do so, we need to ask how agency is operationalized.


Imagine that I have a collection of 1000 bikes that I wish to place for rent in the city. Could I conceive of a self-organizing system where bikes are agents - where the best bike arrangement emerges, with bikes helping each other 'learn' where the best flow of consumers is? What if I have 50 barber shops in a town of 500 000 inhabitants - should the shops be placed in a row next to one another? Placed equidistant apart? Distributed in clusters of varying sizes and distances apart (maybe following power laws?). Might the barber shops be conceptualized as agents competing for flows of customers in a civic context, trying to maximize gains while learning from their competitors? And what about beer glasses: if I have a street festival where I want all beer glasses to wind up being recycled and not littering the ground, what mechanisms would I need to put into place in order to encourage the beer glasses to act as agents - ones that are more 'fit' if they find their way into recycling depots? How could I operationalize the beer glasses so that they co-opt their consumers to assist in ensuring that this occurs? What would a 'fit' beer glass be like in this case (hint: a high-priced deposit?). Finally, who is to say where the best place is to put a park bench? If a bench is an agent, and 100 benches in a park are a system, could benches self-organize to position themselves where they are most 'fit'?

The examples above are somewhat fanciful but they are being used to illustrate a point: there is no inherent constraint on the kinds of entities we might position as agents within a complex system. Instead, we need to look at how we frame the system, and do so in ways where entities can be operationalized as agents.

Operational Characteristics:

  • having a common fitness criterion shared amongst agents (with some performances being better than others),
  • having an ability to shift performance (see Requisite Variety end handlebar),
  • having an ability to exchange information with other agents (to get to better performance faster),
  • operating in an environment where there is a meaningful difference available that drives behavior (see Driving Flows end handlebar).
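
To make these characteristics concrete, here is a minimal sketch that operationalizes the bike-share scenario above as a population of agents. It is only an illustration: the grid size, demand field, and all names and parameter values are invented, not drawn from any actual bike-share system.

```python
import random

# A minimal sketch of the four operational characteristics, using the bike-share
# example from earlier. All names and numbers here are invented for illustration.

GRID = 20                                   # the city as a simple 20 x 20 grid
# the 'driving flow': a field of customer demand that differs from place to place
demand = [[random.random() for _ in range(GRID)] for _ in range(GRID)]

class BikeAgent:
    def __init__(self):
        self.x, self.y = random.randrange(GRID), random.randrange(GRID)

    def fitness(self):
        # 1. a common fitness criterion shared by all agents: local customer demand
        return demand[self.y][self.x]

    def shift(self):
        # 2. an ability to shift performance: try a small random relocation
        self.x = max(0, min(GRID - 1, self.x + random.choice([-1, 0, 1])))
        self.y = max(0, min(GRID - 1, self.y + random.choice([-1, 0, 1])))

    def exchange(self, others):
        # 3. an ability to exchange information: move towards a fitter neighbour's spot
        best = max(others, key=lambda a: a.fitness())
        if best.fitness() > self.fitness():
            self.x, self.y = best.x, best.y

bikes = [BikeAgent() for _ in range(1000)]
for step in range(50):
    for b in bikes:
        b.shift()
        b.exchange(random.sample(bikes, 5))   # 4. iterate within an environment of meaningful differences
print(sum(b.fitness() for b in bikes) / len(bikes))  # average fitness tends to rise over the run
```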

'Classic' Agents

An odd list of potential agentic entities has been provided above (the 'B' list) that might, under specific circumstances, be transformed into operational agents. We start here to avoid the problem of limiting the scope of what may or may not be an agent. That said, these are not part of what might be thought of as the complexity 'canon' of Agents (the 'A' list). Let us turn to these now:

Those drawn to the study of complex systems were initially compelled to explore agent dynamics because of certain examples that showed highly unexpected emergent features. 'The classics' (described elsewhere on this website) include: emergent ant trails, coordinated by individual ants; emergent convection patterns, coordinated by water molecules in Benard/Rayleigh convection; and emergent higher thought processes, coordinated by individual neuron firings.

In each case, we see natural systems composed of a multitude of entities (agents) that, without any higher level of control, are able to work together and coalesce into something with characteristics that go above and beyond the properties of the individual agents. But if we consider the operational characteristics at play, these systems are no different from the more counter-intuitive examples listed above. Take ants as an example. Each ant is an agent that has:

  • a common fitness criterion shared amongst agents (getting food),
  • an ability to shift performance (searching a different place),
  • an ability to exchange information amongst other agents (deploying/detecting pheromones),
  • an environment where there is a meaningful difference available that drives behavior (presence of food).

Ant trails emerge as a result of ant interaction, but the agents in the system are not actively striving to achieve any predetermined 'global' structure or pattern: they are simply behaving in ways that involve an optimization of their own performance within a given context, with that context including the signals or information gleaned from other agents pursuing similar performance goals. Since all agents pursue identical goals, coordination amongst agents leads to a faster discovery of fit performance regimes. What is unexpected is that, taken as a collective, the coordinated regime has global, novel features. This is the case in ALL complex systems, regardless of the kinds of agents involved.
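
As a rough illustration of this dynamic, the sketch below is a minimal model - loosely based on the classic 'double bridge' ant setup rather than on any study cited here, with all parameter values invented - showing how local pheromone deposits and evaporation, with no global plan, lead a colony to converge on the shorter of two paths:

```python
import random

# Two paths lead to the same food source; the only 'meaningful difference'
# is path length. Each ant decides locally, based on pheromone it detects.
pheromone = {"short": 1.0, "long": 1.0}   # information left in the environment
length = {"short": 1.0, "long": 2.0}      # the driving difference: path cost

def choose_path():
    # an ant follows pheromone probabilistically, with no view of the whole system
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.random() < pheromone["short"] / total else "long"

for step in range(200):
    for ant in range(20):                  # 20 ants set out at each time step
        path = choose_path()
        # shorter paths are completed sooner, so they receive reinforcement more quickly
        pheromone[path] += 1.0 / length[path]
    for p in pheromone:                    # pheromone evaporates, so only
        pheromone[p] *= 0.99               # well-used trails persist

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"share of pheromone on the short path: {share:.2f}")   # tends towards 1.0
```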

Finally, once emergent states appear, they constrain subsequent agent behavior, which in turn tends to replicate those states. Useful here are Maturana and Varela's notion of autopoiesis as well as Hermann Haken's concept of Enslaved States. Global order or patterns (that emerge through random behaviors conditioned by feedback) tend to stabilize and self-maintain.

Modeling Agents:

While the agents that inspired interest in complexity operate in the real world, scientists quickly realized that computers provided a perfect medium with which to explore the kinds of agent behaviors we see operating there. Computers are ideal for exploring agent behavior since many 'real world' agents obey very simple rules or behavioral protocols, and because the emergence of complexity occurs as a step-by-step (iterative) process. At each time step, each agent takes stock of its context and adjusts its next action or movement based on feedback from its last move and from the last moves of its neighbors.

Computers are an ideal medium in which to mimic these processes since, with code, it is straightforward to replicate a vast population of agents and to run simulations that enable each individual agent to adjust its strategy at every time step. Investigations into such 'automata' informed the research of early computer scientists, including such luminaries as Epstein & Axtell, Von Neumann, Wolfram, and Conway (for more on their contributions, check 'key thinkers' in the FEED!).

In the most basic versions of these automata, agents are considered as cells on an infinite grid, and cell behavior can be either 'on' or 'off' depending on a rule set that uses neighboring cell states as the input source.

Conway's Game of Life: A classic cellular automaton
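
A minimal implementation sketch of such an automaton - here Conway's rules on a small wrapping grid rather than an infinite one, with an invented starting pattern - might look like this:

```python
# One update step of Conway's Game of Life. Each cell's next state ('on' or
# 'off') depends only on its own state and its eight neighbours.

def step(grid):
    size = len(grid)
    nxt = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            # count live neighbours (grid edges wrap around)
            n = sum(grid[(y + dy) % size][(x + dx) % size]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            # the rule set: survive with 2 or 3 neighbours, be born with exactly 3
            nxt[y][x] = 1 if (n == 3 or (grid[y][x] == 1 and n == 2)) else 0
    return nxt

# a 'glider' - a simple pattern that travels across the grid as the rule iterates
grid = [[0] * 8 for _ in range(8)]
for x, y in [(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)]:
    grid[y][x] = 1

for _ in range(4):
    grid = step(grid)
print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```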

These early simulations employed Cellular Automata (CA), and later work moved on to Agent-Based Models (ABM), which were able to create more heterogeneous collections of agents with more diverse rule sets. Both CA and ABM aimed to discover whether stable patterns of global agent behaviors would emerge through interactions carried out over multiple iterations at the local level. These experiments successfully demonstrated how order does emerge through simple agent rules, and simulations have become, by far, the most common way of engaging with the complexity sciences.

While these models can be quite dramatic, they are just one tool for exploring the field and should not be confused with the field itself. Models are very good at helping us understand certain aspects of complexity, but less effective in helping us operationalize complexity dynamics in real-world settings. Further, while CA and ABM demonstrate how emergent, complex features can arise from simple rules, the rule sets involved are established by the programmer and do not evolve within the program.

A further exploration of agents in CAS incorporates the ways in which bottom-up agents might independently evolve rules in response to feedback. Here agents test various Rules/Schemata over the course of multiple iterations. Through this trial and error process, involving Time/Iterations, they are able to assess their success through Feedback and retain useful patterns that increase Fitness. This is at the root of machine learning, with strategies such as genetic algorithms mimicking evolutionary trial and error in light of a given task.

Competing agents are more fit as they walk faster!

John Holland describes how agents, each independently exploring suitable schemata, actions, or rules, can be viewed as adopting generalized Darwinian processes - adaptive processes that carry out 'search' algorithms. In order for this search to proceed in a viable manner, agents need to possess what Ross Ashby dubbed Requisite Variety: sufficient heterogeneity to test multiple scenarios or rule enactment strategies. Without this variety, little can occur. It follows that we should always examine the capacity of agents to respond to their context, and determine whether that capacity is sufficient to deal with the flows and forces they are likely to encounter.

Further, we can speed up the discovery of 'fit' strategies if we have one of two things: more agents testing in parallel, or more sequential iterations of tests. Finally, we benefit if improvements achieved by one agent can propagate (be reproduced) within the broader population of agents.
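
As a rough sketch of these ideas - an invented toy task and parameters, not a reproduction of Holland's own genetic algorithms - the code below shows a population of agents whose schemata are evaluated by feedback, retained when fit, and propagated through recombination:

```python
import random

# A minimal genetic-algorithm sketch: each agent's 'schema' is a bit string,
# fitness is simply how many bits are set, and improvements found by one agent
# propagate through selection and reproduction. Task and numbers are invented.

BITS, POP, GENERATIONS = 20, 30, 40

def fitness(schema):
    return sum(schema)                      # feedback: how well this schema performs

# requisite variety: a heterogeneous starting population of schemata
population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]

for gen in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:POP // 2]             # fitter schemata are retained
    population = []
    while len(population) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, BITS)
        child = a[:cut] + b[cut:]           # propagation: useful building blocks recombine
        if random.random() < 0.1:           # occasional mutation keeps variety in play
            i = random.randrange(BITS)
            child[i] = 1 - child[i]
        population.append(child)

print("best fitness after search:", fitness(max(population, key=fitness)))
```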

 



In Depth... Bottom-up Agents

This is the feed, a series of related links and resources.

Stephen Hawking's The Meaning of Life (John Conway's Game of Life segment)

Watch the video to see a demonstration of simple rules generating complex, emergent patterns!

Play with the tadpoles!

What rules control the agents' behaviors?

This is a list of People that Bottom-up Agents is related to.

  • Josh Epstein and Rob Axtell - Cellular Automata | Sugarscape
  • John Von Neumann - Cellular Automata
  • John Holland - Building Blocks | Santa Fe (considered one of the seminal thinkers in Complex Adaptive Systems theory)
  • John Conway - Game of Life
  • Herbert Simon - Nested Scales | Building Blocks
  • Robert Axelrod - Game Theory
  • Ross Ashby - Cybernetics | Law of Requisite Variety
  • Stephen Wolfram - Cellular Automata
  • See all People
  • This is a list of Terms that Bottom-up Agents is related to.

    Rules: Complex systems are composed of agents governed by simple input/output rules that determine their behaviors. (See also: Schemata)

    Simple Rules - Complex Outcomes: One of the intriguing characteristics of complex systems is that highly sophisticated emergent phenomena can be generated by seemingly simple agents. How does one replicate the efficiencies of the Tokyo subway map? Simple - enlist slime mould and let it discover them! Results such as these are highly counterintuitive: when we see complicated phenomena, we expect the causal structure at work to be similarly complex. However, in complex systems this is not the case. Even if the agents in a complex system are very simple, the interactions generated amongst them can have the capacity to yield highly complex phenomena.

    Requisite Variety: In order for a complex system to adapt, it needs to contain agents that have the capacity to behave in different ways - to enact adaptation you need adaptable things. The breadth of adaptability is called 'requisite variety'.

    Degrees of Freedom: The notion of 'degrees of freedom' pertains to the potential range of behaviors available within a given system. Without some freedom for a system to change its state, no complex adaptation can occur.

    Bottom-up: Complex Systems are generated from the local interactions of multiple AGENTS. These agents are not actively striving to achieve any form of 'global' structure or pattern, but simply behave in their own self-interest. Hence, we describe CAS dynamics as being generated from 'the bottom up' as opposed to 'the top down', as would be the case in traditional hierarchical organizations.
  • See all Terms

    Navigating Complexity © 2021 Sharon Wohl, all rights reserved. Developed by Sean Wittmeyer