When we try to understand any given system within this hierarchy, the influence of subsystems is typically felt at adjacent scales. Thus, while a society can be understood as being composed of humans, composed of bodies, composed of organs, composed of cells, we do not tend to consider the role that cells play in affecting societies. Instead, we attune to understanding interactions between the relevant scales of whatever system we are examining. Depending on the level of enquiry that we choose, we may look at the same entity (for example a single human being) and consider it to be an emergent 'whole', or simply a component part (or agent) within a larger emergent entity (one body within a complex society).
Various definitions of complexity try to capture this shifting nature of agent versus whole, and how this alters depending on the scale of inquiry. Definitions thus point to complex adaptive systems as being hierarchical, or as operating at micro, meso, and macro levels. In his seminal article 'The Architecture of Complexity', Herbert Simon describes such systems as 'composed of interrelated sub-systems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem'.
Why is this the case? And why does it matter?
Simon argues that, by partitioning systems into nested hierarchies, wholes are more apt to remain robust. They maintain their integrity even if parts of the system are compromised. He provides the example of two watchmakers, each of whom builds watches made up of one thousand parts. One watchmaker treats the watch's components as independent pieces - each of which needs to be integrated into the whole in order for the watch to hold together as a stable entity. If one piece is disturbed in the course of the watchmaking, the whole disintegrates, and the watchmaking process needs to start anew. The second watchmaker organizes the watch parts into hierarchical sub-assemblies: ten individual parts make one unit, ten units make one component, and ten components make one watch. For the second watchmaker, each sub-assembly holds together as a stable, integrated entity, so if work is disrupted in the course of making an assembly, the disruption affects only that assembly (meaning a maximum of ten assembly steps are lost). The remainder of the assembled components remain intact.
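The advantage of the second watchmaker can be made concrete with a small simulation. The sketch below is illustrative and not taken from Simon's article: the 0.5% chance of disruption per assembly step is an assumed parameter, and the function name is invented for this example. Each disruption scraps only the assembly currently in progress, so the flat watchmaker loses up to a thousand steps of work while the hierarchical watchmaker loses at most ten.

```python
import random

def steps_to_finish(parts_per_assembly, n_assemblies_needed, p_interrupt, rng):
    """Count total assembly steps needed to complete the requested number of
    stable assemblies, when a disruption scraps only the assembly in progress."""
    total = 0
    done = 0
    while done < n_assemblies_needed:
        progress = 0
        while progress < parts_per_assembly:
            total += 1
            if rng.random() < p_interrupt:
                progress = 0  # current assembly falls apart; start it over
            else:
                progress += 1
        done += 1
    return total

rng = random.Random(42)
p_interrupt = 0.005  # assumed 0.5% chance of disruption per step (illustrative)

# Flat watchmaker: one 1000-part assembly with no stable intermediate stages.
flat = steps_to_finish(1000, 1, p_interrupt, rng)

# Hierarchical watchmaker: 100 ten-part units, then 10 ten-unit components,
# then 1 ten-component watch -- 111 stable sub-assemblies in total.
hier = (steps_to_finish(10, 100, p_interrupt, rng)
        + steps_to_finish(10, 10, p_interrupt, rng)
        + steps_to_finish(10, 1, p_interrupt, rng))

print(f"flat watchmaker: {flat} steps, hierarchical watchmaker: {hier} steps")
```

Even at this low disruption rate, the flat watchmaker typically spends tens of thousands of steps (most runs at the 1000-part assembly are interrupted before completion), while the hierarchical watchmaker needs only slightly more than the 1,110 steps that the parts strictly require.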
If Simon is correct, then natural systems may preserve robustness by creating sub-assemblies that each operate as wholes. Accordingly, it is worth considering how human systems might benefit from similar strategies.
Simon's watchmaker is a top-down operator who organizes his workflow into parts and wholes to keep the watch components partitioned and robust, creating a more efficient watch-making process. What is noteworthy is that self-organizing systems have inherent dynamics that appear to push them towards such partitioning, and that this partitioning exhibits specific structural regularities that can be described mathematically.
A host of complex systems exhibit what is known as Self-Similarity - meaning that we can 'zoom in' at any level of magnification and find repeated, nested scales. These scale-free hierarchies follow the mathematical regularities of power-law distributions. Such distributions are so common in complex systems that they are often referred to as 'the fingerprint of self-organization' (see Ricard Solé). We find power-law distributions in systems as diverse as the frequency and magnitude of earthquakes, the structure of academic citation networks, the prices of stocks, and the structure of the World Wide Web.
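What makes a power law 'scale-free' can be shown in a few lines. For a distribution of the form p(x) = C·x^(-alpha), rescaling x by any factor k multiplies p by the constant k^(-alpha), regardless of where on the x-axis we look - so the distribution looks the same at every level of magnification. The exponent below is an arbitrary illustrative choice, not a value from the text:

```python
alpha = 2.5  # illustrative exponent; real systems have their own values

def p(x, C=1.0):
    """Power-law density p(x) = C * x**(-alpha)."""
    return C * x ** (-alpha)

# Doubling x multiplies p by the same factor at every scale:
for x in (1.0, 10.0, 100.0):
    ratio = p(2 * x) / p(x)
    print(f"x = {x:>6}: p(2x)/p(x) = {ratio:.6f}")
# The ratio (2**-2.5, about 0.1768) is identical at x = 1, 10, and 100.
# An exponential distribution fails this test: its ratio depends on x.
```

This scale-invariance is exactly the self-similarity described above - 'zooming in' by a factor of two changes nothing about the shape of the distribution.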
Further, complex systems tend to 'tune' themselves to what is referred to as Self-Organized Criticality: a state at which the scale or scope of a system's response to an input follows a power-law distribution, regardless of the intensity (or scope) of the input. While the mechanism is not fully understood, systems are believed to organize themselves this way because this regime allows them to maximize performance while using the minimum amount of available energy. When systems are poised at this state they also have maximum connectivity with the minimum amount of redundancy. They are also believed to be at their most effective as information processors in this regime.
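The canonical toy model of self-organized criticality - not discussed in the text above, but standard in the literature - is the Bak-Tang-Wiesenfeld sandpile. Grains are dropped one at a time onto a grid; any site holding four or more grains topples, sending one grain to each neighbour, which can trigger further topplings. Without any external tuning, the pile drives itself to a critical state where the resulting 'avalanches' come in all sizes, with small ones vastly outnumbering large ones. The grid size, number of grains, and function name below are illustrative choices:

```python
import random

def sandpile_avalanche_sizes(n=20, grains=5000, seed=0):
    """Drive a small Bak-Tang-Wiesenfeld sandpile; return one avalanche size
    (the number of topplings) per dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)
        grid[i][j] += 1  # drop one grain on a random site
        size = 0
        unstable = [(i, j)]
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:
                continue  # already relaxed by an earlier toppling
            grid[x][y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= 4:
                        unstable.append((nx, ny))
                # grains pushed past the edge simply fall off the pile
        sizes.append(size)
    return sizes

sizes = sandpile_avalanche_sizes()
small = sum(1 for s in sizes if 1 <= s <= 10)
big = sum(1 for s in sizes if s > 10)
print(f"small avalanches (1-10 topplings): {small}, large (>10): {big}")
```

The point of the model is that criticality is not imposed from outside: the same simple local rule, applied repeatedly, tunes the pile to the state where avalanche sizes follow a heavy-tailed, power-law-like distribution.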
Why Nested and not Hierarchical?
The attentive surfer of this website content may notice that in the various definitions of complexity being circulated, the term 'hierarchical' is used to describe what we call here 'nested scales'. We have avoided using this term as it holds several connotations that appear unhelpful. First, a hierarchy generally assumes a kind of priority, with 'upper' levels being more significant than lower ones. Second, it implies control emanating from the top down. Neither of these connotations is appropriate when speaking about complex systems. Each level of nested order is both a part and a whole, and causality flows both ways: the emergent order is generated and steered by its constituent parts, even as it steers (or constrains) those parts once present. We hope that the idea of 'nested scales' is more neutral vis-à-vis notions of primacy and control, but still captures the idea of systems embedded within systems of different scales.
To learn more about these phenomena, see