[Diagram: Information]

Information

What drives complexity? The answer involves a kind of sorting of the differences the system must navigate. These differences can be understood as flows of energy or information.

In order to be responsive to a world that consists of different kinds of inputs, complex systems tune themselves to information states involving just enough variety to be interesting (responding to new inputs) and just enough homogeneity to remain organized (keeping stable). To understand how this occurs, we need to understand the flow of information in complex systems, and what "information" means.


Complex systems are ones that violate the second law of thermodynamics: that is to say, order manifests out of disorder. Another way to state this is that such systems are ones in which order (negentropy) increases over time. This runs counter to the second law, which states that, left to its own devices, a system's disorder (entropy) will increase. Thus we expect that, over time, buildings break down, and that a stream of cream poured into coffee will dissipate. We don't expect a building to rise from the dust, nor a creamy cup of coffee to partition itself into distinct layers of cream and coffee. Yet just such forms of unexpected order arise in complex systems.

PART I: Understanding Information

Shannonian Information

An important way of thinking about this increase in order draws on concepts from information theory. Information theory, as developed by Claude Shannon, evaluates systems according to the amount of information, or 'bits', required to describe them.

Shannon might ask, what is the amount of information required to know where a specific particle of cream is located in a cup of coffee? Further, in what kinds of situations would we require more or less information to specify a location?

Example:

In a fully mixed, creamy cup of coffee, any location is equally probable for any particle of cream. We therefore have maximum uncertainty about location: the situation has high entropy, high uncertainty, and requires a high information content to specify a location. By contrast, if the cream and coffee were separated (say, in two equal layers with the cream at the top), we would have a more limited range of locations where a cream particle might be found. Our degree of uncertainty about the cream's location has been cut in half, since we now know that any particle of cream must be located somewhere in the upper half of the cup - all locations in the bottom half can be safely ignored.
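To make this concrete, here is a minimal sketch of the coffee-cup example in Python. The cell count is an arbitrary assumption - we simply discretize the cup into 1,000 equally likely cells and count the bits needed to pin down a cream particle:

```python
import math

# A minimal sketch of the coffee-cup example, assuming we discretize the cup
# into 1,000 equally likely cells where a cream particle might sit (the cell
# count is arbitrary; only the halving matters).
CELLS = 1000

# Mixed cup: the particle is equally likely to be in any cell.
# Shannon entropy of a uniform distribution over N outcomes is log2(N) bits.
mixed_bits = math.log2(CELLS)

# Layered cup: the particle must be in the top half, so only N/2 cells remain.
layered_bits = math.log2(CELLS / 2)

print(f"mixed cup:   {mixed_bits:.2f} bits to specify a location")
print(f"layered cup: {layered_bits:.2f} bits to specify a location")
print(f"halving the possibilities removes {mixed_bits - layered_bits:.0f} bit")
```

Halving the space of possibilities always removes exactly one bit, no matter how finely we discretize the cup.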

Information vs Knowledge

Counterintuitively, the more Shannonian information required to describe a system, the less knowledge we have of it. Hence, as a system becomes more differentiated and ordered - or as emergent features arise - its level of Shannon information diminishes.

This is, in a way, unfortunate: our colloquial understanding of 'having a lot of information' pertains to knowing more about something. Thus, seeing a cup of coffee divided into cream and coffee layers, we perceive something with more structure, more logic, which we might assume conveys more information to us (namely, that coffee and cream are different things). A second, stirred cup appears more homogeneous - it has less structure or organization. And yet it requires more Shannon information to describe.

A difficulty thus lies in how we tend to use the words 'disorder' and 'information'. We associate disorder with a lack of knowledge (or a lack of information), and order with knowledge (and, therefore, with more information).

While intuitively appealing, this is unfortunately not how things work from the perspective of information and communication signals - which is what Shannon was concerned with when formulating his ideas. Shannon was trying to understand how many bits of information are required to convey the state of a system (or a signal).

Example:

Imagine I have an extremely messy dresser and I am looking for my favorite shirt. I open my dresser drawers and see a jumble of miscellaneous clothes: socks, shirts, shorts, underwear. I rifle through each drawer, examining each item to see if it is, indeed, the shirt I am seeking. To find the shirt I want (which could be anywhere in the dresser), I require maximum information, since the dresser is in a state of maximum disorder.
Thankfully, I spend the weekend sorting through my clothes. I divide the dresser by category, with separate drawers for socks, shirts, shorts, and underwear. Now, if I wish to find my shirt, my uncertainty about its location has been reduced to one quarter of what it was (assuming four drawers in the dresser). Discovering the shirt in the dresser's more ordered state requires less information: I can limit myself to looking in one drawer only.

Let us take the above example a little further:

Imagine that I love this particular shirt so much that I buy 100 copies of it, so many that they now fill my entire dresser. The following morning, upon waking, I don't even bother to turn on the lights. I reach into a drawer (any drawer will do), and pull out my favorite shirt!

My former, messy dresser had maximum disorder (high entropy) and required a maximum amount of Shannon information ('bits' needed to find a particular shirt). By contrast, the dresser of identical shirts has maximum order (negentropy) and requires a minimal amount of Shannon information (bits) to find the desired shirt.
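The difference in search effort can be sketched with a toy simulation. The specific numbers (100 garments, four drawers) are assumptions for illustration:

```python
import random

# A toy simulation of the three dresser states. We count how many items
# must be inspected before the desired shirt turns up.
ITEMS = 100

def messy_search():
    """Messy dresser: the shirt could be anywhere; inspect items at random."""
    positions = list(range(ITEMS))
    random.shuffle(positions)
    return positions.index(0) + 1  # item 0 stands in for the desired shirt

def sorted_search():
    """Sorted dresser: four drawers, so only the shirt drawer (25 items) matters."""
    drawer = list(range(ITEMS // 4))
    random.shuffle(drawer)
    return drawer.index(0) + 1

def identical_search():
    """Dresser of identical shirts: the first item pulled is always right."""
    return 1

TRIALS = 10_000
print("messy:    ", sum(messy_search() for _ in range(TRIALS)) / TRIALS)   # ~50.5
print("sorted:   ", sum(sorted_search() for _ in range(TRIALS)) / TRIALS)  # ~13
print("identical:", identical_search())                                    # 1
```

Averaged over many trials, the messy dresser demands about 50 inspections, the sorted one about 13, and the homogeneous one exactly 1 - mirroring the maximum-, intermediate-, and minimum-information states.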

Interesting information:  States that matter!

It should be noted that the two extreme states illustrated above are both rather uninteresting. A fully random dresser (maximum information) is fairly meaningless, but so is a dresser filled with identical shirts (minimum information). While the two are described by contrasting states of Shannonian information, neither maximum- nor minimum-information systems appear very interesting.

One might also imagine that neither the random nor the homogeneous system is all that functional. A dresser filled with identical shirts does not do a very good job of meeting my diverse requirements for dressing (clothing for different occasions or different body parts), while my random dresser, though it meets these needs, doesn't function well because it takes me forever to sort through.

Similarly, systems with too much order cannot respond to a world filled with different kinds of situations. Furthermore, they are more vulnerable to disruption. If a forest contains only a single tree species, one destructive insect infestation might have the capacity to wipe out the entire system. If I own 100 identical shirts and that shirt goes out of style, I suddenly have nothing to wear.

Meanwhile, if everything is distributed at random, then functional differences can't arise: a mature forest ecosystem has collections of species that work together, processing environmental inputs in ways that siphon resources effectively - certain species are needed more than others. In my dresser, I need to find the right balance between shirts, socks, and shorts: some things are worn more than others, and I will run into shortages of some, and excesses of others, if I am not careful.

PART II:  Information Sorting in Complex Systems

Between Order and Disorder

It appears that, in order to be responsive to a world that consists of different kinds of inputs, complex systems tune themselves to information states involving just enough variety (lots of different kinds of clothes; lots of different tree species) and just enough homogeneity (clusters of appropriately scaled groups of clothing or species). Such systems violate the second law of thermodynamics (gaining order), but without gaining so much order as to become homogeneous.

Decrease in Shannonian information = decrease in uncertainty

Imagine we have a system looking to optimize a particular behavior - say, an ant colony seeking food. We place an assortment of various-sized breadcrumbs on a kitchen table and leave the kitchen window open overnight. Ants march in through the window, along the floor, and up the leg of the table.

Which way should they go?

From the ants' perspective, there is maximum uncertainty about the situation - or maximum Shannonian information. The ants spread out in all directions, seeking food at random. Suddenly, one ant finds food and secretes pheromones as it carries the food away. The terrain of the table is no longer totally random: there is a signal - food here! Nearby ants pick up the pheromone signal and, rather than moving at random, adjust their trajectories. Each ant's level of uncertainty about the situation has been reduced or, put another way, the pheromone trail represents a compression of informational uncertainty - going from 'maximum information required' (search every space) to 'reduced information required' (search only spaces near the pheromone trace).

If all ants had to independently search every square inch of the tabletop to find food, each would require maximum information about all table states. If, instead, they can be steered by signals deployed by other ants, they can limit their search to only some table states. By virtue of the collective, the table has become more 'organized', in that it requires less information to navigate towards food. There is a reduction of uncertainty - a reduction of 'information bits' - associated with finding the location of 'food bits', which are accordingly more easily discovered.
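As a rough sketch, we can put numbers on this compression. Both figures below are hypothetical: assume the tabletop divides into 1,000 patches and the pheromone trail narrows the search to the 20 patches nearest the trail:

```python
import math

# A back-of-envelope sketch of the compression. Both numbers are hypothetical:
# the tabletop divides into 1,000 patches, and the pheromone trail narrows the
# search to the 20 patches nearest the trail.
PATCHES, NEAR_TRAIL = 1000, 20

no_signal = math.log2(PATCHES)       # bits of uncertainty with no signal
with_signal = math.log2(NEAR_TRAIL)  # bits of uncertainty once the trail is laid

print(f"no signal:   {no_signal:.1f} bits")
print(f"with signal: {with_signal:.1f} bits")
print(f"the trail 'compresses away' {no_signal - with_signal:.1f} bits")
```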

Sorting a system so there is less to sort:

Suppose we are playing twenty questions. I am thinking of the concept 'gold', and you are required to go through all possible persons, places, and things in order to eventually identify 'gold' as the correct entity. Out of a million possible entities that I might be thinking of, how long would it take to find the right one in a sequential manner? Clearly, this would involve a huge length of time. The system has maximum uncertainty (1,000,000 equally likely possibilities - about 20 bits), and each sequential random guess eliminates only a single possibility (999,999 still in play after the first guess!). While I might 'strike gold' at any point, the odds are low.

From an information perspective, we can greatly reduce the time it takes to guess the correct answer if we structure our questions so as to minimize our uncertainty at every step. Thus, if there are 1,000,000 possible answers in the game of twenty questions, I am looking for questions that will reduce these possibilities to the greatest extent at each step. If, with every question, I can cut the remaining possibilities in half (a binary search), then within 20 questions I can generally arrive at the solution. In fact, the game, when played by a computer, can reportedly identify a given entity within an average of six guesses! With each question, the degree of uncertainty regarding the correct answer (the Shannonian information) is reduced.
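A minimal sketch shows why twenty halving questions always suffice for a million possibilities:

```python
# Each well-chosen yes/no answer cuts the candidate pool in half.
candidates = 1_000_000
questions = 0
while candidates > 1:
    candidates = (candidates + 1) // 2  # one yes/no question, worst case
    questions += 1

print(questions)  # 20, i.e. ceil(log2(1,000,000))
```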

Reduce information | Reduce effort

Another way to think about this is that, as Shannon information is reduced, the system can channel its resources more effectively - that is, focus on work (or questions) that moves it towards success, while expending less wasted effort.

This may be the reason why, in complex systems, we often observe the phenomena of growth and preferential attachment. A minimal mechanism is sketched below, followed by a prose illustration.
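The sketch assumes a Barabási-Albert-style process (the network size and random seed are arbitrary): each newcomer links to an existing node with probability proportional to that node's current degree, so early advantages compound into hubs:

```python
import random

# A minimal sketch of growth with preferential attachment.
random.seed(0)
degrees = [1, 1]  # start with two linked nodes
targets = [0, 1]  # one entry per link end-point, giving degree-weighted sampling

for new_node in range(2, 2000):
    chosen = random.choice(targets)  # picks a node proportionally to its degree
    degrees.append(1)                # the newcomer arrives with one link
    degrees[chosen] += 1
    targets.extend([chosen, new_node])

print("largest hubs:  ", sorted(degrees, reverse=True)[:5])
print("median degree: ", sorted(degrees)[len(degrees) // 2])  # typically 1
```

A handful of early nodes accumulate most of the links, while the typical node keeps a single link - the rich-get-richer signature of preferential attachment.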

To illustrate in prose: imagine that I wish to move to a new city to find a job. I can choose one of ten cities but, other than their names, I know nothing about them, including their populations. I relocate at random and find myself in a city of 50 people, with no job postings. My next random choice might bring me to a bigger center but, without any information, I need to keep relocating until I land in a place where I can find work.

If, instead, the one piece of information I do have is each city's population, I can make a judgement: if I start my job hunt in the larger centers, there is a better chance that jobs matching my skills will be on offer. I use population size as a way to filter certain cities out of my search - perhaps with a 'rule' stating that I won't consider relocating to cities with fewer than 1 million inhabitants. This rule might cross six cities off my search list, and this 'crossing out' is equivalent to reducing the information bits required to find a job: I can decide that my efforts are better spent focusing my job search on only four cities instead of ten.
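A sketch of this filtering rule, with hypothetical city names and populations:

```python
import math

# City names and populations are hypothetical; the point is that one
# attribute lets us discard six of the ten candidates.
cities = {
    "A": 50, "B": 3_000, "C": 40_000, "D": 120_000, "E": 600_000,
    "F": 900_000, "G": 1_200_000, "H": 2_500_000, "I": 4_000_000, "J": 8_000_000,
}

# The 'rule': ignore cities with fewer than 1 million inhabitants.
candidates = {name: pop for name, pop in cities.items() if pop >= 1_000_000}

print(sorted(candidates))  # ['G', 'H', 'I', 'J']
print(f"uncertainty before: {math.log2(len(cities)):.2f} bits")      # 3.32
print(f"uncertainty after:  {math.log2(len(candidates)):.2f} bits")  # 2.00
```

A single attribute cuts the uncertainty from about 3.3 bits (ten candidates) to 2 bits (four candidates) before a single job listing has been read.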

By now it should be clear that this is equivalent to looking for a given cream particle in only half the coffee cup, or ants looking for food on only some parts of the table, or my search in twenty questions being limited to items in the 'mineral' category.

All of these processes involve a kind of information sorting that gives rise to order, which in turn makes things go more smoothly: from random cities to differentiated cities; from random words to differentiated categories of words.

What complex systems are able to do is take a context that is initially undifferentiated and sort it, such that the agents in the system can navigate it more efficiently. This always involves a violation of the second law of thermodynamics, since the amount of Shannonian information (the entropy, or disorder, of the system) is reduced. That said, this can only occur if there is some inherent imbalance in the system - 'something to sort' in the first place. If a context is truly homogeneous (think back to our dresser of identical shirts), then no amount of rearranging can make it easier to navigate. Note that an undifferentiated system is different from a homogeneous one: a random string of letters is undifferentiated; a string composed solely of the letter 'A' is homogeneous.
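The distinction can be made concrete by measuring the per-character Shannon entropy of the two strings (a quick sketch; the 10,000-character length is arbitrary):

```python
import math
import random
import string
from collections import Counter

# Per-character Shannon entropy is near-maximal for the random,
# undifferentiated string and exactly zero for the homogeneous one.

def entropy_bits(s: str) -> float:
    """Empirical Shannon entropy of a string, in bits per character."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

random.seed(0)
undifferentiated = "".join(random.choices(string.ascii_uppercase, k=10_000))
homogeneous = "A" * 10_000

print(f"random string:  {entropy_bits(undifferentiated):.2f} bits/char")  # ~4.70
print(f"all-'A' string: {entropy_bits(homogeneous):.2f} bits/char")       # 0.00
```

The random string leaves plenty of differences to sort; the all-'A' string leaves nothing to work with.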

Accordingly, complex systems need to operate in a context where some kind of differential is present. The system then has something to work with, in terms of sorting through the kinds of differences that might be relevant.

One thing to be very aware of in the above examples is how difficult it is to disambiguate information from knowledge. As we gain knowledge about probable system states, Shannonian information is reduced. This is a frustrating aspect of the term 'information', and it can lead to a lot of confusion.

[Embedded example: a Christmas story illustrating how binary search can quickly identify an entity.]
Cite this page:

Wohl, S. (2021, July 8). Information. Retrieved from https://kapalicarsi.wittmeyer.io/definition/information-theory

Information was updated July 8th, 2021.
