<center>
<h1> Living in a Plural World </h1>
</center>
<center>
<i>Until lately the best thing that I was able to think in favor of civilization…was that it made possible the artist, the poet, the philosopher, and the man of science. But I think that is not the greatest thing. Now I believe that the greatest thing is a matter that comes directly home to us all. When it is said that we are too much occupied with the means of living to live, I answer that the chief worth of civilization is just that it makes the means of living more complex; that it calls for great and combined intellectual efforts, instead of simple, uncoordinated ones, in order that the crowd may be fed and clothed and housed and moved from place to place. Because more complex and intense intellectual efforts mean a fuller and richer life. They mean more life. Life is an end in itself, and the only question as to whether it is worth living is whether you have enough of it.</i>
<br><br>
Oliver Wendell Holmes, 1900[^1]
<br><br>
<i>(A)re…atoms independent elements of reality? No…as quantum theory shows: they are defined by their…interactions with the rest of the world…(Q)uantum physics may just be the realization that this ubiquitous relational structure of reality continues all the way down…Reality is not a collection of things, it’s a network of processes.</i>
<br><br>
Carlo Rovelli, 2022[^2]
<br>
</center>
Technology follows science. Thus, if we are to offer a different vision of the future of technology from AT and ES, we need to understand what is at the root of their understanding of science, what this might miss, and how correcting this can open new horizons. To do so, we now examine the philosophy of science behind these approaches and show how, in both the natural and social sciences, the advances of the last century arose from moving beyond the limits of these perspectives to a plural, networked, relational, multiscale understanding of the reality we live in.
### Atoms and the universe
The simplest and most naïve way to think about science is what might be called “objectivist”, “rationalist” or, as we will dub it, “monist atomism”[^3]. The physical world has an objective state and obeys an ultimately quite simple set of laws, waiting to be discovered. These can be stated in mathematical terms and dictate the deterministic evolution of one state into another through the collision of atoms. Because these laws and the mathematical truths they obey are unitary and universal, everything that ever will happen can be predicted from the current state of the world. These laws are often expressed in “goal-seeking” or “teleological” terms: particles “take the path of least action”, chemical compounds “minimize free energy”, evolution maximizes fitness, economic agents “maximize utility”. Every phenomenon in the world, from human societies to the motion of the stars, can ultimately be reduced to these laws. All one needs, in this frame, is sufficient computational power/intelligence, sufficiently precise observations, and the courage to strip away one’s superstitions/social constructs/biases, and one will become, essentially, a god: omniscient and possibly omnipotent.
The pattern of such thinking runs through almost every scientific field at some point in its development. Euclidean geometry, which aspires to deduce nearly all mathematical facts from a small set of axioms and concepts, and Newtonian mechanics, which describes the relationship between the motion of an object and the forces acting on it, are perhaps the most famous examples. In biology, the simple version of Darwinism focuses on the survival of the fittest species, with individual animals (or in later versions “selfish genes”) constantly struggling against each other to survive[^4]. In (primitive) neuroscience (especially phrenology), atoms are regions of the brain, each undertaking an atomic function that together add up to thought. In psychology, behaviorism saw thought as reducible to stimuli and response. In economics, the atoms are the self-interested individuals (or sometimes firms) of economic theory, each seeking their own advantage in the market. In computer science, the Church-Turing Thesis sees all possible operations as reducible to a series of operations on an idealized computer called a “Turing Machine”.
Whatever their limits, these approaches have achieved great successes that cannot be ignored. Newtonian mechanics explained a range of phenomena and helped inspire the technologies of the industrial revolution. Darwinism is the foundation of modern biology. Economics has been the most influential of the social sciences on public policy. And the Church-Turing vision of “general computation” helped inspire the idea of general-purpose computers that are so broadly used today.
They are also the foundation of the Abundance Technocracy (AT) and Entrepreneurial Sovereignty (ES) worldviews we discussed in the last chapter, though each emphasizes a different aspect. AT focuses on the unity of reason and science inherent in monism and seeks to similarly rationalize social life, harnessing technology. ES focuses on the fragmentation intrinsic to atomism and seeks to model “natural laws” for the interaction of these atoms (like natural selection and market processes). In this sense, while ES and AT seem opposite, they are opposites within an aligned scientific worldview.
For all that shared worldview has inspired, the science of the 20th century showed its limitations. Relativity and, even more so, quantum mechanics upended the Newtonian universe. Gödel’s Theorem and a variety of subsequent results undermined the unity and completeness of mathematics, and a range of non-Euclidean geometries are now critical to science. Symbiosis, ecology, and the extended evolutionary synthesis undermined “survival of the fittest” as the central biological paradigm. Neuroscience has been reimagined around networks and emergent capabilities, which in turn have become conceptually central to modern computation. Critical to all these developments are ideas such as “complexity”, “emergence”, “networks”, and “collective intelligence” that challenge the elegance of monist atomism.
### Complexity and emergence
The central idea of complexity science is that reduction of many natural phenomena to their atomic components (what we can call “reductionism”), even when conceptually possible, is often counterproductive. At the same time, studying complex systems as a single unit is often uninformative or impossible. Instead, structures (e.g. molecules, organisms, ecosystems, weather systems, societies) emerge from “atoms” at a range of (intersecting) scales that can be understood most usefully at least in part according to their own principles and laws rather than those governing their underlying components. Some of the common core arguments for “complexity”, or what we will call “pluralism”, in all the domains it is applied include:
- Computational complexity: Even when reductionism is feasible in principle/theory, the computation required to predict higher-level phenomena based on their components is so large that performing it is unlikely to be practically relevant. In fact, in some cases, it can be proven that the required computation would consume far more resources than could possibly be recovered through the understanding gained by such a reduction. This often makes the theoretical possibility of such reduction irrelevant and creates a strong practical barrier to reduction.
- Sensitivity, chaos, and irreducible uncertainty: To make matters worse, many even relatively simple systems have been shown to exhibit “chaotic” behavior. A system is chaotic if a tiny change in the initial conditions translates into radical shifts in its eventual behavior after an extended time has elapsed. The most famous example is weather systems, where it is often said that a butterfly flapping its wings can make the difference in causing a typhoon half-way across the world weeks later. In the presence of such chaotic effects, attempts at prediction via reduction require extreme and thus unrealistic degrees of precision. Worse still, there are often hard limits to how much precision is feasible, as precise instruments often interfere with the systems they measure in ways that can lead to important changes due to the sensitivity mentioned previously. The most absolute version of this is the Heisenberg Uncertainty Principle, which puts physical upper limits on measurement precision based on this logic. (A minimal numerical sketch of such sensitivity follows this list.)
- Multiscale organization: While some might take the above observations as a counsel of scientific despair, an alternative is to view them as a reason to expect a diversity of analytic/scientific approaches to be fruitful under different conditions, at different scales of analysis, and in ways that will intersect with each other. In this view, it is natural to seek to characterize these different approaches, their “scope conditions” (viz. when they are likely to be most useful), and how they can interact with each other, and to consider this sort of approach as a core part of the scientific endeavor.
- Relationality: Multiscale organization implies many imperfectly commensurable ways of knowing. But if these could each be sliced into distinct scientific spheres, could monist atomism still prevail within each field? A critical element of complexity, however, is that phenomena at different scales often determine the interactions between and even constitute the nature of items at other scales. Units at smaller scales, for example, may have their identities and the rules they obey constituted by the larger units they in turn make up. While approximations ignoring these interactions may be useful for some phenomena, it is frequently important to trace down these dependencies in other contexts and ensure one accounts for them.
- Embedded causality: As a result of the preceding points, causation can rarely be understood completely or exhaustively in a reductive manner, where the explanation of higher-level phenomena is reduced to simpler or more atomic components. Instead, while specific causal arrows may follow such a pattern, others in the same system will take an opposite form, where the behavior of “atoms” is explained by the way they are situated in larger systems. Causal analysis will thus have quasi-“circular” elements that form equilibria and independent causation will usually emerge from forces within these equilibria, rather than by predictable reduction to a constant set of atomic “unmoved movers”.
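The sensitivity described above can be seen even in one-line dynamical systems. The following minimal Python sketch (our illustration, using the logistic map rather than any real weather model) shows how two starting points differing only in the seventh decimal place end up on entirely different trajectories within a few dozen steps.

```python
# Minimal sketch (illustrative, not from the text): sensitive dependence on
# initial conditions in the logistic map x_{t+1} = r * x_t * (1 - x_t),
# a standard textbook example of chaos in a very simple deterministic system.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    """Iterate the logistic map from the initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2000000)
b = logistic_trajectory(0.2000001)  # differs only in the seventh decimal place

print(abs(a[1] - b[1]))    # after one step: still a tiny difference
print(abs(a[50] - b[50]))  # after fifty steps: typically an order-one difference
```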
Together these elements constitute a basic reimagining of the scientific project compared to monist atomism. In monist atomism, the search for scientific truth and explanation resembles digging from different starting points on a planet’s surface towards its core: people may start from many different points, but as they strip away falsehood, superstition, error, and misunderstanding, they will all find the same underlying core of truth, reducing everything they see to the same fundamental elements.
In the plural view, almost the exact opposite metaphor applies: the scientific pursuit resembles the building of structures outward from the surface of a planet. While these structures might initially crowd and compete, if they grow far enough out, the space they have to fill expands into the infinite void beyond. As these structures branch out, they diversify and fragment, making the possibilities for them to interact and recombine ever richer and yet the potential of their converging to a single outcome ever more remote. Furthermore, each of these recombinations can, roughly as in sexual reproduction, form new structures that themselves extend further off on their own trajectories. Progress is complexity, diversification, and intersectional recombination.
While this plural vision doesn’t offer the hope of final or absolute truth that monist atomism does, it offers something perhaps as hopeful: an infinite vista of potential progress, expanding rather than contracting as it moves on. As the scientific revolutions of the 20th century so dramatically illustrated, shifting to such a plural perspective spells not the end of scientific progress, but rather an explosion of its possibilities.
### The plurality of scientific revolutions
The twentieth century, and in particular the Golden Age highlighted in the previous chapter, was the most rapid period of scientific and technological advance in human history. These advances happened in a range of disparate fields, but one common thread runs through most: the transcendence of monist atomism and the embrace of the plural. We will illustrate this with examples from mathematics and physics to biology and neuroscience.
**Mathematics**
Perhaps the most surprising reach of pluralism has been into the structure of truth and thought itself. The gauntlet for twentieth century mathematics was thrown down by David Hilbert, who saw a complete and unified mathematical structure within grasp around the same time that Lord Kelvin proclaimed the closing of the frontier in physics. Yet while the century began with Bertrand Russell and Alfred North Whitehead’s famous attempt to place all of mathematics on the grounds of a single axiomatic system, developments from that starting point have been quite the opposite. Rather than reaching a single truth from which all else followed, mathematics shattered into a thousand luminous fragments.
Geometry and topology, once the province of Euclidean certainties, turned out to admit endless variations, just as the certainties of a flat earth vanished with circumnavigation. Axiomatic systems went from being the hope for a complete mathematics to being proven, by Kurt Gödel, Paul Cohen, and others, to be inherently unable to resolve some mathematical problems and necessarily incomplete. Alonzo Church showed that other mathematical questions were undecidable by any computational process. Even the pure operations of logic and mathematics, it thus turned out, were nearly as plural as the fields of science we discussed above. To illustrate:
<img src="https://raw.githubusercontent.com/pluralitybook/plurality/main/figs/science.jpg" width="100%" alt="Science">
**Figure 1: The Mandelbrot Set (characterizing the chaotic behavior of simple quadratic functions depending on parameter values in the function) shown at two scales. Source: Wikipedia (left) and Stack Overflow (right).**
- Church proved that some mathematical problems were “undecidable” by computational processes and subsequent work in complexity theory has shown that even when mathematical problems might be in principle decidable, the computational complexity of arriving at such an answer is often immense. This dashed the dream of reducing all of mathematics to computations on basic axioms.
- Chaos has proven inherent even to many very simple mathematical problems. Perhaps the most famous example involves the behavior of complex numbers under iterated application of quadratic polynomials. The behavior of such iterations turns out to form such intricate and rich patterns that characterizing them has become the source of “fractal art”, as shown in Figure 1. These structures illustrate that even solutions to apparently “obvious” mathematical questions may depend on infinitely intricate details that dazzle even our senses with their richness. (A minimal sketch of this iteration follows this list.)
- While mathematics is not primarily concerned with phenomena organized by physical scale, the developments above meant that, rather than collapsing into a single field, twentieth century mathematics blossomed into an incredible diversity of subfields and sub-subfields. Geometry alone has a dozen major subfields, from topology to projective geometry, studying radically different and only loosely intersecting elements of what was once a single, highly axiomatic, and largely closed set of phenomena.
- Relationality is a fundamental aspect of mathematics, as it concerns the study of the relationships between objects and the structures that emerge from those relationships. In mathematics, different branches are often interconnected, and insights from one area can be applied to another. For instance, algebraic structures are ubiquitous in many branches of mathematics, and they provide a language for expressing and exploring relationships between mathematical objects. Moreover, the study of topology is based on understanding the relationships between shapes and their properties. This mix of diversity and interconnectedness is perhaps the defining feature of modern mathematics.
- Again, while “causation” is not quite the right way to understand pure mathematics, one of the most remarkable features of this modern field is its opposition to the reductionist approach, where seemingly simple questions are reduced to axioms and everything filters down through these. Perhaps the most famous example is Fermat’s Last Theorem, the claim by the 17th century mathematician Pierre de Fermat to have proven that a simple equation admits no whole number solutions. The eventual proof in the 1990s by Andrew Wiles, building off centuries of intervening mathematics, involved a range of techniques (especially related to so-called “elliptic curves”) developed for other purposes and far more apparently advanced than the statement itself. The same is believed to be true of many other unsolved mathematical problems, such as the Riemann Hypothesis.
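The fractal behavior referenced in the second bullet above comes from an iteration simple enough to fit in a few lines. Here is a minimal Python sketch (our illustration of the standard escape-time computation behind images like Figure 1, not code from the text): a parameter c belongs to the Mandelbrot set when repeatedly applying z → z² + c to z = 0 stays bounded, and nearby parameters can behave entirely differently.

```python
# Minimal sketch (illustrative): the escape-time computation behind
# Mandelbrot-set images such as Figure 1. The parameter c is in the set
# if iterating z -> z*z + c from z = 0 never escapes (|z| stays bounded).

def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterations before |z| exceeds 2; returns max_iter if it never does."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Nearby parameter values behave completely differently:
print(escape_time(-0.75 + 0.0j))  # stays bounded: returns 100 (in the set)
print(escape_time(-0.75 + 0.1j))  # escapes after a few dozen iterations
```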
Many of these advances in pure mathematics have remained puzzles of curiosity and toys of the mind. Yet many of these apparently abstruse ideas have helped transform modern technology. The same elliptic curves that were central to Wiles’s proof are the foundation of one of the leading approaches to public key cryptography, given the intractability of certain computational problems involving them. Other advanced mathematics has proven core to the design of computer circuitry, medical image analysis, civil and aeronautical engineering, and more. Each of these applications depends on wildly different and only occasionally overlapping areas of mathematics, rather than on the monolithic and integrated theory that Hilbert, Russell, and Whitehead once dreamed of.
In short, in sharp contrast to the monist atomist vision, the world-defining science of the twentieth century, and the technology built on it, arose from diversity: fields of knowledge proliferated and speciated, and each field internally, like a fractal, mirrored the same richness. The closer we looked into each area, the greater the intricacy that revealed itself. Surprising connections and relationships have emerged, but they have only added to the complexity, rather than implying “unity”.
Structures at every level of intersecting scale and described from the perspective of every way of knowing have proven important to progress: nuclear bombs reshape human societies, setting off environmental changes that reshape weather, twisting human psychology and feeding into the designs of computational systems that help cure disease and so on.
**Physics**
Pluralism is perhaps least surprising in biological systems; we can see the complexity of these all around us in everyday life. More surprising, perhaps, is the way in which 20th century physics revealed that these principles go “all the way down”, to the heart of the physical sciences that Newtonian monist atomism pioneered.
At the end of the 19th century, Lord Kelvin infamously proclaimed that “There is nothing new to discover in physics now.” The next century proved, on the contrary, to be the most fertile and revolutionary in the history of the field. Relativity (special and especially general), quantum mechanics, and to a lesser extent thermodynamics/information theory and string theory upended the Newtonian universe, showing that the simple linear-time, Euclidean-space objective reality of colliding billiard balls was at best an approximation valid in familiar conditions. The (post-)modern physics that emerged from these revolutions beautifully illustrates pluralism in science, showing how pluralism is, as suggested by the epigraph from prominent physicist Carlo Rovelli, baked into the very fabric of reality.
- Computational complexity is the core reason for the field of thermodynamics and its many offshoots. In fact, the field of information theory so core to computer science is built almost entirely on top of concepts derived from thermodynamics. The impossibility of simulating the action of billions of sub-units (e.g., molecules in a gas or compound, electrons in a wire, etc.) implies the need for thermodynamic techniques describing the average behavior of these sub-units.
- The ideas of sensitivity, chaos, and irreducible uncertainty originated, or at least achieved their first intellectual prominence, in physics. The simplest example of a chaotic system is three comparably sized bodies acting under gravitational forces. The behavior of smoke, of ocean currents, of weather, and many more all exhibit chaos and sensitivity. And, as noted above, the most canonical and best-established example of irreducible uncertainty is “Heisenberg’s Uncertainty Principle”, under which the quantum nature of reality puts a firm upper limit on the joint precision with which the velocity and position of a particle can be measured.
- For both these reasons, modern physics is organized around the study of a wide range of different scales, illustrated by the famous “scales of the universe” walk at New York’s Hayden Planetarium that takes visitors from quarks through atoms, molecules, chemical compounds, objects, planets, stars, star systems, galaxies, and beyond. While all systems in theory obey the same set of underlying physical laws, the physics at each scale is radically different, as different forces and phenomena are dominant; in fact, physics at the smallest scales (quantum) has yet to be reconciled with that at the largest (general relativity).
- Perhaps the most striking and consistent feature of the revolutions in twentieth century physics was the way they upset assumptions about a fixed and objective external world. Relativity showed how time, space, acceleration, and even gravity were functions of the relationship among objects, rather than absolute features of an underlying reality. Quantum physics went even further, showing that even these relative relationships are not fixed until observed and thus are fundamentally interactions rather than objects, as highlighted by Rovelli above. His interpretations of more recent developments pull ideas of time and space further apart.
- Given the diversity of levels of reality, causation in physics is profoundly embedded, shifting and cycling across scales at dizzying speeds. Atomic interactions, carefully constructed by sentient beings harnessing nano-scale computing, can trigger explosions that destabilize a planet. Collisions between stars can lead to the collapse of a microscopic black hole that becomes the center of a galaxy.
The applications of this rich and plural understanding of physical reality are at the very core of the tragedies of the twentieth century. Great powers harnessed the power of the atom to shape world affairs. Global corporations powered unprecedented communications and intelligence by harnessing their understanding of quantum physics to pack ever-tinier electronics into the palms of their customers’ hands. The burning of wood and coal by millions of families has become the cause of ecological devastation, political conflict, and world-spanning social movements based on information derived from microscopic sensors scattered around the world.
**Biology**
If the defining idea of 19th century macrobiology (concerning advanced organisms and their interactions) was “natural selection”, the defining idea of the 20th century analog was “ecosystems”. Where natural selection emphasized the “Darwinian” competition for survival in the face of scarce resources, the ecosystem view (closely related to the idea of “extended evolutionary synthesis”) emphasizes:
- The persistent inability to build effective models of animal behavior on reductive concepts such as behaviorism, neuroscience, and so forth, illustrating computational complexity.
- The ways in which systems of many diverse organisms (“ecosystems”) can exhibit features similar to multicellular life (homeostasis, fragility to destruction or over-propagation of internal components, etc.), illustrating sensitivity and chaos.
- The emergence of higher-level organisms through the cooperation of simpler ones (e.g., multicellular life as cooperation among single-celled organisms or “eusocial” organisms like ants from individual insects) and the potential for mutation and selection to occur at all these levels, illustrating multi-scale organization.
- The diversity of interactions between different species, including traditional competition or predator and prey relationships, but also a range of “mutualism”, where organisms depend on services provided by other organisms and help sustain them in turn, exemplifying entanglement, and relationality.
- The recognition that genetics codes only a portion of these behaviors and that “epigenetics” and other environmental features play important roles in evolution and adaptation, illustrating embedded causality.
This shift wasn’t simply a matter of scientific theory. It led to some of the most important shifts in human behavior and interaction with nature of the 20th century. In particular, the environmental movement and the efforts it created to protect ecosystems, biodiversity, the ozone layer, and the climate all emerged from and have relied heavily on this science of “ecology”, to the point where this movement is often given that label.
While this point is easiest to illustrate with macrobiology, as it is more familiar to the public, the same lesson applies perhaps even more dramatically to microbiology (the study of the inner workings of life in complex organisms). That field has moved from a focus on individual organs and the mechanical study of genetic expression to a “systems” approach, integrating action on a range of scales and according to many different systems of natural laws. This may be best illustrated by focusing on perhaps the most complex and mysterious biological system of all, the human brain.
**Neuroscience**
Modern neuroscience emerged from two critical discoveries about the functioning of brains. First, in the late 19th century, Camillo Golgi, Santiago Ramón y Cajal, and collaborators isolated neurons and their electrical activations as the fundamental functional unit of the brain. This analysis was refined into clear physical models by the work of Hodgkin and Huxley, who built, and tested on animals, their electrical theories of nervous communication. Second, and more diffusely, a rich and nuanced picture emerged over the course of the twentieth century complicating the traditional view, often derided as “phrenology”, that each brain function was physically localized to one region of the brain. Instead, while researchers like Paul Broca found important evidence of physical localization of some functions by studying brain lesion patients, a variety of other evidence including mathematical modeling, brain imaging, and single-neuron activation experiments suggested that many if not most brain functions are distributed across regions of the brain, emerging from patterns of interaction rather than primarily from physical localization.
The understanding that emerged from these findings was that of a “network” of “neurons”, each obeying relatively simple rules for activation based on inputs, and updating the underlying connections based on co-occurrence. Again, the themes of pluralism emerge elegantly:
- Of all fields, neuroscience showed most sharply the bounds imposed by computational complexity. As early as the late 1950s, researchers beginning with Frank Rosenblatt built the first “artificial neural network” models of the brain and hoped to simulate a full human brain within a few years, only to discover that the task was computationally many decades off, if ever attainable, forcing a great diversification of ways (both model-based and experiment-based) of studying the brain.
- Sensitivity and chaos appear throughout the brain as well: synaptic plasticity makes neural circuits exquisitely sensitive to small changes in their inputs, while the firing patterns of individual neurons can be highly irregular and chaotic, properties the brain appears to harness for learning, motor control, and other adaptive behavior[^5].
- The wide-ranging investigation of different forms of partial physical localization and interaction centers around multiscale organization, where some phenomena are localized to very small structures (a few physically proximate neurons), while others are distributed over large brain regions, but not the entirety of the brain and others still are physically distributed but appear to be localized, at different scales, to various consistent networks of brain activity.
- The Hebbian model of connections, in which connections are strengthened by the repeated co-firing of the neurons they link, is perhaps one of the most elegant illustrations of the idea of “relationality” in science, closely paralleling the way we typically imagine human relationships developing. (A minimal numerical sketch of this rule follows this list.)
- Neuroscience also elegantly illustrates embedded causality. Brain structure is famously plastic to learning, and what is learned depends heavily on the social contexts that humans inhabit and construct, as well as on the nutrients human economic and social systems provide to brains. Thus, the higher-level phenomena (societies, relationships, economies, educational systems), which one might hope to help explain with features of human neuropsychology, are some of the central factors that shape the nature and function of those brains. Causation thus traces a classic circular pattern across levels.
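The Hebbian rule referenced above can be written as a one-line update. The sketch below (our illustration, with arbitrary example numbers) shows the basic idea: a weight between two units grows in proportion to how often, and how strongly, they are active together.

```python
# Minimal sketch (illustrative) of Hebbian learning: "neurons that fire
# together wire together". The connection weight grows in proportion to the
# co-activation of the pre- and post-synaptic units.

import random

def hebbian_update(w: float, pre: float, post: float, lr: float = 0.1) -> float:
    """Increase the weight by the product of pre- and post-synaptic activity."""
    return w + lr * pre * post

w = 0.0
for _ in range(100):
    pre = random.choice([0.0, 1.0])
    post = pre if random.random() < 0.8 else 1.0 - pre  # the two units mostly co-fire
    w = hebbian_update(w, pre, post)

print(w)  # the more often the two units co-fire, the larger the weight grows
```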
Modern neuroscience has transformed this understanding into a range of applications: treatments of patients with damaged brains, development of psychiatric medicine, some treatments and interventions based on transcranial stimulation and other brain activation approaches, and more. Yet the most transformative technologies inspired by neuroscience have been at least partly digital, rather than purely biomedical. Neuroscience is increasingly central to two of the more exotic and exciting areas of digital technology development: brain-computer interfaces and the use of brain organoids as a substrate for computation.
Most pervasively, the “neural network” architecture inspired by early mathematical models of the brain has become the foundation of the recent advances in “artificial intelligence”. Networks with billions or even trillions of connections among nodes, each node operating on a fairly simple, neuron-inspired principle (activation triggered when a linear combination of inputs crosses a threshold), are the backbone of “foundation models” such as BERT and the GPT models. These have taken the world by storm in the past half-decade and increasingly dominated the headlines in the last two years. All the critical features of neuroscience discussed above, and of pluralism more broadly (e.g., multiscale organization, relationality, embedded causation), manifest in the operation of these systems.
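To make the description above concrete, here is a minimal Python sketch (our simplification: a single unit with hand-picked weights, not the architecture of BERT or the GPT models) of the kind of node such networks are built from: a weighted combination of inputs passed through a simple threshold-like nonlinearity.

```python
# Minimal sketch (illustrative) of a single artificial neuron: the unit
# "fires" when a linear combination of its inputs crosses a threshold.
# Modern foundation models stack enormous numbers of such units with
# learned weights; the weights below are arbitrary examples.

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """Weighted sum of inputs plus bias, passed through a ReLU nonlinearity."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, s)  # ReLU: a common stand-in for a hard threshold

print(neuron([1.0, 0.5], [0.8, -0.2], bias=-0.3))  # 0.4: above threshold, fires
print(neuron([0.1, 0.9], [0.8, -0.2], bias=-0.3))  # 0.0: below threshold, silent
```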
**The intersectional (in)dividual**
Plurality is, scientifically, the application of an analogous perspective to the understanding of human societies and, technologically, the attempt to build formal information and governance systems that account for and resemble these structures as physical technologies built on plural science do. As we will argue in the next chapter, many of the most prominent digital technologies of the twentieth century (e.g., the internet and the personal computer) may be seen as examples. Yet to begin, we must understand some of the core elements of a plural understanding of human societies and contrast them with those on which AT and ES are based.
Perhaps the founding thinker of plural social science is Georg Simmel, a German sociologist of the turn of the twentieth century who pioneered the idea of social networks and whose work, through a mistranslation as describing a “web”, eventually went “worldwide”. In his 1955 translation of Simmel’s classic 1908 Soziologie, Reinhard Bendix chose to describe Simmel’s idea as a “web of social affiliations” over what he described as the “almost meaningless” direct translation “intersection of social circles”[^6]. Had he made the opposite choice, perhaps one of the leading technologies of our era would have exchanged names with one of its leading social movements, and we would speak of the “intersecting circles of the internet” and of a “web of oppression”.
Simmel’s “intersectional” theory of identity offered an alternative to both the traditional individualist/atomist accounts (characteristic at the time of the sociology of Max Weber and deeply influential on ES) and collectivist accounts (characteristic at the time of the sociology of Karl Marx and deeply influential on AT). He saw both as representing extreme reductions/projections of a richer underlying theory.
In his view, humans are inherently social creatures and thus there is no original and separate individual identity. Humans gain their sense of self, their goals, and their meaning through participation in social, linguistic, and solidaristic groups. In simple societies (e.g., isolated, rural, or tribal), people spend most of their life interacting with the same group of others or, as he called it, the same “social circle”. This circle comes to (primarily) define their identity collectively, which is why most scholars of simple societies (for example, anthropologist Marshall Sahlins) tend to favor methodological collectivism.
However, in more complex/urban/modern societies, social circles are more diverse. People work with one circle, worship with another, support political causes with a third, recreate with a fourth, cheer for a sports team with a fifth, identify as discriminated against along with a sixth, and so on. These diverse identifications together constitute a person’s identity. The more numerous and diverse these affiliations become, the less likely it is that anyone else shares precisely the same intersection of affiliations.
As this occurs, people come to have, on average, less of their full sense of self in common with those around them at any time; they begin to feel “unique” (to put a positive spin on it) and “isolated/misunderstood” (to put a negative spin on it). This creates a sense of “individuality” that helps explain why social scientists focused on complex urban settings (such as economists) tend to favor methodological individualism. However, ironically as Simmel points out, such “individuation” occurs precisely because and to the extent that the “individual” becomes divided among many loyalties and thus dividual. Thus, while methodological individualism takes the “(in)dividual” as the irreducible element of social analysis, Simmel instead suggests that individuals become possible as an emergent property of the complexity and dynamism of modern, urban societies.
**Plural publics**
If (in)dividual identity is so fluid and dynamic, surely so too must be the social circles that intersect to constitute it. As Simmel highlights, new social groups are constantly forming, while older ones decline. Three examples he gives, all recent in his time, are the formation of cross-sectoral “working men’s associations” that represented the general interest of labor, the then-emerging feminist associations, and cross-sectoral employers’ interest groups. The critical pathway to creating such new circles was the establishment of places (e.g. workmen’s halls) or publications (e.g. workmen’s newspapers) where this new group could come to know and understand one another, and thus to have things in common that they did not share with others in the broader society. Such bonds were strengthened by secrecy, as shared secrets allowed for a distinctive identity and culture, as well as coordination in a common interest in ways unrecognizable to outsiders[^7]. Developing this shared but hidden knowledge allows the emerging social circle to act as a collective agent.
In his 1927 book The Public and its Problems, John Dewey (perhaps America’s most prominent public intellectual) considered the political implications and dynamics of these “emergent publics”, as he called them[^8]. While he acknowledged a range of forces for social dynamism, Dewey focused specifically on the role of technology in creating new forms of interdependence that created the necessity for new publics. Railroads connected people commercially and socially who would never have met. Radio created shared political understanding and action across thousands of miles. Pollution from industry was affecting rivers and urban air. All these technologies resulted from research, the benefits of which spread with little regard for local and national boundaries. The social challenges arising from these forms of interdependence (e.g. the governance of railway tariffs, safety standards, and disease propagation; fairness in access to scarce radio spectrum) are poorly managed by both capitalist markets and preexisting “democratic” governance structures.
Markets fail because these technologies create market power, pervasive externalities, and more generally exhibit “supermodularity” (sometimes called “increasing returns”), where the whole (e.g., a railroad network) is greater than the sum of its parts. Capitalist enterprises cannot account for all the relevant “spillovers” and, to the extent they do, they accumulate market power, raise prices, and exclude participants, undermining the value created by increasing returns. Leaving these interdependencies “to the market” thus exacerbates their risks and harms while failing to leverage their potential.
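A toy calculation may help fix the idea of supermodularity. The sketch below (our illustration, using a Metcalfe-style quadratic value function as an arbitrary stand-in for increasing returns) shows a whole that is worth more than the sum of its parts.

```python
# Toy illustration (assumption: a Metcalfe-style value function, value ~ n^2)
# of supermodularity / increasing returns: two networks connected together
# are worth more than the same two networks valued separately.

def network_value(users: int) -> int:
    """Stand-in value function: quadratic in the number of connected users."""
    return users * users

a, b = 100, 200
print(network_value(a) + network_value(b))  # 50000: the parts, valued separately
print(network_value(a + b))                 # 90000: the whole, once interconnected
```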
Dewey revered democracy as the most fundamental principle of his career; barely a paragraph can pass without him harkening back to it. He firmly believed that democratic action could address the failings of markets. Yet he saw the limits of existing “democratic” institutions just as severely as those of capitalism. The problem is that existing democratic institutions are not, in Dewey’s view, truly democratic with regards to the emergent challenges created by technology.
In particular, what it means to say an institution is “democratic” is not just that it involves participation and voting. Many oligarchies had these forms, but did not include most citizens and thus were not democratic. Nor would, in Dewey’s mind, a global “democracy” directly managing the affairs of a village count as democratic. Core to true democracy is the idea that the “relevant public”, the set of people whose lives are actually shaped by the phenomenon in question, manage that challenge. Because technology is constantly throwing up new forms of interdependence, which will almost never correspond precisely to existing political boundaries, true democracy requires new publics to constantly emerge and reshape existing jurisdictions.
Furthermore, because new forms of interdependence are not easily perceived by most individuals in their everyday lives, Dewey saw a critical role for what he termed “social science experts” but we might with no more abuse of terminology call “entrepreneurs”, “leaders”, “founders”, “pioneers” or, as we prefer, “mirrors”. The role of such mirrors is to perceive a new form of interdependence (e.g. solidarity among workers, the carbon-to-global-warming chain), explain it to those involved by both word and deed, and thereby empower a new public to come into existence. Once this emergent public is understood, recognized, and empowered to govern the new interdependence, the role of the mirror fades away.
Thus, as the mirror image of Simmel’s philosophy of (in)dividual identity, Dewey’s conception of democracy and emergent publics is at once profoundly democratic and yet challenges and even overturns our usual conception of democracy. Democracy, in this conception, is not the static system of representation of a nation-state with fixed borders. It is a process of invention, even more dynamic than a market, led by a diverse range of entrepreneurial mirrors, who draw upon the ways they are themselves intersections of unresolved social tensions to renew and reimagine social institutions.
**Network society**
In these theories of the intersectional (in)dividual and of emergent and plural publics, we can see the seeds of plural understanding of human societies and information systems, parallel to those described above for biological and physical systems. Perhaps the best articulation of this vision appears in the work of the leading figure of network sociology, Mark Granovetter. There is no basic individual atom; (in)dividual identity fundamentally arises from social relationships and connections. Nor is there any fixed collective or even set of collectives: social groups do and must constantly shift and reconfigure. This bidirectional equilibrium between the diversity of people and the social groups they create is the essence of pluralist social science.
Moreover, these social groups exist at a variety of intersecting and non-hierarchical scales. Families, clubs, towns, provinces, religious groups of all sizes, businesses at every scale, demographic identities (gender, sexual identity, race, ethnicity, etc.), education and academic training, and many more co-existing and intersecting. For example, from the perspective of global Catholicism, the US is an important but “minority” country, with only about 6% of all Catholics living in the US; but the same could be said about Catholicism from the perspective of the US, with about 23% of Americans being Catholic.
While we have emphasized the positive vision of pluralist social science (a “network society”), it is important to note that, beyond its inherent plausibility, a key reason for adopting such a perspective is the impossibility of explaining most social problems using monist atomism, given both complexity and chaos. Even in economics, the social science field that most consistently aims for “methodological individualism”, it is universally accepted that trying to model complex organizations exclusively as the outgrowth of individual behavior is unpromising.
The field of Industrial Organization, for example, treats firms rather than individuals as the central actors, while most macroeconomic models assume sufficient homogeneity to allow the construction of a “representative agent”, rather than reducing behavior to actual diverse individual choices. In fact, one fascinating feature of economic models is that they tend to feature a range of different forms of organization as either the “central planner” (e.g., a technology platform operator or provincial government) or as the “individual actors” (e.g., a municipality or a manufacturer). This is hardly surprising given that a leading result in game theory (the most canonical approach to economic “reduction” of a group to individual behavior) is the “folk theorem”, a variant on chaos and irreducible uncertainty which states that when interactions are repeated, a very wide range of outcomes can be an equilibrium.
Yet, whatever level of explanation is chosen, actors are almost always modeled as atomistically self-interested and planners as coherent, objective maximizers, rather than as socially embedded intersections of group affiliations. The essence of understanding social phenomena as arising from a “network society” is to embrace this richness and build social systems, technologies, and policies that harness it, rather than viewing it as a distracting complication. Such systems need, among other things, to explicitly account for the social nature of motivations, to empower a diversity of social groups, to anticipate and support social dynamism and evolution, to ground personal identity in social affiliations and group choices in collective, democratic participation, and to facilitate the establishment and maintenance of the social contexts that sustain community.
While we do not have the space to review it in detail, a rich literature provides quantitative and social scientific evidence for the explanatory power of the pluralist perspective. Studies of industrial dynamics, of social and behavioral psychology, of economic development, of organizational cohesion, and much else, have shown the central role of social relationships that create and harness diversity[^9]. Instead, we will pull out just one example that perhaps will be both the most surprising and most related to the scientific themes above: the evolution of scientific knowledge itself.
A growing interdisciplinary academic field of “Science of Science” (SciSci) studies the emergence of scientific knowledge as a complex system[^10]. It charts the emergence and proliferation of scientific fields, the sources of scientific novelty and progress, the strategies of exploration scientists choose, and the impact of social structure on intellectual advance. Among other things, it finds that, relative to the most efficient ways of discovering existing knowledge (in chemistry, as an example), scientific exploration is biased towards topics and connections related to social connections and previous publications within a field[^11]. It finds strong connections between research team size and diversity and the types of findings (risky and revolutionary vs. normal science) developed, and documents the increasingly dominant role of teams (as opposed to individual researchers) in modern science[^12]. The largest innovations tend to arise from a strong grounding in existing disciplines deployed in unusual and surprising combinations[^13]. It also shows that most incentive structures used in science (based e.g. on publication quality and citation count) create perverse incentives that limit scientific creativity, and it has helped produce new metrics that can complement and offset these biases, creating a more pluralistic incentive set[^14].
Thus, even in understanding the very practice of science, a pluralist perspective, grounded in many intersecting levels of social organization, is critical. To advance science and technology of any flavor, therefore, a pluralist outlook is essential.
**A future plural?**
Yet the assumptions underlying both most existing formal social systems and the AT and ES visions of the future discussed above diverge sharply from such pluralist foundations. First, consider the status quo.
Most existing social institutions, especially in wealthy democracies, have monistic and atomistic, or at least highly socially rigid, foundations. Examples include:
- Fairly simplistic/atomistic forms of private property are the basic pattern in most democratic societies. While taxes, zoning rules, etc. limit the ability of individuals to unilaterally exercise many rights over possessions, community governance is unusual in the design of formal governance systems (in contrast to many historical examples), and legally establishing it in most countries requires jumping through elaborate hoops to set up a corporation, a non-profit, or the like.
- Most collective governance and representation are organized around purely physical and historically rigid jurisdictional boundaries, such as cities, provinces, nation-states, congressional districts, etc. These are the basis of almost all administrative and governance procedures, from jury selection to the organization of war and peace, even when they drastically mismatch the relevant social divisions.
- Most government identification is based on a narrow set of social signals (mostly birth certificates and address) and provides simple, uniform entitlements to e.g. one vote in a local or national election and eligibility for uniform social benefits.
These foundations, and others like them, ripple into almost every aspect of the economy and formal public administration, especially in modern liberal democracies. Yet rather than trying to correct the monism and atomism of these structures, the AT and ES visions take them to an extreme.
In the AT vision we discussed in the previous chapter, the “messiness” of existing administrative systems is to be replaced by a massive-scale, unified, rational, scientific, artificially intelligent planning system. Transcending locality and social diversity, this unified agent is imagined to give “unbiased” answers to any economic and social problem, transcending social cleavages and differences. As such, it seeks to at best paper over and at worst erase, rather than fostering and harnessing, the social diversity and heterogeneity that pluralist social science sees as defining the very objects of interest and value.
In the ES vision, the sovereignty of the atomistic individual (or in some versions, a homogeneous and tightly aligned group of individuals) is the central aspiration. Social relations are best understood in terms of “customers”, “exit” and other capitalist dynamics. Democracy and other means of coping with diversity are viewed as failure modes for systems that do not achieve sufficient alignment and freedom.
But these cannot be the only paths forward. Pluralist science has shown us the power of harnessing a plural understanding of the world to build physical technology. We have to ask what a society and information technology built on an analogous understanding of human societies would look like. Luckily, we need not only use our imaginations, as one country has emerged, from the forge of geopolitical and tectonic pressure, as a shining city on a mountain to which others can look.
[^1]: “Life as Joy, Duty, End”
[^2]: https://www.theguardian.com/books/2022/sep/05/the-big-idea-why-relationships-are-the-key-to-existence
[^3]: “Objectivist” here is not meant only in the narrow sense of the philosophy of Ayn Rand, though perhaps she expresses this view most consistently, but rather in the broader sense of common sense, simplistic version of the philosophy of the Enlightenment.
[^4]: Dawkins, The Selfish Gene; Darwin, The Descent of Man.
[^5]: Here are some examples of these properties in neuroscience: **Sensitivity**: In neuroscience, sensitivity refers to the ability of the brain to detect and respond to small changes in its environment. One example of sensitivity in the brain is the phenomenon of synaptic plasticity, which is the ability of synapses (connections between neurons) to change in strength in response to activity. This sensitivity allows the brain to adapt and learn from experience. **Chaos**: Chaos is a property of complex systems that exhibit unpredictable behavior even though they are deterministic. In neuroscience, chaos has been observed in the activity of neurons in the brain. For example, studies have shown that the firing patterns of individual neurons can be highly irregular and chaotic, with no discernible pattern or rhythm. This chaotic activity may play a role in information processing and communication within the brain. **Sensitivity and chaos together:** Sensitivity and chaos can also interact in the brain to produce complex and adaptive behavior. For example, studies have shown that the brain can exhibit sensitivity to small changes in sensory input, but this sensitivity can also lead to chaotic activity in neural networks. However, this chaotic activity can be controlled and harnessed to produce adaptive behavior, such as in the case of motor control and coordination. The brain's ability to integrate sensitivity and chaos in this way is a hallmark of its remarkable complexity and adaptability.
[^6]: Simmel, “Soziologie” (1908)
[^7]: Simmel, “Sociology of Secrets and Secret Societies”
[^8]: John Dewey, The Public and its Problems (1927)
[^9]: Page, S. E. (2007). The difference: How the power of diversity creates better groups, firms, schools, and societies. Princeton University Press; Hidalgo, C. A. (2015). Why information grows: The evolution of order, from atoms to economies. Basic Books; Acemoglu, D., & Linn, J. (2004). Market size in innovation: Theory and evidence from the pharmaceutical industry. The Quarterly Journal of Economics, 119(3), 1049-1090; Mercier, H., & Sperber, D. (2017). The enigma of reason. Harvard University Press; Pentland, A. (2014). Social physics: How good ideas spread—the lessons from a new science. Penguin; Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. Simon and Schuster; Granovetter, M. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360-1380; Uzzi, B. (1997). Social structure and competition in interfirm networks: The paradox of embeddedness. Administrative Science Quarterly, 42(1), 35-67; Burt, R. S. (1992). Structural holes: The social structure of competition. Harvard University Press; McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1), 415-444.
[^10]: See a summary in Fortunato et al. (2018)
[^11]: Rzhetsky et al. 2015
[^12]: Wu et al. 2019
[^13]: Foster et al. 2015
[^14]: Clauset et al. 2017