Say the word “mind” and most people immediately think about the workings of an individual brain. The idea that something larger than an individual might have a mind seems like science fiction—but modern evolutionary theory says otherwise.
It is now widely accepted that eusocial insect colonies—ants, bees, wasps, and termites—have collective minds, with members of the colony acting more like neurons than decision-making units in their own right. For example, a critical stage in the life of a honeybee colony is when it fissions and the swarm that leaves must find a new nest cavity. Exquisite research by Thomas Seeley and his associates shows that the swarm behaves like a discerning human house hunter, scouting the available options and evaluating them according to multiple criteria. Yet, most scouts visit only one cavity and have no basis for comparison. Instead, the comparison is made by a social process that takes place on the surface of the swarm, which is remarkably similar to the interactions among neurons that take place when we make decisions. After all, what is a multi-cellular organism but an elaborately organized society of cells?
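Seeley’s findings lend themselves to a simple simulation. The sketch below is only a toy model, not Seeley’s actual data or equations: the site names, quality scores, recruitment odds, and quorum threshold are invented for illustration. It shows how a swarm can settle on the best cavity even though no individual scout ever compares two sites, because commitment to better sites decays more slowly and recruitment amplifies whichever site holds the most dancers.

```python
import random

# Toy model of honeybee house-hunting, loosely inspired by Seeley's
# findings. All parameters (site qualities, recruitment odds, quorum)
# are invented for illustration.
SITE_QUALITY = {"hollow_oak": 0.95, "wall_cavity": 0.6, "old_box": 0.4}
N_SCOUTS = 100
QUORUM = 0.75  # fraction of scouts at one site that settles the debate

def run_swarm(seed=0):
    rng = random.Random(seed)
    # Each scout is either uncommitted (None) or committed to one site.
    commitment = [None] * N_SCOUTS
    for step in range(20_000):
        scout = rng.randrange(N_SCOUTS)
        site = commitment[scout]
        if site is None:
            dancers = [s for s in commitment if s is not None]
            if dancers and rng.random() < 0.5:
                # Recruited by a random dancer: better sites retain
                # more dancers, so recruitment tracks quality.
                commitment[scout] = rng.choice(dancers)
            else:
                # Independent discovery of a random site.
                commitment[scout] = rng.choice(list(SITE_QUALITY))
        elif rng.random() > SITE_QUALITY[site]:
            # Scouts abandon poor sites faster than good ones, even
            # though no scout ever compares two sites directly.
            commitment[scout] = None
        # Quorum check: has any single site won the "debate"?
        for cand in SITE_QUALITY:
            if commitment.count(cand) >= QUORUM * N_SCOUTS:
                return cand, step
    return None, None

print(run_swarm())  # usually ('hollow_oak', ...): the best cavity wins
```

In this toy model the “comparison” lives entirely in the social process: no scout holds more than one option in mind, yet the colony reliably converges on the best one, which is the kind of group-level computation the real swarm performs.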
Multi-cellular organisms and eusocial insect colonies both have minds because they are both units of selection. Lower-level interactions that result in collective survival and reproduction are retained, while lower-level interactions that result in dysfunctional outcomes pass out of existence. What we call “mind” refers to the subset of lower-level interactions that gather and process information, leading to adaptive collective action.
As soon as we associate “mind” with “unit of selection”, the possibility of human group minds leaps into view. It is becoming widely accepted that our distant ancestors found ways to suppress disruptive self-serving behaviors within their groups, so that cooperating as a group became the primary evolutionary force. Cooperation takes familiar physical forms such as hunting, gathering, childcare, predator defense, and offense and defense against other human groups. Cooperation also takes mental forms, such as perception, memory, maintaining an inventory of symbols with shared meaning, and transmitting large amounts of learned information across generations. In fact, most cognitive abilities that are distinctively human are forms of mental cooperation that frequently take place beneath conscious awareness. It is not an exaggeration to say that small human groups are the primate equivalent of eusocial insect colonies, complete with their own group minds. As the great 19th-century social theorist Alexis de Tocqueville observed, “The village or township is the only association which is so perfectly natural that, wherever a number of men are collected, it seems to constitute itself.”
The adjective “small” is needed because all human groups were small prior to about ten thousand years ago, although the tribe must be recognized as an important scale of social organization in addition to the fission-fusion bands within each tribe, where most social interactions occurred. In addition, cultural evolution is a multi-level process, no less than genetic evolution. As Peter Turchin shows in his book Ultrasociety, the societies that replaced other societies during the last ten thousand years did so in part because of their ability to gather and process information, leading to effective collective action at ever-larger scales, such as the nations of France and America that were the main objects of Tocqueville’s attention. Some elements of culturally evolved group minds are consciously designed, but many others are the result of unplanned cultural evolution, taking place beneath conscious awareness. They work without anyone knowing how they work.
Units of selection tell us not only where group minds are likely to exist, but also where they are unlikely to exist. In many animal societies, within-group selection is the primary evolutionary force, leading to behaviors that would be regarded as selfish and despotic in human terms. If these societies have group minds at all, they are highly impaired, unlike those of eusocial insect colonies. By the same token, despotic human societies have group minds that are highly impaired, unlike those of more cooperatively organized human societies.
Knowing all of this has tremendous potential for recognizing collective intelligence in human life where it currently exists and socially constructing it where it is needed. However, most of what I have recounted is new, emerging only within the last two or three decades, and is often not reflected in the thinking of otherwise smart people on the subject of collective intelligence. In particular, there is a tendency to naively assume that collective intelligence emerges spontaneously from complex interactions, without requiring a process of selection at the level of the collective unit.
It was therefore with trepidation that I began reading Big Mind: How Collective Intelligence Can Change Our World, by Geoff Mulgan—founder of the think tank Demos, director of the UK Prime Minister’s Strategy Unit and head of policy under Tony Blair, and current chief executive of Nesta, the UK’s National Endowment for Science, Technology and the Arts. That made him smart—but was he smart about collective intelligence from a modern evolutionary perspective?
To my delight, I found him very well informed, clearly recognizing that collective intelligence exists only under very special conditions, which is why it is present in some corners of human life and absent in others. In addition to his conceptual understanding, his book is filled with examples from his extensive policy experience that were previously unknown to me, along with practical advice about how to enhance collective intelligence where it does not already exist. I therefore lost no time inviting him to have an email conversation, which he generously accepted. An excerpt of his book is featured on the online magazine Evonomics.com.
DSW: Welcome, Geoff, to TVOL and congratulations on your superb book. In our correspondence leading up to this conversation, you called my attention to a 1996 issue of Demos Quarterly devoted to evolutionary thinking. Tell us about your background and how you came to appreciate the relevance of evolutionary theory in relation to human affairs. Bear in mind that while you are already well known in some quarters, you will be new to many of our readers.
GM: My intellectual background is a combination of economics, philosophy, social science and telecommunications, the subject of my PhD. By the time I started becoming interested in public policy there was already widespread dissatisfaction with the overly mechanistic, equilibrium models of economics which failed adequately to explain patterns of change: how technologies arise and spread; how economies grow. Many of us looked to evolutionary thinking as a useful tool. It could provide metaphorical frames – understanding social change in terms of the generation of new possibilities, selection and then replication (which has subsequently helped feed a very dynamic field of social innovation); it gave some new insights into how we were formed as human beings, and new psychological insights into policy. The Demos Quarterly you mentioned was a good showcase of the state of the field at the time. But it had little immediate influence.
One interesting spin-off was what is now called behavioural economics, which adapted many insights from evolutionary biology into the language of economics. The next issue of Demos Quarterly in 1996 focused on that, and I later commissioned quite a bit of work in the UK government (including a big 2002 study on the implications of behavioural psychology for public policy). A few years later Richard Thaler and Cass Sunstein published Nudge, which introduced these ideas to the mainstream and helped bring about the creation of a behavioural insights team in the UK Prime Minister’s office.
Another result, which I write about quite a bit in the book, is to see large-scale cognition, like evolution more generally, in terms of trade-offs. I call it cognitive economics: what selection or survival advantages are provided by certain kinds of cognition, and at what cost? A great deal of work has been done on this at the level of the individual organism, in terms of the advantages of a larger, but very energy-hungry, brain. I’m interested in the parallels for groups and organisations: if they spend scarce resources on abilities to observe, analyse, create or remember, does that confer advantages? Can they overshoot – like the clan that spends so much time remembering its ancestors that it fails to protect itself from threats, or the company that spends so much time trying to create the new that it fails to attend to the present? My hunch is that a new discipline is possible that draws on evolutionary thinking to analyse these kinds of trade-offs in more precise ways.
DSW: That’s very helpful background. I don’t want to assume that we agree upon everything, so please comment on my rather lengthy introduction. Is there anything that you would like to add or amend, to set the conceptual stage broadly for our conversation?
GM: Your introduction makes a great deal of sense to me, and coming from a social science background it’s obvious that the group is a unit of selection. The question that animated me was a version of this: why do some nations, cities and organisations manage to thrive and adapt while others don’t, even though they appear to be endowed with superior intellectual resources or technologies? Why did some of the organizations that had invested the most in intelligence of all kinds – from firms like Lehman Brothers to the USSR in the 1980s – fail to spot big facts in the world around them and so stumble? I was looking for a theory that could explain some of these patterns and help me understand how and when some groups are able to optimize for a particular environment and then adapt to a rapidly changing one.
DSW: Indeed! Now I’d like to focus on two conventional wisdoms that obscure clear thinking about collective intelligence. The first axiomatically takes the individual as the unit of analysis; it includes methodological individualism in the social sciences and its economic counterpart, the rational actor model. This axiomatic view makes it difficult to conceive of mind above the level of the individual. What are your views about methodological individualism?
GM: Western intellectual life is dominated by traditions that reject any notion that a collective mind could be more than the sum of its parts. There were some good reasons to be suspicious of vague and mystical invocations of community, god, or national spirit. But I think it’s wrong to conclude that collective intelligence is nothing more than the aggregation of individual intelligences. We recognize as much in any serious account of history, and to an extent in the law, which allows that groups can be guilty of crimes. There is little doubt in my mind that groups can think, and can have true or false beliefs. But the ways groups do these things are not precisely analogous to the ways individuals work. I try to provide a way of thinking about degrees of ‘we-ness’ in groups that relates them to the extent to which cognition is integrated within the group. Here I extend recent work on individual consciousness, which relates it to the degree of integration of the brain while awake. This more nuanced position, I hope, sees individuals as shaped by groups, and groups as made up of individuals. It is helped by the ways in which psychology and neuroscience have revealed that the individual mind is better understood not as a monolithic hierarchy with a single will, but rather as a network of semiautonomous cells that sometimes collaborate and sometimes compete. If you accept that view, then it becomes more reasonable to see groups in a similar way, even while differentiating the highly integrated individual mind from the less integrated group mind (in other words, not-altogether-integrated individual minds that are themselves not altogether integrated into larger groups).
DSW: Thanks. The second conventional wisdom regards collective intelligence as an emergent property of complex interactions, without paying careful attention to the special conditions that are required. Here is how you put it on page 5 of your book: “To get at the right answers, we’ll have to reject appealing conventional wisdoms. One is the idea that a more networked world automatically becomes more intelligent through processes of organic self-organization. Although this view contains important grains of truth, it has been deeply misleading.” Please say more about this view, which in my experience is held by some people who are otherwise very smart, such as complex systems theorists who don’t have a strong background in evolutionary science.
GM: I offer several different challenges to the glib, but very common, view that the universe has some inner dynamic towards benign self-organisation. The first, the lens of cognitive economics, recognizes that organisation is costly. When we study self-organisation more closely in any real situation – from markets to online collaborations like Wikipedia – it turns out to rely on a great deal of labour, provided by people who choose to devote scarce time and money to the work of making things happen, rather than just having fun or sitting on the couch. Where there are insufficient motivations, incentives or habits for doing this, self-organisation tends to disappoint. The second lens recognizes conflict, and a constant struggle between forces for cooperation and forces that aim to disrupt or misinform. Contemporary social media are an obvious example. The third, more sociological lens recognizes that most real complex human societies combine multiple cultures, some hierarchical, some individualistic, and some more egalitarian and cooperative. These complement each other in all sorts of ways. Purely flat, self-organising egalitarian structures tend to fall apart, as do structures which are only hierarchical or only individualistic. I believe this is a fundamental insight of some modern social science (which I attribute particularly to the anthropologist Mary Douglas), one which helps make sense of everything from grappling with climate change to the everyday life of business. But many well-informed people are unaware of it.
DSW: I agree! Now that we have cleared the decks of misleading conventional wisdoms, could you please provide one of the best examples of human collective intelligence? Then we can discuss how it works mechanistically and how it came into existence historically.
GM: The global science system is probably the best single example, and nicely illustrates how real, living intelligence depends on each of the organizing principles described above. It has hierarchy within disciplines and within universities; strong individualist incentives; and a strong egalitarianism (the sociologist Robert Merton spoke of the communism of science, the assumption that knowledge is there to be shared). It depends on some common infrastructures; it orchestrates millions of minds and millions of machines; and it has to fight constant battles against its own internal and external enemies. The internal ones include the strong incentives for fraud or burying uncomfortable evidence (just last week a newly appointed professor of computational biology described being told by a superior that one should repeat experiments as many times as necessary to get the right result!).
Seen from afar the science system looks like a wonderful emergent system; seen up close it depends on many individuals devoting their lives to the hard work of building up a community, and establishing its norms, and persuading others to give it money, status and other resources.
DSW: This is indeed an excellent example to single out. Mechanistically, we can show that scientific inquiry requires a complex “social physiology” with regulatory processes enforced by norms. Historically, we can rely on books such as Steven Shapin’s A Social History of Truth and Robert McCauley’s Why Religion Is Natural and Science Is Not. As the title of McCauley’s book implies, we can consult our deep genetic evolutionary past to show why we are not natural-born truth seekers and require a socially constructed process to create a body of objective knowledge. We can also see how scientific and scholarly cultures have been torn apart in the past and should not be taken for granted in the present.
To continue, I’d like to focus on an example of collective intelligence that is a work in progress—the smart cities movement. To say that cities need to be made smart implies that they do not become smart on their own, affirming the point we have already made that collective intelligence does not spontaneously emerge. Yet, efforts to make cities smart often fail to reach their goals. What is your opinion of the smart cities movement?
GM: The smart city movement is simultaneously exciting and maddening. Its promise is that many of the everyday processes of city life can become more intelligent. Data flows can greatly improve the efficiency of energy management and transport; central command centres like the one in Rio can spot emergencies and respond quickly; sensors on lampposts can assess air quality; and families like mine can control the heating in our home remotely, or check on our security. That’s the promise. Unfortunately, far too much of what has been labelled smart cities is either facile or irrelevant, often meeting needs that don’t exist (like refrigerators that tell you when you need to buy more milk). Too many plans failed to attend to the human element, or focused only on smart hardware rather than on helping people to be smart. These are just some of the reasons why so much investment in smart city technology has been wasted, and why most of the prominent examples essentially failed – from Dongtan in China, which was never built, to Masdar in Abu Dhabi – or left us with rather soulless places like New Songdo in Korea. Few attended to the most pressing needs of cities – for health or jobs. And few learned basic lessons from evolution. The best cities have been given the space to evolve, to learn from experience and to reconfigure themselves. Smart city plans, by contrast, tend to be conceived as blueprints that simply need to be implemented, with little engagement from the people who live in the city.
DSW: I like your point about the need for smart cities to be a collaboration with the residents. One of my former PhD students, Dan O’Brien, is involved in the smart cities movement in Boston. His research on 311 illustrates an important distinction between designing social systems and participating in the social systems that we design. As you know, 311 is a three-digit number that can be called to report problems such as a fallen tree or a pothole. It originated as a “cultural mutation” in Baltimore to handle calls that were inappropriately being made to 911, which is for emergency situations. Then people began to realize that 311 could serve as the “eyes and ears” of a city by having residents provide information about problems that the city could process and address. Today it is used in hundreds of cities. With such a system in place, a city begins to resemble a single organism with a nervous system that receives and processes information in a way that leads to effective action. A lot of work is required to design and implement the system, but once it is in place, using it is as simple as punching three numbers into your smartphone when you see a problem. It seems to me that this distinction between designing social systems and participating in the social systems that we design is very general. Do you agree, and can you provide some other examples?
GM: Yes I do. I was quite closely involved in variants of this in the 1990s when 111 services were designed, and then more recently when Nesta supported various tools that could show in real time the topics being called about, providing a real-time map of the city’s concerns and problems. All of these examples have worked best with some space for iteration and learning. The UK introduced something very similar in the health service in the 1990s – called NHS Direct – to deal with more everyday issues and reduce pressures on hospitals. But this technically strong idea ran into challenges: resistance from some doctors who didn’t like diagnoses being done by nurses helped by algorithms; concerns from lawyers about the risks of error, which together meant that too often the service recommended going to a doctor or hospital anyway; and the challenge of serving a large population without good English. In the same way, my mentor Michael Young created a precursor in the 1980s, a call centre staffed by doctors and nurses for people with sensitive or embarrassing health conditions who didn’t want to interact face to face with a doctor; in this case he co-designed it with patients themselves. All of these are, at their best, examples of co-evolution – systems learn best by trying, fixing and following leads rather than by abstract design.
DSW: Again, your emphasis on the need for policymaking to be iterative is important. In addition, any systems engineer will tell you that a complex system cannot be optimized by separately optimizing its parts. The parts and their interactions must be organized with the performance of the whole system in mind. If so, then collective intelligence at the global scale requires policies that are formulated with the welfare of the whole earth in mind. Nothing else will do, and the notion that each nation can pursue a “my nation first” policy is collective idiocy. Do you agree with this assessment?
GM: The insight that optimization of the parts can be suboptimal for the whole is one of the great contributions of systems engineering, and one that is often forgotten both in business and in government, where the relentless pursuit of narrow targets can have disastrous consequences. This is in part a technical challenge. The Intergovernmental Panel on Climate Change is a good example of a first attempt to create something more like a global collective intelligence that can ‘inform policies formulated with the welfare of the whole earth in mind’, as you put it. But it’s only a first stab. The huge number of variables involved in something like climate change, let alone its interaction with economies or social life, makes it next to impossible to model. The task is so far beyond the capability of either our brains or our computers that we have to rely on rough-and-ready heuristics.
So to complement our imperfect tools we also need to cultivate a parallel ethical stance. In my work on public leadership I developed the idea that leaders should think in terms of three concentric circles of accountability. The first is to the immediate task or organization. The second is to the wider community they are part of. The third is accountability to humanity and the planet. Ideally we want leaders who can align all three. But if they try to sacrifice the first to the third, they are not likely to survive very long. Conversely, the world would be disastrous if everyone focused only on the interests of their immediate organization. We have to learn to strike a balance. To be compassionate only to those close at hand is narrow and mean. But to be compassionate only for the world as a whole, ignoring those closest to you, can be just as bad.
DSW: I agree entirely. The parts must be organized in relation to the whole but the whole must sustain the parts. There is a lot to be pessimistic about, but can you leave our readers with some things to be optimistic about?
GM: We are in an odd phase of history, which is simultaneously bringing extraordinary breakthroughs in artificial intelligence and unusually foolish or malign leaderships. Seen in the long view, I tend to be confident. There is some evidence of the Flynn effect, a long-term rise in individual intelligence; many of our institutions are becoming more capable, certainly if seen at a global level; we have far more awareness of global public goods, and greatly enhanced capacities to observe, analyse and predict; and we are in a period when millions more are becoming makers and shapers of technologies and of their world. Of course it’s not at all certain that our capacity to think and act is advancing sufficiently fast. But I see strong reasons to err on the side of hope. One comes from family history. My cousin John Mulgan, a New Zealand novelist and soldier, killed himself in 1945 because he was so depressed at what he saw happening in the world (amongst other things, the British were reinstalling the Greek king in the country where he had been fighting for several years). He thought the outlook was hopeless. Yet in retrospect 1945 was one of the great years of new opportunity for the world, a reminder of how hard it is for us to judge our times accurately.
We shouldn’t rely on hope, however, or fall into the trap of believing that grand historical forces will make things turn out either for good or for ill. Systems – markets, humanity, science – don’t automatically generate solutions. History tells us that again and again; the space for agency – sparked by both imagination and fear – is where the most important work is done. I’m lucky to spend much of my time with practical innovators in the worldwide social innovation movement, in business, science and government; if you do so, you are inevitably infected by optimism.
They point to a final insight: over the years I’ve learned that the more detached people are from practical work, the more they risk slipping into negativity and fatalism. Action breeds hope, not the other way around. Goethe said that in the beginning was the deed, not the word, and engagement with practice is probably the best way for us to keep hopeful and useful, and to make our contribution to the next stage of human evolution.
DSW: That’s great and describes my own experience as an activist. I’m also optimistic. The problems are wicked, but we are beginning to develop the tools for becoming “wise managers of evolutionary processes”.