Several years ago I was invited to write a book chapter on how we can make the best decisions in our complex, interconnected world. I wrote the chapter, aimed at a general audience and containing some interesting stories about how Benjamin Franklin, Charles Darwin, and others went about making up their minds, but the book never eventuated and my chapter lay languishing. I have just re-discovered it and, even though I say so myself, it makes a jolly good read. So here it is.

By the way, I wrote an overview for a recent conference on a similar theme, sponsored by the Royal Society of New South Wales, with some very high-class speakers: https://royalsoc.org.au/four-academies-forum-2016-presentations.

How Can We Make Good Decisions in Complex Situations?                       

How can we make the best choices between different courses of action in a complex situation? Benjamin Franklin thought he had the answer in the form of his “moral algebra”, which he described in a letter to the English chemist Joseph Priestley [1] in 1772:

“My way,” he wrote, “is to divide half a sheet of paper by a line into two columns; writing over the one Pro, and over the other Con. Then, during three or four days consideration, I put down under the different heads short hints of the different motives, that at different times occur to me, for or against the measure. When I have thus got them all together in one view, I endeavor to estimate their respective weights; and where I find two, one on each side, that seem equal, I strike them both out. If I find a reason pro equal to some two reasons con, I strike out the three. If I judge some two reasons con, equal to three reasons pro, I strike out the five; and thus proceeding I find at length where the balance lies.”

Regrettably, as we now know, Franklin’s “balancing” approach can’t work for many of the common decision-making situations that we encounter. The world is just too complex.

Utility

One problem is that we are often trying to balance very disparate things. How, for example, can we weigh a strong desire to go out with friends for a pizza against a wish to stay in and watch a favourite television program, or against a need to stay in and study for a test on which we would like to get a good grade?

An approach that is often used by economists [2], game theorists [3] and sociologists [4] is to consider the utility of each of these actions to ourselves. The idea seems simple in principle (according to one definition, it simply means “the total satisfaction received by a consumer from consuming a good or service” [5]), but it can be ferociously difficult to apply in practice, especially when trying to compare the “utility” of the same thing to different people [6].

 

The technical concept of utility was introduced into modern-day economic and social thinking by John von Neumann & Oskar Morgenstern [7] in their seminal work Theory of Games and Economic Behavior, published in 1944. Its roots, however, go back at least to the eighteenth century, with the work of the English social reformer Jeremy Bentham [8] and his philosophy of utilitarianism, encapsulated in the well-known phrase “the greatest good for the greatest number.”

Half a century after Bentham had advanced his idea, Charles Darwin used it to help him decide whether or not to marry his cousin Emma. In a notebook that is now on display in his old residence of Down House in the English county of Kent, he listed the utilities of getting married [9] as a series of pros and cons.

The pros included companionship (“better than a dog, anyhow”) [10], someone to take care of the house, and the fact that “these things are good for one’s health”, while the cons included the fact that he would have less money for books, wouldn’t be able to read in the evenings, and “If many children forced to gain one’s bread (But then it is bad for one’s health to work too much).”

The overall utility of the pros, in Darwin’s mind, outweighed that of the cons, and Darwin duly proposed and was accepted.

Economists call this sort of reasoning “cost-benefit” analysis, and many of us still use it in some form or another to make decisions in our daily lives. There is evidence, for example, that college seniors who use cost-benefit reasoning in their everyday decisions have higher grade point averages than those who do not [11].

“Cost-benefit analysis” and the concept of utility have even been enshrined in law [12]. The United States Supreme Court, for example, uses it to decide what form of legal process is due to a citizen whom the government is attempting to deprive of property, liberty or even life. The utility balancing act here is a cost-benefit analysis via the “Mathews test”, which attempts to establish a fair equilibrium between “the private interest affected by government action; the risk of erroneous deprivation of such interest; and the government’s interest, including the function involved and the burdens the government would face in providing greater process.” It all sounds very fair and reasonable, but as legal analyst Christopher J. Schmidt has pointed out [13], such “balancing tests have little historical foundation and are ineffective at resolving due process issues.” As with Charles Darwin’s list of pros and cons, one can get any answer that one wants simply by changing one’s assessment of the relative ranking of the different utilities.

Decision Theory

“Utility,” nevertheless, forms the basis of modern-day causal decision theory [14], which offers a rational way of making optimal decisions when we are uncertain about the consequences of our choices. The idea is simple in principle: the choice between actions is based on a combination of the probability of each possible outcome and the expected utility of that outcome [15].
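In symbols, the prescription is compact (this is the standard textbook formulation, restating the description from Polasky et al. quoted in note [15], not anything new): writing P(s) for the probability that the world is in state s, and U(o(a,s)) for the utility of the outcome o of taking action a in state s, the rule is to pick the action that maximizes expected utility:

\[
EU(a) \;=\; \sum_{s} P(s)\, U\!\big(o(a,s)\big),
\qquad
a^{*} \;=\; \arg\max_{a} \, EU(a).
\]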

It sounds very reasonable, but this approach can sometimes lead to paradoxical outcomes, such as the classic Newcomb’s problem [16], which has implications for medical diagnosis. Nevertheless, causal decision theory is now widely used in practice, and there is an enormous literature on its application to problems in many areas, from economics to ecology, from the problems of individuals to the problems of society. This is not the place to discuss that literature, but simply to point out that there are serious caveats to applying what is essentially a mathematization of Franklin’s and Darwin’s approach to making decisions in an environment of complexity [17]. These caveats arise because: i) the set of potential alternative outcomes of a decision is often unknown, and sometimes unknowable; ii) even when the possible outcomes can be predicted, the probability of their occurrence can be impossible to estimate; and iii) the net benefits of predictable outcomes can in any case also be impossible to estimate.

The Three Major Problems

The source of these caveats lies in the intrinsic character of complex systems. For a start, many of the important factors may be interdependent. Their interdependence is also likely to be strongly non-linear – in other words, when one factor changes, another can change in a disproportionate way. A third important characteristic is that the dependence can be in both directions, so that when A changes B, the change in B can then provoke a further change in A. Sometimes the change can be in a direction that helps to maintain the stability of a situation (negative feedback). At other times the changes in both directions may feed on each other (positive feedback) to produce runaway change and a critical transition to a totally different state [18].

Many of the chapters in this book describe these processes in detail. Here I am concerned to discover whether there is any way that we can improve on causal decision theory to make good decisions in such an environment of uncertainty. My search has revealed that there are three basic ways to tackle the problem:

i) Simplify the decision process.
ii) Use different criteria to allow for complexity in making the decision.
iii) Change the system to improve control, resilience and predictability.

 

i) Cutting the Gordian Knot

“Make it simple. Make it quick.”

Advice of title-winning English soccer coach Arthur Rowe.

One simple approach to solving complex problems was reputedly used by Alexander the Great when he visited the ancient city of Gordium [19] in 333 B.C.E. According to legend, the quasi-mythical King Midas had, some five centuries earlier, tied an ox-cart to a pole by means of an intricate knot that no one had been able to unravel in the intervening centuries. Alexander at first tried to untie the knot and then, when he could not even find an end, solved the problem in a rather more direct manner by slicing the knot in half with his sword.

Gerd Gigerenzer and his colleagues at the Center for Adaptive Behavior and Cognition in Berlin have shown [20] that Alexander’s direct, no-nonsense, simplifying approach can sometimes stand us in good stead when it comes to making decisions in complex situations. Rather than trying to allow for the complexities, they suggest, it can often be useful to adopt simple pragmatic rules that work in the majority of cases [21].

The starting point is that our minds are simply unable to digest and process all of the information that might be necessary to reach a perfectly rational decision in the majority of circumstances. Homo sapiens (“thinking man”) we may be, but Sherlock Holmeses we are not.

Holmes was a fictional ideal: a combination of Homo omnisciens (“all-knowing man”) and Homo omnipotens (able to process complex information in a short space of time). Jonah Lehrer argues persuasively in The Decisive Moment [22] that our brains don’t work like that at all. Using the results of modern neuroscience, and examples such as that of a quarterback who has to make a split-second decision about a play, he demonstrates that we are instead a combination of Homo sapiens and Homo emoticus, with the emotional part of the brain informing the rational side. When the emotional side (located in the orbitofrontal cortex) is lost through accident or surgery, we lose the ability to make quick decisions – or, indeed, any decisions at all.

Gigerenzer argues that our normal brains have developed (presumably through a combination of emotional and rational experience) to use a range of simple practical heuristics as short-cuts to decision-making. Experiments by his group and others have shown that we can deliberately use such short-cuts (“fast and frugal heuristics”) to make better decisions in complex situations.

Four of the major approaches suggested by Gigerenzer [23] are:

Recognition: If you are faced with a pair of alternatives, and recognize only one, choose that one (this approach can easily be extended to a choice between multiple alternatives). If you recognize more than one alternative, go with the one that you recognize most easily. This worked for me when I was lost in the city of Bangkok, and could not remember the name of my hotel or even the street that it was in. But I remembered the name of a couple of major streets, and by choosing these in preference to the ones I didn’t recognize as I walked around, I was able to find my way back.

The recognition heuristic may even be used to make money. In one study [24], people with no prior knowledge of the stock market were able to construct portfolios that out-performed professionally managed funds, simply by investing in firms whose names they recognized.

Tallying: Look for cues that might help you to make a choice between options, and go with the option that has the greatest number of cues (or the greatest excess of positive over negative cues if both sorts are available) without bothering to try to rate them in order of importance.

When hiking or skiing in avalanche areas, for example, there are seven major cues (including whether there has been an avalanche in the past 48 hours and whether there is surface water from sudden warming) that indicate potential for an avalanche. Studies have shown that, where more than three of these cues are present, the situation should be considered dangerous. If this simple tallying strategy had always been used, 92% of historical accidents could have been avoided [25].

An even more interesting exercise in tallying is a comparison between Magnetic Resonance Imaging (MRI) and simple bedside rules for the early detection of strokes [26]. The simple bedside eye examination consists of three tests, and raises an alarm if the patient fails any one of these tests. This simple tallying rule correctly detected 100% of patients who had had a stroke (with just one false positive out of 25 patients), and outperformed the complex MRI diffusion-weighted imaging, which detected only 88%.
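For readers who like to see how little machinery tallying needs, here is a minimal sketch in Python built around the avalanche example. Only two of the seven cues are named in the text above, so the other cue labels here are my own hypothetical shorthand; the decision rule (treat the situation as dangerous when more than three cues are present) is the one quoted from McCammon and Hageli [25].

```python
# Minimal sketch of the tallying heuristic, using the avalanche example.
# Five of the seven cue names below are hypothetical placeholders, not the
# exact terms used by McCammon and Hageli [25]; only the decision rule
# (dangerous when more than three cues are present) comes from the text.

AVALANCHE_CUES = [
    "avalanche_in_past_48_hours",          # named in the text
    "surface_water_from_sudden_warming",   # named in the text
    "recent_heavy_snow_loading",           # hypothetical label
    "audible_cracking_or_whumpfing",       # hypothetical label
    "slope_steeper_than_30_degrees",       # hypothetical label
    "known_avalanche_terrain",             # hypothetical label
    "rapid_temperature_rise",              # hypothetical label
]

def tally_danger(observed_cues, threshold=3):
    """Tallying: count the cues present, without weighting them."""
    count = sum(1 for cue in observed_cues if cue in AVALANCHE_CUES)
    return count > threshold  # dangerous when more than `threshold` cues present

# Example: four cues observed, so the slope is treated as dangerous.
observed = ["avalanche_in_past_48_hours", "surface_water_from_sudden_warming",
            "recent_heavy_snow_loading", "slope_steeper_than_30_degrees"]
print(tally_danger(observed))  # True
```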

Take the Best: When faced with a choice between two options, look for cues and work through them in the order of your expectation that they will lead to the best choice. Make your choice on the basis of the first cue that distinguishes between the alternatives.

Policemen and professional burglars alike use this method to assess which of two residential properties is the more ripe for burglary [27], while lay people who try to give different weights to a range of factors, and then add them up to make the same assessment, take a lot longer and, in the end, do significantly worse. The latter technique is called “conjoint analysis” by consumer choice researchers, who have found to their surprise that the simple “Take the Best” strategy works as well as or better than the more complex approach when it comes to making the best choice [28].
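In code, “Take the Best” amounts to a single loop. The sketch below is my own illustration, and the burglary cues and their ordering are invented for the purpose; the logic (work through the cues in order of expected usefulness and stop at the first one that discriminates) is the heuristic described above.

```python
# Sketch of the "Take the Best" heuristic: examine cues in order of expected
# usefulness, and let the first cue that discriminates between the two
# options decide the choice. The cue names below are invented examples.

def take_the_best(option_a, option_b, ordered_cues):
    """Return the option favoured by the first discriminating cue.

    option_a, option_b: dicts mapping cue name -> True/False
    ordered_cues: cue names, best (most useful) first
    """
    for cue in ordered_cues:
        a, b = option_a.get(cue, False), option_b.get(cue, False)
        if a != b:                    # the first cue that distinguishes them
            return "A" if a else "B"
    return "no decision"              # no cue discriminates: guess or move on

cues = ["occupants_clearly_away", "secluded_entrance", "no_alarm_visible"]
house_a = {"occupants_clearly_away": True, "secluded_entrance": False}
house_b = {"occupants_clearly_away": True, "secluded_entrance": True}
print(take_the_best(house_a, house_b, cues))  # "B": decided by the second cue
```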

Satisficing: Search through alternatives and choose the first one that exceeds your aspiration level. This technique has a rigorous mathematical basis [29] that defines the odds of making the right choice – so long as we can make a reasonable guess at how many alternatives there might be without having to look at them all individually. If we are shopping in a flea market, for example, and looking for a bargain for a particular item where we think there might be a hundred possibilities all up, then simply by looking at the first fourteen and then choosing the next one that has a lower price than any of these, we have a whopping 84% chance of snatching a bargain in the bottom 10% of the price range. If we are happy with a price in the bottom 25%, we need only look at seven items before making a similar choice. Simple – and effective. Todd and Miller [29] have even suggested that we might use the same technique in looking for a life partner!
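The flea-market numbers are easy to check with a quick Monte Carlo simulation. The sketch below is my own illustration: it plays the “look at fourteen, then take the next cheaper one” rule many times on randomly priced items and estimates the success rate, which should come out close to the 84% figure quoted above.

```python
import random

def satisfice_trial(n_items=100, look_first=14, target_fraction=0.10):
    """One trial of the satisficing rule on randomly priced items.

    Inspect the first `look_first` prices, then buy the next item cheaper
    than all of them (or the last item if none qualifies). Returns True
    if the purchase lands in the cheapest `target_fraction` of all items.
    """
    prices = [random.random() for _ in range(n_items)]
    threshold = min(prices[:look_first])
    chosen = next((p for p in prices[look_first:] if p < threshold), prices[-1])
    cutoff = sorted(prices)[int(n_items * target_fraction) - 1]
    return chosen <= cutoff

trials = 100_000
hits = sum(satisfice_trial() for _ in range(trials))
print(f"Estimated success rate: {hits / trials:.1%}")  # ~84%, per the text
```

Changing `look_first` to 7 and `target_fraction` to 0.25 reproduces the second claim in the same way.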

 

ii) Use Different Criteria to Allow for Complexity in Making the Decision

“Everything should be made as simple as possible, but no simpler”

Albert Einstein

The simple heuristic criteria listed above (and many others that are described in the references quoted) can often be useful in making personal decisions. They somehow don’t feel quite so satisfying when it comes to making important decisions about big social, economic and environmental questions. Is there some other approach that we could use, one that avoids the Procrustean nature of heuristic decision-making, but which also overcomes the difficulty of assessing “utility,” as required by classic decision theory?

Steering a course between such a Scylla and Charybdis of decision-making in complex situations is by no means easy. Three major possibilities for alternative criteria have been explored by Stephen Polasky et al. in a seminal article on future environmental management [30]. These lines of attack are i) The Thresholds Approach; ii) Scenario Planning; and iii) Resilience Thinking. They may be used either in isolation or in combination (Polasky et al. explore their combination with classic decision theory, although it seems to me that some combination with the heuristic approach could also have some value).

i) The Thresholds Approach

Complex adaptive systems usually possess multiple basins of attraction, which (to mix a metaphor) act as islands of stability – sometimes veritable continents. The thresholds approach ignores these relatively stable or slowly changing environments, and focuses instead on the potential transitions between them.

These transitions, which are labelled critical transitions or regime shifts, arise when the subtle balance between stabilizing negative feedback processes and runaway positive feedback processes reaches a point where the runaway processes take over, sometimes in dramatic fashion. Inland lakes may suddenly change from turbid to clear, or vice versa. Natural populations may suddenly mushroom, or just as suddenly collapse and even disappear entirely. Technical innovations, from the discovery of fire to the development of the personal computer, can transform our lives in a very short space of time. Banking systems may crash; revolutions may break out; whole societies, ecosystems and economies may suddenly burgeon or just as suddenly collapse. All of these are examples of critical transitions within complex systems, emerging directly from the nature of the system itself [31].
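One way to see how such runaway transitions emerge is to simulate a standard toy model. The sketch below uses a much-studied minimal equation for nutrient loading in a shallow lake, of the kind analysed by Scheffer [31]; the parameter values are purely illustrative, chosen only to put the system in its bistable range.

```python
import numpy as np

# Minimal model of a critical transition (a standard lake-turbidity toy model
# of the kind analysed by Scheffer [31]):
#     dx/dt = a - x + x**8 / (1 + x**8)
# where x stands for turbidity and a for the external nutrient load.
# Parameter values are illustrative, not fitted to any real lake.

def simulate(a_values, x0=0.2, dt=0.01, steps_per_a=2000):
    """Slowly ramp the load `a`, letting x settle at each value (Euler steps)."""
    x, settled = x0, []
    for a in a_values:
        for _ in range(steps_per_a):
            x += dt * (a - x + x**8 / (1 + x**8))
        settled.append(x)
    return settled

ramp_up = np.linspace(0.3, 0.9, 61)
up = simulate(ramp_up)                      # load slowly increasing
down = simulate(ramp_up[::-1], x0=up[-1])   # then slowly decreasing again

# The jump happens at a different load on the way up than on the way down
# (hysteresis): near the threshold, a small change in `a` produces a
# disproportionate, runaway change in x.
for a, x_up, x_dn in zip(ramp_up, up, down[::-1]):
    print(f"a = {a:.2f}   up: x = {x_up:.2f}   down: x = {x_dn:.2f}")
```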

The thresholds approach offers a screen to rule out actions that modelling and other approaches show to carry a high risk of crossing a threshold. At the least, it allows us to rank actions according to the likelihood of such risk. Computer modelling of such risk goes back to the Club of Rome’s 1972 report The Limits to Growth [32], whose predictions, according to a recent study, still largely hold good [33].

A particularly important application of the thresholds approach lies in the calculation of boundaries for various variables that affect our planetary ecosystem. One recent study, published in the prestigious scientific journal Nature under the title “A Safe Operating Space for Humanity” [34], provided conservative calculations for nine variables based on current knowledge, and concluded that three (climate change, the nitrogen cycle, and biodiversity) were already close to or (in the case of biodiversity) well beyond the safe limit.

That’s the science. The politics, as many despairing environmentalists and other concerned people will know, is quite a different matter. It is a truism that politicians do not understand how science works, but it is an equal truism that most scientists neither understand nor respect the constraints under which politicians operate. These are practical issues that crucially need to be resolved [35] before any sensible approach to decision-making in the world’s complex socio-economic-ecological environment can be undertaken.

ii) Scenario Planning

Scenario planning is science fiction for the real world. It conceptualizes the future by inventing plausible stories, supported by data and modelling, about how situations might evolve under different conditions if particular human decisions are made and acted on. By examining this range of potential futures, decision-makers can assess the robustness of alternative policies, and also hedge against “worst-case” scenarios.

Two contrasting cases [36] illustrate the potential value of this approach to decision-making in complex situations. In the early 1970s, with oil prices low and predicted to remain so, Shell nevertheless considered scenarios where a consortium of oil-producing countries limited production to drive oil prices upwards. As a result, the company changed its strategy for refining and shipping oil. It was then able to adapt more rapidly than its competitors when the scenario became reality in the mid-1970s, and rapidly rose to become the second-largest oil company in the world.

By contrast, IBM failed to use scenario planning in the 1980s when predicting the market for personal computers, and withdrew from a market that became more than a hundred times larger than its forecasts.

The weakness of scenario planning lies in the difficulty in assessing the likelihood that alternative scenarios will actually arise. Even so, as the above examples illustrate, it can be useful as one of a portfolio of decision-making processes, and has the additional advantage that the stories that it tells can readily be understood by non-technical decision-makers. Perhaps this is why it finds such favour with government committees concerned with disaster planning.

iii) Resilience Thinking

One of the key indicators of the nearness of a critical transition in a complex social, economic or ecological system is a decrease in resilience [37] – that is, a decreasing ability of the system to recover from small perturbations.

Resilience thinking focuses on promoting awareness of such warning signals, and also on the conservation of key processes so that the system is able to adapt most readily to sudden change if and when it arises.
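What do such warning signals look like in practice? Two standard indicators from the early-warning literature [37] are rising variance and rising lag-1 autocorrelation in a system’s fluctuations, as it recovers ever more sluggishly from small knocks. The sketch below is my own illustration, with synthetic data standing in for real monitoring records:

```python
import numpy as np

def rolling_indicators(series, window=100):
    """Rolling variance and lag-1 autocorrelation: the standard early-warning
    indicators of declining resilience (Scheffer et al. [37]). A sustained
    rise in both is the warning sign, not their absolute values."""
    variances, autocorrs = [], []
    for i in range(len(series) - window):
        w = series[i : i + window]
        variances.append(np.var(w))
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return variances, autocorrs

# Synthetic demonstration data: an AR(1) process whose memory slowly
# strengthens, mimicking a system losing resilience. Real use would
# substitute monitored data (lake chemistry, market indices, etc.).
rng = np.random.default_rng(0)
x, series = 0.0, []
for t in range(2000):
    phi = 0.2 + 0.7 * t / 2000        # autocorrelation creeps upward
    x = phi * x + rng.normal()
    series.append(x)

var, ac = rolling_indicators(np.array(series))
print(f"early window:  variance = {var[0]:.2f}, lag-1 autocorr = {ac[0]:.2f}")
print(f"late window:   variance = {var[-1]:.2f}, lag-1 autocorr = {ac[-1]:.2f}")
```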

The obvious problem here is that a very wide range of problems and options needs to be considered to make such planning possible. True interdisciplinarity is the key here – not just scientific interdisciplinarity, but social, economic and even political interdisciplinarity.

A second, major problem is that the time-scale on which most of the warning signs appear is unfortunately as short as, if not shorter than, the time-scale of many of society’s current decision-making processes [38].

The difficult, confronting conclusion is that successful planning for our complex future will almost surely require a totally different approach to managing our affairs, and will need new, rapidly adaptive ways of decision-making, such as using the rapid response time of the Internet as a part of the information-collating and decision-making processes [39]. Developing such an approach may require a measure of understanding and good will that is currently beyond us, but the decision criteria above (especially if used in combination) at least suggest that there is light at the end of the tunnel, even if there is a train coming the other way.

 

iii) Change the system.

“A centipede was happy – quite!

Until a toad in fun

Said, “Pray, which leg moves after which?”

This raised her doubts to such a pitch,

She fell exhausted in the ditch

Not knowing how to run.”

Katherine Craster “Pinafore Poems” (1871)

 

The plain fact is that complex systems, from our bodies to our social-economic-ecological environment, run reasonably well on their own self-generated rules for most of the time. We may not understand how they work, but there is a case for arguing that our attempts to understand and change them can only too easily make things more difficult.

It is a case that has some support in the fields of economics, ecology and society. Planned economies have a dismal record. Attempts to alter ecological systems for our own benefit have sometimes proved disastrous, as when cane toads were introduced into Australia from Hawaii in an attempt to control the destructive cane beetle, only to prove a far more destructive pest themselves. Attempts to set up planned utopian societies have almost inevitably ended in failure.

If we can’t easily foresee the consequences of our actions in complex situations, should we not simply leave the situation alone and watch what develops? The argument, cast in mathematical form by Wolfram [40], has a beguiling appeal, especially if it appears that any action we take has an equal chance of improving the situation or making it worse, and that there is nothing else that we can do.

But often there is something else that we can do, in principle at least. We can change the system.

Most complex adaptive systems can be viewed as networks, which Dr Samuel Johnson defined in his famous dictionary of 1755 as “Any thing reticulated or decussated, at equal distances, with interstices between the intersections.” Like most of Johnson’s scientific and mathematical definitions, this one was elaborately worded nonsense.

A network is, in fact, simply a set of hubs connected together by links. The hubs are the individual units (sometimes called actors or agents), which may be people, business firms, countries, animals, plants, physical objects, etc., or a combination of all of these. The links represent any process by which one hub may affect another. An adaptive network is one where hubs or links can change in response to their previous communication history.

Predicting change and evolution in even the simplest of networks is fraught with difficulty. The simplest network consists of just two hubs connected by one or more links. Even here, prediction and decision-making are not simple. If the two hubs represent the partners in a relationship, and one partner responds badly to something that the other has said, there may be a positive feedback process where an argument rapidly develops, or a negative feedback process where the first person apologizes and calms the situation down. The “decision” of whether to use the first or second strategy can depend on other links between the partners, such as previous history. If we make the network bigger, to include (say) the first partner’s mother, the relationship with the mother may influence the way that things develop.

When it comes to the many extended networks in which we are all involved, multiple links can influence our decisions and behavior. Our actions in a two-way partnership, for example, may be influenced by the actions of a bank manager at a distant hub, whose decisions about a mortgage application may cause anxiety in a relationship and increase the possibility of arguments.

All of this is blindingly obvious, as is the fact that with increasing complexity the evolution of a complex adaptive network becomes increasingly difficult to predict. What is less obvious is that we can, in principle, control at least some aspects of the resilience and stability of the network by deliberately altering the nature and strength of the links, and removing or adding appropriate hubs.

An elaboration of network theory is beyond the scope of this chapter, and we are only at the beginning of understanding how this may be done. It is, however, worth making several key points:

1) As pointed out by ecologist Robert May and banking strategist Andrew Haldane [41], modular configurations can in principle prevent contagion (from the outbreak of a disease to the collapse of a bank or an economy) from infecting a whole network (be it an ecological network, a social network or a banking network). “By limiting the potential for cascades,” they say, “modularity protects the systemic resilience of both natural and constructed networks.”

“Modularity” in this context means breaking the system into blocks (sub-networks), with only limited links between the blocks. The problem here is to get economists, ecologists and others to understand the properties of networks, and in particular that those which are most efficient in the short term (sometimes through being non-modular) may carry within their very structure the seeds of long-term instability.
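A toy simulation conveys the point. The sketch below is my own illustration, not the model that May and Haldane analyse [41]: it builds two networks with the same number of hubs and roughly the same number of links (one well mixed, one divided into blocks with only a handful of bridging links), and lets a contagion spread across links with a fixed probability. The modular network typically confines the cascade to a single block.

```python
import random

def spread(adjacency, seed, p_transmit=0.4, rng=None):
    """Simple contagion: each infected hub infects each neighbour with
    probability p_transmit. Returns the number of hubs eventually reached."""
    rng = rng or random.Random(0)
    infected, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()
        for nbr in adjacency[node]:
            if nbr not in infected and rng.random() < p_transmit:
                infected.add(nbr)
                frontier.append(nbr)
    return len(infected)

def random_network(n, n_links, rng):
    """Well-mixed network: links placed between random pairs of hubs."""
    adj = {i: set() for i in range(n)}
    while sum(len(s) for s in adj.values()) < 2 * n_links:
        a, b = rng.sample(range(n), 2)
        adj[a].add(b); adj[b].add(a)
    return adj

def modular_network(n, n_blocks, n_links, n_bridges, rng):
    """Same hubs, but links confined to blocks plus a few bridging links."""
    size = n // n_blocks
    adj = {i: set() for i in range(n)}
    placed = 0
    while placed < n_links - n_bridges:
        block = rng.randrange(n_blocks)
        a, b = rng.sample(range(block * size, (block + 1) * size), 2)
        if b not in adj[a]:
            adj[a].add(b); adj[b].add(a); placed += 1
    for _ in range(n_bridges):            # sparse links between blocks
        a, b = rng.sample(range(n), 2)
        adj[a].add(b); adj[b].add(a)
    return adj

rng = random.Random(1)
n, links, trials = 200, 600, 200
mixed = random_network(n, links, rng)
modular = modular_network(n, n_blocks=10, n_links=links, n_bridges=10, rng=rng)

mixed_avg = sum(spread(mixed, rng.randrange(n), rng=random.Random(t))
                for t in range(trials)) / trials
modular_avg = sum(spread(modular, rng.randrange(n), rng=random.Random(t))
                  for t in range(trials)) / trials
print(f"average cascade size, well-mixed network: {mixed_avg:.0f} of {n}")
print(f"average cascade size, modular network:    {modular_avg:.0f} of {n}")
```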

2) Modularity seems like a sound principle, but one must be aware that it is only applicable to certain types of network. It is difficult to visualize, for example, how the concept may be applied to the nested networks that are common in economics, ecology and society.

Nested networks also pose another problem. Paradoxically, the strongest contributors to the stability and persistence of the network as a whole are also those that are most vulnerable to extinction [42]. This finding applies equally to ecological networks and networks of business firms. Before we start messing around with such networks, we need to know more about why this paradoxical effect occurs.

3) Finally, our understanding of how signals and other effects are propagated through networks (especially those that contain a human element) is by no means complete [43]. Why do some YouTube videos, for example, “go viral”, while others attract virtually no attention? How do the activities and habits of individuals affect the behavior of the network as a whole? Do people who appear as hubs with many connections really act as “opinion-formers” (the answer seems to be “no” [44])? Why and how do some types of information and influence appear to travel through social networks in “bursts” [45]?

We are only at the beginning of understanding how these processes work, and the future is fascinating for researchers in the area. Let us hope that their results will appear in time to be useful in solving the serious problems that confront us as we attempt to make the best decisions in an increasingly complex world.

 

NOTES AND REFERENCES FOR FURTHER READING

1. Benjamin Franklin, Letter to Joseph Priestley, September 19, 1772 (http://homepage3.nifty.com/hiway/dm/franklin.htm).
2. Guillermo A. Calvo “Staggered prices in a utility-maximizing framework,” Journal of Monetary Economics 12 (1983) 383–398.
3. Itzhak Gilboa & David Schmeidler “Maxmin expected utility with non-unique prior,” Journal of Mathematical Economics 18 (1989) 141–153.
4. John C. Harsanyi “Cardinal Welfare, Individualistic Ethics and Interpersonal Comparisons of Utility,” Journal of Political Economy 63 (1955) 309–321.
5. Investopedia (http://www.investopedia.com/terms/u/utility.asp).
6. Len Fisher Rock, Paper, Scissors: Game Theory in Everyday Life (New York: Basic Books, 2008) 39–45.
7. John von Neumann & Oskar Morgenstern Theory of Games and Economic Behavior (Princeton, NJ: Princeton University Press, 1944).
8. Bentham’s “auto-icon”, with his skeleton padded out with straw and dressed in his original clothes, is still on display at University College London, which he founded and where I used to work. I walked past it every day, thankful that the original mummified head had now been replaced with a wax model, reputedly after some medical students had used the original as a football. Bentham’s presence is still officially noted at University Senate meetings.
9. The full text of Darwin’s list is available in The Complete Works of Charles Darwin Online (http://darwin-online.org.uk/content/frameset?viewtype=text&itemID=CUL-DAR210.8.2&pageseq=1), together with an image of the original document.
10. Dogs were ever-present in the Darwin household; they possessed eight over the course of the marriage (“It’s Dogged as Does It: A Biography of the Everpresent Canine in Charles Darwin’s Days,” Southwest Review, Fall 2008; http://wiki.answers.com/Q/What_dog_did_Charles_Darwin_have#ixzz1Wsd0Ak1r).
11. Richard P. Larrick, Richard E. Nisbett & James N. Morgan “Who Uses Cost-Benefit Rules of Choice? Implications for the Normative Status of Microeconomic Theory” in Judgment and Decision Making: An Interdisciplinary Reader (eds. Terry Connolly, Hal R. Arkes & Kenneth R. Hammond), 2nd edn (Cambridge: Cambridge University Press, 2000) 166–182.
12. Frank I. Michelman “Property, Utility and Fairness: Comments on the Ethical Foundations of ‘Just Compensation’ Law” Harvard Law Review 80 (1967) 1165–1258.
13. Christopher J. Schmidt “Ending the Mathews v. Eldridge Balancing Test: Time for a New Due Process Test” Southwestern Law Review (2008–2009) 287–305. My description of the Mathews test is paraphrased from this article.
14. See, for example, Paul Weirich “Causal Decision Theory” Stanford Encyclopedia of Philosophy (2008) (http://plato.stanford.edu/entries/decision-causal/). This is the main type of decision theory that is now used, but it is not the only one. One alternative is evidential decision theory, where the “best action” is the one that gives you the best (happiest) expectations (not necessarily rational) for the outcome. This approach has been characterized by the philosopher David Lewis (in my view slightly unfairly) as “an irrational policy of managing the news” (D. Lewis “Causal decision theory,” Australasian Journal of Philosophy 59 (1981) 5–30).

Classical decision theory is an enormous subject that is divided into “normative” (“a theory about how decisions should be made in order to be rational”) and “descriptive”, which is about how decisions are actually made. Its tentacles penetrate into economics (purchasing and investment decisions), politics (voting and collective decision making), psychology (how we make decisions) and philosophy (what are the requirements for rationality in decisions?). I touch on some of these topics in this book, but make no attempt to cover the whole field. There are many standard textbooks, but for the interested reader who would like something in plain and understandable language I can highly recommend a summary by Sven Ove Hansson that is available free on the Internet (Decision Theory: A Brief Introduction: www.infra.kth.se/~soh/decisiontheory.pdf).

15. Stephen Polasky, Stephen R. Carpenter, Carl Folke and Bonnie Keeler “Decision-making under great uncertainty: environmental management in an era of global change,” Trends in Ecology and Evolution 26 (2011) 398–404.

The authors describe the process of decision theory as: “In standard [causal] decision theory, uncertainty is represented by assuming a set of possible states of the system with a known probability for the occurrence of each state. The decision-maker chooses an action from a set of possible alternative actions. Outcomes are a joint product of the action and the state, generating a set of conditional probabilities of outcomes given the action. Each outcome yields a known net benefit (utility) … The standard objective in decision theory is to choose the action that maximises expected utility, which equals the net benefit of an outcome times its probability of occurrence, summed over all possible outcomes.”

16. In Newcomb’s problem, “an agent may choose either to take an opaque box or to take both the opaque box and a transparent box. The transparent box contains one thousand dollars that the agent plainly sees. The opaque box contains either nothing or one million dollars, depending on a prediction already made. The prediction was about the agent’s choice. If the prediction was that the agent will take both boxes, then the opaque box is empty. On the other hand, if the prediction was that the agent will take just the opaque box, then the opaque box contains a million dollars. The prediction is reliable. The agent knows all these features of his decision problem” (Paul Weirich “Causal Decision Theory” Stanford Encyclopedia of Philosophy (2008), op. cit.).

The only problem is that the application of causal decision theory utility criteria lands the agent with one thousand dollars rather than a million. This may seem like an artificial problem, but it has strong parallels with some features of medical diagnosis, where the application of decision theory can produce incorrect correlations between a behavioural symptom and a medical condition.

17. One big problem with decision theory is that it can all too readily lead to “black box” thinking, where the mental boxes of analysts and decision-makers contain only factors that are readily measurable or calculable, while the factors outside this mental box, which may be vastly more important, are simply ignored because they cannot be measured or calculated. Economists call such factors “externalities,” and often continue to ignore them, despite their obvious importance in bank crashes, economic melt-downs and the like.
18. Len Fisher Crashes, Crises and Calamities: How We Can Use Science to Read the Early-Warning Signs (New York: Basic Books, 2011) 53–91.
19. Gordium stood on the site of the modern-day Turkish town of Yassihüyük.
20. Gerd Gigerenzer, Peter M. Todd and the ABC Research Group Simple Heuristics That Make Us Smart (Oxford: Oxford University Press, 1999); Gerd Gigerenzer “Why Heuristics Work,” Perspectives on Psychological Science 3 (2008) 20–29; Gerd Gigerenzer & Henry Brighton “Homo Heuristicus: Why Biased Minds Make Better Inferences,” Topics in Cognitive Science 1 (2009) 107–143.
21. For a detailed summary, see Len Fisher The Perfect Swarm (New York: Basic Books, 2009).
22. Jonah Lehrer The Decisive Moment (Melbourne: The Text Publishing Company, 2009); published in the U.S. as How We Decide (New York: Houghton Mifflin Harcourt Publishing Company, 2009).
23. For a recent critical evaluation, see Gerd Gigerenzer & Wolfgang Gaissmaier “Heuristic Decision Making,” Annual Review of Psychology 62 (2011) 451–482 (www.annualreviews.org; doi: 10.1146/annurev-psych-120709-145346).
24. A. Ortmann, G. Gigerenzer, B. Borges & D.G. Goldstein “The recognition heuristic: a fast and frugal way to investment choice?” in Handbook of Experimental Economics Results: Vol. 1 (Handbooks in Economics No. 28) (eds. C.R. Plott & V.L. Smith) (Amsterdam: North Holland, 2008) 993–1003.
25. I.H. McCammon and P. Hageli “An evaluation of rule-based decision tools for travel in avalanche terrain” Cold Regions Science and Technology 47 (2007) 193–206.
26. J.C. Kattah, A.V. Talkad, D.Z. Wang, Y.H. Hsieh & D.E. Newman-Toker “HINTS to diagnose stroke in the acute vestibular syndrome. Three-step bedside oculomotor examination more sensitive than early MRI diffusion-weighted imaging” Stroke 40 (2009) 3504–3510.
27. R. García-Retamero, M. Takezawa & G. Gigerenzer “Does imitation benefit cue order learning?” Experimental Psychology 56 (2009) 307–320.
28. J.R. Hauser, M. Ding & S.P. Gaskin “Non-compensatory (and compensatory) models of consideration-set decisions” in Proceedings of the Sawtooth Software Conference (Delray Beach, FL, 2009).
29. P.M. Todd & G.F. Miller “From Pride and Prejudice to Persuasion: Satisficing in Mate Search” in Gerd Gigerenzer, Peter M. Todd and the ABC Research Group Simple Heuristics That Make Us Smart (Oxford: Oxford University Press, 1999).
30. Stephen Polasky, Stephen R. Carpenter, Carl Folke and Bonnie Keeler “Decision-making under great uncertainty: environmental management in an era of global change,” Trends in Ecology and Evolution 26 (2011) 398–404.
31. For comprehensive and accessible critical overviews, see Marten Scheffer Critical Transitions in Nature and Society (Princeton, NJ: Princeton University Press, 2009); New England Complex Systems Institute “Solving Problems of Science and Society” (http://necsi.edu/news/); Len Fisher & Marie-Valentine Florin “Slowly Moving Risks with Potentially Catastrophic Consequences” (report of a meeting of the International Risk Governance Council, Venice, August 24–26, 2011); Planet under Pressure Conference (London, March 2012) (forthcoming).
32. Donella H. Meadows, Dennis L. Meadows, Jørgen Randers & William W. Behrens III The Limits to Growth (New York: Universe Books, 1972).
33. Graham M. Turner “A Comparison of ‘The Limits to Growth’ with Thirty Years of Reality” Global Environmental Change 18 (2008) 397–411 (www.csiro.au/files/files/plje.pdf).
34. Johan Rockström et al “A Safe Operating Space for Humanity” Nature 461 (2009) 472–475.
35. Len Fisher “Shaping Policy: Science and Politics Need More Empathy” Nature 481 (2012) 29.
36. These examples are taken from Stephen Polasky, Stephen R. Carpenter, Carl Folke and Bonnie Keeler “Decision-making under great uncertainty: environmental management in an era of global change,” op. cit.
37. Marten Scheffer et al “Early-Warning Signals for Critical Transitions” Nature 461 (2009) 53–59.
38. See, for example, Reinette Biggs, Stephen R. Carpenter and William A. Brock “Turning back from the brink: Detecting an impending regime shift in time to avert it” Proceedings of the National Academy of Sciences of the USA 106 (2009) 826–831.
39. See, for example, Victor Galaz et al “Can web crawlers revolutionize ecological monitoring?” Frontiers in Ecology and the Environment 8 (2010) 99–104.
40. S. Wolfram “Computation theory of cellular automata” Communications in Mathematical Physics 96 (1984) 15–57.
41. Andrew G. Haldane & Robert M. May “Systemic risk in banking ecosystems” Nature 469 (2011) 351–355.
42. Serguei Saavedra, Daniel B. Stouffer, Brian Uzzi & Jordi Bascompte “Strong contributors to network persistence are most vulnerable to extinction” Nature 478 (2011) 233–236.
43. Albert-László Barabási Linked (New York: Penguin, 2003).
44. Duncan Watts “Challenging the Influentials Hypothesis” Measuring Word of Mouth 3 (2007) 201–211.
45. M. Karsai et al “Small but slow world: How network topology and burstiness slow down spreading” Physical Review E 83 (2011) 025102 (http://pre.aps.org/abstract/PRE/v83/i2/e025102); Albert-László Barabási Bursts (New York: Penguin, 2010).

Copyright: Len Fisher (2012)
