Welcome to New World Same Humans, a new weekly newsletter by TrendWatching’s Global Head of Trends and Insights, David Mattin.
This week I thought about virus outbreaks, existential risk, energy exorcisms and modernity.
Let’s do this.
Notes on an Epidemic
At the time of writing there are over 14,000 confirmed cases of the coronavirus, and 304 confirmed deaths. It’s thought that 2019-nCoV jumped from animals to humans in a seafood market in Wuhan; human infection has now been reported in at least 16 other countries, including Japan, Australia, Canada and Germany.
For those old enough to remember it, the SARS outbreak of 2002 comes to mind. And part of what’s notable about news around the Wuhan virus is how it highlights many of the trends and innovations that have changed the world since SARS.
An AI epidemiological tool helped give early warning of the outbreak. US Republicans are using the virus as a lever to promote US industry over Chinese. Meanwhile, some Chinese villages are deploying drones to police the behaviour of their inhabitants; see this bemused senior citizen being approached and told to go inside and wash her hands. AI, the shifting tectonic plates of global power, and drones: new world, same humans indeed.
All this was much on my mind last week. On Monday I went to an event with the cosmologist Martin Rees, hosted by the new London-based ‘slow news’ startup Tortoise. Lord Rees is the Astronomer Royal. Early on, he shared with us the two most common questions he gets from the public. First, do you do the Queen’s horoscopes? His answer: ‘if she wanted one, I’m the person she’d ask’. Second, are you worried about a calamitous asteroid strike against Earth?
The second is closer to the mark, because Rees is an expert on existential risks. In his 2018 book On the Future: Prospects for Humanity he outlines several key risks that could significantly derail the human story, or even end it. He isn’t much worried about a catastrophic asteroid strike, because it is unfathomably unlikely. Nor is he much worried about long-running trends, such as the rising global population, because we can plan for these. What worries Lord Rees are the kind of out-of-nowhere events that are impossible to predict, and that may shatter us before we can muster a proper response. Think a nuclear exchange, a robot uprising, or a devastating financial crash. Or, indeed, a deadly pandemic.
Underlying all this is a broader point about an interconnected world. We live in a world, says Rees, made possible by networks: of electricity grids, computers, transport hubs and more. Those networks have fuelled incredible advances. But they also expose us to new forms of risk:
Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdown — real-world analogues of what happened to the financial system in 2008. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days. And social media can spread panic and rumour at the speed of light.
Humankind inside modernity is increasingly a single, highly advanced, interconnected civilization. And that exposes us to sudden and catastrophic shocks.
So what to do? I’m inevitably drawn back to one of my obsessions: the competition between the 21st century’s two leading systems of government. On the one hand, liberal democracy. On the other, the unprecedented experiment in techno-authoritarianism taking shape in China. Which is better placed to meet the threat posed by existential risks in the decades ahead?
The central challenge is one of overwhelming complexity. Inhabitants of modernity are subject to a unified economic-social-technological crucible that is too complex for anyone to understand. The traditional, 20th-century analysis held that liberal democracy produces more effective government than authoritarianism because democracies disperse information processing — to individual citizens, markets, institutions — and so arrive at better answers. Meanwhile authoritarian governments, which centralise information processing around a few people, tend to crumble under the weight of their impossible task. But as many have pointed out, the maturing of AI is set to neutralize this advantage and even tip the balance the other way.
We already know that AI gave an early warning of the Wuhan virus. And that it could play a useful role in helping determine the distribution of resources in the event of a global pandemic. It seems inevitable that AI will play a far greater role in government in future, helping to allocate resources, set monetary policy, assess global foreign policy risks, and more.
A new army of AIs, under the aegis of a centralized government, will be able to process more information faster than any market or institution. That solves authoritarianism’s information problem. Sure, liberal democracies will also be able to use AI. But there will be endless, slow arguments: about the policy recommendations made by those AIs, about the value assumptions that led to those recommendations (more on this below), and so on. By contrast, policies can become reality fast when you don’t have to worry about internal politics. Wuhan has almost finished building a hospital that it started a little over a week ago: check out the livestream. In the 21st century, it can seem that liberal democracy will be the system burdened with functional disadvantages.
Meanwhile, Lord Rees says that to meet the challenge of existential risks we need more long-range planning. Here, too, the Chinese system has a natural edge. When politicians don’t have to worry about the next election, they’re more able to see beyond the end of next week.
For anyone with an investment in liberal democracy and the foundations that support it — human rights, individual freedom, consent and more — all this is uneasy-making. There’s a danger that old democracies, increasingly paralysed in the face of complexity that they cannot process, retreat instead into nostalgia and meaningless spectacle.
If liberal democracy is to remain vibrant — and cope with the 21st century — perhaps it will need to find a way to wed its traditional strengths to those of a newly emerging techno-centralisation. Sooner or later, a powerful, networked shock will come. Systems of government that can’t cope with it won’t be much use.
But the Wuhan virus is almost certainly not that shock. Early signs suggest that most people who are infected will experience a flu-like illness, and then recover.
In Search of the Zeroth Law
Briefly: two intriguing reports for those who want to dive deeper into the above.
First, Cambridge University has just established a new Centre for the Future of Democracy. To mark launch week the Centre released a report, Global Satisfaction with Democracy 2020. The report finds that dissatisfaction with democracy is reaching an all-time high around the world, particularly in developed nations.
Second, if you buy the idea that AI will play an increasingly important role in governance in future, a few key problems quickly become apparent. Foremost among them is: what will these AIs believe? In other words, given that people disagree wildly on questions of politics and ethics, what starting values should our AI overlords possess?
Isaac Asimov famously developed his Zeroth Law of Robotics to address the question of how intelligent technologies should treat humans. It states: a robot may not harm humanity, or through inaction allow humanity to come to harm. In truth, it doesn’t get us far. Who decides what is harmful and what isn’t?
Researchers at DeepMind address these issues in a new philosophy paper called Artificial Intelligence, Values and Alignment. They suggest an approach to the problem that is based on Rawls’s original position, arguing that we can develop fair, balanced ethical principles for an AI by asking what such principles a collection of free and equal individuals would agree on.
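To make the original-position idea concrete, here is a toy sketch in Python. The candidate principles, the welfare numbers, and the use of a maximin decision rule are all illustrative assumptions of mine, not DeepMind’s actual proposal; the point is only to show how an agent choosing behind a ‘veil of ignorance’ — not knowing which social position it will occupy — might select a principle.

```python
# Toy sketch of a Rawlsian "veil of ignorance" choice rule.
# The principles and payoff numbers below are hypothetical, for illustration only.

# Each candidate principle maps social positions to welfare levels.
principles = {
    "utilitarian": {"best_off": 95, "median": 60, "worst_off": 5},
    "egalitarian": {"best_off": 55, "median": 50, "worst_off": 45},
    "priority_to_worst_off": {"best_off": 70, "median": 55, "worst_off": 35},
}

def veil_of_ignorance_choice(candidates):
    """Pick the principle whose worst-off position fares best (maximin):
    a cautious choice for an agent who doesn't know which position
    it will end up occupying."""
    return max(candidates, key=lambda name: min(candidates[name].values()))

print(veil_of_ignorance_choice(principles))  # -> "egalitarian" under these numbers
```

Under these made-up payoffs the maximin rule favours the principle that protects the worst-off position, which is the intuition Rawls’s original position is meant to capture; a different decision rule (say, expected-welfare maximisation) would pick differently, which is exactly why the starting values matter.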
Gwyneth, Modernity and Maslow
At TrendWatching we’re obsessed with consumerism. And anyone who watches consumer trends as intently as us will have been fascinated by the launch of a new show on Netflix last week. I speak, of course, of The Goop Lab.
As it sounds, The Goop Lab is essentially Gwyneth Paltrow’s infamous Goop wellness store made into a six-part Netflix documentary. In each episode, Gwyneth and Goop CEO Elise Loehnen walk starry-eyed through encounters with new, strange and wonderful wellness practices, such as energy exorcisms, psychic readings, and injecting your own blood into your face.
Of course, it’s easy to make fun of The Goop Lab. But just because something is easy, that doesn’t mean we shouldn’t do it. Most of the wellness treatments evangelised by the show are at best placebos for the credulous rich, and at worst potentially dangerous.
There’s no denying, though, that the show does clarify something about the strangeness of life inside the early(ish) 21st-century. Goop is a reminder that for billions around the world, modernity has been a stunningly successful project in the fulfilment of material human needs. By liberating us from the struggle for subsistence and basic comfort, modernity freed us to address a set of higher-order needs: wellness, creativity, status, and more. For billions of inhabitants of modernity today, it’s our obsession with these higher-order needs that defines our daily lives. Am I happy? Do I love well? What is my purpose? What do I believe in?
But there’s another side to this story: one that informed the discussion of existential risk earlier. New technologies haven’t only liberated us to address our higher-order needs. They also pose new threats to our foundational needs, for survival and basic security. They are the reason that we anxiously ask ourselves if we may one day be the victims of a robot uprising, or a nuclear exchange. If a new, genetically manipulated virus may be set loose on us. If financial complexity will become so great that the entire world economy tips over.
And technologies also expose us to worrying facts that we would otherwise have remained ignorant about. After all, it’s technology that allows people around the world, in the early weeks of 2020, to watch an epidemic spreading from its single point of origin and wonder grimly when it will reach their own shores.
Herein, I think, lies something of the strange, fractured, unreal feeling about life inside advanced techno-consumerism. On the one hand we strive endlessly to find our #bestselves, and obsess over our spiritual energy. On the other, we anxiously guard against a new global plague. To live inside modernity is to live at both ends of Maslow’s pyramid simultaneously, ever in danger of spiralling off the apex or crashing through the floor.
But in the end this is, I suppose, only a new take on an old predicament. After all, ancient humans prayed to their gods and worried about being raided by the tribe in the next valley over. The world changes, but the human essentials have a way of staying the same.
In Varietate Concordia
Time for me to go.
On Friday, the UK left the EU. Leading up to the day there had been some debate here about whether Big Ben should bong for Brexit Day. But the clock tower is currently covered in scaffolding — strange how life sometimes emulates the symbolism of a bad novel — so it didn’t.
Instead, a grainy picture of the tower was projected across Downing Street, and eleven recorded bongs sounded plaintively into the night air at 11pm on Friday. Who said British surrealism died with Monty Python?
So we’re out. A spirit of European fraternity, supercharged by the EU, has thrived these past 50 years. It will serve us — innovators, founders, business leaders, policy makers and more — to remember that no political instrument and no recorded bong can diminish that spirit, unless we let it.
Until next week,
P.S.: kudos to Nikki Ritmeijer for the illustrations in this email.