2020 AWorldWithoutWorkTechnologyAuto


Subject Headings: Age of Mass Technological Underemployment; Autor–Levy–Murnane Hypothesis.

Notes

Cited By

2021

  • https://onlinelibrary.wiley.com/doi/full/10.1111/ntwe.12186
    • QUOTE: ... In an important new book, Daniel Susskind argues that Keynes will be vindicated after all — if not quite according to schedule, then in short order thereafter. Let me summarise his argument. The world of work has been subjected to repeated technological revolutions, argues Susskind, which have relentlessly transformed working methods and employment patterns over time. Thus far, these transformations have been quite evenly balanced between a complementing force and a substituting force. The complementing force preserves existing roles (the ATM freeing up the bank teller to offer a more personalised service, without displacing labour), whilst the substituting force undermines existing roles and diminishes overall employment prospects (the industrial machinery which displaced skilled artisans organised in guilds). Technological displacement has affected high- and low-skilled occupations alternately, since some new technologies are ‘skill-biased’ (and push up demand for technical workers capable of using them), whilst others are ‘unskilled-biased’ (displacing artisans by employing cheaper, non-technical labour). Whilst these biases have short-run negative impacts on particular groups of workers and economic sectors, the assumption amongst economists until quite recently was that they basically equilibrated over time.

      More recently, things have changed. Since the 1990s, both skilled and unskilled workers have benefitted from automation (in terms of employment opportunities). Meanwhile, middle-skill occupations—and with them, the social base of advanced capitalist economies—have quite rapidly shrunk. The ‘ALM’ (Autor–Levy–Murnane) hypothesis explained this puzzle by pointing out that jobs are, in fact, made up of multiple discrete tasks. Many middle class professions bundle routine tasks with complex intellectual and emotional labour. Susskind provides an overview of new digital technologies to demonstrate the vulnerability of many of these tasks to automation—from driving lorries to conducting legal reviews and making medical diagnoses. But the ALM hypothesis, argues Susskind, whilst an advance on the skill/unskill-bias dualism, reproduces its optimism bias—by assuming that middle-skill job displacement will boost incomes in general by expanding productivity and consequently lead to new employment growth areas.

      Here is where Susskind delivers his break with the existing economists of automation. He challenges the ‘canonical model’ of economic orthodoxy by arguing that there is no employment equilibrating dynamic to technological innovation. The impact of technological progress on labour markets is not a series of cyclical disruptions and resets, he argues, but a secular (and accelerating) process of rendering more and more tasks automatable. ...

Quotes

Preface

Introduction

The “Great Manure Crisis” of the 1890s should have come as no surprise.1 For some time, in big cities like London and New York, the most popular forms of transport had relied upon horses—hundreds of thousands of them — to heave cabs, carts, wagons, wains, and a variety of other vehicles through the streets. As locomotives, horses were not particularly efficient: they had to take a break to rest and recover every few miles, which partly explains why quite so many were needed.2 Operating a basic carriage, for example, required at least three animals: two working in rotation to pull it along, plus one in reserve in case of a breakdown. The horse-drawn tram, the transit mode of choice for New Yorkers, relied on a team of eight, which took turns dragging it on a set of specially laid tracks. And in London, thousands of horse-drawn double-decker buses, modestly sized versions of today’s red ones, demanded about a dozen animals apiece for the task.3

With these horses came manure — and lots of it. A healthy horse produces somewhere between fifteen and thirty pounds of manure a day, almost the weight of a two-year-old child.4 One enthusiastic health officer working in Rochester, New York, calculated that the horses in his city alone produced enough in a year to cover an acre of land to a height of 175 feet, almost as high as the Leaning Tower of Pisa.5 Apocryphally, people at the time extrapolated from these calculations to an inescapably manure-filled future: a New York commentator who predicted that piles would soon reach the height of third-story windows, a London reporter who imagined that by the middle of the twentieth century the streets would be buried under nine feet of the stuff.6 Nor was the crisis simply about manure. Thousands of putrefying dead horses littered the roads, many deliberately left to decay to a size that made for easier disposal. In 1880 alone, about fifteen thousand horse carcasses were removed from New York City.7

It is said that policymakers did not know what to do.8 They couldn’t simply ban horses from the streets: the animals were far too important. In 1872, when the so-called Horse Plague hit the United States, with horses struck down by one of the worst outbreaks of equine flu in recorded history, large parts of the country’s economy came to a halt.9 Some even blame the epidemic for that year’s Great Fire of Boston; seven hundred buildings burned to the ground, they claim, because there were not enough horses to pull firefighting equipment to the scene.10 But the twist in the tale is that, in the end, policymakers didn’t need to worry. In the 1870s, the first internal combustion engine was built. In the 1880s, it was installed in the first automobile. And only a few decades later, Henry Ford brought cars to the mass market with his famous Model T. By 1912, New York had more cars than horses. Five years after that, the last horse-drawn tram was decommissioned in the city.11 The Great Manure Crisis was over.

The “Parable of Horseshit,” as Elizabeth Kolbert called it in the New Yorker, has been told many times over the years.12 In most versions of the story, the decline of horses is cast in an optimistic light, as a tale of technological triumph, a reassuring reminder that it is important to remain open-minded even when you find yourself knee-deep in a foul, seemingly intractable problem. But for Wassily Leontief, the Russian-American economist who won the Nobel Prize in 1973, the same events suggested a more unsettling conclusion. What he saw instead was how a new technology, the combustion engine, had taken a creature that, for millennia, had played a central role in economic life—not only in cities but on farms and fields—and, in only a matter of decades, had banished it to the sidelines. In a set of articles written in the early 1980s, Leontief made one of the most infamous claims in modern economic thought. What technological progress had done to horses, he said, it would eventually do to human beings as well: drive us out of work. What cars and tractors were to them, he thought, computers and robots would be to us.13

Today, the world is gripped again by Leontief’s fear. In the United States, 30 percent of workers now believe their jobs are likely to be replaced by robots and computers in their lifetime. In the UK, the same proportion think it could happen in the next twenty years.14 And in this book, I want to explain why we have to take these sorts of fears seriously — not always their substance, as we shall see, but certainly their spirit. Will there be enough work for everyone to do in the twenty-first century? This is one of the great questions of our time. In the pages that follow, I will argue that the answer is “no” and explain why the threat of “technological unemployment” is now real. I will describe the different problems this will create for us—both now and in the future—and, most important, set out how we might respond.

It was John Maynard Keynes, the great British economist, who popularized the term “technological unemployment” almost fifty years before Leontief wrote down his worries, capturing in a pithy pairing of words the idea that new technologies might push people out of work. In what follows, I will draw on many of the economic arguments that have been developed since Keynes to try to gain a better look back at what happened in the past, and a clearer glimpse of what lies ahead. But I will also seek to go well beyond the narrow intellectual terrain inhabited by most economists working in this field. The future of work raises exciting and troubling questions that often have little to do with economics: questions about the nature of intelligence, about inequality and why it matters, about the political power of large technology companies, about what it means to live a meaningful life, about how we might live together in a world that looks very different from the one in which we have grown up. In my view, any story about the future of work that fails to engage with these questions as well is incomplete.

NOT A BIG BANG, BUT A GRADUAL WITHERING

An important starting point for thinking about the future of work is the fact that, in the past, many others have worried in similar ways about what lies ahead — and been very wrong. Today is not the first time that automation anxiety has spread, nor did it first appear in the 1930s with Keynes. In fact, ever since modern economic growth began, centuries ago, people have periodically suffered from bouts of intense panic about being replaced by machines. Yet those fears, time and again, have turned out to be misplaced. Despite a relentless flow of technological advances over the years, there has always been enough demand for the work of human beings to avoid the emergence of large pools of permanently displaced people.

And so, in the first part of the book, I begin with this history, investigating why those who worried about being replaced by machines turned out repeatedly to be so wrong, and exploring how economists have changed their minds over time about the impact of technology on work. Then I turn to the history of artificial intelligence (AI) — a technology that has captured our collective imagination over the last few years, and which is largely responsible for the renewed sense of unease that many now feel about the future. AI research, in fact, began many decades ago, with an initial burst of enthusiasm and excitement, but that was followed by a slump into a long, deep winter when little progress was made. In recent years, though, there has been a rebirth, an intellectual and practical revolution that caught flat-footed many economists, computer scientists, and others who had tried to predict which activities machines could never do.

In the second part of the book, building on this history, and trying to sidestep the intellectual mistakes that others have made before, I explain how technological unemployment is likely to unfold in the twenty-first century. In a recent survey, leading computer scientists made the claim that there is a 50 percent chance that machines will outperform human beings at “every task” within forty-five years.15 But the argument I make does not rely on dramatic predictions like this turning out to be true. In fact, I find it hard to believe that they will. Even at the century’s end, tasks are likely to remain that are either hard to automate, unprofitable to automate, or possible and profitable to automate but which we will still prefer people to do. And despite the fears reflected in those polls of American and British workers, I also find it difficult to imagine that many of today’s jobs will vanish completely in years to come (to say nothing about new types of jobs that await in the future). Much of that work, I expect, will turn out to involve some tasks that lie beyond the reach of even the most capable machines.

The story I tell is a different one. Machines will not do everything in the future, but they will do more. And as they slowly, but relentlessly, take on more and more tasks, human beings will be forced to retreat to an ever-shrinking set of activities. It is unlikely that every person will be able to do what remains to be done; and there is no reason to imagine there will be enough demand for it to employ all those who are indeed able to do it.

In other words, if you picked up this book expecting an account of a dramatic technological big bang in the next few decades, after which lots of people suddenly wake up to find themselves without work, you will be disappointed. That scenario is not likely to happen: some work will almost certainly remain for quite some time to come. But, as time passes, that work is likely to sit beyond the reach of more and more people. And, as we move through the twenty-first century, the demand for the work of human beings is likely to wither away, gradually. Eventually, what is left will not be enough to provide everyone who wants it with traditional well-paid employment.

A useful way of thinking about what this means is to consider the impact that automation has already had on farming and manufacturing in many parts of the world. Farmers and factory workers are still needed: those jobs have not completely vanished. But the number of workers that is needed has fallen in both cases, sometimes precipitously—even though these sectors produce more output than ever before. There is, in short, no longer enough demand for the work of human beings in these corners of the economy to keep the same number of people in work. Of course, as we shall see, this comparison has its limits. But it is still helpful in highlighting what should actually be worrying us about the future: not a world without any work at all, as some predict, but a world without enough work for everyone to do.

There is a tendency to treat technological unemployment as a radical discontinuity from economic life today, to dismiss it as a fantastical idea plucked out of the ether by overly neurotic shock-haired economists. By exploring how technological unemployment might actually happen, we will see why that attitude is a mistake. It is not a coincidence that, today, worries about economic inequality are intensifying at the exact same time that anxiety about automation is growing. These two problems — inequality and technological unemployment — are very closely related. Today, the labor market is the main way that we share out economic prosperity in society: most people’s jobs are their main, if not their only, source of income. The vast inequalities we already see in the labor market, with some workers receiving far less for their efforts than others, show that this approach is already creaking. Technological unemployment is simply a more extreme version of that story, but one that ends with some workers receiving nothing at all.

In the final part of the book, I untangle the different problems created by a world with less work and describe what we should do about them. The first is the economic problem just mentioned: how to share prosperity in society when the traditional mechanism for doing so, paying people for the work that they do, is less effective than in the past. Then I turn to two issues that have little to do with economics at all. One is the rise of Big Tech, since, in the future, our lives are likely to become dominated by a small number of large technology companies. In the twentieth century, our main worry may have been the economic power of corporations: but in the twenty-first, that will be replaced by fears about their political power instead. The other issue is the challenge of finding meaning in life. It is often said that work is not simply a means to a wage but a source of direction: if that is right, then a world with less work may be a world with less purpose as well. These are the problems we will face, and each of them will demand a response.

A PERSONAL STORY

The stories and arguments in this book are, to some extent, personal ones. About a decade ago, I began to think about technology and work in a serious way. Well before this, however, it had been an informal interest, something I often mulled over. My father, Richard Susskind, had written his doctorate in the 1980s at Oxford University on artificial intelligence and law. During those years, he had squirreled himself away in a computing laboratory, trying to build machines that could solve legal problems. (In 1988, he went on to codevelop the world’s first commercially available AI system in law.) In the decades that followed, his career built upon this work, so I grew up in a home where conundrums about technology were the sorts of things we chewed over in dinner-table conversation.

When I left home, I went to Oxford to study economics. And it was there, for the first time, that I was exposed to the way that economists tend to think about technology and work. It was enchanting. I was enthralled by the tightness of their prose, the precision of their models, the confidence of their claims. It seemed to me that they had found a way to strip away the disorienting messiness of real life and reveal the heart of the problems.

As time passed, my initial enchantment dulled. Eventually, it disappeared. After graduating, I joined the British government — first in the Prime Minister’s Strategy Unit, then in the Policy Unit in 10 Downing Street. There, buoyed by technologically inclined colleagues, I started to think more carefully about the future of work and whether the government might have to help in some way. But when I turned for help to the economics I had learned as an undergraduate, it was far less insightful than I had hoped. Many economists, as a matter of principle, want to anchor the stories they tell in past evidence alone. As one eminent economist put it, “Although we all enjoy science fiction, history books are usually a safer guide to the future.”16 I was not convinced by this sort of view. What was unfolding in the economy before me looked radically different from experiences of what had come before. I found this very disconcerting.

And so, I left my role in British government and, after time spent studying in America, returned to academia to explore various questions about the future of work. I completed a doctorate in economics, challenging the way that economists had traditionally thought about technology and work, and tried to devise a new way to think about what was happening in the labor market. At the same time, I teamed up with my father to write The Future of the Professions, a book that explored the impact of technology on expert white-collar workers: lawyers, doctors, accountants, teachers, and others. When we began our research for that project a decade ago, there was a widespread presumption that automation would only affect blue-collar workers. It was thought that professionals were somehow immune from change. We challenged that idea, describing how new technologies would allow us to solve some of the most important problems in society—providing access to justice, keeping people in good health, educating our children—without relying on traditional professionals as we had done in the past.17

Insights from both my academic research and our book on the professions will reappear in the pages that follow, sanded into better shape through subsequent experience and thinking. In short, then, this book captures my own personal journey, a decade spent thinking almost entirely about one particular issue—the future of work.

GOOD PROBLEMS TO HAVE

Although these opening words may suggest otherwise, this book is optimistic about the future. The reason is simple: in decades to come, technological progress is likely to solve the economic problem that has dominated humanity until now. If we think of the economy as a pie, as economists like to do, the traditional challenge has been to make that pie large enough for everyone to live on. At the turn of the first century AD, if the global economic pie had been divided into equal slices for everyone in the world, each person would have received just a few hundred of today’s dollars per year. Most people lived around the poverty line. Roll forward one thousand years, and roughly the same would have been true. Some even claim that, as late as 1800, the average person was no more materially prosperous than her equivalent back in 100,000 BC.18

But over the last few hundred years, economic growth has soared, and this growth was driven by technological progress. Economic pies around the world have become much bigger. Today, global GDP per capita, the value of those equally sized individual slices, is already about $10,720 a year (an $80.7 trillion pie shared out among 7.53 billion people).19 If economies continue to grow at 2 percent per year, our children will be twice as rich as us. If we expect a measlier 1 percent annual growth, then our grandchildren will be twice as well off as we are today. We have, at least in principle, come very close to solving the problem that plagued our fellow human beings in the past. As the economist John Kenneth Galbraith so lyrically put it, “man has escaped for the moment the poverty which was for so long his all-embracing fate.”20
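The arithmetic behind these claims is the standard compound-growth calculation: divide world output by population to get the size of each slice, and use the doubling time log(2)/log(1+g) to see how long incomes take to double at growth rate g. Below is a minimal sketch in Python using only the figures quoted in the passage above; the mapping of doubling times onto "children" and "grandchildren" is an illustrative assumption.

```python
import math

# Back-of-the-envelope check of the figures quoted above:
# an $80.7 trillion world economy shared among 7.53 billion people,
# and doubling times under 2% and 1% annual compound growth.

world_gdp = 80.7e12   # dollars per year
population = 7.53e9   # people

per_capita = world_gdp / population
print(f"GDP per capita: ${per_capita:,.0f} per year")   # roughly $10,700

for growth_rate in (0.02, 0.01):
    # Doubling time under compound growth: ln(2) / ln(1 + g)
    doubling_years = math.log(2) / math.log(1 + growth_rate)
    print(f"At {growth_rate:.0%} annual growth, incomes double in about {doubling_years:.0f} years")

# ~35 years at 2% growth (roughly one generation, "our children"),
# ~70 years at 1% growth (roughly two generations, "our grandchildren").
```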

Technological unemployment, in a strange way, will be a symptom of that success. In the twenty-first century, technological progress will solve one problem, the question of how to make the pie large enough for everyone to live on. But, as we have seen, it will replace it with three others: the problems of inequality, power, and purpose. There will be disagreement about how we should meet these challenges, about how we should share out economic prosperity, constrain the political power of Big Tech, and provide meaning in a world with less work. These problems will require us to engage with some of the most difficult questions we can ask — about what the state should and should not do, about the nature of our obligations to our fellow human beings, about what it means to live a meaningful life. But these are, in the final analysis, far more attractive difficulties to grapple with than the one that haunted our ancestors for centuries — how to create enough for everyone to live on in the first place.

Leontief once said that “if horses could have joined the Democratic party and voted, what happened on farms might have been different.”21 It is a playful phrase with a serious point. Horses did not have any control over their collective fate, but we do. I am not a technological determinist: I do not think the future must be a certain way. I agree with the philosopher Karl Popper, the enemy of those who believe that the iron rails of our fate have already been set down for us to trundle along, when he says that “the future depends on ourselves, and we do not depend on any historical necessity.”22 But I am also a technological realist: I do think that our discretion is constrained. In the twenty-first century, we will build systems and machines that are far more capable than those we have today. I don’t believe we can escape that fact. These new technologies will continue to take on tasks that we thought only human beings would ever do. I do not believe we can avoid that, either. Our challenge, as I see it, is to take those unavoidable features of the future as given, and still build a world where all of us can flourish. That is what this book is about.

...

PART I: THE CONTEXT

1. A History of Misplaced Anxiety

Economic growth is a very recent phenomenon. In fact, for most of the three hundred thousand years that human beings have been around, economic life has been relatively stagnant. Our more distant ancestors simply hunted and gathered what little they needed to survive, and that was about it.1 But over the last few hundred years, that economic inactivity came to an explosive end. The amount each person produced increased about thirteen-fold, and world output rocketed nearly three hundredfold.2 Imagine that the sum of human existence was an hour long: most of this action happened in the last half-second or so, in the literal blink of an eye.

Economists tend to agree with one another that this growth was propelled by sustained technological progress, though not on the reasons why it started just where and when it did—in Western Europe, toward the end of the eighteenth century.3 One reason may be geographical: certain countries had bountiful resources, a hospitable climate, and easily traversable coastlines and rivers for trade. Another may be cultural: people in different communities, shaped by very different intellectual histories and religions, had different attitudes toward the scientific method, finance, hard work, and each other (the level of “trust” in a society is said to be important). The most common explanation of all, though, is institutional: certain states protected property rights and enforced the rule of law in a way that encouraged risk-taking, hustle, and innovation, while others did not.

...

The Productivity Effect

...

... In other settings, new technologies may automate some tasks, taking them out of the hands of workers, but make those same workers more productive at the tasks that remain for them to do in their jobs. ...

...

The Big Picture

...

... the future, they say, holds both obsolescence and ever-greater relevance; technology is a threat and an opportunity; a rival and a partner, a foe and a friend ...

...

... Technological progress has brought many disruptions and dislocations, as we have seen; but from the Industrial Revolution until today, workers who worried that machines would permanently replace them have largely been proven wrong. Up until now, in the battle between the harmful substituting force and the helpful complementing force, the latter has won out, and there has always been a large enough demand for the work that human beings do. We can call this the Age of Labour.

2. The Age of Labor

...

The Twentieth Century and Before

...

A New Story in the 21st Century

...

... The hollowing out of the labor market was a new puzzle. And the canonical model that dominated economic thinking in the late twentieth century was powerless to solve it. It was narrowly focused on just two groups of workers, the low-skilled and the high-skilled, and had no way to explain why middling-skilled workers were facing such a very different fate from their low- and high-skilled contemporaries. A new account was needed. Economists went back to their intellectual drawing boards. And over the past decade or so, intellectual support has emerged for an entirely different way of thinking about technology and work. Pioneered by a group of MIT economists—David Autor, Frank Levy, and Richard Murnane—it became known as the “Autor-Levy-Murnane hypothesis,” or the “ALM hypothesis” for short.21 A decade ago, when I began to think seriously about the future, this was the story I was handed to help me do so.22

The ALM hypothesis built upon two realizations. The first of these was simple: looking at the labor market in terms of “jobs,” as we often do, is misleading. When we talk about the future of work, we tend to think in terms of journalists and doctors, teachers and nurses, farmers and accountants; and we ask whether, one day, people who have one of these jobs might wake up and find a machine in their place. But thinking like this is unhelpful because it encourages us to imagine that a given job is a uniform, indivisible blob of activity: lawyers do “lawyering,” doctors “doctoring,” and so on. If you look closely at any particular job, though, it is obvious that people perform a wide variety of different tasks during their workday. To think clearly about technology and work, therefore, we have to start from the bottom up, focusing on the particular tasks that people do, rather than looking from the top down, looking only at the far more general job titles.

The second realization was subtler. With time, it became clear that the level of education required by human beings to perform a given task—how “skilled” those people were—was not always a helpful indication of whether a machine would find that same task easy or difficult. Instead, what appeared to matter was whether the task itself was what the economists called “routine.” By “routine,” they did not mean that the task was necessarily boring or dull. Rather, a task was regarded as “routine” if human beings found it straightforward to explain how they performed it—if it relied on what is known as “explicit” knowledge, knowledge which is easy to articulate, rather than “tacit” knowledge, which is not.23 Autor and his colleagues believed that these “routine” tasks must be easier to automate. Why? Because when these economists were trying to determine which tasks machines could do, they imagined that the only way to automate a task was to sit down with a human being, get her to explain how she would perform that task, and then write a set of instructions based on that explanation for machines to follow.24 For a machine to accomplish a task, Autor wrote, “a programmer must first fully understand the sequence of steps required to perform that task, and then must write a program that, in effect, causes the machine to precisely simulate these steps.” If a task was “non-routine”—in other words, if human beings struggled to explain how they performed it—then it would be difficult for programmers to specify it as a set of instructions for the machine.25

The ALM hypothesis brought these two ideas together. Machines, it said, could readily perform the “routine” tasks in a job, but would struggle with the “non-routine” tasks. ...
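The "routine equals explicitly specifiable" idea can be made concrete with a small sketch. The task, rules, and thresholds below are hypothetical illustrations, not examples from the book; the point is only that a routine task can be written down as an explicit sequence of checks, whereas a non-routine one cannot.

```python
# Illustrative sketch of the ALM distinction (hypothetical task and thresholds):
# a "routine" task is one whose steps a person can state explicitly, so a
# programmer can encode them as rules for a machine to follow.

def screen_loan_application(income: float, debt: float, years_employed: int) -> str:
    """Follow an explicit, articulable sequence of steps; no tacit judgment involved."""
    if years_employed < 2:
        return "reject"              # rule 1: require a stable employment history
    if debt > 0.4 * income:
        return "reject"              # rule 2: cap the debt-to-income ratio at 40%
    return "approve"                 # every explicit check passed

print(screen_loan_application(income=50_000, debt=10_000, years_employed=5))  # -> approve

# A "non-routine" task resists this treatment: there is no agreed sequence of steps
# to write down for comforting an anxious patient or crafting a persuasive legal
# argument, which is why the ALM hypothesis expected machines to struggle there.
```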

...

Insights from the ALM Hypothesis

...

3. The Pragmatist Revolution

...

Bottom-Up; Not Top-Down

...

... Just as the God of the Old Testament created man in His image, the AI researchers tried to build their machines in their own image, too. ...

...

... The pragmatist revolution in AI requires us to make a similar reversal in how we think about where the abilities of man-made machines come from. Today, the most capable systems are not those that are designed in a top-down way by intelligent human beings. In fact, just as Darwin found a century before, remarkable capabilities can emerge gradually from blind, unthinking, bottom-up processes that do not resemble human intelligence at all. ...

...

4. Underestimating Machines

...

The Fall of Intelligence

...

... For the moment, human beings may be the most capable machines in existence – but there are a great many other possible designs that machines could take. ...

...

PART II: THE THREAT

5. Task Encroachment

...

Different Paces in Different Places

...

Different Tasks

...

... The first reason is the most straightforward: different economies are made up of very different types of jobs, some of which involve tasks that are far harder to automate than others. It is inevitable, therefore, that certain technologies will be far more useful in some places than in others. ...

...

6. Frictional Technological Unemployment

...

Work, Out of Reach

...

... Despite all the technological accomplishments we have seen in recent decades, vast areas of human activity cannot yet be automated, and limits to task encroachment still remain in place. The historical trend – where there is always significant demand for the work of human beings – is likely to go on for a while. But as time passes, this will be of comfort to a shrinking group of people. Yes, many tasks are likely to remain beyond the capabilities of machines, and technological progress will tend to raise the demand for human beings to do them. ...

...

The Skills Mismatch

...

The Identity Mismatch

...

In South Korea, something like this is already happening. It is a country famed for the intensity of its academic culture, where about 70 percent of young people have degrees. But half of the unemployed there are college graduates as well.20 In part, this is because these highly qualified people are reluctant to take up the work that is available to them—poorly paid, insecure, or low-status roles, simply not what they imagined they were training to become.21

The fact that workers are willing to shun employment like this is particularly important because there is no reason to think that technological progress will necessarily create more appealing work in the future. There is a common fantasy that technological progress must make work more interesting—that machines will take on the unfulfilling, boring, dull tasks, leaving behind only meaningful things for people to do. They will free us up, it is often said, to “do what really makes us human.” (The thought is fossilized in the very language we use to talk about automation: the word robot comes from the Czech robota, meaning drudgery or toil.) But this is a misconception. We can already see that a lot of the tasks that technological progress has left for human beings to do today are the “non-routine” ones clustered in poorly paid roles at the bottom of the labor market, bearing little resemblance to the sorts of fulfilling activities that many imagined as being untouched by automation. There is no reason to think the future will be any different.

For adult men in the United States, a similar story is unfolding, where some workers likewise appear to have left the labor market out of choice rather than by necessity—though for a different reason. Displaced from manufacturing roles by new technologies, they prefer not to work at all rather than take up “pink-collar” work—an unfortunate term intended to capture the fact that many of the roles currently out of reach of machines are disproportionately held by women, like teaching (97.7 percent of preschool and kindergarten teachers are women), nursing (92.2 percent), hairdressing (92.6 percent), housekeeping (88 percent), social work (82.5 percent), and table-waiting (69.9 percent).22

...

The Place Mismatch

...

Not Just Unemployed

...

7. Structural Technological Unemployment

8. Technology and Inequality

...

Inequality in Capital Income

... the income from traditional capital is even more unevenly shared out across society than the income from salaries and wages. This fact is true ‘without exception’, notes Thomas Piketty, in all countries and at all times for which data is available. ...

...

PART III: THE RESPONSE

9. Education and Its Limits

...

How We Teach

...

... Teachers cannot tailor their material to the specific needs of every student, so in fact the education provided tends to be ‘one size fits none’. This is particularly frustrating because tailored tuition is known to be very effective: an average student who receives one-to-one tuition will tend to outperform 98 per cent of ordinary students in a traditional classroom. ...

...

... When Sebastian Thrun taught his computer science class to 200 Stanford students, and then to 160,000 non-Stanford students online, the top Stanford student ranked a measly 413th. ‘My God,’ cried Thrun on seeing this, ‘for every great Stanford student, there’s 412 amazingly great, even better students in the world.’ ...

...

10. The Big State

11. Big Tech

12. Meaning and Purpose

Epilogue

...

References


Daniel Susskind. (2020). “A World Without Work: Technology, Automation and How We Should Respond.”