Mechanical Engineering magazine (paywalled until next month) and the Financial Times, among others, recently reviewed the book Race Against the Machine by economists Erik Brynjolfsson and Andrew McAfee.  The FT reviewer writes:

Pattern recognition, the authors think, will quickly allow machines to branch out further. Computers will soon drive more safely than humans, a fact Google has demonstrated by allowing one to take out a Toyota Prius for a 1,000-mile spin. Truck and taxi drivers should be worried – but then so should medical professionals, lawyers and accountants; all of their jobs are at risk too. The outcome is a nightmarish but worryingly convincing vision of a future in which an ever-decreasing circle of professions is immune from robotic encirclement.

And ME magazine quotes McAfee in an interview:

Once computers get better than people, you don't have to hire people to do that job any more.  That doesn't mean that people can't find work.  There will always be an amount of work to do, but they won't like the wages they are offered.

Both reviewers also hint that McAfee and Brynjolfsson offer a partial explanation of the "jobless recovery", but either the book's argument is weak or the reviewers do a poor job of summarizing it.  Such a purported explanation might be the main attraction for most readers, but I'm more interested in the longer-term picture.  Whether it's the "nightmarish vision" of the future mentioned in the FT, or the simpler point about wages made by McAfee, this might be a good hook to get the general public thinking about the long-term consequences of AI.

Is that a good idea?  Should sleeping general publics be left to lie?  There seems to be significant reluctance among many LessWrongers to stir the public, but have we ever hashed out the reasons for and against?  Please describe any non-obvious reasons on either side.


The first title reminds me of Mark Twain's "If work were so pleasant, the rich would keep it for themselves."

Thanks for the links. To your knowledge, has LessWrong ever directly addressed the question of whether this issue can, or should, be used to get the public thinking about the dramatic transformations AI might bring?

Thanks. For others not familiar with epub format - like me, a minute ago - you can use the free program calibre to convert to PDF, Kindle's format (MOBI), or TXT, among others.
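For instance, here's a minimal sketch of scripting that conversion (this assumes calibre's ebook-convert command-line tool is installed and on your PATH; the file names are just placeholders):

```python
import subprocess

# Convert an EPUB to Kindle's MOBI format via calibre's bundled
# ebook-convert tool (file names below are placeholders).
subprocess.run(
    ["ebook-convert", "race_against_the_machine.epub", "race_against_the_machine.mobi"],
    check=True,
)
```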

Hold on, what's the legality of this?

It's a fair cop.

Well, never mind. I suppose linking to it isn't actually a crime in most places, yet.

I am not an accountant, but isn't the main job of accountants to prevent fraud? I would imagine that computing advancements would help fraudsters as much as accountants, so we'll still need accountants.

Also, the healthcare industry is notoriously immune to progress. Most hospitals haven't even digitized their records, though they've had the technology for three decades.

I am not an accountant, but isn't the main job of accountants to prevent fraud?

Is it? I thought it was keeping books, reconciling statements and errors, preparing statistics and summaries, and complying with legal requirements.

Accountants solely as forensic investigators - that sounds like a very skilled, relatively unusual kind of accountant, the sort of difficult-to-automate job that would be left after computers ate all the other jobs accountants have held since time immemorial.

But don't worry, I'm sure advances in machine learning and things like IBM Watson will eat that job too; if not in the next decade or three, then the next century or two.

(Edit: I see there is a "flickering" of votes (at least 3 up and 4 down so far). Is my post merely controversial, or do specific parts of it warrant up/down votes?)

To answer your question, the subject of specialized AI and its consequences is already out there. From what I see, the idea of evil machines taking our jobs is already going mainstream. The only remedy to this technophobia I can think of is to outpace it with more (justified) technophilia.

About the article you cite, I find this kind of talk increasingly crazy (emphasis mine):

Truck and taxi drivers should be worried – but then so should medical professionals, lawyers and accountants; all of their jobs are at risk too. The outcome is a nightmarish but worryingly convincing vision of a future in which an ever-decreasing circle of professions is immune from robotic encirclement.

So, machines are increasingly working for us, and it is supposed to be bad news?! To me, that's humanity being crazy, again.

Sure, if we insist on 40-hour work weeks, and use the productivity increase to produce more stuff instead of concentrating on, say, technological development and education, then yeah, that's very bad news: unemployment rises to unsustainable levels, resources deplete faster and faster, until the economic system collapses, and things get nasty from there. Or we get to a Blade Runner-like future. Or something Bad™.

We can do better. Machines are taking over our jobs? Cool! More free time! We could then:

  • work less, like 30, or even 20 hours per week.
  • work on other things, like (self) education, science, and even automating further work.
  • reduce, maybe stop the exploitation of the South by Western countries.
  • reduce, maybe stop pollution, instead of increasing it.
[anonymous]

work on other things, like (self) education, science, and even automating further work.

Most people don't have the cognitive horsepower or personality type to do any of this. In any case doing science seems like a great candidate for automation.

reduce, maybe stop the exploitation of the South by Western countries.

"Western exploitation" seems to simply amount to investing money into third world countries and providing them with opportunities that are a vast improvement over what they would otherwise have. Arguably it has done them far more good than any international aid program attempted to date.

Cheap labour plus industrialization and capitalism mean societies grind their way to wealth as Taiwan, Korea, Japan and 19th century Europe did. And as China and Vietnam are doing now.

Make regular humans obsolete and we (as in our technological civilization) have literally nothing to do with the vast masses of third world peoples except perhaps give them welfare because we feel sorry for them.

Before my four bullet points, I wrote "we could". Time is a resource that can be used in various ways. I merely listed some pressing problems we have right now.

Most people don't have the cognitive horsepower or personality type to do any of this.

I agree. This can be partly solved with education. But anyway, I didn't mean for everyone to do science.

In any case doing science seems like a great candidate for automation.

I'm only talking about before Intelligence Explosion (any prediction I make about after that will probably be bogus). Which also means a world where science is not yet automated. (When it is, Intelligence Explosion will probably follow shortly after.) Beyond that point, we may choose to do science manually because it's more Fun. Or we may do something else.

"Western exploitation" seems to simply amount to investing money into third world countries and providing them with opportunities that are a vast improvement over what they would otherwise have. Arguably it has done them far more good than any international aid program attempted to date.

I hear that in some places (especially in Africa), the resources (oil, gold, cotton…) are sold really, really cheaply to Western countries, when foreign companies do not extract them directly. In other places, the main activity seems to be the haphazard recycling of some of our more polluting garbage (most notably computer parts, where they burn the plastic to get to the metal, inhaling the fumes in the process). In other places still, they use their soil to grow cotton, or coffee, or soya, or palm oil for export… instead of growing food, so they must buy such food dearly, from elsewhere. On top of this, there's debt, which is rather crushing in the South (here in Europe it is merely worrying - though quite deeply so).

"Investing money" only means giving something to somebody in exchange for more, later. That interest rate mean you can basically lie down while others do the work for you. The only work you actually did was a bit of organization. It does have value, just probably far less than the interest rates. That difference I call "exploitation" (also applies when one contracts a mortgage to buy a house in the good-old Western Europe).

On average I would agree we are vastly better off compared to 50 years ago. I'd also agree that this trend will continue for a while. However, some places are definitely far worse off than they were before, precisely because of our use of recent technology.

As you suggest, international aid programs look like they don't work. Let's try something else.

we (as in our technological civilization) have literally nothing to do with the vast masses of third world peoples

I basically agree. The only connection is that if technology gives us additional means of action (like free time), we can use those means toward whatever end we wish. Including easing up the pressure on the South. Or go to Mars. Or free Willy. Again, I only listed 4 possibilities out of many.

Which also means a world where science is not yet automated.

See I'd agree with this, except we've already started.

Okay, I take that back (except we still need the researchers, for now). I note however that this will probably mean finding results faster rather than cutting research budgets. There's a limit to how much we can eat, but not to how much we can discover (yet).

Or maybe Intelligence Explosion is closer than I currently anticipate, in which case I really really hope Friendliness will catch up in time.

gjm

Unfortunately, I think this optimistic way of looking at the matter depends on confusion about who "we" are.

Oversimplified picture: Alex owns a business, Bill works in it, Chris is a customer. Alex buys a machine that does Bill's job better and cheaper. She benefits (higher profits). Chris benefits (lower prices). Bill loses (no job). In the (common?) case where the machine is only a little better at Bill's job than Bill was and Alex chooses to keep most of the profits, Alex benefits a lot, Bill loses a lot, Chris benefits a little, and the whole thing is slightly positive-sum.

Now, suppose this happens on a large scale. There are lots of Bills and Chrises; there are way fewer Alexes. So a few Alexes get a lot richer; a lot of Bills-and-Chrises get a lot poorer (from losing their jobs) and a bit richer (from cheaper or better products). The whole thing is indeed positive-sum, but all the gains go to the Alexes, and along with those gains they also take a whole lot more from the Bills.
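To put toy numbers on that scaled-up picture (every figure below is invented purely for illustration; none of this comes from the book or from real data):

```python
# Toy sketch of the scaled-up Alex/Bill/Chris story above.
# All figures are made up for illustration only.

n_owners = 10          # a few Alexes
n_workers = 1_000      # many Bills, each losing a job
n_customers = 1_000    # many Chrises, each paying slightly lower prices

gain_per_owner = 3_000_000   # extra annual profit per owner
loss_per_worker = 30_000     # lost annual wage per displaced worker
gain_per_customer = 500      # annual saving per customer from cheaper goods

owners_total = n_owners * gain_per_owner
workers_total = -n_workers * loss_per_worker
customers_total = n_customers * gain_per_customer
net = owners_total + workers_total + customers_total

print(f"Owners:    {owners_total:+,}")
print(f"Workers:   {workers_total:+,}")
print(f"Customers: {customers_total:+,}")
print(f"Net:       {net:+,}")  # slightly positive overall, but highly concentrated
```

On these made-up numbers the whole thing nets out slightly positive, yet the owners capture far more than the net gain while the workers bear a large loss.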

It's an oversimplification to say that all the businesses are owned by a small number of Alexes. Many of them will be publicly traded, which means that their ownership is widely distributed. However, substantial share ownership is the preserve of the wealthy, and the picture remains one of a large-scale wealth transfer from poorer people to richer ones.

For sure, the gains could instead be used to enable everyone to work shorter hours for the same pay, or to stop pollution, or whatever. But only if someone makes that happen. What the market will do, left largely to itself, is to allocate all the gains, and more, to those who are already wealthy. Distributing those other benefits to ordinary workers (who generally aren't already wealthy) and less-developed countries will depend on the generosity of those who already hold the wealth. Or on large-scale government regulation of the sort that, in the US at least, is currently absolutely unthinkable politically because it looks too much like SOCIALISM!!!111eleven!!! to survive in the face of public opinion.

Do you see an actually-achievable way in which these technological changes end up bringing the benefits you hope for?

Like most people who believe in widespread market failures, you've stopped your analysis too early. So Alex automates Bill's job away and pockets most of the savings, leaving Bill unemployed and some large number of Chrises slightly better off. What happens next?

Well, obviously Bill looks for work, and Alex's competitor Dave decides he needs to automate too. A year later Bill is working some other (possibly less desirable) job, and Alex and Dave are locked in a price war that gradually transfers most of the savings from automation to their customers. So as long as the automation process is gradual there's no particular need for intervention.

gjm

Or, a year later Bill is still unemployed -- I hear that's been happening a bit lately -- and Alex and Dave have found other ways of competing that don't require commoditizing their products and not taking any profits.

Does it look to you as if (e.g.) Microsoft, Apple and Google are taking minimal profits while their customers pocket all the benefits of their cleverness?

IIRC, the last three recessions have had largely-jobless recoveries, and the pattern of unemployment they've left behind them looks a whole lot like what you'd expect if people's jobs are being taken away by technical progress without new opportunities arising to give a lot of those people new jobs. And I don't see any sign that (e.g.) typical working hours are getting shorter to absorb all that surplus leisure.

At least in the US, the current long-run unemployment is caused by a severe ongoing recession that's being exacerbated by massive deficit spending, increased regulation and tax hikes. None of this has anything to do with technology one way or the other. While it's true that at some point you could have so much automation that you run out of jobs for people, we're not remotely close to approaching that point yet. Instead we see a few % of current jobs become obsolete each decade, most of which are replaced by new jobs that didn't exist before.

As for profit margins, my point was simply that degree of automation is irrelevant. Profit margins in a given industry are determined by a complex interplay between number of competitors, barriers to entry, regulation and dozens of other factors, none of which are much affected by the details of production. For instance, manufacturing has already seen a 1-2 order of magnitude increase in automation in the last 200 years, but its profit margins generally don't show it.

Finally, you won't see people trading work hours for extra leisure unless their pay is already high enough that they'd prefer more free time to more money, which isn't going to happen unless average incomes are at least an order of magnitude higher than they are now. Most people don't start running out of things to spend money on until they have a mansion, several luxury cars and a few $100K of assorted toys, after all.

Microsoft does not meaningfully compete, Google doesn't seem to be taking excess profits, and Apple's mostly based on ripping off people who have too much money anyways. But look at the giants of a few decades ago - are GM and American Airlines known for excess profits today? Competing margins down takes time, but it does happen.

Pfft

This is not a market failure, the situation is Pareto optimal. It's just that capitalists get richer and laborers get poorer.

Yeah, we're basically screwed. I'll try to think of something anyway.

The goal is to redistribute wealth from the rich to the poor. Doing it directly seems impossible. But we could do it indirectly, by redistributing power.

I think one of the most powerful levers currently available is unemployment. Currently, employees fear it, so they accept lower wages. If unemployment were somehow less terrible, or less common, employees would have more power, and higher salaries. Off the top of my head, I see two ways:

  • Universal income. It makes unemployment less unpleasant. If it provided enough money, one could choose to just retire early and live frugally. I agree it sounds too communist, however. I don't see it happening, especially in the US.

  • Reduce work hours. Go down to 4 days a week, while maintaining current salaries. Employers will need more people, which will mean less unemployment. When automation goes up again, go down to 3 days. And down. And down. This one sounds more plausible, but again, not in the US. The second someone talks about a 4-day week is the second you see some Republican stir up emotions about setting the country right through virtuous hard work. Maybe it could work in Europe (one country at a time), then spread. But we need to get past the idea that hard work is sacred first.

We could also spread basic rationality and basic political awareness. But frankly, this is difficult: most people don't have the energy to think when they get back home. They just watch TV, and collapse until tomorrow morning. This would change if they worked less, but you see the chicken-and-egg problem here…

Another possibility is the internet. Compared to mainstream media, ideas flow unfiltered here. As more people learn to use it, we will have more and more meaningful many-to-many communications, of which 4chan is only a baby step.

[anonymous]

Wouldn't mass protests be inevitable in such a situation, along with boycotts of the businesses that keep the profits to themselves?

gjm

I don't see a lot of people boycotting very profitable businesses at the moment; do you?

[anonymous]

Doesn't your prediction say unemployment will become much higher than it currently is? I don't really know whether there are any large-scale boycotts of profitable, unethical businesses.

gjm

I expect that as machines become able to do more jobs currently done by humans, unemployment will increase. In some cases, the machines' new capabilities will lead to new jobs; in some of those cases there might be more jobs created than lost. But in the extreme case (which is the one actually being discussed here) of a change that makes almost all human jobs redundant, it seems unlikely that the number of valuable new jobs created will be close to the number lost.

Is your point that we shouldn't expect mass boycotts yet because there isn't mass unemployment yet? Perhaps that's right. I'm claiming only that merely making big profits and keeping them instead of passing them to customers doesn't seem to be enough to trigger large-scale protests or boycotts, because there are companies doing that and they don't seem to be hated much for it.

Something like 95% of the jobs of 200 years ago have been destroyed by advancing technology - where's the persistent unemployment that's supposed to result?

gjm

Who says that persistent unemployment is supposed to result from that?

The hypothetical situation we're discussing, unless I'm desperately confused, is one in which machines are better than humans at all the things humans need or want doing. Not one where various particular human capabilities have been exceeded by precisely focused technology (which is what we've had time and time again in the past) but one where machines are simply better than we are at everything. This is not at all the same.

That is indeed the ultimate hypothetical situation we're discussing, but we're also discussing other situations in the present or very near future where only some human job-skills have been obsoleted. From the Mechanical Engineering article, I got the impression the Race Against the Machine authors thought that jobs were being obsoleted faster than people could re-train for the new ones. Thus, increased unemployment.

I doubt that a major chunk of current unemployment is thus explained, but I like the fact that this might get people thinking. They can connect the dots to the possible future situation you've named, and perhaps start thinking more seriously about AI.

gjm

There are two separate effects here. In the short term, a new technology may put people out of work faster than they can retrain. That's bad for them; it's likely to be bad for the world as a whole in the short term; but it may very well be a good thing for everyone in the long term, e.g. if it creates more jobs than it destroyed. But it may also happen that a new technology destroys jobs without creating any new ones. In that case, even if it produces an increase in total wealth, it may be bad overall (at least for people whose values assign substantial importance to the welfare of the worst-off) -- in the long term as well as the short.

So the following two claims need to be treated quite separately. (1) "Recent technological progress is obsoleting more jobs than it's creating, and lots of people are getting shafted as a result." (2) "Future technological progress may obsolete more jobs than it ever creates, directly or indirectly, and lots of people will get shafted as a result if so." And both are different from (1.5) "Recent technological progress is obsoleting more jobs than it's creating, and that loss of jobs will persist way into the future". It's (1.5) that's made less credible by observing that past progress doesn't seem to have left us with a population that's almost entirely jobless; that observation seems to me to have little to say about (1) and (2).

It seems to me that humanity is faced with an epochal choice in this century, whether to:

a) Obsolete ourselves by submitting fully to the machine superorganism/superintelligence and embracing our posthuman destiny, or

b) Reject the radical implications of technological progress and return to various theocratic and traditionalist forms of civilization which place strict limits on technology and consider all forms of change undesirable (see the 3000-year reign of the Pharaohs, or the million-year reign of the hunter-gatherers)

Is there a plausible third option? Can we really muddle along for much longer with this strange mix of religious “man is created in the image of God”, secular humanist “man is the measure of all things” and transhumanist “man is a bridge between animal and Superman” ideologies? And why do even Singularitarians insist that there must be a happy ending for homo sapiens, when all the scientific evidence suggests otherwise? I see nothing wrong with obsoleting humanity and replacing them with vastly superior “mind children.” As far as I’m concerned this should be our civilization’s summum bonum, a rational and worthy replacement for bankrupt religious and secular humanist ideals. Robots taking human jobs is another step toward bringing the curtain down permanently on the dead-end primate dramas, so it’s good news that should be celebrated!

Robots taking human jobs is another step toward bringing the curtain down permanently on the dead-end primate dramas

Well, so is large-scale primate extermination leaving an empty husk of a planet.

The question is not so much whether the primates exist in the future, but what exists in the future and whether it's something we should prefer to exist. I accept that there probably exists some X such that I prefer (X + no humans) to (humans), but it certainly isn't true that for all X I prefer that.

So whether bringing that curtain down on dead-end primate dramas is something I would celebrate depends an awful lot on the nature of our "mind children."

[anonymous]

Well, so is large-scale primate extermination leaving an empty husk of a planet.

AlphaOmega is explicitly in favor of this, according to his posting history.

[This comment is no longer endorsed by its author]

OK, but if we are positing the creation of artificial superintelligences, why wouldn't they also be morally superior to us? I find this fear of a superintelligence wanting to tile the universe with paperclips absurd; why is that likely to be the summum bonum to a being vastly smarter than us? Aren't smarter humans generally more benevolent toward animals than stupider humans? Why shouldn't this hold for AI's? And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species decides that the world would be better off without us? From a larger cosmic perspective, at that point we will have given birth to gods, and can happily meet our evolutionary fate knowing that our mind children will have vastly more interesting lives than we ever could have. So I don't really understand the problem here. I guess you could say that I have faith in the universe's capacity to evolve life toward more intelligent and interesting configurations, because for the last several billion years this has been the case, and I don't see any reason to think that this process will suddenly reverse itself.

if we are positing the creation of artificial superintelligences, why wouldn't they also be morally superior to us?

If morality is a natural product of intelligence, without reference to anything else, then they would be.

If morality is not solely a product of intelligence, but also depends on some other thing X in addition to intelligence, then they might not be, because of different values of X.

Would you agree with that so far? If not, you can ignore the rest of this comment, as it won't make much sense.

If so...a lot of folks here believe that morality is not solely a product of intelligence, but also depends on some other things, which we generally refer to as values. Two equally intelligent systems with different values might well have different moralities.

If that's true, then if we want to create a morally superior intelligence, we need to properly engineer both its intelligence and its values.

why is [tiling the universe with paperclips] likely to be the summum bonum of a being vastly smarter than us?

It isn't, nor does anyone claim that it is.
If you've gotten the impression that the prevailing opinion here is that tiling the universe with paperclips is a particularly likely outcome, I suspect you are reading casually and failing to understand underlying intended meanings.

Aren't smarter humans generally more benevolent toward animals than stupider humans?

Maybe? I don't know that this is true. Even if it is true, it's problematic to infer causation from correlation, and even more problematic to infer particular causal mechanisms. It might be, for example, that expressed benevolence towards animals is a product of social signaling, which correlates with intelligence in complex ways. Or any of a thousand other things might be true.

Why shouldn't this hold for AI's?

Well, for one thing, because (as above) it might not even hold for humans outside of a narrow band of intelligence levels and social structures. For another, because what holds for humans might not hold for AIs if the AIs have different values.

And if you say that the AI might be so much smarter than us that we will be like ants to it, then why would you care if such a species exterminated us?

Because I might prefer we not be exterminated.

From a larger cosmic perspective, at that point we will have given birth to gods, and can happily meet our evolutionary fate knowing that our mind children will have vastly more interesting lives than we ever could have.

If that makes you happy, great.
It sounds like you're insisting that it ought to make me happy too. I disagree.
There are many types of gods I would not be happy to have replaced humanity with.

So I really don't understand the problem here

That's fine. You aren't obligated to.

-- you might say that I have faith in the universe's capacity to evolve life toward more intelligent and interesting configurations, because for the last several billion years this has been the case and I don't see any reason to think that this process will suddenly end.

Sure, you might very well have such faith.