25 Jun 2016

Mises Defeats the Robot Overlords

Economics, Shameless Self-Promotion 13 Comments

My latest FEE article responds to Scott Alexander’s intriguing (but baseless, I think) worries:

Yet these are mere quibbles. The real difficulty is that Alexander has implicitly assumed that the mines of iron ore (that’s how you make steel) are either unowned, or are owned by one of the two operations in the loop. There’s no danger of “out of control” robots creating trillions of copies of themselves without human approval, if the humans own the raw materials.

Finally, even if the robots could somehow multiply by only manipulating matter already within their legal control, the problem here would be one of ill-defined property rights. If it would bother humans to know that the solar system is filling up with robots, then assigning property rights to the various segments of space would be the solution.

In this context, Alexander’s worries about an “ascended economy” have nothing to do with private enterprise, and instead are analogous to someone worried about overgrazing cattle on public lands.

13 Responses to “Mises Defeats the Robot Overlords”

  1. Transformer says:

    ‘There’s no danger of “out of control” robots creating trillions of copies of themselves without human approval, if the humans own the raw materials.’

    What happens if the robots decide they will not respect human property rights?

    • Bob Murphy says:

      Transformer, OK fine, but then let’s make it simpler: Someone might make a robot next year that goes crazy and kills us all. But that wouldn’t be an “ascended economy.”

      • Transformer says:

        Ok.

        To take a different tack:

        You say “The only resources the machines would control, would be ones they had acquired by providing goods and services to the original (human) owners.”

        So why couldn’t the robots buy up all the land and raw materials they need to implement the “ascended economy”? If their subjective utility leads them to want to build ever more robots that they use to colonize space, what would stop them if they were really efficient at providing goods and services to whatever humans they had to buy the raw materials from? Humans would become very peripheral to the whole thing.

        Also, if the robots are the first intelligent beings to colonize space, doesn’t that give them Lockean rights over this property?

        • Major.Freedom says:

          Your argument boils down to “OK, but what if robots become humans?”

        • Bob Murphy says:

          Transformer, I’m not trying to make this super-charged, but it’s the quickest way I can get you to see my perspective: Suppose a white US writer wrote in 1855: “In the future, purely without any conscious choice by the whites, it’s possible that there could be huge economies of black Americans that have no connection to the preferences of white consumers. It could start out with one plantation owner selling cotton to a neighboring one, and next thing you know, there are 100 million black Americans who have no economic connection to whites, even though no white person meant for that to happen and even though nobody changed or violated any original property titles.”

          Does that help? Again, I’m not trying to make this uncomfortable, but I think we’re having trouble picturing the robot future because it’s so foreign.

          Edited to add: Assume in 1855 there are no free blacks. Now think about the thought experiment.

          Further, if you think about manumission, then I think that’s not “accidental” and unconnected with white preferences; the individual owner got more happiness from doing that. So if somebody wants to program his robots in the year 2300 to explore the solar system and stop taking commands from him, OK, but that’s hardly something humanity needs to worry about.

          • Transformer says:

            But surely the white writer may have been right then, just as Alexander might be right now, from their respective perspectives?

            Suppose the black population expanded much faster than the white population, and over time the black population bought up all the land, while the whites gradually used up all their capital and had to live off charity.

            No economic laws or property rights have been violated, but it would still be true to say that “economic activity drifted further and further from white control until finally there was no relation at all,” which is how Alexander describes his “ascended economy.”

            If your point is that such a view would have been white-centric then, and Alexander’s view is human-centric now, then I don’t disagree.

            • Craw says:

              I think Alexander is arguing that there is a flaw in the system, and that it can, just because of the way it operates, “ascend” out of human control once a nonhuman appears. It’s a claim about the economic system, not about robots per se. And Bob’s counter is that Alexander has ignored property rights: once you notice that people still own things, the “ascension” without them cannot occur.

  2. Tel says:

    Tangentially related…

    http://ageofem.com/

    Robots may one day rule the world, but what is a robot-ruled Earth like?

    Many think the first truly smart robots will be brain emulations or ems. Scan a human brain, then run a model with the same connections on a fast computer, and you have a robot brain, but recognizably human.

    Train an em to do some job and copy it a million times: an army of workers is at your disposal. When they can be made cheaply, within perhaps a century, ems will displace humans in most jobs. In this new economic era, the world economy may double in size every few weeks.
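
    (A quick back-of-the-envelope sketch of what that last claim implies, assuming a four-week doubling period; the blurb only says “every few weeks,” so the exact figure is an assumption:)

        # Back-of-the-envelope: what "the economy doubles every few weeks" implies per year.
        # The four-week doubling period is an assumed figure, not one given in the quote.
        doubling_period_weeks = 4
        weeks_per_year = 52

        doublings_per_year = weeks_per_year / doubling_period_weeks
        annual_growth_factor = 2 ** doublings_per_year

        print(f"Doublings per year: {doublings_per_year:.0f}")        # 13
        print(f"Annual growth factor: {annual_growth_factor:,.0f}x")  # roughly 8,192x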

  3. Tel says:

    This is an intriguing scenario, but ultimately it doesn’t make sense. It’s unrealistic to say that there are no extraneous inputs or “leakage” in the operation, such that one unit of a robot and one unit of steel can perpetually produce identical physical specimens. For one thing, that would violate the laws of thermodynamics.

    I guess we have to assume these robots are solar powered… and perhaps by some strange coincidence silicon might be a byproduct of steel production. I dunno, it’s not like philosophical bloggers are going to snatch the reins of industry any time soon…

    Finally, even if the robots could somehow multiply by only manipulating matter already within their legal control, the problem here would be one of ill-defined property rights. If it would bother humans to know that the solar system is filling up with robots, then assigning property rights to the various segments of space would be the solution.

    Hmmm, seems to me that a property right can only be created (at a minimum) by some sort of homesteading (that’s only the starting point, also requires a mechanism to defend said property right). I mean I could wave my arms and claim the whole of space right now (and so I do!!) but probably that’s going to be a weak claim.

    So if the robots are out there homesteading various orbits, seems like they own the property rights automatically (and they happen to also be in the best position to defend them).

    If you are going to say that these robots operate as slaves and are themselves owned by the company (that’s the normal legal status for the very limited AIs we have right now), then the same thing happens, but the company claims the property rights on the basis that company slaves have occupied that space.

  4. Bob says:

    This all goes out the window if the robot has general-purpose intelligence at a superhuman level (e.g. due to processing speed/parallelism). What could one or a few super geniuses do to the economy? That’s not something you can dismiss except by saying we don’t yet have superintelligent or general-purpose AI. Maybe it will never happen, but from my perspective as a computer scientist I think we have good reason to think general-purpose AI is possible. It’s not clear if we’ll get it tomorrow or in 100 years, but if we avoid extinction we’ll surely have it before 1,000 years have passed. At some point we need moral philosophers to step up and address this issue head-on. We can dismiss the easy cases with dumb resource-mining robots, but what if we had 1,000x robots programmed with the knowledge and skill of Dr. Murphy? What if these MurphyBots had perfect memory, instant recall of information, and the real-time multi-tasking ability to search the Internet and learn from materials, all at 100x the speed of a human? At some point the game changes fundamentally.

    • Bob Murphy says:

      but what if we had 1,000x robots programmed with the knowledge and skill of Dr. Murphy?

      You’re saying this would be a BAD thing?!

    • Bob Murphy says:

      Joking aside: Would humanity be better or worse off, if we went back in time and made Newton, Einstein, Ford, and Turing half as smart?
