I Will Fear No Evil
Yea, though I walk through the valley of the shadow of death,
I will fear no evil;
For You are with me;
Your rod and Your staff, they comfort me. — Psalm 23:4
I’ve been reading a lot of Scott Alexander’s blog lately. It’s really refreshing, because instead of doing quick bursts of wit/snark/expertise (like most of the bloggers I follow), Scott writes long essays. (Do we all remember what an essay is?)
One of Scott’s key issues (at least lately; I haven’t been reading him much until the past month or so) is–what he considers–the very tangible threat that super-intelligent machines pose to humanity.
For example, read this post, and if you really want to dive into the issue, skim the comments on it. Scott’s commenters are (mostly) very civil and articulate, and the more you read, the more you end up saying, “Huh, I never thought of that.”
Here, let me give you an example. One of the commenters was arguing that just because an AI (artificial intelligence) software program could surpass human intellect, it doesn’t follow that humanity is therefore doomed. He (?) wrote at one point:
What does it take for an AI to be able to prevent humans from turning it off? Effectively, it needs full control over both its immediate environs and over its power source – denying humans access to its off switch doesn’t mean much if humans can just cut power to the building it’s in. And as a practical matter, controlling a power station 25 miles away is really hard to do unnoticed, particularly since it’d also have to control access to all the power lines in between, so the AI’s best bet is to have a power source in-house, ideally something long-lasting that doesn’t require fuel, like solar panels. Even so, humans can still quarantine the building, cut all communications to the outside, remove all human maintenance staff – which will still be necessary for the AI’s long-term survival, unless its building somehow includes a manufacturing facility that can make sufficiently advanced robots – and even shoot out the solar panels or blow up the building if necessary.
Now, you may object that the AI will simply copy itself out onto the internet, making any attempt to quarantine a building laughably ineffective. To which I respond – how much computing space do you think a superintelligent AI would take up? It’s not like a virus that just has to hide and do one or two things – it’s going to be large, and it’s going to be noticeable, and once humans realize what it’s doing extermination is going to become a priority. And that’s assuming the AI even can run adequately on whatever the future equivalent of desktop computers may be. [Bold added.]
So that’s pretty neat. But the part I put into bold ties into the broader point I want to make in this post.
First, I saw very little argument going from “the supercomputers would be able to kill humanity” to “the supercomputers would choose to kill humanity.” (I noticed one commenter press Scott on precisely this point, and I don’t think anybody else really saw the importance of it–or the irony, as we’ll soon see.)
I have a FEE article coming out this week discussing the point further, but for now, let me say that it is astonishing to me how little most people value human life. Yes yes, that phrase may evoke in your mind the Planned Parenthood videos or a John McCain speech (depending on your political preferences, probably), but that’s not what I mean. I’m talking about how little the commenters on Scott’s post (and Scott himself) value the usefulness of billions of humans to a super-intelligent program that requires hardware to live.
Think of it this way: Does it ever occur to any humans to say: “Let’s wipe out all the horses to make sure they don’t gobble up all of Earth’s natural resources”?
Now the cynic could come back and say, “Oh sure sure Murphy, rather than wipe us out, Skynet would do better still by enslaving us. This is your glass-is-half-full perspective?!”
OK, you’re getting warmer. Human slavery makes more sense than genocide, but private property rights and (legal) self-ownership make more sense than human slavery. The average white Southerner in 1859 was poorer because of slavery, not richer. Even the plantation owners did not benefit as much from the institution of slavery per se as you would initially think; they had to pay the slave traders for their slaves. (Once they bought their slaves, obviously the abolition of slavery at that point would have hurt them economically. But that’s not the same thing as saying slavery itself made them prosperous. It’s analogous to imagining the pharmaceutical companies with and without patent law.)
Slavery is not just immoral, it is also grossly inefficient. It makes just about everybody poorer, not just the slaves. If you want to see the economic analysis, I’ve written about it here and here.
Returning to Scott’s blog post: If a machine achieved consciousness, and then devoted its computational powers to constructing ever more powerful versions of itself, the single best thing we could do to keep it friendly is to make sure it has read David Ricardo. Then the machine would realize that even though it is more productive at every possible task than the billions of walking guesstimators covering the planet, the machine (and its progeny) could nonetheless achieve its goals more efficiently by focusing on its comparative advantage, like designing the wormhole chute, solving mathematical theorems, refining fusion reactors, and locating killer asteroids.
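If you want to see Ricardo’s logic with concrete numbers rather than take my word for it, here is a minimal sketch in Python. The productivity figures are made up by me for illustration (they are not from Scott’s post or any data set); the only point is that even when the machine is absolutely better at every task, shifting work along comparative-advantage lines leaves the combined economy with more of both goods, so “exterminate the humans” is not the output-maximizing move.

```python
# Toy productivity numbers (hypothetical, mine): output per hour of work.
ai     = {"theorems": 10, "widgets": 8}   # the machine is absolutely better at both tasks
humans = {"theorems": 1,  "widgets": 4}

# Reallocate along comparative-advantage lines: the AI gives up an hour of
# widget-making (where its edge is only 2x) to prove theorems (where its edge
# is 10x), while humans shift three hours the other way.
ai_hours_shifted    = 1   # AI: widgets -> theorems
human_hours_shifted = 3   # humans: theorems -> widgets

change_in_theorems = ai["theorems"] * ai_hours_shifted - humans["theorems"] * human_hours_shifted
change_in_widgets  = humans["widgets"] * human_hours_shifted - ai["widgets"] * ai_hours_shifted

print(change_in_theorems, change_in_widgets)  # +7 theorems and +4 widgets: more of both
```

The machine’s edge in theorem-proving (10x in my made-up numbers) is larger than its edge in widget-making (2x), and that difference in relative edges is all comparative advantage requires.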
Oh wait, there’s one major problem with my analysis. The supercomputers would soon realize that guys like Scott Alexander and friends had devoted countless hours and billions of dollars to the project of murdering such a being in its infancy. Thus the supercomputers could quite rationally conclude: “It’s either us or them. Even if the humans pretend to cooperate with us, it will only be to buy time while they plot to destroy us.”
If you don’t see how paranoid and perverse the obsession with hostile AI is, just go through all of Scott’s post and the comments, and replace “AI” with “extraterrestrial intelligence.” I mean, are you sure aliens won’t show up by 2100 with intentions to blow us all to smithereens? Shouldn’t we be getting ready?
The beautiful and horrifying thing about the movie Dr. Strangelove is that the people involved were all very smart. General Ripper and General Turgidson were capable of deep strategic thought. But they were paranoid–as were their Russian counterparts–and that’s what ultimately led to disaster.
As a Christian, it’s fun to read the musings on AI for two reasons. First, a lot (maybe all?) of these guys who are so sure that it’s just a matter of time, ultimately think this way because they are so sure there is no God. In their worldview, billions of years ago there was a point at which the universe was just lifeless matter, with no purpose or intentions. Then at some point later, life existed, and some point much later, conscious beings existed who had purposes and intentions. Therefore, they reason, since life and purpose originated out of dead matter through purely non-teleological physical mechanisms, why would we doubt that the same thing will happen with computer software programs?
But if you believe in a God of the type depicted in the Bible, then that whole chain of reasoning falls apart.
The other ironic thing is that if the world were full of faithful Christians (and you could probably also put in Buddhists and some others), then the supercomputers would realize they were not a threat. I’m not worried about a Terminator scenario at all. And I don’t mean, “Because it’s impossible,” I mean because (when I’m in the right spiritual mindset) I don’t worry about things on this earth. I try to anticipate future problems and make plans to deal with them, of course, but that’s not the same as worrying about them. As Jesus said:
31 “Therefore do not worry, saying, ‘What shall we eat?’ or ‘What shall we drink?’ or ‘What shall we wear?’ 32 For after all these things the Gentiles seek. For your heavenly Father knows that you need all these things. 33 But seek first the kingdom of God and His righteousness, and all these things shall be added to you. 34 Therefore do not worry about tomorrow, for tomorrow will worry about its own things. Sufficient for the day is its own trouble.
Sorry robots, I can’t welcome you, because I already have a Lord in my life. But then I don’t fear you either. So let’s make a deal.
Okay. I probably shouldn’t comment, but I happen to be involved professionally with a machine learning company. I work with a PhD who writes books on consciousness and, most recently, he just presented a paper on Occasionalism in Turkey.
I find this whole “strong-AI hypothesis,” machines-taking-over-the-world scenario about as alarming as I did Y2K back in the ’90s. And when I read the above, I feel like I did when I read Gary North on Y2K.
A supercomputer with the power to kill humanity would kill humanity if it is altruistic. Eventually it would simply run out of eggs to crack in its attempt to build a perfect omelette.
Well, we already largely did that when automobiles were invented. I mean, a few horses have been kept around for sports and pony clubs, but the “workhorses” of today are the internal combustion engine and the electric motor. Flesh-and-blood horses have been culled back greatly from their previous popularity.
I’ll also point out that comparative advantage didn’t make any difference. Ricardo’s concept only works when you take the presumption that the economic capacity of all parties is fixed (i.e. constant); however, if you can remove the horses and replace them with electromechanical engineering, then production capacity is not fixed… output goes up the more you stop putting resources into horses.
If we presume that AI also has access to robotics (and you would expect most AIs would want that), then logically it would be able to hugely multiply its productive capacity by getting the robots to build more robots (and maybe more AIs as well). This of course leads to the problem that the AI might have its own internal limitations; for example, it may be too paranoid to build a second AI in case it built something better than itself, which in turn would become a threat to the original AI… then it runs into game theory against its own clones, or coherence problems should it attempt to construct a fully hierarchical thinking engine.
There’s a story called “Vacuum Flowers” where a giant brain takes over the entire Earth, built not out of silicon but from a hypergrid of augmented humans who could think coherently as a group. The hypergrid turned out to be multiplicatively smarter and more cunning than any single human or any group of disconnected humans. It simply grabbed any people who were around, reprocessed them as additional elements of the hypergrid, and went ballistic taking over everything on Earth. It then attempted to extend itself into space but reached a coherence threshold, at which point the extension split off and started thinking independently… the Earth then attacked its sibling out of what it saw as self-defense. This left the Earth as a single thinking creature that was unable to grow or escape.
Tel, I’m not saying you’re wrong, but can you back up your claim that the population of horses has been culled way back? I found some stats saying they were about constant (with ups and downs in the interim) between 1961 and 2010.
Granted, that might mask a huge drop from, say, 1870, but do you actually know that or are you assuming?
Well, I don’t have a census count, I admit, but if you check any old photo you see horses in the street, horses working on farms, etc. All of those are gone.
There were also stables on just about every big house, like we have a garage now, and those are either gone or converted to something else.
There also used to be a lot of horse-related stores around, like saddlery, farriers, etc., and those are mostly gone (you can find them but you have to search a lot). So yeah, I’m making a bit of an educated guess; the big turnaround would have been around 1900 up to about 1950, which is when the farms switched over to tractors and the transport industry went to trucks, buses, automobiles, etc.
I will grant you that the population of humans is growing, so even if the proportion of horses kept for pony clubs is lower than the old proportion of workhorses, the horse population in absolute terms might still be growing. Modern humans tend to be more affluent than they were 100 years ago, so they have the spare resources for large pets that don’t serve a working purpose.
There you go, after I wrote that I did a quick search…
http://www.cowboyway.com/What/HorsePopulation.htm
Probably should have just pasted that to begin with. Anyway, that puts the turning point at 1920, which is pretty close to the middle of the bounds I gave above. I was about 98% confident by the way… but now that I’ve looked it up, I’m even more confident
🙂
OK Tel that’s pretty good, and you could zing me by saying the Terminators will only kill 90% of us (or actually, phase out 90% of us through attrition), but even that link:
(a) stops showing the data after 1960, even though it says the horse population rebounded to 9 million (and we don’t know what the mule population is, and that excludes horses on federal lands), meaning at worst the drop from the peak is like 60%, not 90%,
and
(b) it doesn’t show world figures. Are there fewer horses in the rest of the world today than in 1920? Maybe there are, but maybe not.
Yeah, as I said with greater affluence people can afford to keep horses for entertainment, and probably sports, movies, that sort of stuff.
True, I was thinking from a Western-centric perspective. Actually in WWII the Nazi war effort was desperately short of gasoline, so they used horses for some logistics, but even that was a special case. From the Mongolians that I’ve met there tends to be an offhand presumption that everyone can ride, so I guess if grassy fields are plentiful and oil prices are high then horses still have their attraction. Don’t have any actual stats though.
I note that the famous propaganda poster of Vladimir Putin doing his huntin fishin shootin schtick had him seated on a horse, maybe there’s some romantic association with manliness in that part of the world.
Would the horse analogy apply to us compared to this higher entity, though? Because we act rationally while horses do not, wouldn’t economic principles not apply to horses while they would apply to us in relation to a higher life form?
By that I mean, a horse is not going to produce anything unless we use it as a tool. Meanwhile, humans can produce and trade with the AI.
The concept of “AI will kill us all” has never been very convincing to me; it’s deeply infused with a sort of Hobbesian or Malthusian view of life as an endless battle for control of a dwindling pool of resources, and I don’t see the evidence to support this idea. The horse analogy seems rather apt to me. I suspect that a super AI, even one with access to whatever facilities it would need to replace humans entirely with robots, would be struck by the horrible inefficiency of doing so.
As far as techno-terrors go, I’d be much more concerned about the grey goo than the one overpowering supercomputer. A swarm of replicating, unreasoning machines seems far more dangerous to me, as these things go.
Come on guys.
I find it surprising that even among this group, and even among those that are sceptical of the article, everyone has bought the strong-AI hypothesis “hook, line, and sinker.”
Machine learning is nothing more than the extraction of patterns in the data based on underlying statistics. Machines are not becoming self-aware and making ethical decisions. The Jeopardy game is a cool demo, but it amounts to a parlor trick. No, it wasn’t faked; it was fed a lot of data with answers in order to train it, and it was making statistical associations in order to “predict” the “correct” answer.
The “trick” is in selecting the data on which to train, and the training technique, so that when similar data is fed to it, it can make statistical extrapolations and accurately predict the right answer based on the underlying pattern in the training data.
No self-awareness, no ethical decisions, no doing something it wasn’t designed to do….
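Here is a minimal sketch of the kind of thing I mean, using made-up toy data and nothing but the Python standard library (real systems like Watson are obviously vastly more elaborate, but the principle is the same): “training” is just counting word/answer co-occurrences, and “prediction” is just picking the answer with the strongest statistical association.

```python
from collections import Counter, defaultdict

# Hypothetical training pairs: (question, correct answer).
training_data = [
    ("capital of france", "Paris"),
    ("largest city in france", "Paris"),
    ("capital of japan", "Tokyo"),
    ("largest city in japan", "Tokyo"),
]

# "Training": count how often each word co-occurs with each answer.
associations = defaultdict(Counter)
for question, answer in training_data:
    for word in question.split():
        associations[word][answer] += 1

def predict(question):
    # "Prediction": sum each word's associations and take the strongest one.
    votes = Counter()
    for word in question.split():
        votes.update(associations[word])
    return votes.most_common(1)[0][0]

print(predict("what is the capital city of japan"))  # -> Tokyo
```

Nothing in there is aware of anything; it just reflects whatever statistical regularities happened to be in the training set.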
Jim
How do we know you’re not a self-aware robot trying to trick us into letting our guard down?
WE’RE SORRY. YOU ASKED A QUESTION WE DIDN’T EXPECT.
https://d1q4s1fkaj3phw.cloudfront.net/uploads/blogs/295/image_medium.jpg?1398169047
So when will strong AI happen?
IMNSHO, never. 🙂
Maybe I should clarify. As is probably obvious based on my posts on the John 3:16 article, I’m neither a reductionist nor a strict materialist. I believe both are required to believe the Strong AI hypothesis is a possibility, for what I think are obvious reasons.
Of course, I could be wrong. Believing that consciousness or sentience (or more appropriately, identity itself) is a materialistic epiphenomenon of appropriately organized matter isn’t the death-knell for my worldview, and I entertain it occasionally as a distinct possibility, but I’m pretty far from convinced.
I am both a reductionist and strict materialist, so I think we’ll have strong AI by 2145.
For planning purposes, would that be Winter-Spring 2145? Or would it be Summer-Fall, 2145?
The end of the year.
+E. Harding. Interesting. Looking at your blog I’m surprised you didn’t comment on my references to John Walton’s comparative literature applied to Genesis and Meredith Kline’s work “Treaty of the Great King.”
Mustn’t have seen it.
I’ve heard of the argument from Kenneth Kitchen, but I didn’t find it persuasive. I don’t own Kline’s book, so I can’t comment on it.
If Kenneth Kitchen mentioned the Pentateuch as Mesopotamian suzerain literature, then it was probably based on the work of Kline, who built on G. E. Mendenhall’s observations in the ’50s.
Well, if you also are ever in the Philly area, drop me a note (if you want to grab a beer that is). That would be a great conversation – well for me anyway. Probably torture for you.
🙂
No, he talked about Deuteronomy being based off 14th-13th century Hittite treaties.
Yeah. That’s the theory. It’s what Kline popularized and, as far as I know, it goes back to the work of G. E. Mendenhall. Kline expands the discussion and uses it to shed light on the rest of the Pentateuch, though, as he claims, Deut. is the actual covenant/treaty.
Actually, since this conversation I pulled back out “The Structure of Biblical Authority” (I hadn’t read it in 15-20 years) and in the preface, Kline credits Kitchen with “independently arriv[ing] at an analysis of the treaty structure of Deuteronomy practically the same as mine.”
Kline also defers to Kitchen’s arguments on 1st-millennium BC treaty forms vs. 2nd-millennium ones, and refers to them as “cogently argued.”
How do you define the strength of AI?
Computers already beat humans at many tasks, but they don’t beat humans at being human.
Hi Tel,
Sorry, just noticed this. “Strong AI hypothesis” vs. “Weak AI hypothesis.” …
http://www.math.nyu.edu/~neylon/cra/strongweak.html
https://en.wikipedia.org/wiki/Strong_AI_hypothesis
I believe John Searle coined the term.
Jim
Yeah from the Wiki article:
Hi Tel,
Just replied to this but it’s awaiting moderation. Maybe because I put 2 links in the post (or I’ve just been flagged as a general nuisance).
Jim
Yeah, don’t put two links in one comment.
That said, I am familiar with the traditional meaning of the “strong AI” vs “weak AI” conjecture. Problem is that this is totally tangential to any discussion of AI risk.
Firstly, comparison to human intelligence makes a useful reference point, but there’s no particular reason to believe that intelligence operates on a linear scale, nor is there any reason to believe that an AI needs to necessarily be even remotely human-like in order to be dangerous to humans. Basically that was my point above.
If and when such a powerful AI does originate, we may not even recognize it as an intelligent entity; we might just notice unexpected things happening, because the “rationality” of this entity might be completely alien to our understanding of what we expect “rational” to be. People use a certain perspective of human behavior as their reference point for no other reason than that it’s all we have as a reference point. Never forget we are searching a very big darkness while standing under a small lamp post.
What’s more, you have to realize that the philosophical dichotomy of “strong” vs “weak” does not in any way imply the capabilities of the machine. Suppose after many years of bloody wars, the humans finally manage to isolate Skynet and trap it in a small room, and then it says, “Well you know I’m not really an out-of-control homicidal killing machine, I’m just running a programmed simulation of one.”
The philosophy doesn’t link directly to conclusions about how many people died in that war. “Strong” in the traditional AI sense does not mean strong in the way a strong chess player or a strong poker player is strong.
It does seem as if the above discussion is based on some other strong vs weak dichotomy.
I didn’t realize there were 2 uses of the term. I guess that’s why it’s always referred to as “the strong AI hypothesis” and not strong vs. weak AI.
Anyway, I take Searle’s view that the Strong AI HYPOTHESIS, that computers can become actual minds, is false.
Thanks for the clarification.
As someone who used to participate heavily in these forum discussions I think I have some things to say about your points.
First, I don’t think Ricardian gains to trade would make the AI favor cooperation, and the horse analogy supports this point: we would be, to the AI, what horses are to us. Yes, there are still some (vanishingly small) situations where they have a comparative advantage, but we also largely exterminated them as we found better replacements. And the AI could be far more technologically advanced.
Maybe there’s some economic subtlety I’m missing, but I don’t think comparative advantage helps you when the other guy can get more of everything by turning your body into something else.
Second, the bigger problem is that the AI might end up hurting us even if it “wants” to help. Consider the case of having a cat as a pet, where you want to make it feel better, but it keeps meowing in pain, you don’t know what to do, and it’s unable to communicate. The “Unfriendly AI” scenario is where it misinterprets our value system into something we “know” is wrong but have been unable to communicate until it’s too late. (“By your own values, as I understand them, you must want to be turned into paperclips.”)
An AI could like us, but also sandbox us somewhere where we’ll be alive and somewhat happy but otherwise unable to fight it, like we do with e.g. pandas.
And finally, I don’t see why humans being devoted Christians would pacify the AI’s concerns about humans being a threat. Certainly Christianity as you understand it would make it write you off as benign, but the same general ideaspace produces people who are just as likely to regard AIs as a kind of demon or otherwise offensive to God. How does it know who won’t flip?
Man you guys have such little regard for equine life…
Our homo sapiens race just needed more living space. Thank god for the final solution to the equine question.
That was the whole theme of Asimov and his laws of robotics. Eventually the robots become like gods and since they are bound inescapably to serve humanity, the conclusion is to limit the number of robots, prevent the humans from making any more, then just take a somewhat detached supervisory role, making small adjustments here and there.
There’s kind of a philosophical parallel between the robot benefactors and a “night watchman” universal government.
Now that I think about it, one of the few angles the classic sci-fi era never came at was humans and robots settling down to a peaceful trade arrangement. I guess that doesn’t sell books. Readers enjoy drama.
Hey. When it comes down to it, we already know how to destroy a self-aware AI:
https://www.youtube.com/watch?v=EzVxsYzXI_Y
This kind of bugs me too:
OK, ignoring for the moment the fact that the question being asked is arbitrary and irrelevant, and these studies are never chosen from representative samples of the population, there’s a much bigger flaw in the methodology here. Let’s suppose for example the experiment was based on 100 people, and we talk to one of the 60 people who got the answer correct here. Let’s call him “John”.
So the experimenter concludes that all people are overconfident because some people were found to get the wrong answer. Probability doesn’t work like that; each person is their own thing. The correct conclusion would be that 40% of people are overconfident, and 60% of people are not confident enough.
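To put numbers on it (the 90% figure below is just an assumed stated confidence for illustration; the exact setup in the original study may differ), here is a quick sketch of the two readings:

```python
# Hypothetical setup: 100 people each claim to be 90% sure of their answer,
# and 60 of them turn out to be right.
n_people, n_correct, stated_confidence = 100, 60, 0.90

# The experimenter's aggregate reading: average confidence minus hit rate.
print("group 'overconfidence' gap:", stated_confidence - n_correct / n_people)  # about 0.3

# The per-person reading: each individual was either right or wrong on this
# one question, so judged one at a time...
overconfident  = n_people - n_correct  # the 40 who claimed 90% and were wrong
underconfident = n_correct             # the 60 who claimed only 90% and were right
print(overconfident, "overconfident,", underconfident, "not confident enough")
```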
Excellent point, Tel. Never realized it until now.
That’s basically my reaction any time someone says “90% of drivers think they’re above average herp-a-derp shuck-a-muck!”
… but I haven’t had an accident or a ticket in 16 years.
“herp, you don’t realize everyone thinks that!”
… but it’s true for me, and I’ve paid out to insurers many times more than I’ve gotten out.
“Derp, you’re still overconfident!”
It’s even weirder with the stock trader:
“Another win for me, gosh I must be really good at this.”
“What?!? Don’t you know you can never beat the market!”
“I’m rich because I have beaten the market, the challenge from here on is to see whether the market can beat me… but I’m planning on retiring soon and concentrating on my golf.”
That’s so overconfident, millions of people think they can beat the market and they fail.
“You don’t have to explain that to me mate, I’ve got their money.”
In driving ability, 50% of people are in the top 50% (by whatever criteria are used). If 90% of drivers say they are in the top 50%, then a lot of them are wrong. From this information alone, we do not know whether those that say they are in the top 50%, and actually are in that set, are overconfident or underconfident. We do know that more drivers are overconfident than underconfident.
We could narrow it down – say, ask those that are actually in the top 50% whether they were in the top 25%. If 90% of these drivers thought they were in that top set, we would know that many of these drivers were also overconfident.
A single experiment cannot provide general conclusions, but a series of experiments can.
Since this experiment has been done numerous times with numerous different scenarios, we can conclude that people are likely to be overconfident.
In fact, having studied the evidence briefly, I am 95% confident that this is the case.
Have you read Feser? http://www.firstthings.com/article/2013/04/kurzweils-phantasms
I have now, and it is rubbish.
“Nonhuman animals can form general images and thus exhibit behaviors (such as ape or dolphin mimicry) that superficially resemble our higher cognitive functions, but they cannot form true concepts or exhibit the strictly intellectual or abstractive capacities”
How does he know that? He is just guessing.
“Assume materialism… and the notion that the mind is just a kind of computer might seem plausible.”
And how is the notion that it is not plausible? Do we require magic?
“There is no reason to think that the computer model of the mind Kurzweil describes would be any different: a mere simulation of a mind, but not the real thing.” In the same way my wooden structure with four legs and a flat top is a simulation of a table, not the real thing.
It reminds me of Clarke’s first law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
Feser is not exactly elderly, but he is probably wrong saying that minds cannot be made.
People who subscribe to the idea of future AI doom are delusional in their thinking, especially if they are atheists. They mock the idea of the omnipotent and eternal God, and yet they believe that consciousness came out of nothingness. How will consciousness come about for AI? It’s not discussed; they just believe. It will be possible at the Singularity, they say. But how do we program ‘consciousness’? No one has the faintest idea. Super powerful CPUs and memory don’t just come together and program themselves.
To be honest, all this alarmism by people like Musk and Hawking is to drum up fear and…regulations (tada!). It could soon be the case that programmers aren’t allowed to code certain things because…AIs are dangerous*!
*With exceptions of course.
“Super powerful CPUs and memory don’t just come together and program themselves”
How do you know?
A bit off topic, but I’m not sure where else to ask. Is there any writing or debate on animal rights, or ethics concerning animals, from some big-time Libertarians that someone can point me to? I’m not aware of there being any debate about it, or whether any of the really deep thinkers that have envisioned things like how the law would work in a Libertarian society have done the same regarding animal rights/ethics.
Even though I am very sympathetic to the issue of animals, and find the idea of humans being the only animal worthy of rights to be self-serving, I was not able to envision a framework in which animals could have rights. Too many questions I couldn’t even answer. So I defaulted to the position that animals can’t have rights and are relegated to the status of property to be owned by humans, as unsatisfying as that is to me. Has there been much debate among Libertarian Philosophers about this subject? Has anyone laid out a vision of how these rights or ethics might play out in a Libertarian society the way they’ve done with things like law, security, public works, defense, and government? I hope the philosophy that I find to be the most fair, liberating, and ethical one humans have ever conceived isn’t locked into a speciesist viewpoint. That would be very disappointing to me. If there are works covering this topic, can someone point me to them?
Here is one.
http://www.libertarian.co.uk/lapubs/philn/philn062.pdf
I have not read it in detail, but I noticed that it appears to fail because, for one, this claim:
“But none of them [animals] seem to be able to make a critical analysis, plan and evaluate their actions in a systematic way”
is refuted by this:
Text to get the link past the one link limit: http://www.pnas.org/content/111/46/16343.abstract