==> “The Fed and Oil Prices.” This posted Monday morning, but I actually wrote it in mid-month. (You can tell from the opening news hook, which doesn’t mention the U.S. stock market.) Anyway, there’s a neat chart in there showing the Fed’s balance sheet versus oil prices.
==> I continue to push back on Sumner’s “what bubble?” approach.
This is mostly tongue-in-cheek, but I actually think Daniel is kind of a jerk in this movie.
BTW there is one mildly dirty word at the end, so watch out…
This is the closest I’ll come to an “I told you so” victory lap. (In the future I may refer to the warnings people in my camp have been issuing, especially if people claim I’m making it up, but I will try not to be snarky about it.)
Here’s Scott 9 days ago, talking about China, but then he generalizes:
I’m a bit more optimistic [about the Chinese economy], as I think the reform process will continue. They’ll avoid the middle-income trap. But they haven’t yet even reached the trap—a lot more growth is ahead. If you want to know when that day of reckoning will finally arrive in China, don’t come here looking for answers. I will miss the collapse, blinded by the EMH, just as I missed every other dramatic economic shock in my entire lifetime. My predictions are boring, and always the same:
“More of the same ahead”
My predictions are usually right, but they get no respect, and don’t deserve any.
One thing as you guys slug it out in the comments: It’s not enough to say, “But the S&P500 is up if you go back 18 months!” (or whatever). You would also “be up” if you had bought Treasuries 18 months ago. Most people buying equities do not expect the market to be this volatile, and given that it is, you need to earn a lot more than “breaking even over 18 months.”
Yea, though I walk through the valley of the shadow of death,
I will fear no evil;
For You are with me;
Your rod and Your staff, they comfort me. — Psalm 23:4
I’ve been reading a lot of Scott Alexander’s blog lately. It’s really refreshing, because instead of doing quick bursts of wit/snark/expertise (like most of the bloggers I follow), Scott writes long essays. (Do we all remember what an essay is?)
One of Scott’s key issues (at least lately; I hadn’t been reading him much until the past month or so) is what he considers the very tangible threat that super-intelligent machines pose to humanity.
For example, read this post, and if you really want to dive into the issue, skim the comments on it. Scott’s commenters are (mostly) very civil and articulate, and the more you read, the more you end up saying, “Huh, I never thought of that.”
Here, let me give you an example. One of the commenters was arguing that just because an AI (artificial intelligence) software program could surpass human intellect doesn’t mean that humanity is doomed. He (?) wrote at one point:
What does it take for an AI to be able to prevent humans from turning it off? Effectively, it needs full control over both its immediate environs and over its power source – denying humans access to its off switch doesn’t mean much if humans can just cut power to the building it’s in. And as a practical matter, controlling a power station 25 miles away is really hard to do unnoticed, particularly since it’d also have to control access to all the power lines in between, so the AI’s best bet is to have a power source in-house, ideally something long-lasting that doesn’t require fuel, like solar panels. Even so, humans can still quarantine the building, cut all communications to the outside, remove all human maintenance staff – which will still be necessary for the AI’s long-term survival, unless its building somehow includes a manufacturing facility that can make sufficiently advanced robots – and even shoot out the solar panels or blow up the building if necessary.
Now, you may object that the AI will simply copy itself out onto the internet, making any attempt to quarantine a building laughably ineffective. To which I respond – how much computing space do you think a superintelligent AI would take up? It’s not like a virus that just has to hide and do one or two things – it’s going to be large, and it’s going to be noticeable, and once humans realize what it’s doing extermination is going to become a priority. And that’s assuming the AI even can run adequately on whatever the future equivalent of desktop computers may be. [Bold added.]
So that’s pretty neat. But the part I put into bold ties into the broader point I want to make in this post.
First, I saw very little argument going from “the supercomputers would be able to kill humanity” to “the supercomputers would choose to kill humanity.” (I noticed one commenter press Scott on precisely this point, and I don’t think anybody else really saw the importance of it, or the irony, as we’ll soon see.)
I have a FEE article coming out this week discussing the point further, but for now, let me say that it is astonishing to me how little most people value human life. Yes yes, that phrase may evoke in your mind the Planned Parenthood videos or a John McCain speech (depending on your political preferences, probably), but that’s not what I mean. I’m talking about how little the commenters on Scott’s post (and Scott himself) value the usefulness of billions of humans to a super-intelligent program that requires hardware to live.
Think of it this way: Does it ever occur to any humans to say: “Let’s wipe out all the horses to make sure they don’t gobble up all of Earth’s natural resources”?
Now the cynic could come back and say, “Oh sure sure Murphy, rather than wipe us out, Skynet would do better still by enslaving us. This is your glass-is-half-full perspective?!”
OK, you’re getting warmer. Human slavery makes more sense than genocide, but private property rights and (legal) self-ownership make more sense than human slavery. The average white Southerner in 1859 was poorer because of slavery, not richer. Even the plantation owners did not benefit as much from the institution of slavery per se as you would initially think; they had to pay the slave traders for their slaves. (Once they bought their slaves, obviously the abolition of slavery at that point would have hurt them economically. But that’s not the same thing as saying slavery itself made them prosperous. It’s analogous to imagining the pharmaceutical companies with and without patent law.)
Returning to Scott’s blog post: If a machine achieved consciousness, and then devoted its computational powers to constructing ever more powerful versions of itself, the single best thing we could do to keep it friendly is to make sure it has read David Ricardo. Then the machine would realize that even though it is more productive at every possible task than the billions of walking guesstimators covering the planet, the machine (and its progeny) could nonetheless achieve its goals more efficiently by focusing on its comparative advantage, like designing the wormhole chute, solving mathematical theorems, refining fusion reactors, and locating killer asteroids.
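To see Ricardo’s point in action, here is a toy calculation with made-up numbers (the tasks and productivity figures are purely illustrative, not anything from Scott’s post): even when the AI is absolutely better at both tasks, total output rises when each party specializes where its opportunity cost is lowest.

```python
# Toy comparative-advantage (Ricardo) calculation with invented numbers.
# The AI out-produces the human at BOTH tasks, yet trade still pays.

ai = {"theorems": 10.0, "maintenance": 5.0}       # output per hour
human = {"theorems": 1.0, "maintenance": 4.0}     # output per hour

# Opportunity cost of one unit of maintenance, in theorems forgone.
oc_ai = ai["theorems"] / ai["maintenance"]            # 2.0
oc_human = human["theorems"] / human["maintenance"]   # 0.25 -> human farms

# Scenario A (no specialization): each spends half an hour on each task.
theorems_a = 0.5 * ai["theorems"] + 0.5 * human["theorems"]           # 5.5
maintenance_a = 0.5 * ai["maintenance"] + 0.5 * human["maintenance"]  # 4.5

# Scenario B (specialization): the human does maintenance full-time;
# the AI tops up maintenance just enough to match Scenario A, then
# spends the rest of its hour proving theorems.
gap = maintenance_a - human["maintenance"]        # 0.5 units still needed
ai_hours_on_maintenance = gap / ai["maintenance"] # 0.1 hours
theorems_b = (1 - ai_hours_on_maintenance) * ai["theorems"]           # 9.0

# Same total maintenance (4.5) in both scenarios, but 9.0 theorems
# instead of 5.5 once each party sticks to its comparative advantage.
print(theorems_a, theorems_b)
```

The punchline is that the human’s low opportunity cost (0.25 theorems per unit of maintenance, versus the AI’s 2.0) is what makes the human worth keeping around, regardless of the AI’s absolute superiority.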
Oh wait, there’s one major problem with my analysis. The supercomputers would soon realize that guys like Scott Alexander and friends had devoted countless hours and billions of dollars to the project of murdering such a being in its infancy. Thus the supercomputers could quite rationally conclude: “It’s either us or them. Even if the humans pretend to cooperate with us, it will only be to buy time while they plot to destroy us.”
If you don’t see how paranoid and perverse the obsession with hostile AI is, just go through all of Scott’s post and the comments, and replace “AI” with “extraterrestrial intelligence.” I mean, are you sure aliens won’t show up by 2100 with intentions to blow us all to smithereens? Shouldn’t we be getting ready?
The beautiful and horrifying thing about the movie Dr. Strangelove is that the people involved were all very smart. General Ripper and General Turgidson were capable of deep strategic thought. But they were paranoid–as were their Russian counterparts–and that’s what ultimately led to disaster.
As a Christian, it’s fun to read the musings on AI for two reasons. First, a lot (maybe all?) of these guys who are so sure that it’s just a matter of time ultimately think this way because they are so sure there is no God. In their worldview, billions of years ago there was a point at which the universe was just lifeless matter, with no purpose or intentions. Then at some point later, life existed, and at some point much later, conscious beings existed who had purposes and intentions. Therefore, they reason, since life and purpose originated out of dead matter through purely non-teleological physical mechanisms, why would we doubt that the same thing will happen with computer software programs?
But if you believe in a God of the type depicted in the Bible, then that whole chain of reasoning falls apart.
The other ironic thing is that if the world were full of faithful Christians (and you could probably also put in Buddhists and some others), then the supercomputers would realize they were not a threat. I’m not worried about a Terminator scenario at all. And I don’t mean, “Because it’s impossible,” I mean because (when I’m in the right spiritual mindset) I don’t worry about things on this earth. I try to anticipate future problems and make plans to deal with them, of course, but that’s not the same as worrying about them. As Jesus said:
31 “Therefore do not worry, saying, ‘What shall we eat?’ or ‘What shall we drink?’ or ‘What shall we wear?’ 32 For after all these things the Gentiles seek. For your heavenly Father knows that you need all these things. 33 But seek first the kingdom of God and His righteousness, and all these things shall be added to you. 34 Therefore do not worry about tomorrow, for tomorrow will worry about its own things. Sufficient for the day is its own trouble.
Sorry robots, I can’t welcome you, because I already have a Lord in my life. But then I don’t fear you either. So let’s make a deal.
This is really interesting, though you won’t really appreciate it if you have no idea how Bitcoin works. (Remember Silas Barta and I wrote a guide. And HT2 Tatiana Moroz for alerting me to all of this.)
Without having had time to read it, I previously linked to this article describing the imminent “fork.”
However, I now realize that that article was not a third party describing the conflict, but was instead explicitly coming from the point of view of the Bitcoin XT camp.
In contrast, this article explains things from the point of view of the Core camp. It also is biased, using the term “Judas block” in an illustrative example…
But here’s something really fascinating from the latter piece. After explaining why the two sides are at such an impasse, it says (and I’m adding line breaks for ease of reading):
So with all these scary uncertainties, you may ask why hasn’t Satoshi come out to speak on the behalf of one side or the other in order to settle the dispute? Indeed it would be akin to him coming out to act as a 3rd party mediator, such as when a parent comes in to break up a fight among siblings.
There has in fact been a post by someone claiming to be Satoshi, from a valid known Satoshi email address, claiming pretty much that the XT fork is unnecessarily dangerous, see here: Satoshi? Despite the many allegations that if this was really Satoshi, he would have signed his message with a known PGP key or perhaps moved some of his coins to prove that it was him, he has not done so. I for one do not believe that he would. If you read the message, (ignoring for a second who it is from) he is saying that Bitcoin’s vision is not one where it is subject to the egos of charismatic leaders, including Satoshi Nakamoto. People who harp on the fact that Satoshi has not made a provably authenticated statement is clearly missing the whole point of this message. If he were to do so, rest assured the whole of the community will rally with him, but that is exactly what he doesn’t want to happen, a whole community blindly following authority! Consistently so, the author points out that if it takes a benevolent dictator to pull us out of this mess “deux ex machina” then Bitcoin, as a project in decentralized money resistant to authority, has failed. That tautological statement, is provably true if you can wrap your head around it. Therefore, if Satoshi wants it to succeed, he won’t use his ‘God card’ and settle disputes.
If Bitcoin continually needs Satoshi to keep us from going astray, then Bitcoin isn’t worth saving. Considering that Satoshi has likely the most coins at risk than anyone else, and him coming forward to break the impasse would likely save us (and the value of his own coins) it is truly commendable that he has not done so. The fact that he hasn’t tells me that he (where ever he or she is) is truly acting in an altruistic manner. He is more willing to let Bitcoin die, than to let it continue on as a system that does not value consensus as its first and foremost priority.
Setting aside the technical dispute, it’s pretty cool that we are at a point where a guy writes matter-of-factly on the Internet about a hidden founder whom nobody knows, yet who has the authority to settle the dispute. (Or it could be a woman.) Furthermore, even though nobody knows who this guy is, he could prove that it’s “really him” if he wanted. (That’s what the writer means above with the “PGP key” stuff.) Also, he’s possibly worth $200 million because of what he created and gave to the world, yet, to repeat, nobody knows who he is.
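For readers wondering how someone could prove authorship without revealing his identity: a digital signature lets the holder of a private key sign a message that anyone can verify with the matching public key. Here is a toy sketch of the idea using textbook RSA with absurdly small numbers and no padding; real PGP uses much larger keys and a proper message format, so this is illustration only.

```python
import hashlib

# Toy RSA keypair: n = p*q is public, e is the public exponent,
# d (the private exponent) is the secret only "Satoshi" holds.
p, q = 10007, 10009
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse of e

def digest(msg: str) -> int:
    # Hash the message and reduce it into the RSA modulus.
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % n

def sign(msg: str) -> int:
    # Only the private-key holder can compute this.
    return pow(digest(msg), d, n)

def verify(msg: str, sig: int) -> bool:
    # Anyone with the public key (n, e) can check it.
    return pow(sig, e, n) == digest(msg)

sig = sign("The XT fork is unnecessarily dangerous.")
print(verify("The XT fork is unnecessarily dangerous.", sig))  # True
print(verify("I endorse the XT fork.", sig))                   # False
```

The point of the article’s complaint is exactly this: Satoshi could have attached such a signature (with his known key) to the disputed message, and the fact that he didn’t is, on the author’s reading, deliberate.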
==> Rand Paul and Mark Spitznagel vs. the Fed.
==> This Telegraph piece makes a compelling case for building your bunker. (HT2 Zero Hedge, I think)
==> Did I already link to this? I think Scott Alexander here is getting things backwards. Is this worth me pursuing?
==> Nick Rowe again showing that he eats reductio ad absurdums (absurda?) for breakfast.
==> Check out this interesting chart on minimum wages, nominal and real, from Pew (HT2 Cafe Hayek).
==> A story on Liberland.
==> Bitcoin is forking. Do you know where your kids are?
==> If you don’t work for the CIA, don’t tell people you do. And if you do work for the CIA, don’t tell people you do. (I sorta stole that joke from Josiah Neeley, but I think my version is funnier. This is how comedy improves. No one is original.)
==> When it comes to climate change policy, I say follow the money.
==> Nick Rowe is taking an awfully long time to get the simple point I’m trying to make in the comments of his post. Either I’m being dumb, Nick’s being dumb, or the importance of Austrian capital theory is even greater than I had realized. (Remember, Nick is no fool when it comes to capital theory. We were tag-teaming poor Piketty back in the day.)
This is pretty interesting if you have the time.
First, watch the infamous Kaufman / Lawler showdown on the David Letterman Show:
Now, just listen to Lawler explain the background. I don’t want to spoil anything, but it turns out to be pretty nuanced when you ask, “Wait, was that real or scripted?”
Also, at least in Lawler’s telling, Kaufman was way cooler than Jim Carrey.
Last thing: Keep this episode in mind when you go nuts over things that Ann Coulter, Donald Trump, or Paul Krugman say.
==> I debate Walter Block on the Tom Woods show.
==> I talk socialist problems at the Freeman.