28 Mar 2016

Callahan Avoids the Rhetorical Traps Laid By His Interlocutors

Humor | 76 Comments

In this post, Gene complains about the vacuity of the Turing Test. In the comments, three humans unanimously report that based on analysis of what Gene typed on his keyboard, they do not believe that he understands the Turing Test.

Gene, being a good Irishman, immediately perceived the danger to his position in the argument. If he took his readers’ reports to heart and thought maybe he was being unfair, then that itself would reinforce the spirit of the Turing Test, defeating the purpose of Gene’s post.

Thus Gene declared that his commenters were idiots who obviously didn’t understand the Turing Test like he did.

76 Responses to “Callahan Avoids the Rhetorical Traps Laid By His Interlocutors”

  1. Silas Barta says:

    Looks like Gene_Callahan shut off comments to his blog except from team members. Here is how I was going to reply:

    Gene_Callahan: So, the computer can’t rely on any programmers?!

    Me: It can’t rely on signals emitted by the programmers *after* the start of the test, just like:

    – In a school quiz, students can confer with their friends before the test starts but not after.
    – In the test you’ve described in your post, if you wanted to make it analogous to the TT, Emily could learn from the historians in general but could not have them with her in the box (or otherwise receive signals from them) when the test starts.

    Obviously the Turing spec requires that you “know what’s going on in the box” to the extent that it’s e.g. not simply relaying someone else’s signals. The imitation game does not require the judge to be agnostic on whether the subject has a phone line to a woman/man.

    • LK says:

      John Searle’s Chinese room argument. Read it.

      • Silas Barta says:

        Already did. What is that responding to?

        • Craw says:

          It is responding to the claim that prices adjust to clear markets.

  2. rob says:

    I’m sad Gene has decided not to allow comments any more, because it means I won’t be able to annoy him with “discussions” like this in the future.

    http://gene-callahan.blogspot.com/2016/03/bob-murphy-infallible-stock.html

    I liked the ritual of debating with him:

    1. Identify an obvious flaw in his post and write a comment on it (very easy on his Turing posts).
    2. He will first say you haven’t read his post properly.
    3. When you defend yourself against that charge, he will tell you that you have inadequate comprehension skills.
    4. After that, any further comments will not be published.

  3. Tel says:

    I love Gene’s new “purple prose” decor.

  4. Tel says:

    “While Jamal does OK, ‘Emily’ easily beats him in the contest. Per Turing, we must conclude that Emily knows more about history than does Jamal.”

    No, per Turing we must conclude that the box containing Emily and those other guys together knows more about history than the box containing Jamal.

    If the point of this story is that in practice there might be ways to cheat in a “black box” style test, well that’s probably true… and there are ways to cheat various other tests too. I think it’s a bit presumptuous to claim that poor people are intrinsically more honest than wealthy people, but my experience marking engineering and programming assignments is that a lot of cheating goes on… and it’s detectable, but it takes extra effort to do so. Couldn’t tell you how it correlates with family fortune though.

  5. Major.Freedom says:

    THE MOUTH OF TRUTH HATH SPOKEN: “When truth is attempted to be spoken, it shall be deleted.”

    So sayeth the mouth.

  6. Major.Freedom says:

    I don’t get Callahan’s argument. It looks like he has no idea what the test is even about.

    The Turing Test is a test of whether or not you can tell the difference between a human and an AI by communicating with it.

    In the history trivia scenario he describes with Jamal and Emily, the “Turing Test” would come into play not to judge whether one person is smarter than the other based on the answers to the questions they are asked, but to judge whether you can identify, based on the answers you receive, whether the entity in the black box is a human or an AI. If you are not able to correctly identify the entity, and the entity is in fact an AI, then it is said to have “passed the Turing Test” with you.
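
    For concreteness, here is a minimal sketch of that protocol in Python; the judge and subject callables are hypothetical stand-ins I am making up for illustration, not any real API.

    ```python
    # Minimal sketch of the imitation game described above.
    def run_imitation_game(judge, subject, subject_is_ai, questions):
        """The judge interrogates an unseen subject, then guesses its nature."""
        transcript = [(q, subject(q)) for q in questions]
        verdict = judge(transcript)  # the judge's guess: "human" or "AI"
        # An AI subject "passes" only when the judge fails to identify it.
        return subject_is_ai and verdict == "human"

    # Toy usage: a box with a single canned answer is identified at once.
    canned_box = lambda q: "I am definitely a human."
    naive_judge = lambda t: "human" if len({a for _, a in t}) > 1 else "AI"
    print(run_imitation_game(naive_judge, canned_box, True,
                             ["What year is it?", "What did I just ask?"]))
    # -> False: the box did not pass.
    ```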

    The alarm clock and rabbit cage scenarios could also undergo the Turing Test. Someone can place the alarm clock or the rabbit cage in a black box, and then you can start asking it questions and interacting with it. I’m pretty sure Callahan will realize very quickly he’s not dealing with a human, and the alarm clock and cage will have “failed the Turing Test.”

    I thought that was obvious?

    • Bob Murphy says:

      MF I actually think Gene could understandably sigh at your particular criticism. He knows what the standard use of the Turing Test is, and is trying to tweak it to show it would be ridiculous in another context.

      So the fact that he’s moved away from “Can we tell artificial intelligence from human intelligence?” to “Can we tell high intelligence from low intelligence on an academic test?” doesn’t seem dubious to me. Rather, the crucial flaw in Gene’s argument is the one I point out elsewhere in the comments here. Specifically, Gene is letting Emily get help from other people, and yet Gene thinks Turing would be forced to conclude that Emily alone should be accorded all of the merit that the group as a whole exhibited.

      • rob says:

        Gene thinks the Turing test is vacuous because it’s supposed to evaluate intelligence, but even if a digital computer passed the test it would not (in Gene’s view) be truly intelligent.

        I suppose his history test example is meant to show that the use of the black boxes masks rather than reveals who is the best at history, and by extension that the real Turing test masks rather than reveals what is truly intelligent.

        However, what makes the “history test” version vacuous is that Emily cheats. Gene then just asserts that despite the cheating, Turing would be forced to conclude that Emily knows more about history than does Jamal, which is clearly not correct. Presumably the results of any scientifically conducted Turing test would be deemed invalid if proper process had not been followed.

        Maybe Gene’s meta-point is deeper: Any digital computer that passed the Turing test could only be in the box because of the people who had programmed it and this makes them equivalent to the historians. Turing might conclude that the computer was intelligent, but he would be as wrong to draw that conclusion as he would be to conclude that Emily was the best at history. In Emily’s case it is the historians, and in the computer’s case the programmers, whose presence (real or metaphorical) in the box led to the results of the tests.

        • Silas Barta says:

          “Maybe Gene’s meta-point is deeper: Any digital computer that passed the Turing test could only be in the box because of the people who had programmed it and this makes them equivalent to the historians”

          If that’s the point, it’s failed too, because of the “Einstein’s mother” argument — “If Einstein is so smart, that must just be because of how his mother raised him, so he’s really just reflecting her far greater intelligence.”

          I know what you’re going to say: “But the programmers have insight over a computer’s internal logic in a way that humans don’t have over their children.” Well, the line isn’t as sharp as you might think — the kinds of learning algorithms used for Go can render its model just as opaque to an outside inspector as a human’s brain.

      • Major.Freedom says:

        Then I don’t get how the sweeping conclusion “the Turing Test is nonsense” follows.

        What does that mean “in another context”? The test itself is the “standard” test. If the scenario is a black box full of historians, and a black box with one person, the test properly applied would test whether it is possible to tell whether the black box has human intelligence in it or AI.

        What am I missing here? Where in the Turing Test framework does it accommodate a test of how intelligent one human is relative to a bunch of humans? As far as I know, it isn’t a test about distinguishing one human from many humans, but about distinguishing a human from a machine.

        Is the group of humans supposed to be a metaphor for a machine?

        • Craw says:

          Correct on all points. Wait … I bet you read Turing’s paper didn’t you. Sneaky!

      • Craw says:

        This doesn’t make sense. If he is “tweaking” it to show it fails at some other task, how can anyone present that as an argument that the test is vacuous? If I take a mercury barometer and tweak it so it fits in your mouth, but it does a crappy job taking your temperature, how does that prove it’s useless at measuring air pressure? How does the alarm clock thing make any sense at all? As M.F. points out, you don’t talk to alarm clocks, and they’d fail any TT. Callahan talks about knowing more about waking him. The TT is not about knowing anything.
        It seems clear he does not know what the TT is at all.

  7. Bob Murphy says:

    For the record, I like Gene’s general attempt here to come up with a reductio ad absurdum, but I think he just executed it poorly and then refused to listen when three separate people tried to point it out in the comments.

    If you put a group of experts and a student in a box and “it” did well on an exam, the Turing Test would conclude that the system of things in that box was smarter than the other kid.

    If Gene’s critique worked, then we could come up with an even easier demonstration of the (alleged) vacuity of the Turing Test:

    Suppose Microsoft programmers take a self-contained computer system and put it behind a screen. A sample of 100 American teenagers with normal cognitive abilities each converses with the MS computer for an hour, and afterwards nobody can tell the difference between its responses and those of an actual teenager.

    Thus, per Turing, we would have to conclude that the computer’s power cord was self-aware.

    (If you just furrowed your brow, right, that was the point. Turing obviously would say *the entire computer system* behind the screen was functionally intelligent, not that one component of the system necessarily was. By the same token, if Emily and a group of experts together can pass a test, then Turing would say the whole system was smarter than the other kid–not just Emily herself, who is only a component in the system that passed the test.)

    • Major.Freedom says:

      “Turing obviously would say *the entire computer system* behind the screen was functionally intelligent, not that one component of the system necessarily was. By the same token, if Emily and a group of experts together can pass a test, then Turing would say the whole system was smarter than the other kid–not just Emily herself, who is only a component in the system that passed the test.”

      Bingo.

      If in the thermostat scenario we restrict the concept of a human being to a thermometer reader who pushes one button to warm the room and pushes another button to cool the room, then yes, in this contrived example the thermostat might very well “pass the Turing Test”. But this doesn’t make the test absurd or nonsense; it just means the machine is performing the task as well as the human.

      But Turing had in mind humans who were more than button pressers: communicators who could engage in real-time conversations unrestricted by experimental design. A Turing Test would allow a known human to ask questions of the thermostat in the black box.

    • Silas Barta says:

      I’m still not seeing what great insight Gene_Callahan was offering …

      • Gene Callahan says:

        Yes, I agree you have missed that.

        • Silas Barta says:

          Though not for lack of trying on your part!

          Most people are able to communicate the need for an additional consideration beyond IO in judging intelligence *without* displaying a deep misunderstanding of the Turing Test. [1]

          Why can’t you?

          If your point was that there are considerations other than black-box IO in judging intelligence — great, I agree! Still doesn’t justify your highly confused argument for that point or your highly vague language about the Turing Test being “absurd”.

          I know of two great papers that discuss the need for intelligence tests to worry about scalability; the first one argues that a lookup table for arithmetic wouldn’t “understand” arithmetic, but one that embodied a more general process for returning answers *would*. The second discusses the polynomial/exponential resource-usage distinction and argues that the important point is whether it’s possible to pass the Turing Test with a *polynomial* amount of resources, or an exponential one (relative to the number of questions asked). That nicely abstracts away the issue of whether the subject had help, because it’s the *scalability* that matters.

          (Not that it matters or anything, but both are by reductionists — such horror!)
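
          To make that first point concrete, here’s a small sketch of my own (not from either paper): a memorizing adder whose storage grows with the number of possible questions, next to a procedure that embodies the carrying algorithm and scales with the length of the input instead.

          ```python
          # Illustrative sketch only; the function names are made up.

          def make_lookup_adder(max_n):
              # Memorizes every answer up front: the table has max_n**2
              # entries, i.e. it grows exponentially in the number of
              # digits of the operands it can handle.
              table = {(a, b): a + b for a in range(max_n) for b in range(max_n)}
              return lambda a, b: table[(a, b)]

          def general_adder(a, b):
              # Embodies schoolbook carrying, digit by digit; time and
              # memory grow linearly with the number of digits, not with
              # the number of possible questions.
              xs, ys = str(a)[::-1], str(b)[::-1]
              out, carry = [], 0
              for i in range(max(len(xs), len(ys))):
                  dx = int(xs[i]) if i < len(xs) else 0
                  dy = int(ys[i]) if i < len(ys) else 0
                  carry, digit = divmod(dx + dy + carry, 10)
                  out.append(str(digit))
              if carry:
                  out.append(str(carry))
              return int("".join(reversed(out)))

          # Identical IO behavior on everything the table covers...
          assert make_lookup_adder(100)(57, 68) == general_adder(57, 68) == 125
          # ...but only the second scales: general_adder(10**50, 1) works fine.
          ```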

          Also, your whole point about “lol it’s just the programmers” falls prey to the Einstein’s mother argument that I made right above where you inserted your snide remark.

          So yeah, Gene_Callahan, we could have had a very productive discussion about those points if you were capable of actually articulating clear arguments instead of just making yourself as obscure as possible and then mocking people for missing the point that you obscured.

          And when you get to that point — and stop blocking my criticisms from your blog — I’ll be happy to have that discussion! In the meantime, don’t flatter yourself 😉

          [1] Your latest clever attempt at re-writing your criticism *again* shows ignorance of the insight behind the Turing Test. It’s such a useful test because it allows the judge to refocus at will and make arbitrarily deep queries of arbitrary topics — meaning you’d need a legit implementation of the understanding to pass. So memorizing the first 10,000 questions a historian would be asked doesn’t help either, unless you could also memorize all the possible followups, and the followups to those.
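
          A back-of-envelope way to see why (the branching number below is my own illustrative assumption, not a figure from Turing):

          ```python
          # If the judge can choose among `branching` sensible questions at
          # every turn, a cheat sheet must cover branching ** depth possible
          # transcripts -- exponential in the depth of the conversation.
          def transcripts_to_memorize(branching, depth):
              return branching ** depth

          for depth in (1, 2, 3, 5, 10):
              print(depth, transcripts_to_memorize(10000, depth))
          # depth 1 -> 10**4, depth 3 -> 10**12, depth 10 -> 10**40: hopeless.
          ```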

          • Craw says:

            Look at his blog now. The story NOW is he was making an analogy. He’s hoping we won’t notice he screwed up every part of the test. No communication, whole system vs power cord, that the test is for artificial intelligence, doesn’t measure degree. He got all this wrong and more — repeatedly, snidely, and loudly. He seems to have sloppily confused “ideological Turing tests” with Searle’s Chinese Room with the bizarre notion Turing is offering a logical proof, tossed in confusion about communication, added some of his usual charm, and puréed.
            Read Turing’s paper, it’s online, and see for yourself what a balls-up he made of it.

    • GerbilDerby says:

      “A sample of 100 American teenagers with normal cognitive abilities each converses with the MS computer for an hour, and afterwards nobody can tell the difference between its responses and those of an actual teenager.”

      Yes but the goal was to determine whether or not it could pass as human.

  8. LK says:

    The only problem with that post is his words “no way to determine the intelligence of anything else”. That is a step too far.

    The Turing test only tests simulated intelligence. The thing that simulates the intelligence (software) is as dead and unconscious as a washing machine. It lacks true conscious intelligence like ours.

    See John Searle on this:

    Searle, John R. 1982. “The Chinese Room Revisited,” The Behavioral and Brain Sciences 5: 345–348.

    Searle, John R. 1992. The Rediscovery of the Mind. MIT Press, Cambridge, Mass and London.

    http://socialdemocracy21stcentury.blogspot.com/2014/02/limits-of-artificial-intelligence.html

    Have you never read the Chinese room argument? Unless you understand it, you don’t understand this issue.

    • rob says:

      “The Turing test only tests simulated intelligence”

      No it doesn’t – it tests for a machine’s ability to exhibit intelligent behavior. I don’t think there is a test that even in theory could distinguish between real and simulated intelligence in an entity that passed the test, is there?

      • guest says:

        Rob,

        If, for some reason, a particular Turing Test computer just happened to have canned answers to every question asked of it – like from a table of responses, and not trying to evaluate context and such – and the humans incorrectly guessed that a human was giving the answers, would you still say that the computer passed the Test?

      • Tel says:

        “I don’t think there is a test that even in theory could distinguish between real and simulated intelligence in an entity that passed the test, is there?”

        That’s the whole point of Searle’s argument… to assert the existence of something that cannot be measured. If there was an external way to measure the contents of the Chinese room then the “black box” would fail the Turing test. Thus, by definition this “essence of intelligence” cannot be measurable from outside the box.

        Once we are talking about the existence of unmeasurable things it becomes a spiritual discussion, nothing to do with science. Bring it up with your local priest who is equipped for such matters.

        • guest says:

          LK’s just trying to help clarify what the Turing Test is designed to do, and he is correct.

          But if we’re going to explore some of the implications of the Turing Test, then …:

          Did anyone else notice that LK just acknowledged that correct outputs can be given, without conscious effort, provided there’s a sophisticated enough algorithm?

          The algorithm isn’t predicting anything, but it comes up with correct answers.

          That’s what Praxeology does.

          *Takes a bow*

    • Tel says:

      “Have you never read the Chinese room argument?”

      Yeah, I know it, it’s nonsense.

      Firstly, the argument is completely irrelevant because in a practical situation (like a robot doctor or a self-driving car, for example) the “Chinese room” makes no difference to the utility of such a device. But more than that, none of us apply Searle’s criteria in any situation where we communicate with a person we have not yet met in person. When you call up the relevant government department to ask about your tax status, do you demand a brain scan of the person in their call center?

      “Unless you understand it, you don’t understand this issue.”

      It’s pretty easy to understand: you and Searle believe in some secret essence of intelligence that cannot be measured, nor even described, but somehow it must be there, like an immortal soul or something. If you were religious then at least you would be consistent.

      The “Chinese room” argument also demonstrates how ridiculous strict reductionism can be… a car is just a collection of atoms, so how can random atoms win a Formula One race?

      Of course, the atoms aren’t random; they are a very carefully organized pattern, and that’s the important bit. Mess the pattern around and it doesn’t work so good.

      If you blank a computer hard drive, a reductionist would argue that it still works the same… because all the same parts are still there, shouldn’t matter whether there’s any software loaded, right? Yeah, right… wouldn’t want to let someone like Searle do any engineering work.

      • Craw says:

        Well said.
        Has LK read my Chinese Brain paper? It is about a collection of neurons in a skull. None of the neurons speaks Chinese, but if you interact with the brain it seems to.

        • LK says:

          “It is about a collection of neurons in a skull.”

          Bingo.

          Neurons and neurotransmitters and all the biological and biochemical processes in the brain that work in a complex manner to cause the emergent property we call the conscious intelligent human mind.

          The AI buffs assume that conscious intelligence is just an outgrowth of information processing. But that is a necessary but not sufficient condition for conscious intelligence. It takes brain chemistry organised in particular ways too.

          • Craw says:

            To compute? False. For consciousness? Unproven and beside the point. Callahan clearly does not understand the mechanics of the Turing test much less its significance. To steal Murphy’s idea, he thinks it’s about the power cable. A cruder misunderstanding is hard to imagine.

          • Tel says:

            Now you have changed your story, because that is not Searle’s “Chinese room” argument. By implication, Searle denies emergent properties entirely.

            So your new story is that yes, we do have emergent properties, but for some reason a bunch of simulated neurons cannot possibly produce this effect… only real wet biochemistry can do it. Stated with no particular proof, that seems like nothing more than personal prejudice.

            • Craw says:

              Well said again. The Turing test focuses on the mental to get away from this prejudice for the biological. Of course if you start with the dictum that only wetware can think then you conclude machines can only simulate thought. But where does that dictum come from? Nowhere, it’s a prejudice pure and simple. I could say that water only comes from rain or rivers, so if you burn hydrogen you get only simulated water, not the real stuff. It doesn’t matter if I cannot give you a test to distinguish them, all that proves is the simulation is good.

        • guest says:

          “None of the neurons speaks Chinese, but if you interact with the brain it seems to.”

          Does “interacting” entail actually speaking Chinese, or just seeming to?

          In other words, the input has to be coming from something more than neurons in a skull.

    • marris says:

      The best counter-argument that I have read to Searle’s Chinese room is from Scott Aaronson. It is given in Lecture 4 of his Democritus series: http://www.scottaaronson.com/democritus/lec4.html

      Search for “Chinese room” in the notes at that link.

      • guest says:

        Nah. Scott misses the point.

        The output of the Chinese Room is intelligent in the sense that the reader can understand it.

        But the Chinese Room, itself is not intelligent.

        If producing an output that is meaningful to the reader is a sufficient test of actual intelligence, then an old-timey two-tray weight scale would pass the Turing Test by answering the question, “Which of these two is the heavier?”

        • marris says:

          > But the Chinese Room, itself is not intelligent.

          What’s your argument for this?

          Note that we can make the Chinese Room “more human-like” with additional thought experiments. We can replace the extra human participant with a microphone and speaker. And we could shrink the whole apparatus down to the size of a human brain. We could even embed the smaller machine in a humanoid frame that changes its size over time (grows) and then fails (dies) after 85 years.

          During its lifetime, this being could have interacted with friends, co-workers, lovers, etc and done all the things that a human was capable of doing.

      • Craw says:

        Excellent article.
        Searle uses another trick: having a conscious human in the process whose consciousness is ignored. A room with software, not a person, feels less odd. This goes to that article’s last point.

    • Craw says:

      You are not consistent even within one paragraph. If the test only applies to artificial intelligence then there is more wrong with it than just the few words you quote. Which is it?

      Plus you have either misstated the test or begged the question. It is a test of machine intelligence. Asserting it only tests simulated intelligence is either question begging circularity or a gross misrepresentation of what Turing said. In either case your argument is worthless.

  9. Josiah says:

    I don’t understand Gene’s analogy, but I do agree that the Turing Test is not a good test of whether machines think.

  10. Bob Murphy says:

    My gosh you guys, you’re starting to make me sympathize with Gene, when the whole point of this post was to point out the irony that Gene failed the Turing Test of “someone who understands the Turing Test.”

    Have you guys not seen the GMU crowd use this metaphor in other contexts? E.g., can Bryan Caplan pass the Turing Test of being a Keynesian? Don’t you see how that makes sense? In contrast, don’t you think Krugman would fail the Turing Test of being an anarcho-capitalist?

    So, Gene is trying to show what’s wrong with this approach, by (he thinks) constructing examples where somebody or something would pass a suitably altered Turing Test, yet we clearly know that the somebody or something should NOT be granted the epistemic status that the faulty Test awards in those examples. The problem is, I think Gene didn’t really do the Turing Test right in his examples. (Or at least the Emily one; I didn’t think too hard about the alarm clock one.)

    • rob says:

      But if all Turing-type tests are vacuous, then whether Gene passes the Turing test for understanding Turing tests will give no insight into whether Gene actually understands the Turing test or not, right?

      • rob says:

        Gene’s second example is: “Similarly, per Turing, if we need to judge ‘Who knows most about when I must wake up,’ my mom or an alarm clock, we can’t look at how either ‘system’ came to wake me up: we can only say, ‘Well, given my mom came in my room at 6:30 and made a loud noise to wake me, and my alarm also made a loud noise at 6:30 to wake me, their knowledge is equal.’”

        It’s hard to fit this into a Turing-test-type scenario. But let’s assume that Gene’s mom and his alarm clock are put into black boxes and questioned about when he must wake up. It seems quite likely that Gene’s mom would win this particular contest and vindicate the Turing methodology!

      • Bob Murphy says:

        “But if all Turing-type tests are vacuous, then whether Gene passes the Turing test for understanding Turing tests will give no insight into whether Gene actually understands the Turing test or not, right?”

        Right, rob. So that’s why I originally crafted this post as congratulating Gene for dodging the traps you guys were setting for him…

    • Major.Freedom says:

      “Have you guys not seen the GMU crowd use this metaphor in other contexts? E.g., can Bryan Caplan pass the Turing Test of being a Keynesian?”

      I have not, and if that is how “Turing Test” is being used then it is being used completely differently from what it originally meant. It sounds like the term is being used loosely, as a metaphor, for comparing a person’s theories or ideas to a set of theories or ideas.

      Sort of like Ken Kesey borrowing the term “acid test” to refer to something very different from its original meaning.

      The Turing Test is about human intelligence versus AI. If something else is being called nonsense, then it isn’t the Turing Test that is nonsense.

  11. Bob Murphy says:

    Here you go, guys. The GMU guys were talking about the “ideological Turing Test” for a while there. So, to put our current dispute with Gene into that framework:

    (a) If someone said to Bryan, “No no no Prof. Caplan, the Turing Test is about a computer passing off as intelligent. Why are you talking about ideology? Can’t you read?” then that would be goofy.

    (b) If someone said to Bryan, “This is vacuous! Suppose I have Paul Krugman come over to my house, and using my computer he gets the panel of judges to agree he sounds like a Keynesian. Then you’re saying I would be a Keynesian!! Ha ha how stupid,” then that would also be goofy.

    Many of you guys are doing (a) to Gene, but I think Gene himself did (b).

    • Transformer says:

      The key difference is that Caplan and the others are generally using the “Turing test for X” to shed light on X. Gene is using the “Turing test for X” to debunk the concept of the Turing test itself.

      So response a) would be out-of-context in the Caplan case.

      But as Gene is saying “the Turing test for X throws up false positives, therefore the whole concept of the Turing test is vacuous”, it seems reasonable to say “But Gene – the Turing test for X is unlike the real Turing test in significant ways, and therefore your analogy is false”.

    • Silas Barta says:

      … and what we were criticizing is that his test setup had the subject relay messages from someone else, which breaks the purpose of the test and the validity of the analogy.

    • Grane Peer says:

      Bob, that Ideological Turing Test you linked to is vacuous. I identified as liberal, did well (4/5) on conservative views and my answers for liberal views will go into the pool for self-described liberals. At least with that particular test you have no idea what’s going on inside the box. Basically, Krugman answering for Caplan, or should I say, Caplan answering for Krugman.

      Even though (b) is goofy we at least have a case where a test is so poorly regulated that that can occur. If the point is to convince the tester and the tester has no idea what’s happening in the box(?)

      I think the problem with Gene’s post is closer to (a), insofar as it’s unclear what point a Turing Test serves if the question is who has the most accumulated information. That would be like putting the contestants on Jeopardy off-camera and having them type their answers.

  12. rob says:

    I learned about Litmus testing last week. But I just used the test to see if water contains hydrogen and it came up with a false negative. The whole litmus test thing is totally vacuous.

    • Bob Murphy says:

      Oh you guys…

    • Silas Barta says:

      Hah — interestingly, I remember learning in chemistry class that table salt mixed with water should still have a pH of 7. But at work (my part time food service job), we had pH testing strips that would show salt water as being rather acidic.

  13. Craw says:

    Impossible to believe he understands the test at all after that. Redundant after his hash of a post where he gets everything wrong, but still impressive.

  14. Bob Murphy says:

    MF and Grane Peer: The “ideological Turing Test” grew out of the desire to know one’s opponent. I think I know Krugman’s model way better than he knows mine. Now how could we objectively test my belief?

    Surely a pretty good way is if Krugman and I are each behind a screen, and are asked questions about economic policy, and a panel of DeLong, Noah Smith, and Dean Baker can’t tell who is the real Keynesian and who is just faking it.

    In contrast, if Guido Hulsmann, Joe Salerno, and Roger Garrison started asking questions, they’d be able to identify the real Austrian more quickly.

    (Note: I’m just using this as an example. I couldn’t actually fake being Krugman with those guys, because they’d ask technical questions about the Heckscher–Ohlin model or something. But I could be a much better Scott Sumner or Paul Krugman than they could be me.)

    • Tel says:

      Isn’t that what trolling is for?

      If I can write a spoof Keynesian article that convinces live Keynesian economists to start advocating for some new and even more outrageous policy, then I have proven I can push their buttons. Extra points if the Austrian readers twig to it being a spoof. Lose morality points if the numbnuts go and implement that policy and everybody suffers as a consequence, but I’d have to start believing in “the public good” before worrying about something like that.

      PS, how long do you think before they realize the negative interest rate suggestion was just trolling?

      • Bob Murphy says:

        Tel I don’t suppose there is a YouTube video of you? Your comments are always so…unique…that I would like to be able to picture the guy who generates them. Right now you are a white box to me.

        • Tel says:

          Doesn’t that bone head with the sticking out ear, hairy chops and blue sky background come up next to my post?

          http://1.gravatar.com/avatar/d5380405efc665d1feea17da8b4ae804

          Long ago I used to be a model for those stone heads on Easter Island, but cheap imports pushed me into retraining.

          • Bob Murphy says:

            Right but I want to see you enact the English Room scenario.

            • Tel says:

              You think I secretly have someone here helping me write these?!?

              Even hypothetically presuming I did feel the urge to show transparency, what approach would deliver adequate proof at a feasible cost? For example, you requested a video, but with minimal effort a video could hide something.

              Suppose halfway through making a point, I stop to check my favorite search engine and revise some data, maybe I refer to earlier notes, pull up a spreadsheet. Is that “me” doing those things, or is the output now tainted by external influence?

              Remember the case of the engineer who secretly outsourced his own job; the employer never detected the difference in output, and it was caught only because some connections to the company VPN were coming from India or somewhere. If those tracks had been covered the guy would have been fine.

              • Bob Murphy says:

                Tel, check out reference (3) here:

                https://www.coursehero.com/file/p1tumjn/This-hypothetical-example-opposes-the-Turing-Test-a-test-which-is-passed-when-a/

                (I think the actual article isn’t online anymore.)

              • Keshav Srinivasan says:

                Bob, the Wayback Machine still has your article:

                web.archive.org/web/20130613150649/http://www.anti-state.com/article.php?article_id=247

              • Tel says:

                Try here…

                http://archive.is/TBQ1G

                For what it’s worth, I agree with you, I think it can be explained in fewer words.

                Getting back to the case of ideology; the ability to out-troll the other group (and there are various approaches) demonstrates an ability to understand the other guy’s ideology without agreeing with it (even better if you also understand the other guy’s tactical approach).

                There was a recent discussion between Stefan Molyneux and Vox Day about the “Social Justice Warrior” strategy. Day summed it up to three points:

                * The SJW gives higher priority to delivering the appropriate emotional “feel” than to sticking with actual facts and truth (i.e. signalling for tribal support).

                * The SJW will reliably escalate when under pressure or when questioned in any way… on the basis that many opponents can be made to back down with sufficient escalation. This can take the form of pretending to take enormous offense at any slight, or stepping up the name calling, or playing the “how low can you go” game on personal attacks, or just declaring whole topics off limits and unspeakable (because those are inconvenient right now). Anything that helps shift the ground away from logic and rational analysis onto emotive mud slinging. Attacks on careers and businesses are common, as well as parallel attacks on any supporters, so as to isolate the target.

                * The SJW telegraphs their own mental state (including weakness and insecurities) by projection, blaming others for what they are doing or intend on doing.

                When you think about this, it’s easy to find ways to catch such people in very public displays of hypocrisy and often dishonesty as well. Donald Trump has mastered this bait and trap approach. Good work Trump IMHO.

                You say Trump is a bully, I think he is doing an excellent job standing up to bullies.

    • Grane Peer says:

      Bob, I think it is, first, bizarre to ask the question “who knows more history?” with a Turing Test. Be it man v. woman or man v. machine, there is no need to put the players in the box. When Watson appeared on Jeopardy there was no need to put the contestants in a box, because the question was basically the same as in Jamal and Emily’s test; the point was not to ask Alex whether he thought all the contestants were human, or whether he could tell the difference between player 1 and player 3. So, yes, the issue is not the machine-or-human distinction, but from Gene’s example it seems he has missed the point of what a Turing Test is.

      In your test to see if you’re a better Krugman than he is a Murphy, you two are behind a screen because you don’t look like him and he doesn’t look like you. If you guys could change your appearance, then they would say “Paul doesn’t know as much about Keynesianism, and Bob doesn’t know as much about Austrianism, as we thought.” So there must be an objective way to test your theory irrespective of your appearance. All this test would do is confirm who was better able to fool their respective interrogators. Is this Gene’s point? Is subterfuge a meaningful test of intelligence?

  15. rob says:

    According to Wikipedia, the Turing test is a test designed to “test a machine’s ability to exhibit intelligent behavior”.

    Gene’s objection to the test is that AI advocates (whom he, apparently literally, sees as part of a new religion) go further and believe the test shows real, as opposed to merely exhibited, intelligence.

    It’s not clear what Gene’s view on what constitutes real intelligence is. He wavers from accepting that machines may think (see http://gene-callahan.blogspot.com/2015/02/a-stunning-misinterpreatation-of-point.html) to apparently stating, as in his most recent posts, that all intelligence in machines is derived from the humans who designed them (and that the machines therefore don’t really think?).

    I am not sure how we can bridge the gap between “exhibiting intelligent behavior” and “really being intelligent”. Our fellow humans may appear intelligent but might actually be very sophisticated, non-conscious robots.

    Whether or not we attribute “real intelligence” to things that exhibit intelligent behavior comes down to entirely subjective definitions and classifications.

    Gene seems on the verge of categorizing people who don’t accept his subjective definitions and classifications as bad people. His next posts on the topic should be interesting.

    • Craw says:

      Interesting — in the way train wrecks fascinate — post.
      I noticed this bit:
      Commenter: Then, thinking animals must have evolved from non-thinking animals.

      Gene: Completely unwarranted assumption.

    • guest says:

      “Our fellow humans may appear intelligent but might actually be very sophisticated but not conscious robots.”

      Irrelevant. The point is that there is a distinction to be made, regardless of our ability to perceive it.

      That is, “real intelligence” involves deliberate acts of analysis, rather than just passive output as an effect of a cause.

  16. Michael says:

    Actually, while Gene makes a small mistake (including outside help), after looking through several of the posts on his blog I think he’s more or less making a fine point. He seems to concede that a computer which can pass the test “appears” to be intelligent to the observer, or that the observer can’t distinguish.

    He’s not willing to concede the further point that this means the observer “must” admit that the computer is _actually_ intelligent. I think that’s a totally fair point and perspective, and I don’t see why it’s even controversial.

  17. Stephanie Miller says:

    Agreed on all fronts with Gene
