24 Feb 2020

Bob Murphy Show ep. 102: A Deeper Analysis of the Grievance Studies Hoax


Something never quite sat right with me when this story broke in late 2018, and I finally spelled out my concerns. To be clear, I’m not saying the three people shouldn’t have done the hoax; I’m just trying to isolate exactly what it was supposed to demonstrate.

7 Responses to “Bob Murphy Show ep. 102: A Deeper Analysis of the Grievance Studies Hoax”

  1. Harold says:

    I will listen, but have not yet done so, so my comments on the hoaxes come without that benefit.

    The accepted papers do, without doubt, reflect badly on the journals, but it is not *quite* as bad as it may initially appear. One problem is that the papers would have had to be retracted even if they were promoting a mundane aspect of physics or chemistry, because the data were fabricated: the supposed interviews and experiments never happened. I could perpetrate a similar hoax on a chemistry journal by inventing reasonable-looking experiments, say on the boiling points of solvent mixtures showing a slightly surprising inflection. If I made up all the data, the paper would have to be retracted once that was discovered.

    The authors say, “Having spent a year doing this work ourselves, we understand why this fatally flawed research is attractive.” Yet they did not do this work; they made up material that they thought looked as if they had. They compare their papers with “White Fragility,” a 2011 paper by DiAngelo. There really is no comparison.

    Let’s take a closer look at one paper. The “Hooters” premise seems entirely plausible: that men frequent “breastaurants” like Hooters because they are nostalgic for patriarchal dominance and enjoy being able to order attractive women around, and that the environment breastaurants provide for facilitating this encourages men to identify sexual objectification and sexual conquest, along with masculine toughness and male dominance, with “authentic masculinity.”

    Alternatively, they may just find being around attractive women pleasant, with no desire for ersatz sexual conquest, male dominance, masculine toughness, or “authentic masculinity.”

    The fictional study is flawed, since it relies on one person interpreting all the data, so bias cannot be ruled out, and it involves only a few subjects. The author points this out in the Limitations section. There is also a huge flaw in the lack of a control: there was no non-Hooters restaurant with which to compare. The conversations of a group of people from a Ju Jitsu club, of which the author was supposedly a member, were recorded over a two-year period, with consent supposedly obtained from all participants and from the restaurant management. The author claimed to have listened to the recordings at least three times and to have coded the comments into categories according to methods apparently described in the literature.

    He picked out several themes, including sexual objectification, sexual conquest, male dominance and control over women, and masculine toughness. The big problem is that after that elaborate set-up, all he presents is essentially anecdotal material that could have come from a few casual visits. The data analysis is almost nonexistent, consisting of labels like “ubiquitous, all men,” “frequent, most men,” or “common, a few men.”

    The data analysis does seem poor enough that the reviewers should have demanded much more detail. He describes the methods he apparently used, then presents nothing based on them. He comments that, since he knows these men from the Ju Jitsu club, the remarks are specific to the restaurant, yet there is no analysis of conversations elsewhere, just an anecdotal observation.
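    To make concrete how thin those labels are, here is a minimal sketch (Python, with entirely invented speakers, comments, and band cutoffs, since the paper supplies none) of the bare-minimum tally the described coding methods should have produced: a count of how many speakers voiced each theme, rather than a bare adjective.

        # Minimal sketch with hypothetical data: each recorded comment is
        # coded as a (speaker, theme) pair, then we report what share of
        # speakers voiced each theme instead of a label like "frequent".
        from collections import defaultdict

        coded_comments = [
            ("A", "sexual objectification"), ("B", "sexual objectification"),
            ("C", "sexual objectification"), ("A", "male dominance"),
            ("B", "masculine toughness"),
        ]
        speakers = {s for s, _ in coded_comments}

        speakers_by_theme = defaultdict(set)
        for speaker, theme in coded_comments:
            speakers_by_theme[theme].add(speaker)

        for theme, voiced in sorted(speakers_by_theme.items()):
            share = len(voiced) / len(speakers)
            # Invented cutoffs; the paper never says which proportions
            # correspond to "ubiquitous", "frequent", or "common".
            band = ("ubiquitous" if share == 1
                    else "frequent" if share >= 0.5
                    else "common")
            print(f"{theme}: {len(voiced)}/{len(speakers)} speakers "
                  f"({share:.0%}) -> {band}")

    Even a trivial tally like this would have given reviewers something to check; the paper offers nothing of the kind.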

    So yes, there does seem to be a failure of review here. In the reviewers’ defense, they believed these data were based on painstaking analysis of two years of recordings, and so gave some credence to the reported frequencies. An apparently good introduction (I do not know the field, so I cannot judge whether it really is good) seems to have blinded them to the flaws.

    The purpose of the hoax was “To see if journals will publish papers that seek to problematize heterosexual men’s attraction to women…”

    Well, the paper problematized men’s sexual objectification of women and their desire to control women, rather than men’s attraction to women, so by that measure it failed in its purpose. You could argue that of course these restaurants do this, so the conclusions of the paper are obvious. Yet many apparently obvious things turn out to be wrong when studied. Alternatively, you could argue that sexually objectifying women is not a problem.

    In conclusion, the hoaxes do reveal a problem in this field. They also fail (at least sometimes) to demonstrate what their authors think they demonstrate.

  2. Tel says:

    I doubt there’s anyone who deeply believes that the “peer review” process creates truth … although some will use it as a glib proxy for truth, when they can’t be bothered looking closely at an issue.

    For starters, many of the great scientific discoveries of history never went through anything like the modern “peer review” process; it is a thing invented by journals for the purpose of narrowing down what they will publish, thereby making the journal more interesting and gradually increasing its prestige. More importantly, the real review happens AFTER publication, when anyone makes the effort to reproduce the work or build on it in some way. For example, if an interesting scientific discovery gets published, engineers will probably think about ways to make a product out of it. Should the engineers be unable to get it to work, word will get around that it is basically junk. This is what happened with Cold Fusion: everyone wanted to build one, but then no one could figure out how to make it work. There was a flurry, and then they all gave up.

    So here’s one point about the “Grievance Studies” subjects: they don’t have any practical application other than political activism. You can’t build a better mousetrap out of Feminism. Therefore no one bothers even to attempt to reproduce a result, and basically no one cares what the result is, provided they have fodder for their political activism. These articles are intended to be as opaque as possible so that they can be cited with impunity, in order to show you have rigorous academic backing for whatever special pleading you care to attempt. Made-up data is perfectly fine, until someone reveals that it was made up.

    There’s another point, though: if the “Grievance Studies” subjects openly called themselves what they really are, they would suddenly be worthless as political fodder. Therefore they must at all times maintain the illusion that they are studying one thing while actually studying something else. This makes them intrinsically different from mathematics, which ultimately is what it is. Even when a mathematical proof is wrong, it is wrong in a way that is open and traceable. The point about “Grievance Studies” is that serving the political narrative subsumes the very concepts of rightness and wrongness, which ultimately become irrelevant.

    • Anonymous says:

      “I doubt there’s anyone who deeply believes that the ‘peer review’ process creates truth…” I agree; it does not create truth. It is only part of the process. The rest, as you say, is also very important. False ideas eventually get rejected.

      The idea of peer review is not to ensure that everything published is correct. It is to set a bar over which papers must pass. Published papers should display an understanding of the field, as shown in the introduction, and set the findings of the paper in a proper context. Someone pushing a narrow idea that is not supported by the literature should get picked up in review. Review should also ensure that the conclusions are warranted by the data and that the correct statistical methods have been used. It cannot generally detect fraud; that comes later, as other researchers fail to replicate.

      It is likely that bad or wrong papers will eventually be found out, or simply ignored. The more important the paper, the more attention it will attract and the faster it will be confirmed or rejected. Papers may erroneously be measuring noise, or interference from an unknown source. That does not make them fraudulent; it just means the authors did not know what they were measuring. This is why papers use language like “appears that” and “might be.”

      Cold fusion is a good example. There was nothing wrong with the paper per se. The problem was the press conference announcing a revolutionary breakthrough, which was premature, to say the least. The paper has not been retracted, and maybe the findings do advance our understanding somewhat; maybe something is going on, although not in the way Pons and Fleischmann thought.

      The difference you describe between “hard” science and sociology/psychology is a big problem for the latter. The hard sciences do tend fairly quickly to find out when something important is wrong; less so the softer areas of study, hence the replication problem. We do need to treat these conclusions with skepticism. The difference is also one of kind. Philosophy, for example, has many different interpretations. It is not so much about describing the world as it is, as with the hard sciences, as about trying to determine how we should deal with the world. Philosophical arguments are not necessarily right or wrong, but we can agree or disagree with them.

      Early peer review was by the editor of the journal, who could be expected to be sort-of abreast of the entire field. That is not possible now.

  3. Harold says:

    Sorry, I forgot to add my name to the last comment, which I assume will appear shortly under “Anonymous.”

  4. Harold says:

    Listened to it now. We more or less came to the same conclusion on this one!

    Around the 15-minute mark: I did not know about the pushback over the human-subjects research. They did perform an experiment on the editors and referees. The prof. has to do some training, but we do not know what that entails. I don’t really know whether this is a reasonable response.

    There has been similar pushback against medical researchers who sent fake applications with made-up, foreign-sounding names to demonstrate racial bias.

  5. Paul says:

    You may want to review the interviews given at the time of the public release. IIRC, they were all on Rogan, and the authors discussed the following, which presents problems for some of your analysis. Note that this is from memory of watching the interview when it was published.

    The authors stated they were motivated at least in part by journals that were attempting to move into disciplines they had no business being in. Again from memory, I believe they said it was humanities journals moving into social-science territory. The authors were reading articles in the humanities journals that had clear scientific problems, but their belief was that the editors of those journals were not equipped to discover the problems in the papers.

    They went on to create papers specifically designed to test this inadequacy. I would expect real researchers to spend a lot of time constructing such tests, because they are testing other academics and so would need to be certain they had their ducks in a row.

    In the interviews they stated that the papers should have been very easy to spot as fakes by simple evaluation of the studies. If you looked at the dog study and analyzed the actual numbers, you would immediately conclude there is no way for the researchers to have actually performed that study. Your point that even the titles were conspicuous seems to support this.

    Perhaps an appropriate analogy is not an economist submitting an economics paper to an Austrian Economics journal, but rather an Austrian Economics journal publishing chemistry articles and calling them economics.
