
Can we talk about the Michael LaCour falsified research debacle?


brown_eyed_girl


I even know of a very similar case! This person has a very long list of publications, but since she heads an office that oversees scientific work, she makes everyone who goes there to do part of their work, or to collaborate in any way with the Center, add her to the finished paper.

 

This is in a developing nation, though, and in a very young research area, so she is getting away with "publishing" papers she has never read and knows nothing about (even papers in fields outside her "expertise"). It is completely unethical, but nobody wants to say a word and look like a "trouble-maker", or be seen as someone "trying to make her look bad to take her job". Or worse, risk being blacklisted at one of the very few places to do science here.

 

Just want to say that in some fields, this may not be that shady at all. It's pretty common in my field for collaborators to be on papers with whose scientific content they had little involvement (i.e., they would not necessarily know if the lead author were fabricating results). For example, one group I worked in had a core team of people who spent 7-8 years designing, building parts for, launching, and retrieving the instruments on a telescope that flew in Antarctica (it was a telescope on a balloon). The agreement for anyone who wanted to use this data in any way was that everyone on the core team (about 15 people or so) must be on the author list (or at least have right of first refusal). So, many papers that came out of this experiment involved a lot of people who were not directly involved with the data analysis (but did dedicate 7-8 years of their careers to making the data possible). Of course, they are still ethically required to read over the papers that come out, but since the application of the data might be in a different field of expertise than their own background, and because they were not directly part of the analysis, an unscrupulous lead author could still trick the coauthors.


Replicability/reproducibility is becoming more talked about in psychology, yes, but not more prestigious - at least in my experience. APS, especially, has spent a lot of time discussing what steps we should take in replicating research, like the registration of data you mentioned, spunky. But there's no reward in it, and that's the rub. Early career scholars need publications in order to get TT jobs and replications are difficult to publish and to convincingly talk about in job talks. Tenure-track scholars need publications for tenure, and the same issues come up there. And tenured scholars aren't going to be spending their time on replicating experiments. Some of them are still interested in promotion, and some of them have to fund significant portions of their salaries on grants, and the NIH and NSF aren't funding huge replication grants most of the time.

 

I think LaCour's story is an interesting entry in the current rumbling conversations about academia and the need for reform in the field. I think LaCour lied because he's a lying liar, and the subsequent questions about his dissertation data, other papers he's published, and some awards and grants on his CV pretty much support that. But it does raise some questions about the pressures on young scholars, especially those at high-flying programs. I went to a top-10 program in my field, and the pressure and expectation are very much that you will get an elite R1 job just like the university you came from. It's so unrealistic and stupid - there aren't enough of those jobs to go around, and it's not like they really prepare you for that eventuality anyway. But it's also ridiculous what it takes to compete: we hired 3 assistant professors in the time I was in my department, and I got to see the CVs of the finalists they invited to campus (we typically invited 5 instead of 3; I don't know why). These people obviously never slept. As graduate students, postdocs, and assistant professors with less than two years of experience, they had 15-25 publications (that's a lot for my field at this early stage), grants, and teaching experience, plus awards. The one guy who had only 10 pubs (which is still a lot for a grad student) had a first-authored publication in Science. But they all had some splashy, sexy area of research. None of them were doing replications of other people's work, or anything close.

 

Most people, when confronted with that pressure, wouldn't completely make up a study and fake the data, so LaCour is on his own there. But when p = .06 means the difference between another first-authored publication and years of work wasted...yeah, I think a lot of people massage their data to get it down to p < .05 (which is generally the threshold for statistical significance, and by extension a publishable paper, in psychology). When you want the brass ring of a job at a top R1, or some new grant funding, or tenure - or all of those things - yeah, I think some shady things go down, and I think a large number of scientists probably do those shady things.
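
To make that threshold concrete, here is a minimal, purely illustrative sketch in Python (simulated numbers, not anyone's real data) of how fragile the p < .05 line can be: quietly dropping a couple of inconvenient observations can be enough to pull a borderline p-value under the cutoff. The group sizes, effect size, and "outlier" rule below are all made up for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated scores for a treatment group and a control group with a small
# true difference between them (invented purely for illustration).
treatment = rng.normal(loc=0.4, scale=1.0, size=40)
control = rng.normal(loc=0.0, scale=1.0, size=40)

# Two-sample t-test on the full data set.
_, p_full = stats.ttest_ind(treatment, control)
print(f"p-value with all observations: {p_full:.3f}")

# The "massage": quietly drop the two lowest treatment scores (labelled
# post hoc as "outliers") and re-run the same test.
trimmed = np.sort(treatment)[2:]
_, p_trimmed = stats.ttest_ind(trimmed, control)
print(f"p-value after dropping two low treatment scores: {p_trimmed:.3f}")
```

The exact numbers depend on the simulated draw, but the point stands: small, undisclosed exclusions can move a result across an arbitrary cutoff, which is why pre-registration and banked data matter.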

 

I don’t think researchers have a duty to verify the papers we cite. First of all, that’s an enormous undertaking - how could I ever? You have to trust that the majority of people are telling the truth (mostly) and that the journals have done their job in peer review. Even in peer review, reviewers aren’t paid - so it’s not like they have time to re-run study results. Journals have to take it on faith that authors are not making up their data and analyses from whole cloth, until we get to the point that we’re banking data on a regular basis. Collaborators are a different story, though. If you’re going to put your name on a paper, you should verify that the results in the paper are correct and valid. That’s why I have disdain for this famous Columbia professor who’s trying to distance himself from the whole thing and put the blame on LaCour. Yes, LaCour bears the most responsibility, but each author on a paper is responsible for the paper as a whole.

 

Would I turn in a fellow grad student? It depends on the extent of my knowledge and what they were doing. If I knew for a fact that they were making up data and I could prove it, and we worked for the same PI or they worked for a PI I felt comfortable with, then yes, I might say something. Otherwise…probably not.


This was a really interesting and worrisome story to me because of how easy it is to alter and manipulate data before submitting the research to others or for publication. It reminds me of food-industry research; specifically, research I reviewed for an Ethics and Food Security course that involved GMOs. In my experience, a study comes out and it is either accepted or questioned, and if it's questioned, other authors may write a piece calling it out or attempt to replicate or disprove it, but with GMO research it's different. Instead of replicating or otherwise professionally critiquing the work, the work gets slandered and the authors associated with it are questioned based on their personal character and professional affiliations. I'm not native to food-industry research, but my limited exposure has left me a little discouraged by how some researchers operate. An easy example can be found here, about a retracted study:

 

http://www.geneticliteracyproject.org/2014/06/24/scientists-react-to-republished-seralini-maize-rat-study/

 

In criminology and criminal justice research, compared with what other posters have described, there is considerable replication of research. The theories of crime causation that are most widely trusted and accepted are those that have been replicated several times in different ways, and principles like "Crime Prevention Through Environmental Design" are semi-proven with dozens of research-based anecdotes and evaluation studies. My personal favorite means of judging the soundness of a body of research is meta-analysis. If you want to know whether crime displaces spatially when an intervention occurs, few things can answer that better than a meta-analysis showing the results of 102 displacement and diffusion-of-benefits studies (thank you, Guerette & Bowers, 2009).
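
For anyone curious what the pooling behind a meta-analysis actually looks like, here is a minimal sketch of the standard inverse-variance (fixed-effect) calculation in Python. The effect sizes and standard errors are invented for illustration; they are not taken from Guerette & Bowers (2009).

```python
import numpy as np

# Hypothetical per-study effect sizes and their standard errors
# (made-up numbers, purely for illustration).
effects = np.array([0.12, -0.05, 0.30, 0.08, 0.18])
std_errors = np.array([0.10, 0.15, 0.20, 0.08, 0.12])

# Inverse-variance weights: more precise studies count for more.
weights = 1.0 / std_errors**2

# Fixed-effect pooled estimate and its standard error.
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```

The appeal is exactly what the post describes: no single study (honest or not) can dominate the pooled estimate, because each one is weighted by the precision of its evidence.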

 

To me it also seems like some amount of blame has been placed on the reviewers of LaCour & Green's work. Personally that seems odd to me, as my understanding of the review process is that reviewers are mainly there to assure the journal that the study makes a substantive contribution to the field, that its statistical conclusions are sound, that it accurately and appropriately cites other work, and that it is written competently. It isn't the reviewers' job to review the original data sets that the study drew on or to do the investigative work that Broockman conducted to discern that the study was a fake. Am I wrong here?


To me it also seems like some amount of blame has been placed on the reviewers of LaCour & Green's work. Personally that seems odd to me, as my understanding of the review process is that reviewers are mainly there to assure the journal that the study makes a substantive contribution to the field, that its statistical conclusions are sound, that it accurately and appropriately cites other work, and that it is written competently. It isn't the reviewers' job to review the original data sets that the study drew on or to do the investigative work that Broockman conducted to discern that the study was a fake. Am I wrong here?

 

In my field, the reviewers' duties would include ensuring that the statements made by the authors are scientifically sound (not just statistically sound). I agree that they should not be expected to reproduce the work themselves, but I do think it is the reviewers' duty to ensure that every conclusion drawn by the authors logically and scientifically follows from the stated assumptions and premises, and that those assumptions are reasonable ones to make.


Sword_Saint, what you said about crime research is really interesting in light of the ongoing Chronicle commentary about the sociological work of Alice Goffman on young felons: http://chronicle.com/article/Conflict-Over-Sociologist-s/230883/

 

Thanks for linking to that; I had heard about it in passing but never read into Ms. Goffman's situation.

 

During my undergraduate degree I was captivated by ethnographic research, especially works like Code of the Street (1999), Dealing Crack: The Social World of Street-Corner Selling (1999), and Random Family: Love, Drugs, Trouble and Coming of Age in the Bronx (2003). It's wholly different from the quantitative work I was actually trained in, and its uniqueness, how much light it can shed on situations that quantitative analysis just can't, is something that appeals to me. It's a shame how difficult it is to conduct that kind of research because of the "publish or perish" atmosphere, and it's also a shame how little ethnography is valued by some, as though it were 'less' than quantitative research.


In my field, the reviewers' duties would include ensuring that the statements made by the authors are scientifically sound (not just statistically sound). I agree that they should not be expected to reproduce the work themselves, but I do think it is the reviewers' duty to ensure that every conclusion drawn by the authors logically and scientifically follows from the stated assumptions and premises, and that those assumptions are reasonable ones to make.

 

Which is exactly how it's supposed to be done, but it's impossible to catch someone who is deliberately fudging their numbers. It's even harder to sniff out someone who is throwing out a few samples here and there to make their findings significant (which I'm sure a lot of people are guilty of). If it gets to the point where one can't even trust someone's raw data, the whole system will come crashing down. A big problem with a lot of science is that some of it is nearly impossible to replicate (unless conditions are perfect and you are using the exact same samples/reagents), can cost huge sums to replicate, or simply takes way too much time (scientifically speaking) for a replication study. On top of all of this, it's nearly impossible to get replication funded.

