
Can we talk about the Michael LaCour falsified research debacle?


brown_eyed_girl


I assumed someone would have already started a thread about this, but since I don't see one -- can we just talk about the Michael LaCour falsified research debacle for a minute? Is anyone else as riveted and shocked by this case as I am? For anyone who may not have heard, this is the case I am referencing, though more and more details have come out: 

 

http://nymag.com/scienceofus/2015/05/how-a-grad-student-uncovered-a-huge-fraud.html?mid=fb-share-scienceofus

 

In short, a star PhD student at UCLA, whose study was considered groundbreaking and published in Science, turns out to have falsified much of the research in pretty major ways - from making up hundreds of thousands of dollars in grant money that never existed to beefing up his CV with nonexistent awards. 

 

This case raises so many questions for me. Why would someone falsify their research to this extent after investing so much time in their PhD? Didn't he know he'd get caught? On the other hand, are there many more like LaCour who haven't been caught, given how many people tried to talk the whistleblower out of saying anything, even as his proof became undeniable? What is your duty as a researcher to verify the papers you cite and the research of collaborators? Would you turn in a fellow grad student whose research methods were extremely suspect, despite warnings from mentors and colleagues that you had nothing to gain and it would be safer to say nothing?

 

Thoughts, anyone?


Well done to Dr. Green for falling on the sword and fairly gracefully taking responsibility for his lapse.

 

However, this is the part that haunts me the most:

 

In fact, throughout the entire process, until the very last moment when multiple “smoking guns” finally appeared, Broockman was consistently told by friends and advisers to keep quiet about his concerns lest he earn a reputation as a troublemaker, or —  perhaps worse — someone who merely replicates and investigates others’ research rather than plant a flag of his own.

The CHE thread on this is well worth reading: around 15 pages, with lots of interesting details and opinions.

 

I'm betting UCLA's Pol Sci department will have a very hard time recovering from this as a whole, and the IRB issues are going to haunt the school as well.

 

Also worth noting that there are allegations that at least one other of his recent papers used falsified data. 

 

http://polisci.emory.edu/faculty/gjmart2/papers/lacour_2014_comment.pdf


 Is anyone else as riveted and shocked by this case as I am? 

Riveting, sure; shocking? Less so. The guy made it to Princeton and almost got away with it! Reinhart and Rogoff were also exposed for errors in their work at Harvard on the role of austerity in navigating the financial crash. Like the banks their ideas ended up protecting, it turned out they too were too big to fail: they never recanted, never exhibited remorse, never even expressed regret for the errors which, once exposed, rendered their conclusions no more reliable than chance. The entire Eurozone justified punitive policies on the back of their bungled calculations, and when the mistakes surfaced neither they nor Harvard seemed to care. It seems, then, that the only time you actually get punished for dishonesty and being disingenuous is when you're the little guy. Marc Hauser, the evolutionary biologist, was eventually forced out of Harvard for falsifying experimental data, but only after a protracted period of inquisition.

Given the pressures in academia, the catastrophic financial and personal consequences of failing and, of course, the lucrative pay and perks that can follow a tenured gig at an Ivy or somewhere similar, sometimes on the strength of a single paper, is it any wonder that struggling grad students occasionally give in to the temptation to manufacture the "perfect" result, fabricating the answers advisors are only too happy to accept? In short, I'd absolutely report a fellow grad student who was cheating and fabricating results. I'd see it more as a duty than a conflict: they are sullying the discipline and the hard work of everyone else! How can you possibly tolerate the corruption of a field that you, and others, are honestly working hard to further? I'd have a long, hard talk with them first, though, and try to get them to own up to their misconduct. But the deeper problem lies in a culture that pushes otherwise decent people to the point of cheating and risking everything for a chance to succeed. This is compounded by a lack of accountability for the big stars of academia, who are seemingly permitted to err or mismanage while their lessers are dismissed, have contracts terminated, or are permanently barred for the same transgressions.

Edited by CiaranD

The CHE thread on this is well worth reading: around 15 pages, with lots of interesting details and opinions.

 

That thread is here: https://chronicle.com/forums/index.php/topic,183545.0.html

 

My favorite quote so far:

 

"They are going to throw him under the bus? There's no need. He might as well be lying prone in the bus lane of the transit authority lot holding a giant "Here I am! Please come and run over me!" sign that he wrote in his finest calligraphic hand using blood-red fountain pen ink that he had custom-mixed for the purpose.

 
Heck, at the rate he's going in trying to clean up after himself, he's all but chasing down any bus that misses so he can throw himself on the ground in front of it again."

I wonder whether he falsified his dissertation data as well. If this paper was a portion of it, he might lose his PhD over this. Given that he blatantly falsified his CV and this major paper, it seems likely there is falsification in his dissertation too. I'm curious to see what UCLA will do.


I keep thinking that in this “publish-or-perish” environment we live in, with ever-growing pressure to obtain grants and fellowships and to publish in prestigious journals, how can anyone be surprised at all that this is happening? If people feel pressure to publish novel, surprising findings, each more impressive than the last, without any room to make mistakes and still keep their funding, some are going to end up cheating. That’s just human nature.

 

Over here in Psychology we had our own scandal back in 2011 with Diederik Stapel (details here: http://en.wikipedia.org/wiki/Diederik_Stapel), which was pretty much the same thing: a rockstar social psych prof/researcher was found to have been faking his data for a while. This threw Psychology into what is usually called the “crisis of replicability,” in which some of the dirty little secrets of many social sciences are being exposed (the one that pertains to my area is just how ridiculously bad we can be at data analysis), but I think the broader point is that the way we do science over here in social-science land is just not sustainable.
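To make "ridiculously bad data analysis" concrete, here's a toy Python sketch, entirely my own illustration and nothing to do with the Stapel or LaCour data. It simulates null experiments where both groups come from the same distribution, then compares one pre-registered test against a "flexible" analysis that measures ten outcomes and reports if any of them comes out significant:

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def null_experiment(n=30):
    """Two groups drawn from the SAME distribution; return True if a
    naive two-sample test looks 'significant' (difference in means
    beyond ~1.96 standard errors)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = ((sample_var(a) + sample_var(b)) / n) ** 0.5
    return abs(mean(a) - mean(b)) > 1.96 * se

trials = 2000

# One pre-registered outcome: false positives near the nominal 5%.
fp_single = sum(null_experiment() for _ in range(trials)) / trials

# "Flexible" analysis: ten outcomes, report if ANY looks significant.
fp_fishing = sum(
    any(null_experiment() for _ in range(10)) for _ in range(trials)
) / trials

print(f"one planned test:     ~{fp_single:.1%} false positives")
print(f"best of ten outcomes: ~{fp_fishing:.1%} false positives")
```

With ten chances the false-positive rate climbs toward 1 − 0.95¹⁰ ≈ 40%, even though every "effect" is pure noise. No outright fraud required, which is part of why the replicability crisis goes well beyond the headline fabrication cases.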

 

In my years as a graduate student, I would be willing to bet my brownies that at least two students whom I helped with the methodology of their dissertations must have "embellished" their data because they needed to finish before their funding ran out. I can't prove it, but I know in my gut they did something before defending.

Edited by spunky

LaCour reminds me of a guy in my undergrad class. Back in our 2nd year, one of our professors (actually a lecturer then) made us write about a topic of our interest in groups of 4-5, and then present it. So this guy in my friend's group never showed up when they wrote their group assignment paper, nor when they discussed the presentation. Then on the day of our presentations, he comes in looking all dashing and well-groomed and stuff, goes through their paper and the presentation slides, and gives the most magnetic and charismatic presentation of the part that my friend's group assigned to him, scoring the highest marks in their group. The guy knew how to present any given material.

 

The reason I mention this is that LaCour, too, seems to have an ability to persuade you to believe in his data through his presentations. Usually you'd expect a person with falsified results to be unable to present their data convincingly, and to be weighed down by some internal sense of guilt. You don't usually see people so eloquently showing off their empty/corrupted containers. I found this part of his character important. Sounds like a good politician.


 

The reason I mention this is that LaCour, too, seems to have an ability to persuade you to believe in his data through his presentations. Usually you'd expect a person with falsified results to be unable to present their data convincingly, and to be weighed down by some internal sense of guilt. You don't usually see people so eloquently showing off their empty/corrupted containers. I found this part of his character important. Sounds like a good politician.

 

This is exactly correct


Sounds like a good politician.

 

 

This is exactly correct

 

I've been thinking the same thing! His career in academia is likely over, but I have a feeling this isn't the last we'll hear of Michael LaCour... I predict a book tour following publication of his "confessional" memoir, followed by a slick political campaign for something like a state senate seat, some sort of meteoric rise toward the top, and then the undoing of his political career in a scandal of mind-boggling audacity. After that... the pattern repeats itself, only this time with a (new) new career in punditry.

 

I grant that the specifics will likely vary from this prediction, but if Michael LaCour doesn't claw his way back to (relative) fame and fortune in one way or another...well, then my (user)name isn't an oblique reference to Duane Allman's astonishing virtuosity as captured by a certain 13-minute masterpiece recorded live in 1971!


LaCour posted his rebuttal on his website on June 1. Instead of an open letter of apology to the public, his essay seems to be in pure LaCour fashion (based on what I know of him). He briefly apologizes for misrepresenting the data and funding without actually apologizing; it was more or less an "Oops, sorry about that." He then goes on to point fingers at Broockman and Green, followed by an attempt to show that the data really does support his claim. In essence, it is a rebuttal meant to lend credence to his name.

 

People like LaCour are often rewarded. He knows how to work people, which will ultimately translate into a very lucrative career for him. Master manipulators are very valuable. Besides, all he has to say is, "You know, academia is a dog-eat-dog world. I was under tremendous pressure to produce, to stay afloat, to remain competitive." And then, boom, he sounds like a normal human being and all is forgiven.

 

We are talking about a field [academia/academic research] where one's livelihood depends on funding, which depends on publishing, which depends on data, which depends on funding. This is also a field where your name is everything and one false move [cooked data] can be the end of your career... unless you are a rockstar or associated with one. It's a cutthroat scene with everyone competing for the same prize. I'd bet more papers are cooked than we know of. For STEM this issue is not new, as it is known that some researchers alter data just to keep the funding coming in, though I am not sure how widespread this is in academia, as my only firsthand knowledge of it happening is in industry.

 

I think the biggest issue here is that academia does not seem too concerned with how this reflects on public opinion, which in my opinion shows just how disconnected academia is from the rest of the world. Think about it: I'm not sure about the rest of the world, but in America the public is already wary of academia, and pundits are going to have a field day over a political science student who altered data in support of "gay marriage." Yet instead of protecting the integrity of the institution, the concern seems to be protecting one's own image. I don't know which is worse.

Edited by Crucial BBQ

There's a nice piece in the Chronicle about what to do in the wake of this: http://chronicle.com/article/What-Social-Science-Can-Learn/230645/

 

It's interesting to me that there's so much talk about replication.

 

So why don’t more researchers replicate? Because replication isn’t sexy. Our professional incentives are to come up with novel ideas and data, not confirm other people’s prior work. Replication is the yeoman’s work of social science. It is time-consuming, it is frustrating, and it does not gain any accolades for your CV. Worse, critics of students' doing replications state that they are amateurs, or that they may jeopardize their reputations by starting their scientific careers as "error hunters." The LaCour scandal shows that critics could not be more wrong. Scientific knowledge is built on the edifice of prior work. Before we get to a stage where we need more new ideas, we need to have a better sense of what works given the data.

 

As someone who works more in the realm of qualitative social science, replication isn't even something that often gets talked about. It's hard to replicate qualitative studies, since it can be difficult to find the exact same people who were surveyed or interviewed, and because people's attitudes, beliefs, and behaviors change over time. That is, I could go to what was a rural area 25 years ago and try to redo someone's study but get totally different results because of things like the internet, cell phones, and suburbanization/exurbanization leaving rural areas less isolated. But the author does have a point that grad students are often told not to waste their time trying to replicate findings, especially since that isn't what gets one slick publications or a TT job...


Providing a perspective from the sciences on replication: we are also directly and indirectly discouraged from studies that merely replicate other work. Grant proposals are always evaluated on novelty and groundbreaking potential, never on confirming old work. This is unfortunate, because in many fields (including my own), decades of work have followed from a result in an old study that was later proven wrong.

 

It's a little weird, because in intro science courses we are trained to "be skeptical" and "question everything". In practice, though, this is not easy, since everyone is under pressure to win more grants (for novel research) and journals don't publish replications.

 

Fortunately, some of the work in my field has been replicated, but usually either accidentally (multiple groups working on the same thing independently and finding the same result) or through work specifically pitched to the granting agency as a novel improvement on existing work. I really think it's harmful to our field that our major granting agencies do not want to fund studies that are purely meant to replicate important results.


Coming from the bench sciences, there's a lot of replication. 

 

It's not direct replication of the entirety of someone else's study, but there's a lot of procedural/comparative replication. 

 

If you're building off of (or comparing to) someone else's work, you would usually do a part (or all) of what they did, and then show why yours is better/worse. 

 

Same with synthetic work- you don't get credit for designing, say, a base molecule that someone else has already made, but there's a good chance a number of other people will make it on the way to something else, or to use for something else. 

 

It's not perfect, but errors do get caught quite frequently because of it, especially things that turn out to be environmental and were never considered as a cause. There's a famous case of a synthesis that would only work with the tap water at one particular university: they happened to have copper in the pipes, which was leaching into the water. Once that was discovered, trace amounts of copper were added everywhere else, and the synthesis became replicable.

 

Biological work is harder to replicate, but molecular biology as a field is trying really hard to standardize- typing cell lines so direct comparisons can be run between different labs around the world, requiring (or strongly suggesting) profiles of cells used as well as typing data to ensure apples to apples in those comparisons.

 

Not all data gets replicated, and some results are pretty damn hard or impossible to replicate exactly, but for the more "interesting" or groundbreaking work (like LaCour's, which challenged all currently supported theories), you can be sure that dozens of groups around the world will try to duplicate your results within months of publication.


Coming from the bench sciences, there's a lot of replication. 

 

It's not direct replication of the entirety of someone else's study, but there's a lot of procedural/comparative replication. 

 

If you're building off of (or comparing to) someone else's work, you would usually do a part (or all) of what they did, and then show why yours is better/worse. 

 

Same with synthetic work- you don't get credit for designing, say, a base molecule that someone else has already made, but there's a good chance a number of other people will make it on the way to something else, or to use for something else. 

 

It's not perfect, but errors do get caught quite frequently because of it, especially things that turn out to be environmental and were never considered as a cause. There's a famous case of a synthesis that would only work with the tap water at one particular university: they happened to have copper in the pipes, which was leaching into the water. Once that was discovered, trace amounts of copper were added everywhere else, and the synthesis became replicable.

 

I should clarify: this happens in my field too. However, as you mention, the replication happens because replicating the procedure occurs "on the way" to doing another, more extensive study. It's almost impossible to get telescope time or grant funding simply to reproduce a previous work, full stop, unless, as you also say, it's something very controversial or very interesting (e.g., the study a few years ago that claimed to have measured neutrinos traveling faster than light).

 

Because of this, results that don't lend themselves to a convincing grant proposal to extend the work or compare against existing results don't usually get a replication test. Similarly, if there is currently no interest in extending or comparing a previous result, such research activities are discouraged and not funded.


LaCour posted his rebuttal on his website on June 1. Instead of an open letter of apology to the public, his essay seems to be in pure LaCour fashion (based on what I know of him). He briefly apologizes for misrepresenting the data and funding without actually apologizing; it was more or less an "Oops, sorry about that." He then goes on to point fingers at Broockman and Green, followed by an attempt to show that the data really does support his claim. In essence, it is a rebuttal meant to lend credence to his name.

 

People like LaCour are often rewarded. He knows how to work people, which will ultimately translate into a very lucrative career for him. Master manipulators are very valuable. Besides, all he has to say is, "You know, academia is a dog-eat-dog world. I was under tremendous pressure to produce, to stay afloat, to remain competitive." And then, boom, he sounds like a normal human being and all is forgiven.

 

We are talking about a field [academia/academic research] where one's livelihood depends on funding, which depends on publishing, which depends on data, which depends on funding. This is also a field where your name is everything and one false move [cooked data] can be the end of your career... unless you are a rockstar or associated with one. It's a cutthroat scene with everyone competing for the same prize. I'd bet more papers are cooked than we know of. For STEM this issue is not new, as it is known that some researchers alter data just to keep the funding coming in, though I am not sure how widespread this is in academia, as my only firsthand knowledge of it happening is in industry.

 

I think the biggest issue here is that academia does not seem too concerned with how this reflects on public opinion, which in my opinion shows just how disconnected academia is from the rest of the world. Think about it: I'm not sure about the rest of the world, but in America the public is already wary of academia, and pundits are going to have a field day over a political science student who altered data in support of "gay marriage." Yet instead of protecting the integrity of the institution, the concern seems to be protecting one's own image. I don't know which is worse.

 

Link to his response for anyone interested: (dropbox pdf).

 

 

The CHE thread on this is well worth reading: around 15 pages, with lots of interesting details and opinions.

 

I'm betting UCLA's Pol Sci department will have a very hard time recovering from this as a whole, and the IRB issues are going to haunt the school as well.

 

Also worth noting that there are allegations that at least one other of his recent papers used falsified data. 

 

http://polisci.emory.edu/faculty/gjmart2/papers/lacour_2014_comment.pdf

 

Interview with the PoliSci professor from Emory University by Daily Bruin: (link).

 

Edit: Bonus: Similar fraud in cancer research not too long back. http://www.theverge.com/2015/6/9/8749841/science-frauds-potti-lacour

Edited by shinigamiasuka

 

Biological work is harder to replicate, but molecular biology as a field is trying really hard to standardize- typing cell lines so direct comparisons can be run between different labs around the world, requiring (or strongly suggesting) profiles of cells used as well as typing data to ensure apples to apples in those comparisons.

 

What I have seen as common is cross-cultural collaboration in biology. That is, someone interested in the same organism or process comes in from another lab/hospital/university, sometimes from another country, to collaborate on research for six months or so. Then at the end they go their separate ways and publish their own respective papers.

 

I worked on two projects as an undergrad where both PIs were collaborating with researchers at other universities; interestingly, they were also competing with each other to be the first to publish the big finding (and on one project, competing against research hospitals and others in industry). With so many people sharing information, it would seem hard to pull the wool over anyone's eyes.


Oh man, this is super interesting and about ethnographic research in sociology: http://chronicle.com/article/Conflict-Over-Sociologists/230883/

The comments section is particularly... interesting.

 

As someone who has done ethnographic work, I totally understand destroying one's fieldnotes. I also understand being reluctant to share them. Still, this is an interesting case, because it isn't threatening to destroy Goffman's career (at least not right now) but it is raising questions about how qualitative researchers do their work.


I agree with Spunky: with the "publish or perish" motto in academia, no wonder retractionwatch.com is so busy!

 

I even know of a very similar case firsthand! This person has a very long list of publications, but since she is the head of an office related to scientific work, she makes everybody who goes there to do part of a project, or to collaborate with the Center in any way, add her to the finished paper.

 

This is in a developing nation, though, and in a very incipient field, so she is totally getting away with "publishing" papers that she has not even read or has no clue what they are about (even papers outside her "expertise"). Completely unethical, but nobody wants to say a word and look like a "troublemaker," or be seen as somebody "trying to make her look bad to take her job." Or worse, be blacklisted at one of the very few places to do science here.

Edited by Crafter

There's a nice piece in the Chronicle about what to do in the wake of this: http://chronicle.com/article/What-Social-Science-Can-Learn/230645/

 

It's interesting to me that there's so much talk about replication.

 

As someone who works more in the realm of qualitative social science, replication isn't even something that often gets talked about. It's hard to replicate qualitative studies, since it can be difficult to find the exact same people who were surveyed or interviewed, and because people's attitudes, beliefs, and behaviors change over time. That is, I could go to what was a rural area 25 years ago and try to redo someone's study but get totally different results because of things like the internet, cell phones, and suburbanization/exurbanization leaving rural areas less isolated. But the author does have a point that grad students are often told not to waste their time trying to replicate findings, especially since that isn't what gets one slick publications or a TT job...

 

I honestly feel that replication wasn’t sexy a few years ago, but it is becoming increasingly popular and very much in demand in certain areas of the social sciences.

 

Last year, for instance (link: http://www.apa.org/monitor/2014/09/results.aspx), the American Psychological Association (APA) reported on a $10 million grant given to the Center for Open Science, which focuses heavily on replication research. The Association for Psychological Science has its own Registered Replication Report initiative (https://www.psychologicalscience.org/index.php/replication), where people are encouraged to follow up on well-known psychological studies just to make sure the effects they claim to exist are reproducible. I mentioned the Diederik Stapel debacle before because, if Political Science decides to follow in Psychology’s footsteps and learn from its mistakes, it may become a lot more interested in replication studies. A lot of what is going on is still in its infancy, and many of the measures being taken may be somewhat misdirected, but at least more and more people are becoming aware of this issue and of its importance for furthering our field as a science. It’s also indirectly making a lot of people very interested in statistics, so that always puts a smile on my face :)

 

Stapel’s data fraud (which is remarkably similar to what happened with LaCour) has forced social psychologists (and Psychology in general) to take a very honest look in the mirror and come to terms with the fact that they haven’t been as careful as they should have been in how they conduct and publish their research. Things like the file-drawer effect in meta-analysis, questionable statistics, and transparency in data sharing are becoming more and more important in the eyes of journal editors and the community of scientific psychology in general. I find it a little ironic (and sad) that each area of the social sciences seems to need its own personalized ‘scandal’ before it starts questioning its own practices, though. Both authors of the article you posted are political scientists, and they both claim that replication studies are not sexy in their area. Psychology was like that 5-6 years ago, but nowadays a good replication study can easily lead to a publication in a top journal.
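As a toy illustration of the file-drawer effect (my own sketch, unrelated to any of the real datasets discussed here): simulate many underpowered studies of a small but real effect, "publish" only the significant ones, and compare the published average to the truth.

```python
import random

random.seed(7)

TRUE_EFFECT = 0.2    # real effect, in standard-deviation units
N_PER_GROUP = 25     # deliberately underpowered
STUDIES = 5000

def mean(xs):
    return sum(xs) / len(xs)

def one_study():
    """Return (estimated effect, was it 'significant'?)."""
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    diff = mean(treated) - mean(control)
    se = (2 / N_PER_GROUP) ** 0.5   # known unit variance, for simplicity
    return diff, diff / se > 1.96   # a "positive, significant" result

results = [one_study() for _ in range(STUDIES)]
all_effects = [d for d, _ in results]
published = [d for d, sig in results if sig]  # file drawer keeps the rest

print(f"true effect:               {TRUE_EFFECT}")
print(f"average over ALL studies:  {mean(all_effects):.2f}")
print(f"average over 'published':  {mean(published):.2f}")
```

Only the significant minority gets "published" here, and because each study is noisy, the ones that clear the significance bar overestimate the effect by a wide margin. A meta-analysis of the published record alone would badly overstate the true effect, with no individual researcher ever faking anything.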

 

You do raise an interesting point about qualitative vs. quantitative methodologies in the social sciences, though. My guess is that replication studies are “a thing” in quantitatively-bent social sciences (of which Psychology and Political Science are preeminent examples), but I’m not sure the same paradigm would make sense in, I dunno, anthropology. I mean, for replication to take place you kind of have to buy into the idea that the phenomenon under study exists outside of your own perception of it AND that it can be measured. Otherwise you wouldn’t expect its influence to manifest itself repeatedly across various samples of the same population.

