Academic Politics - Something to Consider When Choosing an Adviser/Department


TXInstrument11

Recommended Posts

I'm currently in a PhD program, and I was forwarded a blog post that I would have found useful as an applicant. It's by a prominent "replication guru", Andrew Gelman.

I am not here to take sides in the replication debate, merely to pass along information that may help you more fully appreciate the importance of politics in academia. Gelman's condemnation of a professor's former students is a case in point.

http://andrewgelman.com/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/

While reading this, some troubling rumors I heard about a few departments I applied to suddenly made a lot more sense, as did offhand negative comments I routinely hear from professors in my current department. For better or worse, the popularity of your adviser matters a lot, arguably more than ever in the current climate.

At my undergrad institution, practically a "no-name" with few power-player professors, I only heard whispers of these things from a few select people. If I had better understood the intensity and frequency of these academic cat fights, I might have taken more care in choosing departments to apply to, and I think now that I might have had a better chance of acceptance by dodging departments that appear to be falling apart at the seams.

For more examples, check out the feud between Uri Simonsohn and fellow "replication guru" Greg Francis to see how ugly the mudslinging can get.


This is a great read. Just to clear things up: "replication guru" isn't really the right way to describe Andrew Gelman. He is one of the (if not THE) leading minds in statistics. People who go to his talks literally ask for his autograph – he is just that good at what he does.

That said, he has a big problem with a lot of the things people do to leverage statistical testing in a way that favors their own theories, and his blog describes these things.

This is a problem with people doing bad science, not a political "I don't like you so I'll write a blog post about you" cat fight. The takeaway for me is: choose an advisor who keeps up with current methods.


2 hours ago, The_Old_Wise_One said:

The takeaway for me is: choose an advisor who keeps up with current methods.

There's also: Choose an advisor who doesn't have a reputation for being an asshole. These political debates tarnish reputations on both sides, and my read on the field's sentiment is that it's a lot worse to be someone who rips on others via social media.

A lot of what's going on in social psychology lately reminds me of what's going on with Trump supporters: A minority that feels disenfranchised and embittered, and produces a lot of vitriol and aggression to try and provoke reform from the establishment. Make Science Great Again. 


6 hours ago, The_Old_Wise_One said:

This is a great read. Just to clear things up: "replication guru" isn't really the right way to describe Andrew Gelman. He is one of the (if not THE) leading minds in statistics. People who go to his talks literally ask for his autograph – he is just that good at what he does.

That said, he has a big problem with a lot of the things people do to leverage statistical testing in a way that favors their own theories, and his blog describes these things.

This is a problem with people doing bad science, not a political "I don't like you so I'll write a blog post about you" cat fight. The takeaway for me is: choose an advisor who keeps up with current methods.

Right or wrong, the Replication Movement is causing a lot of academics to tank in popularity. Just because they're right doesn't make it any less political.

Edit: I am part of the Replication Movement myself, so I largely support the mission of people like Gelman, and his confrontational approach is probably what's needed to create change (Cohen, Meehl, and others have been talking about this for ages to no avail, after all). But this is still politics, just between scientists.


6 hours ago, The_Old_Wise_One said:

This is a great read. Just to clear things up: "replication guru" isn't really the right way to describe Andrew Gelman. He is one of the (if not THE) leading minds in statistics. People who go to his talks literally ask for his autograph – he is just that good at what he does.

That said, he has a big problem with a lot of the things people do to leverage statistical testing in a way that favors their own theories, and his blog describes these things.

This is a problem with people doing bad science, not a political "I don't like you so I'll write a blog post about you" cat fight. The takeaway for me is: choose an advisor who keeps up with current methods.

I second this. My advisor kicks up a lot of dirt publishing methods papers and bringing up issues not everyone wants to talk about. I respect her for that and I would choose to work with her again for her focus on methodological issues and doing research properly.

There's a real shift in the way people are thinking about science and doing science, and there are people who are not adapting to these new requirements. Is it politics to call them out on it?

Reputation in academia is one of those things everyone likes to talk about, but no one can really defend when it comes down to it. It's a very vague notion built on the assumption that people have to like you to cite your work, or that if everyone likes everyone else, we will all be more successful. But that's not true, IMO.


41 minutes ago, eternallyephemeral said:

There's a real shift in the way people are thinking about science and doing science, and there are people who are not adapting to these new requirements. Is it politics to call them out on it?

People might ask: What are these "new requirements" and who are the people setting them? Science is a collaborative enterprise, and things happen by consensus. "Call them out" is an interesting choice of words; I think that's what people chafe at, because it sounds like admonishing and shaming. Rather, there should be discussion, debate, and persuasion.

To highlight the point: it's possible to be a methods-focused person without being an asshole about it. The best examples I can think of are Preacher and Hayes. They realized that people weren't doing mediation analyses properly, but instead of just kicking at past work, they created tools to facilitate proper mediation analyses. Their SPSS macro is enormously popular and has driven the field forward, allowing novel study designs that weren't possible in the past because we didn't have readily accessible means of analyzing the data. They have a good reputation because they're smart, creative, respectful, productive, and collegial.
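For readers who haven't met these tools: the heart of the bootstrap approach they popularized is resampling the indirect effect a*b, where a is the slope of the mediator on the predictor and b is the slope of the outcome on the mediator controlling for the predictor. Here's a minimal, purely illustrative sketch (my own toy code with made-up data, not their macro, which does far more):

```python
import random
import statistics

def slope(y, x):
    """OLS slope from a simple regression of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def partial_slope(y, x, m):
    """Slope of y on m, controlling for x (two-predictor OLS via normal equations)."""
    xc = [v - statistics.fmean(x) for v in x]
    mc = [v - statistics.fmean(m) for v in m]
    yc = [v - statistics.fmean(y) for v in y]
    sxx = sum(v * v for v in xc)
    smm = sum(v * v for v in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    det = sxx * smm - sxm * sxm
    return (sxx * smy - sxm * sxy) / det  # coefficient on m

def bootstrap_indirect_ci(x, m, y, n_boot=2000, seed=1):
    """Percentile-bootstrap 95% CI for the indirect effect a * b."""
    rng = random.Random(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        # Resample cases with replacement and recompute a * b each time.
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        est.append(slope(ms, xs) * partial_slope(ys, xs, ms))
    est.sort()
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# Fake data with a genuine X -> M -> Y path, for illustration only.
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
y = [0.5 * mi + rng.gauss(0, 1) for mi in m]
lo, hi = bootstrap_indirect_ci(x, m, y)
print(lo, hi)  # a CI excluding zero is evidence for mediation
```

The design point is the one made above: rather than telling people their causal-steps mediation tests were wrong, they shipped an accessible tool that made the better analysis the easy one.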

The criticism I often hear leveled at (some) replicators and (some) methods people is that they want to rip apart what other people are producing without producing anything useful of their own. Build up, not tear down. When people like Alison Ledgerwood say: "We're trying to improve methods in our lab, here's what we've been doing if you want to try it too..." this is helpful and people respect it. When others, who I won't name, say that they'll only trust findings that were pre-registered because they assume those findings were p-hacked or whatever, that's not helpful. It signals distrust of and disrespect for your colleagues.


3 hours ago, lewin said:

People might ask: What are these "new requirements" and who are the people setting them? Science is a collaborative enterprise, and things happen by consensus. "Call them out" is an interesting choice of words; I think that's what people chafe at, because it sounds like admonishing and shaming. Rather, there should be discussion, debate, and persuasion.

To highlight the point: it's possible to be a methods-focused person without being an asshole about it. The best examples I can think of are Preacher and Hayes. They realized that people weren't doing mediation analyses properly, but instead of just kicking at past work, they created tools to facilitate proper mediation analyses. Their SPSS macro is enormously popular and has driven the field forward, allowing novel study designs that weren't possible in the past because we didn't have readily accessible means of analyzing the data. They have a good reputation because they're smart, creative, respectful, productive, and collegial.

The criticism I often hear leveled at (some) replicators and (some) methods people is that they want to rip apart what other people are producing without producing anything useful of their own. Build up, not tear down. When people like Alison Ledgerwood say: "We're trying to improve methods in our lab, here's what we've been doing if you want to try it too..." this is helpful and people respect it. When others, who I won't name, say that they'll only trust findings that were pre-registered because they assume those findings were p-hacked or whatever, that's not helpful. It signals distrust of and disrespect for your colleagues.

I agree with you that it is important for people who study methodology to create tools for others to use – but if you are using this argument against Gelman, you must not know how much he has contributed to the scientific community. He has published numerous textbooks on methodology, and he has also created state-of-the-art software (Stan) for doing Bayesian statistics.

That being said, most of his criticism is not of the methods themselves (e.g., ANOVA, regression) but of how people use and interpret these methods and their results. In other words, I can have an idea, design a study, collect multiple types of data, test every variable for the effect I want, and when I find something significant, write it up as if that was my hypothesized finding all along. This practice routinely produces spurious results, and everyone knows it.
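To make the arithmetic behind that concrete, here's a quick simulation (my own sketch, not anything from Gelman's blog). Under the null hypothesis a p-value is uniformly distributed, so the chance that at least one of several tests comes out "significant" grows quickly with the number of variables you peek at:

```python
import random

random.seed(0)

def chance_of_some_hit(n_tests, alpha=0.05, n_sims=20000):
    """Under the null, each p-value is Uniform(0, 1). Count how often
    at least one of n_tests p-values falls below alpha by chance."""
    hits = sum(
        any(random.random() < alpha for _ in range(n_tests))
        for _ in range(n_sims)
    )
    return hits / n_sims

print(chance_of_some_hit(1))   # about 0.05: the nominal error rate
print(chance_of_some_hit(10))  # about 0.40, i.e. roughly 1 - 0.95**10
```

So a researcher who measures ten outcomes and writes up whichever one hit p < .05 has roughly a 40% chance of reporting pure noise, even though each individual test looked perfectly legitimate.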

"These new requirements" are not new in any temporal sense of the word; they are "new" because people did not ask questions about significance in the past. As scientists responsible for creating knowledge for the world, it is our responsibility to think critically about the methods used to justify our claims – that's it. That is the whole idea Gelman is trying to get across.

The problem is that nobody has been listening. People in high profile positions continue to publish research conducted using bad methodology, and they continue to train new scientists to do the same. Is that the kind of world you want to live in? One where you cannot even trust science? 

At this point, expressing ideas in the open for all to see is the best way to create a conversation about the changes that need to be made in science. It allows everyone to join the conversation, not just high-profile researchers protected by their friends on the editorial boards of journals.


7 hours ago, lewin said:

There's also: Choose an advisor who doesn't have a reputation for being an asshole. These political debates tarnish reputations on both sides, and my read on the field's sentiment is that it's a lot worse to be someone who rips on others via social media.

A lot of what's going on in social psychology lately reminds me of what's going on with Trump supporters: A minority that feels disenfranchised and embittered, and produces a lot of vitriol and aggression to try and provoke reform from the establishment. Make Science Great Again. 

Gelman's reputation is far from tarnished. In fact, he is a hero in many people's minds for coming out and telling researchers that they are abusing statistical methods in order to perpetuate their own theories. The only people who don't appreciate what Gelman is doing for science are those who have not thought critically about the effects that bad methods have on society, and those who refuse to admit that they are wrong.

Comparing this with Trump is absurd. First, it isn't a minority of people taking these issues seriously; it's a large number of people across every field. Second, Gelman has absolutely nothing to gain from doing this; he is doing it because he wants to see people do better science. Others have tried in the past, and they failed because they did not take a direct approach.


1 hour ago, The_Old_Wise_One said:

Gelman's reputation is far from tarnished. In fact, he is a hero in many people's minds for coming out and telling researchers that they are abusing statistical methods in order to perpetuate their own theories. The only people who don't appreciate what Gelman is doing for science are those who have not thought critically about the effects that bad methods have on society, and those who refuse to admit that they are wrong.

Comparing this with Trump is absurd. First, it isn't a minority of people taking these issues seriously; it's a large number of people across every field. Second, Gelman has absolutely nothing to gain from doing this; he is doing it because he wants to see people do better science. Others have tried in the past, and they failed because they did not take a direct approach.

Just to clarify, I didn't have Gelman in mind particularly when I wrote my other comment about tarnished reputations, but in retrospect I see why it looked that way. Mea culpa. I'm not in statistics and don't know much firsthand about Gelman's rep one way or the other.

About my Trump metaphor: "minority" and "large number of people" aren't mutually exclusive terms; Trump has a large number of supporters, but they're still a minority of Americans. Regardless, my point wasn't about the raw numbers of people involved but about the sense of resentment and disenfranchisement. Of course there are a lot of dissimilarities with Trumpers, but what I see in common is some number of people (perhaps many numerically, but not a majority proportionally) nursing a sense of umbrage: that there are popular people in the "establishment" whose status (measured in pubs, TED talks, or Ivy League tenure) is unearned, that those people deserve to be taken down a peg or two, and that the reformers have been unfairly shut out. Gelman probably gets extra support because he's a big name, and he's finally "calling out" the other popular kids.

Stepping back to the OP's point: for new grad students, my advice would be to keep your head down and avoid controversial advisors, i.e., anyone too strongly in any camp. Establish a track record of good research and worry about the internecine feuds later.


To mirror @lewin's return to the excellent main point of @TXInstrument11, I'll just add a few things from my experience.

1) Campus visits prior to deciding which offer to accept can be incredibly helpful. Pay attention to the dynamics between faculty when you're visiting. Ask grad students who is on their committee. If there's someone that seems like they should be on their committee but isn't, ask them why. (Note: I actually did this when visiting a program and found out that two people who you might logically want to put on a committee couldn't stand one another and refused to work with one another's students.) Similarly, ask grad students what they've heard about working with anyone and everyone you're considering working with. 

2) Think seriously and critically about the reputation of the person you're considering having as your PI/advisor. Does that person have a theoretical or methodological bent? If so, do you want to be closely associated with that for the next 10 years of your life? If not, move along to the next POI. 

3) Keep in mind that all of these things can change. People's reputations rise and fall. Feuds get settled and new ones begin. Some of this is out of your control.


I don't want to sidetrack this thread, but I think this is a very important and timely issue that's worthy of more discussion. I had a chat with one of my labmates about Susan Fiske's letter. He seemed to side with Fiske, arguing that there are a lot of methods trolls out there who are on the hunt for mistakes in others' work, and that these people aren't making positive contributions themselves.

I tend to be a little more sympathetic to Gelman (although I'm not a fan of the strident approach he's using) and think that we SHOULD be pointing out methodological or statistical problems and logical lapses. However, I'm in favor of being cordial and giving people the benefit of the doubt. I'd like to think that researchers are part of a larger community, and I think it's important to preserve a positive, collegial atmosphere. At the same time, we shouldn't be so afraid of stepping on someone's toes that we fail to speak up when we notice something amiss.

Social media and anonymity offer a vehicle for the less established to do this without jeopardizing their reputations, but they also enable harassment and digital mob behavior (see: the Gamergate controversy). People like Gelman and Neuroskeptic should also realize that their word carries a lot of power and has the potential to damage someone's career.


3 hours ago, St0chastic said:

I don't want to sidetrack this thread, but I think this is a very important and timely issue that's worthy of more discussion. I had a chat with one of my labmates about Susan Fiske's letter. He seemed to side with Fiske, arguing that there are a lot of methods trolls out there who are on the hunt for mistakes in others' work, and that these people aren't making positive contributions themselves.

I tend to be a little more sympathetic to Gelman (although I'm not a fan of the strident approach he's using) and think that we SHOULD be pointing out methodological or statistical problems and logical lapses. However, I'm in favor of being cordial and giving people the benefit of the doubt. I'd like to think that researchers are part of a larger community, and I think it's important to preserve a positive, collegial atmosphere. At the same time, we shouldn't be so afraid of stepping on someone's toes that we fail to speak up when we notice something amiss.

Social media and anonymity offer a vehicle for the less established to do this without jeopardizing their reputations, but they also enable harassment and digital mob behavior (see: the Gamergate controversy). People like Gelman and Neuroskeptic should also realize that their word carries a lot of power and has the potential to damage someone's career.


I agree that being cordial is preferable, but only so long as it is effective. History shows us that cordiality in academia – when it comes to pointing out flaws in methodology – almost always leads nowhere. Academics engrossed in methodology write books, opinion articles, etc., and yet hardly anyone in the field bats an eye. Gelman makes an excellent point of this when he brings up Meehl's criticisms of the social sciences.

The major difference between someone like Paul Meehl and someone like Gelman is that Meehl never made it personal. In other words, he never said "X person did Y thing wrong." Gelman is doing just that, and it is causing some friction. However, this is exactly what science needs right now. What better way is there to create change? Since individual people are being criticized, they must now defend their reasoning. If they cannot defend their reasoning, they are doing bad science. If they cannot admit to doing bad science, they are obviously not trying to learn from their mistakes; and learning from mistakes is an absolute in science – there is no debate on that.

All that being said, if the reputations/careers of researchers who refuse to admit and learn from their wrongs are tarnished, what is the problem? Would we prefer that they continue on?


On 9/22/2016 at 3:05 PM, lewin said:

 only trust findings that were pre-registered because they assume those findings were p-hacked or whatever, that's not helpful. It signals distrust of and disrespect for your colleagues.

IMHO the reputation of the field far outweighs the delicate egos of some researchers. Why wouldn't we want to preregister a hypothesis we are confident about? I think the answer is obvious, and preregistering in no way precludes being respectful and/or cordial. Preregistering simply keeps you honest, and before anyone starts with "honesty should be assumed": we aren't talking about a relationship between two people; this is supposed to be science. Also, I'd argue that exploration isn't "bad", so long as you state it as such.


17 hours ago, The_Old_Wise_One said:

All that being said, if the reputations/careers of researchers who refuse to admit and learn from their wrongs are tarnished, what is the problem? Would we prefer that they continue on?

I basically agree with you. I just think that with social media there's the potential to be overly vitriolic, and that's something we should keep in mind. Also, there's a difference between someone who negligently makes methodological or statistical errors again and again and a rookie who goofs up once and then learns from the mistake. As long as we're mostly pointing the spotlight at the former and not the latter, I'm on board with Gelman's approach.

We also need to think about how we can reform incentive structures in science. Why are people p-hacking (usually inadvertently) and chasing media-friendly hypotheses? It's because in the past academia rewarded people who did this. Until we get rid of the publish-or-perish rat race, there will always be a strong incentive to make spurious claims or to read more into your data than is warranted. We need to start rewarding people for the quality of their work rather than the wow factor of their findings. Also, with the infinite space of the internet, there's no reason null results shouldn't be published online.

EDIT: And here's Neuroskeptic's response: http://blogs.discovermagazine.com/neuroskeptic/2016/09/25/fiske-jab-on-methodological-terrorism/


On 9/25/2016 at 0:55 PM, St0chastic said:

We need to start rewarding people for the quality of their work rather than the wow factor of their findings

I think I agree, but the line between superficial merit ("wow factor") and more substantive merit ("this work has scientific value that your peers acknowledge") can be really fuzzy. Is "wow" just whether the work seems to get attention outside the discipline? Or attracts media attention? Sometimes it's only clear in retrospect.

You're right that the incentives tilt toward superficial "wow" rather than substantive "wow". I don't know whether it's still there, but a few years ago wow factor was fairly explicitly written into the reviewer guidelines at Psych Science ("would you go down the hallway to tell one of your colleagues in a different discipline about this?"). I applied for grants recently, and "knowledge mobilization" (i.e., putting your work out there to the [social] media, etc.) is a merit evaluation category just like student training. The problems are systematic and institutional... and given how pervasive they are, I find myself being really sympathetic toward individuals like Amy Cuddy, who happened to play the publicity game really well (like everybody told us to do) and is now being ripped for it. I definitely don't think that quality and wow factor need to be in opposition to each other, in principle.


On 9/24/2016 at 7:24 PM, The_Old_Wise_One said:

Meehl's criticisms of the social sciences

Meehl also critiqued medical science quite a bit, and those critiques didn't take either ;) Or if they had, much of medical diagnosis would take place by computer (Dawes, Faust, & Meehl, 1989) ;)


2 hours ago, lewin said:

Meehl also critiqued medical science quite a bit, and those critiques didn't take either ;) Or if they had, much of medical diagnosis would take place by computer (Dawes, Faust, & Meehl, 1989) ;)

Tangent: Isn't that what IBM is doing with their Watson AI?  Plus Google is now using deep learning to help screen for macular degeneration and diabetic retinopathy: https://www.theguardian.com/technology/2016/jul/05/google-deepmind-nhs-machine-learning-blindness

So in the end maybe Meehl will win.

