Posted

I was actually just working on this last night. It's a little more complicated than that. Three of the schools with the best placement records (i.e., the highest percentage of students securing TT jobs), including UCSD and UMass, are programs outside the top 20. NYU has 57% of its students receiving tenure-track jobs; UCSD has 78%. The numbers I used to calculate this came from the APA report on grad schools (though for a few schools, including UMass, I had to use the department's website). When you plot ranking against TT placement rate, the plot is pretty scattered. Doing a linear regression, the R-squared value is about .09, which indicates a weak correlation between ranking and placement.
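(If anyone wants to reproduce this kind of calculation, below is a minimal sketch in Python. The ranks and rates are hypothetical placeholders, not the actual APA figures, and the variable names are my own.)

```python
# Minimal sketch of the ranking-vs-placement regression described above.
# The data below are hypothetical placeholders -- substitute the TT
# placement rates pulled from the APA report / department websites.
from scipy import stats

pgr_rank = [1, 4, 8, 12, 17, 22, 28, 35, 45]    # hypothetical PGR ranks
tt_rate = [0.57, 0.70, 0.45, 0.78, 0.50, 0.62,  # hypothetical fraction of
           0.41, 0.55, 0.48]                    # grads landing TT jobs

fit = stats.linregress(pgr_rank, tt_rate)
print(f"R-squared = {fit.rvalue ** 2:.2f}")     # low value => weak linear fit
```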

I'm hoping to put up something blog-like with some of this information today or tomorrow, in case anyone wants to check it out for themselves.

I'm not sure this is a great idea. Make sure you read the post that Establishment linked so you can avoid some of the errors that study makes. Here's one thing you need to consider: placement data are backward-looking. If you're looking for a meaningful correlation between PGR ranking and placement, you'd need to use past PGR rankings. Students placed in 2010-2014 would have joined their departments in 2003-2009 (give or take a few years), when those departments' rankings could have differed substantially from their current (2011) rankings.

Posted

I'm not sure this is a great idea. Make sure you read the post that Establishment linked so you can avoid some of the errors that study makes. Here's one thing you need to consider: placement data are backward-looking. If you're looking for a meaningful correlation between PGR ranking and placement, you'd need to use past PGR rankings. Students placed in 2010-2014 would have joined their departments in 2003-2009 (give or take a few years), when those departments' rankings could have differed substantially from their current (2011) rankings.

Fair point. Although using the 2006 or 2004 rankings with placement data from 2008 to 2013 still gives me low R-squared values (.17 and .18) for a linear fit.

Posted (edited)

Fair point. Although using the 2006 or 2004 rankings with placement data from 2008 to 2013 still gives me low R-squared values (.17 and .18) for a linear fit.

I think there are lots of other questions you should be asking besides just "Does PGR ranking correlate with placement?" For example, why only count TT placement? Why not count postdocs, VAPs, lectureships, etc. and use some kind of weighting to measure the desirability of each type of position?

Also, isn't it plausible that placement should correlate better with particular faculty than with the entire department? If anything, it seems to me that you should be using the specialty rankings and not the overall rankings. But this complicates things considerably.

Basically, while I think it's admirable that you're willing to investigate this, I'm worried that presenting your data analysis will mislead a lot of applicants. Unless you're a professional statistician (in which case, please proceed), I think we're better off looking into placement records the old-fashioned way, i.e. asking for the raw data from individual departments, talking with our advisors, talking to current grad students and alumni, etc.

 

EDIT: I should also add that I think the PGR overall ranking is an extremely crude, possibly bad, measure of "faculty quality" (for evidence just read the methodology page on the PGR site), and so it's not likely to yield interesting correlations. But that's a whole other can of worms...

Edited by aduh
Posted

If you read the post you linked to carefully, you'll see that it points to several major methodological flaws with that study. When the study came out a few months back, it was heavily criticized (correctly, in my view) in the philosophy blogosphere. It's false that it supports the claim that "you shouldn't go to graduate school in philosophy at anything less than a top-20 program."

 

Yes, there are flaws, but it's not a wholly useless study. In fact, Leiter mentions that one of the study's flaws results in a weaker measured correlation than there actually is. And here's another attempt at rankings (http://phiplaces.wordpress.com/introduction/), which also finds a correlation.

 

Regardless of these sorts of empirical studies, however, professors are giving such advice. Now, professors are often wrong about things. They try to give you application advice despite the fact that adcoms differ so very widely, etc., etc. Yet we still consider their opinions because they've been through the process, they've sat on adcoms, and they're rather integrated into the field and have some intuitions about how others work. So the mere fact that professors are adopting this line of not attending programs below the top 20 is itself evidence. It's weak evidence, but any sort of evidence you can expect to acquire regarding these sorts of complicated events is going to be weak.

Posted

I think there are lots of other questions you should be asking besides just "Does PGR ranking correlate with placement?" For example, why only count TT placement? Why not count postdocs, VAPs, lectureships, etc. and use some kind of weighting to measure the desirability of each type of position?

Also, isn't it plausible that placement should correlate better with particular faculty than with the entire department? If anything, it seems to me that you should be using the specialty rankings and not the overall rankings. But this complicates things considerably.

Basically, while I think it's admirable that you're willing to investigate this, I'm worried that presenting your data analysis will mislead a lot of applicants. Unless you're a professional statistician (in which case, please proceed), I think we're better off looking into placement records the old-fashioned way, i.e. asking for the raw data from individual departments, talking with our advisors, talking to current grad students and alumni, etc.

 

EDIT: I should also add that I think the PGR overall ranking is an extremely crude, possibly bad, measure of "faculty quality" (for evidence just read the methodology page on the PGR site), and so it's not likely to yield interesting correlations. But that's a whole other can of worms...

 

On the other hand, I don't disagree with most of your points. There are other things to consider than just TT placement, and one ought also to find a way to weight various placements. Not all TT placements are equal. And a better placement correlation can be obtained by looking at particular faculty. That's why, when you finally get accepted to a PhD program, you'll want to ask the departments for the placement records of the professors you expect you'll be working with, because that information will be much more accurate.

 

The use of the PGR/PhD placement rankings that late in the game, then, is mainly to speak to a school's average placement record, which can be useful when we consider that students often change interests. If a school is strong overall, then a student will be able to switch interests without facing lower chances of success. It's also possible that a school's overall reputation, regardless of its record in a specific area, has an influence on hiring, and so it shouldn't be entirely discounted.

Posted (edited)

I'm still waiting to hear from McGill and UMass. TGC results had McGill's notices out on February 22nd last year. We've all waited so many months in anticipation that even waiting one more week seems impossible :)

 

Edit:  I haven't heard from UConn, either, but I'm assuming that's a rejection.

Edited by shelbyelisha
Posted (edited)

I'm still waiting to hear from McGill and UMass. TGC results had McGill's notices out on February 22nd last year. We've all waited so many months in anticipation that even waiting one more week seems impossible :)

 

Edit:  I haven't heard from UConn, either, but I'm assuming that's a rejection.

 

You might want to contact them. I and others had to contact them last year to find out we were waitlisted. I don't know if they contacted waitlists this year, or maybe they only let those who are high on the WL know. (UConn, that is)

 

(I'm actually curious whether UConn ever rejects students before April 15th. My suspicion is that they just waitlist everybody they don't give an offer to right away, as a super safety precaution.)

Edited by Establishment
Posted

I'm still waiting to hear from McGill and UMass. TGC results had McGill's notices out on February 22nd last year. We've all waited so many months in anticipation that even waiting one more week seems impossible :)

 

Edit:  I haven't heard from UConn, either, but I'm assuming that's a rejection.

I have confirmation that UMass has finished its first round but just hasn't released the results yet. Also, apparently the majority of their strongest candidates already have MAs. I don't know, however, when they will email/call people.

Posted

I'm not going to make the expected critique of the PGR rankings (i.e., the usual tirade from a "SPEP School" student), but now that I've talked to people who know Leiter, who have been involved in the ranking, and who come from both Leiter-ranked schools and otherwise, I feel that I should reiterate what I've been told a thousand times: the PGR is the most reputable comprehensive ranking of graduate programs only because it was the first comprehensive ranking of graduate programs. The closer you look at the list (as several of you have done), the more you discover that plenty of "lower ranked" schools have stellar placement rates (and that "higher ranked" schools hardly guarantee placement).

 

But shouldn't a higher ranking indicate a better chance on the job market? You might also notice that faculty at lower ranked or non-ranked schools publish no less often (and in no less reputable journals) than faculty at higher ranked schools. I'm certainly not suggesting that all programs are equal, but one would do well to do his or her own research. Insert all that business about fit and research interests here, but also try to track down the dept's placement record. If a dept is placing its graduates consistently and the faculty is crazy about the same things you're crazy about, it matters little whether it's a "Top 20" or a "Top 50" school.

 

BTW, from the rumor mill: Leiter has been a little shady when it comes to moving schools up in the ranking. I've heard stories of departments asking him to reconsider their programs; he responds by suggesting that the department invite him as a speaker, "so that he can get a better look at the department" (naturally, with the usual speaking fee). This is usually enough for a school to rise 5-10 places on the list. But that's only hearsay; I'm not entirely convinced myself. Still, it might be enough to remind us that the PGR isn't some federally run, impartial study of the quality of philosophy graduate programs and the success rates of their students--it's the pet project of one professor and his friends.

Posted (edited)

BTW, from the rumor mill: Leiter has been a little shady when it comes to moving schools up in the ranking. I've heard stories of departments asking him to reconsider their programs; he responds by suggesting that the department invite him as a speaker, "so that he can get a better look at the department" (naturally, with the usual speaking fee). This is usually enough for a school to rise 5-10 places on the list. But that's only hearsay; I'm not entirely convinced myself. Still, it might be enough to remind us that the PGR isn't some federally run, impartial study of the quality of philosophy graduate programs and the success rates of their students--it's the pet project of one professor and his friends.

 

Sorry, but that sounds like a bunch of unsubstantiated, intentionally made-up bullshit. If you read about the methodology, you would know that this isn't even possible. Leiter isn't asked to "reconsider programs" because Leiter isn't the one doing the ranking... he isn't himself making any decisions other than setting up the machine to begin with. The 300-ish philosophy professors are the ones who do the rankings. They each score every faculty list, and they also score faculty lists in their area of specialty. Leiter himself isn't just sitting behind a computer thinking up where he personally thinks schools should go. Getting rid of that mentality was the whole point of the rankings....

Edited by TheVineyard
Posted

But shouldn't a higher ranking indicate a better chance on the job market?

 

Well, no, considering the rankings are only a reflection of overall faculty quality and not of placement.

Posted

Maybe we need a Leiter Rankings debate thread? 

Posted

 

BTW, from the rumor mill: Leiter has been a little shady when it comes to moving schools up in the ranking. I've heard stories of departments asking him to reconsider their programs; he responds by suggesting that the department invite him as a speaker, "so that he can get a better look at the department" (naturally, with the usual speaking fee). This is usually enough for a school to rise 5-10 places on the list. But that's only hearsay; I'm not entirely convinced myself. Still, it might be enough to remind us that the PGR isn't some federally run, impartial study of the quality of philosophy graduate programs and the success rates of their students--it's the pet project of one professor and his friends.

 

Agreed with TheVineyard. This is really unsubstantiated nonsense. It's also wrong that he "trades" speaking engagements at a department for moving it up in the rankings, because, as Vineyard has already pointed out, he doesn't control the rankings. He doesn't rank departments individually, and it's not just "his friends" who take part in the rankings. The rankings aren't meant to be a catch-all (which is why there is a breakdown of rankings by specialty), but they are a good starting point for those trying to think about which programs are held in high esteem broadly.

Posted

Sorry, but that sounds like a bunch of unsubstantiated, intentionally made-up bullshit. If you read about the methodology, you would know that this isn't even possible. Leiter isn't asked to "reconsider programs" because Leiter isn't the one doing the ranking... he isn't himself making any decisions other than setting up the machine to begin with. The 300-ish philosophy professors are the ones who do the rankings. They each score every faculty list, and they also score faculty lists in their area of specialty. Leiter himself isn't just sitting behind a computer thinking up where he personally thinks schools should go. Getting rid of that mentality was the whole point of the rankings....

I can't believe I'm saying this, but I 100% agree with the vineyard here. The PGR has a lot of problems, but Leiter just moving people up the list in exchange for speaking fees is absolutely absurd.

And maybe a PGR debate thread would be alright!

Posted

I think there are lots of other questions you should be asking besides just "Does PGR ranking correlate with placement?" For example, why only count TT placement? Why not count postdocs, VAPs, lectureships, etc. and use some kind of weighting to measure the desirability of each type of position?

Also, isn't it plausible that placement should correlate better with particular faculty than with the entire department? If anything, it seems to me that you should be using the specialty rankings and not the overall rankings. But this complicates things considerably.

Basically, while I think it's admirable that you're willing to investigate this, I'm worried that presenting your data analysis will mislead a lot of applicants. Unless you're a professional statistician (in which case, please proceed), I think we're better off looking into placement records the old-fashioned way, i.e. asking for the raw data from individual departments, talking with our advisors, talking to current grad students and alumni, etc.

 

EDIT: I should also add that I think the PGR overall ranking is an extremely crude, possibly bad, measure of "faculty quality" (for evidence just read the methodology page on the PGR site), and so it's not likely to yield interesting correlations. But that's a whole other can of worms...

I used TT just because that's the distinction most programs use.

I did this to answer the question "Do I need to go to a top-20 school to get a job?" For me, the answer is no. I wanted to share the results I found mostly because it seems like people focus on the rating more than on the placement record of a school, even though the placement record gives you more information.

I would love to look at the placement records for various specialties, but I think the sample sizes would get too small to return statistically significant results. I'm not a professional statistician, but my fiance does computational neuroscience and used the data set for one of his assignments. 

But really, low correlation values just mean that relatively little of the variance in placement rates is explained by the ranking of the school; the rest is associated with a whole host of other factors (some of which you identified). So if you're interested in a program with a good placement record, the PGR is not the best way to evaluate that.
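To make the "variance explained" reading concrete, here's a rough sketch (same caveat as before: hypothetical placeholder numbers, not the real data set). R-squared is just the share of the total variance in placement rates that the linear fit on rank accounts for; everything else is left to those other factors:

```python
# Sketch: R-squared as the fraction of variance explained by ranking.
# Hypothetical placeholder data, not the actual APA figures.
import numpy as np
from scipy import stats

rank = np.array([1, 4, 8, 12, 17, 22, 28, 35, 45])
rate = np.array([0.57, 0.70, 0.45, 0.78, 0.50, 0.62, 0.41, 0.55, 0.48])

fit = stats.linregress(rank, rate)
predicted = fit.intercept + fit.slope * rank

ss_res = np.sum((rate - predicted) ** 2)    # variance the fit leaves unexplained
ss_tot = np.sum((rate - rate.mean()) ** 2)  # total variance in placement rates
r_squared = 1 - ss_res / ss_tot             # equals fit.rvalue ** 2

print(f"Variance explained by rank: {r_squared:.0%}")
print(f"Variance left to other factors: {1 - r_squared:.0%}")
```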

Posted

I'm not going to make the expected critique of the PGR rankings (i.e., the usual tirade from a "SPEP School" student), but now that I've talked to people who know Leiter, who have been involved in the ranking, and who come from both Leiter-ranked schools and otherwise, I feel that I should reiterate what I've been told a thousand times: the PGR is the most reputable comprehensive ranking of graduate programs only because it was the first comprehensive ranking of graduate programs. The closer you look at the list (as several of you have done), the more you discover that plenty of "lower ranked" schools have stellar placement rates (and that "higher ranked" schools hardly guarantee placement).

 

But shouldn't a higher ranking indicate a better chance on the job market? You might also notice that faculty at lower ranked or non-ranked schools publish no less often (and in no less reputable journals) than faculty at higher ranked schools. I'm certainly not suggesting that all programs are equal, but one would do well to do his or her own research. Insert all that business about fit and research interests here, but also try to track down the dept's placement record. If a dept is placing its graduates consistently and the faculty is crazy about the same things you're crazy about, it matters little whether it's a "Top 20" or a "Top 50" school.

 

BTW, from the rumor mill: Leiter has been a little shady when it comes to moving schools up in the ranking. I've heard stories of departments asking him to reconsider their programs; he responds by suggesting that the department invite him as a speaker, "so that he can get a better look at the department" (naturally, with the usual speaking fee). This is usually enough for a school to rise 5-10 places on the list. But that's only hearsay; I'm not entirely convinced myself. Still, it might be enough to remind us that the PGR isn't some federally run, impartial study of the quality of philosophy graduate programs and the success rates of their students--it's the pet project of one professor and his friends.

 

If you were trying to help stressed-out applicants by arguing that getting into a department ranked high on the PGR isn't so important after all, I guess there's some intrinsic merit to that. But what you've said here just doesn't make any sense. You started off by saying that you're "not going to make the expected critique of the PGR rankings." Then you proceeded to make the expected critique of the PGR rankings, with the added bonus of some nonsensical, conspiracy-theory-sounding gossip at the end.

 

The PGR is not a ranking of placement records. No one said it was such a thing, though there's definitely a rough correlation between a department's ranking and its placement record. 

Posted

If I didn't get accepted outright by the graduate program I currently attend, and I didn't get into the bottom-ranked University of Connecticut, Storrs, then I'm not getting into Princeton, Michigan, or Columbia. And I told him it is really counterproductive for him to keep saying "oh well, you never know, keep some hope," because at some point I have to start being realistic about my chances, and keeping up hope in the face of completely overwhelming odds is only setting myself up for more and more disappointment.

 

Don't be so sure. I got notice from two places so far: Pittsburgh and Georgetown. Accepted to the former, rejected from the latter. In terms of how well I thought my interests aligned with faculty, FWIW, these two schools were actually at the top of my list. Sometimes the results just don't make sense.

 

Because the stronger ranked your program is, the easier your time on the job market. I've seen some professors advocate a variant of the once-standard advice that you just shouldn't go to graduate school in philosophy (unless you can't imagine yourself doing anything else). The variant is that you shouldn't go to graduate school in philosophy at anything less than a top-20 program.

 

I've been told the latter, but not the former. I've actually been advised heavily against the former: don't go to graduate school unless you can imagine doing something else. My professors all advised me that I shouldn't go unless I could see myself doing something else after finishing my PhD, because a lot of people who get stuck in shitty positions (adjuncting forever, for example) end up there because they refuse to, or can't, see themselves doing anything else.

Posted

Don't be so sure. I got notice from two places so far: Pittsburgh and Georgetown. Accepted to the former, rejected from the latter. In terms of how well I thought my interests aligned with faculty, FWIW, these two schools were actually at the top of my list. Sometimes the results just don't make sense.

 

Same with me: I got accepted to Notre Dame but rejected from Georgetown and Stony Brook, all of which I thought would be great fits for me. So don't give up hope! There's indeed no straightforward logic here.

Posted

Well, no, considering the rankings are only a reflection of overall faculty quality and not of placement.

However, this is one of the shortcomings of the PGR.

P1. The primary thing most students should probably be considering in choosing a school is placement (if they want to make a career of philosophy).

P2. The PGR doesn't factor placement into the rankings.

C. Thus, the PGR doesn't factor into the rankings the primary thing most students should probably be considering in choosing a school.

Leiter acknowledges this, saying that placement is backward-looking, which is true. It's just important to realize that the PGR is not, and does not claim to be, the end-all, be-all source of information.

Posted

However, this is one of the shortcomings of the PGR.

P1. The primary thing most students should probably be considering in choosing a school is placement (if they want to make a career of philosophy).

P2. The PGR doesn't factor placement into the rankings.

C. Thus, the PGR doesn't factor into the rankings the primary thing most students should probably be considering in choosing a school.

Leiter acknowledges this, saying that placement is backward-looking, which is true. It's just important to realize that the PGR is not, and does not claim to be, the end-all, be-all source of information.

There are a lot of really great schools that are nowhere to be found in the PGR. Ultimately, one should just do the work of researching various departments and deciding whether each seems like it would be a good place to study, given the properties a given student finds relevant. The PGR is helpful; I can grant that. I don't think it's very controversial to suggest that it is also very limited.

Posted (edited)

There are a lot of really great schools that are nowhere to be found in the PGR. Ultimately, one should just do the work of researching various departments and deciding whether each seems like it would be a good place to study, given the properties a given student finds relevant. The PGR is helpful; I can grant that. I don't think it's very controversial to suggest that it is also very limited.

 

Assuming you mean that there are a lot of great *graduate* programs that aren't on the PGR - I'm curious, what departments do you have in mind? What are their placement records like?

Edited by MattDest
Posted

South Florida, for instance, is only mentioned once on the PGR, for its strength in 20th-century continental philosophy, and I'm pretty sure it's one of those entries that's been "inserted by the board." But USF has a number of research strengths across the discipline, including early modern philosophy, philosophy of science, and epistemology (plus a really cool track in philosophy and religious studies, which I have applied to). The people I've talked to there say that their students regularly land tenure-track positions and long-term visiting positions.

Posted

Assuming you mean that there are a lot of great *graduate* programs that aren't on the PGR - I'm curious, what departments do you have in mind? What are their placement records like?

I know that there are a few "Pluralist" departments that have decent placement, like Stony Brook, that aren't mentioned on the top 50 of the PGR. But I think a lot of schools that aren't on the top 50 PGR are pretty bad as well.

Posted

I know that there are a few "Pluralist" departments that have decent placement, like Stony Brook, that aren't mentioned on the top 50 of the PGR. But I think a lot of schools that aren't on the top 50 PGR are pretty bad as well.

There are a lot of great programs that aren't in the top 50: Boston College, Fordham, Purdue, Illinois UC, Iowa, Utah, Marquette, St. Louis, and so on.
