
Posted

I think the "holy shit" thread has not been very productive in providing future grad students with a clear guideline to assess graduate programs. It came to the point where people say "there is no real research going on in non-top 20 departments", which is completely nonsense. Those of us who are going to start graduate school in non-top 20 programs should better be hopeful towards the future, without being completely unrealistic. I guess, being disappointed and unhappy about the future program you will attend, only because of its position in the rankings, is the worst way to start graduate school. I admit that people should know the facts about academic hierarchy as often reminded by people already enrolled in top programs. But, one should also believe that she/he can be the most successful person in his/her cohort, or even in the whole history of that program. Even if one cannot reach such high goals, she can still be a very very strong sociologist at the end. People need motivation in the beginning of their graduate studies, so they should not be discouraged.

 

As a sidenote, I agree that it is almost impossible to get a job at Michigan if you have a PhD from a program outside the top 20. But, considering today's job market, it is also very unlikely that you will immediately land a job at a top-20 school even with a Michigan PhD. There are also huge variations within the top 10-20 programs. Check out Stanford's placement record for the last 7 years: http://www.stanford.edu/dept/soc/doctoral/recgrads.html. Except for 2011, which was a great year for Stanford, I do not think it is better than UC Irvine's placement: http://www.sociology.uci.edu/soc_grad_placement.

 

I would like to know how much individual factors can affect the game. Below, I list some of the answers I have encountered to the question "why are top programs better than lower ranked ones?" To what extent do they ring true? To what extent can individual factors make up for the particular shortcomings of lower ranked programs?

 

 

1. The quality of training is simply better in top programs. By the time you get your PhD, you will have an excellent background in research methods/design, theory, and the particular sub-field you are interested in. Scholars in top programs are both leading figures in their fields and excellent teachers. As you move down the rankings, the quality of education deteriorates.

 

2. The quality of the graduate students is better. Top programs are more selective. They recruit people who are better at asking sociologically relevant questions, thinking analytically, and organizing their arguments in a clear and persuasive style. In top programs, collaborating with really smart graduate students gives you better opportunities to cultivate your research interests and get published.

 

3. There are more opportunities. Resources that support graduate student research, such as funding, research centers, and academic writing centers, are more abundant in top programs. You can participate in numerous conferences, research projects, and workshops where you get the chance to work with, or be inspired by, leading sociologists. And you will have the money to do those things.

 

4. Getting published in top sociology journals is easier. Publishing is the key to academic advancement, including landing a job at a top institution. In top departments, there are more people who regularly publish in top journals (AJS, ASR, SF, T&S, SP, etc.). Publishing in such journals requires a certain set of skills, and these people can help you learn the craft of academic writing. You cannot learn these skills in lower ranked programs, simply because there are not as many people there who publish in top journals.

 

5. Departmental culture is more collaborative or more competitive. Either way, you feel the peer pressure and the faculty's high expectations of you, and that pushes you forward. You start to have ever higher expectations of yourself, and this shows in your work.

 

6. Prestige matters. You are invited into an elite social network. Your professors often collaborate with professors at other top departments, and their letters of recommendation will carry more weight on the job market. Contact between top programs is more frequent than contact between a top program and a lower ranked one, so by the time you start looking for a job, the faculty at other top programs and potential recruiters will already be familiar with you and your work. Also, if two candidates with the same credentials compete for the same job, the one hired is going to be from the more prestigious school.

 

7. The organization of the department is better adjusted to rising trends in sociology and to the job market (I think this is the case at Princeton). Attrition is low. If you follow the guidelines provided throughout your graduate studies, you will get a highly sought-after job at the end. Your adviser knows what is, and will be, expected by recruiting departments, so you will be guided appropriately. If something is lacking in the program, the department will remedy it as soon as possible, whether through hiring new faculty or providing new resources for grad students. They can do this in no time because they have lots of money.

Posted

Can I just say, not all of these things are true. To suggest that coming out of, say, a top 10 program will give someone a better grounding in theory than a top 30 program ignores the differences between programs. I am taking a class at UCLA right now, and one of their first-year students told me that they don't even have a graduate theory course.

Posted

I'm going to use the example of my program, which is not and will never be (I don't think) ranked in the 'top', if only because it's interdisciplinary. One of those disciplines is sociology. There is certainly truth to the idea that programs will train and prepare you differently, but the idea that programs can be stratified by some particular metric without considering a myriad of other factors just seems odd to me. I should also mention that, to my knowledge, no one has graduated with a degree in 'Global and Sociocultural Studies' just yet; most graduates have been in IR/Geography or Comparative Sociology, and the change to the new department (which removed geography from IR and put it with sociology and anthropology) also brought changes in the faculty who are hired.

 

1) Methods and theory training are not limited to the 'top' programs. We are required to take qualitative methods (ethnographic, interview-based, etc.), quantitative methods (big datasets, survey design and implementation, etc.), and an additional methods course of our choosing (GIS, ethnohistorical methods, a second level of quant, or a course from another department, as long as your advisor and the program director approve it). We also have three required theory courses: one in general social science theory, one in your discipline, and one in another discipline. The idea behind this methods and theory training is that, regardless of what you end up using, you have the tools necessary to explain why you are using it. I will never use quant methods in my research; they make absolutely no sense for it. However, I will know enough about quant methods to critique them, talk knowledgeably about their limitations for my research, and move forward from there. (We also just interviewed three quant methodologists, because the primary one who was here did not get tenure. All three came from top programs, and two had articles in SF and AJS. The one who came out on top of nearly everybody's list and was offered the position was the one who did not have an SF or AJS publication. He was also the only one who hadn't done a postdoc.)

 

2) I would love to see how you prove this. We have people who chose to come here over 'top' programs, and in at least one case a student who chose a 'top' program asked, after their first year, whether they would still be offered their original fellowship if they applied here again. So it doesn't seem like it would be easy to prove such a general statement. Sure, there are less competitive programs and more competitive programs, but competitiveness doesn't always correlate with rank. Even if it did, it would not necessarily mean that everyone else gets the leftovers.

 

3) We generally have funding opportunities as good as, if not better than, most universities', including $750/yr from the department, $500/yr from the college, and $300/yr from the graduate school to attend conferences, professional development events, or preliminary research travel. While we only offer four years of funding through TAships or RAships, the department has lately done an excellent job of helping students get external funding, including the DPDF, IDRF, and DDIG. A visiting student deciding between us and other 'top' programs said we were the only place she visited where students weren't really worried about funding. Even the few students who have chosen to come without funding (one of whom turned down a funded offer from UPenn) have been able to find funding here.

 

4) I don't even follow the logic here. It's not that difficult or rare to get published in top, field-specific journals as long as you are conducting original research. If you are at a research university you should be. The idea in our program is that everything you write for publication should be submitted to the highest journal you can reasonably submit to (meaning, don't submit to a higher-ranking journal just because it's higher ranking; submit based on what the research is). The problem, I think, is that people submit to the wrong journals too often and get rejected. Rather than considering that it may be a case of journal fit, they immediately drop a journal tier. I've seen this type of thing happen and I would wager that it happens often.

 

5) Again, I don't follow your logic here, to the point where I can't even respond. Are you suggesting that non-'top' programs are neither collaborative nor competitive, and that people there just float around in the ether?

 

6) In the past two years I have met, talked to, and maintained connections with a number of top scholars, including sociologists at 'top' programs. Our professors come from a number of the schools considered to be at the 'top' of their respective fields, and their networks do not simply go away once they leave. As a non-sociological example, my advisor and her husband (both geographers) just hosted Nancy Peluso, a major scholar in political ecology (my field), at their house while she did research in south Florida. I was also introduced throughout the year to a number of up-and-coming scholars in my field (i.e., people who will more than likely be on hiring committees) and was able to have lunch and really great conversations with them. I was introduced, by one of our professors, to Diane Rocheleau and Bruce Braun, both major political ecologists. Our graduate program director is actually leaving after 12 years here to take a position at Dartmouth. I'm sure that relationship is going to hurt me in the end.

 

7) Most departments at research universities know what research universities are looking for. There is no magic formula that some departments have figured out and others haven't. A stronger point would have been to connect this to #6, because a number of positions come down to favors. (Did I forget to mention that we have had people come from 'top' universities and talk to us about what actually goes on behind closed doors at search committee meetings? We also had someone who had sat on several NSF panels come talk about what the NSF is looking for and review all of the applications we were preparing in the department. Not everyone got awards, but everyone was recommended by at least one reviewer, which is a good sign for the next round.) Name matters, but only to a certain degree. The three candidates for the position I mentioned earlier were whittled down from a list of nearly 200, including a number from 'top' universities who didn't even get considered. Name alone does not get you a job, unless it is a job that you probably don't deserve, which will show quickly. You need a lot of other 'stuff' that plenty of departments are capable of giving you.

 

 

I don't mean this to be an advertisement for my department. Only about half of our students are actually interested in jobs as research academics; some prefer teaching, some prefer policy, etc. Seven of our students defended their dissertations this year; we'll see where the academically focused ones end up, I guess, but I'm not particularly worried about my place outside of the 'top' and I don't get the feeling that anyone here is. Ranking is probably the most common question asked by prospective students, and we still have more highly qualified applicants than we can accept. I'm sure there are several other departments out there like mine that can share similar stories. A number of the things you mentioned are absolutely important, but to suggest that they are limited to 'top' programs seems absolutely absurd.

Posted

jmu, your comments are extremely useful, thank you very much. Regarding point number 5, I paraphrased the idea that top programs have a different departmental culture. The argument goes like this: top departments that prefer collaboration, for instance, offer the closest relationship between adviser and grad students, which is not available elsewhere; whereas top departments that value competitiveness establish a cut-throat environment where grad students compete for more funding, more support, more visibility, etc. Just a reminder: I do not endorse any of these points, including this one, but I came across them over and over again during the admission process. Anyway, thank you for your answers; they really clarified many things for me. And sorry for my English.

Posted

I feel I should make a quick point here in case it wasn't clear. I think all of these things are absolutely important factors to consider, but considering them based on rank alone is just ridiculous. What's the point of having a great network if you don't get the mentorship needed to utilize it? Find out how the programs you are interested in stack up based on what people who are in those programs can tell you. Don't ask broad questions; ask specific ones. Are you encouraged to publish early or late? Why? Have you met any interesting people through the program? What kind of speakers come for colloquia? Who do people in the department work with?

 

If you want the truth, ask as late in the application cycle as possible, when all of the current grad students are stressed, angry, and need to vent. If, after seeing them at their most stressed, they still like the program and you think it's still a good program, then that is what matters. Make the connections you need to make while you are in school, and don't rely entirely on introductions from your advisor or professors. When you are at a conference, talk to people, most especially the people on your panel. This is almost certainly what your advisor did and what most successful academics have done. I actually reached out to a recent PhD about her dissertation because our interests overlap a lot; as it turns out, she knows one of the professors in our department, because that professor discussed one of her papers at a conference. We've had an ongoing conversation since. She is an Ivy League grad now doing a postdoc at one of the top anthro programs. I would not have that connection if I hadn't reached out to her first, even though we know the same people.

 

Grad school is, to some extent, your world. You have to fill it up with people and networks and not let someone else do it for you.

  • 2 weeks later...
Posted (edited)

4) I don't even follow the logic here. It's not that difficult or rare to get published in top, field-specific journals as long as you are conducting original research. If you are at a research university you should be. The idea in our program is that everything you write for publication should be submitted to the highest journal you can reasonably submit to...

 

@jmu: Huh. I think the acceptance rates at the top generalist journals (between 6 and 9% at ASR) and top specialty journals (between 5 and 7% at SPQ, for example*) disprove the claim that it's "not that difficult or rare to get published". It is rare by definition. Difficult is a bit more subjective, but I wouldn't underestimate how hard it is to conduct original research that is methodologically sound, theoretically important, and interesting to a wide enough audience to make it into a top journal. Not to mention that the process from submission to publication can take many months (or longer).

 

You are right to point out that many programs across the ranks strongly encourage publication. However, the OP's point is that faculty in higher ranked programs are more likely to have publications in the top journals, and more likely to review for top journals, than those in lower ranked programs. This follows directly from the status hierarchy of the profession. It takes publishing in a top journal to get a job (and tenure) at a top ranked program, so the distribution of top journal publications tends to match the rankings. So long as these faculty are committed enough to their students to help them publish, those students have a distinct advantage in getting an "inside look" at the top tier publishing circuit.

I'll refrain from nitpicking all of your points, but I will say something about selection bias. It is obvious that more competitive students, on average, select into the top ranked programs; the GRE/GPA distributions will show as much. Does this make them "better" students in some inherent way? No. But if GRE/GPA correlate with publication and job placement, then we can at least say there is some predictive strength to the self-selection story. Of course, one way or another, all programs are able to attract some students away from higher ranked programs. Sometimes it's funding, sometimes it's location, sometimes it's fit, etc. But (being sociologists) we're talking averages here. Compared to zero, there is a significant positive relationship between applicant competitiveness and program rank.
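A toy sketch of what a "significant positive relationship" would look like here, using an off-the-shelf correlation test. Every number below is invented purely for illustration; this is not real admissions data:

```python
# Toy sketch: applicant "competitiveness" (say, a GRE/GPA composite)
# vs. program prestige. All numbers are made up for illustration.
from scipy.stats import pearsonr

composite = [168, 165, 162, 160, 158, 155, 152, 150]
# Encode prestige so that higher = more prestigious (e.g., 101 minus a
# US News-style rank number), so "positive relationship" reads literally.
prestige = [101 - rank for rank in [3, 8, 15, 22, 35, 48, 60, 75]]

r, p = pearsonr(composite, prestige)
# A large positive r with a small p-value is what "significantly
# different from zero" means in this argument.
print(f"r = {r:.2f}, p = {p:.4f}")
```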

 

So my question to you, jmu, is this: if none of the advantages that avatarmomo lists accrue systematically to higher ranked programs, then why do higher ranked programs place more students in more and "better" jobs?

 

And @avatarmomo: I'd guess that the two big factors are your points 6 and 7. On point 7, higher ranked departments do seem more oriented toward placing their students at similarly ranked programs. Therefore, even if only one student in each cohort gets a job in the top 10, the other students fall somewhere into the top 20, top 50, etc. It's a shoot-for-the-moon, land-among-the-stars strategy. You're right to point to Princeton, where the program requirements are quite explicitly meant to accumulate publications fast. And on point 6, prestige always matters. Famous advisers go a long way on the job market (though they don't count for more than publications). However, prestige is a function of something else (even path dependency has to start somewhere), though exactly what we can't be certain. Prestigious PhDs accrue to the top ranked programs, but why? Higher salary, a better research-to-teaching ratio, more prestige, a better environment; it's hard to say.

 

*EDIT: Here's the source for journal acceptance rates: http://www.asanet.org/journals/editors_report_2011.cfm#SOE

Edited by RefurbedScientist
Posted

And this is a good topic for a thread, by the way. Well thought out, @avatarmomo. Thanks for posting.

 

We often discuss what makes a program highly ranked, but there is so much reflexivity there that it's hard to get at the underlying factors. This is more about which program characteristics will help you get a job, something we all care about to some extent.

Posted

@jmu: Huh. I think the acceptance rates at the top generalist journals (between 6 and 9% at ASR) and top specialty journals (between 5 and 7% at SPQ, for example*) disprove the claim that it's "not that difficult or rare to get published"...

 

Those acceptance rates are calculated by final decision only. R&R is technically a final decision and is probably one of the most common initial responses to a journal submission (along with accept pending major revisions). It can often take upwards of 2 years for something to get published in a journal like ASR because of the constant revisions needed. Calculating the acceptance rate without factoring those in is extremely limiting ("we only accepted 100 articles, but there are ~400 more that are likely to be accepted in the future" seems to be what they are implying there; not exactly difficult if you stick with it).

 

I'd like to see a source for this idea that faculty at lower ranked programs aren't publishing as much in top journals. If you're at an R1, you need to be publishing in top journals to get tenure, regardless of rank. My lowly unranked program just denied tenure to someone with a PhD from a top 10 sociology program because she wasn't publishing in top ranked journals.

 

GRE/GPA doesn't show student quality and most programs know this. Ask them and you will see.

 

You also managed to miss my overall point because you were so focused on your statistical argument. It's not the program ranking that matters but the networks and people you become enmeshed with during the process. This includes high ranked programs, of course, but it also includes low ranked programs that might be better suited to the individual, or programs like mine that can't really be ranked or are too new to have much data. Besides that, there is a flaw in your logic here. If, by your statement, going to a top ranked program trains someone to do research that gets published in top tier journals, and that person then gets a job at a low ranked university, where does that knowledge go? Do low and unranked programs have magical barriers that keep knowledge out? My committee has people from the University of Iowa, UC Berkeley, and two from Yale (including a sociologist with a PhD from Yale who was a full professor at Berkeley before coming here). Because they are now at an unranked program, do their networks disappear? What about their knowledge of academic publishing practices? Additionally, because of the program I'm in, I have more face time with all of them than many of my friends and colleagues at "better" programs have with their committees.

 

It's not that the things the OP mentioned aren't important; I think they are even more important than the OP lets on. It's also not that higher ranked programs don't help with these things. The point is that higher ranked programs are not alone in helping people get TT jobs at R1 schools. Rather than focusing so much on program rank, I think people ought to be focusing on the programs themselves. Who is there? What kind of networks do they have? How often do they publish, and where? These are things that aren't going to show up in rankings (which have notoriously low, and dropping, response rates anyway).

Posted (edited)

Those acceptance rates are calculated by final decision only. R&R is technically a final decision and is probably one of the most common initial responses to a journal submission (along with accept pending major revisions). It can often take upwards of 2 years for something to get published in a journal like ASR because of the constant revisions needed. Calculating the acceptance rate without factoring those in is extremely limiting ("we only accepted 100 articles, but there are ~400 more that are likely to be accepted in the future" seems to be what they are implying there; not exactly difficult if you stick with it).

 

From the ASA webpage I linked:

"Using the traditional ASA indicator for the acceptance rate (that is, the number of accepted manuscripts divided by the number of overall decisions), ASR’s acceptance rate for the year was 6 percent. (Using the method of calculating the acceptance rate proposed by England [in Footnotes, March 2009], in which acceptances are divided only by final decisions, the ASR acceptance rate was 9 percent.)"

 

The range I gave accounted for both ways of calculating acceptance rates. No matter how you slice it, any percentage below 50 means you have better odds of not getting published, and we're way below 50%. Obviously there are things you can do that make your work more publishable than the average submission, but the "not exactly difficult if you stick with it" argument doesn't have a leg to stand on. The vast majority of papers submitted to ASR (to take one highly selective journal) never make it into its pages.
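In case the two calculation methods are unclear, here's a minimal sketch of them side by side. The counts are hypothetical, chosen only to reproduce the rough 6% vs. 9% gap the ASA report describes:

```python
# Two ways of computing a journal acceptance rate, per the ASA quote
# above. All counts below are hypothetical.
accepted = 60
rejected = 600          # final decisions other than acceptances
revise_resubmit = 340   # R&Rs: decisions, but not *final* decisions

overall_decisions = accepted + rejected + revise_resubmit
final_decisions = accepted + rejected

# ASA's traditional indicator: acceptances over all decisions (~6%).
print(f"traditional: {accepted / overall_decisions:.0%}")
# England's proposed method: acceptances over final decisions only (~9%).
print(f"final-only:  {accepted / final_decisions:.0%}")
```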

 

On your other counterarguments, we risk reaching an impasse if neither of us goes out and gets the data, so let me take the first crack. First, there's plenty of research in economics showing that the top journals are dominated by faculty at top ranked departments. Economics is definitely more stratified than sociology, but take it as a strong case. Second, look at the recent hires in sociology this year (scroll down to this list). I can't take the time to randomly sample from this group, but going from top to bottom I looked up their CVs. The obvious trend is that students who went to higher ranked programs have more publications in better journals and get jobs at higher ranked programs.

1. PhD from top 25 program, hired at elite SLAC, has an AJS, a book, and two other pubs

2. PhD from ~90 program, couldn't find info on pubs

3. PhD from top 50, hired at ~80 public, 5 or so pubs in mid-tier journals

4. PhD from top 5 program, hired at top 25, lots of pubs in ASR, SF, etc. (not a fresh PhD)

5. PhD from top 25, hired at unranked public, 3 pubs in low tier journals

6a. PhD from top 5 program, hired at top 25 program, 7 pubs in top-mid tier journals

6b. PhD from top 5 program, hired at top 25 program, 5 pubs in mid tier journals and some books (an ethnographer)

7. PhD from top 50 program, hired at decent SLAC, 2 pubs in mid tier journals

 

I'll stop there, but the pattern stays the same. Now, we can quibble about how to categorize the Journal of Health and Family (mid tier? low tier?), but the trend would be the same. And finally, and I think this should settle it, let's look at the most recent issues of some top general and specialty journals. What are the author affiliations?

1. ASR April 2014: UC Berkeley, University of Memphis, UC Davis/Penn State, NYU/Penn, ESSEC Business School (France), UNC-CH, Stanford/Chicago

2. AJS January 2014: UCLA, Texas/Iowa, Michigan/Indiana, Columbia/Iowa

3. Social Forces March 2014: Cologne, Oregon State, Trinity College Dublin, Univ Toronto/Northwestern, Utrecht, Indiana, UMass Boston, Purdue, Penn State, Stanford/Princeton, Chinese University of Hong Kong, UW Madison, UC Riverside, UWashington/UConn, Arizona

4. Gender & Society June 2014: Marquette, USC, CSU Northridge/UCLA, Penn, Ohio State/Southwestern, CUNY

5. Soc of Ed April 2014: EUI (Italy), NYU, UW Madison/Iowa, NYU/Penn

 

So how do we interpret these? For one, it's obvious that not every paper is by a top 25 program. But the top programs are definitely overrepresented here (a "/" indicates multiple authors from different programs; many of the co-authorships are between advisers and former students who landed jobs at lower ranked schools, as is the norm). Also, as you move down a tier from, say, ASR to Social Forces, the field opens up a bit.

 

So in response to your "I'd like to see a source for this idea that faculty at lower ranked programs aren't publishing as much in top journals": I don't know of any systematic study in sociology to match those done in economics, but a glance at the data supports my (uncontroversial) claim that faculty at lower ranked programs appear less frequently in top journals. Of course, that's not news to anyone who regularly reads the top journals. The more interesting question is whether faculty publishing patterns afford any advantages to students in those programs.

 

As for GRE/GPA: you say "GRE/GPA doesn't show student quality and most programs know this. Ask them and you will see." You will recall that I said that students with high GRE/GPA aren't somehow inherently "better" students, just that they select into (and are admitted to) higher ranked programs, and that PhDs from higher ranked programs have more job market success (see the list of recent soc hires above). Thus the correlation between GRE/GPA and job market outcomes is greater than zero. And why ask professors what they think about GRE/GPA when I could see what they do (we are sociologists, after all; better to observe in situ than take a post-hoc account)? Look at the results board here on Grad Cafe (admittedly not a random sample) and you will note that higher GRE/GPA candidates get accepted at higher ranked programs (and usually choose those over lower ranked programs). To argue otherwise is to defy the data. I also strongly recommend this conversation on Orgtheory.net: http://orgtheory.wordpress.com/2011/05/09/an-inconvenient-truth-about-gre-scores/

 

As for the more general point: "You also managed to miss my overall point because you were so focused on your statistical argument. It's not the program ranking that matters but the networks and people you become enmeshed with during the process."

 

I apologize if I missed your overall point. I was responding to the specific arguments you made to the OP, which were factually incorrect. But it looks like I missed the wood for the trees. On your overall point that "networks matter", I wholeheartedly agree. But the idea that networks matter equally regardless of rank is of course ridiculous; the most high-value networks (for getting a job, anyway) accrue at the higher ranked programs. You show plenty of anecdotal evidence that high-value networks can exist outside of the top ranks. I can't dispute that, and it sounds like your program does a great job attracting top candidates and students. But a few negative cases do not disprove an overwhelming trend.

 

Now let me be quite clear: I don't think rank carries some inherent value. Especially as measured by USNEWS, it's a purely reflexive proxy for prestige, and there's all kinds of status bias there. But the problem is that the status bias in rankings has material consequences for the lives of job candidates. That's nothing to sneeze at. So it's irresponsible to advise people to "ignore rank, focus on all the other stuff" when going to a program with a low rank affects your material wellbeing later on. This is not to say "ignore all the other stuff, just go for the highest rank"; nobody promotes that, least of all me.

 

And then, following Bourdieu, we know that the symbolic capital of rank can buy other forms of capital: the networks, the talent, and the funding. So rather than X --> Y, where X is the meat-and-potatoes of a program and Y is the rank, we have Z --> Y --> X, where Z is some historical unknown that locked us into path dependency. And finally, none of what I am saying is new or surprising; this conversation has been had many times, e.g. http://asr.sagepub.com/content/69/2/239.short

 

Finally, a disclaimer: for the sake of argument, it is easiest to operationalize "job market outcomes" as a function of the rank of the hiring program. I don't believe these are the only careers people should aspire to or pursue; the more we diversify, the better the outcomes for all of us. And if a TT job at an R1 (never mind in the top 25 soc programs) is not your career of choice, then it's probably not necessary to go to a top ranked program. But because PhD granting programs are almost exclusively oriented toward placing PhDs in academic jobs, it makes the most sense to use that benchmark in this argument. Again, that doesn't mean I think everyone needs to go down that road.

 

Edit: spelling corrections

Edited by RefurbedScientist
Posted

After reading your post I still don't think you get the point I'm trying to make here. Perhaps that's my fault, but I don't see how I can make it any clearer.

 

Publications matter. Publications in top journals matter more. They aren't the only things that matter, though, and you seem, at points, to agree with me on that. So if we agree on these points, let's stop discussing them; it's wasting too much of our time and not really contributing anything.

 

Your argument, as I understand it, is that you are more likely to get publications at top universities (forget actual number rankings; the metrics are flawed). You also contend that those at top universities are the ones best able to train others in getting those publications and thus jobs.

 

My argument is that it's nothing inherent to being a top university but rather to being at an R1 in general. (Here I use R1 to mean Carnegie RUH and RUVH schools.) Yes, from a numbers standpoint, some programs are better than others at putting people into R1 positions. However, you have to consider that, while everyone who goes to Berkeley or Penn is going to be looking for a TT position, not everyone at an R1 is. This doesn't mean that people at other programs aren't getting jobs, just that fewer people from other programs are on the market. Suppose each program graduates 10 students a year (just an example): all 10 from one school go looking for a TT position but only 3 from another do; 5 from the first school get TT positions, the other 5 get VAPs and post-docs, and all 3 from the second school get TT positions. Are you going to say that you have a better chance at the first school? Or would you say it depends?
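To spell out the arithmetic of that hypothetical (the sketch below just restates the made-up numbers from the example; none of this is real placement data):

```python
# The hypothetical above, computed both ways: per graduate and per
# student actually seeking a TT job. Numbers come from the example, not
# from any real program.
schools = {
    "first school":  {"graduates": 10, "seeking_tt": 10, "tt_placed": 5},
    "second school": {"graduates": 10, "seeking_tt": 3,  "tt_placed": 3},
}

for name, s in schools.items():
    per_graduate = s["tt_placed"] / s["graduates"]   # 50% vs. 30%
    per_seeker = s["tt_placed"] / s["seeking_tt"]    # 50% vs. 100%
    print(f"{name}: {per_graduate:.0%} of all PhDs, "
          f"{per_seeker:.0%} of those seeking TT jobs")
```

The "better chance" question flips depending on which denominator you use, which is the whole point of the example.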

 

That's basically my argument here. Rather than focusing on metrics like ranking, actually do the work and talk to the programs and the people. Every program director I've talked to has been open about placement, about publishing, and about what they will do to prepare you for the job you want. Students in the program also know well what the prospects are, because they see what happens when people graduate.

 

By the way, the list you posted is self-selecting and far from complete. It's a pretty poor way to illustrate your argument.

Posted (edited)

I was responding to this point for the most part. 

4) I don't even follow the logic here. It's not that difficult or rare to get published in top, field-specific journals as long as you are conducting original research. If you are at a research university you should be. The idea in our program is that everything you write for publication should be submitted to the highest journal you can reasonably submit to...

 

Did I misunderstand it? You appear to be refuting the OP's hypothesis about the connection between program rank and publishing in top journals.  

 

Your argument, as I understand it, is that you are more likely to get publications at top universities (forget actual number rankings; the metrics are flawed). You also contend that those at top universities are the ones best able to train others in getting those publications and thus jobs.

 

The first half of that is right: there is an undeniable correlation between program rank and the frequency/quality of publications. The second half I left as an open question (I said: "The more interesting question is whether faculty publishing patterns afford any advantages to students in those programs."). In other words, I can't explain the correlation just from my anecdotal experience. But from the (abbreviated) list of recent hires, we can see that those at better ranked schools land the "better" jobs and have more publications. I don't know if this is a selection effect (those students would have done as well at a school ranked 50 as at a school ranked 5), a status bias (the school's reputation alone gives its students a leg up), a network effect, an organizational thing, or whether the actual training is better at a higher ranked school than at a lower one. You seem willing to rule out these possibilities without a shred of data beyond your own experience.

 

As for your critique of the data, I'm not sure what you mean. The soc-job-rumors list gets to be a pretty comprehensive list of academic hires, especially if you follow it through the whole hiring season. I don't think people select themselves onto the list; the forum members collect the data themselves (though I can't say for certain that those people aren't one and the same). All that said, of course I don't have access to data on the real population of academic hires. But some data is better than no data, as long as we know its limitations. As far as I'm concerned, the burden of proof falls on you to show otherwise.

 

OK, then, on your main point:

 

My argument is that it's nothing inherent to being a top university but rather to being at an R1 in general. (Here I use R1 to mean Carnegie RUH and RUVH schools.) Yes, from a numbers standpoint, some programs are better than others at putting people into R1 positions. However, you have to consider that, while everyone who goes to Berkeley or Penn is going to be looking for a TT position, not everyone at an R1 is. This doesn't mean that people at other programs aren't getting jobs, just that fewer people from other programs are on the market. Suppose each program graduates 10 students a year (just an example): all 10 from one school go looking for a TT position but only 3 from another do; 5 from the first school get TT positions, the other 5 get VAPs and post-docs, and all 3 from the second school get TT positions. Are you going to say that you have a better chance at the first school? Or would you say it depends?

 

On the first sentence: it's clear that publications in top journals decrease more or less linearly as we move down the ranks. Berkeley faculty and students appear more often in the top journals than CUNY students, and CUNY students more than FSU students (just to pick schools randomly from the distribution; no disrespect to any of them). So you're right that students and faculty at R1s (almost by definition) publish more than students and faculty at non-R1s, but even within that category the variation is systematically associated with rank.

Then I'm trying to understand the second half of what you say there. Do you mean that, because not all people at lower ranked programs aim for an academic job, we shouldn't count them toward the "job outcomes" calculation? For one, as I said earlier, I'm operationalizing job outcomes as TT academic jobs, because that's what PhD granting sociology programs are oriented toward and what the training is for (not because I believe it's the only worthwhile career choice). By that measure, folks who get the PhD but don't go on to TT academic jobs are not seeing a good job outcome.* Or are you suggesting a Simpson's paradox? In either case, your hypothetical question doesn't strike me as realistic: you created a hypothetical where 100% of the lower ranked PhDs who want a TT job get one, but only 50% of the higher ranked program's PhDs who want one do. If you ask me which looks better in the hypothetical you imagined, you're leading me to only one possible answer. But who can say whether that hypothetical fits reality? Show me the unranked or ~50 ranked program that places a higher percentage of its PhDs in TT jobs than Berkeley and Penn.

 

But right, on the whole we agree. Mentorship, relationships, training, etc. all make a student more likely to publish, and publishing brings the job offers. But given the incredibly strong correlation between program rank and publishing, we have to take rank seriously. What does it proxy? Better training, better mentorship, better networks? That's the OP's question. As a matter of indisputable fact, the top ranked programs place students better, on average. So what are the causal factors underlying the correlation?

 

And I couldn't agree more with you on "Rather than focusing on metrics like ranking, actually do the work and talk to the programs and the people. Every program director I've talked to has been open about placement, about publishing, and about what they will do to prepare you for the job you want." But if the job you want is a TT R1 job in a sociology department, it would be irresponsible to advise prospectives to ignore rank altogether. Great fit at FSU won't compensate for the je ne sais quoi of Princeton, unfortunately.

 

*And if you don't want a TT R1 job in a sociology department, then rank (and publishing) may matter much less. But then it's worth considering whether a PhD is the most useful way of achieving your goal. It may be, but it's unlikely.

Edited by RefurbedScientist
Posted

To be fair, jmu, you have access to top scholars in your field at an otherwise low prestige university because you work in a relatively low prestige field. I say that as someone who worked with a top scholar in a low prestige field at a low prestige university, so I'm not knocking the strategy or the importance of intellectual variation. But the strategy for getting ahead in Geography or Heterodox Economics is not the same as the one for mainstream sociology, to which the modal reader of this board aspires.

Posted

Also, I find the indictment that one or another analysis, or mode of analysis, is "limiting" to be really pretentious. The statement implies that the person saying it has access to truths beyond the bounds of the very mode of reason her opponent is thinking within. It's a fancy way to call someone stupid and, in that, not a particularly constructive argument. If one wants to invite people to see things their way, a better way to do that than indicting the entire mode of logic the other person is thinking within is to address their claims and show how they fail. Otherwise we just start lining up into philosophical camps, accusing one another of subscribing to the wrong ism, and essentially name-calling.
