
Recommended Posts

Posted
3 hours ago, changeisgood said:

So far from what I've seen, there are a lot of people doing a lot of heavy duty math in our field, but the ones that do this kind of work often struggle to attach any meaning to what they are doing.  Math is nice, math is pretty, but if you're not contributing something to improve behavior outcomes, institutional operation, etc. or whatever your particular flavor is, it's just mental gymnastics for the sake of fiddling around.  I can't tell you how many methods articles I've read that end with something like "we really can't say much about the implications of all this, except to say that we need to use this method more often".

I don't really mean the heavy-duty math, as you call it. It's more things like: using an estimator with 200 observations that is known to be inconsistent with fewer than 500, assuming no autocorrelation in pooled models, using fixed effects plus a lagged dependent variable without clustered standard errors, not considering selection effects, using control variables that are determined within the model, and so on. Then I usually look at the replication file and attempt to correct for those things, only to find that the results change and no longer support the paper's argument. I find that severely annoying, and I think more in-depth training could alleviate these issues.
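To make the clustering point concrete, here is a minimal sketch (mine, not from the thread) of the kind of re-estimation described above. The data and variable names (y, x, lag_y, country, year) are invented stand-ins for a replication file; the sketch compares conventional and country-clustered standard errors in a fixed-effects model with a lagged dependent variable.

```python
# Hypothetical sketch: re-estimating a panel model with clustered standard errors.
# The data and variable names are simulated, not from any actual replication file.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a small country-year panel with within-country serial correlation.
rng = np.random.default_rng(0)
rows = []
for country in range(30):
    u = 0.0
    for year in range(20):
        u = 0.8 * u + rng.normal()      # serially correlated error within a country
        x = rng.normal()
        rows.append({"country": country, "year": year, "x": x, "y": 0.3 * x + u})
df = pd.DataFrame(rows)
df["lag_y"] = df.groupby("country")["y"].shift(1)
df = df.dropna()

# Unit and year fixed effects plus a lagged dependent variable.
formula = "y ~ lag_y + x + C(country) + C(year)"

naive = smf.ols(formula, data=df).fit()                  # conventional SEs
clustered = smf.ols(formula, data=df).fit(               # SEs clustered by country
    cov_type="cluster", cov_kwds={"groups": df["country"]}
)

# The point estimates are identical; the clustered standard error is typically larger.
print("SE on x, conventional:", naive.bse["x"])
print("SE on x, clustered:   ", clustered.bse["x"])
```

Clustering only addresses the inference problem, of course; the Nickell-type bias from combining unit fixed effects with a lagged dependent variable in a short panel is a separate issue.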

Posted
9 hours ago, PaperTrowel said:

Rejected this morning via website by GWU. Program wasn't a great fit and I know I struggled with the shorter SOP but still disappointing as a first result.

I got rejected as well (also not a great fit). This is my first cycle of PhD applications and my first result, so I was quite unprepared and upset. I contacted my supervisor (I'm currently doing my master's) at around 11:30 p.m., after feeling down all day, to tell him the news, trying hard to sound alright in my email. And... the first sentence of his reply was:

Rejection by all means is a tragedy, feel free to feel sad!

I was like... what??? I didn't know one rejection qualified as a "tragedy"... but now I feel so much better and cheered up after accepting that it is... +_+

LOL

Posted
1 hour ago, correlatesoftheory said:

I received a very nice call from Ohio State's DGS to discuss my application and their program. Didn't extend an offer but said he'd be in contact with me again in the near future. Very exciting!

Great, hope you'll be contacted again with an offer.

Is your subfield IR? I'm guessing the DGS might be your POI.

 

Posted

Did anyone hear from UIUC? I can see two acceptances on the results page, but since there are so many trolls, I can't trust that information.

Posted
6 hours ago, AnUglyBoringNerd said:

I got rejected as well (also not a great fit). This is my first cycle of PhD applications and my first result, so I was quite unprepared and upset. I contacted my supervisor (I'm currently doing my master's) at around 11:30 p.m., after feeling down all day, to tell him the news, trying hard to sound alright in my email. And... the first sentence of his reply was:

Rejection by all means is a tragedy, feel free to feel sad!

I was like... what??? I didn't know one rejection qualified as a "tragedy"... but now I feel so much better and cheered up after accepting that it is... +_+

LOL

I wish my supervisor was like this. He was all like "this is part of the discipline. Get used to it."

Posted
10 hours ago, ngsam191 said:

Someone posted an acceptance from Northwestern. Troll again? Just wondering if it's the same person doing all this trolling.

Maybe I'm being a little over the top, but if I see any results that seem out of place or are not confirmed in some way in the forum, I'm going to assume they are trolls. The results page is making me bitter.

Posted
13 hours ago, advark said:

If there isn't another one within an hour, it's pretty safe to assume it's a troll. The Northwestern post is a definite troll.

 

The problem is when you have multiple postings and several users here have not heard from the school. 

Yeah, then one troll will just post multiple times.

The results page is useful for looking at past years, to see when notices came out and what kinds of GRE scores and GPAs were accepted. If a school has made a decision about your application, it will send you notice. If you applied and haven't gotten notice, probably no one else has either.

Posted

Yeah, I would be hesitant to read much into what goes on the results board right now. Check your applications every couple of days, but trust that you'll get an email notice at some point, especially since it's still so early in the cycle.

Posted (edited)

Hmm, Rice seems to be coming in now. I applied, but it was by far the weakest fit of any place on my list. I don't anticipate an offer from them.

Come on, Big 10 schools!

 

Edited by changeisgood
Posted

I know it's not over yet, but it's been a busy week already... who are people expecting to hear from next week? Any school that would not be a surprise?

Posted
2 minutes ago, RevTheory1126 said:

I know it's not over yet, but it's been a busy week already... who are people expecting to hear from next week? Any school that would not be a surprise?

I've heard vague UW-Madison whispers going around, so I wouldn't be shocked if their notices came towards the end of next week. 

Posted
5 minutes ago, RevTheory1126 said:

I know it's not over yet, but it's been a busy week already... who are people expecting to hear from next week? Any school that would not be a surprise?

I think Ohio State should be coming soon. AFAIK, the adcom met today (and may meet once more next week?).

Posted
43 minutes ago, RevTheory1126 said:

I know it's not over yet, but it's been a busy week already... who are people expecting to hear from next week? Any school that would not be a surprise?

Yeah, OSU is going to come in soon. 

Posted
13 hours ago, RevTheory1126 said:

I know it's not over yet, but it's been a busy week already... who are people expecting to hear from next week? Any school that would not be a surprise?

Princeton...

Posted

Well, I was told that they are halfway done with reviewing applications... so you never know.

Posted
52 minutes ago, dagnabbit said:

Probably Davis next week, and then the rest of the UCs the week after.

And real UT Austin results.

Posted

 

55 minutes ago, GradNYC said:

Well, I was told that they are halfway done with reviewing applications... so you never know.

Yeah, but that's the easy part. They'll be done in early to mid-February, just like every year.

Posted
On 1/19/2017 at 0:28 AM, Monody said:

I don't really mean the heavy-duty math, as you call it. It's more things like: using an estimator with 200 observations that is known to be inconsistent with fewer than 500, assuming no autocorrelation in pooled models, using fixed effects plus a lagged dependent variable without clustered standard errors, not considering selection effects, using control variables that are determined within the model, and so on. Then I usually look at the replication file and attempt to correct for those things, only to find that the results change and no longer support the paper's argument. I find that severely annoying, and I think more in-depth training could alleviate these issues.

So, I just finished writing an exam on probability theory, so perhaps I'm a little salty and burnt out from a hectic semester. I think there are two issues here. I've previously taken a three-course methods sequence and one month of ICPSR's summer program, so I came into my PhD with more than average knowledge of how statistical methods are applied in the field. I've still been clobbered by the math, and by the requirement that, for example, we know how to derive the variance of a standard bivariate normal distribution by hand... I think there is something to knowing the mechanics and math operating underneath the concepts. At the same time, knowing how to derive something cold certainly won't save you from poor model building. My previous training was much more oriented around applied causal analysis; meaning, for example, the week we learned about synthetic control methods and matching, we talked about hypothetical research questions, how these methods could resolve endogeneity issues, and how to use them right (and for the right reasons). This is where standards in the field have changed the most, even in the last 5-10 years. The focus now is not only on using statistics, but on using them well. Almost every methods course at the graduate level will require you to replicate a previous paper at some point to demonstrate the issues you mention (I was required to do it previously, and will be again in this program). Publications today don't ride on a simple replication; the focus is on both correcting poorly developed models and expanding on them. I think of the debate between Andrew Rose and Goldstein, Rivers, and Tomz over the impact of the WTO on trade flows as a good example of this in IR. It's hard to believe the field was once okay with shoddy models, but in essence there weren't a lot of people (in terms of reviewers) capable of policing how statistics were used. Now there are.
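For readers wondering what that hand derivation involves: since the marginals of a standard bivariate normal are themselves standard normal, the core of it is one integration by parts (a quick sketch, not part of the original post):

```latex
% Variance of a standard normal component, by hand.
% Density: \phi(x) = (2\pi)^{-1/2} e^{-x^{2}/2}, with E[X] = 0 and \phi'(x) = -x\,\phi(x).
\[
\operatorname{Var}(X) \;=\; \int_{-\infty}^{\infty} x^{2}\,\phi(x)\,dx
\;=\; \Big[ -x\,\phi(x) \Big]_{-\infty}^{\infty} \;+\; \int_{-\infty}^{\infty} \phi(x)\,dx
\;=\; 0 + 1 \;=\; 1.
\]
% (Integration by parts with u = x and dv = x\,\phi(x)\,dx, so v = -\phi(x).)
% For the bivariate case with correlation \rho, the covariance matrix is then
% \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.
```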

Of course, we're also coming to realize a lot of important issues with reliance on quantitative methods, and in some ways this comes from our field being a little behind others where statistics is the primary means of producing evidence. Consider Ai and Norton (2003) on problems with interaction terms in logit and probit models, or Montgomery, Nyhan, and Torres (2016) on conditioning on post-treatment variables. There's another paper out there with a fantastic look at how a handful of countries in a large-N panel completely drive results due to the use of fixed effects. Any program with thorough training in methods will have you see and talk about these things. They also need not come up in a methods class per se (I heard about all three in our IR seminar). Relying on substantive courses to highlight deficiencies in quantitative methods is also not a strong bet; we lucked out with a prof who is very concerned with these issues, but in other courses they were only ever raised as cursory problems, swept aside in favour of criticizing the underlying theories in papers. A good program will reinforce all of these issues, as will the diligent student. I should add this is no different for people pursuing process tracing and interview methods, or for people who are employed as faculty. I always thought it was weird that my MA advisor still went to methods workshops, but I see why now. There's always more to learn. It's part of what makes our field so dynamic, and this applies equally to survey, interview, and archival research methods, given the changing nature of technology and archival processes.
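To make the Ai and Norton point concrete, here is a small sketch (simulated data, my own illustration rather than anything from their paper): in a logit, the coefficient on an interaction term is not the interaction effect on the probability scale, which you can see by computing the double difference of predicted probabilities directly.

```python
# Hypothetical illustration of the Ai and Norton (2003) point: the logit
# coefficient on an interaction term is not the interaction effect on
# the probability scale.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
d1 = rng.integers(0, 2, n)
d2 = rng.integers(0, 2, n)
xb = -0.5 + 0.8 * d1 + 0.6 * d2 + 0.4 * d1 * d2            # true latent index
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-xb))).astype(int)
df = pd.DataFrame({"y": y, "d1": d1, "d2": d2})

fit = smf.logit("y ~ d1 * d2", data=df).fit(disp=0)

# Interaction effect on the probability scale: the double difference of
# predicted probabilities across the four cells.
grid = pd.DataFrame({"d1": [0, 0, 1, 1], "d2": [0, 1, 0, 1]})
p00, p01, p10, p11 = fit.predict(grid)
print("coefficient on d1:d2          :", fit.params["d1:d2"])
print("double difference of Pr(y = 1):", (p11 - p10) - (p01 - p00))
```

Ai and Norton also show that the interaction effect varies with the other covariates and can even change sign; with continuous regressors the analogue is the cross-partial derivative of the predicted probability.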

At the end of the day, what's most important is walking away from a program with a strong capacity to ask interesting, relevant questions, to develop logical and conceptually clear theories, and to test those theories as rigorously as possible with the combination of tools best suited to the issue at hand. This relies on more than a knowledge of math or statistics. It also requires a thorough understanding of what it's like to be in the places experiencing the phenomena we're interested in. Field work, or even interviews with people who have been involved, is really important. If there's one piece of advice I can lay out here, it's not to lose sight of the real people underlying what we seek to explain.

Posted
12 minutes ago, CarefreeWritingsontheWall said:

Can confirm they had their first two days of meetings last week to draw up a shortlist. 

Info I will never know: whether I even make it to the shortlist.

 

Thanks for the information!

Posted (edited)
1 hour ago, CarefreeWritingsontheWall said:

So, I just finished writing an exam on probability theory, so perhaps I'm a little salty and burnt out from a hectic semester. I think there are two issues here. I've previously taken a three-course methods sequence and one month of ICPSR's summer program, so I came into my PhD with more than average knowledge of how statistical methods are applied in the field. I've still been clobbered by the math, and by the requirement that, for example, we know how to derive the variance of a standard bivariate normal distribution by hand... I think there is something to knowing the mechanics and math operating underneath the concepts. At the same time, knowing how to derive something cold certainly won't save you from poor model building. My previous training was much more oriented around applied causal analysis; meaning, for example, the week we learned about synthetic control methods and matching, we talked about hypothetical research questions, how these methods could resolve endogeneity issues, and how to use them right (and for the right reasons). This is where standards in the field have changed the most, even in the last 5-10 years. The focus now is not only on using statistics, but on using them well. Almost every methods course at the graduate level will require you to replicate a previous paper at some point to demonstrate the issues you mention (I was required to do it previously, and will be again in this program). Publications today don't ride on a simple replication; the focus is on both correcting poorly developed models and expanding on them. I think of the debate between Andrew Rose and Goldstein, Rivers, and Tomz over the impact of the WTO on trade flows as a good example of this in IR. It's hard to believe the field was once okay with shoddy models, but in essence there weren't a lot of people (in terms of reviewers) capable of policing how statistics were used. Now there are.

Of course, we're also coming to realize a lot of important issues with reliance on quantitative methods, and in some ways this comes from our field being a little behind others where statistics is the primary means of producing evidence. Consider Ai and Norton (2003) on problems with interaction terms in logit and probit models, or Montgomery, Nyhan, and Torres (2016) on conditioning on post-treatment variables. There's another paper out there with a fantastic look at how a handful of countries in a large-N panel completely drive results due to the use of fixed effects. Any program with thorough training in methods will have you see and talk about these things. They also need not come up in a methods class per se (I heard about all three in our IR seminar). Relying on substantive courses to highlight deficiencies in quantitative methods is also not a strong bet; we lucked out with a prof who is very concerned with these issues, but in other courses they were only ever raised as cursory problems, swept aside in favour of criticizing the underlying theories in papers. A good program will reinforce all of these issues, as will the diligent student. I should add this is no different for people pursuing process tracing and interview methods, or for people who are employed as faculty. I always thought it was weird that my MA advisor still went to methods workshops, but I see why now. There's always more to learn. It's part of what makes our field so dynamic, and this applies equally to survey, interview, and archival research methods, given the changing nature of technology and archival processes.

At the end of the day, what's most important is walking away from a program with a strong capacity to ask interesting, relevant questions, to develop logical and conceptually clear theories, and to test those theories as rigorously as possible with the combination of tools best suited to the issue at hand. This relies on more than a knowledge of math or statistics. It also requires a thorough understanding of what it's like to be in the places experiencing the phenomena we're interested in. Field work, or even interviews with people who have been involved, is really important. If there's one piece of advice I can lay out here, it's not to lose sight of the real people underlying what we seek to explain.

Very enlightening and heartening response, thank you very much. Maybe just to add: I think the progress is field- and probably path-dependent. For example, I would argue that progress on this front in my area (intrastate conflict research) is far behind IPE, at least from what I've read in recent years.

But coming back to my original question: what did you learn in your undergraduate methods courses, and what would you say is the average methodological knowledge they expect an undergraduate to have? Also, which program are you in, if I may ask?

Edited by Monody
