IR/Comparative Stats Question


j.persephone


Hi everyone,
Let me start by saying, if this part of the forum is only supposed to be for admissions questions, I apologize.
 
I was hoping to get some insight into a question about statistics in political science. As an undergrad, I was taught that if you are using population data, even if it has a limited temporal scope (e.g. all countries between 1945-1995), there is no reason to interpret statistical significance. Even though the units are part of a theoretically larger population, and it would be useful to know the thing statistical significance is trying to get at, using methods that are actually based on the properties of random sampling does not really tell you anything particularly meaningful.
 
I'm now in an MA program where I have some professors who subscribe to this and others who adamantly do not. I have noticed plenty of articles do interpret statistical significance while using what is arguably population data or at least in no way a random sample.
 
I would love it if anyone would be willing to share any insight on:
1) The status of this debate in the field more recently. Most of the articles on this that I have found are 10+ years old and I'm wondering whether a sort of unspoken consensus has been formed, or whether I have just been looking for discussions of this in the wrong places.
 
2) What are you teaching/being taught about the appropriateness of interpreting statistical significance with population data in your program?
 
Sorry if I am using some terms imprecisely or not explaining this well. The language of instruction for my undergraduate courses was not English so I may not have translated my thoughts well.
 
Thanks everyone and have a great weekend!
B.
 

2 hours ago, j.persephone said:

if you are using population data, even if it is a limited temporal scope (e.g. all countries between 1945-1995), there is no reason to interpret statistical significance.

I don't understand. Could you please explain? Statistical significance of what? Interpret it how?


Your question makes very little sense. In political science, there is generally no such thing as population data. Sure, you could have data for a certain time period for all the countries/other units you are interested in (although that's pretty rare!), but that doesn't mean that statistical inference no longer matters. You could make a statement such as "for X population in Y period, variable A was associated with this or that change in variable B", but that is a purely empirical statement, and those are rarely interesting. If you want to make any kind of substantive argument, you will need to argue that this also applies in other periods, to other places, or that it is an inherent feature rather than a coincidence; all those things will require you to show (at the very least) statistical significance. 


Hello again,

Thank you for your replies.
 
To give the most basic example of what I am talking about, let's say you have data on all countries from 1945-1995 and you want to run some regression for which you (or Stata) can calculate p-values. Some articles will do this and then talk about their coefficients being statistically significant, implying that this matters for all the reasons statistical significance would matter (that is all I meant by "interpreting").
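To make this concrete, here is a minimal sketch of the kind of regression I mean, with entirely made-up country-year data (Python/statsmodels only because I can write it out here; imagine the equivalent Stata command):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical "population" data: ~150 countries observed 1945-1995
rng = np.random.default_rng(0)
n = 150 * 51
df = pd.DataFrame({
    "gdp_growth": rng.normal(2.0, 3.0, n),  # made-up predictor
    "conflict": rng.normal(0.0, 1.0, n),    # made-up outcome
})

model = smf.ols("conflict ~ gdp_growth", data=df).fit()
print(model.summary())   # the regression table articles report
print(model.pvalues)     # the p-values whose interpretation is in question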
 
The problem here is that some people think p-values in this scenario would be either irrelevant or dubiously applied. If this data is best thought of as population data, p-values would be irrelevant. 
 
When people decide to interpret it anyway, they will often appeal to the theoretically larger superpopulation argument which I mentioned in my original post. This goes basically as reasonablepie described it: there are always future/past or even hypothetical cases that make it reasonable to talk about the statistical significance of what you have found.
 
The problem with this is that our "sample" of this population, all countries from 1945-1995, is in no way a random or even a probability sample of this theoretical superpopulation, and measures like p-values are, to my knowledge, based on the properties of random sampling. For this reason, some would say interpreting p-values in this case would be dubious at best.
 
I have read in other forums that people use p-values in things like our example because the result is mathematically a reasonably good approximation of tests that we "should" be using because they are less problematic in their theoretical grounding. However, I have been unable to find a citation for this. I was hoping someone might have an answer and I was also legitimately curious what people were learning in other programs. 
 
If I am still not communicating this in a way that makes any sense, here is a link to the debate I am talking about:
 
And here is another helpful post:
 
Thank you all so much!

Yes, I think there is a consensus by now (at least at my school). You can of course use descriptive statistics to describe your "entire population", but they will have no explanatory power. I think the super-population argument is just a convenience: it is easier to imagine, but even if you do not believe in the super-population you would still want to test any ideas you have about the relationship between two or more variables against the "chance model", i.e. reshuffling the cards (this applies even to a single statistic like a mean or a proportion). Some bootstrapping techniques were developed for populations in particular. In any case, this is very well explained in the resources you link.
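As a rough sketch of what I mean by the chance model (my own toy example, with made-up numbers):

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.2 * x + rng.normal(size=200)   # some observed association

observed = abs(np.corrcoef(x, y)[0, 1])

# "Reshuffle the cards": break any real link between x and y and see how
# often chance alone produces an association at least as strong
reshuffled = np.array([
    abs(np.corrcoef(x, rng.permutation(y))[0, 1])
    for _ in range(5000)
])
print("observed |r| =", round(observed, 3))
print("share of reshuffles at least as strong:", (reshuffled >= observed).mean())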

As for your other question: in my program the super-population argument is mostly what is taught in introductory courses, even at the undergraduate level. Since significance testing is quite a mess, in more advanced courses we discuss the chance argument / bootstrapping more. The second argument also comes up more in stats courses, but then again those have people who are interested in many different problems.


You're more or less correct about the theoretical issue.  In practice, everyone always uses standard frequentist tests on data of this form. 

So, you have some data of the form y = x*beta + epsilon, where we'll assume all the standard stuff, including that epsilon is normally distributed.  You have some observations - either a population or a sample from a population.  In the standard presentation, you wish to know whether it is possible that the true value of beta is zero.  But what's the true value? In an easy way of interpreting frequentist statistics, the "true" value is the value in the population from which the sample is drawn; thus, if you have a population, no further calculation is required: the value in the population is the number you calculate.  If it isn't zero, then the p-value is literally zero.

But, let's think about this another way.  You're actually interested in a theoretical data generating process.  What you want to know is whether or not the theoretical parameter is zero.  In this case, the "true" value is something that could never be observed.  It's also not relevant whether we're talking about a population or a sample.  In either case, we have some observations from an underlying stochastic process.  It is an unobservable parameter that we are interested in.

In interpretation two, the question of interest from a frequentist perspective can be thought of as: "If the true value of beta is zero, then what is the probability of observing data with an estimated beta value at least as large as the one I have estimated, on the basis of random chance alone?"  This ceases to be about populations, and while the "hyper-population" idea helps frame it in terms of the first interpretation, it's not necessary.  You can, of course, derive exactly the same formula for calculating the standard errors under this interpretation.
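One way to see interpretation two concretely is to simulate the data generating process itself with the true beta set to zero and count how often chance alone produces an estimate at least as large as the one observed. A quick sketch with made-up numbers:

import numpy as np

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)

def ols_slope(x, y):
    # Slope from a simple bivariate OLS fit
    return np.polyfit(x, y, 1)[0]

# Pretend this is the estimate from the data we actually have
y_obs = 0.1 * x + rng.normal(size=n)
beta_hat = ols_slope(x, y_obs)

# Re-run the stochastic process many times with beta = 0
null_betas = np.array([ols_slope(x, rng.normal(size=n)) for _ in range(5000)])
p_value = (np.abs(null_betas) >= abs(beta_hat)).mean()
print("beta_hat =", round(beta_hat, 3), " simulated p-value =", round(p_value, 3))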

You bring up the issue of non-random sampling, which is somewhat relevant.  Thinking for simplicity purely in terms of OLS, we have an unbiased estimator if E(epsilon|x) = E(epsilon); that is, we want the xs and epsilon to be uncorrelated ("exogenous").  Random sampling is not random assignment, so it doesn't do anything magical for us here.  However, if the x values are exogenous in the theoretical process, then random sampling ensures that we won't introduce a correlation between x and epsilon through the selection process.
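A toy illustration of that last point (my own sketch, made-up numbers): selecting observations on x alone leaves the OLS slope essentially unbiased when x is exogenous, while selecting on the outcome y (which depends on epsilon) does not.

import numpy as np

rng = np.random.default_rng(3)
true_beta = 1.0

def slope_after_selection(select_on_y):
    x = rng.normal(size=2000)
    eps = rng.normal(size=2000)
    y = true_beta * x + eps
    # A deliberately non-random "sample": keep only part of the observations
    keep = (y > 0) if select_on_y else (x > 0)
    return np.polyfit(x[keep], y[keep], 1)[0]

betas_x = [slope_after_selection(select_on_y=False) for _ in range(500)]
betas_y = [slope_after_selection(select_on_y=True) for _ in range(500)]
print("selecting on x: mean slope =", round(float(np.mean(betas_x)), 3))   # close to 1
print("selecting on y: mean slope =", round(float(np.mean(betas_y)), 3))   # biased toward 0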

TL;DR: No need to think in terms of hyper-populations if instead you frame inference in terms of stochastic processes.

 

