Opinions on stats programs that don't require advanced statistical theory or measure-theoretic probability?



Posted

In recent years I've seen quite a few stats PhD programs popping up that don't have coursework requirements in advanced statistical theory (Lehmann & Casella...) or measure-theoretic probability (Durrett...), e.g. programs that focus on Bayesian/computational/high-dimensional statistics and statistical learning. What are thoughts on those programs? Pros/cons in terms of academia/industry?

Posted

What are the job placements like for the schools you mentioned? For industry, it probably makes no difference. For academia, these courses may be helpful in that they sharpen your proof skills, and you pick up certain techniques from them that you can use repeatedly in your research (splitting the expectation ftw). But if you read enough papers carefully, you can probably also pick up the "standard" proof techniques that way.

For academic hiring at research universities, it's most important that your *research* is prolific and at least some of it is cutting-edge (i.e. getting published in the top journals or top machine learning conferences), not the content or grades of your coursework.  

Anyway, my two cents: Lehmann and Casella is a very classical text, but a lot of the material in it may not be very relevant to most modern statistics research (for example, L&C gives a *very* rigorous treatment of UMP tests, admissible estimators/tests, etc., which aren't popular research topics now). It is nice that L&C has a lot of material on things like James-Stein estimation, which was one of the earliest shrinkage methods (before the lasso and all the sparse regression methods). But is it really necessary to know the risk/minimaxity properties of these kinds of estimators in great detail? I'm not sure.
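(For anyone who hasn't seen it, the James-Stein phenomenon is easy to demonstrate in simulation without any of the risk theory. A purely illustrative Python sketch, using the positive-part version of the estimator; nothing here is from L&C itself:)

```python
import numpy as np

# Positive-part James-Stein estimator: shrink X toward 0.
# For X ~ N(theta, I_p) with p >= 3, it dominates the MLE (X itself)
# in total squared-error risk.
def james_stein(x, sigma2=1.0):
    p = x.shape[0]
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(np.sum(x ** 2)))
    return shrink * x

rng = np.random.default_rng(0)
theta = np.full(10, 0.5)            # true mean vector, p = 10
errs_mle, errs_js = [], []
for _ in range(2000):
    x = rng.normal(theta, 1.0)      # one observation X ~ N(theta, I)
    errs_mle.append(np.sum((x - theta) ** 2))
    errs_js.append(np.sum((james_stein(x) - theta) ** 2))

print(np.mean(errs_mle))  # close to p = 10, the MLE's risk
print(np.mean(errs_js))   # noticeably smaller
```

The simulated JS risk comes out well below the MLE's, even though JS is biased in every coordinate, which is really the whole shrinkage story in miniature.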

As for probability theory, I definitely think it's good to be able to understand notation for the Lebesgue integral and know basic inequalities (e.g. the union bound), but if you're a statistician and not a probabilist, you may be able to get away with only the basics. I believe that at UC Berkeley, PhD students in Statistics do not even need to take measure-theoretic probability (they can instead take only the Applied Statistics and Theoretical Statistics sequences), and their PhD graduates seem to get along just fine.

Posted

I completely agree with doc's assessment. In fact, I can observe this trend at Harvard. The inference class this year is taught from a range of relatively modern topics instead of unnecessarily rigorous proofs of consistency and normality of the MLE/UMVUE/NP tests and the like. The measure-theoretic content has been downplayed a lot at Harvard as well, because it is "almost completely useless". That said, the classic asymptotic techniques are still very useful: when you write a research paper, you are expected to justify your bounding statements rigorously, with *no* exceptions, and the toolkit/intuition for doing that comes straight from the classics.

Posted

I observed this even in the first-year Masters-level Mathematical Statistics sequence at my PhD program. The first semester, based on chapters 1-5 of Casella & Berger, is more or less the same, but the second semester now deviates from Casella & Berger quite a bit. They used to spend a ton of time on things like the UMVUE, Neyman-Pearson, and Karlin-Rubin, but now they either skip those or abridge them considerably and instead focus on topics like the EM algorithm, lasso, and ridge regression. By now, things like the EM algorithm and the lasso are not that "new," but they're certainly not relatively archaic like the UMVUE or UMP tests, and they will probably be standard tools for a while.
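(To make it concrete why the lasso fits a first-year course so well: the whole algorithm is a few lines. A toy cyclic coordinate-descent lasso in Python; this is just an illustrative sketch, not any course's actual code:)

```python
import numpy as np

# Toy lasso via cyclic coordinate descent: minimize
#   (1/(2n)) * ||y - X b||^2 + lam * ||b||_1
# Each coordinate update is a soft-thresholding step.
def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j's contribution removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r_j, n * lam) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta_hat = lasso_cd(X, y, lam=0.1)
print(beta_hat)  # here the true zeros come out exactly zero; nonzeros are shrunk
```

The exact-zero coefficients are the selling point versus ridge, and the soft-threshold update is where you see the connection back to shrinkage estimators like James-Stein.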

I think it's a good thing. But then again, when I started to do research, I was basically learning everything on my own (though I could go to my advisor with questions). So I can't say that most of the classes were really directly useful for my research, but it didn't end up mattering in the end anyway.

Posted
19 hours ago, Stat PhD Now Postdoc said:

I observed this even in the first-year Masters-level Mathematical Statistics sequence at my PhD program. The first semester, based on chapters 1-5 of Casella & Berger, is more or less the same, but the second semester now deviates from Casella & Berger quite a bit. They used to spend a ton of time on things like the UMVUE, Neyman-Pearson, and Karlin-Rubin, but now they either skip those or abridge them considerably and instead focus on topics like the EM algorithm, lasso, and ridge regression. By now, things like the EM algorithm and the lasso are not that "new," but they're certainly not relatively archaic like the UMVUE or UMP tests, and they will probably be standard tools for a while.

Did your program also have a separate class on more advanced statistical computing? We go through Casella & Berger (chapters 1-5 in Semester 1 as well) but stay mostly on the theory side, though we do briefly cover the EM algorithm, lasso, etc. We have a year-long (required) sequence in measure-theoretic probability (Billingsley + Resnick, among others) that goes further into the theory. But we also have another required course that is essentially dedicated to computing methods such as EM, MCMC, optimization methods, bootstrapping, etc.
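(As a concrete example of the sort of thing that computing course covers, here's a toy nonparametric bootstrap in Python; the data and numbers are just for illustration:)

```python
import numpy as np

# Nonparametric bootstrap: resample the data with replacement to
# estimate the standard error and a percentile confidence interval
# for the sample median.
rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=200)   # stand-in data set

boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(2000)
])

se_hat = boot_medians.std(ddof=1)
ci = np.percentile(boot_medians, [2.5, 97.5])  # percentile interval
print(se_hat, ci)
```

No asymptotic formula for the median's standard error needed, which is exactly why these methods crowd out some of the classical theory in the first-year sequence.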

It certainly gives you different flavors of everything and keeps things pretty compartmentalized at the first-year level. Just curious to see how your program compares/compared!

Posted
9 minutes ago, BL250604 said:

Did your program also have a separate class on more advanced statistical computing? We go through Casella & Berger (chapters 1-5 in Semester 1 as well) but stay mostly on the theory side, though we do briefly cover the EM algorithm, lasso, etc. We have a year-long (required) sequence in measure-theoretic probability (Billingsley + Resnick, among others) that goes further into the theory. But we also have another required course that is essentially dedicated to computing methods such as EM, MCMC, optimization methods, bootstrapping, etc.

It certainly gives you different flavors of everything and keeps things pretty compartmentalized at the first-year level. Just curious to see how your program compares/compared!

To the best of my knowledge, my PhD program still requires a year-long sequence in measure-theoretic probability and a course on advanced inference based on Lehmann & Casella. It's just that the second semester of the Casella & Berger Mathematical Statistics class now deviates a bit from the textbook, since the instructor doesn't want to emphasize some of the material in the later chapters. So while things like the MLE, the asymptotic distribution of the MLE, and the likelihood ratio test (LRT) are still covered, other sections of the book are abridged or skipped entirely. There is a Statistical Computing class as well.

Posted
4 hours ago, Stat PhD Now Postdoc said:

To the best of my knowledge, my PhD program still requires a year-long sequence in measure-theoretic probability and a course on advanced inference based on Lehmann & Casella. It's just that the second semester of the Casella & Berger Mathematical Statistics class now deviates a bit from the textbook, since the instructor doesn't want to emphasize some of the material in the later chapters. So while things like the MLE, the asymptotic distribution of the MLE, and the likelihood ratio test (LRT) are still covered, other sections of the book are abridged or skipped entirely. There is a Statistical Computing class as well.

Got it! That's certainly a good way to do it. Was just interested to see how other programs handle it.
