MathStat Posted September 30, 2020

I just finished the notorious first year of coursework plus preliminary exams at the UChicago Statistics PhD program. Happy to report that I am still alive, and dare I say, excited to move on to research, despite the new set of struggles and uncertainties that will come hand in hand.

I do face the following issue now, which seems to be pretty common within US Statistics PhD programs. I recall, for instance, this very heated discussion from a few weeks ago, which resonated with me a lot: https://forum.thegradcafe.com/topic/125581-school-suggestions/?tab=comments#comment-1058776870. Similarly to that post, I also felt that the first-year coursework focuses on traditional statistics topics (in my case: linear regression, GLMs, an overview of Bayesian methods, classical math stat such as complete and sufficient statistics, UMVUE, minimaxity, admissibility, and James-Stein estimators, plus some modern math stat such as the EM and variational Bayes algorithms, regularization methods, and hypothesis testing with multiple comparisons), yet it misses some core courses needed for those who want to do modern ML research (guess I joined the dark side, too, despite starting off as a probabilist with interests in mathematical statistics). The particular courses I have in mind are optimization and statistical learning theory (and who knows what else I'm missing). I am now trying to address this by taking a very strong learning theory course, but I do not have time to wait for the optimization course, which will be offered later.

So as silly as this sounds, my question is: how does one efficiently self-study new material relevant to their research, especially while balancing other courses, research, TAing, etc.?
I feel that in order to gain the most thorough and solid preparation, one should take the past course materials and do all the grueling long homework sets, readings, and so on, but then again, there are those other time constraints I just mentioned. I'd love to hear some advice from more experienced posters on how they pick up the skills needed for their research as they go. Thanks a lot!
Bayequentist Posted September 30, 2020

I have also just passed my PhD qualifying exam, and I think we share similar research interests. In my first year, besides the required statistics classes, I also fit extra CS/EE classes into my curriculum: Information Theory, Convex Optimization, and Machine Learning. As a consequence I was dead tired every day, but I hope it will all be worth it? Learning from first principles is really awesome, but I definitely feel it's not necessarily the optimal way to prepare for research. So I am also very interested to hear about more efficient ways to prepare for research!
MathStat (Author) Posted September 30, 2020

Congrats @Bayequentist! I really wish I had taken the extra EE/CS classes in my first year (or self-studied them when they overlapped with the core statistics classes...), but I did not manage to, haha.
Stat Assistant Professor Posted September 30, 2020

I often find that the best way to learn a new field/subject is to watch video lectures, read review articles, and read select chapters from textbooks. So when I wanted to learn about variational inference, the first thing I did was watch a few video tutorials by David Blei and Tamara Broderick. After establishing this "baseline," I kind of just pick things up as I go -- i.e. I read the papers and try to figure out what the authors are doing. This gets easier as you gain more experience and read more papers (in the beginning, I might annotate the papers a lot more). Realistically, when you are doing research, you won't know (or need to know) *everything* there is to know about, say, convex or nonconvex optimization. But you can pick up what you need as you go, and when you encounter something you're not familiar with, you get better at knowing WHERE to look to fill in those gaps.
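[Editor's note: for readers who want a concrete anchor before watching those variational inference tutorials, here is the standard identity the field is built on (a textbook formulation, not taken from this thread): for data $x$, latent variables $z$, and a candidate approximation $q(z)$,

```latex
% Evidence decomposition underlying variational inference (standard form):
\log p(x)
  \;=\;
  \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right]}_{\text{ELBO}(q)}
  \;+\;
  \mathrm{KL}\!\left(q(z) \,\|\, p(z \mid x)\right).
% Since KL >= 0, maximizing the ELBO over a tractable family of q's
% simultaneously lower-bounds the evidence log p(x) and drives q(z)
% toward the true posterior p(z | x).
```

Maximizing the evidence lower bound (ELBO) over a tractable family of distributions $q$ is the core move in the Blei and Broderick tutorials mentioned above.]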
DanielWarlock Posted October 3, 2020

Mastering a technique is, for me, very hard. In fact, I often find that taking even a very solid class does not truly allow me to master a technique -- in the sense that I can independently solve a problem using it. To give an example, I first learned Gaussian interpolation in a class, in the context of Slepian's lemma. Then I read Vershynin's book and learned it again, this time covering not only Slepian's lemma but also extensions such as Gordon's inequality. I even derived Gordon's inequality using interpolation as an exercise from the book. Yet when I saw it again in the context of spin glasses (Guerra's work on the existence of the free energy and its upper bound), I stumbled like a total novice. I tried to prove those two theorems on my own without looking at the proofs; again it proved quite a challenge, and I just couldn't do it. So I studied interpolation a fourth and a fifth time. Later the monograph posed an exercise using interpolation -- again, it took me hours to finally solve it on my own. You can imagine that applying interpolation to a research problem in a nontrivial way would be much more challenging still. So I still have a long way to go before I can call myself a master of interpolation.

In a sense, taking a class is as quick and efficient as it gets, but it also feels less "nutritious," a bit like junk food. Many classes (at my institution, at least) feel like a guided tour around an amusement park: you see "prototypical arguments" for a lot of material in its simplest form, but you never get the feeling that you are "hitting it hard enough" by working out all the different variants.
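[Editor's note: for readers who haven't met it, here is the comparison inequality DanielWarlock keeps returning to, stated roughly as in Vershynin's book (a standard formulation, not quoted from the thread). The interpolation proof he mentions studies $X_t = \sqrt{t}\,X + \sqrt{1-t}\,Y$ and differentiates $\mathbb{E}\, f(X_t)$ in $t$:

```latex
% Slepian's comparison inequality (standard form):
% Let X, Y be centered Gaussian vectors in R^n with
%   E[X_i^2] = E[Y_i^2] for all i, and E[X_i X_j] <= E[Y_i Y_j] for i != j.
% Then for every t,
\mathbb{P}\Big(\bigcap_{i \le n} \{Y_i \le t\}\Big)
  \;\ge\;
\mathbb{P}\Big(\bigcap_{i \le n} \{X_i \le t\}\Big),
\qquad \text{and consequently} \qquad
\mathbb{E}\max_{i \le n} X_i \;\ge\; \mathbb{E}\max_{i \le n} Y_i.
```

Gordon's inequality, mentioned above, extends this comparison to min-max quantities over doubly indexed Gaussian processes.]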