
Engali

Members
  • Posts

    52
  • Joined

  • Last visited

Profile Information

  • Gender
    Not Telling
  • Application Season
    Already Attending
  • Program
    I/O Psych


Engali's Achievements

Caffeinated (3/10)

9 Reputation

  1. The total package does matter. I completely agree. I personally had a really subpar UGPA, but my professor advisor saw something in me and took a risk. My point is not to make it seem like a low GRE score is a death sentence or that you are incapable of being a good grad student.
  2. That's a fair point and you're entitled to your interpretation of my posts. I personally felt like no one was really giving the GREs specifically, or the massive body of research on cognitive tests, fair treatment whatsoever. What was particularly frustrating, as I have already mentioned, is that the arguments made were misinterpretations or outright mistakes in understanding of the data, results, or measurement concepts. What was staggeringly ironic was that the criticisms surrounded well-established research findings within the very domain people are trying to pursue an academic career in. There are smart, motivated people who have worked hard to develop and research the validity of these tests; it's unlikely they haven't considered the issues about cognitive ability testing that people in this thread raised. Researching those results would be a good first step in making a case against the validity of the tests in question. Understanding and accurately reporting those results when making arguments would be the next step. These are part and parcel of what being in a graduate program entails. In short, sitting in judgment of something without adequately researching it won't fly in a PhD program, and it certainly won't work for your own research. And that was happening throughout this thread. I just think no one likes being chastised or criticized; if you want to take me to task for being patronizing and abrasive, so be it. Yes, my responses to TXI about selection bias misinterpreted what she meant, assuming your interpretation is correct, and for that I apologize. When people cite selection bias as an argument against validity, it usually follows what I was talking about. However, her point is already addressed when the authors speak of "criterion contamination" and the unlikelihood of that occurring.
If that was indeed what TXI meant, I would assume she would have read the paper carefully enough to not make a point that was already addressed directly in the paper itself. See also the discussion section, which indicates that the low SDs for the operational validities suggest there are likely no moderators of the GRE score-performance relationships. Thus, it is unlikely that program quality moderates these relationships. This line of reasoning also ignores that although top programs may have more resources, they arguably have commensurately high grading standards that may make it a wash entirely. Regarding greater quality of instruction, that is unlikely in that top programs tend to be research-focused, where relatively less emphasis and fewer contingent rewards are placed on delivering quality instruction compared to publications. To your point, I disagree that TXI's points were solely about my tone. She told me to get off my high horse and resorted to ad hominems. My counterargument was that I didn't put myself on a high horse in that I wasn't claiming moral or intellectual superiority. In response, she simply reasserted I was on a high horse rather than pointing out where I was claiming moral or intellectual superiority in my original posts. Just because a matter is subjective (a judgment, a perception) doesn't mean you don't have to defend your point of view or that resorting to ad hominems is OK. For example, the stance that the subject GRE should be used rather than the general GRE is a completely subjective stance that actually has to be argued for. Many of the arguments made when doing research, in developing a theory or an interpretation of results, are rather subjective and need to be argued for; data and results simply lend support to your arguments.
Finally, as far as whether this is adaptive or not, I have considered it, and the bulk of research would suggest that these types of behaviors (e.g., questioning the validity of feedback) to defend one's sense of self-esteem are actually maladaptive in the long run. Because you don't take the feedback as valid, you don't change your strategies or behaviors to perform more effectively. Consequently, you don't learn how to perform better because you reject the very feedback that would indicate trouble spots. Alternative strategies such as self-compassion appear to be more adaptive.
  3. I think that cognitive ability can only predict so much, even when talking about an endeavor like grad school where performance is so highly related to cognitive ability. In composite, it does a really good job. And I'm not certain it actually is given as much weight as people think. Some schools use a simple cutoff score and others use it in combination with other pieces of information. It might be used as a tiebreaker between two promising candidates who are otherwise very closely matched. There are few things with as little logistical constraint that can serve as such predictive proxy measures of general mental ability and acquired skills and knowledge. I'm not sure that test bias actually puts construct validity into question. Again, these terms have very specific meanings. In any case, I looked up some research on possible reasons for differences in performance between subgroups. The report by Stricker (2008) found no evidence of stereotype threat in practice, that is, in real-world settings. They do admit that it would be premature to reject the possibility that stereotype threat occurs in real-world settings. However, the phenomenon has only really emerged in tightly controlled social psych lab experiments, which have their own critiques regarding their external validity (e.g., Adam Grant's CARMA talk on quasi-experiments). Within that same report they mention that measurement invariance is rarely investigated, but Rock et al. (1982) found that the then-current version of the GRE was measurement invariant across gender groups and the race/ethnicity subgroups that were investigated (black vs. white). So the items appear to be perceived the same across subgroups and tap into the same underlying construct. When looking at different "cognitively loaded" tests, certain consistent findings emerge (Sackett et al., 2001).
First, they predict performance across a broad range of domains, particularly when the performance itself is highly cognitively related. Second, certain differences in scores between subgroup means emerge. Differences in subgroup mean scores do not necessarily indicate cultural or predictive bias; they could be tapping into true differences between groups. In the same article, they point out that an extensive body of research has shown that test bias is rarely indicated. In fact, when it has occurred, the bias has consistently overpredicted performance for minority group members. I would agree that faculty members want people who attain their degree. Having said that, they also want people to perform well while in the program. So both are important, and since these scores are such robust indicators of performance within the program, it still makes sense to include them. It's unclear in your comparison how well the graduate students in other countries do during their program compared to the US students. Sure, there are higher levels of degree attainment, but that doesn't speak to the quality of their performance. Stricker, L. J. (2008). The Challenge of Stereotype Threat for the Testing Community.
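The overprediction point above rests on what psychometricians call the Cleary regression model: regress the criterion on the score, a group indicator, and their interaction, and check whether the groups need different intercepts or slopes. Here is a minimal sketch with simulated data; all numbers are illustrative assumptions, not values from Sackett et al. or Rock et al.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated, illustrative data: the two groups differ in mean test score,
# but the score-performance relationship is identical (an unbiased test).
group = rng.integers(0, 2, n)               # 0/1 group indicator
score = rng.normal(0, 1, n) - 0.3 * group   # lower mean for group 1, same validity
perf = 0.5 * score + rng.normal(0, 1, n)    # one common regression line

# Cleary model: performance ~ intercept + score + group + score*group
X = np.column_stack([np.ones(n), score, group, score * group])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)

# beta[2] (intercept difference) and beta[3] (slope difference) near zero
# mean the test predicts equivalently for both groups despite the mean gap.
print(beta)
```

With real data you would test beta[2] and beta[3] for significance; a subgroup mean difference alone never enters the bias verdict.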
  4. You're looking at the sample-size-corrected correlations. When reading a meta-analysis, the "true" correlations, called operational validities here, are what should be examined. They are bolded in the table and are likely corrected for unreliability in the predictor and criterion. That you didn't find the sample-size-corrected correlations impressive (again, you should be looking at the operational validities, which are much larger) doesn't really take into account the effect sizes that can be expected in social science research. Cohen (1992) and this article itself (p. 168) consider most of these corrected correlations to be moderately large. The weighted composite of the general GRE would be considered large. So just because a correlation seems weak doesn't make it so when considered within the context of social science research. Also, you missed that GGPA measures GPA in graduate school beyond the first year, usually as the final GPA, and the various GRE component scores were all moderately large in predicting this. The authors discuss that there are certainly a lot of noncognitive and situational variables that may explain or moderate the relationships between the GRE and other criteria such as degree attainment and research productivity. Again, nothing is perfect. No one said anything predicts anything else perfectly. The point is whether it predicts validly. And your qualms about the "weak" correlations and variance explained are already addressed within the article itself on p. 176, starting with "The argument that one should reject a predictor because the variance accounted for is only 1%, 2%, 5% or 10% is shortsighted, not to mention potentially wrong (Ozer, 1985)." The test bias/fairness argument is really a separate one altogether from validity, which has a very specific and clear meaning within the social sciences.
Btw, the article does show that the GRE is a valid predictor for non-traditional students (i.e., those over 30), and the GRE-Q is actually a better predictor of GGPA for non-native speakers than for the entire sample. Retention is really not a good metric for performance within a grad program because people are, by definition, no longer in the program when they leave. There are so many factors that predict this outcome: flagging interest in the topic, practical concerns about making money after starting a family, etc. I don't have any hypotheses about the difference in scoring on these tests. As I have said, this is a thorny issue and something that people who create general mental ability tests grapple with as well. There are very consistent differences between race groups on these GMA tests, for example. Some have said that the items are biased in that they assume knowledge that only some racial groups have experienced. Yet when subject matter experts from the racial groups' associated "culture" (a whole different can of worms) are tasked with writing or approving items that are not biased, the performance gap actually gets worse. I personally think a small part of this worsening effect is a result of stereotype threat, but I'm not entirely sure. In any case, there are people constantly trying to make these tests fairer and less biased, but there are no easy answers in how to go about doing this or what could explain the performance gap that emerges between groups. I find some comfort in knowing that academia at large, and psychologists in particular, tend to be very sympathetic to minorities and fight for fairness on their behalf with their research. People are trying, and nothing is perfect.
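For readers unfamiliar with "operational validity": the meta-analytic values are observed correlations corrected for statistical artifacts such as unreliability in the predictor and criterion. The classic Spearman disattenuation step looks like this; the numbers below are hypothetical illustrations, not figures from Kuncel et al.

```python
import math

def disattenuate(r_xy, r_xx, r_yy):
    """Spearman correction: estimate the construct-level correlation
    from an observed correlation and the two reliabilities."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical numbers: observed r = .25, predictor reliability .90,
# criterion (e.g., GGPA) reliability .60.
r_true = disattenuate(0.25, 0.90, 0.60)
print(round(r_true, 2))  # 0.34
```

The same observed correlation looks considerably stronger once the noise in both measures is accounted for, which is why the bolded operational validities exceed the raw ones.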
  5. I do not see the -.08 correlation. What is the criterion? And what do you mean it gets funky after first-year GGPA? Your simple solution is actually not that simple in practice. You want every university with a graduate school to agree to no longer accept the general GRE and instead have each department accept subject GREs that have more to do with acquired content knowledge. As Kuncel and colleagues suggest offhand, this would do a disservice to everyone who majored in one thing in undergrad but realized they wanted to pursue a different field in their graduate career. The point of the general GRE is to predict acquisition of knowledge and skills in your graduate career. I haven't looked at the data on the lower means for women and minorities (please refer to a reputable source), and it's definitely a thorny issue. The key word in your argument is "truly." The GRE "truly" predicts grad school performance in the sense that, as a weighted composite, it significantly predicts performance as a fact. That's not to say it "truly" predicts grad school performance if you use "truly" to mean "to the fullest degree." The problem from there is that your inference is incorrect: people who tend to score lower are not "lost causes." Women and minorities may have lower group means, but the distributions of individual scores indicate that many women and minorities still score highly. Furthermore, the GRE is not a perfect predictor, so committees still do overlook suboptimal GRE scores when everything else is stellar.
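The distinction between group means and individual scores is easy to quantify. Assuming normal score distributions and a hypothetical gap of d = 0.5 standard deviations (an illustrative value, not a figure from any study cited here), a large share of the lower-mean group still scores above the higher group's average:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

d = 0.5  # hypothetical standardized mean difference between groups
# Proportion of the lower-mean group scoring above the higher group's mean:
above = 1.0 - normal_cdf(d)
print(round(above, 3))  # 0.309
```

So even with a half-SD mean gap, roughly three in ten members of the lower-mean group outscore the average member of the higher-mean group, which is why a mean difference says little about any individual applicant.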
  6. Think I am where? So your argument amounts to a re-assertion of your point? I made a claim that there was a lack of intellectual honesty when it came to examining the *validity* of the GRE in predicting grad school success. The point wasn't who was more or less biased or smart, but the willingness to look up research and look at the data/results. FFS, the Psych Bull article was free on Google Scholar and was published over a decade ago in one of the best journals within psychology. The constant nitpicking about how it doesn't predict all indices of performance perfectly ignores the larger context: nothing is a near-perfect predictor, and nothing thus far predicts grad school performance better than a weighted composite that includes the GRE. Just look at Table 9 and see how much more predictive UGPA is beyond the GRE. It's like when you questioned the "selection bias" of the article without thinking through the implications of what that would mean. You wanted to prove your point, so you threw something out there that worked against your argument because you didn't think through what "selection bias" would actually do to the data. This is what I am talking about regarding intellectual honesty: thinking through what the data and methods are actually telling us instead of what we want them to tell us.
  7. Just because I disagree with you doesn't mean I am on a high horse. I never claimed intellectual/moral/etc. superiority. I simply pointed out that the evidence is there to look at and is more objective than the rash of overly emotional opinions on the matter. The issue was, and is as evidenced in your post, that you can't have a simple discussion about this topic without resorting to petty ad hominems and emotional appeals. Your selection bias argument actually works against your point, because the range restriction that results from only observing people who scored highly on the GREs lowers the correlation relative to what we would see with the full range of scores. Second, I did not act as though I am immune to bias; it's inherent in being human. The point is to recognize that, which I didn't see in the knee-jerk reactions to the GREs and, frankly, the pure rationalizations about it being an invalid predictor of grad school success. The larger point is that the research is already out there, but most of you were more interested in tearing the test down instead of looking for objective evidence of the test's validity. That is the problem.
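The range-restriction point is easy to verify by simulation: if applicants are admitted only when they score highly, the correlation computed among admits understates the population correlation. A sketch under assumed values (population validity of roughly .45, top 20% admitted; both numbers are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Population: test score and performance criterion, correlated ~.45.
score = rng.normal(0, 1, n)
perf = 0.5 * score + rng.normal(0, 1, n)
full_r = np.corrcoef(score, perf)[0, 1]

# Direct range restriction: only the top 20% of scorers are "admitted",
# so the criterion correlation is computed on a truncated score range.
admitted = score > np.quantile(score, 0.8)
restricted_r = np.corrcoef(score[admitted], perf[admitted])[0, 1]

print(round(full_r, 2), round(restricted_r, 2))
```

A modest correlation observed among enrolled students is exactly what a valid selection test produces, which is the opposite of evidence against validity.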
  8. I would highly recommend retaking it. I did a 3-month prep schedule: 2-3 hours a day, 4-5 on weekends. You're very fortunate to get into the Wake Forest MA program. I've heard it's very research-focused and should prepare you very well for a PhD.
  9. I know people are passionate about the topic, but as future academics we really need to have at least some amount of intellectual honesty when it comes to any topic under discussion. This is particularly salient when we're talking about an area within the province of psychology (i.e., psychometrics and psychological testing). The bottom line is that nothing is perfect as a predictor of any outcome. That's just a fact of life. What we have is good evidence that the GRE in general, including the GRE-Q, is a good predictor of grad school "success" across a wide range of operationalizations. The one criterion that has been nitpicked to death from this article (http://internal.psychology.illinois.edu/~nkuncel/gre%20meta.pdf) is time to completion. It can be easily argued that how quickly one moves through a program is a poor index of "success" as long as the length isn't excessive. For example, my professor advisor suggests doing six years instead of five in my PhD program because it gives you a year to dedicate to research that you wouldn't have as a junior faculty member having to mentor students and teach classes. Is the opportunity cost in first-year wages worth the potential long-term gain of landing a better job at a better school because of a stronger CV? Well, it's working out really well for my professor advisor, to put it mildly. The larger issue is that the world isn't perfect. The very arguments used in this thread to question the *validity* of the GRE (which, btw, isn't actually put into question because the data already show otherwise) conflict with each other. Yes, the subject GRE is a slightly better predictor than the general GRE on the one hand, but on the other hand others have mentioned that paying for even the general GRE is burdensome. The requirement set forth by the *graduate schools* of universities demands the general GRE; in this regard, the psychology departments' hands are tied in that they must at least require these test scores.
How can you then suggest using subject GREs when it would further disenfranchise those who are already struggling to pay for the general GRE in the first place? The problem is that we live in a limited world with limited resources and many moving parts. This is life and you deal with it. The overarching concern I have with all that I have read in this thread is the contempt for the GRE and the dismissal of its validity because you don't *like* what it stands for in your minds. This constant refrain of "well, grad schools need something to winnow down the list" and "it's just a way to narrow the list down." The validity of the GRE is an *empirical question* and isn't subject to your whims or values. This is how science needs to be approached in general. Can it be better? Sure. Is it perfect? See above. Does it predict grad school success? Arguably better than most any other standardized way of comparing candidates and predicting success. As aspiring scientists, I would expect you all to approach these types of empirical questions with some restraint in how they affect you personally. What's particularly ironic is that you all want to be part of a field that spawned these types of tests in the first place. There is good reason the GRE is used as a way to select candidates: it predicts performance. Are there other things that also predict performance? Sure. Personality measures have incremental validity in predicting performance. But if your personality profile worked to your detriment in getting accepted somewhere, would you then question the validity of those tests? If any test or predictor worked to your detriment, would you then question its validity? Again, intellectual honesty is key to scientific progress, which may not always work in your favor or to your benefit. You might want to check how willing you are to put your personal concerns aside if you want to pursue psychology using the scientific method.
  10. You're very welcome. If you feel in your heart of hearts that it isn't a good fit, I would definitely consider the reasons why you might feel that way. From my experience thus far, concerns about the surrounding area, nightlife, weather, proximity to family, etc. are really not going to make much of a difference. A match between your research interests and your advisor's main research area, personality fit, and fit with the ethos of the program culture are critical. Feel free to post here or message me in private for advice. I was on here a LOT during my application cycles and got much-needed insight and support. Feels good to be able to give back.
  11. It's becoming more and more common as the field is gaining recognition. This has traditionally not been the case. However, there are just so many candidates now that schools are interviewing people to break "ties" when people seem so strong on paper.
  12. I just wanted to pop in and give a word of advice. Having been through 3 application cycles (1 master's, 2 PhD), I think I am somewhat qualified to impart some words of wisdom. I would carefully consider paying for a PhD program. Your passion can blind you to economic realities. You should not take on more debt than what you can reasonably expect your first year's gross salary to be after getting a job. Remember that the data for average I/O salary are heavily skewed by outliers (you should have researched this) and do not separate academic salaries from industry salaries. I went in thinking I wanted to become a consultant. An internship and my current advisor have made me lean heavily toward academia as a career path. That means I will likely make less money than had I gone the consulting route. Priorities and interests change. I did a master's and took on debt to do so. It was the right choice for me because I needed to prove that I could do graduate-level work given my mediocre undergrad GPA. If there are areas in your application that are more easily addressed, I would strongly recommend taking the year to do that instead of doing a master's because you think it will give you a leg up. The bigger lesson here is not to be impulsive in your decision-making simply because you want to get in. This extends to which offer to take if you are fortunate enough to be extended offers you like. The cycle after I completed my master's, I applied to PhD programs and got one sure offer and another really good potential offer. I found out through graduate students at the visitation weekend that the latter offer would not be good for me, for reasons I won't mention. So I essentially declined all offers (i.e., intentionally shot myself in the foot at the bad-fit place) and did another cycle. I am now at a much more prestigious program with an advisor whom I deeply admire, one who is shaping me to be a better person in addition to being a productive researcher.
Had I gone the "easy" route, I would be infinitely worse off. Don't settle or sell yourself short. Do whatever it takes to get into the very best program for you and your needs now while also seriously considering what your long game is. Good luck and take care.
  13. a. Were the articles you wrote for ed psych first-author? Were they published? If so, in what journals? How is that related to topics within I/O? Btw, you obviously don't have to tell us, but be prepared to talk succinctly and tactfully about why you got Ds and Fs in undergrad. b. I would consider retaking the GREs to bring up your Math GRE score. TBH you're quite competitive, but top programs seem to have an average GRE score of ~1400 for accepted students. c. I would start to zero in on your research interests and actually research them. I would also start making a list of master's programs and looking at the CVs of faculty at these programs. You want a research and personality fit with your professor-advisor.
  14. The whole GRE score thing... it really varies by department. I would say that most use it as a first hurdle to weed out apps, but some use it as part of the application as a whole to get a holistic picture of who the candidate is and their strengths/weaknesses. I will say that if you want to be competitive for the top programs, you will want to raise your Quant score to at least the 75th percentile. Verbal seems to be less heavily weighted, but I wouldn't look past it, either. As far as interests, being focused is good, but I think being narrow is very limiting. If your interests are really narrow, then you will have issues finding people who are researching what you want. I would start to think about how your research interests intersect with the broader topic areas that make up I and O and start from there. You will probably do research that looks at your specific interest in the context of the larger topic, or at how your interest relates to a construct or constructs in a related area. I do think it is okay to mention just one professor, but from experience I will tell you I was glad I didn't, and it actually worked out worlds better for me in every conceivable way. Sometimes you think you know what you want to learn, and then you realize you didn't even know how much you were interested in something else. Read this: http://www.siop.org/tip/oct11/07campion.aspx Your interests will very likely change during your grad school career. What's important is finding an advisor whose interests are generally similar enough. Beyond that, I think their publication history/trend (should be strong and trending upward) and personality fit with yours are vital to your long-term success in grad school. I cannot emphasize this enough.
  15. Your app will be very strong. I don't know the conversion for the GRE scores, though, since I took them before they switched to this format. Do you know what percentile scores you got? If you can get a publication out there, that would obviously make you much more competitive, but it certainly isn't necessary. I think your background makes you a strong candidate because you should be able to write intelligently about what your interests in I/O are in your SoP. I would seek out the people who are doing research in the area you want to work in and start emailing them about your interest in their line of research.
