Low quant GRE: successes and failures


dragonage

Recommended Posts

I, for one, remain underwhelmed with that -.08 correlation!!!

For such a high-stakes exam, I think it is reasonable for one to expect stronger correlations. It looks like things get a little funky after 1st year GGPA, too.

The financial burden argument makes no sense... There's a simple solution: universities can scrap the general GREs and use the subject GREs. Such a policy change would save students $35.

Consider also that women and minorities have lower means on all subscales of the GREs. If the GRE truly predicts graduate success, then wouldn't it be a lost cause to let women and minorities into graduate school? How can one justify having nonwhites and women in higher education?

I do not see the -.08 correlation. What is the criterion? And what do you mean it gets funky after 1st year GGPA?

 

Your simple solution is actually not that simple in practice. You want every university with a graduate school to agree to no longer accept the general GRE and instead have each department accept subject GREs that have more to do with acquired content knowledge. As Kuncel and colleagues suggest offhand, this would do a disservice to everyone who majored in one thing in undergrad, but realized they wanted to pursue a different field in their graduate career. The point of the general GRE is to predict acquisition of knowledge and skills in your graduate career.

 

I haven't looked at the data on the lower means for women and minorities (please refer to a reputable source), and it's definitely a thorny issue. The key word in your argument is "truly." The GRE "truly" predicts grad school performance in the sense that, as a weighted composite, it significantly predicts performance as a matter of fact. That's not to say it "truly" predicts grad school performance if you take "truly" to mean "to the fullest degree." The problem from there is that your inference is incorrect: people who tend to score lower are not "lost causes." Women and minorities may have lower group means, but the distributions of individual scores show that many women and minorities still score highly. Furthermore, the GRE is not a perfect predictor, so committees do still overlook suboptimal GRE scores when everything else is stellar.



Referring to the table on pg. 169... these correlations aren't that impressive at all. They only used first year GPA and the correlations produced by long term measures (for example, degree attainment and research productivity) are even weaker.

 

Also, students who majored in something else are typically required to take the psych subject GRE when applying to non-clinical programs, and most programs require all applicants to submit the subject GRE if they are applying to a clinical program... so I am skeptical that requiring the subject GRE would create more barriers for students from other majors, since they are required to take it anyway. Replacing the general GRE with another assessment for admissions isn't that drastic a change-- ETS started aggressively promoting the GRE as a replacement for the GMAT in business school admissions when it became profitable for ETS to do so (the GMAT broke away from ETS and became an independent entity, AKA competition).

 

Here is my reputable source on group means by gender and race, btw: http://www.ets.org/s/gre/pdf/snapshot_test_taker_data.pdf

Similar gaps exist with the SAT.

It also points out some concerning results in terms of age, too.

Using the GRE or SAT as a predictor of success becomes problematic when performance on these assessments directly contradicts performance in the classroom (see pretty much every piece of research ever done on academic success/failure and retention by gender).

 

What hypothesis would you use to explain these gaps? I, for one, would like to see differential prediction/measurement invariance studies done with the previously mentioned subgroups...

Edited by TheMercySeat

Also worth mentioning...

 

Attrition is still high in US doctoral programs: http://voices.washingtonpost.com/college-inc/2010/04/nearly_half_of_doctorates_neve.html

 

Many programs in the UK or EU do not use the GRE in selection processes, yet they boast a much better completion rate: http://www.timeshighereducation.co.uk/news/phd-completion-rates-2013/2006040.article

 

Page 11 gives a nice visual of completion rates by domestic v. international programs: http://www.phdcompletion.org/resources/cgsnsf2008_sowell.pdf

 

Granted, there are several differences between international and domestic programs, but it is hard to explain how international programs maintain superior completion rates without the benefit of a speeded algebra test to determine whether a student is capable of becoming a psychologist, writing a publishable scientific paper, or thinking about how to operationalize a variable of interest.

Edited by TheMercySeat


You're looking at the sample-size-corrected correlations. In a meta-analysis, the "true" correlations, called operational validities here, are what you should look at. They are bolded in the table and are likely corrected for unreliability in the predictor and criterion.

 

That you didn't find them (the sample-size-corrected correlations--again, you should be looking at the operational validities, which are much larger) impressive doesn't really take into account the effect sizes that can be expected in social science research. Cohen (1992) and this article itself (pg. 168) consider most of these corrected correlations to be moderately large. The weighted composite of the general GRE would be considered large. So just because a correlation seems weak doesn't make it so when considered within the context of social science research.
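For readers unfamiliar with how these corrections work, the standard correction for attenuation (Spearman's formula) is easy to sketch. The numbers below are hypothetical, chosen purely for illustration, not values from the article:

```python
import math

def disattenuate(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation: estimated true-score
    correlation given an observed correlation (r_xy) and the
    reliabilities of the predictor (r_xx) and criterion (r_yy)."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical illustration: an observed validity of .21 with
# predictor reliability .90 and criterion reliability .70
print(round(disattenuate(0.21, 0.90, 0.70), 2))  # -> 0.26
```

The point is simply that measurement error in both the test and the criterion (e.g., GGPA) pulls observed correlations toward zero, which is one reason the bolded operational validities exceed the raw ones.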

 

Also, you missed that GGPA measures GPA in graduate school beyond the 1st year, usually as the final GPA, and the various GRE component scores were all moderately large in predicting this.

 

The authors acknowledge that there are certainly a lot of noncognitive and situational variables that may explain or moderate the relationships between the GRE and other criteria such as degree attainment and research productivity. Again, nothing is perfect. No one said anything predicts anything else perfectly. The point is whether it predicts validly. And your qualms about the "weak" correlations and variance explained are already addressed within the article itself on pg. 176, starting with "The argument that one should reject a predictor because the variance accounted for is only 1%, 2%, 5% or 10% is shortsighted, not to mention potentially wrong (Ozer, 1985)."
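One standard way to see why a small variance-accounted-for figure can still matter practically is the Binomial Effect Size Display (Rosenthal & Rubin). A minimal sketch, using an illustrative r of .30 (not a value from the article):

```python
def besd(r):
    """Binomial Effect Size Display (Rosenthal & Rubin): re-expresses a
    correlation as the difference in 'success' rates between groups
    above vs. below the median on the predictor."""
    return 0.5 - r / 2, 0.5 + r / 2

# An r of .30 'explains only 9% of the variance', yet corresponds to
# success rates of roughly 35% vs. 65% -- hardly a trivial difference.
low, high = besd(0.30)
print(round(low, 2), round(high, 2))  # -> 0.35 0.65
```

The BESD has its own critics, but it illustrates the Ozer-style point: squaring a correlation is not the only, or necessarily the best, way to judge its practical value.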

 

The test bias/fairness argument is really a separate one altogether from validity, which has a very specific and clear meaning within the social sciences. Btw, the article does show that the GRE is a valid predictor for non-traditional students (i.e., those over 30), and the GRE-Q is actually a better predictor of GGPA for non-native speakers than for the entire sample.

 

Retention is really not a good metric for performance within a grad program, because people are by definition no longer in the program when they leave. There are so many factors that predict this outcome, such as flagging interest in the topic, practical concerns about making money after starting a family, etc.

 

I don't have any hypotheses about the differences in scoring on these tests. As I have said, this is a thorny issue and something that people who create general mental ability tests grapple with as well. There are very consistent differences between racial groups on these GMA tests, for example. Some have said that the items are biased in that they assume knowledge that not all racial groups have been exposed to. Yet when subject matter experts from the relevant racial groups' "culture" (a whole different can of worms) are tasked with writing/approving items that are not biased, the performance gap actually gets worse. I personally think a small part of this worsening effect is a result of stereotype threat, but I'm not entirely sure. In any case, there are people constantly trying to make these tests fairer and less biased, but there are no easy answers for how to go about doing this or what could explain the performance gap that emerges between groups. I find some comfort in knowing that academia at large, and psychologists in particular, tend to be very sympathetic to minorities and fight for fairness on their behalf with their research. People are trying, and nothing is perfect.



I concede-- the operational validities are stronger, yet still negative on some measures for time to completion. Maybe rejecting a predictor isn't the answer... but for the degree of faith (and billions of dollars) invested in the testing industry, I think it is reasonable to expect stronger correlations.

 

Test bias undermines construct validity, though. 

 

Also note that when I reference retention, it is in the context of persistence to degree completion. PhD programs want to invest in students who persist to degree completion, as opposed to students who drop out, yes?

Which brings me to another point... what is it about the selection process in European schools that results in higher rates of degree completion? How can they pull it off without the GRE?


Think I am where?

 

So your argument amounts to a re-assertion of your point? I made a claim that there was a lack of intellectual honesty when it came to examining the *validity* of the GRE in predicting grad school success. The point wasn't who was more or less biased or smart, but the willingness to look up the research and look at the data/results. FFS, the Psych Bull article was free on Google Scholar and published over a decade ago in one of the best journals within psychology.

 

The constant nitpicking about how it doesn't predict all indices of performance perfectly ignores the larger context: nothing is a near-perfect predictor, and nothing thus far predicts grad school performance better than a weighted composite of predictors. Just look at Table 9 and see how much more predictive UGPA is beyond the GRE.

 

It's like when you questioned the "selection bias" of the article without thinking through the implications of what that would mean. You wanted to prove your point, so you threw something out there that worked against your argument because you didn't think through what "selection bias" would actually do to the data. This is what I am talking about regarding intellectual honesty: thinking through what the data and methods are actually telling us instead of what we want them to tell us.

 

 

I'm not particularly interested in questions about the GRE's validity or lack thereof - no dog in this fight, so to speak - but I have been following the discussion a little here and there. I must say, when I read your original post earlier (before anyone had responded to you), I found the tone of your remarks to be both abrasive and patronizing. Having not posted in this thread previously, I didn't take it personally in any way, and I actually agreed with many of your points. Nonetheless, the tone was off-putting, albeit in a borderline comical way (well, well, someone put their admonishing-lecture pants on today!).

 

I try to err on the side of being charitable when interpreting the intent/tone of internet comments, but your responses to TXInstrument11 have annoyed me. First of all, you seized on TXI's mention of "selection bias" without actually responding to the larger point TXI was making, namely, that a correlation between GRE scores and future academic success could be explained by the fact that high scorers are more likely to be accepted into top programs. Attending top programs may lead to greater success for reasons having to do with quality of instruction, greater resources for students, etc. If this is the case, the relationship between GRE scores and later outcomes is spurious. Of course, this is an empirical question. I don't know if there's any research that tries to tease this apart or not, and I don't actually care. My point is that you were quick to dismiss TXI's point without, apparently, giving it much thought beyond noticing that the term "selection bias" was mentioned. Whatever happened to engaging with an empirical question based on careful thinking rather than just looking for ways to bolster your own point of view? You know, engaging on a deeper level, intellectual honesty, and whatnot?
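The spuriousness story sketched above is easy to illustrate with a toy simulation: give GRE scores no direct effect on success, let them influence only admission to a "top program," and let the program alone drive success. The observed GRE-success correlation still comes out clearly positive. This is a hypothetical sketch of the mechanism, not a claim about the actual data:

```python
import random

random.seed(0)

# Toy model: GRE has NO direct effect on later success. It only
# nudges admission into a 'top program', and the program alone
# boosts success. Yet GRE and success still correlate.
n = 20000
gre, success = [], []
for _ in range(n):
    g = random.gauss(0, 1)
    top = (g + random.gauss(0, 1)) > 0.5            # admission depends partly on GRE
    s = (1.0 if top else 0.0) + random.gauss(0, 1)  # success driven by program only
    gre.append(g)
    success.append(s)

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

print(corr(gre, success))  # clearly positive despite no direct GRE effect
```

Whether real graduate outcomes behave like this toy model is exactly the empirical question: it shows the mechanism is possible, not that it operates.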

 

Also, with respect to TXI's comments on how he/she perceived your tone, your reply doesn't make much sense. For one thing, it seems the point of TXI's criticism of you personally was meant to be just that -- a judgment about how you chose to word your original post. I don't see where he/she tried to undermine your claims by attacking you personally, so "ad hominem" is a strange response. The "sanctimonious ass" bit was intended as criticism of your tone, not the argument you made. If Mary turns to Sue and says, "My God you're stupid" and Sue shouts, "ad hominem!" in response, well...

 
Your second reply to TXI was similarly odd: "So your argument amounts to re-assertion of your point?" This is a puzzling response given that there's no argument involved. There's an impression/perception/judgment/etc. involved. TXI read your comment(s) as sanctimonious and condescending. This is a subjective matter - thus, there's no argument to be made. Suppose Sue says to Mary, "It's clear from your tone that you're just kidding." Is her response to Mary's comment an argument, replete with underlying premises by which she reasons to the conclusion that Mary was kidding? Or is the statement functioning as a description of how Sue interpreted what Mary said and what conclusion she has drawn based on that impression? If Mary responds, "So, you claim that it sounded like I was kidding, eh? Prove it!" Well...
 
Finally, have you considered that, for people who are at the applying stage of grad school and score poorly on the GRE, dismissing the predictive validity of the test may be adaptive? 


I think that cognitive ability can only predict so much, even for an endeavor like grad school where performance is so highly related to cognitive ability. In composite, it does a really good job. And I'm not certain it actually is given as much weight as people think. Some schools use a simple cutoff score and others use it in combination with other pieces of information. It might be used as a tie breaker between two promising candidates who are otherwise very closely matched. There are few things that, with so little logistical constraint, can serve as proxy measures of general mental ability and acquired skills and knowledge while being as predictive.

 

I'm not sure that test bias actually puts construct validity into question. Again, these terms have very specific meanings. In any case, I looked up some research on possible reasons for differences in performance between subgroups. The report by Stricker (2008) found no evidence of stereotype threat in practice, that is, in real-world settings. They do admit that it would be premature to reject the possibility that stereotype threat occurs in real-world settings. However, the phenomenon has only really emerged in tightly controlled social psych lab experiments, which have their own critiques regarding their external validity (e.g., Adam Grant's CARMA talk on quasi-experiments).

 

Within that same report they mention that measurement invariance is rarely investigated, but Rock et al. (1982) found that the then-current version of the GRE was measurement invariant across gender groups and the race/ethnicity subgroups that were investigated (black vs. white). So the items appear to function the same way across subgroups and tap into the same underlying construct.

 

When looking at different "cognitively loaded" tests, certain consistent findings emerge (Sackett et al., 2001). First, they predict performance across a broad range of domains, particularly when the performance itself is highly cognitive. Second, certain differences in sub-group mean scores emerge. Differences in sub-group means do not necessarily indicate cultural or predictive bias; the tests could be tapping into true differences between groups. In the same article, they point out that an extensive body of research has shown test bias is rarely indicated. In fact, when it has occurred, the bias has consistently over-predicted performance for minority group members.
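What "over-prediction" means in a Cleary-style differential prediction check can be sketched with simulated data. Here a hypothetical intercept difference of -0.2 is built in for the minority group (all numbers are illustrative assumptions, not estimates from any study), so the pooled regression line predicts higher criterion scores for that group than it actually attains:

```python
import random

random.seed(1)

def simulate(n, mean_x, intercept, slope=0.5):
    """Generate (x, y) for one group: y = intercept + slope*x + noise."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(mean_x, 1)
        xs.append(x)
        ys.append(intercept + slope * x + random.gauss(0, 1))
    return xs, ys

xa, ya = simulate(5000, 0.0, 0.0)      # majority group
xb, yb = simulate(5000, -0.5, -0.2)    # minority group: lower mean x, lower intercept

# One pooled least-squares line y = a + b*x over both groups
x, y = xa + xb, ya + yb
mx, my = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# Mean residual for the minority group: negative means the pooled line
# OVER-predicts (predicted scores exceed the scores actually attained)
mean_resid_b = sum(yi - (a + b * xi) for xi, yi in zip(xb, yb)) / len(xb)
print(mean_resid_b < 0)  # -> True
```

In the simulation the over-prediction is built in by construction; the Sackett et al. finding is that, in real data, when prediction errors differ by group at all, they tend to run in this direction.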

 

I would agree that faculty members want people who attain their degrees. Having said that, they also want people to perform well while in the program. So both are important, and since these scores are such robust indicators of performance within the program, it still makes sense to include them. It's also unclear in your comparison how well graduate students in other countries do during their programs compared to US students. Sure, there are higher levels of degree attainment, but that doesn't speak to the quality of their performance.

 

 

Stricker, L. J. (2008). The challenge of stereotype threat for the testing community.


 


That's a fair point, and you're entitled to your interpretation of my posts. I personally felt like no one was really giving the GREs specifically, or the massive body of research on cognitive tests, fair treatment whatsoever. What was particularly frustrating, as I have already mentioned, is that the arguments made rested on misinterpretations or outright mistakes in understanding the data, results, or measurement concepts. What was staggeringly ironic was that the criticisms surrounded well-established research findings within the very domain people are trying to pursue an academic career in.

 

There are smart, motivated people who have worked hard to develop and research the validity of these tests; it's unlikely they haven't considered the issues about cognitive ability testing that people in this thread raised. Researching those results would be a good first step in making a case against the validity of the tests in question. Understanding and accurately reporting those results in making arguments would be the next step. These are part and parcel of what being in a graduate program entails. In short, sitting in judgment of something without adequately researching it won't fly in a PhD program, and it certainly won't work for your own research. And that was happening throughout this thread. I just think no one likes being chastised or criticized; if you want to take me to task for being patronizing and abrasive, so be it.

 

Yes, my responses to TXI about selection bias misinterpreted what she meant, assuming your interpretation is correct, and for that I apologize. When people raise selection bias as an argument against validity, it usually follows what I was talking about. However, her point is already addressed when the authors speak of "criterion contamination" and the unlikelihood of it occurring. If that was indeed what TXI meant, I would assume she would have read the paper carefully enough to not make a point that was already addressed directly in the paper itself. See also in the discussion section that the low SDs for the operational validities indicate there are likely no moderators of the GRE score-performance relationships. Thus, it is unlikely that program quality moderates these relationships. This line of reasoning also ignores that although top programs may have more resources, top programs arguably have commensurately high grading standards that may make it a wash entirely. Regarding greater quality of instruction, that is unlikely in that top programs tend to be research-focused, where relatively less emphasis and fewer contingent rewards are placed on delivering quality instruction compared to publications.
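Related to why meta-analyses report corrected operational validities in the first place: when validity is computed only among admitted students, direct range restriction on the predictor shrinks the observed correlation. A toy simulation with illustrative values (a true correlation of .50 and a +0.5 SD admission cutoff, both assumptions, not figures from the article):

```python
import random

random.seed(2)

# Population of applicants: predictor x and criterion y, true r = .50
n = 20000
x_all = [random.gauss(0, 1) for _ in range(n)]
y_all = [0.5 * xi + random.gauss(0, 0.75 ** 0.5) for xi in x_all]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# 'Admit' only applicants scoring above +0.5 SD on the predictor
admitted = [(xi, yi) for xi, yi in zip(x_all, y_all) if xi > 0.5]
x_adm = [p[0] for p in admitted]
y_adm = [p[1] for p in admitted]

print(round(corr(x_all, y_all), 2))  # near .50 in the full applicant pool
print(round(corr(x_adm, y_adm), 2))  # substantially smaller among the admitted
```

This is why validity studies run only on enrolled students tend to understate how well the test separates applicants in the full pool.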

 

To your point, I disagree that TXI's points were solely regarding my tone. She told me to get off my high horse and resorted to ad hominems. My counterargument was that I didn't put myself on a high horse in that I wasn't claiming moral or intellectual superiority. In response, she simply reasserted that I was on a high horse rather than pointing out where I claimed moral or intellectual superiority in my original posts. Just because a matter is subjective--a judgment, a perception--doesn't mean you don't have to defend your point of view, or that resorting to ad hominems is ok. For example, the stance that the subject GRE should be used rather than the general GRE is a completely subjective stance that actually has to be argued for. Many of the arguments made when doing research, in developing a theory or an interpretation of results, are rather subjective and need to be argued for; data and results simply lend support to your arguments.

 

Finally, as to whether this is adaptive or not, I have considered it, and the bulk of research suggests that these types of behaviors (e.g., questioning the validity of feedback) in defense of one's self-esteem are actually maladaptive in the long run. Because you don't take the feedback as valid, you don't change your strategies or behaviors to perform more effectively. Consequently, you never learn how to perform better, because you reject the very feedback that would flag your trouble spots. Alternative strategies such as self-compassion appear to be more adaptive.


...did I just see a measurement invariance study cited from 1982?

Really?

:)

TBH I don't have the time/energy for this, so I'm not going to go beyond this: http://www.aps.org/publications/apsnews/199607/gender.cfm

Look at the section on how speededness and the multiple-choice format favor males. There's a whole body of literature out there on how those factors contribute to the gender gap that you can google and cite to wow anonymous people on the Internet with.

Edited by TheMercySeat

I would just like to reiterate, for all those who, like me, have sucky GRE scores, that getting accepted to PhD programs is possible despite your scores. I was accepted into 3 programs this season (after a shutout last year). I applied to 6 programs. My quant score was 144. Two programs questioned me directly about my scores, but all three that accepted me stated that I was a strong candidate and were impressed by my background and research experience. Additionally, I must pat myself on the back for a great SoP. That said, don't lose hope. Strengthen the areas you can and highlight your assets. Yes, we are all aware that some schools use the GRE to weed out candidates and reduce the application pool, but I am proof that the "total package" matters as well.


The total package does matter. I completely agree. I personally had a really subpar UGPA, but my advisor saw something in me and took a risk.

 

My point is that a low GRE score is not a death sentence, nor does it mean you are incapable of being a good grad student.


From what I heard, the verbal section of the GRE is a better marker of whether an individual will do well in grad school, whereas the quant section doesn't really add much.

 

Again, I don't recall where I heard this, but intuitively, I would think that is really the case. For the verbal section of the GRE, you actually have to know "stuff," whereas the quant section is just high school math plus trickery.

 

As for me, I did pretty well on the quant section (161) but just average on verbal (155), and I got into pretty good programs. For psychology, if you do stellar on verbal and just okay/mediocre/meh on quant, you have a chance. From what I can tell, grad schools don't expect you to be a GRE quant genius; they expect you to be at least average at it.

 

Now that said, I took the GRE 3 times. I was not competitive the first time, and I was less so the second time. I didn't get into any programs when I applied that application year.

 

From what I learned about the GRE:

 - It is used as a basis for narrowing the application pool.

 - Should it be? Honestly, that's irrelevant. Why? Because it is a damn test. Consider classes in college where you studied a crap ton, and it turned out the material on the test was completely different from your study materials. In turn, you got a C+. Is the test a good predictor of your knowledge of the material?

 

No.
But at the end of the day, you got a C+.

 

To some degree, the GRE is like that. It is an unfair test that is morally dumb and scientifically subpar in predictive value (sure, the GRE is predictive to some degree, and maybe the effect is robust, but it's still small). Still, it doesn't matter.

 

What matters is the following:

 

The GRE is a test, and it is up to you to do well on it. If that means studying for months on end (like I resorted to in order to increase my score), then do it. If it means studying How to Solve Tricky Math 101, then DO IT.

 

Part of getting into graduate school is looking competitive. Sadly, the GRE is part of the criteria. If you are an intelligent individual, or at least somewhat so, you can do well on it with enough work invested.

 

Now, one can argue that it is because I did well-ish on the GRE that I believe what I believe, but honestly, it is because I acknowledged the bullcrapery of the GRE, rolled up my sleeves, and studied for months on how to do well on it.

 

*exhale*

 

IN SUMMATION:

 - GRE is Lame.

 - GRE isn't a good test.

 - So what? Study hard enough to get somewhat competitive scores.

 - Get into graduate school, and be happy that you never have to take that exam ever again.

 

Also, for those who believe that schools should weigh other parts of one's application more than the GRE: it is most likely the case that someone in the application pool has very similar credentials and a better GRE score (it sucks, I know). In order to fix that,

 

Listen to this: [www.youtube].com/watch?v=eGMN-gNfdaY and get on studying.

Note: Just remove the brackets. I didn't want the video to appear on the post.

Edited by avidman

For what it's worth, I was accepted to a few schools with average GREs this year; however, I was rejected from more schools than I received acceptances from, and the common theme was my average GRE score. If I were applying again next year, I would take a GRE course and retest. The study material I used didn't match up well with what was on the GRE, so I think a course might be better; plus, I think they guarantee a certain point increase, otherwise it's free.

 

Point being: A high GRE won't get you in to a school, but not having a high one can certainly keep you out. At least in my anecdotal experience.


Point being: A high GRE won't get you in to a school, but not having a high one can certainly keep you out. At least in my anecdotal experience.

 

 

Agree. I know multiple people who got into multiple places with a high GPA but low GRE (read as ~75% verbal, ~44% quant). I also know multiple people who got into multiple places with a high GRE and low GPA (the lowest I know of is a 3.3). So as long as you have high levels of one (GPA/GRE) to counteract low levels of the other, and the experience to back up that you are a good candidate, you can totally get accepted!

 

However, you have to get the adcom to even look at your app, and without a high-ish GRE score and GPA that won't happen. One of my advisers told me that GPA + GRE must hit a certain combined cutoff. If you have a moderate-to-low GPA, you really need a GRE score that is high in both areas to counteract it. The opposite also appears to be true (moderate-to-low GRE scores = need a high GPA). However, if both are mediocre, or one is low and one is mediocre, there is a really good chance your application will be overlooked or tossed out, and the rest of your application (experience, letters, etc.) won't matter. There are plenty of people who have high levels of EVERYTHING.
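A minimal sketch of the compensatory screen described above, assuming a simple additive rule. The normalization ranges and the 1.6 cutoff are purely illustrative, not any program's actual formula:

```python
def passes_screen(gpa: float, gre_total: int, cutoff: float = 1.6) -> bool:
    """Hypothetical compensatory screen: GPA and GRE are each normalized
    to 0-1, summed, and compared to one combined cutoff, so a high score
    on one can offset a low score on the other, but two mediocre scores fail."""
    gpa_norm = gpa / 4.0                  # 4.0 scale
    gre_norm = (gre_total - 260) / 80     # GRE totals run 260-340
    return gpa_norm + gre_norm >= cutoff

print(passes_screen(3.9, 315))  # high GPA offsets an average GRE -> True
print(passes_screen(3.3, 330))  # high GRE offsets a moderate GPA -> True
print(passes_screen(3.3, 300))  # both mediocre -> False
```

The point of the additive form is exactly the trade-off the adviser described: neither number alone decides anything, but the sum has to clear the bar.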

 

Case in point, my application:

 

Research/Applied Experience: High (I have ~5 years of experience that is either research or working directly with the population I want to study, 3 conference presentations)

GPA: Moderate (3.5 overall, 3.6 major)

GRE: Moderate-Low (311 combined: 160 Verbal (84th percentile), 151 Quant (44th), 4.5 Writing (56th))

 

Outcome: Rejected from 10 PhD programs with varying levels of difficulty (top 50-top 100), got 1 phone interview.

 

Conclusion: If I had a higher GPA, or a higher GRE, I probably would have had better outcomes. Don't make the same mistake I did and hope that one part of your application that isn't GPA/GRE will make up for another. In my case, I had hoped experience would make up for my GRE score, and I thought my GPA was fine. But I didn't take what I wrote above into consideration. Good luck everyone.

Edited by AliasJane2342

One university (in fact, the only one of the 10 or so I interviewed at) drilled me about my GRE Q during the interview, informed me that the chair insists on a hard GRE cutoff, and then subsequently rejected me. They also told me at the interview that all of their students drop out and go into industry, which I later corroborated by viewing student/alumni profiles and former student co-authors on the program website. In fact, I had assumed they had a terminal MA program until I was informed of their dropout rate during the interview.

 

Common sense indicates that if something isn't working (i.e., if a program cannot successfully select students who persist to degree completion), do something different. :)

 

Cost of $13k stipend for 5 years = $65,000

Cost of tuition waiver = ~$50,000 for five years (crude estimate based on undergrad rates at this specific institution)

Cost of fellowships/conference travel = $5,000 ($1,000 a year if the student is awesome and wins a lot)

Cost of the university's investment in a student who drops out after 5 years = $120,000.

 

Quite a substantial investment with absolutely nothing to show for it.  
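The back-of-the-envelope math above works out in a few lines (keeping in mind the tuition figure is the poster's crude estimate, not published data):

```python
# Rough cost to the university of one funded student who drops out
# after 5 years, using the figures quoted above.
years = 5
stipend = 13_000 * years      # $65,000 in stipend
tuition_waiver = 50_000       # ~$50,000 tuition waived (crude estimate)
travel = 1_000 * years        # $5,000 in fellowships/conference travel
total = stipend + tuition_waiver + travel
print(f"${total:,}")  # $120,000
```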

Edited by TheMercySeat

TheMercySeat - That is crazy! I'm very interested in looking at these stats if you're willing to PM me the school details. I just don't know how they can continue operating with their current model.


I was accepted into a doctoral program in Clinical Psychology and my quantitative score was 144. That's the 18th percentile.

My AW was in the 93rd percentile and Verbal in the 81st, and the low quant score didn't really affect my admission much. It was the last question asked during my interview, and he just wanted to know whether statistics would be too much of a struggle for me.

You can still make it as long as you show how much you're committed to the actual program.

YOU GIVE ME SO MUCH HOPE THANK YOU. I'm not applying til this fall though. Still...


YOU GIVE ME SO MUCH HOPE THANK YOU. I'm not applying til this fall though. Still...

I did want to point out that a low GRE quantitative score does not mean the end for stats. For example, I got a 145 and did amazingly in a graduate-level stats class (I even skipped the intro class and went straight into the hard stuff). I was expecting to be asked about my scores, but I think that because they saw I took graduate-level stats and passed with an A, they didn't bother.

