Kaimakides Posted July 13, 2015 Hello, all. Having spent some time lurking on these forums, I've noticed that one of the most enduring and ubiquitous questions here and elsewhere is, "Should I retake the GRE?" And for good reason--not only is it terribly important, but it is also a kind of epistemological brick wall, necessitating the consideration of so many factors and being so sensitive to context that it is pretty much impossible to obtain a satisfactory answer. Being unable to directly intuit the answer in my own case, and eager to turn my obsessive attitude re: graduate school applications toward something as all-consuming and satisfying as the Excel Spreadsheet, I assembled and analyzed a veritable mountain of data about the quantitative measures of students who self-reported here at the Grad Cafe. In sections 1a and 1b, I will present the data and the results of analysis performed on that data. In section 2, I will discuss these results. In section 3, I will draw conclusions based on the discussion. In section 4, I will qualify those conclusions. In section 5, I will suggest some ways my analysis may help you answer the opening question about the GRE. I performed the analysis on the data from about 600 applications to programs in which I had a personal interest. I recorded 1. whether an applicant was accepted or rejected, 2. their Verbal and Quantitative scores, 3. their GPA, and 4. whether or not the applicant attended a Master's program. If an applicant attended a Master's program, I performed calculations on their Master's GPA instead of their UG GPA. The Analytical Writing section was not recorded. Though most GRE scores recorded were on the 130-170 scale, when I came across pre-2011 scores I simply converted them using the ETS concordance table (though this is not wholly unproblematic, and I will address it if you push me on it, but not otherwise).
These are the PhD programs I examined, with their PGR rank in parentheses, in no particular order: Arizona (13) Brown (20) Pittsburgh (7) Toronto (11) Wisconsin-Madison (21) MIT (12) Princeton (2) Stanford (8) Yale (6) CUNY Grad Center (16) Harvard (6) Rutgers (2) Texas-Austin (17) Massachusetts-Amherst (28) Columbia (10) NYU (1) Indiana-Bloomington (24) Duke (24) Ohio State (28) Berkeley (10) UCLA (10) 1a. Means, deviations, correlations, explanatory effect The average PGR ranking of the above programs is 13.2; their weighted average PGR ranking, weighted by the number of recorded entries for each school, is 10.7. (Higher-ranked programs get more applications, so it is unsurprising that the weighted average is lower than the average per se.) The first thing you ought to know about the GRE scores of those who apply to top programs is that they are often very high, and the scores of those who are accepted are even more impressive. The mean Verbal score of applicants was 165.3. A score of 165 is at the 95th percentile of all GRE test-takers. The median of the applicants was 166, the 96th percentile. The mean and median Quant score of applicants was 160.0, the 78th percentile of the general GRE population. Consequently, the mean GRE score of applicants was 325.3, and though ETS does not assign percentile ranks to overall scores, it should be clear from the constituent parts that this is not a shabby score. The modal Verbal score was an impressive 169, with 97 of the 598 applicants who reported a Verbal score scoring exactly 169, and nearly as many (82 applicants) reporting a perfect 170. The modal Quant score was 155, with 57 students having a score of exactly 155. The standard deviation of the Quant scores is 5.4, while the standard deviation of the Verbal scores is 4.1, indicating that the Verbal scores are more tightly grouped.
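For anyone who wants to reproduce this sort of summary, here is a minimal sketch using Python's standard library. The Verbal scores below are invented stand-ins, not the actual Grad Cafe data:

```python
# Hypothetical Verbal scores, standing in for the self-reported data above.
from statistics import mean, median, mode, pstdev

verbal = [165, 166, 169, 170, 169, 163, 166, 168, 169, 161]

m   = mean(verbal)    # average score
med = median(verbal)  # middle score
mo  = mode(verbal)    # most common (modal) score
sd  = pstdev(verbal)  # population standard deviation (spread of scores)
```

The same four numbers are what the analysis above reports for the real pool (mean 165.3, median 166, mode 169, SD 4.1).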
For comparison, the standard deviation of Verbal scores for the more general population of GRE-takers is 8, and the standard deviation of Quant scores is 9, suggesting the relative uniformity of GRE scores among applicants. There is a greater correlation between acceptance and one's Quantitative section score (r = .18, p < .0001) than between acceptance and one's Verbal score (r = .16, p < .0001). However, both correlations are weak. The Quantitative score is therefore a slightly better predictor of one's likelihood of being accepted (to the above fictional construction of a program, consisting of the weighted average of the individual programs) than one's Verbal score. The strongest correlation I found was between one's total GRE score and acceptance, at .22. Not much better. By squaring these r values, we can get the explanatory effect of each quantitative component. Total GRE scores explain 4.99% of application results, the Verbal score alone explains 2.45% of them, and the Quant score alone explains 3.37% of them. Taken together, they explain 10.81% of the data. The minutely small correlation between GPA and application results (.04) failed to reach significance. 1b. Distribution, tiers The small standard deviation of applicants' Verbal scores coheres nicely with the witnessed distribution of those scores. As you can see, Quantitative scores are distributed more widely. And here is the distribution of GRE scores. Probably more important than the distribution of these scores is the success of applicants in these various score-groupings. The observed results in this vein are equal parts interesting and revealing. Total GRE scores, as well as their component sections, follow a very definite, non-linear pattern. (I am not great at Paint; assume all red lines are straight.) The horizontal lines represent tiers. If a tier-line passes through or above bars A and B, then A and B are members of the same tier. The vertical lines divide tiers, and represent thresholds.
Moving from left to right across a threshold corresponds to an increase in acceptance rate (moving to a higher tier). Moving from right to left corresponds to a decrease in acceptance rate (moving to a lower tier). Here is the chart for Quant score. And below is the corresponding chart for total GRE score: You may be asking yourself whether the existence of tiers is significant, or whether these groups are simply arbitrary. You might also be wondering whether any rationale underlies the location of thresholds. I will address this shortly. 2. Discussion of results In section 1a we established the relative unimportance of quantitative measures in the success of one's PhD applications. Departments are indeed telling the truth when they say that qualitative measures, especially the writing sample and letters of recommendation, are far and away the most important components of the application. "Does this mean that my GRE and GPA are unimportant, that I can get into [prestigious school X] with a subpar GPA and GRE?" This complex question has two parts best answered separately. To the first: absolutely not. GPA and GRE are very important parts of the application. The lesser importance of quantitative factors is best understood in the following way. Quantitative factors can be overridden by the more important, qualitative factors, if those qualitative aspects are especially strong. This is the essential difference between quantitative and qualitative factors. Subpar writing samples and letters, on the other hand, can never be overridden by any level of success on the GRE or in your undergraduate career. "What do you mean, 'the correlation between GPA and acceptance failed to reach significance'?" It failed to meet the standard of certainty required by correlational analysis: there is roughly a 32% chance (p ≈ .32) that a correlation of this size would arise by chance alone, which is unacceptably high.
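To make the r, r², and significance talk concrete, here is a sketch with invented data (not the actual Grad Cafe sample, and not necessarily the OP's exact method): Pearson's r between a score and a 0/1 acceptance outcome (a point-biserial correlation), its square, and a permutation-test estimate of the p-value, i.e., how often shuffling the outcomes alone produces a correlation at least as large as the observed one.

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

random.seed(0)  # reproducible shuffles

# Hypothetical applicants: Quant score paired with accepted (1) / rejected (0).
quant  = [160, 155, 168, 162, 158, 170, 156, 165]
accept = [1,   0,   1,   0,   0,   1,   0,   1]

r_obs = pearson_r(quant, accept)
r_squared = r_obs ** 2  # share of outcome variance "explained" by the score

# Permutation test: shuffle the outcomes many times; the p-value estimate is
# the fraction of shuffles at least as extreme as the observed correlation.
trials = 2000
extreme = sum(
    abs(pearson_r(quant, random.sample(accept, len(accept)))) >= abs(r_obs)
    for _ in range(trials)
)
p_value = extreme / trials
```

A p-value of 0.32 on this reading means roughly one shuffle in three does as well by luck, which is why the GPA correlation was set aside.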
"If the correlation with GPA did reach significance, though, would you expect it to be a good predictor of acceptance?" No. GPA is too variable and unsteady a factor, due to differences in grading between schools, for us to expect a correlation of any significance. And further, GPAs among applicants are even clumpier. Almost everyone has a GPA greater than 3.6, and the average GPA of applicants is 3.85, with a standard deviation of 0.17. More than half of all applicants have a 3.9 or greater. Very clumpy. "Quant score is a better predictor of acceptance. Does that imply that my Quant score is more important than my Verbal score?" No. I suspect the Quant correlation is greater than the Verbal one simply because the Verbal scores clump together near the top. Had I sampled more mid- and low-range programs, I would expect the Verbal score to correlate more strongly than the Quant score. Although the Quant score explains a greater share of results, it does not necessarily explain a greater number of acceptances. Further, it should be noted that the Quant score's prominence may be the result of this particular range of schools. In programs strong in formal areas such as logic, it makes good sense for the Quant score to be weighed more heavily. If a number of those programs are represented here, this is a plausible explanation. "Why are these tiers and thresholds not completely arbitrary?" I believe there is compelling reason to think that the tiers and thresholds witnessed in the data represent something real. Although many graduate programs swear up and down that they do not use GRE score cut-offs, it is a well-known fact that the overwhelming number of applications they receive gives them little recourse but to use GRE scores to divide applicants into groups of more or less promise. The patterns above give us some insight into the (average, fictional) process of the representative school in this group.
In the Verbal chart, we see two thresholds creating three tiers, the first threshold dividing those who scored below 166 from those who scored 166 or higher. 166 is the median (50th percentile) Verbal score of this group of applicants, and recall that 165.3 is the mean. The second threshold divides those who scored 169 or lower from those who scored a perfect 170. Since 169 is such a common score, it spans the 75th through 85th percentiles; a score of 170 represents the 90th percentile. In the GRE chart, three thresholds create four tiers. The first threshold divides those who scored below 325 from those who scored 325 or higher. 325 is the median score of the applicant pool. The second divides those who scored 329 or lower from those who scored 330 or higher. 330 is the 75th percentile score. And the final threshold divides those who scored 335 or lower from those who scored 336 or higher. 336 is the 92nd and 93rd percentile, while 335 is the 90th. In the Quant chart, we again have three tiers. The first threshold divides those who scored below 160 from those who scored 160 or higher. 160 is the median and mean Quant score. The second threshold divides those who scored 167 or lower from those who scored 168 or higher. 168 is the 95th percentile, while 167 is the 90th. As you can see, these divisions demonstrate surprising regularity, suggesting that this is not merely a statistical accident, but rather reflects a fact about how departments carve up their applicant pools. All three scores are divided at their median, with those who score above the median being much more likely to be accepted than those who score below it. A second threshold is found at the 75th percentile for the Verbal section and overall score. And a final threshold is found in all three scores between those who score just below the 90th percentile and those who score at or above it, i.e., nearly perfectly.
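The median-split idea behind the first threshold can be sketched in a few lines: divide a pool at its median score and compare acceptance rates on either side. The pool below is invented for illustration:

```python
from statistics import median

# (score, accepted) pairs for a hypothetical applicant pool.
pool = [(155, 0), (157, 0), (158, 0), (161, 1), (163, 0),
        (164, 0), (166, 1), (169, 1), (170, 1), (172, 1)]

cut   = median(s for s, _ in pool)        # the candidate tier threshold
lower = [a for s, a in pool if s < cut]   # below-median tier
upper = [a for s, a in pool if s >= cut]  # at-or-above-median tier

rate_lower = sum(lower) / len(lower)      # acceptance rate below the cut
rate_upper = sum(upper) / len(upper)      # acceptance rate above the cut
```

In this toy pool the upper tier's acceptance rate is four times the lower tier's; the real charts show the same qualitative jump at the median.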
What we know about committee procedures is consistent with the existence of tiers and the above analysis. On the face of it, this system may look dubious, and your first instinct may be to denounce it. After all, small differences in ability (like a 169 vs. a 170 Verbal) seem to be rewarded with large increases in chances of acceptance, while large differences in ability are largely ignored (those who score 166 have the same chance of success as those who score 169). Things came together for me when I read something by Eric Schwitzgebel, a professor at UC Riverside, to the effect that letters of recommendation tend to blur together for him. Assuming that Eric is not substantively different in this respect from committee members across the country, and that a barrage of largely uniform quantitative measures is just as apt to blur together as a pile of generic letters of recommendation, I see no reason why we should not expect exactly the kind of distribution witnessed in the above data. Consequently, wide swaths of GRE scores are likely to receive similar attention, and thus similar acceptance rates, with the lowest GRE scores receiving the least attention, and higher scores receiving more, but roughly equal, attention. Moreover, across the board, top-end or perfect scores are the most likely to signal to committee members that a file is worthy of closer inspection. Even if small differences at the top end of the score range do not represent large differences in ability, a higher score may substantially increase the likelihood that your file will be examined in further detail, and then green-lit. "So you're saying that higher GRE scores cause this increase in acceptance rate?" No. When you get down to the thick of it, everyone who gets in does so on the merit of their sample and letters (and a few other qualitative factors, such as your institution of origin).
However, in order to be accepted, your application must first receive close scrutiny. And the strength of your quantitative profile directly bears on the likelihood that your application will be scrutinized more closely. 3. Conclusions 1) On the whole, those who apply and are accepted to top-ranked PGR programs have quite strong GRE scores. 2) The quantitative components of one's application are far less important than the qualitative ones (letters, sample, institution of origin), as evidenced by the fact that they explain only about 11% of the data (about 65 results of 601). 3) Despite the fact that ETS revised the GRE in 2011 to discriminate more finely between the abilities of the highest scorers, Verbal scores are very closely grouped in the range 166-170. 4) It is probably standard procedure to divide applicant pools into a bottom half and a top half, to more closely examine the upper half of that pile, and then most seriously consider the top 10% of the pool. 4. Qualifications A number of factors qualify my results and impact my data. Firstly, all this data is self-reported. It is therefore entirely plausible that those who self-report have higher GRE scores than the average applicant, skewing all my score data upward by some unknown quantity. Small quantities of data make it difficult to be confident about certain low-frequency scores. I therefore caution you against interpreting acceptance rates as absolute quantities. The figure of 45% acceptance among those with a Verbal score of 170 might be somewhat off, but we can be fairly sure that someone with a Verbal score of 170 is more likely to receive an acceptance than someone with a Verbal score of 169. In fact, 34.6% more likely: (((45.1/33.5) - 1) * 100). When total GRE and Quantitative scores get very high, the margin of error is higher. The number of applicants who achieved a total GRE score of 338 or higher is only about 3% of the total sample size, and so I have excluded those scores from the GRE charts. 5.
"So, should I retake the GRE?" The foregoing analysis makes this question far more tractable than before. We can now break it into smaller sub-questions: 1. Is your current score at the cusp of the next threshold? (note: consult the charts) 2. Do you have independent reason to believe that you can do substantially better on your next retake? (e.g., you got rather poor sleep the night before the exam for a non-recurring reason, or you were ravenously hungry during the test [both of these happened to me, ugh]) 3. Does a department on which you are especially keen superscore? If the answer to one of these questions is yes, you may want to consider retaking the GRE. However, I would not be so confident in my analysis as to determine definitively whether or not you should retake the GRE, especially if you answer 'no' to the first question but 'yes' to the second and third.

TakeruK Posted July 13, 2015 I think this is a very thorough and interesting analysis. I am guessing you posted it here for feedback, but if you did not want feedback, then feel free to ignore this post. 1. Although you do make a good argument for the non-arbitrariness of the tiers, I don't feel confident in your splitting of the total GRE score into 4 tiers. I certainly agree that there is something going on at total score = 325, but the other tiers seem to be within statistical noise. That is, if you do something simple, such as apply Poisson uncertainties to each of your bins, you'll see that the difference between the "329 or higher" and the "330 or higher" bins is within this noise. The cutoffs in the Verbal score distribution pass this test, though. I think your charts would be more meaningful if you do the following: a) Compute error bars on the height of your bins. b) Show a quantitative motivation to split the scores into these tiers. There are lots of statistical methods that compare the validity of two (or more) models. Compare a two-tier model vs. a three-tier vs. a four-tier model and show that your choice is statistically validated. You must also use the error bars computed in a) above in making this calculation. It is too easy to draw lines to "guide the eye" and point to misleading results. However, maybe I missed something you will point out for me. 2. I am also a little troubled by statements like "<variable> can explain X% of the data". But here, I confess my ignorance of your statistical methods, as I don't know all of the nuances that come into the R^2 statistic. I work with Bayesian statistical methods, so I am unfamiliar with your method. I do vaguely recall that it is a measure of goodness of fit related to the chi-squared statistic, right? But in the chi-squared statistic, we must assume that all the variables are independent of each other.
That is, we might require that GRE scores, GPA, and LOR quality are independent of each other in order to make the statements you made here. However, I would say that this is a bad assumption. I think there is likely a lot of correlation between GRE scores and the other aspects of an applicant's profile. That is, I believe that the high GRE scores of accepted students do not indicate that a high GRE score = good chance of acceptance, but that it is simply a side effect: the top students will generally perform well on the GRE too, because they perform well at most metrics in general. Again though, if your analysis does not require the assumption that GRE scores and other factors are independent, my apologies--please educate me! 3. Finally, I just want to note that it's interesting to see you say that the GRE Q might explain more of the results. I do agree with your assessment of why the data appear this way, due to score clumping in the GRE V. However, this is interesting since it's "common knowledge" in the STEM fields that the GRE V is what differentiates candidates (but this is probably because all STEM applicants score >90th percentile in the GRE Q, so the only score with any sort of dynamic range is the GRE V).

Kaimakides Posted July 13, 2015 (edited) Thanks for your thoughtful response, TakeruK! An important qualification, which I forgot to add, is that I have no formal statistical training, merely some experience I have accrued in my own time. I recognize that my analysis is pretty amateurish, but it is nonetheless, I hope, helpful. I think your idea of putting error bars on my data is a very good one, but I don't know how I would go about doing that. And what do you mean by 'interacting,' which I presume you are using in a more rigorously statistical sense than I am familiar with? If by interacting you mean that they result from a common cause, then in that sense they do certainly interact. (Is there some way for me to check?) And re: your comment about the GRE scores of accepted students, I agree completely. Accepted students have great qualitative features, and they do well on the GRE and have great grades simply as a matter of course. The parallelism you point out in the role of the Verbal section in STEM fields is very interesting. Thanks for sharing that. Edited July 13, 2015 by Kaimakides

TakeruK Posted July 13, 2015 I don't think I said "interacting" (I did a CTRL-F just to make sure!). Maybe "independent"? You did already say something similar in your response, so I think you understand, but I would define "independent" variables as two randomly distributed variables where the value of one quantity (e.g. GRE score) is not correlated with the value of another (e.g. GPA). When variables/quantities do depend on each other, sometimes people call it "having covariance". There are multiple ways for variables to have covariance; for example, as we both said, "good" students will likely have strong GRE scores and GPAs. One qualitative way you can test for independence is to make a scatter plot of the two variables. For example, plot GPA on the x-axis and GRE score on the y-axis. If you see something like a horizontal line, a vertical line, or a roughly circular cloud, then there is little or no correlation. But if you see the points clumping into an ellipse (or line) that is angled, then there is some correlation. For example, you might expect to see that your points lie in an ellipse with a slope going up from left to right. This would indicate that students with higher GPAs generally score higher on the GRE. The quantitative way to measure this is to compute the covariance: https://en.wikipedia.org/wiki/Covariance. If we denote COV(X,Y) to be the covariance of X and Y and VAR(X) to be the variance of X (where VAR(X) = COV(X,X)), then you can compute the correlation coefficient (lots of other names for this) as: correlation between X and Y = COV(X,Y) / SQRT(VAR(X)*VAR(Y)), so if X = GPA and Y = GRE, you can compute how correlated your GPA and GRE quantities are. Numbers near 0 mean no correlation, +1 means absolute positive correlation (higher X -> higher Y), and -1 means absolute negative correlation (higher X -> lower Y).
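That covariance-based formula can be written out directly in code. The GPA/GRE pairs below are invented for illustration, not anyone's actual data:

```python
def cov(xs, ys):
    """Population covariance of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# Hypothetical applicants; a real check would use the actual spreadsheet columns.
gpa = [3.6, 3.7, 3.8, 3.9, 4.0]
gre = [158, 161, 160, 165, 168]

# correlation = COV(X, Y) / SQRT(VAR(X) * VAR(Y)), where VAR(X) = COV(X, X)
corr = cov(gpa, gre) / (cov(gpa, gpa) * cov(gre, gre)) ** 0.5
```

With these made-up pairs the correlation comes out strongly positive, which is what an angled, upward-sloping ellipse in the scatter plot would look like numerically.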
Note: this really only tests linear correlation between the two variables, and no linear correlation does not necessarily mean independence. However, we often assume our quantities are normally distributed, and under this assumption, no linear correlation does imply independence. Of course, if the R-squared stat you use does not require your quantities to be independent, then this doesn't matter! Finally, to compute error bars on the bins, one potential method is Poisson statistics, or "counting error bars". Poisson statistics are a good choice when your quantities are discrete (in your case, we always have an integer number of people with a certain score; we can't have 12.5 people scoring a 167!). If you assume that the distribution of # of people per bin is Poisson (see: https://en.wikipedia.org/wiki/Poisson_distribution), then you have an easy trick: since the standard deviation of a Poisson-distributed value with mean L is SQRT(L), you can approximate it as a normal distribution, so that the standard deviation is the 68% confidence interval (and 2 times the standard deviation is the 95% confidence interval, etc.). So, if you have a bin where you measure that 80 out of 100 people got accepted, you might report that the acceptance fraction is: 80/100 +/- SQRT(80)/100, or, if you plug it into a calculator: 80% +/- 9% (where the error bar represents the 68% confidence region; it would be 80% +/- 18% for the 95% confidence region). Of course, this fails (because the approximation fails) when the numbers are small. Consider a case where you have 1 out of 5 acceptances; then your fraction would be: 20% +/- 20% (for the 68% confidence region) or 20% +/- 40% (for the 95% confidence region), and now the +/- 40% is rubbish because it implies you can have a negative acceptance fraction! Just something to be aware of when making this approximation (see the Wikipedia article or a textbook for more details).
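The counting-error recipe above fits in a tiny function; this sketch reproduces the 80-out-of-100 worked example and the small-count case where the approximation breaks down:

```python
import math

def acceptance_with_error(accepted, total):
    """Acceptance fraction with a Poisson (sqrt-N) 68% error bar."""
    frac = accepted / total
    err  = math.sqrt(accepted) / total  # approximation; poor for small counts
    return frac, err

frac, err = acceptance_with_error(80, 100)        # 0.8 +/- ~0.09, i.e. 80% +/- 9%
small_frac, small_err = acceptance_with_error(1, 5)  # 0.2 +/- 0.2: 68% bar already
                                                     # spans zero, so treat with care
```

Doubling the error bar gives the 95% region, which is where the 1-out-of-5 case visibly turns to rubbish (a negative acceptance fraction).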

Kaimakides Posted July 13, 2015 (edited) My mistake about that malapropism. I had an inkling that non-correlation would be a good test for independence! And of course you're right, the factors do correlate with one another, though to a very small degree. Verbal correlates with Quant (r = .13), GPA correlates with Verbal (r = .049), GPA correlates with Quant (r = .002), and both sections obviously correlate strongly with total GRE score (.67 and .82, V and Q, respectively). Though none of these reach significance except the correlation between the section scores, and the section scores' correlations with the overall score. I've added CI = 68% error bars to the data. Thanks for the heuristic using the scatter plot, and the veritable lesson in statistics. I will update my original post to reflect the addition of error bars. Although I'm sad to say you're right that some of the tiers may be within the range of standard error, I am happy to be able to say that an important core of substance remains: the most effective way to improve one's chances of acceptance vis-à-vis retaking the GRE is to raise one's score above the median score of the applicant pool, and, though admittedly there is less quantitative evidence for this, to perfect one's score if it is in the range 167-169. Edit: For some reason, I am unable to edit my original post. Does anyone know why that is? Edited July 13, 2015 by Kaimakides
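For anyone wanting to reproduce this kind of pairwise check, here is a sketch with invented columns standing in for the spreadsheet (the same Pearson formula as above, applied to each pair of factors):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Invented columns, not the actual Grad Cafe data.
verbal = [165, 166, 169, 170, 163, 168]
quant  = [160, 155, 168, 162, 158, 170]
total  = [v + q for v, q in zip(verbal, quant)]

r_vq = pearson_r(verbal, quant)  # section-to-section correlation
r_vt = pearson_r(verbal, total)  # each section correlates with the total...
r_qt = pearson_r(quant, total)   # ...strongly, by construction
```

As in the real data, the section-total correlations are guaranteed to be large, since the total is literally built from the sections.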

ExponentialDecay Posted July 13, 2015 TakeruK, I think OP is using Pearson's r (aka rho - the correlation coefficient); otherwise I don't know why they're squaring any r-values. I would love to see the output of whatever program ran this, though. Anyway, as regards the research: I am confused by your histograms. Are they supposed to be cumulative frequencies? If so, why do they strive to resemble the normal distribution? "Quantitative factors can be overridden by the more important, qualitative factors, if those qualitative aspects are especially strong. This is the essential difference between quantitative and qualitative factors. Subpar writing samples and letters on the other hand, can never be overridden by any level of success on the GRE or in your undergraduate career." Whilst I have garnered this to be true from ianfaircloud's valuable contribution to transparency in philosophy admissions (seriously, could we get some of that in my neck of the woods?), can you explain how your data supports this notion? All you've shown is that, in a self-selecting sample of students with nearly identical scores, those scores play a dubiously unbiased 10% role in admissions selection. Whilst your r-squared can support the statement that factors besides the ones in your model play a more significant role in predicting Y, it cannot support the notion that anything overrides anything else. I'm not sure you can assume that the remaining 90% of the r-squared is due to qualitative factors - it may be due to some as-yet unknown model misspecification.

TakeruK Posted July 13, 2015 "Edit: For some reason, I am unable to edit my original post. Does anyone know why that is?" There's a time limit for post edits (the feature is meant for fixing typos etc.). If you want to update some of your figures, maybe it would be best to just post them in a new post below.

Kaimakides Posted July 13, 2015 (edited) "There's a time limit for post edits (the feature is meant for fixing typos etc.). If you want to update some of your figures, maybe it would be best to just post them in a new post below." I don't very much like that. But alright, I'll upload changes in piecemeal fashion. Here are the updated Verbal, Quant, and Overall GRE figures, with CI = 68% error bars. "I would love to see the output of whatever program ran this, though. Anyway, as regards the research: I am confused by your histograms. Are they supposed to be cumulative frequencies? If so, why do they strive to resemble the normal distribution?" Hey ExponentialDecay, No program ran this; I computed everything manually in Excel. The histograms aren't cumulative frequencies; they simply indicate ranges lower than (or equal to) or higher than (or equal to) a given GRE value. The normal-distribution-looking graph is the result of the facts that the distribution is normalish to begin with and that I've excluded very low scores. I initially made these bar graphs with frequencies of discrete scores, and if I recall correctly they were decidedly non-normal. They looked vaguely like a sine curve, and the frequencies of each score were so low that it was impossible to do math on the highest and lowest values anyway. Hence my use of ranges instead. And the reason the ranges 'turn around' (from x or lower to x or higher) at the median score is that some previous math indicated that those with GRE scores above the median have considerably better application results than those below it. Though Verbal scores and GPAs do get very uniform in the pools for the most prestigious programs, the pool was fairly varied on the whole, I think.
"Whilst I have garnered this to be true from ianfaircloud's valuable contribution to transparency in philosophy admissions (seriously, could we get some of that in my neck of the woods?), can you explain how your data supports this notion? All you've shown is, in a self-selecting sample of students with nearly identical scores, those scores play a dubiously unbiased 10% role in admissions selection. Whilst your r-squared can support a statement that factors besides the ones in your model play a more significant role in predicting Y, it cannot support the notion that anything overrides anything else. I'm not sure you can assume that the remaining 90% of the r-squared is due to qualitative factors - it is due to some as-yet unknown model misspecification." I appreciate your desire for transparency. I'd put my name on that petition before anyone. With respect to my claim about qualitative factors overriding quantitative ones, you are quite right that this does not simply follow from the analysis above. Rather, it is my considered view, based on the remarks of Directors of Graduate Admissions/Studies at various highly ranked departments to the effect that no quantitative factor can decisively sink an application (to paraphrase something Mark Schroeder from USC has commented), yet subpar quantitative factors would be expected to be compensated for in some way by other parts of the application in order for that application to receive further consideration. For evidence of the expectation of dynamic compensation, ctrl+F 'compensat' here. I realize this response may be a less-than-compelling pitch for my perspective, but having perused such a wide variety of sources over such a long time period, I have difficulty producing evidence for this kind of 10,000 ft. claim. Sorry about that. Edited July 13, 2015 by Kaimakides

philosophe Posted July 17, 2015 I haven't heard of any graduate departments "superscoring" -- in fact, I haven't heard this expression since high school SATs. Does anyone know of departments that do superscore? I've never considered asking, because I've assumed it's a no-go. (I'm re-taking the GRE in two weeks and have been trending better in Verbal and worse in math than my previous score, so this could potentially be a saving grace for me.)

Kaimakides Posted July 18, 2015 "I haven't heard of any graduate departments "superscoring" -- in fact, I haven't heard this expression since high school SATs. Does anyone know of departments that do superscore?" Yale and Columbia superscore. If anyone knows of other departments that do as well, let me know!
