
Kaimakides

Members
  • Posts

    10
  • Joined

  • Last visited

Profile Information

  • Gender
    Not Telling
  • Application Season
    2016 Fall
  • Program
    Philosophy

Recent Profile Visitors

1,335 profile views

Kaimakides's Achievements

Decaf (2/10)

Reputation: 6

  1. Hey all! Feeling lost, confused, or perhaps a bit stressed out about applying to graduate school for Philosophy in 2017? There's a fix for that! In light of the tremendous success of last year's group, the 2017 philosophy applicants Facebook group is for information-sharing, advice, and mutual support during these stressful times. Among our two dozen members are very successful past applicants who are generously willing to share their invaluable experience. So what are you waiting for? Get in here, let us know a little bit about yourself, and enjoy the fruits of community. LINK BELOW https://www.facebook.com/groups/1718518991735124/ YOU'VE GONE TOO FAR (LINK ABOVE)
  2. I don't think it will be a strike against you if you fail to use the original-language text. If you can effectively work from it, that might help ever so slightly. It is also worth noting the importance of working from well-reputed translations. You don't want your writing sample discredited because an expert on your topic thinks the translation you are working from has substantially misled you.
  3. Hey, I'm an undergraduate at Stony Brook intending to apply to programs strong in the philosophy of mind and related areas. Next academic year I'll be studying abroad at Oxford, where I expect to clinch some weighty recommendations as well as produce work worthy of polishing for my writing sample. I'm a dedicated autodidact on the topic of graduate school admissions, and I've done some independent analysis in my copious free time this summer.
  4. This post is better suited to the Philosophy subforum. That being said, I think your application has a few strong elements. Your GPA is high, your having attended this summer program is helpful, and your pedigree will not sink your application. Your GRE Verbal score could be stronger; at this point, however, it may be wiser to sharpen your writing sample to a fine point than to retake the GRE. Stony Brook's PhD program is one to think about. It has some of the best placement among Continental programs and is very diverse. On the other hand, accepted students usually have strong GRE scores.
  5. Hey Arm457, I'll reply to your essay, which I thought was pretty good.
1: This phrase in this context is potentially confusing. I returned to the first clause of the sentence to check whether you had implied that many people do have an idea of what they want to study at a young age. After parsing the sentence altogether your meaning is clearer, but the phrase "on the contrary" suggests a contrast between your two clauses--a contrast that simply doesn't exist.
2: This is very wordy. Try to avoid phrases like "the fact that," "this is why," and "the idea of x." They, and similar phrases, can almost always be cut out, making your writing more concise.
3: You probably mean "primarily" or "most importantly," as opposed to first (in succession, in order).
4: Your meaning gets across, but you would do better to write dampen, hinder, short-circuit (perhaps too colloquial), cut short, or something to that effect.
5: You make this claim, but don't support it very much. Tell us why the search itself is so important!
6: This is a strong point.
7: I'm not sure how subjects can be considered physical, and the phrase "stand alone complex" is a very strange one. Why not say something like, "Despite the perception of college freshmen, seemingly independent disciplines are often deeply related."
8: Not clear what you mean here. Rewrite.
9: "That every student must take before graduating" is more concise and leaves out the bit about whether or not students want to graduate. We can take for granted that they do, and in any case it is not relevant to the argument you make. So we can save time and clean up your sentence by omitting it.
10: "Preconceived" has connotations of insularity or obstinacy that are not relevant or appropriate here. Best to replace this word with another one.
11: This is an interesting example, but it is weakened by the fact that it is very hypothetical. Sure, the reader might say, if schools regularly invited prime ministers to speak on campus, then the Common Core would be important, but since most schools don't do things like this, lacking a Common Core is not that important. I think it is more effective to frame this point in terms of the opportunities a Common Core offers to universities. You might say something like: "The Common Core establishes a broad knowledge base that administrators can then capitalize upon. Once they have mastered the fundamentals, students stand to gain more from sponsored intellectual activities such as lectures and speeches outside the classroom, and this in turn enhances community"--or something like that. The bit at the end about community would then segue into your next point.
I hope this was helpful.
  6. Yale and Columbia superscore. If anyone knows of other departments that do as well, let me know!
  7. I don't very much like that. But alright, I'll upload changes in piecemeal fashion. Here are the updated Verbal, Quant, and Overall GRE figures, with 68% confidence-interval error bars.
Hey ExponentialDecay, no programs ran this; I computed everything manually in Excel. The histograms aren't cumulative frequencies; they simply indicate ranges lower than (or equal to) or higher than (or equal to) a given GRE value. The normal-distribution-looking graph is the result of the fact that the distribution is normal-ish to begin with and that I've excluded very low scores. I initially made these bar graphs with frequencies of discrete scores, and if I recall correctly they were decidedly non-normal. They looked vaguely like a sine curve, and the frequencies of each score were so low that it was impossible to do math on the highest and lowest values anyway. Hence my use of ranges instead. And the reason the ranges 'turn around' (from x or lower to x or higher) at the median score is that some previous math indicated that those with GRE scores above the median have considerably better application results than those below it. Though Verbal scores and GPAs do get very uniform in the pools for the most prestigious programs, the pool was fairly varied on the whole, I think.
I appreciate your desire for transparency. I'd have my name on that petition before anyone. With respect to my claim about qualitative factors overriding quantitative ones, you are quite right that this does not simply follow from the analysis above. Rather, it is my considered view, based on the remarks of Directors of Graduate Admissions/Studies at various highly ranked departments to the effect that no quantitative factor can decisively sink an application (to paraphrase something Mark Schroeder from USC has commented), yet subpar quantitative factors would be expected to be compensated for in some way by other parts of the application in order for that application to receive further consideration. For evidence of the expectation of dynamic compensation, ctrl+F 'compensat' here. I realize this response may be a less-than-compelling pitch for my perspective, but having perused such a wide variety of sources over such a long time period, I have difficulty producing evidence for this kind of 10,000 ft. claim. Sorry about that.
  8. My mistake about that malapropism. I had an inkling that non-correlation would be a good test for independence! And of course you're right, the factors do correlate with one another, though to a very small degree. Verbal correlates with Quant (r = .13), GPA with Verbal (r = .049), and GPA with Quant (r = .002), and both sections obviously correlate strongly with total GRE score (r = .67 and .82 for V and Q, respectively). None of these reaches significance except the correlation between the section scores, and the section scores' correlations with the overall score. I've added 68% confidence-interval error bars to the data (a rough sketch of how such error bars can be computed appears at the end of this list). Thanks for the heuristic using the scatter plot, and the veritable lesson in statistics. I will update my original post to reflect the addition of error bars. Although I'm sad to say that you're right that some of the tiers may be within the range of standard error, I am happy to be able to say that an important core of substance remains, namely that the most effective way to improve one's chances of acceptance vis-à-vis retaking the GRE is to raise one's score above the median score of the applicant pool, and, though admittedly there is less quantitative evidence for this, to perfect one's score if it is in the range 167-169. Edit: For some reason, I am unable to edit my original post. Does anyone know why that is?
  9. Thanks for your thoughtful response, TakeruK! An important qualification, which I forgot to add, is that I have no formal statistical training, merely some experience accrued in my own time, and I recognize that my analysis is pretty amateurish but, I hope, helpful nonetheless. I think your idea of putting error bars on my data is a very good one, but I don't know how I would go about doing that. But what do you mean by 'interacting,' which I presume you are using in a more rigorously statistical sense than I am familiar with? If by interacting you mean that they result from a common cause, then in that sense they do certainly interact. (Is there some way for me to check? A sketch of one way to check appears at the end of this list.) And re: your comment about the GRE scores of accepted students, I agree completely. Accepted students have great qualitative features, and they do well on the GRE and have great grades simply as a matter of course. The parallelism you point out in the role of the verbal section in STEM fields is very interesting. Thanks for sharing that.
  10. Hello, all. Having spent some time lurking on these forums, I've noticed that one of the most enduring and ubiquitous questions here and elsewhere is, "Should I retake the GRE?" And for good reason--not only is it terribly important, but it is also a kind of epistemological brick wall, necessitating the consideration of so many factors and being so sensitive to context that it is pretty much impossible to obtain a satisfactory answer. Being unable to directly intuit the answer in my own case, and eager to turn my obsessive attitude re: graduate school applications toward something as all-consuming and satisfying as the Excel spreadsheet, I assembled and analyzed a veritable mountain of data about the quantitative measures of students who self-reported here at the Grad Cafe.
In sections 1a and 1b, I will present the data and the results of the analysis performed on that data. In section 2, I will discuss these results. In section 3, I will draw conclusions based on the discussion. In section 4, I will qualify those conclusions. In section 5, I will suggest some ways my analysis may help you answer the question above about the GRE.
I performed the analysis on the data from about 600 applications to programs in which I had a personal interest. I recorded 1. whether an applicant was accepted or rejected, 2. their Verbal and Quantitative scores, 3. their GPA, and 4. whether or not the applicant attended a Master's program. If an applicant attended a Master's program, I performed calculations on their Master's GPA instead of their undergraduate GPA. The Analytical Writing section was not recorded. Though most GRE scores recorded were on the 130-170 scale, when I came across pre-2011 scores I simply converted them using the ETS concordance table (though this is not wholly unproblematic, and I will address it if you push me on it, but not otherwise).
These are the PhD programs I examined, with their PGR rank in parentheses, in no particular order: Arizona (13), Brown (20), Pittsburgh (7), Toronto (11), Wisconsin-Madison (21), MIT (12), Princeton (2), Stanford (8), Yale (6), CUNY Grad Center (16), Harvard (6), Rutgers (2), Texas-Austin (17), Massachusetts-Amherst (28), Columbia (10), NYU (1), Indiana-Bloomington (24), Duke (24), Ohio State (28), Berkeley (10), UCLA (10).
1a. Means, deviations, correlations, explanatory effect
The average PGR ranking of the above programs is 13.2; their weighted average PGR ranking, weighted by the number of recorded entries for each school, is 10.7. (Higher-ranked programs get more applications, so it is unsurprising that the weighted average is lower than the unweighted one.) The first thing you ought to know about the GRE scores of those who apply to top programs is that they are often very high, and the scores of those who are accepted are even more impressive. The mean Verbal score of applicants was 165.3; a score of 165 is at the 95th percentile of all GRE test-takers. The median Verbal score of applicants was 166, the 96th percentile. The mean and median Quant score of applicants was 160.0, the 78th percentile of the general GRE population. Consequently, the mean total GRE score of applicants was 325.3, and though ETS does not assign percentile ranks to overall scores, it should be clear from the constituent parts that this is not a shabby score. The modal Verbal score was an impressive 169, with 97 of the 598 applicants who reported a Verbal score scoring exactly 169; nearly as many (82 applicants) had a perfect 170. The modal Quant score was 155, with 57 students scoring exactly 155.
The standard deviation of the Quant scores is 5.4, while the standard deviation of the Verbal scores is 4.1, indicating that the Verbal scores are more tightly grouped. For comparison, the standard deviation of Verbal scores for the more general population of GRE-takers is 8, and the standard deviation of Quant scores is 9, suggesting the relative uniformity of GRE scores among applicants.
There is a greater correlation between acceptance and one's Quantitative section score (r = .18, p < .0001) than between acceptance and one's Verbal score (r = .16, p < .0001). However, both correlations are weak. The Quantitative score is therefore a slightly better predictor of one's likelihood of being accepted (to the above fictional construction of a program, consisting of the weighted average of the individual programs) than one's Verbal score. The strongest correlation I found was between one's total GRE score and acceptance, at .22. Not much better. By squaring these r values, we can get the explanatory effect of each quantitative component. Total GRE scores explain 4.99% of application results, while the Verbal score alone explains 2.45% of them, and the Quant score alone explains 3.37% of them. Taken together, they explain 10.81% of the data. The minutely small correlation between GPA and application results (.04) failed to reach significance. (A rough sketch of these correlation and r-squared computations appears at the end of this list.)
1b. Distribution, tiers
The small standard deviation of applicants' Verbal scores coheres nicely with the observed distribution of those scores. As you can see, Quantitative scores are distributed more widely. And here is the distribution of total GRE scores. Probably more important than the distribution of these scores is the success of applicants in these various score groupings. The observed results in this vein are equal parts interesting and revealing. Total GRE scores, as well as their component sections, follow a very definite, non-linear pattern. (I am not great at Paint; assume all red lines are straight.) The horizontal lines represent tiers. If a tier line passes through or above bars A and B, then A and B are members of the same tier. The vertical lines divide tiers and represent thresholds. Moving from left to right across a threshold corresponds to an increase in acceptance rate (moving to a higher tier). Moving from right to left corresponds to a decrease in acceptance rate (moving to a lower tier). Here is the chart for the Quant score. And below is the corresponding chart for total GRE score. You may be asking yourself whether the existence of tiers is significant, or whether these groups are simply arbitrary. You might also be wondering whether any rationale underlies the location of thresholds. I will address this shortly.
2. Discussion of results
In section 1a we established the relative unimportance of quantitative measures in the success of one's PhD applications. Departments are indeed telling the truth when they say that qualitative measures, especially the writing sample and letters of recommendation, are far and away the most important components of the application.
"Does this mean that my GRE and GPA are unimportant, that I can get into [prestigious school X] with a subpar GPA and GRE?" This complex question has two parts, best answered separately. To the first: absolutely not. GPA and GRE are very important parts of the application. The lesser importance of quantitative factors is best understood in the following way: quantitative factors can be overridden by the more important qualitative factors, if those qualitative aspects are especially strong.
This is the essential difference between quantitative and qualitative factors. Subpar writing samples and letters, on the other hand, can never be overridden by any level of success on the GRE or in your undergraduate career.
"What do you mean, 'the correlation between GPA and acceptance failed to reach significance'?" It failed to meet the standards of confidence required by correlational analysis: the p-value was about .32, far above conventional thresholds, which is unacceptable.
"If the correlation with GPA did reach significance, though, would you expect it to be a good predictor of acceptance?" No. GPA is too variable and unsteady a factor, due to differences in grading between schools, for us to expect a correlation of any significance. Further, GPAs among applicants are even clumpier. Almost everyone has a GPA greater than 3.6, the average GPA in the pool is 3.85 with a standard deviation of 0.17, and more than half of all applicants have a 3.9 or greater. Very clumpy.
"Quant score is a better predictor of acceptance. Does that imply that my Quant score is more important than my Verbal score?" No. I suspect the Quant correlation is greater than the Verbal one simply because the Verbal scores clump together near the top. Had I sampled more mid- and low-range programs, I would expect the Verbal score to correlate more strongly than the Quant score. Although the Quant score explains a greater share of results, it does not necessarily explain a greater share of acceptances. Further, it should be noted that the Quant score's prominence may be the result of this particular range of schools. In programs strong in formal areas such as logic, it makes good sense for the Quant score to be weighed more heavily. If a number of those programs are represented here, this is a plausible explanation.
"Why are these tiers and thresholds not completely arbitrary?" I believe there is compelling reason to think that the tiers and thresholds witnessed in the data represent something real. Although many graduate programs swear up and down that they do not use GRE score cut-offs, it is well known that the overwhelming number of applications they receive gives them little recourse but to use GRE scores to divide applicants into groups of more or less promise. The patterns above give us some insight into the (average, fictional) process of the representative school in this group.
In the Verbal chart, we see two thresholds creating three tiers, the first threshold dividing the ranges 166 or lower from the ranges 166 or higher. 166 is the median (50th percentile) Verbal score of this group of applicants, and recall that 165.3 is the mean. The second threshold divides those who scored 169 or higher from those who scored exactly 170. Since 169 is such a common score, it spans the 75th through 85th percentiles; a score of 170 represents the 90th percentile.
In the total GRE chart, three thresholds create four tiers. The first threshold divides those who scored 325 or lower from those who scored 325 or higher; 325 is the median score of the applicant pool. The second divides those who scored 329 or higher from those who scored 330 or higher; 330 is the 75th percentile score. And the final threshold divides those who scored above 335 from those who scored 336; 336 is the 92nd and 93rd percentile, while 335 is the 90th.
In the Quant chart, we again have three tiers. The first threshold divides those who scored 160 or lower from those who scored 160 or higher; 160 is the median and mean Quant score.
The second threshold divides those who scored 167 or higher from those who scored 168 or higher; 168 is the 95th percentile, while 167 is the 90th.
As you can see, these divisions show a surprising regularity, suggesting that this is not merely a statistical accident but reflects a fact about how departments carve up their applicant pools. All three scores are divided at their median, with those who score above the median being much more likely to be accepted than those who score below it. A second threshold is found at the 75th percentile for the Verbal section and the overall score. And a final threshold is found in all three scores between those who score at the 90th percentile and those who score above the 90th percentile, i.e., nearly perfectly. (These percentile landmarks can be recomputed as sketched at the end of this list.) What we know about committee procedures is consistent with the existence of tiers and with the above analysis.
On the face of it, this system may look dubious, and your first instinct may be to denounce it. After all, small differences in ability (like a 169 vs. a 170 Verbal) seem to be rewarded with large increases in chances of acceptance, while large differences in ability are largely ignored (those who score 166 or higher have the same chance of success as those who score 169 or higher). Things came together for me when I read something by Eric Schwitzgebel, a professor at UC Riverside, to the effect that letters of recommendation tend to blur together for him. Assuming that Schwitzgebel is not substantively different in this respect from committee members across the country, and that a barrage of largely uniform quantitative measures is just as apt to blur together as a pile of generic letters of recommendation, I see no reason why we should not expect exactly the kind of distribution witnessed in the above data. Consequently, wide swaths of GRE scores are likely to receive similar attention, and thus similar acceptance rates, with the lowest GRE scores receiving the least attention, and higher scores receiving more, but roughly equal, attention. Moreover, across the board, top-end or perfect scores are the most likely to signal to committee members that a file is worthy of closer inspection. Even if small differences at the top end of the score range do not represent large differences in ability, a higher score may substantially increase the likelihood that your file will be examined in further detail, and then green-lit.
"So you're saying that higher GRE scores cause this increase in acceptance rate?" No. When you get down to the thick of it, everyone who gets in does so on the merit of their sample and letters (and a few other qualitative factors, such as their institution of origin). However, in order to be accepted, your application must first receive close scrutiny. And the strength of your quantitative profile directly bears on the likelihood that your application will be scrutinized more closely.
3. Conclusions
1) On the whole, those who apply and are accepted to top-ranked PGR programs have quite strong GRE scores.
2) The quantitative components of one's application are far less important than the qualitative ones (letters and sample, institution of origin), as evidenced by the fact that they explain only about 11% of the data (about 65 of 601 results).
3) Despite the fact that ETS revised the GRE in 2011 to discriminate more finely between the abilities of the highest scorers, Verbal scores are very closely grouped in the range 166-170.
4) It is probably standard procedure to divide applicant pools into a bottom half and a top half, to more closely examine the upper half of that pile, and then to most seriously consider the top 10% of the pool.
4. Qualifications
A number of factors qualify my results and affect my data. Firstly, all of this data is self-reported. It is therefore entirely plausible that those who self-report have higher GRE scores than the average applicant, skewing all my score data upward by some unknown quantity. Small quantities of data also make it difficult to be confident about certain low-frequency scores. I therefore caution you against interpreting acceptance rates as absolute quantities. The figure that 45% of those with a Verbal score of 170 were accepted might be off, but we can be fairly sure that someone with a Verbal score of 170 is more likely to receive an acceptance than someone with a Verbal score of 169--in fact, ((45.1/33.5) - 1) * 100 = 34.6% more likely. When total GRE and Quantitative scores get very high, the margin of error is larger. The number of applicants who achieved a total GRE score of 338 or higher is only about 3% of the total sample, so I have excluded that range from the GRE charts.
5. "So, should I retake the GRE?"
The foregoing analysis makes this question far more tractable than before. We can now break it into smaller sub-questions:
1. Is your current score at the cusp of the next threshold? (Consult the charts.)
2. Do you have independent reason to believe that you can do substantially better on a retake? (E.g., you got rather poor sleep the night before the exam for a non-recurring reason, or you were ravenously hungry during the test [both of these happened to me, ugh].)
3. Does a department on which you are especially keen superscore?
If the answer to one of these questions is yes, you may want to consider retaking the GRE. However, I would not be so confident in my analysis as to determine definitively whether or not you should retake it, especially if you answer 'no' to the first question but 'yes' to the second and third.
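For anyone who would rather redo the section 1a numbers in code than in Excel, here is a minimal sketch of the correlation and r-squared ("explanatory effect") computations, assuming the self-reported results live in a hypothetical CSV named gradcafe_results.csv with columns verbal, quant, gpa, and accepted (1 for an acceptance, 0 for a rejection). The original analysis was done by hand in a spreadsheet, so this is an illustration, not the actual workbook.

import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("gradcafe_results.csv")  # hypothetical file and column names
df["total"] = df["verbal"] + df["quant"]

pairs = [
    ("verbal", "accepted"),
    ("quant", "accepted"),
    ("total", "accepted"),
    ("gpa", "accepted"),
    ("verbal", "quant"),
]

for x, y in pairs:
    r, p = pearsonr(df[x], df[y])
    # r squared is the "explanatory effect": the share of variance in y
    # associated with x
    print(f"{x} vs. {y}: r = {r:.3f}, r^2 = {r**2:.2%}, p = {p:.4g}")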
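Similarly, the "x or lower" / "x or higher" acceptance-rate bars and the 68% confidence-interval error bars mentioned in posts 7 and 8 could be reproduced along the following lines (same hypothetical CSV and column names as above). The error bar here is one standard error of a proportion, which corresponds to roughly a 68% confidence interval.

import numpy as np
import pandas as pd

df = pd.read_csv("gradcafe_results.csv")  # hypothetical file and column names

def acceptance_rate(subset):
    """Acceptance rate and one standard error (~68% CI) for a score range."""
    n = len(subset)
    p = subset["accepted"].mean()
    se = np.sqrt(p * (1 - p) / n) if n > 0 else float("nan")
    return p, se, n

median_verbal = df["verbal"].median()

# Ranges 'turn around' at the median, as in the original charts:
# "x or lower" below the median, "x or higher" at or above it.
for cutoff in range(160, 171):
    if cutoff < median_verbal:
        subset, label = df[df["verbal"] <= cutoff], f"Verbal <= {cutoff}"
    else:
        subset, label = df[df["verbal"] >= cutoff], f"Verbal >= {cutoff}"
    p, se, n = acceptance_rate(subset)
    print(f"{label}: {p:.1%} accepted (+/- {se:.1%}, n = {n})")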
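The percentile landmarks behind the tiers in section 2 (the median, 75th, and 90th percentiles of the applicant pool for each score) can likewise be pulled out directly, again assuming the same hypothetical CSV and column names.

import pandas as pd

df = pd.read_csv("gradcafe_results.csv")  # hypothetical file and column names
df["total"] = df["verbal"] + df["quant"]

# Median, 75th, and 90th percentiles of each score within the applicant pool,
# the landmarks at which the thresholds in section 2 appear to fall.
for col in ("verbal", "quant", "total"):
    q = df[col].quantile([0.50, 0.75, 0.90])
    print(f"{col}: median = {q[0.50]:.0f}, 75th = {q[0.75]:.0f}, 90th = {q[0.90]:.0f}")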
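Finally, on the question in post 9 about the factors "interacting": one standard reading is a statistical interaction, which could be checked by fitting a logistic regression of acceptance on Verbal, Quant, and their product and looking at the interaction term. This is only a sketch of one way to check, not something from the original thread, and it uses the same hypothetical CSV and column names.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gradcafe_results.csv")  # hypothetical file and column names

# Center the scores so the main effects and the interaction term are easier
# to interpret.
df["v_c"] = df["verbal"] - df["verbal"].mean()
df["q_c"] = df["quant"] - df["quant"].mean()

# "v_c * q_c" expands to v_c + q_c + v_c:q_c (main effects plus interaction).
model = smf.logit("accepted ~ v_c * q_c", data=df).fit()
print(model.summary())  # a significant v_c:q_c coefficient suggests interaction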