Showing results for tags 'correlation'.
Hi, I correlated all my study variables, and some of the demographic variables, with each other to see if there were any significant associations. I found significant correlations between some study variables and demographic variables. For example, let's say I analyzed which type of candy participants like eating the most, and the amount of candy XY eaten correlated with participants' educational status. This looks like a "spurious" association, in that there is no obvious explanation for why participants' education should be associated with how much of candy XY they eat.

My questions:
1) Is it common to do this sort of preliminary correlational analysis to explore associations between variables?
2) Should I report significant correlations even if they are not part of my study questions/hypotheses?
3) If yes, should I also mention these significant correlations in my discussion? Or can I simply report them in my results section and then not mention them again in the discussion?

Thank you in advance!
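One practical point when running many exploratory correlations like this: with every extra pair tested, the chance of a "significant" but spurious hit grows, so it is common to adjust the p-values for multiple testing. Here is a minimal sketch of that idea (the variable names and data are invented for illustration, not from the post), using a Holm step-down correction:

```python
# Sketch: exploratory pairwise correlations with a Holm correction
# for multiple testing. All variables below are hypothetical examples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
data = {
    "candy_xy_amount": rng.normal(size=n),   # hypothetical study variable
    "education_years": rng.normal(size=n),   # hypothetical demographic
    "age": rng.normal(size=n),               # hypothetical demographic
}

# Correlate every pair of variables, keeping the raw p-values.
names = list(data)
results = []
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r, p = stats.pearsonr(data[names[i]], data[names[j]])
        results.append((names[i], names[j], r, p))

# Holm step-down: sort raw p-values ascending; the k-th smallest is
# multiplied by (m - k), and adjusted values are forced to be monotone.
m = len(results)
order = sorted(range(m), key=lambda k: results[k][3])
adjusted = [0.0] * m
running_max = 0.0
for rank, k in enumerate(order):
    adj = min(1.0, (m - rank) * results[k][3])
    running_max = max(running_max, adj)
    adjusted[k] = running_max

for (a, b, r, p), p_adj in zip(results, adjusted):
    print(f"{a} vs {b}: r={r:.2f}, raw p={p:.3f}, Holm-adjusted p={p_adj:.3f}")
```

A correlation that survives the adjustment is a much stronger candidate for reporting (and discussing) than one that is only significant at the raw threshold.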
Hello, everyone! Given the strange concern some of us have for getting a job after completing our PhD programs, I decided to undertake a friendly/nerdy investigation. Perhaps this has been done before; at any rate, I found it enlightening. Maybe it will help you too, as we reach the final stage of our decision making.

How strongly do Leiter's current (2018) Philosophical Gourmet Report (PGR) rankings correlate with job placement into permanent academic positions? Rather weakly, it turns out. I compared the PGR data on a spreadsheet with placement data for the years 2012-2016, drawn from 2017 research funded by the APA, with some interesting results. (See below for a link to the data.)

Before I report my findings, I should note a few caveats:
- The APA placement data reports the most recent placement status of a given graduate within the time period, so some of those in permanent academic positions are surely second- or third-year hires, given the substantial number of PhD-earners who don't get placed for a year or two.
- Leiter has criticized the APA-funded data for leaving out postdocs (who may have postponed a viable permanent academic position). This is good to keep in mind; however, a number of the postdocs would have applied for positions within the 2012-2016 period, which at least ameliorates the problem.
- I use the terms "weak" and "strong" for correlations in an intuitive rather than a technical sense. Numbers can be presented in very biased ways, especially when statistical or categorical lines in the sand are drawn. I do draw such lines, so take my categories with a grain of salt.
- I left out universities outside the United States altogether. Also, when a university was distinguished from its HPS (history and philosophy of science) program, I reported whichever of the two had the higher placement rate and left the other out altogether.
So, for example, when calculating the PGR representation of the top 50 schools for permanent academic placement, I divided the 29 PGR-represented schools by the 45 of the top 50 which don't fall under either of these two exclusions. There are, of course, other factors to consider besides employment: publishability, raw academic opportunity (and its correlation with personal interests), oddball placement factors (school X never hires from school Y), teaching/research balance, etc. This investigation is limited, but within those limits it is insightful.

Without further ado, here are some of my findings about the top 63 permanent-academic-placement (PAP) schools vis-a-vis the PGR top 50:
- 20 of the top 63 PAP programs are PGR-unranked. These include the following: Cincinnati, Baylor, Florida, Oregon, Tennessee, Villanova, Penn St., DePaul, Catholic University of America, Vanderbilt, New Mexico, Emory, Miami, Washington, Fordham, Stony Brook, Duquesne, Georgia, USF, and Iowa.
- Given the top X schools for PAP, where X is a multiple of 10 from 10 to 60, the PGR never includes more than 67.3% of them. Representation always declines as we approach the top of the PAP list (except moving from the top 50 to the top 40, but the difference is a negligible 0.4%). By the time we reach the PAP top 10, the PGR only predicts half of them.
- There are 11 PGR-unranked schools that have PAP rates of 50% or better. (On the above list, these consist of everything from Cincinnati to New Mexico.) This rate is better than that of half (25) of the PGR-ranked schools.
- 10 PGR-ranked schools, ranging from PGR rank 9 to 40, placed too low even to be considered by the APA study, which bottomed out at 38% PAP. These programs include UCLA, CUNY, Brown, and Duke.
- Only 1 of the 8 PGR "bubble" schools (Nebraska) was in the APA top 63.

Important: It is true that PGR rankings do correlate more strongly with PAP into PhD-granting programs.
Of the 20 high-PAP schools that are PGR-unranked, only 3 place students into PhD-granting programs at a rate of 10% or higher. By contrast, half of the PGR top 50, including the entire top 20 (minus some of the PGR-ranked schools which placed too low overall for APA consideration), place students into PhD-granting programs at that rate.

Here's the link to the APA-funded study. The portion relevant to my post begins on page 43: https://www.dropbox.com/s/61qgeway2nyhr7x/APDA2017FinalReport.pdf?dl=0

Bottom line: If you're cool with teaching undergrads, the PGR isn't going to be very helpful. If you strongly prefer teaching graduate courses, the PGR is going to be very helpful; however, at that point you might as well just look at the APA data on placement into PhD-granting programs. Hope this can help someone.
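Since the post uses "weak" and "strong" informally, it may help to see how the ranking-vs-placement relationship could be quantified. A natural choice for ordered data like rankings is a Spearman rank correlation. The numbers below are invented for illustration only; they are NOT the actual PGR or APA figures:

```python
# Sketch: Spearman rank correlation between a program ranking and
# placement rates. The data here are made up, not the APA/PGR numbers.
from scipy import stats

pgr_rank = [1, 2, 3, 4, 5, 6, 7, 8]  # lower number = better PGR rank (hypothetical)
pap_rate = [0.72, 0.55, 0.60, 0.40, 0.58, 0.35, 0.50, 0.30]  # hypothetical PAP rates

rho, p = stats.spearmanr(pgr_rank, pap_rate)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Because a better PGR rank is a smaller number, a strong ranking-placement link would show up as rho close to -1; a value near 0 would support the "rather weakly" verdict. Applying this to the real spreadsheet columns would turn the intuitive "weak/strong" labels into a single defensible number.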