
New Gourmet Report Published!


ianfaircloud

Gourmet Report  

43 members have voted

  1. What's the most under-ranked program in the new report?

    • Massachusetts Institute of Technology
      6
    • University of Arizona
      1
    • UNC Chapel Hill
      1
    • CUNY
      0
    • Cornell
      2
    • Notre Dame
      3
    • Texas Austin
      2
    • Brown
      0
    • University of Chicago
      8
    • UW Madison
      0
    • USC
      2
    • Columbia
      3
    • Berkeley
      0
    • UCLA
      1
    • University of Arizona
      2
    • Notre Dame
      2
    • UCSD
      3
    • Duke
      2
    • UC Irvine
      2
    • Other! (and this poll sucks, because it didn't list my option!)
      10
  2. What's the most over-ranked program in the new report?

    • Massachusetts Institute of Technology
      5
    • University of Arizona
      2
    • UNC Chapel Hill
      0
    • CUNY
      5
    • Cornell
      3
    • Notre Dame
      1
    • Texas Austin
      3
    • Brown
      3
    • University of Chicago
      0
    • UW Madison
      1
    • USC
      9
    • Columbia
      1
    • Berkeley
      4
    • UCLA
      1
    • University of Arizona
      0
    • Notre Dame
      0
    • UCSD
      0
    • Duke
      2
    • UC Irvine
      1
    • Other (and this poll sucks, because it didn't list my option!)
      9
  3. Do you expect that this will affect departments' ability to recruit admitted students?

    • Not really. Departments, even those whose rankings changed, won't notice much of a difference in recruitment.
      16
    • Yes. Some departments whose rankings have changed will notice some difference.
      23
    • Some other response / not sure.
      4


Recommended Posts

These issues probably can't be overcome in any survey.  So to call it a fault of PGR may not be right.  Maybe it's just a caveat to attach to any rankings like these.

 

A caveat if you think rankings are worthwhile nonetheless. A serious flaw (for any ranking) if you think rankings do more harm than good.  


The site had been updated since mid-December (http://leiterreports.typepad.com/blog/2014/12/philosophical-gourmet-report-2014-15-now-updated.html). I'm glad you made this thread, though - I was wondering why no one was talking about it!

 

Ahh, I see. Did he simply not update the main page until recently? I saw this (http://leiterreports.typepad.com/blog/2015/01/pgr-2014-15-update.html), where he said, on Jan 29, that the PGR wasn't finished. And somehow I missed the updates before then. I did find that on December 2 he posted the top-fifty on his blog (and perhaps on the PGR site). Maybe we were all too busy to see it then. Thanks for the post.


I'd be curious to know if those who think the PGR methodology is flawed also think that the PGR is hopelessly flawed and counterproductive, or if they might have suggestions for improvement that might resolve their worries.

 

The point about the snowball sampling, for instance, is a true description of the method. Leiter picks the board and the board picks the reviewers. But to my knowledge, people aren't arguing that reviewer X is unqualified or that non-reviewer Y got snubbed. It seems like the reviewers are a field of relevant experts. Is this not so? If it is so, then the methodology produces reasonable results. If we cannot trust a field of relevant experts, who can we trust? And if we are complaining about implicit bias, how can we hope to have a human-based ranking system at all? (maybe the unconvinced think we just can't) 

 

I'm also not sure what could replace the PGR. We have seen rankings based on publication volume, for instance. Such rankings are useless. Rankings based on placement would be interesting, but always backward looking. So I'm not sure what the alternative is.

 

The way Leiter makes it seem, the main body of dissenters is the SPEP crowd, the continental folks who are by and large completely excluded from the rankings. It's true that continental-heavy programs aren't ranked, like Emory, SUNY, Vanderbilt, Fordham, DePaul, etc. But the PGR is widely regarded as a ranking of analytic departments and doesn't advertise itself as a ranking of continental departments, at least from my understanding.


And if we are complaining about implicit bias, how can we hope to have a human-based ranking system at all? (maybe the unconvinced think we just can't)

 

This is a decent way of expressing my sentiment, above, that we may need to attach a caveat to any human-based ranking of departments.


My only concern is that placement doesn't appear to be enough of a factor in determining the list. I don't have a dog in the race, since I'm a Continentalist through and through, but it seems to me that this is meant to be a ranking for professional philosophers (or those aspiring) by professional philosophers. The "quality" of a department's faculty is certainly pertinent information for aspiring students, but that quality won't mean much if it can't get you a job. While most of the time a high ranking corresponds to a high placement rate, this isn't always the case. An applicant might shy away from a T50 school simply because of its ranking, even though it may have a stellar placement rate.

 

It's not restricted to philosophy--one of my exes was in the English department at Brandeis. While nowhere near its neighbor, Harvard, in terms of ranking or pedigree, Brandeis PhDs routinely found employment in TT or tenure-related positions. Brandeis has a better placement rate than Harvard, in fact. I wish this were more explicit in the PGR and other department rankings.


As far as the most underrated programs go, I honestly think that dubious distinction goes to the non-Toronto Canadian programs.

I visited Western last summer and it was an absolutely brilliant place to do philosophy. I think they're criminally underrated. I hear great things about various other places in Canada as well, so I think I'd have to agree with your claim.


The specialty rankings really help demonstrate how much these rankings are contingent on small interdepartmental changes. For instance, if you compare the 2015 rankings for philosophy of religion (http://www.philosophicalgourmet.com/breakdown/breakdown7.asp) with the 2011: 

 

...

 

---

 

You'll notice some radical changes. Perhaps most strikingly, UNC-Chapel Hill was the second highest ranked program in the world in 2011, and has completely dropped off the charts by 2015. Why? Because (as far as I can tell) Marilyn McCord Adams and Bob Adams have moved from being faculty at UNC to being part-time professors at the new Rutgers Centre for the Philosophy of Religion. But that means that, in 2011, UNC's ranking as one of the best places in the world to study philosophy of religion was contingent on two people. Places like Notre Dame, Oxford, SLU, etc., by contrast, have a number of faculty working in the field and large institutional support for the area of study (dedicated Centres for the philosophy of religion, sponsored conferences, and so on), yet they still ranked below UNC on the force of two individuals. It's fascinating.

I'm also surprised that Cornell and Duke weren't included in the philosophy of religion rankings - especially Cornell, as they have a number of people in the department working in the field and a major research grant from the Templeton Foundation. 

 

 

 

Yeah, but I meant that the phil of art rankings are wonky in a different set of ways. For one thing, almost half the evaluators (43%) work primarily in the history of philosophy (especially the German tradition), not the subfield they're evaluating. A number of the programs in the top three groups just shouldn't even be there in the first place (at least one doesn't even have any non-emeritus faculty in the area! Another has a senior faculty member who's never published an article in the subfield that wasn't a review), and some of the departments ranked in group 3 (at least two) should be in the first two groups instead--even just going by the median and mode, there are weird outliers in group 3. In order for some of them to have a rounded mean of 3.5, one of the evaluators would have had to rate the program a 0 or a 1--which would actually be patently ridiculous for those programs. (The reverse seems true of Columbia, oddly enough--it's got a rather high average given its median and mode.)

 

These mean/median/mode problems also occurred in the last PGR: at least one of the evaluators is being screwy, and because there are just seven of them (eight last time; also, they're all dudes) and they can't rate their home/PhD departments, it seriously skews the results. And there's no real reason why there can't be more evaluators for this subfield, since it has a national association with hundreds of members.
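The sensitivity to a single screwy evaluator is easy to see with toy numbers. A quick sketch (hypothetical rating vectors, not actual PGR data) of how one outlier among seven evaluators drags the mean while leaving the median and mode untouched:

```python
from statistics import mean, median, mode

# Hypothetical scores on the PGR's 0-5 scale; seven evaluators, as in
# the phil-of-art panel discussed above. Not actual PGR data.
consensus = [4, 4, 4, 4, 4, 4, 4]
with_outlier = [0, 4, 4, 4, 4, 4, 4]  # one screwy evaluator

# The single outlier pulls the mean down by over half a point...
print(round(mean(consensus), 2), round(mean(with_outlier), 2))  # 4 vs 3.43
# ...while the median and mode don't move at all.
print(median(with_outlier), mode(with_outlier))  # 4 4
```

With ~300 evaluators in the overall rankings, one outlier barely registers; with seven, it moves a program across group boundaries.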


I'd be curious to know if those who think the PGR methodology is flawed also think that the PGR is hopelessly flawed and counterproductive, or if they might have suggestions for improvement that might resolve their worries.

 

The point about the snowball sampling, for instance, is a true description of the method. Leiter picks the board and the board picks the reviewers. But to my knowledge, people aren't arguing that reviewer X is unqualified or that non-reviewer Y got snubbed. It seems like the reviewers are a field of relevant experts. Is this not so? If it is so, then the methodology produces reasonable results. If we cannot trust a field of relevant experts, who can we trust? And if we are complaining about implicit bias, how can we hope to have a human-based ranking system at all? (maybe the unconvinced think we just can't) 

 

I'm also not sure what could replace the PGR. We have seen rankings based on publication volume, for instance. Such rankings are useless. Rankings based on placement would be interesting, but always backward looking. So I'm not sure what the alternative is.

 

The way Leiter makes it seem, the main body of dissenters is the SPEP crowd, the continental folks who are by and large completely excluded from the rankings. It's true that continental-heavy programs aren't ranked, like Emory, SUNY, Vanderbilt, Fordham, DePaul, etc. But the PGR is widely regarded as a ranking of analytic departments and doesn't advertise itself as a ranking of continental departments, at least from my understanding.

 

Who has the kind of extensive knowledge needed to evaluate the quality of literally dozens of programs? Aren't these philosophers far too busy with their own research, teaching and educational responsibilities to be familiar with the work of everyone in an entire department, much less dozens of such departments?

 

If it doesn't judge quality, just reputation, then the inevitable results are a feedback loop in which case what people are 'relevant experts' on is uninteresting. And besides, despite the many protests that the PGR doesn't claim to be about quality, that is the rhetoric that surrounds it (and Leiter's rhetoric in general, especially regarding so-called 'party-line continentals' - we have to protect against crap philosophy!). That's how students think: they don't think "Oh boy, I got into Harvard, what a reputable program!" they think "Oh boy, I got into Harvard, what a great program!" (I am not implying that Harvard isn't a great program, btw). 

 

I don't know, statistically, what the main body of dissenters is, but there are quite a few who are not SPEP folks. Furthermore, even if SPEPies were the main body, the idea that the PGR claims just to do analytic ranking is false (notice it has 19th and 20th century continental sections in the specialty rankings) and nonetheless, the attitude that SPEP-style philosophy (whatever that is) and Leiter-style philosophy (whatever that is - it includes continental, just not 'party-line continental' or most French continental philosophy) can have their own rankings just engenders a divide that always has been, is, and always will be a load of horseshit.

 

Here's roughly how I see it: maybe some form of specialty ranking is feasible (I am skeptical). Placement statistics (ranking or no) will be useful for prospective students. The fact that they are backward-looking is a much less serious flaw than the ones that have been noted for the PGR (any data is backward-looking - just because reputable faculty X is there this year does not mean they won't be taking a job elsewhere during the ~7 years you are doing your PhD), so it always strikes me as strange that defenders of the PGR take the "It's flawed, but it's the best we can do" line for the PGR but take the backward-looking nature of placement to be a reason not to have such a ranking.

 

Otherwise, no good ranking is feasible. I am unmoved by the objection that someone else, less qualified, is going to do the ranking anyway (an organization outside of philosophy, say). Let them. If we make our own ranking, we're going to start taking it seriously, and if we start taking it seriously, we'll be taking something seriously that is at best seriously flawed and at worst something that magnifies privilege.

 

To end on a constructive note, though, if it were true that alternative rankings are inevitable and that people will inevitably take them seriously, then I suppose I would have to be on-board with philosophy having its own ranking if it were done well (relatively speaking; I am pretty sure it is impossible to do such a thing well in absolute terms). The problem is the PGR is not done well and most criticism (constructive or not) is met by dismissal.

Edited by Monadology

If it doesn't judge quality, just reputation, then the inevitable results are a feedback loop in which case what people are 'relevant experts' on is uninteresting.

 

Perhaps you come from a privileged position of being well integrated into the academic philosophical community and in a position to judge for yourself the objective worth of a philosopher's quality, but many undergraduates, myself included a few years back, are not. I didn't know who was well-regarded in philosophy. I didn't know which programs were regarded as good. My professors didn't know either because they were largely continentalists and doing their own thing.

 

So while reputation results might not be interesting to you, they're pretty interesting (and valuable) to other people who would like to know about these matters, subjective and feedback-prone as they may be. If the philosophical community thinks that program x is a top-10 program, I'm going to believe there is some merit to their claim and use it to orient my own school search.

Edited by Establishment

I came from the position of attending a not-well-known small liberal arts college, so during my application process prior to getting through my MA program, I was in no such privileged position. Nor do I take myself to be in a position to judge the objective worth of a philosopher's quality (I don't even know what such objective worth is supposed to be). 

 

Having adequate placement data in a centralized location would easily compensate the most significant form of ignorance a prospective applicant might have. Anything else can be overcome by reading in one's areas of interest, looking at the bibliographies of relevant SEP articles, and especially reading what faculty have written from programs that place well and identifying where philosophers you find appealing teach. Specialty rankings could also be helpful. While I'm hesitant about the idea, they are much more likely to be helpful than an overall ranking. 

 

If people's aim is to go to programs that are reputable or 'objectively' good, independently of placement record or whether they are personally appealing, well, I just can't relate to that. I don't know what's supposed to be valuable about reputation other than placement, and I don't see how a nameless mass of philosophers is going to better track the philosophical environment someone will enjoy being in better than that person themselves (though, notably, people are pretty bad at predicting what they will enjoy regardless). 

 

EDIT: I guess the motive would be to get a good education, independently of job prospects or personal interests? Is that the idea? I guess I can understand that, though there are still issues (is the PGR really representative of the philosophical community? Does it track the quality of mentorship a student is likely to receive, which seems really important to a good education, or just research reputation?)

Edited by Monadology

 

The point about the snowball sampling, for instance, is a true description of the method. Leiter picks the board and the board picks the reviewers. But to my knowledge, people aren't arguing that reviewer X is unqualified or that non-reviewer Y got snubbed. It seems like the reviewers are a field of relevant experts. Is this not so? If it is so, then the methodology produces reasonable results. If we cannot trust a field of relevant experts, who can we trust? And if we are complaining about implicit bias, how can we hope to have a human-based ranking system at all? (maybe the unconvinced think we just can't) 

 

 

I think the sampling method is more accurately called "Brian Leiter asks some friends to ask some friends". If it's a snowball sample, it's the smallest possible kind. Snowball samples are already considered to be a method with a lot of bias, so only having two rounds of snowballing is going to result in substantial bias. While it's hard to completely remove bias from a snowball sample, having multiple rounds of recruitment can help reduce it; no such effort is made in the PGR. Basically, everyone who completes the PGR has one degree of separation from Brian Leiter; that seems like a potentially quite large source of bias.  Also, there are criticisms that some reviewers are unqualified (in some specialty rankings) and that some people were snubbed. There are also complaints that the pool of reviewers is far too homogeneous (many reviewers came from a relatively small number of schools). 

"Complaining" about implicit bias is the first step to combating it; although I'd really rather say I'm recognizing its potential effects. There's good research that reflecting on a time you were biased is one of the best ways to limits it effects (whereas thinking about a time when you were successful can actually increase implicit bias). 

 

Perhaps you come from a privileged position of being well integrated into the academic philosophical community and in a position to judge for yourself the objective worth of a philosopher's quality, but many undergraduates, myself included a few years back, are not. I didn't know who was well-regarded in philosophy. I didn't know which programs were regarded as good. My professors didn't know either because they were largely continentalists and doing their own thing.

 

So while reputation results might not be interesting to you, they're pretty interesting (and valuable) to other people who would like to know about these matters, subjective and feedback-prone as they may be. If the philosophical community thinks that program x is a top-10 program, I'm going to believe there is some merit to their claim and use it to orient my own school search.

Reputational data might be interesting or worthwhile to someone, but the PGR reports the reputational data of Brian Leiter and his friends, reflecting a limited view of the discipline. The PGR reflects Leiter's/the Board's ideas about what kinds of areas are worth studying, even within analytic philosophy. M&E has the biggest effect on the ranking, while philosophy of race, feminism, Chinese philosophy, etc. have substantially less influence. The pool of evaluators is quite small compared to the size of the discipline (and the pool is not a representative sample). The PGR might capture reputational data about what the evaluators think of grad programs, but I highly doubt that their judgments represent "the philosophical community" at large.

Edited by perpetuavix

Reputational data might be interesting or worthwhile to someone, but the PGR reports reputational data of Brian Leiter and his friends, reflecting a limited view of the discipline.

 

"This report ranks graduate programs primarily on the basis of the quality of faculty. In October 2011, we conducted an on-line survey of approximately 500 philosophers throughout the English-speaking world; a little over 300 responded and completed some or all of the surveys."

 

Leiter and a ~50-person board (which includes people like David Chalmers, Alex Rosenberg, Jason Stanley, Michael Forster, Allen Wood, Timothy Williamson, etc.) come up with a list of people to invite. I'm not really sure what more you want short of just emailing every professor out there.

 

The PGR reflects Leiter's/the Board's ideas about what kinds of areas are worth studying, even within analytic philosophy. M&E has the biggest effect on the ranking, while philosophy of race, feminism, Chinese philosophy, etc. have substantially less influence.

 

This seems like a pretty conventional opinion. Not just the areas you listed, but say mathematical logic too, which I do, and other areas. But these are minor areas compared to the big hitters. Not all areas of study are equal to each other. But this is beside the point, as there's no way to indicate this based on the information given (http://www.philosophicalgourmet.com/reportdesc.asp). Some evaluators judge a department's strengths on the whole (so see the former point), and others judge just based on their own areas of expertise.

Edited by Establishment

Isn't that an issue in itself?

 

Perhaps, but the evaluators' responses converge nonetheless, so it would seem it works out somehow.

 

Given that there are ~300 evaluators, perhaps for the most part people grade one way. Or, perhaps, there's a correlation between a department getting high scores based on their general philosophical strength, and a department getting high scores based on having multiple, highly rated particular field strengths.

Edited by Establishment

One might do a statistical analysis on this page: http://www.philosophicalgourmet.com/2011/departments.asp

to see if there is a correlation.

 

Not that this would directly prove/disprove anything, as the subfield specialty rankings referred to here are a distinct evaluation from the overall rankings by the ~300 evaluators that I was referring to above.
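A minimal sketch of the suggested analysis, with made-up numbers (real scores would have to be pulled from the PGR pages linked above): rank-correlate departments' overall scores against a count of their highly rated specialties using Spearman's rho.

```python
# Spearman rank correlation from scratch; the data below are
# hypothetical stand-ins for PGR overall means and specialty counts.
def rank(values):
    """Map each value to its rank (1 = largest), averaging ties."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

overall_score = [4.8, 4.6, 4.3, 3.9, 3.5, 3.1]  # hypothetical PGR means
strong_specialties = [12, 10, 11, 6, 4, 3]      # hypothetical counts

print(round(spearman(overall_score, strong_specialties), 3))  # 0.943
```

A rho near 1 on the real data would be consistent with the correlation suggested above (though, as noted, it wouldn't by itself settle which way the influence runs).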


Perhaps, but the evaluators' responses converge nonetheless, so it would seem it works out somehow.

 

Given that there are ~300 evaluators, perhaps for the most part people grade one way. Or, perhaps, there's a correlation between a department getting high scores based on their general philosophical strength, and a department getting high scores based on having multiple, highly rated particular field strengths.

 

Fine, but folks have suggested, and you've acknowledged, that the convergence could be explained-- to some degree-- by other factors like feedback loops. I don't really see that we've got reason to think that most people grade one way. And while I buy into your last suggestion, that there is strong correlation between high 'general' scores and high scores based on strong particular fields, that's precisely a criticism that some folks marshal against the PGR-- that it is overly weighted towards analytic M&E departments. Many of the folks pushing against the PGR are doing so precisely because they want to push against a conception of philosophy that closely ties 'general strength' (importance) to those particular subfields. 

 

And even if we bracket all that, and say, as you seem to have above, that the PGR really tracks the opinions of some central members of the discipline* (although we should recognize, I think, that the central status of some board members is determined, in part, by the PGR rankings themselves), all that suggests is that the PGR should be seen as one metric among many. But-- and here's where the September Statement comes in without dealing with Leiter's 'moral character'-- the head of the PGR has a history of viciously ostracizing other proposed metrics (e.g., Dicey Jennings's placement data analysis). If the PGR is really just a reflection of opinions on departmental reputation, it should be treated as a single metric among many. But Leiter definitely doesn't behave that way. So either you've got to reject his rosy-eyed view of the PGR and get behind Dicey Jennings (at least in the spirit of producing alternative metrics), or say that the PGR tracks something more substantive than (PGR-influenced) impressions of department reputation.

 

*I'd be shocked to see Leiter himself concede this. We can go back and look if you'd like, but I definitely have the impression that he's made claims that the PGR tracks some fact of the matter (to a significant degree of accuracy). 


(As an aside, I was really shocked to see Leiter come out so vehemently against Dicey Jennings's work on placement data, since he played a significant role in getting philosophy programs to make this data accessible to applicants in the first place. Given the venom of his original posts against Dicey Jennings's report-- which was subsequently edited-- it's hard for me to see this as anything but the head of the PGR trying to keep it as the only game in town. (Not necessarily suggesting conscious motive to Leiter, just that it seems like the most plausible structural analysis.))

Edited by flybottle

"This report ranks graduate programs primarily on the basis of the quality of faculty. In October 2011, we conducted an on-line survey of approximately 500 philosophers throughout the English-speaking world; a little over 300 responded and completed some or all of the surveys."

 

Leiter and a ~50-person board (which includes people like David Chalmers, Alex Rosenberg, Jason Stanley, Michael Forster, Allen Wood, Timothy Williamson, etc.) come up with a list of people to invite. I'm not really sure what more you want short of just emailing every professor out there.

 

 

This seems like a pretty conventional opinion. Not just the areas you listed, but say mathematical logic too, which I do, and other areas. But these are minor areas compared to the big hitters. Not all areas of study are equal to each other. But this is beside the point, as there's no way to indicate this based on the information given (http://www.philosophicalgourmet.com/reportdesc.asp). Some evaluators judge a department's strengths on the whole (so see the former point), and others judge just based on their own areas of expertise.

 

Not all areas of study are equal to each other? Wow. Just wow.


The APA has over 9,000 members (and certainly doesn't have every member of the discipline). 300 could be a good sample size to represent the discipline, but not if the sample taken is unrepresentative of the discipline. Even if you think the sample should be "distinguished" members of the discipline, you're still relying on Brian Leiter and the Board to pick those people (which brings me back to the point about implicit bias...). 

Problems with sampling:

http://choiceandinference.com/2012/04/17/manufactured-assent-the-philosophical-gourmet-reports-sampling-problem/

http://choiceandinference.com/2012/04/19/more-on-the-educational-imbalance-within-the-pgr-evaluator-pool/

Implicit bias in rankings:

https://feministphilosophers.wordpress.com/2014/09/26/rankings-and-implicit-bias/

Also, some of Dicey Jennings' placement data that flybottle was talking about:

http://www.newappsblog.com/2014/07/job-placement-2011-2014-comparing-placement-rank-to-pgr-rank.html


Not all areas of study are equal to each other? Wow. Just wow.

 

 

I'm sorry to say it, but Establishment isn't really wrong on this (and I say this as someone perilously close to defending and to the job market, whose AOS is one of those bit players). Not every AOS commands equal professional respect, especially if the AOS in question is a subfield of a subfield--if you want a sense of how things lie, check out the subfield poll Leiter ran a while back: http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/results.pl?id=E_0176acd76a7cc5b9

 

That said, I do think we'd get a better ranking/PGR if the overall rankings were based on the specialty rankings (provided, of course, that the paucity of evaluators for some subfields could be rectified). You could weight them in tiers, or treat them all equally--either way, I think it would be better.


Thanks for the link to http://www.newappsblog.com/2014/07/job-placement-2011-2014-comparing-placement-rank-to-pgr-rank.html. It's at least a helpful source.

 

I want to point out that the placement record of SLU lends more support to my view that SLU deserves a stronger reputation than it has among the PGR evaluators. I'm not sure why SLU just broke into the T50. When I lived in St. Louis, I met several people in the PhD program there. They said good things about the program. I do remember that the theists were well-represented among those students. Also, the chair is Theodore Vitali, who is a theist. The program is known for strengths in medieval and religion, which are areas of philosophy that fit into what Establishment called "minor areas" of philosophy.

 

For the sake of argument, assume that the PGR evaluators have a view similar to Establishment's. I think the reply is that there's something slanted about a survey of only people who believe that philosophy of religion (or any other subfield) is a "minor area."

 

Maybe another way to state Establishment's point is this: some subfields of philosophy are foundational to the other fields, and it's sort of an analytic truth about something foundational that its importance is (at least in some sense of the word importance) greater than those things that are not foundational. The "minor areas" are built on foundational areas. It's impossible (particularly in our time) to be a great philosopher of religion without also being a great epistemologist. I think when Establishment said that not all areas of philosophy are equal, perhaps s/he means that, all other things equal, the department with the #1 epistemologist is stronger than the department with the #1 philosopher of religion. That's a controversial position, but it's not a ridiculous position. A great philosopher once told me that epistemology is the most important subfield in philosophy. I think he meant that it's foundational. He didn't mean to take anything away from other areas. After all, why do we care about epistemology? Presumably in part because we want to answer life's important questions, some of which are contained in, e.g., the philosophy of religion. It would be odd to call philosophy of religion foundational to philosophy (unless you mean that it's foundational historically or something).

Edited by ianfaircloud

Let's assume that some areas are foundational. Isn't it true, though, that many people working in 'subfields' are also, more likely than not, working on those areas? This seems to be something you would agree with, Ian.

 

So wouldn't the view have to be something more than 'epistemology is foundational'? After all, many feminist philosophers do work on epistemology and, indeed, on the very core of epistemology. It seems like it would have to be more like 'contemporary epistemology done in the inherited tradition of a subsection of anglophone philosophy': whatever definite description picks out the sort of epistemology done at top-ranked departments. That seems a much harder view to defend than the much more general view that epistemology is foundational.



Absolutely, I think that's often true. At least, it's true for my (minor) subfield, which is in large part a subfield of metaphysics (even if it doesn't get much love from metaphysicians!). But it's still pretty easy to distinguish between metaphysics proper and metaphysics qua my subfield. And once that distinction gets made, we start getting into trouble, because that's when subfields start isolating themselves (or becoming more isolated) from the "foundational core": specialists in my subfield have their own national and international conferences, their own journals, etc., and their work is rarely printed in "generalist" journals (partly because those journals don't have much of a history of printing work in that area, and partly because the specialist audience is to be found elsewhere). So they become more isolated from metaphysics proper (or phil. language proper, or epistemology proper, etc.), and that in turn compounds the perception that theirs are less important subfields.



Excellent point. Those dastardly SPEP folks are surely doing a great deal of metaphysics, epistemology, and ethics. The sort of work tied to the general rankings is much more particular than 'questions of being' or 'questions of knowledge.' That link could, again, be plausible, but it's a very substantive (and thus contentious) position.



Oh, yes, of course I agree with this, particularly the last three sentences of your post.

