Posts posted by mars667

  1. 11 minutes ago, thebest said:

    Hey guys,

    Do we find out which group we are in as well when they send the comments?

    I don't think so, but if you've got 3 reviewers, you probably made it through the first round.

  2. 5 minutes ago, ChemGal said:

    I don't think they put too much emphasis on where you say you want to work if you're not a graduate student yet. So I wouldn't be too concerned about that. A lot more weight will be put into letters of recommendation and your statements. 

    Agreed. Undergraduate applicants apply to the GRFP before they've even submitted graduate school applications, so I think it's understood that the chances of attending the institution you chose for your proposal are pretty iffy unless you're already enrolled in a program.

    I think the location criteria would really only come into play if you make it into Quality Group 2, and I'd assume it's based on the location of the schools you've previously attended.

  3. 7 minutes ago, GoldenDog said:

    Really this was an excellent post. Do you know how grad versus undergrad plays into the decision, if at all?

    The old paper they mentioned (linked here) describes how the process ran about 20 years ago, and we assume it's still similar. However, there was no mention of whether grad and undergrad applicants are judged differently. Can anyone speak to this?

    Thanks!

    I've found a few posts about that in the depths of the forums, but I can't seem to track them down now. I think the gist was that applications are split into different groups for undergrads, 1st years, and 2nd years, and each group has different standards when it comes to publications and general knowledge of the topic. I'll try to track it down tomorrow if no one beats me to it.

    Also, it looks like that link just goes to the start of the forum, which still has good info, but there's a great comment with some helpful info from a former panelist on page 17 (I thought linking might reduce the clutter... whoops).

    On 4/3/2011 at 6:29 PM, hello! :) said:

    A while back, I came across these notes from someone who had served on one of the NSF GRF review panels a couple of years ago. I don't remember where I found them, and I haven't been able to find them again on the web... maybe this person had to take them down for whatever reason. In any case, I'll post them here as "notes from an anonymous NSF GRF review panelist," since I think they provide some very helpful insights into the whole process.

     

    Thank you, Anonymous Panelist!

     

    * * *

    Notes after serving on the review panel for the NSF Graduate Research Fellowship Program

    ## Executive Summary

    + Fellowship applications in the field of Mechanical Engineering are evaluated by a panel of ME faculty. Remember your audience when you write.

    + Two criteria—intellectual merit and broader impacts—have equal weight this year.

    + Roughly 15 minutes to read an entire application—make your point clearly and quickly.

     

    Roughly 10% of those _who apply_ for these fellowships will receive them. The applicants are all amazing individuals.

     

     

    ## The Process

     

    All of the applications are evaluated by a panel of engineering faculty from a variety of schools, including both research and teaching schools. Applicants in the same field (e.g., Mechanical Engineering) are evaluated by the same panel. This year the mechanical engineering panel we participated in had more than 20 members and evaluated roughly 400 applications. The applications are sorted by level: level 1 is for those in their final undergraduate year, level 2 is for those who have just started their graduate programs, and there are also levels 3 and 4. While all applications at a given level are evaluated together (with criteria appropriate to that level), the final decisions on who to fund are not made by level.

     

     

    NSF has two basic criteria for evaluating the applications: intellectual merit and broader impacts. _They are weighted equally._ After a “calibration exercise” designed to arrive at a panel-wide understanding of what constitutes intellectual merit and broader impacts, each application is read by two panelists and scored (out of 50) in each category. One panelist reading a single application takes 15-20 minutes. Panelists cannot read any applications for which they have a conflict of interest.

     

    At the end of these first and second reads, applications get two Z-scores, where

     

     

     

    Z = [(Application's Score) − (Mean Application Score for that Panelist)] / (Application Standard Deviation for that Panelist)

     

    The Z-score adjusts for the fact that some panelists score applications much higher (on average) than others. The average of the Z-scores is used to rank the applications. Applications in the top 35% of the ranking get a third reading, as do any applications with a wide discrepancy between their Z-scores. (The discrepancies are identified by computer and by the panelists.) The remaining 65% of the applications are retired, meaning they get no further consideration. After the third reading, applications that still have widely varying Z-scores are returned to the 3 panelists for additional discussion and a resolution.
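
    For the curious, here's a minimal sketch of that normalization in Python. The function name and panelist scores are made up for illustration; this is just the formula above, not NSF's actual code:

    ```python
    from statistics import mean, stdev

    def z_score(raw, panelist_scores):
        # Standardize one raw score against everything that panelist scored.
        return (raw - mean(panelist_scores)) / stdev(panelist_scores)

    # The same raw score of 40 looks very different depending on the panelist:
    lenient = [45, 42, 40, 44, 38]  # generous panelist, mean ~41.8
    strict = [30, 25, 40, 28, 22]   # tough panelist, mean 29.0

    print(z_score(40, lenient))  # about -0.63: below this panelist's average
    print(z_score(40, strict))   # about +1.60: well above this panelist's average
    ```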

     

    Finally, a new ranking is created. The top 20 or so in this ranking are in Quality Group 1—definite funding. (Notice that this is only 5% of the applications.) The next 40 or so are in Quality Group 2—honorable mention and possible funding. (The top of this group may get funded, depending on resources; this group is also mined for recipients of special focus awards, programs for under-represented groups, etc.) The next 40 or so are in Quality Group 3—honorable mention. The rest are in Quality Group 4 and don't get an award.
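
    And a sketch of the bucketing into quality groups, using the rough cutoffs above (the exact counts vary year to year, so these numbers are purely illustrative):

    ```python
    def quality_group(rank):
        # Rank 1 is the best application; cutoffs follow the rough
        # counts described above, out of ~400 applications.
        if rank <= 20:
            return 1  # definite funding
        if rank <= 60:
            return 2  # honorable mention, possible funding
        if rank <= 100:
            return 3  # honorable mention
        return 4      # no award

    print([quality_group(r) for r in (10, 45, 80, 250)])  # [1, 2, 3, 4]
    ```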

     

     

    ## Criteria for Evaluation

     

    Here are the criteria we used in evaluating level 1 applications. Keep in mind that each panelist develops their own criteria based on the panel discussion, so not every panelist will use the same standards; still, these will give you the general idea behind the ratings. They may also seem very harsh, but that turns out to be essential, since all of the applications are very strong.

     

     

    ### *Intellectual Merit*

     

    >#### Excellent

    >> 1. The research proposal clearly describes truly innovative or transformative research. (Transformative research transforms the way the field or society will think about the problem.)

    >> 2. The student is academically well-prepared to conduct the research: outstanding letters of recommendation, a good GPA, solid GREs. The GPA does not need to be 4.0, but it should be good. The GREs I saw were not as high as I anticipated.

    >> 3. The student has a clear passion for their work which comes across in their writing and their actions to date.

    >> 4. The student has prior research or industry experience that demonstrates the ability to define, initiate, and complete projects with substantial independence. Avoid describing senior design projects or class projects; they were generally not persuasive.

    >#### Very Good

    >> (2), (3), and (4) are still present. The research is solid (more than incremental) but not transformative or truly innovative. Or: (1), (2), and (3) are present, but not (4).

    >#### Good

    >> (2) and (3), research is solid, but no (4).

    >#### Fair

    >> (2) and (3). Research proposal is weak and student has little experience.

    >#### Poor

    >> Student is not well-prepared, research plan is ordinary and sketchy, and the student has failed to convey any passion for their work.

     

    ### *Broader Impacts*

     

    Be sure to address this topic: Broader Impacts is half of the score, and many applicants who were Excellent in Intellectual Merit did not address this area sufficiently.

     

    Also, be sure to realize that almost everyone who applies for these grants wants to teach at the college level. Wanting to be a teacher at the college level is not evidence of broad impact.

     

    _The identity of an individual does not constitute a broad impact._ This was explicitly discussed at the panel and explicitly ruled out (by NSF) as a broad impact. The fact that you are a female, Hispanic, Native American, African-American, etc does not, in itself, qualify as a broad impact. Also, personal struggle (health/economic/family) does not constitute a broad impact. Whoever you are, you need the types of broad impacts discussed under “Excellent” below. However, if you are part of an under-represented group or have overcome substantial difficulties in getting to your current position, do put this information in your personal statement if you want it to be considered. After the proposals are ranked, those who fall into these categories in Quality Group 2 will be picked up for additional funding opportunities.

     

    >#### Excellent

    >> 1. Demonstrated record of substantial service to the community, K-12 outreach, commitment to encouraging diversity, etc. Leadership alone is a plus, but most highly ranked applicants have ongoing outreach/service activities.

    >> 2. Clear explanation of the broader impacts of the research. How will it affect society, and why should the government fund your project over someone else’s? If the project’s success would have huge impacts on its engineering field, it would fall a bit here and a bit in Intellectual Merit. (Different panelists had different views on this.)

    >#### Very Good

    >> (1) or (2) is somewhat weaker. (1) still has a demonstrated record (not just “I will do...”), but the record is weaker; or (2) is still there, but the impact is less dramatic.

    >#### Good

    >> Both (1) and (2) are present, but weak.

    >#### Fair

    >> (1) or (2) is completely missing, but the one that is present is at an Excellent level.

    >#### Poor

    >> (1) or (2) is completely missing, and the one component that is present is only at a Very Good level.

     

     

     

    * * *

    Edit: Had to fix some dumb formatting issues.

  4. In case anyone's interested! (and if not, sorry for the spam!)

    On 3/30/2016 at 6:48 PM, Pitangus said:
      On 3/30/2016 at 4:29 PM, Eigen said:

    I deferred one year; in retrospect, I should have deferred two. There's no assurance that they will continue, but NSF has given cost-of-living raises each year (including retroactive raises): 30k to 32k to 34k.

    Everyone is comparing letter rankings (because that's what you have), but that's not what's used to decide the awards. Think of letter rankings as an A/B/C system: the reviewers still give numerical scores for each application (which become Z-scores). You see your letter grade, but there was also a numerical score associated with your application.

    It also changes from reviewer to reviewer (scores are normalized to some degree to weed out too-easy and too-hard reviewers), as well as from discipline to discipline. They try to keep the awards in each discipline proportional, so if you're in a very popular subdiscipline with lots of applicants, you might need a more competitive score to get an award than someone in a smaller discipline. The school you're attending also matters to some degree (they try to use NSF awards to spread money to good applicants at institutions with fewer other NSF grants), as well as your background and demographics.

    On 3/30/2016 at 6:48 PM, Pitangus said:

    A slight clarification to this:

    In previous years (and probably now as well), each reviewer assigned an application a numerical score of 0 - 50 for IM and BI. I don't have the old reviewer's guide in front of me, but it went something like... 

    40 - 50 = E

    30 - 39 = VG

    20 - 29 = G

    10 - 19 = F

    0 - 9 = P

    So you can see how the letter scores can be misleading on their own: a numerical score of 39 would give one applicant a VG while a 40 gives another an E, even though the two scores are only one point apart.
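
    A quick sketch of that banding, assuming the cutoffs above are right (the function name is mine, not from the reviewers' guide):

    ```python
    def letter(score):
        # Map a 0-50 numerical score to its letter band.
        if score >= 40: return "E"
        if score >= 30: return "VG"
        if score >= 20: return "G"
        if score >= 10: return "F"
        return "P"

    print(letter(39), letter(40))  # VG E: one point apart, different letters
    ```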

     

    The Z-scores are the standardization of the numerical scores. The formula is something like:

    Z-score = (applicant's score - mean score from that reviewer) / std dev of reviewer's scores

     

    If you imagine an applicant who scored all 40s from reviewers who gave high average scores, then it makes sense that there will be applicants who scored all Es but did not win an award/HM. 
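
    To make that concrete with invented numbers: a reviewer whose scores average above 40 can leave a straight-40 ("E") application below their own average.

    ```python
    from statistics import mean, stdev

    # Hypothetical generous reviewer whose scores average above 40:
    reviewer_scores = [44, 46, 40, 45, 43]
    z = (40 - mean(reviewer_scores)) / stdev(reviewer_scores)
    print(round(z, 2))  # about -1.56: an "E" score of 40 still ranks below average
    ```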

     

    Also, the diversity criteria only apply to applicants ranked in Quality Group 2 when it comes to deciding who gets an award vs an HM. Applicants in Quality Group 1 (the top group of applicants according to their ranked Z-scores) all get awards no matter what their background. 

    On 4/2/2015 at 1:18 PM, Pitangus said:

     

    This gets brought up multiple times every year, and I'm probably starting to sound like a broken record to those who have actually read through the threads, but yes, this is how it has been done according to the actual reviewers' guide (from 2008, and I would guess it hasn't changed much):

     

    A given subject panel distributed the applications so that each one went to two reviewers. The reviewers scored the application on a 1-50 scale for IM and BI (this scale corresponds to the P-E scores applicants see), and applications that scored below the 65th percentile after two reviews were retired without being read by a third reviewer. The remaining proposals got a third reviewer and were then ranked based on the average of their Z-scores (scores standardized against all scores given by a reviewer, to help offset reviewer variability). The reviewers then deliberated to finalize the ranking.
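
    A sketch of that cutoff step, keeping the top 35% of average Z-scores for a third read (purely illustrative; the percentile handling here is a simplification):

    ```python
    def survives_cut(avg_z, all_avg_z):
        # An application survives if its average Z-score after two reads
        # is in the top 35% (i.e., at or above the 65th percentile).
        ranked = sorted(all_avg_z, reverse=True)
        cutoff = ranked[round(len(ranked) * 0.35) - 1]
        return avg_z >= cutoff

    scores = [1.2, 0.3, -0.5, 0.9, -1.1, 0.0]
    print([survives_cut(z, scores) for z in scores])
    # [True, False, False, True, False, False]: only the top ~35% get a third read
    ```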

     

    Applications were then sorted into four Quality Groups by their ranks:

    Applicants in Group 1 are all awarded fellowships

    Applicants in Group 2 "receive awards to the limit of funds available using criteria such as geographical region, discipline, and other factors" (so this is probably where diversity comes in); the rest receive HMs

    Applicants in Group 3 get HMs

    Applicants in Group 4 do not get awards or HMs (this group includes applications that were below the 65th percentile after two ratings and were retired before the third rating).

     

    So to conclude again: the scores applicants receive don't tell you much about how you were actually ranked, so it's pointless to compare how many Es and VGs you received with other applicants.

     

    More helpful things:

  5. I'd assume the panels wrapped up sorting applications into quality groups in February, but the Group 2 reviews probably take up a lot of the GRFP staff's time (has the review process been discussed in this thread?).

    I wonder if the Tuesday/Friday release dates might be related to the panel schedule... I think someone else on here posted (from nsfgrfp.org) that the panel reviews were either Mon/Thurs or Tues/Fri, but maybe those days are designated for the entire GRFP review process, not just panel reviews. If the GRFP staff review apps on Mondays/Thursdays, that would match up with the Tues/Fri releases that happen in the early morning. Just a thought!

  6. 19 minutes ago, nanograd said:

    Was looking over the NSF GRFP website and saw this:

    "For GRFP applicants and reference writers who are located in Puerto Rico, the US Virgin Islands and other islands in hurricane-impacted areas there will be an extension of the application and reference letter deadlines until 5:00 p.m., submitter’s local time, Friday December 29, 2017. Submitter’s local time is determined by the applicant's mailing address. "

     

    I knew there was a delay but didn't realize it was until Dec. 29th. Do you think this will substantially change the release day?

    I'd also think that having an online application system does some of the work for them--at least, they don't have to sort applications by hand, since we select our field/year/etc. And from what I've read about the review process, it seems like the bulk of it happens in January anyway. I wouldn't expect it to delay the process much, if at all.

  7. Whelp, I think I just used up an entire year's worth of luck. My headphone wire got snagged on the backpack of someone walking (UNNECESSARILY) too close to me on a subway platform--the wire flung towards the tracks, but my phone somehow managed to unplug and fall flat, right next to me. Praise Jah (and please take off your backpack on crowded subways, for the love of god) (and don't be a personal space invader)  

    However, I'm now suspecting that I will end up with curmudgeonly/unhelpful reviewers.

  8. I don't think the reasons behind applicant hopefulness are as complicated as all that (at least not for everyone). I only made waitlists, which tbh is still a pleasant surprise (high school dropout over here). I'm crossing my fingers for a GRF so that I can maybe actually turn one of those waitlists into an acceptance. A GRFP award will open a lot of doors for a lot of people and give them opportunities they may not have had otherwise. Seems pretty simple to me.

    Also, does impatience necessarily suggest desperation? I don't think so. I think it's perfectly reasonable to be excited about something that could potentially have a pretty big impact on your career/life. 

    Edit to add that the negativity popping up in this forum is pretty disappointing. As NSF GRFP applicants, we each had to include statements on how we hope to improve diversity and retention in our respective fields; some of the comments being made here are just mean and seem to go against NSF's values of supporting fellow scientists. If you can't support the excitement of your peers, how can you expect to support and encourage the excitement of younger generations of scientists? But what do I know... I'll end this rant with some words of wisdom that I feel are applicable to this situation (and pretty much all situations): "be excellent to each other."

     

  9. 17 minutes ago, STEMed13 said:

    Not quite the maintenance we were hoping for...

    [screenshot of a FastLane closure advisory]

    jfc.

    On a side note, that is a long closure... I bet it's related to the website changes and has nothing to do with us.

  10. 18 minutes ago, carlsaganism said:

    Is there a source that claims a Friday release? 

    The only thing I've seen is on Reddit, from the "I smell BS" user previously mentioned in this thread. I wouldn't put too much stock in it.

  11.  

    4 minutes ago, carlsaganism said:

    nicee, I'm 2nd year in grad school, I'm old = (

    but I am also working indirectly on black holes = )

    haha, maybe academically--it took me a while to find something I liked enough to go to college for.

    I've been studying them in cosmological simulations, which has been really fun! I get to look at lots of pretty (simulated) pictures.

  12. 4 minutes ago, carlsaganism said:

    if you don't mind me asking, what year are you in? and what area in physics/astro?

    Senior undergrad. My research so far has been on brown dwarfs and black holes in dwarf galaxies, but I'm hoping to get into direct imaging in grad school.

  13. 6 minutes ago, carlsaganism said:

    Haha, getting my hopes up for an early release but then not getting a word is too mentally taxing for me...

    Agreed! I think I'm going to redirect my energy into something more productive and less speculative... like homework or something, I guess...

  14. 22 minutes ago, Straightoutta said:

    Based on my skimming through the last few years' worth of forums:

     

    It seems that the maintenance alert we're on the lookout for typically goes up the day before results are released (Monday, if results are posted Tuesday). This alert could be posted as late as 9pm on the day before announcements. It's usually a maintenance alert that lists the end time as 3am or 5am. Additionally, I'd say we're in the home stretch, given that last year's awards and honorable mentions are no longer listed. That list tends to go down a week or two before announcements.

    Great minds! Thanks for making me feel less creepy!

  15. I don't trust random redditors, especially not ones who avoid answering questions but comment on their downvotes. I think it'll be next week, based on what I've read while creeping through threads from previous years (um, this is a judgement-free zone, right?). There seems to be a pattern to the FastLane closures: one the weekend before announcements (Sat-Sun), followed by the previous year's list of awardees/HMs becoming unavailable (which seems to be where we're at now), and results the following week (the closure announcement typically goes up the day of, sometimes late at night!).

    Or maybe all of this waiting is finally getting to me and I am just losing my mind.

  16. 1 hour ago, Marinebio444 said:

    [screenshot of a new FastLane advisory]

    Was I the only one who got super excited when I saw there was a new advisory on the FastLane homepage only to see it wasn't a maintenance update?

    Definitely not lol. 

     

    I don't have much evidence to go on, and my brain is too fried to track things down again, but I think somewhere in the 2016 and 2017 topics they posted about maintenance closures over a Friday-Saturday six days before results were posted. I'm half-heartedly expecting results Friday, since the last maintenance was the 10th. Also, the default awards page now has the 2018 Awardees header...

    But I'm probably wrong. 

    Almost there!

    Edit: NVM! 2016's maintenance was March 18th-19th with results on the 29th. I think sleep and wishful thinking affected my memory. Anyway, still hopeful for this Friday or Tuesday (for no reason)!
