NSF GRFP 2017-18



4 minutes ago, SaltyHatter said:

I feel that. The days between are freaking awful. Refreshing Fastlane every two hours is how I'm staying sane at this point. 

Same! My productivity is way lower than it should be because I'm on fastlane all the time. :( 


4 minutes ago, engineerwoacause said:

So why are we thinking the re-upload of the 2017 list is indicative of our release? I may have missed something 

Pretty sure it's just speculation at this point. It's the first thing that's happened re: GRFP on fastlane since the list went down, so we're assuming it has something to do with the release.


I haven't seen anyone else mention this, but one of the reasons I'm anxious to see whether I got the award is to know whether I can contribute to my IRA this year, since contributions require W-2 income. If I don't get it, I'll be paid on an RA and be able to contribute right away, but if I do get it, I'll be paid by fellowship and have to open a new taxable account to save. Maybe it's actually financially better for me not to get the award. :rolleyes:


4 minutes ago, engineerwoacause said:

So no maintenance today.

Thoughts?

Maybe maintenance tomorrow? Or no maintenance needed anymore? Or they’ll just surprise us in the middle of the night tonight?


8 minutes ago, engineerwoacause said:

So no maintenance today.

Thoughts?

I've also heard of notifications coming until 9pm, so we could still get a maintenance notification tonight.

Edited by boilers23
typo

From 2011 to 2013, the maintenance message came up around 9pm the night before, and I think one time at 9:30am the day before, so I'd assume that if there's no message up by tomorrow night, it'll be next week. I also have a hard time believing they'd stray from Tues/Fri, though I'm unsure why they've kept it to those two days.


I'd assume the panels wrapped up sorting applications into quality groups in February, but the Group 2 reviews probably take up a lot of the GRFP staff's time (has the review process been talked about in this thread?).

I wonder if the Tuesday/Friday release dates might be related to the panel schedule. I think someone else on here posted (from nsfgrfp.org) that the panel reviews were either Mon/Thurs or Tues/Fri, but maybe those days are designated for the entire GRFP review process, not just panel reviews. If the GRFP staff review apps on Mondays/Thursdays, that would match up with the Tues/Fri releases that happen in the early morning. Just a thought!


In case anyone's interested! (and if not, sorry for the spam!)

On 3/30/2016 at 6:48 PM, Pitangus said:
  On 3/30/2016 at 4:29 PM, Eigen said:

I deferred one year; in retrospect I should have deferred two. There's no assurance that they will keep doing it, but NSF has given cost-of-living raises each year (including retroactive raises): $30k to $32k to $34k.

Everyone is comparing letter rankings (because that's what you have), but that's not what's used to decide the awards. Think of the letter rankings as an A/B/C system: the reviewers still give numerical scores for each application (Z-scores). You see your letter grade, but you also had a numerical score associated with your application.

The scoring also changes from reviewer to reviewer (scores are normalized to some degree to weed out too-easy and too-hard reviewers), as well as from discipline to discipline. They try to keep the awards in each discipline proportional, so if you're in a very popular subdiscipline with lots of applicants, you might need a more competitive score to get an award than someone in a smaller discipline. The school you're attending also matters to some degree (they try to use NSF awards to spread money to good applicants at institutions with fewer other NSF grants), as does your background and demographics.

On 3/30/2016 at 6:48 PM, Pitangus said:

A slight clarification to this:

In previous years (and probably now as well), each reviewer assigned an application a numerical score of 0-50 for IM and BI. I don't have the old reviewer's guide in front of me, but it went something like:

40-50 = E

30-39 = VG

20-29 = G

10-19 = F

0-9 = P

So you can see how the letter scores can be misleading on their own: a numerical score of 39 would give one applicant a VG, while a 40 gives another an E, yet the two scores are only one point apart.
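For illustration, here's a minimal Python sketch of that mapping, using the cutoffs quoted above (assumed from the old guide; current cutoffs may differ):

```python
def letter_score(raw: int) -> str:
    """Map a 0-50 reviewer score to a letter, using the cutoffs
    quoted above from the old reviewer's guide (assumed; the
    current guide may differ)."""
    if raw >= 40:
        return "E"   # Excellent
    if raw >= 30:
        return "VG"  # Very Good
    if raw >= 20:
        return "G"   # Good
    if raw >= 10:
        return "F"   # Fair
    return "P"       # Poor

# 39 and 40 are one point apart but land in different letters:
print(letter_score(39), letter_score(40))  # VG E
```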

 

The Z-scores are the standardization of the numerical scores. The formula is something like:

Z-score = (applicant's score - mean score from that reviewer) / std dev of reviewer's scores

 

If you imagine an applicant who scored all 40s from reviewers who gave high average scores, then it makes sense that there will be applicants who scored all Es but did not win an award/HM. 
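To make the standardization concrete, here's a minimal Python sketch of the per-reviewer Z-score formula described above (the reviewer scores are made-up examples):

```python
from statistics import mean, stdev

def z_scores(raw_scores: list[float]) -> list[float]:
    """Standardize one reviewer's raw scores:
    z = (score - reviewer's mean) / reviewer's std dev."""
    mu, sigma = mean(raw_scores), stdev(raw_scores)
    return [(s - mu) / sigma for s in raw_scores]

# Hypothetical raw scores from a lenient and a harsh reviewer:
lenient = [40, 42, 45, 41, 44]
harsh = [20, 25, 35, 22, 24]

# A raw 40 from the lenient reviewer falls below that reviewer's
# mean (negative z), while a raw 35 from the harsh reviewer comes
# out far above theirs (large positive z).
print(round(z_scores(lenient)[0], 2))  # about -1.16
print(round(z_scores(harsh)[2], 2))    # about 1.69
```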

 

Also, the diversity criteria only apply to applicants ranked in Quality Group 2 when it comes to deciding who gets an award vs an HM. Applicants in Quality Group 1 (the top group of applicants according to their ranked Z-scores) all get awards no matter what their background. 

On 4/2/2015 at 1:18 PM, Pitangus said:

 

This gets brought up multiple times every year, and I'm probably starting to sound like a broken record to those who have actually read through the threads, but yes, this is how it has been done according to the actual reviewers' guide (in 2008 and I would guess it hasn't changed much):

 

A given subject panel distributed the applications so that each one went to two reviewers. The reviewers scored the application on a 1-50 scale for IM and BI (this scale corresponds to the P-E scores applicants see), and applications that scored below the 65th percentile after two reviews were retired without being read by a third reviewer. The remaining proposals got a third reviewer and were then ranked by the average of their z-scores (scores standardized against all scores given by a reviewer, to help offset reviewer variability). The reviewers then deliberated to finalize the ranking.

 

Applications were then sorted into four Quality Groups by their ranks:

Applicants in Group 1 are all awarded fellowships.

Applicants in Group 2 "receive awards to the limit of funds available using criteria such as geographical region, discipline, and other factors" (so this is probably where diversity comes in); the rest receive HMs.

Applicants in Group 3 get HMs.

Applicants in Group 4 do not get awards or HMs (this group includes applications that were below the 65th percentile after two ratings and were retired before the third rating).

 

So to conclude again: the scores applicants receive don't tell you much about how you were actually ranked, so it's pointless to compare how many Es and VGs you received with other applicants.
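Putting the ranking-to-groups step together, here is a rough Python sketch; the group-size fractions are illustrative guesses only, since the guide doesn't publish exact cutoffs:

```python
def quality_groups(avg_z: dict[str, float]) -> dict[str, int]:
    """Rank applications by average z-score (high to low) and bucket
    them into Quality Groups 1-4. The 5%/10%/10% fractions below are
    hypothetical, chosen to mirror the rough proportions the
    panelist notes quoted later in the thread mention."""
    ranked = sorted(avg_z, key=avg_z.get, reverse=True)
    n = len(ranked)
    groups = {}
    for rank, app in enumerate(ranked):
        if rank < 0.05 * n:
            groups[app] = 1  # all awarded fellowships
        elif rank < 0.15 * n:
            groups[app] = 2  # awards to the limit of funds; rest get HMs
        elif rank < 0.25 * n:
            groups[app] = 3  # honorable mention
        else:
            groups[app] = 4  # no award or HM
    return groups
```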

 

More helpful things:


3 hours ago, mocefacdargeht said:

I haven't seen anyone else mention this, but one of the reasons I'm anxious to see whether I got the award is to know whether I can contribute to my IRA this year, since contributions require W-2 income. If I don't get it, I'll be paid on an RA and be able to contribute right away, but if I do get it, I'll be paid by fellowship and have to open a new taxable account to save. Maybe it's actually financially better for me not to get the award. :rolleyes:

There's no reason you can't contribute the max this year, assuming you've made $5,500 in W-2 income. Same with the last year: you can always contribute up to your earned income. There's really only one year where you definitely can't contribute if you take all 3 years in a row. But I wonder if you can take it as a year on, 6 months off, a year on, 6 months off, a year on, or something, so you can always contribute.

Edited by nc61

24 minutes ago, nc61 said:

There's no reason you can't contribute the max this year, assuming you've made $5,500 in W-2 income. Same with the last year: you can always contribute up to your earned income. There's really only one year where you definitely can't contribute if you take all 3 years in a row. But I wonder if you can take it as a year on, 6 months off, a year on, 6 months off, a year on, or something, so you can always contribute.

For 2017 I had W-2 income from my undergrad and the summer, so I did contribute. If I get the NSF (unlikely), I think fellowships will be my only income in 2018, as I'm currently a first year and am finishing a 1-year fellowship. Breaking it up like that would be nice!


53 minutes ago, mars667 said:

In case anyone's interested! (and if not, sorry for the spam!)

 

More helpful things:

Really, this was an excellent post. Do you know how grad versus undergrad plays into the decision, if at all?

The old paper they mentioned (linked here) describes how the process ran about 20 years ago, and we assume it's still similar. However, there was no mention of whether grads and undergrads are judged differently. Can anyone speak to this?


7 minutes ago, GoldenDog said:

Really, this was an excellent post. Do you know how grad versus undergrad plays into the decision, if at all?

The old paper they mentioned (linked here) describes how the process ran about 20 years ago, and we assume it's still similar. However, there was no mention of whether grads and undergrads are judged differently. Can anyone speak to this?

Thanks!

I've found a few posts about that in the depths of the forums, but can't seem to track them down now :wacko:. I think the gist of it was that the applications are split into different groups for undergrads, 1st years, and 2nd years, and each group has different standards when it comes to publications and general knowledge of the topic. I'll try to track it down tomorrow if no one beats me to it.

Also, it looks like that link just goes to the start of the forum, which still has good info, but there is a great comment with some helpful info from a former panelist on page 17 (I thought linking might reduce the clutter... whoops).

On 4/3/2011 at 6:29 PM, hello! :) said:

A while back, I came across notes from someone who had served on one of the NSF GRF review panels a couple of years ago. I don't remember where I found them, and I haven't been able to find them again on the web... maybe for whatever reason this person had to take them down. In any case, I'll post them here as "notes from an anonymous NSF GRF review panelist," since I think they provide some very helpful insights into the whole process.

 

Thank you, Anonymous Panelist!

 

* * *

Notes after serving on the review panel for the NSF Graduate Research Fellowship Program

## Executive Summary

+ Fellowship applications in the field of Mechanical Engineering are evaluated by a panel of ME faculty. Remember your audience when you write.

+ Two criteria—intellectual merit and broader impacts—have equal weight this year

+ Roughly 15 minutes to read an entire application—make your point clearly and quickly.

 

Roughly 10% of those _who apply_ for these fellowships will receive them. The applicants are all amazing individuals.

 

 

## The Process

 

All of the applications are evaluated by a panel of engineering faculty from a variety of schools, including research and teaching schools. Applicants for the same field (e.g., Mechanical Engineering) are evaluated by the same panel. This year the mechanical engineering panel we participated in had more than 20 members and evaluated roughly 400 applications. The applications are sorted by level: level 1 is for those in their final undergraduate year, level 2 is for those who have just started their graduate programs, and there are also levels 3 and 4. While all those in level 1, level 2, etc. are evaluated simultaneously (with criteria appropriate to the level), the final decisions on whom to fund are not made by level.

 

 

NSF has two basic criteria for evaluating the applications: intellectual merit and broader impacts. _They are weighted equally._ After a "calibration exercise" designed to arrive at a panel-wide understanding of what constitutes intellectual merit and broader impacts, each application is read by two panelists and scored (out of 50) in each category. One panelist reading a single application takes 15-20 minutes. Panelists cannot read any applications for which they have a conflict of interest.

 

At the end of these first and second reads, applications get two Z-scores, where

Z = [(Application's Score) − (Mean Application Score for that Panelist)] / (Application Standard Deviation for that Panelist)

 

The Z-score is created to adjust for the fact that some panelists score applications much higher (on average) than others. The average of the two Z-scores is used to rank the applications. Applications in the top 35% of the ranking get a third reading, as do any applications with a wide discrepancy between their two Z-scores. (The discrepancies are identified by computer and by the panelists.) The remaining 65% of the applications are retired, meaning they get no further consideration. After the third reading, applications that still have widely varying Z-scores are returned to the three panelists for additional discussion and a resolution.
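A tiny sketch of that third-read rule, in Python; the discrepancy threshold is a made-up number, since the notes don't say how "wide" is defined:

```python
def needs_third_read(z1: float, z2: float, rank_frac: float,
                     gap: float = 1.0) -> bool:
    """Per the panelist's notes: the top 35% of the ranking gets a
    third read, as does any application whose two Z-scores disagree
    widely. rank_frac is the application's position as a fraction of
    the ranking (0.0 = top); gap is a hypothetical cutoff for 'widely'."""
    return rank_frac <= 0.35 or abs(z1 - z2) > gap

print(needs_third_read(0.5, 1.9, rank_frac=0.60))  # True: discrepant reads
```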

 

Finally a new ranking is created. The top 20 or so in this ranking are in Quality Group 1—definite funding. (Notice that this is only 5% of the applications.) The next 40 or so are in Quality Group 2—honorable mention and possible funding. (The top of this group may get funded, depending on resources. Also, this group is mined for recipients of special focus awards, programs for under-represented groups, etc.) The next 40 or so are in Quality Group 3—honorable mention. The rest are in Quality Group 4 and don’t get an award.

 

 

## Criteria for Evaluation

 

Here are the criteria we used in evaluating the level 1 applications. Keep in mind that each panelist develops their own criteria based on the panel discussion, so not every panelist is going to use the same standards. However, these will give you the general ideas behind the ratings. Also, they may seem very harsh, but that turns out to be essential, since all of the applications are very strong.

 

 

### *Intellectual Merit*

 

>#### Excellent

>> 1. The research proposal clearly describes truly innovative or transformative research. (Transformative research transforms the way the field or society will think about the problem.)

>> 2. The student is academically well-prepared to conduct the research: outstanding letters of recommendation, good GPA, solid GREs. The GPA does not need to be 4.0, but should be good. The GREs I saw were not as high as I anticipated.

>> 3. The student has a clear passion for their work which comes across in their writing and their actions to date.

>> 4. The student has prior research or industry experience that demonstrated the ability to define, initiate, and complete projects with substantial independence. Avoid describing senior design projects or class projects, as they were generally not persuasive.

>#### Very Good

>> (2), (3), and (4) are still there. The research is solid (more than incremental) but not transformative or truly innovative. Or: (1), (2), and (3), but not (4).

>#### Good

>> (2) and (3), research is solid, but no (4).

>#### Fair

>> (2) and (3). Research proposal is weak and student has little experience.

>#### Poor

>> Student is not well-prepared, research plan is ordinary and sketchy, and the student has failed to convey any passion for their work.

 

### *Broader Impacts*

 

Be sure to address this topic, as Broader Impacts is half of the score and many applicants who were Excellent in Intellectual Merit did not address this area sufficiently.

 

Also, be sure to realize that almost everyone who applies for these grants wants to teach at the college level. Wanting to be a teacher at the college level is not evidence of broad impact.

 

_The identity of an individual does not constitute a broad impact._ This was explicitly discussed at the panel and explicitly ruled out (by NSF) as a broad impact. The fact that you are a female, Hispanic, Native American, African-American, etc does not, in itself, qualify as a broad impact. Also, personal struggle (health/economic/family) does not constitute a broad impact. Whoever you are, you need the types of broad impacts discussed under “Excellent” below. However, if you are part of an under-represented group or have overcome substantial difficulties in getting to your current position, do put this information in your personal statement if you want it to be considered. After the proposals are ranked, those who fall into these categories in Quality Group 2 will be picked up for additional funding opportunities.

 

>#### Excellent

>> 1. Demonstrated record of substantial service to the community, K-12 outreach, commitment to encouraging diversity, etc. Leadership by itself is a plus, but the most highly ranked applicants have ongoing outreach/service activities.

>> 2. Clear explanation of the broader impacts of the research. How will it affect society, and why should the government fund your project over someone else’s? If the project’s success would have huge impacts on its engineering field, it would fall a bit here and a bit in Intellectual Merit. (Different panelists had different views on this.)

>#### Very Good

>> (1) or (2) is somewhat weaker: (1) still has a demonstrated record (not just "I will do...") but the record is weaker, or (2) is still there but the impact is less dramatic.

>#### Good

>> Both (1) and (2) are present, but weak.

>#### Fair

>> (1) or (2) is completely missing, but the one that is present is at an Excellent level.

>#### Poor

>> (1) or (2) is completely missing, and the one component that is present is only at a Very Good level.

 

 

 

* * *

Edit: Had to fix some dumb formatting issues.


2 hours ago, harshingig said:

You people are going crazy. 2017 results have been available for at least 2-3 days so I'm not sure what you are talking about.

You are the crazy one for posting a non sequitur.

Edited by carlsaganism

Some of the stuff in this thread is ridiculous for so many reasons. So much nearly blind guessing (with so many factors involved), hype, and emotion. Going through the past several pages of this thread, you would think we would have had the results 10 times over. It's like some people care more about the money attached to this grant than they do about the research. I seriously hope none of this carries over into your research careers and scientific method. I know you have all wasted my time by sucking me into thinking it would be announced early. This reminds me of people who stress so much over their grades that they miss out on life, learning, and independent thinking.

Edited by harshingig
