Frankly, I think the reasons people are troubled by the AW section are different from those they cite. Let's look at some common criticisms:
1) "The AW section does not test any real-world skills! How often in your academic life will you need to write a 5-paragraph essay in 30 minutes? Never!"
This is equally valid for more or less any standardized test. How often in your academic life are you called upon to complete an analogy "upbraid : reproach :: ? : ?" by picking from 5 alternatives, without the help of a dictionary? Certainly you must agree that, prima facie, the "write an essay in 30 minutes" task is more connected to skills you will actually have to use in academic life than completing analogies is. And yet, people do not complain nearly as much about the verbal section.
2) "OMG, I got 800 on the verbal section, but only 3.5 on the AWA; the AWA must be bonkers!"
a) It seldom crosses people's minds that the verbal section might be the one that's bonkers.
b) More seriously, the AW and verbal sections are meant to test two very different skills. There is nothing that says a person with a good vocabulary and strong reading comprehension must be a good writer. It's kind of like saying "OMG, I got 800+++ on verbal, but only 320 on quantitative, the quantitative section is obviously rubbish", but nobody does that, do they? And while I agree that you should expect a higher correlation between the verbal and AW sections than between, say, the quantitative and verbal sections, that correlation is certainly not high enough to make "800V, 3.5 AW" statistically unexpected.
3) "The SOP and writing samples are much better judges of writing capacity anyway, so AW is positively useless."
This is true to some extent, were it not for the fact that it is way too easy to have someone else heavily edit your SOP and writing samples, or indeed write them for you completely. The AWA does not suffer from that. And I think a glowing SOP and writing sample combined with a low AW score will raise some eyebrows, as it should.
4) "The type of writing required on the AW is nothing near anything you'll ever need to write in real life. They just require a long, dry 5-paragraph essay, with lots of stock transitional phrases. Nothing like the style of a good writer."
a) I'd like to see some hard data on this. It seems to me that this is the kind of myth that companies like Princeton Review perpetuate for their own benefit ("There is a secret formula that guarantees a 6 on the AWA, go to our classes/buy our books to find out!").
b) As people have pointed out, good writers should be able to adapt their style depending on the circumstances.
5) "But how can people adapt, if ETS does not publish what criteria they use to assess the essays?"
a) See 4a) above. Also, if ETS has never said anything about what they want, how come there is such a strong consensus on these boards and others about the type of essay that will earn a high score?
b) Admissions committees seldom publish what they want to see in the SOP, and yet nobody complains. Commercial publishing houses rarely make explicit what kind of texts they want, and nobody complains. People just seem to be able to figure it out anyway, just like they do with the AWA.
6) "Not to brag, but I'm a truly great writer, and yet I got a low AW score. The AW section is just crap.
a) See 4b)
b) I think that, more often than not, people are bad judges of their own writing abilities.
c) Even if there is anecdotal evidence of great writers who don't get high scores, this is statistically expected for any imperfect test, just as there could be great mathematicians who receive a bad score on the quantitative section. Anecdotal evidence like that does not prove that the whole test is invalid, only that it does not have 100% validity.
7) "The test is scored by an e-rater. Computers don't know jack all about wit."
I agree that this one holds some merit. But:
a) See 4b).
b) There is still a human grader too. If you get a low score on an essay, at least one human grader assigned it a score within 0.5 of what you received.
So, going out on a limb here, I think the real reason people complain so much about the AW section is that it is subjectively black-box scored. That makes it very easy to start rationalizing away a low AW score by declaring the whole process invalid, which we all do, because of our human nature. The reason we don't see as many posts similarly complaining about the other sections is that their scoring is much more transparent, which makes it harder to come up with those rationalizations.
But what many people forget, in my opinion, is that there are two parts to whether or not a test measures what it claims to measure: the validity of the test questions, and the validity of the scoring. I think that, compared to the other two parts of the GRE, the task the AW section sets us is in fact the closest to anything we will need to tackle in real life. The scoring, on the other hand, might or might not be complete rubbish, but I do not see hard data either way.