
NCTQ / US News study


Embarcadero


The Washington Post indicated that the study focused on the content taught in teacher education programs rather than on instruction. They apparently examined syllabi, handbooks, and other documents from 500+ cooperating universities. I downloaded the report last night but haven't had a chance to read it yet (save for the first few pages). I'm going to try to take a closer look at it tonight.


It's generating controversy because Americans love to bash public education. The methodology is seriously flawed: they primarily examined documents--how about actually observing instruction? How about the value and quality of field experiences? No informed educational scholar would take this report seriously. It was designed to create controversy and fuel the downfall of public education in this country. Who wins? Just follow the money... public tax dollars for private companies.

 

I met Kate Walsh when she came and spoke to us--although she acknowledges that Teach for America teachers do no better or worse than teachers trained by university programs, she secretly supports TFA and is part of the educational "reformers" that don't really understand schooling and have only their interests (however bizarre they may be) in mind. She thinks that only the top students should become teachers. Really? What incentives are you going to provide to attract top students to teaching? There's a reason teachers need summer vacation--to recharge and get mentally and emotionally ready for the next school year. I taught for 10 years in public schools. I didn't leave because I didn't enjoy teaching students. I left because being a public school teacher was no longer worth it for me and because I wanted to pursue research to better understand why my students weren't learning what I wanted them to learn.

 

Trying to blame schools of education (or university-based teacher preparation programs) for conditions that are out of our control is ignorant and wrong. Kate Walsh and all the other pseudo-educators out there need to stop playing the blame game.


While I understand where you're coming from, having taught in public schools as well, your stated claim potentially restricts the power of reform and the positive changes that it might bring to schools and kids. This doesn't mean that we have to tear down every facet of the system and start from scratch, but we need to be cognizant of the fact that our methods of teaching and schooling are at the very least underperforming, and they are in need of some serious, and maybe systemic, changes. Teacher education, standardized assessment, teacher's unions and representation, standards, school funding, "accountability", teacher evaluations, and a host of other issues have been and should be part of this conversation, to the point of being changed. Such a willingness to truly improve the system should mean that no aspect is sacred, and at the same time no aspect is evil or always bad. Our biases towards or against different people or programs or political and economic approaches, or even our fondness for how we were prepared, cannot dictate the course we chart towards reform.

 

Methodologically, the NCTQ study has serious flaws, but its conclusion holds: teacher education is a logical area to improve. I don't understand how the NCTQ can claim to accurately gauge an interactive activity like teaching and teacher preparation merely by examining documents, booklets, and handouts. I've read just about the whole report, so I have a rudimentary understanding of their data collection, but I don't know what it really tells you. Without factoring in faculty and the quality of instruction--which would admittedly be difficult to collect and evaluate, given the amount of data and its subjectivity--one is left with only a paper shell of a teacher education program. The lack of focus on alternative certification programs, which I gather will be measured in subsequent reports, is also disconcerting: these programs are bringing an increasing number of educators into the classroom, and they vary tremendously from one to another.

 

If you disregard the data they collected and how they analyzed it, and look only at the logic of their approach, the NCTQ does have a point. Teacher education is a vital experience for pre-service teachers, so its effectiveness should be a key issue.

 

Can you really tell me that teacher education is not contributing to the problems we have? Whether you think the best students should fill the ranks of teachers or not, the reality is that there are some particularly uninspiring and limited people in our classrooms. I've worked with them--hell, maybe I was one of them my first year of teaching--and they seem to be an overwhelming majority in the schools I've worked in. One teacher I worked with in New Mexico had joined our staff after completing a degree at the state's "flagship university," and she did not know how to write a lesson plan. Her ability to manage a classroom was nil. Curriculum design? Forget it. I won't just point the finger, either: I went through alternative certification in New Mexico, and I was left woefully unprepared for the world I entered.

 

"Special education", "inclusion", "differentiation", "co-teaching", etc should not be buzzwords. They should be tools and approaches, theoretical and yet practical, that the new teacher should be familiar with, if not well-versed in. The very basics of teaching, things like ye ole lesson planning, should be engrained in prospective teachers.

 

What I've read in the report doesn't jibe with parts of your post. The NCTQ doesn't insist that "only the top students" be accepted to teaching programs; the recommendation is that a higher bar be set, and the specific suggestion (if I recall correctly) was a 3.2 GPA and 1120 SAT average for the entire program. Those numbers do not necessarily mean much. I know a girl from my hometown who scored poorly on the SAT, yet she is fantastic with children. She ended up going to a second-choice school, majoring in elementary education, and she seems to be doing quite well in the classroom. I don't think the NCTQ is saying people like her should be passed over simply because of a standardized test score, but generally speaking we should be looking to draw in talented individuals. Perhaps that is an ideal, one that is not properly incentivized at present. But our teacher education programs have gone in the opposite direction; rather than letting qualifying candidates trickle in after meeting high standards, they have opened the floodgates and allowed virtually anyone to become a teacher. True, colleges of education can't make teaching attractive on their own, but that doesn't mean they have to use their programs as a cash cow, either.

 

For sure, a closer look is needed at the items you mentioned: faculty, the quality of field experiences, alternative programs, and other important criteria need to be weighed for this kind of effort to be truly effective. What I see in the report does concern me, regardless of the foundation or person who inspired the study or what their motivations are. I'm headed to graduate school this fall, on a statement of purpose that addressed engagement in teacher education as arguably the biggest issue facing education and schooling in the US.

Edited by wjdavis

I guess the issue for me, as a consumer, is not whether Teacher Ed programs can be improved (of course they can) but whether this is an accurate reflection of the current state of particular programs one might choose to invest one's time/money in.

 

The questions I have are:

 

Are the criteria appropriate?

 

Can an outside study get an adequate sense of each program in relation to the criteria from course titles and descriptions?

 

There seems to be a lot of prose and 30,000-foot analysis, and not as much hard data. How did they apply the information they have to the criteria to determine whether those criteria were met? In what proportions, in order to assign stars to each program? If they don't show their work, how is it possible to understand the relative strengths and weaknesses of programs, and whether stars have been appropriately given?



Exactly.


My statement did not come from the report. It came from a direct question I asked her--and that was her response.


This is not just an academic exercise for me. I am making a mid-life career change to go into teaching, and have chosen the grad school, Teacher Ed route. Although I am committed to a program, the (rather large) check has not yet been written, and if I'm wasting my time, I'd like to know now.

 

The problem is I'm hardly going to take a star rating at face value without more evidence and explanation, and I'm not finding much in the survey beyond the star rating on which to base my decision. 


p.s. I did a lot of research, including on-site visits, throughout the process, so it's not like I'm going into this without info. But this is a study that purports to be comprehensive and consumer-oriented, and the media seem to be giving it a lot of credence. It also struck a chord with me because my primary goal going in is to develop the tools to become a good teacher, and I've been afraid that many programs get bogged down in ideology at the expense of the nitty-gritty tools. (I happen to share much of the equity/social justice ideology professed by many programs, but I'm not going to grad school to learn what I already believe.)

 

I want to be as informed as possible, but this survey isn't giving me a whole lot of meat to chew on. 



The criteria are appropriate but incomplete. The items wildviolet mentioned are missing, and they really need to be represented for this study to be useful to people in your position. The effort made here is a starting point, not the end.

 

Talking to students in programs, as well as practicing teachers who either came from successful programs or who have mentored teachers from successful (and, I suppose, unsuccessful) programs, would certainly be worthwhile in ascertaining a program's true value. I agree that this report does not paint a complete picture of what a program might offer. However, it does shed some light on what happens in our schools and colleges of education, as long as you take their "data" in context and acknowledge that more important elements have been omitted this time around.


Embarcadero--I would not put much stock in the results of this report, for the reason that their primary source of data--written course syllabi--says nothing about what was actually taught or what students actually learned.

 

First--look up articles on the different kinds of curriculum. Cuban (1999), for example, identifies at least five: recommended, official, taught, learned, and tested. The NCTQ study basically looked only at the official curriculum (course syllabi). The recommended curriculum often differs from the official curriculum, which often differs from what actually gets taught, which is often not what gets learned and, in the case of teaching, not what gets tested (i.e., actual classroom teaching performance). All of these types of curricula are difficult to measure, and they interact in idiosyncratic, unpredictable ways. This is one of the reasons why we haven't solved our educational "problems" yet. Never mind that 30 students in a classroom are going to see 30 different things, that we have teachers who come primarily from white, middle-class backgrounds teaching students who do not, and that schooling is forced upon children who would rather be doing anything else than sitting at a desk all day listening to adults tell them what to do and how to do it.

 

As far as I know, researchers have developed tools that can identify the top and bottom 10% of teachers. That leaves 80% of teachers somewhere in the middle--meaning it is difficult to distinguish among those teachers with a reasonable degree of reliability. Similarly, trying to "grade" teacher prep programs is a difficult task: most will be in the middle, a few will be at the top, and a few will be at the bottom.

 

All of this is not to say that university-based teacher preparation programs have no problems or that we don't have some bad teachers out there.

 

Ultimately, I think it comes down to fit--certain people do better in certain programs because their interests, abilities, and learning styles match the interests, expectations, and teaching styles of the program. These rankings have no bearing on what you will get out of your program--that is ultimately up to you. That said, there are good and bad programs out there, but this report will not give you an accurate picture of them. Sorry I cannot offer much advice, but how well a program prepares you to teach is so contextual that I don't feel comfortable doing so. The desired result--effective classroom teaching performance--is a combination of many factors.


  • 1 month later...

And now this, from today's NY Times (http://www.nytimes.com/2013/08/15/nyregion/new-york-issuing-scorecards-on-teacher-colleges.html?ref=nyregion&pagewanted=print)

 

Seeking Better Teachers, City Evaluates Local Colleges That Train Them
By JAVIER C. HERNÁNDEZ
 
Mayor Michael R. Bloomberg has used data to rate restaurants, track the repair of potholes and close lackluster schools in New York City. Now he is bringing his results-oriented approach to an area far outside his usual purview: teacher colleges.
 
In an effort to shake up institutions that have been criticized as too insular and inert, his administration released scorecards on Wednesday for a dozen teacher-preparation programs in the city.
 
Public and private education schools are being evaluated in various ways, including how many graduates are certified in high-needs areas like special education and whether their teachers have been able to increase student test scores.
 
The release of the scorecards places the city at the forefront of a national effort, backed by the Obama administration, to use data to upend the teaching profession and the pathways to it. Critics have said subpar teaching programs too often hamper school systems, churning out graduates familiar with theory but lacking in practical classroom skills. A study by the National Council on Teacher Quality released in June argued that teaching colleges were too lenient in their admissions criteria and had not adequately prepared teachers in subjects like reading, math and science.
 
The results released on Wednesday showed that even some of the country’s most prestigious programs have room for improvement. For example, one in five recent graduates of teaching programs at Columbia University and New York University were given low marks for how much they were able to improve student test scores; by contrast, one in ten teachers who graduated from City College of New York received poor marks.
 
City officials cautioned against drawing sweeping conclusions from the data, saying the numbers were meant to provoke conversation, not rivalry. They noted that sample sizes were small; that test scores were available only in certain grades, in math and English; and that the data reflected only information from the past four years.
 
But in New York City, where competitive streaks are widespread, education leaders could not resist a little jockeying.
 
David M. Steiner, dean of Hunter College School of Education, said the results would prompt schools like Columbia and N.Y.U. to rethink elements of their program.
 
“These are places that are very well known for their research and scholarship,” Dr. Steiner, a former state education commissioner, said. “Is it possible that they need to pay more attention to their clinical preparation of teachers?”
 
Thomas James, provost of Teachers College at Columbia, said the reports prompted the school to examine how closely its curriculum aligned with city academic standards. He said the data also spurred interest in increasing the number of teachers who pursue certification in special education, where city data showed the school lagged behind its peers.
 
“We can see more clearly what the greatest needs are,” Dr. James said. “The direction we’re going is to have more comprehensive and better planning.”
 
Other education school leaders were not as enamored of the city’s decision to broaden its interests to higher education.
 
Alfred S. Posamentier, dean of the Mercy College School of Education, which was credited with sending the largest percentage of its teachers to schools with the greatest needs, said it would be more useful if education officials compared teaching data over a longer period.
 
“It’s nice to look at, it somehow verifies what I already knew, but it’s not going to change anything,” Dr. Posamentier said.
 
The city’s data-driven foray into the world of higher education is also likely to encounter resistance from professors and graduates of schools that did not fare as well on the reports. Some have objected to the idea of judging teachers on the basis of student test scores, arguing for a more nuanced approach. And because graduates of Teachers College and N.Y.U. are in demand at high-performing schools, where test scores are already approaching the top of the scale, they may have a harder time showing improvement.
 
City education officials noted that the scorecards this year were only a beginning. They said the results were mostly positive, showing that recent hires generally received high marks in the classroom.
 
David A. Weiner, a deputy chancellor in the city’s Education Department, said that for too long, city governments and universities had worked in isolation. The reports, he said, would help bridge that divide.
 
“It can’t just be the universities; it can’t just be the school system,” Mr. Weiner said. “We want a highly effective teacher in every single classroom, and every single university president says the same thing.”
 
Under the city’s system, teaching programs were evaluated on six measures. The city looked at the number of teachers that were placed in low-performing schools and the number of teachers certified in areas with high demand, like math, science, special education and English as a second language. Touro College had the largest percentage of teachers certified in special education, at 86 percent, double the citywide average.
 
The city also factored in teacher performance, including whether recent hires were denied tenure and whether they received an unsatisfactory rating from principals. For teachers of reading or math in grades four through eight, the city looked at student progress on state tests.
 
Teacher retention was also considered. Citywide, 80 percent of recent hires still worked as teachers in the system three years later. But for several schools that serve large populations of New York City natives, the numbers were higher.
 
More than 90 percent of recent graduates of Queens College and St. John’s University, for instance, were still working as teachers after three years. By contrast, 72 percent of students from Teachers College still worked in the system three years later. (Teachers College noted that it serves a large population of students from out of state, and that many eventually return home to work.)
 
The city said it had no plans to award letter grades to universities, as it has done with public schools and restaurants, saying education officials were focused on identifying and sharing effective practices. Next year, the scorecards will expand to include more rigorous teacher evaluations developed by the state. And this fall, the city plans to release similar reports for teachers certified through alternative pathways, like Teach for America.
 
City officials said New York would be the first urban district to compare teaching programs. Ohio and Tennessee have also started evaluating elements of university teaching programs. And New York State, which will require new teachers to complete a more stringent certification process beginning in 2014, plans to start delivering feedback to teaching schools this fall.
 
The federal education secretary, Arne Duncan, said that he saw New York’s efforts as “a major step forward, and one from which others can learn.”
 
“It puts the record of preparation programs, including their impact on student learning, into sharp focus,” he continued.
