For the past six months, I have been studying what I have facetiously named the PREblem: the high failure rate of preservice teachers in Michigan taking the Pearson Professional Readiness Exam. In October 2013, this test replaced the Basic Skills Test, and as a consequence of its allegedly more rigorous content and its higher cut scores, teacher candidates in every discipline and at every Michigan teacher preparation institution are failing in huge numbers. How huge? Some recent pass rates (courtesy of the Center for Michigan’s Bridge, January 2015):

U-M Ann Arbor, 71%
Michigan Tech University, 55%
Hope College, 43%
Calvin College, 42%
Grand Valley State, 41%
Michigan State University, 41%
Lake Superior State, 39%
Concordia University, 37%
Aquinas College, 36%
Cornerstone University, 35%
Madonna University, 32%
Oakland University, 29%
Spring Arbor University, 28%
Saginaw Valley State, 27%
Eastern Michigan University, 23%
Northern Michigan University, 23%
Adrian College, 22%
Central Michigan University, 22%
Andrews University, 20%
Western Michigan University, 20%
U-M Dearborn, 19%
Wayne State University, 19%
Baker College, 18%
U-M Flint, 18%
Ferris State University, 17%
Alma College, 16%

Pearson would likely argue that these failure rates prove the rigor of the test. After all, as one Pearson representative told me at a PRE standards-setting workshop, when it comes to allowing aspiring teachers into the profession, “You have to draw the line somewhere.” If only 19 percent of Wayne State students can manage to pass the PRE, then only 19 percent of them should be allowed to teach, right?

Well, no. We cannot allow a corporate-designed multiple-choice test to act as a gatekeeper into the teaching profession. This is more than a minor issue. It is a full-blown crisis in Michigan. But how to begin critiquing a standardized test that seems to have a stranglehold on our teacher preparation institutions? One way is to question its validity. There are dozens of ways of measuring test validity (see the methods advocated by Pearson competitor College Board), and while I am not a psychometrician, it is clear that the PRE violates at least two key principles of test validity.

The first is called consequential validity. A definition from the College Board site:

Some testing experts use consequential validity to refer to the social consequences of using a particular test for a particular purpose. The use of a test is said to have consequential validity to the extent that society benefits from that use of the test. Other testing experts believe that the social consequences of using a test—however important they may be—are not properly part of the concept of validity.

Messick (1988) makes the point that “. . . it is not that adverse social consequences of test use render the use invalid but, rather, that adverse social consequences should not be attributable to any source of test invalidity such as construct-irrelevant variance.”

What are the adverse societal consequences of the Professional Readiness Exam? First, in colleges of education across Michigan, aspiring teachers are in limbo. The problem began when large numbers of students failed the new PRE rolled out in 2013; this led many colleges of education to temporarily waive the previous admission requirement of passing the entrance-level exam. Students with failing scores entered their programs, completed their first field placement (typically teacher assisting for a semester), and have even passed their subject-area tests (also sold by Pearson). But they are still unable to pass the PRE and are thus prevented from student teaching. My university currently has 60 students caught in this limbo, and other schools report similar numbers. If Michigan does indeed have a teacher shortage (especially in early childhood, world languages, and special education), then holding back potential teachers will only exacerbate it.

Teacher candidates are also required to pay Pearson for each test taken, sometimes up to five or six times, at $50.00 apiece. The Michigan Department of Education does kindly recommend that students who have failed four or more times “seek academic counseling from college/university staff in an attempt to overcome testing deficiencies.” Of course, Pearson does provide a practice exam, for only $29.00. So, is all of this money spent a significant social consequence? Yes, considering that the money might otherwise go toward tuition, loan repayment, or books.

Moreover, the PRE is preventing teachers of color and teachers for whom English is a second language from entering the job market. In a letter to State Superintendent Brian Whiston, the Michigan Association of Colleges for Teacher Education points out that African-American and Hispanic teacher candidates have substantially lower PRE pass rates than white candidates. This at a time when a growing number of Michigan students speak Spanish as a first language, and when city schools in Grand Rapids, Flint, and Detroit face enormous challenges with attendance, graduation rates, teacher turnover, and more.

A second and equally important measure of test validity is content validity, which asks whether the body of knowledge the test includes reflects the subject area the test is meant to measure. In the case of teaching, Pearson has determined that all teachers must know three basic subject areas: reading, writing, and math. There is a kind of back-to-basics appeal to this curricular trio of reading, writing, and arithmetic. Here’s the problem: I taught high school English for eight years, from 1994 to 2001. I have an MA in literary studies from Michigan State and a Ph.D. in English (with a specialization in English education) from Western Michigan. For the last 12 years I have taught methods courses and supervised teacher assistants and student teachers in dozens of secondary schools in west and mid-Michigan. I also publish books and articles on teaching English.

In short, I am qualified to teach secondary English. But when I retook the PRE this fall, I failed the math portion. I haven’t done algebra or trigonometry in over 20 years, and while my math memory may fail me, I am pretty sure I never had to teach trigonometry in my British literature classes. I did have to weight and average grades, and I may have drawn a triangle or two on the board, but that is about all the math I did, lucky for me.

So why should all teachers need to master difficult math concepts, ones they will never, ever use in their careers? I would much rather see all teachers certified in the fields of neurodiversity, gender awareness, English language learning, and other more applicable areas. I’ve got nothing against algebra or tough standards, but the content of the entry-level test simply does not align with the actual teaching field.

Even the skills that all teachers do need to possess, the ability to read and to write, are represented problematically on the PRE. A third kind of validity, construct validity, means that a test measures what it is supposed to measure; a valid test asks you to perform the same kinds of skills that the actual subject matter demands. This works in the case of math (my nemesis), since the kinds of problems the PRE includes represent the kinds of problems actual mathematicians solve, if we assume that they do so with a time limit and with only a small whiteboard and dry-erase marker (sorry, no paper or pencils allowed).

With the writing test in particular, however, there is a serious mismatch between what writers actually do and how the test frames what writers do. Given this mismatch, it is no surprise that the writing scores on the PRE are consistently the lowest; indeed, writing scores on standardized tests as a whole tend to be the lowest. There is just no way to cram what writers do into a limited-time, artificial exercise. The best way to illustrate this idea is with a sample question from Pearson’s PRE study guide:

Given a short passage, the test taker is asked: “Which of the following parts should be edited to correct an error in subject-verb agreement?”

Setting aside the ridiculous nomenclature (no writer has ever called a sentence a part), and setting aside the problematic idea that editing writing involves discriminating between three error-free sentences and one incorrect sentence (imagine a copyeditor saying to herself, “I know one of these sentences has an error in subject-verb agreement. If only I knew which one!”), we are still left with the implausible, even absurd scenario behind the creation of this question. Presumably, the sentence was once correct and was made incorrect for the purpose of the test, so that the original sentence was

This pioneering, volunteer-based approach that she developed to bring eye-care services to underserved populations has had a positive effect on the lives of countless people.

This sentence was then changed to the incorrect version (with an effect/affect error thrown in for good measure):

This pioneering, volunteer-based approach that she developed to bring eye-care services to underserved populations have had a positive affect on the lives of countless people.

In a process unlike anything that writers actually do, the test taker is supposed to identify this manipulation and return the part to its original, correct state. It reminds me of a children’s television program or coloring book: Swiper (or a similar villain) has taken all the berries from the secret forest! Can you find them and return them to the berry bushes?

And on it goes for 42 multiple-choice questions. Then the test taker gets to compose two constructed responses: one an analytical argument, the other expository. Again, there is little to no similarity between what the test taker is asked to do and what real writers actually do. No writer in the world is asked to write about a topic he or she is not interested in; to follow a basic organizational pattern that can be easily assessed; to write without the chance for revision; and to do so within a limiting time frame.

All this is to say that teacher educators (like me) would be better off channeling our energy into fighting this test instead of scrambling to prepare our students to do better on it. In the short term, we can offer all the support possible (workshops, one-credit classes, web resources), but our long-term goal should be to kick this test to the curb. It is deeply flawed, and it is hurting our students and our state.