Guest Voices

I welcome voices from others involved in our grassroots movement for educational equity and education justice — not just those opposed to the effects of standardized tests, but those who are looking to create a movement in which democracy is once again supported by robust and challenging PUBLIC education driven by parents, communities, and, of course, actual classroom teachers. Contact me through the blog or at Montclair parent at outlook dot com. Thanks!

Guest Voices posts have included New Jersey parents:

Please let me know if you would like to add your voice to the conversation.

4 thoughts on “Guest Voices”

  1. I am a 6th-grade teacher in Ohio. I recently administered the practice PARCC test for ELA. While I have many issues with the test, my main concern now is that there are no explanations for the correct answers. The answer key for the essay portion is a generic Grade 6–11 scoring rubric. My colleagues and I have looked at the multiple-choice questions, and we can’t figure out the rationale for the answers. Nor do we have any idea what would be considered acceptable on the essays. How am I going to provide feedback to my kids if I have no idea what the PARCC graders consider correct? There are so many issues that have not been looked at, yet we still have to forge ahead and give this test.

  2. I am writing to share my frustrations as an educator, so that you might appreciate knowing there are teachers out there who feel as you do. I am a mathematics teacher in upstate New York, which, like New Jersey, has subscribed to Pearson’s juggernaut PARCC assessments. Fortunately for us, our lawmakers have delayed the transition to PARCC, as the system is clearly lacking in so many ways. Do not get me wrong… I do believe there is a time and a place for testing and measuring student growth. However, this profit-centered and rushed effort by Pearson is as discouraging as it is frustrating.
    Despite all attempts at preparing for the new PARCC assessments, students are not ready for this leap. This is not due to a lack of effort or preparation by schools, but rather a lack of support from Pearson. Guidance about the user interface has been vague, and it is more than probable that students who know their stuff will score lower than their ability because of this unfamiliarity. Schools that field-tested this spring have a slight advantage. Those students who were tested this past spring, however, are now in a different grade and are likely using new tools for calculations, graphing, writing, solving equations, etc.
    Pearson has a solution to this, however… According to the PARCC website, Pearson has designed practice tests “so that all students – not just those involved in the field testing – can experience PARCC test items the way they will appear on the tests.” I agree that, if properly utilized, this could be of great benefit to educators in designing lesson materials that prepare students for such exams. The only problem? They have yet to release any mathematics Performance-Based Assessments, which were expected to be made available in “Fall 2014.” With the first PBAs arriving in March, students have very little time to familiarize themselves with the more difficult question types, never mind learn to navigate the new interface.
    Despite how this may sound, I am not a huge proponent of dedicating entire class periods, school days, and sometimes sequences of school days to preparing for standardized testing. I firmly believe that such practice should be embedded in every lesson, from day one, and should make up only a fraction of instructional time. Pearson’s delay in releasing this material is perplexing given that the first PBAs will be administered in just three months. This delay forces schools and educators into a position where they feel the need to have students cram in order to adapt to the new interface and question types. I agree with the many who feel this is an absolute misuse of otherwise valuable instruction time.
    PARCC has clearly articulated the importance of releasing sample test items in their intended environment so as “to provide information about the assessment system and support educators as they transition to the Common Core State Standards and the PARCC tests.” Yet, as fall has come and gone and 2014 has turned into 2015, the PARCC website still reads:
    Fall 2014 Release
    English Language Arts/Literacy: Grades 3-11 End of Year tests
    Mathematics: Grades 3-8 Performance Based tests in Algebra I, Algebra II, and Geometry
    Yet, these tests still do not exist.
    I am fortunate to work in New York State, since we will not be required to use the PARCC assessment system this year. I could not imagine the frustration level of educators who have been largely misled and yet may still be evaluated on their students’ performance on such mysterious exams.
    But… the frustration is still tangible in our little bubble, because Pearson also designs the pencil-and-paper tests we take in NYS. Like PARCC items, many of our assessment items are shielded from public, or even educator, oversight. As a matter of fact, an educator’s discussion of secured testing material could result in the loss of that educator’s accreditation. Such is the reason for this nom de plume correspondence. What I am about to discuss may fall somewhere on either side of a very grey line.
    To give some context, I am a rather successful middle school mathematics teacher in a very disadvantaged and underprivileged area of upstate NY. Despite this, my students have consistently scored in the top 1% of schools in all of NYS on these Common Core assessments. I am not overly proud of myself despite their accomplishments (for which they should be very proud). My disappointment comes from a concession I have made in educating my students in order to enhance their test scores.
    This may get a bit mathematical, but bear with me and you will understand my frustration. In the spring of 2013 my students took a state exam designed by Pearson. On this exam was a question that asked students to select a graph that had a “greater rate of change” than a given graph. This would be determined by calculating the slope of the given graph and each other graph, selecting the one graph with a greater slope. Simple enough… I felt my students would be well prepared for such a question.
    As I observed my students taking this exam, I noticed that they were stumped. One raised his hand to tell me that there were two answers. I had not looked at the problem very carefully and just restated that the directions are to fill in only one response for each question. As I continued to circulate, more and more of my students just stared at the problem, recalculating, erasing, recalculating… I finally had to read the problem, and sure enough, there were two answers. One answer had a positive slope and the other a negative slope.
    The mathematical error that the author of this question made was thinking that a negative rate of change is less than any positive rate of change. Mathematically and semantically, this is not true. Change is ranked by magnitude, regardless of whether it is an increase or a decrease, and whichever rate of change has the higher absolute value indicates the greater change. Not all of my students understood this fully, but many remembered I had told them that the sign does not matter and that they should compare the absolute values. Now here they were, sitting, stumped, because based on my teaching, this question had two answers.
    I felt as if it were my fault for teaching them this, as none of Pearson’s released material ever had students compare a negative rate of change. I had felt it would be a cool tangent to explore to enrich their mathematical understanding. I gave examples of situations that illustrated this perfectly and allowed students to debate whether or not it made sense… They seemed to leave those lessons unanimous, and almost as unanimously, they struggled when it came time to be assessed. Pearson had made a gross error that frustrated not only me but my students as well.
    I discussed this with my boss, who coincidentally also has a mathematics degree, and he agreed that it was a bad question. I wanted to address the issue directly with Pearson, but was not granted permission because of that very grey line. I had no forum in which to vent my frustrations. Those frustrations only grew when I was told that in the future I should teach students the wrong way to do this, because that is how Pearson assesses that standard. I took great issue with this, as I found it outrageous that anyone would suggest that a publishing company should be able to alter mathematical fact.
    I argued my case over the course of the rest of the year, to the point that it became a running joke in our department… to everyone but me. Finally, I was directed to alter my plans to teach this differently and had to concede. My frustration arose from feeling responsible for hurting my students’ scores the year prior and, to a greater magnitude, about teaching incorrect mathematics to inflate future scores. This compounded as I eventually became the curriculum developer for our district and had to sow these bad seeds as if I believed them to be true.
    The frustration, discouragement, and finally embarrassment led me to articulate my feelings on the standard, still without a forum through which to vent… What follows are facts, evident to anyone with a strong understanding of math.
    Two pumps are pumping water… Pump A is filling a pool at a rate of 100 gallons per hour. Pump B is emptying the same pool at a rate of 200 gallons per hour. Which is the faster pump? Clearly pump B is faster. Does that change because pump B is emptying the pool and is written as a negative rate of change?
    Train A departs traveling at 65 mph, while Train B arrives at 65 mph. Which train is traveling at a greater speed? Obviously they are traveling at the same speed. Does that change because Train A’s distance is increasing while Train B’s distance is decreasing?
    A ski lift brings skiers up a mountain at 6 mph while a skier skis down the same path at 30 mph. Whose elevation is changing faster? Obviously the skier headed down the mountain. Does that change because the skiers on the lift are increasing their elevation while the downhill skier is decreasing theirs?
    Tom earns $250 a week and spends $300 a week. Which of these things would cause a greater change in his bank account? Obviously the spending is causing a greater change in his bank account. Does that change because his earnings are positive and his expenses are negative?
    Fuel tank A is decreasing by 2 gallons per hour while fuel tank B is decreasing by 4 gallons per hour. Which tank has the greater rate of change? Obviously tank B’s fuel level is changing faster. Does that change because they are both negative and -2 is greater than -4?
    Two glasses are both at room temperature. Glass A remains at room temperature while Glass B is placed into a freezer. Which glass do you expect to have a greater rate of change in temperature? Obviously, the glass that was placed in the freezer will have a greater change. But wait, is no change more than negative change?
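    For what it is worth, the rule these examples illustrate fits in a few lines of code. This is only a sketch of the argument (the Python and the function name are my own; nothing here comes from Pearson’s materials): the greater rate of change is the one with the larger absolute value.

```python
# Compare two rates of change by magnitude, as the examples above argue:
# the "greater" rate of change has the larger absolute value, regardless
# of sign (filling vs. emptying, climbing vs. descending).

def greater_rate_of_change(rate_a, rate_b):
    """Return 'A', 'B', or 'equal' by comparing |rate_a| with |rate_b|."""
    if abs(rate_a) > abs(rate_b):
        return "A"
    if abs(rate_b) > abs(rate_a):
        return "B"
    return "equal"

# Pump A fills at +100 gal/h; Pump B empties at -200 gal/h.
print(greater_rate_of_change(100, -200))  # prints "B"

# Two trains, one departing at +65 mph, one arriving at -65 mph.
print(greater_rate_of_change(65, -65))    # prints "equal"

# Tank A drains at -2 gal/h; Tank B drains at -4 gal/h.
print(greater_rate_of_change(-2, -4))     # prints "B"
```

    The exam question's answer key, by contrast, effectively compared the signed values, which is how it ended up with two defensible answers.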
    According to the author of that question, which was allowed to assess my students’ understanding, I would be wrong on each of those problems. I take real issue with this, as a mathematician and as an educator. The real issue is that teachers are not allowed to provide any feedback on prior questions, as they are “secured material.” Furthermore, no clarification has ever been made on this standard, nor has any information been released on how this question was graded. As educators, those of us with degrees in the subjects we teach should be allowed a forum to provide feedback on questions that are ambiguous.
    Furthermore, I am disgusted that we have fostered a society that emphasizes student scores over understanding and comprehension. So much is tied to student performance numbers that educational leaders are willing to overlook accuracy and accountability of the very tests that determine such numbers. Tests have become about generating data, whether or not the data is in fact an accurate representation of what learning takes place in any given classroom or in any given child. This is testing for the sake of testing.

  3. I am a retired PhD mathematician who has taught a wide range of subjects, from motor skills like swimming to graduate-level courses in applied math. I also am an “older dad” with a 15-year-old son who attends a public school in a Washington DC suburb. I think it is a great school, and I am very happy with the quality and depth of coverage in the courses he is taking. It seems to be much better than what I remember from my high school days.

    Genomics didn’t even exist in my day, and putting a man on the moon was futuristic science fiction. But on the other hand, life was simpler. I went to a small public school, don’t recall any angst over taking a standardized college entrance exam, and applied to only one college. Today they call it a “highly selective” college, but I just liked it because a neighbor had gone there and I thought she was very smart. That’s all I remember about the process of “getting into college.”

    My son is getting all A’s in school and is a well-rounded, socially adjusted kid. Of course I think he is great. But now we are approaching the SATs, and he’s under great pressure to take more “advanced placement” (AP) courses. I object!

    As Ms. Blaine has described so well, the SAT process is flawed. I am thrilled to discover that some of our better universities no longer require it. But most, if not all, “highly selective” colleges still do, so my son must now prepare for it. A neighbor who is a world-renowned physicist told me that he had his son take one of these SAT prep courses, and his son got a perfect score on the SAT. Whether that was a consequence of taking the prep course or not, we’ll never know. So my son will take a prep course to make sure he has as good a chance as any of being accepted at the college of his choice.

    But wait; isn’t that what high school is supposed to do? Isn’t it supposed to do its best to prepare you to get into college and do well when you get there? If scoring high on the SAT is part of the college admission process (setting aside the question of whether it should be), why don’t our schools offer a semester course in “SAT prep”? They offer driver’s education, home economics, band, physical education, and numerous other electives. Why should my son have to go to a private education firm to prep for the SAT?

    My other sore point is the concept of “advanced placement” courses. A neighbor was boasting to me the other day that his daughter had taken so many AP courses in high school that she started college with enough credits (together with some summer courses) to graduate in three years, saving him a lot of money. But wait, I thought the idea of college was to get an education, not to save money. With all due respect to the teachers at our high school, taking AP physics or calculus at ___ high school is not the same as taking first-year physics or calculus at Princeton or Stanford. I, for one, don’t want my son taking a watered-down “version” of physics or calculus; I want him taking it from a professor who has a deep understanding of the subject, in an environment where in-depth studying is encouraged, and with fellow students who discuss and share their ideas with him.

    College is not just a matter of collecting enough “reward points” to graduate; it’s about getting an education during a period in your life where you have the ability and desire to absorb and analyze knowledge at a fantastic rate. Like SAT prep courses, the AP concept is a flawed idea that will hopefully pass out of existence soon.
