
Jessica Handy
Government Affairs Director

Jessica works with parents, legislators, and other stakeholders to push for policy that puts children first.

This school year marks the first in which all Illinois school districts are conducting educator evaluations under the guidelines of the Performance Evaluation Reform Act (PERA). This transition has been in the works since 2010. We wanted to check in with an experienced teacher who was a part of the transition and we’re grateful to high school Spanish and English teacher Patricia Heyen for agreeing to share her experience with us. What follows is the transcript of my conversation with Ms. Heyen.

JH: Start out by telling us a little about yourself. What do you teach, how long have you been teaching, what's your district like, and what awards have you received?

PH: I’ve been teaching for 17 years. I teach secondary Spanish and English. Our district has about 1,792 students. We have 51% low-income throughout the district. We’re a predominantly white district. In high school, we have 470 students and a pretty high mobility rate of about 27%. We have a lot coming and going. Our PARCC scores last year showed 26% ready for further endeavors and we have about 82% of freshmen on track, so that’s what the district looks like. In terms of myself and what certifications and awards I’ve received, I have my National Board Certification. I have received the Coca-Cola Educator of Distinction award—I was nominated by a student. I’m trained as a teacher mentor and a teacher mentor trainer.

JH: Early in your teaching career, what kind of support and feedback did you get from your principal to help you improve? What kind of evaluations did you have?

PH: I had really good evaluations, but I didn’t get a lot of feedback. Mostly anecdotes. The principal when I first started would tell me things about my students – things that they’d said to him about me. Really, it was essentially: “Hey, you’re doing a great job. Keep it up!” Probably what I felt to be the strongest feedback came when I first started teaching. I taught a class on short stories, which was a place for the kids who really weren’t English inclined or who struggled in English – that would be a class they would take; it was an elective instead of English 1, 2, or 3. My principal came to me and asked me to teach Advanced Placement. I saw that as feedback in a sense that he trusted me with that job. But other than that, not really a lot of feedback.

JH: In 2010, the State enacted the Performance Evaluation Reform Act (PERA), which required new training for evaluators, creation of local joint committees to work out the details of each district's process, inclusion of student growth as a component of performance evaluations, and minimum requirements for what classroom observations would look like. The earliest districts adopted the new evaluations in 2012, and this year is the first in which all districts have moved to PERA. When did your district begin using PERA evaluations?

PH: We started in the fall of 2014.

JH: Were you on the local joint committee that hashed out your local details?

PH: I was.

JH: Oh great! Tell us a little about that process.

PH: We started in . . . I would say early 2013. And we started with the growth. It was kind of a volunteer group. So we started with a group of teachers and administrators, and we did some professional development. We went to a couple conferences on PERA in general. And I also did the personal stuff. I went to workshops offered by the union, and the New Teachers Collaborative, and then also to the workshops we did together as a team. A couple of those were ROE[1]-generated and they focused primarily on Sandoval, which had received the School Improvement Grant. We had a lot of training on their method, and we worked from that method. We also had training from the ROE on Charlotte Danielson. So we did a lot of preparatory conferencing and PD[2] before we actually started meeting and talking about our plan. From that point, we started working on the first section, which was the basic teacher evaluation – the performance part of it. And we didn’t start with growth until later. We put performance evaluation into place and we ran it for a full year and then we did a pilot for the SLO[3].

JH: So you started talking about this in 2013 and then fall of ’14 is when you had your first evaluation under this system, right?

PH: Yes, and I was the first teacher in the high school to go through the evaluation process.

JH: Tell us about your experience going through that evaluation yourself.

PH: Well, I found it to be more stressful because, for one thing, there's a whole lot more preparation involved in the PERA evaluations than there ever was in the former evaluations. It used to be that the administrator would say, "I'm going to come in on such-and-such day, so fill out this paper," which would essentially be your lesson plan. For PERA it's much more involved. There's a lot of documentation that you have to provide. The way we have it set up, the documentation for the "off-stage" sections of Danielson[4] (1 and 4) is submitted before the observation, so we'd have all of that documentation to prepare plus the lesson plan and class/student analysis to outline for the evaluator. So in terms of work, there's certainly more work involved in these.

JH: So there’s definitely a difference in the preparation aspect – how else did it compare to the old evaluation process?

PH: In terms of the actual evaluation, there's the observation itself. Administrators used to come in, take a few notes, and maybe stay 10 – 15 minutes. This was a whole 50-minute period with an administrator on a laptop taking notes the whole time. So when it came time for the post-conference, there was a lot more feedback they could give you, and you could look at what they'd recorded about what you said and the way you responded. It was much more beneficial to the teacher because you could actually see what you were saying and how students were responding to it. So in terms of feedback, it's much more effective. In terms of the conversation between the teacher and the principal, it was a drastic change from anything I had experienced previously, because in the past it had always been, "OK, here's your evaluation document. Sit here for a minute and take a look at it. Do you have any questions or concerns? Sign on the dotted line," and off you go. It was a ten-minute process. There was no real talk about instruction, except in a very vague sense. So this whole PERA evaluation opens the door to conversations about instruction, about learning, about assessment, and it's really effective in terms of talking pedagogy.

JH: I know you went through the National Board Certification. How did this evaluation process compare to that sort of professional development or other things you've gone through to get feedback on your teaching?

PH: I think they are very similar in terms of conversation, but I will also say that having gone through National Board was strong preparation for this transition. It certainly made me more comfortable with everything that was happening than a lot of my colleagues who had never been through National Board. National Board – when I did it (it's changed since then; I did it almost ten years ago) – was very concentrated. You had to do all four of the modules in a one-year period. It was intensive and extreme, and it was honestly harder doing National Board than it was earning my Master's or even my specialist degree while I was teaching—that's how intense it was. In terms of the evaluation, I don't think it is as grueling as National Board because it doesn't all have to be done at once, but I also feel that National Board is strong preparation for what's happening in our evaluation process today because it instills deep reflection and evaluation that lead to better teaching.

JH: You mentioned the National Board prepared you more than your colleagues who hadn’t been through a process like this. What did your colleagues generally think of PERA? Were they open to it and did they find it to be a more effective process? Or was there a lot of anxiety around it generally?

PH: I think there was a lot of anxiety. Part of that is because we were hit with Common Core[5] and PERA in one fell swoop. So the truth is there was a lot of anxiety and at the time, I was on the school improvement team. Our district had hired a consultant to help with the Common Core aspect of it, so we were going through a lot of changes at the same time and staff was reeling. So we, the School Improvement Team, started doing some faculty development and trainings, which I think went a long way to calming the staff and getting them on board. I would say the first couple of months there was a lot of anxiety and denial about it. And I think once we started training them on what it was and how it would help, then things got a lot better.

JH: Let's talk a little about the student growth piece. You're a Spanish teacher, and Spanish isn't traditionally a subject measured by standardized tests. Did you have any challenges finding a way to measure student growth?

PH: I did find it a little bit difficult at first, because with Spanish, so much of it is about recall: learning vocabulary, learning how to conjugate a verb. So a lot of the learning is at a recall level. We have a rigor analysis attached to our evaluation, and teachers had to hit all four levels of the rigor analysis, so it became a bit of a challenge to get into applying and critical thinking skills: figuring out, for Spanish 2, how to get past the recall and actually get students to where there's rigor involved, while also using the standards for foreign language, both the Illinois standards and the American Council on the Teaching of Foreign Languages standards. I tried to incorporate both, and what I ended up doing was to give them a cloze paragraph in which they had to know the vocabulary, and they had to be able to create different connections with it to reach those standards. At first it was a challenge, but eventually it just took some thinking—wrapping my head around the learning goal and working out the plan to get there.

JH: How much of the evaluation is based on growth?

PH: When we started with the pilot, we decided as a team that rather than starting with 25% and then having to bump it up to 30%, which is the minimum, we just started with 30. So our evaluation has growth at 30%.

JH: Has your district used the evaluation to help inform professional development to individualize it and help people go to what’s most relevant or where they need the most support?

PH: I don't think they've dictated that in our district, because it has always been a matter of choosing your own professional development. The exception was hiring the consultant, which was specialized PD to help us get through the core subjects in the Common Core, but that wasn't really related to the evaluation per se.

JH: What would you recommend to other districts that are just now implementing their evaluation plans?

PH: Well, I think it's really important to be flexible and open-minded as you go through the design process, recognizing what's required by the state and also looking at other examples. When we were researching, we looked at Wisconsin and Indiana, which were at the forefront in that regard. So . . . just looking at different models. We ended up using the Sandoval model pretty much, but we adapted it. It's really important that the evaluation fit not only what the state requires, but also what your district culture is. So we made a lot of changes based on that. I would recommend that for anybody. It's also important to start training your staff even before the evaluation model rolls out, because assuming that people know what's going on, or that they understand Danielson or Marzano[6] or whoever you're using for your model, can cause a lot of problems in the long run. So don't assume that your staff know; provide them with training right off the bat so they are comfortable going into it. Having a couple of people in each building who have served on the joint committee and can be go-to people is also very helpful.

JH: Are there any aspects of the PERA law that are unhelpful or that you’d like to see changed?

PH: Having been through it all, I was and still am very much for the Common Core. The standards are a good thing. The student growth piece is problematic because, having worked with it so long, I can see there are so many variables in what makes a student perform or not perform, and a teacher can only do so much, especially when we get mandates for student growth but there are no big changes in the way our schools are set up. And of course, there's the whole imbalance of money to take into consideration. We have different levels of funds available to help students. So . . . there are so many different variables. I have thought about it a lot, and if there is one thing I would change, it would be that . . . I would like to see PERA not allow these different types of assessments. Eventually, maybe start with a district-created assessment and move to a regionally created assessment for every class K-12 that is given once a year. Maybe have that grow into a statewide test—given once a year—because I think that would be a much more accurate representation of what teachers do. I see problems not only in terms of student variables, but also in that SLO results can be manipulated. An annual statewide common assessment, designed in much the same way as the Common Core Standards, could positively impact instruction, learning, and future policy.

JH: So those would be the Type 3[7] assessments that are developed jointly by the teacher and his or her evaluator?

PH: Yes, and I really don’t even think a district test is enough. I think it needs to be regional, like among mid-state counties and southern counties, etc. It needs to be more regional to get a true measure, especially until we have a state law that provides funding that is more equitable so that all schools have the same resources with which to reach the standards for teachers and students.


Note: Thanks to Teach Plus for introducing us to Ms. Heyen. Teach Plus is an organization that empowers excellent, experienced teachers to take leadership over key policy and practice issues that affect their students’ success. Learn more about them here.


[1] “ROEs” are the Regional Offices of Education that work with school districts outside of Cook County. Their responsibilities range from providing bus driver training to school district audits. Chicago Public Schools is not represented by an ROE. In the rest of Cook County, ROE responsibilities are fulfilled by three Intermediate Service Centers.

[2] Professional development.

[3] Student Learning Objectives (SLOs) identify a specific learning goal and measurement to track student growth toward that goal. In PERA evaluations, SLOs are often developed at the classroom level between a teacher and evaluator.

[4] The “Danielson Framework for Teaching” is a research-based set of components of instruction that many districts use as their model for evaluating teacher practice. The framework is divided into four “domains”: (1) planning and preparation, (2) classroom environment, (3) instruction, and (4) professional responsibilities. Domains (2) and (3) are evident through classroom observation, but domains (1) and (4) are not.

[5] Illinois learning standards were revised in 2010 to align with the Common Core State Standards. The first year of statewide implementation was the 2013-2014 school year.

[6] The Marzano framework is another teacher evaluation model.

[7] PERA’s Administrative Rules define three types of assessments used to measure student growth: (I) a nationally-normed standardized test, (II) an assessment approved and implemented district-wide, and (III) an assessment jointly selected by the teacher and his or her evaluator.

