I recently worked with a school that had asked me to help with professional development. Before I arrived, I asked my contact to provide a list of the improvements to courses and teaching that staff reported making over the last few years. I also requested a graph showing the percentage of each assessment mark achieved by Grade 10 students over the last five years. My contact wasn’t aware of such a graph being on file or forming part of any existing report, so it had to be generated specially. In New Zealand, overall results are made public anyway, so this wouldn’t be seen as an issue.
The staff reported improvements to their Grade 10 programs in several ways, such as:
- Redesigning the course
- Redesigning the resources
- Simplifying the assessment
- Making multiple changes to better accommodate boys
- Introducing an interactive learning platform as an additional support
- Removing said interactive learning platform
- Introducing new textbooks
- Running mini-tests more frequently
- Redesigning the grading rubric to make it more readable
- Making the resources more student-friendly
This extensive list looked impressive and indicated a dedicated staff. But an issue emerged when the list was compared with the results graph for Grade 10 assessment over the same period. The trend lines for each obtainable assessment mark were flat across all five years. Each year represented hundreds of students submitting about 20 assessments each, yet despite the multiple changes to programs, 44% of every year’s Grade 10 assessments obtained just the basic “Pass” grade. Apart from one slightly exceptional year, the percentages for the other three assessment grades also proved fixed year after year. This is alarming when you consider that improvements in recent years were applied on top of those made in earlier years, and still the grades stayed the same.
The list of improvements made for familiar reading, and so, on viewing the results trend, the teachers were forced to question how so many ‘improvements’ had in fact made no difference.
I had far more important questions for them. Rather than exploring the changes they had made, I asked them to consider first the two processes they used for:
- Deciding on an improvement; and
- Measuring the impact of the change.
It has long been the habit of many schools and teachers to rely on what an old colleague of mine always referred to as “gut feeling”, both when deciding on changes and when judging their impact. It is still common to redesign a learning experience without drawing on either results or student feedback, yet those are precisely what a considered and impactful change requires.
Notice also that the improvements are not related directly to teaching practice or teacher behavior. Groups of teachers often shy away from discussing teaching: it has traditionally been a personal affair, and questioning a colleague’s approach can feel confrontational. This is why New Zealand expects groups of teachers to take a professional approach to finding and sharing best practice within a team. Teachers then come to see that, as a team, the aim is to build on everyone’s strengths through targeted analysis of the impact of each strategy.
New Zealand introduced a formal nationwide program for teachers called “Teaching as Inquiry” (TAI). It encourages an ongoing cycle of inquiry into your practice as a teacher; TAI is meant to be “how you operate” in the classroom. Teachers are asked to work with their students and cultivate shared ownership of learning, improving the program for better outcomes. Every lesson is an opportunity for feedback and discussion about the current situation: the clarity of the objectives, the possible approaches, and the learning output. Inquiring into your practice as a teacher forms part of New Zealand’s encouragement towards student-centered learning, where learners have a constant voice in the proceedings.
Inquiry as government accountability
To support and align with this inquiry and review model, the New Zealand government’s Education Review Office (ERO), tasked with measuring school performance, recently adopted a new long-term, more strengths-based approach: it measures the quality of a school’s self-review processes, confirming that the right things are being targeted to ensure ongoing improvement.
It seems common sense to me, and keeps everyone smiling, for a government to work with schools on their review processes, in the same way it makes perfect sense for teachers to work with their students to ensure that improvements really are improvements. If the learners have a voice and agree with the changes, then real improvement is far more likely to have a lasting impact.