Thursday 10 September 2015

Application of research to the classroom: assessment for learning

I use multiple strategies in my lessons to enhance student engagement, motivation, progress and deep learning, but how often do we as teachers pick apart what we are doing and why? I grabbed a book out of my school's library for some light reading over the summer break, Evidence Based Teaching: A Practical Approach by Geoff Petty, and I was reminded that the strategies we use to enhance the learning of our students have been thoroughly researched, and that there are some very worthwhile strategies we should employ more often.

I was pleased to read about Petty's example teacher, Tina (p. 88), delivering what is almost one of my lessons; it is nearly identical in its constructivist/social constructivist pedagogical approach, though I have added some extra approaches here and there from my own experience.

The example Petty gives doesn't cover the whole lesson, so I am going to fill in the extra bits I do; if you find his book you can compare!

Students are presented with the title and objective/s of the lesson. Setting the goals for students activates their thinking and begins the process of retrieving information and past experience from long-term memory. I do not use statements such as "you will be able to..."; I prefer to turn the learning objectives into questions. These questions are the first things the students see, and they begin to address them as they sit down and prepare for the lesson; students don't get to sit and do nothing at the start of my lessons!
After five minutes, find out from the students what they know and write it up on the board.
Identify what the class is currently struggling with and present how that skill is done; some typical non-examples will enhance the presentation of the skill. Share a graphic organiser with the students that they can add notes to; in the GAFE classroom this would likely be a Google Drawing.
Students construct criteria for completing the skill effectively; they work in pairs and can snowball into fours.
The teacher nominates a speaker from each group to offer a criterion. The teacher questions why that has been selected and asks members of the other groups to verify the criterion.
With a complete list of criteria, the students complete some products that are to be peer assessed.
The teacher circulates, identifies issues with the products and discusses these with the student in regard to the criteria listed: "look at your A; how have you done against the criteria?" The student self-assesses and reflects on why they have not been successful. The teacher guides the student to identify what the error is and why they have made it. This form of AfL will generate deep learning, as the student is actively thinking about the mistake they have made, thereby altering and improving their mental model of the skill.
The students self-assess their products, then peer assessment takes place, all against the agreed criteria. The teacher can continue to circulate and support those students who are having any difficulty deciding if the criteria are being met. Personally, I prefer to sit and observe/listen to the students as they peer assess. My students help each other and check each other's work before they come to me; "3 before me" I believe the strategy is called! I encourage my students to explain to the person they are assessing why they awarded a mark from a mark scheme or the criteria provided. This explanation can lead to a productive discussion between the assessed and the assessor until agreement is reached; the teacher is the final arbiter in the process, and it can often come down to saying "...you are both correct because... Let's give the benefit of the doubt and award the mark!"
By listening, watching and circulating, I find out what is causing the most issues for the students, but to verify this I ask the assessors to write down on mini-whiteboards the criterion that caused the problem on the work they assessed. Doing this removes some of the anxiety and embarrassment from the individual while giving me a window into the class's overall performance. I can then pick apart the most commonly difficult criterion and work out how best to address it next time.
Students improve their products, clearly highlighting what needed fixing and why they were unsuccessful the first time. The students take some time to update their ebooks with the success criteria for the skill and add photos of their products as evidence and for their revision. These notes are then reviewed by me when I look at their GSlides eBooks using Goobric.
To finish the lesson, students can analyse some 'broken' non-examples to identify what is wrong with them and how they can be improved. This could be done from the interactive whiteboard with students using their mini-whiteboards to communicate, or they could use their devices to give recorded responses using apps such as Pear Deck, Kahoot or Socrative. Kahoot has timings and leaderboards and brings an element of fun competition to the plenary. Pear Deck allows the students to change their answers if the teacher permits this, which can be a bit of a giggle!

So what is going on in this lesson that makes it outstanding? Let's break it down from the top:
This lesson is an example of direct instruction, whole-class interactive teaching with constructivist activities; Gagné's nine events of instruction are closely followed here. Hattie states that whole-class interactive teaching produces an average effect size of 0.81.

Goal setting, exemplars and non-examples: Marzano identifies goal setting as having a 0.97 effect size. Giving students specific, achievable goals at the start of the lesson gives them a target to reach while activating their prior knowledge and experience of the topic. Providing exemplar material that students can use to 'calibrate' their thinking about what a perfect piece of work looks like also gives them a finishing line to aim for. I use the finishing-line analogy because, as a runner, when I go running with a coach I need to know how far I will be going in order to pace myself and adjust my effort. The non-examples come afterwards; these are mini-assessments and can be used in a game-like fashion, with the students using the success criteria to identify what is wrong with the non-example. The non-examples are also an opportunity for the students to assess and reflect on correctness and how to improve a piece of work.

Graphic organisers have been shown to have an effect size of 1.2; this is very high. As a teacher I can use a graphic organiser to demonstrate conceptual links between topics, but students can use them to review and analyse content and present it in a reformulated fashion that takes deep thinking to achieve. I am a fan of the atomistic mind-map to illustrate connections, and Venn diagrams for compare-and-contrast activities.

Feedback (self, peer and teacher): Marzano identifies a 1.13 effect size for giving students highly specific feedback on the process and strategies they have employed in doing their work. This feedback can come from themselves, their peers or their teacher.
Identifying 'what went well' and 'even better if' points, and using these to set appropriate targets for individual students, moves their mental models forward.

Mastery learning:
Assessment proformas:

http://www.facdev.niu.edu/facdev/resources/guide/learning/gagnes_nine_events_instruction.pdf
http://www.instructionaldesign.org/theories/conditions-learning.html
