Wednesday 23 September 2015

Correlating practice with results

The IALs, or International A Levels, by Edexcel have six units: Units 1, 2 and 3 make up the AS Level in Year 12, and Units 4, 5 and 6 complete the A2 Level in Year 13. The qualification does not have a practical assessment in the typical sense, as the CIE A Level does, but relies on written papers (Units 3 and 6) to examine the students' practical skills.

I hold the belief that providing students with opportunities to demonstrate their skills in an appropriate practical context is the best way for them to hone their skills of analysis, and to be able to draw conclusions and support those conclusions with theory and published data. These skills transcend the Unit 3 and 6 papers and are important in all of the units; students need to be able to apply them everywhere, not just be good at answering the practical papers.



I have been considering the correlation between my Y12 students' performances in their Unit 3 mock, core practical write-ups and research write-up, and their actual Unit 3 and overall AS performances in summer 2015. As a guide to interpreting the coefficients below: correlation coefficients whose magnitude is between 0.9 and 1.0 indicate variables which can be considered very highly correlated; between 0.7 and 0.9, highly correlated; between 0.5 and 0.7, moderately correlated; between 0.3 and 0.5, a low correlation; and less than 0.3, little if any (linear) correlation.
Unit 3 mock vs AS Unit 3 actual: R = 0.3209; 
Core write-up max score vs AS Unit 3 actual: R = 0.3121; 
Core write-up max score + Research vs AS Unit 3 actual: R = 0.4285;
The data above indicate that the activities during the course (the Unit 3 mock, the core write-ups and the research) have low correlations with the students' AS Unit 3 exam performance. What is of note is the comparison between the students who performed the research tasks throughout the year and those who did not: including the research scores improved the correlation with the actual AS Unit 3 performance for those students.
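
(For anyone wanting to repeat this kind of analysis with their own mark book, here is a minimal sketch of how the coefficients and the interpretation bands above can be computed; the score lists are made-up placeholders, not my students' data.)

```python
# Pearson correlation between two assessment score lists (placeholder data).
from scipy.stats import pearsonr

def band(r):
    """Interpret |R| using the bands described above."""
    m = abs(r)
    if m >= 0.9: return "very highly correlated"
    if m >= 0.7: return "highly correlated"
    if m >= 0.5: return "moderately correlated"
    if m >= 0.3: return "low correlation"
    return "little if any (linear) correlation"

unit3_mock   = [42, 55, 61, 48, 70, 38, 66]  # hypothetical mock marks
unit3_actual = [50, 47, 68, 52, 60, 45, 71]  # hypothetical exam marks

r, _ = pearsonr(unit3_mock, unit3_actual)
print(f"R = {r:.4f} ({band(r)})")
```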



In comparing the mock exams with the overall AS performances, the correlations become much stronger:
Units 1, 2 + 3 mocks vs AS actual: R = 0.7287 (not enough data for Unit 2); 
Unit 1 mock vs AS actual: R = 0.7828; 
Unit 3 mock vs AS actual: R = 0.3656;
The Unit 1 mock exam, taken by the students halfway through the course, is highly correlated with their final AS performance, making it a useful tool for guiding students in improving themselves. The Unit 3 mock paper has a low correlation with the final AS performance; considering the students do so well in this mock, it may provide a false sense of security and introduce a degree of complacency in our students.



In November 2014, I presented my use of GAFE tools to assess and report on my students' performances in their write-ups and research activities. It is this assessment strategy and its data that I have led my colleagues in bringing together to identify the correlations below:

Core write-up max score vs AS actual: R = 0.6281; 
Core write-up max score + Research vs AS actual: R = 0.7204; 
Unit 1 mock + Core write-up max score vs AS actual: R = 0.7642; 
Unit 1 mock + Core write-up max score + Research vs AS actual: R = 0.8949;

The scoring of a research write-up was carried out with 13 students; the other 18 students completed only the core practicals.

The students' core write-up max scores are moderately correlated with their actual AS performance. The addition of the research performances lifts the correlation into the 'highly correlated' band, and with the addition of the Unit 1 mock as well, the correlation climbs towards 'very highly correlated'. 
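
(A sketch of how a combined measure can be correlated with the AS result; scaling each measure to 0-1 before summing is my assumption here, so that no single component dominates, and the marks are placeholders.)

```python
# Correlating a combination of assessment measures with the AS result.
import numpy as np
from scipy.stats import pearsonr

unit1_mock = np.array([55, 62, 48, 71, 66, 58], dtype=float)  # hypothetical marks
core_max   = np.array([12, 15, 10, 18, 16, 13], dtype=float)  # hypothetical rubric scores
as_actual  = np.array([58, 66, 50, 74, 70, 61], dtype=float)  # hypothetical marks

def scaled(x):
    """Scale a score set to 0-1 so each measure contributes comparably."""
    return (x - x.min()) / (x.max() - x.min())

combined = scaled(unit1_mock) + scaled(core_max)
r, _ = pearsonr(combined, as_actual)
print(f"Unit 1 mock + core write-up vs AS actual: R = {r:.4f}")
```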


I believe that by targeting the students' core write-up and research performances throughout the year, we can improve our students' AS performances. This will require a shift in attitude by our students towards these activities, as they seem to be under the impression that the Unit 3 paper is the only place their practical analysis skills are assessed, and one which they are good at! 

Analysing the Unit 1 and 2 papers of students who have under-performed shows that the questions they struggle with are those relating to graph and data analysis, validity, reliability, accuracy and precision, all of which are addressed directly in the core practical write-ups. 

Our instruction needs to consider KASI (Knowledge, Attitudes, Skills, Interpersonal skills). In the context of the discussion above, attitudes and skills need to be addressed: the skills of analysis, and an attitude that values the write-up process, need to be targeted. 

The use of a write-up rubric should be adhered to, with a Triple Impact Marking strategy (T.I.M., linked with D.I.R.T.) used for self, peer and finally teacher assessment via the comments within the GDoc write-up. The feedback (Hattie, needs citation!) students give themselves by grading their work against the rubric, with reasons for the grade awarded, should identify where improvement is needed; the student must then act on the assessment and improve their work (in a different colour?!). Peer assessment brings the student more feedback for improvement, as the peer assesses the self-assessment and offers guidance for improvement; this advice also needs to be acted upon (another different colour). The peer also comes into more contact with the rubric, helping them assimilate the requirements for a complete write-up. Finally, the teacher assesses the final write-up, the self and peer assessments, and the action taken by the student to enhance their performance.

T.I.M. and D.I.R.T. require careful time management, but the collaborative nature of GDocs and the commenting features make it easy for students to interact with each other. Students must aim to be better the next time they do a write-up, so reflecting on their performance is important. Adding this reflection to the GDoc after the teacher assessment would be a straightforward strategy. A more powerful process may be to use DocAppender for GDocs, which uses a GForm to add the reflection to the GDoc. Since the reflection is first stored in the GForm's Sheet, the Sheet can be used to express the targets back to the student at a later date through Gmail or even a GSites page.
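
(As a sketch of that last step: assuming the GForm's response Sheet is downloaded as a CSV with hypothetical column names 'Email' and 'Target', the stored targets could be drafted into reminder messages like this; actually sending them through Gmail would need Apps Script or the Gmail API, which I am not showing here.)

```python
# Draft reminder messages from an exported copy of the reflections Sheet.
# The file name and column names are hypothetical; adjust to the real GForm fields.
import csv

with open("reflections_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(f"To: {row['Email']}")
        print("Subject: Your write-up target")
        print(f"Last time you set this target: {row['Target']}")
        print("Check your next write-up against it before submitting.\n")
```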

Finally, getting the students to create a gallery of their write-ups for public scrutiny will enhance their motivation to do a better job, as the authenticity of the work will be lifted; this is a standard Project Based Learning tactic for increasing motivation. A gallery walk will also bring the weaker-performing students into contact with the better write-ups and allow the highly skilled students to support their peers.


Thursday 10 September 2015

Application of research to the classroom: assessment for learning

I use multiple strategies in my lessons to enhance student engagement, motivation, progress and deep learning, but how often do we as teachers pick apart what we are doing and why? I grabbed a book out of my school's library for some light reading over the summer break, Evidence-Based Teaching: A Practical Approach by Geoff Petty, and I was reminded that the strategies we use to enhance the learning of our students have been thoroughly researched, and that there are some very worthwhile strategies we should employ more often.

I was pleased to find Petty's example teacher, Tina (p. 88), teaching one of my lessons; it is almost identical in its constructivist/social constructivist pedagogical approach, though I have applied some extra approaches here and there from my own experience.

The example Petty gives doesn't cover the whole lesson, so I am going to fill in the extra bits I do here; if you find his book you can compare!

Students are presented with the title and objective/s of the lesson. Setting the goals for students activates their thinking and begins the process of retrieving information and past experience from long-term memory. I do not use statements such as "you will be able to..."; I prefer to turn the learning objectives into questions. These questions are the first things the students see, and they begin to address them as they sit down and prepare for the lesson; students don't get to sit and do nothing at the start of my lessons!
After 5 minutes, find out from the students what they know and write it up on the board.
Identify what the class is currently struggling with and present how that skill is done; some typical non-examples will enhance the presentation of the skill. Share a graphic organiser with the students that they can add notes to; in the GAFE classroom this would likely be a Google Drawing.
Students construct criteria for completing the skill effectively; students work in pairs and can snowball to 4s.
The teacher nominates a speaker from each group to offer a criterion. The teacher questions why it has been selected and asks members of the other groups to verify the criterion.
With a complete list of criteria, the students complete some products that are to be peer assessed.
The teacher circulates, identifies issues with the products and discusses these with the student in regard to the criteria listed: "look at your A; how have you done against the criteria?" The student self-assesses and reflects on why they have not been successful. The teacher guides the student to identify what the error is and why they have made it. This form of AfL will generate deep learning, as the student is actively thinking about the mistake they have made, thereby altering and improving their mental model of the skill.
The students self-assess their products, then peer assessment takes place, all against the agreed criteria. The teacher can continue to circulate and support those students who have any difficulty deciding whether the criteria are being met. Personally, I prefer to sit and observe/listen to the students as they peer-assess. My students help each other and check each other's work before they come to me; "3 before me" I believe the strategy is called! I encourage my students to explain to the person they are assessing why they gave a mark from a mark scheme or the criteria provided. This explanation can lead to a productive discussion between the assessed and the assessor until agreement is reached; the teacher is the final arbiter in the process, and it can often come down to saying "...you are both correct because... Let's give the benefit of the doubt and award the mark!"
By listening, watching and circulating, I find out what is causing the most issues for the students, but to verify this I ask the assessors to write down on mini-whiteboards the criterion that caused the problem on the work they assessed. Doing this removes some of the anxiety and embarrassment from the individual while giving me a window into the class's overall performance. I can then pick apart the most commonly difficult criterion and decide how best to address it next time.
Students improve their products, clearly highlighting what needed fixing and why they were unsuccessful in the first place. The students take some time to update their eBooks with the success criteria for the skill and add photos of their products as evidence and for their revision. These notes are then reviewed by me when I look at their GSlides eBooks using Goobric.
To finish the lesson, students can analyse some 'broken' non-examples to identify what is wrong with them and how they can be improved. This could be done from the interactive whiteboard with students using their mini-whiteboards to communicate, or they could use their devices to give recorded responses using apps such as Pear Deck, Kahoot or Socrative. Kahoot has timings and leaderboards and brings an element of fun competition to the plenary. Pear Deck allows the students to change their answers if the teacher permits, which can be a bit of a giggle!

So what is going on in this lesson that makes it outstanding? Let's break it down from the top:
This lesson is an example of direct-instruction, whole-class interactive teaching with constructivist activities; Gagné's nine events of instruction are closely followed here. Hattie states that whole-class interactive teaching produces an average effect size of 0.81. 

Goal setting, exemplars and non-examples: Marzano identifies goal setting as having a 0.97 effect size. Giving students specific, achievable goals at the start of the lesson gives them a target to reach while activating their prior knowledge and experience of the topic. Providing exemplar material that students can use to 'calibrate' their thinking as to what a perfect piece of work looks like also gives them a finishing line to aim for. I use the finishing-line example because, as a runner, when I go running with a coach I need to know how far I will be going in order to pace myself and adjust my effort. The non-examples come afterwards; these are mini-assessments and can be used in a game-like fashion, with the students using the success criteria to identify what is wrong with the non-example. The use of non-examples is also an opportunity for the students to assess and reflect on correctness and on how to improve a piece of work.

Graphic organisers have been shown to have an effect size of 1.2, which is very high. As a teacher I can use a graphic organiser to demonstrate conceptual links between topics, but students can use them to review and analyse content and to present it in a reformulated fashion that has taken deep thinking to achieve. I am a fan of the atomistic mind-map for illustrating connections, and Venn diagrams for compare-and-contrast activities.

Feedback (self, peer and teacher): Marzano identifies a 1.13 effect size for giving students highly specific feedback on the process and strategies they have employed in doing their work. This feedback can come from themselves, their peers or their teacher.
This means identifying 'what went well' and 'even better ifs', and using these to set appropriate targets for individual students to move their mental models forward.

Mastery learning:
Assessment proformas:

http://www.facdev.niu.edu/facdev/resources/guide/learning/gagnes_nine_events_instruction.pdf
http://www.instructionaldesign.org/theories/conditions-learning.html