What is the purpose of assessment in schools?
Assessment is most commonly synonymous with public examinations and is communicated through a language of grades, percentages and GPAs. Typically applied towards the end of a pupil's time at school, assessment serves as the connective tissue between formal schooling and society: it is how we gauge the relative success a pupil has achieved during their school years. Indeed, assessment is the currency that grants access to higher education and career opportunities.
This much is true, and indeed necessary. However, this article argues that such a narrow perspective on assessment undermines the power that effective assessment can have in driving learning and underpinning excellent teaching. The premise of this article rests on the following two points:
- Assessment is most powerful when it is used by teachers as a means of gathering evidence on what a child may know or can do, in order to inform the next steps in teaching and learning.
- Assessment has greatest impact when it affords relevant and specific feedback to the learner on how they can further develop what they know or can do.
(Hattie, 2008; Black and Wiliam, 1998; Coe et al., 2014)
The importance of effective assessment in shaping learning, and as a feature of effective teaching, is well documented. However, the practical implementation of such practice is not always straightforward. For instance:
- Do assessments truly measure what they purport to? For example, to what extent does a public examination or end-of-term test assess critical thinking or creativity? Indeed, does the domain specification of any test truly cover the content and levels of thinking required of the learner?
- Are we sure the assessment of a child's history essay, artwork or music performance is reproducible?
- To what extent does an end-of-unit test in science inform the next steps in learning for a pupil?
- Is marking of pupil work a worthwhile exercise?
- Surely feedback to pupils has a positive impact on learning, right?
The world of effective assessment is murky. Teachers are encouraged to use formative and summative forms of assessment, and feedback is promoted as the panacea of effective practice. Yet feedback has been demonstrated, in some circumstances, to have a negative impact on learning (Kluger and DeNisi, 1996), whilst I am often concerned about the compartmentalisation of assessment into various modes: formative, ipsative and summative, to name a few.
What to do? Work undertaken some time ago when I was at Durham University attempted to elucidate a way through the challenges raised above. Our view was to reconceptualise assessment around four fundamental pillars: purpose, validity, reliability and impact.
It is essential that there is clarity in the purpose of assessment: what is being assessed, why is it important and how will it advance learning for the pupils? The answer to this question is fundamental in shaping the nature of the assessment to be used, and although straightforward in principle, this is one of the areas of challenge faced in practice. For example, if a primary school pupil is asked to undertake 10 questions in maths independently, what is the desired outcome? Could it be to provide increasing challenge through a range of questions that move from recall to application in an unfamiliar context, to promote fluency, or to identify misconceptions? The nature of the 10 questions would need to be carefully structured for each of the purposes listed above, whilst the method of administering the assessment would also differ significantly between approaches.
In its simplest form, validity in terms of assessment can be defined as: does the assessment measure what the teacher intends it to measure? For instance, a set of 10 maths questions on multiplication and division of fractions may be used to gauge what pupils know and can do with this concept, with the intention of using peer assessment to generate an overall score that is reviewed by the teacher. One child may achieve 6 correct answers and get the more difficult questions incorrect, leading to the conclusion that the child requires further support with the concept of fractions. Yet the child may fully understand fractions and how to multiply and divide them, but have a weak foundation in managing multiplication and division with numbers in the tens or hundreds. For this child, the assessment lacks validity.
For an assessment to be reliable, it must be reproducible. That is, if what one child knows and can do with fractions is judged to meet age related expectations, the judgment made on other children in the class or across the year group must be consistently similar. Achieving rigour and objectivity in teacher assessment lies at the core of effective practice in and across schools. How well embedded practices are for achieving reliability often determines the effectiveness and success of assessment in shaping learning.
Valid and reliable assessment only becomes effective when it is used to advance learning. How the outcome of the assessment is communicated and used by the learner determines the value of assessment. For example, using the scenario above, should the teacher use outcomes from the 10-question assessment on fractions to undertake a series of mini-conferences with small groups of pupils, delving deeper into what they know and can do with fractions, it is likely that the teacher will gather more relevant evidence on learning and that pupils will receive specific and structured feedback on how they can develop their learning.
This approach to planning and implementing assessment is promoted at Durham University and across Wellington College China and Huili Education. We argue that it affords a sound foundation for effective practice. Indeed, it helps tailor assessment practice so that teachers have access to valuable evidence on what pupils know and can do, and allows for the most impactful and meaningful use of feedback to pupils.
This model of developing effective assessment practices at classroom or school level is one of four themes featured in the Inspiring Learning Conference, hosted by the Institute of Learning in Tianjin on the 26th and 27th of January and led by Professor Stuart Kime.