How Key-Stage Examinations Can Improve Education Quality
By The EDge Editorial Team
Nov 28, 2020
The National Education Policy (NEP) 2020, approved by the Union Cabinet in July 2020, lays down an ambitious plan for transforming school education. It highlights the need for holding key-stage examinations in Grades 3, 5, and 8 with a focus on learning outcomes. CSF interviewed Amit Kaushik, CEO of the Australian Council for Educational Research (ACER), India, to better understand how these key-stage examinations can help improve the quality of education and what they entail institutionally.
CSF: NEP 2020 recommends holding key-stage assessments in Grades 3, 5 and 8. How will they be different from annual exams held for these grades? Will they help improve learning outcomes?
Amit Kaushik: The purpose of the key-stage assessments outlined in NEP 2020 is to track progress over time, rather than just at the end of school when a student transitions into skill development programmes or tertiary education. These assessments are expected to provide data for monitoring the health of the school system and to serve as an input into measures to improve the teaching-learning process and quality. In this sense, they will be similar to exercises undertaken in other countries, such as the National Assessment Program – Literacy and Numeracy (NAPLAN) in Australia, which helps parents, teachers, schools, education authorities, governments, and the community understand whether students are developing the literacy and numeracy skills they require as a foundation for other learning. To borrow a term from medicine, one should think of these assessments as diagnostic tools that help determine the line of treatment to be followed – they will help diagnose gaps and shortcomings in the school education system so that corrective action may be taken in time.
CSF: The NEP 2020 also outlines that these exams will be low-stakes. Given that exams and assessments in India have traditionally always been perceived as high-stakes, what would low-stake exams imply? How are teachers, parents, and students to understand this new model of assessments?
Amit Kaushik: This is a very good point – implementation of these low-stakes assessments at key stages will need a significant reorientation of stakeholders so that they appreciate their objectives and don’t treat them as high-stakes examinations. This means educating teachers, parents, and students so that they understand that the outcome will not impact them personally but will help to make things better for everyone. It also implies that one needs to be judicious in selecting the purposes for which the data from these assessments is used – for instance, linking performance in these assessments to, say, funding from the Central government under centrally sponsored schemes would be a mistake, as that would put teachers and administrators under pressure and perhaps lead to distortions in their administration. Any such distortions would make the assessment data unreliable for its intended purpose of improving educational quality.
On the other hand, one also needs to be careful in ensuring that stakeholders don’t go to the other extreme and take these assessments lightly – such a situation would also lead to unreliable data. It will therefore be important to maintain a fine balance.
CSF: What stakeholders will be involved in implementing key-stage assessments? What are some of the key aspects / potential pitfalls to bear in mind while designing a robust system for its implementation?
Amit Kaushik: Stakeholders in key-stage assessments would include educational administrators, academics, teachers, students, and parents, aside from national and state institutions like CBSE (which is expected to house the national assessment centre, PARAKH, in the initial stages), NCERT, SCERTs, etc. In the initial phases, some external technical assistance may also be required from organisations with international experience in the field so that global best practices can be incorporated.

At ACER, we view high-quality systemic assessments as a process involving several key components: setting the objectives of the assessment, identifying key personnel and building their capacities, developing technical standards and an assessment framework, crafting high-quality cognitive assessment items, test design, the sampling framework, linguistic quality control, standardised field operations, data analysis, and reporting. This process is a continuum that needs each of these components to work well in order to deliver valid and reliable data. When thinking about the implementation of these key-stage assessments, it will be important to create the necessary capacities within institutions – nationally, and at state and school levels – to implement these distinct components, undertake regular assessments, and analyse and report the data. Most importantly, it will be critical to create a culture of using the learning outcome data from these assessments – the assessments themselves are not the end goal; the end goal is to utilise the data to make decisions about the steps needed to improve quality.
CSF: What do you think will be 2-3 of the biggest challenges in the successful implementation of census assessments?
Amit Kaushik: Census assessments, by their very nature, generate massive amounts of data, and collecting, cleaning, and processing that data can be challenging. In the absence of adequate systemic capacity to handle such large amounts of data, errors and inconsistencies can arise, leaving the conclusions drawn from the data vulnerable and open to question. In many cases, assessment of a properly drawn representative sample can provide information about the larger population that is as accurate as a full-fledged census assessment, but the choice between the two depends on the situation and the policy objectives of the assessment. Given the size of our country, undertaking census assessments is likely to be a Herculean exercise requiring large numbers of technically trained staff to collect, process, and report the data. Building the required capacity to do so will be a key task.
CSF: What are your thoughts on the data from these assessments? How can this data be used to improve student learning? Is there a way to ensure reliable gathering of this data?
Amit Kaushik: Data gathered from such assessments must be robust so that any conclusions drawn from it are valid and reliable. This implies the need to have in place systems and processes that enable uniformity and standardisation, both in the way assessments are administered, and in how data arising from them is processed and reported. The use of technology can help to some extent in gathering reliable data, but technology is not the solution to all issues – a badly designed assessment delivered online or on a handheld device is still a bad assessment. Building capacities at all levels will remain essential, so that there is widespread appreciation of the objectives of assessments and the manner in which to use the data. It will also be important to communicate the outcomes of such assessments to the wider group of stakeholders, including the public at large. Data from these assessments can be used to inform policy and improve classroom practice, leading eventually to an improvement in the quality of learning.
At the same time, it is important to remember that assessment data is only one input into understanding the overall progress made by a student – the progress they make is measured not only by their performance in an assessment administered at a point in time, but also by other indices such as socio-emotional well-being, performance in sports and co-curricular activities, interest in music or art, and many others. It is therefore necessary to relate assessment data to other indicators in order to draw relevant conclusions about school and student performance.