Introduction

Over the past three decades, Australia has developed an increasingly advanced national system of student assessments, the results of which have been used to identify areas of growth, stagnation or decline in student learning. For the most part, trends in different standardised assessments have been considered in isolation. By examining literacy and numeracy results across assessments, we can better understand the performance of Australian students over time; we can pinpoint areas of national strength and weakness and improve Australia’s educational outcomes.

This report considers the four National Assessment Program (NAP) assessments that measure literacy and numeracy[1]: the National Assessment Program – Literacy and Numeracy (NAPLAN), Progress in International Reading Literacy Study (PIRLS), Trends in International Mathematics and Science Study (TIMSS) and Programme for International Student Assessment (PISA). NAPLAN is conducted by the Australian Curriculum, Assessment and Reporting Authority (ACARA) and assesses how students are progressing over time, while monitoring system-level and school-level performance. The other three assessments – PIRLS, TIMSS and PISA – are international programs that the Australian Government chooses to participate in, to benchmark the learning outcomes of Australian students against their peers in countries around the world.

First, this report examines the purpose of the National Assessment Program. It finds that, over time, many purposes have been ascribed to the NAP, and to NAPLAN in particular, and suggests that this may have created confusion about, and undermined confidence in, whether the assessments are fit for purpose.

Second, this report examines why PISA shows significant declines in both reading and mathematics achievement, while NAPLAN, PIRLS and TIMSS show either growth or stability. Drawing on preliminary analysis conducted by the Australian Council for Educational Research (ACER) on behalf of AERO, this report finds that no single cause can be definitively identified.

Finally, the report explores how the NAP assessments can tell us much about effective practice and policy; they can help to detect ‘what works’ in education. While there are some limitations, analysis of NAPLAN, PISA, PIRLS and TIMSS trends can help to identify policies and practices that may have contributed to improvements over time. These limitations could be addressed, in part, through data linkages and the creation of a central student data set, and by surveying students and teachers when NAPLAN is conducted to provide richer detail on the classroom practices and school approaches being used.

The National Assessment Program is an important investment made by all Australian governments. It is an asset that helps measure the health of Australian school education. This report considers how the usefulness of the NAP can be enhanced to improve our evidence base about the successes and challenges of the Australian school system. It is timely to do so, given that Australia’s suite of national assessments has been in place for more than a decade and the Measurement Framework for Schooling in Australia (the Framework), which is used to measure school system performance, is currently under review.

AERO Benchmarking performance report

[1] The broader NAP includes the yearly NAPLAN assessments, the 3-yearly sample assessments in science literacy, civics and citizenship, and information and communication technology (ICT) literacy, and the international sample assessments PIRLS, TIMSS and PISA.


Keywords: educational datasets, data analysis, student performance, student progress