The tyranny of metrics: Why we are learning nothing from the learning outcome data

Updated: Feb 28, 2019 3:43 AM

Experts will argue NAS and ASER cannot be compared because of significant differences in methodology and sampling.

By Seema Bansal & Shoikat Roy

The principal secretary of school education in a central Indian state faces a conundrum. His elation at his state being placed in the top quartile in India in MHRD’s annual National Achievement Survey (NAS) has given way to despondency at finding it at rock bottom in the recent ASER 2018 results. The secretary also has the state’s own learning data, collected from a random sample of 1 lakh students in government schools, which differs from both NAS and ASER. And a file of learning outcome data for Aspirational Districts, collected by the NITI Aayog, sits as yet unread on his desk; his fear that it will paint a fourth picture is not unfounded.

It is difficult for the secretary to decide which set of learning data is worth spending time on. NAS scores will determine funding under SSA, the Prime Minister will personally review progress in the Aspirational Districts, while for the media and the general public ASER is the flavour of the month. Beyond these, the secretary also receives other rankings: NITI Aayog’s SEQI, MHRD’s PGI and the Aspirational Districts delta rankings, among others. Each takes a similar but distinct view of what matters in education and throws up a different set of ranks and priorities for his state.

How did we get here? A decade ago, the education sector witnessed a shift towards measuring learning outcomes in a standardised manner. At the time, the push to collect learning data was essential to understand and confront the depth of the learning crisis in India. But the drive to assess and measure learning outcomes has today been taken to such an extreme that it renders the entire exercise futile. While the efforts are well-intentioned, they are creating more confusion than clarity. In the NAS data, for example, states like Kerala, Andhra Pradesh and Delhi are ranked near the bottom, while Jharkhand and Rajasthan are closer to the top. ASER rankings show the polar opposite in many cases. NAS posits that the average grade 5 student scored 60-70% on a graded paper in Maths and Language, whereas ASER claims that in many parts of India a majority of students in grade 5 are not even familiar with basic grade 2 competencies like subtraction.

One north Indian state, for example, claims that 70% of its government school students are at grade level according to its internal data, while ASER indicates that half its students are at least two to three grade levels behind in every subject. The ASER data itself throws up inexplicable results that limit decision-making on its basis. Several states have shown swings in successive years, in the same grades, so extreme as to seem nearly impossible. In the absence of disaggregated and transparent data to explore the potential causes, it is difficult to arrive at a nuanced action agenda.

Experts will argue NAS and ASER cannot be compared because of significant differences in methodology and sampling. It is possible NAS is skewed towards brighter students attending school, but it is unacceptable that multiple surveys lead to antithetical views of overall educational outcomes in a state. As a consequence of conflicting measurements and the lack of quality data, the objective of index-based measurement systems, which is to prioritise and identify weaknesses for improvement, is largely a lost cause. Today, if any bandwidth is spent in states, it is on wondering what really is the truth. Data-based learning assessments and rankings that should have been a clarion call to action for states have degraded into a source of frustration and cynicism, as well as a target of ridicule.

This is magnified by the significant transaction costs such exercises impose on the system. Internal studies estimate that teachers spend 10-20 days of critical teaching time conducting assessments and uploading assessment data, not to mention the costs borne by the administration in collating, analysing and reconciling it. It is unacceptable that this burden is not offset by meaningful data-based insights and a clear action agenda to improve outcomes.

So what is the answer? It is critical that the Centre, states and external organisations adopt a collaborative approach to measurement. All bodies must reconcile their definitions of learning outcome metrics and the corresponding measurement tools and processes. While external approaches must be encouraged, the ecosystem needs to arrive at a common answer on which learning outcomes are to be measured and for what purposes (incentivisation, teacher training, accountability initiatives), and to develop common approaches and best practices for accurate data collection and validation. Unless this is done immediately, the purpose of measuring learning outcomes will be completely lost.

Bansal is director, Social Impact Practice, BCG; Roy is an expert consultant in the Education Practice
