Driven by the late Dr Samuel Paul, the Public Affairs Centre (PAC), Bangalore, has done a lot of good work, such as citizen monitoring of the delivery of public goods and services. Governance is a widely used and abused expression. Whatever the definition of governance, this earlier work has led to a natural extension. PAC has just (March 2016) produced a public affairs index (PAI), an attempt to rank states on governance. A large chunk of governance has always been at the state level, and even lower down, at the local body level. With increased emphasis on decentralisation, not just fiscal devolution, more work should indeed focus on states and local bodies. Let’s first understand what PAC has done in constructing the PAI. First, there are 10 themes: (1) essential infrastructure; (2) support to human development; (3) social protection; (4) children and women; (5) crime, law and order; (6) delivery of justice; (7) environment; (8) transparency and accountability; (9) fiscal management; and (10) economic freedom. My intention is not to critique PAC/PAI. Instead, I wish to highlight the problems anyone who undertakes such a ranking confronts. This isn’t a performance ranking of states. It is a public affairs index, a governance ranking. As soon as it is projected as a governance ranking and not a performance ranking, there is an implicit value judgment about what we expect a government to do and in what form.
This will become clearer once we zero in on “focus subjects”, the sub-categories under each of those 10 themes. In addition, governance is about a process, about what goes into the delivery of public goods and services. What we end up measuring is invariably outcomes. Finally, to get an index, we need weights and an aggregation formula that produces an overall index using those weights. From the 10 themes to the PAI, PAC’s choice is simple: equal weights and the arithmetic mean. Weighting is inherently subjective, whatever one does. Equal weights have the virtue of being simple to comprehend. Hence, no complaints on that score. As I said, the sub-categories under the 10 themes are “focus subjects”. There are 25 of these focus subjects: (1) power; (2) water; (3) roads and communication; (4) housing; (5) education; (6) health; (7) public distribution system; (8) social justice and empowerment; (9) minority welfare; (10) employment; (11) child; (12) women; (13) violent crimes; (14) atrocities; (15) policing; (16) pendency of cases; (17) vacancies of presiding officers; (18) pollution and environmental violations; (19) forest cover; (20) renewable energy; (21) transparency; (22) public accountability; (23) FRBM indicators; (24) resource generation and development expenditure; and (25) economic freedom. I don’t need to specify which focus subject falls under which theme. That’s obvious enough. Under each theme, equal weights are assigned to the relevant focus subjects and the arithmetic mean is used once again.
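To make concrete what equal weights and the arithmetic mean imply, here is a minimal sketch of the aggregation, not PAC’s actual computation: the theme groupings shown are only a subset, and the focus-subject scores are hypothetical numbers assumed to be already normalised to a 0–1 scale.

```python
# Minimal sketch of equal-weight, arithmetic-mean aggregation.
# Hypothetical, normalised (0-1) focus-subject scores for one state;
# the numbers and the groupings below are illustrative, not PAC's data.
focus_scores_by_theme = {
    "essential infrastructure": [0.72, 0.65, 0.58, 0.61],  # e.g. power, water, roads, housing
    "support to human development": [0.55, 0.60],          # e.g. education, health
    "fiscal management": [0.48, 0.52],                      # e.g. FRBM indicators, resources
}

# Step 1: each theme score is the simple average of its focus-subject scores.
theme_scores = {
    theme: sum(scores) / len(scores)
    for theme, scores in focus_scores_by_theme.items()
}

# Step 2: the overall index is the simple average of the theme scores
# (with all 10 themes, each theme would carry a weight of 1/10).
overall_index = sum(theme_scores.values()) / len(theme_scores)

print(theme_scores)
print(overall_index)
```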
To illustrate the problem, we need to go one layer further down, to the indicators under each focus subject. There are 68 of these, with the choice often driven by the availability of data, or its lack. Let’s take the focus subject “education”, under the theme “support to human development”, as an example. There are four associated indicators: the educational development index, ASER learning levels, the number of higher educational colleges per 1 lakh population and educational expenditure as a percentage of GSDP. (All data are suitably normalised.) First, other indicators are also possible in the area of education. Why only these? Second, if these indicators are used, are they only a function of what government does? Isn’t there an implicit value judgment about the role of government? If these indicators aren’t influenced only by what government does, should this be called a governance ranking or a performance ranking? Third, if one has used equal weights for aggregating across themes and focus subjects, why deviate from that principle and use unequal weights here? The educational development index has a weight of 30%, ASER learning levels 40%, the number of higher educational colleges 15% and educational expenditure 15%. As I said, the intention isn’t to critique PAC/PAI, but to point out the unavoidable subjectivity in any such exercise.
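For the education example, the unequal weighting amounts to a simple weighted sum. The sketch below uses the weights cited above (30/40/15/15); the indicator values themselves are hypothetical, assumed already normalised to a 0–1 scale.

```python
# Weighted aggregation for the "education" focus subject, using the
# 30/40/15/15 weights mentioned in the text. Indicator values are
# hypothetical and assumed to be normalised to a 0-1 scale.
education_indicators = {
    "educational development index": (0.62, 0.30),
    "ASER learning levels": (0.55, 0.40),
    "higher educational colleges per 1 lakh population": (0.40, 0.15),
    "educational expenditure as % of GSDP": (0.35, 0.15),
}

# Weighted sum: score = sum(value * weight); the weights add up to 1.
# On these made-up values: 0.62*0.30 + 0.55*0.40 + 0.40*0.15 + 0.35*0.15, roughly 0.52.
education_score = sum(value * weight for value, weight in education_indicators.values())
print(education_score)
```

Change the weights to 25% each and the same data produce a different score, and potentially a different inter-state ordering, which is the subjectivity being pointed out.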
With these qualifications, in the overall ranking among large states, the best states are Kerala, Tamil Nadu and Karnataka, in that order. The worst states are Bihar, Jharkhand and Odisha, in that order. People love rankings. Often, even if the authors don’t intend it, reportage suggests a degree of precision that such rankings don’t possess. A little bit of tinkering with the options might place Tamil Nadu ahead of Kerala, or Jharkhand below Bihar. If there is little difference in scores, there is little to choose between one state and another. But sometimes, differences in scores are robust and immune to tinkering. Whatever one does, and however one measures it, governance in Kerala will be superior to that in Bihar. In using such rankings, one tends to focus on inter-state comparisons. An alternative, if such rankings are done periodically by the same organisation, is to benchmark a state’s improvement over time. That is, one doesn’t use the absolute level of the index, but its increment. In attempting to influence change, such temporal tracking of a state seems to be more acceptable than comparing states with different contexts and backgrounds. The latter rankles, the former doesn’t. When a colleague and I did such rankings earlier, in the first version, we said that half the states are below average. Our methodology was roundly criticised. In subsequent versions, we said that half the states are above average. The methodology was accepted.
The author is Member, NITI Aayog. Views are personal