Rather than strive for speed in disseminating data on a near real-time basis, it would be better to tarry and provide final numbers, even if lags are involved.
By Madan Sabnavis
Economic data is probably the most important piece of information there is, as it forms the basis for formulating policy. It is used for making global comparisons, and benchmarks are set based on the data supplied by official sources. This is what distinguishes it from private estimates, which appeal to the closed user group involved but not to the outside world. Global institutions like the IMF and the World Bank also make their own calculations, but their estimates rest on what the local authorities supply, in fixed formats. This is why credibility is paramount when it comes to the dissemination of data.
India has always taken pride in its data systems, notwithstanding a very complex economy. But now, the datasets on GDP and employment have been mired in controversy because of a large degree of opacity in the approaches used to deal with that complexity.
RBI data is by far the most robust, as the central bank has tied up its sources systematically for several decades now. Data on bank credit or deposits never changes, leaving no room for speculation. The presentation of accounts by banks has been homogenised to remove any ambiguity in definitions, and hence there are no revisions in the data. The monetary and banking data is probably the best in the country and is respected everywhere. But this is possible because the data flows from a regulated set of entities that must comply with prescribed structures and that are, in number, quite small relative to the size of the economy.
The same is not straightforward for real-sector data. The IIP, WPI and CPI series are revised regularly, and this creates problems, especially for monetary policy, which is based on these numbers. When the variation is significant, it can send users the wrong signal. The problem arises mainly because the source data is not readily available.
Agricultural prices are collected from mandis or retail touch-points, where the data may not be final and can change after release. Prices of manufactured goods come from industry associations, where data may be revised periodically, giving a different picture. At times, prices for certain commodities are simply unavailable, which leads to sudden bumps in the series when the relevant agency finally supplies them. While the methodology is stated upfront, the delivery has several lacunae, and provisional numbers undergo substantial changes.
Trade data, too, depends on how the reporting is done. The RBI's balance-of-payments data is straightforward, as it tracks the entry and exit of forex from the system in a given period; the data from the Directorate General of Commercial Intelligence and Statistics, however, is subject to changes, and at times the conclusions drawn from the two can differ.
GDP, however, has become the most controversial piece of information because it is the leading indicator of any economy. The change in methodology with the shift to the 2011-12 base year, which brought India in line with the global practice of measuring GDP at market prices instead of at factor cost, has become a political game. GDP calculations require the use of several proxies since the underlying data is not available to the CSO. Agriculture is probably the most complex case.
The marketable surplus that can be accounted for varies between 60% and 80% of total output, making actual output hard to fathom, since a lot of the produce goes towards self-consumption or seed. Moreover, the marketable surplus never gets recorded fully in the mandis, and the arrivals data on AGMARKNET could be as low as 20-30% of the final number presented in official statistics. The output figure, therefore, is probably a guess.
When it comes to organised manufacturing, or services covering transport, trade and construction, among others, the reliance is on MCA data covering 5 lakh companies. While this number is surely impressive, superseding the earlier approach of using data from about 2,500 companies, it is hard to accept these numbers at face value. Most commercial corporate databases cover around 20,000-30,000 companies, but data is available on a continuous basis for no more than 3,000-5,000 of them. For quarterly results, the availability of information is thinner still.
This raises the question of whether the MCA database is really updated for all these companies, as this can drastically change the value-added numbers used for the Q4 estimates presented on May 31. One must remember that both the output numbers and prices are susceptible to change over time, which will cause the GDP growth numbers to vary. The answer is that the figure is, at best, a fair guess that could turn out to be right. Also, since the IIP is used as a proxy for unorganised manufacturing and can change significantly even after three months, are we getting the true picture? When the GDP number changes, targeted ratios like the current account and fiscal deficits, which are expressed as shares of GDP, change with it, improving mechanically when the revision is upward even though the underlying deficit is unchanged.
Employment data is tricky because the concept itself is amorphous. There is no comprehensive database on employment, and most private-source data, as well as NSS data, is based on surveys. Can a survey tell the right story when no survey can cover the entire country? The sampling frame could be outdated and the responses vague. Also, in the unorganised sector, no one wants to report the right headcount, as it can trigger other obligations such as provident fund enrolment. To bring more transparency to employment numbers, the focus should initially be on the organised sector, where every entity (like the 5 lakh companies covered by the MCA) would report its headcount against a pre-defined concept. For the unorganised sector, including agriculture, the numbers will always remain a grey area, with double-counting involved, especially when unskilled labour works on multiple projects in a year. This holds for the construction sector in particular.
Two conclusions can be drawn from the above discussion. First, rather than strive for speed in disseminating data on a near real-time basis, it would be better to tarry and provide final numbers, even if lags are involved. This would avoid the embarrassment of changing the discourse or commentary when reacting to revised numbers. This became apparent when the revised GDP growth numbers for FY17 came in higher; there was no such revisionist commentary when the first set of numbers was released and every analyst attributed the lower performance to demonetisation. Second, data should not be used as a tool for political debate, as this raises doubts that are quite unnecessary. It could otherwise become an own-goal in the international arena, where our data would not be taken seriously.
(The author is Chief Economist, CARE Ratings. Views are personal)