Imagine a chef who has leading-edge cookware at his disposal, top-of-the-line appliances, and virtually limitless access to the most exotic and exciting ingredients imaginable. You would think he has everything he needs to cook the most delicious veg biryani one can ever eat. Except he's missing one key element, an element that is absolutely essential if he's to cook anything worth eating: the knowledge of how to use all those ingredients, appliances, and cooking tools. So far, PSU banks have been a lot like me in the kitchen when it comes to using data: pretty clueless. They have no idea how to use the loan data on their corporate customers, accumulated over the years, to value their non-performing assets (NPAs). Yet they have all the ingredients, appliances, and cooking tools to put a value on their NPAs, because the vast amount of data about loan performance is already with them. Many banks have no problem prepping the kitchen, but they don't know the process or the proportion of ingredients in the recipe. Worse, they can't even tell whether basmati rice or brown rice goes into a lip-smacking biryani.
There are two main paradigms for valuing NPAs: model-driven and data-driven. A model-driven framework for valuing NPAs is the structural model proposed by economist Robert Cox Merton in his Nobel-prize-winning work. In this model, a firm is viewed as solvent as long as the value of its assets exceeds the value of its liabilities. The value of the firm's assets is not directly observable, and the structural model helps infer it. The Merton model does this by linking the value of the assets to the market cap of the firm and the face value of its debt, both of which are observable. For instance, the market cap of Deccan Chronicle is Rs 42.63 crore, and its long-term debt is Rs 580.25 crore. The key insight of the Merton framework is that the equity of the firm can be viewed as a call option on the total assets of the firm, with a strike price equal to the face value of its liabilities. This allows banks to infer the value of the assets, which in the case of Deccan Chronicle comes to approximately Rs 255.94 crore using the model. Coincidentally, this comes pretty close to the worth of assets established by the Enforcement Directorate, but that needn't always be the case.
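For the technically inclined, the Merton calculation can be sketched in a few lines of Python. Equity is priced as a call option on the firm's assets using the Black-Scholes formula, and the unobservable asset value is found by inverting that formula against the observed market cap. The Deccan Chronicle figures are from above; the risk-free rate, asset volatility, and one-year horizon are illustrative assumptions, not the calibrated values a bank would actually use, so the output will not reproduce the Rs 255.94 crore figure.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def equity_value(V, D, r, sigma, T):
    # Merton: equity is a European call on firm assets V,
    # with strike equal to the face value of debt D (Black-Scholes)
    d1 = (math.log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return V * norm_cdf(d1) - D * math.exp(-r * T) * norm_cdf(d2)

def implied_asset_value(E, D, r, sigma, T):
    # Invert the call-option formula by bisection: find V such that
    # the model equity value equals the observed market cap E.
    # The equity value is monotonically increasing in V.
    lo, hi = 1e-6, 2.0 * (E + D)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if equity_value(mid, D, r, sigma, T) < E:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Figures from the article (Rs crore); r, sigma, T are illustrative assumptions
E, D = 42.63, 580.25
V = implied_asset_value(E, D, r=0.07, sigma=0.60, T=1.0)
recovery_per_rupee = min(V, D) / D  # model-implied recovery on each rupee of debt
```

In practice the asset volatility is itself unobservable and must be solved for jointly with the asset value from the equity volatility; the single-equation bisection above is the simplest possible sketch of the idea.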
It is well known that the recovery value of loans implied by this model over-estimates the true recovery value over short horizons. For instance, the model-implied recovery value of each rupee of Deccan Chronicle's loans is roughly 36.7 paisa; the actual recovery may well be less. A crucial step in NPA valuation is therefore creating a mapping between these theoretical recovery values and the actual recovery values historically observed by the PSU bank. In this model-driven approach, the loan value is largely a function of market cap and equity volatility. A natural outcome of this dependence is that loan losses are cyclical: in this model they correlate with the business cycle, rising during periods of overall economic distress and falling during expansions.
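That mapping step can be as simple as fitting a line between what the model said a loan would recover and what the bank actually recovered. The sketch below uses a least-squares fit on a small, entirely hypothetical history of model-implied versus realised recoveries (paisa per rupee); the numbers are made up for illustration, and a real exercise would use the bank's own recovery records.

```python
def calibrate(model_recov, actual_recov):
    # Least-squares line: actual ~ a + b * model, fit on the bank's history
    n = len(model_recov)
    mx = sum(model_recov) / n
    my = sum(actual_recov) / n
    b = sum((x - mx) * (y - my) for x, y in zip(model_recov, actual_recov)) \
        / sum((x - mx) ** 2 for x in model_recov)
    a = my - b * mx
    return a, b

# Hypothetical history: model-implied vs realised recovery per rupee
model_hist  = [0.45, 0.30, 0.60, 0.25, 0.50]
actual_hist = [0.35, 0.22, 0.48, 0.18, 0.40]
a, b = calibrate(model_hist, actual_hist)

# Adjust the article's 36.7-paisa model figure to a historically grounded estimate
adjusted = a + b * 0.367
```

Because realised recoveries in the hypothetical history run consistently below the model's figures, the adjusted estimate comes out below 36.7 paisa, which is exactly the correction the mapping is meant to deliver.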
Valuing NPAs using a model-driven framework is like cooking veg biryani from a recipe book that contains the correct theory of the cooking process. Getting the proportions of the ingredients and the process right is awfully difficult if somebody like me knows only the theory of cooking biryani but has never done it in practice. And like biryani, which is cooked in one big vessel where the tastes, flavours and aromas of the ingredients interact to make the end-product delectable, you put all the inputs, including their correlations, into the NPA valuation engine and make sure the model is well calibrated for the valuation to be useful. Taking the analogy further, to get the perfect taste you need not just the right proportion of ingredients; other factors matter too, like the intensity of the flame and the right amount of water to cook that particular type of rice to perfection. No recipe book can account for all such permutations and combinations, and therefore cookbook recipes have their limitations. Similarly, each corporate credit profile is different and has its own unique characteristics that no model can fully capture. Therefore, it would be a bit utopian to think of getting the 'true' valuation of an NPA from a structural model, assuming such a thing exists. Yet a recipe-guided edible, if not lip-smacking, biryani is better than doing nothing with the ingredients, appliances, and cooking tools.
There is another way to become a master chef, and that is by being a food connoisseur. You savour the dish and try to infer the ingredients that went into it, even if you do not know the cooking process and the proportion of ingredients, as you would in a model-driven framework. Data-driven valuation is a new way of thinking about NPAs, enabled by machine learning. In this paradigm, algorithms put on a thinking cap to spot connections and correlations, identifying patterns and non-linear relationships that are obscure to the human mind. In a model-driven approach, the model drives the results. In a data-driven approach, past results, what the NPAs eventually turned out to be worth, drive the model. In machine learning, the algorithm learns from data without relying on rules-based programming. PSU banks can let machine-learning algorithms savour the loan recovery data they have gathered over the years to identify an approximate formula, one that construes relationships from past data to value current NPAs. Again, this has its limitations, the most important being the assumption that the past is a good proxy for the future. That being said, no approach, model-driven or data-driven, is perfect. As they say, all models are wrong, but some are useful. The point is to make them useful.
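The connoisseur's approach can be illustrated with one of the simplest learning algorithms there is: k-nearest neighbours. To value a new NPA, look up the k past loans that most resemble it and average what those loans actually recovered. The loan features and the recovery history below are hypothetical placeholders; a bank would use its own borrower characteristics and realised recoveries, and almost certainly a richer algorithm.

```python
def knn_predict(history, features, k=3):
    # history: list of (feature_vector, realised recovery per rupee) pairs
    # from past NPAs. Predict recovery for a new loan as the mean recovery
    # of its k nearest neighbours in feature space (Euclidean distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda rec: dist(rec[0], features))[:k]
    return sum(recovery for _, recovery in nearest) / k

# Hypothetical past loans: (leverage ratio, interest cover) -> recovery per rupee
history = [
    ((4.0, 1.2), 0.25),
    ((2.5, 2.8), 0.55),
    ((6.1, 0.8), 0.10),
    ((3.2, 2.0), 0.40),
    ((5.0, 1.0), 0.18),
]

# Value a new NPA with leverage 3.8 and interest cover 1.5
estimate = knn_predict(history, features=(3.8, 1.5), k=3)
```

Notice that no pricing theory appears anywhere in the code: the "formula" is whatever the historical data implies, which is precisely the difference between the data-driven and the model-driven paradigms.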