
Artificial Intelligence for Credit Risk Management

Introduction

The use of artificial intelligence (AI) in credit risk management has been a hot topic in the banking industry for some time. Following an industry survey on the application of AI, the Hong Kong Monetary Authority (HKMA) issued two guidance documents in November 2019: the High-level Principles on Artificial Intelligence and the Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions. These are the first clear statements from the Hong Kong SAR regulators about the use of AI by Authorized Institutions. Banks that have adopted, or plan to adopt, AI solutions will need to identify any deviations from the High-level Principles and familiarise themselves with the latest developments in the use of AI in the banking industry.

Machine Learning

AI is most easily defined as the simulation of human intelligence processes by machines. It is a fast-developing field covering a broad range of problem-solving processes executed by machines. Machine learning (ML) is a sub-field of AI that enables computers to learn rules directly from data. Advances in computational power have enabled the use of ML algorithms such as deep learning, random forests, gradient-boosting machines (e.g. XGBoost and LightGBM), and cluster analysis (e.g. k-means and DBSCAN).

Common tasks performed by ML algorithms include regression, classification, network analysis, and clustering – all of which have useful applications in credit risk management. Deloitte research has found that ML algorithms outperform traditional models in predictive power for various applications, such as predicting defaults. ML algorithms can also be used to analyse unstructured data, with applications that include text analysis. This creates further opportunities in credit risk management, such as modelling early warning signals based on media reports.
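As a minimal sketch of how one of these algorithms might be applied to default prediction, the following uses scikit-learn's gradient boosting on synthetic data; the feature names, data-generating process, and model settings are purely illustrative assumptions, not a recommended model specification.

```python
# A minimal sketch of ML-based default classification using
# scikit-learn on synthetic data. All features and parameters
# are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.normal(0.35, 0.15, n),   # hypothetical loan-to-value ratio
    rng.normal(0.30, 0.10, n),   # hypothetical debt-service ratio
    rng.integers(300, 850, n),   # hypothetical credit bureau score
])
# Synthetic default flag: higher leverage and lower score -> more risk
logit = -4 + 5 * X[:, 0] + 4 * X[:, 1] - 0.004 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
pd_scores = model.predict_proba(X_test)[:, 1]  # predicted default probabilities
print(f"AUC: {roc_auc_score(y_test, pd_scores):.3f}")
```

The hold-out AUC gives a simple, threshold-free measure of how well the model ranks defaulters above non-defaulters.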


Applications in Credit Risk Management

The use of AI in credit risk management is still nascent, but the combination of an exponential increase in the amount of available data and improving ML algorithms to digest those data has the potential to greatly impact the field. The use of ML in credit risk management can be illustrated through two interesting applications that are developing rapidly:

1.    Probability of Default

Traditional probability of default (PD) models rely heavily on logistic regression. Logistic regression models are relatively easy to understand and interpret; they have been market best practice for decades. However, traditional models are not capable of capturing complex relationships that may be present in the actual data. In other words, there is more predictive power in the data than traditional methods are able to extract.
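For concreteness, here is a minimal sketch of such a logistic-regression PD model, reusing the synthetic training split from the earlier sketch; the three drivers remain illustrative assumptions.

```python
# A minimal sketch of a traditional logistic-regression PD model.
# It reuses X_train/y_train from the gradient-boosting sketch above;
# the driver names (ltv, dsr, score) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Interpretability is the key appeal: each coefficient is a log-odds
# effect, so exp(coef) is the odds multiplier per unit of the driver.
for name, coef in zip(["ltv", "dsr", "score"], lr.coef_[0]):
    print(f"{name}: odds ratio per unit = {np.exp(coef):.3f}")
```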

A case study by Deloitte France on PD modelling found that models built using random forest, gradient boosting, and stacking methods all outperform logistic regression models in multiple performance measures. There is a case to be made that, under the right conditions, the adoption of ML methods in model estimation is likely to result in enhanced model accuracy.
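By way of illustration only (this sketch is not the Deloitte France study itself), such a benchmark might compare the candidate models on a common hold-out AUC:

```python
# A sketch of benchmarking logistic regression against a random
# forest and a stacked ensemble; data and settings are illustrative.
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "stacking": StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("gb", GradientBoostingClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}
for name, clf in candidates.items():  # reuses the earlier synthetic split
    clf.fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```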

However, it should be noted that gains in the accuracy of ML models often come at the cost of their explainability. ML models are commonly described as 'black boxes', as it is generally difficult to explain relationships between model inputs and outputs in an intuitive way. For this reason, the adoption of ML methods is often challenged by both credit professionals and regulators. This tension will likely remain a topic of discussion for years to come, with ML model explainability of keen interest to financial supervisors.
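One widely used family of techniques for opening up such models is SHAP (SHapley Additive exPlanations), which attributes each prediction to the input drivers. A minimal sketch, assuming the gradient-boosting model trained above and that the shap package is installed:

```python
# Attribute the gradient-boosting model's predictions to its inputs
# with SHAP; the feature names are the illustrative drivers above.
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Each row attributes one borrower's predicted risk across the
# drivers; the summary plot shows global driver importance.
shap.summary_plot(shap_values, X_test, feature_names=["ltv", "dsr", "score"])
```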

Recently, there has been a concerted effort to open up the black box of ML credit models. This is a vital step that will allow ML models to be put to broader use. For example, Deloitte France has developed a solution called "Zen Risk" that combines the strengths of traditional regression with ML algorithms to produce a model that is both accurate and auditable.

2.    Early Warning Signals

Early warning signals are commonly used in credit risk management to identify entities that are exposed to a higher risk of default before the default occurs. Traditional early warning systems usually require a large number of manually defined indicators and rely heavily on expert judgement.

AI excels at discovering patterns in large-volume, high-velocity data that can be used to generate credit default signals. With sufficient computational power, AI algorithms are capable of generating early warning signals using indicators from a wide range of sources, as well as improving the accuracy of those indicators.
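As a simplified illustration, an unsupervised anomaly detector can screen high-volume obligor indicators and route outliers to a watchlist; the indicator set and contamination rate below are assumptions for demonstration.

```python
# A sketch of screening high-volume obligor data for anomalies as a
# crude early warning signal; indicators and settings are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Hypothetical monthly indicators per obligor: credit-line utilisation,
# days past due, and deposit balance trend (standardised)
indicators = rng.normal(size=(10_000, 3))

iso = IsolationForest(contamination=0.01, random_state=0).fit(indicators)
flags = iso.predict(indicators)        # -1 marks anomalous obligors
watchlist = np.where(flags == -1)[0]
print(f"{len(watchlist)} obligors flagged for analyst review")
```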

It is also possible to analyse textual information using natural language processing (NLP). NLP is ubiquitous in our daily lives: translation apps, virtual assistants on smartphones, and intelligent customer service from retail banks are only a few examples. With NLP, a range of written media from social media posts to financial news can be captured and used in credit analysis, something that has traditionally been performed by human analysts.
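A toy sketch of the idea: classifying news headlines as adverse or neutral credit events using TF-IDF features and a linear model. The headlines, labels, and model choice are invented for illustration; a production system would use far richer data and models.

```python
# A toy sketch of NLP for credit-relevant text: classify headlines
# as adverse (1) or neutral (0). Data and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Issuer misses coupon payment on senior notes",
    "Company announces record quarterly revenue",
    "Auditor raises going-concern doubts",
    "Firm completes routine refinancing at lower rates",
]
labels = [1, 0, 1, 0]  # 1 = adverse credit news, 0 = neutral/positive

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(headlines, labels)
print(clf.predict(["Regulator investigates alleged accounting fraud"]))
```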

The exceptional performance of NLP can be seen in the "Deloitte Intelligent Bond" solution developed by Deloitte China, a bond credit risk management platform with real-time early warning and public opinion monitoring features. The tool underwent trial testing in the first 10 months of 2018 and achieved a 100% accuracy rate for early warnings in China's bond market.


Challenges

In a survey conducted by the HKMA in Q3 2019, the top five barriers affecting the adoption of AI by banks in Hong Kong SAR were: a lack of employees with AI expertise; insufficient data; the design ethics of AI; data privacy and security; and legal and compliance challenges.

For highly regulated areas like credit risk management, banks should pay special attention to the following challenges:

1.    Regulatory compliance

Regulators in Hong Kong SAR have placed great focus on risk management supervision. As a matter of course, financial institutions are now expected to have more transparent, auditable risk measurement frameworks and business decision-making processes. In the High-level Principles on Artificial Intelligence, the HKMA has emphasised Board accountability, in-house expertise, auditability, quality data, third-party management and data protection.

It remains to be seen whether the HKMA will allow the use of AI in credit risk modelling for regulatory reporting purposes. However, it is clear that AI models, which have a reputation as 'black boxes' that generate results that are difficult to interpret, need further enhancement in interpretability, transparency, and auditability to fulfil supervisory expectations.

In order to obtain HKMA approval, banks should also comply with the expectations the HKMA sets out in the Consumer Protection in respect of Use of Big Data Analytics and Artificial Intelligence by Authorized Institutions, which cover governance and accountability, fairness, transparency and disclosure, and data privacy and protection to ensure customer protection.

While the HKMA has not yet released a timeline for regulations about AI adoption, banks developing AI solutions should actively engage in consultation with the HKMA in order to meet regulatory requirements as they develop.

2.    Model Governance

Beyond the considerable time and effort required to train AI algorithms, black-box designs can lead to unexplainable, or even biased, decision making. Without proper model explainability, unpredictable AI models could pose greater model risk than ever before.

Adjustments to the risk management framework will be needed to address AI-associated risks. For example, model development assumptions and methodologies, model inputs, and control measures will all need to be revisited. Practices like model interpretation and dynamic recalibration are also necessary to maintain the health of AI models.
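As a concrete example of dynamic monitoring, the Population Stability Index (PSI) is a common measure of drift between the score distribution at model development and in production. The sketch below uses the conventional rule-of-thumb threshold, which is a market convention rather than a regulatory requirement.

```python
# A sketch of drift monitoring with the Population Stability Index.
# Scores and thresholds are illustrative; >0.25 is a common rule of
# thumb for "significant shift", not a regulatory value.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI = sum((a% - e%) * ln(a% / e%)) over quantile buckets of `expected`."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], np.min(actual))    # make edges cover both samples
    edges[-1] = max(edges[-1], np.max(actual))
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)          # avoid log(0) in sparse buckets
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
dev_scores = rng.beta(2, 8, 50_000)    # scores at model development
prod_scores = rng.beta(2, 7, 50_000)   # recent production scores
print(f"PSI = {psi(dev_scores, prod_scores):.3f} "
      f"(>0.25 often triggers a recalibration review)")
```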

3.    Data quality

Similar to traditional credit risk models, AI models are data-sensitive. As sufficiently large and comprehensive datasets are required to train AI models, banks should pay special attention to data standardisation, accuracy, validity, and integrity when processing large datasets across multiple platforms and systems. Corrupted data should be detected and rectified at an early stage.
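A minimal sketch of automated data-quality gates that could run before model training; the column names, validity ranges, and sentinel values are illustrative assumptions.

```python
# A sketch of pre-training data-quality checks with pandas; columns
# and validity rules are illustrative assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    return {
        "missing_rate": df.isna().mean().to_dict(),        # completeness
        "duplicate_rows": int(df.duplicated().sum()),      # integrity
        # out-of-range or missing values count as "bad" below (validity)
        "bad_ltv": int((~df["ltv"].between(0, 2)).sum()),
        "bad_score": int((~df["score"].between(300, 850)).sum()),
    }

loans = pd.DataFrame({
    "ltv": [0.6, 0.8, 5.0, None],    # 5.0 is out of range, None is missing
    "score": [700, 9999, 650, 720],  # 9999 is a corrupt sentinel value
})
print(quality_report(loans))
```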

Apart from data quality, banks should mitigate the risks involved in handling mass data, such as data privacy, cybersecurity, and computing capacity constraints. Moreover, banks should be aware of other relevant regulations covering, for example, cross-border data handling and cloud computing.


What's Next?

The use of AI will be a progressive feature of credit risk modelling. The HKMA survey cited above revealed that 89% of banks in Hong Kong SAR have adopted or plan to adopt AI applications, and Eddie Yue, Chief Executive of the HKMA, noted in his keynote speech at Hong Kong FinTech Week 2019 that the HKMA is also making progress in AI adoption and RegTech development. Banks in Hong Kong SAR may want to look to supervisory developments in other jurisdictions for insights. For example, the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector, published by the Monetary Authority of Singapore, may offer important lessons for financial institutions in Hong Kong SAR.

With the technology maturing fast and a better understanding of AI models emerging, we expect to see considerable growth in AI in financial services within the next three to five years. AI will impact areas like fraud detection, model validation, stress testing, and credit scoring. However, the governance frameworks for these applications are relatively immature due to the black-box nature of most AI algorithms. This will affect firms' model governance and will require high-quality data to keep AI models running properly.

In order to take advantage of the opportunities AI solutions present, and to gain the confidence of financial regulators, credit risk practitioners should pay close attention to how AI is being implemented within their own organisation and maintain an open dialogue with supervisors.
