FundsTech Spring 2022

Artificial Intelligence: For explainable AI, look in the white box

The rise in AI investing parallels a growing need to understand how computers make decisions, writes Lynn Strongin Dodds.

Computer-driven models are nothing new, but as they have grown in sophistication, so too have calls for a brighter light to be shone on their inner workings. This does not mean the end of the ‘black box’ – a system viewed purely in terms of input and output – but greater usage of tools such as explainable artificial intelligence (known in the tech world as XAI) to justify the rationale behind the outcomes spewed out by AI and machine learning.

“There is a shift away from the black box but not completely,” says Steve Cracknell, CEO of Insig AI, a technology and data science firm. “The difference today is that AI and machine learning are so much more accessible. 

“In the past, access to AI and machine learning was too expensive and required specialist skills to write highly bespoke code. But today, with advances in the cloud and the proliferation of open-source libraries, people no longer need to buy hardware or write complex algorithms from scratch. They can lease the computing power, and code packages are more readily available in an open-source world.”

As Julien Molez, group innovation data and AI leader at Societe Generale, puts it: “I do not think we will see a strong and definitive transition from a black to white box, but explainable AI algorithmic techniques can help the model developer better understand what the model is doing and why.”

In the past, asset managers were also reluctant to open up their confidential algos and intellectual property for fear of losing their competitive advantage, according to John Pizzi, senior director, enterprise strategy, capital markets at fintech group FIS. 

However, today the pressure is on companies across the board to be clear about who trains their AI systems, what data was used and, just as importantly, what went into their algorithms’ recommendations. 

“In simpler models, it was much easier to extrapolate information, but they have become more complex because there is so much more data to traverse and people want to understand what is driving the decisions,” says Pizzi.

The result is that explainable AI has increasingly become an important component in institutional investing strategies as asset managers turn to technology to navigate the low-yielding investment landscape.

AI budget increase
Traditionally, AI was the preserve of top-tier managers and hedge funds that had the resources. As data became more accessible and cheaper, a wider range of fund managers have leveraged AI and machine learning to derive meaningful insights that can be used to generate outperformance. 

In fact, a recent survey from Deloitte Centre for Financial Services found that 85% of the 400 senior investment executives polled were using AI-based solutions in the pre-investment stage to help them generate returns. In addition, 71% planned to increase their budget for these alpha-generating technologies over the next 12-18 months.

AI is also being used to screen and monitor risks more effectively. This has become more significant since the pandemic, when unprecedented levels of volatility and unpredictability rattled markets. Indeed, the ability to predict and manage risks took on new meaning for many managers.

Explainable AI is seen as part of a more responsible approach to the use of the technology. At its heart, it is a set of processes and methods that allow people to comprehend and trust the results and output created by machine learning and AI-driven algorithms. It is an evolving system that is used to describe an AI model, its expected impact and potential biases, and to help characterise model accuracy, fairness, transparency and outcomes. This enables data scientists to make any necessary adjustments to the model. 

“There are three different reasons that explainable AI is important,” says Roger Burkhardt, capital markets chief technology officer at Broadridge Financial Solutions.  “The first is that it is critical for business adoption to explain a recommended action. The second is that it also helps business and data science improve the quality of the model, and third is it avoids inadvertently creating models with harmful bias for the consumer.”

That last point – the ethical side of AI – has grabbed regulators’ attention in recent times. A report by the United Nations Educational, Scientific and Cultural Organization (Unesco), ‘Towards an Ethics of Artificial Intelligence’, highlighted the dangers of social scoring, whereby certain AI systems can be susceptible to bias and misinterpretation. 

The systems in question tend to judge people based on their behaviour and categorise them into clusters, such as by ethnicity or gender.

These are some of the many issues being addressed by the European Union’s draft Artificial Intelligence Act, which proposes major changes in the areas of social scoring, biometric recognition systems and high-risk applications. More generally, the Act, along with the revised Coordinated Plan, recommends a regulatory framework for harmonised rules for the development, market placement and use of AI systems, using a proportionate, risk-based approach. Obligations will be introduced for risk management, data quality, technical documentation, human oversight, transparency, robustness, accuracy and security.

In the financial services world more specifically, the AI systems that are deemed high-risk are those used to evaluate a person’s creditworthiness, monitor and evaluate work performance and behaviour, or recruit staff. “It will have broader implications and be a benefit for the financial services industry because it is requiring transparency around the data that is used and whether it is unethical,” says Pizzi.

Though the asset management community has not been singled out, it welcomes the Act, which could take another two to three years to come into force. As the German investment funds association BVI noted, the sector would be “significantly influenced” by improved data availability and quality, as well as the introduction of documentation for algorithms.

Built on trust
The general view is that if companies are more accountable as the Act suggests, it will bolster confidence in the use of these machines. For asset managers, an article in the Harvard Data Science Review sums up the issue. “Trusting a black box model means that you trust not only the model’s equations, but also the entire database that it was built from,” it notes.

Asset management is built on trust, but it would be foolhardy for any asset manager to fully trust any one system. AI and machine learning for investment managers must be explainable not only to the investment manager, but also to stakeholders, such as portfolio managers, shareholders, investors, CIOs and clients.

However, explainable AI or the ‘glass box’, like its black-box counterparts, comes in various formats. For example, simpler forms of machine learning such as decision trees, Bayesian classifiers and other algorithms that offer a degree of traceability and transparency in their decision-making can provide the visibility needed for critical AI systems without sacrificing too much performance or accuracy. 
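
As a rough illustration of that traceability (a minimal sketch using the open-source scikit-learn library, not drawn from any firm mentioned in this article; the data and feature names such as “momentum” are invented), a small decision tree can be printed out and read line by line:

```python
# Minimal sketch of a 'glass box' model: a shallow decision tree whose
# entire decision logic can be printed and inspected. Data and feature
# labels are purely illustrative.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy dataset standing in for a handful of factor signals.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["momentum", "value", "quality", "volatility"]  # invented labels

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole model reduces to a readable set of if/else rules.
print(export_text(tree, feature_names=feature_names))
```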

Visibility is more difficult in complicated, more powerful algorithms such as neural networks and ensemble methods. Take the example of ‘random forests’, a commonly used machine learning algorithm that combines the output of multiple decision trees to generate a single result: each tree on its own is readable, but the aggregated vote across hundreds of them is far harder to trace. 
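
The contrast can be sketched in a few lines (again illustrative only, using scikit-learn on invented data): the forest’s answer is an aggregate over hundreds of individual trees, so there is no single rule set to point to.

```python
# Illustrative sketch: a random forest reaches its answer by voting across
# many trees, so no single readable rule set explains a prediction the way
# one decision tree can. Data is invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

print(len(forest.estimators_))        # 200 individual decision trees
print(forest.predict_proba(X[:1]))    # one probability, aggregated from all 200
# Each forest.estimators_[i] is itself inspectable, but the combined
# decision path across hundreds of trees is not directly readable.
```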

Societe Generale’s Molez adds: “There are many different perspectives, and it depends on the models and what the end user wants to know. Most models in asset management are based on structured data and there is a prevalence of more classic machine learning [such as decision trees] which are easier to explain and understand than neural networks, because you can at least know which features have greater importance and weight in the contribution to a decision.

“For example, if there are 100 features, the focus may be on 3 to 5 of them to explain 75% of the investment decision-making process.”
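
Molez’s point can be worked through concretely. The sketch below (illustrative only; the data, model and the 75% threshold are assumptions, not figures from Societe Generale) ranks a model’s feature importances and counts how few of the 100 features cover most of the total:

```python
# Sketch of the idea that a handful of features may carry most of the
# explanatory weight. Data and the 75% threshold are purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=100,
                           n_informative=5, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

ranked = np.sort(model.feature_importances_)[::-1]     # largest first
cumulative = np.cumsum(ranked)                         # running share of importance
n_needed = int(np.searchsorted(cumulative, 0.75)) + 1  # features covering ~75%
print(f"{n_needed} of 100 features account for 75% of total importance")
```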

Developers have their own viewpoints too, but it is important that they be allowed to experiment and make their own mistakes, according to Cracknell at Insig AI. 

“They need the room to build the sandbox and break it and do it quickly in their black box. To me, explainable AI needs to be understandable, but that doesn’t mean that the end user has to understand how you create the model. You need to be able to walk them through the series of actions and inputs that you have used and explain why.”

For now, though, Molez notes that the use of AI is still very much a complementary process involving humans and algorithms. “The criticality of AI explainability is not as high in the process where humans remain the final decision-taker,” he says.

“Yet, even in this case, a proper monitoring of the model performance must be set to ensure that a human using AI for decision-taking will be alerted in case of a significant deviation in the AI algorithm behaviour.”

© 2022 fundsTech