Can financial advisors open the AI black box for clients?

by Thomas Bosshard

Many machine-learning models behind AI applications offer no visibility into what they do or how they reach a decision. In other words, we can see the information that goes in and the results that come out, but we do not understand what happens inside. This so-called “black box” problem is a key obstacle to the acceptance of AI-based solutions for investment purposes, because it makes it nearly impossible to explain to clients how an algorithm reached a decision, e.g. how to invest their assets or how to weight their portfolio.

It is natural for people to distrust what they do not fully understand. Wealth managers should therefore be able to explain the “thinking” behind AI-powered investment decisions, particularly if they expect clients to pay management fees for that decision-making. A black box makes this impossible.

No transparency—no trust

If no one can explain what happens between input and output, blind trust in machine learning is risky. Deep-learning algorithms raise particular concerns because they are so complex and autonomous that, at some point, even the people who built them can no longer follow their reasoning. Deep learning is a form of artificial intelligence that “learns” by identifying patterns as it sifts through data and information, mimicking the learning brain and requiring minimal guidance to reach decisions. Put simply, it is essentially AI being built by AI, and the result can quickly become an uninterpretable tangle of neural connections.

Another obvious issue is accountability. The EU’s General Data Protection Regulation (GDPR), aimed at showing EU citizens how companies like Facebook track and use their personal data and at securing their “right to be forgotten”, also includes a “right to explanation”. For decisions that affect their lives and livelihoods, e.g. when they are denied a loan, insurance or medical treatment, people will obviously want to understand why the “computer says no”. If no justification is available, how can anyone trust these decisions, or the businesses and government agencies behind them?

In addition, artificial intelligence cannot guarantee that nothing will go wrong, as recent accidents with self-driving cars have shown. But how can problems be fixed if no one understands them? And who can be held accountable when something goes wrong? Given the growing reach of AI, these concerns are justified. We need a way to trust AI, and that starts with clear visibility into what drives life-changing decisions, such as financial ones.

In the long run, not understanding the reasoning behind decisions is intolerable for the human mind. No one yet trusts machines enough to tell us what to believe or to make potentially life-changing decisions for us. This applies to financial advice as much as to any other part of life.

From opaque to transparent: glass box AI

Now that regulators have turned their attention to accountability, and people are becoming warier of AI applications, AI systems are being taught to justify their reasoning. The emerging field of explainable AI (XAI) aims to create AI methods whose decisions can be traced and understood by humans. For example, when an algorithm classifies something, it would at the same time explain why it arrived at that classification. This makes data networks more interpretable and helps users learn from machines. At the forefront of this development is the U.S. Department of Defense, which uses AI-powered technology for all kinds of high-risk functions and decisions. Its aim is to produce “glass box” AI models that are interpretable in real time by “humans in the loop”. Explainable AI is also rapidly being adopted by businesses as best practice.
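The idea of a classification that comes with its own explanation can be illustrated with a toy sketch: a transparent linear scoring model that returns a decision together with the per-feature contributions that produced it. Everything here is hypothetical for illustration only; the feature names, weights and threshold are invented and not taken from any real model or from Adviscent’s products.

```python
# Hypothetical "glass box" model: a linear score over illustrative
# investment signals. The weights are made up for this sketch.
WEIGHTS = {
    "momentum": 0.6,     # assumed: positive momentum favors overweighting
    "valuation": -0.4,   # assumed: rich valuation argues against
    "volatility": -0.3,  # assumed: high volatility argues against
}

def explain_decision(features, threshold=0.0):
    """Score an asset and report each feature's contribution to the result."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "overweight" if score > threshold else "underweight"
    return decision, contributions

decision, why = explain_decision(
    {"momentum": 0.8, "valuation": 0.5, "volatility": 0.2})
print(decision)  # prints "overweight"
# Show the client which signals drove the call, largest effect first.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

Unlike a deep network, every step of this decision can be read off directly: the advisor can show the client that, say, strong momentum outweighed a stretched valuation. Real XAI methods aim to recover comparable explanations from far more complex models.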

As with all technologies, humans need to stay in the loop of AI-powered decision-making. AI alone is not yet capable of delivering the guidance investors get from a long-term relationship with a professional, human financial advisor who helps them feel in control of their financial decisions. But AI-powered applications such as Interactive Advisor can help inform advice and create a deeper understanding of clients. Interactive Advisor also sidesteps the black box problem by making the relationship between input data and decisions transparent in a way that is easy to understand and explain to clients.
