Artificial intelligence: The only way is ethics

As artificial intelligence becomes increasingly prevalent, Nicholas Pratt asks what is in place to ensure the ethical use of the technology.

On November 11, Douglas Rain passed away. A Canadian actor and narrator, he was best known for his role in the film 2001: A Space Odyssey as the disembodied voice of the Hal 9000 computer. In his eeriest scene, Hal reacts as follows to the prospect of being shut down: “I’m sorry, Dave. I’m afraid I can’t do that… This mission is too important for me to allow you to jeopardise it.”

The prospect of a computer assuming control and usurping its human programmers is a popular doomsday scenario that recurs in the public consciousness with every new technological advance. The growth of artificial intelligence, machine learning and neural networks, and their role in financial services, has focused attention on the need for a more explicit ethical framework than currently exists.

A recent survey from IBM found that 60% of the 5,000 executives polled are concerned about their ability to explain how AI and machine-learning tools use data and make decisions in order to meet their regulatory obligations.

This is a marked increase on the 2016 survey, in which the same concern was shared by just 29% of respondents. Similarly, legal, security and privacy concerns, as well as data governance policies, are of much greater interest to businesses than they were two years ago.

So-called ‘explainability’ has become the new selling point of the big tech firms’ AI tools, from IBM to Google to Microsoft. Google’s latest AI product, the What-If Tool, enables non-programmers to examine machine-learning systems and to assess the fairness of their algorithms. Meanwhile, firms such as KPMG are building their own explainability tools in order to open up the black boxes and understand how algorithms reach their decisions.
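What these tools do varies by vendor, but a common ingredient is post-hoc inspection of a trained model. As a minimal illustration only – not any vendor’s actual product – the sketch below uses scikit-learn’s permutation importance on synthetic data: shuffle each input in turn and measure how much the model’s accuracy suffers, which hints at which features drive its decisions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a fund's risk or selection model.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a large drop suggests the model leans heavily on that input.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

No tool of this kind explains why a decision was right, only which inputs it rested on – which is precisely the gap between explainability and accountability the article goes on to explore.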

In the US, the Federal Reserve is also paying closer attention to the use of AI and what regulatory responsibilities it has. Fed governor Lael Brainard, interestingly the only current board member not appointed by Donald Trump, recently spoke of the need to pay more attention to the risks posed by the use of AI.

“Regulation and supervision need to be thoughtfully designed so that they ensure risks are appropriately mitigated but do not stand in the way of responsible innovations that might expand access and convenience for consumers and small businesses or bring greater efficiency, risk detection, and accuracy,” she said.

And in the asset management industry, the issue of AI explainability and transparency has come to the fore. BlackRock was reported to have scrapped a number of AI liquidity risk models because of a lack of transparency.

The neural networks beat other models in tests but the results could not be explained, hence they were shelved. “The senior people want to see something they can understand,” commented Stefano Pasquali, BlackRock’s head of liquidity research.

And then there is the issue of liability if, as is increasingly likely, we end up in a scenario where a fully autonomous, AI-powered fund is selected by an autonomous fund selector. Just like the debates about liability and driverless cars, where does the accountability lie in an autonomous fund management life-cycle?

If an AI-powered fund should go wrong and experience its own Hal 9000 moment, who does the end investor blame? Is it the original developer of the AI software? Is it the fund manager that manufactures the fund and deploys the technology or is it the fund selector that recommends the fund to the end investor?

Pushback
When talk of AI relates to the front office and the role of the active manager, things get very pointy very quickly, says JB Beckett, fund buyer and UK director of the Association of Professional Fund Investors (APFI). “That’s when you get the biggest pushback.”

Beckett, a prominent proponent of AI and author of the 2017 book New Fund Order, says criticising AI models as black boxes is unfair. There is, he argues, an overconfidence among fund selectors that they can understand the inner workings of an active fund manager.

“We’ve become accustomed to that uncertainty but this is not the case with AI,” says Beckett. In part, this may be down to a residual fear from the 2008 financial crisis over relying too much on models without properly understanding how they work or how they should be applied.

When it comes to AI-driven funds, he says fund selectors and investors need to understand two important issues. The first is the rules of the algorithms. The second is the structure within which the algorithms work and how the AI will evolve as it continues to learn. “It is this issue that will create the greatest challenge and potential discomfort for fund buyers,” he adds.

There is also the question of who is responsible for setting the algorithmic rules and managing the interaction. The FCA has made some effort to address this via consultation paper CP17/25, published in 2017, which calls on firms to identify who is responsible for setting an algorithm’s rules and for its means of interacting.

The challenge for the regulators, says Beckett, is to find a way to safeguard the rules behind a machine-learning system, and to keep these safeguards in place if the machine learning leads to the system evolving beyond its initial framework.
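What such a safeguard might look like in code is necessarily speculative, but the shape of the idea can be sketched: a fixed, human-set constraint layer that every output of the learning system must pass through, and that does not move however the model retrains. The names and limits below are invented for illustration.

```python
import numpy as np

# Invented limits for illustration - the point is that they are set by
# humans and stay fixed while the model behind them keeps learning.
MAX_POSITION_WEIGHT = 0.05  # no single holding above 5%
MAX_GROSS_EXPOSURE = 1.00   # fully invested, no leverage

def constrain(raw_weights: np.ndarray) -> np.ndarray:
    """Force model-proposed portfolio weights through the fixed rulebook."""
    w = np.clip(raw_weights, 0.0, MAX_POSITION_WEIGHT)  # per-name cap
    gross = w.sum()
    if gross > MAX_GROSS_EXPOSURE:  # scale down if the cap is breached
        w *= MAX_GROSS_EXPOSURE / gross
    return w

# However the self-learning model evolves, its output still passes
# through the same auditable constraint layer.
proposed = np.array([0.20, 0.10, 0.04, 0.66])  # hypothetical raw output
print(constrain(proposed))  # -> [0.05 0.05 0.04 0.05]
```

The regulatory question Beckett raises is exactly whether such a layer can remain meaningful once the system has evolved beyond the assumptions it was written for.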

The due diligence process will also have to evolve. Companies will need to devise questionnaires that consider the algo programmer’s role in the design and implementation of the trading strategy. It is an issue the APFI is reviewing in its training and accreditation programme.

There also needs to be more consideration of the technical details of the AI and algorithms powering the fund, says Beckett. For example, what technology is the platform built on, what are its parameters and how often are they calibrated? “What we are worried about is the self-learning. What are the conditions for self-learning? We are almost asking as much about the technology infrastructure as the strategy,” says Beckett.

For fund selectors, it means a lot more time talking to algo programmers. For those selectors who already look beyond the lead manager when researching a fund, this will not be a big development – but for others, it will be a big change, says Beckett. “I would say that just 10% of fund selectors are ready for this.”

Automated
Beckett sees more innovation among asset managers than fund buyers at present. There have been some developments, though. Morningstar has developed its Q Rating service which, while not a purely quantitative approach, is at least automated. “The fascinating thing will be how it assesses an AI-powered fund,” says Beckett.

Launching and running a fund is still an onerous undertaking, with the processes and regulatory burden you would expect of a market built on 50-year-old platforms. But AI-powered funds are clearly becoming more prevalent. This is both a defensive and an offensive move for active managers, who want to protect themselves against the rise of indexing and also to tap into the relatively unexplored world of millennial investors, says Beckett.

All in all, is the growing use of AI a healthy development for the industry? Beckett does foresee some potential short-term trauma as fee compression and mergers and acquisitions intensify, but in the long term, the market should become more efficient as a result.

Beckett says we are still some way off the prospect of an AI fund-selection process that removes any human error. But the marketing campaigns behind asset managers’ AI funds are at full throttle. The consequence is that, in these early stages, AI-powered funds may actually be more expensive, even though logic dictates that using algorithms instead of humans should result in lower operating costs.

“While anything is new, it is more expensive and marketed as such. AI is clearly the future of indexisation but right now we are still seeing AI being deployed in inefficient mutual fund structures, so we also have to see more AI introduced into back-office tasks,” says Beckett.

The manager’s maxim
Raphael Fiorentino, CEO of UK-based fintech Butterwire, is a believer in augmented intelligence – the use of AI to complement traditional active management rather than replace it. It is an approach that relies on the maxim of legendary hedge fund manager Paul Tudor Jones that “no man is better than a machine, and no machine is better than a man with a machine”.

The investment community has become siloed by style, says Fiorentino. At one end of the spectrum are the traditional active managers driven by fundamental, bottom-up analysis, where deep company and industry knowledge is everything and the only digital support required is a Bloomberg terminal and an Excel spreadsheet.

At the other end are top-down macro strategies, quant-based strategies and trend-following commodity trading adviser strategies, each with its own specialised skills. As the market has siloed, investment managers have become either agnostic about or sceptical of what other styles could bring to their stock selection and portfolio construction.

Fiorentino talks about using AI to take on the “tedious data-intensive work that humans are not designed for” – such as advanced stock-screening that draws on a range of data sources (market data providers, official filings, big data aggregators and personal data).

“For the first time, artificial intelligence can now bring a whole new perspective to investment decision-making,” he says. “The power of AI is its ability to tirelessly look for, combine and distil signals from masses of noisy data already available in the marketplace.

“By bringing out ‘interesting’ insights – whether confirming or enhancing a suspected salient point, or identifying one that might otherwise have been overlooked – AI is the humble ‘idiot-savant’ that can usefully take on the tedious data-intensive work that humans are not best suited for.

“Active funds can embrace the AI analytical revolution, delegating the more systematic data-intensive task to an AI engine, freeing analysts and PMs’ time to focus on provocative research and high-conviction portfolio decisions, and exploiting its computing power to contribute an extra informational edge, all for a fraction of the cost of traditional research.”
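As a hedged illustration of the signal distillation Fiorentino describes – with hypothetical column names and made-up numbers standing in for real data feeds – normalising several noisy factors and averaging them into a single composite screen might look like this:

```python
import pandas as pd

# Hypothetical factor scores for four stocks, each column notionally
# drawn from a different data source; the values are invented.
screen = pd.DataFrame({
    "earnings_revision": [0.8, -0.2, 1.5, 0.1],   # from official filings
    "news_sentiment":    [0.3,  0.9, -0.5, 0.2],  # from big-data feeds
    "value_score":       [1.2, -1.0, 0.4, 0.8],   # from market data
}, index=["AAA", "BBB", "CCC", "DDD"])

# Z-score each signal so no single noisy source dominates, then average.
z = (screen - screen.mean()) / screen.std()
composite = z.mean(axis=1).sort_values(ascending=False)
print(composite)  # a ranked shortlist for the human analyst to interrogate
```

The output is deliberately not a trade: it is a ranked shortlist handed back to the analyst, which is the division of labour the augmented-intelligence argument rests on.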

However, Fiorentino is less enthusiastic about the prospect of using AI to take on traditional active management tasks or fully autonomous AI fund management. “We see AI as a tool to augment traditional fund management. It is intractable to ask a backwards-looking black box to make future decisions,” he says.

“The minute you start moving from the use of AI for low-skill/high-predictability jobs to high-skill/low-predictability jobs, you are moving to an intractable, facile situation.”

While he does not believe that AI can fully replicate the finer points of successful active management, he argues that it is the excesses of poor active management that have created the possibility of fully autonomous funds. “One of the main reasons that we are opening the door to AI is because fund managers have let people down and are seen as too expensive.

“But no machine can sit down with a CEO and work out if they are lying, or work out if a German corporate bond is riskier over the long term than an Italian equity, or what to make of Brexit.”

Fiorentino says that he understands the attraction of AI but is “staggered” that a fund selector would want a fund without any human intervention. “It is a reflection of how low the industry has fallen and how out-of-date the cult of personality/star trader has become,” he says.

“For decades, the industry’s pricing model has not fundamentally changed. All the pain is still passed on to the investors. They are frustrated and the reaction is to get rid of the humans. I disagree and think we should use the machines to help humans raise their game. We should strive to engineer investment knowledge and reduce fees,” says Fiorentino.

Falling into rabbit holes
Aside from scepticism about the ability of AI to capture the qualities of a star manager, Fiorentino is also concerned about the market’s integrity should there be a proliferation of AI-powered funds. “The markets need diversity to function. Without it, everyone ends up doing the same thing, like we have seen in the ETF market where a duopoly has developed.

“In our approach, we do not want two engines coming out with the same result. That reinforces these huge, deep silos and leads to falling into analytical rabbit holes rather than developing broader insights. It is not about mining more historical data, it should be about gaining more insight. You need uncorrelated behaviour. The markets thrive when there is diversity.

“More effort needs to be made to find people with fluid economic styles that the machines can learn, to siphon the behavioural economics and the investment knowledge that comes from that. AI should be there to help us raise our game and keep us honest.”
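That demand for uncorrelated behaviour can at least be monitored. Below is a toy sketch – simulated scores rather than any real engine’s output, with an arbitrary threshold – of checking whether two engines have converged on the same view:

```python
import numpy as np

# Simulated daily buy/sell scores from two hypothetical engines.
rng = np.random.default_rng(0)
engine_a = rng.normal(size=250)
engine_b = 0.2 * engine_a + rng.normal(size=250)  # mostly independent of A

# If the engines' scores correlate too highly, they are mining the same
# rabbit hole rather than contributing a distinct view.
corr = np.corrcoef(engine_a, engine_b)[0, 1]
print(f"signal correlation: {corr:.2f}")
if abs(corr) > 0.7:  # threshold chosen purely for illustration
    print("engines are converging on the same view - diversity lost")
```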

Nobody ever complains when fees are reduced and performance is improved, says Fiorentino. “My concern is that it could lead to concentration, a duopoly and unhealthy, disorderly markets. If autonomous funds start to expand beyond a small part of the market, then human traders will have to adapt.”

©2018 funds europe
