LEGAL: Automating the law

Funds Europe examines some of the legal issues associated with new technology and how lawyers might be affected by automation.

No professional service, it seems, is immune to the threat of displacement by robotics, artificial intelligence and machine learning. As an increasing number of straightforward tasks are earmarked for automation, professionals will be forced to demonstrate just how they add value.

For journalists, there is the development of machine-readable news, also known as events-driven analytics. These services are designed for algorithmic traders that want to consume earnings announcements and central bank policy statements at the same low-latency pace as price movements. News precedes price and machine-readable news precedes scrolling news, says the logic.

Meanwhile in the investment management industry, active managers and financial advisers face the threat of robo-advisers. And now, it seems, lawyers face their own threat of automated disruption.

Earlier this year, the UK’s Financial Conduct Authority (FCA) announced plans to introduce machine-readable rules. As the FCA’s head of data and information, Nick Cook, described it, the aim is to “put out rules which are written manually in ways that can be fully and unambiguously interpreted by machines”.

The theory is that such rules will reduce the resources needed for regulatory reporting and cut the cost of compliance: eventually they could feed directly into a firm’s compliance systems rather than being sent via a legal adviser to decipher the ‘ambiguity’.
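What a machine-readable rule would actually look like remains open. As a minimal sketch, assuming a rule were published as structured data rather than prose (the schema, field names and figures below are hypothetical, not an FCA format), a compliance system could evaluate it directly against transactions without a lawyer first interpreting the text:

```python
import json

# Hypothetical machine-readable rule expressed as structured data rather than
# prose. The schema, field names and threshold are invented for illustration
# and are not the FCA's actual format.
RULE_JSON = """
{
    "rule_id": "EXAMPLE-REPORTING-1",
    "description": "Report any in-scope transaction at or above the threshold",
    "applies_to": "equity_trade",
    "threshold_eur": 100000,
    "action": "submit_transaction_report"
}
"""

def rule_triggered(rule: dict, transaction: dict) -> bool:
    """Return True if the transaction falls within the rule's scope and threshold."""
    if transaction["instrument_type"] != rule["applies_to"]:
        return False
    return transaction["value_eur"] >= rule["threshold_eur"]

rule = json.loads(RULE_JSON)
trade = {"instrument_type": "equity_trade", "value_eur": 250000}

if rule_triggered(rule, trade):
    print(f"{rule['action']} required under {rule['rule_id']}")
```

In a sketch like this, the firm’s system consumes the rule as data and applies it mechanically; the ambiguity a legal adviser would otherwise resolve has already been removed at the point of publication.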

Part of this cost goes on software: the global market for compliance software has been forecast to be worth $118 billion by 2020, up almost 50% from the current $80 billion. Another part is the money spent on legal advisers.

For now, most law firms are positive about the prospect of machine-readable rules. They do not see it as a substitute for considered legal advice, nor do they see it as a potential dilution of the regulations on the grounds that “unambiguous” rules may lose some of the nuance that robust regulation contains.

“I don’t think it will dilute or dumb down the regulations,” says Jerome Lussan, founder and chief executive of Laven Partners, a consultancy focused on advising funds on regulatory issues. “Machines are computed to work within our context and not the other way round so I am quite optimistic about the prospect of machine-readable rules.”

Furthermore, Lussan does not see these machine-readable rules as an example of artificial intelligence (AI), drawing a distinction between machine-readable rules and machine learning.

“I am a believer that AI doesn’t exist yet. We have better and better algorithms, but they are not intelligent. At best they are algos that can be improved with more inputs.”

Robo-lawyer
Nevertheless, some law firms have tried to get ahead of any potential disruption by developing their own robo-lawyers. In December 2017, Norton Rose Fulbright launched what it describes as its own AI-powered tool, ahead of the introduction of the EU’s General Data Protection Regulation (GDPR).

The chatbot is called Parker, presumably in homage to the butler-cum-chauffeur from the 1960s puppet TV show Thunderbirds, and is designed to help non-EU-based clients determine whether they are bound by the GDPR.

According to the law firm, Parker uses natural language processing to provide automated answers to a series of frequently asked GDPR-related questions. The chatbot was originally developed in Australia by the firm’s global head of technology and innovation, Nick Abrahams, together with Australia-based tech firm Edward Odendaal, and applied to Australia’s own data protection regulation, which was introduced in February.

According to Abrahams, the chatbots have “extensive applications” and were a result of demand from clients, many of whom had “expressed a strong interest in exploring and developing these technologies to help address their most pressing business needs”.
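Norton Rose Fulbright has not published how Parker works internally. Purely as an illustration of the FAQ-answering idea, the sketch below matches a question to the closest canned answer by simple word overlap; a production chatbot would use far richer natural language processing, and the questions and answers here are illustrative only, not legal advice.

```python
# Illustrative FAQ matcher, not Parker's actual implementation. Each stored
# question is scored by word overlap with the user's query and the answer for
# the best match is returned. Answers are illustrative only, not legal advice.

FAQ = {
    "does the gdpr apply to companies based outside the eu":
        "It can, if you offer goods or services to, or monitor, people in the EU.",
    "what counts as personal data under the gdpr":
        "Any information relating to an identified or identifiable natural person.",
    "do we need to appoint a representative in the eu":
        "Non-EU organisations caught by the GDPR generally must, with limited exceptions.",
}

def answer(question: str) -> str:
    query_words = set(question.lower().replace("?", "").split())
    # Pick the stored question sharing the most words with the user's query.
    best_match = max(FAQ, key=lambda stored: len(query_words & set(stored.split())))
    return FAQ[best_match]

print(answer("Does the GDPR apply to a US company with EU customers?"))
```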

However, the use of machine learning and AI will be a balancing act for the legal profession, just as it will be for others, from journalists and machine-readable news to active fund managers and robo-advisers.

“The advent of machine-readable legislation and rules is part of a broader theme: increasing efficiency for all stakeholders across the legal ecosystem through the use of technology,” says Imogen Garner, a financial services partner at Norton Rose Fulbright.

“Technology will undoubtedly be incredibly impactful to the legal profession, fostering increases in efficiency and enhancing client outcomes, yet this impact is unlikely to displace the demand from clients for considered views on how legislation and rules impact their businesses legally and commercially.”

One concern for Laven’s Lussan, however, is that an increasing use of this technology could impact the education and training of novice lawyers. “We are all building towards automated workflows that are reducing the time for certain tasks such as due diligence and regulatory reporting.

“It is amazingly useful but could make redundant some of the tasks typically done by junior staff. This could be very threatening to the legal profession. How do you train lawyers if those tasks are taken away from them?”

If, in the future, technology and software take care of all the mundane tasks and leave lawyers to focus solely on the more complex aspects that make them great, Lussan asks, how do you become that great lawyer without the chance to learn by completing thousands of due diligence arrangements and regulatory reports? “The law is about being diligent, attention to detail and using your eyes and your memory,” he says.

There are countless examples where the advancement of technology and automation has led to a perceived deterioration of manual skills, from calculators and arithmetic to smartphones and the inability of anyone to remember a phone number any more, including their own. The worry, says Lussan, is that the same trend happens in the legal profession.

Conflicting demands
Another concern voiced by market participants about the legal and regulatory implications of the greater use of AI is that the opaqueness of the technology may clash with the transparency demands of contemporary regulation, such as MiFID II, which asks for granular detail on firms’ trading activity.

When considering AI risks in a business context, it is important to remember that not all AI is created equally, says Dave Wells, European vice president and managing director at US-based software company Pegasystems. “Specifically, artificial intelligence comes in two distinct flavours – transparent and opaque – and both have very different uses, applications and impacts for businesses and users in general.”

A transparent AI is a system whose insights can be understood and audited, and whose outcomes can be reverse-engineered to show how it arrived at any given decision.

But there are also opaque AI systems that do not easily reveal how they work and are more akin to the black box software model. Such a distinction creates obvious business issues for firms, not least whether they can fully trust the AI system they are using, argues Wells.

“Either the AI needs to be transparent so that business management can understand how it works or, if the AI is opaque, it needs to be tested before it is taken into production. These tests need to be extensive and go beyond searching for viability in delivering business outcomes; they must also search for the kind of unintended biases that have the potential to influence AI outcomes.”
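To make the distinction concrete, here is a minimal sketch contrasting the two: a transparent scorer exposes the per-feature contributions behind each decision so an auditor can reverse-engineer the outcome, while the opaque version returns only the verdict. The model, weights and feature names are invented for illustration and are not drawn from any Pegasystems product.

```python
# Invented example contrasting transparent and opaque decision systems.
# The features, weights and threshold are illustrative only.

WEIGHTS = {"income": 0.4, "existing_debt": -0.7, "years_as_client": 0.2}
THRESHOLD = 0.5

def transparent_decision(applicant: dict) -> tuple[bool, dict]:
    """Return the verdict together with the per-feature contributions behind it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

def opaque_decision(applicant: dict) -> bool:
    """Same logic, but only the verdict is exposed -- nothing for an auditor to inspect."""
    verdict, _ = transparent_decision(applicant)
    return verdict

applicant = {"income": 1.2, "existing_debt": 0.4, "years_as_client": 3.0}
verdict, contributions = transparent_decision(applicant)
print("approved:", verdict)
for feature, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")  # auditable breakdown of the outcome
```

The transparent version lends itself to the kind of bias testing Wells describes, because each decision can be decomposed and challenged; the opaque version can only be tested from the outside, by probing its outputs.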

Additionally, there are regulatory and legal considerations to take into account: not just MiFID II and its granular transaction reporting requirements, but also GDPR. That takes us full circle, from the chatbot created to help with basic GDPR enquiries before the legislation took effect to the data protection issues that more sophisticated AI will create in the future.

If an AI system can only show the result but not how it arrived at such an outcome, could this be a problem? “The opaqueness of machines is always a concern,” says Laven’s Lussan. “But regulators are mostly interested in results, so I don’t necessarily see it as a problem. After all, the brain is very opaque.”

Ultimately it depends on how much we want to know about how the software is made, he says, drawing an analogy with modern cars, where you typically cannot open the bonnet and deal with the oil yourself because the car is connected to the manufacturer, which wants to monitor its oil use. “I think this is fine if it results in more efficient systems.”

Wells, on the other hand, believes that robust AI testing will be a necessity in the post-GDPR era. “The legislation will mandate that companies must be able to explain exactly how they reach certain algorithmic-based decisions about their customers,” he says.

“What this means is that organisations able to control AI transparency levels by forcing the methods AI uses to make decisions from opaque to transparent will have a distinct advantage, as they’ll be much more easily able to comply and to protect both themselves and their customers.”

©2018 funds europe
