Risk management specialists are staring down the barrel of increasing uncertainty caused by artificial intelligence (AI) while also considering the opportunities that AI will provide for both the risk management function and the business.

In-house lawyers, especially general counsels (GCs), often bear the brunt of this risk when incorporating AI, data and tech into their legal teams and wider businesses. In a recent roundtable co-hosted by legal services provider Consilio and The Lawyer, GCs gathered to address their concerns and share knowledge on the considerations related to the use of AI.

The trouble with vendors

“When approaching AI, understanding how technology works is half the battle,” one general counsel begins. Grasping how the software works takes time, so much so that GCs worry the promised efficiency gains may never materialise.

Another challenge is that it’s difficult to know what ‘good’ looks like, or where the money goes, when buying an AI product. Consequently, some GCs said they have yet to find a product that gives them confidence an AI-powered purchase will be a worthy investment.

The roundtable participants highlighted that the vendor market is saturated: there are many transactional vendors eager to sell their product who don’t understand the data implications well themselves. A GC may therefore invest in a piece of tech, only to struggle to fit the various products together. The group agreed that a scattergun approach rarely works as a whole. Ideally there would be a one-size-fits-all solution, but that is hard to identify in large businesses where everyone wants something different.

Navigating this process is difficult, and what the GCs want is for a guinea pig to test all of the products and tell them what to use. It’s better to be a fast follower than the first, the in-house lawyers contend.

Their other pain point is that enlisting a vendor takes time, requiring careful consideration and internal approvals, all without the experience or a strong sense of whether the product will be effective. Herein lies the challenge of securing approvals: regulators aren’t sympathetic to shortcuts.

But vendors and large tech companies are experiencing an onslaught of regulatory activity themselves. There was concern among the group that some vendors don’t understand the regulation they are now subject to. It also emerged that contracts from AI sellers tend to be incomplete and poorly understood, seemingly drafted by a businessperson rather than external lawyers.

However, there was a high degree of comfort that AI is here to stay, and so the investment is becoming more viable. Cleaning up data is important, but this has always been an issue. Regulators have given guidelines, but they will wait for businesses to fail to meet them before rules are enacted. Attendees were concerned that the Financial Conduct Authority has yet to issue guidance.

Another ethical dilemma for GCs

The GCs then turned to ethics, an instinctive trajectory for conversations on AI. As one GC shared, experts maintain that ChatGPT arrived too early, before a regulatory framework existed for it to be used ethically.

Now, 18 months on, the question remains: how much responsibility can you give AI? GCs have heard that the genAI software Microsoft Copilot creates data three times faster than a human. What are the retention and record-keeping obligations for this data? Where does responsibility for what’s created lie, and who is the custodian of these documents? Others worry that AI has no moral compass, so it essentially ‘spits out’ whatever it wants.

Ultimately, data is the key to AI’s success – the more informed it is, the better its answers will be. Vendors also have a vested interest in not limiting the use of data, because their tech can be better trained. But the gains made by AI cut both ways: the fact that you can do so much more means that you will also be required to do so much more.

Legal obligations around customer confidentiality concerned some of the GCs; sharing data is a fine line to navigate. Others maintain that a best-efforts approach is the right way to use the tools. “I refuse to believe that there isn’t some kind of functionality that couldn’t be built into a tool to redact personal data,” said one attendee.

This topic isn’t being actively explored by vendors. Indeed, their current approach is “here’s the tool, if you don’t use it, you will be left behind, but you need to deal with the problems”.

To close, one general counsel highlighted that universities, especially those housing academic specialists on AI, are incorporating the tech into their undergraduate studies. This positions the next generation of junior lawyers as especially knowledgeable on technology – which is attractive to law firms.

Consilio commentary

Sitting at the nexus of law and the artificial intelligence revolution can be a daunting proposition. There is likely great benefit in collaboration between companies, and even regulators, on how artificial intelligence will affect the legal process and the pillars of the rule of law. In all industries, and certainly in financial services, there is a great deal of focus from the business on realizing the efficiency gains promised by the integration of AI.

As discussed in the roundtable, much of the risk of those decisions falls to the general counsel; however, the GC must also enable the business to compete in a rapidly evolving world.

I am Peter Ostrega, Consilio’s global managing director and leader of our Financial Services Vertical. Having facilitated and moderated roundtable discussions amongst our clients for 5+ years, I’ve had the unique opportunity to see how AI has entered the discussion and very quickly come to dominate it. Many of our clients are highly engaged in a variety of proof-of-concept efforts, some of which have shown incredible promise to drive efficiency into business and legal processes. It’s an exciting time, to be sure, as companies in every industry try to define their future and find a competitive advantage in their deployment of AI tools and workflows.

A continued robust discussion about the quality of data being used to train models, the data privacy implications and the transparency of these processes will be critical to ensuring a successful AI future for the banking industry and beyond.