A Balancing Act – the best practice approach to AI in the motor retail industry

In its latest Spotlight on Finance and Insurance, AM explores the best practice approach to AI (artificial intelligence), one which ticks all the boxes and meets with the regulator’s approval. Aimée Turner reports

At an industry conference last September, a Financial Conduct Authority (FCA) official restated her organisation’s intention to be relentless in championing the consumer’s interest.

The assertion prompted one leading lender to express what many were thinking.

“It’s a great thing,” he said, following her speech, “but I think it is hard for lenders to give forbearance to do all the right things and to make a profit at the end of the day. I just hope that you appreciate that because your speech makes it sound very easy. For lenders, it is anything but.”

The Consumer Duty regulations, which came into effect last July, are aimed at setting more stringent standards of consumer protection and improving competition in the retail financial services market.

Speaking at that Finance & Leasing Association event in September, the FCA official outlined the principles under which it is operating: “Real world outcomes are what really matter, which means customers are given the information they need at the right time and presented in a way that they can understand.”

With this in mind, AI is being examined as a way to solve precisely these issues, offering, as it does, a potentially compelling win-win scenario for both consumers and lenders: better-suited products for the buyer and greater efficiency for the provider.

The car finance and insurance market is undergoing a seismic shift in how it demonstrates compliance. It’s no longer a case of ‘business as usual’, so tools such as AI, particularly revolutionary generative AI, could be the technology that both delivers innovation and puts customer outcomes first.

Over and above the hype, however, what considerations does the finance and insurance community need to take into account to avoid falling foul of regulatory scrutiny?

To be fair, the regulator is pretty bullish about the potential of AI. Nikhil Rathi, CEO of the FCA, speaking in January, acknowledged that financial services are at a global inflection point and that slow adoption could mean missed opportunities for growth and investment.

Even so, consumer-facing technology that is powered by AI should always observe the critical principles of financial inclusion and data security – or else risk triggering what Rathi terms a ‘techlash’. 

There are four main areas where the use of AI could have the greatest impact on car finance and insurance: risk assessment and pricing; frictionless customer journeys; fraud detection and prevention; and dynamic market adaptability.

Jonny Clayton, director and founder of Oodle, sees AI as a computing power that has the potential to disrupt existing business processes and customer journeys ‘on a scale that 99% of us can’t even comprehend yet’.

He says: “It may sound scary, but actually it offers exciting consequences – and positive customer benefits – by supporting regulated lending businesses with a number of functions.”

He adds that AI’s beauty is that it’s able to create underwriting models that continually learn and improve. “Data science can already create weighting and credit score algorithms that allow the understanding of credit risk, pricing and lending criteria etc.,” he says.

“But AI takes this to another level with machine-run experiments to better understand how different channels perform. By feeding in big data and credit bureau data, we can continually reinforce AI’s learning in terms of lending and underwriting decisions and its ability to understand risk.”
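
As a purely illustrative sketch of the idea Clayton describes – not Oodle’s actual system, and with the feature names, the “defaulted” label and the monthly refresh cadence all assumed for the example – a continually refreshed underwriting model might be refit on the latest labelled loan outcomes so that new repayment behaviour feeds back into future lending decisions:

```python
# Illustrative only: a periodically retrained credit-risk model.
# Feature names and the refresh cadence are assumptions for this sketch.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

FEATURES = ["bureau_score", "loan_to_value", "term_months", "monthly_income"]

def retrain(loan_outcomes: pd.DataFrame) -> tuple[LogisticRegression, float]:
    """Refit the default-risk model on the latest labelled outcomes."""
    X = loan_outcomes[FEATURES]
    y = loan_outcomes["defaulted"]  # 1 = loan defaulted, 0 = repaid
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    return model, auc

# Run monthly: the model is refit on the expanded outcome history, so new
# repayment behaviour and bureau data feed back into underwriting decisions.
```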

While AI has the potential to improve operational efficiency for businesses and enhance customer experiences and outcomes, the integration of AI in financial services does, however, call for regulatory and ethical considerations to align with consumer protection standards.

Aidan Rushby, founder and CEO of lender Carmoola, says AI should be anchored on three principles: transparency and explainability where customers know how AI impacts their loan terms; data privacy and security where rigorous protection measures safeguard customer information and prevent misuse; and fairness and non-discrimination where AI algorithms are unbiased and treat all applicants equally, regardless of background or demographics, with only materially significant information being sought. 

Oodle’s Clayton agrees. “You can’t unleash the AI beast unchecked,” he says. “It’s essential to add another superpower into the mix – human intelligence. Only then can the system truly understand the concept of the regulatory framework and how all the lending elements interact. And it’s this potent brew of AI and human oversight that will allow the magic to happen.”

Founded in 1976, CGI is an independent IT and business consulting services firm that has been active in supporting car finance and insurance businesses with AI solutions for many years.

“Whether AI operates passively or extends to highly capable generative AI that can perform various tasks independently, human involvement remains essential,” says Cheryl Allebrand, a CGI specialist in AI and automation.

“Even with generative AI, it should function as an augmentative tool rather than a substitute for human input and the unique value people contribute. It cannot operate autonomously as implementation relies on human guidance for meaningful outcomes and value extraction.”

Not only is the human element indispensable, but Allebrand also stresses the need for ongoing maintenance of AI models.

“You can’t just build it and expect that it’s going to continue working,” she says. “That’s where businesses fail at implementing AI because they think it’s like a computer system that they buy and simply implement. But no, you need to regularly review that the models are still doing things well.”
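
As a minimal sketch of what that ongoing review might involve – the metric, tolerance and figures below are assumptions for illustration, not CGI’s practice – a lender could track a deployed model’s performance month by month and flag it for retraining once it drifts from its baseline:

```python
# Minimal sketch of a periodic model health check; the 0.05 tolerance
# and the AUC figures are invented for illustration.
from statistics import mean

def model_still_healthy(baseline_auc: float,
                        recent_monthly_aucs: list[float],
                        tolerance: float = 0.05) -> bool:
    """Return False if recent performance has drifted below the baseline."""
    return mean(recent_monthly_aucs) >= baseline_auc - tolerance

# Example: performance measured at deployment vs. the last three months.
if not model_still_healthy(baseline_auc=0.78, recent_monthly_aucs=[0.71, 0.70, 0.69]):
    print("Model has drifted - schedule a manual review and retrain.")
```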

Voyc is another key player in the field of AI-driven quality assessment and compliance monitoring. Its founder, Matthew Westaway, similarly stresses the need for AI to be considered a tool for augmentation rather than an outright replacement.

“Its goal should be to enhance QA team efficiency, but should always recognise AI’s limitations in capturing nuances such as regional accents,” he says.

Voyc processes around two million calls a month, and around 50,000 of those are manually checked for compliance after the system flags calls that fail compliance checks or score below a certain threshold. He says Voyc’s competitors focus on analytical tools, selling trends and word clouds to companies, but lack the flexibility Voyc offers in allowing reviewers to override AI-generated quality scores.
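
In outline, that flag-and-override workflow is straightforward. The sketch below is illustrative only – the CallReview structure, the 0.7 threshold and the check names are assumptions, not Voyc’s product or API: calls scoring below a threshold, or failing any compliance check, are routed to a human reviewer whose score takes precedence over the AI’s.

```python
# Illustrative flag-and-override sketch; names and thresholds are assumptions.
from dataclasses import dataclass, field
from typing import Optional

REVIEW_THRESHOLD = 0.7  # calls scoring below this are routed to a human reviewer

@dataclass
class CallReview:
    call_id: str
    ai_score: float                      # AI-generated quality score, 0 to 1
    failed_checks: list[str] = field(default_factory=list)
    human_score: Optional[float] = None  # reviewer's override, if given

    @property
    def needs_manual_check(self) -> bool:
        return self.ai_score < REVIEW_THRESHOLD or bool(self.failed_checks)

    @property
    def final_score(self) -> float:
        # A human override always takes precedence over the AI assessment.
        return self.human_score if self.human_score is not None else self.ai_score

calls = [CallReview("call-1", 0.92), CallReview("call-2", 0.55, ["vulnerability_missed"])]
for call in (c for c in calls if c.needs_manual_check):
    call.human_score = 0.80  # the reviewer disagrees with the AI and overrides
```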

Voyc has already secured a large chunk of new business with 20 car finance lenders and is now eyeing the dealership segment, which is fast recognising the need for trust-building and ensuring accurate communication with customers.

Both CGI and Voyc stress that AI implementation needs the buy-in of customer-facing and quality assurance teams.

CGI’s Allebrand notes that, while the quantitative benefits to businesses are evident, qualitative aspects such as employee experience should also be considered: AI initiatives risk failing if employees are not adequately trained or if stakeholder buy-in is lacking.

Voyc’s Westaway echoes that. “AI is there to support the team leader or the QA individual, and to make their work a lot more effective. It’s not to replace their work of reviewing interactions or providing feedback. It’s just there to support them.”

Ultimately, Carmoola’s Rushby believes that the extent to which AI will be useful depends on each individual business, pointing out that AI is likely to have a larger transformative effect on big legacy businesses as part of a broader digital transformation programme – albeit one that is complex and expensive to execute.

“For more agile, digital-first businesses that have been designed with a tech- and data-first approach, it’s more likely that proprietary technology – which can include AI – will be a key differentiator as soon as the business’s products and services come to market,” he says.

One thing is for sure: as the FCA moves decisively towards outcomes-based regulation, its approach will apply equally to consumer-facing technology, charged as it is with applauding the good, sanctioning the bad and most certainly outlawing the downright ugly.

This feature article appeared in AM’s Spotlight on F&I 2024. Download the Issue for free here and learn how Advantage Finance decided to implement AI-driven call monitoring two years before the introduction of Consumer Duty and how it’s reaping the rewards today.
