AI has been the subject of a great deal of discussion over the past few years.
For many, AI once seemed confined to the realm of science fiction. However, the emergence of large language model (LLM) technologies, such as ChatGPT, has transformed that perception. Now, even those with limited tech experience can access AI-driven services with ease.
The advancements resulting from this evolution have been remarkable. But the increased adoption of LLMs hasn't come without challenges. Chargebacks are a case in point.
Merchants have been employing AI technologies for years to combat fraud and handle chargeback issues. The task of reviewing and responding to each chargeback claim can be overwhelming for human teams, especially given the high volume submitted by cardholders daily. By using AI, merchants can optimize chargeback management, addressing flaws in the system while safeguarding against fraudulent claims. However, there is a significant risk that bad actors could leverage this technology to automate the submission of illegitimate chargeback requests.
With the power of AI at their fingertips, scammers can expand their schemes and inundate merchants and financial institutions with fraudulent claims.
The Functionality of LLMs
Before delving deeper into the subject, it’s essential to clarify how large language models operate and how they contrast with “true” artificial intelligence.
A large language model works by scrutinizing and labeling vast collections of written material. This lets it "learn" to discern patterns and relationships within that data. For instance, an LLM can identify that the term "Declaration of Independence" often occurs in conjunction with names like "Thomas Jefferson" and the date "1776." That capability lets it answer a question about the year the Declaration was signed with "1776," and one about its author with "Thomas Jefferson."
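To make that pattern-matching idea concrete, here is a deliberately tiny sketch: a toy co-occurrence counter, not a real LLM, trained on a three-sentence corpus of our own invention. It "answers" the year question simply by picking the most frequent numeric token seen alongside the phrase.

```python
from collections import Counter
import re

# A toy corpus standing in for the vast text collections an LLM trains on.
corpus = [
    "The Declaration of Independence was signed in 1776.",
    "Thomas Jefferson wrote the Declaration of Independence.",
    "In 1776, the Declaration of Independence was adopted.",
]

def co_occurrences(phrase, documents):
    """Count the words that appear alongside `phrase` in the corpus."""
    counts = Counter()
    for doc in documents:
        if phrase.lower() in doc.lower():
            for token in re.findall(r"[A-Za-z0-9]+", doc):
                if token.lower() not in phrase.lower():
                    counts[token.lower()] += 1
    return counts

counts = co_occurrences("Declaration of Independence", corpus)

# "What year was it signed?" -> the most frequent numeric neighbor.
# ("Who wrote it?" would likewise fall out of word frequencies: "Thomas"
# and "Jefferson" co-occur with the phrase more often than other names.)
year = max((t for t in counts if t.isdigit()), key=counts.get)
print(year)  # -> 1776
```

Real LLMs learn far richer statistical relationships than raw counts, but the underlying principle is the same: the answer comes from patterns in the training text, not from understanding.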
Nonetheless, it's important to address the issue of hallucinations: cases where a large language model confidently produces false or fabricated output. These errors can stem from inadequate or flawed training data, or from a misreading of the context of a prompt.
LLMs can connect their responses to the questions asked, but they do not possess true understanding. They do not "think" in the human sense, which sets them apart from genuine artificial intelligence. This limitation can cause real complications; Air Canada, for instance, was recently ordered to honor compensation that its AI chatbot had incorrectly promised to a customer who was not actually eligible for it.
These systems can be remarkably sophisticated, and they continue to improve. That said, their tendency to produce large volumes of credible-looking (but frequently incorrect) information limits the effectiveness of LLMs in several areas.
Can LLMs Assist Scammers in Committing Fraud?
The brief answer is “yes.”
The majority of chargebacks are initiated by individual cardholders. However, a significant and evidently growing share is carried out by organized crime groups. These fraudsters prioritize quantity over everything else, and they are capable of submitting hundreds of chargeback requests each day. Even if half of those disputes are denied by banks or successfully contested by merchants, the rest can still generate considerable profit, at the expense of both merchants and financial institutions.
Criminals do not use their actual identities; instead, they create fake ones using stolen information. LLMs simplify this process by rapidly generating large quantities of convincing text, letting fraudsters correspond with banks and merchants as effortlessly as a customer chatting with an AI assistant.
While these fraudulent attempts may not be perfect, the flaws hardly matter. The emphasis is on volume: with enough submissions, some are bound to slip through undetected.
Addressing the Risk of AI-Driven Chargeback Fraud
It is entirely possible for LLMs to generate considerable amounts of well-crafted text that could facilitate fraudulent schemes. However, one should not assume that anti-fraud organizations are falling behind. In fact, the anti-fraud strategies used by leading payment processors examine many factors beyond just the text itself.
Modern detection technologies can assess thousands of indicators, no matter how insignificant they may seem, to produce an extensive threat analysis for each transaction. This applies equally to dispute claims. The aim should be to leverage a mix of machine learning and human insight to pinpoint and block both invalid claims and recurrent chargeback triggers.
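As a rough illustration of how that kind of multi-signal scoring works, here is a minimal sketch. Every signal name, weight, and threshold below is an invented placeholder; production systems learn their weights from thousands of labeled outcomes rather than hard-coding them. The key ideas it demonstrates are that text is only one signal among many, and that high-risk claims are routed to a human rather than auto-decided.

```python
from dataclasses import dataclass

@dataclass
class DisputeSignals:
    """A few illustrative indicators; real systems weigh thousands."""
    account_age_days: int        # newer accounts tend to be riskier
    disputes_last_90d: int       # prior dispute velocity
    device_seen_before: bool     # device/browser fingerprint match
    delivery_confirmed: bool     # shipper confirmed delivery
    narrative_similarity: float  # 0-1: how closely the claim text matches known templates

def risk_score(s: DisputeSignals) -> float:
    """Combine signals into a 0-1 score. These weights are arbitrary
    placeholders; real models learn them from labeled dispute outcomes."""
    score = 0.0
    score += 0.25 if s.account_age_days < 30 else 0.0
    score += min(s.disputes_last_90d, 4) * 0.10
    score += 0.0 if s.device_seen_before else 0.15
    score += 0.20 if s.delivery_confirmed else 0.0
    score += 0.30 * s.narrative_similarity
    return min(score, 1.0)

claim = DisputeSignals(account_age_days=12, disputes_last_90d=3,
                       device_seen_before=False, delivery_confirmed=True,
                       narrative_similarity=0.9)
score = risk_score(claim)

# Route high scores to a human analyst instead of auto-deciding.
print(f"risk={score:.2f}", "-> manual review" if score >= 0.6 else "-> auto-process")
```

Note the `narrative_similarity` signal: even if an LLM writes a flawless dispute narrative, near-identical claim text recurring across hundreds of accounts is itself a red flag, which is exactly why volume-based fraud leaves traces beyond the words themselves.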
With the right use of AI, merchants can:
- Determine the actual cause of chargebacks
- Achieve higher success rates in disputes
- Identify additional revenue possibilities
- Reduce processing expenses
- Decrease the total number of chargebacks
- Cut down on false positives and unwarranted declines
Even when the textual elements of a fraudulent chargeback are flawless, fraudsters still have ample opportunities to make errors. By leveraging AI to their advantage, merchants can be well-positioned to take advantage of those mistakes.