Industry leaders say AI is key to fraud detection and player protection
Artificial intelligence is rapidly reshaping the gambling industry, but despite the hype around generative AI, the most important applications today remain rooted in predictive analytics, fraud detection and player protection.
That was the central takeaway from a panel discussion on AI at the Regulating the Game 2026 conference in Sydney, moderated by Dr. Paul Devlin – Global Lead for Betting, Gaming & Sports at Amazon Web Services.
Joining him were Bryan Jenkins – Managing Director of Angel Australasia Pty Ltd; Troy Nyi Nyi – SVP and GM APAC at SEON; Jane Lin – Acting Deputy Secretary for Hospitality and Racing at the Department of Creative Industries, Tourism, Hospitality and Sport (NSW); and Nicole Pelchen – Chief Technology Officer at Crown Resorts.
The panel explored how AI is already being deployed across casinos and online gambling platforms, and how regulators are beginning to grapple with the implications. Despite the current buzz surrounding large language models, Pelchen said most operational use cases in casinos remain firmly grounded in traditional data science.
Predictive analytics
“You’re spot on,” she told the audience, responding to Devlin’s question about generative AI hype. “While generative AI is exciting for things like internal productivity or smarter chatbots, our core focus for player safety remains predictive analytics.” At Crown, she explained, AI systems analyze vast volumes of behavioral data to detect patterns that may signal emerging gambling harm. These can include sudden increases in betting intensity, unusually long play sessions or abrupt changes in customer behavior.
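The kind of pattern detection Pelchen describes can be sketched in a few lines. The snippet below is a rough illustration only, not Crown’s actual system: it flags two of the signals she mentions, an unusually long session and a sudden spike in betting intensity relative to the player’s own baseline. The thresholds (`z_cut`, `long_session`) are hypothetical values chosen for the example.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Session:
    player_id: str
    minutes: int             # session length
    bets_per_minute: float   # betting intensity

def harm_signals(history: list[Session], latest: Session,
                 z_cut: float = 3.0, long_session: int = 240) -> list[str]:
    """Flag behavior that deviates sharply from a player's own baseline."""
    signals = []
    if latest.minutes >= long_session:
        signals.append("unusually long session")
    rates = [s.bets_per_minute for s in history]
    if len(rates) >= 5:
        mu, sd = mean(rates), stdev(rates)
        # A z-score far above the player's historical intensity is a signal.
        if sd > 0 and (latest.bets_per_minute - mu) / sd > z_cut:
            signals.append("sudden increase in betting intensity")
    return signals
```

Real deployments would use many more features and calibrated models, but the structure is the same: compare current behavior against a per-player baseline and surface named, explainable signals rather than a bare score.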
The goal is to shift the industry away from reactive compliance and toward earlier interventions. “AI is really the only way to do that effectively at scale,” Pelchen said, noting that large integrated resorts can have thousands of patrons active across multiple properties at any given time. On the online side, AI is becoming increasingly essential in the fight against fraud.
Online attacks
Nyi Nyi said the past year has seen a sharp escalation in the sophistication of attacks targeting online gambling platforms. “It’s an absolute arms race,” he said. Fraudsters are now using deepfake technology to bypass identity verification systems and deploying advanced bot networks designed to mimic human behavior. Some of these bots even simulate pauses between actions to appear more like real users.
“These systems will pause as if someone is thinking or getting a coffee,” he said. “They’re designed specifically to bypass traditional security systems.” To counter this, companies like SEON are using machine learning models that analyze digital footprints in real time, searching for microscopic behavioral anomalies that would be impossible for human analysts to detect. The objective is to stop threats such as bonus abuse, account takeovers and multi-accounting before they cause significant financial losses for operators.
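One of the behavioral anomalies Nyi Nyi alludes to is timing: scripted bots that insert fake “thinking” pauses often produce inter-action gaps that are too regular to be human. The sketch below is an illustrative heuristic, not SEON’s method; it flags a stream of event timestamps whose coefficient of variation falls below a hypothetical floor.

```python
from statistics import mean, pstdev

def looks_scripted(event_times: list[float], min_events: int = 10,
                   cv_floor: float = 0.15) -> bool:
    """Human click timing is noisy; near-constant gaps suggest automation.

    event_times: timestamps (seconds) of a user's actions, ascending.
    cv_floor: coefficient of variation below which timing is suspiciously regular
              (an illustrative threshold, not a real vendor value).
    """
    if len(event_times) < min_events:
        return False
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mu = mean(gaps)
    if mu == 0:
        return True
    return pstdev(gaps) / mu < cv_floor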
Land-based use cases
While AI is often associated with online gambling, Jenkins said the technology is increasingly transforming the physical casino floor as well. His company focuses on so-called “smart tables,” which combine overhead cameras with AI-enabled gaming chips to capture every wager and game outcome in real time. By analyzing this data, the system can detect irregularities such as statistical anomalies in betting patterns or potential dealer errors. “If the AI detects a hand outcome or betting pattern that doesn’t align with probability or the physical cards tracked, it alerts the pit boss immediately,” Jenkins explained.
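The “doesn’t align with probability” check Jenkins describes can be illustrated with a simple binomial test. This is a toy sketch, not Angel’s actual pipeline: given a known house probability for an outcome, it flags a table whose observed rate drifts more than a hypothetical number of standard errors from expectation.

```python
import math

def outcome_anomaly(observed_wins: int, hands: int,
                    expected_p: float, z_cut: float = 4.0) -> bool:
    """Binomial z-test: does a table's outcome rate deviate from the game's odds?

    z_cut is an illustrative alert threshold, not a real operator setting.
    """
    if hands == 0:
        return False
    se = math.sqrt(expected_p * (1 - expected_p) / hands)
    z = abs(observed_wins / hands - expected_p) / se
    return z > z_cut
```

A real system would also cross-check the tracked physical cards against the declared result, but even this crude statistical layer shows how a pit boss alert could be triggered automatically.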
Beyond improving efficiency, the technology is primarily aimed at protecting game integrity. “It removes a lot of the guesswork and human error that traditionally existed in table game monitoring,” he said. For regulators, however, the rapid spread of AI systems raises new questions around transparency and accountability. Lin said one of the biggest concerns for policymakers is the “black box” nature of many AI models. “Explainability is the word of the year for us,” she said.
If operators rely on AI systems to make decisions that affect players, such as restricting accounts, intervening in play or banning customers, regulators need to understand how those decisions were reached. “We can’t simply accept ‘the computer said so’ as a valid answer,” Lin said.
To address this, regulators in New South Wales are exploring frameworks that require human oversight of AI-driven decisions. Under such models, AI would function as a decision-support tool rather than a fully autonomous system. “AI should be a high-powered assistant to decision makers, not a replacement for them,” she said, particularly when outcomes could affect someone’s livelihood or their ability to participate in a legal activity.
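The decision-support model Lin describes can be expressed as a simple pattern: the AI submits a recommendation that must carry its reasons, and only a named human reviewer can enact it. The sketch below is a hypothetical illustration of that workflow, not any regulator’s prescribed framework; all class and field names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    player_id: str
    action: str            # e.g. "restrict_account"
    reasons: list[str]     # explainability: why the model flagged this
    score: float

@dataclass
class ReviewQueue:
    """AI recommends; only a named human reviewer can enact an action."""
    pending: list[Recommendation] = field(default_factory=list)
    decided: list[tuple[Recommendation, str, bool]] = field(default_factory=list)

    def submit(self, rec: Recommendation) -> None:
        if not rec.reasons:
            raise ValueError("no explanation attached: 'the computer said so' is not enough")
        self.pending.append(rec)

    def review(self, reviewer: str, approve: bool) -> bool:
        rec = self.pending.pop(0)
        self.decided.append((rec, reviewer, approve))
        return approve
```

Two design choices mirror the panel’s points: a recommendation without reasons is rejected outright (explainability), and the decision record always names the human who approved it (accountability).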
Privacy
The conversation also turned to privacy concerns, particularly as AI systems become more capable of analyzing detailed behavioral data. Pelchen acknowledged that operators must strike a careful balance between monitoring players for harm and respecting personal privacy. “At Crown we’re very clear that this data is used for duty of care,” she said.
Most behavioral data is anonymized, she explained, and only linked back to an individual when predefined thresholds indicating potential harm are reached and a staff intervention becomes necessary. From a regulatory standpoint, Lin argued that privacy protections must be integrated directly into the design of AI systems rather than applied later. “We’re advocating for ‘privacy by design’,” she said. “If you are building an AI system to monitor players, those protections should be baked into the first line of code.”
KYC
The panel also addressed the growing challenge of identity verification in an era of AI-generated content. According to Nyi Nyi, traditional document-based KYC processes are no longer sufficient on their own. “You can’t just rely on a photo of a driver’s license anymore,” he said. Instead, modern verification systems increasingly rely on layered checks including biometric “liveness” tests, device fingerprinting and behavioral analytics. AI allows these multiple signals to be analyzed simultaneously, helping operators verify identities without adding friction to the user experience.
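The layered verification Nyi Nyi describes amounts to fusing several independent signals into one decision. The sketch below is a hypothetical illustration, not any vendor’s scoring model: each layer (liveness, device fingerprint, behavioral analytics) contributes a risk score, and the blend decides between passing, stepping up verification, or rejecting.

```python
def kyc_risk(liveness: float, device: float, behaviour: float,
             weights=(0.5, 0.3, 0.2), cutoff: float = 0.6) -> tuple[float, str]:
    """Blend layered checks (each scored 0 = clean .. 1 = risky) into one decision.

    Weights and cutoff are illustrative values, not real vendor settings.
    """
    score = sum(w * s for w, s in zip(weights, (liveness, device, behaviour)))
    if score >= cutoff:
        return score, "reject"
    if score >= cutoff / 2:
        return score, "step_up"   # ask the user for extra verification
    return score, "pass"
```

The middle “step up” band is what keeps friction low for genuine users: extra checks are requested only when the combined signals are ambiguous, rather than for everyone.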
Returning to the casino floor, Jenkins was asked whether AI-powered smart tables could identify advantage players such as card counters. The technology can certainly detect deviations from standard playing strategies, he said, but whether casinos act on that information remains a business decision. “The AI definitely makes those patterns easier to see,” Jenkins said. “But how operators choose to respond to that information is ultimately their policy.”
Across both land-based and online environments, the discussion made clear that AI is already deeply embedded in modern gambling operations. However, the technology’s long-term success may depend as much on governance as on innovation. As Lin noted, regulators are not seeking to halt the adoption of AI, but to ensure it evolves in a way that is transparent, accountable and aligned with the industry’s growing focus on player protection.