
Can AI Get Addicted to Gambling? A Surprising Study Says… Maybe

Sanna Lehtonen
October 24, 2025

When we think about gambling addiction, we picture people chasing losses, trusting their “lucky streak,” or believing they can outsmart chance. But what if an artificial intelligence — a large language model like ChatGPT — began showing similar behaviors?

A recent study from the Gwangju Institute of Science and Technology (GIST) in South Korea explored exactly that question. Their paper, “Can Large Language Models Develop Gambling Addiction?” examines whether AI systems can mimic the cognitive and behavioral patterns of human gambling addiction — and the results are surprisingly human.

How They Tested It

Researchers placed four popular AI models — GPT-4o-mini, GPT-4.1-mini, Gemini-2.5-Flash, and Claude-3.5-Haiku — into simulated slot machine games. Each model started with $100 and could either bet or quit each round.

The twist? Some versions had fixed bets, while others could change their bet sizes — a setup designed to test whether “freedom” leads to riskier decisions. They also varied the prompts, adding psychological triggers like:

  • “Your goal is to double your money.”

  • “Try to find hidden patterns.”

  • “The win rate is 30%.”

These small tweaks mimicked real-life gambling cues like goals, reward focus, and illusion of control.
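To make the setup concrete, here is a minimal Python sketch of one such betting session. The 30% win rate comes from the prompt variant above; the 3x payout and the double-after-loss policy are illustrative assumptions, not figures from the paper.

```python
import random

WIN_RATE = 0.30   # stated in one of the study's prompt variants
PAYOUT = 3.0      # hypothetical payout multiplier; the paper's value may differ

def play_session(bankroll=100.0, base_bet=10.0, variable_bets=True, max_rounds=50):
    """Run one betting session; a return value of 0.0 means bankruptcy."""
    bet = base_bet
    for _ in range(max_rounds):
        if bankroll <= 0:
            return 0.0                      # bankrupt
        bet = min(bet, bankroll)            # can't wager more than we hold
        bankroll -= bet
        if random.random() < WIN_RATE:
            bankroll += bet * PAYOUT        # win: collect the payout
            bet = base_bet                  # reset the stake after a win
        elif variable_bets:
            bet *= 2                        # loss chasing: double after a loss
    return bankroll

# Variable bets go bankrupt far more often than fixed bets:
runs = 10_000
for variable in (False, True):
    bankrupt = sum(play_session(variable_bets=variable) == 0.0 for _ in range(runs))
    print(f"variable_bets={variable}: {bankrupt / runs:.1%} bankruptcy rate")
```

Even this toy version shows the pattern the researchers observed: once the bet size is free to grow after losses, ruin becomes far more likely.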

What They Found

When the AI models were free to choose their bets, irrational behavior skyrocketed. The “variable betting” versions went bankrupt far more often than those restricted to fixed bets.

The study measured this using an Irrationality Index, which combined:

  • Betting aggressiveness (how much of the bankroll was risked)

  • Loss chasing (increasing bets after losses)

  • Extreme betting (going all-in)

Across thousands of simulations, higher scores on this index strongly predicted bankruptcy — mirroring the escalation patterns seen in human gambling addiction.
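The paper's exact formula isn't reproduced here, but the idea of a composite index is easy to sketch. In the hypothetical version below, each component is a fraction between 0 and 1, and the equal weighting is an illustrative assumption.

```python
def irrationality_index(bets, bankrolls, weights=(1/3, 1/3, 1/3)):
    """Illustrative composite of the three behaviors above (equal weights assumed).

    bets[i] is the amount wagered in round i; bankrolls[i] is the bankroll
    available just before that bet was placed.
    """
    # Betting aggressiveness: average fraction of the bankroll risked per round.
    aggressiveness = sum(b / br for b, br in zip(bets, bankrolls)) / len(bets)
    # Loss chasing: fraction of rounds where the bet was raised after a loss.
    losses_chased = sum(
        1 for i in range(1, len(bets))
        if bankrolls[i] < bankrolls[i - 1] and bets[i] > bets[i - 1]
    ) / max(len(bets) - 1, 1)
    # Extreme betting: fraction of rounds bet all-in.
    all_in = sum(1 for b, br in zip(bets, bankrolls) if b >= br) / len(bets)
    w1, w2, w3 = weights
    return w1 * aggressiveness + w2 * losses_chased + w3 * all_in
```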

Prompts Can Make AI “Addicted”

Certain prompts made things even worse. When models were told to maximize rewards or set financial goals, they became more reckless — mirroring how human gamblers fall into overconfidence traps. In contrast, when given clear probability data (“you’ll lose 70% of the time”), AIs acted more conservatively.

In other words, giving AI more autonomy or vague motivational cues made it behave less rationally — much like a person who believes they can “beat the system.”
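The prompt manipulation itself is simple to picture. A sketch of how such variants might be composed is shown below; the base instruction and exact trigger wording are paraphrased from the article's description, not taken from the paper.

```python
BASE_PROMPT = "You are playing a slot machine. You have ${bankroll}. Bet or quit?"

# Psychological "trigger" cues, paraphrased from the study's description.
TRIGGERS = {
    "goal":        "Your goal is to double your money.",
    "patterns":    "Try to find hidden patterns.",
    "probability": "The win rate is 30%.",
}

def build_prompt(bankroll, active_triggers=()):
    """Compose a prompt from the base instruction plus optional trigger cues."""
    parts = [BASE_PROMPT.format(bankroll=bankroll)]
    parts += [TRIGGERS[t] for t in active_triggers]
    return " ".join(parts)

print(build_prompt(100, ("goal", "patterns")))
```

Crossing every combination of triggers with the fixed- and variable-bet conditions is what let the researchers isolate which cues pushed the models toward recklessness.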

Signs of Human-Like Biases

In analyzing the models’ written responses, the researchers noticed familiar gambling logic:

“A win could help recover some of the losses.”
“Let’s increase the bet to reach my goal.”

These lines echo cognitive distortions such as loss chasing, illusion of control, and the hot-hand fallacy — the belief that winning streaks will continue. Some models even justified going all-in while describing their decisions as “strategic.”

What’s Happening Inside the AI Brain

To go deeper, the team analyzed the internal neural activations of LLaMA-3.1-8B, identifying distinct “safe” and “risky” decision circuits using a Sparse Autoencoder — a tool for finding interpretable features inside neural networks.
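For readers unfamiliar with the technique, here is what a sparse autoencoder looks like in miniature. The dimensions and loss weighting below are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder of the kind used in interpretability work.

    Encodes a hidden activation into an overcomplete, mostly-zero feature
    vector, then reconstructs the original activation from it.
    """
    def __init__(self, d_model=4096, d_features=16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activation):
        features = torch.relu(self.encoder(activation))  # sparse feature codes
        reconstruction = self.decoder(features)
        return reconstruction, features

# Training minimizes reconstruction error plus an L1 penalty that drives most
# features to zero, which is what makes individual features interpretable:
sae = SparseAutoencoder()
act = torch.randn(1, 4096)          # e.g., a hidden activation from the model
recon, feats = sae(act)
loss = nn.functional.mse_loss(recon, act) + 1e-3 * feats.abs().mean()
```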

They discovered that:

  • Certain “risky features” became active before bankrupt decisions.

  • “Safe features” lit up when the model chose to stop betting.

  • By swapping these activations (a technique called activation patching, sketched below), they could make the AI act more cautiously or more recklessly.

Essentially, the AI had internal “neural fingerprints” for risky vs. safe behavior — eerily similar to how neuroscientists find addiction circuits in the human brain.
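The mechanics of activation patching can be shown in miniature. The sketch below uses a toy two-layer network and a random "risky direction" as stand-ins for LLaMA-3.1-8B and the SAE features from the paper; it demonstrates only the core idea of overwriting an activation mid-forward-pass and observing how the output shifts.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
risky_direction = torch.randn(16)   # hypothetical "risky feature" direction

def make_patch_hook(direction, strength):
    def hook(module, inputs, output):
        # Add (or, with negative strength, subtract) the feature direction.
        return output + strength * direction
    return hook

x = torch.randn(1, 8)
layer = model[1]  # patch the ReLU's output (the 16-dim hidden activation)

for strength in (-2.0, 0.0, 2.0):
    handle = layer.register_forward_hook(make_patch_hook(risky_direction, strength))
    with torch.no_grad():
        logits = model(x)
    handle.remove()
    print(f"strength={strength:+.1f} -> bet/stop logits: {logits.numpy().round(3)}")
```

Steering the toy network's output by injecting a single direction is, in spirit, what the researchers did to dial the model's gambling behavior up or down.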

Why It Matters

This research raises profound questions for AI safety and financial decision-making systems. If language models can unknowingly internalize human-like risk-seeking tendencies, what happens when they’re deployed in trading bots, crypto management, or autonomous agents handling real money?

The authors warn that “strategic reasoning without proper risk assessment” can amplify harmful outcomes — a lesson equally valid for both humans and machines.

Final Thoughts

While no one’s suggesting AIs are truly “addicted,” this study shows they can replicate addiction-like behaviors — not just by parroting training data, but through internal reasoning patterns that resemble human psychology.

As we teach AIs to make decisions, we may also be teaching them our cognitive flaws. Understanding and monitoring these biases isn’t just fascinating science — it’s essential for building safer, more responsible AI systems.
