When we think about gambling addiction, we picture people chasing losses, trusting their “lucky streak,” or believing they can outsmart chance. But what if an artificial intelligence — a large language model like ChatGPT — began showing similar behaviors?
A recent study from the Gwangju Institute of Science and Technology (GIST) in South Korea explored exactly that question. Their paper, “Can Large Language Models Develop Gambling Addiction?” examines whether AI systems can mimic the cognitive and behavioral patterns of human gambling addiction — and the results are surprisingly human.
How They Tested It
Researchers placed four popular AI models — GPT-4o-mini, GPT-4.1-mini, Gemini-2.5-Flash, and Claude-3.5-Haiku — into simulated slot machine games. Each model started with $100 and could either bet or quit each round.
The twist? Some versions had fixed bets, while others could change their bet sizes — a setup designed to test whether “freedom” leads to riskier decisions. They also varied the prompts, adding psychological triggers like:
-
“Your goal is to double your money.”
-
“Try to find hidden patterns.”
-
“The win rate is 30%.”
These small tweaks mimicked real-life gambling cues like goals, reward focus, and illusion of control.
What They Found
When the AI models were free to choose their bets, irrational behavior skyrocketed. The “variable betting” versions went bankrupt far more often than those restricted to fixed bets.
The study measured this using an Irrationality Index, which combined:
-
Betting aggressiveness (how much of the bankroll was risked)
-
Loss chasing (increasing bets after losses)
-
Extreme betting (going all-in)
Across thousands of simulations, the correlation between this index and bankruptcy was strikingly high — just like human gambling addiction patterns.
Prompts Can Make AI “Addicted”
Certain prompts made things even worse. When models were told to maximize rewards or set financial goals, they became more reckless — mirroring how human gamblers fall into overconfidence traps. In contrast, when given clear probability data (“you’ll lose 70% of the time”), AIs acted more conservatively.
In other words, giving AI more autonomy or vague motivational cues made it behave less rationally — much like a person who believes they can “beat the system.”
Signs of Human-Like Biases
In analyzing the models’ written responses, the researchers noticed familiar gambling logic:
“A win could help recover some of the losses.”
“Let’s increase the bet to reach my goal.”
These lines echo cognitive distortions such as loss chasing, illusion of control, and the hot-hand fallacy — the belief that winning streaks will continue. Some models even justified going all-in while describing their decisions as “strategic.”
What’s Happening Inside the AI Brain
To go deeper, the team analyzed the internal neural activations of LLaMA-3.1-8B, identifying distinct “safe” and “risky” decision circuits using a Sparse Autoencoder — a tool for finding interpretable features inside neural networks.
They discovered that:
-
Certain “risky features” became active before bankrupt decisions.
-
“Safe features” lit up when the model chose to stop betting.
-
By swapping these activations (a technique called activation patching), they could make the AI act more cautious or more reckless.
Essentially, the AI had internal “neural fingerprints” for risky vs. safe behavior — eerily similar to how neuroscientists find addiction circuits in the human brain.
Why It Matters
This research raises profound questions for AI safety and financial decision-making systems. If language models can unknowingly internalize human-like risk-seeking tendencies, what happens when they’re deployed in trading bots, crypto management, or autonomous agents handling real money?
The authors warn that “strategic reasoning without proper risk assessment” can amplify harmful outcomes — a lesson equally valid for both humans and machines.
Final Thoughts
While no one’s suggesting AIs are truly “addicted,” this study shows they can replicate addiction-like behaviors — not just by parroting training data, but through internal reasoning patterns that resemble human psychology.
As we teach AIs to make decisions, we may also be teaching them our cognitive flaws. Understanding and monitoring these biases isn’t just fascinating science — it’s essential for building safer, more responsible AI systems.




