Consultancy Circle

Artificial Intelligence, Investing, Commerce and the Future of Work

AI Fraud: How Visa Tackles Evolving Scams and Threats

In today’s hyper-digital payments ecosystem, artificial intelligence is not just streamlining transactions—it’s actively protecting them. Amid the exponential rise of generative AI capabilities, cybercriminals are exploiting this very technology to evolve their methods. Visa, one of the largest financial networks globally, processes nearly 260 billion transactions annually, and with that scale comes enormous vulnerability to fraud. Fortunately, the company is turning the tables by integrating its own advanced AI systems to detect, repel, and adapt to threats that are increasing in both volume and sophistication.

The AI-Fraud Arms Race

The proliferation of AI tools like ChatGPT, Stable Diffusion, and other generative language and image models has intensified the fraud landscape. According to a Forbes feature on Visa’s AI efforts, a new kind of digital chess game is emerging where fraudsters use AI to generate authentic-looking voices, emails, phishing websites, and even deepfakes. In response, Visa is stepping up its deployment of advanced machine learning (ML) and AI algorithms to detect anomalies at unprecedented scale and speed.

Visa has reported that its adaptive AI system stopped $40 billion worth of fraudulent transactions in 2023 alone. The key lies in its predictive modeling, honed by analyzing over 500 data elements per transaction—ranging from merchant behavior to time of transaction to device fingerprinting. As noted by Visa Chief Risk Officer Paul Fabara, “Speed and real-time response are critical because fraud is becoming more automated.”
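The idea of deriving hundreds of risk signals from a single transaction can be illustrated with a minimal sketch. The field names and features below are invented for illustration and are not Visa's actual schema:

```python
from datetime import datetime

def extract_features(txn: dict) -> dict:
    """Derive a handful of illustrative risk features from one raw
    transaction record (production systems derive hundreds)."""
    ts = datetime.fromisoformat(txn["timestamp"])
    return {
        "amount": txn["amount"],
        "is_night": ts.hour < 6 or ts.hour >= 23,            # unusual hour
        "new_device": txn["device_id"] not in txn["known_devices"],
        "merchant_risk": txn.get("merchant_chargeback_rate", 0.0),
    }

features = extract_features({
    "amount": 129.99,
    "timestamp": "2024-05-01T02:14:00",
    "device_id": "dev-9f2",
    "known_devices": {"dev-1a7"},
    "merchant_chargeback_rate": 0.02,
})
print(features)
```

Each derived feature becomes one input to the predictive models described above; the real systems combine far more signals, including behavioral and network-level ones.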

This situation underscores a broader societal concern: as generative AI becomes more accessible, tools like ElevenLabs’ voice-cloning service and OpenAI’s Sora have already been abused by scammers to impersonate loved ones or business officials. The Federal Trade Commission (FTC) has issued several warnings through its news releases, underscoring the urgent challenges facing AI-era financial security.

Visa’s Layered AI Strategy

Visa’s fraud detection engine doesn’t rely on a single model; it uses a layered architecture. Its proprietary AI stack combines supervised machine learning, unsupervised anomaly detection, and real-time scoring APIs across platforms. These models process over 1,000 risk attributes per transaction, identifying suspicious patterns before authorization rather than in post-transaction review.
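The layering concept, in which independent detectors produce scores that are fused into one pre-authorization decision, can be sketched as follows. The rules, thresholds, and fusion weights here are invented for illustration, not Visa's:

```python
def rule_layer(txn: dict) -> float:
    """Supervised-style layer: a rule distilled from labeled past fraud."""
    return 1.0 if txn["amount"] > 5000 and txn["new_device"] else 0.0

def anomaly_layer(txn: dict, history: list[float]) -> float:
    """Unsupervised layer: how far the amount deviates from the
    cardholder's own spending history (z-score, squashed to [0, 1])."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = var ** 0.5 or 1.0
    z = abs(txn["amount"] - mean) / std
    return min(z / 4.0, 1.0)

def combined_score(txn: dict, history: list[float],
                   w_rule: float = 0.6, w_anom: float = 0.4) -> float:
    """Fuse the layer outputs into one risk score before authorization."""
    return w_rule * rule_layer(txn) + w_anom * anomaly_layer(txn, history)

history = [42.0, 38.5, 55.0, 47.2]
risky = {"amount": 7200.0, "new_device": True}
print(round(combined_score(risky, history), 3))
```

The value of the layered design is that each layer catches what the others miss: rules encode known fraud patterns, while the anomaly layer flags behavior that no rule anticipated.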

One of its flagship tools is Visa Advanced Authorization (VAA), which uses near-real-time machine learning to analyze transactions for potential fraud. According to Visa’s technical overview, the platform evaluates 100% of VisaNet transactions worldwide, reaching roughly 65,000 queries per second at peak.

To support these AI operations, Visa maintains vast computational resources in its global data processing centers. These centers run AI accelerators and GPUs, many based on NVIDIA A100 chips and now beginning to integrate H100 Tensor Core parts, to churn through terabytes of streaming transaction data every hour. Visa also works with partners in the AI community such as Kaggle and NVIDIA through internal skunkworks teams on fraud-simulation modeling (Kaggle Blog, NVIDIA Blog).

Enhancing Models with Generative AI

Interestingly, Visa isn’t only using AI for detection—it’s now experimenting with generative AI to create synthetic fraud scenarios during model training. By doing so, the system learns to deal with the kind of advanced phishing, vishing, and synthetic identity fraud that may not yet exist in current datasets. This proactive model evolution is critical since fraud typologies evolve faster than most institutions can adapt using traditional rule-based fraud engines.
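Augmenting training data with synthetic fraud can be sketched as below: generating plausible fraudulent records that combine exaggerated risk traits rarely seen together in legitimate history. This is a toy illustration of the technique, not Visa's generator:

```python
import random

def synth_fraud_txn(rng: random.Random) -> dict:
    """Generate one synthetic fraudulent transaction: night-time hour,
    high amount, unseen device -- traits that rarely co-occur in
    legitimate historical data, so models must learn them synthetically."""
    return {
        "amount": round(rng.uniform(2000, 15000), 2),
        "hour": rng.choice([1, 2, 3, 4]),   # unusual, night-time hours
        "new_device": True,
        "label": 1,                          # labeled as fraud for training
    }

rng = random.Random(7)                       # seeded for reproducibility
synthetic = [synth_fraud_txn(rng) for _ in range(1000)]
print(len(synthetic), synthetic[0])
```

Mixing records like these into the training set lets a classifier learn fraud typologies before they appear in the wild, which is the proactive shift the paragraph above describes. Real systems would use generative models rather than hand-written distributions.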

This evolution mirrors the larger shift in AI R&D from reactive to proactive approaches. Initiatives at DeepMind and OpenAI highlight similar trends. For example, OpenAI researchers have described on the company’s blog how GPT-4 can be fine-tuned through reinforcement learning from human feedback (RLHF) to handle real-world edge cases, an approach Visa has begun to experiment with in its simulations.

Cost and Infrastructure: A Financial Perspective

Combating AI-driven fraud doesn’t come cheap. Visa spends hundreds of millions of dollars annually on fraud prevention technologies, with AI and machine learning constituting the largest share. The stakes keep rising as the per-transaction cost of fraud detection climbs in step with the sophistication of fraudsters’ tooling. According to Investopedia, AI compliance and fraud prevention costs now consume 2–4% of annual revenue at major financial institutions.

Demand for compute capacity has grown exponentially. Visa’s migration toward hybrid-cloud infrastructure integrates services from Google Cloud with internal AI clusters equipped with tens of thousands of ASICs and GPUs. The shift is timely, given the global chip scarcity and rising acquisition costs reported by MarketWatch and CNBC Markets. GPUs critical to generative AI, such as NVIDIA’s H100 and AMD’s MI300, face continuing shortages and supply-chain pressures, driving up operational expenditure for AI workloads.

Component                     Estimated Annual Cost (2024)   Purpose
AI Model Training             $250M+                         Predictive fraud detection, transaction scoring
Compute and GPUs              $120M+                         Model inference, real-time monitoring
Synthetic Fraud Simulations   $35M                           Model training with generative AI

This cost structure, while significant, reflects a growing financial consensus that prevention is more efficient than recovery. McKinsey Global Institute highlights how AI risk management could prevent $1.5 trillion in economic losses globally by 2030.

Training the Human Element Alongside AI

While AI detection models are increasingly autonomous, Visa also recognizes the crucial role of human analysts. Through the Visa Fraud Disruption (VFD) team, it operates a hybrid, “human-in-the-loop” fraud response model. The team receives cases prioritized by the AI systems, investigates suspected organized criminal networks, and feeds its findings back into new model iterations.
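A human-in-the-loop queue of this kind can be sketched with a priority heap: the model ranks flagged transactions, analysts review the riskiest first, and their verdicts become labels for the next training round. The structure below is an assumed illustration, not Visa's internal design:

```python
import heapq

def enqueue(queue: list, txn_id: str, model_score: float) -> None:
    # Store negative scores so the highest-risk case pops first.
    heapq.heappush(queue, (-model_score, txn_id))

def review_next(queue: list, labels: dict, analyst_verdict: int) -> str:
    """Analyst reviews the top-ranked case; the verdict is kept as a
    training label for the next model iteration."""
    _, txn_id = heapq.heappop(queue)
    labels[txn_id] = analyst_verdict
    return txn_id

queue, labels = [], {}
enqueue(queue, "txn-a", 0.41)
enqueue(queue, "txn-b", 0.97)   # riskiest: reviewed first
enqueue(queue, "txn-c", 0.65)
first = review_next(queue, labels, analyst_verdict=1)
print(first, labels)
```

The point of the loop is that scarce analyst attention goes to the cases the model is most alarmed about, and every human verdict improves the model that did the ranking.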

They also regularly collaborate with law enforcement and other financial institutions to improve knowledge bases on fraud trends. Deloitte and Accenture’s “Future of Work” research found that companies leading in fraud AI also prioritize staff training and cybersecurity awareness—not AI replacement. Furthermore, with the rise of hybrid and remote workforces, companies like Visa are investing more into digital workforce resilience (Deloitte, Accenture).

How Consumers Can Protect Themselves in an AI-Powered World

While Visa’s defenses are formidable, users must contribute to fraud resistance. Reports from the Pew Research Center and Gallup’s Workplace Studies confirm a growing awareness gap between consumers and the technology protecting them.

Visa and other networks urge cardholders to adopt multi-factor authentication, monitor real-time alerts, and stay vigilant for scams using voice cloning or impersonation. Generative AI lets attackers craft convincing “family emergency” calls or urgent fake CEO emails in seconds rather than days. Accordingly, users should never trust high-pressure communication demanding money or personal details.

Financial platforms are also pushing digital literacy around privacy and encryption. Educational campaigns tied to tools like VisaNet and VAA help demystify fraud prevention architecture for users, not just CTOs. That bridging of the human-AI divide is key to confidently navigating the future of secure payments.

by Alphonse G

Based on this original article: https://www.forbes.com/sites/meganpoinski/2025/03/30/inside-the-ai-arms-race-between-fraudsters-and-visa/

APA References

  • Poinski, M. (2025, March 30). Inside the AI arms race between fraudsters and Visa. Forbes. https://www.forbes.com/sites/meganpoinski/2025/03/30/inside-the-ai-arms-race-between-fraudsters-and-visa/
  • OpenAI. (2024). Blog. https://openai.com/blog/
  • DeepMind. (2024). Blog posts. https://www.deepmind.com/blog
  • MIT Technology Review. (2024). Artificial Intelligence section. https://www.technologyreview.com/topic/artificial-intelligence/
  • NVIDIA. (2024). Blog. https://blogs.nvidia.com/
  • Kaggle. (2024). Blog. https://www.kaggle.com/blog
  • VentureBeat AI. (2024). https://venturebeat.com/category/ai/
  • FTC. (2024). News releases. https://www.ftc.gov/news-events/news/press-releases
  • McKinsey Global Institute. (2024). https://www.mckinsey.com/mgi
  • Deloitte Insights. (2024). https://www2.deloitte.com/global/en/insights/topics/future-of-work.html
  • Accenture Future Workforce. (2024). https://www.accenture.com/us-en/insights/future-workforce
  • Pew Research Center. (2024). https://www.pewresearch.org/
  • Investopedia. (2024). https://www.investopedia.com/
  • CNBC Markets. (2024). https://www.cnbc.com/markets/
  • MarketWatch. (2024). https://www.marketwatch.com/
  • Gallup Workplace. (2024). https://www.gallup.com/workplace

Note that some references may no longer be available at the time of your reading due to page moves or expirations of source articles.