Agentic commerce isn’t even the future; it’s the now. According to Salesforce, 39% of consumers and over half of Gen Z already use AI for product discovery. Morgan Stanley data shows that 23% of Americans bought something using AI in the past month, and Adobe reported an 805% year-over-year increase in AI traffic to US retail sites on Black Friday 2025.
Building on numbers like these, McKinsey predicts we’ll see $5 trillion in global agentic commerce volume by 2030, with AI-generated product recommendations converting at 4.4x the rate of traditional search.
There are plenty more statistics showing that agentic commerce is rapidly becoming the norm, but it’s the security solutions supporting it that we’re more interested in.
From human-to-agent trust exploitation to synthetic-identity risk, what are the security solutions we’re going to rely so heavily on to tackle the inherent risks of AI? Read on to find out.
What is Agentic Commerce?
Agentic commerce is online shopping powered by autonomous AI agents that essentially act on a customer’s behalf. An AI system can now effectively and efficiently:
- Discover products
- Compare options
- Check availability
- Complete purchases on websites, apps, and APIs
And they do it all based on user intent. With agentic AI, the traditional multi-step funnel collapses, and customers can simply ask AI for the best product, and AI will find it for them. McKinsey calls this a “seismic shift.”
Security Challenges in Agentic Commerce
AI agents bring serious security and fraud risks. By design, they automate flows without manual navigation, which means fraudsters can use the same capabilities to commit fraud and abuse the system at scale.
Visibility gaps are a major challenge. Many merchants can’t reliably identify which agents are accessing their sites, what actions they take, or why, making governance and monitoring extremely difficult.
Perhaps a bigger issue is that AI agents can mimic legitimate customers almost perfectly, for example, using stolen credentials to log in (agent-assisted account takeover) or running card-testing and checkout loops to commit payment fraud or large-scale scraping.
DataDome warns that agent-mediated attacks, such as fake accounts, scalping, credential stuffing, card testing, etc., can mimic legitimate flows at speed and scale.
Visa’s security research echoes this. They found that criminals are rapidly reshaping the cyber threat landscape using AI and automating scams. They’re even building entire fake merchant networks for exploitation.
Traditional bot defenses are inadequate against AI-driven bots that adapt their behavior in real time and blend into human traffic. As DataDome notes, “Traditional bot defenses focus on static fingerprints and simple automation patterns.” Agentic interactions, by contrast, blend human initiation with AI execution across multi-step web, mobile, and API journeys, which those defenses were never designed to follow.
For example, attackers can easily spoof user-agent strings to masquerade as trusted crawlers such as ChatGPT, and sophisticated AI scrapers ignore robots.txt directives entirely.
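Because a user-agent string is just self-declared text, one common countermeasure is to cross-check the client IP against the ranges that major crawler operators publish for their agents. The sketch below illustrates that idea; the agent names are real-looking but the CIDR ranges are documentation-only placeholders, not any vendor’s actual published list:

```python
import ipaddress

# Placeholder CIDR ranges per claimed agent. In production these would be
# refreshed from each vendor's published crawler IP list.
CLAIMED_AGENT_RANGES = {
    "ChatGPT-User": ["203.0.113.0/24"],   # TEST-NET-3, illustrative only
    "Googlebot":    ["198.51.100.0/24"],  # TEST-NET-2, illustrative only
}

def verify_claimed_agent(user_agent: str, client_ip: str) -> bool:
    """Return True only if the claimed crawler's IP falls in its known ranges."""
    for agent, cidrs in CLAIMED_AGENT_RANGES.items():
        if agent in user_agent:
            ip = ipaddress.ip_address(client_ip)
            return any(ip in ipaddress.ip_network(c) for c in cidrs)
    return False  # unrecognized claims get no crawler privileges

# A spoofed ChatGPT user-agent from an unrelated IP fails the check.
print(verify_claimed_agent("Mozilla/5.0 (compatible; ChatGPT-User)", "203.0.113.7"))
print(verify_claimed_agent("Mozilla/5.0 (compatible; ChatGPT-User)", "192.0.2.55"))
```

IP allowlisting alone is brittle (ranges change, and proxies complicate attribution), which is why the cryptographic approaches discussed later in this piece matter.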
The result is an unprecedented volume of malicious automation. Akamai reported a 300% surge in AI-powered bot traffic over the past year, with over 25 billion AI-bot requests hitting commerce sites in just two months. Visa also noted a 25% global rise in bot-initiated transactions (40% in the US) over just six months as agentic commerce accelerated.
The scale problem is already visible in real-world data. The Future of Search and Discovery playbook found that AI bot traffic grew 4.5x in 2025 alone, with automated requests now exceeding human browsing behavior, distorting analytics, inflating impressions, and obscuring genuine customer intent.
Security Solutions
Companies need next-generation bot and fraud defenses. Key approaches include:
EMBED: PayPal’s Strategy to Stop AI-Powered Bots & Reduce Fraud
Intent-based bot management
Modern platforms can analyze session behavior, like mouse movements and click timing, instead of static fingerprints. By doing that, they can distinguish human-like agents from simple bots.
As DataDome explains, merchant systems need “intent-based detection” that evaluates what an agent is doing, not just its static attributes. This involves mapping agent sessions, examining their routes and outcomes (allow, block, rate-limit, or monetize), and separating legitimate agent behaviors from automated attacks in milliseconds.
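To make the idea concrete, here is a toy risk-scoring sketch in the spirit of intent-based detection. It scores a session on behavioral features (timing regularity, request rate, checkout pressure) rather than on static attributes like a user-agent string. The feature set, thresholds, and weights are all illustrative assumptions, not any vendor’s actual model:

```python
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class Session:
    request_intervals_ms: list  # gaps between successive requests
    pages_visited: int
    checkout_attempts: int

def score_session(s: Session) -> float:
    """Toy risk score: higher means more bot-like. Weights are illustrative."""
    risk = 0.0
    # Machine traffic tends to have near-constant inter-request timing.
    if len(s.request_intervals_ms) > 1 and pstdev(s.request_intervals_ms) < 10:
        risk += 0.5
    # Sub-100ms average gaps are faster than any human browsing session.
    avg = sum(s.request_intervals_ms) / len(s.request_intervals_ms)
    if avg < 100:
        risk += 0.3
    # Many checkout attempts across few pages suggests card testing.
    if s.checkout_attempts > 3 and s.pages_visited < 5:
        risk += 0.2
    return risk

human = Session([850, 1220, 640, 2100], pages_visited=8, checkout_attempts=1)
bot   = Session([50, 52, 51, 49],       pages_visited=2, checkout_attempts=6)
print(score_session(human), score_session(bot))
```

A real engine would evaluate hundreds of such signals per request and feed them into trained models rather than hand-set thresholds, but the principle is the same: judge what the session is doing, not what it claims to be.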
Solutions like DataDome’s Bot Protect and Human Security’s agentic-trust product use AI models to spot subtle anomalies. For example, DataDome reports an AI engine that processes 5 trillion signals daily to detect malicious intent in under 2 milliseconds with a false-positive rate below 0.01%.
Bot Authentication and Trusted Agent Protocols
A massive challenge is trusting the identity of an AI agent. Standard web traffic has no built-in agent ID; any bot can claim to be a known agent. New protocols address this gap. For example, the emerging IETF Web Bot Auth standard allows AI agents to attach cryptographic signatures to each HTTP request, proving their identity.
DataDome’s Agent Trust feature supports this approach: it verifies each agent’s signature in real time, for example from platforms like Amazon Bedrock AgentCore, letting verified agents through and blocking impostors.
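The actual Web Bot Auth work builds on HTTP Message Signatures with asymmetric keys, so a verifier never holds the agent’s signing key. As a dependency-free sketch of the sign-and-verify flow only, here is a simplified HMAC version; the symmetric key and header names are assumptions for illustration, not what the draft specifies:

```python
import hashlib
import hmac

def sign_request(method: str, path: str, agent_id: str, key: bytes) -> dict:
    """Attach headers with a signature covering method, path, and agent ID."""
    msg = f"{method} {path} {agent_id}".encode()
    sig = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return {"Signature-Agent": agent_id, "Signature": sig}

def verify_request(method: str, path: str, headers: dict, key_directory: dict) -> bool:
    """Recompute the signature for the claimed agent and compare."""
    agent_id = headers.get("Signature-Agent")
    key = key_directory.get(agent_id)
    if key is None:
        return False  # unknown agent: no verifiable identity
    msg = f"{method} {path} {agent_id}".encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers.get("Signature", ""))

keys = {"example-shopping-agent": b"shared-secret"}  # hypothetical key registry
h = sign_request("GET", "/products/123", "example-shopping-agent",
                 keys["example-shopping-agent"])
print(verify_request("GET", "/products/123", h, keys))  # valid signature
h["Signature-Agent"] = "impostor-agent"
print(verify_request("GET", "/products/123", h, keys))  # claim no longer verifies
```

The point of the sketch is the flow: identity travels with every request as a signature over the request itself, so an impostor cannot simply rename itself the way it can rewrite a user-agent string.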
Account and Transaction Security
Agentic commerce still relies on real customer accounts and payments, so traditional fraud tools must evolve. Solutions enforce strong user authentication, like multi-factor login, and continuous identity checks for any agent-initiated action.
One of the leading vendors for account and transaction security is DataDome, whose Account Protect product lets businesses vet every account creation and login.
DataDome reports up to a 99% reduction in account takeovers and millions saved annually in fraudulent charges and disputes.
Comprehensive Fraud Monitoring
Security has to go beyond individual checks: security stacks must link bot signals with fraud analytics.
Leading vendors now integrate bot management with fraud teams. For example, DataDome calls its full offering a “Cyberfraud Protection Platform” that unifies bot, account, API, and DDoS defenses. It provides dashboards for SOC and fraud operations to correlate agent traffic with other indicators.
Agentic commerce is evolving quickly, and fortunately some incredibly advanced security solutions are evolving alongside it. Still, the potential scale of AI-driven fraud in agentic commerce is worrying, and it will be interesting to watch how these defenses adapt as the threat does.