Inside Agentic AI: A CTO Perspective on Banking Security in the Agentic Era


Since 2023, Generative AI has set the stage for a new era, but 2026 looks likely to mark the real turning point with Agentic AI, which is becoming a near-term reality for every sector built around digital services.
These autonomous systems do more than generate content; they act, interact with APIs, and make real-time decisions. Like every transformative innovation, Agentic AI promises significant benefits, but it also introduces new risks, and our role is to identify and address them. Today, we know that this technology could open entirely new avenues for fraud, and its impact will be substantial: Capgemini estimates that agentic AI could generate up to $450 billion in value for financial services by 2028.


To make sense of this shift, we turned to our CTO, Guido Ronchetti, whose experience in AI-driven fraud prevention reveals why banks, users, and anti-fraud systems must navigate this uncharted territory with both opportunity and caution.


Opportunities, Risks, and Adoption of Agentic AI

While Agentic AI offers new value to banking customers by simplifying their operations, it also introduces unprecedented risks. How might fraudsters exploit these capabilities to refine their fraudulent strategies?

I believe they will leverage these capabilities to create more sophisticated and tailored attacks. They can generate voice and image content and have victims interact with different virtual personas, increasing the credibility of the fraud. Most importantly, imagine a team of 10 fraudsters who today operate manually: each day, they might make hundreds of calls and track their conversion rates. With AI agents, the same team could deploy thousands of virtual operators to run multiple campaigns in parallel, testing and optimizing approaches in real time. This creates a scale and speed of attack that humans alone could never achieve.

What is your perception of Agentic AI adoption among banks in Europe and Italy? Are any institutions already experimenting with these services, and how close are we to seeing them operational on accounts or digital platforms?

Banking is still lagging behind. The e-commerce sector is certainly further ahead, as it is more naturally aligned with these paradigms. Banks move at a slower pace, partly due to the complexities of regulatory compliance.

The expectation is that Agentic AI will eventually become an interaction channel on par with mobile or web apps. However, this transition will take time, and banks will not be the early adopters. The real challenge lies in determining whether these tools will become a standard interaction method and if they can effectively serve all user segments. For example, we already see younger generations using ChatGPT for investment advice and other financial inquiries.

The missing link remains what we call the “last mile”: the concrete ability for users to issue direct orders to the bank via an AI assistant. While the potential is enormous, it introduces new security risks that banks must manage proactively.

How Fraud Evolves with AI Agents

What are the primary differences between traditional fraud and fraud executed via AI agents, and what new attack vectors should we anticipate?

The revolution of AI agents lies in their autonomy: they can identify and select the attack strategies with the highest probability of success. Initially, we will see current scams migrate to AI agents to automate manual workflows such as social engineering, user outreach, and the creation of credible materials. During this phase, humans will likely remain “in the loop,” supervising or approving specific actions. The future is more unpredictable: agents could devise attack strategies that humans would never even have considered.

What capabilities must an anti-fraud system have to identify and stop exploited AI agents?

An anti-fraud system must distinguish between agent and human behavior, determining whether the agent’s actions reflect the orders of a genuine user. It must analyze the agent’s activities just as it would a human user’s and correlate behavior across multiple agents to identify complex or suspicious patterns.

There is a clear shift between the pre- and post-agentic AI eras. With traditional fraud, we monitor human behavior for anomalies and errors.

In fraud supported by Agentic AI, the user no longer interacts directly with the banking service but through an intermediary. Consequently, an anti-fraud system must analyze the assistant’s actions to verify if they align with the user’s habitual and genuine behavior or if they contain suspicious signals indicating malicious exploitation.
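To make this concrete, here is a minimal illustrative sketch in Python of how such a check might be structured. All class names, fields, and thresholds below are hypothetical assumptions for the sake of the example, not a description of XTN’s product: the point is only that the system profiles the genuine user, scores the intermediary’s actions against that profile, and correlates activity across agents.

```python
from dataclasses import dataclass

# Hypothetical data structures, purely for illustration; a real system
# would learn these profiles from historical behavioral data.
@dataclass
class UserProfile:
    typical_max_amount: float      # largest routine payment for this user
    known_beneficiaries: set       # accounts the user has paid before
    usual_hours: range             # hours of day the user normally operates

@dataclass
class AgentAction:
    agent_id: str
    user_id: str
    amount: float
    beneficiary: str
    hour_of_day: int

def deviation_score(action: AgentAction, profile: UserProfile) -> float:
    """Score how far an agent-initiated action deviates from the user's
    habitual behavior. Higher means more suspicious."""
    score = 0.0
    if action.amount > profile.typical_max_amount:
        score += 0.4               # unusually large payment
    if action.beneficiary not in profile.known_beneficiaries:
        score += 0.3               # never-seen beneficiary
    if action.hour_of_day not in profile.usual_hours:
        score += 0.1               # operating at an unusual time of day
    return score

def looks_like_campaign(actions: list) -> bool:
    """Cross-agent correlation: many distinct agents funnelling money to
    the same beneficiary in a short window suggests a coordinated attack."""
    agents_per_beneficiary = {}
    for a in actions:
        agents_per_beneficiary.setdefault(a.beneficiary, set()).add(a.agent_id)
    return any(len(agents) >= 5 for agents in agents_per_beneficiary.values())
```

In a production system the features and thresholds would be learned from data rather than hard-coded, and the agent-versus-human distinction would draw on additional signals (device, timing, interaction dynamics) that are omitted here.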

From Behavior to Intent

That seems to be a key shift. Until now, online fraud detection has focused on identifying what’s wrong in user behavior. How does this change with the use of AI agents?

Currently, the focus is on human behavior: the anti-fraud system analyzes legitimate users’ interactions to learn their habits and understand how they typically use the service, enabling it to detect anomalies.

With the advent of AI agents, the focus shifts: the user may act through an intermediary, an AI assistant, that performs actions on their behalf. Consequently, the anti-fraud system must determine whether the agent’s actions truly reflect the user’s intent or hide fraudulent signals. We are seeing a natural transition from focusing on “what’s wrong with this behavior?” to “does this reflect the user’s real intent?”.

This obviously presents technical challenges, as AI agent behavior more closely resembles that of a bot than a human being. It therefore becomes crucial to prevent account takeover, not just of banking credentials, but of the user’s AI assistant credentials as well.

So, when we talk about “seeing what is real,” does it mean verifying whether the intent behind the request to the agent is genuine?

Exactly. We must detect authentic intent and genuine behavior, not only for the agent but also for the user commanding it, while ensuring a trusted context for every action.
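As a purely hypothetical sketch of what verifying intent could mean in practice (invented names and fields, not an established API or XTN’s implementation), one can imagine the bank capturing the user’s original instruction as an explicit intent record and validating every agent-submitted order against it before execution:

```python
from dataclasses import dataclass

# Hypothetical illustration: the user's instruction is captured once as an
# "intent", and every order the agent submits is validated against it.
@dataclass
class UserIntent:
    purpose: str                 # e.g. "pay electricity bill"
    max_amount: float            # upper bound the user authorized
    allowed_beneficiaries: set   # accounts the user actually meant to pay

@dataclass
class AgentOrder:
    purpose: str
    amount: float
    beneficiary: str

def matches_intent(order: AgentOrder, intent: UserIntent) -> bool:
    """Accept the agent's order only if it stays within what the user
    genuinely asked for; anything outside is blocked or escalated."""
    return (
        order.purpose == intent.purpose
        and order.amount <= intent.max_amount
        and order.beneficiary in intent.allowed_beneficiaries
    )

# Example: the user asked the assistant to pay a ~80 EUR utility bill, but a
# manipulated agent tries to send 4,000 EUR to an unknown account instead.
intent = UserIntent("pay electricity bill", 100.0, {"IT60X0542811101000000123456"})
order = AgentOrder("pay electricity bill", 4000.0, "DE89370400440532013000")
print(matches_intent(order, intent))  # False -> block or require step-up auth
```

The design choice this sketch illustrates is that intent is captured at the moment the user instructs the assistant, and every downstream action is validated against it rather than taking the agent’s requests at face value.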

Building Trust, Privacy, and Competitiveness

What should banks focus on in the coming years to build trust and remain competitive?

Banks must build an ecosystem of secure and intuitive interfaces for interacting with AI assistants. This transition is no different from the introduction of web or mobile apps; only the way users relate to the service is changing. It will require both the development of new interfaces and the upgrading of security technologies to counter threats linked to these new channels. Just as we saw with mobile apps, it will be essential to integrate the new interaction perimeter of AI agents into the existing omnichannel ecosystem, alongside web and mobile.

Beyond security, there is also the question of privacy: who will control the data that flows through these AI assistants?

This is a critical point. The entities managing the AI assistants will become massive centralizers of the data flowing through them. This is particularly relevant when discussing banking information, which classifies users based on income, spending habits, social status, and more.

The issue of data localization must also be considered, as many tools are currently available only from the United States. Therefore, technological maturity and alignment among the bank, the AI provider, and users are required to determine which data will be accessible and how it can be used.

And will it be difficult to determine liability if an AI agent makes a mistake or is manipulated?

Yes, this is a complex issue that will be subject to regulation. There will likely be shared liability between the bank and the AI assistant provider. A portion of the responsibility will certainly fall on those offering the AI service.

If you had to give advice to a manager who will soon need to consider how to protect their organization from risks related to AI agents, what would it be?

The first step is to start addressing the issue as soon as possible and to understand the new perimeter opened by these technologies. Several vendors are already moving in this direction, so it is best to start immediately.

Behavioral analysis remains the indispensable foundation for defending against new risks. The crucial aspect is recognizing that this evolutionary step is inevitable and must be managed. Doing so requires the flexibility that a solution like ours offers, allowing us to respond to the rapid evolution of these tools rather than looking for a definitive solution today that could become obsolete tomorrow.

Thank you, Guido, for this brilliant conversation. The challenge is clear: AI agents will introduce significant risks in the world of fraud, but XTN is ready to manage them. Our solutions, built on AI, generative AI, and behavioral biometrics and analysis, are designed not only to detect and respond to fraud and threats but also to evolve alongside them. Our clients trust us because we have worked hand in hand with them to manage every emerging threat, and we will be fully ready to support them as Agentic AI becomes a standard channel in the near future. With XTN, banks can navigate this evolving landscape confidently, turning emerging risks into managed threats while gaining a clear competitive advantage.

Stay tuned as we continue exploring the impact of Agentic AI on online fraud. In the coming weeks, we will be sharing more insights on this rapidly evolving topic. If you have questions or would like to discuss how these trends could affect your organization, feel free to reach out through the form below. Our team will be happy to continue the conversation.



GET IN TOUCH

Have any questions? We’d love to hear from you.
