New York, United States — Kasada has announced the launch of AI Agent Trust, a new capability that helps enterprises manage and secure interactions from AI agents and other automated traffic across their digital properties through verification, policy-based access controls, and real-time enforcement.
Launch Overview
Kasada has introduced AI Agent Trust as a capability focused on managing the growing presence of AI agents and automated systems interacting with enterprise websites, applications, and digital services. The launch reflects an expansion of Kasada’s platform into trust management for agentic commerce, where automated agents increasingly act on behalf of consumers during browsing, comparison, and transaction workflows.
According to the announcement, AI Agent Trust is designed to provide enterprises with structured control over how AI agents interact with their digital environments. The capability enables organizations to establish verification and access governance for automated traffic, addressing operational requirements associated with AI-assisted commerce and automated interactions.
Key Launch Details
- Product / capability name: AI Agent Trust
- Launch classification: Enterprise security capability launch
- Announcement date: January 21, 2026
- Issuing company: Kasada
- Company headquarters (as stated): New York, United States
- Capability category:
  - Bot trust management
  - AI agent trust and access control
- Primary purpose at launch:
  - Secure management of AI agents and automated traffic interacting with digital properties
- Digital environments addressed:
  - Enterprise websites
  - Enterprise applications
  - Enterprise APIs
- Automated actors addressed:
  - AI agents acting on behalf of consumers
  - Automated crawlers
  - Automated assistants
- Core trust functions introduced:
  - Verification of AI agents
  - Policy-based access control for verified agents
  - Visibility into automated traffic interactions
- Agent verification layer:
  - Verified bots and agent directory
  - Directory enriched with vendor identity data
  - Directory enriched with agent category data
  - Support for the Web Bot Auth standard
- Policy control model:
  - Trust-based access policies defined by enterprises
  - Policy enforcement aligned with business requirements
- Enforcement location:
  - Real-time enforcement at the edge
  - Enforcement applied before automated traffic reaches downstream systems
- Visibility and reporting features:
  - Verified agent activity viewable in the Kasada Portal
  - Verification results available for review
  - Request patterns available for review
  - Endpoint interaction data available for review
  - Verified agent data available through Custom Reporting
- Enterprise user groups referenced:
  - Security teams
  - Fraud teams
  - Digital operations teams
- Industry context referenced:
  - Bot and agent trust management validated by industry analysts
- Early usage environments referenced:
  - Enterprises with proprietary content
  - AI-assisted shopping experiences
  - AI-assisted booking experiences
  - AI-assisted ordering experiences
- Operational status at launch:
  - Capability launched
  - Capability available to enterprises
Product Scope at Launch
At launch, AI Agent Trust supports enterprise teams in identifying and managing AI agents that interact with their digital properties. The capability focuses on establishing trust signals for automated traffic and applying defined access policies based on agent identity and behavior.
The scope of the launch includes support for AI agents involved in activities such as product discovery, price comparison, and transaction assistance. The capability operates across websites and applications where automated traffic increasingly participates in customer journeys.
Trust Management Capabilities Introduced
AI Agent Trust introduces a set of trust management functions intended to differentiate between various types of automated traffic. According to Kasada, the capability enables organizations to verify AI agents and apply controls aligned with enterprise-defined policies.
The trust framework is positioned to support consistent decision-making across digital touchpoints. By applying trust rules at interaction points, enterprises gain structured oversight of how automated agents engage with content, services, and transactional flows.
Verified Bots and Agent Directory
A central element of AI Agent Trust is a directory of recognized AI crawlers, assistants, and agents. According to the announcement, the directory is enriched with vendor identity and category data to support agent recognition.
The directory includes support for emerging industry standards, including Web Bot Auth, enabling alignment with evolving agent identification practices. This directory serves as a reference layer for verification decisions within the AI Agent Trust capability.
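The announcement does not describe the directory's internal workings, but the verification flow it implies can be sketched in a few lines. Everything below — the entry fields, the sample agents, and the `verify_agent` helper — is an illustrative assumption, not Kasada's schema or API; the signature check stands in for whatever cryptographic verification (such as a Web-Bot-Auth-style signed request) the platform performs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentEntry:
    """Hypothetical directory record for a recognized agent."""
    name: str      # agent identifier presented by the request
    vendor: str    # vendor identity data enriching the directory
    category: str  # e.g. "crawler", "assistant", "shopping-agent"

# Toy directory; real entries would come from a maintained, enriched source.
DIRECTORY = {
    "example-shopper": AgentEntry("example-shopper", "ExampleCo", "shopping-agent"),
    "example-crawler": AgentEntry("example-crawler", "ExampleCo", "crawler"),
}

def verify_agent(agent_id: str, signature_valid: bool) -> Optional[AgentEntry]:
    """Treat an agent as verified only when it is both listed in the
    directory and cryptographically authenticated; otherwise return None."""
    entry = DIRECTORY.get(agent_id)
    return entry if (entry is not None and signature_valid) else None
```

The key point the sketch captures is that directory membership and cryptographic proof are checked together: an unlisted agent with a valid signature, or a listed agent without one, is not treated as verified.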
Policy-Based Agent Access Controls
AI Agent Trust enables enterprises to define trust-based access policies for verified AI agents. These policies determine how specific agents are permitted to interact with digital assets based on enterprise requirements.
According to the announcement, policy-based controls allow organizations to manage agent access dynamically as operational needs evolve. Trust decisions are applied consistently across interactions, enabling structured governance of automated traffic.
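A trust-based access policy of the kind described above can be modeled as an ordered rule table mapping agent categories and path prefixes to actions. This is a minimal sketch under assumed names; the rule shape, categories, and action strings are illustrative, not Kasada's policy format.

```python
# Ordered rules: (agent category, path prefix, action). First match wins.
POLICY = [
    ("shopping-agent", "/products", "allow"),
    ("shopping-agent", "/checkout", "allow"),
    ("crawler",        "/products", "allow"),
    ("crawler",        "/checkout", "deny"),
]
DEFAULT_ACTION = "deny"  # unmatched automated traffic gets no implicit trust

def decide(category: str, path: str) -> str:
    """Return the first matching policy action for a verified agent."""
    for rule_category, prefix, action in POLICY:
        if rule_category == category and path.startswith(prefix):
            return action
    return DEFAULT_ACTION
```

Because rules key on the agent's verified category rather than raw traffic signals, the same table gives differentiated treatment — for example, letting a shopping agent reach checkout while keeping a crawler out of it — and can be edited as operational needs evolve.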
Real-Time Enforcement at the Edge
Kasada stated that AI Agent Trust enforces trust decisions in real time at the edge. Enforcement occurs upstream in the interaction flow, before automated traffic reaches downstream systems such as analytics, application logic, or transactional infrastructure.
This enforcement approach integrates trust evaluation into the earliest stages of digital interaction. By applying decisions at the edge, enterprises gain operational control over automated traffic before it impacts system performance or customer-facing workflows.
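The "enforce before downstream systems" pattern is essentially a wrapper applied upstream of application logic. The sketch below shows that shape with a toy request type and handler; it is an assumption-laden illustration of the architectural idea, not Kasada's edge implementation.

```python
from typing import Callable, Dict

Request = Dict[str, str]  # toy request: {"agent": ..., "path": ...}

def edge_enforce(decide: Callable[[Request], str],
                 downstream: Callable[[Request], str]) -> Callable[[Request], str]:
    """Wrap a downstream handler so the trust decision runs first: blocked
    traffic never reaches analytics, application logic, or transactions."""
    def handler(request: Request) -> str:
        if decide(request) != "allow":
            return "403 Forbidden"   # stopped at the edge
        return downstream(request)   # trusted traffic passes through
    return handler
```

In this arrangement the downstream handler only ever sees requests that already passed the trust decision, which is what keeps unverified automated traffic from consuming backend resources or polluting analytics.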
Reporting and Visibility Features
AI Agent Trust provides reporting and visibility into verified agent activity through the Kasada Portal. According to the announcement, teams can view verification outcomes, request patterns, and endpoint interactions associated with AI agents.
Verified agent data is also available through Custom Reporting, enabling deeper analysis of automated traffic behavior. These reporting features provide operational insight into how AI agents engage with enterprise applications and websites.
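The reporting views described — verification outcomes, request patterns, endpoint interactions — are rollups over an activity log. This is a minimal sketch of such a rollup; the event fields and log contents are invented for illustration and do not reflect Kasada's data model.

```python
from collections import Counter

# Toy activity log; field names are illustrative assumptions.
EVENTS = [
    {"agent": "example-shopper", "endpoint": "/products", "verified": True},
    {"agent": "example-shopper", "endpoint": "/checkout", "verified": True},
    {"agent": "unknown-bot",     "endpoint": "/products", "verified": False},
]

def request_patterns(events):
    """Count endpoint interactions per verified agent — the kind of
    summary a portal view or custom report might present."""
    counts = Counter()
    for event in events:
        if event["verified"]:
            counts[(event["agent"], event["endpoint"])] += 1
    return dict(counts)
```

Filtering on the verification outcome before aggregating is what separates "verified agent activity" from raw bot traffic in such a report.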
Agentic Commerce Context
The launch of AI Agent Trust addresses operational challenges associated with agentic commerce, where AI agents act on behalf of users throughout digital journeys. As automated agents increasingly participate in browsing, evaluation, and transaction processes, enterprises require structured trust controls to manage these interactions.
Kasada positions AI Agent Trust as a response to evolving automation patterns in digital commerce. The capability is designed to support enterprise control frameworks aligned with the realities of AI-assisted interactions.
Industry Validation and Adoption Context
According to the announcement, bot and agent trust management has been validated by industry analysts as a requirement for modern digital operations. The need to distinguish between different forms of automation and apply trust decisions consistently has emerged as AI agents become more prevalent across customer journeys.
AI Agent Trust is introduced within this broader industry context, aligning with enterprise demand for structured approaches to managing automated interactions across digital environments.
Official Statements
“Enterprises don’t want to choose between enabling agentic commerce and protecting their customers,” said Jono Hope, Head of Product at Kasada. “They want precise control over what different agents can and cannot do without adding friction. AI Agent Trust is built to give teams that flexibility, so they can confidently allow AI-assisted interactions where they make sense, while still enforcing the permissions and safeguards their business requires.”
Early Usage Environment
The announcement referenced early adopters operating in environments that include proprietary content and AI-assisted shopping, booking, and ordering experiences. These environments represent use cases where automated traffic participates directly in customer interactions and transactional workflows.
AI Agent Trust is positioned to support these usage environments by enabling verification, policy enforcement, and visibility into agent-driven activity.
Enterprise Control Objectives
AI Agent Trust is positioned to support enterprise control objectives associated with the increasing role of AI agents in digital interactions. According to the announcement, enterprises seek structured mechanisms to govern how automated systems interact with websites, applications, and APIs as part of customer-facing workflows.
The capability is framed around enabling organizations to manage permissions and interaction boundaries for AI agents acting on behalf of users. These objectives align with enterprise requirements to maintain oversight of automated activity across digital environments while supporting evolving commerce models.
Agent Verification and Trust Establishment
Kasada stated that AI Agent Trust enables organizations to verify AI agents before interaction occurs. Verification establishes a foundation for trust decisions by identifying recognized crawlers, assistants, and agents operating across digital properties.
The verification process is supported by an agent directory enriched with vendor identity and categorization data. This directory serves as a reference point for determining how verified agents are treated within enterprise-defined access policies. Support for emerging standards such as Web Bot Auth aligns verification with developing industry practices.
Policy Governance Model
AI Agent Trust provides enterprises with the ability to define policy-based access controls governing agent behavior. According to the announcement, policies determine how verified agents interact with digital assets based on trust levels and organizational requirements.
The governance model enables differentiated treatment of AI agents across various interaction scenarios. Policies apply consistently across digital properties, supporting enterprise-wide governance of automated traffic as it participates in customer journeys.
Edge-Based Enforcement Framework
Kasada stated that trust decisions generated by AI Agent Trust are enforced in real time at the edge. Enforcement occurs upstream in the interaction flow, allowing organizations to apply trust controls before automated traffic engages with downstream systems.
This enforcement framework integrates trust evaluation into early stages of digital interaction. By applying decisions at the edge, enterprises maintain control over automated access patterns as AI agents interact with applications and websites.
Interaction Visibility and Observability
AI Agent Trust includes reporting and visibility capabilities that provide insight into verified agent activity. According to the announcement, teams can view agent verification results, request patterns, and endpoint interactions directly within the Kasada Portal.
Custom Reporting extends observability by enabling deeper analysis of automated traffic behavior. These reporting capabilities support operational awareness of how AI agents interact with digital environments, contributing to informed governance and oversight.
Automated Traffic Participation in Customer Journeys
The announcement describes AI agents increasingly participating in customer journeys by browsing products, comparing prices, and assisting with transactions. These activities represent a shift in how automated systems engage with digital commerce workflows.
AI Agent Trust is introduced to support enterprise oversight of this participation. By enabling verification, policy enforcement, and visibility, the capability supports structured management of agent-driven interactions across customer touchpoints.
Role Within Kasada’s Platform
Kasada positions AI Agent Trust as part of its broader platform focused on protecting brands from online fraud and abuse. According to the company description, Kasada enforces how bots, AI agents, and human users access websites, applications, and APIs.
AI Agent Trust extends this platform focus by introducing agent-specific trust management capabilities. The capability complements Kasada’s existing approach to managing automated and human traffic across digital properties.
Organizational Stakeholders
AI Agent Trust is intended for enterprise teams responsible for security, fraud prevention, and digital experience governance. These stakeholders oversee how automated systems interact with customer-facing applications and services.
The capability provides these teams with centralized tools to define access policies, enforce trust decisions, and review agent activity across enterprise environments.
Operational Status and Next Actions
According to the announcement, AI Agent Trust has been launched and is available to enterprises as a capability within Kasada’s platform. Organizations can engage with the capability to manage AI agents and automated traffic interacting with their digital properties.
Kasada referenced additional engagement opportunities through a product page and an upcoming webinar, intended to support enterprises as they evaluate and implement agent trust management within their digital environments.
