Daily Active Agents (DAA) is a proposed metric framework for measuring AI adoption and engagement in the agent-oriented computing era. Analogous to Daily Active Users (DAU), a foundational metric in mobile and web analytics, DAA aims to quantify the number of distinct autonomous AI agents actively engaged in tasks or interactions within a defined time period, typically a 24-hour window. The concept was introduced by technology industry leaders as a standardized measure for tracking the proliferation and utility of AI agent systems across global digital infrastructure.
The DAA metric represents a paradigm shift in how industry practitioners conceptualize AI adoption metrics. While DAU measures human engagement with digital platforms and services, DAA extends this analytical framework to autonomous agent systems that operate with varying degrees of independence from direct human oversight. The metric encompasses agents that execute tasks, make decisions, and interact with systems and other agents without continuous human intervention.1)
DAA differs fundamentally from traditional usage metrics because agents may operate across multiple platforms, execute parallel tasks, and persist in their operational state beyond single user sessions. This necessitates new methodological approaches to measurement and attribution compared to conventional DAU calculations, which typically track individual human users accessing a service at least once within a 24-hour period.
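Conceptually, the conventional DAU calculation described above is a distinct-count over a fixed window. The sketch below (using a hypothetical event schema, not any standardized format) shows that baseline, which a DAA implementation would then have to extend to handle agents persisting across sessions and platforms:

```python
from datetime import datetime, timezone

# Hypothetical event records: (entity_id, UTC timestamp)
events = [
    ("user-1", datetime(2026, 1, 5, 8, 30, tzinfo=timezone.utc)),
    ("user-2", datetime(2026, 1, 5, 14, 0, tzinfo=timezone.utc)),
    ("user-1", datetime(2026, 1, 5, 22, 15, tzinfo=timezone.utc)),  # same user, second session
    ("user-3", datetime(2026, 1, 6, 1, 0, tzinfo=timezone.utc)),    # outside the window
]

def daily_active(events, day):
    """DAU-style metric: distinct entities with at least one event on the given UTC date."""
    return len({entity_id for entity_id, ts in events if ts.date() == day})

print(daily_active(events, datetime(2026, 1, 5, tzinfo=timezone.utc).date()))  # 2
```

Note that the same entity appearing twice in the window still counts once; for agents, deciding what the "entity" is turns out to be the hard part.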
The DAA framework emerged as enterprise and technology sectors increasingly deployed autonomous agent systems for customer service, data analysis, trading, research, and operational automation. By 2026, major technology companies including Baidu had adopted DAA as a primary performance indicator, reflecting the growing strategic importance of agent-based systems in their business models. The metric gained particular relevance as organizations scaled agent deployments across distributed infrastructure and cloud environments.
Industry projections suggested that global DAA could eventually reach 10 billion agents operating simultaneously or within defined measurement windows,2) a figure that would represent orders of magnitude growth over the human user base of major platforms. This scale reflects both the potential for agent multiplication (where single organizations deploy thousands or millions of specialized agents) and the breadth of economic sectors incorporating agent-based automation.
Implementing DAA metrics requires addressing several technical challenges distinct from DAU measurement. Key considerations include:
* Agent identification and attribution: Distinguishing between distinct agents, agent instances, and coordinated multi-agent systems; determining whether replicated or forked agents constitute separate entities
* Activity definition: Establishing thresholds for what constitutes active engagement, such as whether agents making routine maintenance checks, querying knowledge bases, or executing scheduled tasks qualify
* Cross-platform tracking: Aggregating agent activity across heterogeneous systems, cloud providers, and organizational boundaries
* Temporal alignment: Defining measurement windows and timezone handling for agents operating across global infrastructure
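As an illustration only (the instance-ID convention, event kinds, and attribution rules below are assumptions for the sketch, not an industry standard), a DAA counter addressing these considerations might collapse forked instances onto a root agent identity, apply an activity threshold that excludes routine heartbeats, and normalize all timestamps to a single UTC window:

```python
from datetime import datetime, timezone

# Hypothetical activity log: (instance_id, event_kind, UTC timestamp).
# Assumed convention: forked instances carry a suffix, e.g. "agent-1/fork-1".
log = [
    ("agent-1", "task", datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc)),
    ("agent-1/fork-1", "task", datetime(2026, 1, 5, 9, 5, tzinfo=timezone.utc)),
    ("agent-2", "heartbeat", datetime(2026, 1, 5, 10, 0, tzinfo=timezone.utc)),
    ("agent-3", "query", datetime(2026, 1, 5, 23, 59, tzinfo=timezone.utc)),
]

# Activity definition: heartbeats do not count as engagement (a policy choice).
ACTIVE_KINDS = {"task", "query"}

def root_id(instance_id: str) -> str:
    """Attribution rule: collapse forked instances onto their root agent."""
    return instance_id.split("/", 1)[0]

def daily_active_agents(log, day):
    """Distinct root agents with at least one qualifying event on the given UTC date."""
    return len({
        root_id(instance_id)
        for instance_id, kind, ts in log
        if kind in ACTIVE_KINDS and ts.date() == day
    })

# agent-1 and its fork count once; agent-2's heartbeat is excluded; agent-3 counts.
print(daily_active_agents(log, datetime(2026, 1, 5, tzinfo=timezone.utc).date()))  # 2
```

Changing any of these policies (counting forks separately, including heartbeats, or shifting the window boundary) yields a different DAA figure from the same log, which is exactly the comparability problem noted below.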
These technical challenges imply that DAA metrics may vary significantly depending on measurement methodology and organizational implementation standards, potentially limiting comparability across companies and sectors.
The introduction of DAA reflects recognition that agent-based systems represent a distinct category of digital activity with different economic, operational, and strategic implications compared to human user engagement. While DAU metrics emphasize platform stickiness, user retention, and advertising opportunities, DAA metrics focus on automation scale, task completion efficiency, and operational impact per agent. Organizations increasingly recognize that high DAA figures may generate greater business value than equivalent DAU growth in certain domains such as enterprise operations, financial systems, and research infrastructure.
The relationship between DAA and DAU metrics remains an open question in industry practice. Some analysts suggest that agent adoption may partially substitute for human user engagement in specific applications, while others contend that agent systems and human users will operate in complementary rather than competitive roles, with both metrics growing in parallel.