AI Strategy, Cybersecurity, Compliance Automation & Microsoft 365 Managed IT for Security-First Financial Institutions | ABT Blog

The AI Agent Identity Crisis: Machine IDs Will Outnumber Humans by 2027

Written by Justin Kirsch | Tue, Feb 03, 2026

Machine identities already outnumber human identities 82-to-1 in the average enterprise. By 2027, that ratio will climb past 100-to-1 as agentic AI, automated workflows, and cloud-native applications generate new service accounts, API keys, and bot credentials faster than any IAM team can track them. Financial institutions built their identity programs for people. That foundation is cracking under the weight of machines.

This is not a theoretical problem. Half of organizations surveyed by CyberArk in 2025 reported security breaches tied directly to compromised machine identities. In financial services, where a single compromised service account can expose millions of member records, the stakes are higher than in any other industry. The institutions that solve machine identity governance now will avoid the examiner findings, audit failures, and breach costs that are coming for everyone else.

The Identity Problem Nobody Planned For

Every AI agent needs an identity. Every automated workflow needs credentials. Every API integration needs a key. Every scheduled task needs a service account. Every cloud resource needs a workload identity.

When your institution deployed its first IAM platform, the identity universe was small and human-shaped: employees who logged in at 8 AM and logged out at 5 PM, contractors with time-bound access, and vendors with VPN tokens. The IAM program handled onboarding, offboarding, periodic access reviews, and password rotation. It worked because the population was manageable and the behavior was predictable.

That model assumed identities would grow at the pace of hiring. Instead, they grew at the pace of automation. A single robotic process automation (RPA) deployment can create dozens of service accounts. A single cloud migration can generate hundreds of workload identities. A single AI agent deployment can spawn new credentials with every task it executes.

Financial institutions are now managing identity populations that dwarf their employee count. A mid-size bank with 500 employees might have 15,000 to 40,000 machine identities running across its environment. Most of those identities were created by someone who has since changed roles or left the organization. Many have standing privileges that were granted for a one-time project and never revoked.

82:1
Machine identities now outnumber human identities 82-to-1 in the average enterprise, up from 45-to-1 just one year ago
Source: CyberArk 2025 State of Machine Identity Security Report

Don't Deploy AI Without Governance

Financial institutions rushing to deploy Copilot without assessing readiness are building on unstable foundations. See where your tenant stands before you flip the switch.

By the Numbers: The Machine Identity Explosion

The growth is not gradual. CyberArk's 2025 State of Machine Identity Security Report, based on a survey of 1,200 security leaders across six countries, found that 79% of organizations anticipate machine identities will spike by as much as 150% over the next 12 months. AI adoption is the primary driver, with AI expected to create the greatest volume of new identities with privileged and sensitive access in 2025 and 2026.

The types of machine identities multiplying across financial institution environments include:

  • Service accounts: Automated processes running scheduled tasks, batch jobs, and system integrations. These accounts often have broad privileges and rarely get reviewed.
  • API keys: Credentials that connect applications, payment processors, core banking systems, and third-party services. A typical bank has hundreds of active API keys.
  • Bot identities: RPA bots handling loan processing, account reconciliation, and compliance reporting. Each bot operates under its own identity.
  • Workload identities: Cloud resources in Azure, AWS, or hybrid environments that authenticate to other services using managed identities or certificates.
  • AI agent identities: The newest category. Autonomous AI agents that make decisions, call APIs, provision resources, and interact with other systems on behalf of the organization.
  • Certificate-based identities: TLS/SSL certificates, code-signing certificates, and SSH keys that authenticate machines to other machines without human involvement.

Each category has its own lifecycle, its own rotation requirements, and its own risk profile. Most financial institutions manage them in silos, if they manage them at all. CyberArk found that 70% of respondents identified identity silos as a root cause of their cybersecurity risk.

Why This Matters Right Now

In October 2024, CyberArk completed its acquisition of Venafi, the leading machine identity management platform, for $1.54 billion. That price tag reflects how seriously the market takes machine identity risk. Microsoft responded in January 2026 by launching Entra Agent ID, a purpose-built identity and governance system for AI agents. The race to solve machine identity governance is accelerating because the attack surface is growing faster than defenses.

Machine identities are growing exponentially while human identities grow linearly. The ratio has jumped from 45:1 to 82:1 in just one year.

Why Traditional IAM Fails for Machine Identities

Traditional IAM was built around assumptions that do not hold for machines:

Assumption 1: Identities have managers. Human identities belong to employees who report to someone. When an employee transfers or leaves, their manager triggers the offboarding process. Machine identities have no manager. The developer who created a service account three years ago may have left the company. The account persists, running with the same privileges it was granted on day one.

Assumption 2: Access reviews happen periodically. Annual or quarterly access certifications work when reviewers can look at a list of 50 employees and confirm each one still needs access. When the review list includes 15,000 service accounts, reviewers rubber-stamp approvals because they cannot evaluate each one. The Non-Human Identity Management Group found that 91% of former employee tokens remain active after departure.

Assumption 3: MFA adds a second layer. Multi-factor authentication stops credential theft for human accounts. Machine identities do not respond to MFA prompts. They authenticate with static credentials, certificates, or tokens that, if compromised, provide full access with no second factor to stop an attacker.

Assumption 4: Password rotation is sufficient. Human password policies enforce rotation every 90 days. Many service accounts have passwords or API keys that have not been rotated in years because rotating them risks breaking the application that depends on them. Teams avoid the risk of downtime, and the credential ages indefinitely.

The result is an environment where machine identities operate with more privilege, less oversight, and longer credential lifespans than any human user in the organization. And nearly half of those machine identities have sensitive or privileged access, according to CyberArk's data.

50%
of organizations have experienced security breaches linked to compromised machine identities, with API keys and TLS certificates as the primary threat vectors
Source: CyberArk 2025 State of Machine Identity Security Report

The Financial Services Attack Surface

Financial institutions face machine identity risks that are both broader and more consequential than those in other industries. The regulatory environment demands traceability. The data is high-value. And the interconnections between systems create lateral movement paths that attackers exploit.

Specific risk scenarios playing out in financial services right now:

Compromised service accounts for data exfiltration. Attackers who gain access to a service account with database read permissions can extract member data without triggering the behavioral anomaly alerts designed for human access patterns. Service accounts running at 3 AM look normal. A human account running the same query at 3 AM would flag immediately.

API keys leaked in code repositories. Development teams commit API keys to Git repositories, configuration files, and deployment scripts. In financial services, those keys often connect to core banking systems, payment processors, and shadow AI tools that bypass governance entirely. A single leaked key can provide an attacker with the same access as the application it was built for.
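Leaked keys of this kind are usually findable with a lightweight pattern scan before an attacker finds them. Below is a minimal sketch in Python; the regex rules are illustrative only (dedicated scanners such as gitleaks or truffleHog ship far larger, battle-tested rule sets):

```python
import re
from pathlib import Path

# Illustrative patterns only; a production scanner needs a much larger rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str, source: str = "<memory>") -> list[dict]:
    """Return one finding per pattern match, with line numbers for triage."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append({"rule": name, "source": source, "line": lineno})
    return findings

def scan_repo(root: str) -> list[dict]:
    """Walk a checked-out repository and scan every readable file."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                findings.extend(scan_text(path.read_text(errors="ignore"), str(path)))
            except OSError:
                continue  # unreadable file: skip rather than abort the scan
    return findings
```

Running a scan like this in the CI pipeline, before commits reach a shared repository, closes the window during which a hardcoded key is exposed.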

Bot identities with excessive privileges. RPA bots deployed for loan processing or compliance reporting are often granted broad access to complete their tasks. That access is rarely scoped to least privilege because restricting it might break the workflow. The bot then runs with admin-level access to systems it only needs to read from.

Orphaned accounts from vendor integrations. Financial institutions integrate with dozens of vendors: loan origination systems, credit bureaus, payment networks, compliance platforms. Each integration creates service accounts. When the vendor relationship ends or the integration is replaced, the accounts persist. They become orphaned identities with active credentials and no one monitoring them.

The Non-Human Identity Management Group documented multiple real-world breaches where NHI compromise was the initial attack vector, including incidents involving OAuth token hijacking, leaked service account credentials, and compromised CI/CD pipeline secrets. In financial services, the OWASP Top 10 for Agentic AI specifically calls out agent identity compromise as a primary risk.

"Machine identities have become the primary source of privilege misuse, with their growth showing no sign of slowing. Nearly half have sensitive or privileged access, and most operate unnoticed and unmonitored."

CyberArk 2025 Identity Security Landscape Report

A purpose-built governance framework for machine identities requires six components: inventory, classification, ownership, lifecycle management, monitoring, and regular access reviews.

Agentic AI Makes It Worse

If service account sprawl was the first wave and API key proliferation was the second, agentic AI is the third wave. And it moves faster than either of its predecessors.

Agentic AI agents do not just consume identities. They create them. An AI agent that provisions cloud resources generates new workload identities with each action. An agent that integrates with third-party services exchanges credentials automatically. An agent that orchestrates multi-step workflows may spawn sub-agents, each with its own identity and its own access scope.

The Cloud Security Alliance surveyed security leaders in late 2025 through a study commissioned by Strata Identity and found that only 18% are highly confident their current IAM systems can effectively manage agent identities. The remaining 82% are either uncertain or know their systems cannot handle it. Teams continue to share human credentials and access tokens with AI agents because they have no alternative identity framework for autonomous systems.

This creates a governance gap that is already causing real problems:

  • Dynamic privilege escalation: An AI agent that starts with read-only access may need write access to complete a task, and some agent frameworks grant that escalation automatically without human approval.
  • Cross-system credential sharing: Agents that span multiple platforms carry credentials across trust boundaries, creating lateral movement paths that traditional network segmentation was designed to prevent.
  • Identity persistence: Agent identities created for a specific task often persist after the task is complete. Without lifecycle management, these become the same orphaned accounts that plague service account governance today.
  • Attribution gaps: When an AI agent takes an action using shared credentials, audit logs show the credential but not the specific agent or decision that triggered the action. This makes regulatory-grade audit trails nearly impossible to produce.

Microsoft recognized this gap in January 2026 when it launched Entra Agent ID, a purpose-built identity registration and governance system for AI agents within the Entra ecosystem. CyberArk's data underscores how far behind most organizations are: 68% lack identity security controls for AI entirely. The tools are emerging, but adoption is trailing the threat by at least 18 months.

Why This Matters Right Now

The 2026 NHI Reality Report from the Cyber Strategy Institute warns that at least one high-profile breach in 2026 will originate from a compromised AI agent whose non-human identity was hijacked or over-privileged. Real-world events in 2025, including LangChain CVE-2025-68664, Langflow remote code execution, and the OmniGPT credential exposure, have already demonstrated that NHI misuse in AI agent frameworks can produce high-impact breaches at machine speed.

A Machine Identity Governance Framework for Financial Institutions

Solving this problem requires a purpose-built governance framework. Grafting machine identity management onto your existing human IAM program will not work. The lifecycle, the risk model, and the monitoring approach are fundamentally different.

A financial institution machine identity governance framework has six components:

1. Inventory All Non-Human Identities

You cannot govern what you cannot see. Start with a complete inventory of every service account, API key, bot identity, workload identity, certificate, and AI agent credential in your environment. Most institutions discover 3x to 5x more machine identities than they expected when they run their first comprehensive scan. Use your Microsoft 365 security audit as the starting point for Entra ID-based identities, then extend to cloud infrastructure and third-party integrations.

2. Classify by Risk

Not all machine identities carry the same risk. A service account with read-only access to a test environment is different from an API key with write access to your core banking system. Classify each identity by: data access scope (what data can it reach), privilege level (read, write, admin, delete), network scope (internal, cross-zone, internet-facing), and regulatory sensitivity (does it touch member PII, financial records, or compliance data).
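The four classification axes can be collapsed into a simple score for triage. The weights and tier cutoffs below are illustrative assumptions, not a standard; each institution would tune them to its own risk appetite:

```python
# Illustrative weights; tune to your institution's risk appetite.
PRIVILEGE_SCORES = {"read": 1, "write": 3, "delete": 4, "admin": 5}
NETWORK_SCORES = {"internal": 1, "cross_zone": 2, "internet_facing": 4}

def risk_score(data_scope: int, privilege: str, network: str, regulated: bool) -> int:
    """Combine the four classification axes into a single score.

    data_scope: rough count of sensitive data domains the identity can reach
    regulated:  True if it touches member PII, financial records, or compliance data
    """
    score = data_scope
    score += PRIVILEGE_SCORES.get(privilege, 0)
    score += NETWORK_SCORES.get(network, 0)
    if regulated:
        score *= 2  # regulatory sensitivity amplifies every other factor
    return score

def risk_tier(score: int) -> str:
    """Bucket the score into the tiers used by later review and monitoring steps."""
    if score >= 14:
        return "high"
    if score >= 7:
        return "medium"
    return "low"
```

Under this scheme, a read-only account in a test environment lands in the low tier, while an internet-facing admin credential that touches regulated data lands in the high tier, which is exactly the separation the later review steps depend on.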

3. Assign Ownership

Every machine identity gets a human owner. No exceptions. The owner is responsible for justifying the identity's continued existence, approving its access scope, and decommissioning it when no longer needed. For AI agent identities, the owner is the team that deployed the agent. For service accounts, the owner is the application team. For API keys, the owner is the integration lead.

4. Implement Lifecycle Management

Machine identities need the same lifecycle rigor as human identities: creation approval, periodic recertification, credential rotation, and decommissioning. Automate where possible. Set maximum credential lifespans (90 days for API keys, 12 months for certificates, no exceptions for "we'll break the app" excuses). Build rotation into the CI/CD pipeline so credential changes deploy automatically.
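The maximum-lifespan rule can be enforced with a daily check of each credential's last rotation date. A sketch using the 90-day and 12-month limits above; the service-account limit is an assumption, aligned with privileged human account policy:

```python
from datetime import date, timedelta

# Maximum lifespans: 90 days for API keys, 12 months for certificates.
# The service-account limit is an assumption aligned with privileged human accounts.
MAX_AGE = {
    "api_key": timedelta(days=90),
    "certificate": timedelta(days=365),
    "service_account_password": timedelta(days=90),
}

def rotation_overdue(kind: str, last_rotated: date, today: date) -> bool:
    """True if the credential has exceeded its maximum allowed lifespan."""
    limit = MAX_AGE.get(kind)
    if limit is None:
        return True  # unknown credential types fail closed: flag for review
    return (today - last_rotated) > limit
```

Failing closed on unknown credential types matters: an uncategorized credential is exactly the kind that otherwise ages indefinitely.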

5. Monitor for Anomalous Behavior

Machine identities have predictable behavior patterns: they run at specific times, access specific resources, and generate specific volumes of traffic. When a service account that normally reads 100 records per day suddenly reads 100,000, that deviation should trigger an alert. Build behavioral baselines for your highest-risk machine identities and monitor for deviations the same way you monitor human accounts for impossible travel or off-hours access.
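That baseline-and-deviation check is straightforward to express. A minimal sketch comparing today's activity against the identity's own history with a z-score; the three-standard-deviation threshold is an illustrative default, not a recommendation:

```python
from statistics import mean, stdev

def anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's activity if it sits more than z_threshold standard
    deviations above the identity's own historical baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # perfectly regular identity: any increase deviates
    return (today - mu) / sigma > z_threshold
```

A service account that reads a steady 100 records per day will trip this check on a 100,000-record day, while ordinary day-to-day variation passes silently.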

6. Include in Regular Access Reviews

Machine identities belong in your access certification campaigns. But do not dump 15,000 service accounts into a quarterly review and expect a human to evaluate each one. Automate the review for low-risk identities (auto-recertify if behavior is within baseline). Focus human review on high-risk and high-privilege machine identities. Flag any identity that has not been used in 90 days for automatic suspension. Move toward continuous compliance monitoring rather than annual certification for machine identities.
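The triage rules above reduce to a small decision function. A sketch, assuming the risk tier comes from the classification step and the baseline flag from the monitoring step:

```python
def review_action(risk: str, days_since_last_use: int, within_baseline: bool) -> str:
    """Triage one identity per the review rules: suspend if dormant 90+ days,
    route high-risk or deviating identities to a human, auto-recertify the rest."""
    if days_since_last_use >= 90:
        return "suspend"
    if risk == "high" or not within_baseline:
        return "human_review"
    return "auto_recertify"
```

Applied across a 15,000-account population, logic like this shrinks the human review queue to the handful of identities where judgment actually adds value.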

Getting Ahead of the Identity Curve

Regulators are watching. The FFIEC's examination procedures increasingly reference non-human access management. NIST SP 800-207 (Zero Trust Architecture) explicitly requires authentication and authorization policies for application and service identities, not just users. Examiners are beginning to ask for NHI governance evidence as part of cloud and AI assessments.

Financial institutions that build machine identity governance now will have three advantages:

Regulatory readiness. When examiners add machine identity questions to the IT examination (and they will), you will have answers, evidence, and a documented program rather than a scramble to build one retroactively.

Breach resilience. A compromised machine identity in a governed environment has limited blast radius. Credentials rotate automatically. Behavioral anomalies trigger alerts. Orphaned accounts get decommissioned before they become attack vectors.

AI deployment velocity. The institutions that struggle to deploy AI agents at scale are often stuck on the identity problem. They cannot figure out how to give an AI agent the access it needs without violating least privilege. A mature machine identity framework solves this by providing a governed path for agent identity provisioning.

The starting point is straightforward: run an identity inventory that includes every non-human account in your environment. Know what you have. Classify the risk. Assign ownership. Then build the lifecycle management and monitoring that your human IAM program already provides for employees.

Machine identities are not going away. They are growing at a rate that will make today's 82-to-1 ratio look quaint within two years. The financial institutions that govern them now will be the ones that deploy AI with confidence instead of fear. The ones that ignore them will keep finding orphaned service accounts in their breach investigations.

4 phases in the AI journey — most institutions skip the first two

Before Copilot, Before Agents — Get Your Foundation Right

Copilot doesn’t know which documents are sensitive and which aren’t — your governance framework does. Make sure it’s ready before AI starts reading your data.

Frequently Asked Questions

What is a machine identity?

A machine identity is any non-human credential used to authenticate automated processes within a financial institution. This includes service accounts, API keys, bot credentials, workload identities, TLS certificates, and AI agent identities. Machine identities allow applications, automated workflows, and AI systems to access data and services without human intervention.

How many machine identities does a typical financial institution have?

The average enterprise has 82 machine identities for every human identity, according to CyberArk's 2025 research. A mid-size bank with 500 employees may have 15,000 to 40,000 machine identities across service accounts, API keys, bot credentials, and workload identities. Most institutions discover three to five times more than expected during their first inventory.

Why does traditional IAM fail for machine identities?

Traditional IAM was designed for human users who have managers, respond to MFA prompts, follow onboarding and offboarding workflows, and participate in periodic access reviews. Machine identities lack managers, cannot respond to MFA challenges, have no onboarding lifecycle, and operate with static credentials that often go unrotated for years. These fundamental differences break the assumptions that traditional IAM relies on.

Which regulations and frameworks address machine identity governance?

NIST SP 800-207 Zero Trust Architecture requires authentication and authorization for application and service identities alongside user identities. FFIEC examination procedures reference non-human access controls. The Cloud Security Alliance published an Agentic AI IAM framework in 2025. Regulators are beginning to require explicit NHI governance evidence during cloud and AI assessments of financial institutions.

Why do AI agents complicate identity management?

AI agents create and consume identities dynamically, spawning new credentials with each task. They may escalate privileges at runtime, share credentials across trust boundaries, and persist identities beyond task completion. Only 18% of security leaders are confident their IAM systems can manage agent identities. Traditional static credential models cannot govern entities that operate autonomously and continuously.

How should a financial institution start a machine identity governance program?

Start with a comprehensive inventory of every non-human identity in your environment. Scan Entra ID for service accounts and workload identities, audit your cloud infrastructure for managed identities, catalog all API keys and certificates, and identify every bot and AI agent credential. Most institutions discover three to five times more machine identities than they expected during this first scan.

How often should machine identity credentials be rotated and reviewed?

API keys should rotate every 90 days. Certificates should rotate annually at minimum. Service account credentials should follow the same rotation schedule as privileged human accounts. Behavioral reviews should run continuously with automated baselines. Any machine identity unused for 90 days should be automatically suspended. High-privilege identities warrant monthly review by assigned human owners.

Justin Kirsch

CEO, Access Business Technologies

Justin Kirsch has managed identity and access across 750+ financial institutions over 25 years. As CEO of Access Business Technologies, he sees firsthand how machine identity sprawl from AI agents and automated workflows creates the security gaps that traditional IAM programs miss.