The Evolving Landscape of IT Governance and Non-Human Identity Risks
The interconnectedness of today’s digital world has led to a pressing need for companies to rethink their IT governance practices. As we approach 2026, industry experts are urging organizations to focus on ‘non-human’ identity risks and develop robust strategies to mitigate unauthorized actions. Paul Walker, a field strategist at Omada, emphasizes this need in a recent discussion with Digital Journal.
Prediction 1: By 2026, Non-Human Identities Must Be Treated as First-Class Citizens
Walker asserts that traditional identity governance approaches are outdated, having been built primarily around human users. He highlights a critical issue: the presence of numerous machine identities—essentially digital identities belonging to software and services—that remain ungoverned. This concern is echoed by OWASP, whose Non-Human Identities Top 10 ranks ‘improper offboarding’ as the top risk associated with these identities.
To illustrate, Walker explains the challenge presented by the creation of service accounts for temporary projects. Frequently, these accounts persist long after their intended use, granting unfettered access to sensitive databases and cloud resources. With each development initiative, organizations inadvertently create thousands of orphaned credentials, each serving as a potential entry point for cybercriminals.
Walker’s concern deepens as he points to how these “ghost” identities are unmonitored and often have excessive privileges. The rapid rise of cloud-native architectures, microservices, and automated integrations has made it almost impossible for traditional identity governance platforms, which were designed with human users in mind, to keep up.
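The orphaned-credential problem described above can be made concrete with a small detection sketch. This is an illustrative example rather than any vendor's implementation: it assumes an inventory of service accounts, each with a recorded owner and a last-use timestamp drawn from authentication logs, and flags any account that is ownerless or has sat idle past a threshold.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ServiceAccount:
    name: str
    owner: Optional[str]   # accountable team; None means orphaned
    last_used: datetime    # last authentication seen in logs

def flag_stale_accounts(accounts: list[ServiceAccount],
                        now: datetime,
                        max_idle_days: int = 90) -> list[ServiceAccount]:
    """Return accounts that are orphaned or idle beyond the threshold."""
    cutoff = now - timedelta(days=max_idle_days)
    return [a for a in accounts if a.owner is None or a.last_used < cutoff]

# Hypothetical inventory for illustration
now = datetime(2026, 1, 1)
accounts = [
    ServiceAccount("ci-deploy", "platform-team", datetime(2025, 12, 20)),
    ServiceAccount("etl-poc-2023", None, datetime(2023, 6, 1)),       # orphaned
    ServiceAccount("report-gen", "data-team", datetime(2025, 3, 1)),  # long idle
]
stale = flag_stale_accounts(accounts, now)
print([a.name for a in stale])  # ['etl-poc-2023', 'report-gen']
```

In practice the inventory and last-use data would come from cloud provider APIs and sign-in logs; the point of the sketch is that stale-account detection is mechanically simple once that telemetry is actually collected and owned.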
Prediction 2: The Privilege Creep Problem Will Worsen
Privilege creep—the gradual accumulation of excessive access rights—poses significant risks, particularly for machine identities. Walker emphasizes that, unlike their human counterparts, machine identities accumulate permissions invisibly, creating vulnerabilities within organizations. While human access reviews can sometimes catch over-privileged roles during role changes, machines often go unchecked.
Walker describes a stark reality: access reviews fail not for lack of effort, but because they often devolve into rubber-stamping exercises that overlook real risks. The sprawl of SaaS applications compounds the issue, since different teams may create and manage machine identities with no single accountable owner. This fragmented oversight makes it nearly impossible to validate which permissions a machine identity actually requires versus what it has accumulated over time.
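One way to ground such a review is to compare the permissions a machine identity has been granted against those it has actually exercised, as observed in access logs, and surface the unused surplus as revocation candidates. The sketch below illustrates the idea with hypothetical account names and permission strings; it is not drawn from any particular governance product.

```python
def unused_permissions(granted: set[str], exercised: set[str]) -> set[str]:
    """Permissions held but never used: prime candidates for revocation."""
    return granted - exercised

# Hypothetical grant and usage data for one service account
granted = {"db:read", "db:write", "db:admin", "storage:read", "queue:publish"}
exercised = {"db:read", "queue:publish"}  # observed in 90 days of access logs

surplus = unused_permissions(granted, exercised)
print(sorted(surplus))  # ['db:admin', 'db:write', 'storage:read']
```

A set difference is trivially cheap; the hard part Walker points to is organizational, not computational: someone accountable has to own the account, collect the usage telemetry, and act on the surplus rather than rubber-stamp it.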
Prediction 3: A Widening Gap Between Digital Transformation and Identity Hygiene
As organizations pursue digital transformation, they often overlook fundamental aspects of identity security, leaving serious vulnerabilities. Walker notes a striking trend: by the end of 2025, even high-profile companies with substantial security investments could find themselves at the mercy of compromised machine credentials, simply because those credentials had not been managed effectively.
Walker cites incidents involving companies like Jaguar Land Rover and Marks & Spencer, where breaches linked to compromised non-human identities resulted in crippling operational disruptions and massive financial losses. These breaches illustrate a worrying reality: major companies can suffer catastrophic consequences from mismanaged identity governance, underscoring that such risks are immediate rather than hypothetical.
Prediction 4: Regulatory Demands for Transparency in Autonomous Agents
The future of autonomous agents in business is also set to be transformed by regulatory scrutiny. Walker highlights that new laws, such as the EU AI Act and California’s transparency requirements, mandate a thorough documentation process for decisions made by AI agents. Organizations must provide clear reasoning and maintain comprehensive audits of the data these systems access and their actions.
This shift means companies can no longer hide behind the ambiguity of AI decision-making. If an AI system conducts a transaction or denies a loan, organizations must articulate the reasoning behind that decision in a way that is understandable to regulators and affected individuals alike. The age of shrugging off AI decisions with “it was the AI’s choice” is effectively coming to an end, making transparency a non-negotiable aspect of deploying autonomous systems.
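At the code level, this kind of auditability might look like the sketch below: each agent action emits a structured, append-only record capturing the input, the data sources consulted, the stated rationale, and the outcome, so a regulator-facing trail exists. This is an illustration of the principle, not a compliance recipe; the field names are assumptions rather than requirements of any statute.

```python
import json
from datetime import datetime, timezone

def record_decision(agent: str, action: str, inputs: dict,
                    data_sources: list[str], rationale: str,
                    outcome: str) -> str:
    """Serialize one agent decision as a JSON audit record."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "data_sources": data_sources,  # what the agent accessed
        "rationale": rationale,        # human-readable reasoning
        "outcome": outcome,
    }
    return json.dumps(entry)

# Hypothetical loan-decision event
line = record_decision(
    agent="credit-agent-v2",
    action="loan_application_review",
    inputs={"application_id": "A-1042"},
    data_sources=["credit_bureau", "internal_ledger"],
    rationale="Debt-to-income ratio above policy threshold of 0.45",
    outcome="denied",
)
print(json.loads(line)["outcome"])  # denied
```

The key design choice is that the rationale is recorded at decision time, in plain language, rather than reconstructed after the fact, which is precisely what "it was the AI's choice" explanations cannot provide.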
