6 May 2026 · 5 min read
Tech Industry · Security · Platform Reliability

Why 68% of Breaches Start With Your Engineers, Not Your Code
The Verizon DBIR shows the human element is still the dominant initial access vector. For engineering leaders, that means rethinking developer workflows, secrets handling, and on-call escalation paths — not just buying more security tools.
The Verizon Data Breach Investigations Report (DBIR 2024) found that 68% of breaches involved a non-malicious human element — phishing, pretexting, or credential mishandling. The pattern has held for three consecutive reporting years, and the high-profile incidents that defined the last 24 months — Okta's support-system compromise, Twilio's SMS phishing, MGM's helpdesk impersonation, the Snowflake customer credential thefts — all started the same way: someone in engineering or operations was tricked, and an attacker walked through a door that no firewall was guarding.
For CTOs, the uncomfortable implication is that the initial access vector is rarely a zero-day in your stack. It is a session token in a developer's browser, a long-lived API key in a CI runner, or an on-call engineer who answered a Slack DM at 02:00 from someone claiming to be from IT. The attack surface is your engineering workflow.
What the data actually says
Three findings from the DBIR and adjacent incident write-ups are worth pulling out, because each one maps to a concrete control engineering leaders own — not the CISO.
First, stolen credentials and phishing together account for the majority of initial access. The DBIR puts stolen credentials at 38% of breaches as a discrete pattern, with phishing close behind. In Okta's October 2023 incident report, the entry point was a service-account credential that an employee had saved to a personal Google profile — a workflow shortcut, not a sophisticated exploit.
Second, median time-to-click on a malicious link is under 60 seconds, per the DBIR's user-behaviour telemetry. Awareness training has not moved this number meaningfully in five years. The control that works is making the credential itself useless when stolen — phishing-resistant MFA, short-lived tokens, and bound sessions — not better training videos.
Third, the helpdesk and on-call channels are now primary targets. The MGM and Caesars intrusions in 2023, and several 2025 ransomware cases attributed to Scattered Spider, used voice-based social engineering against IT support to reset MFA. If your on-call runbook allows a phone call to result in a credential reset, you have a documented playbook for attackers.
Action one: kill long-lived secrets in your delivery pipeline
Go into your CI/CD system this week and inventory every static secret older than 90 days. Most enterprise pipelines accumulate them: cloud provider keys for deploys, registry tokens, third-party API keys for integration tests, signing keys. Each one is a credential that survives the laptop it was created on.
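A minimal sketch of that inventory, assuming GitHub Actions as the CI system — the org name and token below are placeholders, and pagination is omitted; GitLab, Buildkite, and CircleCI expose similar APIs:

```python
"""Sketch: flag GitHub Actions org secrets not rotated in 90+ days."""
from datetime import datetime, timedelta, timezone

import requests

ORG = "your-org"    # placeholder
TOKEN = "ghp_..."   # placeholder; use a short-lived token in practice
CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/actions/secrets",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/vnd.github+json"},
    timeout=10,
)
resp.raise_for_status()

for secret in resp.json()["secrets"]:
    updated = datetime.fromisoformat(secret["updated_at"].replace("Z", "+00:00"))
    if updated < CUTOFF:
        print(f"STALE: {secret['name']} last rotated {updated:%Y-%m-%d}")
```

Run the same pass over repository-level secrets and environment secrets; the stale ones are your migration backlog for the next step.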
The replacement is OIDC-federated short-lived credentials. GitHub Actions, GitLab, Buildkite, and CircleCI all support OIDC token exchange with AWS, GCP, Azure, and HashiCorp Vault. The migration is mechanical but tedious — typically two to four weeks of focused work for a 200-engineer org. The payoff is that a leaked CI log or compromised runner no longer yields usable credentials. This is the single highest-leverage change a Head of Engineering can make against the credential-theft pattern, and it sits squarely inside the delivery and CI/CD remit, not security's.
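To make the mechanism concrete, here is a sketch of the token exchange as it runs inside a GitHub Actions job with `id-token: write` permission. The role ARN is a placeholder, and in practice the vendor-provided steps such as aws-actions/configure-aws-credentials do this for you:

```python
"""Sketch: exchange a GitHub Actions OIDC token for short-lived AWS creds."""
import os

import boto3
import requests

# GitHub injects these env vars when the job has id-token: write permission.
req_url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"]
req_token = os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]

# 1. Ask GitHub for a signed OIDC token scoped to the STS audience.
jwt = requests.get(
    req_url,
    params={"audience": "sts.amazonaws.com"},
    headers={"Authorization": f"Bearer {req_token}"},
    timeout=10,
).json()["value"]

# 2. Trade it for AWS credentials that expire after one hour.
creds = boto3.client("sts").assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/ci-deploy",  # placeholder
    RoleSessionName="github-actions-deploy",
    WebIdentityToken=jwt,
    DurationSeconds=3600,
)["Credentials"]

print(f"Temporary key {creds['AccessKeyId']} expires {creds['Expiration']}")
```

Nothing in this flow is a stored secret: the JWT is minted per job, the AWS credentials expire in an hour, and a leaked CI log yields nothing reusable.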
Action two: make MFA resets a two-person, async control
Review the procedure your IT or platform team uses to reset MFA for an engineer who has lost a device. If it can be completed via a single synchronous channel — a phone call, a Slack DM, a video call where someone holds up an ID — it is exploitable. The Scattered Spider playbook depends on this exact assumption.
The fix is structural, not technological. Require that MFA resets involve at least two approvers, that the request be raised in a ticketing system with an audit trail, and that final activation happen out-of-band — for example, the engineer must collect a hardware token in person from a named office, or be verified by their direct manager via a pre-registered channel. Yes, this slows down genuine resets. The trade-off is acceptable when the alternative is the MGM scenario.
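The invariant is simple enough to express directly. A sketch with hypothetical names — the point is that no single person, and no single synchronous channel, can complete a reset:

```python
"""Sketch: a two-approver gate for MFA resets (illustrative names)."""
from dataclasses import dataclass, field


@dataclass
class ResetRequest:
    ticket_id: str                      # must originate in the ticketing system
    requester: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise PermissionError("requester cannot approve their own reset")
        self.approvals.add(approver)

    def can_activate(self) -> bool:
        # Two distinct approvers, neither of them the requester.
        return len(self.approvals) >= 2


def activate(req: ResetRequest) -> None:
    if not req.can_activate():
        raise PermissionError(f"{req.ticket_id}: needs two approvers")
    # Final step is out-of-band by design: hardware-token pickup or
    # verification by the direct manager via a pre-registered channel.
    print(f"{req.ticket_id}: approved; schedule out-of-band activation")
```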
Apply the same control to permission escalations in cloud consoles and to break-glass account access. Several of the 2025 incidents catalogued in the DBIR involved attackers who phished initial access then immediately requested elevated permissions through the same compromised channel.
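A complementary detective control is to watch for escalation events arriving shortly after initial access. A sketch against AWS CloudTrail — the event names are real IAM escalation primitives, but the 24-hour window and alert routing are illustrative:

```python
"""Sketch: surface recent IAM escalation events for human review."""
from datetime import datetime, timedelta, timezone

import boto3

ESCALATIONS = ["AttachUserPolicy", "CreateAccessKey", "UpdateAssumeRolePolicy"]
since = datetime.now(timezone.utc) - timedelta(hours=24)

ct = boto3.client("cloudtrail")
for event_name in ESCALATIONS:
    pages = ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": event_name}],
        StartTime=since,
    )
    for page in pages:
        for event in page["Events"]:
            # Route to the on-call channel for a human decision.
            print(f"{event['EventTime']:%H:%M} {event_name} "
                  f"by {event.get('Username', '?')}")
```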
Action three: treat the developer endpoint as the new perimeter
The Okta 2023 incident is instructive because it was not a failure of Okta's product. It was a session token from a managed device that ended up cached in a personal browser profile. Once an engineer can sync browser state to a personal account, your enterprise SSO posture is irrelevant.
Three controls materially reduce this risk and can be rolled out within a quarter:
- Enforce browser management policies that prevent profile sync to non-corporate Google or Microsoft accounts on any machine that holds production credentials.
- Move from cookie-based sessions to device-bound sessions wherever your IdP supports it (Okta DPoP, Google DBSC, Microsoft Entra token protection). A bound token stolen from a developer laptop is unusable on the attacker's machine.
- Audit which SaaS tools your engineers authenticate to with corporate SSO and which they authenticate to with passwords or personal tokens. The latter category is your shadow attack surface (a sketch of this audit follows the list).
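A sketch of that audit, assuming Okta as the IdP — the org URL, API token, and procurement CSV are placeholders, and pagination is omitted. Tools that appear in spend data but not behind the IdP are the shadow surface:

```python
"""Sketch: diff procurement's SaaS vendor list against apps behind the IdP."""
import csv

import requests

ORG_URL = "https://your-org.okta.com"   # placeholder
API_TOKEN = "00a..."                     # placeholder; scope it read-only

resp = requests.get(
    f"{ORG_URL}/api/v1/apps",
    params={"filter": 'status eq "ACTIVE"'},
    headers={"Authorization": f"SSWS {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
sso_apps = {app["label"].lower() for app in resp.json()}

with open("saas_vendors.csv") as fh:    # e.g. exported from spend data
    vendors = {row["vendor"].lower() for row in csv.DictReader(fh)}

for vendor in sorted(vendors - sso_apps):
    print(f"NOT BEHIND SSO: {vendor}")
```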
None of this requires a new vendor. It requires someone in engineering leadership to own the project end-to-end, because the security team typically does not have the authority to change developer workflows.
Why this is an engineering problem, not a security one
The DBIR's persistent finding — that the human element dominates initial access — is often read as a call for more training. The evidence does not support that reading. Click-rates on phishing simulations have been roughly flat across the industry since 2019. What has measurably reduced incident impact is removing the value of what the human gives up: making credentials short-lived, sessions device-bound, and privileged actions multi-party.
Every one of those controls lives in systems that engineering owns: the CI pipeline, the IdP integration, the cloud IAM model, the on-call runbook, the developer endpoint configuration. Security can advise on the threat model, but the implementation is platform engineering work. Treating it as such — putting it on the same backlog as reliability work, with the same SLOs and review cadence — is what separates organisations that absorb a phishing event from those that file an 8-K.
Anystack helps engineering leaders implement these controls without stalling delivery. Our work on platform reliability typically includes secrets-rotation programmes, OIDC migration in CI, and on-call runbook hardening as part of broader resilience engagements. The patterns are well-understood; the work is execution and prioritisation against everything else competing for platform-team time.
