March 25, 2026 · 9 min read · ISO 27001 · SDLC · Threat Modeling · A.8.25 · A.8.26

Threat Modeling in the SDLC: What ISO 27001 A.8.25 and A.8.26 Actually Demand

Most SDLCs claim threat modeling. Most can't produce a single completed threat model on demand. What A.8.25 and A.8.26 require, the gaps we flag in audits, and a working template you can adopt in one sprint.

Jonathan Major
Lead ISO Internal Auditor · Risk and Response

Most SDLCs claim threat modeling as part of the process. Most SDLCs cannot produce a single completed threat model on demand. This is the gap that gets flagged in nearly every ISO 27001 internal audit — the policy says "we threat model major releases," and the engineering team confirms they "do threat modeling," but the artifacts that prove it don't exist or haven't been touched in 18 months.

ISO 27001:2022 introduced specific Annex A controls for the secure development lifecycle that put real teeth on this. This post walks through what A.8.25 and A.8.26 actually require, the gaps an internal auditor sees, and a working template you can implement in a single sprint without crushing engineering velocity.

What A.8.25 and A.8.26 actually require

ISO 27001:2022 Annex A.8.25 — Secure development life cycle — requires the organization to establish and apply rules for the secure development of software and systems. It explicitly calls out threat modeling as part of the methodology, alongside secure coding standards, secure development environments, version control, and security testing.

Annex A.8.26 — Application security requirements — requires information security requirements to be identified, specified, and approved when developing or acquiring applications. This is the upstream input to A.8.25: you can't apply secure development if you haven't first decided what "secure" means for the thing you're building.

Together, the two controls describe a continuous loop:

  1. Identify the security requirements for a planned change (A.8.26)
  2. Apply secure development methodology — including threat modeling — to build against those requirements (A.8.25)
  3. Verify, before release, that the requirements are met

Threat modeling is the technique that makes this loop concrete. It's the bridge between "we have a security policy" and "we have evidence that this specific feature was assessed for security risk before it shipped."

What "good" actually looks like

A defensible threat-modeling program has six components. Get any five right and you'll close A.8.25 and A.8.26. Get all six and a Stage 2 auditor will move on quickly.

1. A defined trigger

"We threat model major releases" is meaningless without a definition of "major." Specific triggers I recommend writing into the policy:

  • A new service, microservice, or significant component
  • A change to an authentication or authorization boundary (a new role, a new SSO provider, a new federation)
  • A change to a data-classification boundary (e.g., the system now handles PII, payment data, or regulated health data)
  • A new external integration or third-party processor
  • Any AI/ML feature — mandatory, regardless of perceived size (drives ISO 42001 Annex A.6 alignment)
  • A change to deployment topology (new region, new cloud account, new tenancy model)

Code these as a checklist in your engineering proposal template. The product spec or design doc has a "does this trigger a threat model?" section that the tech lead fills in.
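As a sketch of that checklist section, the triggers can be coded as booleans the tech lead fills in. The field names and function below are illustrative, not a standard schema:

```python
# Hypothetical sketch: evaluate the threat-model trigger checklist from a
# design doc's metadata. Field names are illustrative, not a standard schema.

TRIGGERS = {
    "new_service": "New service, microservice, or significant component",
    "auth_boundary_change": "Change to an authentication/authorization boundary",
    "data_classification_change": "Change to a data-classification boundary",
    "new_external_integration": "New external integration or third-party processor",
    "ai_ml_feature": "Any AI/ML feature (mandatory regardless of size)",
    "deployment_topology_change": "Change to deployment topology",
}

def threat_model_required(design_doc: dict) -> list[str]:
    """Return the triggers this change hits; an empty list means no model needed."""
    return [desc for key, desc in TRIGGERS.items() if design_doc.get(key)]

# Usage: the tech lead sets these booleans in the design-doc template.
doc = {"new_service": False, "ai_ml_feature": True}
assert threat_model_required(doc) == ["Any AI/ML feature (mandatory regardless of size)"]
```

The point of coding it rather than prose is that "not major" stops being a judgment call: any truthy field forces the threat-model section of the design doc.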

2. A chosen method

Pick one and stick with it:

  • STRIDE — Microsoft's framework. Six threat categories (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege). Best for system-level architectural review. Default choice for most teams.
  • LINDDUN — Privacy-focused (Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of Information, Unawareness, Non-compliance). Use when the system handles PII at scale or where GDPR / ISO 42001 obligations dominate.
  • OWASP ASVS — Application Security Verification Standard. More requirements checklist than threat model. Useful as a verification gate downstream, not as the threat-modeling method itself.
  • PASTA — Process for Attack Simulation and Threat Analysis. Heavier, business-impact-focused. Reach for it when you need to align security risk with business risk explicitly (e.g., financial services).

For most SaaS teams, the answer is STRIDE for the threat model and OWASP ASVS as a verification checklist before release. LINDDUN replaces or augments STRIDE if the system handles substantial PII or is in scope for ISO 42001.

3. A standard artifact

Every threat model produces the same three things:

  • A data flow diagram — boxes, arrows, trust boundaries. Mermaid, draw.io, OmniGraffle, whatever your team actually uses. The diagram lives in the design doc or repo.
  • A threat list — a table with one row per identified threat. Columns: ID, STRIDE category (or LINDDUN), the asset/data flow it affects, likelihood, impact, mitigation, status (mitigated / accepted with justification / out of scope).
  • A sign-off block — names, dates, what was decided. This is the artifact that proves the conversation happened.
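The threat-list row can be kept as structured data and rendered into the markdown table that lives in the design doc. A minimal sketch, with the column names from the table above and an illustrative rendering helper:

```python
from dataclasses import dataclass

# Hypothetical sketch of one threat-list row. Columns follow the article's
# table; the rendering helper is illustrative, not a required format.

@dataclass
class Threat:
    id: str
    category: str    # STRIDE (or LINDDUN) category
    affects: str     # asset or data flow
    likelihood: str  # e.g. low / medium / high
    impact: str
    mitigation: str
    status: str      # mitigated | accepted | out of scope

    def as_markdown_row(self) -> str:
        """Render one row of the threat table for the in-repo markdown file."""
        cells = [self.id, self.category, self.affects, self.likelihood,
                 self.impact, self.mitigation, self.status]
        return "| " + " | ".join(cells) + " |"

# Usage (ticket reference is a made-up example):
t = Threat("T-12", "Tampering", "admin search query", "medium", "high",
           "parameterized queries (ticket link)", "mitigated")
assert t.as_markdown_row().startswith("| T-12 | Tampering |")
```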

4. Named owner and reviewer

Tech lead drafts. Security partner reviews. Both sign off in the document before release. If you don't have an in-house security partner, the engineering manager fills the reviewer role and the threat model gets a quarterly external review.

5. Discoverable storage

Two places, mirrored:

  • In repo, version-controlled: /docs/threat-models/<feature-name>.md
  • Linked from the GRC platform — Vanta, Drata, or whatever you use — as evidence under A.8.25 / A.8.26

The single most common audit failure is "we did the threat model but I can't find it" — repo + GRC link solves it.

6. Linkage to the backlog

Each threat marked "mitigated" needs a corresponding ticket — closed or in-flight — that implemented the mitigation. Each "accepted" threat needs the named approver and a documented rationale. The threat model is not a static doc; it's the design-time output of an operational risk-treatment process.
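That linkage rule is easy to enforce mechanically before the release gate. A sketch, assuming each threat row is a dict with illustrative keys (`status`, `ticket`, `approver`, `rationale`):

```python
# Hypothetical lint: every "mitigated" threat needs a linked ticket; every
# "accepted" threat needs a named approver and a rationale. Keys are illustrative.

def lint_threats(threats: list[dict]) -> list[str]:
    """Return one error string per threat row that breaks the linkage rule."""
    errors = []
    for t in threats:
        status = t.get("status")
        if status == "mitigated" and not t.get("ticket"):
            errors.append(f"{t['id']}: mitigated but no linked ticket")
        if status == "accepted" and not (t.get("approver") and t.get("rationale")):
            errors.append(f"{t['id']}: accepted but missing approver or rationale")
    return errors

# Usage:
threats = [
    {"id": "T-1", "status": "mitigated", "ticket": "ENG-101"},
    {"id": "T-2", "status": "accepted"},  # no approver/rationale -> flagged
]
assert lint_threats(threats) == ["T-2: accepted but missing approver or rationale"]
```

Run as a CI step over the in-repo threat-model files, this turns "the threat model is operational" from an assertion into something an auditor can verify.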

The gaps we flag in internal audits

Across recent ISO 27001 audits, these are the threat-modeling gaps that come up most.

Gap 1: "Major release" undefined

The most common issue — the policy commits the organization to threat modeling for "major releases," but "major" isn't defined anywhere. Without explicit triggers, every release is a judgment call, and the judgment usually defaults to "not major."

Gap 2: Threat model done once, never refreshed

A team threat-modeled the v1 architecture two years ago. The artifact sits in a Confluence page nobody touches. The system has evolved through six major releases since. Auditors look for evidence the threat model is alive — at minimum an annual review for long-lived services.

Gap 3: No linkage between threats and mitigations

The threat list says "Threat T-12: SQL injection in admin search — mitigated by parameterized queries." The auditor asks for the ticket, the test, the code review note — the artifact that proves the mitigation actually shipped. Often, no link.

Gap 4: AI features skipped

AI/ML features keep slipping through under "incremental change" framing. They're not. An AI feature changes data flows (training corpus, inference inputs, output handling), introduces new abuse vectors (prompt injection, jailbreaks, model exfiltration), and often crosses regulatory boundaries (ISO 42001, GDPR). Make AI a mandatory trigger.

Gap 5: Third-party integrations not modeled

A new SaaS integration adds a trust boundary, new data flows, and new credentials. Each of those deserves at least a lightweight threat-modeling pass. Most teams do a vendor risk assessment instead — useful, but it's not a threat model.

A working template

Here's the lightweight, repeatable structure I recommend. Most teams can adopt it inside one sprint.

Policy text (to add to your Secure Development Policy)

Threat modeling is required for all changes that meet any of the following criteria: new services or microservices; changes to authentication or authorization boundaries; changes to data classification scope; new external integrations; AI/ML features; or changes to deployment topology. The threat model uses STRIDE methodology by default, with LINDDUN substituted for changes primarily affecting personally identifiable information. The artifact lives in the relevant feature's design documentation, is reviewed and signed off by Engineering and Security before release, and is reviewed at least annually for long-lived services.

Per-release process

  1. Tech lead identifies trigger criteria during design — flagged in the design doc template
  2. Tech lead drafts the data flow diagram (boxes + arrows + trust boundaries) — 30–60 minutes
  3. One-hour STRIDE walkthrough session: tech lead + security partner + product if relevant. One row per identified threat.
  4. Each threat assigned a status: mitigated (with linked ticket), accepted (with named approver and rationale), or out of scope (with reason)
  5. Mitigation tickets created, prioritized, scheduled into the release plan
  6. Release readiness gate: tech lead and security partner sign off that all threat-model commitments shipped or are accepted
  7. Threat model committed to repo and linked from GRC platform as evidence
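Step 6, the release readiness gate, reduces to two checks: every threat resolved and both sign-offs recorded. A sketch, with illustrative field names:

```python
# Hypothetical release-readiness gate (step 6 above). Field names are
# illustrative; adapt to however your threat-model file is structured.

RESOLVED = {"mitigated", "accepted", "out of scope"}

def release_ready(model: dict) -> bool:
    """True only if every threat has a terminal status and both roles signed off."""
    threats_ok = all(t["status"] in RESOLVED for t in model["threats"])
    signed = {"tech_lead", "security_partner"} <= set(model.get("signoffs", {}))
    return threats_ok and signed

# Usage:
model = {
    "threats": [{"status": "mitigated"}, {"status": "accepted"}],
    "signoffs": {"tech_lead": "A. Lead", "security_partner": "S. Partner"},
}
assert release_ready(model)
```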

Annual review

For each long-lived service, calendar an annual threat-model refresh. Review what's changed, add new threats from the past year's incidents and security advisories, retire mitigated/closed threats that no longer apply.

Scaling without crushing velocity

The fear behind not adopting structured threat modeling is usually velocity — "we'll spend half our time threat modeling and never ship." Three things keep this from being true:

Trigger-based, not commit-based

Most PRs don't need a threat model. Define the triggers tightly, keep the bar at architectural change rather than implementation change, and the cadence stays manageable. A typical SaaS team runs 6–12 threat-modeling sessions per year, not per sprint.

Lightweight checklists for routine changes

For changes below the trigger bar, a 10-question security checklist in the PR template captures the same intent without the overhead. "Are you adding a new external API call? Are you changing what data we log? Are you bypassing existing authn?" Yes-answers escalate to a threat-modeling review.
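The escalation logic for that PR checklist is trivially simple, which is the point. A sketch with a few of the sample questions (the full 10-question list is yours to define):

```python
# Hypothetical PR-template checklist: a few sample questions (illustrative,
# not a canonical list). Any "yes" escalates to a threat-modeling review.

CHECKLIST = [
    "Are you adding a new external API call?",
    "Are you changing what data we log?",
    "Are you bypassing existing authn or authz?",
]

def needs_escalation(answers: dict[str, bool]) -> bool:
    """True if any checklist question was answered yes; unanswered counts as no."""
    return any(answers.get(q, False) for q in CHECKLIST)

# Usage:
assert needs_escalation({"Are you changing what data we log?": True})
assert not needs_escalation({})
```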

Tooling helps but isn't required

Microsoft Threat Modeling Tool, OWASP Threat Dragon, IriusRisk, and pytm are all good. None are required. A markdown file with a diagram and a threat table is an A.8.25-compliant artifact. Pick a tool when the team is bottlenecked on artifact creation, not before.

What an audit looks for

During an ISO 27001 internal audit, I review the following for A.8.25 and A.8.26:

  1. The Secure Development Policy with explicit threat-modeling triggers and methodology
  2. A list of releases in the audit period and which triggered a threat model
  3. The threat-model artifacts for at least two of those releases — including data flow diagram, threat list, sign-off
  4. The mitigation tickets linked to the threat list, showing they shipped before the corresponding release
  5. For long-lived services, evidence of an annual review in the past 12 months

If those five artifacts exist and reconcile, A.8.25 and A.8.26 close cleanly.

Frequently asked questions

How long does a single threat model take?

For a typical feature: 30–60 minutes for the diagram, 60–90 minutes for the STRIDE walkthrough, 30 minutes for write-up and ticket creation. Total: 2–3 engineering hours for the tech lead, 1–2 hours for the security reviewer. The first one your team runs takes longer; the fifth is fast.

Do we need a dedicated security engineer?

For a 50-person SaaS, no. The engineering manager or staff engineer can fill the security-partner role with one day of STRIDE training. For larger organizations, a part-time security engineer or a fractional CISO covers this without a full-time hire.

What if we don't do "releases" — we ship continuously?

Triggers are based on architectural change, not a release calendar. Continuous deployment doesn't change anything: the threat model gates the change, not the deployment. If a feature flag enables a change that crosses a trust boundary, it's the same trigger.

How is this different from penetration testing?

Threat modeling is design-time and proactive — it identifies risks before code exists. Pen testing is post-implementation and empirical — it validates that controls actually work. Both are required for ISO 27001, and they cover different parts of the lifecycle.

Where does ISO 42001 extend this?

ISO 42001:2023 Annex A.6 (AI system lifecycle) effectively requires threat modeling for AI features specifically — covering prompt injection, data poisoning, model exfiltration, and bias/fairness as additional categories beyond STRIDE. If you're running an integrated ISO 27001 + ISO 42001 program, the threat model template should include an "AI threats" section that picks up where STRIDE leaves off.

Does Vanta or Drata cover this automatically?

Both platforms include policy templates and evidence-collection slots for A.8.25 and A.8.26. Neither generates a threat model for you. Use the platform to track that the threat model exists, was reviewed in the period, and is linked to the controls — but the substantive engineering work still needs to happen.

About the author

Jonathan Major

Jonathan leads ISO 27001, ISO 42001, and ISO 9001 internal audits at Risk and Response. 25 years across engineering, information security, and compliance — IBM, BlackRock, Barclays, Crux Informatics.