How Production Context Changes Which Security Alerts Matter First
A medium alert tied to a live, internet-facing artifact can deserve faster action than a critical alert in code that never shipped.

Most security queues get built the same way: start with severity, sprinkle in exploitability, then argue about what matters “in production” without actually knowing what is in production.

GitHub’s newer production-context and linked-artifacts documentation is a step toward ending that argument. It shows teams how to attach real deployment and storage metadata to the artifacts that alerts refer to, then use that context to filter and sort Dependabot and code scanning findings.

The result is simple: fewer debates about theory, faster decisions about what to fix first.

What GitHub added

GitHub’s linked artifacts experience gives a unified view of artifacts produced by GitHub Actions. Instead of treating “an alert” as a detached blob of data, the linked-artifacts view can show:

  • how the artifact was built
  • where it is stored or running
  • which security and compliance metadata is associated with it
  • how it connects to other related artifacts

GitHub’s production context tutorial then describes how teams can feed storage and deployment records into that view. GitHub says this metadata enables alert filtering with fields such as:

  • artifact-registry-url
  • artifact-registry
  • has:deployment
  • runtime-risk

The docs also show how teams can focus on alerts tied to deployed artifacts, and further narrow to artifacts that carry runtime risks such as internet exposure.
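To make the filtering concrete, here is a minimal client-side sketch of what "deployed plus runtime risk" selection looks like. The alert records and field names below are illustrative assumptions modeled loosely on the filters listed above; they are not GitHub's actual API response shape.

```python
# Hypothetical alert records carrying deployment metadata.
# Field names echo the filters above (has:deployment, runtime-risk),
# but the data shape is an illustration, not GitHub's schema.
alerts = [
    {"id": 1, "severity": "critical", "has_deployment": False, "runtime_risks": []},
    {"id": 2, "severity": "medium", "has_deployment": True,
     "runtime_risks": ["internet-exposure"]},
    {"id": 3, "severity": "high", "has_deployment": True, "runtime_risks": []},
]

def deployed_with_runtime_risk(alert):
    """Mimic a 'deployed and carrying runtime risk' filter client-side."""
    return alert["has_deployment"] and len(alert["runtime_risks"]) > 0

hot = [a["id"] for a in alerts if deployed_with_runtime_risk(a)]
print(hot)  # only the internet-exposed, deployed medium survives the filter
```

Note that the surviving alert is the medium, not the critical: the filter encodes deployment reality before severity ever enters the picture.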

Why runtime context changes the queue

Severity still matters. It just becomes more useful when it is paired with deployment reality.

Consider two findings:

  • Critical in dead code: A critical issue in a branch that never shipped, or in an image that was built once and never deployed.
  • Medium in a live workload: A medium issue in an artifact that is currently deployed, internet-exposed, and tied to a production service.

If you only look at CVSS, the queue is obvious. If you look at risk to the business right now, the queue flips.

Runtime context makes that flip defensible. It gives AppSec a way to explain prioritization in operational terms that engineering leadership recognizes: what is running, what is exposed, what is reachable, and what is owned.

A practical triage model: severity × deployment × exposure

Production context is most valuable when you treat it as a simple scoring model. Not a complex spreadsheet. Just a consistent rubric.

Here is one lightweight approach many teams can adopt quickly:

  1. Is the artifact deployed? (has:deployment)
    • If no, keep it visible but deprioritize unless the fix is trivial or the issue is systemic.
  2. Is there runtime risk? (runtime-risk)
    • Internet exposure and sensitive data paths usually outrank internal-only workloads.
  3. Does the alert map to the deployed artifact?
    • Prioritize issues that affect the exact running version, not a stale build.
  4. Does the owning team exist and can they act?
    • A fix without ownership is not a fix. Push ownership resolution into the process.

Even if you never formalize a score, asking these four questions consistently turns alert review into actual triage.
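For teams that do want a score, the four questions fold into a tiny function. Everything here is a sketch: the field names, weights, and multipliers are assumptions chosen to illustrate the ordering, not a schema GitHub defines.

```python
def triage_score(alert):
    """Rank an alert by deployment reality, not severity alone.

    Field names and weights are illustrative assumptions,
    not part of GitHub's alert schema.
    """
    severity_weight = {"low": 1, "medium": 2, "high": 3, "critical": 4}
    score = severity_weight[alert["severity"]]
    if alert["deployed"]:
        score *= 3       # deployed artifacts dominate the queue
        if alert["internet_exposed"]:
            score *= 2   # runtime risk outranks internal-only workloads
    if alert["owner"] is None:
        score -= 1       # a fix without ownership stalls; surface the gap
    return score

# The flip from the two findings above, made explicit:
live_medium = {"severity": "medium", "deployed": True,
               "internet_exposed": True, "owner": "payments-team"}
dead_critical = {"severity": "critical", "deployed": False,
                 "internet_exposed": False, "owner": "infra-team"}
assert triage_score(live_medium) > triage_score(dead_critical)
```

The exact weights matter less than the shape: deployment and exposure are multipliers, so a live, internet-facing medium outranks a critical that never shipped.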

What linked artifacts help you answer

Once storage and deployment records are flowing, the questions get better:

  • Which vulnerable artifacts are actually deployed?
  • Which deployed artifacts carry runtime risks (internet exposure, sensitive data, regulated workloads)?
  • Which teams own the affected workloads?
  • Which findings belong in this week’s response queue versus a backlog cleanup pass?

This is the point where alert review starts to look like operational work, not spreadsheet cleanup.

How teams can apply this in a weekly workflow

Production context works best when it is attached to a cadence.

A simple weekly loop:

  1. Pull a “deployed + runtime risk” view and fix the top slice first.
  2. Create a second view for “deployed, no runtime risk” and timebox it.
  3. Keep a third view for “not deployed” and use it for hygiene and systemic improvements.

This structure stops the queue from being dominated by alerts that are technically severe, but practically irrelevant.
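The three-view loop can be sketched as a simple bucketing pass. As before, the alert shape is a hypothetical example, not GitHub's API; the point is only that each alert lands in exactly one weekly queue.

```python
from collections import defaultdict

def weekly_view(alert):
    """Bucket an alert into one of the three weekly queues.

    The alert fields here are illustrative assumptions.
    """
    if not alert["deployed"]:
        return "not deployed: hygiene and systemic fixes"
    if alert["runtime_risks"]:
        return "deployed + runtime risk: fix first"
    return "deployed, no runtime risk: timeboxed"

alerts = [
    {"id": 1, "deployed": True, "runtime_risks": ["internet-exposure"]},
    {"id": 2, "deployed": True, "runtime_risks": []},
    {"id": 3, "deployed": False, "runtime_risks": []},
]

queues = defaultdict(list)
for alert in alerts:
    queues[weekly_view(alert)].append(alert["id"])
```

Because the checks are ordered from "deployed with runtime risk" down to "not deployed", every alert gets exactly one home, and the first queue stays small enough to actually clear each week.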

What teams still need after the filter works

GitHub gets you much closer to the right queue. Many organizations still need one more layer to move from prioritization to action.

Security teams still need to connect deployed artifacts to:

  • reachable assets in the environment
  • nearby identities, secrets, and cloud permissions
  • remediation ownership across engineering teams
  • first safe fix steps once the risk is confirmed

That broader operating view is where many security programs still slow down.

FAQ

What is runtime-risk?

GitHub’s linked artifacts documentation says deployment records can include runtime risks such as sensitive data or internet exposure.

Does production context only work with GitHub registries?

No. GitHub’s docs say teams can provide production context using the REST API or partner integrations, and the linked artifacts page can work alongside external registries such as JFrog Artifactory.

Should severity still be part of triage?

Yes. Severity still sets the baseline; production context makes it more useful by determining the order in which teams act on findings of similar severity.

If you want to see how Cantina turns deployed context into ownership, blast radius, and next actions, book a platform walk-through.