How to Catch Secret Leaks in AI Coding Workflows Before Commit
A secret can leave the session before anyone commits code.

GitHub's documentation for the GitHub MCP server says push protection blocks secrets in AI-generated responses and in actions performed through the server, such as creating an issue. GitHub says this protection is on by default for public repositories and for private repositories covered by GitHub Advanced Security.

That moves the first meaningful leak check into the agent loop.

What changed in GitHub MCP push protection

The important shift is the placement of the control.

GitHub used to feel like the last checkpoint before sensitive material landed in the repository. With the GitHub MCP server, the checkpoint now sits inside the session itself. The block can fire while the model is drafting a response or while a user asks the server to carry that content into a GitHub task such as issue creation.

For security teams, that means the first review point is earlier and the blast radius can be smaller.

Why AI coding workflows raise the stakes

Secrets surface in more places when agents are involved. A token can show up in:

  • model output shown in chat
  • generated code or configuration
  • an issue or pull request draft
  • a follow-on action that writes back to GitHub

A single block is useful because it closes the most direct path to exposure. It also gives the team a signal that sensitive context already entered the session.
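Teams that want a backstop on surfaces the GitHub MCP server does not cover can run their own scan before agent output leaves the session. Below is a minimal sketch in Python: the two regex patterns are illustrative examples of well-known token prefixes, and real scanners ship far larger rule sets.

```python
import re

# Illustrative patterns only: GitHub classic PATs start with "ghp_",
# AWS access key IDs with "AKIA". Production scanners use many more rules.
SECRET_PATTERNS = {
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in agent output."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

def guard(text: str) -> str:
    """Raise before the text leaves the session if it looks like it contains a secret."""
    hits = find_secrets(text)
    if hits:
        raise ValueError(f"blocked: possible secrets {[name for name, _ in hits]}")
    return text
```

The same check can be applied at each exit point listed above: chat output, generated files, and draft issues or pull requests.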

Where coverage still ends

GitHub's documentation is specific to the GitHub MCP server. That is helpful and it is also the boundary.

Many teams now run agent workflows across GitHub, cloud consoles, ticketing tools, internal APIs, and extra MCP servers. A block on the GitHub surface does not explain:

  • which other tools were reachable in the same session
  • whether the secret showed up elsewhere before the block
  • which identity the agent was using across the rest of the workflow
  • how bypasses or exceptions stacked up during the session

That is usually where teams find the gaps that still matter.

Four reviews to run this week

1. Map every place a secret can leave the session

Include model output, generated artifacts, issue creation, pull request drafts, terminal output, and external tools.

2. Inventory which tools can write durable records

A response that stays in chat carries one level of risk. A response that becomes an issue, a comment, a file, or an API call carries another.
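One way to make that distinction operational is to tag each tool in the session with whether it writes a durable record. A minimal sketch, with hypothetical tool names (not a real MCP manifest):

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    # True if the tool creates issues, comments, files, or other
    # artifacts that persist after the session ends.
    writes_durable_record: bool

# Hypothetical inventory for illustration.
TOOLS = [
    Tool("create_issue", writes_durable_record=True),
    Tool("add_comment", writes_durable_record=True),
    Tool("search_code", writes_durable_record=False),
    Tool("read_file", writes_durable_record=False),
]

def write_capable(tools: list[Tool]) -> list[str]:
    """Tools whose output leaves the chat and becomes a durable artifact."""
    return [t.name for t in tools if t.writes_durable_record]
```

The write-capable subset is where a leaked secret becomes hardest to contain, so it is the natural place to concentrate review.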

3. Review who can bypass secret blocks

GitHub provides a bypass flow. Teams should decide who can use it, what reason is acceptable, and how that exception is reviewed later.
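Whatever the bypass mechanism, each exception should leave a reviewable record. A sketch of the minimum fields worth capturing, assuming a simple in-house audit log rather than any specific GitHub API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BypassRecord:
    user: str          # who invoked the bypass
    reason: str        # the justification they supplied
    secret_type: str   # what kind of secret was blocked
    reviewed: bool = False
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def unreviewed(records: list[BypassRecord]) -> list[BypassRecord]:
    """Exceptions that still need a security review."""
    return [r for r in records if not r.reviewed]
```

A recurring review of the `unreviewed` queue turns a one-off bypass into an auditable decision.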

4. Tie blocks to identity and session context

A blocked secret becomes far more useful when it is tied to the user, the agent session, and the reachable tool set around it.
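In practice that enrichment can be a small join between the raw block event and the session record. A sketch, assuming a hypothetical event and session shape rather than any vendor's schema:

```python
def enrich_block_event(event: dict, session: dict) -> dict:
    """Attach identity and reachable-tool context to a raw block event,
    so responders see who was acting and what else the agent could touch."""
    return {
        **event,
        "user": session.get("user"),
        "agent_session_id": session.get("id"),
        "reachable_tools": session.get("tools", []),
    }
```

With that context attached, a block event answers the questions from the coverage section above instead of just reporting that something was stopped.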

FAQ

Does this apply to private repositories?

Yes. GitHub says the protection is on by default for private repositories covered by GitHub Advanced Security.

Does this protect every tool an agent can use?

No. The documentation is specific to the GitHub MCP server. Teams still need coverage across the rest of the agent workflow.

Why does this matter if the secret was blocked?

Because a block tells you sensitive material already entered the session. That is exactly the moment when teams should inspect the surrounding context, reachable tools, and exception path.

If you want to map where agent sessions can leak secrets across GitHub, MCP tools, and runtime systems, Cantina can walk that workflow with your team. Start with an AI agent risk review.