Secrets Management for Developers: A Practical Guide

A new starter on a team I worked with last year committed an AWS access key to a public mirror of a project. Within 14 minutes, automated scanners had picked it up and spun up 32 GPU instances in eu-west-2 to mine cryptocurrency. The bill ran past 4,000 GBP before the alert paged the on-call engineer.

That is the speed at which leaked secrets are exploited now. Secrets management is not a checklist item for the security review at the end of a project; it is a daily habit that affects every developer pushing every commit. This is how to think about it without pretending you need an enterprise vault on day one.

What Counts as a Secret

The first useful move is drawing a line between configuration and secrets. They behave differently and need different tooling.

| Type | Examples | Treatment |
| --- | --- | --- |
| Configuration | LOG_LEVEL, PORT, FEATURE_X_ENABLED | Plain text, in repo, per-environment |
| Sensitive config | Internal hostnames, third-party endpoints | Out of repo, but low-stakes if leaked |
| Secrets | API keys, DB passwords, signing keys, OAuth client secrets | Encrypted at rest, audited access, rotated |

If a value gives an attacker capability when leaked, it is a secret. Everything else can sit in a config file in the repo. Treating both the same way clutters your vault and trains the team to ignore “secrets” warnings because most of them are not real secrets.

This sits next to the broader topic of environment variables done right: env vars are the transport mechanism, secrets management is the policy you apply to a subset of them.

The Local Development Trap

Most leaks I have investigated did not start in production. They started on a developer laptop. The pattern goes:

  1. Developer needs a credential to debug something.
  2. Credential lands in .env.
  3. Six months later, .env.example and .env get committed in the same hurried git add ..
  4. The credential is now in Git history forever.

A few habits remove most of this risk:

  • Keep real values in .env, never in .env.example. Commit .env.example with placeholder strings; add .env to .gitignore from the first commit, not the day after the leak.
  • Run a pre-commit hook that scans for high-entropy strings. Gitleaks and git-secrets both work well; install them as part of the repo bootstrap so new starters get them automatically.
  • Use direnv or a per-project shell hook. That keeps secrets out of your global shell history and makes it obvious which project owns which credential.
  • Use short-lived dev credentials wherever you can. AWS SSO, GCP Application Default Credentials, and Azure CLI all give you tokens that expire in hours rather than long-lived access keys.

The pre-commit hook is the single highest-leverage thing on this list. It catches the leak before the push, which is the only point at which it is cheap to fix.
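To make the mechanism concrete, here is a toy version of what scanners like gitleaks do under the hood: match known key shapes, and flag long tokens whose character distribution looks machine-generated. Real tools ship hundreds of tuned rules; this sketch is for intuition, not a substitute.

```python
import math
import re

# Known key shape: AWS access key IDs start with AKIA plus 16 chars.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")
# Generic candidate: long base64-ish tokens worth an entropy check.
CANDIDATE = re.compile(r"[A-Za-z0-9+/=_\-]{32,}")

def shannon_entropy(s: str) -> float:
    """Bits per character; random base64-ish strings score above ~4.5."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def findings(line: str) -> list[str]:
    hits = AWS_KEY.findall(line)
    hits += [t for t in CANDIDATE.findall(line) if shannon_entropy(t) > 4.5]
    return hits

# A fake key shape trips the rule; ordinary config does not.
print(findings("aws_access_key_id = AKIAIOSFODNN7EXAMPLE"))  # ['AKIAIOSFODNN7EXAMPLE']
print(findings("LOG_LEVEL=debug"))                           # []
```

The pattern rules catch well-known prefixes; the entropy check catches everything else at the cost of occasional false positives, which is why the real tools let you maintain an allowlist.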

Choosing a Vault

The market has converged on a small handful of tools that are all good enough. Pick based on operational fit, not on feature lists.

| Tool | Best for | Watch out for |
| --- | --- | --- |
| AWS Secrets Manager | Teams already on AWS, especially with RDS | Per-secret pricing adds up; rotation Lambdas need maintenance |
| AWS Parameter Store | Cheap, simple, AWS-only | No native rotation; encrypt with KMS or it is just config |
| HashiCorp Vault | Multi-cloud, dynamic secrets, fine-grained policy | Heavy to run yourself; HCP Vault is the easier path |
| Doppler | Small to mid-size teams, CI integration | Vendor lock-in if you go all-in on the SDKs |
| Infisical | Open source, similar UX to Doppler | Younger ecosystem |
| 1Password Developer | Teams already on 1Password | Less suited to high-volume service-to-service auth |
| GitHub / GitLab encrypted secrets | CI-only credentials | Not a general-purpose vault; no rotation |

A common mistake is picking the most powerful option (Vault) when the team has neither the time nor the operational appetite to run it. The correct choice is whatever your team will keep tidy after the launch sprint. A neglected Vault cluster with two-year-old root tokens is worse than a well-run Doppler project. The HashiCorp Vault docs are a fair primer if you want a deeper sense of the dynamic secrets pattern.

Rotation: The Bit Everyone Skips

A vault that holds a four-year-old database password is not secrets management; it is a slightly fancier filing cabinet. Rotation is the practice that makes the rest of the work meaningful.

Three rotation patterns are worth understanding:

Static rotation on a schedule. The vault generates a new credential, the application reads the new value on its next refresh, and the old credential is invalidated after a grace window. AWS Secrets Manager handles this for RDS out of the box; for everything else, you write a small Lambda or scheduled job.

Dynamic secrets. The vault generates a fresh credential per session, scoped to a short TTL. Vault’s database engine and AWS dynamic credentials work this way. The credential lives for an hour, gets used, then expires and is never seen again. This is the gold standard but requires the application to fetch credentials per session rather than reading them at boot.

Event-driven rotation. Rotate when something happens: a staff member leaves, a credential is suspected compromised, a deploy goes through a sensitive boundary. This is the one that gets neglected because the trigger is not on a calendar.

Below is the lifecycle most teams should be running for production database credentials.

Vault generates a new credential (on schedule or on event) → the application fetches it on its next refresh (cache TTL ~5 min) → grace window during which both credentials are valid (10 to 60 min) → the old credential is revoked and the revocation is audit-logged.

The grace window is the part teams routinely get wrong. Rotate too aggressively and the old credential is revoked before every replica has fetched the new one, causing intermittent auth failures. The window needs to be longer than your application’s secret refresh cache TTL, with margin.
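A back-of-envelope check makes the constraint explicit: the old credential must stay valid longer than the worst-case time for every replica to pick up the new one. The numbers below are illustrative, not prescriptive.

```python
# Worst case: a replica refreshed its cache just before rotation, so it
# holds the old value for a full TTL, plus a margin for slow fetches.

def min_grace_window_s(cache_ttl_s: int, propagation_margin_s: int = 120) -> int:
    return cache_ttl_s + propagation_margin_s

def rotation_safe(grace_window_s: int, cache_ttl_s: int) -> bool:
    return grace_window_s >= min_grace_window_s(cache_ttl_s)

print(rotation_safe(grace_window_s=600, cache_ttl_s=300))  # 10 min grace, 5 min TTL: True
print(rotation_safe(grace_window_s=300, cache_ttl_s=300))  # no headroom: False
```

If this check fails in your setup, lengthen the grace window rather than shortening the cache TTL; hammering the vault on every request creates its own availability problem.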

CI: Where Secrets Leak Quietly

CI pipelines are the second most common leak surface after developer laptops. They run on shared infrastructure, log to public artifacts, and stitch together credentials from a dozen sources. The defaults are not safe.

A short defensive pattern that catches most of the bad outcomes:

```yaml
# GitHub Actions example
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write   # for OIDC, not long-lived AWS keys
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy
          aws-region: eu-west-2

      - name: Fetch DB password from Secrets Manager
        id: db
        run: |
          PASSWORD=$(aws secretsmanager get-secret-value \
            --secret-id prod/db/password \
            --query SecretString --output text)
          echo "::add-mask::$PASSWORD"
          echo "DB_PASSWORD=$PASSWORD" >> "$GITHUB_OUTPUT"

      - name: Run migrations
        env:
          DB_PASSWORD: ${{ steps.db.outputs.DB_PASSWORD }}
        run: ./scripts/migrate.sh
```

Three things matter here:

  1. OIDC instead of long-lived AWS keys. GitHub mints a short-lived token; AWS exchanges it for a session. There is no static credential to leak. The GitHub OIDC docs have the per-cloud setup.
  2. ::add-mask:: registers the value with the runner so it is redacted from logs. Without it, set -x or a verbose curl will spill the password into the build log.
  3. Pass via env, not via command line. Command-line args show up in ps listings and in some CI logs. Env vars are scoped to the step.
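The env-versus-argv point is easy to demonstrate outside CI as well. In this sketch, an inline child script stands in for something like a migration script; the secret travels in the child's environment and never appears in its argument list.

```python
import subprocess
import sys

secret = "s3cr3t-value"

# Leaky: subprocess.run(["./scripts/migrate.sh", "--password", secret])
# would expose the secret to anyone who can list processes with `ps`.

# Scoped: the value lives only in the child's environment.
child = "import os; print('got password' if os.environ.get('DB_PASSWORD') else 'missing')"
result = subprocess.run(
    [sys.executable, "-c", child],
    env={"DB_PASSWORD": secret},  # absent from argv, invisible to `ps`
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # got password
```

Note that passing `env=` replaces the child's entire environment, which is itself a useful discipline: the child sees exactly the variables you chose to give it.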

For the broader pattern around CI safety, the GitHub Actions patterns post covers permissions, pinning, and least-privilege workflows. The CI piece of secrets management lives inside that wider conversation about pipeline hardening; it is not a separate problem.

Application-Side: Read Once, Cache Briefly, Refresh Quietly

How an application consumes secrets matters as much as where it stores them. A few patterns hold up.

Read on startup, refresh in the background. Boot reads the current value from the vault and caches it in process memory. A background goroutine or scheduled task refreshes it every few minutes. The application never blocks waiting for the vault, and rotation propagates automatically.
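A minimal sketch of that pattern, with a stand-in fetch function where a real vault SDK call would go. The lock keeps reads consistent while the background refresher swaps in a new value.

```python
import threading
import time

class CachedSecret:
    def __init__(self, fetch, refresh_interval_s: float):
        self._fetch = fetch
        self._lock = threading.Lock()
        self._value = fetch()  # block once, at startup only
        threading.Thread(
            target=self._refresh_loop, args=(refresh_interval_s,), daemon=True
        ).start()

    def _refresh_loop(self, interval: float) -> None:
        while True:
            time.sleep(interval)
            try:
                new = self._fetch()  # a failed refresh keeps the old value
            except Exception:
                continue
            with self._lock:
                self._value = new

    def get(self) -> str:
        with self._lock:
            return self._value

def fake_vault_read() -> str:
    # Stand-in for a vault call: first read returns the old password,
    # later reads return the rotated one.
    fake_vault_read.calls = getattr(fake_vault_read, "calls", 0) + 1
    return "password-v1" if fake_vault_read.calls == 1 else "password-v2"

secret = CachedSecret(fake_vault_read, refresh_interval_s=0.05)
print(secret.get())  # password-v1, served from the startup cache
time.sleep(0.3)      # rotation propagates without the caller doing anything
print(secret.get())  # password-v2
```

The important property is that `get()` never blocks on the network, so a slow or briefly unavailable vault degrades to stale-but-valid credentials rather than an outage.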

Fail closed on missing secrets, not missing config. If DATABASE_URL is missing, crash on startup with a clear error. Do not fall back to a default that silently connects to the wrong place. This is one of the cheapest changes you can make and one of the most effective.
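The distinction fits in a few lines. The names DATABASE_URL and PORT here are illustrative; the point is that the two kinds of value get different failure behaviour.

```python
import os
import sys

def require_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Crash loudly at startup; never fall back to a default connection.
        sys.exit(f"fatal: required secret {name} is not set")
    return value

def optional_config(name: str, default: str) -> str:
    return os.environ.get(name, default)  # plain config can default safely

os.environ.pop("PORT", None)                       # simulate unset config
os.environ["DATABASE_URL"] = "postgres://example"  # simulate a set secret

print(optional_config("PORT", "8080"))  # 8080
print(require_secret("DATABASE_URL"))   # postgres://example
```

Crashing at startup turns a misconfiguration into a deploy failure that blocks rollout, instead of a running process quietly pointed at the wrong database.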

Never log a secret, even at debug. That includes the first eight characters “for debugging”. The combination of secret prefix and known service is enough to identify which credential leaked, which is half of what an attacker needs.
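One way to enforce this centrally rather than at every call site is a logging filter that redacts known secret values before any handler sees the record. This is a sketch using Python's standard logging module; the secret value is illustrative.

```python
import logging

class RedactSecrets(logging.Filter):
    """Replace known secret values in log messages with a placeholder."""

    def __init__(self, secrets):
        super().__init__()
        self._secrets = [s for s in secrets if s]

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()  # render %-style args first
        for s in self._secrets:
            msg = msg.replace(s, "[REDACTED]")
        record.msg, record.args = msg, None
        return True  # always emit; we redact, we do not drop

handler = logging.StreamHandler()
handler.addFilter(RedactSecrets(["hunter2-prod-password"]))
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("connecting with password=%s", "hunter2-prod-password")
# emitted line reads: connecting with password=[REDACTED]
```

Registering each secret with the filter as it is fetched gives you the same guarantee the CI `::add-mask::` directive gives you in build logs.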

Treat secrets in memory dumps as equivalent to secrets on disk. A core dump on a shared machine is a credential leak. Configure the runtime to disable core dumps in production, or to redact known secret env vars when they are written.

The structured logging discipline from the developer’s guide to logging is what makes the “never log a secret” rule sustainable. With ad-hoc string interpolation, somebody will eventually log the wrong thing.

A Practical Adoption Plan

If you are looking at this and realising your team is running on .env files committed to a private repo, that is a normal starting point. Here is the smallest progression that gets you to a defensible posture.

  1. Week 1. Pick a vault. Add .env to .gitignore if it is not already. Install gitleaks as a pre-commit hook in every active repo.
  2. Week 2. Move production secrets out of CI environment configuration into the vault. Switch CI to fetch them at runtime. Switch any long-lived cloud credentials to OIDC.
  3. Week 3. Audit who has access to which secrets. Remove anyone who left in the last year. Document the access policy somewhere a new starter can find.
  4. Week 4. Set up automated rotation for the highest-blast-radius secret you have, usually the production database master credential. Verify the application handles rotation cleanly in staging first.
  5. Month 2. Repeat rotation for the next tier of secrets. Add audit log monitoring; an unfamiliar IP fetching a production credential should page somebody.
  6. Quarterly. Run a tabletop exercise where a credential leaks. Time the response. The answer should be “rotate, audit, blameless review”, not “scramble”.

This is conservative and unglamorous. It is also enough to keep the cryptocurrency miners out of your AWS account.

What Goes Wrong Even With a Vault

A vault on its own does not stop the common failure modes. Watch for these:

  • The “shared developer” credential. A single AWS access key with broad scope, used by every developer for everything. Replace with named SSO identities; the audit trail alone is worth it.
  • Build secrets that survive into runtime images. A Docker build that uses a private package registry token will bake the token into a layer if you do not use multi-stage builds or BuildKit secret mounts. Docker's build secrets docs show the safe pattern.
  • Secrets in third-party SaaS dashboards. Cron services, error trackers, and uptime monitors all collect credentials in their own UIs. Inventory them; they are usually nobody’s job to rotate.
  • Backup blast radius. A database backup contains every secret your application has ever touched in plain text rows. Encrypt backups; control access to backup buckets at least as tightly as to production.
  • Secret sprawl across environments. A staging key with production access is a production key. Either keep environments separated by hard policy boundaries, or treat all environments as production-grade for secrets purposes.

This connects to the general posture of secure coding practices: secrets management is one strand of a defence-in-depth stance, not a substitute for the rest of it.

A Realistic Endpoint

A team running secrets management well looks calm during incidents. A leaked key triggers a 10-minute rotation playbook and a blameless postmortem; it does not trigger a weekend. New starters get access through SSO on day one and lose it within an hour of leaving. Rotation happens on a schedule nobody has to remember. The vault has the credentials; the team has the discipline.

The technical pieces are the easy part. The hard part is making rotation, auditing, and tidy access part of how the team works, week in and week out. Pick the smallest set of practices you will actually keep doing, and build from there.

Frequently asked questions

What is the difference between a secret and an environment variable?

Secrets usually travel as environment variables, but not every environment variable is a secret. A secret is any value that, if leaked, hands an attacker capability they did not have: API keys, database passwords, signing keys, OAuth client secrets. Configuration like LOG_LEVEL or PORT is not a secret. The distinction matters because secrets need rotation, audit, and access controls; ordinary configuration does not.

Do I really need a vault for a small project?

If you are the only developer and the project is a side hobby, a local .env file plus your platform's built-in secret store (Cloudflare Wrangler, Vercel, GitHub Actions encrypted secrets) is enough. The moment a second person needs the same credentials, or a credential needs to rotate, you want a vault. Sharing secrets over Slack DMs is the classic point at which projects pick up their first breach.

How often should secrets be rotated?

Frequency depends on blast radius. Database master credentials and signing keys: at least every 90 days, ideally on a schedule the vault automates. Service-to-service tokens with narrow scopes: every 6 to 12 months is reasonable if you also rotate on staff changes and on any suspected compromise. The honest answer is that rotation is most valuable when it is automated, because manual rotation tends to slip.

Should I use AWS Secrets Manager, HashiCorp Vault, or Doppler?

AWS Secrets Manager and Parameter Store make sense if you are already on AWS and want native IAM integration; they handle rotation for RDS and a few other services automatically. HashiCorp Vault is the most flexible and the most operationally heavy; pick it if you need dynamic secrets, multi-cloud, or fine-grained policy. Doppler, Infisical, and 1Password are easier to onboard for small teams and integrate well with CI providers. There is no single right answer; pick the one your team will actually use.

What should I do if a secret leaks into a public Git repo?

Rotate the credential immediately. Removing it from Git history is secondary; assume the leaked value is compromised regardless of how quickly you scrubbed the commit. After rotation, audit usage of the old credential in your logs, run TruffleHog or git-secrets across the repository to find any siblings, and add a pre-commit hook to stop it happening again. GitHub's secret scanning will often beat you to the discovery, which is a useful safety net but not a substitute for prevention.
