
Protecting API Keys and Secrets While Working Away from HQ

I once saw a roommate's AWS keys in a plaintext .env file. On open hostel Wi-Fi. In Buenos Aires.

That was the day I got paranoid about secrets.

Traveling doesn't excuse sloppy key storage. Here's how I keep API keys tight when I'm coding from cafés with sketchy routers.


Photo: Unsplash / Markus Spiske

March 2024: Buenos Aires

12-bed hostel dorm. Palermo. $14/night.

One of my roommates—let's call him Jake—was a freelance dev working for a U.S. startup. We'd been chatting about remote work over breakfast. He mentioned deploying a Django app to AWS.

That afternoon, I'm at the hostel's shared coworking table debugging Python. Jake's two seats away, also coding.

I glance at his screen. Not snooping—just natural peripheral vision when someone's four feet away.

VS Code open. File named .env. Plaintext AWS credentials.

AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Real keys. Production. Full AdministratorAccess permissions (I could tell from the repo name).

And this file? In his Git repo root. He'd added it to .gitignore, sure—but that only stops future commits. If he'd ever committed it before, those keys are in Git history forever.

I had to say something.

"Hey Jake. Heads up—you've got AWS keys on your screen right now. If you've ever committed that .env file, they're in your Git history. Anyone with repo access can pull them."

He looks confused. "But I gitignored it last week. Doesn't that protect it?"

"Only going forward. Check your history. git log --all --full-history -- .env."

He runs it.

Five commits.

He'd committed production AWS keys five times before adding .gitignore. The repo was private, but he'd added three other devs as collaborators. Any of them could extract the keys from Git history with like two commands.

Oh, and the hostel Wi-Fi? Open network. No WPA2 password. Anyone in the building could've been running Wireshark, sniffing Git traffic, running MITM attacks.

We spent the next two hours fixing it. Rotated the keys (aws iam create-access-key, aws iam delete-access-key). Rewrote Git history to nuke the .env file (git filter-branch with like six flags I had to look up). Force-pushed to GitHub. Notified the other collaborators.
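
For reference, the fix boiled down to a handful of commands; here's a sketch (the IAM user name and key ID are placeholders, not Jake's actual values):

# Issue a replacement key, update the app to use it, then kill the leaked one
aws iam create-access-key --user-name jake-deploy
aws iam delete-access-key --user-name jake-deploy --access-key-id AKIAIOSFODNN7EXAMPLE

# Strip .env from every commit on every branch, then overwrite the remote
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch .env' \
  --prune-empty --tag-name-filter cat -- --all
git push origin --force --all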

Estimated AWS bill if someone had found those keys and spun up EC2 instances for crypto mining? $10,000+.

That incident scared Jake. Scared me too, honestly. Because this mistake is so common—especially with traveling devs working from hostels and cafés. Shared networks, distractions, fatigue. Secrets leak.

Here's how I don't let that happen.

Secret Inventory & Classification

  • Map secrets: OAuth client secrets, AWS IAM keys, Stripe tokens, third-party API keys.
  • Risk tiers:
      • Tier 1 (critical): production cloud keys, payment processors.
      • Tier 2 (important): staging environments, analytics platforms.
      • Tier 3 (low): development sandbox keys.
  • Ownership: Each secret has an owner responsible for rotation and access approvals.

Storage Strategy (Costs & Tools)

| Use Case | Tool | Cost | Notes |
| :-- | :-- | :-- | :-- |
| Local development | 1Password CLI + op inject | $4.99/month (individual) or $7.99/month (families) | Pulls secrets on demand into environment variables, never writes to disk |
| CI/CD | HashiCorp Vault + GitHub OIDC | Self-hosted: free (but requires EC2/compute); HCP Vault: $0.03/hour per cluster (~$22/month minimum) | Short-lived tokens issued per workflow, no static secrets in GitHub |
| Runtime | AWS Secrets Manager | $0.40/secret/month + $0.05 per 10,000 API calls | Applications fetch at startup with IAM roles; automatic rotation supported |
| Emergency access | Encrypted USB drive (IronKey D300) | $180 one-time | Backup secrets for offline access if 1Password cloud is unreachable |
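
For the op inject entry in the table, the workflow is template-based; a minimal sketch, with file names of my choosing:

# config.yml.tpl holds 1Password template tags instead of real values, e.g.:
#   stripe_key: "{{ op://Client-A/Stripe/credential }}"
# Resolve them only when needed, and delete the output file when finished:
op inject -i config.yml.tpl -o config.yml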

My setup: I use 1Password Families ($7.99/month) for personal and client projects. I have 37 secrets stored across 8 vaults (Personal, Client A, Client B, etc.). Each vault has granular permissions—I can share a client vault with their team without exposing my personal AWS keys.

Critical rule: I never store actual secret values in .env files on disk. I use direnv + the 1Password CLI with op run --env-file=secrets.env -- <command> to load them only for the lifespan of a process. When the process exits, the secrets are wiped from memory. Example:


# secrets.env contains references to 1Password items, not actual secrets
# AWS_ACCESS_KEY_ID=op://Engineering/AWS-Production/username
# AWS_SECRET_ACCESS_KEY=op://Engineering/AWS-Production/credential

op run --env-file=secrets.env -- python deploy.py

This way, if my laptop is stolen or I accidentally leave secrets.env open on my screen in a café, no one can extract the actual secrets—they only see the 1Password item references.

Temporary Access for Travel

  • Short-lived IAM roles: AWS STS tokens with 12-hour lifetime issued via aws sts assume-role. My travel workstation assumes a “DeveloperTravel” role with limited permissions (sketch after this list).
  • API scopes: When possible, request granular scopes (GitHub fine-grained tokens). Create tokens per trip and delete on return.
  • Device binding: Secrets accessible only from devices enrolled in MDM + conditional access (Defender for Cloud Apps).
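
A sketch of that first bullet, assuming the role's maximum session duration has been raised to 12 hours and jq is installed (the account ID and session name are placeholders):

# Grab 12-hour credentials for the travel role
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/DeveloperTravel \
  --role-session-name laptop-bsas \
  --duration-seconds 43200 \
  --query Credentials --output json)

# Export them for this shell only
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)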

Rotation Workflow

  1. Schedule: Tier 1 secrets rotate every 30 days; Tier 2 every 90; Tier 3 as needed.
  2. Automation: Use Vault’s dynamic secrets or AWS Secrets Manager rotation Lambdas (example call after this list).
  3. Notification: Rotation triggers Slack alerts to project channel with new retrieval instructions.
  4. Revocation: On trip end, run terraform apply to revoke temporary roles and delete ephemeral tokens.
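
For the AWS Secrets Manager route, attaching the 30-day schedule is a single call; a sketch with placeholder names:

# Attach a rotation Lambda and a 30-day schedule to an existing secret
aws secretsmanager rotate-secret \
  --secret-id prod/client-a/db-credentials \
  --rotation-lambda-arn arn:aws:lambda:us-east-1:123456789012:function:rotate-rds \
  --rotation-rules AutomaticallyAfterDays=30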

Development Workflow On The Road

  • Bootstrap script:

#!/usr/bin/env bash
set -euo pipefail
# Sign in and export the session token (1Password CLI v2 prints an export command)
eval "$(op signin --account my.1password.com)"
# Launch a shell with the .env.travel references resolved into real values
op run --env-file=.env.travel -- zsh
  • .env.travel contains references to secret item IDs, not values. Example: AWS_ACCESS_KEY_ID=op://Engineering/AWS/username
  • I run this in a secure tmux session; when I exit, secrets are wiped from memory.

CI/CD Hardening

  • OIDC Federation: GitHub Actions obtains cloud credentials via OIDC identity provider—no static keys stored.
  • Secret scanning: Enable GitHub Advanced Security; also run TruffleHog locally before pushing (commands after this list).
  • Environment protection: Require manual approval for deploying from non-standard IPs when I’m on the road.
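
The local scans from the secret-scanning bullet are one-liners; exact flags vary a bit between tool versions:

# Scan the full Git history of the current repo for leaked credentials (TruffleHog v3)
trufflehog git file://.

# Scan the working tree and history with Gitleaks before pushing
gitleaks detect --source . --verbose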

Incident Prevention

  • Pre-commit hooks: .git/hooks/pre-commit runs a detect-secrets scan. If secrets are detected, the commit is blocked (hook sketch after this list).
  • Gitleaks in CI: Fails pipeline if secrets slip through.
  • Filesystem monitoring: osquery checks for new files containing patterns like AKIA[0-9A-Z]{16}.
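
A minimal sketch of that pre-commit hook, using detect-secrets-hook (the companion script the detect-secrets package ships for exactly this) and assuming a .secrets.baseline generated beforehand:

#!/usr/bin/env bash
# .git/hooks/pre-commit -- abort the commit if staged files contain likely secrets
set -euo pipefail

# Only scan what is actually staged; a non-zero exit blocks the commit
git diff --cached --name-only -z | xargs -0 detect-secrets-hook --baseline .secrets.baseline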

Failure Stories That Improved My Process

Istanbul, May 2024: GitHub token leaked in CI logs. I was deploying a client project using GitHub Actions. The workflow needed to call a third-party API (Stripe) during the build process. I stored the Stripe API key as a GitHub secret (secrets.STRIPE_API_KEY) and referenced it in the workflow YAML. The deployment worked fine.

Three days later, GitHub sent me a security alert: "Secret scanning detected a Stripe API key in your repository." I panicked—I'd stored the key in GitHub secrets, not in the code. How did it leak?

I checked the Actions logs. The workflow had a debug step that printed all environment variables with printenv. GitHub masks secrets in logs by default, but the Stripe API key had been passed to a shell script, which then echoed it to stdout. GitHub's masking didn't recognize the output as a secret because the value had been slightly modified (the script appended a timestamp before logging it), so the logged string no longer matched the stored secret exactly. The full key was visible in the public Actions logs for 3 days before GitHub's secret scanning caught it.

I rotated the Stripe key immediately, audited Stripe's transaction logs (no suspicious activity), and updated the workflow to never echo secrets. Lesson learned: GitHub secret masking isn't foolproof. Never print environment variables in CI/CD logs, even for debugging. If you absolutely must, filter out anything that looks sensitive first (env | grep -vE 'SECRET|KEY|TOKEN'); a plain grep -v SECRET wouldn't even have caught a variable named STRIPE_API_KEY.
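
A stricter habit, if you must inspect a CI environment at all, is to log variable names only so no value can end up in the output:

# List which variables are set without echoing any of their values
env | cut -d= -f1 | sort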

Lisbon, August 2024: 1Password CLI session expired mid-deployment. I was deploying a client's infrastructure using Terraform. My Terraform variables were stored in 1Password and loaded via op run. I started the deployment at 14:00. The op signin session expires after 30 minutes of inactivity. At 14:35, mid-deployment, Terraform tried to read a secret and got an error: "1Password session expired. Please run op signin again."

The Terraform deployment failed halfway through, leaving infrastructure in an inconsistent state (some resources created, others not). I had to manually clean up and re-run the deployment. Lesson learned: sign in again right before kicking off a long operation so the session is fresh, and load secrets into environment variables at the start of the process (op run resolves them once at launch) rather than fetching them on demand partway through.

Mexico City, January 2025: Forgot to revoke temporary keys after trip. After a two-week trip to Mexico City, I returned home and forgot to revoke my temporary travel credentials. The STS tokens had a 12-hour lifetime, so those expired on their own, but I'd also created a GitHub fine-grained personal access token with repo and workflow permissions, valid for 90 days, and I forgot to delete it when I got home. Sixty days later, GitHub sent me an expiration warning. I realized I'd left the token active for two months longer than necessary. Fortunately, no one abused it, but it was a sloppy mistake. Lesson learned: add a reminder to my calendar (last day of travel + 1 day) to revoke all temporary credentials. I now use a script that lists all my active tokens/roles and prompts me to delete them (sketch below).
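
That script is nothing fancy; a trimmed-down sketch (the IAM user name is a placeholder):

#!/usr/bin/env bash
# Post-trip credential sweep: surface anything that might still be live
set -euo pipefail

echo "== Long-lived AWS access keys =="
aws iam list-access-keys --user-name travel-dev

echo "== 1Password accounts this machine can sign in to =="
op account list

echo "Reminder: delete travel-scoped fine-grained tokens at https://github.com/settings/tokens"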

Emergency Response

If a key leaks (detected via GitHub alert, AWS GuardDuty, or TruffleHog scan):

  1. Revoke immediately (aws iam delete-access-key or provider's equivalent). Don't wait to confirm abuse—revoke first, investigate later.
  2. Rotate linked secrets (database credentials, app tokens). If an AWS key leaked, rotate all secrets that the compromised role had access to (RDS passwords, API Gateway keys, etc.).
  3. Audit logs for misuse (CloudTrail, payment gateway logs). Search for API calls from unfamiliar IP addresses or regions. In Jake's case (Buenos Aires incident), we checked CloudTrail for any EC2 RunInstances calls from non-U.S. IPs in the 48 hours after the key was potentially exposed (example query after this list).
  4. Notify stakeholders with incident report template. I have a template in Notion: "Incident summary, timeline, scope of exposure, actions taken, preventive measures." I send this to clients within 24 hours of discovering a leak.
  5. Update playbook to close gaps (e.g., add additional detection rules). After the GitHub CI log leak in Istanbul, I added a Gitleaks rule to scan CI logs for secrets and enabled GitHub's required status checks to block deployments if Gitleaks fails.
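
The CloudTrail check from step 3 is a single query; the time window below is illustrative:

# Look for EC2 launches during the exposure window
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
  --start-time 2024-03-14T00:00:00Z \
  --end-time 2024-03-16T00:00:00Z \
  --query 'Events[].{Time:EventTime,User:Username,Source:EventSource}' \
  --output table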

Tools I Use (Full Stack, With Costs)

  • 1Password CLI (included with 1Password subscription, $7.99/month): Load secrets into environment variables.
  • TruffleHog (free, open-source): Scan Git history for secrets before pushing. I run it in a pre-commit hook.
  • Gitleaks (free, open-source): Scans commits and CI logs for secrets. Integrated into GitHub Actions.
  • detect-secrets (free, open-source, Yelp): Scans files for high-entropy strings that might be secrets. I run it before committing.
  • AWS Secrets Manager ($0.40/secret/month): Store production secrets with automatic rotation.
  • GitHub Advanced Security (secret scanning alerts are free on public repositories; full Advanced Security is a paid add-on rather than part of the $4/user/month Team plan): Secret scanning, code scanning, dependency review.

Total monthly cost for a solo developer: ~$12/month (1Password + AWS Secrets Manager for ~10 secrets). For a small team (5 developers): ~$50/month.

The Checklist


[ ] Never store secrets in plaintext files (no .env on disk)
[ ] Use 1Password CLI - secrets load into memory, disappear when done
[ ] Short-lived tokens only (12-hour AWS STS, delete GitHub tokens after trips)
[ ] Run detect-secrets before every commit (pre-commit hook)
[ ] Rotate everything when you get home

Secrets are skeleton keys to your infrastructure.

Treat them like postcards? They'll disappear. Treat them like gold—vaulted, catalogued, rotated—and you can code from any café without waking up to a $10K AWS bill.

That Buenos Aires incident taught me: it's not paranoia if the threat's real. And on open hostel Wi-Fi with plaintext .env files?

The threat's very real.