
Multi-Region Cloud Backups on a Nomad Budget
Back in 2021 my single-region S3 bucket went dark during an AWS outage. No data lost, but several hours of downtime taught me a simple truth: nomads need multi-region backups just as much as enterprises do. Here's the system I run now—resilient storage that spans continents, doesn't break the bank, and can be restored from a café table.
The Morning Everything Went Dark
December 7, 2021. I was in a coworking space in Porto, drinking my third espresso of the morning, when a client pinged me on Slack: "Can you send me last week's deliverables? Our end is down." I pulled up the AWS console. The S3 bucket returned a 503. I refreshed. Still 503. I checked the AWS status page. US-East-1 was listed as "degraded." My bucket was in US-East-1.
The client needed files within the hour for a board presentation. I had local copies of some projects, but not the one they needed. That project was backed up exactly once: to S3, in one region, in one AWS account. I spent the next forty minutes explaining to a very unhappy VP of Engineering why I couldn't produce a file that "should be in the cloud."
The outage lasted seven hours. By the time S3 came back online, I had a spreadsheet open with three cloud storage providers, a trial Backblaze account, and a half-written script to replicate everything across continents. That system has saved me twice since then. Here's how it works.
Three-Provider Strategy That Actually Works
After the Porto disaster, I designed a backup system with three rules: different providers, different regions, and cheap enough that I won't skip backups to save $20/month.
Primary storage: AWS S3 Standard in eu-west-1 (Ireland). This holds active projects, client deliverables, and anything I might need to retrieve quickly. About 180 GB as of today. Cost: $4.20/month storage plus $0.50/month in GET requests. I picked eu-west-1 because I spend a lot of time in Europe, and latency to pull a file is under 40ms from most cities.
Secondary copy: Backblaze B2 in us-west (Arizona). Backblaze is not AWS. If AWS melts down, Backblaze almost certainly won't be down at the same time. I use rclone to sync the S3 bucket to B2 every 48 hours. The B2 bucket is encrypted with a separate passphrase stored in 1Password. About 180 GB mirrored. Cost: $0.90/month (B2 storage is $0.005/GB/month, and egress is free up to 3x the data you store; more on that below).
Tertiary archive: Wasabi ap-southeast-1 (Singapore). Wasabi has flat-rate pricing with no egress fees, which makes it perfect for archival snapshots. I push monthly snapshots here: full backups of everything, tagged with the month. If both AWS and Backblaze explode, I still have a monthly recovery point. About 180 GB stored as monthly snapshots (I keep three months, then delete the oldest). Cost: $5.99/month for the first 1 TB (Wasabi has a 1 TB minimum).
Offline copy: IronKey D300 Managed 128GB USB. Once a month I run restic copy to a hardware-encrypted USB drive. I keep one IronKey in my laptop bag and a second one at my parents' house in the US. The drives rotate every six months when I visit. If the internet ceases to exist, I still have a snapshot from last month.
Tooling: Restic + rclone. Restic handles deduplication, compression, and encryption. Rclone provides the multi-cloud abstraction so I'm not writing separate scripts for S3, B2, and Wasabi. Both run on macOS (my daily driver) and Linux (my backup VPS for automation).
Total monthly cost: $11.59 plus AWS egress when I restore. That's less than two coworking day passes.
Restic Configuration
Restic itself is configured through environment variables and a repository URL; I keep those in a templated wrapper config at /etc/restic/config.yml, which is where the repository aliases (s3_primary, b2_secondary, wasabi_archive) used in the commands below are defined:
repositories:
  s3_primary:
    url: s3:s3.amazonaws.com/secure-roamer-backups
    env:
      AWS_ACCESS_KEY_ID: $AWS_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET
  b2_secondary:
    url: b2:secure-roamer-backups
    env:
      B2_ACCOUNT_ID: $B2_ID
      B2_ACCOUNT_KEY: $B2_KEY
  wasabi_archive:
    url: s3:https://s3.ap-southeast-1.wasabisys.com/secure-roamer-archive
    env:
      AWS_ACCESS_KEY_ID: $WASABI_ID
      AWS_SECRET_ACCESS_KEY: $WASABI_KEY
password-file: /usr/local/etc/restic/passphrase
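Each repository needs a one-time restic init before the first backup. With the matching credentials and passphrase exported, that's roughly:

restic -r s3:s3.amazonaws.com/secure-roamer-backups init
restic -r b2:secure-roamer-backups init
restic -r s3:https://s3.ap-southeast-1.wasabisys.com/secure-roamer-archive init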
Backup Command
restic -r s3_primary:/ backup ~/Projects ~/Obsidian --exclude-file ~/.config/restic/excludes.txt --verbose
restic -r b2_secondary:/ backup ~/Projects ~/Obsidian --verbose
- Primary runs nightly at 02:00 local.
- Secondary runs every 48 hours, on odd-numbered days.
- Archive runs monthly; those snapshots are tagged --tag monthly (see the sketch below).
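The monthly archive run is the same backup invocation pointed at the Wasabi repository, tagged so old archives are easy to find and delete later. A minimal sketch using my repository alias:

restic -r wasabi_archive:/ backup ~/Projects ~/Obsidian --tag monthly --verbose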
Cost Optimizations That Actually Matter
When I started, the system cost $47/month because I didn't understand storage tiers and kept everything in hot storage. Here's how I cut it to $11.59:
Lifecycle rules on S3. Any object older than thirty days automatically moves to S3 Glacier Instant Retrieval. Glacier IR costs $0.004/GB/month versus $0.023/GB for Standard, and retrieval is still milliseconds (versus hours for Deep Archive). I set this up with an S3 lifecycle policy (the empty Filter applies the rule to the whole bucket):
{
  "Rules": [{
    "Id": "MoveToGlacierIR",
    "Filter": {},
    "Status": "Enabled",
    "Transitions": [{
      "Days": 30,
      "StorageClass": "GLACIER_IR"
    }]
  }]
}
That policy alone saves about $3/month.
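Applying it from the CLI looks something like this, with the policy saved as lifecycle.json:

aws s3api put-bucket-lifecycle-configuration --bucket secure-roamer-backups --lifecycle-configuration file://lifecycle.json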
Restic's deduplication magic. Restic splits files into variable-length chunks and only stores unique chunks. If I have five versions of a 200 MB Figma file and only 10 MB changed between versions, Restic stores roughly 240 MB (the original 200 MB plus four 10 MB deltas), not 1 GB. This keeps B2 and Wasabi costs low even though I'm running daily backups.
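To see how much deduplication is actually saving, restic can report both the logical size of all snapshots and the deduplicated data sitting in the repo (shown here with the same repository alias as above):

restic -r s3_primary:/ stats --mode restore-size
restic -r s3_primary:/ stats --mode raw-data

The first number is what restoring every snapshot would write out; the second is roughly what's stored in the bucket.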
Pre-compress video and media. Before backing up client video projects, I use HandBrake CLI to convert to HEVC (H.265). A 4 GB ProRes file compresses to ~600 MB with no perceptible quality loss for archival purposes. Command:
HandBrakeCLI -i input.mov -o output.mp4 --encoder x265 --quality 22
I don't do this for active projects, only for archives older than ninety days.
Use B2's free egress allowance. Backblaze gives you 3x your stored data per month in free egress. With 180 GB stored, I get 540 GB/month of free downloads. As long as I don't restore everything three times in one month, egress is $0.
Avoid cross-region replication in AWS. AWS Cross-Region Replication (CRR) costs $0.02/GB to transfer between regions. Instead, I run rclone on a $5/month Hetzner VPS in Finland. The VPS pulls from S3 eu-west-1 and pushes to B2/Wasabi. After the initial seed, each sync only pulls the pack files that changed, which fits inside AWS's 100 GB/month of free internet egress, and Hetzner includes 20 TB/month of outbound traffic. That setup saves ~$7/month versus CRR.
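What runs on the VPS is a plain rclone mirror. A minimal sketch, assuming remotes named s3-primary and b2-secondary are already defined in rclone.conf:

# mirror the restic repo in S3 to the B2 bucket; only changed pack files transfer
rclone sync s3-primary:secure-roamer-backups b2-secondary:secure-roamer-backups --fast-list --transfers 8 --log-file /var/log/rclone-mirror.log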
Restore Drills (Because Backups You Can't Restore Are Useless)
I learned this the hard way in 2019 when I discovered my backup passphrase was typo'd and I couldn't decrypt anything. Now I run a full restore drill every quarter, timed and logged:
Step 1: Spin up a clean EC2 instance in us-east-1. I use a t3.micro (free tier eligible). This simulates restoring from a borrowed laptop or a fresh machine in a different region. I SSH in and install dependencies:
sudo apt update && sudo apt install -y restic rclone curl
Step 2: Configure credentials from 1Password. I copy the Restic passphrase and cloud credentials out of 1Password using its CLI (op). This tests whether I can actually access my secrets in an emergency.
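In practice that's a handful of op read calls. A sketch, assuming the secrets live in a vault called Backups (item and field names here are mine):

# sign in first (skip if the 1Password desktop app integration is enabled)
eval $(op signin)
# pull secrets straight into the environment variables restic and the AWS SDK read
export RESTIC_PASSWORD="$(op read 'op://Backups/restic/passphrase')"
export AWS_ACCESS_KEY_ID="$(op read 'op://Backups/aws-backup-user/access-key-id')"
export AWS_SECRET_ACCESS_KEY="$(op read 'op://Backups/aws-backup-user/secret-access-key')"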
Step 3: Restore a critical project. I pick one client project folder and restore it:
restic -r s3:https://s3.eu-west-1.amazonaws.com/secure-roamer-backups restore latest --target /tmp/restore --include "Projects/client-x"
I time the operation. Target: under twenty minutes to restore 5 GB. Typical result: twelve to sixteen minutes depending on EC2 network.
Step 4: Verify integrity. I keep a local SHA256 manifest of key files. After restore, I run:
sha256sum /tmp/restore/Projects/client-x/final-deliverable.zip
and compare against my manifest. Mismatch means corruption or incomplete restore.
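To check more than one file, the same idea works with a manifest generated from the live project, assuming the relative paths in the manifest match the restore layout:

# build the manifest once from the live project
cd ~/Projects && find client-x -type f -exec sha256sum {} + > ~/manifests/client-x.sha256
# after a drill, verify every file in one pass
cd /tmp/restore/Projects && sha256sum -c ~/manifests/client-x.sha256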
Step 5: Repeat with B2. I run the same restore from the Backblaze repository to make sure cross-provider backups actually work:
restic -r b2:secure-roamer-backups restore latest --target /tmp/restore-b2 --include "Projects/client-x"
sha256sum /tmp/restore-b2/Projects/client-x/final-deliverable.zip
If the checksums match across S3 and B2, I know the system is solid.
Step 6: Log everything. I document the drill in a Notion database: date, duration, bandwidth consumed, any errors, and action items. My target is sub-twenty-minute restores. If it takes longer, I investigate—maybe I need a bigger EC2 instance type or a CDN in front of S3.
One drill in Kraków exposed that my Wasabi credentials had expired. I would've discovered that during a real emergency if I hadn't run the drill. Now I check credential expiry during every quarterly test.
Security Controls
- Encryption: Restic encrypts everything with AES-256. The passphrase lives in 1Password, with an offline copy sealed in a safe.
- MFA delete: Enabled on the S3 bucket; permanently deleting object versions requires the root user's MFA code.
- Access policy: Least-privilege IAM role with s3:PutObject and s3:GetObject only, and no s3:DeleteObject except for the rotation script (sketch below).
- Audit logs: CloudTrail and B2 transaction logs exported to a separate bucket for review.
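A sketch of that least-privilege policy, assuming the bucket name from earlier; delete permissions (which restic prune and stale-lock cleanup need) sit in a separate policy scoped to the rotation script:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ResticObjectReadWrite",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::secure-roamer-backups/*"
    },
    {
      "Sid": "ResticListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::secure-roamer-backups"
    }
  ]
}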
Automation
A launchd plist runs restic nightly on macOS. On the Linux VPS, a systemd service plus timer does the same (both below):
[Unit]
Description=Restic backup

[Service]
Type=oneshot
EnvironmentFile=/etc/restic/env
ExecStart=/usr/local/bin/restic -r s3_primary:/ backup /home/user/Projects

[Install]
WantedBy=multi-user.target
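And a minimal timer to pair with it, assuming the unit above is saved as restic-backup.service:

[Unit]
Description=Nightly restic backup at 02:00

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

systemctl enable --now restic-backup.timer turns it on; Persistent=true catches up on a run missed while the VPS was down.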
Offline Copy
- Monthly restic copy to the IronKey stored in a separate bag (sketch below).
- Keep the second IronKey at home base. The rotating pair ensures at least one offline snapshot always exists.
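A sketch of that monthly copy, assuming restic 0.14 or newer and an IronKey that mounts at /Volumes/IRONKEY with a repo already initialized on it:

# copy the latest snapshots from the primary S3 repo into the offline repo
restic -r /Volumes/IRONKEY/restic-repo --password-file /usr/local/etc/restic/passphrase \
  copy --from-repo s3:s3.amazonaws.com/secure-roamer-backups \
  --from-password-file /usr/local/etc/restic/passphrase latest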
Monitoring and Alerts
Backups fail silently all the time. A credential expires, the network hiccups during the cron job, or you fat-finger a config file. Without monitoring, you discover the failure three months later when you need to restore.
I use Healthchecks.io to ping after every successful backup. I create three separate checks (one for S3, one for B2, one for Wasabi) and append a curl call to the end of each backup script:
restic -r s3_primary:/ backup ~/Projects && curl -fsS --retry 3 https://hc-ping.com/YOUR-UUID-HERE
If Healthchecks doesn't receive a ping within 25 hours (I set the grace period to 1 hour past the expected 24-hour interval), it fires an alert to my Signal and email. I get a notification like "S3 backup missed expected check-in" within an hour of failure.
I also log all Restic output to ~/Library/Logs/restic.log and forward it to Logtail (now part of Better Stack) using Fluent Bit. Logtail lets me search backup logs from my phone if I'm troubleshooting from a café. Sample Fluent Bit config:
[INPUT]
    Name   tail
    Path   /Users/username/Library/Logs/restic.log

[OUTPUT]
    Name   http
    Match  *
    Host   in.logtail.com
    Port   443
    URI    /
    Header Authorization Bearer YOUR_LOGTAIL_TOKEN
    tls    On
For audit logs, I enable CloudTrail data events on the S3 bucket (about $0.30/month at my volume) and export B2 transaction logs to a separate bucket monthly. I review them in Notion to catch any unexpected access patterns, like an API key being used from an IP I don't recognize.
Two Times This System Saved Me
Incident one: Laptop stolen in Barcelona. Someone grabbed my MacBook from a café table while I was in the bathroom. Forty-eight hours later I had a replacement MacBook Air. I spun up an EC2 instance, restored the last S3 snapshot (92 GB), and rsynced it to the new laptop. Total downtime: about two days, most of which was waiting for the replacement to arrive from the Apple Store; the restore itself took around six hours.
Incident two: Corrupted SSD in Belgrade. My laptop's SSD started throwing I/O errors. I couldn't boot. I borrowed a USB-C enclosure from the coworking space, pulled what I could off the drive (about 40% readable), then restored everything else from Backblaze. Because I had the B2 copy in Arizona, I didn't have to wait for AWS to be healthy. Restore took ninety minutes over the coworking Wi-Fi. I was back online by lunch.
Both times, the multi-region setup mattered. If I'd only had S3 in one region, I'd have been stuck during the Barcelona theft because I couldn't access AWS from the police station (long story involving a dead phone battery). B2 saved me because I could log in from a borrowed laptop and pull files immediately.
The Checklist I Run Monthly
[ ] Verify nightly S3 backup completed (check Healthchecks.io)
[ ] Verify B2 sync ran within last 48 hours
[ ] Run monthly Wasabi snapshot with --tag monthly
[ ] Copy latest snapshot to IronKey (rotate if 6 months elapsed)
[ ] Review CloudTrail + B2 transaction logs for anomalies
[ ] Check Restic logs for warnings or errors
[ ] Test-restore one random file (< 5 min drill)
Why This Matters More When You're Nomadic
If you work from an office with IT support and local backups, a cloud outage is an inconvenience. When you're in a Chiang Mai hostel with flaky Wi-Fi and your client needs a file in twenty minutes, a single-region backup is a career risk.
Multi-region backups aren't overkill. They're the minimum viable resilience for anyone whose livelihood depends on being able to produce files from anywhere, anytime, even when half the internet is on fire. For $11.59/month, that's the cheapest insurance policy I've ever bought.