
Remote Threat Hunting from Hostels Without Losing Evidence
Threat hunts rarely wait until you're back in the office. I've combed through suspicious IAM logs from a hostel bunk in Lisbon and run memory analysis in a Mexican coworking space. The trick: treat your tooling like a portable SOC. Here's how I hunt threats from anywhere without spilling evidence or blowing through bandwidth.
The Lisbon IAM Compromise Hunt
June 2024, Lisbon. I was in a six-bed hostel near Rossio station when the SOC Slack channel lit up at 23:00: "Suspicious IAM role creation in prod AWS. Multiple regions. Looks like privilege escalation." I had a bottom bunk, two roommates snoring, and a 10 Mbps hostel Wi-Fi connection that dropped every fifteen minutes.
I grabbed my laptop, connected to my GL.iNet travel router (forcing all traffic through Mullvad VPN), and opened Azure Sentinel. The client's AWS logs were forwarded to Sentinel via CloudTrail ingestion. I ran a KQL query looking for CreateRole and AttachRolePolicy actions from unusual IPs. Found twelve suspicious events in thirty minutes, all originating from a Vietnamese IP range, all creating admin-level roles.
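When Sentinel ingestion lags, I sometimes sanity-check the same events straight from CloudTrail with the AWS CLI. A minimal sketch, assuming read access to the client account; the regions, dates, and output fields below are placeholders, not the actual incident values, and lookup-events only takes one attribute per call, so AttachRolePolicy needs a second pass:

# List CreateRole events per region with time, source IP, and caller identity
for region in us-east-1 eu-west-1 ap-southeast-1; do    # placeholder regions
  aws cloudtrail lookup-events --region "$region" \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateRole \
    --start-time 2024-06-14T00:00:00Z --end-time 2024-06-15T00:00:00Z \
    --output json |
  jq -r --arg r "$region" '.Events[].CloudTrailEvent | fromjson
    | [$r, .eventTime, .sourceIPAddress, (.userIdentity.arn // .userIdentity.type)] | @tsv'
done
# repeat with AttributeValue=AttachRolePolicy for the policy-attachment side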
I couldn't download the full CloudTrail logs to my laptop—too risky with unencrypted evidence on a shared network. Instead, I spun up a t3.xlarge EC2 instance in eu-central-1 running REMnux, copied the logs there via S3, and analyzed them remotely via SSH over Tailscale. Took me four hours to confirm: compromised access key from a phished developer, used to create backdoor IAM roles. I documented everything in Notion, stored evidence in an encrypted S3 bucket, and destroyed the EC2 instance before sunrise. Total cost: $2.40 in compute.
That hunt taught me the portable SOC model works. Here's the full architecture.
Lab Architecture (Cloud-First, Zero Local Evidence)
The rule: evidence never touches my laptop. Everything lives in the cloud, accessed via encrypted tunnels. This protects evidence integrity, prevents accidental exposure on shared networks, and means I can work from anywhere without hauling forensic images.
Primary investigation environment: Azure Sentinel. Costs ~$200/month for our log volume (50 GB/day). Sentinel ingests logs from AWS CloudTrail, Okta, Microsoft 365, CrowdStrike EDR, and custom syslog sources. I write KQL hunting queries in VS Code, version them in a private GitHub repo (hunt-queries/), and run them in Sentinel's hunting blade. Results stay in the cloud; I export to CSV only when needed for reporting, and even then it goes straight to the encrypted S3 evidence bucket, never to my local disk.
On-demand sandbox: AWS EC2 t3.xlarge ($0.1664/hour, ~$120/month if left running). I keep a Terraform module that spins up a REMnux instance (Ubuntu 20.04) with additional forensic tooling installed on top: Volatility 3, YARA, and Autopsy. The instance mounts an encrypted EBS volume (GP3, 100 GB) for temporary analysis. I SSH in via Tailscale, never over a public IP. When the hunt is done, I run terraform destroy and the instance vanishes. Evidence is exported to S3 before destruction.
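The lifecycle around that module is only a handful of commands. A sketch, assuming a local module directory, an incident_id variable, and a Tailscale node name of remnux-sandbox, all of which are illustrative rather than the real names:

# Spin up, work, tear down (module path, variable, and hostname are assumptions)
cd ~/infra/remnux-sandbox
terraform init && terraform apply -auto-approve -var "incident_id=IR-2024-061"

# Reach the box over the tailnet only; MagicDNS resolves the node name
ssh analyst@remnux-sandbox

# ...analysis happens on the instance; evidence goes to S3 from there...

terraform destroy -auto-approve   # nothing left running, nothing left to find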
Access control: Tailscale + FIDO2. My laptop, the EC2 sandbox, and my phone are all on a Tailscale mesh network (free tier, max 100 devices). Tailscale encrypts node-to-node traffic with WireGuard and can enforce device posture checks: my laptop must have its firewall enabled and disk encrypted or it gets kicked off the network. SSH to the sandbox requires a YubiKey 5 NFC (FIDO2). No passwords, no SMS 2FA.
Evidence bucket: S3 with SSE-KMS encryption. I use a dedicated bucket incidents-{year} with server-side encryption (KMS key managed by me, not AWS default). Lifecycle policy auto-migrates objects to Glacier Flexible Retrieval after 30 days (cost drops from $0.023/GB to $0.0036/GB). I keep a chain-of-custody CSV in the bucket root with artifact names, SHA256 hashes, and handler timestamps.
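Setting that lifecycle rule is a one-liner with the s3api; the bucket name is illustrative, and the storage class string for Glacier Flexible Retrieval is simply GLACIER:

# Transition evidence objects to Glacier Flexible Retrieval after 30 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket incidents-2024 \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-after-30d",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
    }]
  }'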
Pre-Hunt Checklist
[ ] Laptop connected via travel router (VPN enforced)
[ ] Tailscale authenticated; verify the encrypted tunnel to the cloud lab (pre-flight sketch below)
[ ] Incident runbook open in Standard Notes
[ ] Spare SSD for temporary artifacts mounted and encrypted
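A minimal pre-flight sketch for the checklist above, assuming the tailscale CLI is on the PATH and the sandbox registers as remnux-sandbox (a hypothetical node name):

tailscale status --json | jq -r '.Self.Online'     # expect true before touching anything
tailscale ping remnux-sandbox                      # confirms the tunnel to the cloud lab
curl -s https://am.i.mullvad.net/connected         # Mullvad's own "am I on the VPN" check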
Log Collection Workflow
- Scoping: Confirm hypotheses and datasets required (CloudTrail, Okta logs, EDR telemetry).
- Querying: Use Azure Sentinel hunting queries stored in git (./queries/privilege_escalation.kql). Example:
CloudAppEvents
| where ActionType == "Add user to role"
| where IPAddress !in (known_good_ips)
| project TimeGenerated, UserId, AppRoleId, IPAddress, DeviceDetail
- Export: Save query results to CSV in the Sentinel workspace; export to the evidence bucket via API, never to the laptop (see the sketch after this list).
- Case management: Update investigation tracker in Jira/Notion with query ID, findings, next steps.
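Running the export from the cloud sandbox keeps the laptop out of the data path entirely. A sketch, assuming the az CLI with the log-analytics extension, an illustrative workspace GUID, and placeholder bucket/incident paths:

# Run the saved hunt against the Sentinel workspace and land the results in S3
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "$(cat ./queries/privilege_escalation.kql)" \
  --timespan P7D --output json > /tmp/privesc_results.json

aws s3 cp /tmp/privesc_results.json \
  s3://incidents-2024/IR-2024-061/queries/privilege_escalation.json --sse aws:kms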
Memory & Artifact Analysis
- Request memory dumps from the SOC team via remote EDR (CrowdStrike RTR memdump).
- Upload to S3; copy the dump to the cloud sandbox's local volume and process with Volatility 3. Sample command:
aws s3 cp s3://incident-bucket/dumps/host1.mem /data/host1.mem
volatility3 -f /data/host1.mem windows.pslist > /tmp/pslist.txt
- Compress results (xz -z) before storing, to cut bandwidth if a download is eventually needed; the sketch below ties these steps together.
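Roughly, the sandbox side of that pipeline looks like this; the bucket, paths, and plugin list are illustrative rather than a fixed recipe:

# Pull the dump onto the sandbox's encrypted volume, triage, compress, push back
aws s3 cp s3://incident-bucket/dumps/host1.mem /data/host1.mem

for plugin in windows.pslist windows.netscan windows.malfind; do
  volatility3 -f /data/host1.mem "$plugin" > "/data/host1_${plugin}.txt"
done

xz -9 -T0 /data/host1_*.txt    # multithreaded compression before any transfer
aws s3 cp /data/ s3://incident-bucket/analysis/host1/ \
  --recursive --exclude "*" --include "*.xz"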
Bandwidth Management (Hostels Have Terrible Internet)
Hostel Wi-Fi is optimized for Instagram, not 40 GB forensic image downloads. I've learned to work around this:
Schedule heavy transfers during off-peak hours. In Lisbon, hostel bandwidth topped out at 10 Mbps between 20:00-23:00 (everyone streaming Netflix). After midnight, I got 40+ Mbps. If I need to pull a memory dump or disk image, I schedule it for 02:00-05:00 using at or a cron job, then go to sleep. By morning, it's done.
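Queueing the transfer is a one-liner with at; the source and destination paths here are placeholders:

# Pull the image at 02:00 while the dorm sleeps; check the log over breakfast
echo 'aws s3 cp s3://incident-bucket/images/host1.dd.xz /data/ >> /data/transfer.log 2>&1' | at 02:00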
Understand how the AWS CLI handles multipart transfers. For anything larger than the multipart threshold (8 MB by default), aws s3 cp automatically splits the transfer into parts and retries parts that hit transient errors, which is what saves a 20 GB copy when the connection hiccups at 18 GB. The --expected-size flag is not what turns this on; it is only needed when streaming data from stdin so the CLI can pick a sensible part size. What actually helps on a flaky link is tuning part size and concurrency, as sketched below.
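The relevant knobs all live in the CLI's S3 configuration; these values are a starting point for a flaky hostel link, not gospel:

aws configure set default.s3.multipart_threshold 64MB    # files above this size use multipart
aws configure set default.s3.multipart_chunksize 16MB    # smaller parts mean less to re-send per failure
aws configure set default.s3.max_concurrent_requests 4   # don't monopolise a shared uplink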
Tether to LTE if hostel Wi-Fi is unusable. I carry a local SIM with 50 GB/month data (~$30 in most EU countries). If hostel Wi-Fi drops every five minutes, I tether my laptop to my phone and throttle AWS CLI with aws configure set default.s3.max_bandwidth 2MB/s. This prevents saturating the LTE connection and lets me continue browsing/Slacking while the download runs.
Compress everything before transfer. A 10 GB memory dump compresses to ~3 GB with xz -9 (takes 20 minutes on a modern laptop, saves hours of bandwidth). I decompress on the EC2 sandbox, not locally.
Communication & Collaboration
- Maintain incident Slack channel; use threads to minimize chatter.
- Summaries via async Loom (compressed 720p) if visual context needed.
- Keep SOC leadership updated with bullet points: Hypothesis → Actions → Evidence → Next Steps.
Evidence Integrity
- Generate SHA-256 hashes for each artifact (shasum -a 256 file) and store the hash file alongside the artifact in S3; see the sketch after this list.
- Log chain-of-custody entries in a shared spreadsheet (timestamp, handler, artifact).
- For cross-border transfers, note jurisdiction and ensure encryption keys satisfy compliance requirements.
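A sketch of that routine as run from the sandbox; the bucket, incident prefix, and handler name are placeholders, and EVIDENCE_KMS_KEY is assumed to hold the customer-managed key ARN:

artifact="host1.mem.xz"
sha256=$(shasum -a 256 "$artifact" | awk '{print $1}')

# Append a chain-of-custody row: UTC timestamp, handler, artifact, hash
printf '%s,%s,%s,%s\n' "$(date -u +%FT%TZ)" "analyst" "$artifact" "$sha256" >> chain_of_custody.csv

aws s3 cp "$artifact" "s3://incidents-2024/IR-2024-061/artifacts/$artifact" \
  --sse aws:kms --sse-kms-key-id "$EVIDENCE_KMS_KEY"
aws s3 cp chain_of_custody.csv s3://incidents-2024/chain_of_custody.csv \
  --sse aws:kms --sse-kms-key-id "$EVIDENCE_KMS_KEY"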
Tooling to Bring
- CLI arsenal: awscli, az, gcloud, jq, volatility, gitleaks, detect-secrets.
- Obsidian vault with detection engineering notes accessible offline.
- Travel router configured with DNS over HTTPS to prevent DNS tampering by hostile networks.
Incident Wrap-Up
- Document findings, attach query IDs and evidence links.
- Tag detection engineering team for rule updates.
- Destroy cloud sandbox (terraform destroy) to avoid lingering cost/surface area.
- Sync final report to Confluence/Notion; notify stakeholders.
Threat hunting is less about where you sit and more about how you structure your environment. Keep analysis in the cloud, protect evidence with encryption and logs, and rehearse the workflow so responding from a hostel bunk feels just like home—only with worse lighting.