What I learned from going 'cloud-optional' with my side projects.

My AI is local, my code is hand-rolled: Why I'm falling in love with building small again

It was 1:47 a.m. when I caught myself refreshing the AWS billing dashboard like a nervous parent checking their kid’s location. My “quick little side project” is a tool I called LinkNibble that saves articles, pulls the content, and gives me a smart summary so I don’t waste time on fluff. It had somehow turned into a Frankenstein of Lambda functions, API Gateway, RDS, S3 buckets, EventBridge rules, and a suspicious number of IAM roles.

All I wanted was a place to throw links and get decent summaries. Instead, I was paying enterprise money for what felt like enterprise complexity.

That night I finally admitted it: I had done the classic indie hacker thing. I’d reached for the shiniest, most “future-proof” tools for a problem that didn’t need them. My monthly cloud bill wasn’t ridiculous yet, but it was heading there, and every small change felt like untangling Christmas lights.

The worst part? I wasn’t even learning that much anymore. I was just wiring together managed services and hoping the bill didn’t spike.

So I decided to try something different. I went cloud-optional.

Not “delete my AWS account and move to a cabin” cloud-optional. More like… intentional. I stopped defaulting to serverless everything and started asking: What’s the simplest thing that actually works here?

Sometimes that’s still AWS or GCP. Sometimes it’s a $5-10 VPS, Docker Compose, and tools I control. For LinkNibble, it meant moving most of the workload to a single Hetzner server, running a local LLM with Ollama, and hand-rolling a few things that used to be fancy cloud services.

This post is the story of that shift. The specific things I moved, the stupid mistakes I made, the unexpected joys I found, and what I learned about cost, control, and why I’m genuinely having more fun building side projects again.

The Cloud Honeymoon Phase (and the slow realization it was a trap)

LinkNibble started innocently enough. I wanted a personal tool that would let me save interesting links, strip out the junk, and give me a concise summary so I could decide if it was worth my time. Nothing groundbreaking, just something useful for me.

I spun it up on Heroku’s free tier in a single evening. It felt amazing. Zero servers to manage, Git push to deploy, Postgres add-on with one click. I was in love. “This is why people say cloud is magical,” I thought.

Then, like every good honeymoon, reality slowly crept in.

As I added features such as background refreshing of links, better summarization, and a tiny web UI, I started reaching for more “proper” cloud tools:

  • OpenAI API for smarter summaries
  • AWS Lambda + API Gateway when Heroku started feeling limiting
  • RDS for the database because “I should use real Postgres”
  • S3 + CloudFront for static files
  • EventBridge for scheduled jobs

Before long, my simple cron-like job for refreshing articles had turned into a Lambda function, an S3 bucket to store content, API Gateway triggers, and three different IAM roles that I barely understood. All so I could run what was essentially python scrape_and_summarize.py every few hours.

I still remember the exact moment the spell broke. I opened my AWS console one morning and saw a bunch of billable services I had completely forgotten were running. My “fun side project” now had more moving parts than some production apps I’ve worked on at my day job.

The real kicker? Most months the bill was only $30–50. Not bank-breaking, but completely stupid for something where I was the only user. I wasn’t getting better at building software, I was getting better at gluing AWS services together. Every time I wanted to add a small feature or debug something, I had to think about permissions, cold starts, logging costs, and vendor limits.

I had accidentally built a distributed system for a problem that used to be solved with a single Python script and a crontab entry on a cheap VPS.
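For contrast, the crontab version of that entire pipeline really is a single line (the script path here is a placeholder, not the project’s actual layout):

```
# m h dom mon dow — run the scraper/summarizer every 4 hours, append output to a log
0 */4 * * *  /usr/bin/python3 /home/me/scrape_and_summarize.py >> /var/log/linknibble.log 2>&1
```

One line you can read, versus a Lambda, an S3 bucket, API Gateway triggers, and three IAM roles.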

That’s when I decided enough was enough.

What “Cloud-Optional” Actually Means to Me

I didn’t go full “off-grid dev”. I’m not anti-cloud. Some things are genuinely magical in the cloud. But I got tired of defaulting to complex cloud services for everything, especially when I was the only person using the project.

Cloud-optional, for me, is simple:

Choose the right tool for the actual problem, not what looks best on a résumé or feels “production-grade.”

For most of my side projects, the requirements are pretty humble: it should work reliably, cost almost nothing, be easy to maintain, and ideally feel like mine. That often means using a VPS for the core stuff and keeping cloud services only where they truly add value.

So I started migrating LinkNibble piece by piece. Here’s what the shift actually looked like:

Before (Cloud-heavy):

  • Backend & API → AWS Lambda + API Gateway
  • Scheduler → EventBridge
  • Database → AWS RDS Postgres
  • File storage → S3
  • Summaries → OpenAI API
  • Hosting → Mix of Lambda and CloudFront

After (Cloud-optional):

  • Backend & API → FastAPI running in a Docker container on a single Hetzner VPS ($4.70–5.90/month)
  • Scheduler → Simple systemd timer (literally a cron job again)
  • Database → PostgreSQL in Docker (I also added a lightweight SQLite mode for even simpler deploys)
  • File storage → Local filesystem on the VPS
  • Summaries → Ollama running a local model (Llama 3.1 8B or Phi-3)
  • Web server → Caddy (automatic HTTPS, super clean config)
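For a rough idea of scale, that whole “after” stack fits in one docker-compose.yml. Here’s a hedged sketch of what it might look like — service names, credentials, and volume layout are illustrative, not LinkNibble’s actual config:

```yaml
services:
  app:
    build: .                  # the FastAPI backend
    expose: ["8000"]
    environment:
      DATABASE_URL: postgresql://linknibble:secret@db:5432/linknibble
      OLLAMA_URL: http://ollama:11434
    depends_on: [db, ollama]

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: linknibble
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data   # persistent storage (the bit I broke twice)

  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama              # keep downloaded models across restarts

  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data                  # Let's Encrypt certificates live here

volumes:
  pgdata:
  ollama:
  caddy_data:
```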

The first time I ran a full refresh cycle locally with Ollama and it just worked, without any API keys or token counters, I got this stupid grin on my face. My AI was finally local. No more surprise bills if I accidentally processed a few extra articles. No more generic corporate tone in the summaries. I could tweak the model or prompt to my heart’s content.
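That zero-key workflow is easy to picture: Ollama exposes a local HTTP API on port 11434, so the summarizer is basically one small POST. A minimal sketch — the prompt wording, model tag, and function names are mine, not LinkNibble’s actual code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_summary_prompt(title: str, text: str, max_words: int = 120) -> str:
    """Build the summarization prompt sent to the local model."""
    return (
        f"Summarize the following article in at most {max_words} words. "
        f"Be concrete and skip the fluff.\n\nTitle: {title}\n\n{text}"
    )


def summarize(title: str, text: str, model: str = "llama3.1:8b") -> str:
    """Ask a local Ollama model for a summary: no API keys, no per-token cost."""
    payload = json.dumps({
        "model": model,
        "prompt": build_summary_prompt(title, text),
        "stream": False,  # return one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping models or rewriting the prompt is a one-line change, which is exactly the kind of tinkering per-token billing discourages.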

The cron job that had become this elaborate Lambda + S3 + IAM dance? Now it’s literally one script and a timer. I can read the code, understand it, and change it in seconds.
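The “one script and a timer” setup is just two tiny systemd unit files. A sketch, with illustrative paths and schedule:

```ini
# /etc/systemd/system/linknibble-refresh.service
[Unit]
Description=Refresh saved links and re-run summaries

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /opt/linknibble/scrape_and_summarize.py

# /etc/systemd/system/linknibble-refresh.timer
[Unit]
Description=Run the refresh every few hours

[Timer]
# every 4 hours; Persistent=true catches up after downtime
OnCalendar=*-*-* 00/4:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it once with `systemctl enable --now linknibble-refresh.timer` and check it with `systemctl list-timers`.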

It felt like I had decluttered my own brain.

The Real Work (and the pains I definitely didn’t expect)

Migrating everything sounded straightforward in theory. In practice? It was a classic mix of “this is awesome” and “why did I do this to myself to begin with?”

The first weekend I sat down with a fresh Hetzner VPS. I installed Docker, wrote a docker-compose.yml file, and fired everything up. For about twenty glorious minutes, it felt like I had superpowers. FastAPI, PostgreSQL, Redis (for caching), Ollama, and Caddy all running together nicely.

Then reality hit.

I spent an embarrassing amount of time fighting permissions on Docker volumes. I broke my database twice because I didn’t properly set up persistent storage. One night I couldn’t figure out why Caddy wasn’t getting a Let’s Encrypt certificate, only to realize I had fat-fingered the domain name. Classic.
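For what it’s worth, the Caddy config doing all that automatic-HTTPS work is tiny, which is why a one-character typo in the domain is the most likely failure mode. Something like this (domain and upstream port are placeholders):

```
linknibble.example.com {
    reverse_proxy localhost:8000
}
```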

I also had to re-learn some basic sysadmin habits I’d forgotten after years of serverless life, like:

  • Setting up UFW firewall rules
  • Configuring automatic security updates
  • Installing fail2ban (because I’m paranoid)
  • Setting up proper backups with restic and actually testing restores (highly recommended)
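That backup workflow boils down to three restic commands — repository name and paths here are placeholders:

```shell
# one-time: create the repository on Backblaze B2
restic -r b2:my-bucket:linknibble init

# nightly (via cron or a systemd timer): back up the data directory
restic -r b2:my-bucket:linknibble backup /srv/linknibble/data

# the step everyone skips: actually test a restore
restic -r b2:my-bucket:linknibble restore latest --target /tmp/restore-test
```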

The DevOps tax was real. There were moments I thought, “You know what, Lambda was simpler…” But after the initial hump (about 3–4 evenings of solid work), something shifted. The stack became predictable. I knew exactly where everything lived. Logs were just docker compose logs. Deploying a change went from “update Lambda, pray IAM is correct” to git pull && docker compose up -d.

The local AI part was the biggest surprise win. Getting Ollama running with a solid model took some tinkering (GPU passthrough, model quantization, etc.), but once it clicked, it felt alive. Summaries were suddenly running on my hardware. No rate limits. No per-token anxiety. I could even run it with larger context windows when I wanted deeper analysis.

And yes, I still have nightmares about that one time PostgreSQL refused to start because of a volume permission issue. But I fixed it. Myself. Without opening a support ticket.

What I Actually Gained (and why I’m falling in love with building small again)

After the initial pain, the benefits started hitting me one by one. And they were way better than I expected.

First, the savings. My AWS bill dropped from roughly $35–55/month (depending on usage) to €4–6 on the Hetzner VPS. That’s not pocket change when you have several side projects. I now run LinkNibble, a personal bookmark tool, ShareTXT, and a couple of small experiments on the same server without sweating the bill.

Second, the speed and simplicity. No more cold starts. No more “why is this Lambda taking 8 seconds?” Debugging is trivial. I just look at the logs on the server. Making a quick change and deploying it takes seconds instead of context-switching between six different AWS consoles.

Third, the local AI magic. This has been the most surprising joy. Running Ollama with Gemma, Llama 3.1, or even smaller models means my summaries feel more consistent and personal. I’m not feeding every article I read to OpenAI anymore. I can experiment with system prompts, run longer contexts when needed, or switch models without thinking about cost. There’s something deeply satisfying about my AI actually living on my machine.

Fourth, the feeling of ownership. My code feels hand-rolled again. I replaced the over-engineered SQS-style queue with a simple SQLite-backed one for my use case. I understand every single piece of the stack. When something breaks (rare now), I know where to look. The whole project feels like mine instead of a rented collection of cloud services.
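A SQLite-backed queue like that can be surprisingly little code. A minimal single-process sketch, assuming the table layout and class name are illustrative rather than the project’s actual implementation:

```python
import sqlite3
import time


class SqliteQueue:
    """Tiny single-process job queue on top of SQLite (illustrative sketch)."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS jobs ("
            "id INTEGER PRIMARY KEY, "
            "payload TEXT NOT NULL, "
            "status TEXT NOT NULL DEFAULT 'pending', "
            "created REAL)"
        )

    def put(self, payload: str) -> int:
        """Enqueue a job; returns its row id."""
        cur = self.db.execute(
            "INSERT INTO jobs (payload, created) VALUES (?, ?)",
            (payload, time.time()),
        )
        self.db.commit()
        return cur.lastrowid

    def take(self):
        """Dequeue the oldest pending job, or None if the queue is empty."""
        row = self.db.execute(
            "SELECT id, payload FROM jobs WHERE status = 'pending' "
            "ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        self.db.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (row[0],))
        self.db.commit()
        return row[1]
```

For a single-user app on one box, losing SQS’s distributed delivery guarantees costs nothing, and the whole queue is readable in one screen.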

I also got back something I didn’t realize I was missing: developer joy. There’s a quiet satisfaction in SSHing into my server, running docker compose up -d, and knowing the whole thing is under my control. I’m learning real skills again: Docker, Linux networking, proper backups, simple CI, instead of just new vendor dashboards.

Don’t get me wrong: I still use cloud services when they make sense. I keep Cloudflare in front for DNS and DDoS protection, and I’d happily reach for AWS if I ever built something with unpredictable traffic or needed serious scale. But for personal projects? I’m done with defaulting to the complex route.

Here’s the whole migration at a glance (component: before → after, with the key benefit):

  • Backend & API: AWS Lambda + API Gateway → FastAPI in Docker on VPS (simpler, no cold starts)
  • Database: AWS RDS PostgreSQL → PostgreSQL in Docker, with an SQLite option (full control + lower cost)
  • File storage: Amazon S3 → local filesystem on the VPS (no egress fees)
  • Scheduler / cron: EventBridge + Lambda → simple systemd timer (one script, easy to debug)
  • AI summaries: OpenAI API, paid per token → Ollama + local LLM, Llama 3.1 / Phi-3 (private, no usage bills)
  • Web server / HTTPS: CloudFront / API Gateway → Caddy with automatic HTTPS (zero-config SSL)
  • Monitoring: CloudWatch → Uptime Kuma + basic logs (free and sufficient)
  • Monthly cost: $35–$55 → €4–€6 (~90% savings)

Finding Your Own Cloud-Optional Sweet Spot

So, should you do the same thing?

Not necessarily. But if you’ve ever felt that quiet frustration with growing bills, YAML configuration hell, or the sense that your side project no longer feels like yours, it might be worth experimenting.

Here’s how I think about it now:

Good candidates for going cloud-optional:

  • Personal tools and internal apps
  • Scheduled jobs or background processors
  • Moderate-traffic sites (a few hundred to a few thousand users)
  • Anything where the workload is relatively predictable
  • Projects where you want to experiment with local AI

Leave it in the cloud when:

  • You have spiky or unpredictable traffic
  • You need serious global low-latency delivery
  • You’re working with a team and need shared operational responsibility
  • Compliance or enterprise features matter
  • You’re doing heavy ML training or specialized services

My practical starter toolkit these days:

  • VPS: Hetzner (best price/performance for me) or DigitalOcean/Linode if you prefer
  • Container setup: Docker + Docker Compose
  • Web server: Caddy (automatic HTTPS is pure joy)
  • Database: PostgreSQL in Docker, with SQLite as a lightweight alternative
  • Local AI: Ollama + good open models
  • Monitoring: Uptime Kuma + Grafana (when I want pretty dashboards)
  • Backups: restic to Backblaze B2
  • DNS: Cloudflare (still the most practical option for most people)

Start small. You don’t need to move everything at once. Pick one piece, maybe a cron job or a simple API, and migrate just that. You’ll quickly feel whether the trade-off is worth it for you.

The goal isn’t purity. It’s intention. Stop defaulting to the most complex solution just because it’s what everyone uses at work.

Building Small Again

Looking back, going cloud-optional wasn’t some grand ideological stand. It was just me getting tired of overcomplicating things and watching money leak out for no good reason.

I still love the cloud. There are problems where AWS, GCP, or Vercel are absolutely the right choice. But for most of my personal projects, I’ve found a much sweeter spot: a simple VPS, Docker Compose, local tools, and code that I actually understand and control.

LinkNibble is now faster, cheaper, more private, and genuinely more fun to work on. My AI runs locally. My cron jobs are cron jobs again. The whole thing feels like a workshop I built in my garage instead of renting shelf space in someone else’s massive factory.

I’ve regained that quiet satisfaction I remember from early programming days. The joy of making something that’s truly mine. No surprise bills. No vendor lock-in anxiety. Just me, my code, and a server I can SSH into and fix when things go wrong.

If you’ve been feeling the same creeping fatigue with modern development (the endless services, surprise costs, and layers of abstraction), I highly recommend auditing one of your side projects. Pick something small. Move it to a VPS. Try running a local model. See how it feels.

You might discover, like I did, that sometimes building small again is the most liberating upgrade of all.

What about you? Have you pulled any projects off the cloud lately? What’s one thing you’re thinking about simplifying? I’d love to hear in the comments.


Rex Anthony

Rex is a content creator and one of the guys behind ShareTXT. He writes articles about file sharing, content creation and productivity.
