My Infrastructure Journey: From Docker Compose to Dokploy
A journey through different hosting approaches: from simple Docker Compose setups to Kubernetes chaos, and finally finding peace with Dokploy
6 min read
Over the past year, I’ve been on quite the infrastructure journey. What started as a simple attempt to host my portfolio has evolved into months of experimentation with different hosting approaches, each teaching me valuable lessons about the complexity and beauty of modern infrastructure. Today, I want to share that journey with you—from my humble beginnings with Docker Compose, through the chaos of trying to set up Kubernetes, to finally finding my sweet spot with Dokploy.
The Foundation: Oracle’s Always Free Tier
It all began about a year ago when I discovered Oracle’s generous always free tier. Getting my hands on a few VPSes opened up a whole world of possibilities for self-hosting and experimenting with my own services. Little did I know that these free servers would become the testing ground for months of infrastructure adventures.
Phase 1: The Docker Compose Era
My first approach was refreshingly simple: Docker with Docker Compose. I started with my portfolio, and after spending just a few minutes wrestling with nginx configuration, I quickly pivoted to what would become my go-to framework for most of the year—a Docker Compose setup with my service paired with a Caddy instance and a very simple Caddyfile.
Here’s what my portfolio setup looked like:
```yaml
services:
  app:
    build: .
    ports:
      - "4173:4173"

  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    depends_on:
      - app
```
With an equally simple Caddyfile:
```
mydomain.com {
    reverse_proxy app:4173
    tls my@email.com
}
```
This framework served me incredibly well. I used it to host a variety of services throughout the year: Immich for photo management, a Jellyfin instance for media streaming, and during my school projects, everything from websocket servers for our Touchchef project to backend services for OrderEat.
The real highlight came when working on Budgeteer with a friend. We created an elaborate Docker Compose setup that included everything we needed—the main application, PostgreSQL database, monitoring with Prometheus and Grafana, logging with Loki, and even BookStack for documentation. We even set up CI/CD so that whenever we pushed to the Budgeteer repositories, it would automatically build the Docker image, pull it on the VPS, and deploy it.
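The pipeline itself was nothing exotic. Here's a rough sketch of what such a workflow can look like as a GitHub Actions file; the image name, secret names, and server path are all placeholders, not the actual Budgeteer configuration:

```yaml
# Hypothetical GitHub Actions workflow: build and push the image on every
# push to main, then SSH into the VPS and redeploy via Docker Compose.
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push image
        run: |
          docker build -t ghcr.io/example/budgeteer:latest .
          docker push ghcr.io/example/budgeteer:latest

      - name: Redeploy on the VPS
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /opt/budgeteer
            docker compose pull
            docker compose up -d
```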
The setup was impressive, but it came with its own set of challenges. I found myself SSH-ing into the server frequently to fix small issues. Adding a new service meant hand-editing the shared Caddyfile, and since only one process can bind ports 80 and 443 on the host, running a second Caddy instance alongside the first was impossible; every service had to funnel through that single reverse proxy.
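In practice, that single Caddyfile kept accumulating blocks like these (service names here are illustrative), and every deployment of something new meant another manual edit and reload:

```
portfolio.mydomain.com {
    reverse_proxy portfolio:4173
}

photos.mydomain.com {
    reverse_proxy immich:2283
}

media.mydomain.com {
    reverse_proxy jellyfin:8096
}
```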
Phase 2: The Kubernetes Experiment
After watching countless Kubernetes tutorials, reading “Understanding Kubernetes in a Visual Way,” and getting inspired by my friends over at StartupNationLabs, I decided it was time to level up. My plan was ambitious: merge all my VPSes into one Kubernetes cluster and use FluxCD to manage my infrastructure through a single GitHub repository.
I was convinced this would be the solution to all my problems. I could host as many services as I wanted, have proper GitOps workflows, and finally add “DevSecOps Engineer” to my resume. I created my repository, started with a simple Jellyfin deployment, added the entire Budgeteer stack, and tested everything locally using Kind. Everything worked beautifully.
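The GitOps half of the plan is easy to picture: FluxCD watches a Git repository and reconciles whatever manifests it finds there. A minimal sketch, assuming a hypothetical repo URL and directory layout:

```yaml
# Hypothetical FluxCD setup: a GitRepository source plus a Kustomization
# that continuously applies everything under ./clusters/home.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: infra
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/infra
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: infra
  path: ./clusters/home
  prune: true
```

With this in place, deploying Jellyfin or the Budgeteer stack is just a matter of committing manifests to the repo, which is exactly what made testing in Kind so pleasant.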
Then came the reality check.
I installed kubeadm and kubectl on my VPSes, initialized the master node and worker nodes, and what followed were two months of constant battle with networking. Two entire months of debugging, troubleshooting, and wearing the surprised Pikachu face as things continued to break in new and creative ways.
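For context, the bootstrap itself is deceptively short; the pain lives entirely in what comes after. A rough sketch of the commands involved (the pod CIDR and CNI choice are assumptions, not my exact flags):

```
# On the control-plane node: initialize the cluster.
# This pod CIDR matches Flannel's default; other CNIs expect different values.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# kubeadm prints a join command; run it on each worker node:
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```

Five minutes of commands, two months of debugging what happened between the pods afterwards.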
By the end of those two months, I had a cluster that not only didn’t work but was somehow even more broken than when I started. Kubernetes networking had defeated me, and it was time to accept that defeat gracefully. It was time to explore Platform as a Service (PaaS) solutions.
Phase 3: Testing the Waters with Komodo
My first foray into PaaS was with Komodo, recommended by a coworker. I decided to pair it with BunkerWeb (recommended by a classmate) to add an extra layer of security through BunkerWeb’s Web Application Firewall capabilities.
The setup worked great—until it didn’t. BunkerWeb turned out to be a little too secure. It would time me out after just one request, regardless of what I changed in the settings and environment variables. I’d make one request, and BunkerWeb would immediately lock me out.
So I had my services hosted but no way to actually access them.
After a full day of debugging this issue, I gave up once again and decided to head home and try replacing BunkerWeb with Traefik.
Phase 4: Discovering Dokploy
But life had other plans. By complete coincidence, just as I was about to set up Traefik, one of my favorite YouTubers (Dreams of Code) uploaded a video about Dokploy. I was immediately amazed by what I saw.
Dokploy had all the features that impressed me in Komodo, plus more. But the real game-changer was that it had Traefik packaged in. I didn’t even need to think about reverse proxying—it would just handle it automatically. All I had to do was click “add domain,” type in the domain name I was pointing to my server, and it would do the rest. Even better, I could click “generate domain name” and get a traefik.me link that I could use for testing immediately.
I was beyond sold and immediately migrated everything to Dokploy. The experience was transformative:
- Effortless deployment: No more manual Caddyfile modifications
- Template system: Quick deployment of common services
- GitHub integration: Automatic builds and deployments from my repositories
- Notifications: Telegram alerts for build successes and failures, complete with domain links for immediate testing
- Future-proof: Built-in Docker Swarm support for when I need to scale across multiple VPSes or add home lab machines
Lessons Learned
This journey taught me several valuable lessons:
Start Simple: Sometimes the simplest solution that works is better than the most sophisticated one that doesn’t. My Docker Compose setup served me well for months before I actually needed something more complex.
Know When to Pivot: Spending two months fighting Kubernetes networking was probably too long, but it taught me about persistence and knowing when to cut losses.
The Right Tool for the Job: Not every project needs Kubernetes. For my use case, Dokploy provides the perfect balance of features and simplicity.
Community Matters: Whether it was recommendations from coworkers, inspiration from YouTube creators, or learning from friends’ projects, the community played a huge role in guiding my decisions.
Looking Forward
Today, I’m running everything on Dokploy and couldn’t be happier. It’s given me the deployment simplicity I had with Docker Compose, the service management capabilities I wanted from Kubernetes, and the scalability options I’ll need as my projects grow.
The infrastructure journey isn’t over—it never really is. But for now, I’ve found a solution that lets me focus on building and deploying applications rather than fighting with networking configurations. And sometimes, that’s exactly what you need.