Dev Diaries #1: Dilemmas in Docker Configuration
I've been itching to build a side project for fun, and instead of littering my development OS with random installs, I turned to Docker. Professionally, I use Docker every day; our infra team handles the CI/CD pipelines, and we just roll with their setup. But for my little personal experiment, I wanted to understand the whole process from the ground up.
I had plenty of questions as I set out:
- How does a Docker container actually make its way into Production?
- What's the real difference between a dev container and a prod container?
- Do I need Docker for development at all, or should I only use it for deployment?
- And a handful of "etc…" questions along the way.
I dove into docs, blog posts, and community threads to piece together an approach. Is this the ultimate "best practice"? Probably not. But it works for me, and I learned a ton in the process.
Why I Use Docker
The main reason I chose Docker is quite silly: I just didn't want to install all the packages I use for development on my local machine. I like to keep my machine bare-bones and minimal. Maybe I overthink this, but keeping my local machine as minimal as possible makes me feel "clean". I also chose Docker because I was developing my project across multiple systems and OSes: Windows with WSL2, macOS, and Debian. I didn't want to alternate between commands and google the intricacies of why my project works or breaks on a certain OS. Also, Docker deployment sounds cool.
Dilemma #1: To commit docker-compose.yml or not?
At work, we do commit and git-track our docker-compose.yml file. The idea is simple: make it easy for anyone new to Docker to get started quickly. No setup fuss, just clone the repo and run docker compose up.
Sounds great, right? But here's the catch: everyone ends up tweaking something.
"Oh no, port 5432 is already taken." "Oh wait, I'm using a different environment variable for local testing." "Oh crap, I didn't mean to commit that."
So now your local git status keeps yelling at you about uncommitted changes… but you can't push them either, because those changes are personal. It becomes this weird dance of remembering to stash or discard local changes before switching branches.
That got me wondering: Why are we even committing the docker-compose.yml file in the first place?
Every environment (DEV, SIT, UAT, PROD) is going to need a different setup anyway: different ports, volumes, credentials, services, you name it. So committing a single file that tries to cover all cases felt… off. Like trying to wear the same pair of pants to a beach party and a wedding.
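One compromise I came across while digging: Compose automatically merges a docker-compose.override.yml on top of the committed docker-compose.yml, so the base file can stay generic while personal tweaks live in a git-ignored override. The service name and values below are made up for illustration:

```yaml
# docker-compose.override.yml (git-ignored)
# Compose layers this over the committed docker-compose.yml automatically.
services:
  db:
    ports:
      - "5433:5432"   # dodge a locally occupied 5432
    environment:
      POSTGRES_PASSWORD: my-local-password
```

With that, git status stays quiet and nobody accidentally commits their personal ports.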
Dilemma #2: What about committing the Dockerfile?
Here we go again: another "should I commit this or not?" situation. This time, it's about the Dockerfile.
At first, it made perfect sense to commit it. I mean, it's the heart of the container, right? It tells Docker how to build the whole thing: base image, packages, setup steps, the works. Without it, you've basically got a kitchen with no recipe.
But then I hit a wall.
My dev environment needs some unit testing tools, things I'd never want in production. My test environment needs to wipe and reset the DB each time it starts up, like a fresh install. Meanwhile, SIT and UAT need to run specific scripts to update env variables before anything launches. And don't even get me started on prod: that one needs to be squeaky clean, no extra fluff.
So now I'm staring at my single Dockerfile like:
"Wait… how am I supposed to juggle all of this in one file?"
Do I write one Dockerfile per environment? That feels like overkill. But also, stuffing everything into a single file with a ton of conditional logic and environment args feels like a spaghetti sandwich. And yeah, I briefly considered not committing the Dockerfile, but that just felt wrong. Like hiding the blueprint to a house you're trying to build with friends.
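For the record, here's roughly what the spaghetti route looks like: one Dockerfile branching on a build arg. Everything in this sketch (the Node base image, the script names) is hypothetical, just to show the shape of the problem:

```dockerfile
# One Dockerfile trying to serve every environment via conditionals
FROM node:20-alpine
ARG APP_ENV=production
WORKDIR /app
COPY . .
RUN npm ci --omit=dev
# Branches like these multiply with every environment quirk:
RUN if [ "$APP_ENV" = "dev" ]; then npm install --include=dev; fi
RUN if [ "$APP_ENV" = "sit" ] || [ "$APP_ENV" = "uat" ]; then ./scripts/update-env.sh "$APP_ENV"; fi
```

It works, but every new environment adds another `if`, and the prod image quietly accumulates layers it never needed.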
Dilemma #3: So… how does this actually get to production?
Alright. So now I've got my Dockerfile. My app runs. I even have a docker-compose.yml that works on my machine™.
But then I paused and asked myself a deceptively simple question:
"How does this actually get to production?"
Like... what's the actual flow? Do I run docker build locally, tag it, and scp the image to my server? Do I need a Docker registry? What even is a registry? Should I just let Vercel or some cloud service handle everything for me?
Cue another round of Googling.
Turns out, there are multiple answers, and most of them start with: "Well, it depends."
The "It Depends" Answer (But Here's My Simple Setup)
Alright, here comes the obligatory "it depends" disclaimer. Every team, project, and environment marches to its own beat, so there's no universal truth. But for my one-person side project on a single VPS, this routine has been a breath of fresh air:
- No docker-compose.yml in local dev
  - When I'm coding, I skip Compose entirely:

    ```bash
    docker build --target dev -t myapp:dev .
    docker run -p 3000:3000 myapp:dev
    ```

  - One container, one command, zero YAML; the dev loop stays insanely fast.
- Compose files live in their own environments
  - I only commit Compose configs for staging and production.
  - Each environment gets its own docker-compose.yml (in a separate folder or branch), so test mocks, Node test servers, Nginx frontends, and cron workers all stay neatly contained (see the sketch just below).
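  As a sketch, a committed production file might look something like this; the service names and images are invented for illustration:

  ```yaml
  # deploy/production/docker-compose.yml (hypothetical)
  services:
    app:
      image: myapp:latest
      restart: unless-stopped
      env_file: .env        # real values live on the server, not in git
    nginx:
      image: nginx:alpine
      ports:
        - "80:80"
      depends_on:
        - app
  ```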
- One Dockerfile with multi-stage builds
  - Stage 1 installs dev tools (linters, test runners).
  - Stage 2 copies over only production artifacts. (A full Dockerfile sketch follows these bullets.)
  - Build the dev image locally with:

    ```bash
    docker build --target dev -t myapp:dev .
    ```

  - Build production by default:

    ```bash
    docker build -t myapp:latest .
    ```
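  Here's a minimal sketch of that multi-stage layout. I'm assuming a Node project; the base image, scripts, and paths are placeholders:

  ```dockerfile
  # Dockerfile - the dev stage carries the tooling, production stays lean
  FROM node:20-alpine AS dev
  WORKDIR /app
  COPY package*.json ./
  RUN npm ci                  # includes devDependencies: linters, test runners
  COPY . .
  RUN npm run build           # emits compiled output to /app/dist
  CMD ["npm", "run", "dev"]

  FROM node:20-alpine AS production
  WORKDIR /app
  COPY package*.json ./
  RUN npm ci --omit=dev       # runtime dependencies only
  COPY --from=dev /app/dist ./dist
  CMD ["node", "dist/index.js"]
  ```

  `--target dev` stops the build at the first stage; the default build runs through to the final stage, so `myapp:latest` never ships the dev tooling.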
- Environment variables via .env + CI/CD secrets
  - Keep a git-tracked .env.example for reference (a sketch follows below).
  - Ignore your local .env so you can tweak ports or tokens without fear.
  - In CI (e.g., GitHub Actions), real secrets get injected at build/deploy time.
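  The example file is just the variable names with safe placeholders; these particular keys are invented for illustration:

  ```bash
  # .env.example - committed; copy to .env and fill in real values
  PORT=3000
  DATABASE_URL=postgres://user:password@localhost:5432/myapp
  SESSION_SECRET=change-me
  ```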
- Building & deploying on the VPS
  - I SSH into the server, then:

    ```bash
    git pull                         # get the latest code
    docker build -t myapp:latest .   # build fresh images from the Dockerfile(s)
    docker compose up -d             # recreate and start the services
    ```

  - No registry headaches; everything builds from source. If I ever need more automation, I'll hook up a CI job to push images to Docker Hub or GHCR.
- Keeping production lean
  - My final prod image is only the runtime: base image, compiled code, minimal dependencies.
  - No test frameworks, no dev tools, no dangling layers.
This setup is a bit manual, but it stays out of my way for a small side project. When this app grows up and demands more complexity, I'll evolve; until then, I'm happily saying "it depends" and rolling with it.