My Scratchpad

A playground of raw thoughts, half-baked ideas, and friendly debates with myself.

I call these my hypotheses—ever-evolving opinions that grow as I learn.


Dev Diaries #1: Dilemmas in Docker Configuration

I’ve been itching to build a side project for fun, and instead of littering my development OS with random installs, I turned to Docker. Professionally, I use Docker every day—our infra team handles the CI/CD pipelines, and we just roll with their setup. But for my little personal experiment, I wanted to understand the whole process from the ground up.

I had plenty of questions as I set out:

  • How does a Docker container actually make its way into Production?
  • What’s the real difference between a dev container and a prod container?
  • Do I need Docker for development at all, or should I only use it for deployment?
  • And a handful of “etc.” questions along the way.

I dove into docs, blog posts, and community threads to piece together an approach. Is this the ultimate “best practice”? Probably not. But it works for me, and I learned a ton in the process.

Why I Use Docker

The main reason I chose Docker is quite silly: I just did not want to install all the packages I use for development on my local machine. I like to keep my machine barebones and minimal. Maybe I overthink this, but keeping my local machine as minimal as possible makes me feel "clean". I also chose Docker because I was developing my project across multiple systems and OSs: Windows with WSL2, macOS, and Debian. I didn't want to alternate between commands and google the intricacies of why my project works or breaks on a certain OS. Also, Docker deployment sounds cool.


Dilemma #1: To commit docker-compose.yml or not?

At work, we do commit and git-track our docker-compose.yml file. The idea is simple: make it easy for anyone new to Docker to get started quickly. No setup fuss, just clone the repo and run docker compose up.

Sounds great, right? But here’s the catch — everyone ends up tweaking something.

“Oh no, port 5432 is already taken.” “Oh wait, I’m using a different environment variable for local testing.” “Oh crap, I didn’t mean to commit that.”

So now your local git status keeps yelling at you about uncommitted changes, but you can’t push them either, because those changes are personal. It becomes this weird dance of remembering to stash or discard local changes before switching branches.
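
For concreteness, the committed file in question looks something like this (a minimal sketch with hypothetical service names); the ports and environment entries are exactly the bits everyone ends up tweaking locally:

    services:
      app:
        build: .
        ports:
          - "3000:3000"              # remapped whenever 3000 is already taken
        environment:
          - DATABASE_URL=postgres://postgres:postgres@db:5432/myapp
      db:
        image: postgres:16
        ports:
          - "5432:5432"              # the classic "port 5432 is already taken"
        environment:
          - POSTGRES_PASSWORD=postgres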

That got me wondering: Why are we even committing the docker-compose.yml file in the first place?

Every environment (DEV, SIT, UAT, PROD) is going to need a different setup anyway — different ports, volumes, credentials, services, you name it. So committing a single file that tries to cover all cases felt off. Like trying to wear the same pair of pants to a beach party and a wedding.


Dilemma #2: What about committing the Dockerfile?

Here we go again — another “should I commit this or not?” situation. This time, it’s about the Dockerfile.

At first, it made perfect sense to commit it. I mean, it’s the heart of the container, right? It tells Docker how to build the whole thing — base image, packages, setup steps, the works. Without it, you’ve basically got a kitchen with no recipe.

But then I hit a wall.

My dev environment needs some unit testing tools — things I’d never want in production. My test environment needs to wipe and reset the DB each time it starts up, like a fresh install. Meanwhile, SIT and UAT need to run specific scripts to update env variables before anything launches. And don’t even get me started on prod — that one needs to be squeaky clean, no extra fluff.

So now I’m staring at my single Dockerfile like:

“Wait… how am I supposed to juggle all of this in one file?”

Do I write one Dockerfile per environment? That feels like overkill. But also, stuffing everything into a single file with a ton of conditional logic and environment args feels like a spaghetti sandwich. And yeah, I briefly considered not committing the Dockerfile — but that just felt wrong. Like hiding the blueprint to a house you’re trying to build with friends.


Dilemma #3: So… how does this actually get to production?

Alright. So now I’ve got my Dockerfile. My app runs. I even have a docker-compose.yml that works on my machine™.

But then I paused and asked myself a deceptively simple question:

“How does this actually get to production?”

Like... what’s the actual flow? Do I run docker build locally, tag it, and scp the image to my server? Do I need a Docker registry? What even is a registry? Should I just let Vercel or some cloud service handle everything for me?

Cue another round of Googling.

Turns out, there are multiple answers — and most of them start with: “Well, it depends.”


The “It Depends” Answer (But Here’s My Simple Setup)

Alright—here comes the obligatory “it depends” disclaimer. Every team, project, and environment marches to its own beat, so there’s no universal truth. But for my one-person side project on a single VPS, this routine has been a breath of fresh air:

  1. No docker-compose.yml in local dev
    • When I’m coding, I skip Compose entirely:
    docker build --target dev -t myapp:dev .
    docker run -p 3000:3000 myapp:dev
    
    • One container, one command, zero YAML—dev loop stays insanely fast.
  2. Compose files live in their own environments
    • I only commit Compose configs for staging and production.
    • Each environment gets its own docker-compose.yml (in a separate folder or branch), so test mocks, Node test servers, Nginx frontends, cron workers—they all stay neatly contained (there's a sketch of one after this list).
  3. One Dockerfile with multi-stage builds (sketched after this list)
    • Stage 1: installs dev tools (linters, test runners).
    • Stage 2: copies over only production artifacts.
    • Build the dev image locally with:
    docker build --target dev -t myapp:dev .
    
    • Build production by default:
    docker build -t myapp:latest .
    
  4. Environment variables via .env + CI/CD secrets
    • Keep a git-tracked .env.example for reference (example after this list).
    • Ignore your local .env so you can tweak ports or tokens without fear.
    • In CI (e.g., GitHub Actions), real secrets get injected at build/deploy time.
  5. Building & deploying on the VPS
    • I SSH into the server and run:
    git pull               # get the latest code
    docker build -t myapp:latest .  # build fresh images from your Dockerfile(s)
    docker compose up -d   # recreate and start your services
    
    • No registry headaches—everything builds from source. If I ever need more automation, I’ll hook up a CI job to push images to Docker Hub or GHCR.
  6. Keeping Production lean
    • My final prod image is only the runtime: base image → compiled code → minimal dependencies.
    • No test frameworks, no dev tools, no dangling layers.
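
As mentioned in point 2, here's roughly what one of those per-environment Compose files looks like. It's a minimal production sketch; the folder path, service names, and the Nginx frontend are illustrative rather than my exact setup:

    # deploy/prod/docker-compose.yml (hypothetical layout)
    services:
      app:
        image: myapp:latest            # built on the VPS, as in point 5
        restart: unless-stopped
        env_file: .env                 # real values live on the server, not in git
      nginx:
        image: nginx:stable
        restart: unless-stopped
        ports:
          - "80:80"
        depends_on:
          - app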
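
And the multi-stage Dockerfile from point 3 looks something like this. It's a sketch for a hypothetical Node app (the npm run build script and dist/index.js path are assumptions, so adjust for your stack); the trick is that --target dev stops at the first stage, while a plain docker build runs through to the lean final stage:

    # ---- Stage 1: dev (full toolchain: linters, test runners, compiler) ----
    FROM node:20 AS dev
    WORKDIR /app
    COPY package*.json ./
    RUN npm install                 # includes devDependencies
    COPY . .
    RUN npm run build               # emits /app/dist for the prod stage
    CMD ["npm", "run", "dev"]

    # ---- Stage 2: production (runtime only, no dev fluff) ----
    FROM node:20-slim AS prod
    WORKDIR /app
    ENV NODE_ENV=production
    COPY package*.json ./
    RUN npm install --omit=dev      # production dependencies only
    COPY --from=dev /app/dist ./dist
    CMD ["node", "dist/index.js"]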
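
Finally, the git-tracked .env.example from point 4 is nothing fancy. It just documents which variables the app and the Compose files expect, with placeholder values (these particular names are made up for illustration):

    # .env.example: copy to .env and fill in real values locally
    PORT=3000
    DATABASE_URL=postgres://user:password@localhost:5432/myapp
    SESSION_SECRET=change-me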

This setup is a bit manual, but it stays out of my way for a small side project. When this app grows up and demands more complexity, I’ll evolve—until then, I’m happily “it depends” and rolling with it.