Why I Don’t Want Docker to Be the Default Deploy Path

Docker is good software. I want to say that up front, because the internet has a special talent for turning every tooling opinion into a cage match. I use Docker. I like Docker for databases, repeatable CI jobs, weird dependency stacks, internal services, and anything where I need a clean system image that behaves the same everywhere.

But I do not want Docker to be the default deploy path for every web app. Sometimes I just want to put a small app on a VPS and have it run. That should feel boring.

The default path got heavier

A lot of modern deploy tutorials quietly turn this:

- build app
- copy files to server
- start app
- route traffic

into this:

- write a Dockerfile
- pick a base image
- handle build layers
- create a registry
- push an image
- pull it on the server
- wire up compose
- configure networking
- mount secrets
- debug why the container exits

None of those steps are evil. They are just a lot. And for many apps, they are not the interesting part.

If I am deploying a side project, a small SaaS, a webhook handler, a dashboard, or a little internal tool, the app usually needs a few simple things:

- build the code
- start the process
- serve HTTPS
- restart when it crashes
- keep secrets out of git
- show logs when something breaks
- maybe run a few apps on the same machine

That list does not automatically mean "containerize everything".

Containers solve real problems

This is not an anti-Docker post. Docker solves problems that are absolutely real. It gives you a repeatable runtime. It makes system packages less mysterious. It can isolate services from each other. It makes CI easier. It gives teams a common artifact they can pass around. That is useful.

But defaults matter. When Docker becomes the first step for every deploy, even tiny apps inherit container concerns before they have container problems. Now the developer is thinking about image size, build cache, multi-stage builds, registry auth, container networking, volume paths, base image updates, and whether the process can find the right port inside the container. Again, all valid stuff. Just not always the first stuff.

A VPS can run normal processes

The funny thing is that a VPS is already a computer. It can run a process. That sounds obvious, but a lot of modern deployment advice treats a server like it is only useful once it is running a container scheduler.

For many apps, a direct process model is enough:

    bun run start
    node server.js
    ./my-go-app
    ./target/release/my-rust-app
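To make "normal process" concrete, here is a minimal sketch of what such an app can look like. It assumes nothing beyond the Node standard library; the only deployment-facing parts are the PORT environment variable, logs on stdout, and a clean exit on SIGTERM so whatever supervises the process can swap in a new version.

    // server.ts: a web app as a plain process. No image, no registry,
    // just a process that reads its port from the environment and
    // exits cleanly when the supervisor asks it to.
    import { createServer } from "node:http";

    const port = Number(process.env.PORT ?? 3000);

    const server = createServer((req, res) => {
      if (req.url === "/healthz") {
        // A health endpoint gives a proxy something to route on.
        res.writeHead(200, { "content-type": "text/plain" });
        res.end("ok");
        return;
      }
      res.writeHead(200, { "content-type": "text/plain" });
      res.end("hello from a normal process\n");
    });

    // Logs go to stdout; whatever supervises the process collects them.
    server.listen(port, () => console.log(`listening on :${port}`));

    // On SIGTERM: stop accepting connections, finish in-flight
    // requests, then exit so the next version can take over.
    process.on("SIGTERM", () => server.close(() => process.exit(0)));

Run it with bun server.ts, or compile and hand it to node; either way it is just a process Linux already knows how to supervise.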
The hard parts are not usually "can Linux run this binary?" The hard parts are everything around it:

- how does traffic reach it?
- how does HTTPS work?
- how do I deploy a new version without downtime?
- where do logs go?
- how do secrets get injected?
- how do I restart it?
- how do I run multiple apps on one box?

Those are deployment problems, not necessarily Docker problems.
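None of those problems require containers to solve. As a toy illustration of the last one, here is several apps on one box, each a plain process on its own local port, with requests routed by Host header. The hostnames and ports are made up for the sketch; in real life this job belongs to nginx, Caddy, or a deploy tool's built-in proxy.

    // proxy.ts: one answer to "how do I run multiple apps on one box?"
    // Route by Host header to plain processes on local ports.
    import { createServer, request } from "node:http";

    // Hypothetical apps; each is just a process listening on loopback.
    const routes: Record<string, number> = {
      "app.example.com": 3000,
      "dashboard.example.com": 3001,
    };

    createServer((req, res) => {
      const port = routes[(req.headers.host ?? "").split(":")[0]];
      if (!port) {
        res.writeHead(404).end("unknown host\n");
        return;
      }
      // Forward the request to the local process, stream the reply back.
      const upstream = request(
        { host: "127.0.0.1", port, path: req.url, method: req.method, headers: req.headers },
        (up) => {
          res.writeHead(up.statusCode ?? 502, up.headers);
          up.pipe(res);
        },
      );
      upstream.on("error", () => res.writeHead(502).end("upstream down\n"));
      req.pipe(upstream);
    }).listen(80);

No container networking involved: it is loopback ports and one listener on port 80, which is also where a real proxy would terminate HTTPS.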
I want the PaaS feeling without giving up the server

This is the thing I keep wanting. I like the feel of a PaaS: deploy, and then the app is live. But I also like owning a small VPS. It is cheap, flexible, and boring in a good way. I know where the app is running. I can SSH in. I can inspect the machine. I am not turning every weekend project into a cloud architecture diagram.

So the ideal flow, at least for me, looks more like this:

    tako deploy

- Local machine builds the app.
- The deploy tool copies the release to the server.
- The server runs the app as a normal process.
- A proxy routes requests to healthy instances.
- HTTPS is handled.
- Logs are available.
- Secrets are managed outside random .env files.
- No image registry needed.
- No Dockerfile unless I actually want one.
- No container networking puzzle for a two-route web app.

That is the direction I have been exploring with Tako, which is a small deployment tool for running apps on your own servers.
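For a sense of what that one command wraps, here is the same flow scripted by hand. Every name in it (the server address, the paths, the systemd unit) is a hypothetical stand-in, and this is a sketch of the steps, not how Tako itself works.

    // deploy.ts: the list above, spelled out as three ordinary steps.
    import { execSync } from "node:child_process";

    const server = "user@my-vps";     // assumption: SSH access to the box
    const releaseDir = "/srv/myapp";  // assumption: where releases live

    // 1. Build locally; the server needs no toolchain and no image.
    execSync("bun build ./src/index.ts --outdir ./dist", { stdio: "inherit" });

    // 2. Copy the release to the server.
    execSync(`rsync -az ./dist/ ${server}:${releaseDir}/`, { stdio: "inherit" });

    // 3. Restart the app as a normal process (here, a systemd unit).
    execSync(`ssh ${server} "systemctl restart myapp"`, { stdio: "inherit" });

The missing pieces (zero-downtime swaps, health checks, secrets) are exactly what a deploy tool should add on top, without changing the shape of the flow.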
The boring path should be the happy path

There is a version of deployment that feels almost disappointingly plain:

    tako init
    tako servers add
    tako deploy

That is the kind of boring I want. Not boring as in weak or limited. Boring as in:

- fewer concepts before the first deploy
- fewer files created only for infrastructure
- fewer moving parts for small apps
- fewer places where a simple mistake hides
- fewer "wait, is this a Docker problem or an app problem?" moments

I think the default path should optimize for getting the app online first. Then, if the app grows into container needs, reach for containers.

Docker should be an option, not the entrance fee

The web has a habit of turning powerful tools into mandatory tools. Docker is powerful. It deserves its place. But I do not think every deploy should start by asking the developer to write a container recipe.

For a lot of projects, the best deploy path is still:

- build the app
- put it on a server
- run it
- route traffic to it
- make updates boring

That is not old-fashioned. That is just a good abstraction. The default deploy path should feel calm. It should feel like the server is helping you run your app, not asking you to become a platform engineer before lunch.
