Deploying a Hugo Site with Docker, Nginx, and systemd

Containerization has revolutionized how we deploy web applications, and Docker stands at the forefront of this transformation. In this post, I’ll walk through my experience deploying this website using Hugo, Docker, and Nginx, all orchestrated by a systemd service that makes management effortless.

Why Docker?

The popularity of Docker isn’t accidental. It simplifies deployment by bundling applications with their dependencies into portable containers. Rather than installing Hugo, configuring web servers, and managing dependencies directly on the host system, Docker lets me execute a few commands to spin up an entire web stack. The container handles everything. I just start the service from my virtual machine, and the site goes live.

Architecture Overview

My deployment workflow consists of three main components working in harmony:

| Component | Purpose | Key Benefit |
| --- | --- | --- |
| Hugo | Static site generator | Builds and minifies HTML/CSS/JS for fast loading |
| Docker | Containerization platform | Portable, reproducible deployment environment |
| Nginx | Web server | Efficiently serves static content |
| systemd | Service manager | Automatic startup and container lifecycle management |

The Deployment Process

Building with Hugo

Hugo transforms my markdown content into optimized static pages. The build process uses minification to reduce file sizes and improve load times:

```bash
hugo --minify --gc --enableGitInfo
```

This command generates production-ready HTML, CSS, and JavaScript in the public/ directory, stripping unnecessary whitespace and optimizing assets for web delivery.

Containerization with Docker

The Dockerfile orchestrates the build and packages everything into a deployable image. It uses a multi-stage approach: the first stage builds the static site with Hugo, and the second copies the generated files into an Nginx image that serves them.
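A minimal sketch of what such a multi-stage Dockerfile can look like, assuming an Alpine-based build stage and Hugo's default public/ output directory (the base images and paths are illustrative, not necessarily my exact setup):

```dockerfile
# Stage 1: build the static site with Hugo
FROM alpine:latest AS builder
# git is required for --enableGitInfo to read commit dates
RUN apk add --no-cache hugo git
WORKDIR /src
COPY . .
RUN hugo --minify --gc --enableGitInfo

# Stage 2: copy the generated files into a lightweight Nginx image
FROM nginx:alpine
COPY --from=builder /src/public /usr/share/nginx/html
EXPOSE 80
```

The final image contains only Nginx and the rendered public/ directory; Hugo and the site source never leave the build stage, which keeps the runtime image small.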

The docker-compose.yml file defines the container configuration (a sketch follows the list), including:

  • Working directory paths
  • Port mappings for web traffic
  • Volume mounts for persistent data
  • Network configuration
  • Restart policies
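Put together, the compose file looks roughly like this; the service name, port mapping, and volume path are illustrative stand-ins rather than my exact values:

```yaml
services:
  website:
    build: .                  # the multi-stage Dockerfile above
    container_name: hugo-site
    ports:
      - "80:80"               # expose Nginx on the host's port 80
    volumes:
      # optional: mount a custom Nginx config (hypothetical path)
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    restart: unless-stopped   # survive crashes and daemon restarts
```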

Systemd Service Management

Here’s where automation truly shines. I created a systemd service file in /etc/systemd/system/ that manages the Docker container lifecycle. The service file specifies:

| Service Configuration | Description |
| --- | --- |
| WorkingDirectory | Location of the docker-compose.yml file |
| ExecStart | Command to start the container (docker-compose up) |
| ExecStop | Command to stop the container (docker-compose down) |
| Restart | Automatic restart policy on failure |
| After | Dependency on docker.service |
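In unit-file form, that configuration looks roughly like the following; the description, working directory, and docker-compose path are placeholders for my actual values:

```ini
[Unit]
Description=Hugo site served by Docker Compose
After=docker.service
Requires=docker.service

[Service]
# Directory that contains docker-compose.yml
WorkingDirectory=/opt/hugo-site
# Run in the foreground so systemd can supervise the process
ExecStart=/usr/bin/docker-compose up
ExecStop=/usr/bin/docker-compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Running docker-compose up without -d keeps the container in the foreground, which matches systemd's default service type and lets Restart=on-failure do its job.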

The systemd service ensures the website automatically starts when the server boots and restarts if the container crashes. This level of automation means I can focus on content creation rather than infrastructure management.
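Wiring it into the boot sequence is then a matter of reloading systemd and enabling the unit; assuming the file was saved as hugo-site.service (the name is illustrative):

```bash
sudo systemctl daemon-reload                   # pick up the new unit file
sudo systemctl enable --now hugo-site.service  # start now and on every boot
sudo systemctl status hugo-site.service        # confirm the container is running
```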

The Result

The website is now fully operational and accessible on the web. The entire stack runs reliably with minimal intervention—systemd handles startup, Docker manages the runtime environment, Nginx serves the static files efficiently, and Hugo ensures the content is optimized for performance.

This setup exemplifies modern web deployment: automated, containerized, and maintainable. Whether I’m updating content or restarting the server, the deployment process remains consistent and reliable.