
Using Docker in Development

Pegasus recommends using Docker during development. Although Docker is not strictly required, many parts of the documentation and helper tools do assume you are using it.

In production, Docker can also be used to deploy your application to containerized platforms. See the deployment page for more details on Docker in production.

When configuring your Pegasus project to use Docker, you can select from two different options: services-only and full-Docker development.

In services-only mode, Docker is only used to run the external services: PostgreSQL and Redis. The Django server, Celery, and any other processes run directly on your local machine. In this mode you don’t need to install PostgreSQL or Redis on your local machine, which simplifies setup and maintenance, and you have direct access to the other dev processes, which simplifies debugging and inspection. The main downside of services-only mode is that it requires installing uv and npm.
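
For illustration, a typical services-only session might look roughly like this (the exact commands depend on how your project is configured, so treat this as a sketch rather than the canonical workflow):

docker compose up -d                # start only the PostgreSQL and Redis containers
uv run python manage.py migrate     # run Django management commands directly on your machine
uv run python manage.py runserver   # serve the app locally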

In full-Docker mode, Docker is used to run the services, as above, but also runs Django, npm, and Celery. No processes are run directly on your local machine. This mode makes it easier to get up and running, since all you need to install is Docker, but it can make development more complicated, since all the processes are running inside Docker.

As a rough guideline: If you are comfortable installing and running Python and Node.js on your machine, use services-only mode. Otherwise, use full-Docker mode.

You need to install Docker prior to setting up your environment.

Mac users have reported better performance on Docker using OrbStack, which is a Docker Desktop alternative optimized for performance.

To start the Docker services, run:

make start

This will start the database services (PostgreSQL and Redis) and, in full-Docker mode, all the processes needed to run your application, including Django, the front-end build, and Celery.
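
Under the hood, make start is a thin wrapper around docker compose (check the Makefile for the exact invocation your project uses); if you prefer to call Docker directly, the rough equivalent is:

docker compose up -d    # start all configured containers in the background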

The first time you run the app, you should run:

make init

This will also create and run database migrations and bootstrap your application.
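
If you later need to re-run just the migrations (for example, after pulling new code), you can typically do so directly; in full-Docker mode that looks like:

docker compose exec web python manage.py migrate    # run Django migrations inside the web container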

This section provides technical details about the Docker setup and how it works.

The Docker configuration is primarily in docker-compose.yml, where you can inspect the configured containers.

In services-only mode, the docker-compose.yml file will only include container definitions for PostgreSQL and Redis. The containers listed below will run with their default ports exposed; use docker ps to check (see the example after the table).

| Container Name | Purpose | Port |
| --- | --- | --- |
| db | Runs Postgres (primary Database) | 5432 |
| redis | Runs Redis (Cache and Celery Broker) | 6379 |
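
For example, to confirm the service containers are up and see which host ports they expose, you can run:

docker compose ps    # containers managed by this project's docker-compose.yml
docker ps            # all containers running on your machine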

In full-Docker mode, the docker-compose.yml file will also include containers for Django, node, and Celery.

Depending on your project settings, there are several containers that might be running. These are outlined in the table below:

| Container Name | Purpose | Included |
| --- | --- | --- |
| db | Runs Postgres (primary Database) | Always |
| redis | Runs Redis (Cache and Celery Broker) | Always |
| web | Runs Django | Always |
| vite | Runs Vite (for CSS/JavaScript assets) | If building with Vite |
| webpack | Runs Webpack (for CSS/JavaScript assets) | If building with Webpack |
| celery | Runs Celery (for background tasks) | If Celery is enabled |
| frontend | Runs the React Front End | If the standalone front end is enabled |

As above, the db and redis containers will expose their default ports.
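
When several of these containers are running, it is often useful to follow the logs of just one of them, for example:

docker compose logs -f web    # stream logs from the Django container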

You can inspect the Dockerfiles being used in docker-compose.yml. Python containers use the Dockerfile.dev file.
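
If you change Dockerfile.dev (or anything else that affects the images), you will need to rebuild the affected containers, for example:

docker compose build web    # rebuild the Django image
docker compose up -d        # restart with the freshly built image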

The Docker environment sets environment variables using the included .env file.

The .env file is automatically ignored by git, so you can put any additional secrets there. It generally should not be checked into source control. You can instead add variables to .env.example to show what should be included.
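
For example, a hypothetical extra secret could be added to .env like this (with a matching placeholder documented in .env.example):

# hypothetical variable name; use whatever your integration actually needs
SOME_THIRD_PARTY_API_KEY=changeme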

The following instructions are specific to “full” docker mode, where Docker is also running your application.

The Python environment is run in the containers, which means you do not need to have your own local environment if you are always using Docker for development. Python requirements are automatically installed when the container builds.

However, keep in mind that if you go this route, you will need to run all commands inside the containers as per the instructions below.

Running commands in the containers can be done using docker compose, following the pattern used in the Makefile.

For example, to bootstrap Stripe subscriptions, run:

docker compose exec web python manage.py bootstrap_subscriptions

Or to promote a user to superuser, run:

docker compose exec web python manage.py promote_user_to_superuser you@example.com

You can also use the make manage command, passing in ARGS like so:

make manage ARGS='promote_user_to_superuser you@example.com'

You can add any commonly used commands you want to custom.mk for convenience.
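
For instance, you could wrap a frequently used management command in a make target. The snippet below is a hypothetical example (the promote target name and the EMAIL variable are made up for illustration):

# custom.mk -- hypothetical convenience target (the recipe line must be indented with a tab)
promote:
	docker compose exec web python manage.py promote_user_to_superuser $(EMAIL)

You could then invoke it with make promote EMAIL=you@example.com.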

If you add or modify anything in your requirements.in (and requirements.txt) files, you will have to rebuild your containers.

The easiest way to add new packages is to add them to requirements.in and then run:

make requirements

This will rebuild your requirements.txt file, rebuild your Docker containers, and then restart your app with the latest dependencies.

You can use debug tools like pdb or ipdb by enabling service ports.

This can be done by running your web container with the following:

docker compose run --service-ports web
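
If the web container is already running (for example, via make start), its ports will already be bound, so you may need to stop it before starting the debug-friendly one:

docker compose stop web                    # stop the normally running Django container
docker compose run --service-ports web     # re-run it with ports published and an interactive terminal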

If you want to set up debugging with PyCharm, it’s recommended to follow this guide on the topic.

Some environments, especially on Windows, can have trouble finding the files on your local machine. This will often show up as an error like this when starting your app:

python: can't open file '/code/manage.py': [Errno 2] No such file or directory

These issues are usually related to your disk setup, for example, mounting your code on a remote filesystem or an external drive. To fix it, try keeping your code on the same drive where Docker Desktop is installed, or on your machine’s default “C:” drive.

You can also get around this issue by running your application natively, instead of with Docker.