This is the abridged developer documentation for SaaS Pegasus # SaaS Pegasus Documentation > Everything you need to know about setting up and configuring Pegasus for your project. ### Quicklinks [Section titled “Quicklinks”](#quicklinks) [Getting Started ](/getting-started/) [Configuration ](/configuration/) [Teams ](/teams/) [Subscriptions ](/subscriptions/) [Deployment ](/deployment/overview/) [Get Help From AI ](/ai/development/) # Getting Started > Complete setup guide for Pegasus projects with Docker or native Python, including database configuration and post-installation steps. Here’s everything you need to start your first Pegasus project. ## Watch the video [Section titled “Watch the video”](#watch-the-video) Visual learner? The above video should get you going. Else read on below for the play-by-play. ## Create and download your project codebase [Section titled “Create and download your project codebase”](#create-and-download-your-project-codebase) If you haven’t already, you’ll need to [purchase a Pegasus License on saaspegasus.com](http://www.saaspegasus.com/licenses/). Then, [create a new project on saaspegasus.com](https://www.saaspegasus.com/projects/), following the prompts and filling in whatever configuration options you want to use for your new project. Make sure that the “license” field at the bottom is set. Once you’re done, [connect your project to Github](/github) or download your project’s source code as a zip file. **Note: it’s recommended to use the Github integration which will make future upgrades and changes to your project easier to manage.** ## Set up source control [Section titled “Set up source control”](#set-up-source-control) It is highly recommended to use git for source control. [Install git](https://git-scm.com/downloads) and then follow the instructions below: ### If using the Github integration [Section titled “If using the Github integration”](#if-using-the-github-integration) If you created your project on Github, you can use `git clone` to get the code. Get your git URL from the Github page and then run the following command, swapping in your user account and project id: ```bash git clone https://github.com/user/project-id.git ``` ### If using the Zip file download [Section titled “If using the Zip file download”](#if-using-the-zip-file-download) If you chose to use a zip file instead, unzip it to a folder where you want to do your development and then manually initialize your repository: ```bash git init git add . git commit -am "initial project creation" ``` It is also recommended to create a `pegasus` branch at this time for future upgrades. ```bash git branch pegasus ``` You can read [more about upgrading here](/upgrading). ## Get up and running with Docker [Section titled “Get up and running with Docker”](#get-up-and-running-with-docker) If you’ve chosen to use Docker in development (the quickest and easiest way to get up and running), continue to the [Docker instructions](/docker). Then skip ahead to the [post-install steps](/getting-started/#post-installation-steps). ## Get up and running with native Python [Section titled “Get up and running with native Python”](#get-up-and-running-with-native-python) If you’re using Docker you can skip this section. ### Enter the project directory [Section titled “Enter the project directory”](#enter-the-project-directory) ```bash cd {{ project_name }} ``` You should see your project files, including a `manage.py` file. 
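As a quick sanity check you can list the directory contents. This is only a rough sketch: the exact files present will vary with your chosen configuration and Pegasus version.

```bash
ls
# Expect to see (among others) something like:
#   apps/  assets/  static/  templates/  manage.py
# plus pyproject.toml (uv builds) or a requirements/ folder (pip-tools builds)
```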
### Set up your Python environment [Section titled “Set up your Python environment”](#set-up-your-python-environment) There are several ways of setting up your Python environment. See [this page](/python/setup) for information on choosing an option and setting up your environment. ### Install package requirements [Section titled “Install package requirements”](#install-package-requirements) ```bash # with uv uv sync # or if using pip tools pip install -r dev-requirements.txt ``` Note: if you have issues installing `psycopg2`, try installing the dependencies outlined in [this thread](https://stackoverflow.com/questions/22938679/error-trying-to-install-postgres-for-python-psycopg2) (specifically `python3-dev` and `libpq-dev`). On Macs you may also need to follow the instructions from [this thread](https://stackoverflow.com/a/58722268/8207). And specifically, run: ```bash brew reinstall openssl export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/opt/openssl/lib/ ``` ### Create your .env file [Section titled “Create your .env file”](#create-your-env-file) If you installed with Github, you’ll have to create your `.env` file for your environment variables and secrets. You can do this from the example, by running: ```bash cp .env.example .env ``` ### Set up database (Postgres only) [Section titled “Set up database (Postgres only)”](#set-up-database-postgres-only) If you installed with Postgres, edit the `DATABASE_URL` value in `.env` with the appropriate username and password for connecting to your DB. You will also need to create a database for your project if you haven’t already. Assuming that your postgres admin user is named `postgres`, and you’re using identity authentication, you should run: ```bash sudo -u postgres createdb {{ project_name }} ``` Or for standard authentication: ```bash createdb -U postgres -h localhost -p 5432 {{ project_name }} ``` You will then be prompted for the postgres user’s password. ### Create database migrations [Section titled “Create database migrations”](#create-database-migrations) ```bash # with uv uv run manage.py makemigrations # or with normal venv python ./manage.py makemigrations ``` ### Run database migrations [Section titled “Run database migrations”](#run-database-migrations) ```bash # with uv uv run manage.py migrate # or with normal venv python ./manage.py migrate ``` ### Run server [Section titled “Run server”](#run-server) ```bash # with uv uv run manage.py runserver # or with normal venv python ./manage.py runserver ``` Go to http://localhost:8000 and you should see the default Pegasus landing page. ![Landing Page](/_astro/pegasus-landing-page.BuKwtqZg_1bn1u3.webp) ### Run / build the front end [Section titled “Run / build the front end”](#run--build-the-front-end) If you’re using Vite, or you didn’t include static files with your build, you will also need to set up your front end. The basic steps are: 1. Install node/npm. 2. Run `npm install` 3. Run `npm run dev` For more details, see the [front end docs](/front-end/overview). ## Post-installation steps [Section titled “Post-installation steps”](#post-installation-steps) Once up and running, you’ll want to review these common next-steps. ### Create a User [Section titled “Create a User”](#create-a-user) To create your first user account, just go through the sign up flow in your web browser. From there you should be able to access all built-in functionality and examples.
### Enable admin access [Section titled “Enable admin access”](#enable-admin-access) Use [the `promote_user_to_superuser` management command](/cookbooks/#use-the-django-admin-ui) to enable access to the Django Admin site. ### Confirm your site URL [Section titled “Confirm your site URL”](#confirm-your-site-url) For Stripe callbacks, email links, and JavaScript API clients to work, you must make sure that you have [configured absolute URLs correctly](/configuration/#absolute-urls). ### Set up your Stripe Subscriptions [Section titled “Set up your Stripe Subscriptions”](#set-up-your-stripe-subscriptions) If you’ve installed with subscriptions, you’ll want to set things up next. Head to the [subscriptions documentation](/subscriptions) and follow the steps there! ### Set up Background Tasks [Section titled “Set up Background Tasks”](#set-up-background-tasks) For the progress bar example to work---and to run background tasks of your own---you’ll need a Celery environment running. Head to [celery](/celery) and follow the steps there! ## Building Your Application [Section titled “Building Your Application”](#building-your-application) At this point, Pegasus has installed scaffolding for all of the user management, authentication, and (optionally) team views and Stripe subscriptions, and given you a beautiful base UI template and clear code structure to work from. Now that you’re up and running it’s time for the fun part: building your new application! This can obviously be done however you like. Some examples of things you might want to do next include: * Customize your landing page and set up a pricing page * Start modifying the list of navigation tabs and logged-in user experience * Create a new django app and begin building out your data models in `models.py`. It’s recommended to use the [Pegasus CLI](https://github.com/saaspegasus/pegasus-cli/) for this. For some initial pointers on where to make Pegasus your own, head on over to the [Customizations Page](/customizations). For the nitty-gritty details on setting up things like email, error logging, sign up flow, analytics, and more go to [Settings and Configuration](/configuration). # Working with Python Packages (uv) > Fast Python package management with uv using pyproject.toml and uv.lock files for adding, removing, and upgrading dependencies efficiently. Recent versions of Pegasus use [uv](https://docs.astral.sh/uv/) to manage Python packages. It provides all the functionality of `pip-tools` while being much faster and offering more flexibility and features. ### Requirements Files [Section titled “Requirements Files”](#requirements-files) `uv` uses two files to manage requirements. The first is a `pyproject.toml` file, which contains the base list of packages. The `pyproject.toml` file also supports dependency groups, which are used for development and production requirements. `pyproject.toml` replaces the previous `requirements.in`, `dev-requirements.in`, and `prod-requirements.in` files. The second file is the `uv.lock` file. This file contains the pinned versions of dependencies that are used by the project’s environment. This file is automatically generated from the `pyproject.toml` file and *should not be edited by hand*. `uv.lock` replaces the previous `requirements.txt`, `dev-requirements.txt`, and `prod-requirements.txt` files.
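In practice, the relationship between the two files looks like this (a short sketch using standard `uv` commands; your project's Makefile may wrap these for Docker, as described in the next section):

```bash
# Re-resolve uv.lock from the dependencies declared in pyproject.toml
uv lock

# Install the exact pinned versions from uv.lock into your virtual environment
uv sync
```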
#### Adding or removing a package [Section titled “Adding or removing a package”](#adding-or-removing-a-package) To add or remove packages you can run the following commands: ```bash # native version uv add <package> uv remove <package> # docker version make uv "add <package>" make uv "remove <package>" ``` If you’re running natively, this is all you have to do! The command will update your `pyproject.toml` file, your `uv.lock` file, and sync your virtual environment. On Docker, you will have to also rebuild the container. You can do that with: ```bash make build make restart ``` The `make requirements` command can also be used to sync your `uv.lock` file and rebuild / restart your containers. #### Upgrading a package [Section titled “Upgrading a package”](#upgrading-a-package) You can upgrade a package with: ```bash # native version - update the lockfile uv lock --upgrade-package <package> # native version - update the lockfile and sync the virtual environment uv sync --upgrade-package <package> # docker version make uv "lock --upgrade-package wagtail" ``` You can upgrade *all* packages with: ```bash # native version - update the lockfile uv lock --upgrade # native version - update the lockfile and sync the virtual environment uv sync --upgrade # docker version make uv "lock --upgrade" ``` Like with adding packages, if you’re using Docker, you’ll have to rebuild and restart Docker containers for the updated environment to work: ```bash make build make restart ``` # Pegasus's Code Structure > Understand Pegasus project organization with apps, static files, templates, and code formatting using pre-commit hooks and ruff. ## Overall structure [Section titled “Overall structure”](#overall-structure) This is the overall structure of a new Pegasus project: The first three directories are Python modules while the remaining ones are not. ## Your `{{project_name}}` module [Section titled “Your {{project\_name}} module”](#your-project_name-module) This is your Django project root directory. It’s where your settings, root urlconf and `wsgi.py` file will live. ## Your `apps` module [Section titled “Your apps module”](#your-apps-module) This is where your project’s apps will live. It is pre-populated with Pegasus’s default apps for you to further customize to your needs. The module starts with several apps, depending on your configuration. Here are some of the main ones: * `content` is where the [Wagtail CMS models](/wagtail) are configured. * `subscriptions` is for functionality related to [Stripe subscriptions](/subscriptions). * `users` is where your user models and views are defined. * `teams` is where [team models and views](/teams) are defined. * `utils` is a set of functionality shared across the project. * `web` contains utilities and components related to the generic views, layouts and templates. ## The `pegasus` module [Section titled “The pegasus module”](#the-pegasus-module) This is where the Pegasus examples live. In general, it is not expected that you’ll need to modify much in this module, though feel free to do so! ## The `requirements` folder [Section titled “The requirements folder”](#the-requirements-folder) This is where you define your project’s Python requirements. Requirements are managed using `pip-tools`. For more information on using it see [their documentation](https://github.com/jazzband/pip-tools). (On newer uv-based builds, this folder is replaced by `pyproject.toml` and `uv.lock`, as described above.) ## The `assets` folder [Section titled “The assets folder”](#the-assets-folder) This is where the source files for your site’s JavaScript and CSS live. These files are what you should edit to change your JS and CSS.
See [front-end](/front-end/overview) for more information on how to compile these files. ## The `static` folder [Section titled “The static folder”](#the-static-folder) This folder contains your project’s static files, including the compiled output files from the `assets` folder as well as images. ## The `templates` folder [Section titled “The templates folder”](#the-templates-folder) This folder contains your project’s Django templates. There is one sub-folder for each application that has templates. The majority of the project’s base template layouts are in the `templates/web` folder. ## Code formatting [Section titled “Code formatting”](#code-formatting) For projects that have enabled the `Autoformat code` option, the code will have been formatted using [ruff](https://github.com/astral-sh/ruff)—a drop-in replacement for [black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/) that runs much faster than those tools. The project will also include [pre-commit](https://pre-commit.com/) as a dependency in the requirements file as well as the `.pre-commit-config.yaml` file in the root directory. pre-commit is a tool for managing pre-commit hooks, which can be used to ensure your code matches the correct format when it’s committed. After installing the project dependencies you can install the pre-commit hooks: ```bash $ pre-commit install --install-hooks pre-commit installed at .git/hooks/pre-commit ``` The default configuration that ships with Pegasus will run `ruff` and `ruff-format` prior to every Git commit. If there are fixes that are needed you will be notified in the shell output. ### pre-commit Usage [Section titled “pre-commit Usage”](#pre-commit-usage) **Manually running hooks** ```bash # run all hooks against currently staged files pre-commit run # run all the hooks against all the files. This is a useful invocation if you are using pre-commit in CI. pre-commit run --all-files ``` **Temporarily disable hooks** See the pre-commit documentation for instructions on temporarily disabling hooks. For more information on using and configuring pre-commit, check out the [pre-commit docs](https://pre-commit.com/#quick-start) ### Tool configurations [Section titled “Tool configurations”](#tool-configurations) The configuration for the tools can be found in the [`pyproject.toml`](https://black.readthedocs.io/en/stable/usage_and_configuration/the_basics.html#what-on-earth-is-a-pyproject-toml-file) file, using the same syntax as `black`. For the most part the default black/ruff formats have been preserved, with a few updates, for example, increasing the line length to 120. You can find more information about these values in the [ruff README](https://github.com/astral-sh/ruff?tab=readme-ov-file#configuration). ### Upgrading [Section titled “Upgrading”](#upgrading) See [this cookbook](/cookbooks/#migrating-to-auto-formatted-code) for guidance on how to enable code formatting on an existing Pegasus project. # Settings and Configuration > Configure Django settings, environment variables, email providers, social authentication, Stripe payments, and production deployments. This section describes some of the settings and configuration details you can change inside Pegasus. ## Settings and environment files [Section titled “Settings and environment files”](#settings-and-environment-files) Pegasus uses environment variables and `django-environ` to manage settings. You *can* modify values directly in `settings.py`, but the recommended way to modify any setting that varies across environments is to use a `.env` file.
Out-of-the-box, Pegasus will include multiple `.env` files for your settings: **`.env` is for development in either a native or a Docker-based environment.** It will be picked up by default if you run `./manage.py runserver` or `docker compose start`. If you need to swap between these environments you might need to modify a few variables in this file---in particular the database and redis URLs. The `.env` is typically not checked into source control (since it may include secrets like API keys), so it is included in the `.gitignore`. **`.env.example` is an example file.** It is not used for anything, but can be checked into source control so that developers can use it as a starting point for their `.env` file. Projects downloaded as zip files will include a `.env` file, but projects created or pulled from Github will typically only include a `.env.example` file, so you will need to copy this file locally to run your development server. *Note: Pegasus versions prior to 2024.3 also included a `.env.docker` file. This has been merged with the `.env` file.* ### Settings environment precedence [Section titled “Settings environment precedence”](#settings-environment-precedence) Most settings are configured in the form: ```python SOME_VALUE = env('SOME_VALUE', default='') ``` As mentioned above, *it is recommended to set these values in your environment / `.env` file*, which will always work as expected. The environment takes precedence over the default if it’s set---even if it is set to an empty value. This can lead to confusing behavior. For example, if in your `.env` file you have this line: ```dotenv SOME_VALUE='' ``` And in your `settings.py` you provide a default: ```python SOME_VALUE = env('SOME_VALUE', default='my value') ``` The default will be ignored, and `SOME_VALUE` will be an empty string. To fix this, either *remove the value entirely from your `.env` file*, or *explicitly set the value in your `settings.py`* (instead of using the `default` argument). E.g. ```python SOME_VALUE = 'my value' ``` ## Project Metadata [Section titled “Project Metadata”](#project-metadata) When you first set up Pegasus, it populated the `PROJECT_METADATA` setting in `settings.py` with various things like page titles and social sharing information. These settings can later be changed as you like by editing the setting directly: ```python PROJECT_METADATA = { 'NAME': 'Your Project Name', 'URL': 'http://www.example.com', 'DESCRIPTION': 'My Amazing SaaS Application', 'IMAGE': 'https://upload.wikimedia.org/wikipedia/commons/2/20/PEO-pegasus_black.svg', 'KEYWORDS': 'SaaS, django', 'CONTACT_EMAIL': 'you@example.com', } ``` See the [project metadata documentation](/page-metadata) for more information about how this is used. ## Absolute URLs [Section titled “Absolute URLs”](#absolute-urls) In most of Django/Pegasus, URLs are *relative*, represented as paths like `/account/login/` and so forth. But in some cases you need a complete URL, including the *protocol* (http vs https) and *server* (e.g. [www.example.com](http://www.example.com)). These are necessary whenever you use a link in an email, with an external site (e.g. Stripe API callbacks and social authentication), and in some places when APIs are accessed from your front end. ### Setting your site’s protocol [Section titled “Setting your site’s protocol”](#setting-your-sites-protocol) The *protocol* is configured by the `USE_HTTPS_IN_ABSOLUTE_URLS` variable in `settings.py`.
You should set this to `True` when using https and `False` when not (typically only in development). ### Setting your server URL [Section titled “Setting your server URL”](#setting-your-server-url) When you first install Pegasus it will use the `URL` value from `PROJECT_METADATA` above to create a Django `Site` object in your database. The domain name of this `Site` will be used for your server address. If you need to change the URL after installation, you can go to the site admin at `admin/sites/site/` and modify the values accordingly, leaving off any http/https prefix. In development, you’ll typically want a domain name of `localhost:8000`, and in production this should be the domain where your users access your app. Note that this URL must match *exactly* what is in the browser address bar. So, for example, if you load your development site from `127.0.0.1:8000` instead of `localhost:8000` then that is what you should put in. **Example Development Configuration** ![Development Site Settings](/_astro/site-admin-dev.D-ocuSIA_16FrW3.webp) **Example Production Configuration** ![Site Settings](/_astro/site-admin.B37AAtIM_Z25ivwv.webp) ## Sending Email [Section titled “Sending Email”](#sending-email) Pegasus is set up to use [django-anymail](https://github.com/anymail/django-anymail) to send email via Amazon SES, Mailgun, Postmark, and a variety of other email providers. To use one of these email backends, change the email backend in `settings.py` to: ```python EMAIL_BACKEND = 'anymail.backends.mailgun.EmailBackend' ``` And populate the `ANYMAIL` setting with the required information. For example, to use [Mailgun](https://www.mailgun.com/) you’d populate the following values: ```python ANYMAIL = { "MAILGUN_API_KEY": "key-****", "MAILGUN_SENDER_DOMAIN": 'mg.{{project_name}}.com', # should match what's in mailgun } ``` If you are in the EU you may also need to add the following entry: ```python 'MAILGUN_API_URL': 'https://api.eu.mailgun.net/v3', ``` The [anymail documentation](https://anymail.readthedocs.io/en/stable/) has much more information on these options. The following django settings should also be set: ```python SERVER_EMAIL = 'noreply@{{project_name}}.com' DEFAULT_FROM_EMAIL = 'you@{{project_name}}.com' ADMINS = [('Your Name', 'you@{{project_name}}.com'),] ``` See [Sending email](https://docs.djangoproject.com/en/stable/topics/email/) in the django docs for more information. ## User Sign Up [Section titled “User Sign Up”](#user-sign-up) The sign up workflow is managed by [django-allauth](https://allauth.org/) with a sensible set of defaults and templates. ### Social logins [Section titled “Social logins”](#social-logins) Pegasus optionally ships with “Login with Google/Twitter/Github” options. You’ll separately need to follow the steps listed on the [provider-specific pages here](https://docs.allauth.org/en/latest/socialaccount/providers/index.html) to configure things on the other side. These steps can sometimes be a bit involved and vary by platform, but they will generally entail two steps: 1. Creating a new application / client on the service you want to use. 2. Adding the credentials to your environment (`.env`) file. See the Google guide below for an example you can follow. If you want to add a social login that’s not supported out of the box (e.g. Facebook/Meta or Apple), you can follow the existing patterns and configure things based on the allauth docs. If you need help setting this up feel free to get in touch! Additionally, see the resources below.
#### Google OAuth Specific instructions [Section titled “Google OAuth Specific instructions”](#google-oauth-specific-instructions) 1. Register the application with google by following just the “App registration” section [here](https://docs.allauth.org/en/latest/socialaccount/providers/google.html). Note that the trailing slash for the “Authorized redirect URLs” is required. For example, assuming you are developing locally, it should be set to exactly `http://localhost:8000/accounts/google/login/callback/`. 2. Set the resulting client id and secret key in the `.env` file in the root of your project. ```dotenv GOOGLE_CLIENT_ID="actual client id from the google console" GOOGLE_SECRET_ID="actual secret id from the google console" ``` #### Other Social Setup Guides [Section titled “Other Social Setup Guides”](#other-social-setup-guides) The Pegasus community has recommended the following guides to set things up with specific providers: * [Github](https://python.plainenglish.io/django-allauth-a-guide-to-enabling-social-logins-with-github-f820239fb73f) ### Requiring email confirmation [Section titled “Requiring email confirmation”](#requiring-email-confirmation) Pegasus does not require users to confirm their email addresses prior to logging in. However, this can be easily changed by changing the following value in `settings.py` ```python ACCOUNT_EMAIL_VERIFICATION = 'optional' # change to "mandatory" to require users to confirm email before signing in. ``` *Note: The email verification step will be skipped if using a social login.* ### Enabling sign in by email code [Section titled “Enabling sign in by email code”](#enabling-sign-in-by-email-code) Sign in by email code is controlled by the `ACCOUNT_LOGIN_BY_CODE_ENABLED` setting. You can enable / disable it in `settings.py`. ```python ACCOUNT_LOGIN_BY_CODE_ENABLED=True ``` ### Two-factor authentication [Section titled “Two-factor authentication”](#two-factor-authentication) Two-Factor authentication (2FA) is configured using the [allauth’s mfa](https://docs.allauth.org/en/latest/mfa/index.html) support. When using Two-Factor Auth with Pegasus, a new section is added to the user profile for enabling & configuring the OTP (one-time password) devices for the user. If a user has a Two-Factor device configured then they will be prompted for a token after logging in. ### Customizing emails [Section titled “Customizing emails”](#customizing-emails) Pegasus ships with simple, responsive email templates for password reset and email address confirmation. These templates can be further customized by editing the files in the `templates/account/email` directory. See [the allauth email documentation](https://docs.allauth.org/en/latest/common/email.html) for more information about customizing account emails. ### Disabling public sign ups [Section titled “Disabling public sign ups”](#disabling-public-sign-ups) If you’d like to prevent everyone from signing up for your app, set the following in your `settings.py`, replacing the existing value: ```python ACCOUNT_ADAPTER = 'apps.users.adapter.NoNewUsersAccountAdapter' ``` This will prevent all users from creating new accounts, though existing users can continue to login and use the app. ### Further configuration [Section titled “Further configuration”](#further-configuration) Allauth is highly configurable. It’s recommended that you look into the various [configuration settings available within allauth](https://docs.allauth.org/en/latest/account/configuration.html) for any advanced customization. 
## Stripe [Section titled “Stripe”](#stripe) If you’re using [Stripe](https://www.stripe.com/) to collect payments you’ll need to fill in the following in `settings.py` (or populate them in the appropriate environment variables): ```python STRIPE_LIVE_PUBLIC_KEY = os.environ.get("STRIPE_LIVE_PUBLIC_KEY", "") STRIPE_LIVE_SECRET_KEY = os.environ.get("STRIPE_LIVE_SECRET_KEY", "") STRIPE_TEST_PUBLIC_KEY = os.environ.get("STRIPE_TEST_PUBLIC_KEY", "") STRIPE_TEST_SECRET_KEY = os.environ.get("STRIPE_TEST_SECRET_KEY", "") STRIPE_LIVE_MODE = False # Change to True in production ``` ## Google Analytics [Section titled “Google Analytics”](#google-analytics) To enable Google Analytics, add your analytics tracking ID to your `.env` file or `settings.py` file: ```python GOOGLE_ANALYTICS_ID = 'UA-XXXXXXX-1' ``` Pegasus uses a “global site tag” with gtag.js by default, which is a simpler version of Google Analytics that can be rolled out with zero additional configuration. If you use Google Tag Manager, you can make changes in `templates/web/components/google_analytics.html` to match the snippet provided by Google. See [this article](https://support.google.com/tagmanager/answer/7582054) for more on the differences between gtag.js and Google Tag Manager. ## Sentry [Section titled “Sentry”](#sentry) [Sentry](https://sentry.io/) is the gold standard for tracking errors in Django applications and Pegasus can connect to it with a few lines of configuration. If you build with Sentry enabled, all you need to do is populate the `SENTRY_DSN` setting, either directly in your `settings.py` or via an environment variable. After setting it up on production, you can test your Sentry integration by visiting `https://<your-domain>/simulate_error`. This should trigger an exception which will be logged by Sentry. ## OpenAI and LLMs [Section titled “OpenAI and LLMs”](#openai-and-llms) For help configuring LLMs and AIs, see the [AI docs](/ai/development/). ## Celery [Section titled “Celery”](#celery) See the [celery docs](/celery) for set up and configuration of Celery. ## Turnstile [Section titled “Turnstile”](#turnstile) To enable support for [Cloudflare Turnstile](https://www.cloudflare.com/products/turnstile/), set `TURNSTILE_KEY` and `TURNSTILE_SECRET` in your settings or environment variables. This should automatically enable turnstile on your sign up pages. It is recommended to create two different Turnstile accounts on Cloudflare for development and production. In development you can specify “localhost” as your domain like this: ![Turnstile Dev](/_astro/turnstile.1SEbnPxr_Z8Udm.webp) In production, you should replace that with your site’s production domain. ## Mailing List [Section titled “Mailing List”](#mailing-list) Pegasus includes support for subscribing users to a marketing email list upon signup. Currently, three platforms are supported: 1. [Mailchimp](https://mailchimp.com/) 2. [ConvertKit](https://convertkit.com/) 3. [Email Octopus](https://emailoctopus.com/?urli=Cd7hX) Make sure you choose the platform you would like to use when building your Pegasus project. Then follow the instructions below for the platform you’ve chosen. After completing these steps, new sign-ups will automatically be added to your configured marketing list. Note that it is your responsibility to notify your users / get their consent as per your local privacy regulations.
### Mailchimp [Section titled “Mailchimp”](#mailchimp) To enable the Mailchimp integration, first create a mailing list, then fill in the following two values in your environment/settings. ```python MAILCHIMP_API_KEY = '' MAILCHIMP_LIST_ID = '' ``` ### ConvertKit [Section titled “ConvertKit”](#convertkit) To enable the ConvertKit integration, first create a form, then fill in the following values in your environment/settings. ```python CONVERT_KIT_API_KEY = "" CONVERT_KIT_FORM_ID = "" ``` ### Email Octopus [Section titled “Email Octopus”](#email-octopus) To enable the Email Octopus integration, first create a mailing list, then fill in the following values in your environment/settings. ```python EMAIL_OCTOPUS_API_KEY = "" EMAIL_OCTOPUS_LIST_ID = "" ``` Note: If you use [this link](https://emailoctopus.com/?urli=Cd7hX) to sign up for Email Octopus, you’ll get $15 off your first payment, and help support Pegasus. ## Logging [Section titled “Logging”](#logging) Pegasus ships with a default Django log configuration which outputs logs to the console as follows: * Django log messages at level INFO and above * Pegasus log messages at level INFO and above The Pegasus loggers are all namespaced with the project name e.g. `{{project_name}}.subscriptions`. ### Changing log levels [Section titled “Changing log levels”](#changing-log-levels) There are two environment variables which can be used to control the log levels of either Django messages or Pegasus messages: * `DJANGO_LOG_LEVEL` * `{{project_name.upper()}}_LOG_LEVEL` Alternatively the entire log configuration can be overridden using the `LOGGING` setting as described in the [Django docs](https://docs.djangoproject.com/en/4.0/topics/logging/). ## Storing media files [Section titled “Storing media files”](#storing-media-files) SaaS Pegasus ships with optional configuration for storing dynamic media files in S3, e.g. user profile pictures. If you do not have this enabled the [default Django configuration](https://docs.djangoproject.com/en/4.1/topics/files/) will be used, which requires you to have persistent storage available for your site such as a Docker volume. ### Setting up S3 media storage [Section titled “Setting up S3 media storage”](#setting-up-s3-media-storage) *For a video walkthrough of this content (using kamal deployment), see below:* This section assumes you have set up your SaaS Pegasus project with the **S3 media file storage** enabled. In order to use S3 for media storage you will need to create an S3 bucket and provide authentication credentials for writing data to the bucket. Once you have done the S3 setup (see below), you can update your `.env` file as follows: ```dotenv USE_S3_MEDIA=True AWS_ACCESS_KEY_ID= AWS_SECRET_ACCESS_KEY= ``` With this configuration your media files will be accessible at `https://{{ project_name }}-media.s3.amazonaws.com/media/`. [This guide](https://testdriven.io/blog/storing-django-static-and-media-files-on-amazon-s3/) is an excellent resource with step-by-step instructions for the S3 setup. #### Additional settings [Section titled “Additional settings”](#additional-settings) `AWS_STORAGE_BUCKET_NAME`: Name of the S3 bucket to use. Defaults to `{{project_name}}-media`. ### Alternative storage backends [Section titled “Alternative storage backends”](#alternative-storage-backends) Should you wish to use a different storage backend, e.g. [Digital Ocean Spaces](https://www.digitalocean.com/products/spaces), you can follow the setup described in the [django-storages](https://django-storages.readthedocs.io/en/latest/index.html) documentation. There is also a [Pegasus community guide](/community/digital-ocean-spaces) that walks through this in more detail. ## Django Debug Toolbar [Section titled “Django Debug Toolbar”](#django-debug-toolbar) Pegasus ships with [Django Debug Toolbar](https://github.com/jazzband/django-debug-toolbar#readme) as an optional package. This section describes how the feature is configured in Pegasus.
The `django-debug-toolbar` package is placed in the `dev-requirements.txt` file, which means it will only be available in dev environments. Should you wish to use it in a production environment you will need to add it to your `prod-requirements.in` file and [re-build](/python/setup) your `prod-requirements.txt` file. By default, the toolbar is enabled in development environments via the `ENABLE_DEBUG_TOOLBAR` setting in your `.env` file(s). You can change this setting in any environment to turn it on/off. ```dotenv ENABLE_DEBUG_TOOLBAR=True ``` # APIs > Django REST Framework APIs with auto-generated OpenAPI schemas, TypeScript clients, and authentication support for building modern web applications. Pegasus comes with a rich ecosystem of APIs that can be used by your app’s front end as well as exposed to third-party developers. ## APIs in Pegasus [Section titled “APIs in Pegasus”](#apis-in-pegasus) APIs in Pegasus consist of three pieces: 1. **API endpoints**, created with [Django Rest Framework (DRF)](https://www.django-rest-framework.org/). These are the Django views that serve your APIs. 2. **API schemas**, created with [drf-spectacular](https://drf-spectacular.readthedocs.io/en/latest/). These are automatically created by your APIs, and can be used for API documentation and client generation. They follow the [OpenAPI 3](https://spec.openapis.org/oas/v3.1.0) specification. 3. **API clients**, created by [OpenAPI Generator](https://openapi-generator.tech/). These can be used by developers to interact with your APIs. Pegasus ships with a TypeScript (JavaScript) client that is used in your app’s front end by the parts of the app that interact with the backend APIs (e.g. JavaScript charts, and the React/Vue demos). This might sound like a lot of moving parts, but, critically, *all the logic lives in the API endpoints themselves*. The schemas are auto-generated by the endpoints, and the clients are auto-generated by the schemas. So you only have to maintain your APIs in a single place, and everything else is kept in sync with tooling. Using the schemas and clients is optional. You can always interact with a Pegasus API by making the appropriate HTTP requests directly. However, using a client can greatly simplify the code you write and improve the development experience. Front end code in Pegasus that interacts with APIs uses it by default. Additionally, getting API docs “for free” from the schemas can be a big win if you plan to make your project’s API third-party-developer-facing. ## API Documentation [Section titled “API Documentation”](#api-documentation) By default, your Pegasus app ships with two built-in sets of API documentation available at the `/api/schema/swagger-ui/` endpoint (http://localhost:8000/api/schema/swagger-ui/ in development) and `/api/schema/redoc/` endpoint (http://localhost:8000/api/schema/redoc/ in development). The API docs will look something like this: **Swagger API docs:** ![Swagger API Docs](/_astro/swagger-api-docs.CyMzYekt_ZX2A8c.webp) **Redoc API docs:** ![Redoc API Docs](/_astro/redoc-api-docs.9KICU7qX_Z1EUqMg.webp) ## API Clients [Section titled “API Clients”](#api-clients) As part of the [front end](/front-end/overview), Pegasus ships with an API client that can be used to interact with your project’s APIs. **This client is automatically generated from your APIs and should not be modified by hand.** You can find the source code of the API client(s) in the `api-client` folder in your project’s root directory.
*Note: In releases prior to 2024.3 the API client was in the `assets/javascript/api-client` directory.* ### Using the API client [Section titled “Using the API client”](#using-the-api-client) There are several example usages of the API client in the Pegasus codebase. The steps, as seen in the employee app demo, are as follows: **Initialize the API client** ```javascript import {Cookies} from "./app"; import {Configuration, PegasusApi} from "./api-client"; const apiConfig = new Configuration({ basePath: 'https://yourserver.com/', // or pass this in via {{server_url}} template variable headers: { 'X-CSRFToken': Cookies.get('csrftoken'), } }) const client = new PegasusApi(apiConfig); ``` **Call an API** ```javascript client.employeesList().then((result) => { // do something with the API result here console.log('your employees are ', result.results); }); ``` ### Client method names [Section titled “Client method names”](#client-method-names) The easiest way to find out the methods available in the API client is by looking at the source code in `api-client/apis/Api.ts`. Method names are determined by the `operationId` value for the API in the auto-generated `schema.yaml` file. These identifiers are auto-generated, but can be overridden using DRF Spectacular’s `extend_schema_view` and `extend_schema` helper functions. This can be done for an entire `ViewSet` as follows: ```python from drf_spectacular.utils import extend_schema_view, extend_schema from rest_framework import viewsets @extend_schema_view( create=extend_schema(operation_id='employees_create'), list=extend_schema(operation_id='employees_list'), retrieve=extend_schema(operation_id='employees_retrieve'), update=extend_schema(operation_id='employees_update'), partial_update=extend_schema(operation_id='employees_partial_update'), destroy=extend_schema(operation_id='employees_destroy'), ) class EmployeeViewSet(viewsets.ModelViewSet): # rest of viewset code here ``` The IDs in the Python code will be converted to camelCase in the JavaScript client. ### Generating the OpenAPI3 schema.yml file [Section titled “Generating the OpenAPI3 schema.yml file”](#generating-the-openapi3-schemayml-file) In a new Pegasus installation, the OpenAPI3 `schema.yml` will be available at the `/api/schema/` endpoint (http://localhost:8000/api/schema/ in dev). If you plan to use the `schema.yml` file in production, it is more efficient to create it once and serve it as a static file. This can be done by running: ```bash ./manage.py spectacular --file static/api-schema.yml ``` Then you can reference the file by using `{% static 'api-schema.yml' %}` in a Django template. ### Generating the API client [Section titled “Generating the API client”](#generating-the-api-client) Anytime you change your APIs you should create a new API client to keep things in sync. This can be done using the [OpenAPI Generator](https://openapi-generator.tech/) project. The [typescript-fetch](https://openapi-generator.tech/docs/generators/typescript-fetch) client is the one used by Pegasus.
#### Running natively (requires Java) [Section titled “Running natively (requires Java)”](#running-natively-requires-java) To generate your API client natively, first install the `openapi-generator-cli` (this library also requires `java`): ```bash npm install @openapitools/openapi-generator-cli -g ``` Then run it as follows: ```bash openapi-generator-cli generate -i http://localhost:8000/api/schema/ -g typescript-fetch -o ./api-client/ ``` The above assumes your Django server is running at http://localhost:8000, but you can replace that value with any URL or file system reference to your `schema.yml` file. #### Running in docker [Section titled “Running in docker”](#running-in-docker) You can also generate your API client with docker to avoid having to install Java by running: ```bash make build-api-client ``` while your server is running. You should see the files in `api-client` get updated. #### Rebuilding your front end [Section titled “Rebuilding your front end”](#rebuilding-your-front-end) After re-creating the API client, you’ll have to rebuild your front end: ```bash npm run dev ``` Note that introducing breaking changes to your APIs can also break your API client! If you’re unsure if you introduced breaking changes it is worth testing any functionality that depends on the API client. ## Authentication APIs [Section titled “Authentication APIs”](#authentication-apis) *Added in version 2024.3. Changed in 2025.4.1* If you enable the “Use Authentication APIs” checkbox in your project, Pegasus will generate a set of API endpoints for registering and logging in users. These endpoints can be used to integrate your backend with single page applications (SPAs) and mobile apps. Under the hood, Pegasus uses [allauth headless](https://docs.allauth.org/en/dev/headless/openapi-specification/) for these endpoints. This feature uses Django’s session-based authentication by default---which works great for single page apps---though it is possible to add in JWT or another token-based authentication scheme to better support mobile applications. A complete end-to-end example that uses the API authentication feature in a React SPA can be found in the experimental [standalone front end](/experimental/react-front-end). This example includes React/API-based sign up, login, password reset, two-factor authentication, email confirmation and more. ## API Keys [Section titled “API Keys”](#api-keys) Pegasus supports the use of API Keys to access APIs, built on top of the [Django REST Framework API Key](https://florimondmanca.github.io/djangorestframework-api-key/) project. Pegasus includes the ability to create API keys, associate them with your User objects, and access APIs using the key. ### Creating and managing API keys [Section titled “Creating and managing API keys”](#creating-and-managing-api-keys) A simple UI for creating, viewing, and revoking API keys is available to end users from the Profile page. More advanced/customized management of API keys---including the ability to associate names and expiry dates with keys---is available through the Django admin interface. Note that when an API key is created it will be displayed *once* and will not be available after that. For more details on working with API keys see [the library documentation](https://florimondmanca.github.io/djangorestframework-api-key/guide/#creating-and-managing-api-keys). ### API keys and Users [Section titled “API keys and Users”](#api-keys-and-users) Pegasus associates API keys with your Django `User` objects.
This is a good, practical way to get started with API key scoping. All access granted by the key will be the same as the associated `CustomUser` object, which allows you to easily create APIs that work with logged-in users *or* API keys. The `apps.api.models.UserAPIKey` class is used to associate an API key with a `CustomUser`. You can then enable API keys for any user-specific views by following the instructions for `APIView`s and `ViewSet`s below. More complex API key permissions---for example, associating a key with a single API or a single team---can be created by following [these instructions](https://florimondmanca.github.io/djangorestframework-api-key/guide/#api-key-models). To enable API-key support for an `APIView`, or `ViewSet`, use the `IsAuthenticatedOrHasUserAPIKey` permission class in place of `IsAuthenticated`. This will allow either authenticated users or UserAPIKey users to access the APIs. In either case, the associated user object will be available as `request.user`. You can see an example `APIView` in the `EmployeeDataAPIView` class that ships with the Pegasus examples, and an example `ViewSet` in the `EmployeeViewSet` code. ### Testing API keys [Section titled “Testing API keys”](#testing-api-keys) The easiest way to test API key functionality is to use a tool like [curl](https://curl.se/). The following command can be used to test a user-based API key with a default Pegasus installation: ```bash curl http://localhost:8000/pegasus/employees/api/employees/ -H "Authorization: Api-Key <your-api-key>" ``` You should replace `<your-api-key>` with the API key displayed when it is created. ## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) ### API client requests are failing [Section titled “API client requests are failing”](#api-client-requests-are-failing) When API client requests fail you will get error messages in parts of the application that use the API clients, including the Teams UI (if you are using React), and the React/Vue employee examples. The most common reason that API client requests fail is a mismatch between the absolute URL configured in the server and the server’s *actual* URL. This mismatch can be fixed by modifying the Django Site object and settings to match the URL you’re loading the site from, as described in the documentation on [absolute URLs](/configuration/#absolute-urls). In *development* the most common issues are: 1. Your Django Site is not set up for development. Ensure the site’s domain name is `localhost:8000` in your Django admin, [as described here](/configuration/#absolute-urls). 2. You are loading from a mismatched domain. Be sure you are loading your browser at `localhost:8000` and not `127.0.0.1:8000`. Or alternatively, if you want to use the 127.0.0.1 address, update the Django site accordingly to use that. # Async and Websocket Support > Enable asynchronous Django views and real-time websockets using Daphne, Uvicorn, and Django Channels for modern web applications. As of version 2023.10, Pegasus provides [asynchronous support](https://docs.djangoproject.com/en/stable/topics/async/), as well as support for websockets via the [channels library](https://channels.readthedocs.io/). ## Enabling Async Support [Section titled “Enabling Async Support”](#enabling-async-support) You can enable Async support by checking the “Use Async / Websockets” option in your project settings. Enabling Async will: 1. Change your default development server to [Daphne](https://docs.djangoproject.com/en/stable/howto/deployment/asgi/daphne/). 2.
Change your default production server to [Uvicorn](https://www.uvicorn.org/) (via gunicorn). 3. Add and configure `channels` in your project for websocket support. In addition to the above configuration changes, enabling async will also use it for LLM chats if available. Finally, there is an optional group chat application you can separately add (details below). ## The Async / Websocket Demo Application [Section titled “The Async / Websocket Demo Application”](#the-async--websocket-demo-application) Pegasus includes an optional demo application to demonstrate the asynchronous and socket capabilities. The demo application is an extension of the demo application that you build while completing the [channels tutorial](https://channels.readthedocs.io/en/latest/tutorial/index.html). You can see a demo below. The demo application uses the [HTMX websockets extension](https://htmx.org/extensions/ws/) to simplify the implementation. If you prefer not to use HTMX at all, you can change your websocket connection logic to use vanilla JavaScript instead, as shown in the [channels tutorial here](https://channels.readthedocs.io/en/latest/tutorial/part_2.html#add-the-room-view). A React-based websocket demo is on the roadmap. ## Websocket urls [Section titled “Websocket urls”](#websocket-urls) Websocket URLs are defined separately from your app’s main `urls.py` file. In Pegasus, the convention is to put your websocket urls in `channels_urls.py` in your project folder (the same one containing `urls.py`). Because websocket urls are separate from your main app, and because they follow a different protocol, they must be referenced as absolute URLs in your front end (including prepending “ws://” or “wss://” depending on whether you’re using HTTPS). Pegasus ships with two helper functions you can use to assist with working with URLs, so long as you follow Pegasus conventions. The `websocket_reverse` function will reverse a relative websocket URL, and the `websocket_absolute_url` function will turn a relative URL into an absolute websocket URL based on your Site address and the `USE_HTTPS_IN_ABSOLUTE_URLS` setting. You can combine these functions like so to pass the URL of a websocket endpoint to a template: ```python room_ws_url = websocket_absolute_url(websocket_reverse("ws_group_chat", args=[room_id])) ``` You can then use the websocket URL in a template/JavaScript like this: ```js const chatSocket = new WebSocket("{{ room_ws_url }}"); chatSocket.onmessage = function(e) { // handle message }; ``` ## Asynchronous web servers [Section titled “Asynchronous web servers”](#asynchronous-web-servers) There are several ASGI servers supported by Django. By default, Pegasus uses the Daphne web server in development and the Uvicorn web server in production, for reasons described below. That said, you can customize your app to use whichever server you prefer. ### Daphne [Section titled “Daphne”](#daphne) In development, Pegasus uses the [Daphne](https://pypi.org/project/daphne/) web server for its tight integration with Django’s `runserver` command, as [outlined in the Django docs](https://docs.djangoproject.com/en/4.2/howto/deployment/asgi/daphne/). Daphne is installed via `dev-requirements` and will be added to your `INSTALLED_APPS` whenever `settings.DEBUG` is `True`. ### Uvicorn [Section titled “Uvicorn”](#uvicorn) In production, Pegasus uses the [Uvicorn](https://www.uvicorn.org/) web server. Uvicorn has a seamless integration with `gunicorn`, making transitioning to it very easy.
Uvicorn is installed via `prod-requirements`, and if you build with async features enabled, your `gunicorn` command will be updated to use it. ## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) **The chat app loads but nothing happens when I send a message.** The most likely reason this would happen is if your site URLs are not set up properly, which would cause the websocket endpoints to not hit the right address. See the documentation on [absolute URLs](/configuration/#absolute-urls) to fix this, and in particular make sure your Django site object has the right domain. In development this should be set to `localhost:8000`. **I’m getting an error: No module named ‘daphne’** If you are getting this error *in production* it is likely because your `DEBUG` environment variable is not set. Due to the order in which settings are imported, you *must* define `DEBUG=False` in your *environment*, `.env` file, or main `settings.py` file. This is in addition to (or instead of) setting `DEBUG=False` in your `settings_production.py` file. If you are getting this error *in development*, be sure that Daphne is installed. You should have a `channels[daphne]` entry in your `dev-requirements.in` file, and you should [build and install your requirements](/python/setup) as needed. To do this in a non-Docker environment, run: ```bash pip-compile requirements/dev-requirements.in pip install -r requirements/dev-requirements.txt ``` **I’m having another issue deploying to production.** Since this is a new feature there may be some speed-bumps getting it into production on all platforms. While every deployment platform is expected to work, it is not possible to test every app/configuration. So, if you have any issues please reach out over email or on Slack and I will do my best to help! # Celery > Set up Celery distributed task queues with Redis for background tasks, scheduled jobs, and async processing in Pegasus applications. [Celery](https://docs.celeryq.dev/) is a distributed task queue used to run background tasks. It is required by several Pegasus features, including: 1. The “background task” example. 2. Per-unit subscriptions (celery runs the background task to sync unit amounts with Stripe). 3. AI Chat (it is used in all builds to set chat names, and, if async is not enabled, for the chats themselves). If you aren’t using any of the above features, you can disable celery by unchecking the “use celery” option---added in version 2025.1---in your project settings. **If you *are* using any of the above features, this option will not do anything.** ## Quick Start [Section titled “Quick Start”](#quick-start) **If you’re using [Docker in development](/docker) then Celery should automatically be configured and running. The instructions in this section are for running Celery outside of Docker.** The easiest way to get going in development is to [download and install Redis](https://redis.io/download) (if you don’t already have it) and then run: *With uv:* ```bash uv run celery -A {{ project_name }} worker -l info --pool=solo ``` *With standard Python:* ```bash celery -A {{ project_name }} worker -l info --pool=solo ``` Note that the ‘solo’ pool is recommended for development but not for production. When running in production, you should use a more robust pool implementation such as `prefork` (for CPU bound tasks) or `gevent` (for I/O bound tasks).
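For example, a production-style invocation using the `prefork` pool might look like the sketch below. The `--pool` and `--concurrency` flags are standard Celery options rather than anything Pegasus-specific; tune the concurrency to your CPU count.

```bash
# Run the worker with the prefork pool, using 4 worker processes
celery -A {{ project_name }} worker -l info --pool=prefork --concurrency=4
```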
### Running Celery with Gevent [Section titled “Running Celery with Gevent”](#running-celery-with-gevent) The `gevent` pool is useful when running tasks that are I/O bound, which tends to be 90% of tasks. The same configuration can also be used to run Celery on Windows (if the `solo` pool is not suitable) since Celery 4.x [no longer officially supports Windows](https://docs.celeryq.dev/en/4.0/whatsnew-4.0.html#removed-features). To use the `gevent` pool, change the concurrency pool implementation to `gevent`: ```bash pip install gevent celery -A {{ project_name }} worker -l info -P gevent ``` For more information see the [Celery documentation](https://docs.celeryq.dev/en/stable/userguide/concurrency/gevent.html). ## Setup and Configuration [Section titled “Setup and Configuration”](#setup-and-configuration) The above setup uses [Redis](https://redis.io/) as a message broker and result backend. If you want to use a different message broker, for example [RabbitMQ](https://www.rabbitmq.com/), you will need to modify the `CELERY_BROKER_URL` and `CELERY_RESULT_BACKEND` values in `settings.py`. More details can be found in the [Celery documentation](https://docs.celeryq.dev/en/stable/getting-started/backends-and-brokers/index.html). ## Monitoring with Flower [Section titled “Monitoring with Flower”](#monitoring-with-flower) [Flower](https://flower.readthedocs.io/en/latest/) is an open-source web application for monitoring and managing Celery clusters. It provides real-time information about the status of Celery workers and tasks. If you’d like to use Flower in development, add the following to the `services` section of your `docker-compose.yml`: ```yaml flower: image: mher/flower environment: - CELERY_BROKER_URL=redis://redis:6379 command: celery flower ports: - 5555:5555 depends_on: - redis ``` In production, you will likely want to run Flower behind a private VPN, or [set up authentication](https://flower.readthedocs.io/en/latest/auth.html) on your Flower instance, and use a [reverse proxy](https://flower.readthedocs.io/en/latest/reverse-proxy.html) to expose it. ## Scheduled Tasks with Celery Beat [Section titled “Scheduled Tasks with Celery Beat”](#scheduled-tasks-with-celery-beat) [Celery Beat](https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html) is a scheduler that triggers tasks at regular intervals, which can be used to run periodic tasks like daily reports, or sending scheduled notifications. ### Configuration [Section titled “Configuration”](#configuration) By default, Celery Beat will store the schedule in a file on the filesystem. When running in a production environment and especially in a containerized environment, you should use persistent storage to store the schedule. Pegasus is pre-configured to store the schedule in the Pegasus database using [`django-celery-beat`](https://django-celery-beat.readthedocs.io/en/latest/). You can place the scheduled task definitions in the `SCHEDULED_TASKS` setting in your `settings.py` file and then run the `bootstrap_celery_tasks` management command to create the tasks in the database.
```python from celery.schedules import crontab SCHEDULED_TASKS = { 'example-task-every-morning': { 'task': '{{ project_name }}.tasks.example_task', 'schedule': crontab(hour=7, minute=0), # Run every day at 7:00 AM }, 'another-example-every-hour': { 'task': '{{ project_name }}.tasks.another_example', 'schedule': 3600.0, # Run every hour (in seconds) 'args': (16, 16), # Arguments to pass to the task }, } ``` ```bash python manage.py bootstrap_celery_tasks --remove-stale ``` This will create or update the tasks in the database and remove any stale tasks that are no longer defined in `SCHEDULED_TASKS`. If you want to bootstrap the tasks automatically during your application deploy process, you can do so by running the bootstrap command alongside the Django migration command. ### Running Celery Beat [Section titled “Running Celery Beat”](#running-celery-beat) To run Celery Beat in development: *With Docker:* If you are using the local dockerized setup with docker compose, then Celery Beat will already be running as part of the `celery` service. *With uv:* ```bash # Alongside the Celery worker, you can run Celery Beat uv run celery -A {{ project_name }} worker -l info --beat # As a dedicated process uv run celery -A {{ project_name }} beat -l info ``` Note that if you run Celery Beat as a standalone process, you will need to ensure that the Celery worker is running separately. This is because Celery Beat is responsible for scheduling tasks while the worker executes them. #### Production Setup [Section titled “Production Setup”](#production-setup) In production, you can run Celery Beat as a separate process. You must ensure that there is only ever one Celery Beat process running at a time to avoid multiple instances of the same task being scheduled. It’s also important to note that you cannot run Celery Beat in the same process as a worker that is using the `gevent` pool. For more information, see the [Celery Beat documentation](https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html). # Cookbooks > Step-by-step guides for Django admin setup, migrating from pip-tools to uv, enabling code formatting, and common development tasks. Step-by-step guides to some different things you might want to do with Pegasus. ## Use the Django Admin UI [Section titled “Use the Django Admin UI”](#use-the-django-admin-ui) Pegasus ships with a simple script to promote any user to a superuser who can access the Django admin. After going through the sign up flow, to convert your newly-created user into an admin, run the following command, being sure to replace the email address with the one you used to sign up: **Docker:** ```bash docker compose exec web python ./manage.py promote_user_to_superuser yourname@example.com ``` **Native:** ```bash python ./manage.py promote_user_to_superuser yourname@example.com ``` Now you should be able to access the Django admin at ## Migrating from pip-tools to uv [Section titled “Migrating from pip-tools to uv”](#migrating-from-pip-tools-to-uv) To migrate your project from pip-tools to uv, follow these steps. ### Install uv [Section titled “Install uv”](#install-uv) If you haven’t already, [install uv](https://docs.astral.sh/uv/getting-started/installation/): ```bash curl -LsSf https://astral.sh/uv/install.sh | sh ``` ### Update your project code [Section titled “Update your project code”](#update-your-project-code) It’s recommended to do this in two steps: 1. [Upgrade your project](/upgrading) to the latest Pegasus version, keeping your package manager as “pip-tools”.
Merge all conflicts and ensure your project is working properly on this version. 2. Then, change the package manager from pip-tools to uv in your project settings and do another upgrade/pull request. At this point you will likely have conflicts in your requirements files, but hopefully nowhere else. See the next sections for resolving these. ### Prepare to resolve conflicts [Section titled “Prepare to resolve conflicts”](#prepare-to-resolve-conflicts) First, follow the github instructions to merge your project on your local machine, by checking out the pegasus upgrade branch and merging the main branch into it. You will have to update the command below with the exact branch name of the pull request created by Pegasus: ```bash git fetch origin git checkout pegasus-- git merge main ``` At this point you’ll have a partially merged branch with conflicts. ### Migrate your requirements.in files [Section titled “Migrate your requirements.in files”](#migrate-your-requirementsin-files) The uv build of Pegasus no longer uses requirements files, so any changes you’ve made to these will need to be migrated to `pyproject.toml` and `uv.lock`. You can use the [reqs-sync](https://github.com/saaspegasus/reqs-sync/) package to help with this. Follow the steps below for any file with conflicts. To migrate your main *requirements.in* file: ```bash uv tool run reqs-sync reqs-to-toml requirements/requirements.in ``` To migrate your development *dev-requirements.in* file: ```bash uv tool run reqs-sync reqs-to-toml requirements/dev-requirements.in --group=dev ``` To migrate your production *prod-requirements.in* file: ```bash uv tool run reqs-sync reqs-to-toml requirements/prod-requirements.in --group=prod ``` These commands should copy all project requirements from your `requirements.in` file(s) to your `pyproject.toml` file (into the appropriate group, if necessary). ### Update your uv.lock file [Section titled “Update your uv.lock file”](#update-your-uvlock-file) Next you should rebuild your `uv.lock` file from the updated `pyproject.toml` file: ```bash uv lock ``` You should then check the versions that were added to the `uv.lock` file and update any as needed based on the versions in your requirements.txt files. ### Test the migration [Section titled “Test the migration”](#test-the-migration) Run your project (`uv run python manage.py runserver`) and verify everything works as expected. ### Remove your requirements files [Section titled “Remove your requirements files”](#remove-your-requirements-files) Finally, run: ```bash git rm requirements/* ``` To remove all your requirements files. Congratulations, you’ve migrated to uv! Resolve any other conflicts, push and merge your code, and you’re done! ## Migrating to auto-formatted code [Section titled “Migrating to auto-formatted code”](#migrating-to-auto-formatted-code) As of February, 2023 all Pegasus projects have the option to auto-format your Python code. To migrate a project from non-formatted to formatted code, you can go through the following steps: 1. First, do a full Pegasus upgrade to the version you want to update to, as described [here](/upgrading). **Do *not* check the “autoformat” checkbox yet.** 2. Next, run the formatting tools on your project’s `main` branch: 1. Install ruff: `pip install ruff` 2. Run ruff linting: `ruff check --extend-exclude migrations --line-length 120 . --fix` 3. Run ruff formatting: `ruff format --line-length 120 .` 3. Commit the result: 1. `git add .` 2. `git commit -m "apply formatting changes"` 4. 
Finally, check the “autoformat” box on your Pegasus project, and do *another* upgrade according to the same process. ## Delete Pegasus Examples [Section titled “Delete Pegasus Examples”](#delete-pegasus-examples) You can remove the Pegasus examples by unchecking the “Include Examples” checkbox on your project page and re-downloading (/or [upgrading](/upgrading)) your codebase. For earlier versions you can use [these instructions](https://github.com/saaspegasus/pegasus-docs/blob/1becc2cb8f86738eeba85c9faddb15f69b8ad7bc/cookbooks.md#delete-pegasus-examples). # Customizations > Customize landing pages, navigation, styles, and JavaScript in your Pegasus application with popular CSS frameworks. This page outlines the basics of customizing Pegasus to meet your application’s needs. ## Personalize your landing page [Section titled “Personalize your landing page”](#personalize-your-landing-page) Pegasus ships with a simple landing page that varies based on your selected CSS framework. Most projects will want to highly customize the landing page from what comes out of the box. Unless you are planning on building a marketing site on a different platform, this is likely one of the first things you’ll do. To modify the default landing page, you can edit the `./templates/web/landing_page.html` file (and any included sub-templates) and make the customizations you want. Another good option is to use a paid or open-source alternative for your marketing content. Some recommended places to get marketing templates include: * **Tailwind**: [Tailwind UI](https://tailwindui.com/), [Flowbite](https://flowbite.com/). * **Bootstrap**: [Official themes](https://themes.getbootstrap.com/), [other free recommendations](https://dev.to/bootstrap/bootstrap-5-templates-91p). * **Bootstrap (Material)**: [Material Kit Pro](https://www.creative-tim.com/product/material-kit-pro) ## Update the logged-in experience [Section titled “Update the logged-in experience”](#update-the-logged-in-experience) After you’ve tweaked your landing page, you’ll likely want to dive into the nuts and bolts that make up your app. To modify the logged-in default page, edit the `./templates/web/app_home.html` file to your liking. ### Changing the navigation [Section titled “Changing the navigation”](#changing-the-navigation) There are two levels of navigation that ship with Pegasus, the top nav and the sidebar nav. You’ll likely want to modify both. To change the top nav edit the `./templates/web/components/top_nav.html` file. To change the sidebar nav edit the `./templates/web/components/app_nav.html` file. ## Styles [Section titled “Styles”](#styles) All of Pegasus’s CSS frameworks are designed to be customized to your needs. You can set specific colors or override the themes entirely. How styles are customized depends on the CSS framework. For more information, see the individual page for your framework in [the CSS docs](/css/overview) ## Javascript [Section titled “Javascript”](#javascript) The project uses a webpack build pipeline to compile the javascript files. For more details on how it works see the [front-end documentation](/front-end/overview). # Using Docker in Development > Set up Django development environment with Docker Compose including PostgreSQL, Redis, Celery, and debugging configuration. Pegasus optionally includes support for [Docker](https://www.docker.com/) during development. The Docker development setup can also be used as a foundation for deploying to containerized platforms. 
See [our deployment page](/deployment/overview) for more details. ## Prerequisites [Section titled “Prerequisites”](#prerequisites) You need to install [Docker](https://www.docker.com/get-started) prior to setting up your environment. Mac users have reported better performance on Docker using [OrbStack](https://orbstack.dev/), which is a Docker Desktop alternative optimized for performance. Windows users may also need to install a 3rd-party package to run `make` commands. The easiest way to do that is via [these instructions](https://stackoverflow.com/a/57042516/8207). ## Getting Started [Section titled “Getting Started”](#getting-started) First set up your Pegasus project with Docker enabled and using Postgres as a database following the [getting started guide](/getting-started). ### Enter the project directory [Section titled “Enter the project directory”](#enter-the-project-directory) ```bash cd {{ project_name }} ``` ### Run the initialization script [Section titled “Run the initialization script”](#run-the-initialization-script) ```bash make init ``` This will spin up a database, web worker, celery worker, and Redis broker and create and run your database migrations. Note: users of older versions of Windows may [need to install “make” separately to use it](https://stackoverflow.com/questions/32127524/how-to-install-and-use-make-in-windows). Alternatively, you can just inspect the `Makefile` in the repository and run the commands manually (e.g. `docker compose up -d`). ### Load server [Section titled “Load server”](#load-server) Visit in a browser and you should be up and running! ## Using the Makefile [Section titled “Using the Makefile”](#using-the-makefile) Pegasus ships with a self-documenting `Makefile` that will run common commands for you, including starting your containers, performing database operations, and building your front end. You can run `make` to list helper functions, and you can view the source of the `Makefile` file in case you need to add to it or run any once-off commands. Most of the commands you might need to run in your project will involve running something like: ```bash docker compose exec ``` The `Makefile` has many example of these you can refer to if you need to run a specific command against a specific container. ## Architecture and how it works [Section titled “Architecture and how it works”](#architecture-and-how-it-works) ### Containers [Section titled “Containers”](#containers) The Docker configuration is primarily in `docker-compose.yml`. Depending on your project settings, there are several containers that might be running. These are outlined in the table below: | Container Name | Purpose | Included | | -------------- | ------------------------------------- | ----------------------------------------------------------------------- | | `pg` | Runs Postgres (primary Database) | Always | | `redis` | Runs Redis (Cache and Celery Broker) | Always | | `web` | Runs Django | Always | | `vite` | Runs Vite (for CSS/JavaScript assets) | If [building with Vite](/front-end/vite) | | `celery` | Runs Celery (for background tasks) | If [Celery is enabled](/celery) | | `frontend` | Runs the React Front End | If [the standalone front end is enabled](/experimental/react-front-end) | ### Settings [Section titled “Settings”](#settings) The docker environment sets environment variables using the included `.env` file. The `.env` file is automatically ignored by git, so you can put any additional secrets there. It generally should not be checked into source control. 
You can instead add variables to `.env.example` to show what should be included. ### Python environments [Section titled “Python environments”](#python-environments) The Python environment is run in the containers, which means you do not need to have your own local environment if you are always using Docker for development. Python requirements are automatically installed when the container builds. However, keep in mind that if you go this route, you will need to run all commands inside the containers as per the instructions below. ## Running once-off management commands [Section titled “Running once-off management commands”](#running-once-off-management-commands) Running commands on the server can be done using `docker compose`, by following the pattern used in the `Makefile`. For example, to bootstrap Stripe subscriptions, run: ```bash docker compose exec web python manage.py bootstrap_subscriptions ``` Or to promote a user to superuser, run: ```bash docker compose exec web python manage.py promote_user_to_superuser me@example.com ``` You can also use the `make manage` command, passing in `ARGS` like so: ```bash make manage ARGS='promote_user_to_superuser me@example.com' ``` You can add any commonly used commands you want to `custom.mk` for convenience. ## Updating Python packages [Section titled “Updating Python packages”](#updating-python-packages) If you add or modify anything in your `requirements.in` (and `requirements.txt`) files, you will have to rebuild your containers. The easiest way to add new packages is to add them to `requirements.in` and then run: ```bash make requirements ``` Which will rebuild your `requirements.txt` file, rebuild your Docker containers, and then restart your app with the latest dependencies. ## Debugging [Section titled “Debugging”](#debugging) You can use debug tools like `pdb` or `ipdb` by enabling service ports. This can be done by running your web container with the following: ```bash docker compose run --service-ports web ``` If you want to set up debugging with PyCharm, it’s recommended to follow [this guide on the topic](https://testdriven.io/blog/django-debugging-pycharm/). ## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) ### ”No such file or directory” errors [Section titled “”No such file or directory” errors”](#no-such-file-or-directory-errors) Some environments---especially on Windows---can have trouble finding the files on your local machine. This will often show up as an error like this when starting your app: ```plaintext python: can't open file '/code/manage.py': [Errno 2] No such file or directory ``` These issues are usually related to your *disk setup*. For example, mounting your code on a remote filesystem or external drive to your machine. To fix, try running the code on the same drive where Docker Desktop is installed, or on your machine’s default “C:” drive. You can also get around this issue by running your application natively, instead of with Docker. 
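One more development tip related to the debugging setup described earlier in this section: once the `web` container is started with service ports enabled (`docker compose run --service-ports web`), a standard Python breakpoint anywhere in your code will pause the request and give you an interactive debugger in that terminal. A minimal sketch; the view below is hypothetical and only included for illustration:

```python
# A throwaway view used only to illustrate dropping into the debugger.
from django.http import HttpResponse


def debug_example(request):
    breakpoint()  # pauses here; interact via the terminal attached to the web container
    return HttpResponse("ok")
```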
## Other Resources [Section titled “Other Resources”](#other-resources) * [Dockerizing Django with Postgres, Gunicorn, and Nginx](https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/) provides an overview of the setup, and has additional information about using Docker in production * [Environment variables in Compose](https://docs.docker.com/compose/environment-variables/) is a good resource on the different ways to work with environment variables in Docker # Feature Flags > Implement feature flags with Django Waffle to control feature rollouts, A/B testing, and user-specific or team-based feature access. [Waffle](https://waffle.readthedocs.io/en/stable/) is the top library for managing feature flags in Django. Pegasus includes configuration for using Waffle with or without teams. If you are using [Teams](/teams) then the Waffle flags can be turned on based on the user or the team. If you are not using Teams then flags only apply to users. ## Usage [Section titled “Usage”](#usage) Waffle can be used to turn on and off features. For example: ```python import waffle def my_view(request): if waffle.flag_is_active(request, 'flag_name'): """Behavior if flag is active.""" else: """Behavior if flag is inactive.""" ``` The flags themselves are managed via the Django Admin site where each flag can be activated for specific users or teams, or based on certain conditions such as *superuser* status. Flags can also be managed via the command line. For full details on configuring flags see the [Flag Attributes](https://waffle.readthedocs.io/en/stable/types/flag.html#flag-attributes) of the Waffle docs. Flags may be used in views, templates, JavaScript and more. For full details see the [Waffle docs](https://waffle.readthedocs.io/en/stable/usage/index.html) ## Usage with *Teams* [Section titled “Usage with Teams”](#usage-with-teams) If you are using [Teams](/teams), Pegasus ships with a [custom flag model](https://waffle.readthedocs.io/en/stable/types/flag.html#custom-flag-models) which allows you to activate flags on a per-team basis in addition to the other default options. ## Example usage [Section titled “Example usage”](#example-usage) To see flags in actions look at the “Flags” example in the Pegasus Example Gallery. The flag in the example is configured in [test mode](https://waffle.readthedocs.io/en/stable/testing/user.html) which allows us to activate the flag with a URL parameter. # Forms > Render Django forms with CSS framework integration, dynamic Alpine.js functionality, and custom template tags for better UX. Pegasus ships with some extensions to Django forms to integrate with different CSS frameworks and add some extensions. ## The `form_tags` module [Section titled “The form\_tags module”](#the-form_tags-module) You can use default Django form rendering for forms, but if you want all the built-in style support, you should instead use the utilities in the `form_tags` module. To use it, first include `form_tags` in any Django template file: ```jinja {% load form_tags %} ``` Then, you can render a form using the `render_form_fields` template tag. Here is a basic example: ```jinja {% csrf_token %} {{ form.non_field_errors }} {% render_form_fields form %}
``` You can also render individual fields using `render_field`: ```jinja
<form method="post"> {% csrf_token %} {{ form.non_field_errors }} {% render_field form.username %} {% render_field form.password %} <button type="submit">Submit</button> </form>
``` ## Dynamic forms with Alpine.js [Section titled “Dynamic forms with Alpine.js”](#dynamic-forms-with-alpinejs) *Added in version 2023.6* The form rendering helpers also support adding attributes, which can be useful to add Alpine.js to make a form more dynamic. For example, you can bind a form value to an alpine model by passing it in `attrs` like this: ```python class ExampleFormAlpine(forms.Form): YES_NO_OTHER = ( ("yes", gettext("Yes")), ("no", gettext("No")), ("other", gettext("Other")), ) like_django = forms.ChoiceField( label=gettext("Do you like Django?"), choices=YES_NO_OTHER, widget=forms.Select(attrs={"x-model": "likeDjango"}), # this line will bind the value to an alpine model ) ``` Then in the HTML template you have to add an alpine model to the form: ```jinja
<form method="post" x-data="{likeDjango: 'yes'}"> {% render_field form.like_django %} </form> ``` The `render_field` tags support two special syntaxes to make using alpine easier: 1. Any attribute starting with `x` will be automatically converted to `x-`. 2. Double underscores (`__`) will be replaced with colons (`:`). The following example alpine form and template (which also ship with Pegasus, available at ) demonstrate this usage, including hiding/showing a field based on the value of another field, rendering field values in labels, and changing the style of a field based on its value. Django form class: ```python class ExampleFormAlpine(forms.Form): YES_NO_OTHER = ( ("yes", gettext("Yes")), ("no", gettext("No")), ("other", gettext("Other")), ) STYLES = ( ("regular", gettext("Normal")), ("success", gettext("Success")), ("danger", gettext("Danger")), ) like_django = forms.ChoiceField( label=gettext("Do you like Django?"), help_text=gettext("Try choosing 'other' to see unhiding a form field based on a value."), choices=YES_NO_OTHER, widget=forms.Select(attrs={"x-model": "likeDjango"}), ) like_django_other = forms.CharField(label=gettext("Please specify more details about your answer.")) styled_options = forms.ChoiceField( label=gettext("Styled Options"), help_text=gettext("Try picking an option to see how you can style a component based on its value."), choices=STYLES, widget=forms.Select(attrs={"x-model": "styleValue"}), ) ``` Django template: ```jinja {# the x-data initial values below mirror the first choice of each field #} <form method="post" x-data="{likeDjango: 'yes', styleValue: 'regular'}"> {% csrf_token %} {% render_field form.like_django %} {% render_field form.like_django_other xshow="likeDjango === 'other'" xcloak='True' %} {% render_field form.styled_options xbind__class="'pg-bg-' + styleValue" %}

<p>You can also use alpine to display selected values. Like Django: <span x-text="likeDjango"></span>, Style: <span x-text="styleValue"></span></p> </form>

``` # GitHub Integration > Integrate projects with GitHub using OAuth or personal access tokens for automated updates, pull requests, and version control. As of February, 2024 you can connect your Pegasus projects directly to GitHub instead of downloading them as a zip file. This makes for a more streamlined workflow---especially when changing or upgrading your project. ## Watch the video [Section titled “Watch the video”](#watch-the-video) The following video shows how to create and update a project using the Github integration. ## Connecting your account [Section titled “Connecting your account”](#connecting-your-account) There are two ways to connect your Github account. The Oauth-based “Connect Github” option is easier and more reliable, while personal access token option allows you to control exactly what repositories Pegasus can access. ### Using “Connect Github” (Oauth) [Section titled “Using “Connect Github” (Oauth)”](#using-connect-github-oauth) The easiest way to connect your account is by using the “Connect Github” button on the project download page. You will be prompted to accept permissions, and your account will be connected in a few seconds. Note: While you will be prompted to grant access to “all private repository data,” Pegasus does not view or modify data in any repositories unless you connect them. ### Using Personal Access Tokens [Section titled “Using Personal Access Tokens”](#using-personal-access-tokens) If you prefer not to grant Pegasus access to your entire Github account, you can use [Personal Access Tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) to limit the scope of what Pegasus has access to. #### With Classic Tokens [Section titled “With Classic Tokens”](#with-classic-tokens) To use Pegasus with a classic token, visit the [Personal access tokens](https://github.com/settings/tokens) page on Github, then select “Generate new token (classic)” from the dropdown, or [visit this page](https://github.com/settings/tokens/new). Choose a note and expiration date for your token and grant the following scopes: * user:email (Access user email addresses (read-only)) * repo (Full control of private repositories) * workflow (Update GitHub Action workflows) Then click “Generate token”. You will be taken to a page where your token is displayed. Copy this value and paste it into the “personal access token” field from your project download page on Pegasus. Note that you won’t be able to view the token again! #### With Fine-Grained Access Tokens [Section titled “With Fine-Grained Access Tokens”](#with-fine-grained-access-tokens) If you want the most control over your permissions, you should use a fine-grained access token, which allow you to control access to specific repositories. Note that if you use fine-grained tokens **you must create the repository for your project before creating the token**. Pegasus cannot create the project for you with these tokens. After creating the repository, [create a new fine-grained-token from this page](https://github.com/settings/personal-access-tokens/new). Set a token name and expiration date, and then use “Only select repositories” to choose the repositories you want to grant access to (the one you just created). 
Under “Permissions” —> “Account Permissions” you must grant *read* access to: * Email addresses Then under “Permissions” —> “Repository Permissions” you must grant **read and write** access to: * Contents * Pull Requests * Workflows Then click “Generate token”. You will be taken to a page where your token is displayed. Copy this value and paste it into the “personal access token” field from your project download page on Pegasus. Note that you won’t be able to view the token again! ## Connecting an existing project to Github [Section titled “Connecting an existing project to Github”](#connecting-an-existing-project-to-github) Projects that were created before February 2024, or that didn’t use the Github integration can still be connected to Github via a one-time process. After completing this, you will be able to upgrade and change your Pegasus project using automatic pull requests. First, you’ll have to connect your Github account using one of the methods described above. Next, you will need to find the commit id of the last Pegasus update you have made. If you have never updated your codebase, this will be the first commit in the repository, which you can find by running `git log --reverse`. If you have updated your codebase using one of the other methods below, this will be the last commit on the `pegasus` branch of your repository, which you can find by running `git checkout pegasus` followed by `git log`. Once you have the commit id ready, add your existing Github repository to your Pegasus project from the downloads page. After completing this step you will be prompted with a page that looks like this: ![Set Commit](/_astro/set-commit.Bh21vQoQ_mJB17.webp) Enter the commit ID there, and you should now be able to update your project with pull requests. ## Working with repositories owned by an organization [Section titled “Working with repositories owned by an organization”](#working-with-repositories-owned-by-an-organization) Github organizations do not allow API-based repository access by default, so to connect a repository owned by an organization you will also have to grant programmatic access. Github provides detailed guidance on how to do this. For “Connect Github,” follow the [oauth instructions](https://docs.github.com/en/organizations/managing-oauth-access-to-your-organizations-data), and for personal access tokens, follow the [personal access token instructions](https://docs.github.com/en/organizations/managing-programmatic-access-to-your-organization/setting-a-personal-access-token-policy-for-your-organization). ## Pushing Pegasus code to a subdirectory in your repository [Section titled “Pushing Pegasus code to a subdirectory in your repository”](#pushing-pegasus-code-to-a-subdirectory-in-your-repository) By default, your entire git repository is dedicated to Pegasus, with all of Pegasus’s files included at the root of the repository. Some projects---especially those with a separate front end---may want to instead include Pegasus code in a subdirectory of the repository (e.g. “backend”), so that other projects (e.g. “frontend”) can be included in the same repository. It is possible to configure your Github integration this way. To do so, when adding the repository, click “Show Advanced Options,” then specify the subdirectory you want to use for your Pegasus code in the “Subdirectory” field. If you would like to update an existing project to use a subdirectory, you’ll have unlink and re-add your repository, then [reconnect it](#connecting-an-existing-project-to-github). 
## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) **I keep getting “Error pushing to GitHub. Please check your token scopes.” when pushing my project.** While Pegasus does its best to catch errors that come from Github and show them to you, sometimes it will return this generic error. One common reason a valid token is unable to push code is related to email privacy settings. Specifically the “Blocking command line pushes that expose your personal email address” setting---which currently must be *disabled* in order to use the Github integration. To check and disable this setting: 1. Go to your [Github email settings](https://github.com/settings/emails) 2. Scroll down to where it says “Keep my email addresses private”. 3. If that option is checked, ensure that the “Block command line pushes that expose my email” option below it is *not* checked. 4. If that option is *not* checked, then it is a different problem. You are welcome to reach out directly for support # Using Github Actions > Automate Django testing and front-end builds with GitHub Actions CI/CD workflows for continuous integration. [GitHub Actions](https://github.com/features/actions) allows you to automate your software workflows. Pegasus apps optionally ship with Github actions support for a few things to build off. If you’ve built with Github actions support, they should successfully run the first time you push your code to Github. Actions are configured in the `.github` directory in your project. The following actions ship with Pegasus: ## Running Django Tests [Section titled “Running Django Tests”](#running-django-tests) The Django tests are configured in `.github/tests.yml`. By default, it will: * Run on every push to the `main` branch and every pull request. * Run on Python version 3.12 (other Python versions can be added by modifying the `python-version` list) * Use the latest version of Postgres * Run `./manage.py test` All of these can be changed by modifying the relevant sections of the `.github/tests.yml` file. ## Building the Front End [Section titled “Building the Front End”](#building-the-front-end) The front end build is configured in `.github/build_frontend.yml`. By default, it will: * Run on every push to the `main` branch and every pull request. * Run on Node version 22 (other Node versions can be added by modifying the `node-version` list). * Run `npm run build`, ensuring your front end builds properly. * Run `npm run type-check`, ensuring all type checks pass. Any compilation errors in your JavaScript should show up as build failures. # Internationalization > Add multi-language support and timezone handling to Django applications with translation files, locale management, and user preferences. Pegasus supports internationalization via built-in support for timezones and language translations. To enable timezone and multi-language support, you must select the “use internationalization” option in your project settings. ## Translation Demo [Section titled “Translation Demo”](#translation-demo) This two-minute demo highlights how translations work in Pegasus apps. ## Localization [Section titled “Localization”](#localization) Pegasus ships with full support for localizing user-facing text. Currently, not all the user-facing text is properly tagged for localization but this will be incrementally addressed in future releases. For full documentation on localization see the [Django docs](https://docs.djangoproject.com/en/stable/topics/i18n/). 
## Big picture [Section titled “Big picture”](#big-picture) Big picture there are two steps to translation: 1. **Define the text you want to translate (in Python, HTML, or JavaScript)**. This step happens in your project’s code. 2. **Add a translation for that text to other languages**. This step happens in your project’s translation files, which can be found in the `locale//LC_MESSAGES/` folders (there will be one for each language). ## Managing enabled languages [Section titled “Managing enabled languages”](#managing-enabled-languages) There are two steps to updating the list of languages that will be available on your site. The first step is to define it in `settings.LANGUAGES`. Out of the box this will be English and French: ```python from django.utils.translation import gettext_lazy LANGUAGES = [ ('en', gettext_lazy('English')), ('fr', gettext_lazy('French')), # add other languages here ] ``` The second step is to create the translations folder for the language. This can be done by running: ```bash python ./manage.py makemessages -l [new lang code] --ignore node_modules --ignore venv ``` Or in Docker: ```bash docker compose exec web python manage.py makemessages -l [new lang code] --ignore node_modules --ignore venv ``` ## Marking text in your app for translation [Section titled “Marking text in your app for translation”](#marking-text-in-your-app-for-translation) All text you want to be translatable must be tagged in your application. This can be done as follows: **In Python:** ```python from django.utils.translation import gettext def my_view(request): output = gettext("Welcome to my site.") return HttpResponse(output) ``` See the [Django docs](https://docs.djangoproject.com/en/4.0/topics/i18n/translation/#internationalization-in-python-code) for more. **In Django templates:** ```jinja {% load i18n %} {% translate "This is the title." %} ``` See the [Django docs](https://docs.djangoproject.com/en/4.0/topics/i18n/translation/#internationalization-in-template-code) for more. **In JavaScript:** ```javascript document.write(gettext('this is to be translated')); ``` See the [Django docs](https://docs.djangoproject.com/en/4.0/topics/i18n/translation/#internationalization-in-javascript-code) for more. **In Wagtail:** See the [Wagtail docs](/wagtail/#internationalization). ## Creating / updating translation files [Section titled “Creating / updating translation files”](#creating--updating-translation-files) After you’ve marked text for translation, you’ll need to update your language files. This can be done by running: ```bash python ./manage.py makemessages --all --ignore node_modules --ignore venv python ./manage.py makemessages -d djangojs --all --ignore node_modules --ignore venv ``` Or in Docker: ```bash make translations ``` Note: if you get any errors you may need to [install gettext](https://stackoverflow.com/q/35101850/8207). ## Adding actual translations for other languages [Section titled “Adding actual translations for other languages”](#adding-actual-translations-for-other-languages) To add a translation for another language you need to edit that languages messages (.po) file. For example, to edit a French translation, you would update `locale/fr/LC_MESSAGES/django.po`. Then search for the text you want to translate, and add the French translation: ```plaintext msgid "My Team" msgstr "Mon Équipe" ``` The above lines will replace “My Team” with “Mon Équipe” whenever the French language is configured. 
After editing any message (.po) file, you will have to compile the messages for the updates to show up in your app. This can be done by: ```bash python ./manage.py compilemessages ``` Or in Docker: ```bash make translations ``` ## Technical notes [Section titled “Technical notes”](#technical-notes) Pegasus is configured to use cookies to track the current locale. This allows localization to work for both authenticated and unauthenticated users. More information on this approach is available in the Django docs: [How Django discovers language preference](https://docs.djangoproject.com/en/4.2/topics/i18n/translation/#how-django-discovers-language-preference) ## Timezones [Section titled “Timezones”](#timezones) Pegasus includes support for users setting their own time zones via their profile (version 2023.7 and later). When a user sets a timezone, it will be automatically activated by the `UserTimezoneMiddleware` so that by default all dates and times will appear in their local time. For more information on working with timezones in Django, see [Django’s timezone documentation](https://docs.djangoproject.com/en/4.2/topics/i18n/timezones/). # Project/Page Metadata and SEO > Configure SEO metadata, page titles, social sharing tags, and XML sitemaps for better search engine optimization and discoverability. Pegasus comes with some built-in tools and best-practices for setting page-level metadata (e.g. title, image URL, etc.). ## The `PROJECT_METADATA` setting [Section titled “The PROJECT\_METADATA setting”](#the-project_metadata-setting) Your Pegasus project will ship with a `settings.py` variable called `PROJECT_METADATA` with the following values: ```python PROJECT_METADATA = { 'NAME': '', 'URL': '', 'DESCRIPTION': '', 'IMAGE': 'https://upload.wikimedia.org/wikipedia/commons/2/20/PEO-pegasus_black.svg', 'KEYWORDS': 'SaaS, django', 'CONTACT_EMAIL': '', } ``` This information will be available in every view under the variable name `project_meta`. Out of the box, the values are used in a number of places, though they can be overridden/modified at the view level. ## Page Titles [Section titled “Page Titles”](#page-titles) The default title for your pages will be your project name and description from `PROJECT_METADATA`. If you want to add a custom page title, you can pass a `page_title` context variable to the template. For example: ```python def my_new_view(request): return render(request, 'a/template.html', {'page_title': 'My New Page'}) ``` Pegasus will then set your title to be `My New Page | `. If you’d like to change the way the title is formatted (e.g. remove the project name), you can change that behavior in `web.templatetags.meta_tags.get_title`. In Pegasus versions after 2022.4 you can also override the title directly in a template by overriding the `page_title` block. For example: ```jinja {% block page_title %}This title will be used instead of the Pegasus versions{% endblock %} ``` ## Sitemaps [Section titled “Sitemaps”](#sitemaps) As of version 2022.6, Pegasus will automatically generate a basic [sitemap](https://developers.google.com/search/docs/advanced/sitemaps/overview) for your site at `sitemap.xml`. Out of the box, the sitemap will only contain your application’s homepage, but can be readily extended by adding URLs in `apps/web/sitemaps.py`. If you have [enabled Wagtail](/wagtail), your sitemap will also include any content managed by Wagtail. Make sure you [properly set the hostname in your Wagtail site](https://docs.wagtail.org/en/stable/reference/contrib/sitemaps.html#setting-the-hostname).
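If you want to add more static pages to the sitemap, a sketch of what an addition to `apps/web/sitemaps.py` might look like is below. The class and the `"about"`/`"pricing"` URL names are illustrative assumptions, not code that ships with Pegasus:

```python
# Illustrative addition to apps/web/sitemaps.py; adjust URL names to your project.
from django.contrib.sitemaps import Sitemap
from django.urls import reverse


class StaticPagesSitemap(Sitemap):
    changefreq = "monthly"
    priority = 0.5

    def items(self):
        # URL names passed to location() below; these names are hypothetical examples.
        return ["about", "pricing"]

    def location(self, item):
        return reverse(item)
```

You would then register this class in the `sitemaps` dictionary that your sitemap URL configuration passes to Django's sitemap view.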
# E-Commerce / Payments > Build digital storefronts with Stripe integration for one-time and recurring payments, product management, and purchase tracking. Pegasus (version 2023.9.1 and up) includes an out-of-the-box E-Commerce/Payments demo. In a few clicks you can have a fully functional digital storefront in your application, allowing you to collect and track one-time or recurring payments with Stripe. ## Watch a video [Section titled “Watch a video”](#watch-a-video) To see how this feature works, you can watch the following video: ## Getting Started [Section titled “Getting Started”](#getting-started) ### Set up Stripe Products [Section titled “Set up Stripe Products”](#set-up-stripe-products) First add your products in the Stripe dashboard. Be sure to add readable product names, descriptions, and images, as these will be used for the in-app store. Additionally, make sure each product includes at least one Price. ### Set up your development environment [Section titled “Set up your development environment”](#set-up-your-development-environment) Setting up your development environment is similar to the [process for subscriptions](/subscriptions), but has fewer steps. 1. If you haven’t already, update the `STRIPE_*` variables in `settings.py` or in your OS environment variables to match the keys from Stripe. See [this page](https://stripe.com/docs/keys) to find your API keys. 2. Run `python manage.py bootstrap_ecommerce` to sync your Stripe products and prices to your local database. Once you’ve done this, log in and click on the e-commerce tab in the navigation, and you should see your store. ## Data models [Section titled “Data models”](#data-models) ### `ProductConfiguration` [Section titled “ProductConfiguration”](#productconfiguration) What shows up in your store is controlled by the `ProductConfiguration` data model. You can manage these objects from the Django admin (available at locally). For example, to remove a product from the store you can uncheck “is active”. The `ProductConfiguration` model is also a good place to add additional information to your products. For example, you can add additional display data there, or add a `FileField` if you want purchases to grant access to a digital download. ### `Purchase` [Section titled “Purchase”](#purchase) The `Purchase` model is used to record user purchases. A `Purchase` is associated with a `User` and a `ProductConfiguration` and also has details of the Stripe checkout session, date of purchase, and product/price used at the time of purchase. ## Feature gating [Section titled “Feature gating”](#feature-gating) The `@product_required` decorator can be used to restrict access to a view based on whether or not the logged-in user has purchased a particular product. This decorator expects a `product_slug` field in the URL / view with the slug of the `ProductConfiguration` object to be checked. If the user owns the product, they will be granted access to the view. Additionally, if the user gets access, two additional fields will be populated on the `request` object: * `request.product_config` will have the `ProductConfiguration` object. * `request.product_purchase` will have the `Purchase` object. If the user does *not* have access to the product, the decorator will redirect them back to the store homepage. ## Webhooks [Section titled “Webhooks”](#webhooks) Like subscriptions, it’s recommended to use webhooks to ensure you receive all updates from Stripe. For the e-commerce store, the only required webhook is `checkout.session.completed`.
Follow [the subscriptions documentation](/subscriptions/#webhooks) to set up webhooks in development and production. # Version History and Release Notes > Complete changelog and version history for SaaS Pegasus Django boilerplate with detailed feature updates and migration guides. Releases of [SaaS Pegasus: The Django SaaS Boilerplate](https://www.saaspegasus.com/) are documented here. ## Version 2025.8 [Section titled “Version 2025.8”](#version-20258) This is a maintenance release which improves Docker-based development, upgrades dependencies and addresses a number of minor issues. ### Changed [Section titled “Changed”](#changed) * **Upgraded all Python packages to their latest versions.** * **Upgraded all JavaScript packages to their latest versions.** * **Changed how CSS files are built and imported in vite builds. This fixes the flash of unstyled content when running Vite in development.** * Removed the redundant `site-.js` files and instead added the imported CSS files directly as entry points to `vite.config.ts`. * Updated `base.html` to use `vite_asset_url` instead of `vite_asset` for CSS files. * **Updated development Docker setup to always use a separate container for Node / NPM.** This removes all node/npm logic from `Dockerfile.dev` and uses either `Dockerfile.vite` or `Dockerfile.webpack` for the front end. * Also updated the `Makefile` to reference this new container where necessary. * Changed `sentry-sdk` to `sentry-sdk[django]` and pinned the version. Thanks Ralph for suggesting! * Changed how email confirmation works when updating an email address to be more aligned with allauth best practices. * Changed the typescript module resolution strategy to “bundler”, which aligns better with how Vite resolves modules in the project. * Added `.claude/settings.local.json` to `.gitignore`. * Updated the behavior of the subscription page for team non-admins so that it shows a useful message telling them they aren’t allowed to manage subscriptions for their team, instead of returning a generic 404. Thanks Haydn for the suggestion! * `./manage.py bootstrap_subscriptions` will now use Stripe’s “marketing features” property of Products to generate the relevant configuration in Pegasus. Thanks Zac for suggesting! * `./manage.py bootstrap_subscriptions` will now only use products that have recurring pricing set when generating the Pegasus configuration. * The `build-api-client` make target will now delete unused files and set correct file permissions on the generated code. Thanks Finbar for the contribution! ### Fixed [Section titled “Fixed”](#fixed) * **Improved the Python environment setup in `Dockerfile.dev` to be much more performant. This should make Docker container rebuilds after adding/changing Python dependencies much faster.** * Python environments and packages are now created and installed as the django user to avoid expensive chown calls. Thanks Jacob and Mark for the suggestion! * Uv now uses Docker’s cache system consistently so that dependencies are cached by Docker across builds. * Added a `require_POST` decorator to `create_api_key` view so it doesn’t work with GET requests. Thanks Brennon for reporting! * Fixed a bug where subscriptions tests failed due to a missing `dateutil` dependency under certain build configurations. Thanks Jacob for reporting! * Fixed styling of allauth’s “email change” template, which is used if you set `ACCOUNT_CHANGE_EMAIL = True`. Thanks Finbar for the report and fix! 
* Fixed a bug where `./manage.py bootstrap_subscriptions` and `./manage.py bootstrap_ecommerce` sometimes had to be run twice to sync all products and prices to a new installation. Thanks Zac for reporting! * Updated stripe API imports to remove warnings about deprecated `stripe.api_resources` packages. Thanks Cristian for reporting! *August 1, 2025* ## Version 2025.6.2 [Section titled “Version 2025.6.2”](#version-202562) This hotfix release addresses two minor issues in the 2025.6 release: * Remove breaking reference to `.babelrc` in `Dockerfile.web` on Vite builds. This was causing deployments to fail on some Docker-based platforms. * Always add `gevent` dependency to production requirements if using celery. This fixes an issue running celery in production on certain deployment platforms. Thanks Justin and Eugene for the bug reports! *June 25, 2025* ## Version 2025.6.1 [Section titled “Version 2025.6.1”](#version-202561) This is a hotfix release which addresses two minor issues: * Fix `make npm-install` and `make npm-uninstall` commands when using vite as a bundler. Thanks Matt for reporting! * Fix broken dark mode behavior on Tailwind when attempting to disable it. Thanks Wik for the report and fix! *June 23, 2025* ## Version 2025.6 [Section titled “Version 2025.6”](#version-20256) This release hardens the production Celery set up, expands AI-development tooling, improves production support for the standalone React front end, and extends the ecommerce application. Read on for details! ### Celery improvements [Section titled “Celery improvements”](#celery-improvements) * Celery periodic tasks can now be configured via `settings.SCHEDULED_TASKS` and synchronized with a new management command (`./manage.py bootstrap_celery_tasks`). The previous migration files that created celery periodic tasks have been removed. * The Celery gunicorn worker pool changed from the default of ‘prefork’ to ‘gevent’ in production, and the concurrency was increased. This should be a more scalable setup for most projects, though may need to be changed for projects that are heavily CPU-bound. * Because of the above change, a separate worker for Celery Beat has been added to all production deploy environments (because beat can’t be run with the gevent pool). * Updated the [Celery documentation](/celery) to reflect these changes. ### AI-Coding improvements [Section titled “AI-Coding improvements”](#ai-coding-improvements) * **Added an optional Claude Code Github workflow**. When enabled, you can mention @claude on a Github pull request or issue to trigger a Claude Code update. Learn more [in the docs here](/ai/development/#the-github-workflow-file). * **Added optional support for JetBrains / PyCharm Junie AI rules files.** [Docs](/ai/development/#working-with-junie) * Edited and expanded the AI rules files based on various user feedback (thanks to many who have contributed to this). ### Standalone front end improvements [Section titled “Standalone front end improvements”](#standalone-front-end-improvements) These updates affect the [standalone React front end](/experimental/react-front-end). * Updated the front end CSS to build the files directly in the front end (and import relevant files from the Django app in `index.css`), rather than including the built Django CSS files directly. * Some required Tailwind CSS files in the `assets` directory will be included if you use the standalone front end even if you build for a different framework. 
* Added tailwindcss, the typography plugin, and daisyui as explicit dependencies (and plugins) to the front end to enable the above change. * Upgraded all JavaScript dependencies in the front end. * Removed unnecessary default styles from `index.css`. * Updated front end to use aliases for the “assets” directory. Also updated `tsconfig.json` to handle this. * Updated `vite.config.ts` to fix various build issues if the parent `node_modules` isn’t available. * Fixed the default values of `FRONTEND_ADDRESS` and related values in `settings.py` and `.env` files to point to “” (instead of port 5173). * Added `CSRF_COOKIE_DOMAIN`, `CORS_ALLOWED_ORIGINS`, and `SESSION_COOKIE_DOMAIN` to `settings.py` using environment variables. These must be customized when deploying the standalone front end. * Updated Kamal’s `deploy.yml` to include default values for the above settings. * **Added initial documentation on [deploying the standalone front end to production](/experimental/react-front-end/#deployment).** ### Other updates [Section titled “Other updates”](#other-updates) * **Added a digital download example to the ecommerce application.** You can now associate a file with ecommerce products and only people who have purchased the product will be able to access it. * Also added tests for this workflow. * Added a private storage backend, for storing private files on S3-compatible storage backends (used by the above). * Upgraded most Python dependencies to their latest versions. * Fix `target-version` in `pyproject.toml` to match the currently recommended Python 3.12. Thanks Finbar for reporting! * Fixed a bug where group chat avatars were incorrectly styled on Tailwind builds. Added a new `pg-avatar` CSS class to handle this. * Made some updates Digital Ocean deployments: * Switched Redis to Valkey, and upgraded it to version 8. * Upgraded Postgres to version 17. * Updated the [Digital Ocean deployment docs](/deployment/digital-ocean) to reflect the latest changes. * Fixed email verification emails when `ACCOUNT_EMAIL_VERIFICATION_BY_CODE_ENABLED = True`. Thanks Justin for reporting and helping with the fix! * Removed default font-weight styling from `email_template_base.html`. * Api keys associated with inactive users will no longer pass API permission checks. Thanks Brennan for the suggestion! * Removed unused `.babelrc` file if not building with Webpack. * Automatically confirm user emails when they create accounts through the invitation acceptance workflow, since they can only get the invitation URL from the email link. ### Upgrading [Section titled “Upgrading”](#upgrading) If your project has existing migration files that create celery tasks (e.g. `/apps/subscriptions/migrations/0001_celery_tasks.py`), you should leave them in your repository to prevent issues running future migrations. The tasks themselves are unaffected, since they live in the database. *June 10, 2025* ## Version 2025.5.1 [Section titled “Version 2025.5.1”](#version-202551) This is a minor bugfix release on top of 2025.5. * Removed bad reference to Modals in `site.js`. Thanks Jacob for reporting! * Fixed Python Celery setup in `build_celery.sh` when using `uv` (Render deployments only). Thanks Jacob for reporting! * Fixed issue with the shadcn dashboard caused by a missing `{% vite_react_refresh %}` tag. Thanks Shoaib for reporting! 
*May 16, 2025* ## Version 2025.5 [Section titled “Version 2025.5”](#version-20255) This release has a few big updates: ### Use Vite instead of Webpack for building the front end [Section titled “Use Vite instead of Webpack for building the front end”](#use-vite-instead-of-webpack-for-building-the-front-end) This release adds the option to use [Vite](https://vite.dev/) as a bundler instead of Webpack. Vite is a modern build tool that adds a few key benefits over the Webpack build system: 1. It is much faster than Webpack. 2. Hot Module Replacement (HMR)---a development feature that lets code changes in your front end files automatically update without a full-page reload. 3. Code splitting---a production feature that breaks your front end files into individual bundles that encapsulate code dependencies. This leads to less redundant JavaScript and faster page loads. You can watch the video below for a walkthrough of these benefits and how they work in the new setup. You can also see the overhauled [front end documentation](/front-end/overview) and [Vite-specific guidance](/front-end/vite) for more details. ### Gitlab CI support [Section titled “Gitlab CI support”](#gitlab-ci-support) You can now run CI on Gitlab in addition to Github. Gitlab’s CI will run your tests, linting, and build / type-check your front end files. Thanks to Paolo and Simon for contributing to this feature! ### Retiring the Bootstrap Material Theme [Section titled “Retiring the Bootstrap Material Theme”](#retiring-the-bootstrap-material-theme) **The material theme for Bootstrap has been deprecated.** This means that the theme will be in maintenance-only mode, and support will eventually be dropped (probably in 6-12 months). Existing projects can continue using the theme, but new projects should not, and new Pegasus features will eventually not be developed and tested on the theme. Dropping support for this theme was a difficult decision. The main reason it was made is that several Pegasus customers have complained about the lack of documentation and support for this theme from its maintainer, Creative Tim. Additionally, their process around updating the theme has involved releasing large, poorly-documented updates which have been difficult to incorporate back into Pegasus. If you would like help migrating off this theme, you can reach out via standard support channels. ### Complete changelog [Section titled “Complete changelog”](#complete-changelog) **Changes related to Vite support** * **Added Vite as an option for your front end build system. See [the front end](/front-end/overview) and [vite-specific docs](/front-end/vite) for details.** * **`window.SiteJS` is now populated explicitly in JavaScript files (in addition to webpack’s library support, which does not work with Vite builds).** * Affected files include: `app.js` (`window.SiteJS.app`), `pegasus.js` (`window.SiteJS.pegasus`) * Imports in those files were also renamed to avoid namespace confilcts. * Updated all JavaScript files using JSX to have a `.jsx` extension. * Removed legacy Vue2 code and imports from the Vue example. * Removed unused imports shadcn components. * Removed leading tilde (”\~” character) from CSS imports in various places. * Changed CSS imports in JavaScript files from `require` to `import`. * Fixed a few small React warnings/issues in the AI chat app. * Removed no longer needed `vue-template-compiler` dependency. 
* **Updated the standalone front end to run on port 5174 to not conflict with the default vite port.** **Other Changes** * **Added “Gitlab” as an option for CI.** (Thanks Paolo and Simon!) * **Deprecated the Material Bootstrap theme.** * **Upgraded all Python packages to the latest versions, including Django 5.2.** * **Upgraded all npm packages to the latest versions.** * **Updated all `blocktranslate` tags to use the `trimmed` option for easier translation.** * Added explicit width and height to some svgs to slightly improve styling when CSS is not present. * Made minor updates to AI rules files. * Use the new `ACCOUNT_SIGNUP_FIELDS` setting to configure sign up fields and removed usages of deprecated allauth fields. * **Removed `project_settings` from the `project_meta` context processor.** This was previously only used to pass the now-deprecated `ACCOUNT_SIGNUP_PASSWORD_ENTER_TWICE` setting to sign up templates. The sign up templates now render the second password field based on the form value. ### Upgrading [Section titled “Upgrading”](#upgrading-1) For help switching from Webpack to Vite, see [the Webpack to Vite migration guide](/front-end/migrating). *May 15, 2025* ## Version 2025.4.4 [Section titled “Version 2025.4.4”](#version-202544) This is another minor release: * Stop dynamically setting user/group ID in the `Makefile` and just default to `1000`. The dynamic ID assignment was continuing to cause issues on certain MacOS environments. * Add `make build-api-client` target even when not using Docker. * Added additional guidance on Pegasus’s Django model conventions to the Python AI rules. *May 5, 2025* ## Version 2025.4.3 [Section titled “Version 2025.4.3”](#version-202543) This is another bugfix release: * Make the user/group creation more resilient in development Docker containers, which fixes a permissions issue on MacOS in certain environments. Thanks Chris for reporting! * Add `architecture.md` to cursor rules directory. *May 1, 2025* ## Version 2025.4.2 [Section titled “Version 2025.4.2”](#version-202542) This is a bugfix release that addresses a few problems in the most recent build: * Moved the new `CustomHeadlessAdapter` to `users/adapters.py` to fix an issue with it not being available if you built without teams enabled. Thanks Alex for reporting! * Remove source maps for JavaScript bundles in production. This results in substantially smaller production bundle sizes. Thanks Jan for reporting! * Automatically do a best effort to set the user/group ID used by the development docker container in the `Makefile`. Thanks Jacob for suggesting! For the source map fix, you can change the “devtool” setting in `webpack.config.js` to this: ```javascript devtool: process.env.NODE_ENV === 'production' ? false : "eval-cheap-source-map", ``` *Apr 29, 2025* ## Version 2025.4.1 [Section titled “Version 2025.4.1”](#version-202541) This is a big release with a few major updates. ### Team invitation workflow changes [Section titled “Team invitation workflow changes”](#team-invitation-workflow-changes) The workflow around new users joining teams and accepting invitations has been streamlined based on user feedback. For a summary of the changes you can watch [this walkthrough](https://youtu.be/qxr_WdQEL2g) or read below. 
Key user-facing changes: * **When a user signs up with a pending invitation they will be redirected to view it before creating their first team.** * **Accepting an invitation requires having a verified email address for the email it was sent to.** * Users can view pending invitations for any of their email addresses from the team selector dropdown. * Inviting an email address of someone who’s already in a team will show an error message that they are already part of the team. In addition, the following fixes and code updates were made: * Added an API and serializer for accessing the logged-in user’s invitations, used by the React view. * React: renamed `getInviteUrl` helper JS function to `getResendInviteUrl`. *Thanks to EJ, Geoff, Valics, Simon, Arno, and possibly others who contributed ideas and feedback on the design of these changes.* ### API authentication and Standalone front end updates [Section titled “API authentication and Standalone front end updates”](#api-authentication-and-standalone-front-end-updates) The [Standalone React Front end](/experimental/react-front-end) underwent a major overhaul. Importantly, it now uses [allauth headless](https://docs.allauth.org/en/dev/headless/index.html) instead of a custom `dj-rest-auth` and custom authentication APIs. On top of this, support for many new authentication workflows was added to the standalone front end, including email confirmation, password reset, and social authentication. The standalone front end---which is still in experimental mode---is now close-to-parity with the Django authentication system. Details: * **Enabled and configured [allauth headless](https://docs.allauth.org/en/dev/headless/index.html)** (if authentication APIs are enabled or using the standalone front end). * **Removed `dj-rest-auth` and `djangorestframework-simplejwt` and associated setup code. Auth now uses allauth headless and sessions by default.** * **Removed the `apps/authentication` app and associated api client code.** * **Updated the standalone front end to use an authentication system against allauth headless and added support for email confirmation, social authentication and password reset.** These changes borrow heavily from the [allauth example](https://github.com/pennersr/django-allauth/tree/main/examples/react-spa) project, and involve a large number of code-level changes which are not fully outlined here, though some of the larger ones are listed below: * Added a `CustomHeadlessAdapter` class to add the user’s profile picture to the API. * Removed translation markup from JavaScript code that is shared with the standalone front end. Translations are not supported, currently. * Upgraded eslint-related libraries. * Updated `.eslintrc.cjs` to `eslint.config.mjs` and tweaked the configuration settings. * Show more/better validation errors on login, signup, etc. * Changed `ProtectedRoute` to `AuthenticatedRoute`. * Added templates and components for various new authentication workflows (email confirmation, password reset, etc.). * Added an `ACCOUNT_USER_DISPLAY` setting. * Updated [the standalone front end docs](/experimental/react-front-end) to reflect the latest setup. ### Djstripe upgrade and webhook updates [Section titled “Djstripe upgrade and webhook updates”](#djstripe-upgrade-and-webhook-updates) This release upgrades `dj-stripe` to version 2.9 and migrates to dj-stripe’s database-backed webhooks. This lets you set up multiple webhook endpoints/secrets, if desired. See the upgrade section below for details on updating. 
Details: * **Upgraded dj-stripe to version 2.9** * **Webhook endpoints now need to be configured in the database instead of having a single global endpoint.** See [the updated subscription webhooks documentation](/subscriptions/#webhooks) for more details. * Updated webhook handling for subscriptions and ecommerce purchases to be compatible with the above model. * Added a `bootstrap_dev_webhooks` management command to help set up `djstripe` webhooks for development. * Added `apps.utils` to `settings.INSTALLED_APPS` so that management commands inside it are picked up. * Removed the no-longer used `DJSTRIPE_WEBHOOK_SECRET` setting and environment variable. * Upgraded `stripe` to version `11.6` (there is [a bug with djstripe and the latest `12.0` release](https://github.com/dj-stripe/dj-stripe/issues/2153)) * Updated the [subscription docs](/subscriptions/#webhooks) to reflect the latest changes for setting up webhooks in dev and production. ### Ruff linting updates [Section titled “Ruff linting updates”](#ruff-linting-updates) The ruff linting rules were expanded and code has been modified to pass the revised ruleset. This leads to cleaner, more consistent code across the project and should make future Pegasus merges/upgrades smoother. Details: * **Updated the default ruff rules to enable all of the [E (error) Rules](https://docs.astral.sh/ruff/rules/#error-e), as well as the [UP (pyupgrade) Rules](https://docs.astral.sh/ruff/rules/#pyupgrade-up), [B (flake8-bugbear) Rules](https://docs.astral.sh/ruff/rules/#flake8-bugbear-b), and [SIM (flake8-simplify) rules](https://docs.astral.sh/ruff/rules/#flake8-simplify-sim), in addition to the already-enabled [F (Pyflakes) Rules](https://docs.astral.sh/ruff/rules/#pyflakes-f), and [I (isort) Rules](https://docs.astral.sh/ruff/rules/#isort-i).** * These lead to some minor code changes, including: * Use `contextlib.suppress` in a few places instead of the previous exception handling * Use `raise ... from` in several places for more explicit exception handling. * Combined some nested if statements into single lines. * Use `super()` instead of `super(C, self)` * Use f-strings instead of percent style format strings when possible. * Use `Type | OtherType` instead of `Union[Type, OtherType]` in type hints * Use core types for `list`, `dict` etc. instead of the type classes. * Define classes without the object base class. * Increased strictness around line lengths. * Changed rule definition from `extend-select` to `select` based on [ruff’s recommendations](https://docs.astral.sh/ruff/linter/#rule-selection). ### Other updates [Section titled “Other updates”](#other-updates-1) * **Change: Upgraded npm to the latest version (11.3) in Docker containers and docs.** * **Change: Added a honeypot field to the sign up form to help reduce bot/spam sign ups.** (Thanks Chris and Stian for suggesting!) * Change: Added an ”@” alias for the `assets/javascript` folder and started using it in imports. * Change: Updated development Docker setup to run as the logged-in user (under a `django` user account) instead of root. This should help with file ownership permissions being assigned to root after running the project with Docker. Thanks Finbar and Jacob for the suggestion and help with this! * Change: Removed the “app-card” styling from the loading widget to make it more versatile. * Change: Tweaked whitespace in a few templates to be more consistent across the project. * Change: Use `blocktranslate trimmed` instead of `blocktranslate` in some Django templates. 
* Change: Updated the output of `bootstrap_subscriptions` to communicate that only subscription products should be added to `ACTIVE_PRODUCTS`. * **Fix: Changed reference of `stripe.Invoice.upcoming` to `stripe.Invoice.create_preview` since Stripe [deprecated the upcoming invoice API](https://docs.stripe.com/changelog/basil/2025-03-31/invoice-preview-api-deprecations).** * This fixes an issue with loading the “manage subscription” page when using the latest Stripe API version. * Fix: Added `DEBUG=false` to `heroku.yml` setup section, which helps enforce that debug is disabled when running `collectstatic`. This helps avoid “No module named ‘daphne’” errors in async deployments. Thanks Abhishek for reporting! * Fix: The `dark_mode_selector.html` component is no longer included if you have disabled dark mode. * Fix: Improved chat height styling on mobile screens to avoid extra scrolling. * Fix: Updated the migration that creates the default Site object to also update the table sequence ID. Thanks Julian and Geoff for the suggestion and help with this! * Fix: Fixed a test case in `test_member_management` that wasn’t getting properly exercised. * Fix: Deleted the unused `_create_api_keys_if_necessary` function in `bootstrap_subscriptions.py` * Fix: Fixed the hover text color of the `.pg-button-danger` CSS class styles on tailwind builds. ### Upgrading [Section titled “Upgrading”](#upgrading-2) There are several changes in this release that may require additional steps during the upgrade process. To help with this, I recorded a video walkthrough of myself upgrading one of my own projects, which you can watch below: #### Authentication APIs [Section titled “Authentication APIs”](#authentication-apis) If you were using Pegasus’s [standalone React front end](/experimental/react-front-end) then your setup should work out of the box after upgrading. If you were using the `dj-rest-auth` app and previous authentication APIs in a different way, then you will need to either: 1. Update the client code to work with allauth headless. This can be done by referring to the example front end and [allauth documentation](https://docs.allauth.org/en/dev/headless/index.html). 2. Restore the previous implementation of the authentication APIs. This can be achieved by *rejecting* the proposed changes to remove the `apps.authentication` app and library dependencies/setup during the upgrade process. #### Djstripe and Webhooks [Section titled “Djstripe and Webhooks”](#djstripe-and-webhooks) There are a few issues you might run into with the dj-stripe upgrade. **Database Migrations** If you get an `InconsistentMigrationHistory` running `manage.py migrate` on your database, look for any diffs in your existing migration files that change the `djstripe` dependency from `0012_2_8` to `0014_2_9a`, and then revert these changes back to `0012_2_8`. **Webhooks** The most recent dj-stripe has disabled the global webhook support in favor of database-backed webhooks. These are more versatile, secure, and easier to set up, but require a migration from the previous set up. To migrate your webhooks, follow the instructions to set up a new webhook endpoint from the [subscriptions docs](/subscriptions/#webhooks-in-production) and then delete your previous webhook endpoint. There is a complete walkthrough of this process in the video above. 
**If you fail to do this your webhooks will stop working in production.** #### Formatting and linting [Section titled “Formatting and linting”](#formatting-and-linting) All Pegasus code should be updated to pass the new ruff linting configuration, but the configuration changes might cause build failures on code that has been added/modified. Many fixes can be automated by running:

```bash
(uv run) ruff check --fix --unsafe-fixes
```

on the upgraded codebase and reviewing the changes it makes. However, some errors will likely require manual fixing, which can be done by reading the output and making the suggested change (or even giving the task to an LLM). You can see how I did this process with Claude Code in the above video. Alternatively, you can modify the `[tool.ruff.lint]` section of `pyproject.toml` to remove any categories of fixes you don’t want to turn on for your project. *April 24, 2025* ## Version 2025.4 [Section titled “Version 2025.4”](#version-20254) The main feature of this release is improved support for AI tools and coding assistants. This release adds a suite of rules files that can be used with Cursor, Claude Code, or other AI-enabled IDEs. It also adds an MCP configuration for interfacing with your development database and controlling a web browser. These options are configurable via new project settings. Watch a demo below, or check out the new [AI tool docs](/ai/development). ### Added [Section titled “Added”](#added) * **Optional rules files and MCP configuration for Cursor or Claude.** * These files will continue to be modified and iterated on as more developers use them and provide feedback. ### Changed [Section titled “Changed”](#changed-1) * Improved default file input styles. * Added front end install / build to `make init`. (Thanks Jacob for reporting!) * Bumped the `vite` version used by the standalone front end to the latest version. * Upgraded several Python packages to their latest versions. * Removed unused `postcss.config.js` file from the front end. (Thanks Jacob for reporting!) ### Fixed [Section titled “Fixed”](#fixed-1) * **Fixed a potential XSS vulnerability issue with `markdown_tags` not properly escaping vulnerable tags.** This issue existed if you were using the AI chat UI, or built other functionality on top of that library. All markdown is now sanitized with `nh3`. (Thanks Mitja for reporting!) * Also added tests for this functionality. ### Translation Creator updates [Section titled “Translation Creator updates”](#translation-creator-updates) A number of new features were added to [Translation Creator](https://www.saaspegasus.com/store/product/translation-creator/) this month. Big thanks to community member Valics who contributed the first draft of most of these updates. * **Upgraded to the latest Pegasus, including Tailwind 4 and DaisyUI 5.** * **Translations will now retain comments.** * **Added pagination, sorting, and filtering to the translations view.** * Added the ability to delete projects and clear translations. * Updated the DB constraint to use a hash of the input text instead of the text itself, which improves performance and fixes a bug with long translations. * Added / updated test cases. *April 4, 2025* ## Version 2025.3 [Section titled “Version 2025.3”](#version-20253) This release upgrades TailwindCSS to version 4 (and DaisyUI to version 5). It also has several minor updates and fixes. ### Tailwind 4 Update [Section titled “Tailwind 4 Update”](#tailwind-4-update) Pegasus now runs on Tailwind V4!
This comes with [a huge number of improvements](https://tailwindcss.com/blog/tailwindcss-v4), including much faster build times, simplified tooling, automatic content detection, and more. Tailwind and DaisyUI were upgraded using the associated guides ([Tailwind](https://tailwindcss.com/docs/upgrade-guide), [DaisyUI](https://daisyui.com/docs/upgrade/)). There is also an [upgrade guide for Pegasus apps](/css/tailwind/#upgrading-from-tailwind-3-to-4). Here’s a detailed breakdown of the changes: * **Upgraded to Tailwind 4 and DaisyUI 5.** * **Changed how tailwind is imported and customized in `site-tailwind.css` to match the V4 guidance.** * **Removed the `content` section of `tailwind.config.js`. Tailwind 4 automatically finds all content for the project.** * **Updated `postcss.config.js` to match the Tailwind 4 recommendation (using `@tailwindcss/postcss`).** * **Converted tailwind-specific CSS to V4 syntax, using `npx @tailwindcss/upgrade`. *These changes were automated.*** * Removed `@layer` declarations. * Converted some helper classes to use `@utility`. * Changed some double quotes to single quotes and cleaned up whitespace in css files. * Updated various classes in templates/JavaScript files according to the migration guide, e.g. `outline-none` -> `outline-hidden`, `flex-grow` -> `grow`, `max-w-screen-xl` -> `max-w-(--breakpoint-xl)`, etc. * DaisyUI updates: * **DaisyUI is now initialized as a plugin in `site-tailwind.css` instead of `tailwind.config.js`.** * **Themes are also now handled in this section. The docs have been updated to reflect this.** * Updated Pegasus CSS color variables to use the DaisyUI 5 versions. * Cleaned up Tailwind form rendering tags, removed unnecessary markup, and upgraded markup to be compatible with DaisyUI 5, e.g. removing `-bordered` classes. * Checkboxes will now appear on the left instead of the right of labels. * Updated active tabs to use the latest DaisyUI markup (`menu-active` instead of `active`). * Shadcn updates: * **Moved shadcn components from `assets/javascript/components/ui` to `assets/javascript/shadcn/components/ui`.** * **New shadcn components can now be added via the CLI and will end up in the right place with no additional steps.** * Updated `tsconfig.json` and `webpack.config.js` to be consistent with the new shadcn setup. * Regenerated shadcn components from the latest version of the library. * Changed shadcn theming to use the `@theme` declaration. * Removed all shadcn customizations from `tailwind.config.js` as they are superseded by the theme system. * Upgraded various shadcn dependencies to their latest versions. * Flowbite updates: * **Upgraded Flowbite to version 3.1.** * **Flowbite is now initialized as a plugin in `site-tailwind.css`.** * Explicitly import flowbite styles when building with flowbite enabled. This fixes out-of-the-box styling of some plugins. (Thanks Eeshan for reporting and fixing!) * Extracted the dark mode selector to its own component and upgraded it to work with DaisyUI 5. * Other fixes / changes: * Cleaned up various bits of CSS to use nested selectors. * Improved the contrast of the `pg-text-muted` class on dark mode. * Cleaned up commented-out code in CSS files. * Removed unused “app” CSS class styles. * Standalone front end updates: * **Removed tailwind entirely from the standalone front end CSS.** The standalone front end currently gets its CSS from the same built file as the Django app. * Updated the [Tailwind Documentation](/css/tailwind) to reflect the V4 changes.
### Other Updates [Section titled “Other Updates”](#other-updates-2) * Fixed an issue running `./manage.py` commands in production docker containers when using `uv`. Thanks Richard, Bryan, and Ken for reporting! * Fixed active tab highlighting on Flowbite demo. * Removed `--no-emit-package setuptools` from the `make pip-compile` command. Some configurations require setuptools and this was causing issues on some pip-tools builds. Thanks Jim for reporting and fixing! * Changed ruff `exclude` to `extend-exclude` in `pyproject.toml` to keep ruff’s defaults. Thanks Justin for the suggestion! * Added help text to a few `make` targets that were missing it. Thanks Steve for the suggestion! * Removed unused `pg-is-loading` CSS class. * Fix syntax of commented out `EMAIL_BACKEND` variable in `deploy.yml`. * Removed language codes from the language selector dropdown. ### Upgrading [Section titled “Upgrading”](#upgrading-3) See the [Tailwind upgrade guide](/css/tailwind/#upgrading-from-tailwind-3-to-4) for details on upgrading existing Tailwind projects. *Mar 26, 2025* ## Version 2025.2.2 [Section titled “Version 2025.2.2”](#version-202522) This is a hotfix release that fixes a bug in the styling of the avatar in the navbar on Bootstrap using certain browsers. Thanks Luc for reporting! * Restored `navbar.css` on bootstrap builds and moved it out of the bulma-specific folder. * Updated imports in `base.css` accordingly. *Mar 13 2025* ## Version 2025.2.1 [Section titled “Version 2025.2.1”](#version-202521) This is a hotfix release that fixes a missing newline between `REDIS_URL` and `GOOGLE_ANALYTICS_ID` in `.env` / `secrets` files. Thanks Peter for the bug report! *Mar 7 2025* ## Version 2025.2 [Section titled “Version 2025.2”](#version-20252) This is a maintenance release with a number of upgrades and fixes. ### Added [Section titled “Added”](#added-1) * **You can now configure the Github integration to push your Pegasus code to a subdirectory of the repository.** [More details in the updated Github docs here](/github/#pushing-pegasus-code-to-a-subdirectory-in-your-repository). Thanks to Simon for helping with this, and Aaron, Bernard, Danil, and Arno for suggesting it! * Added a `429.html` error template. ### Changed [Section titled “Changed”](#changed-2) * **Migrated the majority of shared style files from sass to css, and removed sass from Tailwind builds.** This makes the setup more consistent with a typical Tailwind project. * Removed “sass” and “sass-loader” packages from Tailwind builds. * Updated `webpack.config.js` on bootstrap and bulma builds to also now handle `.css` files. * Related, ported the `navbar.sass` file to css, moved it to the `bulma` folder, and removed it from non-Bulma builds. * **Set [Django’s cache framework](https://docs.djangoproject.com/en/latest/topics/cache/) to use Redis in production by default.** * The Redis cache will be enabled when `settings.DEBUG` is `False`. * Also explicitly list `redis` as a first-class requirement, which fixes a bug where tests could fail if you disabled celery. * Added `.venv` and `venv` to the `.gitignore` file. (Thanks Peter for suggesting!) * Use the project id in the default `AWS_STORAGE_BUCKET_NAME` in deploy.yml. (Kamal deployments, thanks Peter for suggesting!) * Updated the version of `ruff` used by pre-commit to the one that’s installed in the project, and upgraded ruff to the latest (0.9.7). (Thanks Peter for reporting!) 
* Added a timeout and error handling to turnstile requests, to prevent hanging if Cloudflare is down for some reason. (Thanks Peter for suggesting!) * Removed `ENABLE_DEBUG_TOOLBAR=True` from production environment/secrets files. * Consistently use double quotes instead of single quotes in environment and deployment files. (Thanks Peter for suggesting!) * Removed duplicate and unused variable declarations across Kamal’s `deploy.yml` and `secrets` files. Public variables are now listed in `deploy.yml` and private ones are listed in `secrets`. (Thanks Peter for suggesting!) ### Fixed [Section titled “Fixed”](#fixed-2) * **Improved edge-case handling in the Stripe checkout integration.** * Users should now see helpful error messages instead of getting 500 errors or ending up in an invalid state if they hit certain invalid URLs. * This also fixes a vulnerability where an attacker could potentially simulate e-commerce purchases through manual inspection and manipulation of requests. * Fixed a bug where `setuptools` was accidentally not present in production requirements files when using pip-tools. This caused production deployments to fail in certain cases. (Thanks Eeshan and Jim for reporting!) * Fixed an issue deploying to Heroku with Docker when using uv by removing Docker caching, which Heroku does not support. (Thanks Toul for reporting!) * Fixed the active tab highlighting styles in the examples navigation on Bulma builds. * Removed unnecessary `
` elements from `top_nav.html` on Bootstrap builds. * Don’t include Docker translation instructions in README if not using Docker. (Thanks Peter for reporting!) * Updated the Pegasus CLI to [version 0.8](https://github.com/saaspegasus/pegasus-cli/releases/tag/v0.8), which fixes a bad html closing tag in the generated templates. (Thanks Julian for the bugfix!) * Removed celery sections from `deploy.yml` in kamal builds if celery isn’t enabled. ### Removed [Section titled “Removed”](#removed) * Removed `django_otp` dependency and configuration, which was only there to facilitate the transition to `allauth.mfa`. See the release notes for [Version 2024.5](#version-20245) for more information on this change. * Also removed the associated `migrate_allauth_2fa` management command. * Removed the default user-facing messages on login/logout. You can add them back or customize them by editing the files in `/templates/account/messages/`. ### Documentation [Section titled “Documentation”](#documentation) * Added [a community guide on using Digital Ocean Spaces](/community/digital-ocean-spaces) (alongside Amazon SES). Thanks Neil and Finbar for the contribution! ### Upgrading [Section titled “Upgrading”](#upgrading-4) Tailwind projects that have added their own `.sass` files will need to either restore sass support or port these files to `.css` (llms are good at this!). You can “restore” sass support by rejecting the proposed changes in `package.json` and `webpack.config.js` during upgrade. If you have removed Redis from your project you will need to update the default cache config in `settings.py`. *Feb 28, 2025* ## Version 2025.1.1 [Section titled “Version 2025.1.1”](#version-202511) This is a hotfix release that fixes an issue with installing Node 22 in the development Docker container. Thanks Oscar and Emiliano for reporting! If you’d rather manually apply the patch, you can just apply the following patch to your `Dockerfile.dev` file: ```diff RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked \ --mount=target=/var/cache/apt,type=cache,sharing=locked \ rm -f /etc/apt/apt.conf.d/docker-clean && \ echo "deb https://deb.nodesource.com/node_22.x bookworm main" > /etc/apt/sources.list.d/nodesource.list && \ wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \ curl -fsSL https://deb.nodesource.com/setup_22.x | bash - && \ apt-get update && \ apt-get install -yqq nodejs \ ``` *Jan 28, 2025* ## Version 2025.1 [Section titled “Version 2025.1”](#version-20251) This release includes mostly backend infrastructure changes to some Pegasus features to pave the way for a (future) plugin ecosystem. This should make it easier to maintain Pegasus apps as well as possible (in the future) for other people to develop apps that can seamlessly integrate into Pegasus. ### New “async” build flag [Section titled “New “async” build flag”](#new-async-build-flag) Previously when you enabled async / websockets, you also got the group chat example application. Now you can enable async features without this additional example app, and turn it on separately with a new configuration option. This lets you use async but not have to manually delete the example application. ### Added a flag to remove Celery (if possible) [Section titled “Added a flag to remove Celery (if possible)”](#added-a-flag-to-remove-celery-if-possible) Added a configuration option that will remove celery and all dependencies + configuration ***if no other parts of your application need it***. 
Celery will also be removed from production deployment configurations. Celery is still required (and will be automatically enabled) if you are using any of: 1. The Pegasus examples 2. Subscriptions with per-unit billing enabled 3. Any AI chat features If you’re not using these features and want to disable Celery you can do that from your project settings page. ### Organizational changes to apps for more consistency [Section titled “Organizational changes to apps for more consistency”](#organizational-changes-to-apps-for-more-consistency) The following changes don’t have any new features or functionality, but change small things about how the code is organized for affected apps (AI chat, AI images, and async group chat). It is hoped that these changes will make maintenance, upgrades, and future extensions to Pegasus easier. Changes affecting the AI Chat, AI Images, and Group Chat apps: * Moved app declarations for these apps to the end of `PROJECT_APPS` in `settings.py` * Moved url declarations for these apps to the end of `urls.py`. * Moved settings and environment variables for these apps to be located together. * Settings for these apps are now prefixed with `AI_CHAT_` or `AI_IMAGES_`, respectively. * **This also means that shared settings like `OPENAI_API_KEY` are now declared multiple times and need to be updated in multiple places.** See the “upgrading” section below on how to get around this duplication. * Moved chat JavaScript setup to the end of `module.exports` in `webpack.config.js`. * Depending on your configuration, the order of navigation tabs in the UI may change. * Made minor tweaks to how channels urls are set up. * Image logos used by the AI chat and images apps were moved to `/static../../assets/images/ai_../../assets/images/` and `/static../../assets/images/ai_../../assets/images/`, respectively. * The declaration for these apps has moved to a new “plugins” section of `pegasus-config.yml`. ### Other Changes [Section titled “Other Changes”](#other-changes) Other changes included in this release are below. **Changed** * **Upgraded default Python to Python 3.12.** * Bumped the Python version to 3.12 in CI, and dev/production Docker containers. * Also added [a `.python-version` file](https://docs.astral.sh/uv/concepts/python-versions/#python-version-files) for uv builds (set to 3.12) * **Upgraded default Node to 22.** * Bumped the Node version to 22 in CI, and dev/production Docker containers. * **Upgraded nearly all Python packages to their latest versions.** * Added a pin to `dj-stripe<2.9` because 2.9 is not yet supported. * **Upgraded nearly all JavaScript packages to their latest versions.** * Tailwind v4 was not upgraded as it was just released and is not yet supported. * **Ruff and pre-commit will now sort imports by default.** (See upgrade notes below) * **This also updates import sorting in a number of files.** * **Pre-commit now runs ruff with `--fix` enabled, which will automatically apply (but not stage) fixable errors.** * Dependencies are now sorted in `pyproject.toml` (uv builds) and `requirements.in` (pip-tools builds) * Added email address to admin search for team memberships and invitations. Thanks EJ for the suggestion! * Made the “timezone” field editable in the user admin. Thanks Peter for the suggestion! * Changed active tab variable for ai image app from “ai\_images” to “ai-images” to match convention of other apps. * Added a link from the user profile to manage email addresses if the user has more than one email registered. 
(Thanks Simon for the suggestion!) * Make it so that `./manage.py` commands default to `uv run` if you build with uv enabled. * The `chat_tags` template tag library was moved to the `web` app and renamed to `markdown_tags`, making it easier to use outside the chat application. **Fixed** * **Fixed an issue that caused Render deployments to fail when using uv.** (Thanks Jacob for reporting and helping fix!) * Add `psycopg2-binary` to production requirements if using sqlite, since it is still required for production deployments. (Thanks Randall for reporting!) * Updated invitations to always store email addresses in lowercase to be consistent with account emails. Also fixed comparisons between invitations and sign up emails to be case-insensitive. (Thanks EJ for reporting and the fix!) * Renamed `tailwind.config.js` to `tailwind.config.cjs`, which prevents build failures on Node 22. **Removed** * Removed no-longer-used `payments.js` and `stripe.sass` files. * Stopped including `pip-tools` in `dev-requirements` when using `uv`, as it is no longer needed. ### Upgrading [Section titled “Upgrading”](#upgrading-5) **Python / Node updates** You may need to manually modify your dev/production environment to upgrade to Python 3.12 and Node 22. If you’re using Docker, this should happen automatically by following the [upgrade process](/upgrading). Pegasus apps should still run on Python 3.11 / Node 20, but will no longer be extensively tested on those versions moving forwards. **Settings Changes** Some settings around AI API keys have been renamed and will need to be updated in your `settings.py` and `.env` files. If you are using AI chat and AI images with OpenAI, the easiest way to use a shared API key is to add the following to your `.env` / environment variables:

```plaintext
OPENAI_API_KEY="sk-***"
```

And then modify your settings variables to read from that value:

```python
# add an OPENAI_API_KEY setting, in case it was referenced elsewhere in your code
OPENAI_API_KEY = env("OPENAI_API_KEY", default="")
# modify the image/chat settings to use the same openai key instead of reading from new environment variables
AI_IMAGES_OPENAI_API_KEY = OPENAI_API_KEY
AI_CHAT_OPENAI_API_KEY = OPENAI_API_KEY
```

**Import Sorting Changes** If you have auto-formatting enabled, you will likely get CI errors after upgrading due to the stricter import sorting. You can fix these by running a manual ruff check locally and then committing the result:

```plaintext
ruff check --fix
# or with uv
uv run ruff check --fix
```

*Jan 27, 2025* ## Version 2024.12.1 [Section titled “Version 2024.12.1”](#version-2024121) This is a minor hotfix release for 2024.12. * **Fixed a bug where the delete workflow was broken for apps created by the Pegasus CLI on non-Tailwind builds.** This happened because the “css_framework” CLI option was accidentally missing from `pegasus-config.yml`. Thanks Robert for reporting! * Updated the README instructions for setting up pre-commit hooks when using uv. *Jan 13, 2025* ## Version 2024.12 [Section titled “Version 2024.12”](#version-202412) This release adds first-class support for using uv as a complete replacement for development and production workflows (see below), and has a handful of fixes/changes. ### UV support! [Section titled “UV support!”](#uv-support) This release adds full support for [uv](https://docs.astral.sh/uv/) as a replacement package manager for your project. You can use uv by selecting the new “uv” option as your “Python package manager” on your project settings page.
When you select uv, the following changes will be made: * All requirements.in / requirements.txt files are removed. * Your project requirements will now be listed in your `pyproject.toml` file. * Development and production dependencies will be listed under separate dependency-groups. * Your pinned project requirements will be listed in a new `uv.lock` file. * Docker containers (in development and production) will use `uv` to set up and manage the Python environment. * A `make uv` target will be added to Docker builds to run `uv` commands in your container. The main benefits of using uv are: * Speed. It is just way, way faster at anything related to package management. * Easier to set up and install Python. * Lock files (pinned versions) are consistent across any platform. * More tooling. * Speed. (It’s so fast we put it twice.) There will be a longer write-up about uv released very soon, but in the meantime you can review the updated [python documentation](/python/setup) and new [uv documentation](/python/uv). The rest of the docs have been updated to accommodate uv, though it’s possible there are some places that were missed. If you spot any issues in the docs, get in touch! ### Other fixes [Section titled “Other fixes”](#other-fixes) * **Upgraded the Pegasus CLI to fix an issue where the generated list views were not properly scoped to the appropriate team / user.** If you used the CLI to generate any apps it’s highly recommended that you check that you are not exposing objects that should not be viewable. ### Other updates [Section titled “Other updates”](#other-updates-3) * **Changed the default setup of social logins to use settings-based configuration instead of `SocialApps` in the database.** See the upgrade notes if you are using social logins to prevent issues. Thanks Alex for the suggestion and for helping with the updated documentation! * Updated the default flowbite setup to disable the forms plugin. This was causing styling conflicts with the default DaisyUI styles on certain form elements, including checkboxes. * Re-formatted the default form input template for readability. ### Upgrading [Section titled “Upgrading”](#upgrading-6) To migrate an existing project to `uv`, see [this guide](/cookbooks/#migrating-from-pip-tools-to-uv). If your application was already using social logins defined in the database, the new settings-based declaration will conflict and cause errors on social login. To fix this you can either delete the `APPS` section of the relevant service in `settings.SOCIALACCOUNT_PROVIDERS`, or you can move the credentials into your project environment (e.g. `.env`) and delete the relevant `SocialApp` from the Django admin. *November 29, 2024* ## Version 2024.11.3 [Section titled “Version 2024.11.3”](#version-2024113) This is a minor maintenance release with a few changes in preparation for adding `uv` support (coming soon!). ### Changed [Section titled “Changed”](#changed-3) * Pinned the version of `uv` used in CI and Dockerfiles. * Added `venv` and `.venv` directories to the `.dockerignore` file and `make translations` target. * The `make requirements` command now restarts containers in the background, making it easier to combine with other make targets. * Added a catch-all to the `Makefile` to prevent error messages when running `make npm-install` and similar commands. * Updated README commands to consistently use `python manage.py` instead of just `./manage.py`. * Made some minor formatting changes to `pyproject.toml`.
* Fixed the link to the multi-stage dockerfile docs in `Dockerfile.web` * Upgraded a number of Python packages. * Updated the `default_stages` of the `.pre-commit-config.yaml` file to the latest expected format (`pre-commit`). *Nov 21, 2024* ## Version 2024.11.2 [Section titled “Version 2024.11.2”](#version-2024112) This release adds the ability to disable dark mode on Tailwind, upgrades front end libraries, bumps the API client version, and has a handful of other small changes and fixes. ## Added [Section titled “Added”](#added-2) * **Added a new build option to disable dark mode for Tailwind builds.** (Thanks Arno for suggesting!) * Added basic user-facing error messages to the standalone front end sign up and login workflows. ## Changed [Section titled “Changed”](#changed-4) * **Upgraded all JavaScript dependencies.** * **Updated the API client to use the latest version 7.9.0, and updated the standalone front end to work with the latest changes.** * Updated template-partials installation to be manually loaded, to allow for easier integration with other templating systems like django-cotton. * Moved active tab highlighting to the base view in the example object demo. * Made a few very minor edits to comments and whitespace in a few places. ## Fixed [Section titled “Fixed”](#fixed-3) * Fixed a bug where your migrations and tests would fail if your project name was > 50 characters (thanks Bernard for reporting!). * Fixed a bug in the group chat demo where submitting an empty room name would take you to a 404 page. * The `docker_startup.sh` file is no longer included if you are not using a docker-based deploy platform. * Updated the `config/README` file which had outdated information that predated the migration to Kamal 2. (Thanks Arno for reporting!) * Improved comments in the kamal `secrets` file and `.env` files. (Thanks Arno for suggesting!) ## Removed [Section titled “Removed”](#removed-1) * The `.env` file is no longer included in zip downloads. This file was already removed from Github builds so this just makes the two consistent. Projects should create `.env` file from the `.env.example` file. * Removed the `migrate_customers_to_teams` management command. This was added for an upgrade two years ago, and is assumed to be no longer needed. *Nov 14 2024* ## Version 2024.11.1 [Section titled “Version 2024.11.1”](#version-2024111) This is a minor hotfix release. ### Fixed [Section titled “Fixed”](#fixed-4) * Fixed an issue where the team selector was accidentally transparent in Tailwind builds. * Removed shadcn template that was accidentally included even if shadcn was disabled. ### Updated [Section titled “Updated”](#updated) * Removed extra whitespace from `form_tags.py`. (Thanks Brennon for reporting!) * Updated `make help` to allow for commands defined in `custom.mk` with digits to also show up. (Thanks Arno for suggesting!) *Nov 4 2024* ## Version 2024.11 [Section titled “Version 2024.11”](#version-202411) This is a feature release with an emphasis on improving the Tailwind CSS experience with Pegasus. Watch the video below for a demo, or read on for the highlights. ### Dark mode improvements [Section titled “Dark mode improvements”](#dark-mode-improvements) A dark mode selector was added to the navigation, allowing users to easily toggle between light, dark, and “system default” mode. The user’s selection is preserved server-side in the session object, which also helps to prevent flickering across page loads. 
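Below is a minimal sketch of how a preference like this can be persisted in the session so the correct theme is rendered on the very first response. The view name, URL, and session key are illustrative assumptions, not Pegasus’s actual implementation.

```python
# Illustrative only: persist the user's choice ("light", "dark", or "system")
# in the Django session. Names here are hypothetical, not Pegasus's code.
from django.http import HttpResponse, HttpResponseBadRequest
from django.views.decorators.http import require_POST

VALID_SCHEMES = {"light", "dark", "system"}


@require_POST
def set_color_scheme(request):
    scheme = request.POST.get("scheme", "system")
    if scheme not in VALID_SCHEMES:
        return HttpResponseBadRequest("unknown color scheme")
    # Storing the choice server-side lets templates render the right theme
    # on the initial response, avoiding a flash of the wrong theme.
    request.session["color_scheme"] = scheme
    return HttpResponse(status=204)
```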
### Better Theme Support [Section titled “Better Theme Support”](#better-theme-support) It’s now easier than ever to change your project’s theme. Each project now supports a default light and dark theme which will be used throughout the site. The default themes need only be changed in `tailwind.config.js` and `settings.py`, and everything else is taken care of. See the updated [tailwind theme documentation](/css/tailwind/#changing-your-themes) for more details. ### New shadcn integration and demo dashboard [Section titled “New shadcn integration and demo dashboard”](#new-shadcn-integration-and-demo-dashboard) A new build setting allows you to build your project with [shadcn/ui](https://ui.shadcn.com/) installed. Shadcn is a great and versatile component library for React and Tailwind, but it is difficult to integrate into a Django project without building a separate front end. Now Pegasus takes care of that integration for you, and provides a reference dashboard implementation of how to work with the library. The reference dashboard is a hybrid single-page React app served by Django. It uses the same colors as the DaisyUI theme, updates when you change your theme, and has many interactive components. However, it is not connected to any backend data---it is just a UI example. Read more in the [shadcn docs here](/css/tailwind/#shadcn). ### New flowbite integration and demo component page [Section titled “New flowbite integration and demo component page”](#new-flowbite-integration-and-demo-component-page) Another new build setting allows you to build your project with [flowbite](https://flowbite.com/) installed. Flowbite is another great component library for Tailwind and does *not* use React---making it a great fit for htmx projects. If you enable this setting, flowbite will automatically be installed and you can drop flowbite components into any Django template. The reference page has an example of a few of these components. Read more in the [flowbite docs here](/css/tailwind/#flowbite). ### Other updates [Section titled “Other updates”](#other-updates-4) * **Upgraded all Python packages to their latest versions.** * **[uv](https://docs.astral.sh/uv/) is now used to install Python packages in Docker files and Github actions.** * Also updated the `make pip-compile` target to use `uv`. * This resulted in minor changes to all `requirements.txt` files. * **Team invitation pages now prompt a user to log in instead of sign up if the email is associated with a known account.** (Thanks Daniel for suggesting! A sketch of this check is included at the end of this section.) * Your configured Github username, if available, will be used in a few places instead of a default value. (Thanks Richard for suggesting!) * Added `bg-base-100` to the `` tag of the base template and removed it from other components where it was now redundant. This improves theming support when themes heavily modify the base color. (Tailwind builds only.) * Added equals signs to `ENV` declarations in production Docker files, for consistency. (Thanks Denis for suggesting!) * Slightly improved the styling of the e-commerce app. * Overhauled the [Tailwind CSS documentation](/css/tailwind). **Updates to the CLI ([release notes](https://github.com/saaspegasus/pegasus-cli/releases))** * Fixed a bug on certain environments where the `pegasus` command conflicted with a local `pegasus` folder, causing import errors running the CLI. * Apps created with `startapp` now use a `POST` for deletion instead of a `GET`. * Deletion now includes a modal confirmation (Tailwind and Bulma builds only).
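As referenced in the team invitation item above, deciding whether to show the log-in or sign-up page amounts to checking whether the invited email already belongs to an account. A minimal sketch, with a hypothetical helper name and allauth’s default URL names (not Pegasus’s actual code):

```python
# Hypothetical helper illustrating the check; names are assumptions.
from django.contrib.auth import get_user_model
from django.urls import reverse


def login_or_signup_url(invited_email: str) -> str:
    """Send known accounts to log in, everyone else to sign up."""
    User = get_user_model()
    if User.objects.filter(email__iexact=invited_email).exists():
        return reverse("account_login")  # allauth's default login URL name
    return reverse("account_signup")  # allauth's default signup URL name
```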
### Upgrading [Section titled “Upgrading”](#upgrading-7) If you’re using Docker, the `make upgrade` command won’t work out-of-the-box due to the change in how requirements files are managed. You will first have to rebuild your containers with:

```bash
make build
```

or

```bash
docker compose build
```

After that, you should be able to run `make upgrade` as normal. *Nov 1, 2024* ## Version 2024.10 [Section titled “Version 2024.10”](#version-202410) This release upgrades Kamal deployment to Kamal 2 and dramatically simplifies the Kamal deployment process. ### Kamal 2 deployment and related changes [Section titled “Kamal 2 deployment and related changes”](#kamal-2-deployment-and-related-changes) In the upgrade to Kamal 2, the following changes were made: * Updated Kamal to run from the root project directory instead of the `deploy` subdirectory. * The config file was also moved from `deploy/config/deploy.yml` to `config/deploy.yml`. * Moved environment secrets from `deploy/.env` to `.kamal/secrets` to match Kamal 2’s recommendation. * Kamal can now be installed and run with Docker without any additional workarounds, [as described here](https://kamal-deploy.org/docs/installation/). The custom Docker setup instructions have been removed. * Kamal is now run as root by default, which dramatically simplifies the server setup process. There is now no need to run any manual steps to set up your server. * Kamal now creates and manages its own Docker network. * Traefik has been dropped in favor of `kamal-proxy` for the proxy server, as per the new Kamal defaults. * The `.gitignore` and `.dockerignore` files were updated to reflect the new structure. * Added `apps.web.middleware.healthchecks.HealthCheckMiddleware` to work around Kamal health checks and Django security features, [as outlined here](https://github.com/basecamp/kamal/issues/992#issuecomment-2381122195). A minimal sketch of this pattern is included at the end of this version’s notes. * Removed unnecessary media directory setup from `Dockerfile.web`. It is recommended to use an external storage service for media files and not the Docker container. In addition, there were a few changes that affect projects that aren’t using Kamal: * `apps.web.locale_middleware` was moved to `apps.web.middleware.locale`. * `docker_startup.sh` was moved from the `deploy` folder to the project root. The [Kamal documentation](/deployment/kamal) has been updated to reflect these changes. ### Other fixes [Section titled “Other fixes”](#other-fixes-1) * **Subscriptions in a “past due” state are now treated as “active” for the purposes of feature gating and accessing the billing portal.** This is more consistent with [how Stripe treats subscriptions in this state](https://docs.stripe.com/api/subscriptions/object#subscription_object-status). (Thanks Luc for suggesting!) * Fixed a bug where several `make` targets mistakenly included a `--no-deps` flag which would fail if your database container was not running. (Thanks Gary for reporting!) * Fixed an issue where Stripe subscription webhooks weren’t properly handled if you were using the embedded Stripe pricing table. (Thanks Andrew for reporting!) * Fixed an issue introduced in 2024.9 where Stripe ecommerce webhooks weren’t always processed correctly. * Added a migration file to automatically work around [this dj-stripe issue](https://github.com/dj-stripe/dj-stripe/issues/2038) so that it wasn’t a manual process.
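As flagged in the `HealthCheckMiddleware` item above, the general pattern (from the linked Kamal issue) is to answer the proxy’s health check before Django’s host validation and SSL redirects can reject it. A minimal sketch, assuming Kamal’s default `/up` path and placement near the top of `MIDDLEWARE`; Pegasus’s actual middleware may differ:

```python
# Minimal sketch of the health check pattern from the linked Kamal issue;
# not necessarily identical to apps.web.middleware.healthchecks.HealthCheckMiddleware.
from django.http import HttpResponse


class HealthCheckMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Answer the proxy's health check (Kamal defaults to "/up") before
        # ALLOWED_HOSTS validation or SSL redirects can turn it into an error.
        if request.path == "/up":
            return HttpResponse("ok")
        return self.get_response(request)
```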
*Oct 15, 2024* ## Version 2024.9.3 [Section titled “Version 2024.9.3”](#version-202493) This release is mainly [an update to the CLI](https://github.com/saaspegasus/pegasus-cli/releases/tag/v0.3): ### CLI updates [Section titled “CLI updates”](#cli-updates) * **You can now generate apps that work seamlessly with Pegasus teams** (will use `BaseTeamModel` and add the team slug and permissions checks to all urls and views). * The CLI now generates a default `admin.py` config for each data model. * User foreign keys now use `settings.AUTH_USER_MODEL` instead of being hardcoded to `apps.users.models.CustomUser`. ### Other changes [Section titled “Other changes”](#other-changes-1) * Fixed an issue where HTMX links without href tags weren’t showing a pointer cursor on some CSS frameworks. * Add default region to Redis and Postgres configurations in `render.yaml` to make it easier to find/replace them when changing your project’s region. (Thanks Jacob for suggesting!) *Sep 26, 2024* ## Version 2024.9.2 [Section titled “Version 2024.9.2”](#version-202492) This release fixes a bug that prevented the CLI from running on Windows machines. Thanks Jonathan for reporting! If you don’t want to upgrade you can just `pip install pegasus-cli==0.2.1` to apply the fix. *Sep 20, 2024* ## Version 2024.9.1 [Section titled “Version 2024.9.1”](#version-202491) This release fixes a few things in the 2024.9 release. * Updated the `bootstrap_ecommerce` management command to create `ProductConfiguration` objects for all active Products in Stripe. * Fixed an issue on the ecommerce homepage where a closing `
` tag was misplaced if a product didn’t have a default price set. *Sep 18, 2024* ## Version 2024.9 [Section titled “Version 2024.9”](#version-20249) There are two big updates in this release: 1. The Pegasus CLI, which allows you to instantly spin up new apps. 2. E-Commerce/Payments improvements. ### The Pegasus CLI [Section titled “The Pegasus CLI”](#the-pegasus-cli) The [Pegasus CLI](https://github.com/saaspegasus/pegasus-cli/) is a standalone command-line tool that allows you to instantly spin up new Django apps. You can specify as many data models as you want and it will generate a starting CRUD interface for each of them. Here’s a quick demo: **At the moment the CLI only supports the HTMX build of Pegasus.** A React-based implementation is planned for a future date. Huge thanks to Peter for his excellent [Pegasus example apps](https://github.com/pcherna/pegasus-example-apps-v2) project which served as a reference for implementing the CRUD application and pagination. ### E-Commerce / Payments demo improvements [Section titled “E-Commerce / Payments demo improvements”](#e-commerce--payments-demo-improvements) This is a series of updates designed to make it easier to build a functional end-to-end application on top of the e-commerce demo. * Added a `ProductConfiguration` model to attach additional metadata to products. * E-Commerce product URLs and views now use the `ProductConfiguration` `slug` field instead of the Stripe Product IDs. * Added a `@product_required` decorator that can be used to restrict access to views based on whether the user has purchased a product. * Added a demo “access product” page that shows how to use the `@product_required` decorator. * Added `user_owns_product` and `get_valid_user_purchase` helper functions. * Improved the navigation and use of breadcrumbs in the demo UI. * **See the upgrade notes for information about migrating previous data to the new setup.** See also: the updated [Payments docs](/payments). ### Other Changes [Section titled “Other Changes”](#other-changes-2) #### Added [Section titled “Added”](#added-3) * **Added `django-htmx` and `django-template-partials` as first-class dependencies to HTMX builds.** These libraries are used by the CLI and will be used for more HTMX-based functionality moving forwards. * Added a `make manage` command to run arbitrary `manage.py` commands in a docker environment. E.g. `make manage ARGS='createsuperuser'`. * Added the ability to pass arguments to `make test` in docker. E.g. `make tests ARGS='apps.teams --keepdb'`. (Thanks David for the suggestion!) #### Changed [Section titled “Changed”](#changed-5) * Changed links on the tailwind signup page to use the `pg-link` class instead of explicit tailwind classes. (Thanks Peter for the suggestion!) * Silenced extraneous djstripe warnings when running tests. (Thanks Chris for the suggestion!) * Added `.vscode` and VS Code workspace files to the project `.gitignore`. * Switched from `assert` statements to `raise ValueError` in the e-commerce Stripe checkout confirmation view. * Moved some of the currency helper functions out of the `subscriptions` app into `utils.billing` so they can be used in ecommerce workflows even if subscriptions are disabled. * Set `PYTHONUNBUFFERED` and `PYTHONDONTWRITEBYTECODE` in the docker compose file for Python containers. (Thanks Richard for the suggestion!) * Upgraded Django to 5.1.1. #### Fixed [Section titled “Fixed”](#fixed-5) * Fixed a typo in the help text for the `bootstrap_ecommerce` command.
* Fixed a bug where the `user_teams` context processor could cause a crash if the auth middleware didn’t run (for example, on a 500 error page in production). ### Upgrade Notes [Section titled “Upgrade Notes”](#upgrade-notes) If you have existing `Purchase` data in your application, you will need to migrate it to the new `ProductConfiguration` structure. This is a three-step process: First, you will need to apply the database updates, but allow `Purchase.product_configuration` to be null. Instead of running `./manage.py migrate`, you will have to run the following command:

```bash
./manage.py migrate ecommerce 0002
```

After running this, you can run the following command to migrate the existing data:

```bash
./manage.py migrate_ecommerce
```

The `migrate_ecommerce` management command will: 1. Create `ProductConfiguration` objects for all products in `settings.ACTIVE_ECOMMERCE_PRODUCT_IDS`. 2. Create `ProductConfiguration` objects for all products referenced in existing `Purchase` models. 3. Set `purchase.product_configuration` to the new `ProductConfiguration` object for each `Purchase`. Finally, you can make the `Purchase.product_configuration` field non-null by running:

```bash
./manage.py migrate ecommerce 0003
```

**New projects, or projects without any existing purchase data, can skip these steps and run `./manage.py migrate` directly.** However, you may still want to run `./manage.py migrate_ecommerce` to populate `ProductConfiguration` objects for your active products. *Sep 17, 2024* ## Version 2024.8.2 [Section titled “Version 2024.8.2”](#version-202482) This is a maintenance release that includes a number of mostly small fixes and updates, and updates Django to version 5.1. ### Fixed [Section titled “Fixed”](#fixed-6) * **Fixed a few styling issues on Bulma builds**: * Disabled dark mode. The styling for dark mode was not fully supported by Bulma and led to strange-looking layouts. * Fixed an issue where the active tab wasn’t properly highlighted in certain cases on Bulma builds. * Fixed an issue with sqlite builds where the default `DATABASE_URL` would cause the DB to switch to Postgres. (Thanks Harry and Richard for reporting!) * Switched allauth from [Twitter](https://docs.allauth.org/en/latest/socialaccount/providers/twitter.html) (which seems to no longer be supported) to [Twitter Oauth2](https://docs.allauth.org/en/latest/socialaccount/providers/twitter_oauth2.html), which still works. (Thanks Bandi for reporting!) * Fixed an issue introduced in version 2024.8 which caused Heroku Docker deploys to fail. Heroku [does not support caching](https://stackoverflow.com/a/78901250/8207), so it has been removed from Heroku Docker builds. (Thanks Richard for reporting!) * Fixed a bug where the `team_nav_items.html` and `team_selector.html` templates could be accidentally included even if you built without teams. * Changed the (unused) `text-muted` css class to `pg-text-muted` in a handful of places on Tailwind builds. (Thanks Peter for reporting!) * Removed the unused `AWS_S3_CUSTOM_DOMAIN` variable from `.env` files. ### Changed [Section titled “Changed”](#changed-6) * **Upgraded Django to version 5.1.** * Upgraded all Python packages to their latest versions. * Updated Pegasus color CSS variables to use the DaisyUI variables, so that they change when you change DaisyUI themes. (Thanks Peter for the suggestion!) * Removed `custom.mk` if your project was not generated with a `Makefile`. (Thanks Finbar for reporting!) * Removed a “Containers started” message from the `make start` command that never executed.
(Thanks Richard for reporting!) * Better styling of inputs of type `time` and `datetime-local` in forms on all CSS frameworks. (Thanks Peter for reporting and fixing!) * Simplified the Bulma navbar to use Bulma native classes instead of custom CSS. (See upgrade note below.) * Updated default Github repo in `app-spec.yml` to use the raw project slug instead of the hyphenated version. (Digital Ocean deployments only; thanks Richard for suggesting!) * Moved `SERVER_EMAIL` and `DEFAULT_FROM_EMAIL` from `settings_production.py` to the main `settings.py` file, and made it possible to set them via the environment/`.env` file. * Added many more common settings and secrets to the Kamal `deploy.yml` file. ### Documentation [Section titled “Documentation”](#documentation-1) * Improved the documentation on [customizing the Material Bootstrap theme](/css/material). * Added documentation for [deploying multiple apps to the same VPS with Kamal](/deployment/kamal/#cookbooks). ### Upgrading [Section titled “Upgrading”](#upgrading-8) * Bulma builds may need to add the `is-tab` class to `navbar-items` in the top nav to mimic the updated navbar styling. *August 23, 2024* ## Version 2024.8.1 [Section titled “Version 2024.8.1”](#version-202481) This is a maintenance release which upgrades HTMX to version 2.0 and fixes a handful of minor bugs. ### Changed [Section titled “Changed”](#changed-7) * **Upgraded HTMX to [version 2.0](https://htmx.org/posts/2024-06-17-htmx-2-0-0-is-released/).** See upgrade note below. ### Fixed [Section titled “Fixed”](#fixed-7) * Fixed a bug on some environments where `make build-api-client` would run relative to the wrong directory. (Thanks Ben for finding and fixing!) * Downgraded Postgres from 16 to 14 on Digital Ocean deployments, due to [an issue with permissions on version 16](https://www.digitalocean.com/community/questions/how-can-i-create-a-postgres-16-user-that-has-permission-to-create-tables-on-an-app-platform-dev-database) that was causing new Digital Ocean deployments to fail. (Thanks Panagiotis for reporting!) * Switched the default celery pool to [solo](https://docs.celeryq.dev/en/stable/internals/reference/celery.concurrency.solo.html) in development, to fix issues running on Windows. See [updated docs](/celery). * Updated in-app help hint to recommend running `./manage.py bootstrap_ecommerce` instead of `./manage.py djstripe_sync_models price`. ### Upgrading [Section titled “Upgrading”](#upgrading-9) HTMX 2.0 loads its extensions from new locations. If you were loading HTMX extensions in your own templates, you will have to update those extension script tags to point to their 2.0 versions. *August 13, 2024* ## Version 2024.8 [Section titled “Version 2024.8”](#version-20248) This is a maintenance release with many small updates and fixes. ### Added [Section titled “Added”](#added-4) * **Added test cases for subscription decorators, feature gating, and views.** These can be extended/adapted to test custom subscription logic. Also added utility functions to create test products, subscriptions and mock requests. * Added a test that will fail if your project is missing any database migrations. [More on this concept here](https://adamj.eu/tech/2024/06/23/django-test-pending-migrations/). * **Added an example landing page to Tailwind builds, based largely on [Scriv’s landing page](https://scriv.ai/).** * Added `TURNSTILE_KEY` and `TURNSTILE_SECRET` to Kamal’s default secrets.
* Added a section on configuring static files to the [production checklist](/deployment/production-checklist/#check-your-static-file-setup). ### Changed [Section titled “Changed”](#changed-8) * **Code is now automatically formatted for all projects.** The “Autoformat code” check box has been renamed to “Enable linting and formatting” and now only controls whether `ruff` and the pre-commit hooks are included in the project download. Projects that had already enabled auto-formatting are unaffected by this change. (See upgrade notes below.) * **The example landing pages are now used as the project’s landing page instead of being listed in the examples**. (Bulma and Tailwind builds only.) * **Team invitation emails are now better styled, matching the same format as account emails.** (Thanks EJ for the suggestion!) * The `EMAIL_BACKEND` setting is now configurable via an environment variable. Also, added a commented-out example of how to set email settings for a production email provider (Mailgun). * Apt and pip packages are now cached across Docker builds, which should result in faster build times after the first build. (Thanks Tobias for the suggestion!) * Improved the display format of “role” in the team invitation list. (Thanks Andy for the suggestion!) * Changed `user/` to `YOUR_GITHUB_USERNAME/` in the Digital Ocean `app-spec.yml` file to make it more obvious that it should be edited. (Thanks Stephen for suggesting!) * Changed the UI of social logins on the “sign in” page to match that of the “sign up” page on the Material Bootstrap theme. This makes the implementation more extensible and more consistent with other CSS frameworks. * **Upgraded all Python packages to the latest versions.** ### Fixed [Section titled “Fixed”](#fixed-8) * Fixed a bug where the formatting `make` targets were still calling `black` and `isort` instead of `ruff`. `make black` is now `make ruff-format` and `make isort` is now `make ruff-lint`. * Fixed a bug where the sign up view tests would fail in your environment if `settings.TURNSTILE_SECRET` was set. (Thanks Finbar for reporting!) * Fixed translations on the user profile form field names. * Removed `svg` as an option for profile picture uploads, to prevent the possibility of using it as an XSS attack vector. ([More info on this threat here](https://medium.com/@rdillon73/hacktrick-stored-xss-via-a-svg-image-3def20968d9)). * Disabled the debug toolbar in tests, which fixes test failures under certain conditions. * Bumped the Postgres version used by Digital Ocean deployments from 12 to 16. Digital Ocean has deprecated support for version 12. (Thanks Stephen for reporting!) * Simplified how the list of social login buttons is rendered, and made social login buttons work when configuring social applications in settings (previously buttons only showed up if you configured apps in the database). See upgrade note below. ### Removed [Section titled “Removed”](#removed-2) * Deleted the “sticky header” html and CSS code that was only used on the example landing pages. ### Upgrade Notes [Section titled “Upgrade Notes”](#upgrade-notes-1) * If you had **not** been using auto-formatting until now, you should first follow the instructions for [migrating to auto-formatted code](/cookbooks/#migrating-to-auto-formatted-code) prior to upgrading to this release. Otherwise you will likely get a lot of formatting-related merge conflicts when trying to upgrade. * If you already enabled auto-formatting (most projects), you don’t need to do anything.
* If you had previously configured allauth social applications in the database *and* in your settings file, you may see a duplicate “Login with XXX” button on the sign up and login pages. To fix this, remove the social application from either your settings or the database. *August 7, 2024* ## Version 2024.6.1 [Section titled “Version 2024.6.1”](#version-202461) This is a hotfix release that addresses a few issues from yesterday’s update: * Fixed app styles accidentally being purged during the Docker build process. This broke styling on Docker-based deployments for Tailwind builds. (Thanks Steve for reporting!) * Moved channels url import to after Django initialization. This fixes an `AppRegistryNotReady` error when deploying asynchronous apps with the AI chat app enabled. (Thanks Roman for reporting!) * Don’t create the periodic task to sync subscriptions unless per-unit billing is enabled. *June 6, 2024* ## Version 2024.6 [Section titled “Version 2024.6”](#version-20246) This is a feature release with a few big updates and a lot of smaller ones. ### AI model changes [Section titled “AI model changes”](#ai-model-changes) The library used for non-OpenAI LLMs has been changed from [`llm`](https://github.com/simonw/llm) to [`litellm`](https://docs.litellm.ai/docs/). Reasons for this change include: * It has far fewer additional dependencies. * It supports async APIs out of the box (for most models). * The `llm` library is targeted more at the command-line use case, whereas `litellm` offers similar functionality as a native Python library with a cleaner API. Litellm can still be used with all common AI models, including OpenAI, Anthropic/Claude, and local models (via ollama). For details on getting started with `litellm` see the updated [AI documentation](/ai/llms). ### Formatting and linting now use Ruff [Section titled “Formatting and linting now use Ruff”](#formatting-and-linting-now-use-ruff) Black and isort have been replaced with [ruff](https://github.com/astral-sh/ruff)---a Python linter/formatter that offers the same functionality as those tools but is much faster. Additionally, Pegasus will now remove unused imports from your files automatically, both when building your project and if you have set up `pre-commit`. This change should be a relatively seamless drop-in replacement, though you may see some new lint errors in your projects which you can choose to address. ### Spam prevention updates [Section titled “Spam prevention updates”](#spam-prevention-updates) There has been a dramatic increase in spam-bots over the last month. Many of these bots target seemingly-innocuous functionality like sign up and password reset forms. This version includes a few updates to help combat these bots. First, you can now easily add [Cloudflare turnstile](https://www.cloudflare.com/products/turnstile/) to your sign up forms, which will present the user with a captcha and should help reduce bot sign-ups. See [the turnstile documentation](/configuration/#turnstile) for information on setting this up. Additionally, the `ACCOUNT_EMAIL_UNKNOWN_ACCOUNTS` setting has been set to `False` by default. This prevents “forgot password” and “magic link” emails from being sent out to unknown accounts. It should also help reduce unnecessary email sending. Finally, the [admin dashboard](#admin-dashboard) no longer shows users with unconfirmed email addresses if you have set `ACCOUNT_EMAIL_VERIFICATION = 'mandatory'`.
This helps filter out likely bots from the report to provide clearer visibility of people actually signing up for your app. ### Complete changelog [Section titled “Complete changelog”](#complete-changelog-1) Below is the complete set of changes in this release. #### Added [Section titled “Added”](#added-5) * **Added configurable captcha support on sign up pages, using [Cloudflare turnstile](https://www.cloudflare.com/products/turnstile/).** See [the turnstile documentation](/configuration/#turnstile) for more information on setting this up. (Thanks Troy, Jacob, Robert and others for suggesting.) * Added API views for two-factor authentication, and to change the logged-in user’s password. (Thanks Finbar for suggesting!) * Added UI to tell users they need a verified email address prior to setting up two-factor auth. * Also added a `has_verified_email` helper class to the `CustomUser` model. * Added tests for the delete team view for both team admins and members. (HTMX builds only) * Added test for team member removal permissions. * Added display and sort on the number of active members in the teams admin. #### Fixed [Section titled “Fixed”](#fixed-9) * Fixed a bug where team names longer than 50 characters could cause a crash during sign up. * Fixed a bug where multi-factor authentication QR codes had a dark background when dark mode was enabled (Tailwind builds only). (Thanks Artem for reporting!) * Fixed a bug where it was possible to bypass two-factor-authentication when using the API authentication views. (Thanks Finbar for reporting and helping with the fix!) * Fixed a bug where deleting the user’s only team while impersonating them resulted in a temporary crash. (Thanks EJ for reporting!) * Fixed a bug where creating an API key crashed if your user’s first + last name combined to more than 40 characters. (Thanks Luc for reporting!) * Improved the UI feedback when LLMs fail (e.g. if your API key is wrong or ollama is not running). * Removed the `static/css` and `static/js` directories from the `.dockerignore` file so that other project files can be included in these directories. Also updated the production Docker build process so that any existing files are overwritten by the built versions. (Thanks Raul for reporting!) * Made some performance improvements to the production Dockerfile build (don’t rebuild the front end if there are no changes in the dependent files). * Better support for trialing subscriptions with no payment methods. The subscription UI will now show the date the trial ends and won’t log errors about missing invoices. (Thanks Jarrett for reporting!) #### Changed [Section titled “Changed”](#changed-9) * **Upgraded all Python packages to the latest versions.** * **Upgraded all JavaScript packages to the latest versions.** * **Non-OpenAI builds now use `litellm` instead of `llm`.** See above. (Thanks Sarthak for the suggestion!) * **Changed the formatter/linter from `black` and `isort` to [ruff](https://github.com/astral-sh/ruff).** See above. * Also addressed a handful of minor linting errors that came up as a result of this change. * Codebase linting is now substantially faster. * Unused imports are now automatically removed when building your projects. * **Celerybeat now uses the `django-celery-beat` library to store tasks in the database instead of on the filesystem.** This improves support for celerybeat on Docker-based platforms. (Thanks Peter and Artem for the suggestion!) * Also added a migration to save the default scheduled tasks in the database.
* The login API response has changed to allow for two-factor auth prompts and more machine-readable status fields. * Removed the no-longer-used `use_json_field=True` argument from wagtail `StreamField`s. * The admin dashboard no longer shows users with unconfirmed email addresses if you have set `ACCOUNT_EMAIL_VERIFICATION = 'mandatory'`. * The admin dashboard now includes sign ups from the current date, by default. * Changed behavior when team role checks fail from raising a `TeamPermissionError` to returning a 403 response, and updated affected tests. One side effect of this is that the stack traces are removed from successful test runs. * Secret keys should no longer change every time you build your Pegasus project. They are also now clearly prefixed with `django-insecure-` to indicate that they should be changed in production. * Updated the default OpenAI chat model to gpt-4o. * Upgraded the openapi client generator to version 7.5.0 and also pinned the version used by `make build-api-client` to the same one. * Team IDs are now optional on the create team page (HTMX builds only). * Added a clearer error message when charts are broken due to an API config issue. (Thanks Yngve for reporting!) * Added `assume_scheme="https"` to form `URLField`s to be compatible with Django 6 behavior. * Added `FORMS_URLFIELD_ASSUME_HTTPS = True` to be compatible with Django 6 behavior. * Set `ACCOUNT_EMAIL_UNKNOWN_ACCOUNTS = False` by default, so that “forgot password” emails do not get sent to unknown accounts. This can help prevent spam bots. #### Removed [Section titled “Removed”](#removed-3) * Removed `black` and `isort` from dev-requirements, since they have been replaced by `ruff`. * Removed `llm` library and associated code, since it has been replaced by `litellm`. * Removed no longer used `TeamPermissionError` class. #### Standalone front end [Section titled “Standalone front end”](#standalone-front-end) The following changes affect the experimental [standalone front end](/experimental/react-front-end): * **The standalone React front end now supports two-factor-authentication.** * Improved the UI when you have login issues in the standalone React front end. *June 5, 2024* ## Version 2024.5.3 [Section titled “Version 2024.5.3”](#version-202453) This is a hotfix release that fixes a bug where the landing and dashboard page image was accidentally removed if you built without the examples enabled. *May 21, 2024* ## Version 2024.5.2 [Section titled “Version 2024.5.2”](#version-202452) This is a hotfix release that fixes a bug that prevented the team management page from loading in certain browsers if you built with a React front end and with translations enabled. Thanks Finbar for reporting! * Added `defer` keyword to various bundle scripts so they are loaded after the JavaScript translation catalog. * Updated references to `SiteJS` to run on the `DOMContentLoaded` event to allow for usage of the `defer` tag. *May 16, 2024* ## Version 2024.5.1 [Section titled “Version 2024.5.1”](#version-202451) This is a hotfix release that fixes issues running the [experimental React frontend](/experimental/react-front-end) in Docker. Thanks Mohamed for reporting this! * Fix `api-client` path in the frontend docker container and add to `optimizeDeps` in vite config. * Mount `node_modules` as an anonymous volume in the frontend docker container, so it is not overwritten. * Automatically create `./frontend/.env` when running `make init` if it doesn’t exist.
*May 14, 2024* ## Version 2024.5 [Section titled “Version 2024.5”](#version-20245) This is a major release with several big updates. Here are a few highlights: ### New AI models [Section titled “New AI models”](#new-ai-models) In addition to using OpenAI chat models, you can now build the Pegasus AI chat application with the [`llm` library](https://github.com/simonw/llm). This lets you run the chat application against any supported model---including the Anthropic family (Claude 3), and local models like Llama 3. Additionally, the image generation demo now supports Dall-E-3 and Stable Diffusion 3. For complete details, see the new [AI documentation](/ai/images). ### Health Checks [Section titled “Health Checks”](#health-checks) A new setting allows you to turn on health checks for your application, powered by [django-health-check](https://django-health-check.readthedocs.io/en/latest/). This will create an endpoint (at `/health` by default) that pings your database, Redis instance, and Celery workers and returns a non-200 response code if there are any identified issues. These endpoints can be connected to a monitoring tool like [StatusCake](https://www.statuscake.com/) or [Uptime Robot](https://uptimerobot.com/) so that you can be alerted whenever your site is having issues. See the section on [monitoring](/deployment/production-checklist/#set-up-monitoring) in the production checklist for more information. ### Allauth updates [Section titled “Allauth updates”](#allauth-updates) The [django-allauth](https://docs.allauth.org/en/latest/) library was updated to the latest version, which enabled several useful changes. The first is a “sign in by email code” option which can be used in addition to the standard username/password and social option. Users can request a code be sent to their registered email and can then use that to log in. See [the magic code documentation](/configuration/#enabling-sign-in-by-email-code) to enable/disable this. The second is using the recent [multi-factor authentication](https://docs.allauth.org/en/latest/mfa/index.html) support added to allauth in place of the third-party `django-allauth-2fa` library. This reduces dependencies and puts all of the authentication functionality on a standard path moving forwards. The complete release notes are below: ### Added [Section titled “Added”](#added-6) * **Added an optional health check endpoint at /health/.** (see above for details) * **Added an option to connect the chatbot to other LLMs**. (see above for details) * **The AI image generation now supports Dall-E 3 and Stability AI.** * **All generated projects now include a `LICENSE.md` file.** The goal of the license file is not to change how Pegasus can be used in any way, but rather to document those terms in the codebase itself (previously they were only documented on the [terms page](https://www.saaspegasus.com/terms/)). For more information you can see the new [license page](https://www.saaspegasus.com/license/). * **Added support for “magic-code login”, where a user can log in to the site by requesting a code to their email address.** [Documentation.](/configuration/#enabling-sign-in-by-email-code) * **Google cloud run builds now support Redis.** For details, see the [updated documentation](/deployment/google-cloud). (Thanks Forrest for suggesting!) * Added a `custom.mk` file where you can add additional `make` targets without worrying about future Pegasus upgrades. (Thanks John for proposing this!)
### Changed [Section titled “Changed”](#changed-10) * Upgraded allauth to the latest version (0.62.1). * **Migrated two-factor authentication from the third-party `django-allauth-2fa` to the `django-allauth` built-in implementation.** See upgrade notes below for migrating existing projects. * Refactored how many allauth views work to be compatible with their new template override system. * **Bootstrap and Bulma builds: Moved sidebar navigation into the mobile menu instead of having it take up the top of the screen on mobile screens**, similar to how things already worked on Tailwind and Material. (Thanks Luc for the nudge!) * This includes splitting out the menu items into their own sub-template files so they can be included in both menus. * Inline buttons are now spaced using the `gap` CSS property instead of the `pg-ml` class on individual buttons. * `Alpine.start()` is now called on the `DOMContentLoaded` event instead of using `window.load`. This makes Alpine-powered UIs more responsive, especially when used on pages with lots of images. * **Updated external JavaScript imports to use [the `defer` keyword](https://www.w3schools.com/tags/att_script_defer.asp) for slightly better page load performance.** (See upgrade note.) * Also updated inline JavaScript code in a handful of places to be compatible with deferred scripts. * Added a Github logo to connected Github accounts on the profile page. * **The AI image demo and code have been moved to a first-class Pegasus application / tab.** * Updated the docker container registry used by Google Cloud to reflect the latest version in Google. Also pushed more Google Cloud configuration variables out of the Makefile and into the environment variables. (Thanks Erwin for reporting!) * Added additional `.env` files to `.dockerignore` for Google Cloud builds. * Bumped django to the latest `5.0.6` release. ### Fixed [Section titled “Fixed”](#fixed-10) * **SQLite builds now properly parse `DATABASE_URL` if it is set. This fixes issues deploying to certain platforms when building with SQLite.** (Thanks Manasvini for reporting!) * Updated allauth documentation links in the README to point to the new [allauth docs site](https://docs.allauth.org/). (Thanks Shantu for reporting!) ### Removed [Section titled “Removed”](#removed-4) * Removed several no-longer-needed allauth template files. * Removed deprecated “version” field from the dev `docker-compose.yml` file. (Thanks Moyi for reporting!) * Removed no-longer-used `pg-ml` css spacing class. * Removed redundant `type="text/javascript"` declarations from a few `<script>` tags. # Bootstrap You can open Bootstrap components like modals from your Django templates with a small amount of JavaScript; for example, a short script in your template can open the modal with ID `onLoadModal` on page load. Alternatively, you can add individual bootstrap javascript modules via `site-bootstrap.js` like this: ```javascript require('./styles/site-bootstrap.scss'); // window.Modal = require('bootstrap/js/dist/modal'); // modals (used by teams) ``` And then call it in a Django template like this (with no `bootstrap.` prefix): ```javascript const onLoadModal = new Modal(document.getElementById('landing-page-modal')); ``` # Bulma > Customize Bulma CSS framework using Sass variables for colors, typography, and styling in your Pegasus application. Bulma is readily customizable via [Sass variables](https://bulma.io/documentation/customize/variables/). Any of the variables used by Bulma can be changed by modifying the `assets/styles/site-bulma.scss` file.
Try adding the following lines to the top of your file to see how it changes things: ```scss $primary: #2e7636; // change primary color to green $body-color: #00008B; // change main text to blue ``` **You’ll have to run `npm run dev` to see the changes take effect.** For more details on building the CSS files, see the [front end documentation](/front-end/overview). # CSS File Structure > Understand Pegasus CSS file organization with framework-independent styles and framework-specific overrides compiled from assets to static directories. CSS source files live in the `assets/styles` folder, and are compiled into the `static/css` folder. Some Pegasus styles are written using [Sass](https://sass-lang.com/), which provides many benefits and features on top of traditional CSS. **Modifying CSS requires having a functional [front-end build setup](/front-end/overview).** All versions of Pegasus contain two main sets of styles: * Styles that are *framework-independent* are contained and imported in `assets/styles/app/base.sass` and compiled into `static/css/site-base.css`. * Styles that *extend or override the CSS framework* are contained in `assets/styles/app/<framework>/` and compiled into `static/css/site-<framework>.css`. This split is not required, and you can optionally combine everything into a single file by importing the styles from `base.sass` into your framework file and deleting `site-base.css`. # The Material Theme (deprecated) > Legacy Material Design theme based on Creative Tim's Material Kit and Dashboard, now deprecated with maintenance-only support until 2025. **This theme has been deprecated.** This means that the theme is in maintenance-only mode, and support will be dropped by the end of 2025. Existing projects can continue using the theme, but new projects should not, and new Pegasus features will eventually not be developed and tested on the theme. The reason for this is that several Pegasus customers have complained about the lack of documentation and support for this theme from its maintainer, Creative Tim. Additionally, their process around updating the theme has entailed releasing large, poorly-documented updates which have been difficult for me to incorporate back into Pegasus. The following documentation is for people already using the material theme. ## Customizing the Material theme [Section titled “Customizing the Material theme”](#customizing-the-material-theme) The customization process outlined above largely works for the Material theme as well. For example, you can change the primary color from the default magenta to a dark green by adding the following lines towards the top of `assets/styles/site-bootstrap.scss`: ```scss // Configuration @import "~bootstrap/scss/functions"; // add these lines $primary: #2e7636; // change primary color + gradients to green $primary-gradient: #2e7676; $primary-gradient-state: #2e7676; ``` You will also have to [build your front end](/front-end/overview) to see the changes. Material has more customization options than the default theme, which can be found in the [Material Dashboard documentation](https://www.creative-tim.com/learning-lab/bootstrap/overview/material-dashboard). The theme files live in the `assets/material-dashboard` folder. You can see the modifications that have been made for Pegasus support [on Github here](https://github.com/creativetimofficial/material-dashboard/compare/master...czue:pegasus-tweaks). In particular, a few bugs have been fixed, and the unused pro files have been removed.
Creative Tim offers pro versions of [Material Dashboard](https://www.creative-tim.com/product/material-dashboard-pro) and [Material Kit](https://www.creative-tim.com/product/material-kit-pro) which are helpful if you want to have access to more pages / components. These should integrate seamlessly with the Pegasus theme. ### Enabling Material’s JavaScript [Section titled “Enabling Material’s JavaScript”](#enabling-materials-javascript) Pegasus doesn’t ship with the Material theme JavaScript built in. If you would like to use their JavaScript functionality (required for many of their components) you can take the following steps: 1. Download [the `material-kit.min.js` file from Creative Tim’s Github repository](https://github.com/creativetimofficial/material-kit/blob/master/assets/js/material-kit.min.js). 2. Copy it into your Django static directory. For example, to `/static/js`. 3. Add a `<script>` tag for it to your `base.html` template (or wherever you want to use it). After completing these steps, the Material Kit JavaScript functionality should work. # Choosing a CSS Theme > Compare TailwindCSS, Bootstrap, Bulma, and Material Design themes with screenshots, features, and recommendations for Django projects. There are four CSS themes available in Pegasus. **If you don’t know which one you want, we recommend TailwindCSS.** It is the most popular choice, easiest to customize, and supports themes and dark mode out-of-the-box. In addition to Tailwind, there is a [Bootstrap 5](https://getbootstrap.com/) theme, a [Bulma](https://bulma.io/) theme, and a deprecated theme based on Creative Tim’s [Material Kit](https://www.creative-tim.com/product/material-kit) and [Material Dashboard](https://www.creative-tim.com/product/material-dashboard) products (not recommended). The look and feel of the site is slightly different for each framework, but the overall layout is the same. Below are screenshots of the app in each of the four themes. *If you’re not sure which framework you want to use, you can change the setting on your project and download multiple copies of the codebase to try out different ones.* **Tailwind CSS:** Light mode: ![Tailwind Home](/_astro/tailwind-home-light.DygBNhqu_Z2fPlyk.webp) Dark mode: ![Tailwind Home (Dark Mode)](/_astro/tailwind-home-dark.BJVKg2e7_Z1FmxUO.webp) **Bootstrap Default Theme:** ![Bootstrap Home](/_astro/bootstrap-home.DaBA-2Xi_Z1uk4rA.webp) **Bulma:** ![Bulma Home](/_astro/bulma-home.DfnQnVsO_Z22ki8O.webp) **Bootstrap Material Theme (Deprecated):** ![Material Home](/_astro/material-home.HxW3zxce_1iKN8t.webp) # Pegasus CSS > Cross-framework CSS classes with pg- prefixes for consistent styling across Bootstrap, TailwindCSS, and Bulma using Sass @extend and @apply. In addition to your app’s CSS, Pegasus also ships with its own set of CSS classes to provide compatibility across different frameworks. These classes are typically proxies for similar classes provided by the underlying frameworks themselves, and are created using the Sass [`@extend` helper](https://sass-lang.com/documentation/at-rules/extend) or Tailwind’s [`@apply` helper](https://tailwindcss.com/docs/reusing-styles#extracting-classes-with-apply). Pegasus CSS classes are defined in the `.sass` files under `assets/styles/pegasus/`, and they all begin with `pg-`. You are welcome to leave them in and use them throughout your project, or you can replace them with the framework-specific names (for example, replacing all instances of `pg-column` with `column` on Bulma, or `col-md` on Bootstrap).
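
If you do decide to replace them, a bulk find-and-replace across your templates works well. Here is a minimal sketch of that, assuming a Bulma build and the default Pegasus layout where templates live under `templates/` and `apps/` (adjust the class names and paths to your project, and review the diff before committing):

```bash
# Replace the pg-column helper with Bulma's native "column" class in all templates.
# Note: sed -i works as shown on GNU/Linux; on macOS use `sed -i ''` instead.
grep -rl --include='*.html' 'pg-column' templates/ apps/ \
  | xargs sed -i 's/\bpg-column\b/column/g'
```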
The following table demonstrates some of the most common Pegasus CSS classes and their corresponding values across frameworks. If you ever need to look up what a class is doing you can look in `./assets/styles/pegasus/`. | Pegasus Class | Description | Value in Bootstrap | Value in Tailwind | Value in Bulma | | --------------- | ------------------- | ------------------ | --------------------------------------------------------------- | --------------- | | `pg-columns` | Wrapper for columns | `row gy-4` | `flex flex-col space-y-4 lg:flex-row lg:space-x-4 lg:space-y-0` | `columns` | | `pg-column` | Individual column | `col-md` | `flex-1` | `column` | | `pg-title` | A title | `h3` (element) | `text-3xl font-bold text-gray-900 mb-2` | `title` | | `pg-subtitle` | A subtitle | `lead` | `text-xl text-gray-900 mb-1` | `subtitle` | | `pg-button-***` | A styled button | `btn btn-***` | `btn btn-***` (from daisyUI) | `button is-***` | | `pg-text-***` | Colored text | `text-***` | `text-***` (from daisyUI) | `has-text-***` | # Tailwind CSS > Build modern UIs with TailwindCSS v4, DaisyUI components, shadcn/ui integration, dark mode themes, and Flowbite styling options. Pegasus supports [Tailwind CSS](https://tailwindcss.com/) (Version 4) and it is the recommended CSS framework for most projects. ## Demo and Overview [Section titled “Demo and Overview”](#demo-and-overview) Here’s a quick overview of using TailwindCSS in Pegasus ## Development [Section titled “Development”](#development) Because TailwindCSS only includes the styles found in your HTML / JavaScript files, you will need to actively rebuild your CSS files any time you add new styles/components to your templates. The easiest way to do this is by running (after installing Node packages): ```bash npm run dev-watch ``` Or in Docker: ```bash make npm-watch ``` See the [front-end docs](/front-end/overview) for more information about working with these files. ## Customization [Section titled “Customization”](#customization) Pegasus uses [daisyUI](https://daisyui.com/) to provide default, well-styled components with convenient CSS classes. Components from daisyUI can be brought in as needed by your app. A full list of available components can be found at the [daisyUI component library](https://daisyui.com/components/). ### Changing your themes [Section titled “Changing your themes”](#changing-your-themes) If you enable dark mode, Pegasus will ship with the default DaisyUI light and dark themes which are used for regular and dark mode, respectively. But DaisyUI offers a number of [out-of-the-box themes](https://daisyui.com/docs/themes/) you can use in your Pegasus app. To change themes, make sure the theme is enabled in the daisyui section of `site-tailwind.css` and specify what you want for defaults for light and dark mode as follows: ```css @plugin "daisyui" { themes: cupcake --default, night --prefersdark; }; ``` Additionally, you should update the `darkMode` setting in your `tailwind.config.js`: ```javascript module.exports = { // sets the "night" theme as the one used for dark mode darkMode: ["class", '[data-theme="night"]'], } ``` After changing these values you will have to [rebuild your front end](/front-end/overview). Finally, you will also have to update the default themes in your `settings.py`: ```python LIGHT_THEME = "cupcake" DARK_THEME = "night" ``` After this, your app should be fully styled in the new themes! 
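
As a reminder, the new themes only show up once the front end has been rebuilt. A minimal sketch of that rebuild, using the same commands referenced earlier on this page:

```bash
# Rebuild the compiled CSS so the new DaisyUI themes are picked up
npm run dev

# Or, if you do your front-end builds in Docker:
make npm-watch
```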
For a list of the available themes, and information about creating your own theme, see the [daisyUI theme documentation](https://daisyui.com/docs/themes/) and their online [theme generator](https://daisyui.com/theme-generator/). ### Extending themes [Section titled “Extending themes”](#extending-themes) If you’d like to extend one of the built-in themes you can do that in your `site-tailwind.css` file as specified in the [DaisyUI docs](https://daisyui.com/docs/themes/#-3). For example, to change the colors of the default theme, add a section like this: ```css @plugin "daisyui/theme" { name: "light"; default: true; --color-primary: blue; --color-secondary: teal; } ``` ## Other products / themes [Section titled “Other products / themes”](#other-products--themes) ### shadcn [Section titled “shadcn”](#shadcn) [shadcn/ui](https://ui.shadcn.com/) is a React component library for Tailwind. It includes many out-of-the-box components that you can install and use in your projects. As of version 2024.11 Pegasus ships with a demo dashboard using shadcn. To enable the dashboard you have to build with the Tailwind CSS framework and check the “Use Shadcn” checkbox in your project settings. Here’s a screenshot: ![Shadcn Demo Dashboard](/_astro/shadcn-demo.C_wZrJUK_Z2kKOhB.webp) The dashboard is [a hybrid single-page React app](https://www.saaspegasus.com/guides/modern-javascript-for-django-developers/integrating-django-react/) served by Django. It uses the same colors as the DaisyUI theme (and will update when you change your theme), and has many interactive components. However, it is *not* connected to any backend data---it is just a UI example. #### Working with shadcn [Section titled “Working with shadcn”](#working-with-shadcn) The dashboard can be found in `assets/javascript/shadcn-dashboard`. Shadcn components are stored in the `assets/javascript/shadcn/components/ui` folder. Components can be imported in other JavaScript files using the same import path syntax used by the dashboard: ```javascript import { Button } from "@/components/ui/button" ``` You can use the [shadcn cli](https://ui.shadcn.com/docs/cli) to create components, and they should automatically be added to the right folder. ### Flowbite [Section titled “Flowbite”](#flowbite) [Flowbite](https://flowbite.com/) is a library with many great UI components---most of which are free and open source. Also, unlike shadcn, it does *not* use React---making it a great fit for Django templates and htmx projects. As of version 2024.11 Pegasus ships with the option to enable flowbite, along with a page demonstrating some sample components. To enable Flowbite, choose Tailwind CSS and check the “Use Flowbite” checkbox in your project settings. If you enable this setting, flowbite will automatically be installed and you can drop flowbite components into any Django template. The reference page has an example of a few of these components. #### Extending Flowbite [Section titled “Extending Flowbite”](#extending-flowbite) The default setup shows how to use Flowbite *alongside* DaisyUI. However, if you want to use Flowbite more holistically for your application you can. To get started, uncomment the following line in your `site-tailwind.css` file: ```plaintext /* @import "flowbite/src/themes/default"; */ ``` This will add flowbite’s default styles, which are necessary for some extended components like datatables.
### Tailwind UI [Section titled “Tailwind UI”](#tailwind-ui) [Tailwind UI](https://tailwindui.com/) is a great product for building more complex pages, including marketing sites and app UIs. It is another great option for getting help with UI components and pages, and should integrate seamlessly with the current Pegasus templates. Note that you will have to rebuild styles when adding TailwindUI components, as described in the “Development” section above. ## Upgrading from Tailwind 3 to 4 [Section titled “Upgrading from Tailwind 3 to 4”](#upgrading-from-tailwind-3-to-4) Pegasus 2025.3 updates Tailwind from version 3 to version 4. This is a big upgrade, and if you have added Tailwind markup to your project you will likely need to upgrade your own code and not just rely on the Pegasus updates. This section should help you with that process. It will be updated over time as additional questions and issues come up. If you have any problems with the migration, send a message in the community Slack! It’s recommended to follow these steps to upgrade your project to Tailwind 4: 1. Read through the [Tailwind Upgrade Notes](https://tailwindcss.com/docs/upgrade-guide) and confirm you’re ready to upgrade from a browser support perspective. 2. Do a [normal Pegasus upgrade](/upgrading) of your project to Version 2025.3 or later. 3. Merge all conflicts as carefully as you can. 4. Rebuild your front end (`npm install`, `npm run dev`). 5. Run your app. At this point, your project should be running on Tailwind 4, though you should review the sections below for additional steps. ### Restoring custom themes [Section titled “Restoring custom themes”](#restoring-custom-themes) To restore custom themes, follow the [instructions above](#changing-your-themes) to re-apply your theme configuration (and if necessary, be sure to also remove it from `tailwind.config.js`). Note that some DaisyUI themes look slightly different in version 5 and may require further customization for the same look-and-feel. ### Migrating non-Pegasus files [Section titled “Migrating non-Pegasus files”](#migrating-non-pegasus-files) You will likely want to run the [Tailwind upgrade tool](https://tailwindcss.com/docs/upgrade-guide#using-the-upgrade-tool) on your project to apply any automatic upgrades to files that aren’t managed by Pegasus. After going through the steps above, you can re-run Tailwind’s migration tool by following these steps. First, temporarily re-install Tailwind v3 on your project. This is required for the upgrade tool to run: ```bash npm install tailwindcss@3 ``` Next, temporarily restore the “content” section in your `tailwind.config.js` from your main branch. It should look something like this: ```javascript content: [ './apps/**/*.html', './apps/web/templatetags/form_tags.py', './assets/**/*.{js,ts,jsx,tsx,vue}', './templates/**/*.html', ], ``` Finally run the upgrade tool: ```bash npx @tailwindcss/upgrade --force ``` This should apply Tailwind’s automatic migrations to your existing HTML / JS / CSS files. Review these changes, commit the changes you want, and then undo the changes made to the `content` section above. Note that you may not want to apply some changes like shadow-downsizing, since these have already been included in Pegasus. ### DaisyUI Updates [Section titled “DaisyUI Updates”](#daisyui-updates) Some common DaisyUI upgrades that you may need to check include: * Changing active navigation tab classes from `"active"` to `"menu-active"`. * Removing `-bordered` from inputs (a quick way to find templates that still use the old classes is sketched below).
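
To find templates that still use the old DaisyUI class names, a quick project-wide search helps. A minimal sketch, assuming the default Pegasus layout where templates live under `templates/` and `apps/`:

```bash
# Rough search for templates still using pre-upgrade DaisyUI classes.
# Review the matches by hand; "active" in particular has false positives
# (it will also match the new "menu-active" class).
grep -rn --include='*.html' -E '\-bordered|class="[^"]*active' templates/ apps/
```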
## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) ### Styles aren’t working after adding new components [Section titled “Styles aren’t working after adding new components”](#styles-arent-working-after-adding-new-components) Every time you use a new Tailwind class you need to rebuild your front end as described in the “[Development](#development)” section above. After doing that, if the styles are still not showing up, be sure that you have hard-refreshed your browser (Ctrl-Shift-R on most browsers). You can also disable browser caching when devtools are open by following these instructions [for Chrome](https://stackoverflow.com/a/23944114/8207) or [for Firefox](https://stackoverflow.com/a/48027947/8207). If you are building your front end in Docker, be sure to also read the troubleshooting section of the [front end documentation](/front-end/overview) for potential issues with cross-platform compatibility. # Digital Ocean > Deploy Pegasus apps to Digital Ocean App Platform using Docker containers with PostgreSQL, Redis, and Celery support for scalable SaaS applications. Pegasus provides native support for Digital Ocean App Platform. To build for Digital Ocean, choose the `digital_ocean_app_platform` option when installing Pegasus. Then follow the steps below to deploy your app. ### Prerequisites [Section titled “Prerequisites”](#prerequisites) If you haven’t already, create your Digital Ocean account. **You can sign up with [this link](https://m.do.co/c/432e3abb37f3) to get $100 credit and help support Pegasus.** Next, install and configure the `doctl` command line tool by following [these instructions](https://www.digitalocean.com/docs/apis-clis/doctl/how-to/install/). Additionally, you must connect Digital Ocean to your project’s Github repository. This can be done from inside App Platform, or by following [this link](https://cloud.digitalocean.com/apps/github/install). ### Deploying [Section titled “Deploying”](#deploying) Once you’ve configured the prerequisites, deploying is just a few steps. If you are planning to use Celery or Redis (Valkey), first create your Database cluster. The easiest way to do this is in the Digital Ocean dashboard. Navigate to [Databases —> New](https://cloud.digitalocean.com/databases/new), and choose “Valkey”. It’s recommended to give this database a name ending in `-redis` (e.g. `your-app-name-redis`). Next, edit the `/deploy/app-spec.yaml` file. In particular, make sure to set your Github repository and branch. If you aren’t using Celery, you can remove the sections related to redis and the celery-worker. If you are using Redis/Valkey, the cluster name must match what you chose when you created the Database. Finally, run `doctl apps create --spec deploy/app-spec.yaml`. That’s it! In a few minutes your app should be online. You can [find and view it here](https://cloud.digitalocean.com/apps). Once your app is live, you should restrict access to your Redis/Valkey instance by navigating to the database in the Digital Ocean console, setting your app as a “trusted source”, and saving. Failure to do this may result in your app’s data and infrastructure being exposed to the public. **After deploying, review the [production checklist](/deployment/production-checklist) for a list of common next steps**. ### Settings and Secrets [Section titled “Settings and Secrets”](#settings-and-secrets) App platform builds use the `settings_production.py` file. You can add settings here, and use environment variables to manage any secrets, following the pattern used throughout the file.
Environment variables can be managed in the Digital Ocean dashboard [as described here](https://docs.digitalocean.com/products/app-platform/how-to/use-environment-variables/). ### Running One-Off Commands [Section titled “Running One-Off Commands”](#running-one-off-commands) The easiest way to run once-off commands in your app is to click the “console” tab in app platform and just type in the command. See the screenshot below for what it looks like: ![Console Migrations](/_astro/running-migrations-do.BGGpoC46_ZmfR47.webp) You may also need to run additional commands to get up and running, e.g. `./manage.py bootstrap_subscriptions` for initializing your Stripe plan data. ### Celery Support [Section titled “Celery Support”](#celery-support) Celery should work out of the box. If you have issues running celery, ensure that you have created a Redis database, and that the values for the `REDIS_URL` environment variables match the name you’ve chosen. If you need to run `celerybeat` (for scheduled/periodic tasks), you’ll have to add a second worker to your `app-spec.yaml` file. You can copy and paste the configuration for the `celery` worker, but replace the `run_command` with the following line (swapping in your app name for `your_app`): ```bash celery -A your_app beat -l INFO ``` Note that simply adding `--beat` or `-B` to the existing Celery worker does *not* work on app platform. # Fly.io > Container-based Django deployment to Fly.io with PostgreSQL, Upstash Redis, and automated database migrations using Docker and flyctl CLI. Pegasus supports container-based deployment to [Fly.io](https://fly.io/). ### Prerequisites [Section titled “Prerequisites”](#prerequisites) If you haven’t already, install the [flyctl CLI](https://fly.io/docs/hands-on/install-flyctl/). Then create an account with `fly auth signup` or log in with `fly auth login`. ### Setup [Section titled “Setup”](#setup) Once you have logged in via the CLI you can create your app and the services it will need. For each of the commands below follow the prompts given. In the example below the “Chicago, Illinois (US) (ord)” region is selected. You may change the region to suit your needs, but it should be consistent throughout the commands. **Create your app in Fly.io** ```bash $ fly launch --dockerfile Dockerfile.web \ --dockerignore-from-gitignore \ --no-deploy \ --name {app-name} \ --region ord ``` After running that, answer ‘yes’ to the first question: ```plaintext An existing fly.toml file was found for app {app-name} ? Would you like to copy its configuration to the new app? Yes ``` Fly will output some details, then ask another question about customizing. Answer ‘yes’ to that as well: ```plaintext Using dockerfile Dockerfile.web Creating app in /path/to/app/source We're about to launch your app on Fly.io. Here's what you're getting: Organization: Your Name (fly launch defaults to the personal org) Name: {app-name} (specified on the command line) Region: Chicago, Illinois (US) (specified on the command line) App Machines: shared-cpu-1x, 1GB RAM (most apps need about 1GB of RAM) Postgres: (not requested) Redis: (not requested) ? Do you want to tweak these settings before proceeding? (y/N) Yes ``` A browser tab should open where you should add a Fly Postgres database called {app-name}-db, and an Upstash Redis server. You can leave the other defaults or change the machine size as you see fit. It should look something like this: ![Fly DB config](/_astro/fly-db-config.BD0z3j2A_cySWr.webp) Click “Confirm Settings” and then close the tab.
Back on the command line, Fly will output some more things and should eventually end with a message like this: ```plaintext ✓ Configuration is valid Your app is ready! Deploy with `flyctl deploy` ``` If you see these two lines you are ready to deploy! If not, see the “Troubleshooting” section below. ### Deploying [Section titled “Deploying”](#deploying) You are now ready to deploy your app. You can do this by running: ```bash $ fly deploy ``` In a few minutes your app should be live! **After deploying, review the [production checklist](/deployment/production-checklist) for a list of common next steps**. In particular, make sure to add your app URL to the `ALLOWED_HOSTS` variable in your environment/settings as well as in the `http_service.checks` section of `fly.toml`. ### Running Database Migrations [Section titled “Running Database Migrations”](#running-database-migrations) Database migrations are applied in the release command during deploy. This is configured in the `fly.toml` file. ### Settings and Secrets [Section titled “Settings and Secrets”](#settings-and-secrets) Fly.io builds use the `settings_production.py` file. You can add settings here or in the base `settings.py` file, and use environment variables to manage any secrets, following the examples in these files. Secrets are managed in Fly.io via the web UI or on the command line using the CLI: ```bash $ fly secrets set MY_VAR=secret_value ``` ### Running One-Off Commands [Section titled “Running One-Off Commands”](#running-one-off-commands) You can run one-off commands via a shell: ```bash $ fly ssh console app $ ./code/manage.py [command] ``` ### Celery Support [Section titled “Celery Support”](#celery-support) Out of the box, Pegasus is configured to run Celery using the [multiprocess support](https://fly.io/docs/reference/configuration/#the-processes-section) provided by Fly.io. For alternatives, see the [Celery docs](/celery). ### Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) **My release / migrate command is failing.** If you get an error like the following when running `fly deploy` ```plaintext django.db.utils.OperationalError: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? ``` it is likely that your database is not set up properly. You can confirm if this is the case by running `fly secrets list` and making sure that you see a `DATABASE_URL` variable. If you do not see one, it is not properly connected/attached. If you need to create a new database, you can run: ```bash fly postgres create --name {your-app-db} ``` And you can (re-)attach a database to an app by running: ```bash fly postgres attach {your-app-db} -a {your-app-name} ``` **My deploy keeps timing out.** If you keep getting an error like the following: ```plaintext Error: timeout reached waiting for health checks to pass for machine 28749e0b443558 ``` You can try re-deploying with a higher timeout. For example, try running: ```bash flyctl deploy --wait-timeout 5m ``` # Google Cloud > Deploy Pegasus projects to Google Cloud Run with Cloud SQL PostgreSQL, Redis, Secret Manager, and Google Cloud Storage for production applications. Pegasus can be deployed to Google Cloud Run using containers. *This feature is in beta and Celery is not yet supported.* To build for Google Cloud, choose the `google_cloud` option when installing Pegasus. Then follow the steps below to deploy your app.
### Prerequisites [Section titled “Prerequisites”](#prerequisites) Pegasus deployments on Google Cloud Run loosely follow the [Django on Cloud Run](https://codelabs.developers.google.com/codelabs/cloud-run-django) guide. However, instead of running it in the Google Cloud Shell, we’ll use the `gcloud` CLI on our development machine. This allows you to more easily work with your existing codebase. #### 1. Set up the `gcloud` sdk. [Section titled “1. Set up the gcloud sdk.”](#1-set-up-the-gcloud-sdk) Though much of the setup can be completed in the cloud shell, you will need to install and configure the `gcloud` sdk on your development machine. Follow [the google installation guide](https://cloud.google.com/sdk/docs/install) for your OS. After you install the `gcloud` SDK you will also need to authenticate your account [as described here](https://cloud.google.com/docs/authentication/provide-credentials-adc). In particular, you will need to run: ```bash gcloud auth login gcloud init ``` #### 2. Create and connect your project. [Section titled “2. Create and connect your project.”](#2-create-and-connect-your-project) Follow the instructions [in Step 2 of the Google guide](https://codelabs.developers.google.com/codelabs/cloud-run-django#1) to create a new project for your app and enable billing. Once you’ve created your project you should be able to run: ```bash gcloud projects list ``` on your development machine and see it listed. Next run: ```bash gcloud config set project <your-project-id> ``` to set the project as the default for future commands. #### 3. Enable the Cloud APIs [Section titled “3. Enable the Cloud APIs”](#3-enable-the-cloud-apis) As per [step 3 of the Google guide](https://codelabs.developers.google.com/codelabs/cloud-run-django#2), run the following command to enable the services needed for your application. ```bash gcloud services enable \ run.googleapis.com \ sql-component.googleapis.com \ sqladmin.googleapis.com \ compute.googleapis.com \ cloudbuild.googleapis.com \ secretmanager.googleapis.com \ artifactregistry.googleapis.com ``` This command takes a while to finish, but should eventually output something like: ```plaintext Operation "operations/acf.cc11852d-40af-47ad-9d59-477a12847c9e" finished successfully. ``` *We will skip step 4 of Google’s guide, since we already have a project, and move on to step 5.* #### 4. Create the backing services. [Section titled “4. Create the backing services.”](#4-create-the-backing-services) *This section follows [step 5 of Google’s guide](https://codelabs.developers.google.com/codelabs/cloud-run-django#4), with some minor changes.* **First create a service account:** ```bash gcloud iam service-accounts create cloudrun-serviceaccount ``` **Set up your environment variables:** Everything you will need is defined in `/deploy/.env.google.example`. It is recommended that you first copy this file to `.env.google`: ```bash cp deploy/.env.google.example deploy/.env.google ``` Then update the values in it as needed---including setting any passwords and keys to random, autogenerated values. Once you have made all the modifications to this file, load it into your environment by running: ```bash source deploy/.env.google ``` After running the above command you should be able to run the remaining commands in the same shell and the variables will be swapped in. ```bash gcloud artifacts repositories create containers --repository-format docker --location $REGION ``` **Create and configure the database:** Next, create the database instance.
Note that in our `.env.google` file we are changing the instance name from the guide’s `myinstance` to something unique to your project, and storing that as an environment variable. Note: for production you may want to use a higher value for `--tier`; however, do be aware of the costs associated with doing this. [More information here](https://cloud.google.com/sql/pricing). ```bash gcloud sql instances create $DATABASE_INSTANCE_NAME --project $PROJECT_ID --database-version POSTGRES_14 --tier db-f1-micro --region $REGION ``` This command takes a long time to run. Once it completes, create a database in the instance, again using a unique name instead of the guide’s `mydatabase`: ```bash gcloud sql databases create $DATABASE_NAME --instance $DATABASE_INSTANCE_NAME ``` Next, create the database user. Make sure you have set/changed `DATABASE_PASSWORD` in your `deploy/.env.google` file (and run `source deploy/.env.google` again if necessary), then run: ```bash gcloud sql users create ${DATABASE_USER} --instance ${DATABASE_INSTANCE_NAME} --password ${DATABASE_PASSWORD} ``` And finally, grant the service account permission to access the DB: ```bash gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:${SERVICE_ACCOUNT} --role roles/cloudsql.client ``` **Create and configure Redis:** If you are using tasks, caching, or websockets you will also need to set up Redis. To enable Redis support you first have to [create a Redis instance](https://cloud.google.com/memorystore/docs/redis/create-manage-instances). ```bash gcloud redis instances create ${PROJECT_ID}-redis --size=1 --region=${REGION} ``` You might get prompted to enable the Redis API, which you should say “yes” to. This command will also run for a long time. After creating your Redis instance you will need to run `source deploy/.env.google` again to populate the necessary environment variables for the IP and network. You can also view these by running: ```bash gcloud redis instances describe ${PROJECT_ID}-redis --region ${REGION} ``` **Create and configure the storage bucket:** ```bash gcloud storage buckets create gs://${GS_BUCKET_NAME} --location ${REGION} ``` ```bash gcloud storage buckets add-iam-policy-binding gs://${GS_BUCKET_NAME} \ --member serviceAccount:${SERVICE_ACCOUNT} \ --role roles/storage.admin ``` **Store configuration as a secret.** Once you have completed the above you should have everything you need in place to run your app. The final step before deploying is to save your configuration as a secret. First create the `.env.production` file from `.env.production.example`: ```bash cp .env.production.example .env.production ``` Then update the `DATABASE_URL`, `GS_BUCKET_NAME`, and `REDIS_URL` values in `.env.production`. You can find these values with the echo command, e.g.: ```bash echo $DATABASE_URL ``` For Redis you will need to put in the value of `REDIS_IP`, which you can get from: ```bash echo $REDIS_IP ``` Next, save these values in Google Cloud: ```bash gcloud secrets create application_settings --data-file .env.production ``` And allow the service account to access it: ```bash gcloud secrets add-iam-policy-binding application_settings --member serviceAccount:${SERVICE_ACCOUNT} --role roles/secretmanager.secretAccessor ``` ### Create and deploy your docker containers [Section titled “Create and deploy your docker containers”](#create-and-deploy-your-docker-containers) From here on out we’ll stop using the guide, since Pegasus should handle everything else for you out of the box.
Also, instead of using a `Procfile` we’ll use our own Docker container. First make sure you have loaded your environment: ```bash set -o allexport && source deploy/.env.google && set +o allexport ``` Now you can build your container for Google Cloud by running the following command. ```bash make gcp-build ``` This, and all other `make` commands are defined in the `Makefile` in your project. You can see what they are doing there. Once you’ve built your container, enable docker pushes with: ```bash gcloud auth configure-docker ``` And then you can push it with: ```bash make gcp-push ``` And finally deploy it with: ```bash make gcp-deploy ``` This should deploy your application to a new container. It should output the URL for your app, which is now online! Future deploys can be done in the same manner. Or you can use the following command as a shortcut to run everything: ```bash make gcp-full-deploy ``` ### Settings and Secrets [Section titled “Settings and Secrets”](#settings-and-secrets) You can use Google Secret Manager to add additional settings and secrets by adding them to `.env.production` and uploading it to Secret Manager using: ```bash gcloud secrets versions add application_settings --data-file .env.production ``` See `settings_production.py` for examples of using these secrets in your settings file. ### Cookbooks [Section titled “Cookbooks”](#cookbooks) #### Automating Deployment to Cloud Run using GitHub Actions [Section titled “Automating Deployment to Cloud Run using GitHub Actions”](#automating-deployment-to-cloud-run-using-github-actions) If you would like to automate your Google Cloud deployment so it is deployed on every push to Github, you can refer to this community guide: [Automating Deployment to Cloud Run using GitHub Actions](/community/google-cloud-github-actions/) # Heroku > Deploy Pegasus apps to Heroku using Python buildpacks or Docker containers with PostgreSQL, Redis, and Celery for scalable web applications. Pegasus supports deploying to Heroku as a standard Python application or using containers. Before getting started, first take the following steps in Heroku: 1. In the Heroku dashboard, create a new app. 2. Set up the [Heroku CLI](https://devcenter.heroku.com/articles/heroku-command-line) and run `heroku login` locally. 3. Connect your app, by running `heroku git:remote -a {{ heroku_app_name }}` ### Building using Heroku’s Python support [Section titled “Building using Heroku’s Python support”](#building-using-herokus-python-support) To deploy with Heroku’s Python module, first set up Pegasus using the “heroku” deploy platform option. This will create your Heroku `Procfile`, `runtime.txt`, and additional requirements/settings files needed for the Heroku platform. ### Building using Heroku’s Docker container support [Section titled “Building using Heroku’s Docker container support”](#building-using-herokus-docker-container-support) To deploy to Heroku using Docker, you should build Pegasus with the “heroku docker” deployment option. This will create your production `Dockerfile`, a `heroku.yml` file you can use [to build and deploy your container](https://devcenter.heroku.com/articles/build-docker-images-heroku-yml), and additional requirements/settings needed for the Heroku platform. 
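For reference, a container-based `heroku.yml` build section might look roughly like the sketch below. This is an illustrative example of Heroku's `heroku.yml` format, not the exact file Pegasus generates for you:

```yaml
# Illustrative sketch of a container-based heroku.yml build section
# (the generated Pegasus file may differ)
build:
  docker:
    web: Dockerfile
```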
After building and setting up Heroku you will also need to configure Heroku to deploy with containers by running: ```bash heroku stack:set container ``` ### Configure Django Settings [Section titled “Configure Django Settings”](#configure-django-settings) The Heroku deployment uses its own settings module (which extends the normal `settings.py`). To tell Heroku to use it, set the `DJANGO_SETTINGS_MODULE` config var to `{ project_slug }.settings_production`. This can be done in the “settings” tab of your Heroku application (you may need to click to reveal the Config vars) or in the CLI using the following command (replacing the `project_slug` with your app name): ```bash heroku config:set DJANGO_SETTINGS_MODULE={ project_slug }.settings_production ``` ### Disable DEBUG [Section titled “Disable DEBUG”](#disable-debug) Similar to setting the Django Settings, you should disable DEBUG mode in your Heroku config: ```bash heroku config:set DEBUG=False ``` ### Set up Databases [Section titled “Set up Databases”](#set-up-databases) To set up your Postgres database, first enable the addon in the UI or by running: ```bash heroku addons:create heroku-postgresql ``` Database migrations should be handled automatically by Heroku. If you want to use Redis as a cache or to use Celery, you will need to install [the Heroku Redis addon](https://elements.heroku.com/addons/heroku-redis) from the UI or by running: ```bash heroku addons:create heroku-redis ``` ### Deploying [Section titled “Deploying”](#deploying) Both builds can be deployed using Heroku’s standard git integration. After you’ve connected your project’s git repository to Heroku, just run: ```bash git push heroku main ``` You can also configure Heroku to automatically build from a branch of your git repository. **After deploying, review the [production checklist](/deployment/production-checklist) for a list of common next steps** ### Setting environment variables [Section titled “Setting environment variables”](#setting-environment-variables) To set environment variables run: ```bash heroku config:set {variable_name}={ value } ``` e.g. ```bash heroku config:set SECRET_KEY={some long randomly generated text} ``` ### Additional settings configuration [Section titled “Additional settings configuration”](#additional-settings-configuration) If you need additional production settings, you can put them in the `settings_production.py` file, or include them as config vars like this: ```python SECRET_KEY = env('SECRET_KEY') ``` **It is strongly advised to put any secrets in your environment instead of directly in your settings file.** ### Running one-off commands [Section titled “Running one-off commands”](#running-one-off-commands) You can run once-off commands using the `heroku` CLI. E.g. ```bash heroku run python manage.py bootstrap_subscriptions ``` ### Building the front end [Section titled “Building the front end”](#building-the-front-end) As of Pegasus version 0.19, Heroku container builds will automatically build your front end files for you. You don’t need to do anything to set this up. If you’re using Heroku’s Python support you can also configure Heroku to build your front-end files for you. To set this up, all you need to do is add the `heroku/nodejs` buildpack to your application from the settings page. Just make sure that this buildpack runs *before* the `heroku/python` buildpack, so that the compiled files are available when the `collectstatic` command runs. 
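If you prefer the command line over the settings page, buildpack order can also be managed with the standard `heroku buildpacks` commands. A minimal sketch, assuming your app already uses the `heroku/python` buildpack:

```bash
# Add the nodejs buildpack in the first position so it runs before heroku/python
heroku buildpacks:add --index 1 heroku/nodejs
# Verify the resulting order
heroku buildpacks
```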
### Celery support [Section titled “Celery support”](#celery-support) The Heroku environment supports Celery out-of-the-box. Additionally, you may need to run the following command to initialize a Celery worker: ```bash heroku ps:scale worker=1 ``` This process should be the same for Python and containerized builds. # Kamal (Deploy to any VPS) > Deploy Pegasus to any Linux VPS using Kamal with automated SSL certificates, load balancing, and PostgreSQL database management. Pegasus supports container-based deployment to any Linux server using [Kamal](https://kamal-deploy.org/). Kamal is a deployment tool that uses Docker to deploy applications to servers. It is designed to be simple to use and to work with a single server or a cluster of servers. It can also be used to deploy multiple apps to the same server. Kamal will deploy the app as Docker containers, and will also deploy the database and any other services that are required. It will also configure a load balancer ([kamal-proxy](https://github.com/basecamp/kamal-proxy)) to route traffic to the app and configure SSL certificates using LetsEncrypt. By default, Pegasus will run all the services on a single server, but Kamal is designed to work with multiple servers, so you can easily move services to separate servers and update the Kamal configuration in `config/deploy.yml`. ### Screencast [Section titled “Screencast”](#screencast) You can watch a screencast showing how to deploy to a Hetzner server with Kamal here: Or follow along with the documentation below. ### Overview [Section titled “Overview”](#overview) Deploying on Kamal will require a few pieces: 1. A server running Linux (the latest Ubuntu LTS is recommended---version 24.04 as of this writing) and accessible via SSH. 2. A domain name for your app. You will need to create a DNS record pointing to your server’s IP address. 3. A Docker registry to store your images. You can use [Docker Hub](https://hub.docker.com) or any other registry. 4. A development environment where you install and configure Kamal. We’ll walk through these in more detail in order below. ### Provision and prepare your server [Section titled “Provision and prepare your server”](#provision-and-prepare-your-server) The first step is to provision a server where you will host your application. Some popular choices include: * Hetzner (get €20 credit and support Pegasus with [this link](https://hetzner.cloud/?ref=49vhF1w3TIyB)). * Digital Ocean Droplets (get $100 credit and support Pegasus with [this link](https://m.do.co/c/432e3abb37f3)). * [Linode](https://www.linode.com/). * [AWS](https://aws.amazon.com/) (Lightsail or EC2). * [Google Cloud](https://cloud.google.com/). * [Microsoft Azure](https://azure.microsoft.com/en-us). It is recommended to choose the latest Ubuntu LTS---version 24.04 as of this writing---for your operating system. Other operating systems might work, but are not tested or officially supported. We also recommend at least 2GB of RAM. Once you’ve chosen a hosting company and provisioned a server, follow the instructions provided to log in (SSH) to the server. You will need to be able to log in remotely to complete the rest of the setup. The rest of these instructions will run kamal as the root user. If you prefer to run kamal as a different user---which can prevent certain kinds of attacks---see the note below. ### Set up DNS [Section titled “Set up DNS”](#set-up-dns) To set up SSL you will need a DNS record pointing at your server.
Create a new “A” record using whatever tool you use to manage your DNS, and point it at the IP address of the server you created above. The most common domain to use is `www.<yourdomain>.com`. ### Create the image repository on Docker Hub [Section titled “Create the image repository on Docker Hub”](#create-the-image-repository-on-docker-hub) Before deploying, you need a place to store your Docker images, also known as a *Docker registry*. The most popular one is [Docker Hub](https://hub.docker.com/), so we’ll use that one, though you can choose a different one if you want, as described in the [Kamal docs](https://kamal-deploy.org/docs/configuration). First create an account on [Docker Hub](https://hub.docker.com/) and note your username. Then create a new repository, choosing a unique name for your app, and marking it “private”. Finally you will need to create an access token. Go to “Account Settings” -> “Security” and make a new access token, giving it the default permissions of Read, Write, Delete. **Save this token somewhere as you will need it in the next step and will only see it once.** ### Install and configure Kamal [Section titled “Install and configure Kamal”](#install-and-configure-kamal) Finally, we can set everything up to deploy our production application with Kamal. If you have a Ruby environment available, you can install Kamal globally with: ```bash gem install kamal ``` *Note: you may want to use [`rbenv`](https://github.com/rbenv/rbenv) to manage your environment.* If you don’t have Ruby running you can also use Docker to install Kamal, by creating an alias command as described [in the Kamal docs here](https://kamal-deploy.org/docs/installation/). #### Create `secrets` file in the `.kamal` directory [Section titled “Create secrets file in the .kamal directory”](#create-secrets-file-in-the-kamal-directory) Kamal expects a `.kamal/secrets` file in this folder which will contain all the environment variables needed for deployment. The `secrets` file should not be checked into source control. See `.kamal/secrets.example` for the required variables. ```bash cp .kamal/secrets.example .kamal/secrets ``` #### Update the Kamal configuration files [Section titled “Update the Kamal configuration files”](#update-the-kamal-configuration-files) The Kamal configuration is in `config/deploy.yml`. You will need to update the following values: * Docker image repo: `image: <namespace>/<repository>` - this is the repository you created above. If you’re using Docker Hub, the `namespace` will typically be your username. * Docker registry username: `username: <your username>` - the username you chose above. * Your server IP address (or hostname): `<server IP>` (this value is listed once per service). * Your app domain: `host: <your domain>` in the `proxy` section at the end, if this is not already set via your project configuration. Additionally, in your `.kamal/secrets` file you should add the following variables: * Set `KAMAL_REGISTRY_PASSWORD` to the access token value you created above. * Choose secure, unique, and ideally random values for `POSTGRES_PASSWORD` and `SECRET_KEY`. * Update the `DATABASE_URL` value (use the same password as `POSTGRES_PASSWORD`). You can review other settings in `deploy.yml`, but those should be all that you need to set yourself to do your first deployment. ### Deploy [Section titled “Deploy”](#deploy) Finally, we can use Kamal to do the rest of the setup. Run the following on your *local* machine, from the project root directory.
```bash kamal setup ``` This will perform all the tasks necessary to deploy your application (duplicated below from the [Kamal docs](https://kamal-deploy.org/docs/installation)): * Connect to the servers over SSH (using root by default, authenticated by your SSH key). * Install Docker on any server that might be missing it (using get.docker.com): root access is needed via SSH for this. * Log into the registry both locally and remotely. * Build the image using the standard Dockerfile in the root of the application. * Push the image to the registry. * Pull the image from the registry onto the servers. * Ensure kamal-proxy is running and accepting traffic on ports 80 and 443. * Start a new container with the version of the app that matches the current Git version hash. * Tell kamal-proxy to route traffic to the new container once it is responding with 200 OK to GET /up. * Stop the old container running the previous version of the app. * Prune unused images and stopped containers to ensure servers don’t fill up. If everything is set up properly then in five or so minutes you should be able to visit your new application at the configured domain. You’re done! ### Post-deployment steps [Section titled “Post-deployment steps”](#post-deployment-steps) Once you’ve gotten everything set up, head on over to the [production checklist](/deployment/production-checklist) and run through everything there. In particular, you will have to set up media files using an external service like S3. #### Manage changes after initial deployment [Section titled “Manage changes after initial deployment”](#manage-changes-after-initial-deployment) See the `config/README.md` file in your project repo for pointers on managing the production environment after the initial deployment. The main command you will regularly run is `kamal deploy`, which will push new releases and configurations of your application. ### Settings and Secrets [Section titled “Settings and Secrets”](#settings-and-secrets) Kamal builds use the `settings_production.py` file. You can add settings here, and use environment variables to manage any secrets, following the pattern used throughout the file. If you modify `settings_production.py` (or any other code) you will need to run: ```bash kamal deploy ``` To push the changes to your servers. Secrets should be managed in environment variables. To add new environment variables you will need to update them in two places: 1. The variable *name* needs to be added to the `env` section at the top of `config/deploy.yml`. 2. The variable name *and value* needs to be added to `.kamal/secrets` (the same `secrets` file we’ve been using above). You can see examples of this for variables like `DATABASE_URL` in those two files. Once you modify your environment variable files you will need to run: ```bash kamal deploy ``` To update the variables on the server and redeploy the app. ### Running one-off commands [Section titled “Running one-off commands”](#running-one-off-commands) The easiest way to run one-off commands on your server is to use the `kamal app exec` command. For example: ```bash kamal app exec -r web 'python manage.py bootstrap_subscriptions' ``` If you want an interactive SSH-style shell you can run: ```bash kamal app exec -i bash ``` You should now have a shell where you can run any Python/`manage.py` command. 
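For example, once inside that interactive shell you could run any standard Django management command---an illustrative example:

```bash
# Run inside the interactive kamal shell opened above
python manage.py createsuperuser
```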
You can also get a database shell by running: ```bash kamal accessory exec postgres -i 'psql -h localhost -p 5432 -U <database user> <database name>' --reuse ``` For more information see [Kamal commands](https://kamal-deploy.org/docs/commands). ### Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) #### Something went wrong during setup [Section titled “Something went wrong during setup”](#something-went-wrong-during-setup) If the `kamal setup` command fails it should print out the error it got. Once you’ve resolved it, you may need to set up the services individually instead of re-running it. You can do that with the commands below: ```bash # rebuild the PostgreSQL container kamal accessory reboot postgres # rebuild the Redis container kamal accessory reboot redis # rebuild the proxy container kamal proxy reboot # build the proxy container (if it didn't succeed the first time) kamal proxy boot # deploy the app kamal deploy ``` If deploy continues to fail, check the logs of your docker container, using: ```bash kamal app logs ``` #### Resolving `ERROR exec /bin/sh: exec format error` [Section titled “Resolving ERROR exec /bin/sh: exec format error”](#resolving-error-exec-binsh-exec-format-error) If you see this error on your server/logs it is likely that the architecture used to build your image is not the same as the one running on your server. Review the `builder` section of your `deploy.yml` file and in particular make sure `multiarch` is set to `true`. You can also explicitly build the image on the remote server, or set the target architecture using other `builder` options as described [in the kamal docs](https://kamal-deploy.org/docs/configuration#using-remote-builder-for-native-multi-arch). #### Resolving `ERROR /bin/sh: 1: /start: not found` [Section titled “Resolving ERROR /bin/sh: 1: /start: not found”](#resolving-error-binsh-1-start-not-found) If you see this error on your server/logs it is likely that your `/start` script has the wrong line endings. This can happen if you edit the `./deploy/docker_startup.sh` file in certain programs on the Windows operating system. To fix this, change the line endings of the file from CRLF to LF using your preferred text editor (you can Google or ask ChatGPT how to do this for your specific environment). #### Health checks are failing because of `ALLOWED_HOSTS` [Section titled “Health checks are failing because of ALLOWED_HOSTS”](#health-checks-are-failing-because-of-allowed_hosts) Kamal runs a “health check” during deploys to ensure your new application is ready to handle requests. This involves pinging your workers on an internal docker address and waiting for them to respond with a “200 OK” status code. Because it’s not possible to predict the hostname for these requests, a special middleware that bypasses the host check is included in Pegasus to handle this situation. If health checks are failing, that means the middleware isn’t set up properly. The healthcheck url that kamal hits (`/up` by default) must match the path defined in `apps.web.middleware.healthchecks.HealthCheckMiddleware` (also `/up` by default). To fix it, confirm the following things: 1. Ensure that `"apps.web.middleware.healthchecks.HealthCheckMiddleware",` is the first middleware in `settings.MIDDLEWARE`. 2. Ensure that the paths match. 1. The kamal path is defined in the `proxy` section of `deploy.yml`, under `healthcheck` (see the sketch below). If there is no `healthcheck` section, then it’s using the default path. 2. The middleware path is defined in `apps/web/middleware/healthchecks.py`.
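As a rough sketch, a custom health check configuration in `config/deploy.yml` might look something like this (illustrative values---the `path` must match the one used by `HealthCheckMiddleware`):

```yaml
# Sketch of a proxy healthcheck block in config/deploy.yml (illustrative values)
proxy:
  host: app.example.com
  healthcheck:
    path: /up
    interval: 3
    timeout: 5
```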
### Cookbooks [Section titled “Cookbooks”](#cookbooks) #### Changing your site URL [Section titled “Changing your site URL”](#changing-your-site-url) To change your site’s URL, do the following: 1. Set up a new DNS endpoint as outlined above. 2. Change the `host` value in your proxy configuration in `deploy.yml` to the new domain. 3. Update your `ALLOWED_HOSTS` setting / environment variable as needed. 4. Run `kamal proxy reboot`. 5. Run `kamal deploy`. Your app should now be running on your new domain. #### Getting a database backup [Section titled “Getting a database backup”](#getting-a-database-backup) Here is one way to get a database dump of your server: First you can run the following command to save a database dump to the *host* machine: ```bash kamal accessory exec postgres 'pg_dump -h localhost -p 5432 -U <database user> <database name> > db_dump.sql' --reuse ``` This should create a file on the *host* machine at `/home/kamal/db_dump.sql`. If you want to copy this file locally, you can run: ```bash scp kamal@yourapp.com:db_dump.sql ./ ``` Note: you may want to zip or gzip this file first if you have a large database. #### Doing a database restore [Section titled “Doing a database restore”](#doing-a-database-restore) To restore a database you first put the backup file on the host: ```bash scp ./db_dump.sql kamal@yourapp.com: ``` Then create the DB: ```bash kamal accessory exec postgres 'createdb -h localhost -p 5432 -U <database user> <database name>' --reuse ``` After that you will need to log in to the *host* machine: ```bash ssh kamal@yourapp.com ``` And copy the database dump onto the DB machine. Run `docker ps` to get the container id of the DB machine. Then run: ```bash docker cp db_dump.sql <container id>:/tmp/db_dump.sql ``` Finally, log in to the DB container: ```bash docker exec -it <container id> /bin/bash ``` And restore the data: ```bash psql -h localhost -p 5432 -U <database user> <database name> < /tmp/db_dump.sql ``` #### Deploying multiple apps to the same server [Section titled “Deploying multiple apps to the same server”](#deploying-multiple-apps-to-the-same-server) One of the major benefits of the VPS-based approach is that you can easily host multiple apps on the same hardware, which is usually a substantial cost advantage over hosting each one on its own. This is now supported out of the box by Kamal and Pegasus. To deploy multiple applications to the same server, just set up Kamal individually for each application and run through the steps above. Once multiple sites are set up, `kamal-proxy` will automatically route traffic to the right app based on the site URL. #### Running Docker as a non-root user [Section titled “Running Docker as a non-root user”](#running-docker-as-a-non-root-user) Follow these steps if you don’t want to run kamal and Docker as the root user. ##### Manually Install Docker [Section titled “Manually Install Docker”](#manually-install-docker) If you don’t run kamal as root you’ll have to install Docker yourself. You can test if Docker is installed by running `docker -v` on the command line. You should see output like the following if it is installed correctly. ```plaintext Docker version 24.0.5, build 24.0.5-0ubuntu1~20.04.1 ``` If you need to install it, you can find instructions in [Docker’s documentation](https://docs.docker.com/engine/install/ubuntu/). You only need to install Docker Engine, not Docker Desktop. ##### Prepare a user account for Kamal [Section titled “Prepare a user account for Kamal”](#prepare-a-user-account-for-kamal) Next, create a user for Kamal to use. You can choose any username you like. In this example we will use `kamal`.
We’ll also add this user to the `docker` group so that Kamal can run docker commands. First log in to your server as a user with root access. Then run the following commands: ```bash sudo adduser kamal --disabled-password sudo adduser kamal --add_extra_groups docker ``` Next, add your SSH key to the `kamal` user’s `authorized_keys` file so you can log in without a password. If you need to generate an SSH key you can [follow these steps](https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server): ```bash sudo mkdir -p /home/kamal/.ssh sudo cp ~/.ssh/authorized_keys /home/kamal/.ssh/authorized_keys sudo chown -R kamal:kamal /home/kamal/.ssh ``` Next, test that the login works. Exit out of your server and on your *local machine* run: ```bash ssh kamal@<your server IP> ``` If you’ve set everything up properly the `kamal` user should be able to log in with no password. Once you’re logged in, as a final test, ensure the `kamal` user can run docker commands by running: ```bash docker run hello-world ``` If the command above completes without error you are ready to go! Finally, update your `config/deploy.yml` file to specify a different user by adding an ssh section, as [described in the docs](https://kamal-deploy.org/docs/configuration/ssh/#the-ssh-user): ```yaml ssh: user: kamal ``` # Deployment Overview > Compare Django deployment options including VPS, PaaS platforms like Heroku and Render, Docker containers, and Kubernetes for SaaS applications. Pegasus---like Django---can be deployed on any standard cloud infrastructure. The most common ways of deploying Pegasus are: 1. On a raw VPS / Virtual Machine, such as Digital Ocean, Linode, or Amazon EC2 or Lightsail 2. On a platform-as-a-service (PaaS) platform, such as Heroku or PythonAnywhere 3. In a containerized way, using Docker, and (optionally) Kubernetes Choosing the right deployment architecture involves a complex set of trade-offs, and there’s no one-size-fits-all solution. PaaS and Docker-based solutions tend to be easier to get up and running, but can be more difficult to modify and are often more expensive at scale. Meanwhile, setting up a VPS can be error-prone but is a very cost-effective way to deploy small applications. Much of the choice will also depend on the knowledge and comfort of you/your team with various tools and platforms. See this [Django Deployment Guide](https://www.saaspegasus.com/guides/django-deployment/) for a big-picture overview on choosing a deployment strategy. ## Officially supported PaaS platforms [Section titled “Officially supported PaaS platforms”](#officially-supported-paas-platforms) Pegasus ships with configuration files to deploy to select platforms out-of-the-box. The officially supported platforms are: * [Render](/deployment/render) (Python-based) * [Fly](/deployment/fly) (Docker-based) * [Heroku](/deployment/heroku) (Python or Docker) * [Digital Ocean App Platform](/deployment/digital-ocean) (Docker-based) * [Google Cloud Run](/deployment/google-cloud) (Docker-based) Render and Fly are comparable, and are the recommended options for staging sites or MVPs, since they are easy to set up and have a generous free tier. If you would like to deploy to a platform that’s not listed here, please get in touch on Slack or by email and I’m happy to help!
## Deployment to any VPS [Section titled “Deployment to any VPS”](#deployment-to-any-vps) In addition to the above platforms, you can use Pegasus’s Kamal deployment support to deploy your application onto any Linux server, using Docker containers. For more information on deploying to a VPS, see the [kamal deployment documentation](/deployment/kamal). ## Other options [Section titled “Other options”](#other-options) If, for whatever reason, you don’t want to use the built-in Kamal option to deploy to a VPS, the Django documentation provides a good overview on [how to deploy Django to your own server](https://docs.djangoproject.com/en/stable/howto/deployment/). Pegasus user [Mitja Martini](https://mitjamartini.com/) has documented how he [deploys his SaaS Pegasus application to a VPS using Dokku](https://mitjamartini.com/blog/2024/09/22/deploying-django-on-dokku/) (an open-source, self-hosted PaaS platform). Pegasus user [Artem Gordinskiy](https://artem.cool/) has documented his experience [migrating Pegasus apps from Kamal to Coolify](https://artem.cool/blog/coolify-django/) (another open-source, self-hosted PaaS). Pegasus’s [Docker support](/docker) can be used as a basis for other production environments that support containers---for example, Google Kubernetes Engine and Amazon ECS. Please reach out in the Pegasus Slack `#deployment` channel for any help on this! # Production checklist > Essential Pegasus production setup checklist covering security settings, email configuration, static files, media storage, and monitoring for live applications. The following are some recommendations for deploying production Pegasus applications. ## Run the Django deployment checklist [Section titled “Run the Django deployment checklist”](#run-the-django-deployment-checklist) Django provides a [deployment checklist](https://docs.djangoproject.com/en/stable/howto/deployment/checklist/) that helps ensure your site has some of the most important settings properly configured for production environments. It is executed by running `manage.py check --deploy` on your production server. It’s recommended to run this on your production application and address any critical issues. The default Pegasus configuration will contain some warnings, to help prevent misconfigurations which can affect your site’s availability. Not all warnings are serious issues and some may not be possible to address (e.g. if part of your site must be available over HTTP instead of HTTPS). After running the `manage.py check --deploy` command you should read through the documentation for any issues you get and update the relevant settings where necessary. *Note: The “unable to guess serializer” warnings are safe to ignore, and will be fixed in a future version of Pegasus.* ## Set your `ALLOWED_HOSTS` [Section titled “Set your ALLOWED_HOSTS”](#set-your-allowed_hosts) In your app’s `settings_production.py` be sure to update the [`ALLOWED_HOSTS` setting](https://docs.djangoproject.com/en/4.1/ref/settings/#allowed-hosts) with the domain(s) you want the site to be available from, replacing the `'*'` that is there by default: ```python ALLOWED_HOSTS = [ 'example.com', # use your app's domain here ] ``` Failure to do this opens your site up to HTTP Host header attacks. ## Update your Django Site [Section titled “Update your Django Site”](#update-your-django-site) In order for absolute URLs and JavaScript API clients to work, your Django site should match your application’s domain.
See the documentation on [absolute URLs](/configuration/#absolute-urls) to do this. ## Set up email [Section titled “Set up email”](#set-up-email) If you haven’t already, you’ll want to set up your site to [send email](/configuration/#sending-email) ## Make sure your secrets are set [Section titled “Make sure your secrets are set”](#make-sure-your-secrets-are-set) Application secrets (e.g. API keys, passwords, etc.) are managed in environment variables. Ensure that you have configured the following variables (if you are using them): * All apps should set `SECRET_KEY` to a long, randomly-generated value. * If you’re using Stripe, you should set the `STRIPE_TEST_PUBLIC_KEY`, `STRIPE_TEST_SECRET_KEY`, `STRIPE_LIVE_PUBLIC_KEY`, and `STRIPE_LIVE_SECRET_KEY` config vars (or whatever subset you are using). You also need to set `STRIPE_LIVE_MODE` to `True`. * If you set up email, ensure whatever keys/secrets you need are set. * If you’re using Mailchimp, set `MAILCHIMP_API_KEY` and `MAILCHIMP_LIST_ID`. * If you’re using Health Checks, set `HEALTH_CHECK_TOKENS`. Refer to your [chosen platform’s documentation](/deployment/overview) for details on how to set environment variables in that platform. ## Sync Stripe data [Section titled “Sync Stripe data”](#sync-stripe-data) After setting up your Stripe variables per above, you’ll want to run: ```bash python manage.py bootstrap_subscriptions ``` to initialize your subscription data. See your [chosen platform’s documentation](/deployment/overview) for how to run one-off commands. ## Set up media files [Section titled “Set up media files”](#set-up-media-files) Some functionality, like user profile pictures, requires saving user-uploaded files. In development these are saved to the file system, but in most production environments the file system is not usable for it. Instead, you need to set up an external storage to handle these. There is guidance on configuring media files in the [settings and configuration docs](/configuration/#storing-media-files). The most common choice of external storage is [Amazon S3](https://aws.amazon.com/s3/), though many cloud providers have their own S3-compatible options, e.g. [Digital Ocean Spaces](https://www.digitalocean.com/products/spaces). ## Check your static file setup [Section titled “Check your static file setup”](#check-your-static-file-setup) By default, Pegasus uses [whitenoise](https://whitenoise.readthedocs.io/en/stable/index.html) for static files. **If you keep the default setup, you do not need to change anything.** Static files will be built and collected as part of the build process of your Docker container and should be available on your production site. If you decide to switch to serving files externally, for example, using Amazon S3, then you may need to modify your static file set up for some platforms. This is because production secrets necessary to save files to S3 may not be available during the Docker container build. If this is the case, you should modify your deployment set up so that `python manage.py collectstatic --noinput` is run at the same time as Django database migrations, so that the necessary secrets are available to the application. The exact way to do this will vary by deployment platform. ## Optimize your front end [Section titled “Optimize your front end”](#optimize-your-front-end) The front-end files that ship with Pegasus are the developer-friendly versions. In production, these should be optimized. 
Most Pegasus deployment configurations will handle this automatically for you, but if you need to handle it yourself, follow the guidance below. First you should add the compiled files to your `.gitignore` as described in the [front end docs](/front-end/overview). Then, as part of your CI/CD deployment process, you should build the bundle files directly on your production server (using `npm install && npm run build`). This will ensure that the latest, optimized version of the front-end code is always deployed as part of your production environment. ## Update other configuration options [Section titled “Update other configuration options”](#update-other-configuration-options) See [the configuration page](/configuration) for a larger list of options, including social login, sign up flow changes, analytics, logging, adding captchas, and so on. ## Set up monitoring [Section titled “Set up monitoring”](#set-up-monitoring) It’s highly recommended to enable Sentry and connect it to your application so that you can see any errors that are encountered. It’s also recommended to enable the health check endpoint and connect it to a monitoring tool like [StatusCake](https://www.statuscake.com/) or [Uptime Robot](https://uptimerobot.com/) so that you can be alerted whenever your site or services are having an outage. The URL you should connect is: `yourdomain.com/health/`. If you have the “Health Check Endpoint” option enabled for your project you should also ensure that you have set the `HEALTH_CHECK_TOKENS` environment variable to a secure value. This can be a comma-separated list of tokens that are required to access the health check endpoint. For example: ```plaintext HEALTH_CHECK_TOKENS=secrettoken1,secrettoken2 ``` Then your health check endpoint will only be accessible at: `https://yourdomain.com/health/?token=secrettoken1` and `https://yourdomain.com/health/?token=secrettoken2` These URLs can then be connected to a monitoring tool to ensure that only your monitoring tool (or anyone who knows the token) can access the health check endpoint. ## Double-check your language settings [Section titled “Double-check your language settings”](#double-check-your-language-settings) Make sure your [internationalization settings](/internationalization/) are correct, and you don’t have any extra languages in `settings.LANGUAGES` that you don’t currently support. This is especially important if you are using Wagtail, as links to pages in unsupported languages may error or return the wrong results. ## Consider switching to `psycopg2` source distribution [Section titled “Consider switching to psycopg2 source distribution”](#consider-switching-to-psycopg2-source-distribution) For ease of development, Pegasus ships with the `psycopg2-binary` package which is used for connecting to PostgreSQL however the [psycopg documentation](https://www.psycopg.org/docs/install.html#psycopg-vs-psycopg-binary) recommends using the source distribution (`psycopg2`) in production environments. The issues mentioned in the documentation mostly impact non-Docker deployments. ### Switching from `psycopg2-binary` to `psycopg2` [Section titled “Switching from psycopg2-binary to psycopg2”](#switching-from-psycopg2-binary-to-psycopg2) 1. In `requirements/requirements.in`, replace `psycopg2-binary` with `psycopg2` 2. 
[Re-build](/python/packages/) your requirement TXT files If you are using the Dockerfiles shipped with Pegasus you should not need to make any changes however if you are running your Pegasus app directly on a VM you will need to make sure the [build prerequisites](https://www.psycopg.org/docs/install.html#build-prerequisites) are installed before deploying the requirements changes. # Render > Deploy Pegasus applications to Render platform with PostgreSQL, Redis, automatic builds, and Celery worker support for web services. Pegasus supports deploying to [Render](https://render.com/) as a standard Python application. ### Prerequisites [Section titled “Prerequisites”](#prerequisites) If you haven’t already, create your Render account. To use celery you will need to upgrade to a paid plan. ### Deploying [Section titled “Deploying”](#deploying) Once you’ve logged into Render you can create your app as follows: 1. In the Render dashboard, create a new blueprint 2. Connect your GitHub or Gitlab account and select your project’s repository 3. Configure the *Blueprint Name* and select the branch you want to deploy from 4. Review the configuration, add settings and click ‘Apply’ This will kick off the process to create your PostgreSQL database and Redis instances as well as deploy your web application (configured in your project’s `render.yaml` file). **After deploying, review the [production checklist](/deployment/production-checklist) for a list of common next steps** ### Build Script [Section titled “Build Script”](#build-script) The `build.sh` file is run by Render to run the commands needed to build the app, as [described here](https://render.com/docs/deploy-django#create-a-build-script). This is also where “release” commands like `collectstatic` and `migrate` run. If there are other commands (e.g. `./manage.py bootstrap_subscriptions`) that you want to run on every deploy you can add them to `build.sh`. If you enable celery, it will use the `build_celery.sh` file, which runs the basic build steps, but not the “release” commands. You generally should not need to modify this file. #### (Optional) Running Migrations in the Release Phase [Section titled “(Optional) Running Migrations in the Release Phase”](#optional-running-migrations-in-the-release-phase) If you want, you can optionally run the database migrations in the release phase using Render’s [Deploy steps](https://render.com/docs/deploys#deploy-steps) functionality. This is not required, and notably **it is not supported on Render’s free tier**, but may lead to a more consistent deployment process. To do this, first remove the following lines from `deploy/build.sh`: ```bash echo "Running database migrations" python manage.py migrate ``` Then create the following file at `deploy/pre_deploy.sh`: ```bash #!/usr/bin/env bash # exit on error set -o errexit export DJANGO_SETTINGS_MODULE={{cookiecutter.project_slug}}.settings_production echo "Running database migrations" python manage.py migrate ``` Finally, add the following line to your `render.yaml` file, after the `buildCommand`: ```yaml preDeployCommand: "./deploy/pre_deploy.sh" ``` After completing these steps, migrations will run in the pre-deploy phase. ### Settings and Secrets [Section titled “Settings and Secrets”](#settings-and-secrets) Render builds use the `settings_production.py` file. You can add settings here or in the base `settings.py` file, and use environment variables to manage any secrets, following the examples in these files. 
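For example, a new secret read from the environment might look like this in `settings_production.py`. This is a minimal sketch following the `env()` pattern used elsewhere in the settings files; `EXAMPLE_API_KEY` is a hypothetical variable name:

```python
# settings_production.py (sketch): read a hypothetical secret from an environment variable
EXAMPLE_API_KEY = env("EXAMPLE_API_KEY", default="")
```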
Environment variables can be managed from the “Environment” tab on your app’s dashboard. ### Running One-Off Commands [Section titled “Running One-Off Commands”](#running-one-off-commands) You can run one-off commands in the Render shell (paid plan required) or [via SSH](https://render.com/docs/ssh). ### Celery Support [Section titled “Celery Support”](#celery-support) To run celery workers on Render you will need to upgrade to a paid plan. Then in your `render.yaml` file uncomment the ‘celery’ section and rebuild from the steps above. If you previously deployed your application you can choose “Update Existing Resources” to avoid having to recreate your app / database / redis instance. ### Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) **Sometimes Render fails to build on the first deployment.** Retrying the deployment from the same commit seems to resolve this. ### Container-based deployment [Section titled “Container-based deployment”](#container-based-deployment) It is possible to use Render’s docker-based support to deploy Pegasus apps, but it is not recommended because there is no “release” phase, which makes it difficult to set up things like database migrations. More details can be found in [this support thread](https://community.render.com/t/release-command-for-db-migrations/247/7). # Troubleshooting > Resolve common Django deployment issues including 400 Bad Request errors, broken styles, API client problems, and misconfigured absolute URLs. Below are some common issues related to deployment, and how to fix them. ### Page displaying a 400 Bad Request error page [Section titled “Page displaying a 400 Bad Request error page”](#page-displaying-a-400-bad-request-error-page) **Problem:** Your site deploys but you get a “400 Bad Request” when opening it in a browser. **Solution:** This is usually caused by a misconfigured `ALLOWED_HOSTS` setting. See [the section on `ALLOWED_HOSTS`](/deployment/production-checklist/#set-your-allowed_hosts) to fix. ### App is online but all styles are broken [Section titled “App is online but all styles are broken”](#app-is-online-but-all-styles-are-broken) **Problem:** Your app is working but all the pages look horrible and unstyled. **Solution:** It’s likely that your static file setup is not correct. If you use Pegasus-supported deployments, this shouldn’t happen, but if you’ve veered from them at all it’s a common failure mode. To fix, ensure that you are running `collectstatic` somewhere in your deployment pipeline, and that your `STATIC_ROOT` is properly configured. More on static files in production can be found in the [Django documentation](https://docs.djangoproject.com/en/4.1/howto/static-files/deployment/). ### JavaScript API clients not working [Section titled “JavaScript API clients not working”](#javascript-api-clients-not-working) **Problem:** JavaScript API clients are failing to load data. This is likely the problem if the employee React demo or the teams list UI don’t work properly. **Solution:** This is usually caused by a misconfigured Django site. See the documentation on [absolute URLs](/configuration/#absolute-urls) to fix. ### Invitation / account emails have the wrong links [Section titled “Invitation / account emails have the wrong links”](#invitation--account-emails-have-the-wrong-links) **Problem:** When you try to confirm an email address or accept a team invitation you are sent to the wrong site (e.g. localhost). **Solution:** This is usually caused by a misconfigured Django site.
See the documentation on [absolute URLs](/configuration/#absolute-urls) to fix. ### Stripe callbacks are going to the wrong place [Section titled “Stripe callbacks are going to the wrong place”](#stripe-callbacks-are-going-to-the-wrong-place) **Problem:** After completing a payment in Stripe Checkout, you are redirected to the wrong place (e.g. localhost). **Solution:** This is usually caused by a misconfigured Django site. See the documentation on [absolute URLs](/configuration/#absolute-urls) to fix. # Standalone React Front End > Build decoupled React single-page applications with Vite, session authentication, API integration, and deployment to static hosting platforms. Experimental Feature This feature is experimental. It is likely (but not guaranteed) that it will make it into a future Pegasus release. While in the experimental phase, it may undergo significant changes, including breaking changes. *Added in version 2024.4. Expanded in version 2025.4.1.* SaaS Pegasus’s default React integration is based on a hybrid-model for reasons [outlined here](https://www.saaspegasus.com/guides/modern-javascript-for-django-developers/client-server-architectures/#enter-the-hybrid-architecture). The hybrid model is still recommended for the overwhelming majority of Pegasus projects using React. However, there are valid reasons to run a completely separate React front end---including access to dedicated tooling and libraries, isolating your front end and back end code, and working with AI-based tools that generate single-page applications. Pegasus experimentally ships with a decoupled front end *example* single page application that can be used as a starting point for building out a decoupled front end / SPA with React. It uses [Vite](https://vitejs.dev/) as a development server and build tool. The features it includes are: * A standalone Vite / React application. * Authentication via headless allauth and sessions---including sign up, login, social login, email confirmation, two-factor authentication, and logout functionality. * A sample profile page which shows how to retrieve data from the logged in user (via the back end APIs) and display it. * The employee lifecycle demo that ships with Pegasus (if enabled), showing a full create, read, update, delete (CRUD) workflow. The standalone front end is *only available on TailwindCSS* and uses DaisyUI for styling. **The standalone is not intended to be a replacement for Pegasus’s UI, but a reference example you can use as a starting point to build standalone, single-page-applications with Pegasus and React.** Here’s a demo: And here are some technical details: ## Running the front end [Section titled “Running the front end”](#running-the-front-end) *If you are using Docker, your front end should start in a separate container after running `make init`.* The front end lives in the `/frontend` folder of your project. To set it up for the first time, first go into the directory: ```bash cd frontend ``` And install npm packages: ```bash npm install ``` Create your `.env` file: ```bash cp .env.example .env ``` Then run the dev server: ```bash npm run dev ``` Note: your Django backend must also be running for the front end to work, and you must also [build your Django front end](/front-end/overview) for styles to work. ## Authentication [Section titled “Authentication”](#authentication) Authentication uses session-based authentication against the Django backend (previous versions of the front end used JWTs).
The authentication implementation borrows heavily from the [allauth example](https://github.com/pennersr/django-allauth/tree/main/examples/react-spa) project. In particular, the `src/lib/` and `src/allauth_auth/` folders have been copied in from that project and lightly modified to work with Pegasus. Authentication is primarily handled via *authenticated routes* and *authentication context*. You can see an example of how to set this up in the profile page. Any page in your application that requires login can be wrapped in the `AuthenticatedRoute` component. For example, like this: ```jsx

<AuthenticatedRoute>
  Hello authenticated user!
</AuthenticatedRoute>

``` Alternatively, if you make a page a child of an existing authenticated route, this will be automatically configured for you. See `main.tsx` as an example of how this is set up. When using the `AuthenticatedRoute`, if the user is not logged in they will be redirected to the login page. If they are logged in, they will be able to access the route, and you can assume access to the user object and other APIs that require login. If you want to access user data you can use the `useAuthInfo` helper function which returns an `AuthContext` context. Here is a simplified example taken from the Profile page: ```jsx import { useAuthInfo } from "../../allauth_auth/hooks"; export default function Profile() { const { user } = useAuthInfo(); return (

    <p>The user's email address is: {user?.email}</p>

); } ``` ## Backend API access [Section titled “Backend API access”](#backend-api-access) The front end uses the [same api client](/apis/#api-clients) as the backend / hybrid model. The API client is installed as a local npm package. Authentication is handled via sessions and does not require any additional configuration. Here is a basic example from the employee app demo: ```jsx import {PegasusApi} from "api-client"; import EmployeeApplication from "../../../../assets/javascript/pegasus/examples/react/App.jsx"; import {getApiConfiguration} from "../../api/utils.tsx"; export default function EmployeeApp() { const client = new PegasusApi(getApiConfiguration()); return ( <EmployeeApplication client={client} /> ); } ``` ## Routing [Section titled “Routing”](#routing) Routing is handled by [React Router](https://reactrouter.com/en/main). The main routes for the project are configured in `main.tsx`, and you can also include child routes by following the pattern used by the employee demo. ## URLs in Emails [Section titled “URLs in Emails”](#urls-in-emails) Some workflows, like email confirmation and password reset, require sending the user a link to your site. Allauth only supports a single link for the entire application, so you need to choose whether that link should go to your Django application or your React front end. To use the React front end’s pages for these workflows, you can set `USE_HEADLESS_URLS = True` in your settings or environment variables. This will configure the [`HEADLESS_FRONTEND_URLS` setting](https://docs.allauth.org/en/dev/headless/configuration.html) to work with the built-in front end. ## Deployment [Section titled “Deployment”](#deployment) Big picture, you should deploy the standalone front end and Django backend separately, and use different subdomains to point to them. The most common setup is to deploy the front end to either “mydomain.com” or “[www.mydomain.com](http://www.mydomain.com)”, and then deploy the backend to “app.mydomain.com” or “platform.mydomain.com”. ### The Django Backend [Section titled “The Django Backend”](#the-django-backend) You will need to deploy your Django backend using any of the [standard deployment methods](/deployment/overview). In addition to a standard deployment, you will specifically need to set the following additional settings, by overriding them in your environment variables or a production settings file: * `FRONTEND_ADDRESS`: Your front end’s full URL, e.g. “https://www.mydomain.com” * `CORS_ALLOWED_ORIGINS`: Full URLs of both your frontend and backend addresses, e.g. “https://www.mydomain.com” and “https://app.mydomain.com” * `CSRF_COOKIE_DOMAIN`: All domains and subdomains, e.g. “.mydomain.com” (note the leading “.”). * `SESSION_COOKIE_DOMAIN`: Same as `CSRF_COOKIE_DOMAIN`. ### The React Frontend [Section titled “The React Frontend”](#the-react-frontend) The frontend can be deployed anywhere that hosts static sites, including Cloudflare Pages, Netlify, or S3. The basic steps for deployment are to run `npm run build` and then serve the output directory as a static site. In addition, the following environment variables need to be set during build. Do not include trailing slashes: * `VITE_APP_BASE_URL`: Your Django backend URL, e.g. “https://app.mydomain.com” * `VITE_ALLAUTH_BASE_URL`: The full allauth base route for your backend, e.g. “https://app.mydomain.com/_allauth/browser/v1” Each static site host has its own way of configuring the above setup. Below are quick example instructions for deploying the front end on Cloudflare Pages: 1. In the Cloudflare dashboard, visit “Workers & Pages” and click “Create” 2. Under “pages”, select the option to connect a Github repository. 3.
Pick your Pegasus Github repository. You may have to authenticate and provide access permissions. 4. Fill in the following settings: 1. Build command: `npm run build` 2. Build output directory: `dist` 3. Root directory: `frontend` 4. Add the following environment variables. *Note that the URLs should not end in slashes.* * `VITE_APP_BASE_URL: https://<your backend domain>` * `VITE_ALLAUTH_BASE_URL: https://<your backend domain>/_allauth/browser/v1` * `NODE_VERSION: 22.13.0` * `NPM_VERSION: 11.3.0` 5. Click “Save and Deploy” 6. After the initial deployment you can add a custom domain to your front end. ## Known Limitations [Section titled “Known Limitations”](#known-limitations) This is an experimental feature meant to provide a starting point for building a standalone React front end against your Pegasus app. It is *not* a complete, production-ready app, in the same way that standard Pegasus is. Here are some of the larger limitations: * Only a very limited subset of Pegasus functionality is available in the front end. * The front end styles only support Tailwind CSS. * Internationalization (translations) is not supported. ## Troubleshooting [Section titled “Troubleshooting”](#troubleshooting) **I’m getting a “URI malformed” error when I load the app.** This is likely because your `frontend/.env` file does not exist, or your `VITE_APP_BASE_URL` is not properly set inside it. See `frontend/.env.example` for an example `.env` file suitable for development. ## Feedback [Section titled “Feedback”](#feedback) If you have any feedback on this feature I would love to hear it! Feedback could include bug reports, feature requests, or any suggested architectural changes. # Front End Design Patterns > JavaScript design patterns for Pegasus projects including site-wide libraries, npm package integration, and SiteJS utility functions with Vite and Webpack. This section provides guidance on common front end tasks. ## Providing site-wide JavaScript [Section titled “Providing site-wide JavaScript”](#providing-site-wide-javascript) Sometimes you need access to a library or piece of code you’ve written on many different pages. Pegasus has a few patterns for dealing with this. ### `site.js` and `app.js` [Section titled “site.js and app.js”](#sitejs-and-appjs) There are two “site-wide” JavaScript files used in Pegasus. The `site.js` file contains code that you want loaded *on every page*. Its bundle file (`site-bundle.js`) is included in your `base.html`. This is a good place to put global code, library imports, etc. which should always be available. The `app.js` file contains code that you want loaded *on some pages*---typically after login. This is a good place to put helper functions that are only used in a few places. The `app-bundle.js` file is *not* included by default, and so must be explicitly added to any page that needs it, like this: Vite: ```jinja {% load django_vite %} {% block page_js %} {% vite_asset 'assets/javascript/app.js' %} {% endblock page_js %} ``` Webpack: ```jinja {% load static %} {% block page_js %} <script src="{% static 'js/app-bundle.js' %}"></script> {% endblock page_js %} ``` The distinction between `site` and `app` is somewhat arbitrary---if you wanted you could create page-level files for every function/module, or dump all your code into `site.js`. But it’s done to balance page speed and complexity. The more individual JavaScript files you have, the less code will have to be loaded on any individual page. This should generally make your site faster. But it’s more complex to maintain as each new file needs to be added to your `vite.config.ts`/`webpack.config.js`.
Meanwhile, dumping everything in a single file is easier to maintain, but can lead to bulky initial page load times. After the initial load, the browser’s cache should help, so this can be acceptable for most pages (apart from your landing page / marketing site). Because of this, Pegasus recommends keeping `site.js` lightweight, and lumping together code after login into `app.js`. But feel free to do something differently! ## Making an existing package available [Section titled “Making an existing package available”](#making-an-existing-package-available) To make a library available on every page, you can follow these steps. Note: there are many ways to do this, but this is the way it’s currently handled in Pegasus. 1. Install the library via `npm install <library>`. 2. Create a javascript file for the library in `assets/javascript`. 3. Expose the library via the library’s instructions. E.g. `window.library = require('library')`. This step will vary based on the library. 4. Import the library in your `site.js` or `app.js` file (see above for the distinction). 5. Rebuild your front end. You can see an example with HTMX (version 2023.2 and later) or Alpine.js (version 2023.3 and later). ### Example: Adding simple-datatables [Section titled “Example: Adding simple-datatables”](#example-adding-simple-datatables) As an example, if you want to add [simple-datatables](https://github.com/fiduswriter/simple-datatables) to your project, first install it: ```bash npm install simple-datatables ``` Then add the following lines to your `site.js` file: ```javascript import { DataTable } from 'simple-datatables'; window.DataTable = DataTable; ``` Then you can access the `DataTable` object from any page: ```javascript // initialize the table with id "mytable" const dataTable = new DataTable("#mytable", { searchable: true, fixedHeight: true, }); ``` ### Using the SiteJS library [Section titled “Using the SiteJS library”](#using-the-sitejs-library) Pegasus previously used [webpack libraries](https://webpack.js.org/guides/author-libraries/) to expose helper code; however, it has shifted to providing this functionality by directly updating the `window.SiteJS` object. If you’d like to add utility functions to sitewide JavaScript, you can update this object in any front end file. For example in `app.js` we add modal functionality as follows: ```javascript import { Modals as AppModals } from './web/modals'; // Ensure SiteJS global exists if (typeof window.SiteJS === 'undefined') { window.SiteJS = {}; } // Assign this entry's exports to SiteJS.app window.SiteJS.app = { Modals: AppModals, } ``` Then, as long as you import the `app-bundle.js` file (as per above), you will have all the exported code available via the `SiteJS` library. So you can run: ```javascript const modal = SiteJS.app.Modals.initializeModal(); ``` The convention for using this functionality is: ```javascript SiteJS.<entry>.<exported object> ``` Where `<entry>` is the name of the file in the `module.exports` section of `vite.config.ts`/`webpack.config.js`. You can look at existing Pegasus examples to get a better sense of how this works. Note that this functionality was first built on Webpack libraries, but has since been made explicit in the code and is only a *convention*. You can use any other convention you want, but this is the one that Pegasus uses. # Migrating from Webpack to Vite > Step-by-step guide to migrate Pegasus projects from Webpack to Vite bundler with React JSX file extensions and template updates. This page describes how to migrate your project from Webpack to Vite.
A video walkthrough of the process is also available. ## 1. Upgrade your project to 2025.5 [Section titled “1. Upgrade your project to 2025.5”](#1-upgrade-your-project-to-20255) First upgrade your project to 2025.5 [according to the normal process](/upgrading). Do *not* change your bundler setting at this stage. Do normal testing and verification that everything is working with Webpack on version 2025.5. ## 2. React only: Rename all `.js` files using JSX to `.jsx` [Section titled “2. React only: Rename all .js files using JSX to .jsx”](#2-react-only-rename-all-js-files-using-jsx-to-jsx) Vite is stricter than Webpack about file extensions, so any file that uses JSX syntax (i.e. React code) needs to be in a file with a `.jsx` extension. After changing the extensions of your files you may need to tweak your JavaScript imports. You’ll also need to modify your `webpack.config.js` file if any referenced files have changed. ## 3. Change your bundler setting from “Webpack” to “Vite” and do another Pegasus upgrade [Section titled “3. Change your bundler setting from “Webpack” to “Vite” and do another Pegasus upgrade”](#3-change-your-bundler-setting-from-webpack-to-vite-and-do-another-pegasus-upgrade) Next, in your project settings, change the bundler to “Vite” and perform another upgrade. This should handle *most* of the Webpack -> Vite migration for you, including migrating your npm packages, build commands, and built-in CSS / JavaScript bundles. During this step *do not delete your `webpack.config.js` file*, as you’ll want to reference it for the next step. ## 4. Add your custom CSS / JavaScript exports to your vite config [Section titled “4. Add your custom CSS / JavaScript exports to your vite config”](#4-add-your-custom-css--javascript-exports-to-your-vite-config) Next find the `entry` section of your project’s `webpack.config.js` that configures your exported bundle files. It will look something like this, though the exact files listed will depend on your project settings: ```javascript entry: { 'site-base': './assets/site-base.js', // base styles shared between frameworks 'site-tailwind': './assets/site-tailwind.js', // required for tailwindcss styles site: './assets/javascript/site.js', // global site javascript app: './assets/javascript/app.js', // logged-in javascript dashboard: './assets/javascript/shadcn-dashboard/index.jsx', teams: './assets/javascript/teams/teams.jsx', 'edit-team': './assets/javascript/teams/edit-team.jsx', 'chat': './assets/javascript/chat/chat.jsx', }, ``` Importantly, *if you have added or changed anything in this section, you will need to re-apply those changes to the `build.rollupOptions.input` section of `vite.config.ts`.* The section that you need to modify will look something like this: ```javascript build: { rollupOptions: { input: { 'site-base': path.resolve(__dirname, './assets/site-base.js'), 'site-tailwind': path.resolve(__dirname, './assets/site-tailwind.js'), 'site': path.resolve(__dirname, './assets/javascript/site.js'), 'app': path.resolve(__dirname, './assets/javascript/app.js'), 'dashboard': path.resolve(__dirname, './assets/javascript/shadcn-dashboard/index.jsx'), 'teams': path.resolve(__dirname, './assets/javascript/teams/teams.jsx'), 'edit-team': path.resolve(__dirname, './assets/javascript/teams/edit-team.jsx'), 'chat': path.resolve(__dirname, './assets/javascript/chat/chat.jsx'), }, }, }, ``` You should update this in the same pattern with any changes you have made to your webpack config.
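As an illustration, suppose you had added a custom page-level bundle to your webpack config; the `'reports'` entry below is a purely hypothetical example, not something Pegasus generates. The corresponding change to `vite.config.ts` would look something like this sketch:

```javascript
// webpack.config.js (old) -- a hypothetical custom entry you had added:
//   'reports': './assets/javascript/reports.jsx',

// vite.config.ts (new) -- re-apply it inside build.rollupOptions.input:
build: {
  rollupOptions: {
    input: {
      // ...the entries generated by Pegasus...
      'reports': path.resolve(__dirname, './assets/javascript/reports.jsx'),
    },
  },
},
```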
## 5. Update your front end file references in templates [Section titled “5. Update your front end file references in templates”](#5-update-your-front-end-file-references-in-templates) Finally, update any Django templates you had that imported bundle files. Specifically, references that look something like this: ```jinja {% block page_js %} <script src="{% static 'js/app-bundle.js' %}"></script> {% endblock %} ``` Will need to be updated to: ```jinja {% block page_js %} {% vite_asset 'assets/javascript/app.js' %} {% endblock %} ``` Note that this uses the *source* file path instead of the bundle file. You will also need to add `{% load django_vite %}` to the top of the template. And if the file uses React you’ll also need to add the `{% vite_react_refresh %}` tag to the `page_js` section. ## 6. Update Webpack libraries [Section titled “6. Update Webpack libraries”](#6-update-webpack-libraries) *Most projects won’t need to do this.* If you have added any code that relies on [Pegasus’s `SiteJS` library](/front-end/design-patterns/#using-the-sitejs-library) you will need to update it to explicitly expose itself on the window object. In the associated JavaScript file (in this case `library.js`), you need to change something like: ```javascript export { MyLibrary }; ``` To: ```javascript if (typeof window.SiteJS === 'undefined') { window.SiteJS = {}; } window.SiteJS.library = { MyLibrary: MyLibrary, } ``` ## 7. Rebuild and run your front end [Section titled “7. Rebuild and run your front end”](#7-rebuild-and-run-your-front-end) Finally, rebuild and run your front end, according to the [vite docs](/front-end/vite): ```bash npm install npm run dev ``` And confirm everything is working as expected. Once it is, you can delete your `webpack.config.js` file. If you run into any issues during the migration, reach out via standard support channels. # Front End Overview > Modern JavaScript build pipeline with Vite or Webpack, TypeScript support, CSS compilation, and hybrid Django template integration architecture. *This page documents the front end files that are integrated into Django. See [the standalone front end docs](/experimental/react-front-end) for the separate React front end.* ## Architecture [Section titled “Architecture”](#architecture) Pegasus’s front-end architecture is a hybrid model, with a front-end codebase that is compiled and used directly in Django templates via Django’s static files infrastructure. There are two setups, one built on top of Webpack, and a more modern one built on top of Vite. The architecture of these is very similar, just built on different tools. Big picture, the front end consists of a build tool ([Vite](https://vite.dev/) or [Webpack](https://webpack.js.org/)) and a compiler ([esbuild](https://esbuild.github.io/) or [Babel](https://babeljs.io/)) which compiles the front-end code into bundle files that can be referenced using Django’s static file system, as represented in the diagrams below. **Vite** ![Vite Build Pipeline](/_astro/js-pipeline-with-django-vite.QgrpxV9D_ZmxfJY.webp) **Webpack** ![Build Pipeline](/_astro/js-pipeline-with-django.CTtPRtGS_ZpXqRz.webp) Pegasus’s styles use either the [Tailwind](https://tailwindcss.com/), [Bootstrap](https://getbootstrap.com/) or [Bulma](https://bulma.io/) CSS frameworks, and building the CSS files is included as part of the front-end build pipeline. For more details on CSS in Pegasus, see the [CSS documentation](/css/overview).
**For a much more detailed overview of the rationale behind this architecture, and the details of the setup, see the [Modern JavaScript for Django Developers](https://www.saaspegasus.com/guides/modern-javascript-for-django-developers/) series.** ## Choosing a front end build tool [Section titled “Choosing a front end build tool”](#choosing-a-front-end-build-tool) Pegasus currently lets you choose between Vite and Webpack as the primary build tool for your front end. Choosing is relatively simple: **if you don’t know what you want, use Vite**. Vite is faster, more modern, and includes a number of features not supported by Webpack, including: 1. Hot Module Replacement (HMR)---a development feature that lets code changes in your front end files automatically update without a full-page reload. 2. Code splitting---a production feature that breaks your front end files into individual bundles that encapsulate code dependencies. This leads to less redundant JavaScript and faster page loads. The main reason to choose Webpack is if you are already using it and don’t want to switch tools. See [this video](https://www.youtube.com/watch?v=qVwRygtffiw) for more on the benefits of Vite over Webpack. ## Front-end files [Section titled “Front-end files”](#front-end-files) The source front-end files live in the `assets` directory, while the compiled files get created in the `static` directory. Generally you should only edit the front-end files in `assets`, and compile them into `static` using the instructions below. ## Prerequisites to building the front end [Section titled “Prerequisites to building the front end”](#prerequisites-to-building-the-front-end) To compile the front-end JavaScript and CSS files it’s expected that you have installed: * [Node.js](https://nodejs.org/) * [NPM](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) Pegasus is developed and tested on the latest LTS releases, which (at the time of this writing) are Node version 22 and npm 11. Later versions will likely work, but aren’t regularly tested. It’s recommended to use [`nvm`](https://github.com/nvm-sh/nvm) to manage different node/npm environments more easily. `nvm` is essentially `virtualenv` for Node.js/npm. Alternatively, you can build and run your front end with Docker. However, this has been known to cause performance problems in some environments. ## Initial setup [Section titled “Initial setup”](#initial-setup) Once you’ve installed Node and NPM, you can install your front end dependencies by running: ```bash npm install ``` or in Docker: ```bash make npm-install ``` Run this from your project’s root directory. It will install all the dependencies necessary to build the front end and will also generate a `package-lock.json` file. **It is recommended that you add the `package-lock.json` to source control for consistency across installations.** ## Building in Development [Section titled “Building in Development”](#building-in-development) The development setup is slightly different between Vite and Webpack. For details see these links: * [Vite in Development](/front-end/vite/#vite-in-development) * [Webpack in Development](/front-end/webpack/#development-with-webpack) ## Building for production [Section titled “Building for production”](#building-for-production) To build for production, run: ```bash npm run build ``` or in Docker: ```bash make npm-build ``` This will compress your files, remove logging statements, etc.
In most [supported deployment setups](/deployment/overview), this will be run automatically for you as part of your deployment. ## TypeScript and type checking [Section titled “TypeScript and type checking”](#typescript-and-type-checking) Since the 2022.6 release, Pegasus includes TypeScript as part of the front end code. You can write TypeScript or JavaScript code and it will be transpiled to work in a browser as part of the build pipeline. The build pipeline does *not* explicitly do type checking. To do type checking you can run: ```bash npm run type-check ``` Or in Docker: ```bash make npm-type-check ``` Type checks will also automatically run on new pull requests if you have enabled Github Actions on your project. # Troubleshooting > Fix Docker cross-platform file watching issues with webpack polling configuration for Windows and other development environments. ## Changes are not being picked up when running `make npm-watch` in a Docker container [Section titled “Changes are not being picked up when running make npm-watch in a Docker container”](#changes-are-not-being-picked-when-running-make-npm-watch-in-a-docker-container) Some Docker configurations do not properly pick up file-system changes across operating systems. This can be a problem, e.g. when running on certain Windows environments, and results in changes you make not being picked up automatically. This can be fixed by updating `webpack.config.js` to use polling by adding this: ```javascript module.exports = { //... watchOptions: { poll: 1000, }, }; ``` Alternatively, you can switch to installing/running NPM natively instead of in Docker. This is a good option if you are also getting poor performance, which can also be caused by cross-platform issues. # Vite-Specific Instructions > Configure Vite with django-vite for fast development server, Hot Module Replacement, and seamless Django template integration. ## Vite architecture overview [Section titled “Vite architecture overview”](#vite-architecture-overview) The Vite integration with Django is managed by [`django-vite`](https://github.com/MrBin99/django-vite). Big picture this works in two ways: 1. In *development*, front end assets are served directly from Vite’s server. 2. In *production*, front end assets are built and served through Django’s static files system. This toggle is configured through the `"dev_mode"` setting in your default `DJANGO_VITE` config in `settings.py`. Out of the box, this setting is tied to the `settings.DEBUG` flag. ## Vite in Development [Section titled “Vite in Development”](#vite-in-development) Unlike Webpack, Vite does *not* use bundle files in development. Instead, your front end files are served by Vite’s development server (which is configured through `django-vite`). This workflow gives you the benefit of added speed and fast page updates without reloading your browser, but it does mean that **your Vite server must be running at all times during development**. To run your Vite server and serve your front end files you should run: ```bash npm run dev ``` Or in Docker: ```bash make npm-dev ``` This command will also automatically refresh your front end whenever any changes are made.
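In practice this means running two processes side by side during development. The snippet below is a minimal sketch for a native (non-Docker) setup; adjust the commands if you are using Docker or a different package manager:

```bash
# terminal 1: the Django development server
python manage.py runserver

# terminal 2: the Vite dev server, which serves your front end assets with HMR
npm run dev
```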
## Adding files to Django templates [Section titled “Adding files to Django templates”](#adding-files-to-django-templates) To add CSS / JS files to Django templates you can use the `vite_asset` template tag from django-vite: ```jinja {% load django_vite %} {% vite_asset '<path-to-file>' %} ``` If you are using React you also need to add the `vite_react_refresh` tag to get HMR working: ```jinja {% load django_vite %} {% vite_react_refresh %} {% vite_asset '<path-to-file>' %} ``` ## Configuration [Section titled “Configuration”](#configuration) The [django-vite docs](https://github.com/MrBin99/django-vite) provide details about the vite configuration. Here is the relevant declaration from `settings.py`: ```python DJANGO_VITE = { "default": { "dev_mode": env.bool("DJANGO_VITE_DEV_MODE", default=DEBUG), "manifest_path": BASE_DIR / "static" / ".vite" / "manifest.json", } } ``` This should work without modification for most projects. If for some reason you want to change your vite server port or base path in `vite.config.ts` you will have to make corresponding changes to your `django-vite` settings as per their documentation. ## Production [Section titled “Production”](#production) In production, the above configuration should work out of the box. Production builds will disable `settings.DEBUG` which will in turn disable vite’s dev mode. If you need more fine-grained control, or want to test a production build, you can also explicitly set the `DJANGO_VITE_DEV_MODE` environment variable to `false`. You will also need to set up [Django’s static file serving](https://docs.saaspegasus.com/deployment/production-checklist/#check-your-static-file-setup). Again, if you’re using a supported Pegasus deployment mode, this should already be handled for you. # Webpack-Specific Instructions > Use Webpack for Django front-end builds with dev-watch mode, bundle compilation, and static file management for JavaScript and CSS assets. ## Development with Webpack [Section titled “Development with Webpack”](#development-with-webpack) Whenever you make modifications to the front-end files you will need to run the following command to rebuild the compiled JS bundles and CSS files: ```bash npm run dev ``` You can also set it up to watch for changes by running: ```bash npm run dev-watch ``` or in Docker: ```bash make npm-watch ``` ## Bundled Static Files and Source Control [Section titled “Bundled Static Files and Source Control”](#bundled-static-files-and-source-control) For ease of initial setup, the front-end bundle files can be optionally included with the Pegasus codebase. This allows you to get up and running with Pegasus without having to set up the Webpack build pipeline. However, keeping these files in source control will typically result in a lot of unnecessary changes and merge conflicts. Instead, it is recommended that you add the compiled CSS and JavaScript bundle files to your `.gitignore` so they are no longer managed by source control, and have your developers build them locally using the steps above. **You can switch to this workflow by unchecking the “include static files” option in your project configuration.** For production deployment, see the [production guidance](/deployment/production-checklist/#optimize-your-front-end) on this. # Working with Python Packages (pip-tools) > Manage Python dependencies with pip-tools using requirements.in files, pip-compile commands, and Docker container rebuilds for package management. Pegasus uses [pip tools](https://github.com/jazzband/pip-tools) to manage Python dependencies.
This allows for more explicit dependency management than a standard `requirements.txt` file. ### Requirements Files [Section titled “Requirements Files”](#requirements-files) Pegasus has multiple requirements files, which live in the `requirements/` folder. For each set of requirements there are two files, one ending in `.in` and the other ending in `.txt`. The files ending in `requirements.in` have the first-class packages your app depends on. They do not have versions in them, though you can add version numbers if you want to. **These are the files that you should edit when adding/removing packages**. The files ending in `requirements.txt` have the full list of packages your app depends on, including the dependencies of your dependencies (recursively). This file is automatically generated from the `.in` counterpart, and **should typically not be edited by hand**. The `requirements.in`/`.txt` files are the main requirements for your application, the `dev-requirements.in`/`.txt` files are for development only, and the `prod-requirements.in`/`.txt` files are for production only. ### Working with requirements [Section titled “Working with requirements”](#working-with-requirements) To modify the requirements files, you first need to install `pip-tools`. It is included as a dependency in the `dev-requirements.txt` file, so if you’ve followed the local setup steps it should already be installed. Then follow the instructions below, depending on what you want to do: #### Adding or removing a package [Section titled “Adding or removing a package”](#adding-or-removing-a-package) To add a package, add the package name to `requirements/requirements.in`. To remove a package, remove it from `requirements/requirements.in`. After finishing your edits, rebuild your `requirements.txt` file by running: ```bash # native version pip-compile requirements/requirements.in # docker version make pip-compile ``` After running this you should see the package and its dependencies added to the `requirements.txt` file. From there you can install the new dependencies, as [described below](#installing-packages). #### Upgrading a package [Section titled “Upgrading a package”](#upgrading-a-package) To upgrade a package, you can run the following command. In this example we are upgrading `django`: ```bash # native version pip-compile --upgrade-package django requirements/requirements.in # docker version make pip-compile ARGS="--upgrade-package django" ``` To upgrade *all* packages, you can run: ```bash # native version pip-compile --upgrade requirements/requirements.in # docker version make pip-compile ARGS="--upgrade" ``` From there you can install the new dependencies, as [described below](#installing-packages). #### Installing Packages [Section titled “Installing Packages”](#installing-packages) If you’re running Python natively, you can install your packages with the following command. Run this after activating your virtual environment: ```bash pip install -r requirements/requirements.txt ``` In Docker your Python packages are installed at container *build* time. This means that any time you want to change your installed packages, you have to rebuild your container. You can do this by running: ```bash docker compose build ``` Confusingly, running `pip install` or `docker compose exec web pip install` does *not* work---packages installed into a running container are lost the next time the container is recreated.
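Putting the pieces together, the end-to-end flow for adding a package in a native (non-Docker) setup might look like the sketch below. The package name `django-anymail` is just a hypothetical example:

```bash
# 1. add the package to the .in file (or edit the file in your editor)
echo "django-anymail" >> requirements/requirements.in

# 2. regenerate the pinned requirements.txt
pip-compile requirements/requirements.in

# 3. install the newly pinned dependencies into your active virtual environment
pip install -r requirements/requirements.txt
```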
#### The `make requirements` shortcut for Docker [Section titled “The make requirements shortcut for Docker”](#the-make-requirements-shortcut-for-docker) Pegasus ships with a convenience target for rebuilding requirements with Docker. Any time you make changes to a `requirements.in` file you can run it with: ```bash make requirements ``` Behind the scenes this will: 1. Rebuild all your `-requirements.txt` files from your `-requirements.in` files with `uv`. 2. Rebuild your containers (installing the new packages). 3. Restart your containers. For more information, see the [docker documentation](/docker/#updating-python-packages). # Python Environment Setup > Set up Pegasus development environments using Docker, uv package manager, virtual environments, or IDE integration for Django projects. ## Choosing between Docker and native Python [Section titled “Choosing between Docker and native Python”](#choosing-between-docker-and-native-python) You can either run Python in a Docker container or natively on your development machine, through various different options. Docker is easier to set up and provides a more consistent way to package your application, however it is slower, takes more resources, and is more complex to integrate with IDEs, debuggers, and other development tools. Native Python can be more difficult to set up, especially on Windows, but once it is working it is typically easier to work with. Docker and uv are the recommended options. You can pick either of these and then switch if you run into problems. ## Using Docker [Section titled “Using Docker”](#using-docker) See the [Docker documentation](/docker) to set up your development environment with Docker. Docker environments support using uv or pip-tools as a package manager. Uv is recommended. For help adding and removing Python packages after setup, see the documentation for [uv](/python/uv) or [pip-tools](/python/packages). ## Using uv [Section titled “Using uv”](#using-uv) It’s recommended that new projects not using docker use [uv](https://docs.astral.sh/uv/) to manage their Python environments. It is faster and simpler to use than other alternatives, and can even install and set up Python for you. ***Note: uv support is only available from Pegasus version 2024.12 onwards. To use uv you must select it under the “Python package manager” setting in your project configuration.*** To set up Python with uv, first [install uv](https://docs.astral.sh/uv/getting-started/installation/). *Pegasus requires uv version 0.5 or higher.* On Linux / Mac: ```bash curl -LsSf https://astral.sh/uv/install.sh | sh ``` On Windows: ```bash powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" ``` After installing `uv`, go into your project directory and run: ```bash uv sync ``` This should: 1. Install the right version of Python (if necessary). 2. Create a new virtual environment in a `.venv` folder inside your project. 3. Install all the project dependencies. To see if it worked, run: ```bash uv run manage.py shell ``` If you get a Python shell that looks something like this, it worked! ```plaintext $ uv run manage.py shell Python 3.12.6 (main, Sep 9 2024, 22:11:19) [Clang 18.1.8 ] on linux Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> ``` You should be able to use `uv run` to run any Python command on your project, or you can run: ```bash source .venv/bin/activate ``` in your project root to use Python and other commands normally. 
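For day-to-day package management after setup, uv has its own commands (covered in more detail on the [uv page](/python/uv)). The following is just a minimal sketch, with `django-anymail` as a purely hypothetical package name:

```bash
# add a dependency (updates pyproject.toml, the lockfile, and your .venv)
uv add django-anymail

# remove it again
uv remove django-anymail

# re-sync your environment with the lockfile, e.g. after pulling changes
uv sync
```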
## Using Native / System Python (with Virtual Environments) [Section titled “Using Native / System Python (with Virtual Environments)”](#using-native--system-python-with-virtual-environments) The following are other options---which are typically recommended for developers who are already familiar with Python and one of these choices. Unlike `docker` and `uv`, most of these require having Python installed on your machine, so if you haven’t already, first install Python version 3.12+: * On Mac and Windows you can [download Python 3.12 installers from here](https://www.python.org/downloads/). * On Ubuntu it’s recommended to [use the deadsnakes repo](https://www.debugpoint.com/install-python-3-12-ubuntu/). *Note: running on older Python versions may work, but 3.12 is what’s tested and supported.* After installing Python, set up your virtual environment through one of the following methods: #### Using your IDE [Section titled “Using your IDE”](#using-your-ide) Many IDEs will manage your environments for you. This is a great and simple option that you won’t have to fiddle with. Check your specific IDE’s docs for guidance on how to do this. * [Virtual environments in VS Code](https://code.visualstudio.com/docs/python/environments) * [Virtual environments in PyCharm](https://www.jetbrains.com/help/pycharm/creating-virtual-environment.html) **Be sure to choose Python 3.12 when setting up your virtual environment.** If you don’t see 3.12 as an option, you may need to install it first. #### Using venv [Section titled “Using venv”](#using-venv) The easiest way to set up a virtual environment manually is to use Python’s built in [`venv` tool](https://docs.python.org/3/library/venv.html#module-venv): ```bash python3.12 -m venv /path/to/environment ``` In the command above, you should replace `python3.12` with the Python version you are using, and `/path/to/environment` with the location on your system where you want to store the environment. This location can be somewhere in your project directory (`.venv` and `venv` are common choices) or anywhere else on your system. `/home/<username>/.virtualenvs/` is a common choice that works well with `virtualenvwrapper` (see below). To activate/use the environment run: ```bash source /path/to/environment/bin/activate ``` **You will need to activate this environment every time you work on your project.** #### Using virtualenv [Section titled “Using virtualenv”](#using-virtualenv) [virtualenv](https://virtualenv.pypa.io/en/stable/) is an alternate option to `venv`. On later versions of Python there’s no real reason to use it, but if you’re familiar with it you can keep using it without any issues. First make sure [it’s installed](https://virtualenv.pypa.io/en/stable/installation.html) and then run the following command: ```bash virtualenv -p python3.12 /path/to/environment ``` Like above, you should replace `python3.12` with the version you want to use, and `/path/to/environment` with wherever you want to set up the environment. Like with `venv`, to activate the environment run: ```bash source /path/to/environment/bin/activate ``` And, like `venv`, **you will need to activate this environment every time you work on your project.** #### Using virtualenvwrapper [Section titled “Using virtualenvwrapper”](#using-virtualenvwrapper) [Virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/) is an optional convenience tool that helps manage virtual environments. You can use it with either `venv` or `virtualenv` above.
If you choose to use `virtualenvwrapper` you can use the following command to create your environment. This can be run from anywhere since `virtualenvwrapper` manages the location of your envs for you (usually in `/home/<username>/.virtualenvs/`). ```bash mkvirtualenv -p python3.12 {{ project_name }} ``` Then to activate the environment you use: ```bash workon {{ project_name }} ``` Note: You can use `virtualenvwrapper` no matter how you created the environment. It provides a nice set of helper tools, but can be a bit finicky to set up.
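For reference, here are a few of the other `virtualenvwrapper` helpers you are likely to reach for. This is only a minimal sketch of standard virtualenvwrapper commands, not a Pegasus-specific workflow:

```bash
# list the environments virtualenvwrapper manages
lsvirtualenv

# leave the currently active environment
deactivate

# delete an environment you no longer need
rmvirtualenv {{ project_name }}
```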