If you’re building Server-Side Rendering applications with React (Next.js) or Vue.js (Nuxt.js), you will have to deploy them with some process control tool to keep them running. I have seen a lot of websites teaching how to do this with PM2, but I decided to deploy SSR applications using Supervisord. It works the same way, and it’s a very common tool, so chances are you already have Supervisord on your server, especially if you’ve followed the Deploy for Kids tutorial.
The number one reason to have a React or Vue.js SSR app is SEO. Googlebot doesn’t work well with CSR (Client-Side Rendering) and can’t properly index your pages that way. So, having an SSR app running on your server means you have Node.js running some program that you’ve built in JavaScript. But you can’t just run node in a screen session and walk away: you need a process control tool to keep it running if the server restarts or the application crashes for some reason.
Installing Supervisord:
sudo apt-get install supervisor
Now, create a new configuration file for your SSR application:
sudo vi /etc/supervisor/conf.d/my-ssr-app.conf
That’s the content:
[program:myappname]
directory=/home/username/yourproject/
command=npm run start
user=username
autostart=true
autorestart=true
Now, you have to tell Supervisord about this new process:
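That’s done with supervisorctl: reread picks up the new config file, update starts the new program, and status lets you check that it’s running.
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status myappname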
In this post, I’ll show how to containerize an existing project using Docker. I’ve picked a random project from GitHub that had an open issue asking to Dockerize it, so I could contribute and use it as an example here.
Why in the world would you want to Dockerize an existing Django web application? There are plenty of reasons, but if you don’t have one, just do it for fun!
I decided to use Docker because one of my applications was getting hard to install: lots of system requirements, multiple databases, Celery, and RabbitMQ. So every time a new developer joined the team or had to work from a new computer, the installation took a long time.
Difficult installations lead to time losses, time losses lead to laziness, and laziness leads to bad habits, and it goes on and on… For instance, one might decide to use SQLite instead of Postgres and not notice truncation problems on a table until they hit the test server.
If you don’t know what Docker is, just picture it as a huge virtualenv that, instead of containing just some Python packages, has containers isolating everything from the OS to your app, databases, workers, etc.
Getting Things Done
Ok, talk is cheap. Show me some code, dude.
First of all, install Docker. I did it on Ubuntu and macOS without any problem, but on Windows Home I couldn’t get it working.
To tell Docker how to run your application as a container, you’ll have to create a Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /webapps
WORKDIR /webapps
# Installing OS Dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
libsqlite3-dev
RUN pip install -U pip setuptools
COPY requirements.txt /webapps/
COPY requirements-opt.txt /webapps/
RUN pip install -r /webapps/requirements.txt
RUN pip install -r /webapps/requirements-opt.txt
ADD . /webapps/
# Django service
EXPOSE 8000
So, let’s go line by line:
Docker Images
FROM python:3.6
Here we’re using an image from Docker Hub, i.e., a pre-built image that we can build on top of. In this case, python:3.6 is a Debian-based image that already has Python 3.6 installed on it.
Environment Variables
You can create all sorts of environment variables using ENV.
# Here we can define environment variables for our container
ENV PYTHONUNBUFFERED 1
For instance, if you use them for storing your Django SECRET_KEY, you could define it here and read it in your settings.py:
import os
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
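For that to work, the variable needs to be defined somewhere. You could declare it with another ENV line in the Dockerfile, as sketched below, although for real secrets you’d usually inject the value at runtime (for example via docker-compose) instead of baking it into the image. The value here is just a placeholder.
ENV DJANGO_SECRET_KEY change-me-in-production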
Run Commands
Docker RUN commands are kind of obvious: you’re running a command “inside” your container. I’m quoting “inside” because Docker builds the image in intermediate layers and caches them, so it doesn’t have to run the same command again when you rebuild the image.
RUN mkdir /webapps
WORKDIR /webapps
# Installing OS Dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
libsqlite3-dev
RUN pip install -U pip setuptools
COPY requirements.txt /webapps/
COPY requirements-opt.txt /webapps/
RUN pip install -r /webapps/requirements.txt
RUN pip install -r /webapps/requirements-opt.txt
ADD . /webapps/
In this case, we are creating the directory that will hold our files, /webapps/.
WORKDIR is also kind of self-evident. It just tells Docker to run the following commands from the indicated directory.
After that, I’m installing one OS dependency. When we rely only on requirements.txt we are not including any OS requirement for the project and, believe me, for large projects you’ll have lots and lots of OS requirements.
COPY and ADD
COPY and ADD are similar. Both copy a file from your computer (the host) into the container (the guest OS). In my example, I’m just copying the Python requirements so I can pip install them.
EXPOSE
The EXPOSE instruction documents which port the container listens on, so that it can be published to the host (with -p on docker run, or the ports section in docker-compose).
# Django service
EXPOSE 8000
Ok, so now what? How can we add more containers and make them work together? What if I need PostgreSQL running in a container too? Don’t worry, here we go.
Docker-Compose
Compose is a tool for defining and running multiple Docker containers. It’s configured with a YAML file; you just need to create a docker-compose.yml in your project folder. A sketch of that file is shown below, and the next sections break down its main pieces.
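This is a minimal sketch of what such a file could look like for this project. The Postgres image tag, the service names, and the run_web.sh entry point are assumptions based on the snippets discussed below.
version: '3'
services:
  db:
    image: postgres:9.6
  web:
    build: .
    command: sh run_web.sh
    volumes:
      - .:/webapps
    ports:
      - "8000:8000"
    links:
      - db
    depends_on:
      - db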
Once upon a time there was Vagrant, which was a way to run a project inside a Virtual Machine while easily configuring it: forwarding ports, provisioning requirements and sharing volumes. Your machine (the host) could share a volume with your Virtual Machine (the guest). With Docker it’s exactly the same: when you write to a file on a shared volume, the file is written inside your container as well.
volumes:
- .:/webapps
In this case, the current directory (.) is being shared as /webapps inside the container.
LINKS
links:
- db
You can refer to another container that belongs to your compose file by its name. Since we created a db container for our Postgres, we can link it to our web container. You can see in our settings.py file that I’ve used 'db' as the host.
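For reference, the database settings could look something like this, assuming the default user and database of the official Postgres image:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',  # the name of the database service in docker-compose.yml
        'PORT': 5432,
    }
}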
DEPENDS_ON
In order for your application to work, your database has to be up before the web container starts; otherwise, it will raise an exception. depends_on controls the start-up order, although it only waits for the container to start, not for Postgres to be ready to accept connections.
depends_on:
- db
Command
command is the default command your container will run as soon as it starts.
For our example, I’ve created a run_web.sh script that runs the migrations, collects the static files and starts the development server.
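The script itself isn’t reproduced here; a rough sketch of what it could contain is:
#!/bin/sh
python manage.py migrate --noinput
python manage.py collectstatic --noinput
python manage.py runserver 0.0.0.0:8000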
One can argue that running migrate automatically every time the container starts is not a good practice. I agree. You can run it directly on the web container instead. You can access your container (just like the good ol’ vagrant ssh):
docker-compose exec web bash
If you’d like you can run it without accessing the container itself, just change the last argument from the previous command.
docker-compose exec web python manage.py migrate
The same for other commands
docker-compose exec web python manage.py test
docker-compose exec web python manage.py shell
Running Docker
With our Dockerfile, docker-compose.yml and run_web.sh set in place, just run it all together:
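That boils down to a single command, which also builds the images on the first run:
docker-compose up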
At first, I was using run instead of exec. But Bruno FS convinced me that exec is better because you’re executing a command inside the container you’re already running, instead of creating a new one.
There are a lot of tutorials out there, especially in English. Here goes another one. I wrote it originally in Portuguese.
The reason many people have problems deploying is that they don’t pay enough attention to details. Deploying is easy when you are familiar with all the parts involved. You must know how to authenticate through SSH, be used to the command line and Linux, understand how to configure and set up your project, have an idea of what serving static files means, what Gunicorn is… Ok, it’s not that simple. That’s why there are a lot of deploy tools, kits, and tutorials. Currently, with Ansible, Docker and whatever the kids are using these days, it’s easier to deploy, but what happens under the hood gets more abstract.
Maybe in a couple of years this post will be obsolete, if it isn’t already, what with serverless and everything else. Anyway, only a few people want to learn how to deploy Django the way I’ll show here, but if it helps at least one person, I’ll be satisfied.
Enjoy this Old-Style guide!
The Server
I presume you don’t have a server or an account on AWS, DigitalOcean, Linode… nothing! You have to create an account with one of them and launch a server with the distro you want. If it’s your first time, don’t go with AWS, because it’s way more complicated than the others.
In this tutorial I’m using Ubuntu 16.04, the most common distro you’ll see around. You can also pick Debian if you like.
Initial Set Up
Configure server timezone
sudo locale-gen --no-purge --lang pt_BR # I'm using pt_BR, because HUE HUE BR BR
sudo dpkg-reconfigure tzdata
Update and upgrade OS Packages:
sudo apt-get update
sudo apt-get -y upgrade
Installing Python 3.6 over Python 3.5
Replace Python 3.5, which is the default on this distro, with Python 3.6.
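The original commands aren’t reproduced here; one common way to get Python 3.6 on Ubuntu 16.04 is through the deadsnakes PPA. Treat this as one possible approach rather than the exact one used originally: it installs 3.6 alongside 3.5, and we’ll point the virtualenv at it explicitly later.
sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install -y python3.6 python3.6-dev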
If your project has more OS requirements, install them as well.
VirtualEnvWrapper for Python3
I’m a fan of virtualenvwrapper. It’s super easy and creates all my virtual environments in the same place. That’s a personal choice; if you don’t like it, use what you know how to use.
First, you install virtualenvwrapper and then define where to put your virtualenvs (WORKON_HOME).
If you need to use it with multiple Python versions, you must define VIRTUALENVWRAPPER_PYTHON. Here I’m always using it with python3. That’s not a problem, since you can create each virtualenv specifying which Python it will use.
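A sketch of that setup (the virtualenvwrapper.sh path can vary depending on how pip installs it, and name_env is the environment name used later in this post):
sudo apt-get install -y python3-pip
sudo pip3 install virtualenvwrapper
echo 'export WORKON_HOME=~/.virtualenvs' >> ~/.bashrc
echo 'export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3' >> ~/.bashrc
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> ~/.bashrc
source ~/.bashrc
mkvirtualenv --python=/usr/bin/python3.6 name_env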
To clone your project from GitHub over SSH, add your server’s public key to your GitHub account. See and copy the content of your public key (id_rsa.pub):
cat ~/.ssh/id_rsa.pub
Then sign in to your GitHub account and go to Settings > SSH and GPG Keys. Click on New SSH Key, give it a name, like “test server keys”, and in Key paste the content of your id_rsa.pub.
Clone your Django Project
Copy the SSH link from GitHub to clone your project. In this case, I’m using a project that I just found as an example.
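Something like this (the GitHub username below is just a placeholder):
git clone git@github.com:<your-github-user>/django-sample-app.git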
In the project folder, install the project requirements.
Remember that you have to be in your virtual environment
cd django-sample-app/
pip install -r requirements.txt
Now, make the necessary changes for your deploy, such as creating a settings_local.py file, changing database settings, or anything else specific to your project.
After you’re done, run your migrations and collect your static files (if you’re using them):
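That is:
python manage.py migrate
python manage.py collectstatic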
Nginx, like Apache, is an entirely separate world. Right now, you just need the basics.
/etc/nginx/sites-available/ is the directory where you put the config files of available sites. There is another directory, /etc/nginx/sites-enabled/, that holds the sites that are enabled. The files are the same; the difference is that only what is in sites-enabled gets served by Nginx.
It’s usual to create your config file in sites-available and just create a symlink to it in sites-enabled.
First of all, I’ll remove the default site from Nginx.
sudo rm /etc/nginx/sites-enabled/default
Now, create the config file for your site. (If you don’t know how to use VIM, use nano instead of vi)
sudo vi /etc/nginx/sites-available/mysite
Paste this into the file, changing the necessary paths:
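The original config isn’t reproduced here; a minimal version looks something like this (adjust server_name, the static path and the log locations for your project):
server {
    listen 80;
    server_name example.com;

    access_log /var/log/nginx/mysite-access.log;
    error_log /var/log/nginx/mysite-error.log;

    location /static/ {
        alias /home/username/yourproject/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Then enable the site and reload Nginx:
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite
sudo service nginx restart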
Ok, if you made it this far and you access your website, you will see a 502 Bad Gateway from Nginx. That’s because there is nothing answering on http://127.0.0.1:8000 yet.
Now, let’s configure the application to run on port 8000.
Configuring Gunicorn
Are you guys alive? Don’t give up, we’re almost there.
In your virtualenv (remember workon name_env?) install Gunicorn
pip install gunicorn
In your project’s directory, make a gunicorn_conf file:
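The original file isn’t reproduced here. A minimal sketch, which I’ll call gunicorn_conf.py, could be:
# gunicorn_conf.py -- a minimal sketch; tune the number of workers for your server
bind = '127.0.0.1:8000'
workers = 3
Then start Gunicorn with it, pointing at your project’s WSGI module (mysite.wsgi is a placeholder):
gunicorn -c gunicorn_conf.py mysite.wsgi:application
With Gunicorn answering on 127.0.0.1:8000, Nginx stops returning 502 and starts serving your site.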
There are a lot of things involved in a deploy process. You have to configure a firewall, you’ll probably have to serve more than one static folder, etc, etc… But you have to start somewhere.
I can’t believe I wrote a whole post without using a single GIF. So, just to finish: pay attention to all the paths I’ve used here.
Let’s say your task depends on an external API or connects to another web service and, for any reason, it’s raising a ConnectionError, for instance. It’s plausible to think that after a few seconds the API, web service, or whatever you are using may be back on track and working again. In these cases, you may want to catch the exception and retry your task.
from celery import shared_task

@shared_task(bind=True, max_retries=3)  # you can determine the max_retries here
def access_awful_system(self, my_obj_id):
    from core.models import Object
    from requests import ConnectionError
    o = Object.objects.get(pk=my_obj_id)
    # If ConnectionError, try again in 180 seconds
    try:
        o.access_awful_system()
    except ConnectionError as exc:
        self.retry(exc=exc, countdown=180)  # the task goes back to the queue
The self.retry inside the function is what’s interesting here. It’s possible thanks to bind=True on the shared_task decorator, which turns our function access_awful_system into a method of the Task class, and forces us to use self as the first argument of the function too.
Another nice way to retry a function is using exponential backoff:
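The original snippet isn’t shown here; one way to sketch the idea is to compute the countdown from the number of retries already attempted, so the wait doubles on every failure:
from celery import shared_task


@shared_task(bind=True, max_retries=5)
def access_awful_system(self, my_obj_id):
    from core.models import Object
    from requests import ConnectionError
    o = Object.objects.get(pk=my_obj_id)
    try:
        o.access_awful_system()
    except ConnectionError as exc:
        # Waits 1, 2, 4, 8, 16... seconds between attempts
        self.retry(exc=exc, countdown=2 ** self.request.retries)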
Now, imagine that your application has to call an asynchronous task, but needs to wait one hour before running it.
In this case, we just need to call the task with the eta (estimated time of arrival) argument, which means your task will be executed some time after the ETA. To be precise, not exactly at the ETA, because it depends on there being workers available at that moment. (If you want to schedule tasks exactly as you would in crontab, you may want to take a look at Celery Beat.)
from django.utils import timezone
from datetime import timedelta
now = timezone.now()
# later is one hour from now
later = now + timedelta(hours=1)
access_awful_system.apply_async((object_id,), eta=later)
Using more queues
When you run Celery, it creates a default queue on your broker (in the last blog post it was RabbitMQ). If you have a few asynchronous tasks and use just that default queue, all tasks will go to the same queue.
Suppose that we have another task called too_long_task and one more called quick_task and imagine that we have one single queue and four workers.
In that scenario, imagine that the producer sends ten messages to the queue to be executed by too_long_task, and right after that it produces ten more messages for quick_task. What is going to happen? All your workers may be busy executing too_long_task, which went into the queue first, and there are no workers left for quick_task.
The solution for this is routing each task using named queues.
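The original routing configuration isn’t reproduced here; a sketch of the idea, with illustrative project, module, and queue names, is to declare task routes on the Celery app and then start one group of workers per queue:
# celery.py (illustrative names)
from celery import Celery

app = Celery('myproject')
app.conf.task_routes = {
    'myapp.tasks.too_long_task': {'queue': 'long_tasks'},
    'myapp.tasks.quick_task': {'queue': 'quick_tasks'},
}
Then run a worker for each queue, so a pile of too_long_task messages can’t starve quick_task:
celery -A myproject worker -Q long_tasks --concurrency=2
celery -A myproject worker -Q quick_tasks --concurrency=2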
Another common issue is having to call two asynchronous tasks one after the other. It can happen in a lot of scenarios, e.g. when the second task uses the result of the first task as a parameter.
You can use chain to do that:
from celery import chain
from tasks import first_task, second_task
chain(first_task.s(meu_objeto_id) | second_task.s())
The chain is a task too, so you can use parameters on apply_async, for instance, using an ETA:
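For example, reusing the later variable from the ETA snippet above, the whole chain can be scheduled at once:
chain(first_task.s(meu_objeto_id), second_task.s()).apply_async(eta=later)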
Hey, what’s up guys, that’s another quick post! I’ll show you how to create a new non-nullable field in Django and how to populate it using Django migrations.
SAY WUUUUT?????
Here’s the thing: do you know when you have your website in production, everything set in order, and then some guy (there’s always some guy) appears with a new must-have mandatory field that nobody, neither the client nor the PO, no one, thought about? That’s the situation.
But it happens that you use Django Migrations and you want to add those baby fields and run your migrations back and forth, right?
So, as usual, clone and create your virtual environment.
git clone git@github.com:garmoncheg/django-polls_1.10.git
cd django-polls_1.10/
mkvirtualenv --python=/usr/bin/python3 django-polls
pip install django==1.10  # In this project the author didn't create a requirements.txt
python manage.py migrate  # running the existing migrations
python manage.py createsuperuser
python manage.py runserver
Note: This project has one missing migration, so if you’re following along step by step, run python manage.py makemigrations to create migration 0002 (it’s just a minor change to a verbose_name).
Alright, you can go to the app, see your poll there, answer it and whatever. So far we haven’t changed anything.
The idea is to create more questions with different pub_dates to get the party started.
After you use your Polls app a little, you’ll notice that any poll stays on your website forever, i.e., you never close it.
So, our update to this project will be the following: from now on, all polls will have an expiration date. When the user creates a poll, he/she must enter the expiration date. That’s a non-nullable, mandatory field. For the polls that already exist in our database, we will arbitrarily decide they get a single month to expire, counted from the publication date.
Before migrations existed, this was done through SQL: you had to add a DateField that allowed NULL, then write a query to populate this field, and finally run another ALTER TABLE to turn that column into a mandatory field. With migrations, it works the same way.
First of all, create a function to populate the database with the expiration dates:
def populate_expires_date(apps, schema_editor):
    """
    Populates the expires_date field for polls already in the database.
    """
    from datetime import timedelta

    db_alias = schema_editor.connection.alias
    Question = apps.get_model('polls', 'Question')
    for row in Question.objects.using(db_alias).filter(expires_date__isnull=True):
        row.expires_date = row.pub_date + timedelta(days=30)
        row.save()
Originally, I used this code in a project with multiple databases, so I needed to use db_alias, and I think it’s interesting to show it here.
Inside a migration you’ll find an operations list. On that list, we’ll add the commands to run our populate_expires_date function, and after that we’ll alter the field to make it non-nullable.
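The original migration file isn’t reproduced here, but a sketch of it would look something like the following. The dependency name is hypothetical, and populate_expires_date is the function defined above, living in the same migration file:
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('polls', '0002_auto'),  # hypothetical name of the previous migration
    ]

    operations = [
        # Add the field allowing NULL so existing rows don't break
        migrations.AddField(
            model_name='question',
            name='expires_date',
            field=models.DateTimeField('expires at', null=True),
        ),
        # Populate existing rows; does nothing when the migration is unapplied
        migrations.RunPython(populate_expires_date, reverse_code=migrations.RunPython.noop),
        # Now make the field mandatory
        migrations.AlterField(
            model_name='question',
            name='expires_date',
            field=models.DateTimeField('expires at'),
        ),
    ]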
You can see that we used migrations.RunPython to run our function during the migration. The reverse_code is for cases of unapplying a migration. In this case, the field didn’t exist before, so we’ll do nothing.
Right after that, we alter the field to remove the null=True and make it mandatory. We could also have done that by simply removing null=True from the model and running makemigrations again. (Now we have to remove it from the model anyway.)
models.py
class Question(models.Model):
    question_text = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')
    expires_date = models.DateTimeField('expires at')

    def __str__(self):
        return self.question_text

    def was_published_recently(self):
        now = timezone.now()
        return now - datetime.timedelta(days=1) <= self.pub_date <= now

    was_published_recently.admin_order_field = 'pub_date'
    was_published_recently.boolean = True
    was_published_recently.short_description = 'Published recently?'
And we’re ready to run the migrations:
python manage.py migrate
Done! To see this working I’ll add this field to admin.py:
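A simplified sketch of that change; the project’s real admin.py has more options, but the relevant part is adding expires_date to list_display:
from django.contrib import admin

from .models import Question


class QuestionAdmin(admin.ModelAdmin):
    list_display = ('question_text', 'pub_date', 'expires_date')


admin.site.register(Question, QuestionAdmin)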
Here’s a quick Python tip. It’s very basic, but still very helpful. When your company uses GitHub for private repositories, you often want to put them in your requirements file.
First of all, remember to add your public key to your GitHub settings.
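With the key in place, pip can install straight from a private repository over SSH. The organization and repository names below are placeholders:
pip install git+ssh://git@github.com/yourorganization/yourprivaterepo.git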
You can even use it in your requirements file, without running pip install directly. E.g., if your organization is called Django and your project is called… let’s say… Django, and you’d like to add Django 1.11.4 to your requirements, you can use something like this:
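The line in requirements.txt would look something like this, pinned to the 1.11.4 tag:
git+ssh://git@github.com/django/django.git@1.11.4#egg=django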
You probably already have a deploy key or a machine user configured on your server, and that will work for your private repos there; if you don’t, take a look at this.
SSH Keys
If you don’t know how to generate your SSH key, it’s easy:
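For example:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
Accept the default file location, optionally set a passphrase, and your public key will end up in ~/.ssh/id_rsa.pub, which is the file you add to your GitHub settings.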