Deploy SSR applications using Supervisord

If you're building Server-Side Rendering applications with React (Next.js) or Vue.js (Nuxt.js), you will have to deploy them using some process control tool to keep them running. A lot of websites teach how to do this with PM2, but I decided to deploy SSR applications using Supervisord. It works the same way and it's a very common tool, so chances are you already have Supervisord on your server, especially if you've followed the Deploy for Kids tutorial.

The number one reason to have a React or Vue.js SSR app is SEO. Google Bot doesn't work well with CSR (Client-Side Rendering) and can't index your pages that way. So, having an SSR app running on your server means you have Node.js running a program you've built in JavaScript. But you can't just run node in a screen and walk away: you must have a process control tool to keep it running if the server restarts or the application crashes for some reason.

Installing Supervisord:

sudo apt-get install supervisor

Now, create a new configuration file for your SSR application:

sudo vi /etc/supervisor/conf.d/my-ssr-app.conf

That's the content (adjust the program name and paths for your app):

[program:myappname]
directory=/home/username/my-ssr-app
command=npm run start

Now, you have to tell Supervisord about this new process:

sudo supervisorctl reread
sudo supervisorctl update

And if in the future you need to restart just your app, use the name in the conf file:

sudo supervisorctl restart myappname

That’s it. Now you know how to deploy SSR applications using Supervisord.



Dockerizing Django for Development

In this post, I'll show how to containerize an existing project using Docker. I've picked a random project from GitHub that had an open issue asking to Dockerize it, so I could contribute to it and use it as an example here.

Why in the world do you want to Dockerize an existing Django web application? There are plenty of reasons, but if you don’t have one just do it for fun!

I decided to use docker because one of my applications was getting hard to install. Lots of system requirements, multiple databases, celery, and rabbitmq. So every time a new developer joined the team or had to work from a new computer, the system installation took a long time.

Difficult installations lead to time losses, and time losses lead to laziness, and laziness leads to bad habits, and it goes on and on... For instance, one can decide to use SQLite instead of Postgres and not see truncation problems in a table until the code hits the test server.

If you don't know what Docker is, just picture it as a huge virtualenv that, instead of containing just some Python packages, has containers isolating everything from the OS to your app, databases, workers, etc.

Getting Things Done

Ok, talk is cheap. Show me some code, dude.

First of all, install Docker. I did it on Ubuntu and macOS without any problem, but on Windows Home I couldn't get it working.

To tell Docker how to run your application as a container, you'll have to create a Dockerfile:

FROM python:3.6

RUN mkdir /webapps
WORKDIR /webapps

# Installing OS Dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
    libsqlite3-dev

RUN pip install -U pip setuptools

COPY requirements.txt /webapps/
COPY requirements-opt.txt /webapps/

RUN pip install -r /webapps/requirements.txt
RUN pip install -r /webapps/requirements-opt.txt

ADD . /webapps/

# Django service
EXPOSE 8000

So, let’s go line by line:

Docker Images

FROM python:3.6

Here we're using an image from Docker Hub, i.e., a pre-built image that we build on top of. In this case, python:3.6 is a Debian-based image that already has Python 3.6 installed on it.

Environment Variables

You can create all sorts of environment variables using ENV.

ENV PYTHONUNBUFFERED 1  # Here we can create all Environment variables for our container

For instance, if you use an environment variable to store Django's SECRET_KEY, you could put it here:

ENV DJANGO_SECRET_KEY your-secret-key-here

And in your code use it like this:

import os

SECRET_KEY = os.environ['DJANGO_SECRET_KEY']

Run Commands

Docker RUN commands are kind of obvious: you're running a command "inside" your container. I'm quoting "inside" because Docker caches each of these steps as an intermediate layer, so it doesn't have to run the same command again when rebuilding a container.

RUN mkdir /webapps
WORKDIR /webapps

# Installing OS Dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
    libsqlite3-dev

RUN pip install -U pip setuptools

COPY requirements.txt /webapps/
COPY requirements-opt.txt /webapps/

RUN pip install -r /webapps/requirements.txt
RUN pip install -r /webapps/requirements-opt.txt

ADD . /webapps/

In this case, we are creating the directory that will hold our files, /webapps/.

WORKDIR is also kind of self-evident. It just tells Docker to run the following commands in the indicated directory.

After that, I am installing one OS dependency (libsqlite3-dev here, as an example). When we use just requirements.txt, we are not including any OS requirements for the project and, believe me, for large projects you'll have lots and lots of OS requirements.


COPY and ADD are similar. Both copy a file from your computer (the host) into the container (the guest OS). In my example, I'm just copying the Python requirements so pip can install them.


The EXPOSE instruction forwards a port from the guest to the host.

# Django service
EXPOSE 8000

Ok, so now what? How can we add more containers and make them work together? What if I need a Postgresql inside a container too? Don’t worry, here we go.


Compose is a tool for running multiple Docker containers. It's configured with a YAML file; you just need to create a docker-compose.yml in your project folder.

version: '3.3'

services:
  # Postgres
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres

  web:
    build: .
    command: ["./start.sh"]  # start script shown later in the post
    volumes:
      - .:/webapps
    ports:
      - "8000:8000"
    links:
      - db
    depends_on:
      - db

In this case, I’m using an Image of Postgres from Docker Hub.

Now, let's change the settings.py to use Postgres as the database.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': '5432',
    }
}

We’re almost done. Let me talk a little about the docker-compose file.


Remember vagrant?

Once upon a time there was Vagrant, and it was a way to run a project inside a virtual machine while easily configuring it: forwarding ports, provisioning requirements and sharing volumes. Your machine (the host) could share a volume with your virtual machine (the guest). In Docker, it's exactly the same: when you write to a file on a shared volume, the file is written in your container as well.

  - .:/webapps

In this case, the current directory (.) is being shared as /webapps in the container.


links:
  - db

You can refer to another container that belongs to your compose file using its name. Since we created a db container for our Postgres, we can link it to our web container. You can see in our settings file that I've used 'db' as the host.


In order for your application to work, your database has to be ready before the web container starts; otherwise it will raise an exception. That's what depends_on is for:

depends_on:
  - db


command is the default command that your container will run right after it is up.

For our example, I've created a start script (start.sh) that will run the migrations, collect the static files and start the development server.

#!/usr/bin/env bash

cd django-boards/
python manage.py migrate
python manage.py collectstatic --noinput
python manage.py runserver 0.0.0.0:8000

One can argue that running the migrations automatically every time the container comes up is not a good practice. I agree. You can run them directly on the web container instead. You can access your container (just like the good ol' vagrant ssh):

docker-compose exec web bash

If you’d like you can run it without accessing the container itself, just change the last argument from the previous command.

docker-compose exec web python manage.py migrate

The same goes for other commands:

docker-compose exec web python manage.py test
docker-compose exec web python manage.py shell

Running Docker

With our Dockerfile, docker-compose.yml, and start script in place, just run it all together:

docker-compose up

You can see this project here on my GitHub.


At first, I was using run instead of exec. But Bruno FS convinced me that exec is better because you’re executing a command inside the container you’re already running, instead of creating a new one.


Guide for Deploy – Django Python 3


There are a lot of tutorials out there, especially in English. Here goes another one; I wrote it originally in Portuguese.

The reason many people have problems deploying is that they don't pay enough attention to details. Deploying is easy when you are familiar with all the parts involved. You must know how to authenticate through SSH, be used to the command line and Linux, understand how to configure and set up your project, have an idea of what serving static files means, what Gunicorn is... OK, it's not that simple. That's why there are so many deploy tools, kits, and tutorials. Currently, with Ansible, Docker and whatever the kids are using these days, it's easier to deploy, but what happens under the hood gets more abstract.

Maybe in a couple of years this post will be obsolete, if it isn't already, with serverless and everything else. Anyway, only a few people want to learn how to deploy Django the way I'll show here, but if it helps at least one person, I'll be satisfied.

Enjoy this Old-Style guide!

The Server

I presume you don't have a server: no AWS account, DigitalOcean, Linode... nothing! You have to create an account with one of them and launch a server with the distro you want. If it's your first time, don't go with AWS, because it's way more complicated than the others.

In this tutorial, I'm using Ubuntu 16.04, the most common distro you'll see around. You can also pick Debian if you like.

Initial Set Up

Configure server timezone

sudo locale-gen --no-purge --lang pt_BR  # I'm using pt_BR, because HUE HUE BR BR
sudo dpkg-reconfigure tzdata

Update and upgrade OS Packages:

sudo apt-get update 
sudo apt-get -y upgrade

Installing Python 3.6 over Python 3.5

Replace Python 3.5, which is the default on our distro, with Python 3.6.

sudo add-apt-repository ppa:jonathonf/python-3.6
sudo apt-get update
sudo apt-get install python3.6
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.5 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 2

You can choose which Python version the OS will call when you type python3.

sudo update-alternatives --config python3

Having trouble? Take a look here:

How to Install Python 3.6.1 in Ubuntu 16.04 LTS

Install OS requirements

sudo apt-get install python3-pip nginx supervisor git git-core libpq-dev python-dev 

If your project has more OS requirements, install them as well.

VirtualEnvWrapper for Python3

I’m a fan of VirtualEnvWrapper. It’s super easy and creates all my virtual environments in the same place. That’s a personal choice, if you don’t like it, use what you know how to use.

First, you install virtualenvwrapper, and then you define where to put your virtualenvs (WORKON_HOME).

If you need to use it with multiple Python versions, you must define VIRTUALENVWRAPPER_PYTHON. Here I'm always using it with python3. That's not a problem, since you can point each virtualenv to the Python version it will use.

sudo pip3 install virtualenvwrapper
echo 'export WORKON_HOME=~/Envs' >> ~/.bashrc
echo 'export VIRTUALENVWRAPPER_PYTHON=`which python3`' >> ~/.bashrc
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> ~/.bashrc
source ~/.bashrc

Now, create your virtualenv and define which Python it is going to use.

mkvirtualenv name_venv --python=python3

VirtualEnvWrapper is really easy to use. If you want to activate a virtual env, you can use workon.

workon name_venv

To deactivate this virtualenv:

deactivate

To remove a virtualenv:

rmvirtualenv name_venv

Generate SSH for GitHub Authentication

You don't want to (nor should you) type your password every time you git pull your project on the server.

Generating SSH Keys:

cd ~/.ssh
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

See and copy the content of your public key (id_rsa.pub):

cat ~/.ssh/id_rsa.pub

Then sign in to your GitHub account and go to Settings > SSH and GPG Keys. Click on New SSH Key, give it a name, like "test server keys", and in Key paste the content of your id_rsa.pub.

Clone your Django Project

Copy the SSH link from GitHub to clone your project. In this case, I'm using a project that I just found as an example.

git clone git@github.com:kirpit/django-sample-app.git

In the project folder, install the project requirements.

Remember that you have to be inside your virtual environment.

cd django-sample-app/
pip install -r requirements.txt

Now, make the changes your deploy needs, such as creating a local settings file, changing database settings or anything specific to your project.

After you’re done, run your migrations and collect your static files (if you’re using it).

python manage.py migrate
python manage.py collectstatic

Configuring NGINX

Nginx, like Apache, is an entirely separate world. Right now, you just need the basics.

/etc/nginx/sites-available/ is the directory where you put the config files of available sites, and /etc/nginx/sites-enabled/ shows which sites are enabled. They hold the same kind of files, but only what is in sites-enabled will be served by Nginx.

It's usual to create your config file in sites-available and create just a symlink to it in sites-enabled.

First of all, I’ll remove the default site from Nginx.

sudo rm /etc/nginx/sites-enabled/default

Now, create the config file for your site. (If you don’t know how to use VIM, use nano instead of vi)

sudo vi /etc/nginx/sites-available/mysite

Paste this into the file, changing the necessary paths (I've added a server_name line; set it to your own domain):

server {
 listen 80;
 server_name example.com;
 access_log /home/username/logs/access.log;
 error_log /home/username/logs/error.log;

 location / {
   proxy_pass http://127.0.0.1:8000;
   proxy_pass_header Server;
   proxy_set_header X-Forwarded-Host $server_name;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header Host $http_host;
 }

 location /static {
   alias /home/username/project_path/static/;
 }
}

And create a symlink to sites-enabled:

sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite

Restart Nginx:

sudo service nginx restart

OK, if you made it this far, accessing your website will give you a 502 Bad Gateway from Nginx. That's because nothing is answering on port 8000 yet.

Now, let's configure the application to run on port 8000.

Configuring Gunicorn

Are you guys alive? Don’t give up, we’re almost there.

In your virtualenv (remember workon name_env?) install Gunicorn

pip install gunicorn

In your project’s directory, make a gunicorn_conf file:

bind = "127.0.0.1:8000"
logfile = "/home/username/logs/gunicorn.log"
workers = 3

Now, if you run Gunicorn you will see your website working!

/home/username/Envs/name_venv/bin/gunicorn project.wsgi:application -c gunicorn_conf

But what are you going to do? Run this command inside a screen and walk away? Of course not! You’ll use Supervisord to control Gunicorn.

Configuring Supervisor

Now create a gunicorn.conf:

sudo vi /etc/supervisor/conf.d/gunicorn.conf

That's the content:

[program:gunicorn]
command=/home/username/Envs/name_venv/bin/gunicorn project.wsgi:application -c /home/username/project/project_django/gunicorn_conf

And now you just tell Supervisor that there is a new process in town, and Supervisord will take care of it:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart gunicorn

And voilà! A running website you will have.


There are a lot of things involved in a deploy process. You have to configure a firewall, you'll probably have to serve more than one static folder, etc., etc. But you have to start somewhere.

I can’t believe I wrote a whole post without using any GIF. So, just to finish, pay attention to all paths I’ve used here.


Using celery with multiple queues, retries and scheduled tasks

In this post, I'll show how to work with multiple queues, scheduled tasks, and retries when something goes wrong.

If you don’t know how to use celery, read this post first:

Retrying a task

Let's say your task depends on an external API or connects to another web service and, for any reason, it's raising a ConnectionError. It's plausible to think that after a few seconds the API, web service, or whatever you are using may be back on track and working again. In these cases, you may want to catch the exception and retry your task.

from celery import shared_task


@shared_task(bind=True, max_retries=3)  # you can determine the max_retries here
def access_awful_system(self, my_obj_id):
    from core.models import Object
    from requests import ConnectionError

    o = Object.objects.get(pk=my_obj_id)
    try:
        o.access()  # hypothetical call to the flaky external system
    # If ConnectionError, try again in 180 seconds
    except ConnectionError as exc:
        self.retry(exc=exc, countdown=180)  # the task goes back to the queue

The self.retry inside a function is what’s interesting here. That’s possible thanks to bind=True on the shared_task decorator. It turns our function access_awful_system into a method of Task class. And it forced us to use self as the first argument of the function too.

Another nice way to retry a function is using exponential backoff:

self.retry(exc=exc, countdown=2 ** self.request.retries)
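If you're curious what those countdown values look like, here's a tiny standalone sketch of the idea (plain Python, not part of Celery; the cap argument is my own addition to keep delays bounded):

```python
def backoff_delay(retries, base=2, cap=300):
    """Exponential backoff: 1, 2, 4, 8... seconds, capped at `cap` seconds."""
    return min(base ** retries, cap)

# delays for the first five retries
print([backoff_delay(r) for r in range(5)])  # [1, 2, 4, 8, 16]
```

In the snippet above, self.request.retries plays the role of retries, so each attempt waits twice as long as the previous one.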

ETA – Scheduling a task for later

Now, imagine that your application has to call an asynchronous task but needs to wait one hour before running it.

In this case, we just need to call the task using the eta (estimated time of arrival) argument, which means your task will be executed some time after the ETA. To be precise, not exactly at the ETA time, because that will depend on whether there are workers available at that moment. If you want to schedule tasks exactly as you do in crontab, you may want to take a look at Celery Beat.

from django.utils import timezone
from datetime import timedelta

now = timezone.now()
# later is one hour from now
later = now + timedelta(hours=1)

access_awful_system.apply_async((object_id,), eta=later)

Using more queues

When you run Celery, it creates a queue on your broker (in the last blog post it was RabbitMQ). If you have a few asynchronous tasks and you use just the default Celery queue, all tasks will go to the same queue.

Suppose that we have another task called too_long_task and one more called quick_task, and imagine that we have one single queue and four workers.

In that scenario, imagine the producer sends ten messages to the queue to be executed by too_long_task and, right after that, ten more messages for quick_task. What is going to happen? All your workers may be busy executing too_long_task, which went into the queue first, and no worker is left for quick_task.

The solution for this is routing each task to a named queue, e.g. in your settings.py:

CELERY_ROUTES = {
    'core.tasks.too_long_task': {'queue': 'too_long_queue'},
    'core.tasks.quick_task': {'queue': 'quick_queue'},
}

Now we can split the workers, determining which queue they will be consuming.

# For too long queue
celery --app=proj_name worker -Q too_long_queue -c 2

# For quick queue
celery --app=proj_name worker -Q quick_queue -c 2

I'm using two workers for each queue, but that depends on your system.

As in the last post, you may want to run the workers under Supervisord.

There are a lot of interesting things you can do with your workers here.

Calling Sequential Tasks

Another common issue is having to call two asynchronous tasks one after the other. It can happen in a lot of scenarios, e.g. when the second task uses the result of the first task as a parameter.

You can use chain to do that:

from celery import chain
from tasks import first_task, second_task
chain(first_task.s(meu_objeto_id) | second_task.s())

The chain is a task too, so you can use the parameters of apply_async, for instance, using an ETA:

chain(salvar_dados.s(meu_objeto_id) | trabalhar_dados.s()).apply_async(eta=depois)

Ignoring the results from ResultBackend

If you just use tasks to execute something that doesn't need their return value, you can ignore the results and improve your performance.

If you're just saving something to your models, you'd like to use this in your settings.py:

CELERY_IGNORE_RESULT = True

Super Bonus

Celery Messaging at Scale at Instagram – Pycon 2013

Creating and populating a non-nullable field in Django

Hey, what's up, guys? That's another quick post! I'll show you how to create a new non-nullable field in Django and how to populate it using Django migrations.



Here's the thing: do you know when you have your website in production, everything set in order, and then some guy (there's always some guy) appears with a new must-have mandatory field that nobody, neither the client nor the PO, no one, had thought about? That's the situation.

But it happens that you use Django Migrations and you want to add those baby fields and run your migrations back and forth, right?

For this example, I decided to use a random project from the web. I chose this Django Polls on Django 1.10.

So, as usual, clone and create your virtual environment.

git clone git@github.com:garmoncheg/django-polls_1.10.git
cd django-polls_1.10/
mkvirtualenv --python=/usr/bin/python3 django-polls
pip install django==1.10  # the author didn't create a requirements.txt for this project
python manage.py migrate  # running the existing migrations
python manage.py createsuperuser
python manage.py runserver

Note: this project has one missing migration, so if you're following step by step, run python manage.py makemigrations to create migration 0002 (it's just a minor change to a verbose_name).

Now, access the admin site and add a poll.


Alright, you can go to the app, see your poll there, answer it and so on. So far we've done nothing new.

The idea is to create more questions with different pub_dates to get the party started.

After you use your Polls app a little, you'll notice that every poll stays on your website forever, i.e., you never close it.

So, our update to this project will be this: from now on, all polls will have an expiration date. When the user creates a poll, he/she must enter the expiration date. That's a non-nullable, mandatory field. For the polls that already exist in our database, we will arbitrarily decide that they expire one month after the publication date.

Before migrations existed, this was done through SQL: you had to add a DateField that allowed NULL, then you'd run a query to populate this field, and finally another ALTER TABLE to turn that column into a mandatory field. With migrations, it works the same way.
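The "one month to expire" rule is plain date arithmetic; here's a quick sanity check in Python (the publication date is a made-up example, and I'm using 30 days, the same value the populate function uses later):

```python
from datetime import datetime, timedelta

pub_date = datetime(2017, 4, 29, 22, 20)      # hypothetical publication date
expires_date = pub_date + timedelta(days=30)  # one "month" later
print(expires_date)  # 2017-05-29 22:20:00
```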

So, let's add the expires_date field to models.py:

expires_date = models.DateTimeField('expires at', null=True)

The whole model:

class Question(models.Model):
    question_text = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')
    expires_date = models.DateTimeField('expires at', null=True)

    def __str__(self):
        return self.question_text

    def was_published_recently(self):
        now = timezone.now()
        return now - datetime.timedelta(days=1) <= self.pub_date <= now
    was_published_recently.admin_order_field = 'pub_date'
    was_published_recently.boolean = True
    was_published_recently.short_description = 'Published recently?'

It’s time to make migrations:

python manage.py makemigrations

This is going to generate the 0003_question_expires_date migration, which looks like this:

class Migration(migrations.Migration):

    dependencies = [
        ('polls', '0002_auto_20170429_2220'),
    ]

    operations = [
        migrations.AddField(
            model_name='question',
            name='expires_date',
            field=models.DateTimeField(null=True, verbose_name='expires at'),
        ),
    ]

Let's alter this migration's code. DON'T PANIC!

Populating the new field

First of all, create a function to populate the database with the expires dates:

def populate_expires_date(apps, schema_editor):
    """Populates the expires_date field for polls already in the database."""
    from datetime import timedelta

    db_alias = schema_editor.connection.alias
    Question = apps.get_model('polls', 'Question')

    for row in Question.objects.using(db_alias).filter(expires_date__isnull=True):
        row.expires_date = row.pub_date + timedelta(days=30)
        row.save()

Originally, I’ve used this code in a project with multiple databases, so I needed to use db_alias and I think it’s interesting to show it here.

Inside a migration, you'll find an operations list. To that list, we'll add the command to run our populate_expires_date function and, after that, we'll alter this field to make it non-nullable.

operations = [
    migrations.AddField(
        model_name='question',
        name='expires_date',
        field=models.DateTimeField(null=True, verbose_name='expires at'),
    ),
    migrations.RunPython(populate_expires_date, reverse_code=migrations.RunPython.noop),
    migrations.AlterField(
        model_name='question',
        name='expires_date',
        field=models.DateTimeField(verbose_name='expires at'),
    ),
]
You can see that we used migrations.RunPython to run our function during the migration. The reverse_code is for cases of unapplying a migration. In this case, the field didn’t exist before, so we’ll do nothing.

Right after, we add the migration that alters the field to remove null=True. We could also have done that by just removing it from the model and running makemigrations again. (Now we have to remove it from the model anyway.)

class Question(models.Model):
    question_text = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')
    expires_date = models.DateTimeField('expires at')

    def __str__(self):
        return self.question_text

    def was_published_recently(self):
        now = timezone.now()
        return now - datetime.timedelta(days=1) <= self.pub_date <= now
    was_published_recently.admin_order_field = 'pub_date'
    was_published_recently.boolean = True
    was_published_recently.short_description = 'Published recently?'

And we're ready to run the migrations:

python manage.py migrate

Done! To see this working, I'll add this field to admin.py:

class QuestionAdmin(admin.ModelAdmin):
    fieldsets = [
        (None,               {'fields': ['question_text']}),
        ('Date information', {'fields': ['pub_date', 'expires_date'], 'classes': ['collapse']}),
    ]
    inlines = [ChoiceInline]
    list_display = ('question_text', 'pub_date', 'expires_date', 'was_published_recently')
    list_filter = ['pub_date']
    search_fields = ['question_text']

And voilà: all the Questions you had in polls now have an expires_date, mandatory, and defaulting to 30 days after publication for the old ones.

That's it, the field we wanted! The modified project is here on my GitHub.

If you like it, share it and leave a comment, if you didn’t, just leave the comment.




Pip Installing a Package From a Private Repository

That's a quick Python tip. It's very basic, but still very helpful. When your company uses GitHub for private repositories, you often want to put them in your requirements file.

First of all, remember to add your public key to your GitHub settings.


You just have to use it like this:

pip install git+ssh://git@github.com/<your organization>/<the project>.git@<the tag>

You can even use it in your requirements file, without the pip install. E.g., if your organization were called django and your project were called... let's say... django, and you'd like to pin Django 1.11.4 in your requirements, you could use it like this:

git+ssh://git@github.com/django/django.git@1.11.4

Probably you already have a deploy key or a machine user configured on your server, and it will work for your private repos there. If you don't, take a look at this:


SSH Keys

If you don't know how to generate your SSH key, it's easy:

cd ~/.ssh
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

Now, copy the content of your public key (id_rsa.pub):

cat ~/.ssh/id_rsa.pub

In your GitHub account, go to Settings > SSH and GPG Keys and add it.

Executing time-consuming tasks asynchronously with Django and Celery

This post is based on a lightning talk I gave in 2015 at GruPy-SP (July/15) in São Paulo.

What's the problem with having time-consuming tasks on the server side?

Every time the client makes a request, the server has to read the request, parse the received data, retrieve or create something in the database, process what the user will receive, render a template and send a response to the client. That's usually what happens in a Django app.

Depending on what you are executing on the server, the response can take too long, which leads to problems such as poor user experience or even a time-out error. It's a big issue: loading time is a major contributing factor to page abandonment, so slow pages lose money.

There are a lot of functions that can take a long time to run, for instance, a large data report requested by a web client, emailing a long list of recipients or even editing a video after it's uploaded to your website.

Real Case:

That's a problem I faced once when I was creating a report. The report took around 20 minutes to be sent and the client got a time-out error, and obviously nobody wants to wait 20 minutes to get something. So, to handle this, I had to let the task run in the background. (On Linux, you can do this by putting a & at the end of a command, and the OS will execute it in the background.)
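The quick-and-dirty version of that hack might look something like this (a sketch; the management command name is hypothetical):

```python
import subprocess

def run_in_background(cmd):
    """Fire-and-forget: start cmd as a separate OS process and return
    immediately, just like appending '&' to a shell command."""
    return subprocess.Popen(cmd, shell=True)

# inside a Django view you would do something like:
# run_in_background('python manage.py send_big_report')  # hypothetical command
# ...and answer the request right away, with no error handling,
# no retries, and no visibility into the process.
```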


It looks like the worst code ever.


Celery – the solution for those problems!

Celery is a distributed system for processing lots of messages. You can use it to run a task queue (through messages). You can schedule tasks in your own project without using crontab, and it has easy integration with the major Python frameworks.

How does celery work?

Celery Architecture Overview. (from this SlideShare)


  • The User (or Client or Producer) is your Django Application.
  • The AMQP broker is a message broker: a program responsible for the message queue. It receives messages from the client and delivers them to the workers when requested. For Celery, the AMQP broker is generally RabbitMQ or Redis.
  • The workers (or consumers) that will run your tasks asynchronously.
  • The Result Store, a persistent layer where workers store the result of tasks.

The client produces messages and delivers them to the message broker; the workers read these messages from the broker, execute them, and can store the results in Memcached, an RDBMS, MongoDB or whatever else, so that the client can access the results later.
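Just to make that flow concrete, here's a toy sketch of the architecture using an in-process queue; this only illustrates the message flow, it's not how Celery is implemented:

```python
import queue
import threading

broker = queue.Queue()   # stands in for RabbitMQ
result_store = {}        # stands in for the result backend

def worker():
    while True:
        task_id, func, args = broker.get()   # consume a message
        result_store[task_id] = func(*args)  # execute the task, store the result
        broker.task_done()

threading.Thread(target=worker, daemon=True).start()

# the "client" produces a message instead of calling the function directly
broker.put(('task-1', sum, ([1, 2, 3],)))
broker.join()  # wait until the worker has processed everything
print(result_store['task-1'])  # 6
```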

Installing and configuring RabbitMQ

There are a lot of examples of how to use Celery with Redis. I'm doing this with RabbitMQ.

  1. Install RabbitMQ
    sudo apt-get install rabbitmq-server
  2. Create a User, a virtual host and grant permissions for this user on the virtual host:
    sudo rabbitmqctl add_user myuser mypassword
    sudo rabbitmqctl add_vhost myvhost
    sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"

Installing and configuring Celery

pip install celery

In your settings.py:

#Celery Config
BROKER_URL = 'amqp://guest:guest@localhost:5672//'

In your project's directory (the same folder as settings.py), create a celery.py file as follows.

from __future__ import absolute_import

import os
from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'nome_do_proj.settings')

app = Celery('nome_do_proj')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

This autodiscover_tasks call allows your project to find the asynchronous tasks of each Django app. In the same directory, you have to modify your __init__.py:

from __future__ import absolute_import

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

Creating tasks for your app

In your app's directory, put a tasks.py file:

from __future__ import absolute_import
from celery import shared_task
from reports import generate_report_excel

@shared_task  # Use this decorator to make this an asynchronous function
def generate_report(ini_date, final_date, email):
    generate_report_excel(
        ini_date=ini_date,
        final_date=final_date,
        email=email,
    )

Now you just have to import this function anywhere you want and call the delay method, which was added by the shared_task decorator.

from django.http import HttpResponse

from tasks import generate_report

def my_view(request):
    generate_report.delay(ini_date, final_date, email)
    return HttpResponse("You will receive an email when the report is done")
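The mechanics of delay can be pictured with a toy decorator. This is an illustration only, not Celery’s real implementation (toy_shared_task and the in-process queue are made up for the example): the decorator attaches a delay method that enqueues the call instead of executing it.

```python
# Toy illustration of a shared_task-like decorator (NOT Celery's code):
# .delay() enqueues the call; a consumer executes it later.
import queue

task_queue = queue.Queue()

def toy_shared_task(func):
    def delay(*args, **kwargs):
        task_queue.put((func, args, kwargs))
    func.delay = delay      # attach the enqueuing method to the function
    return func

@toy_shared_task
def add(x, y):
    return x + y

add.delay(2, 3)                     # enqueued, not executed yet
func, args, kwargs = task_queue.get()
print(func(*args, **kwargs))        # 5
```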


Running celery workers

Now you have to run the Celery workers so they can execute the tasks, getting the messages from the RabbitMQ broker. By default, Celery starts one worker process per available CPU, but you can change that with the concurrency parameter (-c):

celery --app=nome_do_proj worker --loglevel=INFO

And your screen will look like this:


In another terminal you can open the shell and call your task to see it working:

In [1]: from app.tasks import generate_report

In [2]: generate_report.delay("2012-01-01", "2015-03-14", "[email protected]")


And you will see something like this on Celery:


Deploying Celery

To use Celery in production you’ll need a process control system like Supervisor.

To install supervisor:

sudo apt-get install supervisor

Now you have to create a configuration file for Celery in /etc/supervisor/conf.d/:

command=/home/deploy/.virtualenvs/my_env/bin/celery --app=proj_name worker --loglevel=INFO
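That command line lives inside a [program] section. A fuller sketch of the conf file could look like this (the directory, user, and log paths are assumptions; adjust them to your setup):

```
[program:celery]
command=/home/deploy/.virtualenvs/my_env/bin/celery --app=proj_name worker --loglevel=INFO
directory=/home/deploy/my_project
user=deploy
autostart=true
autorestart=true
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker-error.log
```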

Now inform Supervisor that there is a new process:

sudo supervisorctl reread
sudo supervisorctl update

And now start Celery via supervisorctl:

sudo supervisorctl start celery

My presentation in Brazilian Portuguese:

Originally posted in Portuguese



How Instagram Feed Works – Celery and RabbitMQ

Distributing Python Apps for Windows Desktops

I started working on a blog post about how to add auto-update to a Python app and it turned into three. After these 3 articles, you will be able to create a Python app that fully works on Windows and distribute it with an installer.

This text was originally written in Portuguese.

  1. How to create a Python .exe with MSI Installer and Cx_freeze
  2. How to create an application with auto-update using Python and Esky
  3. How to create an MSI installer using Inno Setup

It has just 4 Steps:

  • Create a simple project called boneca
  • Build an MSI installer using Cx_freeze
  • Add an Auto-update feature to the project, using Esky
  • Show how to use Inno Setup to build a more powerful and custom installer

In the end, you will be able to pack and distribute Python apps for Windows desktops in an easy way.

Some people still think Python is just a scripting language, or that it works only for web development through frameworks, but that’s not the case. It can be frozen into an executable and shipped without source code, turned into a commercial application.

The greatest example of all time is Dropbox. The Dropbox client was written in Python to be portable across Windows, Mac, and Linux. The only difference is the interface: for Windows and Linux, Dropbox uses wxPython, and for Mac it uses Python-ObjC. I like these words from Guido van Rossum about Dropbox:


“Python plays an important role in Dropbox’s success: the Dropbox client, which runs on Windows, Mac and Linux (!), is written in Python. This is key to the portability: everything except the UI is cross-platform. (The UI uses a Python-ObjC bridge on Mac, and wxPython on the other platforms.) Performance has never been a problem — understanding that a small number of critical pieces were written in C, including a custom memory allocator used for a certain type of objects whose pattern of allocation involves allocating 100,000s of them and then releasing all but a few. Before you jump in to open up the Dropbox distro and learn all about how it works, beware that the source code is not included and the bytecode is obfuscated. Drew’s no fool. And he laughs at the poor competitors who are using Java.”

From depth and breadth of python

How to create an MSI installer using Inno Setup

Alright, guys, that’s the 3rd and last part of our Distributing Python Apps for Windows Desktops series. In this post, I’ll show how to create an MSI installer using Inno Setup and add MSVCR’s DLLs to make Python work on any Windows computer.

The other two parts are:

In the first part, we learned how to create an MSI with cx_freeze and use the MSVCR from your own OS with the include_msvcr parameter. After that, we updated our program to include an auto-update service.

OK, but now we can’t use cx_freeze to make an installer anymore. That’s because Esky modifies your program, creating an executable that checks a remote URL for updates; if an update is available, Esky downloads it, checks that everything is OK, and removes the old files. No problem, let’s solve this with Inno Setup.

1st thing, download and install Inno Setup.

Inno Setup generates a script file (.iss) for you to make your own installer. You can write your own script or use the Script Wizard.


First, we’ll use the wizard and the file that we have generated on the previous post (Part II). Unzip this file.



Back in Inno Setup, click File >> New. The wizard is pretty straightforward. Fill in the blanks as you like.


On the next screen, you can choose the folder where your app will be installed. The default is Program Files, but if your code is not signed (using a code signing tool) you may have problems with Windows UAC: it will not recognize the authenticity of your code, you can struggle with antivirus software and Windows security, and it can stop your program from doing the auto-updates. So, at first, you’d better use another folder. You can type a path or use a Directory Constant.


On the next screen, you’ll add the programs, folders, and files that will be installed. In this case, boneca.exe and python27.dll at the root level and the boneca-1.0.1.win32 folder with its content.

Don’t forget to add boneca.exe as Application main executable file.


Now, go ahead with the standard procedure for Windows programs (next, next, next…). At the end, it creates a .iss file; compile it and it will generate an installer. But hold on! We still need to add the MSVCR DLLs, so download the redistributable according to your Python version:

Now update your .iss file so it can install those DLLs too. I used a solution I found on Stack Overflow and it works fine.

At Files section insert the vc_redist’s path that you’ve just downloaded:

Source: "vcredist_x86.exe"; DestDir: {tmp}; Flags: deleteafterinstall

At the end of the Run section, paste it as it is:

; add the Parameters, WorkingDir and StatusMsg as you wish, just keep here
; the conditional installation Check
Filename: "{tmp}\vcredist_x86.exe"; Check: VCRedistNeedsInstall

And this goes in the [Code] section of the .iss file (the #DEFINE lines make the {#AW} suffix below resolve to the ANSI or Unicode MSI function):

#IFDEF UNICODE
  #DEFINE AW "W"
#ELSE
  #DEFINE AW "A"
#ENDIF

[Code]
type
 INSTALLSTATE = Longint;

const
 INSTALLSTATE_INVALIDARG = -2; // An invalid parameter was passed to the function.
 INSTALLSTATE_UNKNOWN = -1; // The product is neither advertised or installed.
 INSTALLSTATE_ADVERTISED = 1; // The product is advertised but not installed.
 INSTALLSTATE_ABSENT = 2; // The product is installed for a different user.
 INSTALLSTATE_DEFAULT = 5; // The product is installed for the current user.

 VC_2005_REDIST_X86 = '{A49F249F-0C91-497F-86DF-B2585E8E76B7}';
 VC_2005_REDIST_X64 = '{6E8E85E8-CE4B-4FF5-91F7-04999C9FAE6A}';
 VC_2005_REDIST_IA64 = '{03ED71EA-F531-4927-AABD-1C31BCE8E187}';
 VC_2005_SP1_REDIST_X86 = '{7299052B-02A4-4627-81F2-1818DA5D550D}';
 VC_2005_SP1_REDIST_X64 = '{071C9B48-7C32-4621-A0AC-3F809523288F}';
 VC_2005_SP1_REDIST_IA64 = '{0F8FB34E-675E-42ED-850B-29D98C2ECE08}';
 VC_2005_SP1_ATL_SEC_UPD_REDIST_X86 = '{837B34E3-7C30-493C-8F6A-2B0F04E2912C}';
 VC_2005_SP1_ATL_SEC_UPD_REDIST_X64 = '{6CE5BAE9-D3CA-4B99-891A-1DC6C118A5FC}';
 VC_2005_SP1_ATL_SEC_UPD_REDIST_IA64 = '{85025851-A784-46D8-950D-05CB3CA43A13}';

 VC_2008_REDIST_X86 = '{FF66E9F6-83E7-3A3E-AF14-8DE9A809A6A4}';
 VC_2008_REDIST_X64 = '{350AA351-21FA-3270-8B7A-835434E766AD}';
 VC_2008_REDIST_IA64 = '{2B547B43-DB50-3139-9EBE-37D419E0F5FA}';
 VC_2008_SP1_REDIST_X86 = '{9A25302D-30C0-39D9-BD6F-21E6EC160475}';
 VC_2008_SP1_REDIST_X64 = '{8220EEFE-38CD-377E-8595-13398D740ACE}';
 VC_2008_SP1_REDIST_IA64 = '{5827ECE1-AEB0-328E-B813-6FC68622C1F9}';
 VC_2008_SP1_ATL_SEC_UPD_REDIST_X86 = '{1F1C2DFC-2D24-3E06-BCB8-725134ADF989}';
 VC_2008_SP1_ATL_SEC_UPD_REDIST_X64 = '{4B6C7001-C7D6-3710-913E-5BC23FCE91E6}';
 VC_2008_SP1_ATL_SEC_UPD_REDIST_IA64 = '{977AD349-C2A8-39DD-9273-285C08987C7B}';
 VC_2008_SP1_MFC_SEC_UPD_REDIST_X86 = '{9BE518E6-ECC6-35A9-88E4-87755C07200F}';
 VC_2008_SP1_MFC_SEC_UPD_REDIST_X64 = '{5FCE6D76-F5DC-37AB-B2B8-22AB8CEDB1D4}';
 VC_2008_SP1_MFC_SEC_UPD_REDIST_IA64 = '{515643D1-4E9E-342F-A75A-D1F16448DC04}';

 VC_2010_REDIST_X86 = '{196BB40D-1578-3D01-B289-BEFC77A11A1E}';
 VC_2010_REDIST_X64 = '{DA5E371C-6333-3D8A-93A4-6FD5B20BCC6E}';
 VC_2010_REDIST_IA64 = '{C1A35166-4301-38E9-BA67-02823AD72A1B}';
 VC_2010_SP1_REDIST_X86 = '{F0C3E5D1-1ADE-321E-8167-68EF0DE699A5}';
 VC_2010_SP1_REDIST_X64 = '{1D8E6291-B0D5-35EC-8441-6616F567A0F7}';
 VC_2010_SP1_REDIST_IA64 = '{88C73C1C-2DE5-3B01-AFB8-B46EF4AB41CD}';

 // Microsoft Visual C++ 2012 x86 Minimum Runtime - 11.0.61030.0 (Update 4) 
 VC_2012_REDIST_MIN_UPD4_X86 = '{BD95A8CD-1D9F-35AD-981A-3E7925026EBB}';
 VC_2012_REDIST_MIN_UPD4_X64 = '{CF2BEA3C-26EA-32F8-AA9B-331F7E34BA97}';
 // Microsoft Visual C++ 2012 x86 Additional Runtime - 11.0.61030.0 (Update 4) 
 VC_2012_REDIST_ADD_UPD4_X86 = '{B175520C-86A2-35A7-8619-86DC379688B9}';
 VC_2012_REDIST_ADD_UPD4_X64 = '{37B8F9C7-03FB-3253-8781-2517C99D7C00}';

function MsiQueryProductState(szProduct: string): INSTALLSTATE; 
 external 'MsiQueryProductState{#AW}@msi.dll stdcall';

function VCVersionInstalled(const ProductID: string): Boolean;
begin
 Result := MsiQueryProductState(ProductID) = INSTALLSTATE_DEFAULT;
end;

function VCRedistNeedsInstall: Boolean;
begin
 // here the Result must be True when you need to install your VCRedist
 // or False when you don't need to, so now it's upon you how you build
 // this statement, the following won't install your VC redist only when
 // the Visual C++ 2010 Redist (x86) and Visual C++ 2010 SP1 Redist (x86)
 // are installed for the current user
 Result := not (VCVersionInstalled(VC_2010_REDIST_X86) and
   VCVersionInstalled(VC_2010_SP1_REDIST_X86));
end;

Now compile your file. You get a setup.exe as the output, and it is able to install our boneca.exe and the necessary DLLs to run it on every goddamn Windows.


If you’ve read the 3 posts, you’ve learned how to create an executable using Python with an auto-update feature and an installer to distribute it to any Windows version.

Originally published in Portuguese!

How to create an application with auto-update using Python and Esky

This is the 2nd part of  Distributing Python Apps for Windows Desktops series. The 1st part is here: How to create a Python .exe with MSI Installer and CX_freeze

Every time a program has to be updated, it’s a burden. Remember Java! It feels so uncomfortable, even if you’re an IT guy. You don’t like it, and neither do your users. So be nice to them and add auto-update to your program so they don’t have to download new versions manually.

To show how to create an application with auto-update using Python and Esky, I’ll use the boneca app from part 1. The program was written and compiled, but it doesn’t have auto-update yet. We just generated an installer, and the clients using that version would never get updates again. So now we’re creating a new version using Esky:

pip install esky


Let’s modify the script to import esky and look for updates on the internet when they’re available.

# right after import win32con
import esky

if hasattr(sys, "frozen"):
    app = esky.Esky(sys.executable, "")  # the second argument is your update URL
    app.auto_update()

When the program initializes, it will check the given URL for an update to download. It does that based on version numbers. You just have to create a folder and serve it with Apache. I’m using one of my sites:
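The version-based decision can be pictured with a toy comparison. This is only a sketch of the idea (needs_update is a made-up helper, not part of Esky):

```python
# Toy version comparison like an auto-updater performs (NOT Esky's real code):
# an update is needed when the published version is greater than the running one.
def needs_update(current, available):
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(available) > parse(current)

print(needs_update("1.0.1", "1.0.2"))  # True
print(needs_update("1.0.2", "1.0.2"))  # False
```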

Now, instead of using the boneca.jpg I’ll use this image (chuck.jpg):


#replace boneca.jpg to chuck.jpg

Now, let’s update setup.py to use Esky:

import esky.bdist_esky
from esky.bdist_esky import Executable as Executable_Esky
from cx_Freeze import setup, Executable

include_files = ['boneca.jpg', 'chuck.jpg']

setup(
    name = 'boneca',
    version = '1.0.1',
    options = {
        'build_exe': {
            'packages': ['os', 'sys', 'ctypes', 'win32con'],
            'excludes': ['tkinter', 'tcl', 'ttk'],
            'include_files': include_files,
            'include_msvcr': True,
        },
        'bdist_esky': {
            'freezer_module': 'cx_freeze',
        },
    },
    data_files = include_files,
    scripts = [
        Executable_Esky(
            'boneca.py',
            gui_only = True,
            #icon = XPTO  # Use an icon if you want.
        ),
    ],
    executables = [Executable('boneca.py', base='Win32GUI')],
)

As you can see, I’ve changed the version number to 1.0.1, and from this version on, our program will have auto-update. This file is different from the previous one, so I’ll explain everything that is happening here.

1. Importing bdist_esky and an Executable Esky.

import esky.bdist_esky
from esky.bdist_esky import Executable as Executable_Esky

2. Defining options for the new argument bdist_esky:

        'bdist_esky': {
            'freezer_module': 'cx_freeze',

3. Adding data_files, because Esky uses it to include files in the app’s folder.

      data_files = include_files,

4. Adding scripts, so Esky will know which files will be the executables.

    scripts = [
        Executable_Esky(
            'boneca.py',
            gui_only = True,
            #icon = XPTO  # Use an icon if you want.
        ),
    ],

Run setup.py with the bdist_esky argument to generate the new version:

python setup.py bdist_esky

In your dist folder:

Inside the zip file you will see this:


So what happened? Esky created a program, boneca.exe, that is responsible for updating your app, including itself. When it’s opened, it looks for zip files with new versions at the URL I specified. If there’s a new version, it downloads it and replaces the folder contents, cleaning up the old version.

Esky handles issues such as internet connection problems, power failure, or any other problems while downloading the new version.

So from now on, our app has auto-update which is So Cool, isn’t it??? But unfortunately this version doesn’t have MSVCR support, so in the next and last part of this series, I’ll show how to create your own installer with Inno Setup.

To show how the update works I’ll create one more version (1.0.2) and I’ll change the image again from chuck.jpg to seu-boneco.jpg (it means Mr doll in Portuguese):


Don’t forget to add seu-boneco.jpg to the include_files of setup.py:

include_files = ['boneca.jpg','chuck.jpg', 'seu-boneco.jpg']

Now, let’s generate the new Esky file:

python setup.py bdist_esky

There’s a new file in our dist folder. We just have to put it at the URL we provided, and the next time someone uses boneca-1.0.1 it will be auto-updated.

If you want to test this app, download the file, unzip it, and open boneca.exe.

When you press Print Screen, it will show the Chuck image; at that point it will also be looking for updates, and the next time you open the program it will show the “Seu Boneco” image.


Code on Github.

Wait for the 3rd Part!


Originally published in Portuguese!