Django Redis Pipeline Trick

If you have projects where many of the pages/views interact with the cache, make sure you use the pipeline support provided by redis-py.

Pipelines allow you to run Redis queries in batches so that the network latency isn’t paid once per command. For example, if the network round trip between Redis and your web server is 10ms, 20 cache requests would cost 0.2 seconds of network time plus very minimal processing time (Redis is really fast). However, if you pipeline those requests into one big request, you get hit with the 10ms of latency only once, with the same processing time. Minimizing network latency and external requests is one of the fastest ways to reduce your speed overhead.
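To make the arithmetic concrete, here is a stdlib-only sketch. `FakeClient` and `FakePipeline` are stand-ins I made up (not redis-py) that charge a hypothetical 10ms round trip per network call:

```python
ROUND_TRIP_MS = 10  # assumed network round trip to the Redis server

class FakeClient:
    """Stand-in client: every command costs one full round trip."""
    def set(self, key, value):
        return ROUND_TRIP_MS

class FakePipeline:
    """Stand-in pipeline: commands are queued, one round trip on execute()."""
    def __init__(self):
        self.command_stack = []

    def set(self, key, value):
        self.command_stack.append(('SET', key, value))
        return self  # queued only -- no network traffic yet

    def execute(self):
        cost = ROUND_TRIP_MS  # one round trip for the whole batch
        self.command_stack = []
        return cost

# 20 individual commands vs. the same 20 commands pipelined
individual = sum(FakeClient().set('key%d' % i, i) for i in range(20))
pipe = FakePipeline()
for i in range(20):
    pipe.set('key%d' % i, i)
batched = pipe.execute()
print(individual, batched)  # 200 vs 10
```

Same commands, same processing on the server; the only difference is how many times you pay the round trip.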

Now, I took this a step further when I wanted a ‘global’-esque pipeline to work with in Django. So I made this middleware in base/middleware.py:

from redis_server import redis_server

class RedisMiddleware(object):

    def process_request(self, request):
        request.pipeline = redis_server.pipeline()

    def process_response(self, request, response):
        # Only hit the network when commands were actually queued
        if request.pipeline.command_stack:
            request.pipeline.execute()
        return response

Where redis_server is your initialized Redis connection. Then you want to go into your settings and add the middleware:

    'base.middleware.RedisMiddleware' #Added This
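In context, that means extending your middleware setting in settings.py. A sketch, assuming the default middleware stack (your existing entries may differ):

```python
# settings.py -- sketch; your existing entries may differ
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'base.middleware.RedisMiddleware',  # Added this
)
```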

And that’s all there is to it. Now, anywhere you can access the request object, you can add to your pipeline, and it will execute once the response is complete. For example, in some view:

def sample_view(request):
    # These calls only queue commands; the middleware sends the batch
    # when the response goes out.
    request.pipeline.set('user', 'myuser')
    request.pipeline.incr('pageview:user:{}'.format('myuser'), 1)
    return HttpResponse('ok')

Some points/considerations:

  • The pipeline automatically executes on the way out (when the response is returned)
  • redis-py will execute an empty pipeline too, making a connection for nothing; checking that command_stack isn’t an empty list ([]) makes sure we avoid that
  • This is for regularly occurring transactions where the result doesn’t really matter (incrementing pageviews, etc.)
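The whole request/response flow, including the empty-pipeline guard, can be exercised without Django or redis-py. A stdlib-only mock, where `FakePipeline` and `FakeRequest` are stand-ins I made up to mirror the middleware hooks above:

```python
class FakePipeline:
    """Stand-in for a redis-py pipeline: queues commands, sends on execute()."""
    def __init__(self):
        self.command_stack = []
        self.executed = False

    def incr(self, key, amount=1):
        self.command_stack.append(('INCR', key, amount))
        return self

    def execute(self):
        self.executed = True
        self.command_stack = []

class FakeRequest:
    pass

# Mirrors of the two middleware hooks
def process_request(request):
    request.pipeline = FakePipeline()

def process_response(request, response):
    # Only hit the network when commands were actually queued
    if request.pipeline.command_stack:
        request.pipeline.execute()
    return response

request = FakeRequest()
process_request(request)

# An empty cycle: no commands queued, so the guard skips execute()
process_response(request, response='empty response')
assert not request.pipeline.executed

# A view queues a command, so the batch is sent on the way out
request.pipeline.incr('pageview:user:myuser')
process_response(request, response='real response')
assert request.pipeline.executed
```

The second bullet is exactly the `command_stack` check in `process_response`: without it, every cache-free page would still pay one Redis round trip for nothing.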


  1. Nice idea.

    But I still don’t understand the motivation for worrying about network latency in this case.

    If you use Django, you’ve probably given up on premature optimization in the first place :)

    Still cool, though.

