Git push to multiple remotes asynchronously

If you use something like Dokku or Heroku or any git-hook-style deploy setup, your git deploys can become time consuming, especially if you have to push to multiple servers/remotes. Here's a quick tip to solve that, since I had a hard time finding an async solution.

To add multiple push urls to a git remote, add them to your .git/config manually or run:

git config --add remote.<NAME>.pushurl git@<GIT_REMOTE>
git config --add remote.<NAME>.pushurl git@<GIT_REMOTE>

Now doing

git push <NAME> master

runs through each push url individually and does a push. To do this asynchronously, simply add this function to your shell profile (.bashrc, .zshrc, .profile, etc.):

gpasync () { while read -r url; do git push "$url" "$2" & done < <(git config --get-all "remote.$1.pushurl"); }

Source the shell profile, and from here on out when you run:

gpasync production master

a git push process gets launched for each url, and the pushes happen asynchronously.
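One caveat of backgrounding the pushes is that the function returns immediately, so a failed push can go unnoticed. A small variant (sketched here under the assumption of bash, since process substitution is a bashism; `gpasync_wait` is just a name I picked) blocks until every push finishes:

```shell
# Like gpasync, but wait for all background pushes to exit before
# returning, so failures are visible before you move on.
gpasync_wait () {
    while read -r url; do
        git push "$url" "$2" &
    done < <(git config --get-all "remote.$1.pushurl")
    wait  # block until every background git push completes
}
```

The pushes still run in parallel; `wait` only delays the return of the function itself until the last one exits.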

Grab your docs using the Github API

In an attempt to minimize the number of places I have to update every time I make a change to some documentation, I've decided to pull markdown documentation straight from my git repo and render it on GAuthify. This was surprisingly easy, and the final result looks like this:

From markdown on Github, straight to HTML on GAuthify

Here's how to do this in Python (very easy to port to any other language):

import requests
import base64
import json

def get_github_readme(owner, repo):
    # GET /repos/:owner/:repo/readme returns the preferred readme,
    # with the file content base64 encoded in the 'content' field
    response = requests.get(
        "https://api.github.com/repos/{}/{}/readme".format(owner, repo))
    content_b64 = response.json()['content']
    content = base64.b64decode(content_b64)
    return content

def markdown_to_html(markdown):
    # POST /markdown renders raw markdown to HTML
    return requests.post("https://api.github.com/markdown",
                         data=json.dumps({'text': markdown})).text

And that's all there is to it. get_github_readme grabs the preferred readme from Github (it's just as easy to grab any other specific file), decodes the base64 response and returns the raw markdown. markdown_to_html uses Github's API to return the HTML version of the markdown. I simply inject this into my Django template and, with no modifications, I get results like this:

Now, Github's API allows 60 unauthenticated requests per hour. To make sure we don't hit this limit, and to improve site load times, I cache the docs in Redis for one hour using this function:

def github_html_readme(owner, repo):
    cache_key = 'github:readme:{}:{}'.format(owner, repo)  # one key per repo
    content = redis_server.get(cache_key)
    if not content:
        content_md = get_github_readme(owner, repo)
        content = markdown_to_html(content_md)
        redis_server.set(cache_key, content)
        redis_server.expire(cache_key, 3600)
    return content
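What the function above does is plain cache-aside: check the cache, fall back to the source on a miss, then store the result with a TTL. Here's a minimal, self-contained sketch of the same pattern, with an in-memory dict standing in for Redis and a pluggable fetch function standing in for the Github calls (all names here are illustrative, not from the post):

```python
import time

_cache = {}  # stand-in for Redis: key -> (value, expiry timestamp)

def cache_set(key, value, ttl):
    # mimics redis SET + EXPIRE: value lives for ttl seconds
    _cache[key] = (value, time.time() + ttl)

def cache_get(key):
    # mimics redis GET, returning None for missing or expired keys
    entry = _cache.get(key)
    if entry is None:
        return None
    value, expires = entry
    if time.time() >= expires:
        del _cache[key]
        return None
    return value

def cached_readme(owner, repo, fetch):
    # cache-aside: only call fetch() on a miss, then store for an hour
    key = 'github:readme:{}:{}'.format(owner, repo)
    content = cache_get(key)
    if content is None:
        content = fetch(owner, repo)
        cache_set(key, content, 3600)
    return content
```

As a side note, with redis-py the separate set + expire pair can also be collapsed into a single `setex` call, which sets the value and the TTL atomically.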

From here on out, when I update the docs in the project directories and push them to Github, the website updates on its own. I'll probably find more uses for this kind of thing in the future.

Django Redis Pipeline Trick

If you have projects where many of the pages/views somehow interact with the cache, make sure you use the Redis pipeline provided in redis-py.

Pipelines allow you to run redis commands in batches so that the network latency isn't multiplied per request. For example, if the network round trip between redis and your webserver is 10ms, 20 cache requests would take 0.2 seconds of network time plus very minimal processing time (redis is really fast). However, if you pipeline those requests into one big request, you only get hit with 10ms of latency for the same processing time. Minimizing network latency and external requests is one of the fastest ways to reduce your speed overhead.

Now, I took this a step further when I wanted a ‘global’-esque pipeline to work with in Django. So I made this middleware in base/ :

from redis_server import redis_server

class RedisMiddleware(object):

    def process_request(self, request):
        request.pipeline = redis_server.pipeline()

    def process_response(self, request, response):
        if request.pipeline.command_stack:
            request.pipeline.execute()
        return response

Where redis_server is your initiated redis connection. Then you want to go into your settings and add the middleware:

    'base.middleware.RedisMiddleware' #Added This

And that's all there is to it. Now anywhere you can access the request object, you can add to your pipeline, which executes once the response is complete. For example, in some view:

def sample_view(request):
    request.pipeline.set('user', 'myuser')
    request.pipeline.incr('pageview:user:{}'.format('myuser'), 1)

Some points/considerations:

  • The pipeline automatically executes on the way out (when the response is returned)
  • Redis-py will execute an empty pipeline too, making a network connection for nothing; checking that command_stack is not an empty list ([]) makes sure we avoid that
  • This is for regularly occurring transactions where the return value doesn't really matter (incrementing pageviews, etc.)
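To see the execute-on-response behavior in isolation, here's a sketch with a stand-in pipeline object. FakePipeline only mimics redis-py's command buffering (commands accumulate in command_stack until execute() flushes them); every name in this block is illustrative, not part of the real middleware:

```python
class FakePipeline(object):
    """Mimics redis-py's pipeline: commands are buffered, not sent."""
    def __init__(self):
        self.command_stack = []
        self.executed = False

    def incr(self, key, amount=1):
        self.command_stack.append(('incr', key, amount))
        return self

    def execute(self):
        # flush the buffered commands (one network round trip in real redis-py)
        self.executed = True
        self.command_stack = []

class FakeRequest(object):
    pass

def process_request(request):
    # the middleware attaches a fresh pipeline to every request
    request.pipeline = FakePipeline()

def process_response(request, response):
    # only pay for a connection if a view actually queued commands
    if request.pipeline.command_stack:
        request.pipeline.execute()
    return response

request = FakeRequest()
process_request(request)
request.pipeline.incr('pageview:user:myuser')
process_response(request, 'response body')
```

A request that never touches request.pipeline leaves command_stack empty, so process_response skips execute() entirely, which is exactly the empty-pipeline point from the list above.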

Wheezy: A No Bullshit Python Web Framework.

My web framework choice is always simple: if I'm dealing with a big project consisting of many pages/apps I'll choose Django; if I want to prototype something or make a mini-app I'll go with Flask or Bottle. Nowadays I just choose the framework I'm familiar with, but originally my choice involved some sort of comprehensive framework review to make sure it was worth my invested time. However, this comprehensive review never included a performance check; most Python web frameworks were plenty fast, and premature optimization would be a waste of time. When I was developing the http/rest API for GAuthify, performance became important, and I stumbled upon Wheezy. Here are some quick thoughts.

Wheezy.web is fast

For me it wasn't really a case of premature optimization. When I built the API I decided to spend extra time planning up front so I would have to make minimal adjustments in production. After using Django for the API and testing it with Apache Benchmark, I wasn't really satisfied with the performance. Luckily the API code was well separated from Django, so I began searching for a new framework to swap in.

I found this post on Andriy Kornatskyy's blog (he's the creator of wheezy), and after looking over his data I decided to test out bottle and wheezy.web.

My results were phenomenal for wheezy and alright for bottle, although it took a bit longer to get things working in wheezy. My requests-per-second were factors better, especially at higher concurrencies.

Wheezy is light but complete

Wheezy is a collection of tools that together make a complete web framework. The tools include wheezy.web, wheezy.http, wheezy.routing, wheezy.core, wheezy.captcha, etc. All these parts work completely independently of one another and can be mixed and matched as needed. This leads to one of my other favorite things about the framework: no bullshit. There's not a lot of magic going on in the background, with hundreds of function calls and 500-frame stack traces. This lets you easily dig into the code and modify wheezy to work the way you need it to. The minimalism is probably what gives it performance very similar to raw Python wsgi.

Wheezy is still under development

There are some issues you're going to run into. For example, regex groups using {} can get picked up by the curly routing (instead of the regex routing). However, the cleanliness of the framework makes it easy to go through and submit fixes for the issues. The lack of documentation can also be irritating, but again the framework's clarity makes it easy to figure out what the code is doing.

In short

Performance and clarity are often two non-primary features when looking for a web framework. If you seek them, though, wheezy has my vote as a great overall web framework for Python. TL;DR from the docs: "a lightweight, high performance, high concurrency WSGI web framework with the key features to build modern, efficient web."

Make sure you check it out here.