Reprinted from Djangozen.
The other week I faced a real slowdown on Arecibo due to a high number of errors being generated by a few other sites. That sent me off to find out how slow Postgres can be compared to MySQL when doing a SQL count: MySQL does extra bookkeeping up front to make that query fast.
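Postgres, with its MVCC design, has to scan rows to answer `SELECT COUNT(*)`, whereas MySQL's MyISAM engine keeps an exact row count on hand. If an approximate figure is good enough, Postgres does keep an estimate in its statistics. A sketch, assuming the table is named `error` here:

```sql
-- planner's row estimate, refreshed by ANALYZE / autovacuum;
-- approximate, but returned instantly
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE relname = 'error';
```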
In Arecibo, we present a list of all the errors that have recently occurred. This page is paginated using the Django pagination class. To figure out how big the result set is, and hence how many pages there are, the default pagination class will call the method count on the queryset.
So how can we make this faster? The first obvious strategy is to change count on your queryset. To do this, write a custom object manager and create your own count. For example:
```python
def count(self):
    # rather stupid count
    return 999999
```
Hmm, that actually works pretty well on some of the larger Arecibo error pages, because we don't really need an exact count when there are a lot of results. If you do this, the pagination class just thinks the list is big and the user sees lots of "next" links.
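Wiring that up as a custom manager might look something like the following. This is a sketch only: the manager name is mine, `Error` stands in for the model, and it only runs inside a configured Django project. Keeping the default manager alongside it is one way to leave the admin untouched:

```python
from django.db import models

class FakeCountManager(models.Manager):
    def count(self):
        # skip the expensive SELECT COUNT(*) entirely and let the
        # paginator believe the result set is huge
        return 999999

class Error(models.Model):
    # ... fields ...
    objects = models.Manager()         # default manager, used by the admin
    fast_objects = FakeCountManager()  # opt-in manager for the error list view
```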
How about caching it? That's straightforward too.
```python
def count(self):
    # not fully tested
    count = cache.get("count")
    if count is not None:
        return count
    count = self.get_query_set().count()
    cache.set("count", count)
    return count
```
Hmm again, ok. The problem is that this is only fast on the second hit; the first hit is still slow.
But you've got the general idea. The problem with these approaches is that they change the ObjectManager, and hence change all count calls for that model (unless I start having multiple ObjectManagers). For me that was a real problem in my application: I still wanted the django-admin and other parts of the site to work, and this change broke quite a lot of unit tests while I was hunting for a solution.
You could also overwrite count on a per-queryset basis. Since there was one key view in Arecibo that was causing the problem, I focused on that and wrote the following:
```python
class CountProxy:
    def __call__(self):
        # do something clever here (cache, guess, etc.)
        # to produce a count
        return count
```
Next, in your view, assign a CountProxy instance to the queryset:
```python
queryset = Error.objects.filter(...)
queryset.count = CountProxy()
Paginator(queryset, ...)
```
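As a Django-free sketch of the same pattern (the cache here is a simple in-memory time-to-live, and `expensive_count` stands in for the real slow query), the proxy might look like this:

```python
import time

class CountProxy:
    """Callable that stands in for queryset.count, caching the
    result of an expensive count for a fixed time-to-live."""

    def __init__(self, real_count, ttl=60):
        self.real_count = real_count  # the expensive callable
        self.ttl = ttl
        self._value = None
        self._stamp = 0.0

    def __call__(self):
        now = time.time()
        if self._value is None or now - self._stamp > self.ttl:
            self._value = self.real_count()  # run the slow query
            self._stamp = now
        return self._value

calls = []
def expensive_count():
    calls.append(1)  # record that the slow query actually ran
    return 123456

proxy = CountProxy(expensive_count)
assert proxy() == 123456
assert proxy() == 123456
assert len(calls) == 1  # second call was served from the cache
```

Because the paginator only ever calls `queryset.count()`, anything callable that returns an integer will satisfy it, which is what makes this trick possible.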
In the end my CountProxy class got quite complicated, since it cached and optimised the query based on the filtering in play. The nice thing is that at this point you have an opportunity to make count as complex and as custom as you'd like. Add in some Postgres index tuning and things turned out a lot better: a page that was sometimes taking around 8 seconds (it was a bad day) came down to about 40ms.
The ability to exercise fine-grained control over that count can be quite useful.