MongoDB Paging using Ranged Queries (Avoiding Skip())

This post has been rewritten

Originally I gave only one scenario and only one form of paging, and that form was open to loopholes. I also did not take sharding into account for certain paging scenarios.

So what is wrong with skip()?

Nothing. skip() is an awesome and necessary function for 90% of paging cases; however, there are certain times where skip() becomes slow and cumbersome for the server to work on.

The first consideration with using skip() is to understand exactly what is required to perform a skip of x documents. In order to do a skip, MongoDB must effectively do server-side the work you would normally do client-side: reading each document and stepping over it until it reaches the right one.

A skip does not make very effective use of indexes, so it is mostly pointless to add indexes to a collection purely to improve skipping speed.

That being said, skip() works wonders on small result sets, so you must be aware at all times whether applying this post to your queries is actually micro-optimisation or not. I personally cannot tell you whether you are micro-optimising your queries; dropping skip() should only be done after massive testing. I personally have found that skip() can work effectively into the hundreds of thousands of documents, maybe up to a million.

Believing that avoiding skip() will de facto solve all your problems is like believing stored procedures, triggers, or certain crazy querying patterns will solve all your problems in SQL. It’s all bullshit.

So I can no longer live with skip(); what are my choices?

Well, my furry little friend, that all depends. It depends on what application you are making, what your data is like and what your indexes are like, however I am going to outline the solution: RANGED QUERIES. Yep, I said it, ranged queries. The idea is to first exclude everything up to the last document of your previous page with an operator like $gt or $lt (depending on your sort direction) and then use limit() to create the page you want.

Let’s look at an example. We want to make a Facebook feed for what would normally be the user’s homepage when they log in. To begin with we want to get our first page:

var posts = db.posts.find({}).sort({ ts: -1 }).limit(20).toArray(); // newest 20 posts

Here I am using the document’s timestamp (an ISODate) to sort and range my documents.

Note: a timestamp has limited granularity, so if you are likely to get more than one document in the same second (or even the same millisecond) you will need to use something else to break ties. We will discuss this later.

So now we have the first 20 stories for our feed. When the user scrolls to the bottom of the page, we want to fire off another query which will get our next page. In order to accomplish the range we need to know the timestamp of the last document (story) on the page:

var last_ts = posts[posts.length - 1].ts;

With this last timestamp we then make the query that will get the next lot of stories (since we are sorting newest first, the next page is everything older than that timestamp):

var posts = db.posts.find({ ts: { $lt: last_ts } }).sort({ ts: -1 }).limit(20).toArray();

And bingo! Keep repeating steps 1 and 2 and you will have successful paging for large data sets.

But…what about skipping huge sections of my data?

Now you have come to the gritty nasty bit. If skipping huge amounts of data were that easy don’t you think MongoDB would already do it for you?

To skip huge sections you need to understand that it isn’t just the DB that has to do the work; the application has to do something as well, just as sites that deal with log data or other people’s drunken nights out, such as Cloudflare and Facebook, do.

Facebook timelines and many other large paging systems now accommodate a large skip via the interface. Notice how you have that year filter on the right-hand side of your new timeline? Well, that’s how they do it! They simply narrow down the range via the year you pick. Good, isn’t it? Simple, clean, and more effective and user-friendly than putting a random page number into the address bar.
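To make that concrete, here is a rough mongo shell sketch of the year-filter idea (the posts collection and the dates are purely illustrative); you narrow the range to the chosen year first and then page within it exactly as before:

// Hypothetical sketch: restrict the feed to a user-picked year, then page
// within that narrowed range using the same ranged query as above.
var start = new Date(2011, 0, 1); // 1 January 2011
var end   = new Date(2012, 0, 1); // 1 January 2012
var posts = db.posts.find({
    ts: { $gte: start, $lt: end }
}).sort({ ts: -1 }).limit(20).toArray();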

Cloudflare, on their statistics page, gives you a huge summary graph allowing you to pick out specific ranges by using your mouse to highlight parts of the graph. Again this works a lot like Facebook’s approach, whittling the data down to a pre-defined set, or range, of documents.

In fact if you look at 99% of all applications, whether web based or not, that have to deal with huge amounts of data (like Apache logs) you will realise they do not implement normal paging.

So the first and most important thing to understand about how to do large scale paging is that you can no longer solve your problems with just a bit of HTML and a skip(); you need an interface.

Just look at how other programs/sites do it and you will realise how you should do it.

What if I need to pick random records out for paging or just in a normal query?

This is a little off topic but can be used for paging as well.

Sometimes people like to pick random records out of their collection. One option is to use skip(), like:

$cursor = $collection->find();
$cursor->skip(3);          // jump over the first 3 documents
$cursor->next();
$doc = $cursor->current(); // the 4th document

And again, this is good for small skips. But what if you need to do this continuously and at random points?

If you were to do this often and at random points, it might be better to use an incrementing ID, combining another collection with findAndModify to produce an accurate document number ( Doc Link ).
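As a rough sketch in shell syntax (the counters collection, the post_id sequence name and the ai_id field are made-up names for illustration), the counter pattern could look something like this:

// Atomically grab the next value in the sequence.
var next = db.counters.findAndModify({
    query: { _id: "post_id" },
    update: { $inc: { seq: 1 } },
    new: true,
    upsert: true
});

// Store the incremented value on the new document so it can be ranged on later.
db.posts.insert({ ai_id: next.seq, title: "Hello", deleted: 0 });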

This, however, introduces problems: you must maintain this ID, especially when deletes occur. One method around this is to mark documents as deleted instead of actually deleting them. When you query for the exact document you omit deleted documents and limit() to one, allowing you to get the next closest document to that position, like so:

$cursor = $db->collection->find(array('ai_id' => array('$gte' => 403454), 'deleted' => 0))
                         ->sort(array('ai_id' => 1)) // take the closest ID at or after the target
                         ->limit(1);
$cursor->next();
$doc = $cursor->current();

How do I choose my range key?

As I said earlier, a timestamp may not always work for you; choosing the right cardinality and granularity for your range key is very important. A timestamp is not always unique, especially on a Facebook wall: you might have two posts that occur in the same second, and when getting the next page you will actually miss one of them. So the better key to use here is _id, acting as a tie-breaker alongside the timestamp sort, with a compound index on _id and your timestamp field “ts”.
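A minimal shell sketch of that tie-breaking, assuming an index that covers a { ts: -1, _id: -1 } sort (collection and field names as in the earlier examples), might look like this:

// Remember the last document of the current page.
var last = posts[posts.length - 1];

// Next page: strictly older posts, or posts with the same timestamp and a smaller _id.
var next_page = db.posts.find({
    $or: [
        { ts: { $lt: last.ts } },
        { ts: last.ts, _id: { $lt: last._id } }
    ]
}).sort({ ts: -1, _id: -1 }).limit(20).toArray();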

So what is the best key in your scenario? Again I cannot answer that from this post, however, I will try and give a few points and hints on how to choose a good one. I will also try and throw some scenarios in.

The first thing is to get a combination that WILL be unique to each document. So say you are doing some rated items and you want to sort by specific ratings. You would normally create a compound index, range over it and use it as your sort as well, like:

$cursor = $db->collection->find(array('type' => 'video', 'liked' => 1, 'reaction' => 'love'))->sort(array('type' => 1, 'liked' => 1, 'reaction' => 1));

Of course I do not recommend that schema, but this is just an example of how a unique range key would work.
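For what it’s worth, the compound index behind a query like that could be created from the mongo shell along these lines (ensureIndex was the shell helper at the time of writing; the collection name is made up):

// Compound index supporting the filter and sort in the example above.
db.items.ensureIndex({ type: 1, liked: 1, reaction: 1 });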

The second thing to consider is your sharding key. This is important because you don’t want to scan all machines for a small range that you might be able to find on one machine, right? So make sure to take your shard key into consideration. For example, when doing the Facebook feed over an _id range key and a ts sort key, I would make a shard key of _id and ts to make my ranging as fast as possible.
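In shell terms that might look roughly like this (the mydb.posts namespace is made up, and sharding must also be enabled on the database first):

// Shard the feed collection on the compound key described above.
sh.enableSharding("mydb");
sh.shardCollection("mydb.posts", { _id: 1, ts: 1 });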

The problem that can be experienced here, as stated in the comments below, is that you do not always hit the shard key when ranging. This is still OK, since MongoDB houses index ranges and will still be able to hit specific areas of the collection, however it will force a global operation.

You can find a very nice (as @Matt pointed out) pictographic presentation of how queries work in a sharding environment here: http://www.mongodb.org/download/attachments/2097354/how+queries+work+with+sharding.pdf. This will show you how your queries might react on certain shards and under certain conditions.

Now we have covered the absolute basics of how to page without skip() in MongoDB. I am sure there are more parts I have forgotten to add and will probably update this post as time goes on.

Post a comment if you would like further info or see errors.


10 Comments

  1. Matt
    Posted September 29, 2012 at 3:07 am | Permalink | Reply

    This only works if your timestamps are unique. If you have posts with identical timestamps and the last item in a paging range ends on the first of these, the second one won’t be displayed, because your next query only selects items strictly beyond the first of the duplicate timestamps, which doesn’t select the second.

    Admittedly this may occur in a small number of cases, but for some apps where showing all records is important (i.e. logging or financial systems) this solution may not be acceptable.

    • sammaye
      Posted September 29, 2012 at 10:10 am | Permalink | Reply

      Indeed, I actually wrote this post too quickly, to answer a common scenario on mongodb-user. In that case (which has happened to me before) I would use the _id. I might rewrite this to be comprehensive, provided I can find the time.

  2. Matt
    Posted October 1, 2012 at 1:31 am | Permalink | Reply

    Well done on the update. What you describe is currently the best solution to this type of request, but as you have hinted, when sharding gets involved it’s not ideal.

    What if you sharded on the _id and timestamp to enable fast activity streams but then need to be able to show the last 20 posts by a particular user? Would this query need to hit all the machines to get a dataset?

    And what if the shard key isn’t used in the query? Is it processed by all shards and merged back?

    Can you create extra shard lookup indexes on non-sharded keys as in the example above (id, user, timestamp)?

    • sammaye
      Posted October 1, 2012 at 7:24 am | Permalink | Reply

      It depends on the activity stream. I personally go by user_id and _id (as you show for the last question) and I get all subscriptions for that user. It hits about 5 machines at any given time, which as you said isn’t ideal, but fortunately it is still dead fast and very scalable. In this case I have actually sharded on _id, user_id and ts, which seems to work.

      There are ways to optimise this. For example you can, every time a user’s subscription writes a new activity, copy that activity to the user’s stream. This way you only query by one user_id and that should hit only one machine.

      If the shard key isn’t used then it will be forced to do a scatter and gather query: http://www.mongodb.org/display/DOCS/Sharding+Limits#ShardingLimits-Queryspeed which is ok if your dataset is small but if it is larger…watch out

      That last one is a good question; I wouldn’t mind looking into that. I know that indexes work differently, and even though the sharding may not occur on that index I believe MongoDB can do something special with the index (even if it isn’t the shard lookup) that will shortcut, but I would post on mongodb-user about that.

      It should be very fast, since even though it would be a global scatter-and-gather op (http://www.mongodb.org/display/DOCS/Sharding+Introduction#ShardingIntroduction-OperationTypes) MongoDB should be able to whittle down index entries very quickly. I personally have never had to query by another index over an extensive shard range, with the user activity stream being the biggest range, which I can query by either _id or the compound index of all three fields to identify the user’s posts (remember MongoDB, like SQL, can use a prefix of a compound index, so even though you have _id, user_id and ts, a query by _id will still hit that index, since the shard index should take priority over the default _id index).

      The MongoDB docs do have a portion on index optimisation in sharding: http://www.mongodb.org/display/DOCS/Choosing+a+Shard+Key#ChoosingaShardKey-Indexoptimization but it is more about making sure only small portions are queried.

  3. Matt
    Posted October 1, 2012 at 1:43 am | Permalink | Reply

    For anyone interested in seeing how queries are processed on shards there is an excellent pdf available here: http://www.mongodb.org/download/attachments/2097354/how+queries+work+with+sharding.pdf

    Many of the rhetorical questions I pose in the above post are answered here.

    MongoDB is a great DB, but does have its limits.

    • sammaye
      Posted October 1, 2012 at 11:11 am | Permalink | Reply

      That is indeed a very nice presentation. I have added it to the post :)

  4. Edemilson Lima
    Posted October 13, 2012 at 5:17 am | Permalink | Reply

    Very nice and illuminating post!

    I was wondering how I could get pagination without duplicated items in a very dynamic collection. I am developing an application where people will post votes for each other, but while a user loads the next page the votes could have changed the order, as the list is ordered from who gets more votes to who gets fewer. Since I don’t need to show live data (I could update it every 5 minutes or more), my solution was to load the entire data set and cache it, then paginate in the cached items. Is there a better way to do this?

    • sammaye
      Posted October 13, 2012 at 11:00 am | Permalink | Reply

      Hmm, that is a tricky one, and score boards can be very difficult here. The way that I see other sites do it, such as stackoverflow or game sites, is that they cache, as you mentioned.

      Normally for about an hour or so; other sites like stackoverflow cache for up to an entire day (due to user base), and then when the cache refreshes they just don’t care if page 2 has the same user or two as page 1.

      I think most sites consider it too much work to do this kind of stuff truly dynamically.

      One way to do this that would not be confusing is to solve a little bit of an interface problem in your app. You could show how many “places” the user has fallen. This way, if the same user on page 1 appears on page 2 due to a score recalculation the user will know, and the cache will stop the list from constantly moving. You could even have a message sent out to your app to display to everyone when the list has been refreshed.

  5. yavuz
    Posted April 5, 2013 at 12:05 pm | Permalink | Reply

    Hi sammaye, right now I have a web site that has 100,000 products. Products need to be filtered by their price, city, etc., and after I get the results there should be pagination. I have no idea other than using skip for pagination; do you have any idea or any schema so I do not use skip? Thank you.

    • sammaye
      Posted April 5, 2013 at 12:30 pm | Permalink | Reply

      At 100,000 I am not sure if you will notice problems from pagination to be honest.

      However in this case there must be a sort on the returned products, right? If you use that sort as a means to pick out an _id then it should work, and you should be able to do pretty much what I do above.
