Good Night, Westley: Time-To-Live Collections

In The Princess Bride, every night the Dread Pirate Roberts tells Westley: “Good night, Westley. Good work. Sleep well. I’ll most likely kill you in the morning.”

Let’s say the Dread Pirate Roberts wants to optimize this process, so he stores prisoners in a database. When he captures Westley, he can put:

> db.prisoners.insert({
... name: "Westley",
... sentenceStart: new Date()
... })

However, now he has to run some sort of cron job that runs all the time in order to kill everyone who needs killing and keep his database up-to-date.

Enter time-to-live (TTL) collections. TTL collections will be released in MongoDB 2.2; they are collections where documents expire in a more controlled way than with capped collections.

What the Dread Pirate Roberts can do is:

> db.prisoners.ensureIndex({sentenceStart: 1}, {expireAfterSeconds: 24*60*60})

Now, MongoDB will regularly comb this index looking for docs to expire (so it’s actually more of a TTL index than a TTL collection).

Let’s try it out ourselves. You’ll need to download version 2.1.2 or higher to use this feature. Start up the mongod and run the following in the Mongo shell:

> db.prisoners.ensureIndex({sentenceStart: 1}, {expireAfterSeconds: 30})

We’re on a schedule here, so our pirate ship is more brutal: death after 30 seconds. Let’s take aboard a prisoner and watch him die.

> var start = new Date()
> db.prisoners.insert({name: "Haggard Richard", sentenceStart: start})
> while (true) { 
... var count = db.prisoners.count(); 
... print("# of prisoners: " + count + " (" + (new Date() - start) + "ms)");
... if (count == 0) 
...      break; 
... sleep(4000); }
# of prisoners: 1 (2020ms)
# of prisoners: 1 (6021ms)
# of prisoners: 1 (10022ms)
# of prisoners: 1 (14022ms)
# of prisoners: 1 (18023ms)
# of prisoners: 1 (22024ms)
# of prisoners: 1 (26024ms)
# of prisoners: 0 (30025ms)

…and he’s gone.

Edited to add: Stennie pointed out that the TTL job only runs once a minute, so YMMV on when Westley gets bumped.

Conversely, let’s say we want to play the “maybe I’ll kill you tomorrow” game and keep bumping Westley’s expiration date. We can do that by updating the TTL-indexed field:

> db.prisoners.insert({name: "Westley", sentenceStart: new Date()})
> for (i = 0; i < 3; i++) {
... db.prisoners.update({name: "Westley"}, {$set: {sentenceStart: new Date()}});
... sleep(20000);
... }
> db.prisoners.count()
1

…and Westley’s still there, even though it’s been more than 30 seconds.

Once he gets promoted and becomes the Dread Pirate Roberts himself, he can remove himself from the execution rotation by changing his sentenceStart field to a non-date (or removing it altogether):

> db.prisoners.update({name: "Westley"}, {$unset : {sentenceStart: 1}});
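Setting the field to a non-date works because TTL expiration only applies to documents whose indexed field actually holds a date. A variant of the update above, with an illustrative non-date value, might look like:

> db.prisoners.update({name: "Westley"}, {$set : {sentenceStart: "the twelfth of never"}});

Either way, the TTL job skips the document and Westley sails on.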

When not on pirate ships, developers generally use TTL collections for sessions and other cache expiration problems. If ye be wanting a less grim introduction to MongoDB’s TTL collections, there are some docs on it in the manual.
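A session store built on the same idea might look like this sketch (the collection and field names here are made up for illustration):

> db.sessions.ensureIndex({lastActivity: 1}, {expireAfterSeconds: 60*60})
> db.sessions.insert({sessionId: "abc123", user: "buttercup", lastActivity: new Date()})
> // on each request, bump the TTL-indexed field to keep the session alive
> db.sessions.update({sessionId: "abc123"}, {$set: {lastActivity: new Date()}})

An hour of inactivity and the session disappears on its own, no cleanup job required.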

16 thoughts on “Good Night, Westley: Time-To-Live Collections”

  1. Glad to see 10gen finally implemented this must-have feature. Indeed, the lack of this feature is one of the reasons we are currently looking at alternative NoSQL solutions.

    1) Does it work on a sharded collection (where the shard key is not the TTL key)?
    2) What kind of performance penalty should I expect when the background job does auto-expiry on a large collection (500m–1 billion records), say with a TTL of 30 days?



    1. 1) Yes.
      2) We don’t really give numbers for this stuff because it varies on data set, hardware, day of the week, etc.  However, some stuff to think about:

      30 days of docs = 1 billion docs implies 33,333,333 docs/day or ~400 docs/second have to be removed.  Assuming your collection is always 1 billion docs, then 400 docs/second have to be inserted (at least, assuming you never delete anything else), so the system should be able to handle at least 800 writes/second.  

      Because it’s using a time-based index, MongoDB will only have to traverse a small part of the index tree as part of the background job, so it shouldn’t have to pull much into memory (part of the index + the docs it has to remove + any other indexes it has to update when it removes docs). On the downside, if your app mostly hits your more recent data, the older stuff won’t be in memory, so it’ll cause a lot of page faults.

      Sorry I can’t be more specific, but hopefully that will help.  Our general advice is, “try it with your data set.”


      1.  Thanks for your response, Kristina.
        I should have been more specific – 1) the collection is sharded into 4 nodes; 2) we use SSD, and each node can do 2+k wps. However, the concern we have is the activities from this background purging can disrupt the operation of the system — from replication to read/write throughput (and replication is our biggest painpoint, and 2.0 makes this worse so we had to downgrade to 1.8). We used to do data expiry manually, by opening a table-scan cursor and delete old data at a slow pace (rate limiting). However, doing that could easily bring the DB to its knee even we set a very low deletion rate. We end up doing a very manual intensive way — take a secondary offline, export with filter/import/resync/promote-it-to-primary. This works very well, just too ugly.

        Yes, I agree the best we can do is try it out — hopefully you guys fix the replication speed issue (caused by write lock yielding before page-in) in 2.2 so we can upgrade to that version.


      2. I’m not sure how different the performance will be, as the background operation is pretty similar to a user-requested remove.  However, I don’t really know, I’d be interested to hear how it works for you guys.


  2. I can assume there is no difficulty in setting Westley’s sentence to be, say, half an hour in the future and use a TTL of zero? I ask because one of the external APIs I work with returns a “cacheUntil” key in each response which provides a clear “delete this on or after this datetime” value.
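That pattern is exactly what expireAfterSeconds: 0 is for: store the precise expiration datetime in the indexed field, and documents are removed once that time passes. A sketch, assuming a hypothetical cachedResponses collection:

> db.cachedResponses.ensureIndex({cacheUntil: 1}, {expireAfterSeconds: 0})
> // store the API's "delete on or after" time directly in the indexed field
> db.cachedResponses.insert({url: "http://example.com/api", cacheUntil: new Date(new Date().getTime() + 30*60*1000)})

Here the document expires roughly 30 minutes after insertion (plus however long until the next TTL sweep).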

