The Comments Conundrum

One of the most common questions I see about MongoDB schema design is:

I have a collection of blog posts and each post has an array of comments. How do I get…
…all comments by a given author
…the most recent comments
…the most popular commenters?

And so on. The answer to this has always been “Well, you can’t do that on the server side…” You can either do it on the client side or store comments in their own collection. What you really want is the ability to treat embedded documents like a “real” collection.

The aggregation pipeline gives you this ability by letting you “unwind” arrays into separate documents, then doing whatever else you need to do in subsequent pipeline operators.

For example…

Note: you must be running at least version 2.1.0 of MongoDB to use the aggregation pipeline.

Getting all comments by Serious Cat

Serious Cat’s comments are scattered between post documents, so there wasn’t a good way of querying for just those embedded documents. Now there is.

Let’s assume we want each comment by Serious Cat, along with the title and url of the post Serious Cat was commenting on. So, the steps we need to take are:

  1. Extract the fields we want (title, url, comments)
  2. Unwind the comments field: make each comment into a “real” document
  3. Query our new “comments collection” for “Serious Cat”

Using the aggregation pipeline, this looks like:

> db.runCommand({aggregate: "posts", pipeline: [
{
   // extract the fields 
   $project: {
        title : 1,
        url : 1,
        comments : 1
    }
},
{
    // explode the "comments" array into separate documents
    $unwind: "$comments"
},
{
    // query like a boss
    $match: {"comments.author" : "Serious Cat"}
}]})

Now, this works well for something like a blog, where you have human-generated (small) data. If you’ve got gigs of comments to go through, you probably want to filter out as many documents as possible (e.g., with $match or $limit) before they hit the “everything-in-memory” parts of the pipeline.
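
For example, if all you care about is Serious Cat, a $match on the embedded field can go first, so that later stages only see posts containing at least one comment by Serious Cat. A sketch of the same query with the filter pushed up front (the second $match is still needed to drop the other commenters’ comments after the $unwind):

> db.runCommand({aggregate: "posts", pipeline: [
{
    // only posts with at least one Serious Cat comment reach the rest of the pipeline
    $match: {"comments.author" : "Serious Cat"}
},
{
    $project: {title : 1, url : 1, comments : 1}
},
{
    $unwind: "$comments"
},
{
    // drop the unwound comments that belong to other authors
    $match: {"comments.author" : "Serious Cat"}
}]})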

Getting the most recent comments

Let’s assume our site lists the 10 most recent comments across all posts, with links back to the posts they appeared on, e.g.,

  1. Great post! -Jerry (February 2nd, 2012) from This is a Great Post
  2. What does batrachophagous mean? -Fred (February 2nd, 2012) from Fun with Crosswords
  3. Where can I get discount Prada shoes? -Tom (February 1st, 2012) from Rant about Spam

To extract these comments from a collection of posts, you could do something like:

> db.runCommand({aggregate: "posts", pipeline: [
{
   // extract the fields
   $project: {
        title : 1,
        url : 1,
        comments : 1
    }
},
{
    // explode "comments" array into separate documents
    $unwind: "$comments"
},
{
    // sort newest first
    $sort: {
        "comments.date" : -1
    }
},
{
    // get the 10 newest
    $limit: 10
}]})

Let’s take a moment to look at what $unwind does to a sample document.

Suppose you have a document that looks like this after the $project:

{
    "url" : "/blog/spam",
    "title" : "Rant about Spam",
    "comments" : [
        {text : "Where can I get discount Prada shoes?", ...},
        {text : "First!", ...},
        {text : "I hate spam, too!", ...},
        {text : "I love spam.", ...}
    ]
}

Then, after unwinding the comments field, you’d have:

{
    "url" : "/blog/spam",
    "title" : "Rant about Spam",
    "comments" : [
        {text : "Where can I get discount Prada shoes?", ...},
    ]
}
{
    "url" : "/blog/spam",
    "title" : "Rant about Spam",
    "comments" : [
        {text : "First!", ...}
    ]
}
{
    "url" : "/blog/spam",
    "title" : "Rant about Spam",
    "comments" : [
        {text : "I hate spam, too!", ...}
    ]
}
{
    "url" : "/blog/spam",
    "title" : "Rant about Spam",
    "comments" : [
        {text : "I love spam.", ...}
    ]
}

Then we $sort, $limit, and Bob’s your uncle.

Rank commenters by popularity

Suppose we allow users to upvote comments and we want to see who the most popular commenters are.

The steps we want to take are:

  1. Project out the fields we need (similar to above)
  2. Unwind the comments array (similar to above)
  3. Group by author, summing up votes (this adds up the votes on all of each author’s comments)
  4. Sort authors to find the most popular commenters

Using the pipeline, this would look like:

> db.runCommand({aggregate: "posts", pipeline: [
{
   // extract the fields we'll need
   $project: {
        title : 1,
        url : 1,
        comments : 1
    }
},
{
    // explode "comments" array into separate documents
    $unwind: "$comments"
},
{
    // count up votes by author
    $group : {
        _id : "$comments.author",
        popularity : {$sum : "$comments.votes"}
    }
},
{
    // sort by the new popular field
    $sort: {
        "popularity" : -1
    }
}]})

As I mentioned before, there are a couple of downsides to using the aggregation pipeline: a lot of it is done in memory, so it can be very CPU- and memory-intensive. However, used judiciously, it gives you a lot more freedom to mush around your embedded documents.

Hacking Chess: Data Munging

This is a supplement to the Hacking Chess with the MongoDB Pipeline post, with instructions for rolling your own data sets from chess games.

Download a collection of chess games you like. I’m using 1132 wins in less than 10 moves, but any of them should work.

These files are in a format called portable game notation (.PGN), which is a human-readable notation for chess games. For example, the first game in TEN.PGN (helloooo 80s filenames) looks like:

[Event "?"]
[Site "?"]
[Date "????.??.??"]
[Round "?"]
[White "Gedult D"]
[Black "Kohn V"]
[Result "1-0"]
[ECO "B33/09"]

1.e4 c5 2.Nf3 Nc6 3.d4 cxd4 4.Nxd4 Nf6
5.Nc3 e5 6.Ndb5 d6 7.Nd5 Nxd5 8.exd5 Ne7
9.c4 a6 10.Qa4  1-0

This represents a 10-turn win at an unknown event. The “ECO” field shows which opening was used (a Sicilian in the game above).

Unfortunately for us, MongoDB doesn’t import PGNs in their native format, so we’ll need to convert them to JSON. I found a PGN->JSON converter in PHP that did the job here. Scroll down to the “download” section to get the .zip.

It’s one of those zips that vomits its contents into whatever directory you unzip it in, so create a new directory for it.

So far, we have:

$ mkdir chess
$ cd chess
$
$ ftp ftp://ftp.pitt.edu/group/student-activities/chess/PGN/Collections/ten-pg.zip ./
$ unzip ten-pg.zip
$
$ wget http://www.dhtmlgoodies.com/scripts/dhtml-chess/dhtml-chess.zip
$ unzip dhtml-chess.zip

Now, create a simple script, say parse.php, to run through the chess matches and output them in JSON, one per line:

<?php
// adjust the include path (and class name, if needed) to match the converter you unzipped
include_once("pgnParser.php");

$parser = new PgnParser("TEN.PGN");
$numGames = $parser->getNumberOfGames();
for ($i = 0; $i < $numGames; $i++) {
    echo $parser->getGameDetailsAsJson($i)."\n";
}
?>

Run parse.php and dump the results into a file:

$ php parse.php > games.json

Now you’re ready to import games.json.

Back to the original “hacking” post

Hacking Chess with the MongoDB Pipeline

MongoDB’s new aggregation framework is now available in the nightly build! This post demonstrates some of its capabilities by using it to analyze chess games.

Make sure you have the “Development Release (Unstable)” nightly running before trying out the stuff in this post. The aggregation framework will be in 2.1.0, but as of this writing it’s only in the nightly build.

First, we need some chess games to analyze. Download games.json, which contains 1132 games that were won in 10 moves or less (crush their soul and do it quick).

You can use mongoimport to import games.json into MongoDB:

$ mongoimport --db chess --collection fast_win games.json
connected to: 127.0.0.1
imported 1132 objects

We can take a look at our chess games in the Mongo shell:

> use chess
switched to db chess
> db.fast_win.count()
1132
> db.fast_win.findOne()
{
	"_id" : ObjectId("4ed3965bf86479436d6f1cd7"),
	"event" : "?",
	"site" : "?",
	"date" : "????.??.??",
	"round" : "?",
	"white" : "Gedult D",
	"black" : "Kohn V",
	"result" : "1-0",
	"eco" : "B33/09",
	"moves" : {
		"1" : {
			"white" : {
				"move" : "e4"
			},
			"black" : {
				"move" : "c5"
			}
		},
		"2" : {
			"white" : {
				"move" : "Nf3"
			},
			"black" : {
				"move" : "Nc6"
			}
		},
                ...
		"10" : {
			"white" : {
				"move" : "Qa4"
			}
		}
	}
}

Not exactly the greatest schema, but that’s how the chess format exporter munged it. Regardless, now we can use aggregation pipelines to analyze these games.

Experiment #1: First Mover Advantage

White has a slight advantage in chess because white moves first (Wikipedia says it’s a 52%-56% chance of winning). I’d hypothesize that, in a short game, going first matters even more.

Let’s find out.

The “result” field in these docs is “1-0” if white wins and “0-1” if black wins. So, we want to divide our docs into two groups based on the “result” field and count how many docs are in each group. Using the aggregation pipeline, this looks like:

> db.runCommand({aggregate : "fast_win", pipeline : [
... {
...    $group : {
...        _id : "$result",      // group by 'result' field
...        numGames : {$sum : 1} // add 1 for every document in the group
...    }
... }]})
{
	"result" : [
		{
			"_id" : "0-1",
			"numGames" : 435
		},
		{
			"_id" : "1-0",
			"numGames" : 697
		}
	],
	"ok" : 1
}

That gives a 62% chance white will win (697 wins/1132 total games). Pretty good (although, of course, this isn’t a very large sample set).

In case you’re not familiar with it: a reference chessboard, with ranks 1-8 and files a-h marked.

Experiment #2: Best Starting Move

Given a starting move, what percent of the time will that move lead to victory? This probably depends on whether you’re playing white or black, so we’ll just focus on white’s opening move.

First, we’ll just determine what starting moves white uses with this series of steps:

  • project all of white’s first moves (the moves.1.white.move field)
  • group all docs with the same starting move together
  • and count how many documents (games) used that move.

These steps look like:

> db.runCommand({aggregate: "fast_win", pipeline: [
... // '$project' is used to extract all of white's opening moves
... {
...     $project : {
...         // extract moves.1.white.move into a new field, firstMove
...         firstMove : "$moves.1.white.move"
...     }
... },
... // use '$group' to calculate the number of times each move occurred
... {
...     $group : { 
...         _id : "$firstMove",
...         numGames : {$sum : 1}
...     }
... }]})
{
	"result" : [
		{
			"_id" : "d3",
			"numGames" : 2
		},
		{
			"_id" : "e4",
			"numGames" : 696
		},
		{
			"_id" : "b4",
			"numGames" : 17
		},
		{
			"_id" : "g3",
			"numGames" : 3
		},
		{
			"_id" : "e3",
			"numGames" : 2
		},
		{
			"_id" : "c4",
			"numGames" : 36
		},
		{
			"_id" : "b3",
			"numGames" : 4
		},
		{
			"_id" : "g4",
			"numGames" : 11
		},
		{
			"_id" : "h4",
			"numGames" : 1
		},
		{
			"_id" : "Nf3",
			"numGames" : 37
		},
		{
			"_id" : "f3",
			"numGames" : 1
		},
		{
			"_id" : "f4",
			"numGames" : 25
		},
		{
			"_id" : "Nc3",
			"numGames" : 14
		},
		{
			"_id" : "d4",
			"numGames" : 283
		}
	],
	"ok" : 1
}

Now let’s compare those numbers with whether white won or lost.

> db.runCommand({aggregate: "fast_win", pipeline: [
... // extract the first move
... {
...    $project : {
...        firstMove : "$moves.1.white.move",
...        // create a new field, "win", which is 1 if white won and 0 if black won
...        win : {$cond : [
...            {$eq : ["$result", "1-0"]}, 1, 0
...        ]}
...    }
... },
... // group by the move and count up how many winning games used it
... {
...     $group : {
...         _id : "$firstMove",
...         numGames : {$sum : 1},
...         numWins : {$sum : "$win"}
...     }
... },
... // calculate the percent of games won with this starting move
... {
...     $project : {
...         _id : 1,
...         numGames : 1,
...         percentWins : {
...             $multiply : [100, {
...                 $divide : ["$numWins","$numGames"]
...             }]
...         }
...     }
... },
... // discard moves that were used in less than 10 games (probably not representative) 
... {
...     $match : {
...         numGames : {$gte : 10}
...     }
... },
... // order from worst to best
... {
...     $sort : {
...         percentWins : 1
...     }
... }]})
{
	"result" : [
		{
			"_id" : "f4",
			"numGames" : 25,
			"percentWins" : 24
		},
		{
			"_id" : "b4",
			"numGames" : 17,
			"percentWins" : 35.294117647058826
		},
		{
			"_id" : "c4",
			"numGames" : 36,
			"percentWins" : 50
		},
		{
			"_id" : "d4",
			"numGames" : 283,
			"percentWins" : 50.53003533568905
		},
		{
			"_id" : "g4",
			"numGames" : 11,
			"percentWins" : 63.63636363636363
		},
		{
			"_id" : "Nf3",
			"numGames" : 37,
			"percentWins" : 67.56756756756756
		},
		{
			"_id" : "e4",
			"numGames" : 696,
			"percentWins" : 68.24712643678161
		},
		{
			"_id" : "Nc3",
			"numGames" : 14,
			"percentWins" : 78.57142857142857
		}
	],
	"ok" : 1
}

Pawn to e4 seems like the most dependable winner here. Knight to c3 also seems like a good choice (at a nearly 80% win rate), but it was only used in 14 games.

Experiment #3: Best and Worst Moves for Black

We want basically the same pipeline as Experiment 2, but for black. At the end, we want to find the best and worst win percentages.

> db.runCommand({aggregate: "fast_win", pipeline: [
... // extract the first move
... {
...    $project : {
...        firstMove : "$moves.1.black.move",
...        win : {$cond : [
...            {$eq : ["$result", "0-1"]}, 1, 0
...        ]}
...    }
... },
... // group by the move and count up how many winning games used it
... {
...     $group : {
...         _id : "$firstMove",
...         numGames : {$sum : 1},
...         numWins : {$sum : "$win"}
...     }
... },
... // calculate the percent of games won with this starting move
... {
...     $project : {
...         _id : 1,
...         numGames : 1,
...         percentWins : {
...             $multiply : [100, {
...                 $divide : ["$numWins","$numGames"]
...             }]
...         }
...     }
... },
... // discard moves that were used in less than 10 games (probably not representative) 
... {
...     $match : {
...         numGames : {$gte : 10}
...     }
... },
... // sort by % win rate
... {
...     $sort : {
...         percentWins : -1
...     }
... },
... // get the best and worst
... {
...     $group : {
...          _id : 1,
...          best : {$first : "$_id"},
...          worst : {$last : "$_id"}
...     }
... }]})
{
	"result" : [
		{
			"_id" : 1,
			"best" : "Nf6",
			"worst" : "d6"
		}
	],
	"ok" : 1
}

I like this new aggregation functionality because it feels simpler than MapReduce. You can start with a one-operation pipeline and build it up, step-by-step, seeing exactly what a given operation does to your output. (And no JavaScript required, which is always a plus.)
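
For instance, you might start with just the $project from Experiment #2, eyeball its output, then append the $group and run it again (a sketch of that incremental style):

> db.runCommand({aggregate: "fast_win", pipeline: [
...     {$project : {firstMove : "$moves.1.white.move"}}
... ]})
> // looks right? add the next stage and re-run
> db.runCommand({aggregate: "fast_win", pipeline: [
...     {$project : {firstMove : "$moves.1.white.move"}},
...     {$group : {_id : "$firstMove", numGames : {$sum : 1}}}
... ]})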

There’s lots more documentation on aggregation pipelines in the docs and I’ll be doing a couple more posts on it.

Replica Set Internals Bootcamp: Part I – Elections

I’ve been doing replica set “bootcamps” for new hires. The bootcamp is mainly focused on applying this knowledge to debug replica set issues and on being able to talk fluently about what’s happening, but it occurred to me that you (blog readers) might be interested in it, too.

There are 8 subjects I cover in my bootcamp:

  1. Elections
  2. Creating a set
  3. Reconfiguring
  4. Syncing
  5. Initial Sync
  6. Rollback
  7. Authentication
  8. Debugging

I’m going to do one subject per post; we’ll see how many I can get through.

Prerequisites: I’m assuming you know what replica sets are and you’ve configured a set, written data to it, read from a secondary, etc. You understand the terms primary and secondary.

The most obvious feature of replica sets is their ability to elect a new primary, so the first thing we’ll cover is this election process.

Replica Set Elections

Let’s say we have a replica set with 3 members: X, Y, and Z. Every two seconds, each server sends out a heartbeat request to the other members of the set. So, if we wait a few seconds, X sends out heartbeats to Y and Z. They respond with information about their current situation: the state they’re in (primary/secondary), whether they are eligible to become primary, their current clock time, etc.

X receives this info and updates its “map” of the set: which members have come up or gone down, which have changed state, and how long the roundtrip took.
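
If you want to see roughly what this map looks like from a member’s point of view, rs.status() (a wrapper around the replSetGetStatus command) shows each member’s state, health, and ping time as seen by the member you’re connected to. The output below is illustrative and trimmed; the exact fields vary a bit by version:

> rs.status()
{
	"myState" : 1,
	"members" : [
		{ "name" : "X:27017", "stateStr" : "PRIMARY", "health" : 1, "self" : true },
		{ "name" : "Y:27017", "stateStr" : "SECONDARY", "health" : 1, "pingMs" : 1 },
		{ "name" : "Z:27017", "stateStr" : "SECONDARY", "health" : 1, "pingMs" : 2 }
	],
	"ok" : 1
}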

At this point, if X’s map changed, X will check a couple of things: if X is primary and a member went down, it will make sure it can still reach a majority of the set. If it cannot, it’ll demote itself to a secondary.

Demotions

There is one wrinkle with X demoting itself: in MongoDB, writes default to fire-and-forget. Thus, if people are doing fire-and-forget writes on the primary and it steps down, they might not realize X is no longer primary and keep sending writes to it. The secondary-formerly-known-as-primary will be like, “I’m a secondary, I can’t write that!” But because fire-and-forget writes never get a response, the client would never know.

Technically, we could say, “well, they should use safe writes if they care,” but that seems dickish. So, when a primary is demoted, it also closes all connections to clients so that they will get a socket error when they send the next message. All of the client libraries know to re-check who is primary if they get an error. Thus, they’ll be able to find who the new primary is and not accidentally send an endless stream of writes to a secondary.
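
From the shell, you can do the same check the drivers do: db.isMaster() reports whether the member you’re connected to is primary and, if it isn’t, which member currently is. Again, illustrative and trimmed output:

> db.isMaster()
{
	"ismaster" : false,
	"secondary" : true,
	"hosts" : [ "X:27017", "Y:27017", "Z:27017" ],
	"primary" : "Y:27017",
	"ok" : 1
}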

Elections

Anyway, getting back to the heartbeats: if X is a secondary, it’ll occasionally check whether it should elect itself, even if its map hasn’t changed. First, it’ll do a sanity check: does another member think it’s primary? Does X think it’s already primary? Is X ineligible for election? If it fails any of these basic checks, it’ll continue puttering along as is.

If it seems as though a new primary is needed, X will proceed to the first step in election: it sends a message to Y and Z, telling them “I am considering running for primary, can you advise me on this matter?”

When Y and Z get this message, they quickly check their world view. Do they already know of a primary? Do they have more recent data than X? Does anyone they know of have more recent data than X? They run through a huge list of sanity checks and, if everything seems satisfactory, they tentatively reply “go ahead.” If they find a reason that X cannot be elected, they’ll reply “stop the election!”

If X receives any “stop the election!” messages, it cancels the election and goes back to life as a secondary.

If everyone says “go ahead,” X continues with the second (and final) phase of the election process.

For the second phase, X sends out a second message that is basically, “I am formally announcing my candidacy.” At this point, Y and Z make a final check: do all of the conditions that held true before still hold? If so, they allow X to take their election lock and send back a vote. The election lock prevents them from voting for another candidate for 30 seconds.

If one of the checks doesn’t pass the second time around (fairly unusual, at least in 2.0), they send back a veto. If anyone vetoes, the election fails.

Suppose that Y votes for X and Z vetoes X. At that point, Y’s election lock is taken: it cannot vote in another election for 30 seconds. That means that, if Z wants to run for primary, it had better be able to get X’s vote. That said, it should be able to if Z is a viable candidate: it’s not like the members hold grudges (except for Y, for 30 seconds).

If no one vetoes and the candidate member receives votes from a majority of the set, the candidate becomes primary.

Confused?

Feel free to ask questions in the comments below. This is a loving, caring bootcamp (as bootcamps go).

Popping Timestamps into ObjectIds

ObjectIds contain a timestamp, which tells you when the document was created. Because the _id field is always indexed, you have a “free” index on your “created at” time (unless you have persnickety requirements for creation times, like resolution of less than a second, synchronization across app servers, etc.).

Actually using this index can seem daunting (how do you use an ObjectId to query for a certain date?) so let’s run through an example.

First, let’s insert 100 sample docs, 10 docs/second.

> for (i=0; i<10; i++) { 
... print(i+": "+Date.now()); 
... for (j=0; j<10; j++) { 
...    db.foo.insert({x:i,y:j}); 
... } 
... sleep(1000); 
... }
0: 1324417241111
1: 1324417242112
2: 1324417243112
3: 1324417244113
4: 1324417245114
5: 1324417246115
6: 1324417247115
7: 1324417248116
8: 1324417249117
9: 1324417250117

Let’s find all entries created after 1324417246115 (when i=5).

The time is currently in milliseconds (that’s how JavaScript does dates), so we’ll have to convert it to seconds:

> secs = Math.floor(1324417246115/1000)
1324417246

(Your secs will be different than mine, of course.)

ObjectIds can be constructed from a 24-character string, each two characters representing a byte (e.g., “ff” is 255). So, we need to convert secs to hexadecimal, which luckily is super-easy in JavaScript:

> hexSecs = secs.toString(16)
4ef100de

Now, we create an ObjectId from this:

> id = ObjectId(hexSecs+"0000000000000000")
ObjectId("4ef100de0000000000000000")

If you get the wrong number of zeros here, you’ll get an error message that is, er, hard to miss.

Now, we query for everything created after this timestamp:

> db.foo.find({_id : {$gt : id}})
{ "_id" : ObjectId("4ef100de7d435c39c3016405"), "x" : 5, "y" : 0 }
{ "_id" : ObjectId("4ef100de7d435c39c3016406"), "x" : 5, "y" : 1 }
{ "_id" : ObjectId("4ef100de7d435c39c3016407"), "x" : 5, "y" : 2 }
{ "_id" : ObjectId("4ef100de7d435c39c3016408"), "x" : 5, "y" : 3 }
{ "_id" : ObjectId("4ef100de7d435c39c3016409"), "x" : 5, "y" : 4 }
{ "_id" : ObjectId("4ef100de7d435c39c301640a"), "x" : 5, "y" : 5 }
{ "_id" : ObjectId("4ef100de7d435c39c301640b"), "x" : 5, "y" : 6 }
{ "_id" : ObjectId("4ef100de7d435c39c301640c"), "x" : 5, "y" : 7 }
{ "_id" : ObjectId("4ef100de7d435c39c301640d"), "x" : 5, "y" : 8 }
{ "_id" : ObjectId("4ef100de7d435c39c301640e"), "x" : 5, "y" : 9 }
{ "_id" : ObjectId("4ef100df7d435c39c301640f"), "x" : 6, "y" : 0 }
{ "_id" : ObjectId("4ef100df7d435c39c3016410"), "x" : 6, "y" : 1 }
{ "_id" : ObjectId("4ef100df7d435c39c3016411"), "x" : 6, "y" : 2 }
{ "_id" : ObjectId("4ef100df7d435c39c3016412"), "x" : 6, "y" : 3 }
{ "_id" : ObjectId("4ef100df7d435c39c3016413"), "x" : 6, "y" : 4 }
{ "_id" : ObjectId("4ef100df7d435c39c3016414"), "x" : 6, "y" : 5 }
{ "_id" : ObjectId("4ef100df7d435c39c3016415"), "x" : 6, "y" : 6 }
{ "_id" : ObjectId("4ef100df7d435c39c3016416"), "x" : 6, "y" : 7 }
{ "_id" : ObjectId("4ef100df7d435c39c3016417"), "x" : 6, "y" : 8 }
{ "_id" : ObjectId("4ef100df7d435c39c3016418"), "x" : 6, "y" : 9 }
Type "it" for more

If we look at the explain for the query, you can see that it’s using the index:

> db.foo.find({_id:{$gt:id}}).explain()
{
	"cursor" : "BtreeCursor _id_",
	"nscanned" : 50,
	"nscannedObjects" : 50,
	"n" : 50,
	"millis" : 0,
	"nYields" : 0,
	"nChunkSkips" : 0,
	"isMultiKey" : false,
	"indexOnly" : false,
	"indexBounds" : {
		"_id" : [
			[
				ObjectId("4ef100de0000000000000000"),
				ObjectId("ffffffffffffffffffffffff")
			]
		]
	}
}

We’re not quite done, because we’re actually not returning what we wanted: we’re getting all docs greater than or equal to the “created at” time, not just greater than. To fix this, we’d just need to add 1 to the secs before doing anything else. Or I can claim that we were querying for documents created after i=4 all along.
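
If you do this often, it’s handy to wrap the steps above in a couple of little shell helpers (a sketch, not built-in functions; it assumes the shell’s ObjectId .str field, which holds the 24-character hex string):

// build an ObjectId whose timestamp is the given JavaScript millisecond time
function objectIdFromMillis(millis) {
    var hexSecs = Math.floor(millis / 1000).toString(16);
    while (hexSecs.length < 8) {   // pad the timestamp out to a full 4 bytes
        hexSecs = "0" + hexSecs;
    }
    return ObjectId(hexSecs + "0000000000000000");
}

// and the reverse: pull the creation time back out of an ObjectId
function millisFromObjectId(id) {
    return parseInt(id.str.substring(0, 8), 16) * 1000;
}

// the query above becomes:
// db.foo.find({_id : {$gt : objectIdFromMillis(1324417246115)}})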

SQL to MongoDB: An Updated Mapping

Rick Osborne's original chart.

The aggregation pipeline code has finally been merged into the main development branch and is scheduled for release in 2.2. It lets you combine simple operations (like finding the max or min, projecting out fields, taking counts or averages) into a pipeline of operations, making a lot of things that were only possible by using MapReduce doable with a “normal” query.

In celebration of this, I thought I’d re-do the very popular MySQL to MongoDB mapping using the aggregation pipeline, instead of MapReduce.

Here is the original SQL:

SELECT
  Dim1, Dim2,
  SUM(Measure1) AS MSum,
  COUNT(*) AS RecordCount,
  AVG(Measure2) AS MAvg,
  MIN(Measure1) AS MMin,
  MAX(CASE
    WHEN Measure2 < 100
    THEN Measure2
  END) AS MMax
FROM DenormAggTable
WHERE (Filter1 IN ('A','B'))
  AND (Filter2 = 'C')
  AND (Filter3 > 123)
GROUP BY Dim1, Dim2
HAVING (MMin > 0)
ORDER BY RecordCount DESC
LIMIT 4, 8

We can break up this statement and replace each piece of SQL with the new aggregation pipeline syntax:

Below, each piece of the MongoDB pipeline is followed by the MySQL it replaces:
aggregate: "DenormAggTable"
FROM DenormAggTable
{
    $match : {
        Filter1 : {$in : ['A','B']},
        Filter2 : 'C',
        Filter3 : {$gt : 123}
    }
}
WHERE (Filter1 IN ('A','B'))
  AND (Filter2 = 'C')
  AND (Filter3 > 123)
{
    $project : {
        Dim1 : 1,
        Dim2 : 1,
        Measure1 : 1,
        Measure2 : 1,
        lessThanAHundred : {
            $cond: [ 
                {$lt: ["$Measure2", 100] },
                "$Measure2", // if
                0]           // else
        }
    }
}
CASE
  WHEN Measure2 < 100
  THEN Measure2
END
{
    $group : {
        _id : {Dim1 : 1, Dim2 : 1},
        MSum : {$sum : "$Measure1"},
        RecordCount : {$sum : 1},
        MAvg : {$avg : "$Measure2"},
        MMin : {$min : "$Measure1"},
        MMax : {$max : "$lessThanAHundred"}
    }
}
SELECT
  Dim1, Dim2,
  SUM(Measure1) AS MSum,
  COUNT(*) AS RecordCount,
  AVG(Measure2) AS MAvg,
  MIN(Measure1) AS MMin,
  MAX(CASE
    WHEN Measure2 < 100
    THEN Measure2
  END) AS MMax

GROUP BY Dim1, Dim2
{
    $match : {MMin : {$gt : 0}}
}
HAVING (MMin > 0)
{
    $sort : {RecordCount : -1}
}
ORDER BY RecordCount DESC
{
    $limit : 8
},
{
    $skip : 4
}
LIMIT 4, 8

Putting all of these together gives you your pipeline:

> db.runCommand({aggregate: "DenormAggTable", pipeline: [
{
    $match : {
        Filter1 : {$in : ['A','B']},
        Filter2 : 'C',
        Filter3 : {$gt : 123}
    }
},
{
    $project : {
        Dim1 : 1,
        Dim2 : 1,
        Measure1 : 1,
        Measure2 : 1,
        lessThanAHundred : {$cond: [{$lt: ["$Measure2", 100]},
            "$Measure2",
            0]
        }
    }
},
{
    $group : {
        _id : {Dim1 : 1, Dim2 : 1},
        MSum : {$sum : "$Measure1"},
        RecordCount : {$sum : 1},
        MAvg : {$avg : "$Measure2"},
        MMin : {$min : "$Measure1"},
        MMax : {$max : "$lessThanAHundred"}
    }
},
{
    $match : {MMin : {$gt : 0}}
},
{
    $sort : {RecordCount : -1}
},
{
    $limit : 8
},
{
    $skip : 4
}
]})

As you can see, the SQL matches the pipeline operations pretty clearly. If you want to play with it, it’ll be available soon in the development nightly build.

If you’re at MongoSV today (December 9th, 2011), check out Chris Westin’s talk on the new aggregation framework at 3:45 in room B4.

On working at 10gen

10gen is trying to hire a gazillion people, so I’m averaging two interviews a day (bleh). A lot of people have asked what it’s like to work on MongoDB, so I thought I’d write a bit about it.

A Usual Day

Coffee: the lynchpin of my day.
  • Get in around 10am.
  • Check if there are any commercial support questions that need to be answered right now.
  • Have a cup of coffee and code until lunch.
  • Eat lunch.
  • If nothing dire has happened, go out for coffee+writing. This refuels my brain and is a creative outlet: that’s where I am now. My coffee does not look nearly as awesome as the coffee on the right.
  • Go back to the office, code all afternoon.
  • Depending on the day, usually between 5:30 and 6:30 the programmers will naturally start discussing problems we had over the day, interviews, support, the latest geek news, etc. Often beers are broken out.
  • Wrap up, go home.

There are some variations on this: as I mentioned, a lot of time lately is taken up by interviewing. Other coworkers spend a lot more time than I do at consults, trainings, speaking at conferences, etc.

Other General Workday Stuff

On Fridays, we have lunch as a team. After lunch, we have a tech talk where someone presents on what they’re working on (e.g., the inspiration for my geospatial post) or general info that’s good to know (e.g., the inspiration for my virtual memory post). This is a nice way to end the week, especially since Fridays often wrap up earlier than other days.

A couple people use OS X or Windows for development, most people use Linux. You can use whatever you want. I’d like to encourage emacs users, in particular, to apply, as we’re falling slightly behind vi in numbers.

We sit in an open office plan, everyone at tables in a big room (including the CEO and CTO, who are both programmers). The only people in separate rooms are the people who have to be on the phone all day (sales, marketers, basketweavers… I’m not really clear on what non-technical people do).

And speaking of what people actually do, here are three examples of my job (that are more specific than “coding”):

Fixing Other People’s Bugs

Recently, a developer was using MongoDB and IBM’s DB2 with PHP. After he installed the MongoDB driver, PHP started segfaulting all over the place. I downloaded the ibm_db2 PHP extension to take a look.

PHP keeps a “storage unit” for extensions’ long-term memory use. Every extension shares the space and can store things there.

The DB2 extension was basically fire-bombing the storage unit.

It went through the storage, object by object, casting the objects into DB2 types and then freeing them. This worked fine when DB2 was the only PHP extension being used, but broke down when anyone else tried to use that storage. I gave the user a small patch that stopped the DB2 extension from destroying objects it didn’t create, and everything worked fine for them after that.

The Game is Afoot

A user reported that they couldn’t initialize their replica set: a member wasn’t coming online. The trick with this type of bug is to get enough evidence before the user wants to beat you over the head with the 800th log you’ve requested.

I asked them to send the first round of logs. It was weird: nothing was wrong from server1’s point of view. It initialized properly and could connect to everyone in the set. I puzzled over the messages, figuring out that once server1 had created the set, server2 had accepted the connection from server1 but then somehow failed to connect back to server1, and so couldn’t pick up the set config. However, according to server1, it could connect fine to server2 and thought it was perfectly healthy!

I finally realized what must be happening: “It looks like server2 couldn’t connect to any of the others, but all of them could connect to it. Could you check your firewall?”

“Oh, that server was blocking all outgoing connections! Now it’s working fine.”

Elementary, my dear Watson.

You know you’re not at a big company when…

At least it had "handles."

Someone on Sparc complained that the Perl driver wasn’t working at all for them. My first thought was that Sparc is big-endian, so maybe the Perl driver wasn’t flipping memory correctly. I asked Eliot where our Power PC was, and he said we must have forgotten it when we moved: it was still in our old office around the corner.

“Bring someone to help carry it,” he told me. “It’s heavy.”

Pshaw, I thought. How heavy could an old desktop be?

I went around the corner and the other company graciously let me walk into their server room, choose a server, and walk out with it. Unfortunately, it weighed about 50 pounds, and I have a traditional geek physique (no muscles). The trip back to our office involved me staggering a couple steps, putting it down, shaking out my arms, and repeat.

When I got to our office, I just dragged it down the hallway to our server closet. Eliot saw me tugboating the thing down the hallway.

“You didn’t bring someone to help?”

“It’s *oof* fine!”

Unfortunately, once it was all set up, the Perl driver worked perfectly on it. So it wasn’t big-endian specific.

I was now pretty sure it was Sparc-specific (another person had reported the same problem on a Sparc), so I bought an elderly Sparc server for a couple hundred bucks off eBay. When it arrived a couple days later, Eliot showed me how to rack it and I spent a day fighting with the Solaris/Oracle package manager. However, it was all worth it: I tried running the Perl driver and it instantly failed (success!).

After some debugging, I realized that Sparc was much more persnickety than Intel about byte alignment. The Perl driver was playing fast and loose with a byte buffer, casting pieces of it into other types (which Sparc didn’t like). I changed some casts to memcpys and the Perl driver started working beautifully.

But every day is different

The episodes above are a very small sample of what I do: there are hundreds of other things I’ve worked on over the last few years from speaking to working on the database to writing a freakin Facebook app.

So, if this sounded interesting, please go to our jobs website and submit an application!

Getting Started with MMS

Edit: since this was written, Sam has written some excellent documentation on using MMS. I recommend reading through it as you explore MMS.

Telling someone “You should set up monitoring” is kind of like telling someone “You should exercise 20 minutes three times a week.” Yes, you know you should, but your chair is so comfortable and you haven’t keeled over dead yet.

For years*, 10gen has been planning to do monitoring “right,” making it painless to monitor your database. Today, we released the MongoDB Monitoring Service: MMS.

MMS is free hosted monitoring for MongoDB. I’ve been using it to help out paying customers for a while, so I thought I’d do a quick post on useful stuff I’ve discovered (documentation is… uh… a little light, so far).

So, first: you sign up.

There are two options: register a new company or register another account for an existing company. For example, let’s say I wanted to monitor the servers for Snail in a Turtleneck Enterprises. I’ll create a new account and company group. Then Andrew, sys admin of my heart, can create an account with Snail in a Turtleneck Enterprises and have access to all the same monitoring info.

Once you’re registered, you’ll see a page encouraging you to download the MMS agent. Click on the “download the agent” link.

This is a little Python program that collects stats from MongoDB, so you need to have pymongo installed, too. Starting from scratch on Ubuntu, do:

$ # prereqs
$ sudo apt-get install python python-setuptools
$ sudo easy_install pymongo
$
$ # set up agent
$ unzip name-of-agent.zip
$ cd name-of-agent
$ mkdir logs
$
$ # start agent
$ nohup python agent.py > logs/agent.log 2>&1 &

Last step! Back to the website: see that “+” button next to the “Hosts” title?

Designed by programmers, for Vulcans

Click on that and type a hostname. If you have a sharded cluster, add a mongos. If you have a replica set, add any member.

Now go have a nice cup of coffee. This is an important part of the process.

When you get back, tada, you’ll have buttloads of graphs. They probably won’t have much on them, since MMS will have been monitoring them for all of a few minutes.

Cool stuff to poke

This is the top bar of buttons:

Of immediate interest: click “Hosts” to see a list of hosts.

You’ll see hostname, role, and the last time the MMS agent was able to reach this host. Hosts that it hasn’t reached recently will have a red ping time.

Now click on a server’s name to see all of the info about it. Let’s look at a single graph.

You can click & drag to see a smaller bit of time on the graph. See those icons in the top right? Those give you:

  • “+”: add to dashboard. You can create a custom dashboard with any charts you’re interested in; click on the “Dashboard” link next to “Hosts” to see your dashboard.
  • “Link”: a private URL for this chart. You’ll have to be logged in to see it.
  • “Email”: email a jpg of this chart to someone.
  • “i”: maybe the most important one, a description of what this chart represents.

That’s the basics. Some other points of interest:

  • You can set up alerts by clicking on “Alerts” in the top bar
  • “Events” shows you when hosts went down or came up, became primary or secondary, or were upgraded.
  • Arbiters don’t have their own chart, since they don’t have data. However, there is an “Arbiters” tab that lists them if you have some.
  • The “Last Ping” tab contains all of the info sent by MMS on the last ping, which I find interesting.
  • If you are confused, there is an “FAQ” link in the top bar that answers some common questions.

If you have any problems with MMS, there’s a little form at the bottom to let you complain:

This will file a bug report for you. This is a “private” bug tracker: only 10gen and people in your group will be able to see the bugs you file.

* If you ran mongod --help using MongoDB version 1.0.0 or higher, you might have noticed some options that started with --mms. In other words, we’ve been planning this for a little while.

PS1++

Since MongoDB was first created, the Mongo shell prompt has just been:

>

A couple of months ago, my prompt suddenly changed to:

myReplSetName:SECONDARY> 

It’s nice to have more information in the prompt, but 1) I don’t care about the replica set name and 2) a programmer’s prompt is very personal. Having it change out from under you is like coming home and finding that someone replaced all of your underwear. It’s just disconcerting.

Anyway, I recently got an intern (well, I’m mentoring him, it’s not like I bought him), Matt Dannenberg, who’s interested in working on shell stuff. He committed some code last week that lets you customize the shell’s prompt (it will be in 1.9.1+).

Basically, you define a prompt() function, and then it’ll be executed every time the prompt is displayed. Immediately, I did:

myReplSetName:SECONDARY> prompt = "> "
> 
> // ah, bliss
>
> // some sysadmins think > is a weird prompt, as it's also 
> // used for redirections, so they might prefer $
> prompt = "$ "
$ 
$ // there we go

Okay, that’s much better. But there is some information I’d like to add to my prompt.

I often forget which database db is referring to, and then I have to type db to check (which is especially annoying when I’m in the middle of a multi-line script). So, let’s just add the current database name to the prompt.

> prompt = function() { return db+"> "; }
test> use foo
foo> use bar
bar>

The prompt no longer shows whether I’m connected to a PRIMARY or a SECONDARY (or something else), which is useful information to have. I hate that long string, though, so let’s neaten it up. I want it to be:

  • foo> if I’m connected to the primary
  • (foo)> if I’m connected to a secondary
  • STATE:foo> if I’m connected to a server with state STATE

This might look something like:

> states = ["STARTUP", "PRIMARY", "SECONDARY", "RECOVERING", "FATAL", 
... "STARTUP2", "UNKNOWN", "ARBITER", "DOWN", "ROLLBACK"]
>
> prompt = function() { 
... result = db.isMaster();
... if (result.ismaster) {
...     return db+"> "; 
... }
... else if (result.secondary) {
...    return "("+db+")> ";
... }
... result = db.adminCommand({replSetGetStatus : 1})
... return states[result.myState]+":"+db+"> ";
... }
(test)>

Also, the default prompt displays if you’re connected to a mongos (with mongos>), which is good for keeping track when you’re running a cluster:

> prompt = function() {
... result = db.adminCommand({isdbgrid : 1});
... if (result.ok == 1) {
...     return "mongos> ";
... }
... return "> ";
... }

Another nice thing would be to have the time shown each time the prompt is displayed: then you can kick off a long-running job, go to lunch, and know what time it finished when you get back.

> prompt = function() { 
... var now = new Date(); 
... return now.getHours()+":"+now.getMinutes()+":"+now.getSeconds()+"> ";
... }
10:30:45> db.foo.count()
60000
10:30:46>

Defining prompt() as shown above is nice for playing around, but it’s a pain to define your prompt every time you start up the shell. So, you can put it in a file and then either load that file on startup (as a command line argument) or from the shell itself:

$ # load from command line arg:
$ mongo shellConfig.js
MongoDB shell version 1.9.1-
connecting to: test
> 
> // load from the shell itself
> load("/path/to/my/shellConfig.js")

Or, you can use another feature my intern has implemented: mongo will automatically look for (and, if it finds, load) a .mongorc.js file from your home directory on startup.

// my startup file

prompt = /* ... */

// getting "not master and slaveok=false" errors drives me nuts,
// so I'm overriding the getDB() code to ALWAYS set slaveok=true
Mongo.prototype.getDB = function(name) {
    this.setSlaveOk();
    return new DB(this, name);
}

/* and so on... */

Actually, 10gen employees would never have an intern make coffee, as they might mess it up: we have at least five different brewers, two grinders, two pages of close-typed instructions on how to grind/brew, and an RFC on coffee-making protocol.

Keep in mind that this is a “proper” JS file, so you can’t use magic Mongo shell helpers like use <dbname> (instead, use db.getSisterDB("<dbname>")). If you don’t want .mongorc.js loaded on startup, start the shell with --norc.

Hopefully these things will make life a little easier for people.

Both of these changes are in master and will be in 1.9.1+. They will not be backported to the 1.8 branch. You can use the 1.9 shell with the 1.8 database server, though, if you want to use these features with a production database.

Mongo in Flatland

MongoDB’s geospatial indexing lets you use a collection as a map. It works differently than “normal” indexing, but there’s actually a nice, visual way to see what geospatial indexing does.

Let’s say we have a 16×16 map, with coordinates running from [0,0] to [16,16].

Since all of the coordinates in our map are between [0,0] and [16,16], I’m going to make the min value 0 and the max value 16.

db.map.ensureIndex({point : "2d"}, {min : 0, max : 16, bits : 4})

This essentially turns our collection into a map. (Don’t worry about bits for now; I’ll explain that below.)
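
Once there are documents in it, you can run ordinary proximity queries against it. A hypothetical example (the walk-through below assumes the collection is still empty):

> db.map.insert({point : [4, 6], name : "something"})
> db.map.find({point : {$near : [5, 5]}}).limit(3)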

Let’s say we have something at the point [4,6]. MongoDB generates a geohash of this point, which describes the point in a way that makes it easier to find things near it (and still be able to distribute the map across multiple servers). The geohash for this point is a string of bits describing the position of [4,6]. We can find the geohash of this point by dividing our map up into quadrants and determining which quadrant it is in. So, first we divide the map into 4 parts:

This is the trickiest part: each quadrant can be described by two bits, as shown in the table below:

01 11
00 10

[4,6] is in the lower-left quadrant, which matches 00 in the table above. Thus, its geohash starts with 00.

Geohash so far: 00

Now we divide that quadrant again:

[4,6] is now in the upper-right quadrant, so the next two bits in the geohash are 11. Note that the bottom and left edges are included in the quadrant, the top and right edges are excluded.

Geohash so far: 0011

Now we divide that quadrant again:

[4,6] is now in the upper-left quadrant, so the next two bits in the geohash are 01.

Geohash so far: 001101

Now we divide that quadrant again:

[4,6] is now in the lower-left quadrant, so the next two bits in the geohash are 00.

Geohash so far: 00110100

You may wonder: how far do we keep dividing? That’s exactly what the bits setting is for. We set it to 4 when we created the index, so we divide into quadrants 4 times. If we wanted higher precision, we could set bits to something higher.

You can check your math above by using the geoNear command, which returns the geohash for the point you’re searching near:

> db.runCommand({geoNear : "map", near : [4,6]})
{
	"ns" : "test.map",
	"near" : "00110100",
	"results" : [ ],
	"stats" : {
		"time" : 0,
		"btreelocs" : 0,
		"nscanned" : 0,
		"objectsLoaded" : 0,
		"avgDistance" : NaN,
		"maxDistance" : -1
	},
	"ok" : 1
}

As you can see, the “near” field contains exactly the geohash we’d expect from our calculations.

The interesting thing about geohashing is that this makes it easy to figure out what’s near us, because things are sorted according to their position on the map: every document whose point geohash starts with 00 is in the lower-left quadrant, and every point whose geohash starts with 00111111 is in the lower-left quadrant but very near the middle of the map. Thus, you can eyeball where a point is by looking at its geohash.
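
The quadrant-division above is mechanical enough to write down. Here is a minimal sketch of it in JavaScript (not MongoDB’s internal code, just the procedure described in this post, with one x bit and one y bit per division):

function geohash(x, y, min, max, bits) {
    var hash = "";
    var minX = min, maxX = max, minY = min, maxY = max;
    for (var i = 0; i < bits; i++) {
        var midX = (minX + maxX) / 2, midY = (minY + maxY) / 2;
        // x bit: 0 for the left half, 1 for the right half
        // (a quadrant includes its own left edge, so x == midX goes right)
        if (x < midX) { hash += "0"; maxX = midX; } else { hash += "1"; minX = midX; }
        // y bit: 0 for the bottom half, 1 for the top half
        if (y < midY) { hash += "0"; maxY = midY; } else { hash += "1"; minY = midY; }
    }
    return hash;
}

> geohash(4, 6, 0, 16, 4)
00110100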

Bits and Precision

Let’s say a wizard casts a ring of fire around him with a radius of 2. Is the point [4,6] caught in that ring of fire?

It’s pretty obvious from the picture that it isn’t, but if we look at the geohash, we actually can’t tell: [4,6] hashes to 00110100, but so does [4.9, 6.9], and any other value in the square between [4,6] and [5,7]. So, in order to figure out whether the point is within the circle, MongoDB must go to the document and look at the actual value in the point field. Thus, setting bits to 4 is a bit low for the data we’re using/queries we’re doing.

Generally you shouldn’t bother setting bits, I’ve only set it above for purposes of demonstration. bits defaults to 26, which gives you approximately 1 foot resolution using latitude and longitude. The higher the number of bits, the slower geohashing gets (conversely, lower bit values mean faster geohashing, but more accessing documents on lookup). If you’re doing particularly high or low resolution queries, you might want to play around with different values of bits (in dev, on a representative data set) and see if you get better performance.

Thanks to Greg Studer, who gave a geospatial tech talk last Friday and inspired this post. (Every Friday, a 10gen engineer does a tech talk on something they’re working on, which is a really nice way to keep up with all of the cool stuff coworkers are doing. If you’re ever running an engineering department, I highly recommend them!)