With a name like Mongo, it has to be good

I just got back from MongoSF, which was awesome. Over 200 Mongo geeks, three tracks, and language-specific workshops all day.

I got to the conference early and took a picture of the venue... an hour later it was packed.

The highlight, for me, was Eliot Horowitz’s talk on sharding. He set up a MongoDB cluster of 25 large EC2 instances and started hammering them.

He pulled up an incredibly snazzy sharding GUI (okay, I wrote it, it’s not actually that snazzy) that displayed a row of stats for each shard. Each shard had about the same level of operations per second, so you could see that Mongo was doing pretty well distributing the data across the shards.

Eliot scrolled down to the bottom of the stats table where it showed the total number of operations per second across all the shards. The cluster was doing 8 million operations per second.

8,000,000 operations per second.

The whole audience burst into applause.

That’s about 320,000 operations per second per box, which is about what would be expected for Mongo on a powerful server (like a large EC2 instance).

Cool things about this:

  • If your app needs more than 8 million ops/sec, you can just add more machines and Mongo will take care of redistributing the load.
  • Your app doesn’t need to know if it’s talking to 1, 25, or 7,000 servers. You can focus on writing your app and let Mongo focus on scaling it.
(Photo: the speakers’ dinner the night before.)

If you missed out on MongoSF, there are a bunch of other Mongo conferences coming up: MongoNY at the end of May and MongoUK and MongoFR in June.

There must be 50 ways to start your Mongo

This blog post covers four major ones:

  • a single server
  • master/slave
  • replica pairs (with automatic failover)
  • sharding

Feel free to jump to the ones that interest you (for instance, sharding).

Just start up a database, Ace

Starting up a vanilla MongoDB instance is super easy, it just needs a port it can listen on and a directory where it can save your info. By default, Mongo listens on port 27017, which should work fine (it’s not a very commonly used port). We’ll create a new directory for database files:

$ mkdir -p ~/dbs/mydb # -p creates parent directories if they don't exist

And then start up our database:

$ cd 
$ bin/mongod --dbpath ~/dbs/mydb

…and you’ll see a bunch of output:

$ bin/mongod --dbpath ~/dbs/mydb
Fri Apr 23 11:59:07 Mongo DB : starting : pid = 9831 port = 27017 dbpath = /data/db/ master = 0 slave = 0  32-bit 

** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
**       see http://blog.mongodb.org/post/137788967/32-bit-limitations for more

Fri Apr 23 11:59:07 db version v1.5.1-pre-, pdfile version 4.5
Fri Apr 23 11:59:07 git version: f86d93fd949777d5fbe00bf9784ec0947d6e75b9
Fri Apr 23 11:59:07 sys info: Linux ubuntu 2.6.31-15-generic #50-Ubuntu SMP Tue Nov 10 14:54:29 UTC 2009 i686 BOOST_LIB_VERSION=1_38
Fri Apr 23 11:59:07 waiting for connections on port 27017
Fri Apr 23 11:59:07 web admin interface listening on port 28017

Now, Mongo will “freeze” like this, which confuses some people. Don’t worry, it’s just waiting for requests. You’re all set to go.
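
If you want to double-check that everything’s working, you can open the mongo shell in another terminal and poke at the database (a quick sketch; the “foo” collection is just an example):

$ bin/mongo
> db.foo.insert({hello : "world"})
> db.foo.findOne()
// prints back the document you just inserted (plus an auto-generated _id)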

Set up a slave, Maeve

As we’re running master and slave on the same machine, they’ll need separate ports. We’ll use port 10000 for the master and 20000 for the slave. We also need separate directories for data, so we’ll create those:

$ mkdir ~/dbs/master ~/dbs/slave

Now we start the master database:

$ bin/mongod --master --port 10000 --dbpath ~/dbs/master

And then the slave, in a different terminal:

$ bin/mongod --slave --port 20000 --dbpath ~/dbs/slave --source localhost:10000

The “source” option specifies where the master is that the slave should replicate data from.

Now, if we want to add another slave, we need to go through the herculean effort of choosing a port and creating a new directory:

$ mkdir ~/dbs/slave2
$ bin/mongod --slave --port 20001 --dbpath ~/dbs/slave2 --source localhost:10000

Tada! Two slaves, one master. For more information on master-slave, see the core docs on it and my previous post.

This example puts the master server and slave server on the same machine, but people generally have the master on one machine and the slave on another. It works fine to put them on a single machine; it just defeats the point a bit.
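
If you want to see replication in action, you can insert something on the master and then look for it on the slave a moment later. Here’s a rough sketch using the ports above (the “foo” collection is just an example):

$ bin/mongo localhost:10000/test
> db.foo.insert({x : 1})
> exit
$ bin/mongo localhost:20000/test
> db.getMongo().setSlaveOk()   // may not be needed: as of 1.4.0, plain --slave servers are readable without it
> db.foo.find()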

Get auto-failover… Rover

Okay, so there aren’t many people named Rover, but you try coming up with a rhyme for “auto-failover” (I tried “replica”, too).

Replica pairs are cool because it’s like master-slave, but you get automatic failover: if the master becomes unavailable, the slave will become a master. So, it’s basically the same as master-slave, but the servers know about each other and there is, optionally, an arbiter server that doesn’t do anything other than resolve “disputes” over who is master.

When could the arbiter come in handy? Suppose the master’s network cable is pulled. The server still thinks it’s master, but no one else knows it’s there. The slave becomes master and the rest of the world goes along happily. When the master’s network cable gets plugged back in, both servers think they’re master! In this case, the arbiter steps in and gently informs the master who’s behind the times that he is now a slave.

You don’t have to set up an arbiter, but we will since it’s good practice:

$ mkdir ~/dbs/arbiter ~/dbs/replica1 ~/dbs/replica2
$ bin/mongod --port 50000 --dbpath ~/dbs/arbiter

Now, in separate terminals, you start each of the replicas:

$ bin/mongod --port 60000 --dbpath ~/dbs/replica1 --pairwith localhost:60001 --arbiter localhost:50000

And then the other one:

$ bin/mongod --port 60001 --dbpath ~/dbs/replica2 --pairwith localhost:60000 --arbiter localhost:50000

After they’ve been running for a bit, try killing (Ctrl-C) one, then restarting it, then killing the other one, back and forth.
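
If you’re curious which server currently thinks it’s in charge, you can ask it directly. A quick sketch (the exact fields in the response vary a bit by version):

$ bin/mongo localhost:60000/admin
> db.runCommand({ismaster : 1})
// the response includes an "ismaster" field: 1 if this server is currently master, 0 if not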

For more information on replica pairs, see the core docs.

What’s this? Replica pairs are evolving! *voop* *voop* *voop*

Replica pairs have evolved into… replica sets! Well, okay, they haven’t yet, but they’re coming soon. Then you’ll be able to have an arbitrary number of servers in the auto-failover ring.

Make a new cluster, Buster

For the grand finale, sharding. Sharding is how you distribute data with Mongo. If you don’t know what sharding is, check out my previous post explaining how it works.

First of all, download the latest 1.5.x nightly build from the website. Sharding is changing rapidly, you want the latest and greatest here, not stable.

We’re going to be creating a three-node cluster. So, same as ever, create your database directories. We want one directory for the cluster configuration and three directories for our shards (nodes):

$ mkdir ~/dbs/config ~/dbs/shard1 ~/dbs/shard2 ~/dbs/shard3

The config server keeps track of what’s where, so we need to start that up first:

$ bin/mongod --configsvr --port 70000 --dbpath ~/dbs/config

The mongos is just a request router that runs on top of the config server. It doesn’t even need a data directory, we just tell it where to look for the configuration:

$ bin/mongos --configdb localhost:70000

Note the “s”: the router is called “mongos”, not “mongod”. We haven’t specified a port for it, so it’ll listen on the default port (27017).

Okay! Now, we need to set up our shards. Start these each up in separate terminals:

$ bin/mongod --shardsvr --port 71000 --dbpath ~/dbs/shard1
$ bin/mongod --shardsvr --port 71001 --dbpath ~/dbs/shard2
$ bin/mongod --shardsvr --port 71002 --dbpath ~/dbs/shard3

mongos doesn’t actually know about the shards yet, you need to tell it to add these servers to the cluster. The easiest way is to fire up a mongo shell:

$ bin/mongo
MongoDB shell version: 1.5.1-pre-
url: test
connecting to: test
type "help" for help
>

Now, we add each shard to the cluster:

> db = connect("localhost:70000/admin");
connecting to: localhost:70000
admin
> db.runCommand({addshard : "localhost:71000", allowLocal : true})
{
    "added" : "localhost:71000",
    "ok" : 1
}
> db.runCommand({addshard : "localhost:71001", allowLocal : true})
{
    "added" : "localhost:71001",
    "ok" : 1
}
> db.runCommand({addshard : "localhost:71002", allowLocal : true})
{
    "added" : "localhost:71002",
    "ok" : 1
}
>

mongos expects shards to be on remote machines and by default won’t allow you to add local shards (i.e., shards with “localhost” in the name). Since we’re just playing around, we specify “allowLocal” to override this behavior. (Note that “addshard” IS NOT camel-case, and allowLocal IS camel-case, because we’re consistent like that.)

Congratulations, you’re running a distributed database!

What do you do now? Well, use it just like a normal database! Connect to “localhost:27017” and proceed normally (or, as normally as possible… please report any bugs to our bugtracker!). Try the tutorial (since you’ve already got the shell open) or connect through your favorite driver and play around.
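
If you want to actually see data spread across the shards, you also have to tell Mongo which database and collection to shard and which key to split on. Here’s a rough sketch, run against the admin database through mongos (the “foo.bar” collection and the choice of shard key are just examples):

$ bin/mongo localhost:27017/admin
> db.runCommand({listshards : 1})
// double-check that all three shards made it in
> db.runCommand({enablesharding : "foo"})
> db.runCommand({shardcollection : "foo.bar", key : {_id : 1}})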

Connecting to mongos should be an identical experience to connecting to a normal Mongo server. Behind the scenes, it splits up your requests/data across the shards so you can concentrate on making your application, not scaling it.


P.S. Obviously, this example setup is full of single points of failure, but that’s completely avoidable. I can go over how to set up distributed MongoDB with zero single points of failure in a follow-up post, if people are interested.

Once and Future Presentations

On Monday, I gave a presentation on MongoDB to the San Francisco MySQL user group.  It was a lot of fun, you can watch the recording on ustream:

http://www.ustream.tv/flash/live/1/3708550

Apparently the audio was buzzy (I haven’t actually listened to it myself yet).

The audience especially enjoyed this slide about MySQL’s current situation:

One of the guys told me that he was scrambling to take a picture of it but I went to the next slide too fast, so here it is in all its glory.

Thanks to everyone at the MySQL meetup for being so awesome, I had a great time!

Future Talks

April 30th: I’ll be in California again, giving a talk called “Map/reduce, geospatial indexing, and other cool features” at MongoSF

May 18-21: I’ll be in Chicago at Tek·X. I’ll be doing a regular session, “MongoDB for Mobile Applications”, and a tutorial on switching apps from MySQL to MongoDB (assuming no knowledge of MongoDB).

Sharding with the Fishes

Sharding is the not-so-revolutionary way that MongoDB scales writes (it’s very similar to techniques described in the Bigtable paper and by PNUTS), but many people are unfamiliar with what it is and how it works.  If you’ve seen a talk on MongoDB or looked at the website, you’ve probably seen a diagram of sharding that looks like this:

…which probably looks a bit like “I hope I don’t have to understand that.”

However, it’s actually quite simple: it’s exactly how the Mafia is structured (or, at least, how The Godfather taught me it was):

  • The shards are the peons: someone tells them to do something (e.g., a query or an insert), they do it and report back.
  • The mongos is the godfather. It knows what data each peon has and gives them orders.  It’s basically a router for the requests.
  • The config server is the consigliere. It knows where all of the data is at any given time and lets the boss know so that he can focus on bossing. The consigliere keeps the organization running smoothly.

For a concrete example, let’s say we have a collection of blog posts.  You choose a “shard key,” which is the value Mongo will use to split the data across shards.  We’ll choose “date” as the shard key, which means it will be split up based on values in the “date” field.  If we have four shards, they might contain data something like:

  • shard #1: beginning of time up to June 2009
  • shard #2: July 2009 to November 2009
  • shard #3: December 2009 to February 2010
  • shard #4: March 2010 through the end of time

Now that we’ve got our peons set up, let’s ask the godfather for some favors.

Queries

Say you query for all documents created from the beginning of this year (January 1st, 2010) up to the present.  Here’s what happens:

  1. You (the client) send the query to the godfather.
  2. The godfather knows which shards contain the data you’re looking for, so he sends the query to shards #3 and #4.
  3. shard #3 and shard #4 execute the query and return the results to the godfather.
  4. The godfather puts together the results he’s received and sends them back to the client.

Note how all of the sharding stuff is done a layer away from the client, so your application doesn’t have to be sharding-aware, it can just query the godfather as though it were a normal mongod instance.
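
In mongo shell terms, step 1 might look something like the query below (assuming the blog posts live in a “posts” collection with a “date” field, as in the example above); mongos takes care of steps 2 through 4 for you.

> db.posts.find({date : {$gte : new Date(2010, 0, 1)}})
// JavaScript months are 0-indexed, so this is January 1st, 2010 onward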

Inserts

Suppose you want to insert a new document with today’s date.  Here’s the sequence of events:

  1. You send the document to the godfather.
  2. It sees today’s date and routes it to shard #4.
  3. shard #4 inserts the document.

Again, identical to a single-server setup from the client’s point of view.
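
Here’s a sketch of what that insert might look like from the shell (the “title” field is just for illustration):

> db.posts.insert({title : "A new post", date : new Date()})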

So where’s the consigliere?

Suppose you start getting millions of documents inserted with the date September 2009.  Shard #2 begins to swell up like a bloated corpse.  The consigliere will notice this and, when shard #2 gets too big it will split the data on shard #2 and migrate some of it to other, emptier shards.  Then it will let the godfather know that now shard #2 contains July 2009-September 15th 2009 and shard #3 contains September 16th 2009-February 2010.

The consigliere is also responsible for figuring out what to do when you add a new shard to the cluster.  It figures out if it should keep the new shard in reserve or migrate some data to it right away.  Basically, it’s the brains of the operation.

Whenever the consigliere moves around data, it lets the godfather know what the final configuration is so that the godfather can continue routing requests correctly.

Leave the gun.  Take the cannolis.

This scaling deliciousness is, unfortunately, still very alpha.  You can help us out by telling us where our documentation sucks (specifics are better than “it sucks”), testing it out on your machine, and voting for features you’d like to see.

Sleepy.Mongoose: A MongoDB HTTP Interface

The first half of the MongoDB book is due this week, so I wrote a REST interface for Mongo (I’m a prolific procrastinator).  Anyway, it’s called Sleepy.Mongoose and it’s available at https://github.com/10gen-labs/sleepy.mongoose.

Installing Sleepy.Mongoose

  1. Install MongoDB.
  2. Install the Python driver:
    $ easy_install pymongo
  3. Download Sleepy.Mongoose.
  4. From the mongoose directory, run:
    $ python httpd.py

You’ll see something that looks like:

=================================
|      MongoDB REST Server      |
=================================

listening for connections on http://localhost:27080

Using Sleepy.Mongoose

First, we’re just going to ping Sleepy.Mongoose to make sure it’s awake. You can use curl:

$ curl 'http://localhost:27080/_hello'

and it’ll send back a Star Wars quote.

To really use the interface, we need to connect to a database server. To do this, we post our database server address to the URI “/_connect” (all actions start with an underscore):

$ curl --data server=localhost:27017 'http://localhost:27080/_connect'

This connects to the database running at localhost:27017.

Now let’s insert something into a collection.

$ curl --data 'docs=[{"x":1}]' 'http://localhost:27080/foo/bar/_insert'

This will insert the document {“x” : 1} into the foo database’s bar collection. If we open up the JavaScript shell (mongo), we can see the document we just added:

> use foo
> db.bar.find()
{ "_id" : ObjectId("4b7edc9a1d41c8137e000000"), "x" : 1 }

But why bother opening the shell when we can query with curl?

$ curl -X GET 'http://localhost:27080/foo/bar/_find'
{"ok": 1, "results": [{"x": 1, "_id": {"$oid": "4b7edc9a1d41c8137e000000"}}], "id": 0}

Note that queries are GET requests, whereas the other requests up to this point have been POSTs (well, _hello can be either).

A query returns three fields:

  • “ok”, which will be 1 if the query succeeded, 0 otherwise
  • “results” which is an array of documents from the db
  • “id” which is an identifier for that particular query

In this case “id” is irrelevant as we only have one document in the collection but if we had a bunch, we could use the id to get more results (_find only returns the first 15 matching documents by default, although it’s configurable). This will probably be clearer with an example, so let’s add some more documents to see how this works:

$ curl --data 'docs=[{"x":2},{"x":3}]' 'http://localhost:27080/foo/bar/_insert'
{"ok" : 1}

Now we have three documents. Let’s do a query and ask for it to return one result at a time:

$ curl -X GET 'http://localhost:27080/foo/bar/_find?batch_size=1'
{"ok": 1, "results": [{"x": 1, "_id": {"$oid": "4b7edc9a1d41c8137e000000"}}], "id": 1}

The only difference between this query and the one above is the “?batch_size=1” which means “send one document back.” Notice that the cursor id is 1 now, too (not 0). To get the next result, we can do:

$ curl -X GET 'http://localhost:27080/foo/bar/_more?id=1&batch_size=1'
{"ok": 1, "results": [{"x": 2, "_id": {"$oid": "4b7ee0731d41c8137e000001"}}], "id": 1}
$ curl -X GET 'http://localhost:27080/foo/bar/_more?id=1&batch_size=1'
{"ok": 1, "results": [{"x": 3, "_id": {"$oid": "4b7ee0731d41c8137e000002"}}], "id": 1}

Now let’s remove a document:

$ curl --data 'criteria={"x":2}' 'http://localhost:27080/foo/bar/_remove'
{"ok" : 1}

Now if we do a _find, it only returns two documents:

$ curl -X GET 'http://localhost:27080/foo/bar/_find'
{"ok": 1, "results": [{"x": 1, "_id": {"$oid": "4b7edc9a1d41c8137e000000"}}, {"x": 3, "_id": {"$oid": "4b7ee0731d41c8137e000002"}}], "id": 2}

And finally, updates:

$ curl --data 'criteria={"x":1}&newobj={"$inc":{"x":1}}' 'http://localhost:27080/foo/bar/_update'

Let’s do a _find to see the updated object, this time using criteria: {“x”:2}. To put this in a URL, we need to escape the ‘{‘ and ‘}’ characters. You can do this by copy-pasting it into any JavaScript interpreter (Rhino, SpiderMonkey, mongo, Firebug, Chrome’s dev tools) as follows:

> escape('{"x":2}')
%7B%22x%22%3A2%7D
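
encodeURIComponent gives the same result for this particular string and is generally the safer choice for URL parameters:

> encodeURIComponent('{"x":2}')
%7B%22x%22%3A2%7D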

And now we can use that in our URL:

$ curl -X GET 'http://localhost:27080/foo/bar/_find?criteria=%7B%22x%22%3A2%7D'
{"ok": 1, "results": [{"x": 2, "_id": {"$oid": "4b7edc9a1d41c8137e000000"}}], "id": 0}

If you’re looking to go beyond the basic CRUD, there’s more documentation in the wiki.

This code is super-alpha. Comments, questions, suggestions, patches, and forks are all welcome.

Note: Sleepy.Mongoose is an offshoot of something I’m actually supposed to be working on: a JavaScript API we’re going to use to make an awesome sharding tool.  Administrating your cluster will be a point-and-click interface.  You’ll be able to see how everything is doing, drag n’ drop chunks, visually split collections… it’s going to be so cool.

“Introduction to MongoDB” Video

This is the video of the talk I gave last Sunday at the NoSQL Devroom at FOSDEM. It’s about why MongoDB was created, what it’s good at (and a bit about what it’s not good for), the basic syntax for it and how sharding and replication work (it covers a lot of ground).

You can also go to Parleys.com to see the video with my slides next to it (they’re a little tough to see below).

http://www.parleys.com/share/parleysshare2.swf?pageId=1864

Mongo Mailbag #2: Updating GridFS Files

Welcome to week two of Mongo Mailbag, where I take a question from the Mongo mailing list and answer it in more detail. If you have a question you’d like to see answered in excruciating detail, feel free to email it to me.

Is it possible (with the PHP driver) to storeBytes into GridFS (for example CSS data), and later change that data?!

I get some strange behavior when passing an existing _id value in the $extra array of MongoGridFS::storeBytes, sometimes Apache (under Windows) crashes when reloading the file, sometimes it doesn’t seem to be updated at all.

So I wonder, is it even possible to update files in GridFS?! 🙂

-Wouter

If you already understand GridFS, feel free to skip to the last section. For everyone else…

Intro to GridFS

GridFS is the standard way MongoDB drivers handle files: a protocol that allows you to save an arbitrarily large file to the database. It’s not the only way, it’s not (necessarily) the best way, it’s just the built-in way that all of the drivers support. This means that you can use GridFS to save a file in Ruby and then retrieve it using Perl, and vice versa.

Why would you want to store files in the database? Well, it can be handy for a number of reasons:

  • If you set up replication, you’ll have automatic backups of your files.
  • You can keep millions of files in one (logical) directory… something most filesystems either won’t allow or aren’t good at.
  • You can keep information associated with the file (who’s edited it, download count, description, etc.) right with the file itself.
  • You can easily access info from random sections of large files, another thing traditional file tools aren’t good at.

There are some limitations, too:

  • You can’t have an arbitrary number of files per document… it’s one file, one document.
  • You must use a specific naming scheme for the collections involved: prefix.files and prefix.chunks (by default prefix is “fs”: fs.files and fs.chunks).

If you have complex requirements for your files (e.g., YouTube), you’d probably want to come up with your own protocol for file storage. However, for most applications, GridFS is a good solution.

How it Works

GridFS breaks large files into manageable chunks. It saves the chunks to one collection (fs.chunks) and then metadata about the file to another collection (fs.files). When you query for the file, GridFS queries the chunks collection and returns the file one piece at a time.
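
To get a feel for what GridFS actually stores, here’s roughly what the two kinds of documents look like from the shell (a sketch; exact fields and defaults vary a bit by driver and version):

> db.fs.files.findOne()
{
    "_id" : ObjectId("..."),
    "filename" : "installer.bin",
    "length" : ...,
    "chunkSize" : 262144,    // 256k, the default at the time
    "uploadDate" : ...,
    "md5" : "..."
}
> db.fs.chunks.findOne({}, {data : 0})    // leave out the binary data
{
    "_id" : ObjectId("..."),
    "files_id" : ObjectId("..."),    // the _id of the corresponding fs.files document
    "n" : 0                          // chunk number, starting from 0
}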

Here are some common questions about GridFS:

Q: Why not just save the whole file in a single document?
A: MongoDB has a 4MB cap on document size.
Q: That’s inconvenient, why?
A: It’s an arbitrary limit, mostly to prevent bad schema design.
Q: But in this case it would be so handy!
A: Not really. Imagine you’re storing a 20GB file. Do you really want to return the whole thing at once? That means 20GB of memory will be used whenever you query for that document. Do you even have that much memory? Do you want it taken up by a single request?
Q: Well, no.
A: The nice thing about GridFS is that it streams the data back to the client, so you never need more than 4MB of memory.
Q: Now I know.
A: And knowing is half the battle.
Together: G.I. Joe!

Answer the Damn Question

Back to Wouter’s question: changing the metadata is easy: if we wanted to add, say, a “permissions” field, we could run the following PHP code:

$files = $db->fs->files;
$files->update(array("filename" => "installer.bin"), array('$set' => array("permissions" => "555")));

// or, equivalently, from the MongoGridFS object:

$grid->update(array("filename" => "installer.bin"), array('$set' => array("permissions" => "555")));

Updating the file itself, what Wouter is actually asking about, is significantly more complex. If we want to update the binary data, we’ll need to reach into the chunks collection and update every document associated with the file. Edit: Unless you’re using the C# driver! See Sam Corder’s comment below. It would look something like:

// open the new version of the file (the path here is just a placeholder)
$file = fopen("/path/to/new/version/of/file", "rb");

// get the target file's chunks, in order ($fileId is the _id of its fs.files document)
$chunks = $db->fs->chunks;
$cursor = $chunks->find(array("files_id" => $fileId))->sort(array("n" => 1));

$newLength = 0;

foreach ($cursor as $chunk) {
    // read in a string of bytes from the new version of the file
    $bindata = fread($file, MongoGridFS::$chunkSize);
    $newLength += strlen($bindata);

    // put the new version's contents in this chunk
    $chunk->data = new MongoBinData($bindata);

    // update the chunks collection with this new chunk
    $chunks->save($chunk);
}

// update the file length metadata (necessary for retrieving the file)
$db->fs->files->update(array("_id" => $fileId), array('$set' => array("length" => $newLength)));

The code above doesn’t handle a bunch of cases (what if the new file is a different number of chunks than the old one?), and anything beyond this basic scenario gets irritatingly complex. If you find yourself updating individual chunks, you should probably just remove the GridFS file and save it again. It’ll end up taking about the same amount of time and be less error-prone.
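
For reference, “removing the GridFS file” just means deleting its fs.files document and all of its chunks. Most drivers have a helper that does this for you, but by hand in the shell it amounts to something like this (the filename is just an example); then save the new version through your driver as usual:

> var f = db.fs.files.findOne({filename : "installer.bin"})
> db.fs.chunks.remove({files_id : f._id})
> db.fs.files.remove({_id : f._id})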

FOSDEM

I gave a talk at FOSDEM (Free and Open Source Developers European Meetup) this morning: “Introduction to MongoDB”. It went pretty well, I think. Slides are up at scribd.com and it was recorded, so the video for it should be somewhere soon (I’ll update when I find out where).

The trip across the Atlantic was interesting. It was so bumpy that one of the stewardesses serving drinks fell over and the captain announced “All flight attendants, take your seats with your seat belts fastened!” Takeoff had been delayed to fix something and, in addition to the plane regularly dropping a couple dozen feet in altitude, the engine kept making funny noises, so I was pretty sure we were going to go down. I considered putting my shoes on, but I decided that the last thing I wanted while floating on a raft in the North Atlantic was wet shoes. The plane pulled through, though, and eventually we got to Belgium. So, no excellent story (or fiery death) for me.

One of the cool things about being here was that I got to meet chx, a Drupal developer. He’s helping integrate MongoDB and Drupal 7. We have been wanting to send him some schwag but he’s on an extended visit to relatives who live on a hill that the postman can’t climb. However, I knew he was going to be here, so I carried a mug all the way to Belgium and got to give it to him. Mongo devs: neither snow nor sleet nor gloom of 3000 miles of ocean keep these swift couriers from delivering mugs. Woot.

My talk was at 10am (4am New York time… ugh). Andrew and I went to the conference’s cafeteria beforehand so I could get some coffee. It was… interesting. I have a theory on how Belgians make coffee: they brew a pot of coffee, and then let it sit on a burner until only a cup is left in the pot. Then they serve you that cup. Now, I am grateful, because I managed to drink it (with the help of a chocolate croissant) and it kept me upright for my talk, but I am glad I live in a country where people like their coffee watery.

Andrew and I are at what I think is the Belgian equivalent of a diner, where we’re having some coffee and beer. I feel like a total philistine, but I can’t actually tell the difference between Belgian beer and a decent American beer. Obviously more data points are necessary, I’ll be looking into it further tonight.

Giving talks is fun, but stressful. I feel like my whole body is relaxing now. I’m looking forward to sleeping at least 12 hours tonight.

Mongo Mailbag: Master/Slave Configuration

Trying something new: each week, I’ll take an interesting question from the MongoDB mailing list and answer it in more depth.  Some of the replies on the list are a bit short, given that the developers are trying to, you know, develop (as well as answer over a thousand questions a month).  So, I’m going to grab some interesting ones and flesh things out a bit more.

Hi all,

Assume I have a Mongo master and 2 mongo slaves.  Using PHP, how do I do it so that writes goes to the master while reads are spread across the slaves (+maybe the master)?

1) 1 connect to all 3 nodes in one go, PHP/Mongo handles all the rest
2) 1 connect to the master for writes. Another connection to connect to all slave nodes and read from them.

Thanks all and sorry for the noobiness!

-Mr. Google

Basics first: what is master/slave?

One database server (the “master”) is in charge and can do anything.  A bunch of other database servers keep copies of all the data that’s been written to the master and can optionally be queried (these are the “slaves”).  Slaves cannot be written to directly, they are just copies of the master database.  Setting up a master and slaves allows you to scale reads nicely because you can just keep adding slaves to increase your read capacity.  Slaves also make great backup machines. If your master explodes, you’ll have a copy of your data safe and sound on the slave.

A handy-dandy comparison chart between master database servers and slave database servers:

             Master                               Slave
# of servers 1                                    many
permissions  read/write                           read
used for     queries, inserts, updates, removes   queries

So, how do you set up Mongo in a master/slave configuration?  Assuming you’ve downloaded MongoDB from mongodb.org, you can start a master and slave by cutting and pasting the following lines into your shell:

$ mkdir -p ~/dbs/master ~/dbs/slave
$ ./mongod --master --dbpath ~/dbs/master >> ~/dbs/master.log &
$ ./mongod --slave --port 27018 --dbpath ~/dbs/slave --source localhost:27017 >> ~/dbs/slave.log &

(I’m assuming you’re running *NIX.  The commands for Windows are similar, but I don’t want to encourage that sort of thing).

What are these lines doing?

  1. First, we’re making directories to keep the database in (~/dbs/master and ~/dbs/slave).
  2. Now we start the master, specifying that it should put its files in the ~/dbs/master directory and its log in the ~/dbs/master.log file.  So, now we have a master running on localhost:27017.
  3. Next, we start the slave. It needs to listen on a different port than the master since they’re on the same machine, so we’ll choose 27018. It will store its files in ~/dbs/slave and its logs in ~/dbs/slave.log.  The most important part is letting it know who’s boss: the --source localhost:27017 option lets it know that the master it should be reading from is at localhost:27017.

There are tons of possible master/slave configurations. Some examples:

  • You could have a dozen slave boxes where you want to distribute the reads evenly across them all.
  • You might have one wimpy little slave machine that you don’t want any reads to go to, you just use it for backup.
  • You might have the most powerful server in the world as your master machine and you want it to handle both reads and writes… unless you’re getting more than 1,000 requests per second, in which case you want some of them to spill over to your slaves.

In short, Mongo can’t automatically configure your application to take advantage of your master-slave setup. Sorry.  You’ll have to do this yourself. (Edit: the Python driver actually does handle case 1 for you, see Mike’s comment.)

However, it’s not too complicated, especially for what MG wants to do.  MG is using 3 servers: a master and two slaves, so we need three connections: one to the master and one to each slave.  Assuming he’s got the master at master.example.com and the slaves at slave1.example.com and slave2.example.com, he can create the connections with:

$master = new Mongo("master.example.com:27017");
$slave1 = new Mongo("slave1.example.com:27017");
$slave2 = new Mongo("slave2.example.com:27017");

This next bit is a little nasty and it would be cool if someone made a framework to do it (hint hint).  What we want to do is abstract the master-slave logic into a separate layer, so the application talks to the master-slave logic, which talks to the driver.  I’m lazy, though, so I’ll just extend the MongoCollection class and add some master-slave logic.  Then, if a person creates a MongoMSCollection from their $master connection, they can add their slaves and use the collection as though it were a normal MongoCollection.  Meanwhile, MongoMSCollection will evenly distribute reads amongst the slaves.

class MongoMSCollection extends MongoCollection {
    public $currentSlave = -1;

    // call this once to initialize the slaves
    public function addSlaves($slaves) {
        // extract the namespace for this collection: db name and collection name
        $db = $this->db->__toString();
        $c = $this->getName();

        // create an array of MongoCollections from the slave connections
        $this->slaves = array();
        foreach ($slaves as $slave) {
            $this->slaves[] = $slave->$db->$c;
        }

        $this->numSlaves = count($this->slaves);
    }

    public function find($query = array(), $fields = array()) {
        // round-robin: get the next slave in the array
        $this->currentSlave = ($this->currentSlave+1) % $this->numSlaves;

        // use a slave connection to do the query
        return $this->slaves[$this->currentSlave]->find($query, $fields);
    }
}

To use this class, we instantiate it with the master database and then add an array of slaves to it:

$master = new Mongo("master.example.com:27017");
$slaves = array(new Mongo("slave1.example.com:27017"), new Mongo("slave2.example.com:27017"));

$c = new MongoMSCollection($master->foo, "bar");
$c->addSlaves($slaves);

Now we can use $c like a normal MongoCollection.  MongoMSCollection::find will alternate between the two slaves and all of the other operations (inserts, updates, and removes) will be done on the master.  If MG wants to have the master handle reads, too, he can just add it to the $slaves array (which might be better named the $reader array, now):

$slaves = array($master, new Mongo("slave1.example.com:27017"), new Mongo("slave2.example.com:27017"));

Alternatively, he could change the logic in the MongoMSCollection::find method.

Edit: as of version 1.4.0, slaveOkay is not necessary for reading from slaves. slaveOkay should be used if you are using replica sets, not --master and --slave. Thus, the next section doesn’t really apply anymore to normal master/slave.

The only tricky thing about Mongo’s implementation of master/slave is that, by default, a slave isn’t even readable, it’s just a way of doing backup for the master database.  If you actually want to read off of a slave, you have to set a flag on your query, called “slaveOkay”.  Instead of saying:

$cursor = $slave->foo->bar->find();

we have:

$cursor = $slave->foo->bar->find()->slaveOkay();

Or, because this is a pain in the ass to set for every query (and almost impossible to do for findOnes unless you know the internals), you can set a static variable on MongoCursor that will hold for all of your queries:

MongoCursor::$slaveOkay = true;

And now you will be allowed to query your slave normally, without calling slaveOkay() on each cursor.
