Replica Sets Part 1: Master-Slave is so 2009

Replica sets are really cool and can be customized out the wazoo, so I’ll be doing a couple of posts on them (I have three written so far and I think I have a few more in there). If there’s any replica-set-related topic you’d like to see covered, please let me know and I’ll make sure to get to it.

This post shows how to do the “Hello, world” of replica sets. I was going to start with a post explaining what they are, but coding is more fun than reading. For now, all you have to know is that they’re master-slave with automatic failover.

Make sure you have version 1.5.7 or better of the database before trying out the code below.
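
If you're not sure what you're running, mongod will tell you (the exact version string depends on your build, of course):

$ ./mongod --version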

Step 1: Choose a name for your set.

This is just organizational, so choose whatever. I’ll be using “unicomplex” for my example.

Step 2: Create the data directories.

We need a data directory for each server we’ll be starting:

$ mkdir -p ~/dbs/borg1 ~/dbs/borg2 ~/dbs/arbiter

Step 3: Start the servers.

We’ll start up our three servers:

$ ./mongod --dbpath ~/dbs/borg1 --port 27017 --replSet unicomplex/
$ ./mongod --dbpath ~/dbs/borg2 --port 27018 --replSet unicomplex/
$ ./mongod --dbpath ~/dbs/arbiter --port 27019 --replSet unicomplex/

Step 4: Initialize the set.

Now you have to tell the set, “hey, you exist!” Start up the mongo shell and run:

MongoDB shell version: 1.5.7
connecting to: test
> rs.initiate({"_id" : "unicomplex", "members" : [
... {"_id" : 0, "host" : "localhost:27017"}, 
... {"_id" : 1, "host" : "localhost:27018"}, 
... {"_id" : 2, "host" : "localhost:27019", "arbiterOnly" : true}]})
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}

rs is a global variable that holds a bunch of useful replica set functions.
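
A few that come in handy right away (assuming your build has them): rs.help() lists the helpers, rs.status() shows the current state of each member, and rs.conf() shows the configuration we just saved.

> rs.help()
> rs.status()
> rs.conf()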

The message says it’ll be online in about a minute, but it’s always been ~5 seconds for me. Once you see the following line in one of the logs:

replSet PRIMARY

…your replica set is ready to go!

Playing with the set

One of the servers will be master and the other will be a slave. You can figure out which is which by running the isMaster command in the shell:

> db.isMaster()
{
        "ismaster" : true,
        "secondary" : false,
        "hosts" : [
                "localhost:27017",
                "localhost:27018",
        ],
        "arbiters" : [
                "localhost:27019"
        ],
        "ok" : 1
}

If the server db is connected to isn’t primary, the server that is will be listed in the “primary” field:

> db.isMaster()
{
        "ismaster" : false,
        "secondary" : true,
        "hosts" : [
                "localhost:27017",
                "localhost:27018",
        ],
        "arbiters" : [
                "localhost:27019"
        ],
        "primary" : "localhost:27018",
        "ok" : 1
}

Now, try killing the primary server. Wait a couple seconds and you’ll see the other (non-arbiter) server be elected primary.

Once there’s a new primary, restart the mongod you just killed. You’ll see it join in the fray, though not become master (there’s already a master, so it won’t rock the boat). After a few seconds, kill the current master. Now the old master will become master again!

It’s pretty fun to play with this, bringing them up and down and watching the mastership go back and forth (or maybe I’m easily amused).
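
If you'd rather watch the handoff from the shell than from the logs, one (totally optional) trick is to connect to a server you're not planning to kill and poll isMaster in a loop, something like the sketch below (Ctrl-C to stop it). The primary field can be missing for a moment while the election is happening, so don't be alarmed by the occasional undefined.

> while (true) { print(new Date() + "  primary: " + db.isMaster().primary); sleep(1000); }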

Inserting and querying data

By default, slaves are for backup only, but you can also use them for queries (reads) if you set the “slave ok” flag. Connect to each of the servers and set this flag:

> db.getMongo().setSlaveOk()
> borg2 = connect("localhost:27018/test")
connecting to: localhost:27018/test
test
> borg2.getMongo().setSlaveOk()

Now you can insert, update, and remove data on the master and read the changes on the slave.
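
For instance, something like this, assuming db is still connected to the primary on 27017 and using a made-up "drones" collection (if the slave's find comes back empty at first, give replication a second and run it again):

> db.drones.insert({"designation" : "7 of 9"})
> db.drones.find()
> borg2.drones.find()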

On Monday: the “why” behind what we just did.

24 thoughts on “Replica Sets Part 1: Master-Slave is so 2009”

  1. It should be noted that it appears you can’t have a / in your replica set name. Everything looks alright until you try to initialize your set, and then it always complains that it can’t find any servers (even if you’re on one of them as you try to configure it!)

  2. Hi, how do I execute the queries below?
    MongoDB shell version: 1.5.7
    connecting to: test
    > rs.initiate({"_id" : "unicomplex", "members" : [
    ... {"_id" : 0, "host" : "localhost:27017"},
    ... {"_id" : 1, "host" : "localhost:27018"},
    ... {"_id" : 2, "host" : "localhost:27019", "arbiterOnly" : true}]})
    At least one full execution of the query… please…

    1. You’ll need to upgrade, I don’t think replica sets existed in 1.5.7. I think 1.8.0-rc0 is coming out today, give that a try!

      1. Hi Kristina. You said that version 1.8.0-rc0 would be released; I don’t think it has been released yet. Can you tell me when it will be released?

      2. Hi kristina1, I am using 64-bit MongoDB and have done testing with it. The testing details are as follows: 1) I backed up one million files to my server, using MongoDB as the database storage. 2) My backup data size is “22.8 GB”. 3) The MongoDB size is “1.95 GB”. Now, my question: are there any settings available to compress the DB size before configuring the backup to the server? Please help me, folks… Thanks in advance,

  3. Hi,

    The above example works perfectly, and it’s great when all three nodes are running on a single server.
    But I’m having a problem when the nodes are on different servers. Using the above example while working with Python (http://api.mongodb.org/python/1.9%2B/examples/replica_set.html), I run into this problem:

    Problem: when I kill the mongo server on the sf1:27017 [primary] instance, my secondary [sf2:27017] should be elected primary to handle the failover, but that is not happening.

    Complete discussion:
    http://comments.gmane.org/gmane.comp.db.mongodb.user/27810

    Need help on this.

    1. Failover isn’t instantaneous, especially with network latency. You have to put a try/except around your database calls and wait (or keep retrying in a try/except) until a new master is elected.
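
       A rough sketch of the retry idea in the mongo shell (your driver’s syntax will differ, and the error handling here is just illustrative):

       > function insertWithRetry(coll, doc) {
       ...     for (var i = 0; i < 30; i++) {
       ...         try {
       ...             coll.insert(doc);
       ...             if (db.getLastErrorObj().err == null) return true;   // the write made it
       ...         } catch (e) {
       ...             // connection dropped mid-failover; fall through and retry
       ...         }
       ...         sleep(1000);   // give the set a moment to elect a new primary
       ...     }
       ...     return false;
       ... }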

  4. Hi,
    I tried to use your example above, but I keep getting the following error on the initiate code:
    'couldn't initiate : need all members up to initiate, not ok: localhost27018'

    I have also tried adding the member using rs.add, but that gave me the same error. I'm using the 2.0.2 x64 version on Windows. Any ideas on what to do?

    Thanks.

      1. Cayden,
        No, I never did. I’m just working with a single instance for now and figure I’ll attack the issue later. But I was disappointed not to receive any help on this.

      2. Hey guys, not sure why I didn’t see this comment before. If you have questions, the mailing list (groups.google.com/d/topic/mongodb-user/) is a good place to get help. @ericklind, in the error you pasted, it looks like you’re missing a : in your host string: "localhost:27018".

    1. In 2.0, journaling is on by default. You can turn it off with --nojournal. If you’re still using 1.8, yes, use --journal. When this was written, journaling didn’t exist yet.
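
       For instance, tacking it onto the options used in this post (just a sketch; adjust the path, port, and set name to your own setup):

       $ ./mongod --dbpath ~/dbs/borg1 --port 27017 --replSet unicomplex --nojournal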

  5. Hi,
    I tried with multiple systems instead of a single system, and I was able to get the replica set working. In this scenario I am using IPs to specify the systems, but I need to do it without hardcoding anything like a specific IP or system name. How is it possible to set up this replication…? Please help me…

  6. Hi Kristina,

    I am going through your blog on MongoDB; it is really excellent and very informative, with simple, easy steps. I have become a fan of your blog. I have configured 3 shards and added them to a cluster, but I do not know how to add members to each shard as a primary member, secondary member, and arbiter. I have laid out below what I have done, for your review. Could you please help me add the members as described?

    Shard configuration:

    [root@localhost bin]# mkdir -p /srv/sharding/shard1/data /srv/sharding/shard2/data /srv/sharding/shard3/data
    [root@localhost bin]# mkdir -p /srv/sharding/configsvr1/data /srv/sharding/configsvr2/data /srv/sharding/configsvr3/data

    ./mongod --shardsvr --dbpath /srv/sharding/shard1/data --port 4001
    ./mongod --shardsvr --dbpath /srv/sharding/shard2/data --port 4002
    ./mongod --shardsvr --dbpath /srv/sharding/shard3/data --port 4003

    Start 3 mongod instances as config servers:

    ./mongod --configsvr --dbpath /srv/sharding/configsvr1/data --port 4011
    ./mongod --configsvr --dbpath /srv/sharding/configsvr2/data --port 4012
    ./mongod --configsvr --dbpath /srv/sharding/configsvr3/data --port 4013

    Start the query router (mongos), pointing it at all the config DBs, on a specific port:

    [root@localhost bin]# ./mongos --port 2000 --configdb localhost:4011,localhost:4012,localhost:4013

    Open a mongo shell connected to mongos on the port mongos is running on:

    ./mongo --port 2000

    Add each shard to the cluster:

    mongos> sh.addShard("localhost:4001")
    { "shardAdded" : "shard0000", "ok" : 1 }
    mongos> sh.addShard("localhost:4002")
    { "shardAdded" : "shard0001", "ok" : 1 }
    mongos> sh.addShard("localhost:4003")
    { "shardAdded" : "shard0002", "ok" : 1 }

    mongos> sh.status()
    --- Sharding Status ---
      sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("56052aa6b9d93739b67bbb57")
      }
      shards:
        { "_id" : "shard0000", "host" : "localhost:4001" }
        { "_id" : "shard0001", "host" : "localhost:4002" }
        { "_id" : "shard0002", "host" : "localhost:4003" }
      balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
          No recent migrations
      databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
