Replica Set Internals Bootcamp Part V: Initial Sync

I’ve been doing replica set “bootcamps” for new hires. It’s mainly focused on applying this to debug replica set issues and being able to talk fluently about what’s happening, but it occurred to me that you (blog readers) might be interested in it, too.

There are 8 subjects I cover in my bootcamp:

  1. Elections
  2. Creating a set
  3. Reconfiguring
  4. Syncing
  5. Initial Sync
  6. Rollback
  7. Authentication
  8. Debugging

Prerequisites: I’m assuming you know what replica sets are and you’ve configured a set, written data to it, read from a secondary, etc. You understand the terms primary and secondary.

The Initial Sync Process

When you add a brand new member to the set, it needs to copy over all of the data before it’s ready to be a “real” member. This process is called initial syncing and there are, essentially, 7 steps:

  1. Check the oplog. If it is not empty, this node does not initial sync; it just starts syncing normally. If the oplog is empty, then initial sync is necessary; continue to step 2.
  2. Get the latest oplog time from the source member: call this time start.
  3. Clone all of the data from the source member to the destination member.
  4. Build indexes on destination.
  5. Get the latest oplog time from the sync target: call this time minValid.
  6. Apply the sync target’s oplog from start to minValid.
  7. Become a “normal” member (transition into secondary state).

Note that this process only checks the oplog. You could have a petabyte of data on the server, but if there’s no oplog, the new member will delete all of it and initial sync. Members depend on the oplog to know where to sync from.

So, suppose we’re starting up a new member with no data on it. MongoDB checks the oplog, sees that it doesn’t even exist, and begins the initial sync process.
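
If you want to check for yourself, a member’s oplog lives in the local database, in a capped collection called oplog.rs. Peeking at the newest entry looks like this in the shell:

> use local
> db.oplog.rs.find().sort({$natural : -1}).limit(1)

On a brand new member, the collection won’t exist at all, which is exactly the empty-oplog case described above.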

Copying Data

The code for this couldn’t be much simpler. In shell-style JavaScript (server names here are placeholders), it is basically:

var src = new Mongo("sourceServer:27017"), dst = new Mongo("destinationServer:27017");
src.getDBNames().forEach(function(dbName) {
    src.getDB(dbName).getCollectionNames().forEach(function(collName) {
        src.getDB(dbName).getCollection(collName).find().forEach(function(doc) {
            dst.getDB(dbName).getCollection(collName).insert(doc);
        });
    });
});

(The real cloner is C++ and skips the local database, but that’s the gist.)

One of the issues with syncing is that it has to touch all of the source’s data, so if you’ve been carefully cultivating a working set on sourceServer, it’ll pretty much be destroyed.

There are benefits to initial syncing, though: it effectively compacts your data on the new secondary. Since all it’s doing is inserts, it’ll use pretty much the minimum amount of space. Some users actually use rotating resyncs to keep their data compact.

On the downside, initial sync doesn’t consider padding factor, so if that’s important to your application, the new server will have to build up the right padding factor over time.

Syncing on a Live System

The tricky part of initial syncing is that we’re trying to copy an (often) massive amount of data off of a live system. New data will be written while this copy is taking place, so it’s a bit like trying to copy a tree over six months.

By the time you’re done, the data you copied first might have changed significantly on the source. The copy on the destination might not be… exactly what you’d expect.

That is what the oplog replay step is for: to get your data to a consistent state. Oplog ops are idempotent: they can be applied multiple times and yield the same answer. Thus, so long as we apply all of the writes at least once (remember, they may or may not have been applied on the source before the copy), we’ll end up with a consistent picture when we’re done.


minValid (as mentioned in the list above) is the first timestamp where our new DB is in a consistent state: it may be behind the other members, but its data matches exactly how the other servers looked at some point in time.
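
If you’re curious, the sync code stashes minValid in a tiny collection in the local database. This is an internal detail (don’t build anything on top of it), but it can be handy when debugging:

> use local
> db.getCollection("replset.minvalid").findOne()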

Some examples of idempotency, as most people haven’t seen it since college:

// three idempotent functions:
function idemp1(doc) {
   doc.x = doc.x + 0;
}

function idemp2(doc) {
   doc.x = doc.x * 1;
}

function idemp3(doc) {
   // this is what replication does: it turns stuff like "$inc 4 by 1" into "$set to 5"
   doc.x = 5;
}

// two non-idempotent functions
function nonIdemp1(doc) {
   doc.x = doc.x + 1;
}

function nonIdemp2(doc) {
   doc.x = Math.random();
}

No matter how many times you call the idempotent functions, the value of doc.x will be the same (as long as you call them at least once).
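
To tie this back to replication: the oplog stores the already-idempotent form of each write, like idemp3 above. An oplog entry for that $set looks roughly like this (the layout is illustrative, not exact):

{
    "ts" : <timestamp>,             // when the op happened
    "op" : "u",                     // "u" means update
    "ns" : "test.tree",             // the namespace the op applies to
    "o2" : { "_id" : 123 },         // which document to update
    "o" : { "$set" : { "x" : 5 } }  // the idempotent form of the write
}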

Building Indexes

In 2.0, indexes were created on the secondary as part of the cloning step, but in 2.2, we moved index creation to after the oplog application. This is because of an interesting edge case. Let’s say we have a collection representing a tree and we have a unique index on leaf height: no two leaves are at exactly the same height. So, pretend we have a document that looks like this:

{
    "_id" : 123,
    ...,
    "height" : 76.3
}

The cloner copies this doc from the source server to the destination server and moves on. On the source, we remove this leaf from the tree because of, I dunno, high winds.

> db.tree.remove({_id:123})

However, the cloner has already copied the leaf, so it doesn’t notice this change. Now another leaf might grow at this height. Let’s say leaf #10012 grows to this height on the source.

> db.tree.update({_id:10012}, {$set : {height : 76.3}}) 

When the cloner gets to document #10012, it’ll copy it to the destination server. Now there are two documents with the same height field in the destination collection, so when it tries to create a unique index on “height”, the index creation will fail!
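
You can reproduce the failure by hand with the 2.x shell: once two documents share a height, building the unique index fails with a duplicate key error.

> db.tree.ensureIndex({height : 1}, {unique : true})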

So, we moved the index creation to after the oplog application. That way, we’re always building the index on a consistent data set, so it should always succeed.

There are a couple of other edge cases like this that have been super-fun to track down; you can look them up in Jira if you’re interested.

Restoring from Backup

Often, initial sync is too slow for people. If you want to get a secondary up and running as fast as possible, the best way to do so is to skip initial sync altogether and restore from backup. To restore from backup:

  1. Find a secondary you like the looks of.
  2. Either shut it down or fsync+lock it. Basically, get its data files into a clean state, so nothing is writing to them while you’re copying them (example commands after this list).
  3. Copy the data files to your destination server. Make sure you get all of your data if you’re using any symlinks or anything.
  4. Start back up or unlock the source server.
  5. Point the destination server at the data you copied and start it up.
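
For step 2, the flush-and-lock route looks something like this (db.fsyncLock() is the shell helper as of 2.0; the underlying command is {fsync : 1, lock : 1}):

> db.fsyncLock()    // flush to disk and block writes
// ...copy the data files somewhere safe...
> db.fsyncUnlock()  // let writes resume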

As there is already an oplog, the new member will not need to initial sync. It will begin syncing from another member’s oplog as soon as it starts up and will usually catch up quite quickly.

Note that mongodump/mongorestore actually does not work very well as a “restoring from backup” strategy because it doesn’t give you an oplog. You can create one on your own and prime it, but it’s more work and more fiddly than just copying files. There is a feature request for mongorestore to be able to prime the oplog automatically, but it won’t be in 2.2.

P.S. Trees were done with an awesome program I recently discovered called ArtRage, which I highly recommend to anyone who likes painting/drawing. It “feels” like real paint.

Replica Set Internals Bootcamp Part III: Reconfiguring

I’ve been doing replica set “bootcamps” for new hires. It’s mainly focused on applying this to debug replica set issues and being able to talk fluently about what’s happening, but it occurred to me that you (blog readers) might be interested in it, too.

There are 8 subjects I cover in my bootcamp:

  1. Elections
  2. Creating a set
  3. Reconfiguring
  4. Syncing
  5. Initial Sync
  6. Rollback
  7. Authentication
  8. Debugging

Prerequisites: I’m assuming you know what replica sets are and you’ve configured a set, written data to it, read from a secondary, etc. You understand the terms primary and secondary.

Reconfiguring Prerequisites

One of the goals is to not let you reconfigure yourself into a corner (e.g., end up with all arbiters), so reconfig tries to make sure that a primary could be elected with the new config. Basically, we go through each node, tally up how many votes there will be, and check that a majority of those votes belong to members that are up (the reconfig logic sends out heartbeats to find out).

Also, the member you send the reconfig to has to be able to become primary in the new setup. It doesn’t have to actually become primary, but its priority has to be greater than 0. So, you can’t give every member a priority of 0.

The reconfig also checks the version number, set name, and that nothing is going to an illegal state (e.g., arbiter-to-non-arbiter, upping the priority on a slave-delayed node, and so on).

One thing to note is that you can change hostnames in a reconfig. If you’re using localhost for a single-node set and want to change it to an externally resolvable hostname so you can add some other members, you can just change the member’s hostname from localhost to someHostname and reconfig (so long as someHostname resolves, of course).
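
The shell dance for that looks like this (someHostname:27017 is a stand-in, as above):

var cfg = rs.conf()
cfg.members[0].host = "someHostname:27017"
rs.reconfig(cfg)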

Additive Reconfiguration vs. Full Reconfigs

Once the reconfiguration has been checked for correctness, MongoDB checks to see if this is a simple reconfig or a full reconfig. A simple reconfig adds a new node. Anything else is a full reconfig.

A simple reconfig starts a new heartbeat thread for the new member and it’s done.
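
In shell terms, the simple path is what you get from the rs.add() helper:

> rs.add("newHostname:27017")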

A full reconfig clears all state. This means that the current primary closes all connections. All the current heartbeat threads are stopped and a new heartbeat thread for each member is started. The old config is replaced by the new config. Then the member formerly known as primary becomes primary again.

We definitely take a scorched-earth approach to reconfiguring. If you are, say, changing the priority of a node from 0 to 1, it would make more sense to change that field than to tear down the whole old config. However, we didn’t want to miss an edge case, so we went with better safe than sorry. Reconfig is considered a “slow” operation anyway, so we’ll generally make the tradeoff of slower and safer.

Propagation of Reconfiguration

Even if you have a node that is behind on replication or slave delayed, reconfiguration will propagate almost immediately. How? New configs are communicated via heartbeat.

Suppose you have 2 nodes, A and B.

You run a reconfig on A, changing the version number from 6 to 7.

B sends a heartbeat request to A, which includes a field stating that B’s version number is 6.

When A gets that heartbeat request, it will see that B’s config version is less than its own, so it’ll send back its config (at version 7) as part of its heartbeat response.

When B sees that new config, it’ll load it (making the same checks for validity that A did originally) and follow the same procedure described above.


Forcing Reconfig

Despite the checks made by reconfig, users sometimes get into a situation where they don’t have a primary: they’ve permanently lost a couple of servers or a data center and are suddenly stuck with a bunch of secondaries and no way to reconfig. So, in 2.0, we added a force:true option to reconfig, which allows it to be run on a secondary. That is all that force:true does. Sometimes people complain that force:true wouldn’t let them load an invalid configuration. Indeed, it won’t: force:true does not relax any of the other reconfig constraints. You still have to pass in a valid config. You can just pass it to a secondary.
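
For example, from a connection to one of the surviving secondaries:

var cfg = rs.conf()
// drop the members you've permanently lost from cfg.members here
rs.reconfig(cfg, {force : true})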

Why is my version number 6,203,493?

When you force-reconfigure a set, it adds a random (big) number to the version, which can be unnerving. Why does the version number jump by thousands? Suppose that we have a network partition and force-reconfigure the set on both sides of the partition. If we ended up with both sides having a config version of 8 and the set got reconnected, then everyone would assume they were in sync (everyone has a config version of 8, no problems here!) and you’d have half of your nodes with one config and half with another. By adding a random number to the version on reconfig, it’s very probable that one “side” will have a higher version number than the other. When the network is fixed, whichever side has a higher version number will “win” and your set will end up with a consistent config.

It might not end up choosing the config you want, but some config is better than the set puttering along happily with two primaries (or something stupid like that). Basically, if shenanigans happen during a network partition, check your config after the network is healthy again.

Removing Nodes and Sharding

I’d just like to rant for a second: removing nodes sucks! You’d think it would be so easy, right? Just take the node out of the config and boom, done. It turns out it’s a total nightmare. Not only do you have to stop all of the replication stuff happening on the removed node, you have to stop everything the rest of the set is doing with that node (e.g., syncing from it).

You also have to change the way the removed node reports itself so that mongos won’t try to update a set’s config from a node that’s been removed. And you can’t just shut it down, because people want to be able to play around and do rs.add("foo"); rs.remove("foo"); rs.add("foo"), so you have to be able to entirely shut down the replica set’s interaction with the removed node, but in a way that can be restarted on a dime.

Basically, there are a lot of edge cases around removing nodes, so if you want to be on the safe side, shut down a node before removing it from the set. However, Eric Milkie has done a lot of awesome work on removing nodes for 2.2, so it should be getting better.
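
If you take the safe route, the sequence looks something like this:

// from a connection to the member being removed:
> use admin
> db.shutdownServer()
// then, from the primary:
> rs.remove("foo:27017")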

Replica Set Internals Bootcamp: Part II – Creating a Set

I’ve been doing replica set “bootcamps” for new hires. It’s mainly focused on applying this to debug replica set issues and being able to talk fluently about what’s happening, but it occurred to me that you (blog readers) might be interested in it, too.

There are 8 subjects I cover in my bootcamp:

  1. Elections
  2. Creating a set
  3. Reconfiguring
  4. Syncing
  5. Initial Sync
  6. Rollback
  7. Authentication
  8. Debugging

I’m going to do one subject per post; we’ll see how many I can get through.

Prerequisites: I’m assuming you know what replica sets are and you’ve configured a set, written data to it, read from a secondary, etc. You understand the terms primary and secondary.

Initializing a Set

Suppose you are creating a new set. You start up some mongods with no data and the replSet option. When a server starts up with the replSet option, the first thing it does is check its local.system.replset collection for a replica set config it can load, but it won’t find one (because there’s no data). Since it cannot find a config, it goes into the EMPTYCONFIG state. If you look in the logs, you’ll see messages about EMPTYCONFIG.

Tue Mar  6 12:24:35 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Tue Mar  6 12:24:35 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
Tue Mar  6 12:24:45 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Tue Mar  6 12:24:55 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Tue Mar  6 12:25:05 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Tue Mar  6 12:25:15 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Tue Mar  6 12:25:25 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

Now suppose you send the replSetInitiate command to one of the members. replSetInitiate can either take a custom configuration or generate a config automatically. If you do not pass in a config, the server will try to figure out what its hostname is and generate a config using that.

Note: on EC2, the server always chooses the wrong hostname, so you have to pass in the config you want.
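
So, on EC2 (or anywhere the guessed hostname is wrong), pass the config explicitly; something like this, with your externally resolvable hostname filled in:

rs.initiate({
    _id : "foo",
    members : [ { _id : 0, host : "myHostname.example.com:30000" } ]
})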

Once the server has a config document, either passed in or generated, it’ll make sure it can reach every node specified (all members must be up and reachable to initiate), then store the config in the local.system.replset collection. Finally, it begins sending heartbeat requests to the other members of the set.

The Log: Step-by-Step

Being able to interpret log files is critical to knowing what’s going on, so let’s walk through a sample log from running rs.initiate(). Here’s the whole log; feel free to skip over it for now. We’ll take it piece-by-piece below.

Tue Mar  6 12:45:00 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Tue Mar  6 12:45:02 [initandlisten] connection accepted from 127.0.0.1:38839 #2 (2 connections now open)
Tue Mar  6 12:45:08 [conn2] replSet replSetInitiate admin command received from client
Tue Mar  6 12:45:08 [conn2] replSet info initiate : no configuration specified.  Using a default configuration for the set
Tue Mar  6 12:45:08 [conn2] replSet created this configuration for initiation : { _id: "foo", members: [ { _id: 0, host: "ubuntu:30000" } ] }
Tue Mar  6 12:45:08 [conn2] replSet replSetInitiate config object parses ok, 1 members specified
Tue Mar  6 12:45:08 [conn2] replSet replSetInitiate all members seem up
Tue Mar  6 12:45:08 [conn2] ******
Tue Mar  6 12:45:08 [conn2] creating replication oplog of size: 49085MB...
Tue Mar  6 12:45:08 [FileAllocator] allocating new datafile /datadir/local.ns, filling with zeroes...
Tue Mar  6 12:45:08 [FileAllocator] creating directory /datadir/_tmp
Tue Mar  6 12:45:08 [FileAllocator] done allocating datafile /datadir/local.ns, size: 16MB,  took 0.016 secs
Tue Mar  6 12:45:08 [FileAllocator] allocating new datafile /datadir/local.0, filling with zeroes...
Tue Mar  6 12:45:08 [FileAllocator] done allocating datafile /datadir/local.0, size: 2047MB,  took 0.016 secs
Tue Mar  6 12:45:08 [conn2] datafileheader::init initializing /datadir/local.0 n:0
Tue Mar  6 12:45:08 [FileAllocator] allocating new datafile /datadir/local.1, filling with zeroes...
...
Tue Mar  6 12:45:09 [conn2] datafileheader::init initializing /datadir/local.22 n:22
Tue Mar  6 12:45:09 [FileAllocator] allocating new datafile /datadir/local.23, filling with zeroes...
Tue Mar  6 12:45:09 [FileAllocator] done allocating datafile /datadir/local.23, size: 2047MB,  took 0.042 secs
Tue Mar  6 12:45:09 [conn2] datafileheader::init initializing /datadir/local.23 n:23
Tue Mar  6 12:45:10 [conn2] ******
Tue Mar  6 12:45:10 [conn2] replSet info saving a newer config version to local.system.replset
Tue Mar  6 12:45:10 [conn2] replSet saveConfigLocally done
Tue Mar  6 12:45:10 [conn2] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
Tue Mar  6 12:45:10 [conn2] command admin.$cmd command: { replSetInitiate: undefined } ntoreturn:1 keyUpdates:0 reslen:196 2802ms
Tue Mar  6 12:45:10 [rsStart] replSet load config ok from self
Tue Mar  6 12:45:10 [rsStart] replSet I am ubuntu:30000
Tue Mar  6 12:45:10 [rsStart] replSet STARTUP2
Tue Mar  6 12:45:10 [rsSync] replSet SECONDARY
Tue Mar  6 12:45:25 [rsSync] waiting for 2 pings from other members before syncing
Tue Mar  6 12:45:26 [rsMgr] replSet info electSelf 0
Tue Mar  6 12:45:26 [rsMgr] replSet PRIMARY

Now, let’s understand this line-by-line. First, we start out in the EMPTYCONFIG state, as described above.

Tue Mar  6 12:45:00 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

Now the shell connects to the server and runs rs.initiate() (replSetInitiate), which apparently takes me 6 seconds to type (take that, Mavis Beacon).

Tue Mar  6 12:45:02 [initandlisten] connection accepted from 127.0.0.1:38839 #2 (2 connections now open)
Tue Mar  6 12:45:08 [conn2] replSet replSetInitiate admin command received from client

Because we didn’t give a config, the server tries to generate one:

Tue Mar  6 12:45:08 [conn2] replSet info initiate : no configuration specified.  Using a default configuration for the set
Tue Mar  6 12:45:08 [conn2] replSet created this configuration for initiation : { _id: "foo", members: [ { _id: 0, host: "ubuntu:30000" } ] }

Now it does some sanity checks on this config (e.g., making sure all members are up and reachable). This matters more if we gave it a config, but sometimes things are so screwed up you can’t reach yourself.

Tue Mar  6 12:45:08 [conn2] replSet replSetInitiate config object parses ok, 1 members specified
Tue Mar  6 12:45:08 [conn2] replSet replSetInitiate all members seem up

Now it calculates the size of the oplog and allocates it between two lines of ******.

Tue Mar  6 12:45:08 [conn2] ******
Tue Mar  6 12:45:08 [conn2] creating replication oplog of size: 49085MB...
Tue Mar  6 12:45:08 [FileAllocator] allocating new datafile /datadir/local.ns, filling with zeroes...
Tue Mar  6 12:45:08 [FileAllocator] creating directory /datadir/_tmp
Tue Mar  6 12:45:08 [FileAllocator] done allocating datafile /datadir/local.ns, size: 16MB,  took 0.016 secs
Tue Mar  6 12:45:08 [FileAllocator] allocating new datafile /datadir/local.0, filling with zeroes...
Tue Mar  6 12:45:08 [FileAllocator] done allocating datafile /datadir/local.0, size: 2047MB,  took 0.016 secs
Tue Mar  6 12:45:08 [conn2] datafileheader::init initializing /datadir/local.0 n:0
Tue Mar  6 12:45:08 [FileAllocator] allocating new datafile /datadir/local.1, filling with zeroes...
...
Tue Mar  6 12:45:09 [conn2] datafileheader::init initializing /datadir/local.22 n:22
Tue Mar  6 12:45:09 [FileAllocator] allocating new datafile /datadir/local.23, filling with zeroes...
Tue Mar  6 12:45:09 [FileAllocator] done allocating datafile /datadir/local.23, size: 2047MB,  took 0.042 secs
Tue Mar  6 12:45:09 [conn2] datafileheader::init initializing /datadir/local.23 n:23
Tue Mar  6 12:45:10 [conn2] ******

It doesn’t actually print the “…” part; I just omitted 21 files being preallocated.

Now it stores the configuration in the local.system.replset collection.

Tue Mar  6 12:45:10 [conn2] replSet info saving a newer config version to local.system.replset
Tue Mar  6 12:45:10 [conn2] replSet saveConfigLocally done
Tue Mar  6 12:45:10 [conn2] replSet replSetInitiate config now saved locally.  Should come online in about a minute.

It seldom takes a minute for it to come online, but this way you’ll be pleasantly surprised.

replSetInitiate is now done running. It will often be logged, because slow operations are logged and allocating the oplog usually takes a while.

Tue Mar  6 12:45:10 [conn2] command admin.$cmd command: { replSetInitiate: undefined } ntoreturn:1 keyUpdates:0 reslen:196 2802ms

Now it actually loads this configuration as the replica set config we want to use:

Tue Mar  6 12:45:10 [rsStart] replSet load config ok from self

You won’t see this next line if you’re running 2.0, I added it for 2.2 because I was sick of having to decipher which server a log was from:

Tue Mar  6 12:45:10 [rsStart] replSet I am ubuntu:30000

Yay, now config has been loaded, making our state STARTUP2 (the “config has been loaded, starting up RS threads” state):

Tue Mar  6 12:45:10 [rsStart] replSet STARTUP2

At this point, it could go straight into PRIMARY state, but it doesn’t. We could fix this, but it hasn’t been a priority, so it goes into SECONDARY state briefly before becoming PRIMARY:

Tue Mar  6 12:45:10 [rsSync] replSet SECONDARY

A “real” secondary (we’re about to become primary, so we don’t count) will gather some stats about who’s best to sync from before it chooses a target:

Tue Mar  6 12:45:25 [rsSync] waiting for 2 pings from other members before syncing

Good luck with that, buddy, you’re the only one in the config.

Aha, we’ve noticed that we can become primary:

Tue Mar  6 12:45:26 [rsMgr] replSet info electSelf 0
Tue Mar  6 12:45:26 [rsMgr] replSet PRIMARY

And now we’re all set.

Propagating the Config

If we add members or provide an initial config with multiple members, the initial member’s heartbeat requests are sent to a bunch of servers in EMPTYCONFIG state. When these other servers receive the heartbeat request, they notice “Hey, I don’t have any config and this guy seems to think I’m part of his set, so I should check in with him.”

These EMPTYCONFIG members will then request a config from the member that requested the heartbeat. Seeing themselves in the config, they’ll save it to their own local.system.replset collection, then start sending out their own heartbeats. At that point, they’ll move into RECOVERING state and, once they’re synced to the original member, they’ll turn into secondaries (unless they’re arbiters, of course, but same general idea).

One quirk of initiating a set is that no member of the set other than the one you’re sending replSetInitiate to can have any data. After the set is initialized you can add members containing data, but at most one member can have data when you initialize the set.
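
Adding a member afterwards is just the usual helper; the heartbeat-driven config handoff described above takes care of the rest:

> rs.add("someHostname:27017")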

Confused?

Feel free to ask questions in the comments below. This is a loving, caring bootcamp (as bootcamps go).

Replica Set Internals Bootcamp: Part I – Elections

I’ve been doing replica set “bootcamps” for new hires. It’s mainly focused on applying this to debug replica set issues and being able to talk fluently about what’s happening, but it occurred to me that you (blog readers) might be interested in it, too.

There are 8 subjects I cover in my bootcamp:

  1. Elections
  2. Creating a set
  3. Reconfiguring
  4. Syncing
  5. Initial Sync
  6. Rollback
  7. Authentication
  8. Debugging

I’m going to do one subject per post; we’ll see how many I can get through.

Prerequisites: I’m assuming you know what replica sets are and you’ve configured a set, written data to it, read from a secondary, etc. You understand the terms primary and secondary.

The most obvious feature of replica sets is their ability to elect a new primary, so the first thing we’ll cover is this election process.

Replica Set Elections

Let’s say we have a replica set with 3 members: X, Y, and Z. Every two seconds, each server sends out a heartbeat request to the other members of the set. So, if we wait a few seconds, X sends out heartbeats to Y and Z. They respond with information about their current situation: the state they’re in (primary/secondary), whether they are eligible to become primary, their current clock time, etc.

X receives this info and updates its “map” of the set: whether members have come up or gone down, whether they’ve changed state, and how long the roundtrip took.
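
You can watch this map from the shell with rs.status(): each entry in the members array is what the member you’re connected to has learned about its peers via heartbeats (state, health, ping time, and so on).

> rs.status()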

At this point, if X’s map changed, X will check a couple of things: if X is primary and a member went down, it will make sure it can still reach a majority of the set. If it cannot, it’ll demote itself to a secondary.

Demotions

There is one wrinkle with X demoting itself: in MongoDB, writes default to fire-and-forget. Thus, if people are doing fire-and-forget writes on the primary and it steps down, they might not realize X is no longer primary and keep sending writes to it. The secondary-formerly-known-as-primary will be like, “I’m a secondary, I can’t write that!” But because the writes never wait for a response, the client would never know.

Technically, we could say, “well, they should use safe writes if they care,” but that seems dickish. So, when a primary is demoted, it also closes all connections to clients so that they will get a socket error when they send the next message. All of the client libraries know to re-check who is primary if they get an error. Thus, they’ll be able to find who the new primary is and not accidentally send an endless stream of writes to a secondary.
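
For reference, a “safe write” here just means following the write with getLastError and waiting for acknowledgement, optionally from a majority of the set:

db.foo.insert({x : 1})
// don't return until a majority of members have the write
db.runCommand({getlasterror : 1, w : "majority"})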

Elections

Anyway, getting back to the heartbeats: if X is a secondary, it’ll occasionally check whether it should elect itself, even if its map hasn’t changed. First, it’ll do a sanity check: does another member think it’s primary? Does X think it’s already primary? Is X ineligible for election? If it fails any of these basic checks, it’ll continue puttering along as is.

If it seems as though a new primary is needed, X will proceed to the first step in election: it sends a message to Y and Z, telling them “I am considering running for primary, can you advise me on this matter?”

When Y and Z get this message, they quickly check their world view. Do they already know of a primary? Do they have more recent data than X? Does anyone they know of have more recent data than X? They run through a huge list of sanity checks and, if everything seems satisfactory, they tentatively reply “go ahead.” If they find a reason that X cannot be elected, they’ll reply “stop the election!”

If X receives any “stop the election!” messages, it cancels the election and goes back to life as a secondary.

If everyone says “go ahead,” X continues with the second (and final) phase of the election process.

For the second phase, X sends out a second message that is basically, “I am formally announcing my candidacy.” At this point, Y and Z make a final check: do all of the conditions that held true before still hold? If so, they allow X to take their election lock and send back a vote. The election lock prevents them from voting for another candidate for 30 seconds.

If one of the checks doesn’t pass the second time around (fairly unusual, at least in 2.0), they send back a veto. If anyone vetoes, the election fails.

Suppose that Y votes for X and Z vetoes X. At that point, Y’s election lock is taken: it cannot vote in another election for 30 seconds. That means that, if Z wants to run for primary, it had better be able to get X’s vote. That said, it should be able to if Z is a viable candidate: it’s not like the members hold grudges (except for Y, for 30 seconds).

If no one vetoes and the candidate member receives votes from a majority of the set, the candidate becomes primary.

Confused?

Feel free to ask questions in the comments below. This is a loving, caring bootcamp (as bootcamps go).