These files are in a format called portable game notation (.PGN), which is a human-readable notation for chess games: a list of bracketed tag pairs (event, site, players, result, the ECO opening code) followed by the numbered moves. The first game in TEN.PGN (helloooo 80s filenames) looks something like this (a made-up record in the same shape, not the actual game):
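[Event "?"]
[Site "?"]
[Date "????.??.??"]
[Round "?"]
[White "Player, A."]
[Black "Player, B."]
[Result "1-0"]
[ECO "B20"]

1. e4 c5 ... 10. Qxf7# 1-0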
This represents a 10-turn win at an unknown event. The “ECO” field shows which opening was used (a Sicilian in the game above).
Unfortunately for us, MongoDB doesn’t import PGNs in their native format, so we’ll need to convert them to JSON. I found a PGN->JSON converter written in PHP that did the job. Scroll down to the “download” section to get the .zip.
It’s one of those zips that vomits its contents into whatever directory you unzip it in, so create a new directory for it.
MongoDB’s new aggregation framework is now available in the nightly build! This post demonstrates some of its capabilities by using it to analyze chess games.
Make sure you have the “Development Release (Unstable)” nightly running before trying out the stuff in this post. The aggregation framework will be in 2.1.0, but as of this writing it’s only in the nightly build.
First, we need some chess games to analyze. Download games.json, which contains 1132 games that were won in 10 moves or less (crush their soul and do it quick).
You can use mongoimport to import games.json into MongoDB:
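$ mongoimport --db test --collection fast_win --file games.json

(I’m assuming the test database here; the examples below only care that the collection is named fast_win. Add --jsonArray if your copy of games.json is one big array instead of one document per line. Each game document ends up with fields like result and moves.1.white.move, which you’ll see used below.)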
Not exactly the greatest schema, but that’s how the chess format exporter munged it. Regardless, now we can use aggregation pipelines to analyze these games.
Experiment #1: How often does white win in a short game?
White has a slight advantage in chess because white moves first (Wikipedia puts white’s chances of winning at 52%-56%). I’d hypothesize that, in a short game, going first matters even more.
Let’s find out.
The “result” field in these docs is “1-0” if white wins and “0-1” if black wins. So, we want to divide our docs into two groups based on the “result” field and count how many docs are in each group. Using the aggregation pipeline, this looks like:
> db.runCommand({aggregate : "fast_win", pipeline : [
... {
... $group : {
... _id : "$result", // group by 'result' field
... numGames : {$sum : 1} // add 1 for every document in the group
... }
... }]})
{
"result" : [
{
"_id" : "0-1",
"numGames" : 435
},
{
"_id" : "1-0",
"numGames" : 697
}
],
"ok" : 1
}
That gives a 62% chance white will win (697 wins/1132 total games). Pretty good (although, of course, this isn’t a very large sample set).
(Image: a reference chessboard with ranks 1-8 and files a-h marked, in case you’re not familiar with the notation.)
Experiment #2: What’s the best first move for white?
Given a starting move, what percent of the time will that move lead to victory? This probably depends on whether you’re playing white or black, so we’ll just focus on white’s opening move.
First, we’ll just determine what starting moves white uses, with this series of steps (a shell sketch follows the list):
project all of white’s first moves (the moves.1.white.move field)
group all docs with the same starting move together
and count how many documents (games) used that move.
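A minimal version of that pipeline might look like this (just the two stages above, using the moves.1.white.move path that mongoimport created):

> db.runCommand({aggregate : "fast_win", pipeline : [
... {
... $project : {
... firstMove : "$moves.1.white.move"
... }
... },
... {
... $group : {
... _id : "$firstMove",
... numGames : {$sum : 1}
... }
... }]})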
Now let’s compare those numbers with whether white won or lost.
> db.runCommand({aggregate: "fast_win", pipeline: [
... // extract the first move
... {
... $project : {
... firstMove : "$moves.1.white.move",
... // create a new field, "win", which is 1 if white won and 0 if black won
... win : {$cond : [
... {$eq : ["$result", "1-0"]}, 1, 0
... ]}
... }
... },
... // group by the move and count up how many winning games used it
... {
... $group : {
... _id : "$firstMove",
... numGames : {$sum : 1},
... numWins : {$sum : "$win"}
... }
... },
... // calculate the percent of games won with this starting move
... {
... $project : {
... _id : 1,
... numGames : 1,
... percentWins : {
... $multiply : [100, {
... $divide : ["$numWins","$numGames"]
... }]
... }
... }
... },
... // discard moves that were used in less than 10 games (probably not representative)
... {
... $match : {
... numGames : {$gte : 10}
... }
... },
... // order from worst to best
... {
... $sort : {
... percentWins : 1
... }
... }]})
{
"result" : [
{
"_id" : "f4",
"numGames" : 25,
"percentWins" : 24
},
{
"_id" : "b4",
"numGames" : 17,
"percentWins" : 35.294117647058826
},
{
"_id" : "c4",
"numGames" : 36,
"percentWins" : 50
},
{
"_id" : "d4",
"numGames" : 283,
"percentWins" : 50.53003533568905
},
{
"_id" : "g4",
"numGames" : 11,
"percentWins" : 63.63636363636363
},
{
"_id" : "Nf3",
"numGames" : 37,
"percentWins" : 67.56756756756756
},
{
"_id" : "e4",
"numGames" : 696,
"percentWins" : 68.24712643678161
},
{
"_id" : "Nc3",
"numGames" : 14,
"percentWins" : 78.57142857142857
}
],
"ok" : 1
}
Pawn to e4 seems like the most dependable winner here. Knight to c3 also seems like a good choice (at a nearly 80% win rate), but it was only used in 14 games.
Experiment #3: What’s the best first move for black?
We basically want a pipeline similar to the one in Experiment #2, but for black. At the end, we want to find the best and worst percentages.
> db.runCommand({aggregate: "fast_win", pipeline: [
... // extract the first move
... {
... $project : {
... firstMove : "$moves.1.black.move",
... win : {$cond : [
... {$eq : ["$result", "0-1"]}, 1, 0
... ]}
... }
... },
... // group by the move and count up how many winning games used it
... {
... $group : {
... _id : "$firstMove",
... numGames : {$sum : 1},
... numWins : {$sum : "$win"}
... }
... },
... // calculate the percent of games won with this starting move
... {
... $project : {
... _id : 1,
... numGames : 1,
... percentWins : {
... $multiply : [100, {
... $divide : ["$numWins","$numGames"]
... }]
... }
... }
... },
... // discard moves that were used in less than 10 games (probably not representative)
... {
... $match : {
... numGames : {$gte : 10}
... }
... },
... // sort by % win rate
... {
... $sort : {
... percentWins : -1
... }
... },
... // get the best and worst
... {
... $group : {
... _id : 1,
... best : {$first : "$_id"},
... worst : {$last : "$_id"}
... }
... }]})
{
"result" : [
{
"_id" : 1,
"best" : "Nf6",
"worst" : "d6"
}
],
"ok" : 1
}
I like this new aggregation functionality because it feels simpler than MapReduce. You can start with a one-operation pipeline and build it up, step by step, seeing exactly what a given operation does to your output. (And no JavaScript required, which is always a plus.)
There’s lots more documentation on aggregation pipelines in the docs and I’ll be doing a couple more posts on it.
Probably only relevant to a limited portion of my audience, but Silicon Valley Ryan Gosling is awesome. I have never seen anything like it and I’m not sure what the point is, but I know I’m a fan.
Go forth and be sexy and supportive for the female programmers you know.
I’ve been doing replica set “bootcamps” for new hires. They’re mainly focused on applying replication internals to debugging replica set issues and being able to talk fluently about what’s happening, but it occurred to me that you (blog readers) might be interested in this stuff, too.
I’m going to do one subject per post; we’ll see how many I can get through.
Prerequisites: I’m assuming you know what replica sets are and you’ve configured a set, written data to it, read from a secondary, etc. You understand the terms primary and secondary.
The most obvious feature of replica sets is their ability to elect a new primary, so the first thing we’ll cover is this election process.
Replica Set Elections
Let’s say we have a replica set with 3 members: X, Y, and Z. Every two seconds, each server sends out a heartbeat request to the other members of the set. So, if we wait a few seconds, X sends out heartbeats to Y and Z. They respond with information about their current situation: the state they’re in (primary/secondary), if they are eligible to become primary, their current clock time, etc.
X receives this info and updates its “map” of the set: which members have come up or gone down, which have changed state, and how long the round trip took.
At this point, if X’s map changed, X will check a couple of things. For example, if X is primary and a member went down, it will make sure it can still reach a majority of the set. If it cannot, it’ll demote itself to a secondary.
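The majority rule is simple enough to sketch (a toy version, not the server’s actual code):

// can a primary stay primary, given how many other members it can still reach?
function canStayPrimary(reachableMembers, setSize) {
    // the primary itself plus the members it can see must be a strict majority
    return (reachableMembers + 1) > setSize / 2;
}

canStayPrimary(1, 3); // true: 2 of 3 is a majority
canStayPrimary(0, 3); // false: X demotes itself to secondary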
Demotions
There is one wrinkle with X demoting itself: in MongoDB, writes default to fire-and-forget. Thus, if people are doing fire-and-forget writes on the primary and it steps down, they might not realize X is no longer primary and keep sending writes to it. The secondary-formerly-known-as-primary will be like, “I’m a secondary, I can’t write that!” But because the writes don’t get a response on the client, the client wouldn’t know.
Technically, we could say, “well, they should use safe writes if they care,” but that seems dickish. So, when a primary is demoted, it also closes all connections to clients so that they will get a socket error when they send the next message. All of the client libraries know to re-check who is primary if they get an error. Thus, they’ll be able to find who the new primary is and not accidentally send an endless stream of writes to a secondary.
Elections
Anyway, getting back to the heartbeats: if X is a secondary, it’ll occasionally check if it should elect itself, even if its map hasn’t changed. First, it’ll do a sanity check: does another member think it’s primary? Does X think it’s already primary? Is X ineligible for election? If it fails any of the basic questions, it’ll continue puttering along as is.
If it seems as though a new primary is needed, X will proceed to the first step in election: it sends a message to Y and Z, telling them “I am considering running for primary, can you advise me on this matter?”
When Y and Z get this message, they quickly check their world view. Do they already know of a primary? Do they have more recent data than X? Does anyone they know of have more recent data than X? They run through a huge list of sanity checks and, if everything seems satisfactory, they tentatively reply “go ahead.” If they find a reason that X cannot be elected, they’ll reply “stop the election!”
If X receives any “stop the election!” messages, it cancels the election and goes back to life as a secondary.
If everyone says “go ahead,” X continues with the second (and final) phase of the election process.
For the second phase, X sends out a second message that is basically, “I am formally announcing my candidacy.” At this point, Y and Z make a final check: do all of the conditions that held true before still hold? If so, they allow X to take their election lock and send back a vote. The election lock prevents them from voting for another candidate for 30 seconds.
If one of the checks doesn’t pass the second time around (fairly unusual, at least in 2.0), they send back a veto. If anyone vetoes, the election fails.
Suppose that Y votes for X and Z vetoes X. At that point, Y’s election lock is taken, so it cannot vote in another election for 30 seconds. That means that, if Z wants to run for primary, it had better be able to get X’s vote. That said, it should be able to if Z is a viable candidate: it’s not like the members hold grudges (except for Y, for 30 seconds).
If no one vetoes and the candidate member receives votes from a majority of the set, the candidate becomes primary.
Confused?
Feel free to ask questions in the comments below. This is a loving, caring bootcamp (as bootcamps go).
ObjectIds contain a timestamp, which tells you when the document was created. Because the _id field is always indexed, that means you have a “free” index on your “created at” time (unless you have persnickety requirements for creation times, like resolutions of less than a second, synchronization across app servers, etc.).
Actually using this index can seem daunting (how do you use an ObjectId to query for a certain date?) so let’s run through an example.
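First, we need the current time in seconds since the epoch. The shell’s Date type deals in milliseconds, so divide by 1000 and floor it:

> secs = Math.floor(new Date().getTime() / 1000)
1324417246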
(Your secs will be different than mine, of course.)
ObjectIds can be constructed from a 24-character string, each two characters representing a byte (e.g., “ff” is 255). So, we need to convert secs to hexadecimal, which luckily is super-easy in JavaScript:
> hexSecs = secs.toString(16)
4ef100de
Now, we create an ObjectId from this:
> id = ObjectId(hexSecs+"0000000000000000")
ObjectId("4ef100de0000000000000000")
If you get the wrong number of zeros here, you’ll get an error message that is, er, hard to miss.
Now, we query for everything created after this timestamp:
> db.foo.find({_id : {$gt : id}})
{ "_id" : ObjectId("4ef100de7d435c39c3016405"), "x" : 5, "y" : 0 }
{ "_id" : ObjectId("4ef100de7d435c39c3016406"), "x" : 5, "y" : 1 }
{ "_id" : ObjectId("4ef100de7d435c39c3016407"), "x" : 5, "y" : 2 }
{ "_id" : ObjectId("4ef100de7d435c39c3016408"), "x" : 5, "y" : 3 }
{ "_id" : ObjectId("4ef100de7d435c39c3016409"), "x" : 5, "y" : 4 }
{ "_id" : ObjectId("4ef100de7d435c39c301640a"), "x" : 5, "y" : 5 }
{ "_id" : ObjectId("4ef100de7d435c39c301640b"), "x" : 5, "y" : 6 }
{ "_id" : ObjectId("4ef100de7d435c39c301640c"), "x" : 5, "y" : 7 }
{ "_id" : ObjectId("4ef100de7d435c39c301640d"), "x" : 5, "y" : 8 }
{ "_id" : ObjectId("4ef100de7d435c39c301640e"), "x" : 5, "y" : 9 }
{ "_id" : ObjectId("4ef100df7d435c39c301640f"), "x" : 6, "y" : 0 }
{ "_id" : ObjectId("4ef100df7d435c39c3016410"), "x" : 6, "y" : 1 }
{ "_id" : ObjectId("4ef100df7d435c39c3016411"), "x" : 6, "y" : 2 }
{ "_id" : ObjectId("4ef100df7d435c39c3016412"), "x" : 6, "y" : 3 }
{ "_id" : ObjectId("4ef100df7d435c39c3016413"), "x" : 6, "y" : 4 }
{ "_id" : ObjectId("4ef100df7d435c39c3016414"), "x" : 6, "y" : 5 }
{ "_id" : ObjectId("4ef100df7d435c39c3016415"), "x" : 6, "y" : 6 }
{ "_id" : ObjectId("4ef100df7d435c39c3016416"), "x" : 6, "y" : 7 }
{ "_id" : ObjectId("4ef100df7d435c39c3016417"), "x" : 6, "y" : 8 }
{ "_id" : ObjectId("4ef100df7d435c39c3016418"), "x" : 6, "y" : 9 }
Type "it" for more
If we look at the explain for the query, you can see that it’s using the index:
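> db.foo.find({_id : {$gt : id}}).explain()
{
    "cursor" : "BtreeCursor _id_",
    ...
}

(Output abridged: the "BtreeCursor _id_" line is the part showing the _id index is being used.)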
We’re not quite done, because we’re actually not returning what we wanted: we’re getting all docs greater than or equal to the “created at” time, not just greater than. To fix this, we’d just need to add 1 to the secs before doing anything else. Or I can claim that we were querying for documents created after i=4 all along.
The aggregation pipeline code has finally been merged into the main development branch and is scheduled for release in 2.2. It lets you combine simple operations (like finding the max or min, projecting out fields, taking counts or averages) into a pipeline of operations, making a lot of things that were only possible by using MapReduce doable with a “normal” query.
In celebration of this, I thought I’d re-do the very popular MySQL to MongoDB mapping using the aggregation pipeline, instead of MapReduce.
Here is the original SQL:
SELECT
  Dim1, Dim2,
  SUM(Measure1) AS MSum,
  COUNT(*) AS RecordCount,
  AVG(Measure2) AS MAvg,
  MIN(Measure1) AS MMin,
  MAX(CASE
    WHEN Measure2 < 100
    THEN Measure2
  END) AS MMax
FROM DenormAggTable
WHERE (Filter1 IN ('A', 'B'))
  AND (Filter2 = 'C')
  AND (Filter3 > 123)
GROUP BY Dim1, Dim2
HAVING (MMin > 0)
ORDER BY RecordCount DESC
LIMIT 4, 8
We can break up this statement and replace each piece of SQL with the new aggregation pipeline syntax:
SELECT
  Dim1, Dim2,
  SUM(Measure1) AS MSum,
  COUNT(*) AS RecordCount,
  AVG(Measure2) AS MAvg,
  MIN(Measure1) AS MMin,
  MAX(CASE
    WHEN Measure2 < 100
    THEN Measure2
  END) AS MMax
GROUP BY Dim1, Dim2

The SELECT list and the GROUP BY collapse into a single $group stage (the CASE becomes a $cond):

{
    $group : {
        _id : {Dim1 : "$Dim1", Dim2 : "$Dim2"},
        MSum : {$sum : "$Measure1"},
        RecordCount : {$sum : 1},
        MAvg : {$avg : "$Measure2"},
        MMin : {$min : "$Measure1"},
        MMax : {$max : {$cond : [{$lt : ["$Measure2", 100]}, "$Measure2", null]}}
    }
}

WHERE (Filter1 IN ('A', 'B'))
  AND (Filter2 = 'C')
  AND (Filter3 > 123)

The WHERE becomes a $match at the front of the pipeline:

{
    $match : {Filter1 : {$in : ["A", "B"]}, Filter2 : "C", Filter3 : {$gt : 123}}
}

HAVING (MMin > 0)

The HAVING becomes a second $match, after the $group:

{
    $match : {MMin : {$gt : 0}}
}

ORDER BY RecordCount DESC

{
    $sort : {RecordCount : -1}
}

LIMIT 4, 8

MySQL’s LIMIT 4, 8 means “skip 4 rows, then return up to 8,” so it becomes a $skip followed by a $limit:

{
    $skip : 4
},
{
    $limit : 8
}
Putting all of these together gives you your pipeline:
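Something like this (assuming the collection is named DenormAggTable, like the table):

> db.runCommand({aggregate : "DenormAggTable", pipeline : [
... {$match : {Filter1 : {$in : ["A", "B"]}, Filter2 : "C", Filter3 : {$gt : 123}}},
... {$group : {
... _id : {Dim1 : "$Dim1", Dim2 : "$Dim2"},
... MSum : {$sum : "$Measure1"},
... RecordCount : {$sum : 1},
... MAvg : {$avg : "$Measure2"},
... MMin : {$min : "$Measure1"},
... MMax : {$max : {$cond : [{$lt : ["$Measure2", 100]}, "$Measure2", null]}}
... }},
... {$match : {MMin : {$gt : 0}}},
... {$sort : {RecordCount : -1}},
... {$skip : 4},
... {$limit : 8}
... ]})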
As you can see, the SQL matches the pipeline operations pretty clearly. If you want to play with it, it’ll be available soon in the development nightly build.
If you’re at MongoSV today (December 9th, 2011), check out Chris Westin’s talk on the new aggregation framework at 3:45 in room B4.
10gen is trying to hire a gazillion people, so I’m averaging two interviews a day (bleh). A lot of people have asked what it’s like to work on MongoDB, so I thought I’d write a bit about it.
A Usual Day
(Image: coffee, the lynchpin of my day.)
Get in around 10am.
Check if there are any commercial support questions that need to be answered right now.
Have a cup of coffee and code until lunch.
Eat lunch.
If nothing dire has happened, go out for coffee+writing. This refuels my brain and is a creative outlet: that’s where I am now. My coffee does not look nearly as awesome as the coffee on the right.
Go back to the office, code all afternoon.
Depending on the day, usually between 5:30 and 6:30 the programmers will naturally start discussing problems we had over the day, interviews, support, the latest geek news, etc. Often beers are broken out.
Wrap up, go home.
There are some variations on this: as I mentioned, a lot of time lately is taken up by interviewing. Other coworkers spend a lot more time than I do at consults, trainings, speaking at conferences, etc.
Other General Workday Stuff
On Fridays, we have lunch as a team. After lunch, we have a tech talk where someone presents on what they’re working on (e.g., the inspiration for my geospatial post) or general info that’s good to know (e.g., the inspiration for my virtual memory post). This is a nice way to end the week, especially since Fridays often wrap up earlier than other days.
A couple people use OS X or Windows for development, most people use Linux. You can use whatever you want. I’d like to encourage emacs users, in particular, to apply, as we’re falling slightly behind vi in numbers.
We sit in an open office plan, everyone at tables in a big room (including the CEO and CTO, who are both programmers). The only people in separate rooms are the people who have to be on the phone all day (sales, marketers, basketweavers… I’m not really clear on what non-technical people do).
And speaking of what people actually do, here are three examples of my job (that are more specific than “coding”):
Fixing Other People’s Bugs
Recently, a developer was using MongoDB and IBM’s DB2 with PHP. After he installed the MongoDB driver, PHP started segfaulting all over the place. I downloaded the ibm_db2 PHP extension to take a look.
PHP keeps a “storage unit” for extensions’ long-term memory use. Every extension shares the space and can store things there.
The DB2 extension was basically fire-bombing the storage unit.
It went through the storage, object by object, casting the objects into DB2 types and then freeing them. This worked fine when DB2 was the only PHP extension being used, but broke down when anyone else tried to use that storage. I gave the user a small patch that stopped the DB2 extension from destroying objects it didn’t create, and everything worked fine for them after that.
The Game is Afoot
A user reported that they couldn’t initialize their replica set: a member wasn’t coming online. The trick with this type of bug is to get enough evidence before the user wants to beat you over the head with the 800th log you’ve requested.
I asked them to send the first round of logs. It was weird: nothing was wrong from server1’s point of view. It had initialized properly and could connect to everyone in the set. I puzzled over the messages, figuring out that, once server1 had created the set, server2 accepted the connection from server1 but then somehow failed to connect back to server1, so it couldn’t pick up the set config. However, according to server1, it could connect fine to server2 and thought server2 was perfectly healthy!
I finally realized what must be happening: “It looks like server2 couldn’t connect to any of the others, but all of them could connect to it. Could you check your firewall?”
“Oh, that server was blocking all outgoing connections! Now it’s working fine.”
Elementary, my dear Watson.
You know you’re not at a big company when…
(Image: the server in question. At least it had “handles.”)
Someone on Sparc complained that the Perl driver wasn’t working at all for them. My first thought was that Sparc is big-endian, so maybe the Perl driver wasn’t flipping bytes correctly. I asked Eliot where our PowerPC machine (also big-endian) was, and he said we must have forgotten it when we moved: it was still in our old office around the corner.
“Bring someone to help carry it,” he told me. “It’s heavy.”
Pshaw, I thought. How heavy could an old desktop be?
I went around the corner and the other company graciously let me walk into their server room, choose a server, and walk out with it. Unfortunately, it weighed about 50 pounds, and I have a traditional geek physique (no muscles). The trip back to our office involved me staggering a couple steps, putting it down, shaking out my arms, and repeat.
When I got to our office, I just dragged it down the hallway to our server closet. Eliot saw me tugboating the thing down the hallway.
“You didn’t bring someone to help?”
“It’s *oof* fine!”
Unfortunately, once it was all set up, the Perl driver worked perfectly on it. So it wasn’t big-endian specific.
I was now pretty sure it was Sparc-specific (another person had reported the same problem on a Sparc), so I bought an elderly Sparc server for a couple hundred bucks off eBay. When it arrived a couple days later, Eliot showed me how to rack it and I spent a day fighting with the Solaris/Oracle package manager. However, it was all worth it: I tried running the Perl driver and it instantly failed (success!).
After some debugging, I realized that Sparc was much more persnickety than Intel about byte alignment. The Perl driver was playing fast and loose with a byte buffer, casting pieces of it into other types (which Sparc didn’t like). I changed some casts to memcpys and the Perl driver started working beautifully.
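The fix was the kind of thing sketched below (a made-up function in the same spirit, not the driver’s actual code): on Sparc, dereferencing a pointer that isn’t properly aligned causes a bus error, so you copy the bytes out instead of casting.

#include <stdint.h>
#include <string.h>

int32_t read_int32(const char *buf, size_t offset) {
    /* *(int32_t*)(buf + offset) crashes on Sparc if buf+offset
     * isn't 4-byte aligned; memcpy is safe for any alignment */
    int32_t value;
    memcpy(&value, buf + offset, sizeof(value));
    return value;
}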
But every day is different
The episodes above are a very small sample of what I do: there are hundreds of other things I’ve worked on over the last few years from speaking to working on the database to writing a freakin Facebook app.
So, if this sounded interesting, please go to our jobs website and submit an application!
Edit: since this was written, Sam has written some excellent documentation on using MMS. I recommend reading through it as you explore MMS.
Telling someone “You should set up monitoring” is kind of like telling someone “You should exercise 20 minutes three times a week.” Yes, you know you should, but your chair is so comfortable and you haven’t keeled over dead yet.
For years*, 10gen has been planning to do monitoring “right,” making it painless to monitor your database. Today, we released the MongoDB Monitoring Service: MMS.
MMS is free hosted monitoring for MongoDB. I’ve been using it to help out paying customers for a while, so I thought I’d do a quick post on useful stuff I’ve discovered (documentation is… uh… a little light, so far).
There are two options: registering a new company or registering another account with an existing company. For example, let’s say I want to monitor the servers for Snail in a Turtleneck Enterprises. I’ll create a new account and company group. Then Andrew, sys admin of my heart, can create an account with Snail in a Turtleneck Enterprises and have access to all the same monitoring info.
Once you’re registered, you’ll see a page encouraging you to download the MMS agent. Click on the “download the agent” link.
This is a little Python program that collects stats from MongoDB, so you need to have pymongo installed, too. Starting from scratch on Ubuntu, do:
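$ sudo apt-get install python-setuptools
$ sudo easy_install pymongo

(Or install pymongo however you usually install Python packages; the agent just needs to be able to import it.)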
Last step! Back to the website: see that “+” button next to the “Hosts” title?
(Image: the hosts page. Designed by programmers, for Vulcans.)
Click on that and type a hostname. If you have a sharded cluster, add a mongos. If you have a replica set, add any member.
Now go have a nice cup of coffee. This is an important part of the process.
When you get back, tada, you’ll have buttloads of graphs. They probably won’t have much on them, since MMS will have been monitoring them for all of a few minutes.
Cool stuff to poke
Across the top of the page, there’s a bar of buttons.
Of immediate interest: click “Hosts” to see a list of hosts.
You’ll see hostname, role, and the last time the MMS agent was able to reach this host. Hosts that it hasn’t reached recently will have a red ping time.
Now click on a server’s name to see all of the info about it. Let’s look at a single graph.
You can click & drag to see a smaller bit of time on the graph. See those icons in the top right? Those give you:
+
Add to dashboard: you can create a custom dashboard with any charts you’re interested in. Click on the “Dashboard” link next to “Hosts” to see your dashboard.
Link
Link to a private URL for this chart. You’ll have to be logged in to see it.
Email
Email a jpg of this chart to someone.
i
This is maybe the most important one: a description of what this chart represents.
That’s the basics. Some other points of interest:
You can set up alerts by clicking on “Alerts” in the top bar
“Events” shows you when hosts went down or came up, became primary or secondary, or were upgraded.
Arbiters don’t have their own chart, since they don’t have data. However, there is an “Arbiters” tab that lists them if you have some.
The “Last Ping” tab contains all of the info sent by MMS on the last ping, which I find interesting.
If you are confused, there is an “FAQ” link in the top bar that answers some common questions.
If you have any problems with MMS, there’s a little form at the bottom of the page that lets you complain.
This will file a bug report for you. It’s a “private” bug tracker: only 10gen and people in your group will be able to see the bugs you file.
* If you ran mongod --help using MongoDB version 1.0.0 or higher, you might have noticed some options that started with --mms. In other words, we’ve been planning this for a little while.
By request, a quick post on using PHP references in extensions.
To start, here’s an example of references in PHP we’ll be translating into C:
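<?php

function not_by_ref($arg) {
    echo "called not_by_ref($arg)\n";
    $arg = 2;
}

function by_ref(&$arg) {
    echo "called by_ref($arg)\n";
    $arg = 3;
}

$x = 1;
echo "x is $x\n";

not_by_ref($x);
echo "x is $x\n";

by_ref($x);
echo "x is $x\n";

?>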
This will print:
x is 1
called not_by_ref(1)
x is 1
called by_ref(1)
x is 3
If you want your C extension’s function to officially have a signature with ampersands in it, you have to declare to PHP that you want to pass in refs as arguments. Remember how we declared functions in this struct?
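As a reminder, it looked something like this (cthulhu being the function from the earlier posts):

zend_function_entry rlyeh_functions[] = {
    PHP_FE(cthulhu, NULL)
    { NULL, NULL, NULL }
};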
The second argument to PHP_FE, NULL, can optionally be the argument spec. For example, let’s say we’re implementing by_ref() in C. We would add this to php_rlyeh.c:
// the 1 indicates pass-by-reference
ZEND_BEGIN_ARG_INFO(arginfo_by_ref, 1)
ZEND_END_ARG_INFO();
zend_function_entry rlyeh_functions[] = {
PHP_FE(cthulhu, NULL)
PHP_FE(by_ref, arginfo_by_ref)
{ NULL, NULL, NULL }
};
PHP_FUNCTION(by_ref) {
zval *zptr = 0;
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "z", &zptr) == FAILURE) {
return;
}
php_printf("called (the c version of) by_ref(%d)n", (int)Z_LVAL_P(zptr));
ZVAL_LONG(zptr, 3);
}
Suppose we also add not_by_ref(). This might look something like:
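PHP_FUNCTION(not_by_ref) {
    zval *zptr = 0, copy;
    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "z", &zptr) == FAILURE) {
        return;
    }
    php_printf("called (the c version of) not_by_ref(%d)\n", (int)Z_LVAL_P(zptr));

    // modify a copy of the zval, not the caller's zval itself
    memcpy(&copy, zptr, sizeof(zval));
    Z_SET_REFCOUNT(copy, 1);
    ZVAL_LONG(&copy, 2);
}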
Note that we set the refcount of copy to 1. This is because the refcount for zptr is 2: 1 ref from the calling function + 1 ref from the not_by_ref function. However, we don’t want the copy of zptr to have a refcount of 2, because it’s only being used by the current function.
Also note that memcpy-ing the zval only works because this is a scalar: if this were an array or object, we’d have to use PHP API functions to make a deep copy of the original.
If we run our PHP program again, it gives us:
x is 1
called (the c version of) not_by_ref(1)
x is 1
called (the c version of) by_ref(1)
x is 3
Okay, this is pretty good… but we’re actually missing a case. What happens if we pass in a reference to not_by_ref()? In PHP, this looks like:
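$x = 1;
not_by_ref(&$x);
echo "x is $x\n";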
…which displays “x is 2”. Unfortunately, we’ve overridden this behavior in our not_by_ref() C function, so we have to special-case it: if the argument is a reference, change its value in place; otherwise, make a copy and change the copy’s value.
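A sketch of that special case, using Z_ISREF_P to check whether the argument is a reference:

PHP_FUNCTION(not_by_ref) {
    zval *zptr = 0, copy;
    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "z", &zptr) == FAILURE) {
        return;
    }
    php_printf("called (the c version of) not_by_ref(%d)\n", (int)Z_LVAL_P(zptr));

    if (Z_ISREF_P(zptr)) {
        // the caller passed a reference: PHP semantics say to modify it in place
        ZVAL_LONG(zptr, 2);
    }
    else {
        // plain argument: modify a copy, leaving the caller's variable alone
        memcpy(&copy, zptr, sizeof(zval));
        Z_SET_REFCOUNT(copy, 1);
        ZVAL_LONG(&copy, 2);
    }
}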
There may be a better way to do this, please leave a comment if you know of one. However, as far as I know, this is the only way to emulate the PHP reference behavior.
If you would like to read more about PHP references, Derick Rethans wrote a great article on it for PHP Architect.
When you run a process, it needs some memory to store things: its heap, its stack, and any libraries it’s using. Linux provides and cleans up memory for your process like an extremely conscientious butler. You can (and generally should) just let Linux do its thing, but it’s a good idea to understand the basics of what’s going on.
One easy way (I think) to understand this stuff is to actually look at what’s going on using the pmap command. pmap shows you memory information for a given process.
For example, let’s take a really simple C program that prints its own process id (PID) and pauses:
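Something like this does the trick:

#include <stdio.h>
#include <unistd.h>

int main() {
    // print our PID so we can pmap this process from another shell
    printf("run `pmap %d`\n", getpid());
    // block until we get a signal, so the process sticks around
    pause();
}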
Save this as mem_munch.c. Now compile and run it with:
$ gcc mem_munch.c -o mem_munch
$ ./mem_munch
run `pmap 25681`
The PID you get will probably be different than mine (25681).
At this point, the program will “hang.” This is because of the pause() function, and it’s exactly what we want. Now we can look at the memory for this process at our leisure.
Open up a new shell and run pmap, replacing the PID below with the one mem_munch gave you:
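$ pmap 25681
25681:   ./mem_munch
0000000000400000      4K r-x--  /home/user/mem_munch
...
00007fcf5af88000   1576K r-x--  /lib/x86_64-linux-gnu/libc-2.13.so
...
00007fcf5b31c000    128K r-x--  /lib/x86_64-linux-gnu/ld-2.13.so
...

(Abridged and approximate; your addresses and sizes will differ from run to run.)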
This output is how memory “looks” to the mem_munch process. If mem_munch asks the operating system for 00007fcf5af88000, it will get libc. If it asks for 00007fcf5b31c000, it will get the ld library.
This output is a bit dense and abstract, so let’s look at how some more familiar memory usage shows up. Change our program to put some memory on the stack and some on the heap, then pause.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    int on_stack, *on_heap;

    // local variables are stored on the stack
    on_stack = 42;
    printf("stack address: %p\n", &on_stack);

    // malloc allocates heap memory
    on_heap = (int*)malloc(sizeof(int));
    printf("heap address: %p\n", on_heap);

    printf("run `pmap %d`\n", getpid());
    pause();
}
Compile and run the new version, then run pmap on it again. Note that there’s a new entry between the final mem_munch section and libc-2.13.so. What could that be?
# from pmap
0000000001b84000    132K rw---    [ anon ]
# from our program
heap address: 0x1b84010
The addresses are almost the same. That block ([ anon ]) is the heap. (pmap labels blocks of memory that aren’t backed by a file [ anon ]. We’ll get into what being “backed by a file” means in a sec.)
The second thing to notice:
# from pmap
00007fff49747000    132K rw---    [ stack ]
# from our program
stack address: 0x7fff497670bc
And there’s your stack!
One other important thing to notice: this is how memory “looks” to your program, not how memory is actually laid out on your physical hardware. Look at how much memory mem_munch has to work with. According to pmap, mem_munch can address memory between 0x0000000000400000 and 0xffffffffff600000 (although everything from 0x00007fffffffffff up is reserved for the kernel). For those of you playing along at home, even just the user-space part of that range is about 128 terabytes of address space. That’s a lot of memory. (If your computer has that kind of memory, please leave your address and times you won’t be at home.)
So, the amount of memory the program can address is kind of ridiculous. Why does the computer do this? Well, lots of reasons, but one important one is that this means you can address more memory than you actually have on the machine and let the operating system take care of making sure the right stuff is in memory when you try to access it.
Memory Mapped Files
Memory mapping a file basically tells the operating system to load the file so the program can access it as an array of bytes. Then you can treat a file like an in-memory array.
For example, let’s make a (pretty stupid) random number generator by creating a file full of random numbers, then mmap-ing it and reading off random numbers.
First, we’ll create a big file called random (note that this creates a 1GB file, so make sure you have the disk space and be patient, it’ll take a little while to write):
$ dd if=/dev/urandom bs=1024 count=1000000 of=/home/user/random
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 123.293 s, 8.3 MB/s
$ ls -lh random
-rw-r--r-- 1 user user 977M 2011-08-29 16:46 random
Now we’ll mmap random and use it to generate random numbers.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/types.h>
int main() {
char *random_bytes;
FILE *f;
int offset = 0;
// open "random" for reading
f = fopen("/home/user/random", "r");
if (!f) {
perror("couldn't open file");
return -1;
}
// we want to inspect memory before mapping the file
printf("run `pmap %d`, then press ", getpid());
getchar();
random_bytes = mmap(0, 1000000000, PROT_READ, MAP_SHARED, fileno(f), 0);
if (random_bytes == MAP_FAILED) {
perror("error mapping the file");
return -1;
}
while (1) {
printf("random number: %d (press for next number)", *(int*)(random_bytes+offset));
getchar();
offset += 4;
}
}
If we run this program, we’ll get something like:
$ ./mem_munch
run `pmap 12727`, then press <enter>
The program hasn’t done anything yet, so the output of running pmap will basically be the same as it was above (I’ll omit it for brevity). However, if we continue mem_munch by pressing enter, the program will mmap random, and pmap will show one new entry: a mapping of nearly a gigabyte backed by the file /home/user/random. That one line kicks virtual memory usage up from about 4MB to 980MB.
However, let’s re-run pmap with the -x option, which adds a resident set size (RSS) column: only 4KB of random is resident. Resident memory is memory that’s actually in RAM. There’s very little of random in RAM because we’ve only accessed the very start of the file, so the OS has only pulled the first bit of the file from disk into memory.
If the virtual memory size (the Kbytes column) is all 0s for you, don’t worry about it. That’s a bug in Debian/Ubuntu’s -x option. The total is correct, it just doesn’t display correctly in the breakdown.
You can see that the resident set size, the amount that’s actually in memory, is tiny compared to the virtual memory. Your program can access any memory within a billion bytes of 0x00007fe261c6f000, but if it accesses anything past 4KB, it’ll probably have to go to disk for it*.
What if we modify our program so it reads the whole file/array of bytes?
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/types.h>
int main() {
char *random_bytes;
FILE *f;
int offset = 0;
// open "random" for reading
f = fopen("/home/user/random", "r");
if (!f) {
perror("couldn't open file");
return -1;
}
random_bytes = mmap(0, 1000000000, PROT_READ, MAP_SHARED, fileno(f), 0);
if (random_bytes == MAP_FAILED) {
printf("error mapping the filen");
return -1;
}
for (offset = 0; offset < 1000000000; offset += 4) {
int i = *(int*)(random_bytes+offset);
// to show we're making progress
if (offset % 1000000 == 0) {
printf(".");
}
}
// at the end, wait for signal so we can check mem
printf("ndone, run `pmap -x %d`n", getpid());
pause();
}
Now the resident set size is almost the same as the virtual memory size: nearly the entire gigabyte is resident.
Now if we access any part of the file, it will be in RAM already. (Probably. Until something else kicks it out.) So, our program can access a gigabyte of memory, but the operating system can lazily load it into RAM as needed.
And that’s why your virtual memory is so damn high when you’re running MongoDB.
Left as an exercise to the reader: try running pmap on a mongod process before it’s done anything, once you’ve done a couple operations, and once it’s been running for a long time.
* This isn’t strictly true**. The kernel actually says, “if they want the first N bytes, they’re probably going to want some more of the file,” so it’ll load, say, the first dozen KB of the file into memory but only tell the process about the first 4KB. When your program tries to access memory that is in RAM, but that it didn’t know was in RAM, it’s called a minor page fault (as opposed to a major page fault, when the OS actually has to hit disk to load new data).
** This note is also not strictly true. In fact, the whole file will probably be in memory before you map anything because you just wrote the thing with dd. So you’ll just be doing minor page faults as your program “discovers” it.