AST Financial: the dumpster fire of a company

Having worked at MongoDB for several years, I had a bunch of shares that the company was holding for me when it went public, and I had to get them into my brokerage account to actually sell them. It turns out that a company can’t just hand you your shares: it has to go through a transfer agent, who keeps the record of who owns which shares of a company. The transfer agent then transfers those shares to a brokerage.

So several months ago, I got an email saying that MongoDB would be transferring my shares to AST Financial (a transfer agent). AST then emailed me some forms to return. I heard from international friends that the email address AST gave to people outside the US didn’t actually work, but luckily mine worked fine.

I received a somewhat confusing snail mail from AST with an account summary, which added together my first and last stock grants and broke those out into a separate section from the total, for reasons that are completely unclear to me.

I figured that maybe their website would have a more detailed breakdown, so I tried to activate my account. Their password form was pretty badly done, but whatever, banks always have crappy logins. Then we got to the “security questions,” which either didn’t actually apply to me (“What is your oldest sibling’s middle name?”) or might as well have been yes/no questions (“What color was your first dog?” Black. It’s the most common damn color for dogs in the world.) Then I agreed to their terms of service and… got an error page. I tried going back to the homepage and clicking “Sign up” again, and it took me directly to the terms of service. I nervously accepted, again. My account appeared!

Note that every time I have logged in subsequently, the site has presented an error page. Then I go back to the home page, click “Log in” again, and it takes me to my account.

I decided it was time to transfer my shares to my usual brokerage account. So I called AST, navigated their phone maze, and waited for someone to pick up. And waited, and waited. Eventually, the robot said that they were experiencing heavy call volumes and asked me to leave a message.

I left a message and a support person called a few hours later. Yay! I explained what I wanted and she asked to put me on hold while she looked up my account. Then she disconnected me.

After I finished raging, I called them back. After an hour of waiting for someone to pick up, I gave up and hung up. Then I emailed the person who had originally responded to my emailed-in forms, asking her to please have someone contact me.

In response, I received the original email they had sent, with the forms to fill out and send back.

Luckily, the MongoDB alumni had a group where everyone was bitching and, to a lesser extent, offering advice. Apparently AST actually had an online conversion process: under Account Holdings -> Account Profile, which of course takes you to the General Account Information page, which (of course) is where you transfer shares. I clicked on the button to convert my shares and… got a page that said “ERROR. An error occurred while processing your request.” I tried going to the previous page, which of course didn’t work: the back button just took me to the error page again, and my history was unhelpfully a zillion pages titled “AST,” so I manually typed in the URL. This got me to the conversion confirmation page, where it asked if I wanted to submit to convert all shares… with a universal “go back” symbol on the button.

Playing with fire, I entered my total number of shares and pressed submit. It gave me a confirmation page… with a submit button. So I submitted again, and finally got to the real confirmation page.

I didn’t want to publish this before I got extricated from ever having to deal with them again, but now that all my shares are safely out of their hands: AST is the worst. Anyone working on a blockchain-backed transfer agent?


Many years ago, I was speaking at a PHP meetup and got talking to a guy who worked at Microsoft. He told me about Microsoft’s BizSpark program, which lets startups use Microsoft’s software for free. As MongoDB was about a dozen people at the time, I filled out the application and started using PowerPoint for my presentations (which was a vast improvement over Open Office).

One of my friends just asked on Facebook how they could get a cheap license for Microsoft Word, so I looked up the BizSpark program page and saw:

BizSpark homepage

MongoDB went from being one of the startups that BizSpark was aimed at to being one of the features of BizSpark. Pretty cool.

User Support

On my last post, Jaime asked:

How the whole “Hacker News MongoDB random bashing” situation was dealt with from inside? There is a lot of MongoDB-hate out there, and I guess that it has to be difficult from an emotional point of view (especially when so many silly comments are made)

What I found the most difficult was when someone posted that they hated something that I, personally, was responsible for. They’d say something like, “the PHP driver sucks” on Twitter. Then I’d quietly go batshit, take a couple of deep breaths, and reply with something like, “Hey, I wrote the PHP driver, what’s up?” Once I had helped them debug the problem, they would often totally change their tune and proclaim MongoDB to be the best, which was very rewarding. However, it was really draining to have people casually hating my work, as though it was made by some uncaring drone at a big company.

More generally, if a user was bashing MongoDB, we would try to reach out and offer help. There was a sort of triage process: someone would notice a blog post or tweet and put it up on the group chat. We would provide practically unlimited free support to anyone: their success was our success, after all. Sometimes we couldn’t help or they didn’t want help, but often we could turn bashers into boosters. And that was very rewarding.

The High Ground in Low Country

Part of MongoDB’s company philosophy was not to trash-talk any of our competitors, no matter what. If we were asked, we should describe what the other solutions’ strengths and weaknesses were, and what good use cases would be.

My coworkers researched the other databases out there and gave presentations on them, so we’d all be able to talk fluently about them. And then we took the high ground.

But sometimes it was so. hard. When the competition attacked MongoDB, I found it impossible not to take it personally. But you know what? Taking the high ground actually worked. People remembered MongoDB, not whatever company posted the inflammatory blog post.

However, I’m glad that I don’t have to say nice things about those craptastic databases anymore 🙂

(Except for Redis. That dude was always classy, and his database is cool.)

The Rise of Big Data

I was helping a MongoDB user with sharding one time. His chunks weren’t splitting and I was trying to diagnose the issue. His shard key looked reasonable, he didn’t have any errors in his log, and manually splitting the chunks worked. Finally, I looked at how much data he was storing: only a few MB per chunk. “Oh, I see the problem,” I told him. “It looks like your chunks are too small to split, you just need more data.”

“No, my data is huge, enormous,” he said.

“Um, okay. If you keep inserting data, it should split.”

“This is a bug. My data is big.”

We argued back and forth a bit, but I managed to back off from having called his data small and convince him it wasn’t a bug. That day I learned that people take their data size very personally.

Finished The Definitive Guide

Or at least the writing; it still has to be tech edited, “real” edited, illustrated, formatted, etc. The second edition is going to be about 400 pages (almost twice the length of the first edition), with majorly expanded sections on sharding, replication, and server administration.


Now, some mea culpas:

To those of you who sent me schemas: I’m sorry if I never got back to you! I decided to go in a different direction and ended up not using any of them. Sorry to waste people’s time (but they were fascinating to read).

To those of you who sent in a schema and I asked for your mailing address: I forgot to forward those emails to my personal account before leaving 10gen so I’ve lost the addresses. Please resend your address to my personal email (k dot chodorow at gmail dot com).


Databases & Dragons

Here are some exercises to battle-test your MongoDB instance before going into production. You’ll need a Database Master (aka DM) to make bad things happen to your MongoDB install and one or more players to try to figure out what’s going wrong and fix it.

This was going to go into MongoDB: The Definitive Guide, but it didn’t quite fit with the other material, so I decided to put it here, instead. Enjoy!

Tomb of Horrors

Try killing off different components of your system: mongos processes, config servers, primaries, secondaries, and arbiters. Try killing them in every way you can think of, too. Here are some ideas to get you started:

  • Clean shutdown: shutdown from the MongoDB shell (db.shutdownServer()) or SIGINT.
  • Hard shutdown: kill -9.
  • Shut down the underlying machine.
  • If you’re running on a virtual machine, stop the virtual machine.
  • If you’re running on physical hardware, unplug the machine.

A slightly more difficult twist is to make these servers unrecoverable: decommission the virtual machine, firewall a box from the network, pick up a physical machine and hide it in a closet.

@markofu‘s suggestion: make netcat bind to 27017 so mongod can’t start back up again:

$ while [ 1 ]; do echo -e "MongoDB shell version: 2.4.0\nconnecting to: test\n>"; nc -l 27017; done

DM’s guide: make sure no data is lost.

The Adventure of the Disappearing Data Center

Similar to above, but more organized. You can either have a data center go down (shut down all the servers there) or you can just configure your network not to let any connections in or out, which is a more evil way of doing it. If you do this via networking, once your players have dealt with the data center going down, you can bring it back and make them deal with that, too.

Note that any replica set with a majority in the “down” data center will still have a primary when it comes back online. If your players have reconfigured the remainder of the set in another data center to be primary, these members will be kicked out of the set.
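The rule behind that note can be sketched: a side of a partition can elect (or keep) a primary only if it can see a strict majority of the set’s members. A minimal sketch of the majority check (the function name is mine, not MongoDB’s API):

```python
def can_elect_primary(total_members: int, reachable_members: int) -> bool:
    """A replica set member can become (or remain) primary only if it
    can reach a strict majority of the set's members (itself included)."""
    return reachable_members > total_members // 2

# 5-member set with 3 members in the "down" data center: the down side
# still holds a majority and will have a primary when it comes back.
assert can_elect_primary(5, 3)      # majority side
assert not can_elect_primary(5, 2)  # minority side cannot elect a primary
```

This is why the "evil" networking variant is interesting: the isolated majority keeps running as if nothing happened, and the fun starts when the partition heals.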

Find the Rogue Query

There are several types of queries that you can run that will pound on your system. If you’d like to teach operators how to track these types of queries down and kill them, this is a good game to play.

To test a query that stresses disk IO, run a query on a large collection that probably isn’t all in memory, such as the oplog. If you have a large, application-specific collection, that’s even better, as it’ll raise fewer red flags with the players as to why it’s running. Make sure it has to return hundreds of gigabytes of data.

Kicking off a complex MapReduce can pin a single core. Similarly, if you can do complex aggregations on non-indexed keys, you can probably get multiple cores.

Stressing memory and CPU can be done by building background indexes on numerous databases at the same time.

To be really tricky, you could find a frequently-used query that uses an index and drop the index.
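In practice, players will track these down by scanning db.currentOp() for long-running operations and ending them with db.killOp(). The selection logic can be sketched in Python over currentOp-shaped documents (the 30-second threshold and the helper name are my choices, not MongoDB’s API):

```python
def find_rogue_ops(ops, max_secs=30):
    """Return the opids of operations running longer than max_secs,
    skipping internal/replication operations on the local database."""
    return [
        op["opid"]
        for op in ops
        if op.get("secs_running", 0) > max_secs
        and not op.get("ns", "").startswith("local.")
    ]

# Sample documents shaped like db.currentOp()["inprog"] entries:
inprog = [
    {"opid": 11, "ns": "app.events", "secs_running": 412},       # rogue
    {"opid": 12, "ns": "app.users", "secs_running": 2},          # fine
    {"opid": 13, "ns": "local.oplog.rs", "secs_running": 9999},  # replication
]
assert find_rogue_ops(inprog) == [11]
```

Each opid that the filter returns would then be passed to db.killOp() in the shell.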

DM’s guide: players should warm the cache back up to speed the application’s return to normal.

THAC0, aka Bad System Settings

Try setting readahead to 65,000 and watch MongoDB’s RAM utilization go down and the disk IO go through the roof.

Set slaveDelay=30 on most of your secondaries and watch all of your application’s w: majority writes take 30 seconds.

Use rs.syncFrom() to create a replication chain where every server only has one server syncing from it (the longest possible replication chain). Then see how long it takes for w: majority writes to happen. How about if everyone is syncing directly from the primary?
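The effect of chaining on w: majority latency comes down to simple arithmetic: the write is acknowledged once it has reached a majority of members, and each link in the chain adds one hop of replication delay. A sketch, with a made-up per-hop delay in milliseconds:

```python
def majority_latency(num_members, hops_to_each_secondary, per_hop_delay_ms):
    """Time until a w:majority write is acknowledged: the write is on the
    primary immediately, then reaches each secondary after
    hops * per_hop_delay_ms; we wait for the fastest (majority - 1)
    secondaries to catch up."""
    majority = num_members // 2 + 1
    if majority <= 1:
        return 0
    arrivals = sorted(h * per_hop_delay_ms for h in hops_to_each_secondary)
    return arrivals[majority - 2]  # (majority - 1)th fastest secondary

# 5-member set, 100ms per hop (illustrative number):
chain = [1, 2, 3, 4]  # each secondary syncs from the previous one
star = [1, 1, 1, 1]   # every secondary syncs directly from the primary
assert majority_latency(5, chain, 100) == 200  # wait for 2nd link in chain
assert majority_latency(5, star, 100) == 100
```

The longest possible chain doubles the acknowledgment time here; with more members or slower links, the gap gets worse.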


What happens if your MongoDB instance gets more than it can handle? This is especially useful if you’re on a multi-tenant virtual machine: what’s going to happen to your application when one of your neighbors is behaving badly? However, it’s also good to test what might happen if you get a lot more traffic than you expect. You can use the Linux dd tool to write tons of garbage to the data volume (not the data directory!) and see what happens to your application.

Server Concealment

Try using a script to randomly turn the network on and off using iptables. For increased realism, note that you’re more likely to lose connectivity between data centers than within one, so be sure to test that case.

Network issues will generally cause failovers and application errors. It can be very difficult to figure out what’s going on without good monitoring or looking at logs.

MongoDB Puzzlers #1

Suppose that the collection contained the following documents:

{"x": -5}
{"x": 0}
{"x": 5}
{"x": 10}
{"x": [0, 5]}
{"x": [-5, 10]}
{"x": [-5, 5, 10]}

x is some combination of -5, 0, 5, and 10 in each document. Which documents would {"x" : {"$gt" : -1, "$lt" : 6}} return?
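The wrinkle is how MongoDB applies a range to an array: an array matches if any element satisfies $gt and any element (not necessarily the same one) satisfies $lt. A sketch of that matching rule in Python (just the semantics, not MongoDB’s implementation):

```python
def matches_range(x, gt, lt):
    """MongoDB-style range match: for arrays, some element must satisfy
    each clause, but not necessarily the same element."""
    values = x if isinstance(x, list) else [x]
    return any(v > gt for v in values) and any(v < lt for v in values)

docs = [-5, 0, 5, 10, [0, 5], [-5, 10], [-5, 5, 10]]
matching = [x for x in docs if matches_range(x, gt=-1, lt=6)]
# [-5, 10] matches even though no single element is between -1 and 6:
# 10 satisfies $gt: -1 and -5 satisfies $lt: 6.
assert matching == [0, 5, [0, 5], [-5, 10], [-5, 5, 10]]
```

To require a single element to satisfy both clauses, you’d use $elemMatch instead.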


MongoDB changing default: now write errors are reported

I’m really happy to share that, in a coordinated effort, all official MongoDB drivers are changing their defaults to return a response from writes today.

I think that this is kind of a turning point: MongoDB is finally “newbie safe.” You can just spin up a mongod and it’ll default to journaling being on. Then you write to it from a client and it’ll default to telling you if a write didn’t go through.

There are a lot of awesome “quiet” changes like this going into 2.4. I’m actually pretty excited about all these little improvements: sexy features like aggregation and TTL collections are all well and good, but we’re in the process of making some structural improvements that should pay big dividends in the future. Now, back to coding so you guys might actually get some new $-mods in 2.4 🙂

TDG Update

Screen shot of my PDF viewer with TDG’s titlebar

I just hit 300 pages! (O’Reilly has a nice system where it automatically compiles my XML into a PDF, so I can obsessively check page count). The last edition topped out at just over 200 pages, which was nice: you could actually sit down and read the thing in a reasonable amount of time and not have your lap fall asleep.

I don’t think this edition is going to be small enough to do that because MongoDB itself has gotten bigger. I couldn’t cover all of the things a developer needs to know in less than, well, 300 pages (and I’m not even close to done). I feel like Mike and I did a good job covering MongoDB two years ago (I’ve had to change almost nothing in the first few chapters), but it’s just a larger product now.

Finally, thank you to everyone who sent me schemas! I got a much bigger response than I expected and I’m running way behind on getting back to people, so I’m sorry if I haven’t emailed you back yet! However, I really appreciate all of your contributions.