Faster Than a Speeding Bullet (Yappy’s New Data-center)

Posted by Admin on April 25, 2015  /   Posted in Announcements

If you are a subscriber or have backed us on Kickstarter, thank you! This entire post would not be possible without your support! We hope this post educates, excites, and shares some interesting information with you, our supporters and users of Yappy.

All Bare, No Share

We have chosen to run our entire infrastructure on fully dedicated bare metal servers. What this means is that we own or lease our servers and house them in top tier secure data-centers. These data-centers provide us with electricity and premium bandwidth to utilize in exchange for a flat monthly fee.

Most “elastic” services run their infrastructure on cloud providers such as Microsoft Azure or Amazon EC2. These providers enable you to add and remove servers to your infrastructure on demand, for a price of course. They charge per instance (CPU + memory), for bandwidth, for # of IO calls to the data storage API’s, and so on. You can commit and pre-purchase resources for extended periods of time, but the pricing is still relatively high; you pay for the agility.

Due to the high traffic we deal with, and the amount of complicated processing we need to do in mere milliseconds (such as phone number parsing, validation, matching, message history lookups, textual indexing, and so on), our servers are pretty beefy (32+GB of RAM per server, 6+ cores, gig interconnects, etc.). They are constantly crunching away as we handle an average of 15k API calls a minute, with peaks of almost 500 calls a second, at 25 Mbps of sustained traffic!
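To put those figures in perspective, here is a quick back-of-the-envelope check using only the numbers quoted above; the bytes-per-call figure is a rough derived estimate, not a measured value:

```python
# Illustrative arithmetic based on the traffic figures quoted in the post.
calls_per_minute = 15_000                  # average API calls per minute
avg_calls_per_second = calls_per_minute / 60
peak_calls_per_second = 500                # reported peak

# Sustained traffic: 25 Mbps expressed as bytes per second
sustained_mbps = 25
sustained_bytes_per_second = sustained_mbps * 1_000_000 / 8

# Rough average payload size per call at the sustained rate
avg_bytes_per_call = sustained_bytes_per_second / avg_calls_per_second

print(f"average calls/sec: {avg_calls_per_second:.0f}")                            # 250
print(f"peak/average ratio: {peak_calls_per_second / avg_calls_per_second:.1f}x")  # 2.0x
print(f"approx. bytes per call: {avg_bytes_per_call:.0f}")                         # 12500
```

In other words, peak load runs at roughly double the average, and each call carries on the order of 12 KB of traffic.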


It would cost us significantly more to run Yappy on cloud providers than on dedicated hardware. To best manage our resources, we only use cloud services in extreme cases where our pre-allocated hardware is unable to keep up with demand; when in need, we spin up an Azure or EC2 instance to help with the API calls while the data center provisions a new server (which may take a day or two). Case in point: our #1 competitor charges $4.99 a month for the pro version of their service, while we charge only $0.99 for Yappy Pro.

As a bonus, running our own hardware gives us complete control and visibility over our data. We transport and store data encrypted on our own hard drives, not on shared or ephemeral APIs that give us questionable assurances about security. This way, we can claim and stick by our promise of data security.

At the end of the day, I guess you can say, we are old school with a dash of new school!

Milliseconds @ a Time

When you make a request to a website, API, or any other networked service, each call accrues a small time cost. The time it takes for data packets to travel from your device to our servers and back is known as latency. This latency, plus the time our servers need to process the request, determines your end user experience. The higher the latency, or the longer it takes us to process a request, the slower your end user experience will be.

Latency varies depending on many factors. One of the easiest to understand is geographical distance. The farther you are from our datacenter, the more ‘hops’ the data packets have to travel through. Each hop is basically a device such as a router; this router needs to establish a route and forward your data packet to the next hop. This process takes time, usually just a few milliseconds; across many hops, it adds up. Signals travel over copper or fiber at roughly 200 million meters a second (about two-thirds the speed of light in a vacuum); while this is really fast, it still adds more time for your data packet to traverse the web.
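A quick sketch of the physics alone, assuming a ~200 million m/s signal speed in fiber and a roughly 4,000 km LA-to-NY fiber path (illustrative figures; real routes are longer and add per-hop delays on top):

```python
# Rough propagation-delay estimate: distance over signal speed in fiber.
SIGNAL_SPEED_M_PER_S = 200_000_000  # ~2/3 the speed of light in a vacuum

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over `distance_km` of fiber."""
    return distance_km * 1_000 / SIGNAL_SPEED_M_PER_S * 1_000

# LA to NY is very roughly 4,000 km of fiber path
one_way = propagation_delay_ms(4_000)
round_trip = 2 * one_way
print(f"one-way: {one_way:.0f} ms, round trip: {round_trip:.0f} ms")  # 20 ms / 40 ms
```

So even before routers add their few milliseconds per hop, physics alone puts a coast-to-coast round trip at tens of milliseconds.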

While a latency of 100ms (1/10th of a second) might not seem like a big deal, consider scenarios where you have to make 10 or 20 calls to the server. That 100ms becomes 1 or 2 seconds. Scale this to an average Yappy user’s day of about 55 minutes spent on the user interface, and you could be looking at tens of seconds or even minutes of unwanted latency if they are physically far from our data center.
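The compounding effect above is simple multiplication, which a couple of lines make concrete:

```python
# How per-call latency compounds over sequential calls, as described above.
def total_latency_s(latency_ms: float, calls: int) -> float:
    """Total time spent waiting on the network, in seconds."""
    return latency_ms * calls / 1_000

print(total_latency_s(100, 10))   # 10 sequential 100ms calls -> 1.0 second
print(total_latency_s(100, 20))   # 20 sequential 100ms calls -> 2.0 seconds
```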

The LA Lure

Because I live in Orange County, it only made sense that the first data center be physically close to me. The first data center Yappy started in is in LA at the famous Wilshire hub. With multiple multi-gigabit top tier drops and affordable servers, the data center we chose has exhibited incredible uptime and performance. While most users in the US see latency of less than 100ms, we thought we could do better to serve our large East coast user base.

Take a quick look at the average latency experienced by most Yappy users and you will quickly notice that while our West coast user base received excellent service, the rest of the continent was not as fortunate. Additionally, our growing European user base (and especially our Asian users) felt a bit unappreciated.

Double the Trouble

After a lot of research, we found a datacenter in NJ within the NY metro area which provides comparable SLAs, speed, and most importantly high-speed drops to Europe and Asia. We decided to expand our server farms to this data center to help lower latency and improve the user experience as much as we could, while ensuring we keep the Yappy subscription cost low.


After the new data-center was brought up, we significantly reduced latency across the Americas and extended that improvement to Europe. Additionally, our Asian latency went from about 250ms down to 150ms, not bad!

How it’s Done

Since Yappy is a very transaction-oriented system, synchronizing and mirroring the back-end database is not the easiest thing to do. Providers like Twitter and Facebook run on highly scalable databases which provide ‘eventual consistency’. This basically means the database sacrifices consistency across all nodes for performance; it may take time before data from Node A is copied to Node B. This is OK for tweets and posts, but not for Yappy! The order in which messages arrive and are sent from our servers is crucial, as is their ability to quickly propagate between all database servers.
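To see why ordering matters, here is a toy sketch of a replica that refuses to apply updates out of sequence; this is a hypothetical illustration of the problem, not Yappy’s actual replication code:

```python
# Toy illustration of ordered replication: each update carries a sequence
# number, and a replica only applies an update that arrives in order.
# Hypothetical sketch, not Yappy's actual implementation.

class Replica:
    def __init__(self):
        self.last_seq = 0
        self.messages = {}

    def apply(self, seq: int, msg_id: str, body: str) -> bool:
        """Apply an update only if it is the next one in sequence."""
        if seq != self.last_seq + 1:
            return False          # gap detected: buffer or re-request it
        self.last_seq = seq
        self.messages[msg_id] = body
        return True

east = Replica()
# Update 2 arrives before update 1 -- refuse it instead of applying blindly
assert east.apply(2, "m2", "world") is False
assert east.apply(1, "m1", "hello") is True
assert east.apply(2, "m2", "world") is True
```

An eventually consistent store would happily serve whatever subset of updates it has; a message system instead has to detect the gap and wait for the missing update.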

Think about this: you live in CA and the Web interface is connected to our West coast servers. You tell the system to send a text. Our system sends your phone a command to connect to us to securely pick up the contents to send as a text. In some cases, your provider might give your phone a proxied IP that connects in the middle of the US. When trying to connect to Yappy, our system automatically transfers you to the geographically closest data center, in this case, the new East coast data center. If the message you wanted to send didn’t replicate from our West coast data center to the East, your phone would not be able to pick up the contents of the message and Yappy would fail.

To ensure we replicate the data as quickly as possible, we are riding on an encrypted backbone line between the two data centers to help us keep the sync time down to a minimum. On average, we sync all commands in 1-2 seconds.

How We Redirect

When your phone or browser looks up one of our service addresses, it connects to our DNS (Domain Name Server). The DNS determines where you are geographically and re-routes you to the closest data center! We tell your phone to refresh this information every 60 seconds so that if a data-center goes down for any reason, you are re-routed to the other and maintain a continuous connection to Yappy.
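The logic above can be sketched in a few lines; the datacenter names, region mapping, and IPs below are made-up placeholders (documentation-range addresses), not our real DNS configuration:

```python
# Simplified sketch of geo-aware DNS answers with a short TTL, mirroring
# the 60-second refresh described above. All names/IPs are hypothetical.
TTL_SECONDS = 60

DATACENTERS = {
    "west": "203.0.113.10",   # example IPs from the documentation range
    "east": "203.0.113.20",
}

def resolve(client_region: str, healthy: set) -> tuple:
    """Return (ip, ttl): the nearest healthy datacenter, else any healthy one."""
    preferred = "west" if client_region in {"CA", "OR", "WA"} else "east"
    if preferred not in healthy:
        preferred = next(iter(healthy))   # fail over to a surviving datacenter
    return DATACENTERS[preferred], TTL_SECONDS

ip, ttl = resolve("NY", {"west", "east"})
print(ip, ttl)                    # the east answer, with a 60-second TTL
ip, _ = resolve("NY", {"west"})   # east is down: fall back to west
print(ip)
```

The short TTL is the key trick: clients re-ask within a minute, so a datacenter failure only strands them briefly before the DNS hands out the surviving address.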

Proof is in the Pudding

If you’d like to see exactly what kind of latency and connection you have to our servers, check out this cool service tester we just built! It will measure your connectivity to our servers on both coasts and will tell you which one you are currently being routed to, neat huh?
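If you’d rather measure from the command line, timing a TCP handshake gives a rough approximation of what our tester reports; the hostname below is a placeholder, so substitute whichever host you want to measure:

```python
# Minimal latency check: time a single TCP handshake to a host.
# This approximates one network round trip (plus OS overhead).
import socket
import time

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time one TCP connection to `host:port`, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1_000

# Example (requires network access; replace with the host you care about):
# print(f"{tcp_latency_ms('example.com'):.1f} ms")
```

A single handshake is noisy, so averaging a handful of samples gives a steadier number.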

If you use the speed test drop us a line and let us know what your speed is!


Update (4/27/2015)

Here is the network latency map following the new data-center launch; most countries are experiencing exceptional service!

