I have been working with streaming data sources on my current project and I needed to implement a way to stream the contents of Apache Kafka topics over HTTP with 1:N connection fanout. As a quick test, I implemented an ActionController::Live streaming controller in Rails 4, and I ran into some trouble with my Nginx reverse-proxy config for Rails. Specifically, the streaming response from the upstream Rails application would prematurely terminate, closing the downstream Nginx connection to the client and terminating the streaming response. I had to do a lot of searching and trial and error testing to find a working Nginx reverse-proxy config for Rails live streaming, so I thought I would capture it here to save others the trouble.

First, some version info:

  • Nginx: 1.5.8 (testing on both Mac OS X and CentOS)
  • Rails: 4.0.2
  • Ruby: 2.0.0-p353

As a simple test, I implemented the following action using ActionController::Live streaming. Note the use of the X-Accel-Buffering HTTP header to inform Nginx to turn off proxy buffering for requests to this action. Alternatively, you can uncomment the proxy_buffering off directive in the Nginx location config to eliminate buffering for all Rails requests, but it is probably safer to selectively disable response buffering via the header.

class OutboundsController < ApplicationController
  include ActionController::Live

  def consumer
    response.headers['Content-Type'] = 'application/json'
    response.headers['X-Accel-Buffering'] = 'no'

    10.times { |i|
      hash = { id: "#{i}", message: "this is a test, message id #{i}" }
      response.stream.write "#{JSON.generate(hash)}\n"
      sleep 1
    }
  ensure
    response.stream.close
  end
end

The Nginx reverse-proxy config that finally worked for me is as follows. I’m only listing the bits I added to my otherwise “standard” nginx setup.

http {

  map $http_upgrade $connection_upgrade {
    default Upgrade;
    '' close;
  }

  upstream rails {
    server localhost:3000;
  }

  server {

    location / {
      proxy_pass http://rails;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      #proxy_buffering off; # using X-Accel-Buffering: no in ActionController::Live
    }
  }
}


With this config, I get the expected output — {"id":"0","message":"this is a test, message id 0"}, one per second, with incrementing IDs — streamed all the way through to my client.

One other note: if you are using curl as your client, be sure to set --no-buffer to avoid any client-induced delays in streaming data arrival.
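If you are consuming the stream programmatically instead, here is a minimal Ruby sketch using Net::HTTP that reads the chunked response and decodes each newline-delimited JSON message as it arrives. The URL path here is an assumption based on the controller and action names above; adjust it to match your routes.

```ruby
require 'net/http'
require 'json'
require 'uri'

# Decode one newline-delimited JSON message from the stream.
def parse_stream_line(line)
  JSON.parse(line)
end

# Stream the endpoint and yield each decoded message as it arrives.
# The default path is an assumption based on OutboundsController#consumer.
def consume(url = 'http://localhost/outbounds/consumer')
  uri = URI(url)
  Net::HTTP.start(uri.host, uri.port) do |http|
    http.request(Net::HTTP::Get.new(uri)) do |res|
      buffer = ''
      res.read_body do |chunk|              # chunks arrive as they are streamed
        buffer << chunk
        while (nl = buffer.index("\n"))     # emit each complete line
          yield parse_stream_line(buffer.slice!(0..nl))
        end
      end
    end
  end
end
```

Each yielded value is a Hash decoded from one line of the stream, arriving roughly one per second with the controller above.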

That’s it. Hope this helps.

I have been following the massive coverage of the Snowden affair with detached professional interest as someone with some Internet security experience. I recently read Bruce Schneier’s call to arms to the technical community to take back the Internet, and it got me thinking about what we have lost and what we actually had in the first place. But first, here’s the link to Schneier’s post. It’s worth a read:


The thought that occurred to me is that the collective outrage over the alleged invasions of our privacy obscures a fundamental question about the nature of the Internet: did Internet privacy ever exist? This question is orthogonal to the legal and constitutional aspects of the issue, and goes to the heart of why many services on the Internet are free to end users.

Think about Internet privacy for a moment. Go back to your earliest experience with the internet – email, a web page, perhaps a search engine – and ask yourself if you ever considered if what you were doing was (a) secure, and (b) private. Consider an email service. There is an assumption that your emails on a service like Gmail or Hotmail are private correspondence, and that they are being stored and treated as protected communication under the Electronic Communication Privacy Act and in the context of the Fourth Amendment. However, from a hacker’s perspective the reality is that you are using a free service (unless you are one of the few who pay…more on paid services later) and as such you are not giving the service provider “consideration” (e.g. money) to maintain any specific quality of service, including the privacy and security of your emails. Mail service providers make money as a side effect, usually via advertising or by providing additional services linked to your use of an email service, and none of the mail service providers make any specific guarantees as to the security or privacy of their services. To do so would be to expose them to liability and liquidated damages in the event that security or privacy were breached. The paying customers of an email service are advertisers, not people who send and receive email.

This perspective applies equally well to search engines. Google did not build an intergalactic search engine at enormous cost as an act of altruism; instead, they provide a free search service in exchange for collecting and retaining a large set of information about your interests (your searches) and your location (the latitude and longitude of a device that presents a search request). They use this information to target and sell advertising, and this is an extremely lucrative business for them. There is an assumption that Google will not use the information it collects about you to do evil, but Google does not make any specific guarantees that would accrue liability in the event that the data were misused. Again, the paying customers of Google are advertisers, not people who search the Internet.

And now extend this perspective to social networking, where you literally tell Twitter and Facebook exactly what you are doing (tweets and status updates), who your friends are (your social network), what you see (your photo collection), and what you like (your like votes). Again, there is an assumption of privacy and security here, but these are free services, and free services rarely, if ever, make guarantees about the quality of their service. In fact, there are a variety of quotes attributed to leaders of social networking companies in which they state (paraphrasing) that users should share more information, rather than less, and that people with nothing to hide should have nothing to fear.


The point, I guess, is that we shouldn’t be surprised to hear that the free services we use on the Internet are not as private nor as secure as we assumed they were. We are not the paying customers of the Internet services of Google, Microsoft, Facebook, Twitter, Yahoo!, et al., yet somehow we expected that they would stand up to our government and act in our best interests when our personal interests came into conflict with their commercial interests, responsibilities and obligations. The assumption of security and privacy on the Internet is a myth that has been busted, and the surprise is that it took so long for us to figure it out.

Optimistically, we can hope that Schneier’s call to arms will lead to a new generation of Internet services that are designed to guarantee privacy and security with sophisticated new cryptographic technologies. This will become increasingly necessary as each week we hear about yet another hacked Internet security technology – like the alleged SSL hacks that render browser encryption ineffective reported by the New York Times here:


Maybe part of the solution is that we will have to start paying for Internet services that we have become accustomed to receiving for free, so that the commercial interests of the Internet service companies can be aligned with our personal interests as users of the services. Or maybe we will each have to decide whether we value free Internet services more than the privacy of our information and communication. But one thing is certain: privacy and security on the Internet is fundamentally broken.

I had a need for a C++ client API for Apache Kafka with support for the new v0.8 wire protocol, and I didn’t see one out in the wild yet. So I found some time and wrote one. You can find it here on github:


It is released as open source under the Apache 2.0 license by my current employer (Adobe Research). I plan to provide ongoing support for this library.

I had a chance to play with Siri on my wife’s iPhone 4S and after the novelty wore off I was left with a very strong feeling that Siri will usher in a paradigm shift in how we search the internet — both semantically and economically. And this can only be bad news for Google, and, to a much lesser extent, Microsoft.

Prior to Siri (and, to be fair, prior to the existing voice recognition tools on Android devices), internet search meant a text box somewhere — on your desktop, laptop, or mobile device — into which you would type your search terms and await a response. The response usually took the form of a web page with a list of matches and some targeted advertising to pay the bills. Many companies have operated in the internet search space over the past 15 years, but Google now unquestionably owns this space (or I should say: Google has unquestionably owned this space until now). It is worth noting that every Apple iPhone and iPad sold thus far uses Google as the default search engine powering its search text boxes, and therefore Apple’s very large customer base could be counted on to provide a steady stream of search traffic to Google. Enter Siri – the new gatekeeper of search. Now, if you want to do a search on your iPhone 4S you speak your request and Siri decides how to respond, collating data from a variety of sources (which might or might not include Google). The response still looks like a list, but it is a list served up by Apple and any advertising that might be associated with those results comes from Apple. With Siri, Apple has entered the search engine business and they pose an existential threat to Google’s (and Bing’s) multi-billion-dollar search businesses because they are vertically integrated. This threat is very similar to the threat that Microsoft created (and used to massive effect) against Netscape by vertically integrating Internet Explorer with Windows — the gatekeeper controls access and, ultimately, the market.

Microsoft has long claimed that Google’s dominant market share gives them an increasing advantage because they see a more comprehensive sample of search requests and therefore they can design better algorithms based on these inputs. This same advantage is likely to accrue to Apple as Siri blazes a trail into the voice recognition semantic search space: Apple will be in possession of a more comprehensive sample of voice requests and the quality of Siri relative to any competitive offering from Google or Microsoft will continue to improve. Which can only be more bad news for Google.

More and more sources are catching on to the threat that Siri poses to Google. One of the more cogent ones is here:


Interesting times.

This quote comes from an analysis of the costs of cloud-based computing vs. traditional colocation as a function of the work load and duty cycle. This type of analysis is increasingly germane for companies that are looking to make a transition to cloud-based service providers in the hope that it will allow them to lower their overall IT costs. The results, while not surprising, do raise some interesting points. First, here’s the link:


The crux of the argument here is the concept of the duty cycle. The duty cycle for a deployed application is the percentage of available hardware resources that are being consumed at any given moment in time. Intuitively, a higher duty cycle corresponds to more efficient and cost-effective use of the underlying hardware. One of the fundamental promises of cloud computing is that it will allow you to run applications in a way that will produce a much higher duty cycle through elasticity, i.e. idle resources can be released without impacting the ability to scale back up in the future.
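To make the duty-cycle argument concrete, here is a toy Ruby sketch comparing fixed capacity sized for peak load against elastic capacity billed only for the hours actually used. All prices and counts here are illustrative assumptions, not figures from the linked analysis.

```ruby
# Fixed capacity is sized for peak and paid for around the clock.
def fixed_monthly_cost(peak_servers, hourly_price, hours_per_month = 720)
  peak_servers * hourly_price * hours_per_month
end

# Elastic capacity is billed only for the server-hours actually consumed.
def elastic_monthly_cost(server_hours_used, hourly_price)
  server_hours_used * hourly_price
end

# Hypothetical numbers: 10 servers at peak, 25% average duty cycle,
# colo at $0.10/server-hour vs. a pricier cloud rate of $0.34/server-hour.
fixed   = fixed_monthly_cost(10, 0.10)                 # 720.0
elastic = elastic_monthly_cost(10 * 720 * 0.25, 0.34)  # 612.0
```

At a 25% duty cycle the pricier elastic option still comes out ahead; push the duty cycle toward 100% and the fixed option wins, which is exactly the crossover this kind of analysis is probing.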

The analysis includes a spreadsheet with some hard data for one specific style of application, and the result is that an equivalent workload for this style of application would cost $118,248 at Amazon and $70,079 in a colocation facility, with the implication that a higher duty cycle can be achieved via colocation. However, this result is not as clear cut as it might seem, owing to the fact that the “application style” is an extremely subjective and important attribute for any given application. In my experience, it is rare to find an application that can be characterized in this way; instead, most applications that I have run across are inherently custom in some important way. I would think that this analysis would need to be performed on a case-by-case basis for any application that is considering a move to the cloud, and the specific result of that analysis would apply only to that application.

A subtle conclusion of this type of analysis is that duty cycle optimization for a given application should be a key criterion for cost reduction. And, somewhat conversely, if an application already has a high duty cycle then the opportunities for cost reduction through cloud-based resources will be limited at best. Or, more simply: if you already run highly-utilized servers then you might do better with colocation.

Hope this helps.

Everyone seems to think that Amazon’s web services business (a.k.a. EC2, S3, and the rest of AWS) is very big and getting bigger, but Amazon stubbornly refuses to break out the AWS contribution to Amazon’s earnings. A recent blog post on GigaOm is the first that I have seen that includes some real data — both for Amazon and for the total accessible cloud services market — to estimate the size of the AWS business today and the size of the market going forward. First, here’s the link:


The data comes from a UBS Investment Research paper, and it estimates that AWS is a $500M business in 2010. It further estimates that AWS will grow to $750M in 2011 — 50% year over year growth — reaching $2.5B, with a B, by 2014. Amazon as a whole does roughly $25B in revenue, growing in the 30%-40% range over the past year, so the numbers for AWS, while still small, are a high-growth component of Amazon’s overall business. Add in the fact that the UBS paper reports the gross margin of AWS at around 50%, vs. around 23% for Amazon as a whole, and one might draw the conclusion that the profit contribution of AWS will be a growing and very significant piece of Amazon’s pie in the years to come.
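As a quick sanity check on those projections, the implied compound annual growth rate from the 2011 estimate to the 2014 estimate can be sketched in a few lines of Ruby (the revenue figures are the UBS estimates quoted above):

```ruby
# Implied compound annual growth rate (CAGR) between two revenue points.
def cagr(start_value, end_value, years)
  (end_value.to_f / start_value)**(1.0 / years) - 1
end

# UBS estimates: $750M in 2011 growing to $2.5B by 2014 (figures in $M).
implied = cagr(750, 2500, 3) # roughly 0.49, i.e. ~49% per year
```

So the out-year projections assume AWS sustains roughly the same ~50% growth rate estimated for 2011.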

The question I have is this: Why aren’t more internet companies doing the same thing? Amazon’s results are a clear and undeniable validation of their AWS business strategy. That strategy, in a generic sense, was to build a very efficient cloud infrastructure for their own retail applications, and sell the excess capacity to the general public. They have proved that there is demand for the excess capacity and the service APIs that they provide for that capacity. And their list of customers has slowly transformed from penny-pinching startups and early adopters to a who’s who list of the largest, richest Fortune 500 companies in every business domain. And don’t forget that AWS has data centers around the world, and there is every reason to think that the demand for AWS from foreign companies will mirror growth in the US.

I can think of a bunch of older, larger internet companies that should definitely be trying to duplicate Amazon’s success. Some of them have already tried, albeit with a slightly different level of offering (e.g. Google’s App Engine, Microsoft’s Azure). But the barrier to entry, such as it is, requires only a large number of machines and the will to build the necessary cloud infrastructure systems. I’m sure someone will call BS on this statement and tell me that some serious skill is also required, and I would agree with that. But we live in a time when skill moves around a lot, and no company has a monopoly on talent. So why isn’t everybody trying to copy AWS?

This latest quote comes from Adobe Systems’ CTO Kevin Lynch during an interview with Fortune. It’s actually an approximation of what he said, but I think it captures his intent, which is to suggest that Android will shortly become the dominant smartphone operating system, and to imply that Adobe — with its close partnership with Google surrounding Flash — is not quite as dead on smart devices as we have been led to believe. More on that shortly.

But first, here is the link:


The article contains a set of statements attributed to Lynch in which he states that he expects Android to achieve a 50% share “in the springtime”. If true, this implies a meteoric rise in Android adoption from 3% in late 2009, to 26% in late 2010, to 50% by early 2011. The sheer number of devices that can potentially run Android along with the large number of wireless carriers that support it will give it a huge advantage over Apple and iOS in terms of growth and market share. This numerical superiority could justify Lynch’s prediction, but it does not imply that customers are choosing Android because of the quality of the experience. This is a key point, in my opinion, because Apple can close the proliferation gap — via a CDMA-based iPhone for Verizon, perhaps — and after they do so then market share will more closely track the customer experience. This is where the tightly controlled and polished iOS and iTunes App Store experience will continue to shine, and this is where Android will continue to deal with fragmentation issues in their device base that will affect the customer experience. What fragmentation issues, you ask?

A quick look at the Android Developers Site gives us the latest data on which versions of the Android OS are currently in use across the global Android device base. As of November 1, 2010, they are:

  • Android 1.5: 7.9%
  • Android 1.6: 15.0%
  • Android 2.1: 40.8%
  • Android 2.2: 36.2%

This data indicates that the Android device base is fragmented by OS version into three large buckets: obsolete versions (both 1.5 and 1.6) at 23%; version 2.1 at 40.8%; and version 2.2 at 36.2%. This presents a challenge, to say the least, to Android developers who wish to write apps that will run on any Android device, and this will certainly affect the quality of the overall Android experience. And, getting back to Adobe, it is worth noting that only Android 2.2 — at 36.2% — supports Adobe Flash 10.1 for a true Flash experience on a mobile device. Collectively, devices running any version of Android might achieve a 50% market share as Lynch suggests, but that metric does not appear to be an apples-to-apples comparison with Apple’s iOS because of this fragmentation. Apple does not report iOS version percentages as Android does, but an educated guess is that a much higher percentage of Apple devices are running the latest version of iOS due to the relative ease of the iOS upgrade experience via iTunes.
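The bucket percentages above are simple sums over the published shares; a quick Ruby sketch of the same arithmetic:

```ruby
# Version shares from the Android Developers site, November 1, 2010.
shares = { '1.5' => 7.9, '1.6' => 15.0, '2.1' => 40.8, '2.2' => 36.2 }

obsolete      = shares.values_at('1.5', '1.6').sum # ~23% on obsolete versions
flash_capable = shares['2.2']                      # 36.2% can run Flash 10.1
```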

Which leads me back to Lynch’s implication that Adobe is well-positioned via its partnership with Google to capitalize on Android’s market share. The data above shows that only a minority of Android devices (36.2%) can run Flash 10.1 today. Unless this percentage rises significantly in the next six months, Flash adoption on Android devices will not keep pace with Android’s growth rate. To be fair, new growth in the Android device base should be almost exclusively newer devices running the latest version of Android, and this should serve to increase the overall percentage of Android devices that can run Flash 10.1. But for now, Adobe and Flash appear to be chasing an Android market that is accelerating away from them.

I recently ran into an issue with I/O bandwidth in EC2 that produced some unexpected results. As part of my day job, I built a system to run clustered deployments of enterprise software within AWS. The enterprise software I chose for the prototype is, as it turns out, very sensitive to I/O bandwidth, and my first attempt at a clustered deployment using m1.large instances with only ephemeral disks did not provide enough I/O performance for the application to operate. After a few internet searches, I was convinced that I needed to use EBS volumes, probably in a RAID 0 striped volume configuration, to effectively increase the I/O performance through parallelism of the network volumes. We also got some guidance from Amazon to this effect, so it appeared to be, by far, our best bet to solve the problem.

So I converted my prototype to use RAID 0 4X-striped EBS volumes. And the I/O performance was different, but still not enough for the application. Uh-oh.

To better understand the problem, we decided that the next step would be to quantify the I/O performance of the different volume options, in the hope that we could ascertain how to tweak the configuration to get enough performance. A colleague of mine ran a variety of tests with iometer and produced the following data that captures read and write performance for six different volume configurations: a single EBS volume, a 4X EBS striped volume, a 6X EBS striped volume, an m1.large local ephemeral drive, an m1.xlarge local ephemeral drive, and “equivalent hardware” which, roughly speaking, corresponds to a server box that should be similar to a large EC2 instance in terms of performance and which is not heavily optimized for I/O. Some of these choices deserve some explanation. We looked at both m1.large and m1.xlarge instances because we were told that the m1.xlarge instances had a larger slice of I/O bandwidth from the underlying hardware, and we wanted to see how much larger that slice actually is, given the cost difference. The EBS-based volumes ran on m2.large instances, as we were told that these instances had a larger slice of real I/O bandwidth as well as a larger network bandwidth slice that would better support the EBS volumes.

So here is the hard data:

A few things jumped out at us when we saw these numbers:

  • The performance of the 6X EBS striped volume was significantly worse than the 4X EBS striped volume. Going from 4X to 6X must saturate something that effectively causes contention and reduced performance. Some information online suggests that EBS network performance can be quite variable, but we got consistent results over several runs.
  • The EBS-based volumes all had better read performance than the local ephemeral disk-based volumes in both IOPS and Mbps. The local ephemeral disk-based volumes had much better write performance in IOPS, and similar write performance in Mbps.
  • Neither the EBS-based volumes nor the local ephemeral disk-based volumes could stand up to a real piece of dedicated hardware. Which is reasonable, given what EC2 is. But which also implies that it is not a good idea to run I/O intensive operations in EC2. And hardware optimized for I/O would probably blow away the performance of our unoptimized “equivalent hardware” box.
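As a side note on reading numbers like these, IOPS and throughput (Mbps) are linked through the I/O block size, so the two metrics can rank the same volumes differently. A small Ruby sketch of the conversion (the block size here is an illustrative assumption, not one from our tests):

```ruby
# Throughput in MB/s implied by an IOPS figure at a given block size.
def iops_to_mb_per_sec(iops, block_size_kb)
  iops * block_size_kb / 1024.0
end

# e.g. 1000 IOPS of 64 KB operations:
iops_to_mb_per_sec(1000, 64) # 62.5 MB/s
```

This is why a volume can lead on write IOPS yet only tie on write Mbps: the ranking depends on the block size of the workload.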

Hopefully this data can point you in the right direction if you are dealing with I/O issues in EC2. We’re hoping for a little magic from Amazon on this issue later this year (and I can’t say anything more about that). In the interim, I/O issues will continue to require some trial and error to find a configuration that best matches your particular I/O profile.

Hope this helps.

This quote comes to us courtesy of Google’s Eric Schmidt, who was offering his top 10 reasons why mobile is #1 in the following InformationWeek article:


To paraphrase the context of this quote, all of the information created between the beginning of time and 2003 was about 5 EB, give or take (an exabyte is 10^18 bytes, or 1 billion GB). But today, according to him, “we” create 5 EB every 2 days, and it is not clear if this is the majestic plural “we”, as in Google, or “we” as in the entire world. But 5 EB is an awful lot of data (those drunken cat videos on YouTube are really starting to add up…).

Which brings me to my point: the signal-to-noise ratio of that 5 EB is probably low and getting lower every day. And I think we’re starting to get some hard data to prove it. For example, at Chirp, Twitter founder Biz Stone mentioned that Twitter has 105M+ registered users, with 300k new users each day, and 3B+ requests to their API each day. That is a lot of users and a lot of traffic. But how many of those users are active? 20%? 80%? According to the following article, 5% of those Twitter users are responsible for 75% of all traffic, with a rapid drop off in activity beyond 5%:


If the premise of this article is true, then only about 20% of Twitter users are active in a measurable way. And the most active 5% are clogging the pipes with an enormous volume of Tweets — from which we might infer that the signal-to-noise ratio of those Tweets is not very high. Ask yourself this question: what percentage of Tweets that you see are relevant to you in any way? These numbers will only get worse, information-density-wise, as more people sign up for Twitter but otherwise don’t use it in a measurable (or relevant) way.
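A quick back-of-the-envelope pass over those figures (the inputs are the numbers cited above; the per-bucket user counts are just arithmetic, not new data):

```ruby
registered = 105_000_000 # registered users, per the Chirp numbers above

heavy  = (registered * 0.05).to_i # 5_250_000 users generating 75% of all traffic
active = (registered * 0.20).to_i # ~21_000_000 users measurably active at all
```

In other words, roughly five million accounts are producing three quarters of everything in the firehose.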

But back to the original quote. If Google is processing 5 EB of new data every two days, then they definitely need to keep building new data centers in the Columbia River gorge and elsewhere where power is inexpensive. But perhaps what they really need is better filtering technology to drop the 80% of that 5 EB that they don’t really need to keep. Maybe Google needs to keep 5 EB every 2 days to ensure that the 1 EB of good data buried within is not lost, but if they could separate the wheat from the chaff, then they could (1) build fewer data centers, which their investors would really like, and (2) deliver information to us with a much higher signal-to-noise ratio, which we would really, really like.

Google made a name for itself by making search results more relevant than any company that came before them; we can only hope that they (and Twitter, and Facebook, et al.) will continue to pursue information relevance with similar zeal as they process those 5 EB every two days. Otherwise, we should all plan for a whole lot more drunken cat videos.

First, let me state for the record that I am currently an Adobe employee and that the opinions stated here are entirely my own and not in any way the official position of Adobe. And also let me state that the quote above is not from me (more on that later). And let me state that I am drinking beer right now. But I am also a hardcore developer, and I’m having a really, really hard time digesting the latest directives coming out of Cupertino. If I understand the new rules for the iPhone 4.0 SDK, I can only write code in C, C++, and Objective C if I intend to compile that code, using any compiler, into an executable that runs on an iPad or iPhone. What. The. F.?

Before I launch this rant into orbit, here is the link to the post that gives us that priceless quote:


Believe it or not, the quote is actually from an Adobe employee (not me) in a blog post. I’m sure the author got some immediate love from Adobe Legal, so I am going to attempt to avoid that situation and I will not comment directly on the situation involving the new iPhone 4.0 SDK rules, Flash, and CS5. Instead, I’m going to step back and ask the real question that should be on everyone’s minds:

“Where have you gone, Joe DiMaggio Apple?”

Apple used to be the company that everyone loved. The underdog company in the 1990s that fought the good fight against Microsoft and largely lost. The company that took on the music labels and finally, finally gave us a way to purchase and play digital music that made sense with the iPod and iTunes. The company that made laptops cool again with MacBooks and gave all of us hackers in the corporate world a way to use a non-Windows machine. The company that re-invented the smart phone with the iPhone and broke the wireless companies’ stranglehold on third-party applications on mobile devices with the AppStore. And, for better or worse, the company that gave us the iPad, which might just have a shot at initiating a paradigm shift for how we interface with and use our personal computers.

But in the past year or two, Apple seems to have been tempted by the dark side of the force, and their response is looking more like Anakin than Luke. Which is another way to say that they are starting to look a lot like Microsoft in the 1990s, a.k.a. the company that we all loved to hate. They launched the iPhone with Google’s CEO on stage, for crying out loud, but now they have a proxy suit against Google and Android via the lawsuit against HTC. The widespread complaints about iPhone app rejections, and the complete lack of transparency, and perceived capriciousness, of their criteria for acceptance. Their intransigent position about Flash and any technology that has the potential to bypass the AppStore business model. And now the latest rules, which dictate which languages are blessed for use when developing for the iPhone/iPad platform, with the severe implication that applications that were written in any other programming language — e.g. Java, or perhaps ActionScript — will be rejected from the AppStore.

Apple has built up a huge account balance of goodwill over the last decade, and they appear to have decided that it is time to empty that account. To be fair, Apple’s official statements seem to indicate that they believe that their inventions are being infringed (in the case of the HTC lawsuit); that they believe the AppStore requires a high bar for acceptance to maintain the quality of the experience on Apple devices; that they believe Flash is a security nightmare and CPU pigdog (to paraphrase some of the actual quotes that have been attributed to Apple’s CEO in the media); and that they believe that allowing people to compile their runtime interpreters into native applications is somehow an end-around move on their AppStore business model. But whichever side you take in these disputes, I think everyone has to accept that the tone of the conversation has irreversibly changed; with revenue comes arrogance, as they say in Silicon Valley (and probably everywhere else in the business world). The friendly days when Apple was a darling underdog and partner are gone forever, and we should get used to dealing with the 220 Billion Dollar Monster.

As a developer, I couldn’t be more turned off by a company that tells me which languages I can and can’t use. I want the freedom to choose the language that makes the most sense, or maybe seems the most fun to use, or maybe has the most available libraries for what I want to do, rather than being forced to live by someone else’s language laws. And I might just want to port some older code, rather than re-writing it from scratch in a “certified” language that is permitted by the device vendor. At the end of the day it is all object code running on the device, and the programming language that I used to capture my ideas before compiling it to object code isn’t really important. Unless you have a business model that you perceive to be at risk from certain programming languages, in which case it is very important…

So how does this end? I think it can only end badly for everyone. Some number of cool apps will never make it to certain devices under these rules. Some developers will be turned off and will leave this particular ecosystem for less-restrictive ones. And most importantly, all of the big companies will have their shields up and proton torpedoes armed and they will stop collaborating on the things that only big companies can do together. For example, remember how cool it was when the iPhone shipped with Google Maps and location awareness? Some might say that was revolutionary all by itself.

But I think this week was a very bad week for us all, because it clearly marks the end of any effective inter-corporate collaboration in the emerging mobile device universe.