I had a chance to play with Siri on my wife’s iPhone 4S and after the novelty wore off I was left with a very strong feeling that Siri will usher in a paradigm shift in how we search the internet — both semantically and economically. And this can only be bad news for Google, and, to a much lesser extent, Microsoft.
Prior to Siri (and, to be fair, prior to the existing voice recognition tools on Android devices), internet search meant a text box somewhere — on your desktop, laptop, or mobile device — into which you would type your search terms and await a response. The response usually took the form of a web page with a list of matches and some targeted advertising to pay the bills. Many companies have operated in the internet search space over the past 15 years, but Google now unquestionably owns this space (or I should say: Google has unquestionably owned this space until now). It is worth noting that every Apple iPhone and iPad sold thus far uses Google as the default search engine powering its search text boxes, and therefore Apple’s very large customer base could be counted on to provide a steady stream of search traffic to Google.
Enter Siri – the new gatekeeper of search. Now, if you want to do a search on your iPhone 4S you speak your request and Siri decides how to respond, collating data from a variety of sources (which might or might not include Google). The response still looks like a list, but it is a list served up by Apple, and any advertising that might be associated with those results comes from Apple.
With Siri, Apple has entered the search engine business and they pose an existential threat to Google’s (and Bing’s) multi-billion-dollar search businesses because they are vertically integrated. This threat is very similar to the threat that Microsoft created (and used to massive effect) against Netscape by vertically integrating Internet Explorer with Windows — the gatekeeper controls access and, ultimately, the market.
Microsoft has long claimed that Google’s dominant market share gives them an increasing advantage because they see a more comprehensive sample of search requests and therefore they can design better algorithms based on these inputs. This same advantage is likely to accrue to Apple as Siri blazes a trail into the voice recognition semantic search space: Apple will be in possession of a more comprehensive sample of voice requests and the quality of Siri relative to any competitive offering from Google or Microsoft will continue to improve. Which can only be more bad news for Google.
More and more sources are catching on to the threat that Siri poses to Google. One of the more cogent ones is here:
This quote comes from an analysis of the costs of cloud-based computing vs. traditional colocation as a function of the workload and duty cycle. This type of analysis is increasingly germane for companies that are looking to move to cloud-based service providers in the hope of lowering their overall IT costs. The results, while not surprising, do raise some interesting points. First, here’s the link:
The crux of the argument here is the concept of the duty cycle. The duty cycle for a deployed application is the percentage of available hardware resources that are being consumed at any given moment in time. Intuitively, a higher duty cycle corresponds to more efficient and cost-effective use of the underlying hardware. One of the fundamental promises of cloud computing is that it will allow you to run applications in a way that will produce a much higher duty cycle through elasticity, i.e. idle resources can be released without impacting the ability to scale back up in the future.
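To make the duty cycle argument concrete, here is a minimal back-of-the-envelope sketch in Python (my own, not taken from the linked analysis) that finds the break-even duty cycle above which a fixed-cost colocated server beats an elastic on-demand instance. The hourly and monthly prices are illustrative assumptions, not quotes from any provider.

```python
# Back-of-the-envelope sketch (not from the linked analysis): estimate the
# break-even duty cycle at which colocation becomes cheaper than on-demand
# cloud instances. All prices below are illustrative assumptions.

CLOUD_HOURLY_RATE = 0.34    # assumed $/hour for one on-demand instance
COLO_MONTHLY_COST = 150.00  # assumed $/month for one colocated server (space, power, amortized hardware)
HOURS_PER_MONTH = 730

def monthly_cloud_cost(duty_cycle):
    """Cloud cost assuming idle instances are actually shut down (elasticity)."""
    return CLOUD_HOURLY_RATE * HOURS_PER_MONTH * duty_cycle

def monthly_colo_cost(duty_cycle):
    """Colo cost is fixed: you pay for the box whether it is busy or idle."""
    return COLO_MONTHLY_COST

for duty_cycle in (0.1, 0.25, 0.5, 0.75, 1.0):
    cloud = monthly_cloud_cost(duty_cycle)
    colo = monthly_colo_cost(duty_cycle)
    cheaper = "cloud" if cloud < colo else "colo"
    print(f"duty cycle {duty_cycle:>4.0%}: cloud ${cloud:8.2f}  colo ${colo:8.2f}  -> {cheaper}")

# Break-even duty cycle: the utilization above which the fixed colo cost wins.
print(f"break-even duty cycle: {COLO_MONTHLY_COST / (CLOUD_HOURLY_RATE * HOURS_PER_MONTH):.0%}")
```

Under these assumed prices the crossover lands around a 60% duty cycle, which is the whole point: the answer depends entirely on how busy your servers actually are.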
The analysis includes a spreadsheet with some hard data for one specific style of application, and the result is that an equivalent workload for this style of application would cost $118,248 at Amazon and $70,079 in a colocation facility, with the implication that a higher duty cycle can be achieved via colocation. However, this result is not as clear-cut as it might seem, because the “application style” is both a highly subjective and a critically important attribute of any given application. In my experience, it is rare to find an application that can be characterized so neatly; instead, most applications that I have run across are inherently custom in some important way. This analysis really needs to be performed on a case-by-case basis for any application that is being considered for a move to the cloud, and the result of that analysis applies only to that application.
A subtle conclusion of this type of analysis is that duty cycle optimization for a given application should be a key criterion for cost reduction. And, somewhat conversely, if an application already has a high duty cycle then the opportunities for cost reduction through cloud-based resources will be limited at best. Or, more simply: if you already run highly utilized servers then you might do better with colocation.
Hope this helps.
Everyone seems to think that Amazon’s web services business (a.k.a. EC2, S3, and the rest of AWS) is very big and getting bigger, but Amazon stubbornly refuses to break out the AWS contribution to Amazon’s earnings. A recent blog post on GigaOm is the first that I have seen that includes some real data — both for Amazon and for the total accessible cloud services market — to estimate the size of the AWS business today and the size of the market going forward. First, here’s the link:
The data comes from a UBS Investment Research paper, and it estimates that AWS is a $500M business in 2010. It further estimates that AWS will grow to $750M in 2011 — 50% year over year growth — reaching $2.5B, with a B, by 2014. Amazon as a whole does roughly $25B in revenue, growing in the 30%-40% range over the past year, so the numbers for AWS, while still small, represent a high-growth component of Amazon’s overall business. Add in the fact that the UBS paper reports the gross margin of AWS at around 50%, vs. around 23% for Amazon as a whole, and one might draw the conclusion that the profit contribution of AWS will be a growing and very significant piece of Amazon’s pie in the years to come.
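As a quick sanity check on those projections, the implied compound growth rate can be worked out from the figures quoted above (this is just my own arithmetic on the UBS estimates):

```python
# Implied compound annual growth rate (CAGR) from the UBS estimates quoted above:
# $500M (2010) -> $750M (2011) -> $2.5B (2014).

rev_2011 = 0.75   # $B
rev_2014 = 2.5    # $B
years = 2014 - 2011

cagr = (rev_2014 / rev_2011) ** (1.0 / years) - 1
print(f"implied CAGR, 2011-2014: {cagr:.1%}")            # roughly 49% per year

# For comparison, the 2010 -> 2011 step quoted above:
print(f"2010 -> 2011 growth: {(0.75 / 0.50) - 1:.0%}")   # 50%
```

In other words, the UBS numbers assume AWS keeps growing at roughly 50% per year for the next several years, which is consistent with the 2010-to-2011 step.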
The question I have is this: Why aren’t more internet companies doing the same thing? Amazon’s results are a clear and undeniable validation of their AWS business strategy. That strategy, in a generic sense, was to build a very efficient cloud infrastructure for their own retail applications and sell the excess capacity to the general public. They have proved that there is demand for that excess capacity and for the service APIs that they provide on top of it. And their customer list has slowly transformed from penny-pinching startups and early adopters into a who’s who of the largest, richest Fortune 500 companies in every business domain. And don’t forget that AWS has data centers around the world, and there is every reason to think that demand for AWS from international companies will mirror its growth in the US.
I can think of a bunch of older, larger internet companies that should definitely be trying to duplicate Amazon’s success. Some of them have already tried, albeit with a somewhat different level of offering (e.g. Google’s App Engine, Microsoft’s Azure). But the barrier to entry, such as it is, amounts to little more than a large number of machines and the will to build the necessary cloud infrastructure systems. I’m sure someone will call BS on this statement and tell me that some serious skill is also required, and I would agree with that. But we live in a time when skill moves around a lot, and no company has a monopoly on talent. So why isn’t everybody trying to copy AWS?
This latest quote comes from Adobe Systems’ CTO Kevin Lynch during an interview with Fortune. It’s actually an approximation of what he said, but I think it captures his intent, which is to suggest that Android will shortly become the dominant smartphone operating system, and to imply that Adobe — with its close partnership with Google surrounding Flash — is not quite as dead on smart devices as we have been led to believe. More on that shortly.
But first, here is the link:
The article contains a set of statements attributed to Lynch in which he states that he expects Android to achieve a 50% share “in the springtime”. If true, this implies a meteoric rise in Android adoption from 3% in late 2009, to 26% in late 2010, to 50% by early 2011. The sheer number of devices that can potentially run Android, along with the large number of wireless carriers that support it, will give Android a huge advantage over Apple and iOS in terms of growth and market share. This numerical superiority could justify Lynch’s prediction, but it does not imply that customers are choosing Android because of the quality of the experience. This is a key point, in my opinion, because Apple can close the proliferation gap — via a CDMA-based iPhone for Verizon, perhaps — and once they do, market share will more closely track the customer experience. This is where the tightly controlled and polished iOS and iTunes App Store experience will continue to shine, and this is where Android will continue to deal with fragmentation issues in its device base that affect the customer experience. What fragmentation issues, you ask?
A quick look at the Android Developers Site gives us the latest data on which versions of the Android OS are currently in use across the global Android device base. As of November 1, 2010, they are:
Android 1.5: 7.9%
Android 1.6: 15.0%
Android 2.1: 40.8%
Android 2.2: 36.2%
This data indicates that the Android device base is fragmented by OS version into three large buckets: obsolete versions (both 1.5 and 1.6) at 23%; version 2.1 at 40.8%; and version 2.2 at 36.2%. This presents a challenge, to say the least, to Android developers who wish to write apps that will run on any Android device, and this will certainly affect the quality of the overall Android experience. And, getting back to Adobe, it is worth noting that only Android 2.2 — at 36.2% — supports Adobe Flash 10.1 for a true Flash experience on a mobile device. Collectively, devices running any version of Android might achieve a 50% market share as Lynch suggests, but that metric does not appear to be an apples-to-apples comparison with Apple’s iOS because of this fragmentation. Apple does not report iOS version percentages as Android does, but an educated guess is that a much higher percentage of Apple devices are running the latest version of iOS due to the relative ease of the iOS upgrade experience via iTunes.
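As a quick sanity check on those buckets, and on what Lynch’s 50% figure would mean for Flash, here is a small sketch that uses only the version percentages listed above; the 50% overall share is Lynch’s projection, not a measured number:

```python
# Android OS version share as of November 1, 2010 (from the Android Developers Site).
version_share = {
    "1.5": 7.9,
    "1.6": 15.0,
    "2.1": 40.8,
    "2.2": 36.2,   # only 2.2 supports Adobe Flash 10.1
}

obsolete = version_share["1.5"] + version_share["1.6"]
print(f"obsolete (1.5 + 1.6): {obsolete:.1f}%")            # ~22.9%, i.e. roughly 23%
print(f"Android 2.1:          {version_share['2.1']:.1f}%")
print(f"Android 2.2:          {version_share['2.2']:.1f}%")

# If Android really reaches a 50% smartphone share (Lynch's projection), the
# slice of *all* smartphones that could run Flash 10.1 today would be:
projected_android_share = 50.0
flash_capable = projected_android_share * version_share["2.2"] / 100.0
print(f"Flash-capable share of all smartphones: ~{flash_capable:.0f}%")   # ~18%
```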
Which leads me back to Lynch’s implication that Adobe is well-positioned via its partnership with Google to capitalize on Android’s market share. The data above shows that only a minority of Android devices (36.2%) can run Flash 10.1 today. Unless this percentage rises significantly in the next six months, Flash adoption on Android devices will not keep pace with Android’s growth rate. To be fair, new growth in the Android device base should be almost exclusively newer devices running the latest version of Android, and this should serve to increase the overall percentage of Android devices that can run Flash 10.1. But for now, Adobe and Flash appear to be chasing an Android market that is accelerating away from them.
I recently ran into an issue with I/O bandwidth in EC2 that produced some unexpected results. As part of my day job, I built a system to run clustered deployments of enterprise software within AWS. The enterprise software I chose for the prototype is, as it turns out, very sensitive to I/O bandwidth, and my first attempt at a clustered deployment using m1.large instances with only ephemeral disks did not provide enough I/O performance for the application to operate. After a few internet searches, I was convinced that I needed to use EBS volumes, probably in a RAID 0 striped volume configuration, to effectively increase the I/O performance through parallelism of the network volumes. We also got some guidance from Amazon to this effect, so it appeared to be, by far, our best bet to solve the problem.
So I converted my prototype to use RAID 0 4X-striped EBS volumes. And the I/O performance was different, but still not enough for the application. Uh-oh.
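For anyone who wants to reproduce this kind of setup, here is a rough sketch of the provisioning step using the boto library; the volume size, availability zone, instance ID, and device names are all placeholders, and the RAID assembly itself happens on the instance (shown in the trailing comments):

```python
# Sketch: provision four EBS volumes and attach them to a running instance so they
# can be striped into a RAID 0 array. The volume size, availability zone, instance
# ID, and device names are placeholders -- adjust them for your own deployment.
import time

import boto

conn = boto.connect_ec2()            # credentials come from the boto config / environment

INSTANCE_ID = "i-00000000"           # placeholder: the instance that needs the striped volume
ZONE = "us-east-1a"                  # must match the instance's availability zone
DEVICES = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

volumes = []
for device in DEVICES:
    vol = conn.create_volume(100, ZONE)       # 100 GiB per volume (placeholder size)
    while vol.update() != "available":        # wait until the volume is ready to attach
        time.sleep(2)
    conn.attach_volume(vol.id, INSTANCE_ID, device)
    volumes.append(vol)

# Then, on the instance itself, assemble the stripe and build a filesystem:
#   mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
#   mkfs.ext3 /dev/md0 && mount /dev/md0 /data
```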
To better understand the problem, we decided that the next step would be to quantify the I/O performance of the different volume options, in the hope that we could ascertain how to tweak the configuration to get enough performance. A colleague of mine ran a variety of tests with iometer and produced the following data that captures read and write performance for six different volume configurations: a single EBS volume, a 4X EBS striped volume, a 6X EBS striped volume, an m1.large local ephemeral drive, an m1.xlarge local ephemeral drive, and “equivalent hardware” which, roughly speaking, corresponds to a server box that should be similar to a large EC2 instance in terms of performance and which is not heavily optimized for I/O. Some of these choices deserve some explanation. We looked at both m1.large and m1.xlarge instances because we were told that the m1.xlarge instances had a larger slice of I/O bandwidth from the underlying hardware, and we wanted to see how much larger that slice actually is, given the cost difference. The EBS-based volumes ran on m2.large instances, as we were told that these instances had a larger slice of real I/O bandwidth as well as a larger network bandwidth slice that would better support the EBS volumes.
So here is the hard data:
A few things jumped out at us when we saw these numbers:
- The performance of the 6X EBS striped volume was significantly worse than the 4X EBS striped volume. Going from 4X to 6X must saturate something that effectively causes contention and reduced performance. Some information online suggests that EBS network performance can be quite variable, but we got consistent results over several runs.
- The EBS-based volumes all had better read performance than the local ephemeral disk-based volumes in both IOPS and Mbps. The local ephemeral disk-based volumes had much better write performance in IOPS, and similar write performance in Mbps.
- Neither the EBS-based volumes nor the local ephemeral disk-based volumes could stand up to a real piece of dedicated hardware. Which is reasonable, given what EC2 is. But it also implies that EC2 is not a good place to run I/O-intensive workloads. And hardware that is actually optimized for I/O would probably blow away the performance of our unoptimized “equivalent hardware” box.
Hopefully this data can point you in the right direction if you are dealing with I/O issues in EC2. We’re hoping for a little magic from Amazon on this issue later this year (and I can’t say anything more about that). In the interim, I/O issues will continue to require some trial and error to find a configuration that best matches your particular I/O profile.
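If you want a quick sanity check of your own before investing in a full iometer run, a rough sequential write measurement like the sketch below can at least tell you which configuration is in the right ballpark; it is not a substitute for a real benchmark tool, and the test path is a placeholder:

```python
# Quick-and-dirty sequential write throughput check (a rough sanity test only --
# not a substitute for a real tool like iometer). Writes 1 GiB in 1 MiB blocks
# to a file on the volume under test and reports MB/s.
import os
import time

TEST_FILE = "/data/io_test.bin"     # placeholder: a path on the volume being measured
BLOCK_SIZE = 1024 * 1024            # 1 MiB
TOTAL_BYTES = 1024 * 1024 * 1024    # 1 GiB

block = os.urandom(BLOCK_SIZE)
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_BYTES // BLOCK_SIZE):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())            # make sure the data actually hits the volume
elapsed = time.time() - start

print(f"sequential write: {TOTAL_BYTES / elapsed / 1e6:.1f} MB/s")
os.remove(TEST_FILE)
```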
Hope this helps.
To paraphrase the context of this quote, all of the information created between the beginning of time and 2003 was about 5 EB, give or take (an exabyte is 10^18 bytes, or 1 billion GB). But today, according to him, “we” create 5 EB every 2 days, and it is not clear if this is the majestic plural “we”, as in Google, or “we” as in the entire world. But 5 EB is an awful lot of data (those drunken cat videos on YouTube are really starting to add up…).
Which brings me to my point: the signal-to-noise ratio of that 5 EB is probably low and getting lower every day. And I think we’re starting to get some hard data to prove it. For example, at Chirp, Twitter founder Biz Stone mentioned that Twitter has 105M+ registered users, with 300k new users each day, and 3B+ requests to their API each day. That is a lot of users and a lot of traffic. But how many of those users are active? 20%? 80%? According to the following article, 5% of those Twitter users are responsible for 75% of all traffic, with a rapid drop off in activity beyond 5%:
If the premise of this article is true, then only about 20% of Twitter users are active in a measurable way. And the most active 5% are clogging the pipes with an enormous volume of Tweets — from which we might infer that the signal-to-noise ratio of those Tweets is not very high. Ask yourself this question: what percentage of Tweets that you see are relevant to you in any way? These numbers will only get worse, information-density-wise, as more people sign up for Twitter but otherwise don’t use it in a measurable (or relevant) way.
But back to the original quote. If Google is processing 5 EB of new data every two days, then they definitely need to keep building new data centers in the Columbia River Gorge and elsewhere where power is inexpensive. But perhaps what they really need is better filtering technology to drop the 80% of that 5 EB that they don’t really need to keep. Maybe Google needs to keep 5 EB every 2 days to ensure that the 1 EB of good data buried within is not lost, but if they could separate the wheat from the chaff, then they could (1) build fewer data centers, which their investors would really like, and (2) deliver information to us with a much higher signal-to-noise ratio, which we would really, really like.
Google made a name for itself by making search results more relevant than any company that came before them; we can only hope that they (and Twitter, and Facebook, et al.) will continue to pursue information relevance with similar zeal as they process those 5 EB every two days. Otherwise, we should all plan for a whole lot more drunken cat videos.
First, let me state for the record that I am currently an Adobe employee and that the opinions stated here are entirely my own and not in any way the official position of Adobe. And also let me state that the quote above is not from me (more on that later). And let me state that I am drinking beer right now. But I am also a hardcore developer, and I’m having a really, really hard time digesting the latest directives coming out of Cupertino. If I understand the new rules for the iPhone 4.0 SDK, I can only write code in C, C++, and Objective-C if I intend to compile that code, using any compiler, into an executable that runs on an iPad or iPhone. What. The. F.?
Before I launch this rant into orbit, here is the link to the post that gives us that priceless quote:
Believe it or not, the quote is actually from an Adobe employee (not me) in a blog post. I’m sure the author got some immediate love from Adobe Legal, so I am going to try to avoid that fate: I will not comment directly on the specifics involving the new iPhone 4.0 SDK rules, Flash, and CS5. Instead, I’m going to step back and ask the real question that should be on everyone’s minds:
“Where have you gone,
Joe DiMaggio Apple?”
Apple used to be the company that everyone loved. The underdog company in the 1990s that fought the good fight against Microsoft and largely lost. The company that took on the music labels and finally, finally gave us a way to purchase and play digital music that made sense with the iPod and iTunes. The company that made laptops cool again with MacBooks and gave all of us hackers in the corporate world a way to use a non-Windows machine. The company that re-invented the smart phone with the iPhone and broke the wireless companies’ stranglehold on third-party applications on mobile devices with the AppStore. And, for better or worse, the company that gave us the iPad, which might just have a shot at initiating a paradigm shift in how we interact with and use our personal computers.
But in the past year or two, Apple seems to have been tempted by the dark side of the Force, and their response is looking more like Anakin than Luke. Which is another way to say that they are starting to look a lot like Microsoft in the 1990s, a.k.a. the company that we all loved to hate. They launched the iPhone with Google’s CEO on stage, for crying out loud, but now they are waging a proxy war against Google and Android via the lawsuit against HTC. There are the widespread complaints about iPhone app rejections, and the complete lack of transparency and perceived capriciousness of their acceptance criteria. There is their intransigent position on Flash and any technology that has the potential to bypass the AppStore business model. And now the latest rules, which dictate which languages are blessed for use when developing for the iPhone/iPad platform, with the severe implication that applications written in any other programming language — e.g. Java, or perhaps ActionScript — will be rejected from the AppStore.
Apple has built up a huge account balance of goodwill over the last decade, and they appear to have decided that it is time to empty that account. To be fair, Apple’s official statements seem to indicate that they believe that their inventions are being infringed (in the case of the HTC lawsuit); that they believe the AppStore requires a high bar for acceptance to maintain the quality of the experience on Apple devices; that they believe Flash is a security nightmare and CPU pigdog (to paraphrase some of the actual quotes that have been attributed to Apple’s CEO in the media); and that they believe that allowing people to compile their runtime interpreters into native applications is somehow an end-around move on their AppStore business model. But whichever side you take in these disputes, I think everyone has to accept that the tone of the conversation has irreversibly changed; with revenue comes arrogance, as they say in Silicon Valley (and probably everywhere else in the business world). The friendly days when Apple was a darling underdog and partner are gone forever, and we should get used to dealing with the 220 Billion Dollar Monster.
As a developer, I couldn’t be more turned off by a company that tells me which languages I can and can’t use. I want the freedom to choose the language that makes the most sense, or maybe seems the most fun to use, or maybe has the most available libraries for what I want to do, rather than being forced to live by someone else’s language laws. And I might just want to port some older code, rather than re-writing it from scratch in a “certified” language that is permitted by the device vendor. At the end of the day it is all object code running on the device, and the programming language that I used to capture my ideas before compiling them into that object code isn’t really important. Unless you have a business model that you perceive to be at risk from certain programming languages, in which case it is very important…
So how does this end? I think it can only end badly for everyone. Some number of cool apps will never make it to certain devices under these rules. Some developers will be turned off and will leave this particular ecosystem for less-restrictive ones. And most importantly, all of the big companies will have their shields up and proton torpedoes armed and they will stop collaborating on the things that only big companies can do together. For example, remember how cool it was when the iPhone shipped with Google Maps and location awareness? Some might say that was revolutionary all by itself.
But I think this week was a very bad week for us all, because it clearly marks the end of any effective inter-corporate collaboration in the emerging mobile device universe.
It’s great to see a thoughtful articulation of the other side of the “everybody needs to dump their SQL database” argument in this blog post:
To paraphrase the author’s argument, the vast majority of applications out there will simply never see the load that would require a move to a NoSQL solution. Thinking about scalability is a very good thing to do, but the choice to make the move still needs to flow from a rational decision making process.
I see this all the time in discussions I have about scalability and cloud computing. Everyone wants to claim that they are scalable (because scalable == smart in the current lexicon), but they aren’t backing this up with a rational basis for why their (relatively small) application needs massive, search-engine-class scalability.
I ran across an interesting article that compares the largest of the botnets — the Conficker botnet — with the largest web application providers and the size of their infrastructure. The article uses the term “cloud” in a way that I don’t necessarily agree with, since they use it to describe the overall size of a company’s infrastructure (e.g. according to the article, Google has 500,000 servers) rather than the portion of that company’s infrastructure that is available for use as part of a genuine cloud-based service. But, overall, the article does illuminate the fact that botnets have an architecture that is very similar to that of cloud-based service providers, leveraging a virtual datacenter comprised of compromised hardware around the globe. And the sheer size of the Conficker botnet, which, according to the article, is more than ten times the size of the largest web application provider, would make it by far the largest “cloud” provider in the world.
The article can be found here: http://www.networkworld.com/community/node/58829
This article is also noteworthy, I think, because it includes some sizing data for the largest players in the industry. If true, this data represents one of the first genuine, accurate comparisons of the infrastructure behind the biggest names on the web.
A thread has been emerging surrounding the observed performance of EC2 instances and the possibility that Amazon is experiencing capacity issues as their business continues to grow. Three excellent articles on this topic are linked below:
This is a question that I receive often in my day job, so I have a few comments to add to this thread. First, if you have used EC2 then you know that Amazon explicitly refuses to quantify the performance that you are entitled to receive in real-world units. Instead, they have created qualitative terms — “compute units” for CPU performance, “moderate” bandwidth, etc. — that provide a measure of comparison against the other levels of service that Amazon provides. In and of themselves, these qualitative designators are not a problem, except when trying to determine the potential variance between two elements that should have the same qualitative performance. For example, any two “large” EC2 instances running the same operating system should, in theory, provide the same measurable performance against a reasonable benchmark. In practice, however, there is a degree of variance between elements that should be the same; testing across a sample of “large” EC2 instances produced SPECjvm2008 benchmark results that varied by roughly 30%. This would seem to indicate that some “large” instances are better than others.
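The measurement itself is simple to reproduce once you have a benchmark score per instance; here is a minimal sketch of the spread calculation (the scores below are placeholders, not our actual SPECjvm2008 results):

```python
# Sketch: given benchmark scores collected from a sample of nominally identical
# "large" instances, report the spread. The scores below are placeholders only.
import statistics

scores = [41.2, 35.8, 30.1, 38.9, 29.7, 40.3]    # placeholder composite scores, one per instance

mean = statistics.mean(scores)
spread = (max(scores) - min(scores)) / mean       # relative max-to-min spread across the sample
cv = statistics.stdev(scores) / mean              # coefficient of variation

print(f"mean score: {mean:.1f}")
print(f"max-to-min spread: {spread:.0%} of the mean")
print(f"coefficient of variation: {cv:.0%}")
```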
Second, if you have ever asked Amazon any question about their existing capacity, their existing utilization, or their rate of capacity growth, then you probably received a polite “we don’t break out results or data for our AWS unit” response. But there are some hard data points that can be detected externally. For example, if you have attempted to start a large number of EC2 instances simultaneously then you might have seen an error message stating that not enough instances of the requested type were available. In my experience, this response has been exceptionally rare, which is a testament to Amazon’s capacity planning. It does provide a hard external signal, however, and one would expect the frequency of this response to increase if EC2 were becoming oversubscribed. In the thread thus far, I have not read that anyone has detected a measurable increase in the frequency of this response, and thus one might conclude that EC2 utilization remains below the saturation level, or at least remains similar to historical levels.
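For what it’s worth, that signal can be watched for programmatically. Here is a sketch using the boto library; the AMI ID, instance count, and instance type are placeholders, and anything you actually start should be terminated promptly to avoid charges:

```python
# Sketch: request a batch of instances and watch for the InsufficientInstanceCapacity
# error code, which is the "not enough instances of the requested type" response
# described above. The AMI ID, count, and instance type are placeholders.
import boto
from boto.exception import EC2ResponseError

conn = boto.connect_ec2()

try:
    reservation = conn.run_instances(
        "ami-00000000",              # placeholder AMI
        min_count=20,
        max_count=20,
        instance_type="m1.large",
    )
    print(f"started {len(reservation.instances)} instances")
    # Remember to terminate these test instances promptly -- they are billable.
except EC2ResponseError as e:
    if e.error_code == "InsufficientInstanceCapacity":
        print("EC2 could not satisfy the request: a hard, externally visible capacity signal")
    else:
        raise
```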
And lastly, I do have one comment about network performance. I have observed network latency issues of the type described in the cloudkick blog, but only for smaller EC2 instance types that could be assumed to be sharing hardware and network interface ports with other EC2 instances. I have not observed latency issues with larger EC2 instance types that could be assumed to consume an entire piece of hardware. I am clearly making some unsubstantiated assumptions here, but my guess is that the observed network latency issues could be a sharing or starvation issue in the virtualization infrastructure, rather than a true network capacity problem. But I would stress that this is just an educated guess, so your mileage may vary.
Hope this helps.
- Siri – a Shot Across Google’s Bow
- “Is colocation cheaper than using a cloud computing service to run the same workload?”
- “How Big is Amazon’s Cloud Computing Business?”
- “Android will run majority of smartphones by Spring”
- Amazon EC2 I/O Performance: Local Ephemeral Disks vs. RAID 0 Striped EBS Volumes
- “We create 5 exabytes every two days.”
- “Go Screw Yourself, Apple.”
- “You are not Google. (or: you don’t really need NoSQL…)”
- “The largest cloud providers are botnets.”
- Observed Performance of Amazon EC2 Instances