A thread has emerged surrounding the observed performance of EC2 instances and the possibility that Amazon is experiencing capacity issues as their business continues to grow. Three excellent articles on this topic are linked below:
This is a question that I receive often in my day job, so I have a few comments to add to this thread. First, if you have used EC2 then you know that Amazon explicitly declines to quantify, in real-world units, the performance you are entitled to receive. Instead, they have created qualitative terms ("compute units" for CPU performance, "moderate" bandwidth, and so on) that provide a basis for comparison against the other service levels Amazon offers. In and of themselves, these qualitative designators are not a problem, except when trying to determine the potential variance between two elements that should have the same qualitative performance. For example, any two "large" EC2 instances running the same operating system should, in theory, deliver the same measurable performance against a reasonable benchmark. In practice, however, there is real variance between elements that should be identical; testing across a sample of "large" EC2 instances produced variations of roughly 30% in SPECjvm2008 benchmark results. This suggests that some "large" instances are simply better than others.
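To make the "roughly 30%" claim concrete, here is a small sketch of how one might summarize benchmark results from a sample of instances. The scores below are hypothetical, illustrative numbers, not my actual measurements; only the method (best-versus-worst spread and coefficient of variation) is the point.

```python
# Hypothetical SPECjvm2008 composite scores (ops/m) from a sample of
# "large" EC2 instances -- illustrative numbers, not real measurements.
scores = [41.2, 40.0, 52.0, 44.1, 50.3, 42.5]

mean = sum(scores) / len(scores)
# Population standard deviation of the sample scores
std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

# Spread between the best and worst instance, relative to the worst:
spread = (max(scores) - min(scores)) / min(scores)

print(f"mean={mean:.1f}  cv={std / mean:.1%}  best-vs-worst spread={spread:.1%}")
```

A best-versus-worst spread of 30% on nominally identical instances is exactly the kind of result that motivates launching several instances, benchmarking them, and keeping only the fast ones.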
Second, if you have ever asked Amazon any question about their existing capacity, their existing utilization, or their rate of capacity growth, then you probably received a polite "we don't break out results or data for our AWS unit" response. There are, however, some hard data points that can be detected externally. For example, if you have attempted to start a large number of EC2 instances simultaneously, then you might have seen an error message stating that not enough instances of the requested type were available. In my experience this response has been exceptionally rare, which is a testament to Amazon's capacity planning. It does provide a hard signal, though: one would expect the frequency of this error to increase if EC2 were becoming oversubscribed. In the thread thus far, no one has reported a measurable increase in its frequency, so one might conclude that EC2 utilization remains below the saturation level, or at least similar to historical levels.
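In practice, tooling that launches fleets usually treats that "not enough instances available" error as a cue to fall back to another instance type or region. The sketch below shows the fallback logic in isolation: `launch` is any callable that starts instances of a given type (with boto3 it might wrap `run_instances`, where a capacity shortfall surfaces as a `ClientError` carrying the `InsufficientInstanceCapacity` code); the `CapacityError` class here is a stand-in for that, and the type names in the usage are hypothetical.

```python
class CapacityError(Exception):
    """Stand-in for EC2's InsufficientInstanceCapacity error."""

def launch_with_fallback(launch, instance_types):
    """Try each instance type in order; return the first that launches."""
    exhausted = []
    for itype in instance_types:
        try:
            launch(itype)
            return itype
        except CapacityError:
            exhausted.append(itype)  # record the shortfall, try the next type
    raise RuntimeError(f"all instance types exhausted: {exhausted}")

# Demo with a fake launcher that has run out of one type:
def fake_launch(itype):
    if itype == "m1.large":
        raise CapacityError("m1.large temporarily unavailable")

chosen = launch_with_fallback(fake_launch, ["m1.large", "m1.xlarge"])
print(chosen)  # falls back to the second type
```

Logging how often the fallback path fires, over time, gives exactly the externally observable oversubscription signal described above.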
And lastly, I do have one comment about network performance. I have observed network latency issues of the type described in the cloudkick blog, but only on smaller EC2 instance types that can be assumed to share hardware and network interface ports with other instances. I have not observed latency issues on larger instance types that can be assumed to consume an entire physical machine. I am clearly making some unsubstantiated assumptions here, but my guess is that the observed network latency is a sharing or starvation issue in the virtualization infrastructure, rather than a true network capacity problem. I would stress that this is just an educated guess, so your mileage may vary.
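If you want to test the sharing hypothesis yourself, a simple probe is to time repeated TCP connects between a pair of instances and watch the tail latency; a noisy neighbor tends to show up as occasional large outliers rather than a uniformly higher median. The sketch below is self-contained and demonstrates against a local listener; on EC2 you would point `tcp_connect_latency_ms` at a peer instance's address instead.

```python
import socket
import statistics
import threading
import time

def tcp_connect_latency_ms(host, port, samples=20):
    """Measure TCP connect latency in ms by timing repeated handshakes."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close immediately
        times.append((time.perf_counter() - start) * 1000.0)
    return times

# Local demo listener so the sketch runs anywhere; replace with a peer
# instance's private address when probing EC2 for real.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # OS picks a free port
listener.listen(64)
port = listener.getsockname()[1]

def accept_loop():
    while True:
        try:
            conn, _ = listener.accept()
            conn.close()
        except OSError:
            break  # listener was closed; stop accepting

threading.Thread(target=accept_loop, daemon=True).start()

latencies = tcp_connect_latency_ms("127.0.0.1", port)
print(f"median={statistics.median(latencies):.3f} ms  "
      f"max={max(latencies):.3f} ms")
listener.close()
```

Comparing the median against the max (or a high percentile) across instance sizes would give some evidence for or against the starvation guess.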
Hope this helps.