How to Create an Amazon EC2 AMI That Is Larger Than 10GB

Recently, I have been dealing with an issue surrounding the 10GB size limit for AMIs within Amazon’s EC2 service. If you don’t know what I’m talking about, here is a quick primer: a virtual instance running within Amazon’s Elastic Compute Cloud (EC2) service is launched from a read-only boot image that Amazon refers to as an Amazon Machine Image (AMI). Amazon has set the upper size limit for an AMI at 10GB, which restricts the amount of disk content that can be loaded onto the instance at boot. For a Windows-based EC2 instance, the 10GB AMI corresponds to the C: drive containing Windows; for a Linux-based instance, it corresponds to the boot partition containing Linux. EC2 instances have several larger ephemeral drives with capacities far in excess of 10GB, but those drives have no persistence and are empty when an instance boots. Amazon also offers the Elastic Block Store (EBS), which behaves like a network-attached block device served from a storage area network (but for various reasons EBS was not a feasible solution for my problem).

The problem I faced was that I needed about 16GB of data to be available on an EC2 instance at boot, and I needed the instance to otherwise operate like a standard instance launched from an AMI. It would have been great to simply use a 16GB AMI, but the 10GB size constraint rules that out. I was obviously going to need an alternate mechanism to load the additional data onto the ephemeral drives at boot time.

My solution is ultimately derived from the same mechanism that Amazon uses to load an AMI at boot time. AMIs in EC2 are stored in Amazon’s Simple Storage Service (S3). When an instance is started in EC2, the AMI is loaded from S3 into the Xen domain that EC2 has provisioned for the instance (Xen is the open source virtualization software at the heart of Amazon’s EC2 service). I took the same approach to populate the ephemeral drives: I store a compressed archive in S3 that is downloaded and inflated onto the first ephemeral drive at boot, populating the instance with the additional content. The procedure that downloads the archive from S3 and inflates it in the proper places is scripted and hooked into the boot sequence (it runs as a service on Windows, and it is linked into the rc startup script mechanism on Linux).
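
To make the mechanism concrete, below is a minimal sketch of what the Linux side of the boot-time script could look like. It is written in Python using the boto3 SDK (not what my actual implementation uses), and the bucket name, archive key, and mount point are placeholders; treat it as an illustration of the idea rather than my exact script.

    # Boot-time populate script (sketch). Assumes the archive is a gzipped
    # tarball and the first ephemeral drive is mounted at /mnt; adjust both
    # for your instance type and archive format.
    import tarfile

    import boto3

    BUCKET = "my-instance-content"           # placeholder bucket name
    KEY = "boot-content.tar.gz"              # placeholder archive key
    EPHEMERAL_MOUNT = "/mnt"                 # typical first ephemeral drive mount point
    ARCHIVE_PATH = "/mnt/boot-content.tar.gz"

    def populate_ephemeral_drive():
        # Download the compressed archive from S3 onto the ephemeral drive
        # itself, so the 10GB root (AMI) volume is not used as scratch space.
        s3 = boto3.client("s3")
        s3.download_file(BUCKET, KEY, ARCHIVE_PATH)

        # Inflate the archive in place; the paths inside the tarball determine
        # where the content lands under the ephemeral mount point.
        with tarfile.open(ARCHIVE_PATH, "r:gz") as archive:
            archive.extractall(EPHEMERAL_MOUNT)

    if __name__ == "__main__":
        # On Linux this would be invoked from an rc/init script at boot;
        # on Windows the equivalent logic runs inside a service.
        populate_ephemeral_drive()

The Windows service performs the same two steps; the only real difference is how the script gets wired into the boot sequence.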

The only issue I have found with this approach is latency. It takes a non-negligible amount of time to download and inflate several GB of data from S3, and all of this happens after the operating system has started booting. Amazon provides no quantitative guarantees about the network bandwidth available to a given instance, so the time the download takes depends on a variety of factors outside your control. In my experiments I have measured download speeds from S3 to an EC2 instance in the range of 15 MB/sec to 25 MB/sec (megabytes per second), so if you are downloading several GB of data via this method you can expect a delay of several minutes before the ephemeral drives are populated and available. This might or might not be a problem, depending on what else starts on your instance immediately after boot. In my case, the application launched at boot takes up to 10 minutes to start, so there is plenty of time to populate the ephemeral drives. If you are starting instances to add capacity to a cluster in response to load, and you need that capacity as soon as possible, then this method is likely not for you. In either case, keep in mind that the startup latency will be directly proportional to the size of the additional content.
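
As a rough sanity check on what that means in practice, here is a back-of-the-envelope estimate using the midpoint of the throughput range above; the archive size is hypothetical, and real numbers will vary from instance to instance.

    # Back-of-the-envelope estimate of the populate delay.
    archive_size_gb = 6           # hypothetical compressed archive size
    throughput_mb_per_sec = 20    # midpoint of the 15-25 MB/sec range observed above
    download_seconds = archive_size_gb * 1024 / throughput_mb_per_sec
    print(f"~{download_seconds / 60:.0f} minutes to download, before inflating")
    # prints: ~5 minutes to download, before inflating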

Hope this helps.