How to Detect the Front (Home) Page of a WordPress Blog

I recently wanted to add a WordPress widget that would be conditionally visible in the sidebar of the front (home) blog page only. A reasonable search on this topic turns up a large collection of information and discussion — most of which turns out not to work. The following is a brief overview of what I tried and what I ultimately did.

Most of the information on this topic points to either the is_home() or is_front_page() WordPress PHP functions. In theory, you can use either of these functions in a conditional expression to detect whether you are on the front (home) page. In practice, however, it is a little more complicated. These functions are not based on the URI of the page being loaded; instead, they are based on several global boolean variables that are set according to which queries have occurred in a page. I imagine there is probably a good reason for this in WordPress-land (and I am not a WordPress expert), but it seems to me that if I call is_front_page() within the front page of a WordPress blog then it should reliably return true, and conversely if I call is_front_page() from any other page in the blog then it should reliably return false.

In my case, I was trying to create a conditionally visible widget in the sidebar by modifying wp-includes/widgets.php to contain a conditional expression within wp_widgets_init(). What I found was that is_front_page() always returned true when called from within widgets.php for any page on my blog. I traced this to the fact that the values of the global variables upon which is_front_page() is based are changed by some of the standard activities performed within wp_widgets_init() — specifically, any query to get the posts or categories that populate the standard archive and category widgets. The is_home() function suffers from the same issue, as it appears to be based on the same global variables. Several posts described interesting ways to combine is_front_page() or is_home() with other functions to get the correct conditional behavior, but none of these worked in my case, and all of them seemed to be specific to their context rather than general-purpose solutions to the problem.

After banging my head against this for a while, I decided to try a different approach. The is_front_page() and is_home() functions are based on global variables rather than the URI of the blog page. So forget those functions…and find an expression that checks the URI directly. It seems simple enough, and it is:

if ( $_SERVER["REQUEST_URI"] == '/' )
{
/* do something */
}

And that’s it. This block of code works reliably throughout the pages of my blog. I am willing to bet that there might be some peculiar WordPress configurations (e.g. using a custom page as the front page, etc.) for which this does not work, but like I said, I am not a WordPress expert, so your mileage may vary.

Hope this helps.

How to Create an Amazon EC2 AMI That is Larger Than 10GB

Recently, I have been dealing with an issue surrounding the 10GB size limit for AMIs within Amazon’s EC2 service. If you don’t know what I’m talking about, here is a quick primer: a virtual instance running within Amazon’s Elastic Compute Cloud (EC2) service is launched from a read-only boot image that Amazon refers to as an Amazon Machine Image (AMI); Amazon has set the upper size limit for an AMI at 10GB, and this restricts the amount of disk content that can be loaded onto the instance at boot. For a Windows-based EC2 instance, the 10GB AMI corresponds to the C: drive containing Windows; for a Linux-based instance, it corresponds to the boot partition containing Linux. EC2 instances have several larger, ephemeral drives with capacities far in excess of 10GB, but those ephemeral drives have no persistence, and they will be empty when an EC2 instance boots. Amazon also has a service called the Elastic Block Store (EBS) that functions like a network-mounted file system from a storage area network (but for various reasons EBS was not a feasible solution for my problem).

The problem I faced was that I needed about 16GB of data to be available on an EC2 instance at boot, and I needed it to otherwise operate like a standard instance launched from an AMI. It would be great if I could simply use a 16GB AMI, but Amazon does not permit this due to the 10GB size constraint. I was obviously going to need an alternate mechanism to load additional data on to the ephemeral drives at boot time.

My solution is ultimately derived from the same mechanism that Amazon uses to load an AMI at boot time. AMIs in EC2 are stored in Amazon’s Simple Storage Service (S3). When an instance is started in EC2, the AMI is loaded from S3 into the Xen domain that EC2 has provisioned for the instance (Xen is the open source virtualization software that is at the heart of Amazon’s EC2 service). I decided to take the same approach to populate the ephemeral drives at boot time. Specifically, I store a compressed archive in S3 that is downloaded and inflated on the first ephemeral drive in order to populate the instance with the additional content. The procedure to download the compressed archive from S3 and inflate it in the proper places is scripted and connected to the boot sequence (it’s a Windows service on Windows, and it is linked into the rc startup script mechanism on Linux).
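To make this concrete, here is a minimal sketch of the kind of boot-time script I am describing, for a Linux instance. The bucket URL, archive name, and mount point are hypothetical placeholders; the real script would be hooked into the rc startup sequence (or wrapped as a Windows service) as described above.

#!/bin/sh
# Hedged sketch: populate the first ephemeral drive at boot from an archive in S3.
# The URL and destination are hypothetical placeholders -- substitute your own.
ARCHIVE_URL="https://s3.amazonaws.com/my-bucket/extra-content.tar.gz"
DEST="/mnt"   # first ephemeral drive on a typical Linux instance

# Stream the download straight into tar so no extra scratch space is needed
# for the compressed archive itself.
curl -s -f "$ARCHIVE_URL" | tar -xz -C "$DEST"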

The only issue I have found with this approach is latency. It takes a non-negligible amount of time to download and inflate several GB of data from S3, and this all happens after the operating system boot has initiated. Amazon provides no quantitative guarantees about the network bandwidth that a given instance will be able to use, so the amount of time that the download will take depends on a variety of factors that are out of your control. In experiments I have measured download speeds from S3 to an EC2 instance in the range of 15 MB/sec to 25 MB/sec (those units are megabytes per second), so if you are downloading several GB of data to your instance via this method then you can expect a delay of several minutes before the ephemeral drives are populated and available (for example, 5GB at 20 MB/sec works out to a bit over four minutes). This might or might not be a problem, depending on what else is starting on your instance immediately after boot. In my case, the application that starts at boot takes up to 10 minutes to initialize, so I have plenty of time to populate the ephemeral drives. If you are starting up instances to add to a cluster in response to load, and you need the additional cluster capacity as soon as possible, then this method is likely not for you. But in either case it is important to keep in mind that the startup latency will be directly proportional to the size of the additional content.

Hope this helps.

Perl DBI and DBD::mysql on Cygwin — Connecting to a Native Windows Build of MySQL on a Windows 2003 AMI Within Amazon EC2

In my ongoing project involving Amazon’s EC2 service, I had a frustrating problem to solve this past weekend. I have an EC2 instance running Windows 2003, and on that instance I have a native Windows version of MySQL 5 and Cygwin. I wanted to use the mysqlhotcopy Perl script from the Cygwin command line against the Windows-native MySQL instance. Once again, I would have expected this to be a simple job with a simple solution, but in the end it turned into an extensive hacking session. Here is a quick roadmap of what I did.

My initial thought was that this should just work: MySQL and its scripts should not care if they are running in native Windows mode or in Cygwin, and mysqlhotcopy is just a Perl script that should run fine in either Cygwin or Windows…wrong! The native Windows version of MySQL does not ship with the mysqlhotcopy script, probably because that script uses Perl and DBI and there is no guarantee that Perl/DBI will be available on Windows. So I grabbed the mysqlhotcopy script from a UNIX box and attempted to run it via Cygwin. I got this Perl error saying that the DBI module was not found:

Can't locate DBI.pm in @INC (@INC contains: /usr/lib/perl5/5.10/i686-cygwin /usr/lib/perl5/5.10 /usr/lib/perl5/site_perl/5.10/i686-cygwin /usr/lib/perl5/site_perl/5.10 /usr/lib/perl5/vendor_perl/5.10/i686-cygwin /usr/lib/perl5/vendor_perl/5.10 /usr/lib/perl5/vendor_perl/5.10 /usr/lib/perl5/site_perl/5.8 /usr/lib/perl5/vendor_perl/5.8 .) at ./mysqlhotcopy line 8.
BEGIN failed--compilation aborted at ./mysqlhotcopy line 8.

So I guess I just need to get DBI installed for Perl and we should be good to go…right? Perl modules can be installed on Cygwin using cpan, so I ran:

cpan DBI

This command completed without errors. Let’s try the mysqlhotcopy script again…it runs without errors and prints out the usage page. Progress! So now let’s test it out with a real call to take a hot copy of the database:

mysqlhotcopy -u <username> -p <password> <database> <backup directory>

This command gives me the following error, complaining about DBD::mysql (the MySQL driver used by DBI to actually connect to MySQL):

install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /usr/lib/perl5/5.10/i686-cygwin /usr/lib/perl5/5.10 /usr/lib/perl5/site_perl/5.10/i686-cygwin /usr/lib/perl5/site_perl/5.10 /usr/lib/perl5/vendor_perl/5.10/i686-cygwin /usr/lib/perl5/vendor_perl/5.10 /usr/lib/perl5/vendor_perl/5.10 /usr/lib/perl5/site_perl/5.8 /usr/lib/perl5/vendor_perl/5.8 .) at (eval 9) line 3.
Perhaps the DBD::mysql perl module hasn't been fully installed, or perhaps the capitalisation of 'mysql' isn't right.
Available drivers: DBM, ExampleP, File, Gofer, Proxy, Sponge.  at ./mysqlhotcopy line 182

So we just need to install the DBD::mysql module and we should be good to go, right? I ran the following command:

cpan DBD::mysql

This command failed with a build error:

Can't exec "mysql_config": No such file or directory at Makefile.PL line 76.

The DBD::mysql module is compiled locally, and its Makefile.PL uses the mysql_config script to find the location of the local MySQL installation. But the native Windows version of MySQL does not include the mysql_config script. Ugh. I tried copying this file over from a UNIX box, but the output from the script (which is just configuration info for the MySQL installation and the settings in my.ini) looked a little screwy. So I guess I need to figure out exactly what mysql_config provides to the DBD::mysql build.

After some digging, it appears that the crux of the problem is that the MySQL client libraries are not available in the native Windows MySQL installation, and these libraries are required to build DBD::mysql. So if we can figure out a way to get these libraries to work in Cygwin, then we should have a working solution. Luckily, I found a note in the DBD::mysql readme file that pointed me in the right direction. Here is what I ultimately did:

0) Download and unzip the MySQL source code (I grabbed mysql 5.1.34).

1) Build the MySQL client libraries (without the server) via:
./configure --without-server --prefix=/usr/local/mysql-5.1.34
make

The build halts with an error for the file sys/ttydefaults.h (not found), so I copied that file from /usr/include/sys/ttydefaults.h on a UNIX box into /usr/include/sys within Cygwin. Running make again completes the build once this file is in place. There is little of consequence in this file, so I am hoping that copying it from a UNIX box into Cygwin won’t have any serious side effects.

2) Once the MySQL build has finally completed (and this takes a while), run a manual build of the cpan download of DBD::mysql in the .cpan cache directory, using parameters for the location of the MySQL client libraries (which eliminates the need for mysql_config to be used to find them):

cd ~/.cpan/build/DBD-mysql-4.011-ynTTNR
perl Makefile.PL --libs="-L/usr/local/mysql-5.1.34/lib/mysql -lmysqlclient -lz" --cflags=-I/usr/local/mysql-5.1.34/include/mysql --testhost=127.0.0.1
make
make install

So now we are ready to try mysqlhotcopy again. The MySQL client build installed a copy of mysqlhotcopy in /usr/local/mysql-5.1.34/bin, so let’s use that one instead of the one that was copied in from a UNIX box. Here’s the command:

/usr/local/mysql-5.1.34/bin/mysqlhotcopy -u <username> -p <password> <database> <backup directory>

Still no joy; now we get this error:

DBI connect(';host=localhost;mysql_read_default_group=mysqlhotcopy','<database>',...) failed: Can't connect to local MySQL server through socket '/tmp/mysql.sock'(2) at /usr/local/mysql-5.1.34/bin/mysqlhotcopy line 177

This looks to me like DBI (using DBD::mysql) is trying to connect to a UNIX socket on the local machine instead of using TCP. Given that we’re on Windows, it would probably be a pain in the neck to get the native Windows version of MySQL to listen on a local UNIX socket. Luckily, I had spent some time looking at the Perl code in mysqlhotcopy, and it turns out that if you specify an IP address via the -h option, this overrides the use of the UNIX socket and forces DBI to use TCP to connect to MySQL. So let’s try the localhost loopback address (127.0.0.1) to see if that works:

/usr/local/mysql-5.1.34/bin/mysqlhotcopy -h '127.0.0.1' -u <username> -p <password> <database> <backup directory>

Success! The command runs to completion without errors, and I can verify that the backup has taken place.

Hope this helps.

Enterprise Cloud Computing – What is it, exactly?

I am often asked to define cloud computing. The overwhelming marketing hype attached to the term at the moment has obscured the benefits, in both technology and economics, that are available if you look closely. My typical response goes something like this: cloud computing does not allow you to do anything you could not do before, but it does allow you to do those things faster, with better response times, and with a different and mostly better cost model. These benefits, when applied to enterprise software deployments, make the deployment and maintenance of enterprise software faster, easier, and less expensive.

For example, a cloud-based virtualization service like Amazon’s Elastic Compute Cloud (EC2) gives you virtual hardware instances that can be provisioned via an API. The virtual hardware, in and of itself, is no different from real hardware running in a data center. However, a new virtual instance in EC2 can be started in minutes, while a new box in a data center would need to be purchased, installed, and configured over a period measured in hours, if not days. Reducing the capacity provisioning response time to minutes has a profound effect on the ability of an application to scale in response to load, since new capacity can be called up in response to load rather than in anticipation of it. And this benefit is not limited to cloud-based virtualization services alone; cloud-based services for data, messaging, and applications all benefit from the ability to respond quickly to capacity scaling requests. This is one of the key benefits of cloud computing relative to traditional deployment approaches based on actual hardware.

In economic terms, the value proposition of cloud computing resides in its cost model. Cloud-based services, and cloud-based virtualization services in particular, have adopted a pay-as-you-go pricing model without initial costs. This is fundamentally different from the traditional enterprise software cost model in which the majority of costs are incurred immediately prior to deployment, and it changes the causality relationship between cost and revenue for enterprise application deployments. In a cloud-based service, cost trails revenue, and cost increases only in response to increasing load, rather than in advance of it. Additionally, cloud-based services can scale down in response to decreasing load, and this provides additional cost benefits that are simply unattainable in the traditional enterprise deployment model. The economic benefits of this pricing model will likely become an irresistible force in the marketplace for enterprise applications.

Ephemeral Drives in Amazon EC2 – When Are They Mounted?

Virtual instances running in Amazon’s EC2 service have several ephemeral disk drives that can be used for temporary storage (temporary because they are not persisted as part of the AMI). Recently, I had to figure out exactly when those drives are mounted and available during boot. The specific issue I was seeing was that I had registered some services to start automatically during boot, and those services started software packages that relied upon the ephemeral drives. This is on Windows 2003 Server, by the way; it is a non-issue on Linux, where drive mounting precedes the init sequence for application-level processes.

Through some trial and error (and I’ll abridge the details here), I was able to determine that the ephemeral drives are ready in all respects after the following two services have started: Ec2Config and Virtual Disk Service (vds). It was a simple matter of creating service dependencies for my registered services to ensure that they started after Ec2Config and VDS were started, and that fixed the glitch. I was using cygwin so I was able to use the cygrunsrv command to create the dependencies (via the --dep argument). People with more Windows kung fu would probably use regedit to do the same thing.
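For reference, here is roughly what the cygrunsrv incantation looks like. The service name and executable path are hypothetical, and this assumes your version of cygrunsrv accepts the --dep argument more than once.

# Hedged sketch: install a service that depends on Ec2Config and the
# Virtual Disk Service, so it starts only after the ephemeral drives are ready.
cygrunsrv --install mydataservice \
          --path /usr/local/bin/mydataservice \
          --dep Ec2Config \
          --dep vds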

Hope this helps.

Cygwin Lighttpd with SSL

Last week, I needed to configure a Windows 2003 AMI in EC2 to run lighttpd with SSL. Once again, a simple job that I thought would be quick and painless turned into an extensive hacking session. Here is a quick roadmap of what I did.

My initial thought was that there must be a native port of lighttpd with SSL support for Windows…wrong again! I don’t know why I continue to think that open source projects will be ported to Windows; Linux is long since mature enough that people can use it directly, rather than porting open source Linux software to Windows.

In any event, cygwin lists lighttpd in its package list, so I was left with the option to run lighttpd via cygwin. A few quick clicks of the cygwin installer gave me lighttpd 1.4.20-1, all installed and ready to start. I dropped in a lighttpd.conf file containing ssl.engine declarations and started the server. Lighttpd promptly informed me that SSL support had not been compiled in.
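For context, SSL in a 1.4.x lighttpd.conf is enabled with a block roughly like the following sketch; the certificate path is a hypothetical placeholder.

# Hedged sketch of the SSL portion of a lighttpd.conf (1.4.x).
$SERVER["socket"] == ":443" {
  ssl.engine  = "enable"
  ssl.pemfile = "/etc/lighttpd/server.pem"
}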

I really have no idea why lighttpd on cygwin ships without SSL support compiled in, especially since the openssl libraries are available within cygwin and SSL support can be compiled in with just a simple option to ./configure. But, I should just be able to download the lighttpd source and compile it myself, on cygwin, with SSL support enabled, right?

Wrong again! After downloading lighttpd source for 1.4.20 and installing gcc on cygwin, I ran ./configure with the following options:

./configure --disable-shared --enable-static --prefix=/usr --with-openssl

I kicked off the build with make, and went to grab a beer (the build proceeds very slowly). When I returned, I found the build had failed with a symbol undeclared error for EAI_SYSTEM. This error has lots of search hits and some hacking-oriented solutions, but none of those solutions seemed to work for me, owing to conflicts between the static build of lighttpd and the main cygwin DLL. Even when I did manage to get the build to successfully complete, I found that some features (e.g. CGI execution) just didn’t work. The last thing I want is a flaky build of lighttpd running within cygwin on Windows, so I needed a plan B.

After a few more searches, I came across the WLMP project. WLMP has a standalone build of lighttpd — optionally bundled with MySQL and PHP — that includes the cygwin DLLs but doesn’t otherwise require or run within cygwin. Funky, but promising. I installed the lighttpd-only version of WLMP, and I was able to bring lighttpd up with support for SSL, but only as a standalone application, outside of cygwin. If I tried to run it from within a cygwin shell, or even with the cygwin DLL loaded, it failed silently without any error messages. I intended to use cygwin extensively while lighttpd was running, so this was a significant problem.

On to plan C. I’m starting to think this isn’t going to happen when it occurs to me that the WLMP lighttpd build is a cygwin build with some conflicting cygwin files. If I download a version of WLMP that uses the same lighttpd 1.4.20 version as cygwin, and remove the files that conflict, then maybe I can get the WLMP build to run in cygwin. It’s worth a shot, right?

After some trial and error, I was able to make this work as follows:

0) Download WLMP lighttpd 1.4.20, matching the version of lighttpd in cygwin.
1) Delete the Cygwin1.dll file from the WLMP lighttpd directory
2) Add execute permission to all of the DLLs in the WLMP lighttpd directory, and all of the mod DLLs in the WLMP lighttpd/lib directory.
3) Cd into the WLMP lighttpd directory, and from within that directory start lighttpd with the following command:

./lighttpd -V -m /usr/lib/lighttpd

If all went according to plan, you should see the list of features compiled into lighttpd, and there should be a “+” sign in front of “SSL Support”. My next test was to start lighttpd with the same -m argument and my lighttpd.conf with the SSL configuration, and it worked flawlessly without any cygwin conflicts.
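In other words, something along these lines; the config path is a hypothetical placeholder for wherever your SSL-enabled lighttpd.conf lives.

# Hedged sketch: run the WLMP lighttpd binary in the foreground with the
# SSL-enabled config, still pointing -m at the cygwin module directory.
./lighttpd -D -f /etc/lighttpd/lighttpd.conf -m /usr/lib/lighttpd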

I can’t tell you why the current directory (within the WLMP lighttpd directory) is important in getting this hack to work. My guess is that the WLMP build of lighttpd is using a combination of DLLs from the standard cygwin lighttpd lib directory in /usr/lib/lighttpd and the WLMP lighttpd lib directory in the current directory, and somehow this combination works without cygwin DLL conflicts in this particular situation. Who knows? All I can say is that it would have been really nice if the cygwin folks had enabled SSL support in their build of lighttpd…

Anyway, that’s it. Hope that helps.

Security for Cloud-based Enterprise Applications

Being the architect of a cloud-based enterprise software-as-a-service (SaaS) product — Adobe LiveCycle Express — I get asked a lot about the security of cloud-based applications. They are all the questions you would expect: “Is my data secure in the cloud?”; “How safe are cloud service environments?”; “Aren’t you just asking to be hacked?”. My response to all of these questions is, first, to tell the asker to take a deep breath and relax, and second, to think of the cloud as any other computer system faced with security threats, and for which you need to develop a threat response. Security for cloud-based applications is actually a multifaceted problem, with distinct threats and responses for network security, operating system security, data security, and virtualization security — but each of these distinct areas has an analog in non-cloud-based applications that has been analyzed and for which there are security procedures to help reduce threats. So, ultimately, we just need to ignore the hype surrounding cloud computing and focus on the fundamental computer system security issues that applications will face in this domain. Some brief commentary on each of the threat areas faced by cloud-based applications follows, along with a note about regulatory issues. As with anything you read on the internet, your mileage may vary.

Network Security for Cloud-based Enterprise Applications

The ideal network security paradigm for cloud-based enterprise applications would be to have the cloud services be an extension of the customer’s existing internal networks, using all of the same protection measures and security precautions that their administrators have developed. This implies a strategy that would allow a customer to extend their network security envelope to encompass the cloud services that they use. For my application, I have chosen to use an encrypted TCP port forwarding strategy to extend the customer’s network security envelope into the cloud. Specifically, I use an implementation of the ssh protocol that performs bi-directional TCP traffic forwarding across encrypted, compressed connections between cloud-based virtual instances and customer machines behind their corporate firewalls. The encryption strength is configurable, and is at least as secure as any corporate VPN encryption. This is another way of saying that it does not introduce any new network threats beyond what a customer already faces with their existing VPN technologies. Less paranoid customers can choose to downgrade the security of their cloud instances and access them directly via HTTP and HTTPS, but my recommended approach is to use encrypted traffic forwarding, both for robust security and to translate the problem of cloud-based network security into terms and concepts — VPNs and encryption — that system administrators will understand. In reducing the problem of cloud-based network security to one of corporate network security, we simplify the problem of assessing and responding to cloud-based network security threats.
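As an illustration of the general idea (and not the product’s actual tooling), an encrypted, compressed tunnel of this kind can be sketched with stock OpenSSH; the host names and port numbers here are hypothetical.

# Hedged sketch: forward traffic in both directions over one encrypted,
# compressed ssh connection. Hosts and ports are hypothetical placeholders.
#   -C  compress traffic
#   -N  no remote command; tunnel only
#   -L  expose the instance's HTTPS port on a local port behind the firewall
#   -R  expose an internal service to the instance on a remote port
ssh -C -N \
    -L 8443:localhost:443 \
    -R 5022:internal-host:22 \
    admin@ec2-instance.example.com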

Operating System Security for Cloud-based Enterprise Applications

Cloud-based virtual instances face the same OS-level threats as standard hardware-based installs. The typical responses for these threats are the same as for hardware-based installs (e.g. block all non-essential listen ports via firewalls, keep the OS patches current, and run modern anti-virus software). For my enterprise application, I recommend that all ports except the ssh port (port 22) are to be blocked by firewall, patches are to be updated periodically and rolled into new virtual images, and antivirus software is to be installed; these measures collectively ensure that the virtual instances face no new OS-level threats beyond what a customer already faces with their existing hardware. As with network security, by reducing the problem of cloud-based operating system security to one of traditional hardware-based operating system security we simplify the problem of assessing and responding to threats.
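On a Linux instance, for example, the “block everything except ssh” posture might be sketched with iptables along these lines; this is a minimal illustration, not a complete firewall policy.

# Hedged sketch: default-deny inbound traffic, allow loopback, established
# connections, and inbound ssh on port 22. Adapt before using on a real instance.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT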

Data Security for Cloud-based Enterprise Applications

In cloud-based applications, data encryption is the simplest way to protect data. Data in a cloud-based application will likely be transiting public networks and shared resources, and it makes sense to protect it against simple observation. Modern encryption technologies will withstand all but the most determined attempts to crack them, and while they require some key sharing infrastructure to implement them properly, they can be deployed using traditional methods employed for non-cloud-based applications. There are potential performance implications for data encryption, but, in my opinion, when using modern hardware and fast networks with a well-designed application, the performance implications can be effectively minimized. In my application, virtual instances store their backups off-instance within a cloud storage service. These backups are encrypted using a PKI key pair, with only the public key stored on the virtual instance. The data storage needs of virtual instances in my application are simple and infrequent, so the application easily lends itself to a data encryption strategy. Not all applications will have these characteristics; however, there are very few alternatives that can ensure data security in cloud-based applications. Thus, data encryption in some form will likely be a necessary evil of all cloud-based applications…but it’s not really that evil. And once again, we have reduced the problem of data security in cloud-based applications to an existing, non-cloud-based technology domain.
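One way to implement that kind of public-key backup encryption (a sketch of the approach, not necessarily the exact mechanism my application uses) is with openssl’s S/MIME mode, which encrypts the file with a symmetric key that is itself encrypted to the recipient’s certificate; the file names here are hypothetical.

# Hedged sketch: encrypt a backup archive so that only the holder of the
# private key (kept off-instance) can read it.
openssl smime -encrypt -binary -aes-256-cbc \
    -in backup.tar.gz -out backup.tar.gz.enc -outform DER backup-cert.pem

# Later, on a machine that holds the private key:
openssl smime -decrypt -binary -inform DER \
    -in backup.tar.gz.enc -out backup.tar.gz -inkey backup-key.pem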

Virtualization Security for Cloud-based Enterprise Applications

Cloud-based virtual instances face resource-sharing threats that are inherent to the virtualization environment. These threats, however, are similar to the resource-sharing threats seen by enterprise applications that share hardware resources, either by design or necessity, within corporate data centers, and for which there are procedures and assumptions that can be put to use in the cloud domain. For cloud-based resource-sharing threats, the problem is largely within the domain of the cloud virtualization provider, and it is their responsibility to respond to the threat. Customers of a virtualization service are ultimately trusting the provider to guarantee the inter-instance security of the virtual instances, much as shared applications on common hardware trust the operating system to enforce inter-process security. Cloud-based applications rely on cloud services to enforce inter-instance security and guarantee that shared memory and disk resources cannot be used to access another customer’s applications. Many cloud service providers run their own business applications within the same virtualization fabric, so it is fair to assume that inter-instance security is being monitored by corporate security professionals. In my opinion, the security budgets of the major cloud service providers far exceed the security budgets of most cloud service consumers, and thus the degree of virtualization security provided in these service environments will be at least as good as the resource-sharing security found within a customer’s data center. For example, Amazon runs their flagship business within the same data centers as their cloud services, and I imagine that they have some of the best security professionals in the industry ensuring that their business applications are not at risk from resource-sharing threats originating in their cloud services. Ultimately, however, the threat is not substantially different from the hardware-based shared-resource threats that customers currently face in their own data centers. We can reduce the issue of virtualization security to one that is similar to an existing threat, and for which there are standard practices to assess and respond to that threat.

Regulatory Issues for Cloud-based Enterprise Applications

A discussion of security for cloud-based enterprise applications would not be complete without some comments on the regulatory issues that come into play when enterprise applications, and specifically enterprise application data, are moved into cloud-based services. Legislation and regulation in many industries have led to strict guidelines regarding enterprise data, primarily to protect the privacy of individuals. For example, the Health Insurance Portability and Accountability Act (HIPAA) contains strict provisions on what can be done with individuals’ health records, and what permissions must be explicitly received from those individuals for any transfer of those records. For an enterprise application that contains health records, this presents an open question that must be answered before that application can integrate cloud services into its architecture: how do the regulations within HIPAA apply to cloud-based services, and what restrictions do they place, if any, upon the use of third-party cloud services within enterprise healthcare applications? The answer to this question, and to similar questions for regulations in other industries, is evolving rapidly, but it will ultimately be determined by legislation and regulation rather than by technology. However, progress towards an answer will be made by software vendors and enterprise customers who make the case that the threats faced by cloud-based enterprise applications are ultimately no different, and no more severe, than the existing threats faced by those applications in their current deployment formats.

Summary

I have tried to make the argument that the security threats seen by cloud-based enterprise applications are translations of threats seen by existing enterprise applications within corporate data centers, and thus there are existing procedures and responses that can be put to use in designing an appropriate security strategy. To paraphrase Bruce Schneier, security is theater, and the perception of security for cloud-based enterprise applications is likely more important than the actual manifestation of the security mechanisms. This has been true at each evolutionary stage for enterprise systems — does everyone remember the paranoia surrounding the first attempts to connect corporate networks to the public internet? — and the continuing deployment of ever more cloud-based applications will need to reach a critical mass, with respect to the perceived security story, before it becomes effectively mainstream. But that day is coming.

Hope this helps.

Cygwin SSHd on a Windows 2003 AMI Within Amazon EC2

Recently, I needed to configure a Windows 2003 AMI in EC2 to run a ssh server. I would have expected this to be a simple job, with a variety of choices for making this work, but in the end it was far more time consuming, complicated, and frustrating than I would have guessed. Here is a quick road map of what I did.

My initial thought was that there must be a free, native port of openssh for Windows that installs as a service and otherwise conforms to the Windows environment…wrong! I can’t tell you why this is the case — maybe ssh is just not a microsofty way of doing remote terminals and file transfers — but I couldn’t find anything resembling a free, functional port of openssh for Windows. I found a few blog posts that mentioned that people had tried this, but ultimately they gave up when faced with the integration between openssh’s user/group namespace functions and Windows’ user/group concepts (to say nothing of the differences between the Windows command prompt and the UNIX shells). And these blog posts ultimately suggested that it was easier to run sshd via cygwin than it would be to port sshd to run natively. So….cygwin time!

UNIX is my OS of choice, and I’ve had cygwin on every Windows box I have ever had, so it was a quick jump to download the cygwin installer and install the packages I needed on a freshly started Windows 2003 instance in EC2 (incidentally, I am running the 64-bit, large EC2 instance AMI of Windows 2003 Server with SQL Server Express and no Authentication Services). The openssh package comes with a simple script — ssh-host-config — to generate the server host keys and create the users needed for privilege separation, so it was a nice, simple, relatively painless install. There are a few things that the config script misses, however, which requires you to run it several times before it ultimately succeeds (although it is nice enough to point out the problem each time and prompt you to fix it). After playing with it, I came up with the following actions to perform before running ssh-host-config in order to make it succeed the first time without errors:

0) Add the following line to /cygwin.bat:
set CYGWIN=binmode tty ntsec

1) Run a new cygwin bash shell (after the edit of cygwin.bat) and enter:
mount -s --change-cygdrive-prefix /
chmod +r /etc/passwd /etc/group
chmod 755 /var

2) Run a new cygwin bash shell (to pick up the cygdrive prefix change) and enter:
ssh-host-config
-- yes for privilege separation
-- "binmode tty ntsec" for CYGWIN environment variable setting for the service
-- enter your password of choice for the cyg_server account

3) Enter the following to start sshd:
net start sshd

4) Open the Windows Firewall editor, and add an exception for TCP traffic on port 22 for sshd.

5) If you haven’t already done so, open up port 22 for your EC2 instance group (assuming you are running your instance in the default group):
ec2-authorize -p 22 default

If everything went well, sshd is running and available on port 22, and you can login normally via ssh from other machines. All that is left to do is bundle up a new AMI to capture the cygwin installation…and that should be a piece of cake, right? The updated EC2 API has a new method — ec2-bundle-instance — that kicks off an AMI bundling job for an EC2 instance running Windows, so it should be as simple as calling this method and then grabbing a beer to wait for it to complete. If only it were that simple…
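For the record, kicking off the bundling job is the easy part; if memory serves, the call with the EC2 API tools looks roughly like this, where the instance id, bucket, and prefix are hypothetical placeholders.

# Hedged sketch: request a bundling job for a running Windows instance.
ec2-bundle-instance i-12345678 -b my-ami-bucket -p win2003-cygwin-sshd \
    -o $AWS_ACCESS_KEY_ID -w $AWS_SECRET_ACCESS_KEY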

Unlike the AMI bundling scripts for Linux-based EC2 instances, which are essentially just packaging up the existing file system, the Windows AMI bundling mechanism needs to perform several Windows-specific functions that turn out to be a real pain in the neck. First and foremost is sysprep. Sysprep is Microsoft’s answer to the problem of Windows virtualization; apparently the simple cloning of a Windows installation is not acceptable, and a new Windows SID should be generated for each new instantiation of a Windows virtual image. Sysprep does some other things, too (search for sysprep on Microsoft’s support web site for a more complete description — I am certainly not an expert on it), but ultimately the SID generation is the one that causes problems for a lot of installed software…like cygwin. After bundling a new AMI and starting a new instance with it, I found that sshd was hosed for no apparent reason. Attempts to start sshd via “net start sshd” produce the following cryptic error message:

The CYGWIN sshd service is starting.
The CYGWIN sshd service could not be started.
The service did not report an error.
More help is available by typing NET HELPMSG 3534.

WTF?

After several time-consuming iterations of start new instance -> install cygwin -> bundle new AMI -> start new AMI instance -> wonder why sshd is hosed, I found something in the HKEY_USERS tree of the Windows registry that changes after the bundling step. Prior to bundling, with a functioning cygwin/sshd, I see the following in the registry:

[HKEY_USERS\S-1-5-21-2574196159-1727499900-3384088469-1013\Software\Cygnus Solutions]
[HKEY_USERS\S-1-5-21-2574196159-1727499900-3384088469-1013\Software\Cygnus Solutions\Cygwin]
[HKEY_USERS\S-1-5-21-2574196159-1727499900-3384088469-1013\Software\Cygnus Solutions\Cygwin\mounts v2]
[HKEY_USERS\S-1-5-21-2574196159-1727499900-3384088469-1013\Software\Cygnus Solutions\Cygwin\Program Options]
[HKEY_USERS\S-1-5-21-2574196159-1727499900-3384088469-500\Software\Cygnus Solutions]
[HKEY_USERS\S-1-5-21-2574196159-1727499900-3384088469-500\Software\Cygnus Solutions\Cygwin]
[HKEY_USERS\S-1-5-21-2574196159-1727499900-3384088469-500\Software\Cygnus Solutions\Cygwin\mounts v2]
[HKEY_USERS\S-1-5-21-2574196159-1727499900-3384088469-500\Software\Cygnus Solutions\Cygwin\Program Options]

After bundling, in a new instance in which sshd is hosed, I see the following in the registry:

[HKEY_USERS\S-1-5-21-4261372910-2505678249-1238160980-500\Software\Cygnus Solutions]
[HKEY_USERS\S-1-5-21-4261372910-2505678249-1238160980-500\Software\Cygnus Solutions\Cygwin]
[HKEY_USERS\S-1-5-21-4261372910-2505678249-1238160980-500\Software\Cygnus Solutions\Cygwin\mounts v2]
[HKEY_USERS\S-1-5-21-4261372910-2505678249-1238160980-500\Software\Cygnus Solutions\Cygwin\Program Options]

All of the other registry entries related to cygwin remain the same before and after the bundling step, so my guess is that the loss of entries in the bundled instance is the source of the trouble. But what exactly are those entries?

Again, I’m no Windows expert, but the entries in question appear to consist of the Windows SID followed by a user identifier (e.g. in S-1-5-21-4261372910-2505678249-1238160980-500, S-1-5-21-4261372910-2505678249-1238160980 is the SID, and 500 is the user id). Looking at the /etc/passwd file for cygwin, user id 500 corresponds to the Administrator account, and user id 1013 corresponds to the cyg_server account, used by sshd as a privileged account for switching effective user ids during login. So, my hypothesis is that the privileges for the cyg_server account are somehow lost by sysprep during the bundling step, and sshd is hosed without them in the new bundled AMI instance.

To test my hypothesis, I decided to configure the AMI bundling step to skip sysprep. The base Windows EC2 AMIs come with an application in the start menu called “ec2Service Setting” that has a check box to enable/disable sysprep during AMI bundling, so it is easy enough to test this. However, I have no idea what happens to Windows if I disable sysprep during bundling, and I was not able to find a satisfactory answer via internet searches. The closest I got to an answer was to see several of the Amazon admins on the EC2 forum comment that it was not a good idea to disable sysprep if you were going to instantiate multiple instances. I also found several documents online that discussed how sysprep is used to sanitize a Windows installation, generate a new SID, and make the installation generic enough to run on any type of hardware. Since the virtual hardware of EC2 is, roughly speaking, identical (given that it is using Xen underneath the hood), I’m not too worried about the hardware issue. I have no idea about “sanitizing” the Windows instance or SID generation, though, so bundling without sysprep might mortally wound Windows (again…I’m no Windows expert). And I do want to run multiple instances from the bundled AMI, so that might be a non-starter as well. So I guess I will try the ready-shoot-aim approach of seeing what happens when I turn it off…

Compressing time, I started with a fresh Windows instance, installed cygwin and configured sshd like before, turned off sysprep and bundled it, started a new instance from the new bundled AMI, and…sshd still works. The new instance retains the SID that it had prior to bundling, and the registry entries are still there for the cyg_server account. Windows also appears to be working in all respects, but I’m not sure I could detect problems that might result internally from the omission of sysprep in the bundling. I guess I can run one more test, starting a bunch of instances at once, to see if having the same SID causes them to interfere with one another. I started four instances, running concurrently, and they each seem to be working fine. Or at least I can’t detect any problems.

So, in closing, it looks like I may have a solution: turn off sysprep if you want to use cygwin sshd in a bundled Windows AMI. Someone with more Microsoft kung-fu might be able to figure out how to make sysprep retain the registry entries for the cyg_server account, or maybe they would write a script to insert them directly into the registry at restart if they are missing…who knows. But for me, disabling sysprep seems to be the way to go. I found lots of other complaints on the internet about sysprep and what it does to installed software when the SID changes, so I’m guessing that there will be a lot of bundled AMIs in EC2 that are created with sysprep disabled. If there are, in fact, issues with multiple instances using the same SID, then I expect we will be reading about it in the EC2 forums, since everyone who creates a new AMI from the base Windows AMIs without sysprep will have the same base SID in their AMIs, and so on….

Anyway, that’s it. Hope that helps.

My experimental search engine is now available in 10 languages…

About 18 months ago, I created an experimental search engine to play with some new ways to extract and distill information from the web and present it in a more topic-focused way. The site is called doryoku (which is Japanese — 努力 — meaning supreme effort). It has been doing well, and my experiments and enhancements continue. My latest addition involves automatic translation: the site is now available in 10 languages: English, Japanese, Chinese, Korean, French, German, Italian, Spanish, Russian, and Greek. The translation capabilities are courtesy of the Google Language API.

More new features to come. Stay tuned!

Adobe LiveCycle Express — Cloud Computing Meets Adobe LiveCycle ES

Adobe LiveCycle Express is a new software-as-a-service (SaaS) product offering from Adobe that takes a cloud computing approach to delivering Adobe’s LiveCycle ES enterprise software suite, using Ruby on Rails, Adobe Flex, and Amazon’s EC2 and S3 services. LiveCycle Express grew from a research project I conceived while working in Adobe’s Advanced Technology Labs division.

The site was launched on December 15, 2008 at http://livecycle.express.adobe.com.

The joint press release from Adobe and Amazon announcing LiveCycle Express can be found here: http://finance.yahoo.com/news/Adobe-Announces-LiveCycle-bw-14025946.html.