Migrating your legacy data center applications and services to commercial cloud providers is still all the rage in government. There are plenty of good reasons to make the move: on-demand services are relatively cheap, scaling becomes a function of software rather than hardware procurement, and you no longer have to maintain hardware, power redundancies, and so on. Compute and storage are now commodities; you wouldn't build your own power plant when you could just plug into the wall…right?
Having now worked across federal agencies architecting cloud strategies, I can tell you the real challenge is breaking the data center, server-focused mindset. Agencies accept the basic cost arguments for moving to the cloud, but they have trouble letting go of the idea of physical servers, even when those "physical" servers are really just virtual machines.
In a traditional data center, agencies create static environments for each stage: typically development, integration, testing, staging, and production. Teams exist to maintain these environments and act as the gatekeepers. If a new development project is creating a web app, for example, the development team would:
- Request that a new app server be stood up in the environment
- Request access to an existing database, and/or
- Stand up their own database
Authentication services are usually provided by an existing LDAP or Active Directory service.

Traditional Static Server Environment Approach
When agencies decide to migrate from the data center to the cloud, what I see is that they essentially recreate the architecture above in the cloud. That doesn't come close to taking advantage of the technical advances the cloud offers, but it is not the most egregious sin: they migrate their existing processes over as well. You still have gatekeepers guarding static environments, and development teams submitting requests for servers to be provisioned with a certain software stack. Honestly, if you are just going to rebuild your data center in Amazon, don't bother.
The better way
The ultimate goal is serverless: deploying code directly to services that handle resource provisioning for you. The reality is that we aren't quite ready to accept that from a security standpoint. We can get closer by containerizing our apps and services and deploying them to a PaaS such as Kubernetes. We can deploy Kubernetes to AWS EC2 (which is approved) and let the platform, along with AWS auto scaling, handle our dynamic scaling. In this setup we only provision the base set of servers; not quite serverless yet, but closer.
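To make that concrete, here is a minimal Terraform sketch of the kind of base layer this implies: a small, fixed pool of EC2 worker nodes in an AWS Auto Scaling group that the platform can grow and shrink on demand. The AMI ID, subnet IDs, instance sizes, and the node bootstrap (kubeadm or similar, elided here) are placeholders, not a reference implementation.

```hcl
# Sketch only: image_id, subnets, and sizes are placeholders, and the
# user_data that joins each node to the Kubernetes cluster is elided.

resource "aws_launch_template" "k8s_node" {
  name_prefix   = "k8s-node-"
  image_id      = "ami-0123456789abcdef0" # hardened Kubernetes node image (placeholder)
  instance_type = "m5.large"
}

resource "aws_autoscaling_group" "k8s_nodes" {
  name                = "k8s-worker-nodes"
  min_size            = 3  # the small base set of servers we still provision
  max_size            = 12 # auto scaling handles demand beyond that
  desired_capacity    = 3
  vpc_zone_identifier = ["subnet-aaa111", "subnet-bbb222"] # placeholders

  launch_template {
    id      = aws_launch_template.k8s_node.id
    version = "$Latest"
  }

  tag {
    key                 = "Role"
    value               = "kubernetes-worker"
    propagate_at_launch = true
  }
}
```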
The real gain here is immutable environments. Using the cloud provider's APIs and tools like Terraform, we can fully automate the stand up of a Kubernetes PaaS in AWS in minutes. That lets you do away with static environments for development, test, and integration: development teams can launch these environments themselves, and you can remove the bulk of your process.
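As an illustration of that self-service idea, the same cluster code can be wrapped in a module and parameterized by environment, so a team can stand up and tear down its own copy. The module path and variable names below are hypothetical; they only show the shape of the pattern.

```hcl
# Sketch only: assumes the cluster resources above live in a local module
# at ./modules/k8s_platform that declares these variables.

variable "environment" {
  description = "Environment name, e.g. dev-team-a, test, integration, production"
  type        = string
}

module "k8s_platform" {
  source      = "./modules/k8s_platform"
  environment = var.environment
  node_count  = var.environment == "production" ? 3 : 1 # small throwaway clusters for dev
}
```

With something like this, `terraform apply -var="environment=dev-team-a"` gives a team its own disposable cluster and `terraform destroy` cleans it up when they are finished, while production is built from the exact same code with a different variable.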
Bottom line: stop spending money and wasting time on static development, integration, and test environments, and stop thinking you need gatekeepers for them. Focus on a highly scalable production PaaS built from code, because that same code is how you create identical development environments without sysadmin teams.