It’s 1:00 am. You get an email from an “application migration manager” (automation tool) that says your inventory-control application containers successfully migrated from your AWS instances to your new Google instances. What’s more, the connection to the on-premises database has been reestablished and all containers are up and running with all security and governance services reestablished.
This happened without any prompting from you — it is an automatic process that compares the cost of running the application containers on AWS versus the cost of running them on Google. The latter proved more cost-effective at the time, so predefined policies triggered the auto-migration and moved the containers from one public cloud to the other. Of course, the same concept works for private-to-public and private-to-private migrations, too.
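The policy check behind such an auto-migration can be sketched in a few lines. This is a hypothetical illustration, not a real tool: the provider names, per-container prices, and savings threshold are all made up for the example.

```python
# Hypothetical sketch of a policy-driven migration decision.
# Prices, provider names, and the threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Policy:
    min_savings_pct: float  # only migrate if savings exceed this threshold

def hourly_cost(provider_prices: dict, containers: int) -> dict:
    """Estimated hourly cost of running N containers on each provider."""
    return {name: price * containers for name, price in provider_prices.items()}

def pick_target(costs: dict, current: str, policy: Policy) -> str:
    """Return the provider to run on: the cheapest, if it beats the threshold."""
    cheapest = min(costs, key=costs.get)
    if cheapest == current:
        return current
    savings_pct = 100 * (costs[current] - costs[cheapest]) / costs[current]
    return cheapest if savings_pct >= policy.min_savings_pct else current

# Example: 40 containers, illustrative per-container hourly prices
prices = {"aws": 0.12, "google": 0.09}
costs = hourly_cost(prices, containers=40)
target = pick_target(costs, current="aws", policy=Policy(min_savings_pct=10))
```

With these made-up numbers, Google is 25 percent cheaper, which clears the 10 percent threshold, so the policy selects it as the migration target.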
While these scenarios might sound like science fiction today, the associated capabilities are coming, and fast. The ability to mix and match containers and automate the migration and localization of those containers could change the way we think about cloud development and what private and public PaaS and IaaS platforms provide.
The trouble with existing approaches to cloud computing, including IaaS and PaaS, is that they tend to come with platform lock-in. Once an application is ported to a cloud-based platform such as Google, AWS, or Microsoft, it’s tough, risky, and expensive to move that application from one cloud to another. This is not by design. Rather, it’s the result of a market moving so quickly that public and private cloud providers do not yet do a good job of building portability into their platforms. Currently it isn’t in their best interest to do so, and market demand has not yet caught up with this sector.
Enter new approaches based on old ones — namely, containers — and thus the open-source project Docker. The promise is to provide a common abstraction layer that allows applications to be localized within the container and then ported to other public and private cloud providers that support the container standard. Most do — or will very soon.
Finding new value
At the center of all this is a cloud-orchestration layer that can both provision the infrastructure required to support the containers and perform the live migration of the containers, including monitoring their health after the migration occurs (see the figure below).
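The orchestration loop just described can be sketched as three steps: provision on the target, verify health, then tear down the source copy. All class and method names below are hypothetical placeholders; a real driver would call the cloud provider’s APIs.

```python
# Minimal sketch of the orchestration layer described above.
# CloudProvider is a stand-in; real drivers would call provider APIs.
from typing import List

class CloudProvider:
    def __init__(self, name: str):
        self.name = name
        self.running: List[str] = []

    def provision(self, container: str) -> None:
        self.running.append(container)

    def deprovision(self, container: str) -> None:
        self.running.remove(container)

    def healthy(self, container: str) -> bool:
        return container in self.running

def migrate(containers, source: CloudProvider, target: CloudProvider) -> None:
    """Provision on the target, check health, then remove the source copy."""
    for c in containers:
        target.provision(c)
        if target.healthy(c):      # post-migration health check
            source.deprovision(c)
        else:
            target.deprovision(c)  # roll back; leave the source copy running

aws, gcp = CloudProvider("aws"), CloudProvider("google")
aws.provision("inventory-web")
aws.provision("inventory-db-proxy")
migrate(["inventory-web", "inventory-db-proxy"], aws, gcp)
```

The key design point is that the source copy is only removed after the target reports healthy, so a failed migration leaves the application running where it was.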
Using containers is not a new procedure: They certainly predate Docker. However, auto-provisioning and auto-migration are concepts that were often pushed but largely remained elusive in practice. The use of Docker to turn these concepts into reality has a few basic features and advantages, including:
- The ability to reduce complexity by leveraging container abstractions. The containers remove the dependencies on the underlying infrastructure services, which reduces the complexity of dealing with those platforms. They are truly small platforms that support an application or an application’s services inside a very well-defined domain: the container.
- The ability to leverage automation with containers to maximize their portability, and with it their value. Through automation, we’re essentially scripting tasks we could also do manually, such as migrating containers from one cloud to another. This could also mean reconfiguring communications between the containers, such as tiered services or data service access. Today it’s much harder to guarantee portability and the behavior of applications when using automation. Indeed, automation often relies upon many external dependencies that can break at any time. This remains a problem, but, fortunately, one that is solvable.
- The ability to provide better security and governance services by placing those services around rather than within containers. In many instances, security and governance services are platform-specific, not application-specific. The ability to place security and governance services outside of the application domain provides better portability and less complexity during implementation and operations.
- The ability to provide better distributed-computing capabilities, considering that an application can be divided into many different domains, all residing within containers. These containers can be run on any number of different cloud platforms, including those that provide the most cost and performance efficiencies. So applications can be distributed and optimized according to their utilization of the platform from within the container. For example, one could place an I/O-intensive portion of the application on a bare-metal cloud that provides the best performance, place a compute-intensive portion of the application on a public cloud that can provide the proper scaling and load balancing, and perhaps even place a portion of the application on traditional hardware and software. All of these elements work together to form the application, and the application has been separated into components that can be optimized.
- The ability to provide automation services with policy-based optimization and self-configuration. None of this works without an automation layer that “auto-magically” finds the best place to run the container and deals with changes in configuration and other specifics of the cloud platforms where the containers reside.
While this may seem like distributed-application Nirvana, and certainly a better way to utilize emerging cloud-based platforms, there are many roadblocks here.
The industry must consider the fact that today’s automation and orchestration technology can’t provide this type of automation — yet. While it can certainly manage machine instances and even containers using basic policy and scripting approaches, automatically moving containers from cloud to cloud using policy-driven automation, including auto-configuration and auto-localization, is not yet a reality.
Also, we’ve only just begun our Docker container journey. There is a lot we don’t understand about the potential of this technology and its limitations. Taking a lesson from the use of containers and distributed objects from years ago, the only way this technology can provide value is through cloud coordination of those supporting containers. Yes, having a standard here is a great thing, but history shows us that vendors and providers have a tendency to march off in their own proprietary directions for the sake of market share. If that occurs, all is lost.
The final issue is that of complexity. While we seemingly make things less complex, the reality over time is that the use of containers as the platform abstraction means that applications will morph toward architectures that are much more complex and distributed. Moving forward, it may not be unusual to find applications that exist within hundreds of containers running on dozens of different models and brands of cloud computing. The more complex these things become, the more vulnerable they are to operational issues.
All things considered, this could still be a much better approach to building applications on the cloud. PaaS and IaaS clouds will still provide the platform foundations and even development capabilities. These, however, will likely commoditize over time, moving from true platforms to good container hosts. It will be interesting to see if the larger providers want to take on that role. Considering the interest in Docker, that could be the direction.
The core question now: If this is the destination of this technology for application hosting on cloud-based platforms, should organizations redirect their resources toward this new vision? I suspect that most enterprises aren’t far enough along in cloud computing to make that change. Indeed, the great cloud migration should continue. However, know that we’ll get better at cloud application architectures using approaches that account for both automation and portability, and we’ll all eventually land here.