The first rule of managing deployments that people actually use is: don't do anything that will affect the availability of the service. Ever. Once you get something set up and working, don't touch anything and don't break anything. This is a difficult challenge: maintenance often interrupts service. Any change to a system, including required configuration changes, upgrades, or system updates, can break a working system. At the same time, every deployment and system requires configuration changes, upgrades, and updates. Balancing these requirements presents a core problem in systems administration.
Ensuring a stable production environment requires appropriate testing infrastructure, along with sufficient policies and automation around application deployment, so that software deployments and updates are reliable and do not require manual intervention.
This document addresses both halves of this problem, the infrastructure and the policy, and describes methods, techniques, and strategies that make it possible for administrators to ensure reliable application and system updates.
The following concepts introduce crucial components and requirements of deployment testing infrastructure.
Having a secure and reliable testing environment is essential; without one it's impossible to verify that changes are "good" before deploying them. However, the extent of your testing, and the tolerances for failure, depend on administrative requirements and on stakeholder [1] needs.
When configuring your test environments, consider the following basic requirements:
Testing needs to be easy. To ensure that you and your developers will rigorously test changes and updates, testing changes must be trivially easy. Beyond any other interface, it must be simple to restart the environment, reset it to a "base configuration," and back out of bad configurations.
Usability in this case also mandates some measure of performance. If it takes too long to reset an environment, or if the environment is too slow for any number of reasons, test environments are less likely to get used.
Use a deployment automation system, whether something custom based on build scripts and makefiles or an existing tool, to ensure consistency.
Use virtualization to isolate environments. Tools like Vagrant are great for this purpose, assuming it doesn't take too long to rebuild the test infrastructure; see the sketch after this list.
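As a minimal sketch of what "trivially easy" resets can look like, the following Python script wraps the standard Vagrant CLI to tear down and rebuild a test environment in one step. The `TEST_ENV_DIR` path and box layout are assumptions, not part of any particular setup:

```python
#!/usr/bin/env python3
"""Reset a Vagrant-backed test environment to its base configuration.

A sketch: assumes Vagrant is installed and that the Vagrantfile for
the test environment lives in TEST_ENV_DIR (a hypothetical path).
"""
import subprocess
import sys

TEST_ENV_DIR = "/srv/test-environments/webapp"  # hypothetical location

def run(args):
    """Run a vagrant subcommand in the test environment directory."""
    result = subprocess.run(["vagrant", *args], cwd=TEST_ENV_DIR)
    if result.returncode != 0:
        sys.exit(f"vagrant {' '.join(args)} failed")

if __name__ == "__main__":
    run(["destroy", "--force"])  # discard the current, possibly broken, state
    run(["up"])                  # rebuild from the base box and provisioners
```

If a full rebuild is too slow to be usable, recent versions of Vagrant also offer `vagrant snapshot` subcommands, which can restore a saved state much faster than reprovisioning from scratch.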
[1] While the term "stakeholders" comes to us from the world of management and bears a certain amount of distaste in the minds of most systems administrators, it's useful to be able to recognize where operational needs originate. For some services, the administrators are the main consumers or stakeholders: directory services, management tools and databases, logging and monitoring systems, and so forth are all primarily used to support infrastructure. For other systems, such as file servers and web-based applications, other groups dictate operational requirements and tolerances.
Once you have the infrastructure to perform testing, it's important to ensure that you actually do perform tests. Software developers use continuous integration systems to automate tests, and in some cases you can automate testing for deployment work using similar methods. Often, though, the kind of testing that administrators need to do is more complex.
Where programmers can often write test cases that verify the behavior of a single program, operational testing requires not only that one program behave correctly, but that an entire collection of programs behave correctly together in a specific environment. In the process of testing, you need to be able to affirmatively confirm that each component, and the deployment as a whole, behaves as expected; a sketch of such a whole-environment check follows.
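The following sketch checks an environment as a whole rather than one program at a time. The hostnames, URL, and port are hypothetical stand-ins for a real deployment:

```python
#!/usr/bin/env python3
"""Smoke-test an environment as a whole, not one program at a time.

A sketch with hypothetical endpoints: checks that the web frontend
answers over HTTP and that the database it depends on accepts
connections, so a passing run means the pieces work together.
"""
import socket
import sys
import urllib.request

FAILURES = []

def check_http(url):
    """The frontend should serve its health page with a 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            if not 200 <= response.status < 300:
                FAILURES.append(f"{url}: status {response.status}")
    except OSError as exc:  # covers URLError, HTTPError, timeouts
        FAILURES.append(f"{url}: {exc}")

def check_tcp(host, port):
    """The backing service should accept TCP connections."""
    try:
        with socket.create_connection((host, port), timeout=5):
            pass
    except OSError as exc:
        FAILURES.append(f"{host}:{port}: {exc}")

if __name__ == "__main__":
    check_http("http://test-web.example.com/health")  # hypothetical URL
    check_tcp("test-db.example.com", 5432)            # hypothetical DB host
    if FAILURES:
        sys.exit("environment checks failed:\n" + "\n".join(FAILURES))
    print("environment checks passed")
```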
There are a number of different policies at the organizational level that can help you support testing requirements. Typically these standards and practices revolve around making testing easier, less burdensome, more automated, and more integrated into existing workflows. For instance, consider the following:
Mandate reviews and signoffs for changes. Make sure that, except for fire call situations, more than one administrator is responsible for reviewing and signing off on any change. This is not possible in small teams or for some sets of changes, and while multi-signoff policies lengthen timescales considerably, fresh eyes and different perspectives are quite useful and prevent many bugs and issues.
If you manage configuration and deployment programmatically, all changes must be code reviewed before they propagate to the production system.
Integrate testing into other tools and workflows. This includes testing infrastructure that is automated (of the "continuous integration" type), connected to change requests and ticketing, or integrated with version control tools; see the hook sketch after this list.
Provide local preliminary ("dev") testing. If developers and administrators have an easy way to test changes and become familiar with the software, it's more likely that they will test code regularly and experiment in test environments before changes reach production. Lower barriers to entry are key to ensuring that developers use these systems.
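One low-friction way to wire testing into version control is a hook that refuses to push untested changes. A sketch, assuming a git-managed configuration repository and a hypothetical `run-tests.py` entry point at the repository root:

```python
#!/usr/bin/env python3
"""Git pre-push hook: run the repository's test suite before any push.

A sketch: assumes this file is installed (and made executable) as
.git/hooks/pre-push, and that the repository exposes its tests
through a hypothetical run-tests.py script.
"""
import subprocess
import sys

# A non-zero exit code from this hook aborts the push.
result = subprocess.run([sys.executable, "run-tests.py"])
if result.returncode != 0:
    sys.exit("tests failed; push aborted (git push --no-verify skips this "
             "check, and should be reserved for fire call situations)")
```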
Controlling access to resources and providing testing environments are crucial for maintaining production systems. While there is no substitute for policies and procedures that protect deployments and ensure that updates and upgrades go smoothly, it's important to provide a rollback option for when an upgrade has unforeseen consequences. Rollbacks allow you to return a deployment to a previous "known working state" if something breaks.
There are a few methods and technologies that you can use to provide rollbacks, and the right approach depends on the kind of system you're updating.
In general, you should script and automate rollbacks, like deployment processes themselves, so that it's possible to back out of an update without needing to remember the sequence of operations that you performed to update the system. Sometimes this is reasonably complicated, as in the case of operating system updates and upgrades. In other situations it may be as simple as changing a symbolic link, as in some application deployment schemes. Above all, remember to be as rigorous about rollbacks and testing as you are about the updates themselves.
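In the symbolic-link scheme mentioned above, each release lives in its own directory and a `current` link points at the one serving traffic, so rolling back is just repointing the link. A sketch with hypothetical paths and layout:

```python
#!/usr/bin/env python3
"""Roll an application back to the previous release by moving a symlink.

A sketch of the symbolic-link deployment scheme: releases live in
RELEASES_DIR (a hypothetical path) as timestamped directories, and
the `current` symlink names the release that is actually live.
"""
import os
import sys

RELEASES_DIR = "/srv/app/releases"   # hypothetical layout
CURRENT_LINK = "/srv/app/current"

def rollback():
    releases = sorted(os.listdir(RELEASES_DIR))  # timestamped names sort oldest-first
    live = os.path.basename(os.readlink(CURRENT_LINK))
    index = releases.index(live)
    if index == 0:
        sys.exit("already on the oldest release; nothing to roll back to")
    previous = os.path.join(RELEASES_DIR, releases[index - 1])
    # Build the new link beside the old one, then rename it into place:
    # os.replace() is atomic, so readers never see a missing `current` link.
    tmp = CURRENT_LINK + ".new"
    if os.path.lexists(tmp):
        os.unlink(tmp)
    os.symlink(previous, tmp)
    os.replace(tmp, CURRENT_LINK)
    print(f"rolled back: {live} -> {releases[index - 1]}")

if __name__ == "__main__":
    rollback()
```

Deploying a new release is the same operation in reverse: unpack into a fresh directory, test it, then repoint the link.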
See also
Change control software monitors systems and applications to ensure that configuration remains constant and that configuration changes are not implemented outside of normal change policies. It is typically implemented as a special kind of monitoring or intrusion detection system.
While it's important to develop policies regarding changes to production systems, it's also important to provide some method of ensuring that those systems remain intact, and that untracked changes to the production system don't impact its integrity or affect its operational conditions. Change control may help you address this requirement.
Change control is a difficult problem, and a full treatment is beyond the scope of this article. As a security practice it's reactive, and reliable change control is difficult to implement effectively. [2] For many (or most) deployments, the typical intrusion detection solutions used for change control are overblown.
Even if your deployment does not merit a change control solution, collecting some "change control data" may be useful. For instance, logins and daemon restarts may indicate tampering, and your logging and monitoring system should track these events. Also, use privilege escalation systems like sudo, which provide more logging, rather than shared privileged accounts for administrative tasks.
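As a sketch of collecting this kind of change control data, the following scans a syslog-style auth log for the events a review would start from. The log path is an assumption; it varies by distribution:

```python
#!/usr/bin/env python3
"""Summarize logins and privilege escalations from an auth log.

A sketch: assumes a syslog-style log at AUTH_LOG (/var/log/auth.log
is the Debian/Ubuntu convention; other systems differ) and simply
counts the events worth reviewing for untracked changes.
"""
from collections import Counter

AUTH_LOG = "/var/log/auth.log"  # assumption: Debian-style location

# Substrings that mark events of interest in typical sshd/sudo log lines.
MARKERS = {
    "Accepted": "ssh login",
    "sudo:": "sudo invocation",
    "session opened": "session opened",
}

counts = Counter()
with open(AUTH_LOG, encoding="utf-8", errors="replace") as log:
    for line in log:
        for marker, label in MARKERS.items():
            if marker in line:
                counts[label] += 1

for label, count in counts.most_common():
    print(f"{count:6d}  {label}")
```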
[2] Typically, if a user has sufficient access to impact a production system, they also have the ability to affect the change control monitor itself. Beyond this, change control systems cannot prevent intrusions or unwanted modifications except through Foucauldian methods, and can only report on them after the fact.
Maintaining separation between test and production environments, as well as a usable and reliable deployment system, is not a significant technological problem: it's a social and policy problem. To properly address these concerns you need some infrastructure, but what you really need is to develop policies and procedures that make sense in the context of your environment and that all of your administrators and operators can work within.
Devising policies that are functional from an administrative perspective is a requisite first step, but it's also important to ensure that the policy is sufficiently flexible. A rigid policy may not allow for timely administrative response to unforeseen bugs or system events, which can be devastating. So-called "fire call" systems are useful for providing an emergency exception; again, this is a thin technological wrapper around a policy problem.
Full-scale auditing, whether of logs in large clusters or of file system changes on any system, is often unworkable. While some level of auditing may be useful for "covering" and protecting your systems, the truth is that it's not possible to fully audit production and test systems. In light of this, the most important aspects of maintaining sane deployment policies and practices are (in descending order):
Make testing infrastructure and systems available and easy to use.
It's difficult to test effectively if there aren't properly configured test machines. Furthermore, developers and administrators are unlikely to test effectively if the testing system is difficult to use.
Make sure that testing environments resemble production systems to the greatest extent possible.
The greater the differences between the test environment and the production environment, the less effective the test environment becomes at predicting what will happen in production.
Automate testing.
For important components, use automated testing methods, either with continuous integration systems or by some other means, to ensure that most routine testing is ongoing and does not require active developer initiative.
Create and test rollbacks to ensure that even if an update does not go as planned, it's possible to return to a known working state.
Limit changes to production systems.
Use access control systems and monitoring tools to ensure that production and testing systems remain consistent and don’t drift from each other.
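A sketch of a simple drift check in the spirit of this last point, comparing installed package lists captured from a test host and a production host. The capture commands and file paths are assumptions:

```python
#!/usr/bin/env python3
"""Report package-list drift between test and production hosts.

A sketch: assumes each host's package list has been captured to a
text file, one package per line (for example, the output of `dpkg -l`
or `rpm -qa` piped through sort), at the hypothetical paths below.
"""

def read_packages(path):
    with open(path, encoding="utf-8") as handle:
        return {line.strip() for line in handle if line.strip()}

test = read_packages("snapshots/test-host.packages")  # hypothetical files
prod = read_packages("snapshots/prod-host.packages")

# Set differences show what one environment has that the other lacks.
for package in sorted(test - prod):
    print(f"only on test: {package}")
for package in sorted(prod - test):
    print(f"only on prod: {package}")
```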