While there may be an ongoing debate about what DevOps actually is, there’s no denying its momentum, whichever camp you fall into. A recent report by the DevOps Institute revealed that DevOps practices and skills continue to be highly sought after by organizations across industries.
DevOps was born out of the desire to dismantle the dichotomy between the software development process and the traditional IT operations that followed and supported it. Software typically undergoes the iterative process of new version releases, which means the hand-off to IT operations isn’t a one-off task but rather an ongoing loop. While the goal is to have an efficient workflow to develop and release high-quality software, the siloed nature of these two spheres led to poor collaboration between the respective teams, resulting in inefficient workflows and a frustrating experience for both parties.
DevOps, as the name implies, is a merging of these two worlds. Like any other amalgamation, DevOps is the combination of a set of things such as philosophies, practices, and tools that support the overall goal of achieving quality control, infrastructure management, and efficient operations in the software release lifecycle. John Willis, a co-author of the book The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, says that DevOps has four key pillars: culture, automation, measurement, and sharing.
Each of these pillars deserves its own deep dive. However, this article will deal solely with the topic of automation and will take a closer look at some of the top tasks that DevOps should automate.
Tasks to automate
The following tasks in this section have been selected because of their importance to DevOps workflows. A number of them are holistic concepts, many of which have some overlap with each other, but each entails a number of activities. For example, building and shipping software involves crafting CI/CD workflows, which is also related to application and environment security practices, such as scanning artifacts for vulnerabilities.
What makes these tasks notable is not just their importance but also that they can be automated for a much more efficient approach to managing DevOps activities.
Building and shipping software
One of the fundamental principles of DevOps is to optimize the building and shipping of software through continuous integration/continuous delivery (CI/CD). Before DevOps, releasing software relied on a manual hand-over process between teams. In a CI/CD approach, teams instead collaborate to create an automated pipeline: a series of automated steps, such as running tests on committed changes. If the relevant quality gates are satisfied, the new software version is released to a runtime environment. CI/CD consists of two main parts, which are detailed below.
- Continuous Integration: This step focuses on running automated tests to ensure that the application is not broken when new commits are integrated into the main branch of the repository containing the source code. In addition, this is typically the phase in which the software artifact is built.
- Continuous Delivery: This step picks up where continuous integration ends. The main goal of continuous delivery is to release the latest changes to customers quickly. Continuous delivery ensures that there’s an automated way to push changes that have successfully passed the automated tests to a respective runtime environment.
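To make the two parts concrete, here is a minimal sketch of such a pipeline expressed as a GitHub Actions workflow. The branch name, `make` targets, and deploy script are hypothetical placeholders; real pipelines will differ.

```yaml
# .github/workflows/ci-cd.yml — hypothetical pipeline sketch
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run automated tests      # CI: quality gate on every commit
        run: make test
      - name: Build artifact           # CI: produce the deployable artifact
        run: make build

  deploy:
    needs: test-and-build              # CD: runs only if the quality gate passes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Release to runtime environment
        run: ./scripts/deploy.sh       # hypothetical deploy script
```

The `needs` keyword is what encodes the quality gate: the deploy job never starts unless the test-and-build job succeeds.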
Application and environment security
As DevOps has grown in adoption, the need to integrate security best practices to mitigate security threats has also grown. The term DevSecOps is now often used to encapsulate the various security measures that are implemented and integrated into the software development and release lifecycle. Similar to the original issue that DevOps sought to address, DevSecOps eliminates the idea that security is the responsibility of a dedicated, siloed team. Instead, security best practices are integrated into the development and operations processes.
Examples of such security practices include automatically scanning container images and container registries for dependency vulnerabilities based on CVE database records. In addition, software teams can carry out scanning of application source code or IaC in repositories for any bad practices. Some platforms that offer DevSecOps tools are NeuVector, Snyk, Checkmarx, SonarQube, and AquaSec.
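As one illustration of shifting security into the pipeline, an image scan can run as a CI step so that vulnerable builds never reach production. This sketch uses Trivy, Aqua Security’s open-source scanner; the image name is hypothetical.

```yaml
# Hypothetical CI step: fail the build if the image has known HIGH/CRITICAL CVEs
- name: Scan container image for vulnerabilities
  run: trivy image --exit-code 1 --severity HIGH,CRITICAL myorg/myapp:latest
```

The non-zero exit code causes the pipeline to fail, turning the scan into an enforced quality gate rather than an advisory report.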
Infrastructure provisioning and management
Infrastructure is the foundational layer of the runtime environments of software applications. Previously, IT operations required a manual approach to provisioning and maintaining IT infrastructure. Cloud computing, on the other hand, offers organizations greater flexibility and operational agility by allowing them to automatically provision computing resources on demand. All major cloud providers, such as Amazon Web Services (AWS), Google Cloud Platform, and Microsoft Azure, offer APIs that enable the necessary automation of resources in their environments.
Using code, your application’s underlying infrastructure and configuration can be both automated and versioned. This approach is known as infrastructure as code (IaC), and it enables you to write and execute code to define, deploy, update, and destroy infrastructure in an automated way. As a result, software teams can have a simpler and more efficient way to provision infrastructure, as well as a more reliable outcome when managing it.
Examples of IaC tools include:
- AWS CloudFormation
- Azure Resource Manager
- Google Cloud Deployment Manager
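As a small illustration of IaC, the following CloudFormation template declares a single versioned S3 bucket. The bucket name is hypothetical and would need to be globally unique.

```yaml
# Hypothetical CloudFormation template: one versioned S3 bucket
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AppDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-app-data-bucket    # hypothetical; S3 names must be globally unique
      VersioningConfiguration:
        Status: Enabled
```

Because the template is just text, it can live in version control alongside the application code and be deployed repeatably, for example with `aws cloudformation deploy --template-file template.yaml --stack-name app-storage`.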
Scaling applications and infrastructure
Two critical aspects of running application workloads are maintaining high availability and optimal performance, even in the midst of changes in incoming traffic. In a cloud environment, software teams can take advantage of the elasticity that cloud providers offer by configuring compute resources to automatically respond to changes in traffic. Resources can scale vertically or horizontally in accordance with environment changes.
Using AWS as an example, you can deploy your application in what is known as an Auto Scaling Group (ASG). An ASG is a collection of EC2 instances treated as a logical group. Furthermore, ASGs allow you to specify a template that details the type of instances that can be automatically provisioned in response to specific conditions. Other examples of AWS services that can be used to scale applications and infrastructure are Amazon EKS and ECS, and AWS Lambda. The latter is a very popular serverless, event-driven service that integrates well with other platforms. For example, you can set up alarms to trigger a Lambda function that contains logic to provision additional resources for an application or data store that needs to scale with increased traffic.
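The alarm-triggered Lambda pattern above can be sketched as follows. The thresholds, event fields, and function names are hypothetical, and the boto3 call that would actually apply the new capacity is omitted so the scaling logic stays self-contained.

```python
# Hypothetical scaling logic for a Lambda triggered by a CloudWatch alarm.
# Applying the result would use boto3's autoscaling client (omitted here).

def desired_capacity(current: int, avg_cpu: float,
                     min_size: int = 2, max_size: int = 10) -> int:
    """Scale out on high CPU, scale in on low CPU, clamped to group bounds."""
    if avg_cpu > 75.0:
        target = current + 1   # scale out one instance at a time
    elif avg_cpu < 25.0:
        target = current - 1   # scale in cautiously
    else:
        target = current       # within the comfortable band
    return max(min_size, min(max_size, target))


def handler(event, context):
    """Entry point for the (hypothetical) alarm-triggered Lambda."""
    current = event["current_capacity"]
    avg_cpu = event["average_cpu"]
    return {"desired_capacity": desired_capacity(current, avg_cpu)}
```

Keeping the decision logic in a pure function like `desired_capacity` makes it easy to unit test the scaling policy without touching any cloud APIs.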
Lifecycle of configuration data
Modern software development often includes adopting a workflow that consists of multiple isolated environments in order to improve the process of developing, testing, and releasing software to a production environment. This approach helps teams produce more reliable software that has been tried and tested before its production release. Configuration data is exposed as environment variables for the different runtime environments. However, this introduces the challenge of managing the lifecycle of this configuration data.
One traditional approach is to create, share, and manage `.env` files. However, this means teams have to deal with the administration of securely and efficiently sharing updated environment variables in file format, whilst trying to keep a standardized approach to managing these files within the team. This is very risky, because the files offer no fine-grained access control: sensitive data is visible to anyone who obtains a copy.
Alternatively, software teams can make use of secret managers like HashiCorp Vault, AWS Secrets Manager, or Google Cloud Secret Manager. Platforms like these enable teams to easily store, modify, access, and automatically rotate their secret values. They also offer fine-grained access control and audit logs to see when values were accessed, and by whom. Secret managers can be used through an online dashboard, a CLI, or language-specific SDKs, which helps automate the lifecycle of the values, as well as their accessibility to application environments.
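As a minimal sketch of the SDK route, the helper below fetches and caches a JSON secret from an AWS Secrets Manager-style client. The client is injected rather than constructed inside the function (in production it would come from `boto3.client("secretsmanager")`), so the lookup logic can be exercised without cloud credentials; the secret ID used later is hypothetical.

```python
import json
from typing import Any, Dict

# Hypothetical wrapper around a Secrets Manager client. Caching avoids
# repeated network calls for the same secret within one process.
_cache: Dict[str, Any] = {}

def get_secret(client, secret_id: str) -> Dict[str, Any]:
    """Fetch and parse a JSON secret, caching it for reuse."""
    if secret_id not in _cache:
        response = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(response["SecretString"])
    return _cache[secret_id]
```

With this approach, IAM policies on the secret provide the fine-grained access control and audit trail described above, instead of relying on who happens to hold a `.env` file.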
Monitoring and logging
In many ways, deploying your software into production is the start of another phase of operations: monitoring and logging your application and the underlying infrastructure. Monitoring allows you to keep a close eye on various key metrics, such as your resource consumption, workload performance, network performance, and system errors. It helps avoid issues ranging from inefficient use of resources to tracking down the root cause of unexpected costs that may crop up as part of the lifecycle of running your workloads. Logging, on the other hand, provides insight into system events related to input, output, and processing. This is especially useful for debugging and auditing.
There are tools that can be used for automated monitoring and logging and the activities they entail, such as consolidating metrics in a graphical dashboard, streaming logs to the desired destination, or sending alerts to communication channels in the case of certain system events. Examples of such solutions are Prometheus, Grafana, Logstash, and Fluentd.
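One small, concrete step toward automated log processing is emitting structured logs, so collectors like Logstash or Fluentd can parse events without custom rules. This sketch uses only Python’s standard library; the logger name and message are illustrative.

```python
import json
import logging

# Minimal sketch of structured (JSON) logging with the standard library.
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # Emit one JSON object per log line for machine-readable ingestion.
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order processed")  # emitted as a single JSON line
```

Fields such as timestamps, request IDs, or trace IDs can be added to the same dictionary as needed by your log pipeline.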
Backing up data
Another critical task that has to be automated is backing up your database and any important data related to your application, including source code. While data loss can be caused by malicious actors such as attackers or disgruntled employees, it can also be caused by natural disasters, provider outages, or—most commonly and most difficult to avoid—simple human error. Data losses can result in more than “just” losing vital data. You also lose consumer trust and can incur high costs for failing to have a secure backup and recovery strategy.
It’s important to make regular backups and to have the ability to quickly and easily restore the backup so you can recover from the damage in such incidents. This can be achieved with a custom-built solution, but a dedicated service such as Rewind can provide increased ease of use, especially when it comes to tasks like automating or restoring from backups.
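Automating backups typically also means automating retention: deciding which old backups to prune so storage costs stay bounded while recovery points survive. The policy below is a hypothetical example (keep everything recent, plus the first backup of each month), not a recommendation for any particular product.

```python
from datetime import datetime, timedelta
from typing import List

# Hypothetical retention policy for automated backups: keep every backup
# from the last `keep_days` days, plus the first backup of each calendar
# month for longer-term recovery. Timestamps are assumed to be UTC.
def backups_to_delete(timestamps: List[datetime], now: datetime,
                      keep_days: int = 7) -> List[datetime]:
    cutoff = now - timedelta(days=keep_days)
    keep = set()
    monthly_seen = set()
    for ts in sorted(timestamps):
        if ts >= cutoff:
            keep.add(ts)                        # recent: always keep
        elif (ts.year, ts.month) not in monthly_seen:
            keep.add(ts)                        # first backup of that month
            monthly_seen.add((ts.year, ts.month))
    return [ts for ts in timestamps if ts not in keep]
```

A scheduled job could run this after each backup and delete the returned timestamps; a managed service handles this kind of policy for you.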
In this article, you’ve learned about why automation is one of the pillars of DevOps, as well as some of the most important tasks to automate. These tasks included the building and shipping of software, application and environment security, scaling and management of infrastructure, managing configuration data, monitoring, logging, and backups.
When it comes to protecting your data, Rewind allows you to easily schedule backups and, just as importantly, restore them with just a few clicks. Sure, you could build and maintain your own backup solution: but what else could your devs be doing for your customers if they could rely on automated backups?