What Is the CI/CD Pipeline?

A continuous integration and continuous delivery/deployment (CI/CD) pipeline is a series of steps that software delivery undergoes from code creation to deployment. Foundational to DevOps, CI/CD streamlines application development through automation of repetitive tasks, which enables early bug detection, reduces manual errors, and accelerates software delivery.

CI/CD Pipeline Explained

CI/CD encompasses a series of automated processes — from code development to production deployment — that enable frequent and reliable delivery of code changes to the production environment. It forms the backbone of DevOps, a shift in software development that emphasizes collaboration between development and operations teams to ultimately shorten the development lifecycle without compromising software quality.

Embodying the core principles of DevOps, the CI/CD pipeline bridges the gap between development, testing, and operations. In this collaborative environment, CI/CD promotes a culture of shared responsibility for a product's quality and timely delivery.

Figure 1: Various steps in the CI/CD pipeline

Continuous Integration (CI)

Continuous integration (CI) is a practice in software development where developers regularly merge their code changes into a central repository. After each merge, automated build and test processes run to ensure the new code integrates with the existing codebase without introducing errors. In this way, CI minimizes the historic struggle of merging large batches of changes at the end of a development cycle.

Continuous Delivery and Deployment (CD)

Continuous delivery and continuous deployment, both abbreviated as CD, deal with the stages following CI. Continuous delivery automates the release process, maintaining a state where any version of the software can be deployed to a production environment at any given time. It keeps the software in a deployable state, despite constant changes. Continuous deployment goes a step further by automatically deploying every change that passes the automated tests to production, minimizing lead time.

Both continuous delivery and continuous deployment involve automatically deploying the application to various environments, such as staging and production, using predefined infrastructure configurations. The CD pipeline incorporates additional testing, such as integration, performance, and security assessments, to guarantee the quality and reliability of the application.

Continuous Delivery vs. Continuous Deployment

The primary difference between continuous delivery and deployment lies in the final step of moving changes to production. In continuous delivery, the final step of deployment is a manual process, providing a safety net for catching potential issues that automated tests might miss. In contrast, continuous deployment automates the entire pipeline, including the final deployment to production, requiring a strict testing and monitoring setup to identify and fix issues.

In other words, CI/CD can refer to one of two approaches.

  1. Continuous integration and continuous delivery (CI/CD)
  2. Continuous integration and continuous deployment (CI/CD)

By implementing a CI/CD pipeline, organizations can achieve faster time-to-market, continuous feedback loops, and improved software quality. CI/CD empowers development, operations, and security teams to work together, enabling the delivery of secure, stable, and highly performant applications.

Figure 2: CI/CD division of steps in the pipeline

How CI/CD Works: A Day in the Life of the Pipeline

The CI/CD pipeline's day begins with a developer's first cup of coffee. As the developer settles in, they pull the latest code from the version control system, Git. Equipped with the most recent changes, they dive into the day's work — crafting new features and squashing bugs.

Once the developer completes their task, they commit their changes to a shared repository. This action sets the CI/CD pipeline in motion. The pipeline, configured with webhooks, detects the commit and triggers the build stage. Using a tool like Jenkins or CircleCI, the pipeline compiles the source code into an executable. If the codebase is a Java application, for instance, this would involve running a Maven or Gradle build.
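The trigger decision itself is simple enough to sketch in a few lines. This hypothetical example imitates the shape of a Git host's push-event payload; real webhook schemas differ by provider, so treat the field names as assumptions:

```python
import json

# Hypothetical sketch of the webhook trigger decision. The payload shape
# loosely imitates a Git host's push event; real schemas differ by provider.

def should_trigger_build(payload: str, watched_branch: str = "main") -> bool:
    """Build only pushes to the watched branch that contain commits."""
    event = json.loads(payload)
    branch = event.get("ref", "").rsplit("/", 1)[-1]  # "refs/heads/main" -> "main"
    return branch == watched_branch and bool(event.get("commits"))

push = json.dumps({"ref": "refs/heads/main",
                   "commits": [{"id": "abc123", "message": "fix login bug"}]})
```

In practice the CI server performs this filtering itself, typically driven by branch rules in its configuration rather than hand-written code.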

Next, the pipeline packages the application into a deployable artifact. For a web application, this might involve creating a Docker image. The pipeline then pushes this image to a Docker registry, such as Docker Hub or a private registry hosted on AWS ECR or Google Container Registry.

With the build complete, the pipeline moves to the test stage and spins up a test environment, often using a container orchestration tool like Kubernetes. It deploys the application to this environment and runs a suite of automated tests. These tests could include unit tests run by JUnit, integration tests run by a tool like Postman, and end-to-end tests run by Selenium.

Assuming the tests pass, the pipeline proceeds to the deployment stage where it tears down the test environment and spins up a production environment. The pipeline then deploys the application to this environment, often using a blue/green deployment strategy to minimize downtime and facilitate quick rollback when needed.
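The blue/green idea can be modeled as a tiny traffic router: two identical environments, one live, with a cutover only after the idle environment passes a health check. The class below is an illustrative toy, not any real deployment tool's API:

```python
# Toy router illustrating blue/green: two identical environments, one live.
# A release goes to the idle environment, and traffic flips only if the
# health check passes; otherwise traffic never moves, which is the rollback path.

class BlueGreenRouter:
    def __init__(self):
        self.live = "blue"    # serving production traffic
        self.idle = "green"   # target for the next release

    def deploy(self, healthy: bool) -> str:
        """Return the environment serving traffic after the deployment."""
        if not healthy:
            return self.live  # failed health check: no cutover
        self.live, self.idle = self.idle, self.live
        return self.live

router = BlueGreenRouter()
```

Because the previous environment stays warm after a cutover, rolling back is just flipping the router again rather than redeploying old code.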

Figure 3: The cyclical nature of the continuous integration and continuous delivery/deployment pipeline

Throughout the day, the pipeline repeats this process for each new commit. It also handles tasks such as managing database migrations with tools like Flyway or Liquibase, running static code analysis with SonarQube, and even autoscaling the production environment based on traffic patterns.

The pipeline also provides real-time feedback to the development team. It sends notifications of build results to a Slack channel, creates tickets in Jira for failed builds, and updates a dashboard with real-time metrics on the pipeline's performance.

As the day ends, the CI/CD pipeline stands ready for the next commit, continuing its mission to deliver high-quality software at a rapid pace. The pipeline's day may be repetitive, but each repetition brings the team nearer their goal of delivering value to users.

Stages of a CI/CD Pipeline

As a technology-driven process, CI/CD integrates with version control systems, build servers, and other development tools. The standard pipeline comprises several stages, each designed to validate the code from different angles and confirm its readiness for deployment.

When a developer commits code to the version control repository, the pipeline springs into action, automating the source, build, test, and deploy stages.
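The essential control flow of those stages is a loop that halts at the first failure. The sketch below uses placeholder stage functions standing in for real source, build, test, and deploy steps; it is not any particular CI server's configuration format:

```python
# Placeholder stages standing in for real source/build/test/deploy steps; the
# loop halts at the first failure, which is the essential pipeline rule.

def run_pipeline(stages):
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:
            break  # a failed stage stops everything downstream
    return results

stages = [
    ("source", lambda: True),   # commit detected and checked out
    ("build",  lambda: True),   # compile and package
    ("test",   lambda: False),  # a test failed here
    ("deploy", lambda: True),   # never reached
]
outcome = run_pipeline(stages)
```

Note that the deploy stage never runs when testing fails, which is exactly the guarantee the pipeline exists to provide.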

Source Phase

The source stage involves the version control system where developers commit their code changes. The CI/CD pipeline monitors the repository and triggers the next stage when a new commit is detected. Git, Mercurial, and Subversion are popular version control systems.

Build Phase

During the build stage, the CI/CD pipeline compiles the source code and creates executable artifacts. The build stage may also involve packaging the code into a Docker container or another format suitable for deployment. The build process should be repeatable and consistent to provide reliability.

Test Phase

The test phase of the CI/CD pipeline involves running a series of automated tests on the built artifacts. Tests can include unit tests, integration tests, and end-to-end tests. Test automation is crucial at this stage to quickly identify and fix issues.

Deploy Phase

The deploy stage is the final stage of the CI/CD pipeline. With a continuous delivery setup, the deploy stage prepares the release for manual deployment. In a continuous deployment setup, the pipeline automatically deploys the release to the production environment.

Types of CI/CD Pipelines

A CI/CD pipeline for a simple program typically involves stages like source, build, test, and deploy. Developers commit code to a version control system like Git. The pipeline triggers a build process to compile the code and create artifacts. Automated tests run against these artifacts for quality assurance. If tests pass, the pipeline deploys the artifacts to a production environment. Tools like Jenkins, CircleCI, or GitLab CI/CD can orchestrate this process.

Cloud-Native CI/CD Pipelines

A cloud-native CI/CD pipeline leverages the inherent modularity of microservices to facilitate independent development and deployment. Each microservice has its own pipeline, allowing for isolated testing, building, and deployment, which reduces the risk of cascading failures and enhances the speed of delivery.

Security in the microservices-based pipeline is enforced at multiple levels, one of which involves treating each microservice as a potential security boundary with its own set of permissions and controls. Following container security practices such as image scanning and runtime protection safeguards the integrity of microservices.

Service meshes such as Istio or Linkerd are common cloud-native pipeline technologies, providing a uniform way to secure, connect, and monitor microservices by enabling mutual TLS and similar features for service-to-service communication.

Cloud-native CI/CD pipelines leverage cloud-based tools for code repositories, build servers, and deployment targets. A pipeline in AWS, for instance, might use CodeCommit for source control, CodeBuild for building and testing, and CodeDeploy for deployment. These pipelines can scale on demand, integrate with cloud-native features, and offer pay-as-you-go pricing.

Kubernetes-Native Pipelines

Kubernetes’ extensible architecture aligns with CI/CD principles, supporting rapid and reliable application delivery. The Kubernetes-native pipeline operates directly within a Kubernetes cluster, leveraging its features for orchestration, scaling, and management of containerized applications. It can deploy containerized applications across multiple clusters, handle rollbacks, and manage service discovery. The pipeline stages, including building, testing, and deploying, are run as Kubernetes jobs or pods, providing isolation and resource control.

Security in Kubernetes-native pipelines involves Kubernetes-specific practices. Role-based access control (RBAC) is used to limit the permissions of pipeline stages, reducing the blast radius of potential security issues. Pod security policies can fortify the security posture by restricting the capabilities of containers running the pipeline stages.
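The effect of per-stage RBAC can be illustrated with a toy permission check. This is a conceptual sketch, not the Kubernetes RBAC API; the role names and verbs are invented to show that a compromised stage can only do what its role grants:

```python
# Conceptual sketch of least-privilege roles per pipeline stage. Not the
# Kubernetes RBAC API; role names and verbs are invented for illustration.

ROLES = {
    "build-bot":  {"read-source", "push-image"},
    "test-bot":   {"read-source", "create-pod"},
    "deploy-bot": {"read-source", "update-deployment"},
}

def allowed(role: str, verb: str) -> bool:
    """Deny by default: unknown roles and unlisted verbs are refused."""
    return verb in ROLES.get(role, set())
```

With roles scoped this way, an attacker who compromises the test stage can create pods but cannot push images or touch production deployments.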

CI/CD tools like Jenkins X, Tekton, and Argo CD are designed for Kubernetes-native pipelines, offering features like environment promotion via GitOps and preview environments for pull requests.

CI/CD Pipeline for a Monorepo

A monorepo is a repository that contains more than one logical project. The CI/CD pipeline for a monorepo needs to efficiently handle changes across multiple projects. It should only build and test the projects affected by a commit, not the entire repository.

Developers can use advanced CI/CD tools like Bazel or Google's Cloud Build to create a dependency graph of the codebase from which they can then rebuild and retest only the parts of the codebase that depend on the changed code.
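The affected-project computation reduces to reverse reachability over the dependency graph: the changed projects plus every transitive dependent. The five-project monorepo below is invented for illustration, and this is the general idea rather than Bazel's actual algorithm:

```python
# Invented five-project monorepo; "affected" is the changed set plus every
# transitive dependent, found by walking an inverted dependency graph.

def affected(changed, deps):
    """deps maps each project to the set of projects it depends on."""
    dependents = {}
    for project, needs in deps.items():
        for need in needs:
            dependents.setdefault(need, set()).add(project)
    result, stack = set(changed), list(changed)
    while stack:
        for dep in dependents.get(stack.pop(), ()):
            if dep not in result:
                result.add(dep)
                stack.append(dep)
    return result

deps = {"web": {"api", "ui-kit"}, "api": {"core"},
        "worker": {"core"}, "ui-kit": set(), "core": set()}
```

A change to a leaf library like "core" fans out to everything that depends on it, while a change to "ui-kit" rebuilds only the web frontend, which is how monorepo pipelines stay fast.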

Security in a monorepo CI/CD pipeline focuses on isolating changes so that an issue in one project can't spread to other components. Automated testing and static code analysis identify potential security issues early in the pipeline, and code review practices should be rigorous to maintain the integrity of the monorepo.

CI/CD in the Cloud

Cloud platforms offer powerful capabilities for implementing CI/CD pipelines, including elastic scalability, high availability, and built-in disaster recovery mechanisms.

The elasticity of cloud resources supports the dynamic scaling of CI/CD processes based on workload, promoting efficiency and cost optimization. CI/CD in the cloud also supports distributed development teams, enhancing collaboration and enabling a global approach to software development.


CI/CD in AWS

Amazon Web Services (AWS) provides a suite of tools for implementing a CI/CD pipeline. AWS CodeCommit, a fully managed source control service, hosts secure Git repositories, facilitating collaborative coding and version control.

AWS CodeBuild, a managed build service, compiles source code, runs tests, and produces ready-to-deploy software packages. AWS CodePipeline, a continuous integration and continuous delivery service, orchestrates the workflow from source code to deployment, allowing you to model, visualize, and automate your software release process.

AWS CodeDeploy, an automated deployment service, facilitates application deployments to various AWS services like Amazon EC2, AWS Lambda, and Amazon ECS. AWS also integrates with popular open-source tools, providing a flexible and comprehensive CI/CD solution.

CI/CD in Azure

Azure Pipelines, a cloud service, supports both continuous integration and continuous delivery and is compatible with any language and platform, providing a versatile solution for diverse development environments.

Azure Repos provides unlimited cloud-hosted private Git repositories, enabling teams to collaborate and manage their code effectively. Azure Test Plans is a comprehensive solution for managing, tracking, and planning testing efforts, ensuring the delivery of high-quality software.

Azure also offers a range of extensions and integrations with popular open-source tools, enhancing its capabilities as a CI/CD platform.

CI/CD in Google Cloud

Google Cloud Platform (GCP) offers Cloud Build for CI/CD, a serverless product that enables developers to build, test, and deploy software in the cloud. Cloud Build allows you to define custom workflows for building, testing, and deploying across multiple environments such as VMs, serverless, Kubernetes, or Firebase.

Google Cloud Source Repositories, a single place for teams to store, manage, and track code, offers a secure, scalable, and highly available Git repository. GCP also integrates with popular open-source tools like Git, Jenkins, and Spinnaker, providing a flexible and customizable CI/CD solution.

CI/CD in IBM Cloud

IBM Cloud offers a comprehensive set of tools for implementing a CI/CD pipeline. IBM Cloud Continuous Delivery service provides toolchains that include open tool integrations and templates to automate building, deploying, and managing applications.

IBM Cloud Code Engine is a fully managed serverless platform that runs your containerized workloads, including web apps, microservices, event-driven functions, or batch jobs. IBM Cloud also integrates with popular open-source tools like Git, Jenkins, and Tekton, making it a versatile choice for CI/CD implementation.

CI/CD Pipeline Best Practices

To enhance your DevOps workflow and software delivery, incorporate the following best practices into your development lifecycle.

Single Source Repository

A single source repository serves as your source code management (SCM) system, centralizing the storage of all the files and scripts required to create builds. The repository should include everything from source code, database structure, and libraries to properties files and version control information. It should also house test scripts and the scripts used to build applications.

Working from a single source repository enhances collaboration, promotes consistency, simplifies version control, reduces the risk of conflicts, and makes it easier to track changes.

Build Once

Compile the code and create build artifacts only once and then promote the artifacts through the pipeline. This practice promotes consistency by preventing discrepancies that might arise from building the code at every stage.
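One way to enforce build-once promotion is to fingerprint the artifact at build time and re-verify the digest at every later stage, refusing to promote anything that differs. The sketch below uses a SHA-256 hash; the artifact is a stand-in byte string rather than a real binary:

```python
import hashlib

# Illustrative "build once" check: fingerprint the artifact when it is built,
# then refuse to promote any bytes whose digest differs from that record.

def fingerprint(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

def promote(artifact: bytes, expected_digest: str) -> bool:
    """A stage promotes the artifact only if it matches what was built."""
    return fingerprint(artifact) == expected_digest

built = b"app-v1.4.2-binary"
digest = fingerprint(built)
```

Container registries apply the same principle: promoting an image by its immutable digest, rather than rebuilding or re-tagging, guarantees that what was tested is what ships.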

Automate Build Process

The practice of automated builds, or converting code into a deployable artifact, reduces human error and accelerates the development process. Your build scripts should be comprehensive, allowing you to build everything from a single command — web server files, database scripts, application software, etc. The CI processes should automatically package and compile the code into a usable application.

Prioritize Automation Efforts

Automate as much as possible, from code integration, testing, and deployment to infrastructure provisioning and configuration. Automation increases efficiency while guaranteeing repeatability. Once developers push code to the shared repository, the CI server automatically triggers a build-and-test process, highlighting any issues on the fly. The process significantly lessens the time and effort spent on manual integration, leaving developers free to focus on code enhancements.

Test Early and Often

Incorporate automated testing into the early stages of the pipeline. Run unit tests after the build stage, followed by integration tests and end-to-end tests. Design testing scripts so that failing tests yield a failed build.

Use Clone-Testing Environments

Conduct testing in an environment that mirrors the production environment rather than testing new code in the live production version. Use rigorous testing scripts in this cloned environment to detect and identify bugs that may have slipped through the initial prebuild testing process.

Deploy Frequently

Frequent deployments reduce the batch size of changes, making it easier to identify and fix issues. They also accelerate feedback, make rollbacks more feasible, and reduce the time to deliver value to users.

Make the CI/CD Pipeline the Only Way to Deploy

Disallow manual deployments to production. All changes should go through the pipeline to ensure that every change is tested, consistent, and traceable.

Demand Visibility

Development teams should have access to the latest executables, as well as a line of sight to any changes made to the repository. Version control should be used to manage handoffs so that developers know which version is the latest.

Optimize Feedback Loop

Enable the pipeline to provide quick and useful feedback. Developers should be notified immediately if their changes break the build or fail tests. Fast feedback enables quick remediation and keeps the pipeline flowing.

Clean Environments with Every Release

Automate the cleanup of testing and staging environments after each release to save resources and allow each deployment to start with a clean state.

CI/CD Pipeline KPIs

Cycle or Deployment Time

Cycle time, also known as deployment time, measures the duration from code commit to production deployment. It's a key indicator of the efficiency of the CI/CD pipeline. Shorter cycle times mean faster delivery of value to users and quicker feedback for developers.

Development Frequency

Development frequency refers to how often code changes are committed to the version control system. High development frequency indicates an active development process associated with smaller, manageable changes that reduce the risk of errors.

Change Lead Time

Change lead time measures the period from when a change is committed to when it's deployed, serving as a measure of the speed of the CI/CD pipeline. Shorter lead times mean quicker realization of value and faster feedback loops.

Change Failure Rate

Change failure rate is the percentage of changes that result in a failure in production. A low change failure rate indicates a high-quality software delivery process. Factors such as testing quality, code review practices, and deployment practices influence change failure rate.


Mean Time to Recovery and Mean Time to Failure

Mean time to recovery (MTTR) and mean time to failure (MTTF) reflect the reliability of the CI/CD pipeline. MTTR measures the average time it takes to recover from a failure, while MTTF measures the average time between failures. Lower MTTR and higher MTTF indicate a more reliable pipeline.
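Two of these KPIs can be computed directly from a deployment log. The record format and the sample numbers below are invented for illustration:

```python
# Change failure rate and MTTR computed from a toy deployment log. The record
# format and the sample numbers are invented for illustration.

def change_failure_rate(deploys):
    """Fraction of deployments that failed in production."""
    failures = [d for d in deploys if not d["ok"]]
    return len(failures) / len(deploys)

def mttr(deploys):
    """Mean time to recovery, in minutes, across failed deployments."""
    recoveries = [d["recovery_min"] for d in deploys if not d["ok"]]
    return sum(recoveries) / len(recoveries) if recoveries else 0.0

deploys = [
    {"ok": True,  "recovery_min": 0},
    {"ok": False, "recovery_min": 30},
    {"ok": True,  "recovery_min": 0},
    {"ok": False, "recovery_min": 50},
]
```

In real pipelines these numbers come from deployment and incident-tracking systems rather than a hand-built list, but the arithmetic is the same.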

Video 1: Shifting toward modernized methodologies with new tech and DevOps

CI/CD Tools

Continuous Integration Tools


Codefresh

Codefresh, a CI/CD platform designed for Kubernetes, supports the complete lifecycle of application development from commit to deployment. Its distinctive Docker-native infrastructure enables fast and isolated builds, providing a versatile environment for developing, testing, and deploying containerized applications.

Bitbucket Pipelines

Bitbucket Pipelines is an integrated CI/CD service built into Bitbucket. It allows development teams to automatically build, test, and deploy code based on a configuration file in their repository. Its tight integration with Bitbucket and the Atlassian suite of tools can significantly improve the workflow for teams already embedded in the Atlassian ecosystem.


Jenkins

Jenkins is an open-source automation server that enables developers to reliably build, test, and deploy their software. It offers extensive plugin support and distributed builds, making it a highly flexible tool for complex CI/CD pipelines.


CircleCI

CircleCI is a modern continuous integration and delivery platform that supports rapid software development and release. With a focus on simplicity and efficiency, CircleCI offers smart automatic caching, parallelism, and job orchestration to optimize the software delivery process.


Bamboo

Bamboo, another tool from the Atlassian suite, provides continuous integration and delivery capabilities, with built-in Git and JIRA software integration. Though not as extensible as Jenkins, Bamboo's out-of-the-box features offer a more straightforward setup to development teams needing a fast and simple implementation.

GitLab CI

GitLab CI, an integral part of GitLab, is a robust solution that supports the entire DevOps lifecycle. GitLab CI offers flexible pipeline configurations and tight integration with GitLab's source control and issue tracking, providing an all-in-one solution for software development and deployment.

Continuous Delivery and Deployment Tools


Codefresh

Codefresh, besides providing CI capabilities, also supports continuous delivery. Its environment isolation and Helm chart support allow efficient and reliable delivery of Kubernetes applications.

Argo CD

Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It leverages Git repositories as a source of truth for defining applications and automatically syncs the application when changes are detected in the repository.
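The GitOps reconciliation idea can be reduced to a diff between desired state (what Git declares) and live state (what the cluster reports). The dictionaries below stand in for real manifests; this is a conceptual model, not Argo CD's implementation:

```python
# Toy model of the GitOps loop: Git supplies desired state, the cluster
# reports live state, and the controller computes what to apply. Dicts of
# app -> version stand in for real Kubernetes manifests.

def sync(desired: dict, live: dict) -> dict:
    """Return the changes needed to make the live state match Git."""
    changes = {}
    for app, version in desired.items():
        if live.get(app) != version:
            changes[app] = version   # create or update to the Git version
    for app in live:
        if app not in desired:
            changes[app] = None      # prune resources removed from Git
    return changes

desired = {"web": "v2", "api": "v1"}   # what Git says should run
live = {"web": "v1", "legacy": "v3"}   # what the cluster is running
```

Because the loop runs continuously, manual drift in the cluster is reverted automatically, and rolling back means reverting a Git commit rather than operating on the cluster directly.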


GoCD

GoCD is an open-source tool specialized in modeling and visualizing complex workflows for continuous delivery. Its value stream map visualizes the entire path from commit to deployment, fostering better understanding and control over the software delivery process.

AWS CodePipeline

AWS CodePipeline is a fully managed continuous delivery service that automates release pipelines for fast and reliable application updates. As part of the AWS suite, CodePipeline seamlessly integrates with other AWS services, allowing for effective management and automation of the entire release process within the AWS ecosystem.

Azure Pipelines

Azure Pipelines, part of Microsoft's Azure DevOps services, is a cloud service that provides CI/CD capabilities for applications of any language and platform. It's notable for its broad integration capabilities, able to work with most popular tools and services in the development landscape, as well as its unlimited free build minutes for open-source projects.


Spinnaker

Spinnaker, a multicloud continuous delivery platform originally developed by Netflix, offers high configurability and powerful deployment capabilities across different cloud providers. With its focus on deployment, Spinnaker supports several strategies such as blue/green and canary releases, offering a high degree of control over the delivery process.

Machine Learning CI/CD Applications


MLOps

MLOps, a compound of machine learning and operations, is designed to standardize and streamline the lifecycle of machine learning model development and deployment. It applies CI/CD principles to automate the testing, deployment, and monitoring of machine learning models, facilitating their reliable and consistent delivery.

Synthetic Data Generation Techniques

In machine learning development, synthetic data generation is a method of creating data that mimics real data. Within CI/CD pipelines, the approach is valuable for testing machine learning models, as it provides a scalable and privacy-compliant way to evaluate a model's performance and robustness.

AIOps Platforms

AIOps, short for artificial intelligence for IT operations, integrates AI and machine learning technologies into IT operations. In the context of CI/CD, AIOps can automate and enhance numerous operations tasks such as anomaly detection, event correlation, and root cause analysis, improving the efficiency and effectiveness of software delivery.

Security in CI/CD

The speed and automation of CI/CD introduce new security risks, such as:

  • Exposure of sensitive data
  • Use of insecure third-party components
  • Unauthorized access if CI/CD tools aren’t properly secured

Organizations can counter these risks by integrating security practices and tools throughout the pipeline, a practice known as DevSecOps, ensuring that the software they deliver is both functional and secure.

Secure Coding Practices

Developers should uphold secure coding practices to prevent introducing security vulnerabilities into the codebase. Practices to prioritize include input validation, proper error handling, and adherence to the principle of least privilege.
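Input validation, the first practice above, is often easiest as an allowlist: accept only a strictly defined format instead of trying to enumerate every bad input. The username policy below (3 to 16 characters, letters, digits, and underscore) is an assumed example:

```python
import re

# Assumed username policy for illustration: 3-16 characters, alphanumeric
# plus underscore. An allowlist pattern rejects anything unexpected,
# including injection payloads, instead of trying to blocklist bad input.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,16}$")

def valid_username(name: str) -> bool:
    return bool(USERNAME_RE.fullmatch(name))
```

The same allowlist mindset applies throughout the pipeline, from validating webhook payloads to constraining the parameters a deployment job will accept.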

Security Testing

Integrate automated security testing into the CI/CD pipeline. Tests such as static code analysis, dynamic analysis, and penetration testing can help pinpoint security vulnerabilities before deploying the application.

Security in Deployment

Secure the deployment process. Use secure protocols for data transmission, manage permissions and access controls during the deployment process, and monitor the application in production to detect any security incidents.

Secure CI/CD Pipeline Architecture

A secure CI/CD pipeline architecture integrates security controls at each stage of the pipeline. Use secure repositories for source control, conduct security checks during the build process, run automated security tests, and ensure secure deployment practices.

Security in Infrastructure as Code

Infrastructure as code (IaC), a key practice in DevOps, involves managing and provisioning computing infrastructure through machine-readable definition files. Security in IaC involves managing these definition files and the infrastructure they create. Encrypt sensitive data, limit access to the IaC files, and regularly audit the infrastructure for security compliance.

CI/CD Trends on the Horizon

Microservices and Serverless Architectures

As organizations increasingly adopt microservices and serverless architectures, CI/CD pipelines will need to adapt to manage more complex deployments. This includes deploying and managing multiple interdependent services, each potentially using different technologies and deployment platforms.

Artificial Intelligence and Machine Learning

AI and ML are increasingly being used to optimize CI/CD pipelines. Predicting and preventing potential issues, optimizing resource usage, and automating more complex tasks are some of the potential applications of AI and ML in CI/CD.

Infrastructure as Code (IaC)

IaC is becoming a standard practice in DevOps. As IaC tools and practices mature, they will play an increasingly important role in CI/CD pipelines.

CI/CD Pipeline FAQs

What Is Configuration Management?

Configuration management is a systems engineering process for establishing and maintaining consistency in a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life. In the context of software development, configuration management involves systematically managing, organizing, and controlling the changes in the documents, codes, and other entities during the development process.

What Is Pipeline Orchestration in CI/CD?

Orchestrating the pipeline in CI/CD refers to the process of automating and managing the sequence of tasks that take place from the moment code is committed to when it's deployed. Orchestration aims to enhance the efficiency and reliability of the pipeline by:
  • Streamlining these processes
  • Ensuring they occur in the correct order
  • Handling any dependencies between tasks

Jenkins, CircleCI, and Bamboo are common CI/CD tools for pipeline orchestration. Kubernetes is also increasingly used for this purpose, especially in microservices architectures.

What Is an Artifact Repository?

An artifact repository is a storage location for binary and other software artifacts produced during the software development process. It can include compiled code, libraries, modules, server images, or container images. Artifact repositories like JFrog Artifactory or Sonatype Nexus provide version control, metadata, and other features, making it easier to store, retrieve, and manage these artifacts.

What Is Version Control?

Version control, also known as source control, is a system that records changes to a file or set of files over time so that specific versions can be recalled later. It allows you to revert selected files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, and more.

What Is a Single Source of Truth in CI/CD?

Maintaining a single source of truth in CI/CD means having one view of information that everyone considers as the definitive version. Typically referring to the codebase in a version control system like Git, a single source of truth guarantees that development and operations team members work with the same data, reducing inconsistencies and conflicts. The single source of truth provides a reliable basis for building, testing, and deploying software.

What Are Pipeline-Based Access Controls?

Pipeline-based access controls are security measures that regulate who can interact with a CI/CD pipeline and how. They can limit who can trigger a pipeline, make changes to its configuration, or access the build results. These controls are crucial for maintaining the integrity of the development and deployment process, preventing unauthorized changes, and maintaining compliance with security policies.

What Are Common Branching Strategies for CI/CD?

Branching strategies for CI/CD include feature branching, where new features are developed in separate branches and then merged into the main branch; trunk-based development, where developers work on a single branch with short-lived feature branches; and Gitflow, which uses separate branches for development, staging, and production, each serving a different stage in the CI/CD pipeline.

What Is Trunk-Based Development?

Trunk-based development is a software development approach where all developers work on a single branch, often called 'main' or 'trunk'. Developers frequently integrate their changes into this main branch, usually once a day, promoting integration and reducing the complexity of merges.

What Is a Continuous Delivery Maturity Model?

A continuous delivery maturity model is a framework that helps organizations assess their proficiency and maturity in implementing continuous delivery practices. It typically includes several levels, from initial to managed to optimized, each with specific best practices and capabilities. The model guides organizations in identifying areas for improvement and planning their journey toward more mature practices.

What Is a Code Commit?

A code commit, in the context of version control systems, is the action of storing changes to a codebase in a repository. Each commit represents a discrete change to the code, often accompanied by a message describing the change. Commits create a history of modifications, allowing developers to track progress, understand changes, and revert to previous versions if necessary.

What Is Pipeline Execution?

Pipeline execution refers to the process of running all the tasks defined in a CI/CD pipeline, typically triggered by a code commit or a scheduled event. It involves executing stages like build, test, and deploy in sequence or in parallel, depending on the pipeline configuration. The execution can be visualized as a flow of tasks, each dependent on the successful completion of the preceding ones, ensuring the code is ready for deployment.
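That "each stage depends on the previous one" flow can be sketched in a few lines. This is an illustrative sequential runner, not any specific CI server's execution model; the stage names are assumptions for the example.

```python
def run_pipeline(stages):
    """Run callable stages in order; stop at the first failure.

    `stages` is a list of (name, task) pairs where each task returns
    True on success. Returns the names of the stages that completed.
    """
    completed = []
    for name, task in stages:
        if not task():      # a failed stage blocks everything after it
            break
        completed.append(name)
    return completed

# Example: the test stage fails, so deploy never runs.
result = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),
    ("deploy", lambda: True),
])
```

Real pipelines add parallel fan-out, retries, and artifact passing between stages, but the fail-fast sequencing above is the common core.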
Code coverage is a metric that helps measure the degree to which the source code of a program is executed when a particular test suite runs. It identifies which lines of code were executed and which were not, providing insight into the thoroughness of your testing suite. High code coverage can help prevent bugs from slipping through to production.
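To make "which lines were executed" concrete, here is a toy line-coverage tracer built on Python's `sys.settrace` hook. It is a teaching sketch, not a production tool like coverage.py; `sample` is an assumed example function with one branch.

```python
import sys

def sample(x):
    """A tiny function with a branch, used to demonstrate coverage."""
    if x > 0:
        return "positive"
    return "non-positive"

def measure_line_coverage(func, *args):
    """Run func and record which of its lines actually executed."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed
```

Calling `measure_line_coverage(sample, 1)` and `measure_line_coverage(sample, -1)` yields different line sets, which is exactly the gap a coverage report makes visible: a single test input leaves one branch untested.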
Static code analysis is a method of debugging by examining source code before a program is run. It's done by analyzing a set of code against a set (or multiple sets) of coding rules. Static application security testing (SAST), a security-focused form of static analysis, helps identify potential vulnerabilities, bugs, and breaches of coding standards, improving the quality and security of the code. Tools for static code analysis are often integrated into CI/CD pipelines.
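The essence of a static analyzer is walking the parsed source without running it. As a minimal illustration (one hand-rolled rule, not a real linter), the sketch below uses Python's `ast` module to flag bare `except:` clauses, a common coding-standard violation:

```python
import ast

def find_bare_excepts(source: str):
    """Return line numbers of bare `except:` clauses in the given source.

    A bare except swallows every exception, including KeyboardInterrupt,
    which most coding standards forbid.
    """
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
```

A CI job would run rules like this over every changed file and fail the build on violations, which is how static analysis gates land inside the pipeline.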
Unit testing is a software testing method where individual components of a software application are tested in isolation. The purpose is to validate that each unit of the software performs as expected. A unit is the smallest testable part of any software, often a function or method. Unit tests are typically automated and written by developers to verify the correctness of their code, aiding in the detection of issues early in the development cycle.
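A minimal example using Python's standard `unittest` module: `add` is an assumed unit under test, and the test case verifies it in isolation, exactly as a CI stage would before any integration step.

```python
import unittest

def add(a, b):
    """The unit under test: deliberately tiny."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_identity(self):
        self.assertEqual(add(0, 7), 7)

def run_suite() -> bool:
    """Load and run the test case; True if all tests pass."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

In practice the runner invocation is handled by the CI pipeline (e.g., a test stage that fails the build when `run_suite()`-style checks report failures).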
Integration testing is a type of software testing where individual units are combined and tested as a group. The purpose is to expose faults in the interaction between integrated units. Test cases are created with the express purpose of exercising the interfaces between the units. This activity is carried out by testers after unit testing and can follow a top-down, bottom-up, or sandwich (hybrid) approach. Integration testing can reveal issues such as interface inconsistencies, communication problems, or data-related errors that unit tests might miss.
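To illustrate "exercising the interface between units," here are two hypothetical components, a parser and a store, each of which might pass its unit tests alone, checked together so that mismatches at their boundary surface:

```python
def parse_order(line: str) -> dict:
    """Unit 1: parse 'item, qty' lines into a structured order."""
    item, qty = line.split(",")
    return {"item": item.strip(), "qty": int(qty)}

class InventoryStore:
    """Unit 2: accumulate order quantities per item."""
    def __init__(self):
        self.counts = {}

    def apply(self, order: dict):
        self.counts[order["item"]] = self.counts.get(order["item"], 0) + order["qty"]

def test_parse_and_store_integration():
    """Integration test: data produced by the parser must flow into the store."""
    store = InventoryStore()
    store.apply(parse_order("widget, 3"))
    store.apply(parse_order("widget, 2"))
    assert store.counts["widget"] == 5
```

If the parser renamed its `"qty"` key, each unit's own tests could still pass while this integration test would fail, which is precisely the class of defect the technique targets.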
Regression testing is a type of software testing that confirms that previously developed and tested software still performs as expected after changes. The goal is to catch new bugs, or regressions, caused by alterations to the software. Regression tests can be performed at any or all testing levels and are often automated to prevent the introduction of defects into previously working functionality.
Flaky tests are automated tests that exhibit both a passing and a failing result with the same code. They are unpredictable because their outcome can change without any changes to the code. Flaky tests can be caused by several factors, including timing issues, dependencies on specific states, or asynchronous operations. They can undermine trust in a testing suite and should be identified and fixed or removed.
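A common way to surface flakiness is to rerun a test many times and look for mixed outcomes. The sketch below simulates this; `flaky_test` is an assumed stand-in for a timing-dependent test, with randomness seeded so the demonstration is reproducible:

```python
import random

def rerun_to_detect_flakiness(test, runs=20, seed=0) -> bool:
    """Run a zero-argument test repeatedly; mixed results indicate flakiness."""
    random.seed(seed)  # make the demonstration deterministic
    results = {test() for _ in range(runs)}
    return len(results) > 1  # both passing and failing outcomes observed

def flaky_test() -> bool:
    """Simulates a timing-dependent test that passes only ~70% of the time."""
    return random.random() < 0.7

def stable_test() -> bool:
    return True
```

CI systems apply the same idea at scale, rerunning new or suspect tests in a loop and quarantining any test whose verdict changes without a code change.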
Feature flags, or feature toggles, are a software development technique that lets developers enable or disable features in a software product without deploying new code, making it possible to test features and quickly roll back problematic ones. Developers can use feature flags even after the software product has been deployed to production.
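The mechanism is simply a runtime lookup guarding a code path. This is a minimal in-process sketch with made-up flag names; production systems back the flag store with a config service so values can change without a redeploy:

```python
# Illustrative in-process flag store (flag names are assumptions for the example).
FLAGS = {"new_checkout": True, "beta_search": False}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)  # unknown flags default to off

def checkout(cart_total: float) -> str:
    """Route to the new or legacy flow depending on the flag."""
    if is_enabled("new_checkout"):
        return f"new flow: {cart_total:.2f}"
    return f"legacy flow: {cart_total:.2f}"
```

Flipping `FLAGS["new_checkout"]` to `False` instantly reroutes every caller to the legacy path, which is the "quick rollback" the technique provides.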
A canary release is a technique to reduce the risk of introducing a new software version in production by gradually rolling out the change to a small subset of users before rolling it out to the entire infrastructure. It's used to catch potential issues and bugs that weren't detected during the testing phase, with minimal impact on the user base.
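The "small subset of users" is usually chosen deterministically, so the same user always lands in the same cohort. A common sketch of that assignment hashes the user ID into a stable bucket (illustrative, not any vendor's routing logic):

```python
import hashlib

def in_canary(user_id: str, percent: int) -> bool:
    """Deterministically assign a user to the canary cohort.

    Hashing gives a stable bucket in [0, 100); users whose bucket
    falls below `percent` see the new version.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Ramping the rollout is then just raising `percent` (say 1 → 10 → 50 → 100) while monitoring error rates; users already in the canary stay in it as the cohort grows.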
Blue/green deployments are a release management strategy that reduces downtime and risk by running two identical production environments, known as blue and green. At any time, only one environment is live, serving all production traffic. When releasing a new version of the application, the inactive environment is updated, tested, and, once ready, switched to be the live environment. Blue/green deployments allow quick rollback if problems are detected in the new version.
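The state a blue/green setup tracks is small: two environments and a pointer to whichever one is live. A minimal sketch (illustrative version strings, not a real traffic router):

```python
class BlueGreenRouter:
    """Tracks which of two identical environments receives live traffic."""

    def __init__(self):
        self.environments = {"blue": "v1", "green": "v1"}
        self.live = "blue"

    def deploy_to_idle(self, version: str) -> str:
        """Update the environment that is NOT serving traffic."""
        idle = "green" if self.live == "blue" else "blue"
        self.environments[idle] = version
        return idle

    def switch(self):
        """Cut traffic over to the other environment.

        The previous environment stays intact, so rollback is
        simply another switch().
        """
        self.live = "green" if self.live == "blue" else "blue"
```

Deploy to the idle side, test it, then `switch()`; if the new version misbehaves, a second `switch()` restores the old one with no rebuild.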
Release orchestration refers to the process of coordinating the various tasks involved in delivering software changes to production. It includes managing dependencies between tasks, automating workflows, and ensuring that each step, from code commit to deployment, is executed in the correct order. Release orchestration tools provide visibility into the release process, helping teams to manage complex deployments and reduce risks.
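"Executed in the correct order" with "dependencies between tasks" is a topological-sort problem. Python's standard-library `graphlib` can sketch it directly; the task names below are assumptions for the example:

```python
from graphlib import TopologicalSorter

def release_order(dependencies):
    """Return an execution order that respects task dependencies.

    `dependencies` maps each task to the set of tasks it depends on.
    """
    return list(TopologicalSorter(dependencies).static_order())

# Example: deploy needs test, test needs build.
order = release_order({
    "deploy": {"test"},
    "test": {"build"},
    "build": set(),
})
```

Orchestration tools do essentially this across many services at once, plus approvals, scheduling windows, and rollback hooks layered on top of the ordering.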

Value stream mapping (VSM) is a lean-management method for analyzing the current state and designing a future state for the series of events that take a product from concept to delivery. With respect to CI/CD, VSM visualizes the flow of code changes from development to production, identifying bottlenecks, redundancies, or wastage in the process. It helps teams understand the entire delivery lifecycle, improve flow efficiency, and reduce lead time.

By mapping the value stream, organizations can make data-driven decisions to optimize their CI/CD pipelines, aligning them more closely with business objectives.
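Two numbers usually come out of a mapped value stream: lead time (total elapsed time) and flow efficiency (the share of that time spent on value-adding work rather than waiting). A small sketch with assumed stage durations:

```python
def flow_metrics(stages):
    """Compute lead time and flow efficiency from a mapped value stream.

    `stages` is a list of (name, active_hours, waiting_hours) tuples.
    Lead time = active + waiting; flow efficiency = active / lead time.
    """
    active = sum(a for _, a, _ in stages)
    waiting = sum(w for _, _, w in stages)
    lead_time = active + waiting
    return lead_time, active / lead_time

# Example stream: most of the lead time is queueing, not work.
lead, efficiency = flow_metrics([
    ("code", 4, 0),
    ("review", 1, 7),
    ("deploy", 1, 3),
])
```

In this example, 10 of 16 hours are waiting, so the biggest win is cutting review and deploy queues, not speeding up the work itself; that is the kind of data-driven conclusion VSM supports.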

Site reliability engineering (SRE) is a discipline that combines aspects of software engineering and systems engineering to build and run scalable, reliable, and efficient systems. Originating at Google, SRE implements DevOps principles with a specific focus on reliability. SREs use software as a tool to manage systems, solve problems, and automate operations tasks. Key practices include defining service level objectives (SLOs), error budgets, and toil reduction through automation. The goal is to create a balance between release velocity and system reliability.
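Error budgets make that balance quantitative: a 99.9% availability SLO leaves 0.1% of the window as allowable downtime, and releases pause when the budget is spent. The arithmetic is simple enough to sketch (window length here is an assumed 30-day rolling window):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime, in minutes, for a given availability SLO.

    e.g., a 0.999 (99.9%) SLO over 30 days permits ~43.2 minutes
    of unavailability before the error budget is exhausted.
    """
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes
```

Teams compare actual downtime against this number: budget remaining means velocity can stay high; budget exhausted means the release pipeline slows down in favor of reliability work.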