Note: This is a draft published without editing.
The list isn't exhaustive, and the definitions are loosely based on personal experience rather than a standard.
At its core, DevOps tries to make everything-as-code, and to make it easier to look deep into complex distributed systems. A rough overview of a DevOps workflow (as it has evolved so far) looks like this:
Continuous Integration --> Continuous Delivery --> (Continuous) Deployment --> Provisioning Infrastructure --> Config management --> Service Discovery --> Monitoring (Centralised logging + Infra monitoring + Error tracking + Metrics + Monitor thresholds) --> Alerting (Rules for alerting + Action items for alerts + Rotation mechanism + Ownership) --> Debugging (Self healing + Consolidating metrics + Toolset)
Continuous integration: Stories where someone makes a code change and something else breaks elsewhere because of it used to be common. Continuous integration (CI) solves this problem by automatically executing unit tests, integration tests, and certain sanity checks on every change.
It handles the reliability of new code changes.
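The essence of CI can be sketched as running a series of checks and failing the build as soon as one fails. A minimal sketch; the check names and structure here are illustrative, not any specific CI tool's API:

```python
# Minimal sketch of a CI run: execute checks in order, fail fast.
def run_ci(checks):
    """checks: list of (name, callable) pairs; each callable returns True/False."""
    for name, check in checks:
        if not check():
            return f"FAILED: {name}"  # stop the build on the first failure
    return "PASSED"

# Illustrative checks standing in for unit tests / sanity checks.
checks = [
    ("unit tests", lambda: 1 + 1 == 2),
    ("sanity check", lambda: isinstance("config", str)),
]
print(run_ci(checks))  # PASSED
```

Real CI systems do the same thing at a larger scale: each pipeline stage is a check, and the first failing stage marks the whole build red.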
Continuous Delivery: The objective of faster releases starts with being able to package and ship software to various platforms reliably. Continuous delivery ensures software is packaged, and versions of it are put up in storage for others to consume.
Continuous Deployment: While continuous delivery ensures the packages are ready, continuous deployment takes the automation further by releasing new versions to the production environment automatically.
Most companies don't make the deployment step continuous, for a wide variety of reasons.
All three of the above are typically handled with tools like Jenkins, GitHub Actions, GitLab CI, Bamboo, Travis CI, etc.
Some common packaging tools: Docker, Packer.
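The difference between delivery and deployment can be sketched as two pipeline steps: publishing a versioned artifact, and (optionally, automatically) pushing it to production. A hedged sketch, with an in-memory dict standing in for an artifact store:

```python
# Sketch of continuous delivery vs. continuous deployment.
artifact_store = {}   # stands in for a package registry / object storage

def deliver(app, version, payload):
    """Continuous delivery: package a version and put it up for others to consume."""
    artifact_store[(app, version)] = payload
    return (app, version)

def deploy(app, version, auto=False):
    """Continuous deployment: push a delivered artifact to production.
    Many companies keep auto=False, i.e. a manual approval gate sits here."""
    if (app, version) not in artifact_store:
        raise KeyError("artifact not delivered yet")
    if not auto:
        return "awaiting manual approval"
    return f"{app}@{version} deployed to production"

deliver("shop", "1.2.0", b"...binary...")
print(deploy("shop", "1.2.0"))             # delivery done, deployment gated
print(deploy("shop", "1.2.0", auto=True))  # fully continuous deployment
```

The `auto` flag is where most companies stop: delivery is automated end to end, but the final push to production stays behind an approval.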
Provisioning infrastructure: When infrastructure-as-code emerged as a pattern, the ability to automatically acquire required resources from the cloud on demand became essential. Community-driven tools helped people automate the provisioning part, ensuring demands for infrastructure are met in a timely manner.
Orchestration tools can be seen as siblings of provisioning tools, where resources are auto-allocated based on scheduling algorithms.
Tools: Terraform is one of the most popular; Pulumi, CloudFormation, etc. are others.
Kubernetes, Docker Swarm, ECS, etc. are some of the orchestrators.
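The core idea behind declarative provisioning tools like Terraform is reconciliation: compare the declared (desired) resources against what actually exists, and compute a plan of what to create and destroy. A simplified sketch of that diffing step; this is not Terraform's actual algorithm or API:

```python
# Sketch of the "plan" step in declarative provisioning:
# diff desired resources against actual ones.
def plan(desired, actual):
    """desired/actual: sets of resource identifiers."""
    return {
        "create": sorted(desired - actual),   # declared but missing
        "destroy": sorted(actual - desired),  # exists but no longer declared
    }

desired = {"vm-web-1", "vm-web-2", "db-main"}
actual = {"vm-web-1", "vm-old"}
print(plan(desired, actual))
# {'create': ['db-main', 'vm-web-2'], 'destroy': ['vm-old']}
```

Real tools add dependency ordering, in-place updates, and state files on top, but the diff-then-apply loop is the heart of it.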
Config management: While infrastructure automation ensures resources are delivered, and deployment decides what runs on them, config management handles how the resources are connected to each other and what goes into their configuration (specs).
The practice of config management is slowly getting outdated, due to standardised approaches (such as containerised, immutable deployments) in the software ecosystem.
Tools: Chef, Ansible, Puppet, Salt.
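A defining property of config management tools like Ansible or Puppet is idempotency: you describe the desired configuration, and a change is applied only when the current state differs. A toy sketch of that idea, with a dict standing in for real machine state (this is not any tool's actual API):

```python
# Sketch of idempotent config application, the core pattern behind
# config management tools (illustrative, not a real tool's API).
machine = {"/etc/app.conf": "port=80"}   # stands in for real host state

def ensure(path, content):
    """Apply config only if the current state differs; report what happened."""
    if machine.get(path) == content:
        return "ok (no change)"          # idempotent: nothing to do
    machine[path] = content
    return "changed"

print(ensure("/etc/app.conf", "port=8080"))  # changed
print(ensure("/etc/app.conf", "port=8080"))  # ok (no change)
```

Running the same playbook twice should be safe; the second run reports "no change" everywhere, which is exactly how these tools are meant to behave.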
Service Discovery: Service discovery gives microservices/resources the ability to automatically find each other's addresses and connect to them. It handles scenarios like a new instance being added for a particular service, or an instance turning faulty.
This advanced into a model called a service mesh, which is a sophisticated map of how services communicate with each other, at what rate, over which port numbers, etc.
Tools: Consul (offers both service discovery and service mesh)
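At its simplest, service discovery is a registry that instances register with and that clients query for healthy addresses. A minimal in-memory sketch; the names and structure are illustrative, not Consul's API:

```python
# Minimal in-memory service registry sketch.
registry = {}   # service name -> {address: healthy?}

def register(service, address):
    registry.setdefault(service, {})[address] = True

def mark_unhealthy(service, address):
    registry[service][address] = False   # e.g. the instance failed a health check

def discover(service):
    """Return only the healthy addresses for a service."""
    return sorted(a for a, ok in registry.get(service, {}).items() if ok)

register("payments", "10.0.0.5:8080")
register("payments", "10.0.0.6:8080")
mark_unhealthy("payments", "10.0.0.5:8080")
print(discover("payments"))   # ['10.0.0.6:8080']
```

This covers the two scenarios above: a new instance shows up in `discover` as soon as it registers, and a faulty one drops out once its health check fails.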
Monitoring and alerting: While the above automations are release-oriented, monitoring is feedback-oriented. Once the software is released, monitoring answers how it performs, what needs improvement, what's breaking, etc.
There are several sub-sections within monitoring: /monitoring-and-siblings/
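One of those sub-sections, monitor thresholds, boils down to comparing collected metrics against limits and emitting alerts for anything out of bounds. A hedged sketch; the metric names and thresholds are made up:

```python
# Sketch of threshold monitoring: compare metrics against limits.
def check_thresholds(metrics, thresholds):
    """Return an alert message for every metric above its threshold."""
    return [
        f"ALERT: {name}={value} exceeds {thresholds[name]}"
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    ]

metrics = {"cpu_percent": 92, "error_rate": 0.001, "disk_percent": 40}
thresholds = {"cpu_percent": 80, "error_rate": 0.05}
print(check_thresholds(metrics, thresholds))
# ['ALERT: cpu_percent=92 exceeds 80']
```

The alerting layer then takes these messages and applies the rules, action items, rotation, and ownership mentioned in the workflow above.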
Debugging: With the help of monitoring tools, a process has evolved for debugging quickly: knowing where to look, what to look for, how to spot anomalies, etc.
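"Spotting anomalies" often reduces to asking how far a new data point sits from a metric's recent behaviour. A simple z-score sketch, one of many possible approaches and not tied to any specific tool:

```python
# Simple anomaly spotting via z-score: flag points far from the mean.
from statistics import mean, stdev

def is_anomaly(history, value, threshold=3.0):
    """Flag value if it sits more than `threshold` standard deviations
    away from the mean of recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

latency_ms = [100, 102, 98, 101, 99, 103, 97, 100]
print(is_anomaly(latency_ms, 101))   # False: within normal range
print(is_anomaly(latency_ms, 250))   # True: a clear spike
```

Consolidated metrics plus a cheap detector like this is often enough to tell you where to start looking when something breaks.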