Ansible Execution Environments (EEs) are a powerful use of OCI-compliant containers to package Ansible automation dependencies and runtime packages. EEs inherit all of the power of containerization. But, for those who are not familiar with containers, diving into EEs can be daunting: it requires basic knowledge of container technologies, like Docker or Podman, as well as the Ansible content that ultimately gets packaged into the EE.
About a year ago, I had both a personal and a professional need for an EE that I could use to run cloud automation. At the time, I wanted to create a single EE that would allow me to run automation against all of the cloud providers that Ansible supports, including their CLI tools so that I could call the CLI if the cloud collection did not directly support my automation use case, such as Azure Arc. So, I began work on my Cloud EE, which is an EE that accomplishes the following goals:
- Packages the cloud collections that I need to use in a single EE.
- Installs the CLIs for the major hyperscalers and their Python dependencies.
- Runs on both `amd64` and `arm64` architectures, since I develop on a Mac but my automation runs on AWX or Ansible Automation Platform on `amd64` hosts.
In order to more easily package containers that work across multiple architectures, I typically use Docker. I plan to switch to Podman once it supports more robust volume mapping on a Mac, but for now, Docker is the easier option.
Red Hat Developer Account
A free Red Hat developer account gives you access to downstream Red Hat-built containers. As long as you’re not using the containers for commercial use cases, this is a great way to get officially supported container images.
You can build EEs with upstream container images, but sometimes they are not as stable as the downstream versions. The Cloud EE uses downstream containers.
You’ll need a container registry, such as quay.io or Docker Hub, to push your container to when building. Both offer free accounts for public containers. If you have an Ansible Automation Platform subscription, then you can use Private Automation Hub as your EE registry. Ultimately, any container registry that you have access to will work, public or private.
The Execution Environment
There are a number of files that are used when building EEs. For the Cloud EE, the following contain the important information:
| File | Description |
| --- | --- |
| `execution-environment.yml` | Defines how the EE is built. |
| `requirements.txt` | Defines the Python dependencies that will be installed into the container. |
| `requirements.yml` | Defines the Ansible collections and roles that will be installed into the container. |
These files are fairly standard if you are used to working with Ansible content and containers. When you build an EE, you use a command-line tool called `ansible-builder`, which expects these files as part of its convention. The tool constructs a container file that builds the image with all of the dependencies listed in these files.
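As an illustration, a galaxy requirements file for a cloud-focused EE might look like the following. This is a hedged sketch using well-known cloud collections, not the actual Cloud EE file, which pins its own list and versions:

```yaml
# requirements.yml -- Ansible collections baked into the EE at build time.
# Illustrative only; the real Cloud EE defines its own collection list.
collections:
  - name: amazon.aws
  - name: azure.azcollection
  - name: google.cloud
```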
`execution-environment.yml` is fairly unique to this EE because it performs a number of tasks to install CLI tools. There are two primary build steps in this file: `prepend` and `append`. `prepend` operations happen before Ansible attempts to install the content defined in `requirements.txt` and `requirements.yml`; `append` operations happen after that content has been installed. For the Cloud EE, anything installed through package managers is installed via `prepend`, while anything installed through scripts or other tools is installed via `append`.
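To make the `prepend`/`append` distinction concrete, here is a minimal sketch of such a file under ansible-builder's version 1 schema. The base image name, the `dnf` package, and the AWS CLI installer steps are illustrative assumptions, not the actual Cloud EE definition:

```yaml
# execution-environment.yml -- sketch of an ansible-builder v1 definition.
version: 1
build_arg_defaults:
  # Assumed downstream base image; substitute one you have access to.
  EE_BASE_IMAGE: "registry.redhat.io/ansible-automation-platform-22/ee-minimal-rhel8:latest"
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
additional_build_steps:
  prepend:
    # Package-manager installs run before the Ansible content is installed.
    - RUN dnf install -y unzip
  append:
    # Script-based CLI installs run after the Ansible content is in place.
    - RUN curl -L "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip
    - RUN unzip /tmp/awscliv2.zip -d /tmp && /tmp/aws/install
```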
Building a Multi-Architecture Container
If you are building EEs for `amd64`-based systems only, then running the standard commands that are documented for Ansible Builder will work out-of-the-box. However, if you want the containers to run on an M1 Mac, then there are a few extra steps.
Clone the Cloud EE repository to your computer.
```shell
git clone git@github.com:scottharwell/cloud-ee.git
```
Use `ansible-builder`’s `create` command to generate a `Dockerfile`. This file will be used with Docker to build multi-arch containers, but it must be edited to explicitly tell Docker that the upstream container only supports `amd64`:

```shell
ansible-builder create --output-filename Dockerfile
gsed -i 's/FROM/FROM --platform=linux\/amd64/' context/Dockerfile
```
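`gsed` is GNU sed as installed by Homebrew on macOS; on Linux, plain `sed` behaves the same way. The substitution itself can be sketched against a stand-in file (the base image name below is a placeholder, not what ansible-builder actually emits):

```shell
mkdir -p context
# Stand-in for the generated Dockerfile; the real one comes from ansible-builder.
printf 'FROM registry.example.com/ee-base:latest\n' > context/Dockerfile
# Same edit as the gsed call above, using the default GNU sed on Linux.
sed -i 's/FROM/FROM --platform=linux\/amd64/' context/Dockerfile
cat context/Dockerfile
# FROM --platform=linux/amd64 registry.example.com/ee-base:latest
```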
Change directory into the `context` directory that was created in the previous step.
Use Docker’s `buildx` tool to build a multi-arch image; the build will virtualize `arm64` CPUs. Replace `quay.io/scottharwell` with your own registry and namespace.
Note: this step will fail if you haven’t configured your Red Hat developer account and logged in to the Red Hat container registry first, since the container file expects to pull the downstream containers.
```shell
docker buildx build --no-cache --platform linux/arm64 -t quay.io/scottharwell/cloud-ee:local . --push
```
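If `buildx` has never been used on the machine, Docker may need a dedicated builder instance before it can target other platforms. A one-time setup sketch (the builder name `multiarch` is arbitrary, my own choice for this example):

```shell
# Create and select a buildx builder capable of multi-platform builds.
docker buildx create --name multiarch --use
# Start the builder and print the platforms it supports.
docker buildx inspect --bootstrap
```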
The build process can take a while. On my M1 Max, it typically takes about 45 minutes to build the container because of the `arm64` virtualization work. Building just for `amd64` is much faster on a native CPU.
Using the EE
Once built, you can use the EE in any Ansible tool that supports EEs, such as AWX, Ansible Automation Controller, Ansible Runner, and Ansible Navigator. For the sake of simplicity, I’ll demonstrate a command that uses this EE to run a playbook from a local folder on my computer, using the AWS demos from the Ansible Cloud Content Lab. This playbook uses a few of the AWS collections for Ansible, which were previously installed into the EE so that they are always available when I need to run automation for AWS, Azure, etc.
This command uses `ansible-navigator run` in a similar manner to how `ansible-playbook` was used prior to EEs. It tells `ansible-navigator` how to run the playbook: the inventory information, environment variables, and extra vars used to run the automation. This utility is really great because I can map local folders from my Mac to pass files, such as SSH keys, to the container as automation runs. These would be credentials if I were to run the same playbook in AWX or Ansible Automation Controller.
```shell
ansible-navigator run playbook_create_transit_network.yml \
  --pae false \
  --mode stdout \
  --ee true \
  --ce docker \
  --pp always \
  --eei quay.io/scottharwell/cloud-ee:latest \
  --penv "AWS_ACCESS_KEY" \
  --penv "AWS_SECRET_ACCESS_KEY" \
  --eev $HOME/.ssh:/home/runner/.ssh \
  --extra-vars "dmz_ssh_key_name=aws-test-key" \
  --extra-vars "priv_network_ssh_key_name=aws-test-key" \
  --extra-vars "dmz_instance_ami=ami-06640050dc3f556bb" \
  --extra-vars "priv_network_instance_ami=ami-06640050dc3f556bb" \
  --extra-vars "aws_region=us-east-1" \
  --extra-vars "ansible_ssh_private_key_file_local_path=~/.ssh/aws-test-key.pem" \
  --extra-vars "ansible_ssh_private_key_file_dest_path=~/.ssh/aws-test-key.pem" \
  --extra-vars "ansible_ssh_user=ec2-user" \
  --extra-vars "ansible_ssh_private_key_file=~/.ssh/aws-test-key.pem" \
  --extra-vars "priv_network_ssh_user=ec2-user" \
  --extra-vars "priv_network_hosts_pattern=10.*"
```
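Many of these flags can also live in an `ansible-navigator.yml` settings file next to the playbook, which keeps the command line short. The following is a sketch of the equivalent settings, assuming the flag-to-setting mapping from ansible-navigator's documented schema; the image, environment variables, and volume mount are taken from the command above:

```yaml
# ansible-navigator.yml -- sketch of settings equivalent to the CLI flags.
---
ansible-navigator:
  mode: stdout                      # --mode stdout
  playbook-artifact:
    enable: false                   # --pae false
  execution-environment:
    enabled: true                   # --ee true
    container-engine: docker        # --ce docker
    image: quay.io/scottharwell/cloud-ee:latest  # --eei
    pull:
      policy: always                # --pp always
    environment-variables:
      pass:                         # --penv
        - AWS_ACCESS_KEY
        - AWS_SECRET_ACCESS_KEY
    volume-mounts:                  # --eev
      - src: ~/.ssh
        dest: /home/runner/.ssh
```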
Accessing the EE
I push a new version of the EE to quay.io whenever any of the collections within the EE is updated. This means that new builds happen regularly, and that you can use this execution environment without having to build it yourself. But all of the files are on GitHub if you’re interested in building and extending the EE for your own use cases.