Part 1
- Installing Ansible on CentOS 7.7
- Create an Azure Service Principal that we will use to allow Ansible to authenticate to Azure via Dynamic Inventories.
- Set up a basic Azure Dynamic Inventory filtering on “include_vm_resource_groups” to test pinging a VM as well as find out the name Ansible uses to refer to this Virtual Machine in order to capture Ansible hostvars.
- Capture information via the Metadata Instance Service in order to see how Ansible hostvars pulls information from the VM.
- Capture the Ansible Hostvars of an Azure Virtual Machine. These variables will then be used throughout this article series to filter what Virtual Machines we want to target.
Part 2
- Install Azure CLI
- Deploy a Virtual Machine Scale Set (VMSS) into an existing VNET
- Modify our Azure Dynamic Inventory filtering for “include_vmss_resource_groups” to test connectivity to VMSS Instances (VM Nodes) within the VMSS.
Part 3
- Install a Second VM
- Configure Tags on VMs in Preparation of using Keyed_Groups
- Using Keyed_Groups to filter on Tags’ Key/Value Pair
- Using Keyed_Groups to filter on OS Type using Jinja2 Standard Dot Notation
- Filter out specific VMs using exclude_host_filters
- Using Conditional_Groups to filter on specific criteria
- Using Hostvar_Expressions to make modifications to specific hostvar values
Install a Second VM
Let’s install a second VM named vmblog02. We’ll create this VM using the Azure CLI. Just as with our VMSS, you’ll want to run this from the Ansible Control Node using Azure CLI. For instructions on how to install Azure CLI on the Ansible Control Node running CentOS 7.7, please refer to Part 2.
az vm create \
--resource-group vmblog \
--name vmblog02 \
--image centos \
--admin-username eshudnow \
--subnet $(az network vnet subnet show -g Networking --vnet-name NorthCentral-VNET -n default -o tsv --query id) \
--ssh-key-values ~/.ssh/id_rsa.pub
A couple of things to note:
- Instead of generating a new SSH key pair, I specify the SSH public key from our Ansible Control Node. This is why it is important to run this using Azure CLI from the Ansible Control Node.
- I specified the subnet to use, which is in the Resource Group Networking, in the VNET NorthCentral-VNET; the name of the subnet is default.
The creation of our VM will take a while, and Azure CLI will display “Running” while it executes, so be patient. We can see Resources being created in the Azure Portal as shown in the following screenshot.

If you are using a Network Security Group on the subnet this VM is attached to, go ahead and delete the VM-level NSG that was created as part of the Azure CLI creation process. In this case, I deleted vmblog02NSG, as this Virtual Machine is already protected by the Subnet NSG that allows Port 22.
Unlike the VMSS creation, the JSON response we get back from Azure CLI does not include osProfile and does not tell us whether the RSA public key was successfully applied to this VM. So let’s try to SSH into our new VM.
We can see the Public IP of this new VM in the Azure Portal or in the JSON output in Azure CLI.

SSH to this VM by typing:
ssh [email protected]
We are successfully connected. Therefore, we know the Ansible Control Node’s SSH Public Key was successfully added to vmblog02.

Configure Tags on Both VMs
Create the following Tags:
| VM | Key | Value |
| --- | --- | --- |
| vmblog01 | CostCenter | 1111 |
| vmblog01 | Owner | John Doe |
| vmblog02 | CostCenter | 2222 |
| vmblog02 | Owner | Mary Jane |


Modifying our Dynamic Inventory File to use Keyed Groups for Tags
We’re going to focus on our two VMs: vmblog01 and vmblog02.
In our Dynamic Inventory File, change the file to include only the following:
plugin: azure_rm
# places hosts in dynamically-created groups based on a variable value.
keyed_groups:
  # places each host in a group named 'tag_(tag name)_(tag value)' for each tag on a VM.
  - prefix: tag
    key: tags
auth_source: auto
When we run Ansible with the Dynamic Inventory using tags, the format is as follows:
ansible <prefix>_<tagname>_<tag value> -m ping -i ansible_azure_rm.yml
Using our real tags, keys, and values, let’s run the following:
ansible tag_Owner_John Doe -m ping -i ansible_azure_rm.yml
What you’ll find is that this simply will not work, because the Owner value is John Doe, which contains a space, rather than JohnDoe.
[eshudnow@iac ansible]$ ansible tag_owner_John Doe -m ping -i ansible_azure_rm.yml
usage: ansible [-h] [--version] [-v] [-b] [--become-method BECOME_METHOD]
[--become-user BECOME_USER] [-K] [-i INVENTORY] [--list-hosts]
[-l SUBSET] [-P POLL_INTERVAL] [-B SECONDS] [-o] [-t TREE] [-k]
[--private-key PRIVATE_KEY_FILE] [-u REMOTE_USER]
[-c CONNECTION] [-T TIMEOUT]
[--ssh-common-args SSH_COMMON_ARGS]
[--sftp-extra-args SFTP_EXTRA_ARGS]
[--scp-extra-args SCP_EXTRA_ARGS]
[--ssh-extra-args SSH_EXTRA_ARGS] [-C] [--syntax-check] [-D]
[-e EXTRA_VARS] [--vault-id VAULT_IDS]
[--ask-vault-pass | --vault-password-file VAULT_PASSWORD_FILES]
[-f FORKS] [-M MODULE_PATH] [--playbook-dir BASEDIR]
[-a MODULE_ARGS] [-m MODULE_NAME]
pattern
ansible: error: unrecognized arguments: Doe
[eshudnow@iac ansible]$ ansible tag_owner_'John Doe' -m ping -i ansible_azure_rm.yml
[WARNING]: Could not match supplied host pattern, ignoring: tag_owner_John
[WARNING]: Could not match supplied host pattern, ignoring: Doe
[WARNING]: No hosts matched, nothing to do
[eshudnow@iac ansible]$ ansible tag_owner_"John Doe" -m ping -i ansible_azure_rm.yml
[WARNING]: Could not match supplied host pattern, ignoring: tag_owner_John
[WARNING]: Could not match supplied host pattern, ignoring: Doe
[WARNING]: No hosts matched, nothing to do
[eshudnow@iac ansible]$ ansible 'tag_owner_John Doe' -m ping -i ansible_azure_rm.yml
[WARNING]: Could not match supplied host pattern, ignoring: tag_owner_John
[WARNING]: Could not match supplied host pattern, ignoring: Doe
[WARNING]: No hosts matched, nothing to do
[eshudnow@iac ansible]$ ansible tag_owner_(John Doe) -m ping -i ansible_azure_rm.yml
bash: syntax error near unexpected token `('
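The failures above make sense if you consider how the group names are built. Here is a minimal Python sketch of the naming scheme (my own illustration of the assumed behavior, not the plugin’s actual code): with `key: tags`, each tag yields a group for the key and one for the key/value pair, so a value with a space produces a group name with a space, which the shell then splits into two host patterns.

```python
def keyed_group_names(tags, prefix="tag", separator="_"):
    # For each tag, form one group per key and one per key/value pair,
    # mirroring the 'tag_(tag name)_(tag value)' format described above.
    names = []
    for key, value in tags.items():
        names.append(f"{prefix}{separator}{key}")
        names.append(f"{prefix}{separator}{key}{separator}{value}")
    return names

# Tag from the article; note the space in "John Doe"
print(keyed_group_names({"Owner": "John Doe"}))
# → ['tag_Owner', 'tag_Owner_John Doe']
```

The resulting group name `tag_Owner_John Doe` contains a space, which is why every quoting variation of the ansible command above failed to match it.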
Let’s modify the Owner to not have spaces and try again.


Go ahead and run the following:
ansible tag_owner_johndoe -m ping -i ansible_azure_rm.yml

We see a failure followed by a success. The important thing here is that capitalization matters: we had to match the case of both the tag’s key and the tag’s value, so be sure you set your naming standards appropriately and enforce them. With the correct case, we successfully retrieve the specific VM in the keyed group we are filtering on.
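The case-sensitivity behavior can be illustrated with a tiny sketch (hypothetical group data, assuming pattern lookups are exact string matches):

```python
# Hypothetical inventory after retagging with a lowercase key and value
groups = {"tag_owner_johndoe": ["vmblog01"]}

# Group-name lookups are exact, case-sensitive string matches:
print("tag_Owner_JohnDoe" in groups)  # False: wrong case, no match
print("tag_owner_johndoe" in groups)  # True: exact match
```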
Go ahead and try doing this for CostCenter. This requires no changes to the Dynamic Inventory File, as we’re still using the prefix tag and the key tags. The only thing that changes is how we run the ansible command. Run the following command to see if we can get vmblog02 returned:
ansible tag_CostCenter_2222 -m ping -i ansible_azure_rm.yml

Modifying our Dynamic Inventory File to use Keyed Groups for OS using Jinja2
If you recall from Part 1, we used Ansible to get the hostvars. One of the hostvars that came back was the Operating System. Specifically, it came back as:
"os_profile": {
"system": "linux"
}
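Dot notation simply walks the nested hostvars structure, one level per dot. A minimal Python sketch of the idea (an illustration, not the plugin’s actual code):

```python
def resolve(hostvars, dotted_key):
    # Descend one level of the nested dict per dot-separated segment.
    value = hostvars
    for part in dotted_key.split("."):
        value = value[part]
    return value

# The os_profile hostvar as returned in Part 1
hostvars = {"os_profile": {"system": "linux"}}
print(resolve(hostvars, "os_profile.system"))  # linux
```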
Let’s modify our Dynamic Inventory File to key on the Operating System using standard dot notation:
plugin: azure_rm
keyed_groups:
  - prefix: os
    key: os_profile.system
auth_source: auto
Then run the following:
ansible os_linux -m ping -i ansible_azure_rm.yml
This time we see our Ansible Control Node come back too since it’s a Linux machine.

So how do we filter out the Ansible Control Node? Well, we could use what we did in Part 1 and filter on the vmblog Resource Group using the following:
include_vm_resource_groups:
  - vmblog
But then we could miss out on all the other Linux VMs we may want to manage. Perhaps we just don’t want to include one specific VM. In that case, simply leverage the exclude_host_filters feature of the Dynamic Inventory File. Configure your Ansible Dynamic Inventory file as follows:
plugin: azure_rm
# places hosts in dynamically-created groups based on a variable value.
keyed_groups:
  # places each host in a group named 'os_(os type)' based on the VM's operating system
  - prefix: os
    key: os_profile.system
# excludes a host from the inventory when any of these expressions is true; can refer to any vars defined on the host
exclude_host_filters:
  # excludes the iac Ansible Control Node
  - name == 'iac'
auth_source: auto
Let’s try running the following command again:
ansible os_linux -m ping -i ansible_azure_rm.yml

We no longer see the iac Ansible Control Node in the output. Essentially, when using exclude_host_filters, any host for which an expression evaluates to true is excluded from the response. Because the name of the VM is iac in hostvars (Part 1 shows how to view hostvars on a VM), and we included an exclude where name == ‘iac’, that evaluation is true and the VM is left out of the dynamic inventory response.
A couple other examples that can be used are:
exclude_host_filters:
  # excludes hosts in the eastus region
  - location in ['eastus']
  # excludes hosts that are powered off
  - powerstate != 'running'
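The evaluation logic can be sketched in a few lines of Python (hypothetical host data with a few of the hostvars the plugin exposes; an illustration of the assumed behavior, not the plugin’s actual code):

```python
# Hypothetical hosts with a subset of hostvars
hosts = [
    {"name": "iac", "location": "northcentralus", "powerstate": "running"},
    {"name": "vmblog01", "location": "northcentralus", "powerstate": "running"},
    {"name": "vmblog02", "location": "eastus", "powerstate": "running"},
]

# One predicate per exclude_host_filters expression
filters = [
    lambda h: h["name"] == "iac",           # name == 'iac'
    lambda h: h["location"] in ["eastus"],  # location in ['eastus']
    lambda h: h["powerstate"] != "running", # powerstate != 'running'
]

# A host stays in the inventory only if NO filter expression is true for it
inventory = [h["name"] for h in hosts if not any(f(h) for f in filters)]
print(inventory)  # ['vmblog01']
```

Here iac is excluded by the name filter and vmblog02 by the location filter, leaving only vmblog01 in the inventory.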
In Part 4, we’ll take a look at using conditional_groups to create groups of servers based off filtering on certain values in hostvars. Then we’ll take a look at hostvar_expressions to make modifications to specific hostvar values.