Part 1
- Installing Ansible on CentOS 7.7
- Create an Azure Service Principal that we will use to allow Ansible to authenticate to Azure via Dynamic Inventories.
- Set up a basic Azure Dynamic Inventory filtering on “include_vm_resource_groups” to test pinging a VM as well as find out the name Ansible uses to refer to this Virtual Machine in order to capture Ansible hostvars.
- Capture information via the Metadata Instance Service in order to see how Ansible hostvars pulls information from the VM.
- Capture the Ansible Hostvars of an Azure Virtual Machine. These variables will then be used throughout this article series to filter what Virtual Machines we want to target.
Part 2
- Install Azure CLI
- Deploy a Virtual Machine Scale Set (VMSS) into an existing VNET
- Modify our Azure Dynamic Inventory filtering for “include_vmss_resource_groups” to test connectivity to VMSS Instances (VM Nodes) within the VMSS.
- Install a Second VM
- Configure Tags on VMs in Preparation of using Keyed_Groups
- Using Keyed_Groups to filter on Tags’ Key/Value Pair
- Using Keyed_Groups to filter on OS Type using Jinja2 Standard Dot Notation
- Filter out specific VMs using exclude_host_filters
- Using Conditional_Groups to filter on specific criteria
- Using Hostvar_Expressions to make modifications to specific hostvar values
Install Azure CLI on our Ansible Control Node
We need Azure CLI installed on our Ansible Control Node VM, called iac, because our Ansible Control Node uses SSH when connecting to Linux VMs. When we create the VMSS using Azure CLI, we must pass it the contents of the id_rsa.pub file. This is similar to creating a Linux VM in the Azure Portal, where you paste in the SSH public key so that you can SSH into your Linux VM after it is created.
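Before going further, it is worth confirming that the key pair actually exists on the control node. A quick check (assuming the default id_rsa key name):

```shell
# Verify the SSH key pair that the VMSS will rely on is present.
if [ -f "$HOME/.ssh/id_rsa.pub" ]; then
    echo "Public key found: $HOME/.ssh/id_rsa.pub"
else
    echo "No key pair found -- generate one with: ssh-keygen -t rsa -b 2048"
fi
```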
I wanted to demonstrate how this is done using the Azure CLI.
To install the Azure CLI on CentOS 7.7, start by inserting the Microsoft Repository Key in the terminal:
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
Then create local azure-cli repository information:
sudo sh -c 'echo -e "[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
Finally, install azure-cli:
sudo yum install azure-cli
Provisioning our Virtual Machine Scale Set
Let’s set up a Virtual Machine Scale Set. This won’t be an in-depth guide on how to configure a Virtual Machine Scale Set (VMSS). Instead, we’ll get a VMSS set up just enough to leverage the Dynamic Inventory capabilities of Ansible.
There are five Quickstarts available in Azure Docs that walk you through creating a VMSS:
- Azure Portal – https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal
- Azure CLI – https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-cli
- Azure PowerShell – https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-powershell
- Azure ARM Template (Linux) – https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-template-linux
- Azure ARM Template (Windows) – https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-template-windows
As explained earlier, we'll use Azure CLI. Be sure to run this step from Azure CLI on your Ansible Control Node. First, we need to log in to Azure. Type the following command:
az login
You will be asked to open a web browser, navigate to the URL shown, and enter the device code displayed in the terminal.
Once authenticated, back in Azure CLI on your Ansible Control Node, create a new Resource Group using the following command:
az group create --name vmblogvmss --location northcentralus
The following output is returned with our Resource Group information:
{
  "id": "/subscriptions/959b5d63-3xxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/vmblogvmss",
  "location": "northcentralus",
  "managedBy": null,
  "name": "vmblogvmss",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
To create our VMSS, use the following command:
az vmss create \
--resource-group vmblogvmss \
--name blogScaleSet \
--image centos \
--public-ip-per-vm \
--upgrade-policy-mode automatic \
--admin-username eshudnow \
--subnet $(az network vnet subnet show -g Networking --vnet-name NorthCentral-VNET -n default -o tsv --query id) \
--ssh-key-values ~/.ssh/id_rsa.pub
Note that we made a few additions/changes here compared to the quickstarts linked above:
- Added --public-ip-per-vm so we have the ability to SSH directly into an instance (node)
- Changed the image from Ubuntu to CentOS
- Instead of generating a new SSH key pair, specified --ssh-key-values to use the existing key from our Ansible Control Node. This is why it is important to run this using Azure CLI from the Ansible Control Node.
- Specified the subnet to use, which is in the Resource Group Networking, in the VNET NorthCentral-VNET, with the subnet named default. Note that if you also try to specify --vnet-name alongside --subnet when referencing an existing subnet by ID, the command will fail.
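The `--subnet` lookup above resolves to a full Azure resource ID. That ID follows a fixed pattern, so you can also assemble it by hand if you already know the names involved. A quick sketch; the subscription GUID below is a placeholder, while the group/VNET/subnet names mirror this article's environment:

```shell
# Assemble the subnet resource ID manually (placeholder subscription GUID).
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
SUBNET_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/Networking/providers/Microsoft.Network/virtualNetworks/NorthCentral-VNET/subnets/default"
echo "$SUBNET_ID"
```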
The creation of our VMSS will take a while; Azure CLI will show the command as running after you execute it, so be patient. We can see resources being created in the Azure Portal, as shown in the following screenshot.
After the VMSS is created, you will get a large JSON response back in Azure CLI. One piece of information of interest is whether --ssh-key-values worked. Take a look at the osProfile section of the JSON response.
"osProfile": {
  "adminUsername": "eshudnow",
  "allowExtensionOperations": true,
  "computerNamePrefix": "blogs5ea8",
  "linuxConfiguration": {
    "disablePasswordAuthentication": true,
    "provisionVMAgent": true,
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/sLLG2X42jvWs0oiiDUFAqUicALaBN7tYi6a4lDR/k3fFbH0YIUs+H4SrjfyI8c4nq6zA4WtLNKIEONhxM/YNm87j4X5QeJx4P3gvCcQ9Q6HYLtWLOC5xHHvPMPQlXnR3aP9nVM+OxqqznyC0UK31H4VdHlwm4fDnHFzYegIdnfr4TD2/u49nwI4wsPg/xyNAtZYVKDt4JbsLhg3zkk5vf8K25pUlCw+ShH+DqEXh4OZEL1uKCkLaep7j72afr/eSHvXQrqNluG8O6n72BrVAdPvjohqQyPlbo/ZX9RiclNGOrOyRaslRA5LedUCfLb63TZjuzxXo7GlKWZBm2CKF eshudnow@iac\n",
          "path": "/home/eshudnow/.ssh/authorized_keys"
        }
      ]
    }
  }
}
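If you saved the JSON response to a file, you can also pull the injected keys out programmatically instead of scrolling. A hedged sketch follows; it writes a trimmed sample response so it runs standalone, but in practice you would redirect the real `az vmss create` output into `response.json` (a hypothetical filename), and the walker deliberately ignores the exact nesting so the response's precise shape does not matter:

```shell
# Write a trimmed sample of the response for illustration only;
# in practice: az vmss create ... > response.json
cat > response.json <<'EOF'
{
  "vmss": {
    "virtualMachineProfile": {
      "osProfile": {
        "linuxConfiguration": {
          "ssh": {
            "publicKeys": [
              {
                "keyData": "ssh-rsa AAAA...truncated... eshudnow@iac",
                "path": "/home/eshudnow/.ssh/authorized_keys"
              }
            ]
          }
        }
      }
    }
  }
}
EOF

# Walk the JSON and print every publicKeys entry, wherever it nests.
python3 - <<'EOF'
import json

def find_public_keys(node):
    if isinstance(node, dict):
        for k, v in node.items():
            if k == "publicKeys":
                yield from v
            else:
                yield from find_public_keys(v)
    elif isinstance(node, list):
        for item in node:
            yield from find_public_keys(item)

with open("response.json") as f:
    for entry in find_public_keys(json.load(f)):
        print(entry["path"], "->", entry["keyData"][:20])
EOF
```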
What better way to ensure that it worked than by trying to SSH into one of our VMSS Instances? In the Azure Portal, I can see two Instances are running.
If I click on one of those Instances, I can see the Public IP Address.
On our Ansible Control Node, iac, SSH into this Public IP Address using the following command:
ssh eshudnow@<vmss-instance-public-ip>
It worked!
Now we’re ready to start working on our Dynamic Inventory file for Virtual Machine Scale Sets!
Modifying our Dynamic Inventory File to use VMSS Resource Groups
Let’s modify our ansible_azure_rm.yml that we created in Part 1. Currently, the file has the following text:
plugin: azure_rm
include_vm_resource_groups:
  - vmblog
auth_source: auto
Let’s modify the file to look as such:
plugin: azure_rm
include_vm_resource_groups:
  - vmblog
include_vmss_resource_groups:
  - '*'
auth_source: auto
What we expect to see on our next execution (don't forget to run the four `export` commands for our Service Principal) is that Ansible will use the Dynamic Inventory to look for any VMs in the vmblog Resource Group, as well as any VMSS Instances (VM nodes) in any Resource Group.
Let’s go ahead and run the following command:
ansible all -m ping -i ansible_azure_rm.yml
What you will see is the following:
We successfully discovered our vmblog01 VM and both of our VM Scale Set Instances, but one of them reports Host key verification failed. When we tested SSH into one of our VMSS Instances earlier, we accepted its host key, so that Instance succeeds; the other Instance's host key is unknown, so the non-interactive SSH connection fails. Obviously, in a real-world environment, our VMSS can be scaling up and down, and it wouldn't make much sense to constantly SSH into each new Instance just to accept its host key before Ansible will work.
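A related maintenance point: when a scale-set instance is reimaged or its public IP gets recycled, the entry cached in known_hosts goes stale and SSH will complain about a changed host key. Rather than editing the file by hand, `ssh-keygen -R` removes a single host's entry. A sketch, using a documentation placeholder IP:

```shell
# Remove one stale host key from known_hosts.
# 203.0.113.10 is a documentation placeholder IP -- use your instance's IP.
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/known_hosts"
ssh-keygen -R 203.0.113.10 -f "$HOME/.ssh/known_hosts" || true
```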
So how do we solve this?
In our ansible.cfg, we must add the following:
host_key_checking=False
We can find the ansible.cfg we are currently using by running the following command:
ansible --version
Our output looks as such:
ansible 2.9.2
  config file = /home/eshudnow/ansible/ansible.cfg
  configured module search path = ['/home/eshudnow/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.6.8 (default, Aug 7 2019, 17:28:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
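If you want to grab that path programmatically (say, in a setup script), you can parse the `config file` line out of the version output. A small sketch; the sample output is inlined so it runs standalone, but in practice you would pipe `ansible --version` into the same awk filter:

```shell
# Parse the config file path out of `ansible --version`-style output.
# Sample output is inlined here; in practice: ansible --version | awk ...
version_output='ansible 2.9.2
  config file = /home/eshudnow/ansible/ansible.cfg'
cfg_path=$(printf '%s\n' "$version_output" | awk -F' = ' '/config file/ {print $2}')
echo "$cfg_path"
```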
Run the following command to edit this file:
vi <config file path>
For us, that means the following, since my ansible.cfg file lives in a folder called ansible in my home directory:
vi /home/eshudnow/ansible/ansible.cfg
Ensure host_key_checking=False is added under the [defaults] section of our ansible.cfg. My ansible.cfg looks as such:
[defaults]
inventory = ~/ansible/hosts
roles_path = ~/ansible/roles
deprecation_warnings=False
nocows = 1
host_key_checking=False
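As an aside, if you would rather not persist this in ansible.cfg, Ansible reads the same setting from an environment variable, which is handy for a one-off session:

```shell
# Disable SSH host key checking for this shell session only;
# equivalent to host_key_checking=False in ansible.cfg.
export ANSIBLE_HOST_KEY_CHECKING=False
```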
Run the following command again:
ansible all -m ping -i ansible_azure_rm.yml
Success! The ping comes back successful for our vmblog01 VM and both of our VMSS Instances. Now, as your VMSS scales up or down, Ansible will be able to connect to and configure each Instance as necessary.
In Part 3, we’ll take a look at creating a second VM, configuring tags on our VMs, modifying our Ansible Dynamic Inventory to use keyed groups to filter on tags and operating system type, and filtering out specific VMs using exclude_host_filters.