Vagrant commands and config management as Ansible modules, letting you control Vagrant VMs from an Ansible playbook.
This collection of modules provides access to Vagrant commands and to the configuration of the Vagrantfile from Ansible playbooks and roles.
By letting you run guests on your local system, it facilitates testing and development of orchestrated, distributed applications via Ansible.
This collection should not be confused with vagrant-ansible, which allows you to run Ansible playbooks on Vagrant-launched hosts.
Preparing a GlusterFS cluster, I want to test it, including the worst-case scenarios.
For all these cases, I want the required provisioning playbooks/roles to handle all the typical recovery steps.
You can keep using vagrant from the shell too, and write ansible-test assertions instead of tests written in golang. Also, the VirtualBox provider for Terraform doesn't work properly for now.
Before this work is ready to be shared on ansible-galaxy, you can include it in your playbooks this way:
collections:
  - name: git@github.com:jclaveau/ansible-vagrant-modules.git
    type: git
    version: 0
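If you save this snippet in a requirements.yml file (the file name is only a convention), the collection can then be installed from git with:
ansible-galaxy collection install -r requirements.yml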
Once the collection is published on ansible-galaxy, this should work:
ansible-galaxy collection install jclaveau.vagrant
pip install -r requirements.txt
Vagrant itself will require at least one provider, such as VirtualBox, Libvirt, Docker or VMware.
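Installing a provider is left to you. As an illustration, on a Debian/Ubuntu host you could install Vagrant together with the VirtualBox provider with something like the following (the package names are an assumption and vary per distribution):
sudo apt install vagrant virtualbox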
This could be a test case for a GlusterFS role + playbook
# TODO find an example of a provisioning playbook launched from Vagrant
- name: Add a vm to the Vagrantfile
  jclaveau.vagrant.config:
    args:
      state: "present"
      name: "{{ item }}"
      config:
        box: boxomatic/debian-11
        ansible:
          playbook: "glusterfs_provisioning_playbook.yml"
        shell:
          inline: 'echo "provisioning done"'
        forwarded_ports:
          - host: "808{{ i }}"
            guest: 80
          - host: "8{{ i }}43"
            guest: 443
  loop:
    - srv001
    - srv002
  loop_control:
    index_var: "i"

- name: starting the nodes
  jclaveau.vagrant.up:
    args:
      name: "{{ item }}"
      provision: true
  register: up_result
  loop:
    - srv001
    - srv002

- name: Check the status of the gluster peers
  shell: "gluster peer status"
  register: peers_status

- name: show peers_status
  debug:
    var: peers_status

- name: Assert that all peers are available
  assert:
    that: '...'
# destroy
- name: destroy one node
  jclaveau.vagrant.destroy:
    args:
      name: srv001

# check its absence and throw a notification
- name: Check the status of the gluster peers
  shell: "gluster peer status"
  register: peers_status

- name: show peers_status
  debug:
    var: peers_status

- name: Assert that one peer is missing
  assert:
    that: '...'

# recreate and reprovision it
- name: recreate and reprovision it
  jclaveau.vagrant.up:
    args:
      name: srv001

# check that the cluster works and the node is replaced
- name: Check the status of the gluster peers
  shell: "gluster peer status"
  register: peers_status

- name: show peers_status
  debug:
    var: peers_status

- name: Assert that all peers are available again
  assert:
    that: '...'
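The assertions are intentionally left as '...' above. Purely as an illustration (not provided by the collection), such a check could inspect the registered output, assuming the usual output format of gluster peer status:
- name: Assert that all peers are connected (illustration only)
  assert:
    that:
      - "'Peer in Cluster (Connected)' in peers_status.stdout"
      - "'Disconnected' not in peers_status.stdout"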
# TODO find an example of a provisioning playbook launched from Vagrant
- name: Add a vm to the Vagrantfile
  jclaveau.vagrant.config:
    args:
      state: "present"
      name: "srv001"
      config:
        box: boxomatic/debian-11
        box_path: '/path/to/a/box/file'
        # These options will trigger the required configuration of VirtualBox, Libvirt or Docker
        cpus: 2
        memory: 2048
        # By default, this Vagrantfile will configure a private_network working with dhcp
        ip: '192.168.10.1'
        mac: '...'
        netmask: '255.255.255.0'
        auto_config: false
        intnet: false # equivalent of :virtualbox__intnet
        # forwarded_ports can be configured very easily
        forwarded_ports:
          - host: "8080"
            guest: 80
          - host: "8043"
            guest: 443
          - guest: 22
            host: 2270
            id: ssh
        # Provisioning
        ansible:
          playbook: "your_provisioning_playbook.yml"
        shell:
          inline: 'echo "provisioning done"'

        # Example for the Libvirt provider
        provider: libvirt
        libvirt_options:
          nested: true
          features:
            - acpi
            - apic
          storage:
            - - :file
              - :path: libvirt_tests_shared_disk.img
                :size: 10M
            - - :file
              - :path: libvirt_tests_shared_disk_2.img
                :size: 15M

        # Example for the Docker provider
        provider: docker
        docker_options:
          create_args:
            --cpuset-cpus: 1
          # image: tknerr/baseimage-ubuntu:18.04
          ports:
            - 9999:99

        # Example for the Virtualbox provider
        provider: virtualbox
        virtualbox_options:
          name: "my_vm"
          linked_clone: true
          # gui: true
          check_guest_additions: false
          # All entries having '--' as prefix will trigger a call like provider.customize ['modifyvm', :id, key, value]
          --groups: "/my-vb-group"

        # Inline configuration for providers without dedicated support.
        # If you do not find an option you need, you can pass inline Ruby code which will be evaluated against the `provider`
        # object in the Vagrantfile
        provider: my_custom_provider
        provider_options_inline:
          - my_property = "my_value"
          - method_call ['param_1', 'param_2']
In integration tests, using add_host to add your newly created vm to your inventory wouldn't work. This works perfectly in playbooks but is still untested with roles.
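For playbooks, a minimal sketch of that pattern could look like the following; the group name, IP and user are assumptions (the IP matching the ip option passed to jclaveau.vagrant.config above):
- name: Add the freshly created vm to the in-memory inventory
  ansible.builtin.add_host:
    name: srv001
    groups: vagrant_guests        # assumed group name
    ansible_host: 192.168.10.1    # assumed: the ip configured in the Vagrantfile above
    ansible_user: vagrant         # assumed: the default vagrant user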
Rob Parrot implemented a lock mechanism, commenting in 2014 that Vagrant took absolutely no care of concurrency.
This doesn't seem to be the case anymore, as Vagrant throws an error if you try to up a vm twice at the same time.
I therefore removed this mechanism, leaving the responsibility for concurrency to Vagrant itself.
Presently, two Vagrant commands have a parallel parameter available: up and destroy. This parameter delegates concurrency handling to the provider (Libvirt handles it while VirtualBox doesn't, for example).
Sadly, these parameters are not implemented in python-vagrant, which doesn't seem to have been maintained for a while.
As a result, implementing a binding for them represents quite a lot of work and I consider it out of the scope of this first version.
Consequently, I chose to allow only one vm per Vagrant command and let the end user implement parallelism with Ansible's async feature, as shown below.
Also, the Amtega team implemented a role handling this async use of vagrant: https://github.com/amtega/ansible_role_vagrant_provisioner
You are welcome to implement it if you wish: issue 39
- name: Start the 2 vagrant instances asynchronously
  jclaveau.vagrant.up:
    args:
      name: "{{ item }}"
  loop:
    - "srv001"
    - "srv002"
  async: 90
  poll: 0
  register: async_loop

- name: dbg async_loop
  ansible.builtin.debug:
    var: async_loop

- name: wait for up to finish
  async_status:
    jid: "{{ item.ansible_job_id }}"
    mode: status
  retries: 120
  delay: 1
  loop: "{{ async_loop.results }}"
  register: async_loop_jobs
  until: async_loop_jobs.finished
Feel free to take a look at the issues if you need a feature and have time to implement it. Priority should ideally go to the issues of the current target milestone.
The working copy must be located at .../ansible_collections/jclaveau/ansible-vagrant-modules (required by Ansible).
git config core.hooksPath .githooks
pip install -r requirements.txt
./tests.sh
Ansible has a specific philosophy and we must follow it. Vagrant also has its own way.
As every Ansible module, this code is distributed under the GPLv3.0+ licence.
The tests rely on ansible-test and CI is set up with github actions.