---
- hosts: localhost
  tasks:
    - name: make a VPC for my app
      register: vpc
      ec2_vpc:
        region: us-west-2
        state: present
        cidr_block: 172.30.0.0/16
        subnets:
          - cidr: 172.30.1.0/24
            az: us-west-2c
            resource_tags: { "Environment": "Dev", "Tier": "Misc" }
          - cidr: 172.30.2.0/24
            az: us-west-2a
            resource_tags: { "Environment": "Dev", "Tier": "Main" }
          - cidr: 172.30.3.0/24
            az: us-west-2b
            resource_tags: { "Environment": "Dev", "Tier": "Reserve" }
        internet_gateway: True
        route_tables:
          - subnets:
              - 172.30.1.0/24
              - 172.30.2.0/24
              - 172.30.3.0/24
            routes:
              - dest: 0.0.0.0/0
                gw: igw
    - ec2:
        key_name: ryansb
        region: us-west-2
        instance_type: t2.large
        image: ami-d0f506b0
        wait: yes
        instance_tags:
          foo: bar
        exact_count: 1
        count_tag: foo
        assign_public_ip: yes
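Note that the first task registers its result as `vpc`, but nothing above consumes it. A natural follow-up is to feed one of the created subnet IDs into the `ec2` task so the instance lands inside the new VPC; this is a sketch, and the `subnets` key on the registered result is an assumption about `ec2_vpc`'s return value, so verify it with a `debug` task first:

```yaml
# Hypothetical variant of the ec2 task above: place the instance in the
# first subnet created by the ec2_vpc task. "vpc.subnets[0].id" assumes
# the registered result exposes the subnets it created -- check with
# a "- debug: var=vpc" task before relying on it.
- ec2:
    key_name: ryansb
    region: us-west-2
    instance_type: t2.large
    image: ami-d0f506b0
    wait: yes
    vpc_subnet_id: "{{ vpc.subnets[0].id }}"
    assign_public_ip: yes
```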
In a normal playbook (executing on a group of remote hosts), if you wanted to interleave tasks that call out to third parties like AWS, you'd do something like:
- file: name=/tmp/something state=file
- ec2:
    key_name: ryansb
    region: us-west-2
    instance_type: t2.large
    image: ami-d0f506b0
    wait: yes
    instance_tags:
      foo: bar
    exact_count: 1
    count_tag: foo
    assign_public_ip: yes
  delegate_to: localhost
  run_once: true
- command: cat /proc/cpuinfo
That way the file and command tasks run on all hosts, but only one instance is created (by the Ansible master), since the ec2 task is delegated to localhost and runs only once.
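For the localhost-only playbook at the top, you don't need remote hosts at all; one way to run it is to hand ansible-playbook an inline inventory and a local connection (the playbook filename here is just a placeholder):

```
ansible-playbook -i 'localhost,' -c local provision.yml
```

The trailing comma in `-i 'localhost,'` tells Ansible the argument is a host list rather than an inventory file, and `-c local` skips SSH entirely.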