AWS deployment with Ansible
Automation is a core component of modern cloud infrastructure management. AWS deployment with Ansible offers a simplified, reliable way to manage cloud infrastructure. Manually deploying and configuring resources in AWS can be time-consuming, error-prone, and inefficient. This is where Ansible, a powerful open-source automation tool, shines. It allows you to define your infrastructure as code, enabling repeatable, consistent, and scalable deployments.
This post will guide you through leveraging Ansible for automated deployments and integration with your AWS account. We’ll walk through a practical example: deploying an EC2 instance running the latest Debian Linux, making it publicly accessible with SSH, and installing Nginx with a custom configuration.
Why Ansible for AWS?
Ansible simplifies cloud automation through several key features:
Declarative YAML Syntax
Playbooks use human-readable YAML to define tasks. For example, installing Nginx is expressed as:
- name: Install Nginx
  apt:
    name: nginx
    state: present
This approach reduces ambiguity compared to imperative scripting languages.
Agentless architecture
Ansible connects to nodes via SSH, eliminating the need for installed agents. This reduces maintenance overhead, particularly in environments with hundreds of servers.
Idempotent operations
Playbooks can be safely rerun without causing unintended side effects. Ansible checks the current state before making changes, ensuring consistency across deployments.
Comprehensive AWS integration
The Ansible AWS collection provides modules for managing EC2 instances, security groups, IAM roles, and other services, enabling full-stack automation.
Prerequisites
Before we dive into the playbook, ensure you have the following:
- Ansible Installed: If you don’t have it, follow the official Ansible installation guide.
- AWS Account: You’ll need an active AWS account.
- IAM User with Programmatic Access: Create an IAM user with an Access Key ID and Secret Access Key. Grant this user appropriate permissions to manage EC2 instances (e.g., AmazonEC2FullAccess for simplicity in this example, but always follow the principle of least privilege in production).
Important security note: Never hardcode your AWS credentials directly in your playbooks for production environments. Use Ansible Vault, environment variables, or IAM roles for EC2 instances. For this local example, we'll assume you've configured them as environment variables or in ~/.aws/credentials.
If you’ve opted for exporting the secrets as environment variables, you can set them in the following manner:
export AWS_ACCESS_KEY_ID="your_access_key_id"
export AWS_SECRET_ACCESS_KEY="your_secret_access_key"
Obviously, replace the placeholder contents with your actual secret data.
- Boto3 and Botocore: The AWS SDK for Python. Ansible requires these to interact with AWS APIs. Install them using pip:
pip install boto3 botocore
- amazon.aws collection: The playbook relies on modules from the amazon.aws collection. It ships with the full ansible package; if you only have ansible-core installed, add it with ansible-galaxy collection install amazon.aws.
- SSH Public Key: You’ll need an SSH public key to securely access the EC2 instance. If you don’t have one, you can generate it using ssh-keygen. We’ll need the content of this public key file (e.g., ./ansible_aws_key.pub).
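If you want a fresh key pair just for this demo, a minimal sketch (the ansible_aws_key file name matches what the playbook expects; -N "" sets an empty passphrase, acceptable only for a throwaway demo key):

```shell
# Generate an ed25519 key pair for the demo (no passphrase).
# The private key lands in ./ansible_aws_key, the public key in ./ansible_aws_key.pub
ssh-keygen -t ed25519 -f ./ansible_aws_key -N "" -C "ansible-aws-demo"
```

Keep the private half out of version control; only the .pub file is imported into AWS.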
The Ansible Playbook: deploying a public Debian EC2 with Nginx
Let’s create a playbook named deploy_ec2_nginx.yml.
This playbook will run locally and perform the following actions:
- Gather AWS account information (optional, but good for verification).
- Create a new security group to allow SSH (port 22) and HTTP (port 80) access.
- Launch a new EC2 instance using the latest Debian AMI.
- Associate your specified SSH public key with the instance.
- Wait for the instance to be running and SSH to be available.
- Add the new instance to Ansible’s in-memory inventory.
- Install Nginx on the EC2 instance.
- Copy a local Nginx configuration file to the instance and restart Nginx.
First things first – we need to declare all the assets that we’ll use for this example. For this purpose, create the following directory structure:
ansible-aws-demo/
├── deploy_ec2_nginx.yml # Your Ansible automation playbook
├── nginx.conf # Your custom Nginx configuration
└── ansible_aws_key.pub # Your public SSH key
We’ll be using a very simple, barebones Nginx configuration that serves the default static pages and additionally returns a custom plain-text message at a dedicated /hello location:
nginx.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location /hello {
        default_type text/plain;
        return 200 'Hello from Ansible-configured Nginx!\n';
    }
}
deploy_ec2_nginx.yml:
---
- name: Deploy EC2 Instance with Nginx on AWS
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    aws_region: "eu-central-1"                  # Specify your desired AWS region
    instance_type: "t2.micro"                   # Choose an appropriate instance type
    key_name: "ansible-ec2-key"                 # Unique key pair name
    ssh_public_key_file: "ansible_aws_key.pub"  # Path to your SSH public key
    nginx_config_file: "nginx.conf"             # Path to your local Nginx config
    image_name_filter: "debian-12-amd64-*"      # Filter for latest Debian 12 AMI
    image_owner_alias: "136693071363"           # Debian's official AWS account ID
  tasks:
    - name: Get latest Debian 12 AMI
      amazon.aws.ec2_ami_info:
        region: "{{ aws_region }}"
        owners: "{{ image_owner_alias }}"
        filters:
          name: "{{ image_name_filter }}"
          virtualization-type: "hvm"
          architecture: "x86_64"
      register: latest_debian_ami

    - name: Debug AMI Info (Optional)
      debug:
        msg: "Using AMI: {{ (latest_debian_ami.images | sort(attribute='creation_date', reverse=true) | first).image_id }}"

    - name: Set latest Debian AMI ID
      set_fact:
        ami_id: "{{ (latest_debian_ami.images | sort(attribute='creation_date', reverse=true) | first).image_id }}"

    - name: Import SSH public key to AWS for EC2
      amazon.aws.ec2_key:
        name: "{{ key_name }}"
        key_material: "{{ lookup('file', ssh_public_key_file) }}"
        region: "{{ aws_region }}"
        state: present
      register: ec2_key_pair

    - name: Create a security group for web and SSH access
      amazon.aws.ec2_security_group:
        name: "web-ssh-access"
        description: "Allow HTTP and SSH traffic"
        region: "{{ aws_region }}"
        rules:
          - proto: tcp
            ports:
              - 80   # HTTP
            cidr_ip: 0.0.0.0/0
            rule_desc: "Allow HTTP"
          - proto: tcp
            ports:
              - 22   # SSH
            cidr_ip: 0.0.0.0/0   # WARNING: For demo only. Restrict to your IP in production!
            rule_desc: "Allow SSH"
        tags:
          Name: "web-ssh-sg"
      register: web_ssh_sg

    - name: Launch EC2 instance
      amazon.aws.ec2_instance:
        key_name: "{{ key_name }}"
        instance_type: "{{ instance_type }}"
        image_id: "{{ ami_id }}"
        region: "{{ aws_region }}"
        security_group: "{{ web_ssh_sg.group_id }}"
        wait: true
        wait_timeout: 300
        tags:
          Name: "ansible-debian-nginx"
          Environment: "Demo"
        count: 1
        network_interfaces:
          - device_index: 0        # The primary network interface (eth0)
            assign_public_ip: true
            # If you were using a specific subnet, you'd specify 'subnet_id' here.
            # Since we are relying on the default VPC's default subnet,
            # we don't need to specify subnet_id for this basic case.
            # You can also assign security groups per interface via
            # 'groups: ["{{ web_ssh_sg.group_id }}"]'
      register: ec2_instance

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip_address }}"
        groupname: launched_ec2_instances
      loop: "{{ ec2_instance.instances }}"

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip_address }}"
        port: 22
        state: started
        delay: 30     # seconds to wait before the first check
        timeout: 320  # seconds to wait overall
      loop: "{{ ec2_instance.instances }}"

- name: Configure Nginx on the launched EC2 instance(s)
  hosts: launched_ec2_instances   # Target the group created dynamically
  become: true                    # We need sudo to install and manage Nginx
  vars:
    ansible_user: admin           # Default user for Debian 12 AMIs is usually 'admin'
    ansible_ssh_private_key_file: "./ansible_aws_key"  # Path to your private SSH key
  tasks:
    - name: Update apt cache and install Nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Copy custom Nginx configuration
      copy:
        src: "{{ playbook_dir }}/nginx.conf"   # nginx.conf sits next to the playbook
        dest: /etc/nginx/sites-available/default
        owner: root
        group: root
        mode: '0644'
      notify: Restart Nginx

    - name: Ensure Nginx is started and enabled
      systemd:
        name: nginx
        state: started
        enabled: yes

  handlers:
    - name: Restart Nginx
      systemd:
        name: nginx
        state: restarted

- name: Output Public IP
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Display EC2 instance Public IP
      debug:
        msg: "EC2 Instance Public IP: {{ groups['launched_ec2_instances'][0] }}"
      when: groups['launched_ec2_instances'] | default([]) | length > 0
Playbook architecture: EC2 and Nginx deployment
The deploy_ec2_nginx.yml playbook executes these sequential tasks:
1. Retrieve latest Debian AMI
- name: Obtain current Debian 12 AMI
  amazon.aws.ec2_ami_info:
    region: "{{ aws_region }}"
    owners: "136693071363"   # Debian's official AWS account ID
    filters:
      name: "debian-12-amd64-*"
      virtualization-type: hvm
  register: debian_ami
This module queries AWS for the most recent HVM AMI, ensuring compatibility with modern instance types.
2. Configure security groups
- name: Establish SSH and HTTP access rules
  amazon.aws.ec2_security_group:
    name: "web-ssh-sg"
    description: "Permit SSH and HTTP traffic"
    region: "{{ aws_region }}"
    rules:
      - proto: tcp
        from_port: 22
        to_port: 22
        cidr_ip: 0.0.0.0/0   # Restrict to specific IP in production
      - proto: tcp
        from_port: 80
        to_port: 80
        cidr_ip: 0.0.0.0/0
While this configuration allows broad access for testing, production environments should limit SSH access to authorized IP ranges.
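To lock SSH down in production, the rule can be pinned to a single admin address. A hedged sketch — the CIDR below is a documentation placeholder (from the 203.0.113.0/24 test range); substitute your actual public IP:

```yaml
rules:
  - proto: tcp
    from_port: 22
    to_port: 22
    cidr_ip: 203.0.113.10/32   # placeholder: your workstation's public IP
    rule_desc: "SSH from admin workstation only"
```

A /32 mask matches exactly one address, so only that host can open an SSH connection.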
3. Launch EC2 Instance
- name: Deploy EC2 instance
  amazon.aws.ec2_instance:
    key_name: "{{ key_name }}"
    instance_type: t2.micro
    image_id: "{{ debian_ami.images[0].image_id }}"
    region: "{{ aws_region }}"
    security_groups: ["web-ssh-sg"]
    wait: yes
    tags:
      Purpose: "Ansible demonstration"
The wait: yes parameter ensures playbook execution pauses until AWS confirms instance availability.
4. Install and configure Nginx
- name: Install Nginx package
  apt:
    name: nginx
    state: present
  become: yes

- name: Apply custom configuration
  copy:
    src: nginx.conf
    dest: /etc/nginx/sites-available/default
  notify: Restart Nginx
The copy module transfers local configuration files to the target system, while notify triggers service restarts only when changes occur.
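The Restart Nginx handler that notify refers to is defined in the play's handlers section; a minimal sketch:

```yaml
handlers:
  - name: Restart Nginx
    systemd:
      name: nginx
      state: restarted
```

Because handlers run at most once at the end of the play, several configuration changes still trigger only a single restart.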
5. Output deployment details
- name: Display instance metadata
  debug:
    msg: "Public IP: {{ ec2_instance.instances[0].public_ip_address }}"
This final task provides the information needed to validate the deployment.
Keep in mind that the settings used in this example are strictly geared towards a simple and effective PoC. Be careful about using this in production environments, or at least make sure to implement some of the security and cost optimizations noted below.
Best practices for production environments
Security considerations
- Replace 0.0.0.0/0 SSH rules with specific IP CIDR blocks
- Implement IAM roles instead of access keys where possible
- Enable AWS CloudTrail logging for audit purposes
Cost optimization
- Use t3.micro instances for better price-performance ratio
- Apply EC2 instance tags for cost allocation tracking
- Schedule automated shutdowns for non-production instances
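One way to implement the scheduled shutdowns is a small cron-driven playbook that stops every instance carrying the demo tag. A sketch, assuming the Environment: Demo tag applied earlier (region and tag values are this post's examples):

```yaml
- name: Stop demo instances outside working hours
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Stop all running instances tagged Environment=Demo
      amazon.aws.ec2_instance:
        region: "eu-central-1"
        state: stopped
        filters:
          "tag:Environment": "Demo"
          instance-state-name: "running"
```

Scheduled via cron (or an EventBridge-triggered job), this keeps non-production instances from billing overnight; stopped instances still accrue EBS storage charges, so terminate what you no longer need.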
Maintenance strategies
- Version control playbooks in Git repositories
- Use Ansible Vault for credential management
- Implement CI/CD pipelines for playbook testing
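For the Ansible Vault point above, credentials can live in an encrypted vars file instead of shell environment variables. A sketch — the file path and variable names are illustrative, not prescribed:

```yaml
# group_vars/all/vault.yml
# Encrypt with: ansible-vault encrypt group_vars/all/vault.yml
vault_aws_access_key_id: "your_access_key_id"
vault_aws_secret_access_key: "your_secret_access_key"
```

Run the playbook with --ask-vault-pass (or --vault-password-file) and pass the values to the AWS modules via their access_key and secret_key parameters, e.g. access_key: "{{ vault_aws_access_key_id }}".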
Running the Playbook
- Save the playbook as deploy_ec2_nginx.yml.
- Create the nginx.conf file in the same directory.
- Place your SSH public key file (e.g., your_ssh_public_key.pub) in the same directory.
- Make sure your AWS credentials (Access Key ID and Secret Access Key) are configured (e.g., via environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, or ~/.aws/credentials).
- Open your terminal in the directory containing these files and run:
ansible-playbook deploy_ec2_nginx.yml
Ansible will now execute the tasks. You’ll see output indicating the progress. Once completed, it will display the public IP address of your new EC2 instance.
After all is said and done, you should be able to SSH into your instance with the following command:
ssh -i ./ansible_aws_key admin@<YOUR_EC2_PUBLIC_IP>
Note: Replace admin if your AMI’s default user is different, and adjust the -i path if your private key resides somewhere other than ./ansible_aws_key.
If you now point your browser at the following address:
http://<YOUR_EC2_PUBLIC_IP>/hello
You should be happily greeted by our custom message, yay! 🙂
Cleaning up
Don’t forget this crucial step!
Before saying goodbye, please ensure that you terminate your newly provisioned EC2 instance, as leaving AWS resources running will incur charges. Keeping things tidy also goes a long way towards having the best possible AWS experience, without any nasty surprises!
Therefore, use the AWS Management Console or the AWS CLI to:
- Terminate the EC2 instance.
- Delete the Security Group.
- Delete the EC2 Key Pair.
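The teardown can itself be automated with Ansible. A hedged sketch mirroring the names used throughout this post — run it only after confirming nothing else depends on these resources:

```yaml
- name: Tear down demo resources
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    aws_region: "eu-central-1"
  tasks:
    - name: Terminate the demo instance(s)
      amazon.aws.ec2_instance:
        region: "{{ aws_region }}"
        state: absent
        filters:
          "tag:Name": "ansible-debian-nginx"
        wait: true   # the security group can only go once the instance is gone

    - name: Delete the security group
      amazon.aws.ec2_security_group:
        name: "web-ssh-access"
        region: "{{ aws_region }}"
        state: absent

    - name: Remove the imported key pair
      amazon.aws.ec2_key:
        name: "ansible-ec2-key"
        region: "{{ aws_region }}"
        state: absent
```

Because these modules are idempotent, rerunning the teardown after everything is gone simply reports no changes.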
Conclusion
Ansible provides a robust framework for codifying AWS infrastructure management. By combining declarative syntax with AWS service integration, you can achieve reproducible deployments while minimizing configuration drift. This approach scales effectively from single instances to complex multi-tier architectures.
For extended AWS deployment with Ansible use cases, consult the Ansible AWS Collection Documentation to explore modules for VPCs, RDS databases, and Lambda functions.