Introduction
I work on a number of OpenStack-based projects. In order to make sure they work correctly, I need to test them against an OpenStack environment. It's usually not a good idea to test on a production environment, since mistakes and bugs can cause damage to production.
So the next best option is to create an OpenStack environment strictly for testing purposes. This blog post will describe how to create such an environment.
- Introduction
- Where to Run OpenStack
- Provisioning a Virtual Machine
- Installing OpenStack
- Testing with OpenStack
- Automating the Process
- Conclusion
Where to Run OpenStack
The first topic of consideration is where to run OpenStack.
VirtualBox
At a minimum, you can install VirtualBox on your workstation and create a virtual machine. This is quick, easy, and free. However, you're limited to the resources of your workstation. For example, if your laptop only has 4GB of memory and two cores, OpenStack is going to run slow.
AWS
Another option is to use AWS. While AWS offers a free tier, it's restricted (I think) to the t2.micro flavor. This flavor only supports 1 vCPU, which is usually worse than your laptop. Larger instances will cost anywhere from $0.25 to $5.00 (and up!) per hour to run. It can get expensive.
However, AWS offers "Spot Instances". These are virtual machines that cost a fraction of the normal price. This is possible because Spot Instances run on spare, unused capacity in Amazon's cloud. The catch is that your virtual machine could be terminated when a higher-paying customer wants the capacity. You certainly don't want to do this for production (well, you can, and that's a fun exercise on its own), but for testing, it's perfect.
With Spot Instances, you can run an m3.xlarge flavor, which consists of 4 vCPUs and 16GB of memory, for $0.05 per hour. An afternoon of work will cost you $0.20. Well worth the cost of 4 vCPUs and 16GB of memory, in my opinion.
Spot Pricing is constantly changing. Make sure you check the current price before you begin working. And make sure you do not leave your virtual machine running indefinitely!
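One way to check the current price from the command line is the AWS CLI's Spot price history query. This is a sketch rather than a required step; it assumes the AWS CLI is installed and configured with valid credentials, and it uses the instance type and region from later in this post.

```shell
# Show recent Spot prices for m3.xlarge Linux instances in us-west-2.
# Assumes `aws` is installed and configured (run `aws configure` first).
aws ec2 describe-spot-price-history \
  --region us-west-2 \
  --instance-types m3.xlarge \
  --product-descriptions "Linux/UNIX" \
  --start-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
  --query 'SpotPriceHistory[*].[AvailabilityZone,SpotPrice]' \
  --output table
```

The prices vary per availability zone, so it's worth scanning the whole table before setting a bid.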
Other Spot Pricing Clouds
Both Google and Azure offer spot instances; however, I have not had time to try them, so I can't comment.
Your Own Cloud
The best resource is your own cloud. Maybe you already have a home lab set up, or your place of $work has a cloud you can use. This way, you can have a large amount of resources available to use for free.
Provisioning a Virtual Machine
Once you have your location sorted out, you need to decide how to interact with the cloud to provision a virtual machine.
At a minimum, you can use the standard GUI or console that the cloud provides. This works, but it's a hassle to have to manually go through all settings each time you want to launch a new virtual machine. It's always best to test with a clean environment, so you will be creating and destroying virtual machines a lot. Manually setting up virtual machines will get tedious and is prone to human error. Therefore, it's better to use a tool to automate the process.
Terraform
Terraform is a tool that enables you to declaratively create infrastructure. Think of it like Puppet or Chef, but for virtual machines and virtual networks instead of files and packages.
I highly recommend Terraform for this, though I admit I am biased because I spend a lot of time contributing to the Terraform project.
Deploying to AWS
As a reference example, I'll show how to use Terraform to deploy to AWS. Before you begin, make sure you have a valid AWS account and you have gone through the Terraform intro.
There's some irony in using AWS to deploy OpenStack. However, some readers might not have access to an OpenStack cloud to deploy to. Please don't turn this into a political discussion.
On your workstation, open a terminal and make a directory:
$ pwd
/home/jtopjian
$ mkdir terraform-openstack-test
$ cd terraform-openstack-test
Next, generate an SSH key pair:
$ pwd
/home/jtopjian/terraform-openstack-test
$ mkdir key
$ cd key
$ ssh-keygen -t rsa -N '' -f id_rsa
$ cd ..
Next, create a main.tf file which will house our configuration:
$ pwd
/home/jtopjian/terraform-openstack-test
$ vi main.tf
Start by creating a key pair:
provider "aws" {
region = "us-west-2"
}
resource "aws_key_pair" "openstack" {
key_name = "openstack-key"
public_key = "${file("key/id_rsa.pub")}"
}
With that in place, run:
$ terraform init
$ terraform apply
Next, create a Security Group. This will allow traffic in and out of the virtual machine. Add the following to main.tf:
provider "aws" {
region = "us-west-2"
}
resource "aws_key_pair" "openstack" {
key_name = "openstack-key"
public_key = "${file("key/id_rsa.pub")}"
}
+ resource "aws_security_group" "openstack" {
+ name = "openstack"
+ description = "Allow all inbound/outbound traffic"
+
+ ingress {
+ from_port = 0
+ to_port = 0
+ protocol = "tcp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 0
+ to_port = 0
+ protocol = "udp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+
+ ingress {
+ from_port = 0
+ to_port = 0
+ protocol = "icmp"
+ cidr_blocks = ["0.0.0.0/0"]
+ }
+ }
Note: Don't include the +. It's used to highlight what has been added to the configuration.
With that in place, run:
$ terraform plan
$ terraform apply
If you log in to your AWS console through a browser, you can see that the key pair and security group have been added to your account.
You can easily destroy and recreate these resources at will:
$ terraform plan
$ terraform destroy
$ terraform plan
$ terraform apply
$ terraform show
Finally, create a virtual machine. Add the following to main.tf:
provider "aws" {
region = "us-west-2"
}
resource "aws_key_pair" "openstack" {
key_name = "openstack-key"
public_key = "${file("key/id_rsa.pub")}"
}
resource "aws_security_group" "openstack" {
name = "openstack"
description = "Allow all inbound/outbound traffic"
ingress {
from_port = 0
to_port = 0
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 0
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 0
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
}
+ resource "aws_spot_instance_request" "openstack" {
+ ami = "ami-0c2aba6c"
+ spot_price = "0.0440"
+ instance_type = "m3.xlarge"
+ wait_for_fulfillment = true
+ spot_type = "one-time"
+ key_name = "${aws_key_pair.openstack.key_name}"
+
+ security_groups = ["default", "${aws_security_group.openstack.name}"]
+
+ root_block_device {
+ volume_size = 40
+ delete_on_termination = true
+ }
+
+ tags {
+ Name = "OpenStack Test Infra"
+ }
+ }
+
+ output "ip" {
+ value = "${aws_spot_instance_request.openstack.public_ip}"
+ }
Above, an aws_spot_instance_request resource was added. This will launch a Spot Instance using the parameters we specified.
It's important to note that the aws_spot_instance_request resource also takes the same parameters as the aws_instance resource.
The ami being used is the latest CentOS 7 AMI published in the us-west-2 region. You can see the list of AMIs here. Make sure you use the correct AMI for the region you're deploying to.
Notice how this resource references the other resources you created (the key pair and the security group). Additionally, notice how you're specifying a spot_price. This is helpful to limit the amount of money that will be spent on this instance. You can get an accurate price by going to the Spot Request page and clicking on "Pricing History". Again, make sure you are looking at the correct region.
An output was also added to the main.tf file. This will print out the public IP address of the AWS instance when Terraform completes.
Amazon limits the number of Spot Instances you can launch at any given time. You might find that if you create, delete, and recreate a Spot Instance too quickly, Terraform will give you an error. This is Amazon telling you to wait. You can open a support ticket with Amazon/AWS and ask for a larger spot quota to be placed on your account. I asked for the ability to launch 5 Spot Instances at any given time in the us-west-1 and us-west-2 regions. This took around two business days to complete.
With all of this in place, run Terraform:
$ terraform apply
When it has completed, you should see output similar to the following:
Outputs:
ip = 54.71.64.171
You should now be able to SSH to the instance:
$ ssh -i key/id_rsa centos@54.71.64.171
And there you have it! You now have access to a CentOS virtual machine to continue testing OpenStack with.
Installing OpenStack
There are numerous ways to install OpenStack. Given that the purpose of this setup is to create an easy-to-deploy OpenStack environment for testing, let's narrow our choices down to methods that can provide a simple all-in-one setup.
DevStack
DevStack provides an easy way of creating an all-in-one environment for testing. It's mainly used to test the latest OpenStack source code. Because of that, it can be buggy. I've found that even when using DevStack to deploy a stable version of OpenStack, there were times when DevStack failed to complete. Given that it takes approximately two hours for DevStack to install, a failed installation wastes two hours of time.
Additionally, DevStack isn't suitable to run on a virtual machine which might reboot, since its services aren't set up to start again on boot. When testing an application that uses OpenStack, it's possible that the application causes the virtual machine to be overloaded and lock up, forcing a reboot.
So for these reasons, I won't use DevStack here. That's not to say that DevStack isn't a suitable tool – after all, it's used as the core of all OpenStack testing.
PackStack
PackStack is also able to easily install an all-in-one OpenStack environment. Rather than building OpenStack from source, it leverages RDO packages and Puppet.
PackStack is also beneficial because it will install the latest stable release of OpenStack. If you are developing an application that end-users will use, these users will most likely be using an OpenStack cloud based on a stable release.
Installing OpenStack with PackStack
The PackStack home page has all necessary instructions to get a simple environment up and running. Here are all of the steps compressed into a shell script:
#!/bin/bash
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install -y centos-release-openstack-ocata
yum-config-manager --enable openstack-ocata
yum update -y
yum install -y openstack-packstack
packstack --allinone
OpenStack Pike is available at the time of this writing; however, I have not had a chance to test and verify that the instructions work with it. Therefore, I'll be using Ocata.
Save the file as something like deploy.sh and then run it in your virtual machine:
$ sudo bash deploy.sh
Consider using a tool like tmux or screen after logging in to your remote virtual machine. This will ensure the deploy.sh script continues to run, even if your connection to the virtual machine is terminated.
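For example, here's a sketch of two ways to keep the script running, assuming tmux is installed (yum install -y tmux) or, failing that, falling back to nohup:

```shell
# Option 1: run deploy.sh inside a detached tmux session.
tmux new-session -d -s deploy 'sudo bash deploy.sh'
tmux attach-session -t deploy   # re-attach later to check progress

# Option 2: if neither tmux nor screen is available, nohup also
# keeps the script running after the SSH session closes.
nohup sudo bash deploy.sh > deploy.log 2>&1 &
tail -f deploy.log              # Ctrl-C stops tailing, not the deploy
```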
The process will take approximately 30 minutes to complete.
When it's finished, you'll now have a usable all-in-one environment:
$ sudo su
$ cd /root
$ source keystonerc_demo
$ openstack network list
$ openstack image list
$ openstack server create --flavor 1 --image cirros test
Testing with OpenStack
Now that OpenStack is up and running, you can begin testing with it.
Let's say you want to add a new feature to Gophercloud. First, you need to install Go:
$ yum install -y wget
$ wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
$ chmod +x /usr/local/bin/gimme
$ eval "$(/usr/local/bin/gimme 1.8)"
$ export GOPATH=$HOME/go
$ export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
To make those commands permanent, add the following to your .bashrc file:
if [[ -f /usr/local/bin/gimme ]]; then
eval "$(/usr/local/bin/gimme 1.8)"
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
fi
Next, install Gophercloud:
$ go get github.com/gophercloud/gophercloud
$ cd ~/go/src/github.com/gophercloud/gophercloud
$ go get -u ./...
In order to run Gophercloud acceptance tests, you need to have several environment variables set. These are described here.
It would be tedious to set each variable for each test or each time you log in to the virtual machine. Therefore, embed the variables into the /root/keystonerc_demo and /root/keystonerc_admin files:
source /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
_NETWORK_ID=$(openstack network show private -c id -f value)
_SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
_EXTGW_ID=$(openstack network show public -c id -f value)
_IMAGE_ID=$(openstack image show cirros -c id -f value)
echo "" >> /root/keystonerc_admin
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin
echo "" >> /root/keystonerc_demo
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo
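As a quick sanity check that the appended exports take effect, source the rc file and confirm each variable is set. The sketch below uses a temporary file containing a few of the exports so it can run anywhere; on the real host you would source /root/keystonerc_demo instead.

```shell
#!/bin/bash
# Build a miniature rc file like the ones modified above,
# then verify each OS_* variable is set after sourcing it.
rc=$(mktemp)
cat > "$rc" <<'EOF'
export OS_IMAGE_NAME="cirros"
export OS_POOL_NAME="public"
export OS_FLAVOR_ID=99
export OS_FLAVOR_ID_RESIZE=98
EOF

source "$rc"
for v in OS_IMAGE_NAME OS_POOL_NAME OS_FLAVOR_ID OS_FLAVOR_ID_RESIZE; do
  if [ -z "${!v}" ]; then
    echo "missing: $v"
  else
    echo "ok: $v=${!v}"
  fi
done
rm -f "$rc"
```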
Now try to run a test:
$ source ~/keystonerc_demo
$ cd ~/go/src/github.com/gophercloud/gophercloud
$ go test -v -tags "fixtures acceptance" -run "TestServersCreateDestroy" \
github.com/gophercloud/gophercloud/acceptance/openstack/compute/v2
That go command is long and tedious. A shortcut would be more helpful. Add the following to ~/.bashrc:
gophercloudtest() {
if [[ -n $1 ]] && [[ -n $2 ]]; then
pushd ~/go/src/github.com/gophercloud/gophercloud
go test -v -tags "fixtures acceptance" -run "$1" github.com/gophercloud/gophercloud/acceptance/openstack/$2 | tee ~/gophercloud.log
popd
fi
}
You can now run tests by doing:
$ source ~/.bashrc
$ gophercloudtest TestServersCreateDestroy compute/v2
Automating the Process
There's been a lot of work done since first logging in to the virtual machine, and it would be a hassle to have to do it all over again. It would be better if the entire process were automated, from start to finish.
First, create a new directory on your workstation:
$ pwd
/home/jtopjian/terraform-openstack-test
$ mkdir files
$ cd files
$ vi deploy.sh
In the deploy.sh script, add the following contents:
#!/bin/bash
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install -y centos-release-openstack-ocata
yum-config-manager --enable openstack-ocata
yum update -y
yum install -y openstack-packstack
packstack --allinone
source /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
_NETWORK_ID=$(openstack network show private -c id -f value)
_SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
_EXTGW_ID=$(openstack network show public -c id -f value)
_IMAGE_ID=$(openstack image show cirros -c id -f value)
echo "" >> /root/keystonerc_admin
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin
echo "" >> /root/keystonerc_demo
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo
yum install -y wget
wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
chmod +x /usr/local/bin/gimme
eval "$(/usr/local/bin/gimme 1.8)"
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
go get github.com/gophercloud/gophercloud
pushd ~/go/src/github.com/gophercloud/gophercloud
go get -u ./...
popd
cat >> /root/.bashrc <<EOF
if [[ -f /usr/local/bin/gimme ]]; then
eval "\$(/usr/local/bin/gimme 1.8)"
export GOPATH=$HOME/go
export PATH=\$PATH:$GOROOT/bin:\$GOPATH/bin
fi
gophercloudtest() {
if [[ -n \$1 ]] && [[ -n \$2 ]]; then
pushd ~/go/src/github.com/gophercloud/gophercloud
go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
popd
fi
}
EOF
Next, add the following to main.tf:
provider "aws" {
region = "us-west-2"
}
resource "aws_key_pair" "openstack" {
key_name = "openstack-key"
public_key = "${file("key/id_rsa.pub")}"
}
resource "aws_security_group" "openstack" {
name = "openstack"
description = "Allow all inbound/outbound traffic"
ingress {
from_port = 0
to_port = 0
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 0
protocol = "udp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 0
to_port = 0
protocol = "icmp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_spot_instance_request" "openstack" {
ami = "ami-0c2aba6c"
spot_price = "0.0440"
instance_type = "m3.xlarge"
wait_for_fulfillment = true
spot_type = "one-time"
key_name = "${aws_key_pair.openstack.key_name}"
security_groups = ["default", "${aws_security_group.openstack.name}"]
root_block_device {
volume_size = 40
delete_on_termination = true
}
tags {
Name = "OpenStack Test Infra"
}
}
+ resource "null_resource" "openstack" {
+ connection {
+ host = "${aws_spot_instance_request.openstack.public_ip}"
+ user = "centos"
+ private_key = "${file("key/id_rsa")}"
+ }
+
+ provisioner "file" {
+ source = "files"
+ destination = "/home/centos/files"
+ }
+
+ provisioner "remote-exec" {
+ inline = [
+ "sudo bash /home/centos/files/deploy.sh"
+ ]
+ }
+ }
output "ip" {
value = "${aws_spot_instance_request.openstack.public_ip}"
}
The above has added a null_resource. A null_resource is simply an empty Terraform resource. It's commonly used to store all provisioning steps. In this case, the above null_resource is doing the following:
- Configuring the connection to the virtual machine.
- Copying the files directory to the virtual machine.
- Remotely running the deploy.sh script.
Now when you run Terraform, once the Spot Instance has been created, Terraform will copy the files directory to it and then execute deploy.sh. Terraform will now take approximately 30-40 minutes to finish, but when it has, OpenStack will be up and running.
Since a new resource type has been added (null_resource), you will need to run terraform init:
$ pwd
/home/jtopjian/terraform-openstack-test
$ terraform init
Then to run everything from start to finish, do:
$ terraform destroy
$ terraform apply
When Terraform is finished, you will have a fully functional OpenStack environment suitable for testing.
Conclusion
This post detailed how to create an all-in-one OpenStack environment that is suitable for testing applications. Additionally, all configuration was recorded both in Terraform and shell scripts so the environment can be created automatically.
Granted, if you aren't creating a Go-based application, you will need to install other dependencies, but it should be easy to figure out from the example detailed here.
While this setup is a great way to easily build a testing environment, there are still other improvements that can be made. For example, instead of running PackStack each time, an AMI can be created which already has OpenStack installed. Additionally, multi-node environments can be created for more advanced testing. These methods will be detailed in future posts.