Introduction
In the last post, I detailed how to create an all-in-one OpenStack environment in an isolated virtual machine for the purpose of testing OpenStack-based applications.
In this post, I'll cover how to create an image from the environment. This will allow you to launch virtual machines which already have the OpenStack environment installed and running. The benefit of this approach is that it reduces the time required to build the environment and pins the environment to a known working version.
In addition, I'll cover how to modify the all-in-one environment so that it can be accessed remotely. This way, testing does not have to be done locally on the virtual machine.
Note: I realize the title of this series might be a misnomer. This series is not covering how to deploy OpenStack in general, but how to set up disposable OpenStack environments for testing purposes. Blame line wrapping.
- Introduction
- How to Generate the Image
- Creating a Reusable OpenStack Image
- Using the Image
- Conclusion
How to Generate the Image
AWS and OpenStack (and most other cloud providers) provide the ability to create an image (whether an AMI, a qcow2, etc.) from a running virtual machine. This is commonly known as "snapshotting".
The process described here will use snapshotting, but it's not that simple. OpenStack has a lot of moving pieces and some of those pieces are dependent on unique configurations of the host: the hostname, the IP address(es), etc. These items must be accounted for and configured correctly on the new virtual machine.
With this in mind, the process of generating an image is roughly:
- Launch a virtual machine.
- Install an all-in-one OpenStack environment.
- Remove any unique information from the OpenStack databases.
- Snapshot.
- Upon creation of a new virtual machine, ensure OpenStack knows about the new unique information.
Creating a Reusable OpenStack Image
Just like in Part 1, it's best to ensure this entire process is automated. Terraform works great for provisioning and deploying infrastructure, but it is not suited to a niche task such as snapshotting.
Fortunately, there's Packer. And even more fortunate is that Packer supports a wide array of cloud services.
If you haven't used Packer before, I recommend going through the intro before proceeding here.
In Part 1, I used AWS as the cloud being deployed to. For this part, I'll switch things up and use an OpenStack cloud.
Creating a Simple Image
To begin, you can continue using the same terraform-openstack-test directory that was used in Part 1.
First, create a new directory called packer/openstack:
$ pwd
/home/jtopjian/terraform-openstack-test
$ mkdir -p packer/openstack
$ cd packer/openstack
Next, create a file called build.json with the following contents:
{
"builders": [{
"type": "openstack",
"image_name": "packstack-ocata",
"reuse_ips": true,
"ssh_username": "centos",
"flavor": "{{user `flavor`}}",
"security_groups": ["{{user `secgroup`}}"],
"source_image": "{{user `image_id`}}",
"floating_ip_pool": "{{user `pool`}}",
"networks": ["{{user `network_id`}}"]
}]
}
I've broken the above into two sections: the top settings have hard-coded values, while the bottom settings take their values from the command line. This is because those values will vary between your OpenStack cloud and mine.
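If you would rather not pass every value on the command line each run, Packer templates also support a top-level "variables" block that supplies defaults for these user variables. A minimal sketch, added above the "builders" block (the values shown are placeholders from my cloud; substitute your own):
"variables": {
  "flavor": "m1.large",
  "secgroup": "AllowAll",
  "image_id": "9abadd38-a33d-44c2-8356-b8b8ae184e04",
  "pool": "public",
  "network_id": "b0b12e8f-a695-480e-9dc2-3dc8ac2d55fd"
},
Values passed with -var still override these defaults.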
With the above in place, run:
$ packer build \
-var 'flavor=m1.large' \
-var 'secgroup=AllowAll' \
-var 'image_id=9abadd38-a33d-44c2-8356-b8b8ae184e04' \
-var 'pool=public' \
-var 'network_id=b0b12e8f-a695-480e-9dc2-3dc8ac2d55fd' \
build.json
Note the following: the image_id must be a CentOS 7 image, and the Security Group must allow traffic from your workstation to port 22.
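If you need to find a suitable CentOS 7 image or create such a security group first, something like the following should work with the openstack CLI (the names are only examples; for Packer's purposes the group just needs to allow port 22):
$ openstack image list | grep -i centos
$ openstack security group create AllowAll
$ openstack security group rule create --proto tcp --dst-port 22 --remote-ip 0.0.0.0/0 AllowAll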
The packer build command will take some time to complete. When it has finished, it will print the UUID of a newly generated image:
==> Builds finished. The artifacts of successful builds are:
--> openstack: An image was created: 53ecc829-60c0-4a87-81f4-9fc603ff2a8f
That UUID will point to an image titled "packstack-ocata".
Congratulations! You just created an image.
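If you want to confirm the image was registered, you can look it up by name with the openstack CLI (assuming it is pointed at the same cloud Packer built against):
$ openstack image show packstack-ocata -c id -c name -c status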
However, there is virtually nothing different about "packstack-ocata" and the CentOS image used to create it. All Packer did was launch a virtual machine and create a snapshot of it.
In order for Packer to make changes to the virtual machine, you must configure
"provisioners" in the build.json
file. Provisioners are just like Terraform's
concept of provisioners: steps that will execute commands on the running virtual
machine. Before you can add some provisioners to the Packer build file, you
first need to generate the scripts which will be run.
Generating an Answer File
In Part 1, PackStack was used to install an all-in-one OpenStack environment. The command used was:
$ packstack --allinone
This very simple command will use a lot of sane defaults and the result will be a fully functional all-in-one environment.
However, in order to more easily make the OpenStack environment run correctly each time a virtual machine is created, the installation needs to be tuned. To do this, a custom "answer file" will be used when running PackStack.
An answer file is a file which contains each configurable setting within PackStack. This file is very large with lots of options. It's not something you want to write from scratch. Instead, PackStack can generate an answer file to be used as a template.
On a CentOS 7 virtual machine, which can even be the same virtual machine you created in Part 1, run:
$ packstack --gen-answer-file=packstack-answers.txt
Copy the file to your workstation using scp or some other means. Make a directory called files to store this answer file:
$ pwd
/home/jtopjian/terraform-openstack-test
$ mkdir packer/files
$ scp -i key/id_rsa centos@<ip>:packstack-answers.txt packer/files
Once stored locally, make the following changes:
First, locate the setting CONFIG_CONTROLLER_HOST. This setting will have the value of an IP address local to the virtual machine which generated this file:
CONFIG_CONTROLLER_HOST=10.41.8.200
Do a global search and replace of 10.41.8.200 with 127.0.0.1.
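For example, with sed (substitute whatever IP address your answer file actually contains):
$ sed -i 's/10.41.8.200/127.0.0.1/g' packer/files/packstack-answers.txt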
Next, use this opportunity to tune which services you want to enable for your test environment. For example:
- CONFIG_HORIZON_INSTALL=y
+ CONFIG_HORIZON_INSTALL=n
- CONFIG_CEILOMETER_INSTALL=y
+ CONFIG_CEILOMETER_INSTALL=n
- CONFIG_AODH_INSTALL=y
+ CONFIG_AODH_INSTALL=n
- CONFIG_GNOCCHI_INSTALL=y
+ CONFIG_GNOCCHI_INSTALL=n
- CONFIG_LBAAS_INSTALL=n
+ CONFIG_LBAAS_INSTALL=y
- CONFIG_NEUTRON_FWAAS=n
+ CONFIG_NEUTRON_FWAAS=y
These are all services whose status I have personally changed: either they are disabled by default and I want them enabled, or they are enabled by default and I do not need them. Change the values to suit your needs.
You might notice that there are several embedded passwords and secrets in this answer file. Astute readers will realize that these passwords will all be used for every virtual machine created with this answer file. For production use, this is most definitely not secure. However, I consider this relatively safe since these OpenStack environments are temporary and only for testing.
Installing OpenStack
Next, begin building a deploy.sh script. You can re-use the deploy.sh script from Part 1 as a start, with one initial change:
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install -y centos-release-openstack-ocata
yum-config-manager --enable openstack-ocata
yum update -y
yum install -y openstack-packstack
- packstack --allinone
+ packstack --answer-file /home/centos/files/packstack-answers.txt
source /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
_NETWORK_ID=$(openstack network show private -c id -f value)
_SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
_EXTGW_ID=$(openstack network show public -c id -f value)
_IMAGE_ID=$(openstack image show cirros -c id -f value)
echo "" >> /root/keystonerc_admin
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin
echo "" >> /root/keystonerc_demo
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo
yum install -y wget git
wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
chmod +x /usr/local/bin/gimme
eval "$(/usr/local/bin/gimme 1.8)"
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
go get github.com/gophercloud/gophercloud
pushd ~/go/src/github.com/gophercloud/gophercloud
go get -u ./...
popd
cat >> /root/.bashrc <<EOF
if [[ -f /usr/local/bin/gimme ]]; then
eval "\$(/usr/local/bin/gimme 1.8)"
export GOPATH=\$HOME/go
export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
fi
gophercloudtest() {
if [[ -n \$1 ]] && [[ -n \$2 ]]; then
pushd ~/go/src/github.com/gophercloud/gophercloud
go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
popd
fi
}
EOF
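For reference, once a virtual machine built from this image is running, you'd log in as root and use the gophercloudtest helper defined above. The test name below is only an illustration; substitute whichever Gophercloud acceptance test you're targeting:
$ gophercloudtest TestFlavorsList compute/v2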
Next, alter packer/openstack/build.json with the following:
{
"builders": [{
"type": "openstack",
"image_name": "packstack-ocata",
"reuse_ips": true,
"ssh_username": "centos",
"flavor": "{{user `flavor`}}",
"security_groups": ["{{user `secgroup`}}"],
"source_image": "{{user `image_id`}}",
"floating_ip_pool": "{{user `pool`}}",
"networks": ["{{user `network_id`}}"]
- }]
+ }],
+ "provisioners": [
+ {
+ "type": "file",
+ "source": "../files",
+ "destination": "/home/centos/files"
+ },
+ {
+ "type": "shell",
+ "inline": [
+ "sudo bash /home/centos/files/deploy.sh"
+ ]
+ }
+ ]
}
There are two provisioners being created here: one which will copy the files directory to /home/centos/files and one to run the deploy.sh script.
The files directory was created outside of the openstack directory because these files are not unique to OpenStack. You can use the same files to build images in other clouds. For example, create a packer/aws directory with a similar build.json file for AWS.
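A rough sketch of what such an amazon-ebs builder might look like is below (untested; the region, instance type, and AMI ID are placeholders you will need to fill in for your account):
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "<a CentOS 7 AMI ID>",
    "instance_type": "m3.large",
    "ssh_username": "centos",
    "ami_name": "packstack-ocata-{{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "file",
      "source": "../files",
      "destination": "/home/centos/files"
    },
    {
      "type": "shell",
      "inline": [
        "sudo bash /home/centos/files/deploy.sh"
      ]
    }
  ]
}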
With the provisioners in place, run… actually, don't run yet. I'll save you a step. While the current configuration will launch an instance, install an all-in-one OpenStack environment, and create a snapshot, OpenStack will not work correctly when you create a virtual machine based on that image.
In order for it to work correctly, there are some more modifications which need to be made so that OpenStack starts correctly on a new virtual machine.
Removing Unique Data
In order to remove the unique data of the OpenStack environment, add the following to deploy.sh:
+ hostnamectl set-hostname localhost
+
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install -y centos-release-openstack-ocata
yum-config-manager --enable openstack-ocata
yum update -y
yum install -y openstack-packstack
packstack --answer-file /home/centos/files/packstack-answers.txt
source /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
_NETWORK_ID=$(openstack network show private -c id -f value)
_SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
_EXTGW_ID=$(openstack network show public -c id -f value)
_IMAGE_ID=$(openstack image show cirros -c id -f value)
echo "" >> /root/keystonerc_admin
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin
echo "" >> /root/keystonerc_demo
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo
yum install -y wget git
wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
chmod +x /usr/local/bin/gimme
eval "$(/usr/local/bin/gimme 1.8)"
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
go get github.com/gophercloud/gophercloud
pushd ~/go/src/github.com/gophercloud/gophercloud
go get -u ./...
popd
cat >> /root/.bashrc <<EOF
if [[ -f /usr/local/bin/gimme ]]; then
eval "\$(/usr/local/bin/gimme 1.8)"
export GOPATH=\$HOME/go
export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
fi
gophercloudtest() {
if [[ -n \$1 ]] && [[ -n \$2 ]]; then
pushd ~/go/src/github.com/gophercloud/gophercloud
go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
popd
fi
}
EOF
+
+ systemctl stop openstack-cinder-backup.service
+ systemctl stop openstack-cinder-scheduler.service
+ systemctl stop openstack-cinder-volume.service
+ systemctl stop openstack-nova-cert.service
+ systemctl stop openstack-nova-compute.service
+ systemctl stop openstack-nova-conductor.service
+ systemctl stop openstack-nova-consoleauth.service
+ systemctl stop openstack-nova-novncproxy.service
+ systemctl stop openstack-nova-scheduler.service
+ systemctl stop neutron-dhcp-agent.service
+ systemctl stop neutron-l3-agent.service
+ systemctl stop neutron-lbaasv2-agent.service
+ systemctl stop neutron-metadata-agent.service
+ systemctl stop neutron-openvswitch-agent.service
+ systemctl stop neutron-metering-agent.service
+
+ mysql -e "update services set deleted_at=now(), deleted=id" cinder
+ mysql -e "update services set deleted_at=now(), deleted=id" nova
+ mysql -e "update compute_nodes set deleted_at=now(), deleted=id" nova
+ for i in $(openstack network agent list -c ID -f value); do
+ neutron agent-delete $i
+ done
+
+ systemctl stop httpd
The above adds three pieces to deploy.sh: set the hostname to localhost, stop all OpenStack services, and delete the existing service and agent records for Cinder, Nova, and Neutron.
Now, with the above in place, run… no, not yet, either.
Remember the last step outlined in the beginning of this post:
Upon creation of a new virtual machine, ensure OpenStack knows about the new unique information.
How is the new virtual machine going to configure itself with new information?
One solution is to create an rc.local file and place it in the /etc directory during the Packer provisioning phase. This way, when the virtual machine launches, rc.local is triggered and acts as a post-boot script.
Adding an rc.local File
First, add the following to deploy.sh:
hostnamectl set-hostname localhost
systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install -y centos-release-openstack-ocata
yum-config-manager --enable openstack-ocata
yum update -y
yum install -y openstack-packstack
packstack --answer-file /home/centos/files/packstack-answers.txt
source /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
_NETWORK_ID=$(openstack network show private -c id -f value)
_SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
_EXTGW_ID=$(openstack network show public -c id -f value)
_IMAGE_ID=$(openstack image show cirros -c id -f value)
echo "" >> /root/keystonerc_admin
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_admin
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_admin
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_admin
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_admin
echo export OS_POOL_NAME="public" >> /root/keystonerc_admin
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_admin
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_admin
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_admin
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_admin
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_admin
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_admin
echo "" >> /root/keystonerc_demo
echo export OS_IMAGE_NAME="cirros" >> /root/keystonerc_demo
echo export OS_IMAGE_ID="$_IMAGE_ID" >> /root/keystonerc_demo
echo export OS_NETWORK_ID=$_NETWORK_ID >> /root/keystonerc_demo
echo export OS_EXTGW_ID=$_EXTGW_ID >> /root/keystonerc_demo
echo export OS_POOL_NAME="public" >> /root/keystonerc_demo
echo export OS_FLAVOR_ID=99 >> /root/keystonerc_demo
echo export OS_FLAVOR_ID_RESIZE=98 >> /root/keystonerc_demo
echo export OS_DOMAIN_NAME=default >> /root/keystonerc_demo
echo export OS_TENANT_NAME=\$OS_PROJECT_NAME >> /root/keystonerc_demo
echo export OS_TENANT_ID=\$OS_PROJECT_ID >> /root/keystonerc_demo
echo export OS_SHARE_NETWORK_ID="foobar" >> /root/keystonerc_demo
yum install -y wget git
wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
chmod +x /usr/local/bin/gimme
eval "$(/usr/local/bin/gimme 1.8)"
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
go get github.com/gophercloud/gophercloud
pushd ~/go/src/github.com/gophercloud/gophercloud
go get -u ./...
popd
cat >> /root/.bashrc <<EOF
if [[ -f /usr/local/bin/gimme ]]; then
eval "\$(/usr/local/bin/gimme 1.8)"
export GOPATH=\$HOME/go
export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
fi
gophercloudtest() {
if [[ -n \$1 ]] && [[ -n \$2 ]]; then
pushd ~/go/src/github.com/gophercloud/gophercloud
go test -v -tags "fixtures acceptance" -run "\$1" github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
popd
fi
}
EOF
systemctl stop openstack-cinder-backup.service
systemctl stop openstack-cinder-scheduler.service
systemctl stop openstack-cinder-volume.service
systemctl stop openstack-nova-cert.service
systemctl stop openstack-nova-compute.service
systemctl stop openstack-nova-conductor.service
systemctl stop openstack-nova-consoleauth.service
systemctl stop openstack-nova-novncproxy.service
systemctl stop openstack-nova-scheduler.service
systemctl stop neutron-dhcp-agent.service
systemctl stop neutron-l3-agent.service
systemctl stop neutron-lbaasv2-agent.service
systemctl stop neutron-metadata-agent.service
systemctl stop neutron-openvswitch-agent.service
systemctl stop neutron-metering-agent.service
mysql -e "update services set deleted_at=now(), deleted=id" cinder
mysql -e "update services set deleted_at=now(), deleted=id" nova
mysql -e "update compute_nodes set deleted_at=now(), deleted=id" nova
for i in $(openstack network agent list -c ID -f value); do
neutron agent-delete $i
done
systemctl stop httpd
+ cp /home/centos/files/rc.local /etc
+ chmod +x /etc/rc.local
Next, create a file called rc.local inside the packer/files directory:
#!/bin/bash
set -x
export HOME=/root
sleep 60
systemctl restart rabbitmq-server
while [[ true ]]; do
pgrep -f rabbit
if [[ $? == 0 ]]; then
break
fi
sleep 10
systemctl restart rabbitmq-server
done
nova-manage cell_v2 discover_hosts
The above is pretty simple: it restarts RabbitMQ and runs nova-manage so the virtual machine re-discovers itself as a compute node.
Why restart RabbitMQ? I have no idea. I've found it needs to be done for OpenStack to work correctly.
I also mentioned I'll show how to access the OpenStack services from outside the virtual machine, so you don't have to log in to the virtual machine to run tests.
To do that, add the following to rc.local:
#!/bin/bash
set -x
export HOME=/root
sleep 60
+ public_ip=$(curl http://169.254.169.254/latest/meta-data/public-ipv4/)
+ if [[ -n $public_ip ]]; then
+ while true ; do
+ mysql -e "update endpoint set url = replace(url, '127.0.0.1', '$public_ip')" keystone
+ if [[ $? == 0 ]]; then
+ break
+ fi
+ sleep 10
+ done
+ sed -i -e "s/127.0.0.1/$public_ip/g" /root/keystonerc_demo
+ sed -i -e "s/127.0.0.1/$public_ip/g" /root/keystonerc_admin
+ fi
systemctl restart rabbitmq-server
while [[ true ]]; do
pgrep -f rabbit
if [[ $? == 0 ]]; then
break
fi
sleep 10
systemctl restart rabbitmq-server
done
+ systemctl restart openstack-cinder-api.service
+ systemctl restart openstack-cinder-backup.service
+ systemctl restart openstack-cinder-scheduler.service
+ systemctl restart openstack-cinder-volume.service
+ systemctl restart openstack-nova-cert.service
+ systemctl restart openstack-nova-compute.service
+ systemctl restart openstack-nova-conductor.service
+ systemctl restart openstack-nova-consoleauth.service
+ systemctl restart openstack-nova-novncproxy.service
+ systemctl restart openstack-nova-scheduler.service
+ systemctl restart neutron-dhcp-agent.service
+ systemctl restart neutron-l3-agent.service
+ systemctl restart neutron-lbaasv2-agent.service
+ systemctl restart neutron-metadata-agent.service
+ systemctl restart neutron-openvswitch-agent.service
+ systemctl restart neutron-metering-agent.service
+ systemctl restart httpd
nova-manage cell_v2 discover_hosts
+ iptables -I INPUT -p tcp --dport 80 -j ACCEPT
+ ip6tables -I INPUT -p tcp --dport 80 -j ACCEPT
+ cp /root/keystonerc* /var/www/html
+ chmod 666 /var/www/html/keystonerc*
Three steps have been added above:
The first queries the metadata service (the same source cloud-init uses) to discover the virtual machine's public IP. Once the public IP is known, the endpoint table in the keystone database is updated with it. By default, PackStack sets the endpoints of the Keystone catalog to 127.0.0.1, which prevents interacting with OpenStack from outside the virtual machine. Changing the endpoints to the public IP resolves this issue.
The keystonerc_demo and keystonerc_admin files are also updated with the public IP.
Why not just set the public IP in the PackStack answer file? Because the public IP will not be known until the virtual machine launches, which is after PackStack has run. And that's why 127.0.0.1 was used earlier: it's an easy placeholder to search and replace, and it will still produce a working OpenStack environment even if it isn't replaced.
The second step restarts all OpenStack services so they're aware of the new endpoints.
The third step copies the keystonerc_demo and keystonerc_admin files to /var/www/html/. This way, you can wget the files from http://public-ip/keystonerc_demo and http://public-ip/keystonerc_admin and save them to your workstation. You can then source them and begin interacting with OpenStack remotely.
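Once a virtual machine has been launched from the finished image, that interaction looks something like this from your workstation (assuming the openstack CLI is installed locally and using whatever floating IP the new virtual machine received):
$ wget http://<public-ip>/keystonerc_demo
$ source keystonerc_demo
$ openstack flavor list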
Now, with all of that in place, re-run Packer:
$ pwd
/home/jtopjian/terraform-openstack-test/packer/openstack
$ packer build \
-var 'flavor=m1.large' \
-var 'secgroup=AllowAll' \
-var 'image_id=9abadd38-a33d-44c2-8356-b8b8ae184e04' \
-var 'pool=public' \
-var 'network_id=b0b12e8f-a695-480e-9dc2-3dc8ac2d55fd' \
build.json
Using the Image
When the build is complete, you will have a new image called packstack-ocata that you can use to create virtual machines.
As an example, you can use Terraform to launch the image:
variable "key_name" {}
variable "network_id" {}
variable "pool" {
default = "public"
}
variable "flavor" {
default = "m1.xlarge"
}
data "openstack_images_image_v2" "packstack" {
name = "packstack-ocata"
most_recent = true
}
resource "random_id" "security_group_name" {
prefix = "openstack_test_instance_allow_all_"
byte_length = 8
}
resource "openstack_networking_floatingip_v2" "openstack_acc_tests" {
pool = "${var.pool}"
}
resource "openstack_networking_secgroup_v2" "openstack_acc_tests" {
name = "${random_id.security_group_name.hex}"
description = "Rules for openstack acceptance tests"
}
resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_1" {
security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 1
port_range_max = 65535
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_2" {
security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
direction = "ingress"
ethertype = "IPv6"
protocol = "tcp"
port_range_min = 1
port_range_max = 65535
remote_ip_prefix = "::/0"
}
resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_3" {
security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
direction = "ingress"
ethertype = "IPv4"
protocol = "udp"
port_range_min = 1
port_range_max = 65535
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_4" {
security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
direction = "ingress"
ethertype = "IPv6"
protocol = "udp"
port_range_min = 1
port_range_max = 65535
remote_ip_prefix = "::/0"
}
resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_5" {
security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
direction = "ingress"
ethertype = "IPv4"
protocol = "icmp"
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "openstack_acc_tests_rule_6" {
security_group_id = "${openstack_networking_secgroup_v2.openstack_acc_tests.id}"
direction = "ingress"
ethertype = "IPv6"
protocol = "icmp"
remote_ip_prefix = "::/0"
}
resource "openstack_compute_instance_v2" "openstack_acc_tests" {
name = "openstack_acc_tests"
image_id = "${data.openstack_images_image_v2.packstack.id}"
flavor_name = "${var.flavor}"
key_pair = "${var.key_name}"
security_groups = ["${openstack_networking_secgroup_v2.openstack_acc_tests.name}"]
network {
uuid = "${var.network_id}"
}
}
resource "openstack_compute_floatingip_associate_v2" "openstack_acc_tests" {
instance_id = "${openstack_compute_instance_v2.openstack_acc_tests.id}"
floating_ip = "${openstack_networking_floatingip_v2.openstack_acc_tests.address}"
}
resource "null_resource" "rc_files" {
provisioner "local-exec" {
command = <<EOF
while true ; do
wget http://${openstack_compute_floatingip_associate_v2.openstack_acc_tests.floating_ip}/keystonerc_demo 2> /dev/null
if [ $? = 0 ]; then
break
fi
sleep 20
done
wget http://${openstack_compute_floatingip_associate_v2.openstack_acc_tests.floating_ip}/keystonerc_admin
EOF
}
}
The above Terraform configuration will do the following:
- Search for the latest image titled "packstack-ocata".
- Create a floating IP.
- Create a security group with a unique name and six rules to allow all TCP, UDP, and ICMP traffic.
- Launch an instance using the "packstack-ocata" image.
- Associate the floating IP to the instance.
- Poll the instance every 20 seconds to see if http://publicip/keystonerc_demo is available. When it is available, download it, along with keystonerc_admin.
To run this Terraform configuration, do:
$ terraform apply \
-var "key_name=<keypair name>" \
-var "network_id=<network uuid>" \
-var "pool=<pool name>" \
-var "flavor=<flavor name>"
Conclusion
This blog post detailed how to create a reusable image with OpenStack Ocata already installed. This allows you to create a standard testing environment in a fraction of the time that it takes to build the environment from scratch.