Deploying OpenStack with MAAS and Ansible

I’m going to describe how I set up an OpenStack test environment using MAAS and Ansible on two servers connected by a dumb switch.

I was inspired by, and took most of the implementation from, this post. The hardware I used is as follows.

  • Lenovo ThinkPad (2 cores, 8 GB RAM, 500 GB HDD, 2 NICs)
  • Xeon workstation (4 cores, 4 GB RAM, 500 GB HDD, 2 NICs)
  • one dumb switch

The environment more or less looks like this.

With this environment, to install OpenStack using the Ansible Playbooks, I essentially do the following steps:

  1. PXE-boot Ubuntu 16.04 on the VMs on both the ThinkPad and the Xeon server using MAAS.
  2. Configure the networking
  3. Configure OSAD deployment by grabbing the pre-built configuration files
  4. Run the OSAD Playbooks

Let’s prepare MAAS. I used my ThinkPad as the MAAS server. It has two NICs: one is connected to my home router and the other to the dumb switch through a Linux bridge ‘br0’. I enabled the 8021q module on both the ThinkPad and the Xeon server, as shown below.
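Enabling 802.1Q support is a one-liner plus making it persistent across reboots; a minimal sketch, assuming stock Ubuntu 16.04 package names:

$ sudo apt install vlan bridge-utils        # VLAN and bridge userspace tools
$ sudo modprobe 8021q                       # load the VLAN tagging module now
$ echo "8021q" | sudo tee -a /etc/modules   # and load it on every boot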

Below is my /etc/network/interfaces file of my thinkpad:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto wlp3s0
iface wlp3s0 inet dhcp
 wpa-ssid 
 wpa-psk 

auto enp0s25
iface enp0s25 inet manual

auto enp0s25.10
iface enp0s25.10 inet manual
vlan-raw-device enp0s25

auto enp0s25.20
iface enp0s25.20 inet manual
vlan-raw-device enp0s25

auto enp0s25.30
iface enp0s25.30 inet manual
vlan-raw-device enp0s25

auto enp0s25.99
iface enp0s25.99 inet manual
vlan-raw-device enp0s25

## MAAS provisioning bridge.
auto br0
iface br0 inet static
address 10.14.0.1
netmask 255.255.255.0
dns-nameservers 10.14.0.1
dns-search maas
post-up iptables -t nat -A POSTROUTING -o wlp3s0 -j SNAT --to-source 192.168.99.58
post-down iptables -t nat -D POSTROUTING -o wlp3s0 -j SNAT --to-source 192.168.99.58
bridge_ports enp0s25
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

##Container/Host Management bridge.
auto br-mgmt
iface br-mgmt inet static
address 172.29.236.10
netmask 255.255.252.0
dns-nameservers 8.8.8.8 8.8.4.4
bridge_ports enp0s25.10
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

## VLAN bridge.
auto br-vlan
iface br-vlan inet manual
bridge_ports enp0s25.20
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

##VXLAN bridge.
auto br-vxlan
iface br-vxlan inet manual
bridge_ports enp0s25.30
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

## Storage bridge.
auto br-storage
iface br-storage inet manual
bridge_ports enp0s25.99
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

source /etc/network/interfaces.d/*.cfg

Below is my /etc/network/interfaces file of my xeon server:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

auto enp3
iface enp3 inet dhcp

auto enp4
iface enp4 inet manual

auto enp4.10
iface enp4.10 inet manual
vlan-raw-device enp4

auto enp4.20
iface enp4.20 inet manual
vlan-raw-device enp4

auto enp4.30
iface enp4.30 inet manual
vlan-raw-device enp4

auto enp4.99
iface enp4.99 inet manual
vlan-raw-device enp4

## MAAS provisioning bridge.

auto br0
iface br0 inet static
address 10.14.0.2
netmask 255.255.255.0
dns-nameservers 10.14.0.1
dns-search maas
post-up iptables -t nat -A POSTROUTING -o enp3 -j SNAT --to-source 192.168.10.175
post-down iptables -t nat -D POSTROUTING -o enp3 -j SNAT --to-source 192.168.10.175
bridge_ports enp4
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

##Container/Host Management bridge.

auto br-mgmt
iface br-mgmt inet static
address 172.29.236.11
netmask 255.255.252.0
dns-nameservers 8.8.8.8  8.8.4.4
bridge_ports enp4.10
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

## VLAN bridge.

auto br-vlan
iface br-vlan inet manual
bridge_ports enp4.20
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

##VXLAN bridge.

auto br-vxlan
iface br-vxlan inet manual
bridge_ports enp4.30
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

## STORAGE bridge

auto br-storage
iface br-storage inet manual
bridge_ports enp4.99
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

source /etc/network/interfaces.d/*.cfg
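After editing the files on both machines, apply the new configuration (a sketch assuming classic ifupdown on Ubuntu 16.04; a reboot works just as well):

$ sudo systemctl restart networking   # careful if you are logged in over one of these interfaces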

Then I installed MAAS on my thinkpad.

$ sudo apt install maas
$ sudo dpkg-reconfigure maas-region-controller   # enter 10.14.0.1 at the prompt
$ sudo dpkg-reconfigure maas-rack-controller     # enter 10.14.0.1 at the prompt
$ sudo maas-region createsuperuser               # I created a user named root here

Go to the MAAS dashboard at http://10.14.0.1/MAAS/ and enable DHCP on the 10.14.0.0/24 network (a CLI alternative is sketched below). Now I’m ready to PXE-boot VMs on the 10.14.0.0/24 network. I will create VMs using KVM on both the ThinkPad and the Xeon server and provision them using MAAS.
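If you prefer the CLI over the dashboard, the same can be done roughly as follows. This is only a sketch: the profile name, fabric ID, rack controller system ID and the dynamic range are placeholders you need to look up in your own installation.

$ APIKEY=$(sudo maas-region apikey --username=root)
$ maas login admin http://10.14.0.1/MAAS/api/2.0 $APIKEY
$ maas admin ipranges create type=dynamic start_ip=10.14.0.50 end_ip=10.14.0.200
$ maas admin vlan update <fabric-id> 0 dhcp_on=True primary_rack=<rack-system-id>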

First, install everything required to run VMs on both of them.

$ sudo apt install libvirt-bin qemu-kvm cpu-checker virtinst
$ virsh net-destroy default
$ virsh net-define --file maas-provisioning.xml
$ virsh net-start maas-provisioning
$ virsh net-autostart maas-provisioning

maas-provisioning.xml

 <network>
 <name>maas-provisioning</name>
 <forward mode='bridge'/>
 <bridge name='br0'/>
</network>

Repeat the above procedure for the following networks.

maas-provisioning.xml -> maas-provisioning -> br0
openstack-mgmt.xml -> openstack-mgmt -> br-mgmt
openstack-vlan.xml -> openstack-vlan -> br-vlan
openstack-vxlan.xml -> openstack-vxlan -> br-vxlan
openstack-storage.xml -> openstack-storage -> br-storage
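For example, the openstack-mgmt network can be created the same way (the other three differ only in the network and bridge names):

$ cat > openstack-mgmt.xml <<'EOF'
<network>
 <name>openstack-mgmt</name>
 <forward mode='bridge'/>
 <bridge name='br-mgmt'/>
</network>
EOF
$ virsh net-define --file openstack-mgmt.xml
$ virsh net-start openstack-mgmt
$ virsh net-autostart openstack-mgmt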

Install VMs

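The exact virt-install invocation is not reproduced here; as a sketch (the VM name, disk and RAM sizes, and OS variant are my own example values), a VM that PXE-boots from the MAAS provisioning network and has one NIC on each of the libvirt networks defined above can be created like this:

$ virt-install --name infra1 --ram 4096 --vcpus 2 \
    --disk size=60 --boot network,hd \
    --os-variant ubuntu16.04 --graphics vnc --noautoconsole \
    --network network=maas-provisioning,model=virtio \
    --network network=openstack-mgmt,model=virtio \
    --network network=openstack-vlan,model=virtio \
    --network network=openstack-vxlan,model=virtio \
    --network network=openstack-storage,model=virtio

MAAS will then see the new machine PXE-boot, commission it, and make it available for deployment.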

OpenStack Neutron

[Figure: Basic Neutron deployment]
Neutron is a stand-alone OpenStack project that provides network connectivity for the compute resources created by Nova.

Neutron comprises multiple services and agents running across multiple nodes. Let’s look at the services in the basic Neutron deployment shown above.

  1. neutron-server provides an API layer that acts as a single point of access for managing the other Neutron services.
  2. The L2 agent runs on the compute and network nodes. It creates the various network types (local, flat, VLAN, VXLAN, GRE), provides isolation between tenant networks, and takes care of wiring up the VM instances. The L2 agent can use Linux bridge, Open vSwitch, or another vendor technology to perform these tasks.
  3. The L3 agent runs on the network node and lets users create routers that connect Layer 2 networks. Behind the scenes it uses Linux iptables to perform Layer 3 forwarding and NAT. Multiple routers with overlapping IP ranges are possible thanks to network namespaces: each router gets its own namespace, named after the router’s UUID (see the example after this list).
  4. The DHCP agent runs on the network node and allocates IP addresses to instances, using one dnsmasq instance per network.
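As a quick illustration (the UUIDs below are placeholders), the namespaces created by the L3 and DHCP agents can be inspected on the network node:

$ ip netns list
qrouter-9cd4dbd5-2b5e-4d0e-a9cd-1f2a3b4c5d6e
qdhcp-7a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d
$ ip netns exec qrouter-9cd4dbd5-2b5e-4d0e-a9cd-1f2a3b4c5d6e ip addr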

Neutron Plugins

Neutron exposes a logical API that defines the network connectivity between the devices created by OpenStack Nova. Under the hood, all CRUD operations on an attribute managed by the Neutron API are handled by a Neutron plugin.

As of the Mitaka release, the core Neutron API manages three kinds of entities:

  1. Network, representing an isolated virtual Layer 2 domain; a network can also be regarded as a virtual (or logical) switch.
  2. Subnet, representing an IPv4 or IPv6 address block from which IPs to be assigned to VMs on a given network are selected.
  3. Port, representing a virtual (or logical) switch port on a given network.

All entities, discussed in detail in the rest of this post, support the basic CRUD operations with the POST/GET/PUT/DELETE verbs and have an auto-generated unique identifier.
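A hedged sketch of these verbs against the networks resource (the controller endpoint and the token are placeholders for your own deployment):

$ TOKEN=<keystone-token>
$ curl -s -X POST http://controller:9696/v2.0/networks \
    -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"network": {"name": "demo-net", "admin_state_up": true}}'
$ curl -s -X GET http://controller:9696/v2.0/networks -H "X-Auth-Token: $TOKEN"
$ curl -s -X PUT http://controller:9696/v2.0/networks/<network-id> \
    -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"network": {"name": "demo-net-renamed"}}'
$ curl -s -X DELETE http://controller:9696/v2.0/networks/<network-id> \
    -H "X-Auth-Token: $TOKEN"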

The Modular Layer 2 (ML2) plugin is a Python module that implements the neutron.neutron_plugin_base_v2.NeutronPluginBaseV2 base class, which defines the minimum set of methods a plugin needs to implement.

One of these methods is create_network(context, network); here is the ML2 implementation:

def create_network(self, context, network):
    result, mech_context = self._create_network_db(context, network)
    kwargs = {'context': context, 'network': result}
    registry.notify(resources.NETWORK, events.AFTER_CREATE, self, **kwargs)
    try:
        self.mechanism_manager.create_network_postcommit(mech_context)
    except ml2_exc.MechanismDriverError:
        with excutils.save_and_reraise_exception():
            LOG.error(_LE("mechanism_manager.create_network_postcommit "
                          "failed, deleting network '%s'"), result['id'])
            self.delete_network(context, result['id'])
    return result

Let’s spend some time understanding the code above. The goal is to create a network, which represents an L2 network segment that can have a set of subnets and ports associated with it.

Parameters:

context – the Neutron API request context.

network – a dictionary describing the network, with keys as listed in the RESOURCE_ATTRIBUTE_MAP object in neutron/api/v2/attributes.py. All keys will be populated.
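To exercise this code path end to end (assuming a Mitaka-era python-neutronclient; the network name is just an example), creating a network from the CLI issues a POST to /v2.0/networks that eventually lands in this create_network() method:

$ neutron net-create demo-net
$ neutron net-show demo-net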

Containerizing Virtual Machines

Recently Google joined hands with Mirantis and Intel to distribute OpenStack components in Docker containers managed with Kubernetes. In that deployment scenario, every OpenStack component, such as Nova, Neutron, and Keystone, runs in a Docker container and is deployed and managed through Kubernetes. I wondered: if the Nova service runs in a container, how is it going to spawn a VM instance? That raised another question: is it even possible to run a virtual machine inside a Docker container? The answer is yes, with some prerequisites installed and a few tweaks on the Docker host. In this post I will show you how to run a VM using KVM inside a Docker container.

Docker containers don’t have a kernel of their own; they use the host’s kernel, so it’s not possible to load the KVM kernel module from inside a container. Instead, we add the /dev/kvm and /dev/net/tun devices to the container.
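A quick sanity check on the host (before starting the container) that the KVM module is loaded and the devices we are about to pass through exist:

$ lsmod | grep kvm
$ ls -l /dev/kvm /dev/net/tun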

Make sure Docker and KVM are installed on the host. The KVM installation can be verified with:

$ kvm-ok

INFO: /dev/kvm exists
KVM acceleration can be used

Run an Ubuntu KVM image with the following command:

$ docker run -e "RANCHER_VM=true" --cap-add NET_ADMIN -v \
  /var/lib/rancher/vm:/vm --device /dev/kvm:/dev/kvm \
  --device /dev/net/tun:/dev/net/tun rancher/vm-ubuntu -m 1024m -smp 1

The Ubuntu VM spawned inside the container gets the IP address of the container. First, find the IP address of the Docker container:

$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container-id>

SSH into the VM you created above:

$ ssh ubuntu@<vm-ip>

password: ubuntu