How we deployed OpenStack with OpenStack-Ansible 

OpenStack-Ansible allows you to deploy a production-grade OpenStack cloud on LXC containers. OSAD (the OpenStack-Ansible Deployment project) enables you to roll out hassle-free OpenStack updates, and that is only one of its many uses. It pulls code directly from git source rather than packages from distributions. For now, though, we will focus only on the OpenStack deployment.

The hardware we used is as follows:

  • osad  (4 cores, 4GB RAM, 500GB HDD, 2 NICs)
  • osad1 (8 cores, 64GB RAM, two 500GB HDDs, 4 NICs)
  • osad2 (8 cores, 32GB RAM, two 500GB HDDs, 4 NICs)
  • a VLAN-enabled switch

OpenStack Neutron

[Figure: Basic Neutron Deployment]
Neutron is a stand-alone OpenStack project that provides network connectivity for the compute resources created by Nova.

Neutron comprises multiple services and agents running on multiple nodes. Let us walk through the services in the basic Neutron deployment above; a short client-side sketch follows the list.

  1. neutron-server provides an API layer that acts as a single point of access for managing the other Neutron services.
  2. The L2 agent runs on the compute and network nodes. It creates the various types of networks (local, flat, VLAN, VXLAN, GRE), provides isolation between tenant networks, and takes care of wiring up the VM instances. The L2 agent can use Linux bridge, Open vSwitch, or another vendor technology to perform these tasks.
  3. The L3 agent runs on the network node and allows users to create routers that connect Layer-2 networks. Behind the scenes it uses Linux iptables to perform Layer-3 forwarding and NAT. It is possible to create multiple routers with overlapping IP ranges thanks to network namespaces: each router gets its own namespace, named after its UUID.
  4. The DHCP agent runs on the network node and allocates IP addresses to instances. It uses one dnsmasq instance per network.
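
To make this division of labor concrete, here is a minimal client-side sketch using python-neutronclient. The credentials, auth URL, and names below are placeholders of our own, not values from this deployment. Listing the agents shows the services above, and creating a router is what triggers the L3 agent to build its qrouter-<uuid> namespace:

# Minimal sketch using python-neutronclient (Mitaka-era keystone v2 auth).
# All credentials and URLs below are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# neutron-server answers the API calls; the agents do the actual wiring.
for agent in neutron.list_agents()['agents']:
    print(agent['agent_type'], agent['host'], agent['alive'])

# Creating a router makes the L3 agent build a qrouter-<uuid>
# network namespace on the network node.
router = neutron.create_router({'router': {'name': 'demo-router'}})
print(router['router']['id'])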

Neutron Plugins

Neutron exposes a logical API that defines the network connectivity between the devices created by OpenStack Nova. Under the hood, every CRUD operation on an attribute managed by the Neutron API is handled by a Neutron plugin.

As of the Mitaka release, the core Neutron API manages three kinds of entities:

  1. Network, representing an isolated virtual Layer-2 domain; a network can also be regarded as a virtual (or logical) switch;

  2. Subnet, representing an IPv4 or IPv6 address block from which IPs to be assigned to VMs on a given network are selected;

  3. Port, representing a virtual (or logical) switch port on a given network.

All three entities support the basic CRUD operations with POST/GET/PUT/DELETE verbs, and each has an auto-generated unique identifier.
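
As an illustration of this CRUD lifecycle, the sketch below drives the three entities through python-neutronclient; the names and CIDR are placeholders, and neutron is a client constructed as in the earlier sketch:

# CRUD lifecycle of the three core entities (sketch; names and CIDR are
# placeholders; 'neutron' is the client built in the earlier example).

# Create: POST on /v2.0/networks, /v2.0/subnets, /v2.0/ports
net = neutron.create_network({'network': {'name': 'demo-net'}})['network']
subnet = neutron.create_subnet({'subnet': {'network_id': net['id'],
                                           'ip_version': 4,
                                           'cidr': '10.0.0.0/24'}})['subnet']
port = neutron.create_port({'port': {'network_id': net['id']}})['port']

# Read: GET -- every entity carries an auto-generated UUID
print(neutron.show_network(net['id'])['network']['id'])

# Update: PUT
neutron.update_network(net['id'], {'network': {'name': 'demo-net-renamed'}})

# Delete: DELETE (children first, then the network)
neutron.delete_port(port['id'])
neutron.delete_subnet(subnet['id'])
neutron.delete_network(net['id'])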

The Modular Layer 2 (ML2) plugin is a Python module that implements the neutron.neutron_plugin_base_v2.NeutronPluginBaseV2 base class, which defines the minimum set of methods a plugin needs to provide, for example:

  1. create_network(context, network)

def create_network(self, context, network):
    result, mech_context = self._create_network_db(context, network)
    kwargs = {'context': context, 'network': result}
    registry.notify(resources.NETWORK, events.AFTER_CREATE, self, **kwargs)
    try:
        self.mechanism_manager.create_network_postcommit(mech_context)
    except ml2_exc.MechanismDriverError:
        with excutils.save_and_reraise_exception():
            LOG.error(_LE("mechanism_manager.create_network_postcommit "
                          "failed, deleting network '%s'"), result['id'])
            self.delete_network(context, result['id'])
    return result

Let’s spend some time understanding the above code. Its goal is to create a network, which represents an L2 network segment that can have a set of subnets and ports associated with it.

Parameters:

  • context – the Neutron API request context
  • network – a dictionary describing the network, with keys as listed in the RESOURCE_ATTRIBUTE_MAP object in neutron/api/v2/attributes.py. All keys will be populated.
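
The create_network_postcommit call above is the hook where ML2 hands control to its mechanism drivers. The following is a minimal, illustrative driver of our own, a sketch assuming the Mitaka-era neutron.plugins.ml2.driver_api module path; the class name and log message are not part of Neutron:

# Illustrative ML2 mechanism driver (assumes Mitaka-era module paths).
from oslo_log import log
from neutron.plugins.ml2 import driver_api as api

LOG = log.getLogger(__name__)


class LoggingMechanismDriver(api.MechanismDriver):
    """Toy driver: logs network creation instead of programming a backend."""

    def initialize(self):
        # Called once when ML2 loads the driver; set up backend clients here.
        pass

    def create_network_precommit(self, context):
        # Runs inside the DB transaction; raising here aborts the create.
        pass

    def create_network_postcommit(self, context):
        # Runs after the DB commit; context.current is the network dict.
        # An exception here surfaces as MechanismDriverError, which makes
        # create_network() in the plugin delete the network again.
        LOG.info("network %s created", context.current['id'])

A driver like this would be registered under the neutron.ml2.mechanism_drivers entry point so that the mechanism manager can load it.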

Containerizing Virtual Machines

Recently Google joined hands with Mirantis and Intel to distribute OpenStack components in docker containers managed with Kubernetes. In that deployment scenario, each and every component of OpenStack, such as Nova, Neutron, Keystone, etc., runs in a docker container and is deployed and managed through Kubernetes. I wondered: if the Nova service is running in a container, how is it going to spawn a VM instance? Another doubt flashed in my mind: is it even possible to run a virtual machine inside a docker container? The answer is yes, but with some prerequisites installed and tweaks done on the docker host. In this post I will show you how to run a VM using KVM in a docker container.

Docker containers don't have a kernel of their own; they use the host's kernel, so it's not possible to load the KVM kernel module from inside a container. Instead, we will pass the host's /dev/kvm and /dev/net/tun devices through to the container.

Make sure you have installed docker and KVM on the host. The KVM installation can be tested with:

$ kvm-ok

INFO: /dev/kvm exists
KVM acceleration can be used

Run an Ubuntu KVM image with the following command:

$ docker run -e "RANCHER_VM=true" --cap-add NET_ADMIN -v \
/var/lib/rancher/vm:/vm --device /dev/kvm:/dev/kvm \
--device /dev/net/tun:/dev/net/tun rancher/vm-ubuntu -m 1024m -smp 1

The Ubuntu VM spawned inside the container takes on the IP address of the container. First, find the IP address of the docker container with the following command:

$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id>
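
If you prefer to script the whole thing, here is a sketch using the Docker SDK for Python (pip install docker); the image, devices, and mount mirror the docker run command above:

# Sketch using the Docker SDK for Python; mirrors the docker run above.
import docker

client = docker.from_env()

container = client.containers.run(
    'rancher/vm-ubuntu',
    command=['-m', '1024m', '-smp', '1'],
    environment={'RANCHER_VM': 'true'},
    cap_add=['NET_ADMIN'],
    devices=['/dev/kvm:/dev/kvm:rwm', '/dev/net/tun:/dev/net/tun:rwm'],
    volumes={'/var/lib/rancher/vm': {'bind': '/vm', 'mode': 'rw'}},
    detach=True)

# The VM takes over the container's IP address.
container.reload()
print(container.attrs['NetworkSettings']['IPAddress'])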

SSH into the VM you created above:

$ ssh ubuntu@<container-ip>

password: ubuntu
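
The same login can be scripted with paramiko if you want to automate checks against the VM. This is a sketch: the IP below is a placeholder for the address obtained from docker inspect, and ubuntu/ubuntu are the image defaults mentioned above.

# Scripted SSH login with paramiko (sketch; IP is a placeholder).
import paramiko

CONTAINER_IP = '172.17.0.2'  # placeholder: use the address from docker inspect

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(CONTAINER_IP, username='ubuntu', password='ubuntu')

stdin, stdout, stderr = ssh.exec_command('uname -a')
print(stdout.read().decode())
ssh.close()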