OpenStack-Ansible Tutorial

This blog post covers how to use OpenStack-Ansible to deploy OpenStack. OpenStack-Ansible lets you deploy a production-grade OpenStack cloud on LXC containers. OSAD (the OpenStack-Ansible Deployment project) also enables hassle-free OpenStack updates, since it pulls code directly from the git source. For now, however, we will focus only on the OpenStack deployment.

The hardware I used is as follows.

  • 2 servers as infrastructure nodes (4 cores, 4 GB RAM, 500 GB HDD, 2 NICs)
  • 2 servers as Swift storage nodes (12 cores, 64 GB RAM, 24 TB HDD, 2 NICs)
  • 4 servers as compute nodes (8 cores, 20 GB RAM, 500 GB HDD, 2 NICs)
  • 1 server as the deployment node
  • a VLAN-enabled switch
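
One prerequisite worth calling out: OpenStack-Ansible expects the bridges referenced later in the configuration (br-mgmt, br-vxlan, br-vlan) to already exist on every target host. A minimal sketch using iproute2, assuming hypothetical NIC names eno1/eno2 and host addresses from the ranges used below; in practice you would make this persistent in /etc/network/interfaces rather than run it by hand:

```shell
# Management bridge on the first NIC (eno1 is a hypothetical name;
# substitute your own interface and host address).
ip link add br-mgmt type bridge
ip link set eno1 master br-mgmt
ip addr add 10.0.4.10/22 dev br-mgmt
ip link set br-mgmt up

# Bridge carrying VXLAN tunnel traffic.
ip link add br-vxlan type bridge
ip addr add 10.0.1.10/22 dev br-vxlan
ip link set br-vxlan up

# Bridge for VLAN provider networks, attached to the second NIC.
ip link add br-vlan type bridge
ip link set eno2 master br-vlan
ip link set br-vlan up
```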

Steps to follow

  1. Clone the OSAD git repository and run the bootstrap script, which installs all required Ansible roles.
# git clone -b 15.1.7 https://git.openstack.org/openstack/openstack-ansible \
  /opt/openstack-ansible

Change to the /opt/openstack-ansible directory:

# scripts/bootstrap-ansible.sh
  2. Copy the configuration files to /etc/:

# cp -R etc/openstack_deploy /etc/

  3. Create a passwords file:
# cd /opt/openstack-ansible/scripts
# python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
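
The pw-token-gen.py script walks user_secrets.yml and fills in every empty key with a randomly generated secret. Conceptually it does something like the following (a very simplified sketch, not the actual script):

```python
import secrets

def fill_secrets(lines):
    """Fill bare 'key:' lines with a random hex value, mimicking
    (in very simplified form) what pw-token-gen.py does: keys that
    already have a value are left untouched."""
    out = []
    for line in lines:
        stripped = line.rstrip()
        if stripped.endswith(":") and not stripped.startswith("#"):
            out.append("{} {}".format(stripped, secrets.token_hex(16)))
        else:
            out.append(stripped)
    return out

filled = fill_secrets(["keystone_auth_admin_password:",
                       "glance_service_password: kept"])
print(filled[0])  # keystone_auth_admin_password: <32 hex chars>
```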
  4. Write /etc/openstack_deploy/openstack_user_config.yml according to your needs. Here I will be installing HA OpenStack with two of each infrastructure component, Swift storage, and Neutron with the Open vSwitch DVR driver.


cidr_networks:
  container: 10.0.4.0/22
  tunnel: 10.0.1.0/22

used_ips:
  - 10.0.4.1,10.0.5.20
  - 10.50.0.1,10.50.0.20
  - 10.0.1.1,10.0.1.20

global_overrides:
  internal_lb_vip_address: <internal_vip>
  external_lb_vip_address: <external_vip>
  mgmt_bridge: "br-mgmt"
  tunnel_bridge: "br-vxlan"

  provider_networks:
    - network:
        group_binds:
          - all_containers
          - hosts
        type: "raw"
        container_bridge: "br-mgmt"
        container_interface: "eth1"
        container_type: "veth"
        ip_from_q: "container"
        is_container_address: true
        is_ssh_address: true

    - network:
        group_binds:
          - neutron_openvswitch_agent
        container_bridge: "br-vlan"
        container_interface: "eth12"
        container_type: "veth"
        type: "vlan"
        range: "10:1000"
        net_name: "physnet"

    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_openvswitch_agent

    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - cinder_api
          - cinder_volume
          - nova_compute

    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth3"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - glance_api
          - swift_proxy
          - nova_compute

  swift:
    part_power: 9
    repl_number: 2
    storage_network: 'br-mgmt'
    drives:
      - name: sda
      - name: sdb
      - name: sdc
      - name: sdd
    mount_point: /srv/node
    storage_policies:
      - policy:
          name: default
          index: 0
          default: True
          repl_number: 2
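
A note on the swift stanza above: part_power sizes the ring. The ring contains 2**part_power partitions, and each partition is stored repl_number times, so these two values together determine how objects spread across the four drives per node. The arithmetic:

```python
# Values from the swift stanza above.
part_power = 9
repl_number = 2

# The ring holds 2**part_power partitions...
partitions = 2 ** part_power
# ...and each partition is stored repl_number times.
replicated_partitions = partitions * repl_number

print(partitions, replicated_partitions)  # 512 1024
```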

swift-proxy_hosts:
  node3:
    ip: 10.0.5.3
    container_vars:
      swift_proxy_vars:
        read_affinity: "r1=100"
        write_affinity: "r1"
        write_affinity_node_count: "2 * replicas"
  node4:
    ip: 10.0.5.4
    container_vars:
      swift_proxy_vars:
        read_affinity: "r1=100"
        write_affinity: "r1"
        write_affinity_node_count: "2 * replicas"

swift_hosts:
  node6:
    ip: 10.0.5.6
    container_vars:
      swift_vars:
        storage_ip: 10.0.5.6
        repl_ip: 10.55.0.6
        limit_container_types: swift
        zone: 0
        region: 1
  node7:
    ip: 10.0.5.7
    container_vars:
      swift_vars:
        storage_ip: 10.0.5.7
        repl_ip: 10.55.0.7
        limit_container_types: swift
        zone: 0
        region: 1

 

shared-infra_hosts:
  node3:
    ip: 10.0.5.3
  node4:
    ip: 10.0.5.4

repo-infra_hosts:
  node3:
    ip: 10.0.5.3

os-infra_hosts:
  node3:
    affinity:
      heat_apis_container: 0
      heat_engine_container: 0
    ip: 10.0.5.3
  node4:
    affinity:
      heat_apis_container: 0
      heat_engine_container: 0
    ip: 10.0.5.4

identity_hosts:
  node3:
    ip: 10.0.5.3
  node4:
    ip: 10.0.5.4

network_hosts:
  node6:
    ip: 10.0.5.6
  node7:
    ip: 10.0.5.7

compute_hosts:
  node6:
    ip: 10.0.5.6
    host_vars:
      nova_virt_type: kvm
  node7:
    ip: 10.0.5.7
    host_vars:
      nova_virt_type: kvm
  node11:
    ip: 10.0.5.11
    host_vars:
      nova_virt_type: kvm
  node12:
    ip: 10.0.5.12
    host_vars:
      nova_virt_type: kvm
  node13:
    ip: 10.0.5.13
    host_vars:
      nova_virt_type: kvm
  node14:
    ip: 10.0.5.14
    host_vars:
      nova_virt_type: kvm

haproxy_hosts:
  node5:
    ip: 10.0.5.5
  node2:
    ip: 10.0.5.2

log_hosts:
  node3:
    ip: 10.0.5.3

dashboard_hosts:
  node3:
    ip: 10.0.5.3
  node4:
    ip: 10.0.5.4

image_hosts:
  node3:
    ip: 10.0.5.3
  node4:
    ip: 10.0.5.4

storage-infra_hosts:
  node4:
    ip: 10.0.5.4
  node3:
    ip: 10.0.5.3

storage_hosts:
  node15:
    ip: 10.0.5.15
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
          rbd_secret_uuid: 5c618737-d4d4-4ee8-95e6-279ac54e080f
          rbd_flatten_volume_from_snapshot: 'false'
          rbd_max_clone_depth: 5
          rbd_store_chunk_size: 4
          rados_connect_timeout: -1

  node17:
    ip: 10.0.5.17
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
          rbd_secret_uuid: 5c618737-d4d4-4ee8-95e6-279ac54e080f
          rbd_flatten_volume_from_snapshot: 'false'
          rbd_max_clone_depth: 5
          rbd_store_chunk_size: 4
          rados_connect_timeout: -1

  node16:
    ip: 10.0.5.16
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
          rbd_secret_uuid: 5c618737-d4d4-4ee8-95e6-279ac54e080f
          rbd_flatten_volume_from_snapshot: 'false'
          rbd_max_clone_depth: 5
          rbd_store_chunk_size: 4
          rados_connect_timeout: -1
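
A note on the used_ips entries earlier in this file: each entry reserves an inclusive "start,end" address range that the dynamic inventory must not hand out to containers. A quick standalone sketch (not part of OSA) showing how many addresses such a range covers:

```python
import ipaddress

def range_size(entry):
    """Count addresses in an inclusive 'start,end' used_ips entry."""
    start, end = (ipaddress.ip_address(a) for a in entry.split(","))
    return int(end) - int(start) + 1

# First range from used_ips above: spans the end of 10.0.4.x
# plus the start of 10.0.5.x.
print(range_size("10.0.4.1,10.0.5.20"))  # 276
```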

 

  5. Write the user variables file (/etc/openstack_deploy/user_variables.yml):

debug: false

swift_allow_all_users: true
glance_default_store: swift

glance_glance_api_conf_overrides:
  DEFAULT:
    show_multiple_locations: true

## Common Nova overrides
# When nova_libvirt_images_rbd_pool is defined, Ceph clients will be
# installed on the Nova hosts.

nova_libvirt_images_rbd_pool: ephemeral-vms
cinder_ceph_client: cinder
cephx: true

ceph_mons:
  - 10.0.5.15
  - 10.0.5.16
  - 10.0.5.17

apt_pinned_packages:
  - { package: "lxc", version: 2.0.0 }

haproxy_use_keepalived: True
haproxy_bind_on_non_local: True

haproxy_keepalived_external_vip_cidr: "<>"
haproxy_keepalived_internal_vip_cidr: "<>"

haproxy_keepalived_external_interface: <>
haproxy_keepalived_internal_interface: <>

haproxy_keepalived_external_virtual_router_id: 10
haproxy_keepalived_internal_virtual_router_id: 11

haproxy_keepalived_priority_master: 100
haproxy_keepalived_priority_backup: 90

keepalived_ping_address: "<>"
haproxy_keepalived_vars_file: 'vars/configs/keepalived_haproxy.yml'
keepalived_use_latest_stable: True

haproxy_user_ssl_cert: '/root/certificate.crt'
haproxy_user_ssl_key: '/root/private.key'
haproxy_user_ssl_ca_cert: '/root/ca_bundle.crt'

apply_security_hardening: false

horizon_images_upload_mode: legacy
horizon_enable_ha_router: True

neutron_plugin_base:
  - router
  - metering
  - neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin
  - trunk
  - qos

nova_nova_conf_overrides:
  DEFAULT:
    cpu_allocation_ratio: 3.0
    reserved_host_memory_mb: 2048
    ram_allocation_ratio: 2.5

nova_libvirt_hw_disk_discard: 'unmap'
nova_libvirt_disk_cachemodes: 'network=writeback'

neutron_plugin_type: ml2.ovs.dvr
neutron_ml2_drivers_type: "flat,vlan,vxlan"
neutron_l2_population: "True"
neutron_vxlan_enabled: true
neutron_vxlan_group: "239.1.1.1"

neutron_provider_networks:
  network_flat_networks: "*"
  network_types: "flat,vlan,vxlan"
  network_vlan_ranges: "physnet:10:1000"
  network_mappings: "physnet:br-provider"
  network_vxlan_ranges: "1:1000"
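
The nova_nova_conf_overrides above control scheduler overcommit. For one of the 8-core, 20 GB compute nodes from the hardware list, the ratios work out roughly as follows (the exact memory accounting varies slightly by Nova release):

```python
# Values from nova_nova_conf_overrides above.
cpu_allocation_ratio = 3.0
ram_allocation_ratio = 2.5
reserved_host_memory_mb = 2048

# One compute node from the hardware list: 8 cores, 20 GB RAM.
cores = 8
ram_mb = 20 * 1024

# The scheduler will place up to ratio * physical resources,
# with the reserved memory held back for the host itself.
schedulable_vcpus = cores * cpu_allocation_ratio
schedulable_ram_mb = (ram_mb - reserved_host_memory_mb) * ram_allocation_ratio

print(schedulable_vcpus)   # 24.0
print(schedulable_ram_mb)  # 46080.0
```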

  6. Run the playbooks.
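With the configuration in place, the standard OpenStack-Ansible run is three playbooks, executed in order from the playbooks directory:

```shell
cd /opt/openstack-ansible/playbooks

# Prepare the target hosts and build the LXC containers.
openstack-ansible setup-hosts.yml

# Deploy the shared infrastructure: Galera, RabbitMQ, memcached,
# and the repo server.
openstack-ansible setup-infrastructure.yml

# Deploy the OpenStack services themselves (Keystone, Glance,
# Nova, Neutron, Swift, ...).
openstack-ansible setup-openstack.yml
```

Each playbook can be re-run safely if it fails partway through.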

Author: Dilip Renkila

A Cloud enthusiast.
