VALIDATED HARDWARE CONFIGURATION

Partner Name

Deployment Guide

Contents

Document History
Introduction
Notes
Switches
Rack Physical Cabling schema
Logical networking schema
Switch configuration
Server configuration
RAID configuration
BIOS configuration
Common settings
Controller and Storage Nodes
Compute and Infrastructure Nodes
Storage node boot order
Infrastructure node
Installation
Configuration
Fuel Installation
OpenStack Cluster Configuration
Network configuration
Configure VLANs
Configure “Public” section
Configure “Neutron L2 configuration”
Network interfaces layout
Controllers
Computes
Storages
Other settings
Verifying network setup
Disk layout
Controllers
Computes
Storage nodes
Deploy
Post Deployment Verification
Appendix
Cabling tables
<1G Switch>
<10G Switch>-1
<10G Switch>-2
Switch “show running-config” output

Document History

Version / Revision Date / Description
0.1 / DD-MM-YYYY / Initial Version

Introduction

The <Vendor>-Mirantis OpenStack Reference Architecture describes a modular, scalable rack configuration of <Vendor> compute, storage and network hardware, validated with Mirantis OpenStack, and engineered to <detail specific rationale, compatible workload types, target cluster size and other specifics>.

This companion document details the steps required to deploy such a cluster and start it running.

The guide assumes that users are familiar with basic OpenStack operations, the Linux command line, enterprise networking, virtualization, and the <Vendor> switch CLI. Basic knowledge of Fuel operations is also required.

Notes

Please note that IP addresses, VLAN IDs, and network interface names are provided for reference only; replace them with values appropriate to your environment.

Switches

The rack contains three (3) <Vendor> Networking switches: one <1G Switch> and a pair of <10G Switches> interconnected via Multi-chassis Link Aggregation (MC-LAG). The physical cabling and logical networking schemas are shown below.

Cabling tables can be found in the Appendix.

Rack Physical Cabling schema

<Insert rack physical cabling diagram>

Logical networking schema

<Insert logical networking diagram>

Switch configuration

Please refer to the Appendix for complete switch configuration listings.

Server configuration

Please refer to <Link to IPMI user guide> to learn more about how to use IPMI to configure RAID and BIOS.

RAID configuration

<Provide an actual disks configuration using the table below as an example>

Node / Disks / RAID type / Notes
Fuel / 2x 1.2 TB SATA / RAID1 / OS
Controller / 2x 400 GB SSD / None / OS: sda; MongoDB: sdb
Compute / 1x 400 GB SSD / None / OS: sda
Storage / 4x 200 GB SSD / None / Ceph journal: sda-sdd
Storage / 2x 1.2 TB SAS / RAID1 / OS: sde
Storage / 20x 1.2 TB SAS / RAID0 / Ceph OSD: sdf-sdy (each disk configured as a separate single-disk RAID0 to use the controller’s cache)

BIOS configuration

The guide assumes that all BIOS settings are initially set to factory defaults. If they are not, please reset BIOS to default settings before proceeding.

The Common settings shown below should be applied to all nodes. The additional settings for Controller/Storage and Compute/Infrastructure nodes, also shown below, should then be applied to the corresponding node types.

Common settings

Display Name / Attribute / Settings
<Sample Name> / <Sample Attribute> / <Sample value>

Controller and Storage Nodes

Display Name / Attribute / Settings
<Sample Name> / <Sample Attribute> / <Sample value>

Compute and Infrastructure Nodes

Display Name / Attribute / Settings
<Sample Name> / <Sample Attribute> / <Sample value>

Storage node boot order

<Specify any changes in boot device priority if necessary. Typically, Storage nodes require precise boot configuration.>

Infrastructure node

Installation

The Infrastructure node serves as a hypervisor host for the Fuel Master VM and other VMs involved in the testing process. To install and configure it:

1.  Download the Ubuntu server 14.04 ISO from http://www.ubuntu.com/download/server to your local computer.

2.  Use IPMI to mount the Ubuntu installation ISO on the Infrastructure node as a virtual CD-ROM. Please refer to the <Link to IPMI user guide> for more details.

3.  Install Ubuntu with the following roles selected:

●  SSH server

●  Virtualization server

Leave other settings at their default values. Networking will be configured in the next steps.
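If these roles were not selected during installation, the equivalent packages can be installed afterwards; a minimal sketch, using the package names from the Ubuntu 14.04 archive (bridge-utils is also required for the bridge configuration in the next section):

# apt-get update
# apt-get install openssh-server qemu-kvm libvirt-bin virtinst virt-manager bridge-utils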

Configuration

1.  Log into the Infrastructure node and configure the resolver:

# echo "nameserver 8.8.8.8" > /etc/resolv.conf

2.  Configure bridges in /etc/network/interfaces, using the following configuration as an example. Pay attention to interface names and IP addresses, changing them if needed:

auto em1
iface em1 inet manual

auto em2
iface em2 inet manual

auto br-ext
iface br-ext inet static
    bridge_ports em2
    address 172.16.224.2
    netmask 255.255.255.128
    gateway 172.16.224.1
    bridge_stp off

auto br-pxe
iface br-pxe inet manual
    bridge_ports em1
    bridge_stp off

3.  Reboot the server and make sure the bridges are up and the gateway is accessible:

# brctl show

# ping 172.16.224.1
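Optionally, confirm the address and default route assigned to br-ext as well:

# ip addr show br-ext
# ip route show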

Fuel Installation

1.  Download the Mirantis OpenStack (MOS) ISO:

# cd ~

# wget http://9f2b43d3ab92f886c3f0-e8d43ffad23ec549234584e5c62a6e24.r60.cf1.rackcdn.com/MirantisOpenStack-7.0.iso
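Optionally, verify the integrity of the downloaded image by comparing its checksum with the value published alongside the ISO:

# md5sum MirantisOpenStack-7.0.iso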

2.  Connect to the Infrastructure node with virt-manager and create a new VM with the following parameters:

Name: Fuel
Networking: 2x NIC, each connected to its corresponding bridge (br-ext and br-pxe)
Memory: 8 GB
CPU: 2 cores
Disk: 100 GB

Another way to create this VM is with the following command:

# virt-install --os-variant=ubuntutrusty --ram 8192 --vcpus=2 --network bridge=br-ext,model=virtio --network bridge=br-pxe,model=virtio --name fuel1 --disk path=/var/lib/libvirt/images/fuel1.qcow2,cache=none,size=100 -c ~/MirantisOpenStack-7.0.iso --graphics vnc --autostart --noautoconsole
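To confirm the VM is defined and running, and to find its VNC display, virsh can be used (the VM name fuel1 matches the command above):

# virsh list --all
# virsh vncdisplay fuel1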

3.  Use virt-manager or a VNC console to connect to the VM and proceed with the installation:

a)  Wait until the Fuel menu appears, go to the “Network setup” tab, select eth0, and set its parameters as follows: <insert network parameters; the rest of this guide assumes the Fuel web UI is reachable at 172.16.224.3>

b)  Go to the “Quit setup” tab and select “Save and continue.” Wait until the installation completes.

4.  Open your browser and check that the Fuel web UI is accessible at http://172.16.224.3:8000/
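The same check can be scripted from the Infrastructure node; an HTTP response indicates the UI is up:

# curl -I http://172.16.224.3:8000/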

OpenStack Cluster Configuration

1.  Log into each hardware server’s IPMI and turn the power on (this can also be scripted; see the ipmitool sketch at the end of this section).

2.  Wait until all servers are discovered by Fuel and shown in its Web UI.

3.  Go to the Fuel web UI at http://172.16.224.3:8000/ and create a new environment with the following parameters (change them if needed):

Parameter / Value
Name and release / cloud1, Kilo on Ubuntu 14.04
Compute / Select the “KVM” radio button
Networking setup / Select the “Neutron with VLAN segmentation” radio button
Storage backends / Select the “Yes, use Ceph” radio button
Additional services / Select the “Install Ceilometer (OpenStack Telemetry)” checkbox

4.  Assign roles to the discovered nodes:

●  Use <Vendor> <SERVER MODEL 1> servers to add 3 nodes with the following roles:

○  Controller

○  Telemetry - MongoDB

●  Use <Vendor> <SERVER MODEL 1> servers to add 3 nodes with the following role:

○  Compute

●  Use <Vendor> <SERVER MODEL 2> servers to add 3 nodes with the following role:

○  Storage - Ceph OSD
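Step 1 above can also be performed with ipmitool rather than each server’s IPMI web interface; a minimal sketch, with the address and credentials as placeholders:

# ipmitool -I lanplus -H <IPMI address> -U <user> -P <password> chassis power on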

Network configuration

Configure VLANs

Go to the “Networks” tab in the Fuel web UI and select the “Use VLAN tagging” checkbox for all networks. Set VLAN IDs according to the following table:

Network / VLAN ID
Public / 160
Storage / 180
Management / 140
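These settings can also be reviewed and edited from the Fuel Master console; a sketch using the MOS 7.0 Fuel CLI, assuming the environment ID is 1 and the client’s default output file name:

# fuel network --env 1 --download
# vi network_1.yaml
# fuel network --env 1 --upload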

Configure “Public” section:

Note: Make sure the “Gateway” field contains your external router’s IP address from the 172.16.224.0/25 subnet (VLAN 160) and that this router provides Internet access for the whole subnet.

Field / Value
IP range start / 172.16.224.4
IP range end / 172.16.224.14
CIDR / 172.16.224.0/25
Gateway / 172.16.224.1
Floating IP range start / 172.16.224.15
Floating IP range end / 172.16.224.126

Configure “Neutron L2 configuration”:

Field / Value
VLAN ID range start / 200
VLAN ID range end / 1000

Network interfaces layout

Go to the “Nodes” tab and configure the network interface layout for each server according to its role. Nodes with the same role can be configured together by selecting all servers in the group rather than one at a time.

Use “Bond interfaces” to create bonds with the following parameters:

Parameter / Value
Xmit Hash Policy / layer2
Mode / 802.3ad (LACP)
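After deployment, you can verify on any node that a bond actually negotiated LACP with these parameters by reading the standard Linux bonding status file:

# cat /proc/net/bonding/bond0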

Controllers

Interface / Slave interfaces / Assigned networks
bond0 / eth0, eth1 / Public, Storage, Management, Private
eth2 / - / Admin (PXE)
Computes

Interface / Slave interfaces / Assigned networks
bond0 / eth0, eth1 / Public, Storage, Management, Private
eth2 / - / Admin (PXE)

Storages

Storage traffic can have a strong performance impact on other networks sharing the same physical link, so it is separated onto a dedicated bond.

Interface / Slave interfaces / Assigned networks
bond0 / eth0, eth2 / Public, Management, Private
bond1 / eth1, eth3 / Storage
eth4 / - / Admin (PXE)

Other settings

Go to the “Settings” tab -> “Neutron advanced configuration” and select the “Neutron DVR” checkbox.

Verifying network setup

Return to the “Networks” tab and press “Verify Network.” Note that a full network check is not possible at this stage because bonded interfaces are dropped from the verification list. Make sure the other network checks pass successfully; if they do not, double-check your external router and network settings.

Disk layout

<This configuration depends on actual disks available on each server. Change the description below if needed>

Go to the “Nodes” tab and configure disk layout for each node.

Controllers

MongoDB should be located on a separate physical disk for best performance.

Disk / Roles
sda / Base system, Logs, MySQL database
sdb / MongoDB

Computes

Keep the default disk layout.

Storage nodes

<This configuration is based on 20 disks for Ceph OSDs, 4 SSDs for the Ceph journal, and 2 disks in RAID1 for the OS. Change it if your disk layout differs.>

Use the first 4 disks for the Ceph journal, the 5th disk for the base system, and leave the rest for Ceph.

Disk / Roles
sda-sdd / Ceph journal
sde / Base system
sdf-sdy / Ceph

Deploy

Go to the “Dashboard” tab and click “Deploy.”

Normally, the deployment process takes a few hours.
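Deployment progress can also be followed from the Fuel Master console; a sketch using the Fuel CLI, assuming the environment ID is 1:

# fuel task
# fuel node --env 1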

Post Deployment Verification

Go to the “Health Check” tab and click “Run Tests.”

Wait for the tests to complete and make sure they all pass (shown green), except for the credentials tests, which fail because the default username and password are still in use.

If the tests pass, the cluster is installed, working properly, and ready for use.
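In addition to the Health Check tab, a quick manual spot check is possible from the Fuel Master node; a sketch, assuming Fuel’s default node naming and credentials file location:

# fuel node | grep controller
# ssh node-1
# source /root/openrc
# nova service-list
# ceph -s

All nova services should be reported as enabled and up, and ceph should report HEALTH_OK.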

Appendix

Cabling tables

<1G Switch>

Name / Connector / Destination Device / Destination Port / Untagged VLANs / Tagged VLANs / LAG # / LAG Mode
Gi 0/0 / 1G Copper / OS Controller 1 / IPMI / 100 / - / -
Gi 0/1 / 1G Copper / OS Controller 2 / IPMI / 100 / - / -
Gi 0/2 / 1G Copper / OS Controller 3 / IPMI / 100 / - / -
Gi 0/3 / 1G Copper / Compute 1 / IPMI / 100 / - / -
Gi 0/4 / 1G Copper / Compute 2 / IPMI / 100 / - / -
Gi 0/5 / 1G Copper / Compute 3 / IPMI / 100 / - / -
Gi 0/6 / 1G Copper / Storage 1 / IPMI / 100 / - / -
Gi 0/7 / 1G Copper / Storage 2 / IPMI / 100 / - / -
Gi 0/8 / 1G Copper / Storage 3 / IPMI / 100 / - / -
Gi 0/9 / 1G Copper / Infrastructure node / IPMI / 100 / - / -
Gi 0/10 / - / -
Gi 0/11 / - / -
Gi 0/12 / - / -
Gi 0/13 / - / -
Gi 0/14 / - / -
Gi 0/15 / - / -
Gi 0/16 / 1G Copper / OS Controller 1 / 1G1 / 120 / - / -
Gi 0/17 / 1G Copper / OS Controller 2 / 1G1 / 120 / - / -
Gi 0/18 / 1G Copper / OS Controller 3 / 1G1 / 120 / - / -
Gi 0/19 / 1G Copper / Compute 1 / 1G1 / 120 / - / -
Gi 0/20 / 1G Copper / Compute 2 / 1G1 / 120 / - / -
Gi 0/21 / 1G Copper / Compute 3 / 1G1 / 120 / - / -
Gi 0/22 / 1G Copper / Storage 1 / 1G1 / 120 / - / -
Gi 0/23 / 1G Copper / Storage 2 / 1G1 / 120 / - / -
Gi 0/24 / 1G Copper / Storage 3 / 1G1 / 120 / - / -
Gi 0/25 / - / -
Gi 0/26 / - / -
Gi 0/27 / - / -
Gi 0/28 / - / -
Gi 0/29 / - / -
Gi 0/30 / - / -
Gi 0/31 / - / -
Gi 0/32 / - / -
Gi 0/33 / - / -
Gi 0/34 / - / -
Gi 0/35 / - / -
Gi 0/36 / - / -
Gi 0/37 / - / -
Gi 0/38 / - / -
Gi 0/39 / - / -
Gi 0/40 / - / -
Gi 0/41 / - / -
Gi 0/42 / 1G Copper / Infrastructure node / 1G1 / 120 / - / -
Gi 0/43 / 1G Copper / Infrastructure node / 1G2 / 160 / - / -
Gi 0/44 / 1G Copper / <10G Switch>-1 / Management / 100 / - / -
Gi 0/45 / 1G Copper / <10G Switch>-2 / Management / 100 / - / -
Gi 0/46 / 1G Copper / Uplink OOB / 100 / - / -
Gi 0/47 / 1G Copper / Uplink Public / 160 / - / -
Stack 0/48 / - / -
Stack 0/49 / - / -
Te 0/50 / 10G Fiber SR / <10G Switch>-1 / Te 0/47 / 100,120,160 / 1 / LACP