Contents:
- Purpose of the document
- Steps to configure a controller
Purpose of the document
This document describes how to configure a controller by running a workflow in the OnCommand Workflow Automation (WFA) tool.
Prerequisites:
- Networking is in place, or a direct connection from a laptop to the controller will be used.
- Boot the node, press Ctrl-C when prompted for the Boot Menu, and select Option #7 - Install new software to update the OS.
Notes:
• This step may be done on a base controller without storage.
• You must match the ONTAP version when joining a cluster (for example, an 8.1 node cannot join an 8.1.1-based cluster). In 8.1.1, Option #7 makes this easy if an HTTP source is available.
Node Base Setup, Disk Assignment, and Initialization
Note: Node boots to Cluster Create/Join Wizard following initialization.
- Disconnect the partner connections to each shelf stack so that each stack is connected only to its intended owner node. Leave the SAS connectors partially inserted to ease reconnection after disk initialization has started.
- Power on the first node.
- (Optional) - If this is not a new system:
Press Ctrl-C when prompted to abort AUTOBOOT.
LOADER> set-defaults
- Ensure Clustered ONTAP is enabled.
LOADER> setenv bootarg.init.boot_clustered true
- If the node management port is being changed from e0M, then set the bsdportname.
LOADER> setenv bootarg.bsdportname <%%node#_mgmt_port>
- Verify Clustered ONTAP Boot Parameter.
LOADER> printenv
- Boot System
LOADER> boot_ontap
- Press Ctrl-C for Boot Menu when the following banner is displayed:
********************************
* Press Ctrl-C for Boot Menu.  *
********************************
- Select 5 - Maintenance Mode boot.
- Remove disk ownership from all disks.
disk remove_ownership all
- Verify that owner is set to “not owned” for all disks.
disk show -v
- Assign ownership of disks.
disk assign all
- Verify disk assignment.
disk show
- At this point, if a node has multiple stacks, completely disconnect the connections to the SATA and/or SSD stacks to ensure that mroot is placed on a SAS disk.
If a specific shelf or disk type has been designated for mroot, it may be necessary to disconnect everything except the prescribed shelf.
- After disk assignment, reboot the node and go to Boot Menu.
- Press Ctrl-C for Boot Menu when the following message is displayed:
********************************
* Press Ctrl-C for Boot Menu.  *
********************************
- Select 4 - Clean Configuration and Initialize All Disks.
- Answer yes to the prompts. The system reboots to the Cluster Setup Wizard once initialization has completed.
- Perform the steps in this procedure (Steps 1-10) on the partner node and on each subsequent controller pair joining the cluster.
- Once the Clean Configuration and Initialize process has started on both nodes of the HA pair, reconnect all previously disconnected storage cabling on that HA pair.
Create the Cluster
Following initialization, the Cluster Setup Wizard starts automatically. Use the Node Cluster Setup Worksheet for the information to provide in this procedure.
- (Optional) If the cluster design calls for management IFGRPs, enter Ctrl-C to escape out of the Cluster Setup wizard.
- Configure Management IFGRP.
network port ifgrp create -node local -ifgrp <%%ifgrpname> -distr-func <%%ifgrpdistfunct> -mode <%%ifgrpmode>
Example:
network port ifgrp create -node cnode-01 -ifgrp mgmt_ifgrp -distr-func ip -mode multimode_lacp
- Restart the Cluster Setup Wizard:
cluster setup
- At the Cluster Setup Wizard enter:
create
Private cluster network ports (use appropriate ports for controller type).
• FAS62xx: e0c, e0e
• FAS32xx: e1a, e2a
- Cluster base license key: enter the CL-BASE license key when prompted (additional license keys are entered later).
<%%cluster_base_license>
- Cluster creation proceeds.
When prompted for additional license keys, enter all additional customer license keys.
- You are prompted to set up a Vserver for cluster administration.
When prompted, enter the following information:
password: <password>
cluster management interface port: <%%cluster_mgmt_port>
cluster management interface IP address: <%%cluster_mgmt_ip>
cluster management interface netmask: <%%cluster_mgmt_netmask>
cluster management interface default gateway: <%%cluster_mgmt_gw>
- Enter the DNS Settings:
DNS domain names: <%%dns_domain>
Name server IP addresses: <%%dns_server1>,<%%dns_server2>
- Set up the Node - when prompted, enter the following information:
Controller location: <Location>
node management interface port: <%%node1_mgmt_port>
node management interface IP address: <%%node1_mgmt_ip>
node management interface netmask: <%%node1_mgmt_netmask>
node management interface default gateway: <%%node1_mgmt_gw>
- Join the remaining nodes to the cluster.
Note: Perform steps 3a through 3e for each node.
- (Optional) Configure management IFGRP. Press Ctrl-C to escape out of the Cluster Setup Wizard and run the following command:
network port ifgrp create -node local -ifgrp <%%ifgrpname> -distr-func <%%ifgrpdistfunct> -mode <%%ifgrpmode>
- Restart the Cluster Setup Wizard:
cluster setup
- At the Cluster Setup Wizard enter:
join
- Accept or correct the system defaults.
Ensure predefined cluster ports are used.
Leave the MTU and IP address auto generation unchanged.
- When prompted, enter the cluster name:
<%%cluster_name>
Note: If the cluster name is not listed as the default input option, that is a strong indicator that your cluster network has configuration/connectivity issues.
- Set up the node:
When prompted, enter the following:
node management interface port: <%%node#_mgmt_port>
node management interface IP address: <%%node#_mgmt_ip>
node management interface netmask: <%%node#_mgmt_netmask>
node management interface default gateway: <%%node#_mgmt_gw>
Start OnCommand Workflow Automation:
Open a web browser, go to the WFA server address, and enter your credentials.
Click the Controller_Configuration_After_Setup workflow and follow the steps below.
1: Connect to the Controller
Add the controller credentials before executing the workflow.
2: Enable HA and SFO
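Note: The workflow enables HA and storage failover. To verify or enable them manually, commands along the following lines can be used (clustered ONTAP 8.2 syntax; cluster ha modify applies to two-node clusters only):
cluster ha modify -configured true
storage failover modify -node * -enabled true
storage failover show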
3: Set up NTP (not in workflow; perform manually)
Enter the following commands to configure NTP:
system services ntp server create -node <%%node#> -server <%%ntp_server1>
system services ntp server create -node <%%node#> -server <%%ntp_server2>
system services ntp server show
system services ntp config modify -enabled true
system services ntp config show
4: Create Mgmt/Cluster Mgmt Failover Groups
Note: Only perform this step when using NetApp CN1601 management switches.
It is also important to have the latest SP firmware installed (version 1.3 or above). This enables the e0M port to be aware of the wrench port state, so that if the wrench port fails, e0M also reports the failure and triggers the LIF to migrate.
You do not need all the ports in the failover group to have the same role, so e0M keeps the node-management role and e0a is configured with the data role.
a) Create the cluster management failover group.
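A sketch of the create commands, assuming clustered ONTAP 8.2 failover-group syntax and the node/port names used in the examples below (node1/node2, e0a); add one line per node and port that should participate:
network interface failover-groups create -failover-group cluster_mgmt_fg -node node1 -port e0a
network interface failover-groups create -failover-group cluster_mgmt_fg -node node2 -port e0a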
b) Assign the failover group to the cluster management LIF. (not in workflow; perform manually)
net int modify -vserver cluster_name -lif cluster_mgmt -home-node node1 -home-port e0a -failover-policy nextavail -use-failover-group enabled -failover-group cluster_mgmt_fg
c) Create the node management failover groups.
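A sketch for node1, assuming the same 8.2 syntax and the e0M/e0a port roles described above; repeat for node2_fg, node3_fg, and node4_fg on their respective nodes:
network interface failover-groups create -failover-group node1_fg -node node1 -port e0M
network interface failover-groups create -failover-group node1_fg -node node1 -port e0a
network interface failover-groups show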
d) Assign the failover groups to the node management LIFs. (not in workflow; perform manually)
net int modify -vserver node1 -lif mgmt1 -home-node node1 -home-port e0M -failover-policy nextavail -use-failover-group enabled -failover-group node1_fg
net int modify -vserver node2 -lif mgmt1 -home-node node2 -home-port e0M -failover-policy nextavail -use-failover-group enabled -failover-group node2_fg
net int modify -vserver node3 -lif mgmt1 -home-node node3 -home-port e0M -failover-policy nextavail -use-failover-group enabled -failover-group node3_fg
net int modify -vserver node4 -lif mgmt1 -home-node node4 -home-port e0M -failover-policy nextavail -use-failover-group enabled -failover-group node4_fg
5: Disable Flow Control on all 10g ports (not in workflow; perform manually)
Clust1::> net port show -node node-01 -port e1a
(network port show)
Node: node-01
Flow Control Administrative: full
Clust1::> net port modify -node node-01 -port e1a -flowcontrol-admin none
Clust1::> net port show -node node-01 -port e1a
(network port show)
Node: node-01
Flow Control Administrative: none
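To apply the change to several 10g ports in one pass, a port list can be given to the modify command; the ports shown here are illustrative, so substitute the actual 10g ports for the platform:
Clust1::> net port modify -node * -port e1a,e2a -flowcontrol-admin none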
6: Configure Service Processor (not in workflow; perform manually)
Ensure that the network cable is connected to the SP/RLM.
node run -node * sp setup
or
node run -node * rlm setup
7: Rename the Root Aggregates
Recommended Naming Conventions
Naming convention for root aggregate: aggr0_root_<nodename>
Naming convention for data aggregates: nodename_disktype_aggregateID
Use underscores ("_") instead of dashes ("-") in aggregate names.
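A sketch of the rename, assuming the root aggregate still carries its default name (for example, aggr0); check storage aggregate show for the actual name on each node before renaming:
storage aggregate show
storage aggregate rename -aggregate aggr0 -newname aggr0_root_<nodename>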
8: Configure and Enable ACP (not in workflow; perform manually)
Run these commands for each node in the cluster.
node run -node <%%node#> acpadmin configure
node run -node <%%node#> storage show acp
9: Set Up AutoSupport
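A minimal manual example in case AutoSupport is not configured through the workflow; the mail host and from-address placeholders are illustrative and must be replaced with customer values:
system node autosupport modify -node * -state enable -transport smtp -mail-hosts <%%mailhost> -from <%%autosupport_from> -support enable
system node autosupport show -node * -fields state,transport,mail-hosts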
10: Run Config Advisor (not in workflow; perform manually)
a) Enable the CDP daemon on the cluster nodes.
system node run -node * options cdpd.enable on
b) Run Config Advisor on the cluster.
c) Run the following on each CN1610 cluster switch to correct the reported error:
enable
# configure
(Config)# port-channel name 3/1 ISL-LAG
d) Rerun Config Advisor to verify that the error has been corrected.
e) Send the report to the NetApp internal cDOT Program team and notify them of any issues found.
f) Correct as needed. Rerun Config Advisor to verify that corrective measures are successful.
g) Disable CDP daemon when Config Advisor is no longer needed.
system node run -node * options cdpd.enable off
h) (Optional) Telnet access to the switches is currently required for Config Advisor to work.
If the customer would like Telnet fully disabled, allowing only SSH, run the following commands on each CN16xx switch:
# no ip telnet server enable
# write memory