GFS Installation

How to install and run CLVM and GFS (imported from Usage.txt).

Refer to the cluster project page for the latest information.

Table of contents
1 Get source
2 Build and install
3 Load kernel modules
4 Startup procedure
5 Shutdown procedure
6 Creating CCS config
7 Creating CLVM logical volumes
8 Creating GFS file systems
9 Cluster startup/shutdown notes
10 Config file
11 Multiple clusters
12 Two node clusters
13 Advanced Network Configuration
13.1 Multihome
13.2 Multicast
13.3 IPv6

Get source

  • download the source tarballs

latest linux kernel - ftp://ftp.kernel.org/pub/linux/

device-mapper - ftp://sources.redhat.com/pub/dm/

lvm2 - ftp://sources.redhat.com/pub/lvm2/

iddev - ftp://sources.redhat.com/pub/cluster/

ccs - ftp://sources.redhat.com/pub/cluster/

fence - ftp://sources.redhat.com/pub/cluster/

cman - ftp://sources.redhat.com/pub/cluster/

cman-kernel - ftp://sources.redhat.com/pub/cluster/

dlm - ftp://sources.redhat.com/pub/cluster/

dlm-kernel - ftp://sources.redhat.com/pub/cluster/

gfs - ftp://sources.redhat.com/pub/cluster/

gfs-kernel - ftp://sources.redhat.com/pub/cluster/

  • or check out the source from CVS; in summary, after cvs login (a concrete checkout sketch follows these commands):

cvs -d :pserver::/cvs/dm checkout device-mapper

cvs -d :pserver::/cvs/lvm2 checkout LVM2

cvs -d :pserver::/cvs/cluster checkout cluster
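
As a concrete sketch, assuming an anonymous pserver account (the CVS host name is omitted in the commands above; <cvs-host> is a placeholder you must replace):

# hypothetical anonymous checkout; replace <cvs-host> with the real server
cvs -d :pserver:cvs@<cvs-host>:/cvs/cluster login
cvs -d :pserver:cvs@<cvs-host>:/cvs/cluster checkout cluster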

Build and install

  • Configure and compile your kernel
  • build and install userland programs and libraries (order is important)

cd device-mapper

./configure

make; make install

device-mapper build note: on Debian, make sure you have either both libselinux1 and libselinux1-dev installed, or neither of them

# magma.h is needed by other parts, so install it first

cd cluster/magma

./configure --kernel_src=/path/to/patched/kernel

make; make install

# now build everything in cluster

cd ../

./configure --kernel_src=/path/to/patched/kernel

make; make install

lvm2

./configure --with-clvmd --with-cluster=shared --with-kernel-source=/path/to/patched/kernel

make; make install

scripts/clvmd_fix_conf.sh /lib/liblvm2clusterlock.so

device-mapper

./configure --with-kernel-source=/path/to/patched/kernel

make; make install

Load kernel modules

depmod -a

modprobe dm-mod

device-mapper/scripts/devmap_mknod.sh

modprobe gfs

modprobe lock_dlm

Modules that should now be loaded: lock_dlm, dlm, cman, gfs and lock_harness, plus dm-mod if device-mapper was built as a module.
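
As a quick sanity check you can list the loaded modules (note that lsmod reports dm-mod as dm_mod):

lsmod | egrep 'lock_dlm|dlm|cman|gfs|lock_harness|dm_mod'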

Startup procedure

Run these commands on each cluster node:

> ccsd - Starts the CCS daemon

> cman_tool join - Joins the cluster

> fence_tool join - Joins the fence domain

(starts fenced; must start before any GFS file system is used.)

> clvmd - Starts the CLVM daemon

> vgchange -aly - Activates LVM volumes (locally)

> mount -t gfs /dev/vg/lvol /mnt - Mounts a GFS file system
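
The same sequence can be collected into a small per-node script; this is only a sketch (no error handling or waiting for quorum), and /dev/vg/lvol and /mnt are the placeholder names used above:

#!/bin/sh
# minimal per-node start-up sketch; adjust device and mount point
ccsd
cman_tool join
fence_tool join
clvmd
vgchange -aly
mount -t gfs /dev/vg/lvol /mnt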

Shutdown procedure

Run these commands on each cluster node:

> umount /mnt - Unmounts a GFS file system

> vgchange -aln - Deactivates LVM volumes (locally)

> killall clvmd - Stops the CLVM daemon

> fence_tool leave - Leaves the fence domain (stops fenced)

> cman_tool leave - Leaves the cluster

> killall ccsd - Stops the CCS daemon
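
A matching per-node shutdown sketch, again without error handling:

#!/bin/sh
# minimal per-node shutdown sketch
umount /mnt
vgchange -aln
killall clvmd
fence_tool leave
cman_tool leave
killall ccsd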

Creating CCS config

There is no GUI or command-line program to create the config file yet, so the cluster config file "cluster.conf" must be created manually. Once created, cluster.conf should be placed in the /etc/cluster/ directory on one cluster node. The CCS daemon (ccsd) will take care of transferring it to the other nodes where it's needed. (FIXME: updating cluster.conf in a running cluster is supported but not documented.)

A minimal cluster.conf example is shown below.

Creating CLVM logical volumes

Use standard LVM commands (see LVM documentation on using pvcreate, vgcreate, lvcreate.) A node must be running the CLVM system to use the LVM commands. Running the CLVM system means successfully running the commands above up through starting clvmd.
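
For example, to create a clustered logical volume matching the /dev/vg/lvol name used above (the device /dev/sda1 and the 10G size are placeholders):

pvcreate /dev/sda1
vgcreate vg /dev/sda1
lvcreate -L 10G -n lvol vg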

Creating GFS file systems

> gfs_mkfs -p lock_dlm -t <ClusterName>:<FSName> -j <Journals> <Device>

<ClusterName> must match the cluster name used in CCS config

<FSName> is a unique name chosen now to distinguish this fs from others

<Journals> the number of journals in the fs, one for each node to mount

<Device> a block device, usually an LVM logical volume

Creating a GFS file system means writing to a CLVM volume which means the CLVM system must be running (see previous section.)
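
For instance, using the cluster name "alpha" from the config file example below and the volume created above (the file system name "gfs1" is arbitrary; three journals allow three nodes to mount):

> gfs_mkfs -p lock_dlm -t alpha:gfs1 -j 3 /dev/vg/lvol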

Cluster startup/shutdown notes

Fencing: In the start-up steps above, "fence_tool join" is the equivalent of simply starting fenced. fence_tool is useful because additional options can be specified to delay the actual starting of fenced. Delaying can be useful to avoid unnecessarily fencing nodes that haven't joined the cluster yet. The only option fence_tool now provides to address this is "-t <seconds>" to wait the given number of seconds before starting fenced.

Shutdown: There is also a practical timing issue with respect to the shutdown steps being run on all nodes when shutting down an entire cluster. When shutting down the entire cluster (or shutting down a node for an extended period) use "cman_tool leave remove". This automatically reduces the number of expected votes as each node leaves and prevents the loss of quorum which could keep the last nodes from cleanly completing shutdown.

Using the "remove" leave option should not be used in general since it introduces potential split-brain risks.

If the "remove" leave option is not used, quorum will be lost after enough nodes have left the cluster. Once the cluster is inquorate, remaining members that have not yet completed "fence_tool leave" in the steps above will be stuck. Operations such as umounting gfs or leaving the fence domain ("fence_tool leave") will block while the cluster is inquorate. They can continue and complete only once quorum is regained.

If this happens, one option is to join the cluster ("cman_tool join") on some of the nodes that have left so that the cluster regains quorum and the stuck nodes can complete their shutdown. Another option is to forcibly reduce the number of expected votes for the cluster, which allows the cluster to become quorate again ("cman_tool expected <votes>"). This latter method is equivalent to using the "remove" option when leaving.
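
For example, if only one node remains and it is stuck because the cluster is inquorate, forcing the expected votes down on that node lets it finish shutting down (the value 1 below assumes this single-remaining-node case):

> cman_tool expected 1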

Config file

This example primarily illustrates the variety of fencing configurations.

The first node uses "cascade fencing"; if the first method fails (power cycling with an APC Masterswitch), the second is tried (port disable on a Brocade FC switch). In this example, the node has dual paths to the storage so the port on both paths must be disabled (the same idea applies to nodes with dual power supplies.)

There is only one method of fencing the second node (via an APC Masterswitch) so no cascade fencing is possible.

If no hardware is available for fencing, manual fencing can be used as shown for the third node. If a node with manual fencing fails, a human must take notice (a message appears in the system log) and run fence_ack_manual after resetting the failed node. (The node that actually carries out fencing operations is the node with the lowest ID in the fence domain.)

<?xml version="1.0"?>
<cluster name="alpha" config_version="1">
  <cman>
  </cman>
  <clusternodes>
    <clusternode name="nd01" votes="1">
      <fence>
        <method name="cascade1">
          <device name="apc1" port="1"/>
        </method>
        <method name="cascade2">
          <device name="brocade1" port="1"/>
          <device name="brocade2" port="1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="nd02" votes="1">
      <fence>
        <method name="single">
          <device name="apc1" port="2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="nd03" votes="1">
      <fence>
        <method name="single">
          <device name="human" ipaddr="nd03"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fence_devices>
    <device name="apc1" agent="fence_apc" ipaddr="10.1.1.1" login="apc" passwd="apc"/>
    <device name="brocade1" agent="fence_brocade" ipaddr="10.1.1.2" login="user" passwd="pw"/>
    <device name="brocade2" agent="fence_brocade" ipaddr="10.1.1.3" login="user" passwd="pw"/>
    <device name="human" agent="fence_manual"/>
  </fence_devices>
</cluster>

Multiple clusters

When multiple clusters are used, it can be useful to specify the cluster name on the cman_tool command line. This forces CCS to select a cluster.conf with the same cluster name. The node then joins this cluster.

> cman_tool join -c <ClusterName>

[Note: If the -c option is not used, ccsd will first check the local copy of cluster.conf to extract the cluster name and will only grab a remote copy of cluster.conf if it has the same cluster name and a greater version number. If a local copy of cluster.conf does not exist, ccsd may grab a cluster.conf for a different cluster than intended -- cman_tool would then report an error that the node is not listed in the file.

So, if you don't currently have a local copy of cluster.conf (and there are other clusters running) or you wish to join a different cluster with a different cluster.conf from what exists locally, you must specify the -c option.]
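
For example, to check which cluster the local copy belongs to before joining the "alpha" cluster from the config file example above (a simple grep sketch that assumes the name attribute is on one line, as in the examples here):

> grep 'cluster name=' /etc/cluster/cluster.conf

> cman_tool join -c alpha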

Two node clusters

Ordinarily, when one node out of two fails, the loss of quorum prevents the remaining node from continuing (assuming both nodes have one vote). Some special configuration options can be set to allow the one remaining node to continue operating if the other fails. To do this, only two nodes, each with one vote, may be defined in cluster.conf. The two_node and expected_votes values must then be set to 1 in the cman config section as follows.

<cman two_node="1" expected_votes="1">
</cman>

Advanced Network Configuration

Multihome

CMAN can be configured to use multiple network interfaces. If one interface fails, CMAN should be able to continue running on the remaining one. A node's name in cluster.conf is always associated with the IP address on one network interface; "nd1" in the following:

<node name="nd1" votes="1">
</node>

To use a second network interface, the node must have a second hostname associated with the IP address on that interface; "nd1-e1" in the following. The second hostname is specified in an "altname" section.

<node name="nd1" votes="1">
  <altname name="nd1-e1"/>
</node>
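
Both hostnames must resolve to the correct interface addresses on every node; a sketch of the corresponding /etc/hosts entries (the addresses are placeholders):

# placeholder addresses, one per interface
10.0.0.1   nd1
10.1.0.1   nd1-e1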

Multicast

CMAN can be configured to use multicast instead of broadcast (broadcast is used by default if no multicast parameters are given.) To configure multicast when one network interface is used add one line under the <cman> section and another under the <node> section:

<cman>
  <multicast addr="224.0.0.1"/>
</cman>

<node name="nd1" votes="1">
  <multicast addr="224.0.0.1" interface="eth0"/>
</node>

The multicast addresses must match, and the address must be usable on the interface name given for the node.
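
If the chosen interface has no multicast route, one way to add one (assuming the iproute2 tools are available) is:

> ip route add 224.0.0.0/4 dev eth0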

When two interfaces are used, multicast is configured as follows:

<cman>
  <multicast addr="224.0.0.1"/>
  <multicast addr="224.0.0.9"/>
</cman>

<node name="nd1" votes="1">
  <altname name="nd1-e1"/>
  <multicast addr="224.0.0.1" interface="eth0"/>
  <multicast addr="224.0.0.9" interface="eth1"/>
</node>

GNBD installation

This page describes how to share a block device across two nodes using GNBD. It's based on the GFS/GNBD documentation by DataCore GmbH.

Table of contents
1 Prerequisites
2 Starting the services (DLM, CCSD, FENCE)
3 Fence
4 CLVMD
5 Final check
6 GNBD export
7 Importing a device
8 GFS on GNBD

Prerequisites

On both nodes:

  • Patched kernel
  • Userland tools (see Installation on how to build them) and /etc/cluster/cluster.conf like this:

<?xml version="1.0"?>
<cluster name="cluster1" config_version="1">
  <cman two_node="1" expected_votes="1">
  </cman>
  <clusternodes>
    <clusternode name="one" votes="1">
      <fence>
        <method name="single">
          <device name="human" ipaddr="192.168.1.1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="two" votes="1">
      <fence>
        <method name="single">
          <device name="human" ipaddr="192.168.1.2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fence_devices>
    <fence_device name="human" agent="fence_manual"/>
  </fence_devices>
</cluster>

Use the host names of the servers as the node names and make sure the nodes can reach each other by those names.
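
For example, /etc/hosts on both machines could contain entries matching the addresses used in the fence sections above:

192.168.1.1   one
192.168.1.2   two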

Starting the services (DLM, CCSD, FENCE)

Load the DLM kernel module on both nodes:

root@one # modprobe lock_dlm

root@two # modprobe lock_dlm

CCSD (Cluster Configuration Daemon):

root@one # ccsd

root@two # ccsd

CPU usage might be high temporarily; ignore that for now. You can test ccsd with the following commands (this didn't work for me until I started cman, so don't worry too much if it fails):

root@one # ccs_test connect

Output should be something like this:

Connect successful. Connection descriptor = 1

Another test:

root@one # ccs_test get node '//cluster/@name'

Should return:

Get successful.

Value = <cluster1>

Now start the cluster manager (CMAN) and form the cluster:

root@one # /sbin/cman_tool join

root@two # /sbin/cman_tool join

In the syslog you'll see a message about the state of the cluster; after a while /proc/cluster/nodes should also show both nodes:

Node  Votes  Exp  Sts  Name
   1      1    1    M  one
   2      1    1    M  two

Fence

Don't proceed further until you have completed all of the above! Otherwise the first node will fence the joining one before a cluster can be formed. Join the fence domain:

root@one # /sbin/fence_tool join

root@two # /sbin/fence_tool join

CLVMD

Start the clustered LVM daemon:

root@one # /sbin/clvmd

root@two # /sbin/clvmd

Now your cluster is pretty much ready.

To activate all LVM volumes:

root@one # vgchange -aly

root@two # vgchange -aly

Note: if vgchange -aly reports no active volume groups, run a vgscan; that should find the VGs for the cluster.

Final check

/proc/cluster/status should report something along these lines on both machines:

Version: 2.0.1
Config version: 1
Cluster name: cluster1
Cluster ID: 26777
Membership state: Cluster-Member
Nodes: 2
Expected_votes: 1
Total_votes: 2
Quorum: 1
Active subsystems: 3
Node addresses: 192.168.1.1

/proc/cluster/services, also on both nodes, should show FENCE and the DLM lock space up:

Service           Name         GID  LID  State  Code
Fence Domain:     "default"      1    2  run    -
[1 2]

DLM Lock Space:   "clvmd"        2    3  run    -
[1 2]

Your cluster is ready.

GNBD export

The next step is to export a device to the network. This can be a partition, an LVM volume, or a file created with

dd if=/dev/zero of=/path/to/your/file bs=4096 count=1024

(with a 4096-byte block size and 1024 blocks this is only about 4 MB).
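
If you want a test file of roughly 4 GB instead, a sketch with a 1 MB block size would be:

dd if=/dev/zero of=/path/to/your/file bs=1M count=4096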

Load the gnbd module:

root@one # modprobe gnbd

Start the gnbd_serv daemon:

root@one # /sbin/gnbd_serv -v

Export the block device:

root@one # gnbd_export -v -e export1 -d /path/to/your/file_or_dev

To see the export:

root@one # gnbd_export -v -l

Server[1] : export1
------
file : /dev/sda1
sectors : 23789568
readonly : no
cached : no
timeout : 60

Make sure you DO NOT use the -c (caching) flag if you will later put a GFS file system or dm multipath on the export; other file systems are OK with caching.

Possible problem

'Operation not permitted' error

On my system (Debian unstable) magma expects the plugin folder to be /lib/magma/plugins; you could add a symlink and see if it works. Otherwise you can run a test program from the magma source dir, magma/tests/cpt null. Stracing it (strace magma/tests/cpt null) will show you where it is looking (an ENOENT near the end of the strace output). The reason for this problem seems to be the usage of $libdir in the magma-plugins makefiles or some such.
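
Assuming strace shows the plugins actually live under /usr/lib/magma/plugins (check the real path on your system first), the workaround symlink would look like this:

mkdir -p /lib/magma
ln -s /usr/lib/magma/plugins /lib/magma/plugins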

Importing a device

Load the gnbd module on the other node as well:

root@two # modprobe gnbd

Tell gnbd to import from node one:

root@two # gnbd_import -v -i one

All exports from node one will now be imported:

root@two # gnbd_import -v -l

Device name : export1
------
Minor # : 0
Proc name : /dev/gnbd0
Server : srv1
Port : 14567
State : Open Connected Clear
Readonly : No
Sectors : 23789568

The device is available like a normal local device at /dev/gnbd/export1.

You could, for example, just format it as ext3 or XFS and mount it.

GFS on GNBD

Load the GFS kernel module on both nodes:

root@one # modprobe gfs

root@two # modprobe gfs

From either of the two nodes you can create the GFS file system:

root@one # gfs_mkfs -p lock_dlm -t cluster1:export1 -j 2 /path/to/yourfile_or_partition_or_gnbd_import

Now mount the filesystem on the exporting machine:

root@one # mount -t gfs /path/to/yourfile_or_partition_or_gnbd_import /mnt

Same on the second node:

root@two # mount -t gfs /dev/gnbd/export1 /mnt

/proc/cluster/services will show you two additional services:

Service           Name         GID  LID  State  Code
Fence Domain:     "default"      1    2  run    -
[1 2]

DLM Lock Space:   "clvmd"        2    3  run    -
[1 2]

DLM Lock Space:   "export1"      3    4  run    -
[1 2]

GFS Mount Group:  "export1"      4    5  run    -
[1 2]
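
If you want the mount in /etc/fstab, mark it noauto so it is only mounted once the cluster services are up; a sketch for node two (adjust device and mount point):

# GFS over GNBD; mount manually after cman/fenced/clvmd are running
/dev/gnbd/export1   /mnt   gfs   noauto,defaults   0 0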

GFS on NDAS

Requirements

  • Multiple systems can write data simultaneously to a single file system on an NDAS device
  • Nodes can seamlessly join or leave the cluster while accessing the GFS file system

How to use GFS on NDAS

1. System Requirements

  • Two Fedora Core 4 systems
  • NDAS device version 1.1 or above

2. Set up the system

  • Install Fedora Core 4 on each system (one, two)
  • Install the GFS modules:
  • yum install -y GFS GFS-kernel magma-plugins fence dlm gulm cman
  • Disable the firewall, or open the following ports between the nodes (a sample iptables sketch follows this list):
  • UDP 6809
  • TCP 50008
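
If you prefer to keep the firewall up, the two ports could be opened with iptables, for example:

iptables -I INPUT -p udp --dport 6809 -j ACCEPT
iptables -I INPUT -p tcp --dport 50008 -j ACCEPT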

3. Set up the cluster

  • Edit /etc/cluster/cluster.conf on both systems

Two node only configuration

<cluster name="example" config_version="1">

<cman two_node="1" expected_votes="1">

</cman>

<clusternodes>

<clusternode name="one" votes="1">

<fence>

<method name="single">

<device name="human" ipaddr="192.168.2.1"/>

</method>

</fence>

</clusternode>

<clusternode name="two" votes="1">

<fence>

<method name="single">

<device name="human" ipaddr="192.168.2.2"/>

</method>

</fence>

</clusternode>

</clusternodes>

<fencedevices>

<fencedevice name="human" agent="fence_manual"/>

</fencedevices>

</cluster>

Three or more nodes configuration

<cluster name="example" config_version="1">