Veritas Cluster File System (CFS)

CFS allows the same file system to be simultaneously mounted on multiple nodes in the cluster.

CFS uses a master/slave architecture. Any node can initiate an operation to create, delete, or resize data, but the master node carries out the actual operation. CFS caches metadata in memory, typically in the buffer cache or the vnode cache, and a distributed locking mechanism called GLM (Group Lock Manager) keeps metadata and caches coherent across the nodes.
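Once CVM is configured and running, you can see which node currently holds the master role with vxdctl, a command used throughout this article:

# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA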

The examples here are:

1. Based on VCS 5.x, but should also work on 4.x.
2. Performed on a new 4-node cluster with no resources defined.
3. Creating disk groups and volumes that will be shared across all nodes.

Before you configure CFS

1. Make sure you have an established cluster that is up and running properly.
2. Make sure these packages are installed on all nodes:

VRTScavf - Veritas CFS and CVM agents by Symantec
VRTSglm - Veritas Group Lock Manager (GLM) by Symantec

3. Make sure a license for Veritas CFS is installed on all nodes.
4. Make sure the vxfen (I/O fencing) driver is active on all nodes, even if it is running in disabled mode. A few quick checks are shown below.
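Here is a quick way to verify these prerequisites; this is a sketch assuming Solaris, so on Linux swap pkginfo for rpm -q. pkginfo confirms the packages, vxlicrep lists the installed Veritas licenses, and vxfenadm -d shows the current fencing mode:

# pkginfo -l VRTScavf VRTSglm
# vxlicrep
# vxfenadm -d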

Check the status of the cluster

Here are some ways to check the status of your cluster. In these examples, CVM/CFS are not configured yet.

# cfscluster status
  NODE       CLUSTER MANAGER STATE    CVM STATE
serverA      running                  not-running
serverB      running                  not-running
serverC      running                  not-running
serverD      running                  not-running

Error: V-35-41: Cluster not configured for data sharing application

# vxdctl -c mode
mode: enabled: cluster inactive

# /etc/vx/bin/vxclustadm nidmap
Out of cluster: No mapping information available

# /etc/vx/bin/vxclustadm -v nodestate
state: out of cluster

# hastatus -sum

-- SYSTEM STATE
-- System         State      Frozen

A  serverA        RUNNING    0
A  serverB        RUNNING    0
A  serverC        RUNNING    0
A  serverD        RUNNING    0


Configure the cluster for CFS

During configuration, Veritas picks up the information already set in your cluster configuration and activates CVM on all the nodes.

# cfscluster config

The cluster configuration information as read from cluster
configuration file is as follows.
Cluster : MyCluster
Nodes : serverA serverB serverC serverD


You will now be prompted to enter the information pertaining
to the cluster and the individual nodes.

Specify whether you would like to use GAB messaging or TCP/UDP
messaging. If you choose gab messaging then you will not have
to configure IP addresses. Otherwise you will have to provide
IP addresses for all the nodes in the cluster.

------- Following is the summary of the information: ------
Cluster : MyCluster
Nodes : serverA serverB serverC serverD
Transport : gab
-----------------------------------------------------------


Waiting for the new configuration to be added.

========================================================

Cluster File System Configuration is in progress...
cfscluster: CFS Cluster Configured Successfully


Check the status of the cluster

Now let's check the status of the cluster again. Notice that there is now a new service group, cvm. The cvm group must be online before any clustered filesystem can be brought up on the nodes.

# cfscluster status

Node : serverA
Cluster Manager : running
CVM state : running
No mount point registered with cluster configuration


Node : serverB
Cluster Manager : running
CVM state : running
No mount point registered with cluster configuration


Node : serverC
Cluster Manager : running
CVM state : running
No mount point registered with cluster configuration


Node : serverD
Cluster Manager : running
CVM state : running
No mount point registered with cluster configuration

# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA

# /etc/vx/bin/vxclustadm nidmap
Name        CVM Nid    CM Nid    State
serverA     0          0         Joined: Master
serverB     1          1         Joined: Slave
serverC     2          2         Joined: Slave
serverD     3          3         Joined: Slave

# /etc/vx/bin/vxclustadm -v nodestate
state: cluster member
nodeId=0
masterId=1
neighborId=1
members=0xf
joiners=0x0
leavers=0x0
reconfig_seqnum=0xf0a810
vxfen=off

# hastatus -sum

-- SYSTEM STATE
-- System         State      Frozen

A  serverA        RUNNING    0
A  serverB        RUNNING    0
A  serverC        RUNNING    0
A  serverD        RUNNING    0

-- GROUP STATE
-- Group    System     Probed    AutoDisabled    State

B  cvm      serverA    Y         N               ONLINE
B  cvm      serverB    Y         N               ONLINE
B  cvm      serverC    Y         N               ONLINE
B  cvm      serverD    Y         N               ONLINE
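If the CVM state shows not-running on a node (a situation that also comes up in the comments at the end of this post), the usual first step is to try to bring the cvm service group online on that node; a minimal sketch:

# hagrp -state cvm
# hagrp -online cvm -sys serverB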





Creating a Shared Disk Group and Volumes/Filesystems

This procedure creates a shared disk group for use in a cluster environment. Disks must be placed in disk groups before they can be used by the Volume Manager.

When you place a disk under Volume Manager control, the disk is initialized. Initialization destroys any existing data on the disk.

Before you begin, make sure the disks you add to the shared disk group are directly attached to all the cluster nodes.
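A quick sanity check is to list the disks on every node and confirm that the same devices are visible everywhere; EMC0_1 and EMC0_2 are the example disks used in this article:

# vxdisk -o alldgs list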

First, make sure you are on the master node:

serverA # vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA


Initialize the disks you want to use. You may optionally specify the disk format, as shown here with cdsdisk.

serverA # vxdisksetup -if EMC0_1 format=cdsdisk
serverA # vxdisksetup -if EMC0_2 format=cdsdisk


Create a shared disk group with the disks you just initialized.

serverA # vxdg -s init mysharedg mysharedg01=EMC0_1 mysharedg02=EMC0_2

serverA # vxdg list
NAME           STATE                 ID
mysharedg      enabled,shared,cds    1231954112.163.serverA


Now let's add that new disk group to our cluster configuration, giving all nodes in the cluster the shared-write (sw) activation mode.

serverA # cfsdgadm add mysharedg all=sw
Disk Group is being added to cluster configuration...


Verify that the cluster configuration has been updated.

serverA # grep mysharedg /etc/VRTSvcs/conf/config/main.cf
ActivationMode @serverA = { mysharedg = sw }
ActivationMode @serverB = { mysharedg = sw }
ActivationMode @serverC = { mysharedg = sw }
ActivationMode @serverD = { mysharedg = sw }

serverA # cfsdgadm display
Node Name : serverA
DISK GROUP ACTIVATION MODE
mysharedg sw

Node Name : serverB
DISK GROUP ACTIVATION MODE
mysharedg sw

Node Name : serverC
DISK GROUP ACTIVATION MODE
mysharedg sw

Node Name : serverD
DISK GROUP ACTIVATION MODE
mysharedg sw


We can now create volumes and filesystems within the shared disk group.

serverA # vxassist -g mysharedg make mysharevol1 100g
serverA # vxassist -g mysharedg make mysharevol2 100g

serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol1
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol2
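Note that -F is the Solaris/HP-UX way of specifying the filesystem type. As one of the comments below points out, on Linux (e.g. RHEL) mkfs misinterprets -F, so use -t there instead:

serverA # mkfs -t vxfs /dev/vx/rdsk/mysharedg/mysharevol1
serverA # mkfs -t vxfs /dev/vx/rdsk/mysharedg/mysharevol2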


Then add these volumes/filesystems to the cluster configuration so they can be mounted on any or all of the nodes. The mount points will be created automatically.

serverA # cfsmntadm add mysharedg mysharevol1 /mountpoint1
Mount Point is being added...
/mountpoint1 added to the cluster-configuration

serverA # cfsmntadm add mysharedg mysharevol2 /mountpoint2
Mount Point is being added...
/mountpoint2 added to the cluster-configuration
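As a couple of comments below note, some versions of cfsmntadm expect the node mount options to be given explicitly at the end of the command (both forms are listed in the cfsmntadm usage help):

serverA # cfsmntadm add mysharedg mysharevol1 /mountpoint1 all=rw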


Display the CFS mount configurations in the cluster.

serverA # cfsmntadm display -v
Cluster Configuration for Node: serverA
MOUNT POINT     TYPE      SHARED VOLUME    DISK GROUP    STATUS         MOUNT OPTIONS
/mountpoint1    Regular   mysharevol1      mysharedg     NOT MOUNTED    crw
/mountpoint2    Regular   mysharevol2      mysharedg     NOT MOUNTED    crw



That's it. Check your cluster configuration and try to bring the filesystems ONLINE on your nodes.

serverA # hastatus -sum

-- SYSTEM STATE
-- System         State      Frozen

A  serverA        RUNNING    0
A  serverB        RUNNING    0
A  serverC        RUNNING    0
A  serverD        RUNNING    0

-- GROUP STATE
-- Group                        System     Probed    AutoDisabled    State

B  cvm                          serverA    Y         N               ONLINE
B  cvm                          serverB    Y         N               ONLINE
B  cvm                          serverC    Y         N               ONLINE
B  cvm                          serverD    Y         N               ONLINE
B  vrts_vea_cfs_int_cfsmount1   serverA    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1   serverB    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1   serverC    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1   serverD    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2   serverA    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2   serverB    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2   serverC    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2   serverD    Y         N               OFFLINE
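To mount them, you can bring the corresponding service groups online with hagrp, or simply use cfsmount, which mounts the filesystem on all configured nodes (pass node names after the mount point to limit it):

serverA # cfsmount /mountpoint1
serverA # cfsmount /mountpoint2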


Each volume gets its own service group, which looks really ugly, so you may want to modify your main.cf file and group them. Be creative!
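If you do consolidate, here is a minimal sketch of what such a group might look like in main.cf. The group name cfs_mounts is made up for illustration, and the groups generated by cfsmntadm normally also contain CVMVolDg resources and resource-level dependencies, so treat this as a sketch rather than a drop-in configuration:

group cfs_mounts (
    SystemList = { serverA = 0, serverB = 1, serverC = 2, serverD = 3 }
    AutoStartList = { serverA, serverB, serverC, serverD }
    Parallel = 1
    )

    CFSMount mnt1 (
        MountPoint = "/mountpoint1"
        BlockDevice = "/dev/vx/dsk/mysharedg/mysharevol1"
        )

    CFSMount mnt2 (
        MountPoint = "/mountpoint2"
        BlockDevice = "/dev/vx/dsk/mysharedg/mysharevol2"
        )

    requires group cvm online local firm

Edit main.cf only with VCS stopped (hastop -all -force keeps the applications up), or make the equivalent changes online with the ha commands while the configuration is writable (haconf -makerw ... haconf -dump -makero).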



Good Luck!

10 comments:

Unknown said...

This page saved me hours of trying to figure out how to implement CFS.

Our setup was 4.1 on RHEL.

If you don't run fencing, then to load the vxfen driver in disabled mode do: echo "vxfen_mode=disabled" > /etc/vxfenmode, then run the vxfen start script in /etc/rc3.d.
Also, on 4.1 the options for cfsmntadm need to have all nodenames='mount opt' at the end of the command string.

Thanks - DJ

April 27, 2009 at 1:53 AM
Paul said...

Thanks for posting this, it was a lot easier to follow than trying to find the correct PDF doc... :)

One note - I could not add the shared volume without adding all=rw at the end of the cfsmntadm add command (both forms, with and without, are listed in the cfsmntadm usage help).

May 11, 2009 at 1:33 PM
Unknown said...

This is really an excellent blog. I was actually trying to collect info about how CFS works... this doc is really good and saved me a lot of time in understanding the configuration of CFS.

May 18, 2009 at 4:02 AM
Anonymous said...

Fantastic howto, thanks. Very clear, and it covered everything.

July 26, 2010 at 8:08 AM
JaganR said...

Works great. Very useful info. However, the mkfs command returned an error on RHEL 6.1:

mkfs -F vxfs /dev/vx/rdsk/cfsvol1/vol1

mke2fs 1.41.12 (17-May-2010)
mkfs.ext2: invalid blocks count - /dev/vx/rdsk/cfsvol1/vol1

Use the command below instead.

mkfs -t vxfs /dev/vx/rdsk/cfsvol1/vol1

February 7, 2012 at 5:19 PM
DM said...

One small query: when we use cfsmntadm, will it also automatically set the dependencies, or do we need to set the dependency and Critical = 1 later?

- Dheeraj

August 27, 2012 at 7:25 AM
shafiqueahmadsiddiqui said...

What if I want to add a new filesystem to the existing service group named "cvm",
rather than to a service group named vrts_vea_cfs_int_cfsmount1?

Kindly suggest.

November 17, 2013 at 10:49 AM
Dan said...

Thank you very much for your help, it saved a lot of time for me.

One edit: the cfsmntadm command needs all=rw at the end, and then it won't give a usage error.

December 10, 2013 at 3:00 AM
Unknown said...

Hi

I am getting

Cluster Manager : running
CVM state : not-running

CVM is not running and vxdctl mode is inactive. How do I enable it?

[root@test1 ~]# vxdctl -c mode
mode: enabled: cluster inactive

August 4, 2014 at 8:38 AM
Arocki said...

Very much helpful...

Much appreciated.

September 12, 2014 at 11:39 AM