Tuesday, December 18, 2007 at 3:20 PM
The output of the prtpicl command can be used to accurately determine the make and model of an HBA.
The QLogic HBAs have a PCI vendor ID of 1077. Search through the output of prtpicl -v for the number 1077. A section similar to the following is displayed:

QLGC,qla (scsi, 44000003ac)
 :DeviceID 0x4
 :UnitAddress 4
 :vendor-id 0x1077
 :device-id 0x2300
 :revision-id 0x1
 :subsystem-vendor-id 0x1077
 :subsystem-id 0x9
 :min-grant 0x40
 :max-latency 0
 :cache-line-size 0x10
 :latency-timer 0x40
The subsystem-id value determines the model of the HBA. Reference the following chart:
Vendor | HBA model | Vendor ID | Device ID | Subsys Vendor ID | Subsys Device ID
QLogic | QCP2340 | 1077 | 2312 | 1077 | 109
QLogic | QLA200 | 1077 | 6312 | 1077 | 119
QLogic | QLA210 | 1077 | 6322 | 1077 | 12F
QLogic | QLA2300/QLA2310 | 1077 | 2310 | 1077 | 9
QLogic | QLA2340 | 1077 | 2312 | 1077 | 100
QLogic | QLA2342 | 1077 | 2312 | 1077 | 101
QLogic | QLA2344 | 1077 | 2312 | 1077 | 102
QLogic | QLE2440 | 1077 | 2422 | 1077 | 145
QLogic | QLA2460 | 1077 | 2422 | 1077 | 133
QLogic | QLA2462 | 1077 | 2422 | 1077 | 134
QLogic | QLE2360 | 1077 | 2432 | 1077 | 117
QLogic | QLE2362 | 1077 | 2432 | 1077 | 118
QLogic | QLE2440 | 1077 | 2432 | 1077 | 147
QLogic | QLE2460 | 1077 | 2432 | 1077 | 137
QLogic | QLE2462 | 1077 | 2432 | 1077 | 138
QLogic | QSB2340 | 1077 | 2312 | 1077 | 104
QLogic | QSB2342 | 1077 | 2312 | 1077 | 105
Sun | SG-XPCI1FC-QLC | 1077 | 6322 | 1077 | 132
Sun | 6799A | 1077 | 2200A | 1077 | 4082
Sun | SG-XPCI1FC-QF2/x6767A | 1077 | 2310 | 1077 | 106
Sun | SG-XPCI2FC-QF2/x6768A | 1077 | 2312 | 1077 | 10A
Sun | X6727A | 1077 | 2200A | 1077 | 4083
Sun | SG-XPCI1FC-QF4 | 1077 | 2422 | 1077 | 140
Sun | SG-XPCI2FC-QF4 | 1077 | 2422 | 1077 | 141
Sun | SG-XPCIE1FC-QF4 | 1077 | 2432 | 1077 | 142
Sun | SG-XPCIE2FC-QF4 | 1077 | 2432 | 1077 | 143
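To pull just these identifying properties out of the verbose output, a simple filter can help (a sketch; the pattern assumes the colon-prefixed property names shown above):

# prtpicl -v | egrep ':vendor-id|:device-id|:subsystem-vendor-id|:subsystem-id'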
Posted by JAUGHN | Labels: HBA
Tuesday, December 4, 2007 at 1:49 PM
Details:
Here's an actual server I worked on, adding NIC and IP resources to an existing service group.
# haconf -makerw
# hares -add vvrnic NIC db2inst_grp
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
# hares -modify vvrnic Device ce3
# hares -modify vvrnic NetworkType ether
# hares -add vvrip IP db2inst_grp
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
# hares -modify vvrip Device ce3
# hares -modify vvrip Address "10.67.196.191"
# hares -modify vvrip NetMask "255.255.254.0"
# hares -link vvrip vvrnic
# hagrp -enableresources db2inst_grp
# hares -online vvrip -sys server620
# haconf -dump -makero
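To confirm the new resources actually came online, a quick status check helps (both are standard VCS commands):

# hastatus -sum
# hares -state vvrip -sys server620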
Posted by JAUGHN | Labels: Veritas Cluster Server
How to create a file system using VERITAS Volume Manager and place it under VERITAS Cluster Server control
Details:
The following is the procedure to create a volume and file system and put them under VERITAS Cluster Server (VCS) control:
1. Create a disk group
2. Create a mount point and file system
3. Deport the disk group
4. Create a service group
Add the following resources and modify their attributes:

Resource        Attributes to modify
1. DiskGroup    DiskGroup (the disk group name)
2. Mount        BlockDevice, FSType, MountPoint
Create a dependency between the following resources:
1. Mount and disk group
Enable all resources in this service group.
The following example shows how to create a RAID-5 volume with a VxFS file system and put it under VCS control.
Method 1 - Using the command line
1. Create a disk group using Volume Manager with a minimum of 4 disks:
# vxdg init datadg disk01=c1t1d0s2 disk02=c1t2d0s2 disk03=c1t3d0s2 disk04=c1t4d0s2
# vxassist -g datadg make vol01 2g layout=raid5
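Optionally, verify the layout of the new volume before continuing; vxprint ships with Volume Manager:

# vxprint -g datadg -ht vol01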
2. Create a mount point for this volume:
# mkdir /vol01
3. Create a file system on this volume:
# mkfs -F vxfs /dev/vx/rdsk/datadg/vol01
4. Deport this disk group:
# vxdg deport datadg
5. Create a service group:
# haconf -makerw
# hagrp -add newgroup
# hagrp -modify newgroup SystemList <sysa> 0 <sysb> 1
# hagrp -modify newgroup AutoStartList <sysa>
6. Create a disk group resource and modify its attributes:
# hares -add data_dg DiskGroup newgroup
# hares -modify data_dg DiskGroup datadg
7. Create a mount resource and modify its attributes:
# hares -add vol01_mnt Mount newgroup
# hares -modify vol01_mnt BlockDevice /dev/vx/dsk/datadg/vol01
# hares -modify vol01_mnt FSType vxfs
# hares -modify vol01_mnt MountPoint /vol01
# hares -modify vol01_mnt FsckOpt %-y
8. Link the mount resource to the disk group resource:
# hares -link vol01_mnt data_dg
9. Enable the resources and close the configuration:
# hagrp -enableresources newgroup
# haconf -dump -makero
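With the configuration saved, the group can be brought online on the first node (substitute your real host name for the sysa placeholder used in step 5):

# hagrp -online newgroup -sys sysa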
Method 2 - Editing /etc/VRTSvcs/conf/config/main.cf
# hastop -all
# cd /etc/VRTSvcs/conf/config
# haconf -makerw
# vi main.cf
Add the following lines to the end of this file:
group newgroup (
    SystemList = { sysA = 0, sysB = 1 }
    AutoStartList = { sysA }
    )

    DiskGroup data_dg (
        DiskGroup = datadg
        )

    Mount vol01_mnt (
        MountPoint = "/vol01"
        BlockDevice = "/dev/vx/dsk/datadg/vol01"
        FSType = vxfs
        )
vol01_mnt requires data_dg
# haconf -dump -makero
# hastart -local
Check the status of the new service group.
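Two standard VCS commands for this check:

# hastatus -sum
# hagrp -state newgroup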
------------------------------------------------------------------------------------
Here's an actual example.
# umount /backup/pdpd415
# vxdg deport bkupdg
# haconf -makerw
# hares -add bkup_dg DiskGroup pdpd415_grp
# hares -modify bkup_dg DiskGroup bkupdg
# hares -add bkupdg_bkup_mnt Mount pdpd415_grp
# hares -modify bkupdg_bkup_mnt BlockDevice /dev/vx/dsk/bkupdg/bkupvol
# hares -modify bkupdg_bkup_mnt FSType vxfs
# hares -modify bkupdg_bkup_mnt MountPoint /backup/pdpd415
# hares -modify bkupdg_bkup_mnt FsckOpt %-y
# hares -link bkupdg_bkup_mnt bkup_dg
# hagrp -enableresources pdpd415_grp
# hares -online bkup_dg -sys sppwd620
# hares -online bkupdg_bkup_mnt -sys sppwd620
# haconf -dump -makero
Posted by JAUGHN | Labels: Veritas Cluster Server
To verify whether an HBA is connected to a fabric or not:
# /usr/sbin/luxadm -e port
Found path to 4 HBA ports
/devices/pci@1e,600000/SUNW,qlc@3/fp@0,0:devctl      CONNECTED
/devices/pci@1e,600000/SUNW,qlc@3,1/fp@0,0:devctl    NOT CONNECTED
/devices/pci@1e,600000/SUNW,qlc@4/fp@0,0:devctl      CONNECTED
/devices/pci@1e,600000/SUNW,qlc@4,1/fp@0,0:devctl    NOT CONNECTED
Your SAN administrator will ask for the WWNs for Zoning. Here are some steps I use to get that information:
# prtconf -vp | grep wwn
port-wwn: 210000e0.8b1d8d7d
node-wwn: 200000e0.8b1d8d7d
port-wwn: 210100e0.8b3d8d7d
node-wwn: 200000e0.8b3d8d7d
port-wwn: 210000e0.8b1eaeb0
node-wwn: 200000e0.8b1eaeb0
port-wwn: 210100e0.8b3eaeb0
node-wwn: 200000e0.8b3eaeb0
Or you may use fcinfo, if installed.
# fcinfo hba-port
HBA Port WWN: 210000e08b8600c8
        OS Device Name: /dev/cfg/c11
        Manufacturer: QLogic Corp.
        Model: 375-3108-xx
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200000e08b8600c8
HBA Port WWN: 210100e08ba600c8
        OS Device Name: /dev/cfg/c12
        Manufacturer: QLogic Corp.
        Model: 375-3108-xx
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200100e08ba600c8
HBA Port WWN: 210000e08b86a1cc
        OS Device Name: /dev/cfg/c5
        Manufacturer: QLogic Corp.
        Model: 375-3108-xx
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200000e08b86a1cc
HBA Port WWN: 210100e08ba6a1cc
        OS Device Name: /dev/cfg/c6
        Manufacturer: QLogic Corp.
        Model: 375-3108-xx
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200100e08ba6a1cc
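If you only need the port WWNs for the zoning request, filter the fcinfo output:

# fcinfo hba-port | grep 'HBA Port WWN'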
Here are some commands you can use for QLogic Adapters:
# modinfo | grep qlc
 76 7ba9e000  cdff8 282   1  qlc (SunFC Qlogic FCA v20060630-2.16)
# prtdiag | grep qlc
pci  66  PCI5  SUNW,qlc-pci1077,2312 (scsi-+  okay  /ssm@0,0/pci@18,600000/SUNW,qlc@1
pci  66  PCI5  SUNW,qlc-pci1077,2312 (scsi-+  okay  /ssm@0,0/pci@18,600000/SUNW,qlc@1,1
pci  33  PCI2  SUNW,qlc-pci1077,2312 (scsi-+  okay  /ssm@0,0/pci@19,700000/SUNW,qlc@1
pci  33  PCI2  SUNW,qlc-pci1077,2312 (scsi-+  okay  /ssm@0,0/pci@19,700000/SUNW,qlc@1,1
# luxadm qlgc
Found Path to 4 FC100/P, ISP2200, ISP23xx Devices
Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1,1/fp@0,0:devctl
  Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1/fp@0,0:devctl
  Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0:devctl
  Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@18,600000/SUNW,qlc@1/fp@0,0:devctl
  Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Complete
# luxadm -e dump_map /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1,1/fp@0,0:devctl
Pos  Port_ID  Hard_Addr  Port WWN          Node WWN          Type
0    1f0112   0          5006048accab4f8d  5006048accab4f8d  0x0  (Disk device)
1    1f011f   0          5006048accab4e0d  5006048accab4e0d  0x0  (Disk device)
2    1f012e   0          5006048acc7034cd  5006048acc7034cd  0x0  (Disk device)
3    1f0135   0          5006048accb4fc0d  5006048accb4fc0d  0x0  (Disk device)
4    1f02ef   0          50060163306043b6  50060160b06043b6  0x0  (Disk device)
5    1f06ef   0          5006016b306043b6  50060160b06043b6  0x0  (Disk device)
6    1f0bef   0          5006016330604365  50060160b0604365  0x0  (Disk device)
7    1f19ef   0          5006016b30604365  50060160b0604365  0x0  (Disk device)
8    1f0e00   0          210100e08ba6a1cc  200100e08ba6a1cc  0x1f (Unknown Type,Host Bus Adapter)
# prtpicl -v
. . .
SUNW,qlc (scsi-fcp, 7f0000066b)    <--- look up the model number on the QLogic website
 :_fru_parent (7f0000dc86H)
 :DeviceID 0x1
 :UnitAddress 1
 :vendor-id 0x1077
 :device-id 0x2312
 :revision-id 0x2
 :subsystem-vendor-id 0x1077
 :subsystem-id 0x10a
 :min-grant 0x40
 :max-latency 0
 :cache-line-size 0x10
 :latency-timer 0x40
. . .
The subsystem-id value determines the model of the HBA; see the reference table in the December 18 post above.
Configuring NEW LUNs:
spdma501:# format < /dev/null
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b2fca,0
       1. c1t1d0
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b39cf,0
Specify disk (enter its number):
spdma501:# cfgadm -o show_FCP_dev -al
Ap_Id                          Type        Receptacle  Occupant      Condition
c1                             fc-private  connected   configured    unknown
c1::2100000c506b2fca,0         disk        connected   configured    unknown
c1::2100000c506b39cf,0         disk        connected   configured    unknown
c3                             fc-fabric   connected   unconfigured  unknown
c3::50060482ccaae5a3,61        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,62        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,63        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,64        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,65        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,66        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,67        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,68        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,69        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,70        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,71        disk        connected   unconfigured  unknown
c3::50060482ccaae5a3,72        disk        connected   unconfigured  unknown
c4                             fc          connected   unconfigured  unknown
c5                             fc-fabric   connected   unconfigured  unknown
c5::50060482ccaae5bc,61        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,62        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,63        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,64        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,65        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,66        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,67        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,68        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,69        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,70        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,71        disk        connected   unconfigured  unknown
c5::50060482ccaae5bc,72        disk        connected   unconfigured  unknown
c6                             fc          connected   unconfigured  unknown
spdma501:# cfgadm -c configure c3
Nov 16 17:32:25 spdma501 last message repeated 54 times
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48 (ssd2):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,47 (ssd3):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,46 (ssd4):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,45 (ssd5):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,44 (ssd6):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,43 (ssd7):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,42 (ssd8):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,41 (ssd9):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,40 (ssd10):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3f (ssd11):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3e (ssd12):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3d (ssd13):
spdma501:# cfgadm -c configure c5
Nov 16 17:32:55 spdma501 last message repeated 5 times
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,48 (ssd14):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,47 (ssd15):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,46 (ssd16):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,45 (ssd17):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,44 (ssd18):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,43 (ssd19):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,42 (ssd20):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,41 (ssd21):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,40 (ssd22):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3f (ssd23):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3e (ssd24):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3d (ssd25):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
spdma501:# format < /dev/null
Searching for disks...Nov 16 17:33:04 spdma501 last message repeated 1 time
Nov 16 17:33:07 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48 (ssd2):
Nov 16 17:33:07 spdma501 corrupt label - wrong magic number
done
c3t50060482CCAAE5A3d61: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d62: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d63: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d64: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d65: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d66: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d67: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d68: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d69: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d70: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d71: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d72: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd67: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd68: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd69: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd70: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd71: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd72: configured with capacity of 17.04GB
AVAILABLE DISK SELECTIONS:
       0. c1t0d0
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b2fca,0
       1. c1t1d0
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b39cf,0
       2. c3t50060482CCAAE5A3d61
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3d
       3. c3t50060482CCAAE5A3d62
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3e
       4. c3t50060482CCAAE5A3d63
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3f
       5. c3t50060482CCAAE5A3d64
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,40
       6. c3t50060482CCAAE5A3d65
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,41
       7. c3t50060482CCAAE5A3d66
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,42
       8. c3t50060482CCAAE5A3d67
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,43
       9. c3t50060482CCAAE5A3d68
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,44
      10. c3t50060482CCAAE5A3d69
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,45
      11. c3t50060482CCAAE5A3d70
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,46
      12. c3t50060482CCAAE5A3d71
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,47
      13. c3t50060482CCAAE5A3d72
          /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48
      14. c5t50060482CCAAE5BCd67
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,43
      15. c5t50060482CCAAE5BCd68
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,44
      16. c5t50060482CCAAE5BCd69
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,45
      17. c5t50060482CCAAE5BCd70
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,46
      18. c5t50060482CCAAE5BCd71
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,47
      19. c5t50060482CCAAE5BCd72
          /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,48
Specify disk (enter its number):
If you don't see the new LUNs in format, run devfsadm!
# /usr/sbin/devfsadm
Label the new disks:
# cd /tmp
# cat format.cmd
label
quit
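If format.cmd does not exist yet, one way to create it is with a here-document:

# cat > /tmp/format.cmd <<EOF
label
quit
EOF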
# for disk in `format < /dev/null 2> /dev/null | grep "^c" | cut -d: -f1`
do
  format -s -f /tmp/format.cmd $disk
  echo "labeled $disk ....."
done
Posted by JAUGHN | Labels: SAN, Solaris - SAN
Monday, December 3, 2007 at 2:05 PM
This document aims to provide a solid architectural and technical overview of VERITAS Volume Manager's IP replication option, VERITAS Volume Replicator.
INTRODUCTION: REPLICATION AND DISASTER RECOVERY PLANNING
In today’s business environment, reliance on information processing systems continues to grow on an almost daily basis. Information systems, which once aided a company in doing business, have now become the business itself. As companies become more reliant on critical information systems, the potential disruption to the business due to a loss of data becomes even greater.
There are many threats that organizations face today when it comes to the reliability and viability of their data. Logical corruption, or loss of data due to threats such as viruses and software bugs, can be protected against by ensuring there is a viable copy of data available at all times. Performing regularly scheduled backups of the organization's data typically protects against logical types of data loss. Another threat that may result in unavailable data is component failure. While most devices have begun to build in redundancy, there are other technologies, such as application clustering, that can protect against the failure of a component while continuing to keep applications available.
Just as the levels of protection for logical and component failures have grown, so has the reliance on the information systems being protected. Many companies now realize that logical and local protection is no longer enough to guarantee that the organization will continue to be accessible. Downtime can stem from planned events such as complete site maintenance, from unplanned events such as power or cooling loss, from natural disasters such as fire and flooding, or from acts of terrorism or war. The loss of a complete data center facility would so greatly affect an organization's capability to continue to function that protection must be established at the data center level.
Many companies have implemented significant Disaster Recovery (DR) plans to protect against the complete loss of a facility. Complete plans are in place to recover voice and data capability in a remote location. One issue that is common among many DR plans is the information processing recovery plan. For many years companies have been taking regular data backups at the primary data center, then duplicating these tapes on a regular basis for shipment offsite to the DR facility. While a tape-based backup solution is the needed safety net for disaster recovery planning, there is still a need in the IT environment to provide higher levels of protection for critical data. While a tape backup approach may meet the needs of much of the data within an organization, there are many data types that cannot afford the levels of data loss inherent in a tape backup approach.
The first step to understanding which technologies are necessary for particular data types is to understand when and if appropriate technologies are needed. The key measure of disaster recovery technologies is based on recovery point objectives and recovery time objectives.
Recovery Point Objective (RPO) – The point in time to which application data must be recovered in order to resume business transactions.
Recovery Time Objective (RTO) – The maximum elapsed time allowed before the lack of business function severely impacts an organization.
A complete disaster recovery plan is not delivered by any one technology, service or vendor, but rather by a combination of products implemented together to provide the needed RPO and RTO of an application. When analyzing a disaster recovery solution, many components must be implemented in order to guarantee application availability.
The diagram referenced above outlines software technologies that map to a customer's RPO and RTO requirements. The burst in the middle represents a disaster. To the left of the burst is the recovery point objective, with the appropriate software technology based on business needs. For example, if a particular application can afford a day or more of data loss, then a tape backup approach is all that is needed for that application. However, if a day or more of data loss would cause substantial business impact, then replication technologies must be implemented in the IT environment to protect against substantial data loss. To the right of the burst is the recovery time objective. If a business can afford to take a day or more to resume normal business activity, then manual tape restore will satisfy its business needs. Organizations can improve on this RTO by using bare metal restore technologies that dramatically reduce the amount of time it takes to get a server up and running. However, if those technologies do not meet the application's RTO, then clustering technologies must be implemented in the IT environment to protect against substantial downtime.
UNDERSTANDING THE NEED FOR REPLICATION
Replication is a technology designed to maintain a duplicate data set on a completely independent storage system at a different geographical location. Replication differs from tape backup and restore methods because replication is completely automatic and far less labor intensive. In addition, replication technologies can be used to reduce the recovery point objective of critical applications.
Whether motivated by disaster, site failure, or a planned site migration, VERITAS’ replication technologies provide the ability to distribute data for seamless availability across sites. VERITAS Volume Manager provides remote mirroring capabilities natively over Fibre Channel protocols. For organizations that wish to replicate their data natively over a standard IP network, an optional capability of VERITAS Volume Manager called VERITAS Volume Replicator can reliably, efficiently and consistently replicate data to remote locations. VERITAS’ replication technologies provide a robust storage-independent disaster recovery solution when data loss and prolonged downtime cannot be tolerated.
REPLICATION MODES

The two main types of replication are synchronous and asynchronous. Both have their advantages and disadvantages and should be available options for the IT administrator. Each uses a different process to arrive at the same goal, and each deals somewhat differently with network conditions. The performance and effectiveness of both depend ultimately on business requirements such as how soon updates must be reflected at the target location. Performance is strongly determined by the available bandwidth, network latency, the number of participating servers, the amount of data to be replicated and the geographical distance between the hosts.
Synchronous Replication

Synchronous replication ensures that a write update has been posted to the secondary location(s) and the primary location before the write operation is acknowledged as complete at the application level. This way, in the event of a disaster at the primary location, the data recovered at the secondary location will be an exact copy of the data at the primary location. Synchronous replication produces exactly the same data at both the primary and secondary location(s), which means the RPO of applications using synchronous replication is zero. However, since the application transaction must travel to the secondary location(s) and back to the primary location before the application can continue with the next transaction, there will be some application performance impact. Synchronous replication is most effective in metropolitan area networks with application environments that require zero data loss and can afford some application performance impact. For all other applications, asynchronous replication is a viable alternative.
There are many scenarios that could affect the performance of replication in synchronous mode including the amount of write activity on the system, the network pipe connecting the primary and secondary sites, and the distance between the two sites. A good rule of thumb to use is 3ms of latency for every 100 miles of distance between the primary and secondary systems. Most configurations that use synchronous replication have it set to change to asynchronous mode if the network link is lost between the primary and secondary site. This is so that the primary application is not affected by a network outage.
Asynchronous Replication

Asynchronous replication eliminates the potential performance problems of synchronous methods. The secondary site may lag behind the primary site, typically by less than one minute, offering essentially real-time replication without the application performance impact. During asynchronous replication, application updates are written at the primary and queued for forwarding to each secondary location as network bandwidth allows. Unlike synchronous replication, the writing application does not suffer the application performance impact of replication and can function as if replication were not occurring. Asynchronous replication should be used by organizations that can afford minimal data loss but want to eliminate application performance impact, or organizations that would like to replicate data over a wide-area network. It can also be the right choice if the network bandwidth between the two sites is large enough to handle the average amount of data, but insufficient to handle the peak write activity.
Write order fidelity

Whatever mode you select, you should ensure that the data at the secondary site is never corrupted or inconsistent. The last thing you need is a non-recoverable replicated data set at the secondary location at the very moment you need it most. The only way to ensure that the data is recoverable at the secondary location(s) is to ensure that the data arrives in the same order as it was written at the primary location. This is called write order fidelity. Without write order fidelity, no guarantee exists that a secondary will have consistent recoverable data. In a database environment, updates are made to both the log and data spaces of a database management system in a fixed sequence. The log and data space are usually in different volumes, and the data itself can be spread over several additional volumes. A well-designed replication solution needs to consistently safeguard write order fidelity. This may be accomplished by a logical grouping of data volumes so that the order of updates in that group is preserved within and among all secondary copies of these volumes.
Replication solutions running at the hardware level typically lack the ability to maintain write order fidelity when running in asynchronous mode, due to the lack of a persistent queue of writes that have not yet been sent to the secondary. If a user of hardware replication wishes to avoid the application performance impact imposed by synchronous replication, they lose recoverability at the remote site, rendering the remote copy essentially useless. Therefore, maintaining write order fidelity should be an absolute requirement of any replication technology, to ensure the recoverability of data at a remote location.
TECHNICAL REQUIREMENTS OF A REPLICATION SOLUTION
Architecturally, a complete replication solution must provide a copy of all data at the primary and secondary locations, including database files as well as any other necessary binary and control files, and the replication technology must ensure that the data is accurate and recoverable.
The replication solution must be capable of being configured to support a secondary site over any distance. In today's environment, organizations must be allowed to use their current infrastructure, including current data centers, regardless of distance. The replication solution must operate at any distance, whether the data centers are a few kilometers apart or thousands of kilometers apart, without adding undue cost or complexity. This means the replication technology must provide asynchronous support over a long distance without additional high-cost items such as communication converters or additional disk space for staging data. The replication solution must be flexible enough to allow the customer to change the configuration of the data sets that are being replicated. This could include having volumes that are not replicated because they hold temp files or other files that wouldn't be needed in a disaster. There should also be the ability to grow and shrink the volumes with no application or customer downtime. The solution should also allow testing at the remote site in order to validate that the data at the secondary is recoverable.
VERITAS REPLICATION AND REMOTE MIRRORING OVERVIEW
VERITAS replication and remote mirroring technologies can dramatically speed recovery time and eliminate data loss by making current data available immediately at an alternate location. Organizations can replicate or mirror data via a storage area network or over any IP network in order to meet their disaster recovery needs. Unlike proprietary, inflexible hardware approaches, VERITAS' replication and remote mirroring technologies are not dependent on any specific storage hardware platform. For example, replication can occur between storage arrays from the same vendor, regardless of array model or size, or between different storage vendors' arrays. The only requirement is that there are matching Volume Manager volume sizes on each side.
VERITAS’ software-based replication provides a reliable, efficient and cost-effective solution for geographically mirroring data sets. It also has full database management system support, including DB2, Exchange, Oracle, SQL Server and Sybase.
VERITAS VOLUME MANAGER

VERITAS Volume Manager is the industry leader in storage virtualization. It provides an easy-to-use, online storage management tool for heterogeneous enterprise environments. Organizations can extend their storage management functionality with Volume Manager's remote mirroring capability in order to deliver a metropolitan area disaster recovery solution. VERITAS Volume Manager can synchronously mirror data natively over storage protocols such as Fibre Channel, which makes it an ideal solution for disaster recovery within a metropolitan area network. Customers wishing to implement a disaster recovery solution over Fibre Channel can create "just another mirror" of their data over an extended distance, using VERITAS Volume Manager, to be made available should a complete site outage occur. This solution allows the use of different storage arrays at the two sites and is seamless to the storage administrators and users of the data.
VERITAS VOLUME REPLICATOR

For organizations that wish to replicate data natively over an IP network, VERITAS Volume Manager has an optional capability called Volume Replicator. VERITAS Volume Replicator reliably, efficiently and consistently replicates data to remote locations over an IP network for maximum business continuity, removing the need for expensive proprietary network hardware and the need to have exactly the same storage hardware at every site.
Since Volume Replicator (VVR) is just an optional capability to VERITAS Volume Manager (VxVM), VVR allows VxVM volumes on one system to be exactly replicated to identically sized volumes on another system. For example, an Oracle database may use several different volumes for various tablespaces, redo logs, archived redo logs, indexes and other storage. Each component is typically stored in a VxVM volume, or multiple volumes. The database may also use data stored in VERITAS File Systems, created in these volumes for better manageability. VVR can provide an exact duplicate of these volumes to another system, at another site and since the replication occurs at the volume level VVR can replicate data between any storage hardware arrays. VERITAS Volume Replicator can scale to support up to 32 secondary data storage sites and Volume Replicator elegantly handles one-to-one, one-to-many and many-to-one data replication configurations. There are four main components that are added to the VERITAS Volume Manager code base to provide VERITAS Volume Replicator. These four components are replicated volume groups (RVG), storage replicator log (SRL), rLinks and data change maps (DCM).
The diagram referenced above shows the architecture of VERITAS Volume Replicator. VERITAS Volume Replicator is based on the same code base as VERITAS Volume Manager and adds the components described below: the RVG, SRL, RLINK and DCM.
Replicated Volume Groups

VVR extends the concept of a disk group (found in VERITAS Volume Manager) to provide the concept of a replicated volume group (RVG). An RVG is a subset of volumes within a given VxVM disk group configured for replication to one or more secondary systems. An RVG can contain one or more data volumes in the disk group, up to and including all data volumes, but cannot span multiple disk groups. Multiple RVGs can be configured inside one disk group, but not the other way around. Volumes that are associated with an RVG and that contain application data are called replicated data volumes. The concept of an RVG gives the user the ability to pick and choose what data is replicated to the secondary site. Therefore, organizations need only replicate their mission-critical data to a secondary location, which saves money on replication implementations because far less storage is needed at the secondary site. In addition, organizations that pay for bandwidth usage can save on bandwidth costs because they replicate only data that has stringent recovery point objectives. All other data can be protected by simply using a tape backup approach.
The data volumes in the RVG are under the control of an application, such as a database management system, that requires write-order fidelity among the updates to the volumes. Write ordering is strictly maintained within an RVG during replication to ensure that each remote volume is always consistent, both internally and with all other volumes of the group. At the simplest level, VVR exists within the VxVM code base and has the capability to intercept any write destined for a VxVM volume within an RVG and replicate the write, in the correct order, to designated secondaries before the write is passed on to the actual data volumes.
Data is replicated from a primary RVG to a secondary RVG. The primary RVG is in use by an application, while the secondary RVG receives replication and writes to local disk. The concept of primary and secondary is per RVG, not per system. A system can simultaneously be a primary for some RVGs and a secondary for others. The RVG also contains the storage replicator log (SRL) and replication link (RLINK), explained in the following sections.
Storage Replicator Log

All data writes destined for volumes configured for replication are first persistently queued in a log called the Storage Replicator Log. VVR implements the SRL at the primary side to store all changes for transmission to the secondary(s). The SRL is a VxVM volume configured as part of an RVG. The SRL gives VVR the ability to associate writes to specific volumes within the replicated configuration in a specific order, maintaining write order fidelity at the secondary. All writes sent to the VxVM volume layer, whether from an application such as a database writing directly to storage, or from an application accessing storage via a file system, are faithfully replicated in application write order to the secondary.
When implementing asynchronous replication, data integrity must be guaranteed at the remote site; that is, write order fidelity must be preserved. This essentially means that writes applied to the secondary storage must occur in exactly the same order in which they were applied at the primary. Asynchronous replication without this capability compromises data consistency at the disaster recovery site and may jeopardize the recoverability of the data. The SRL within Volume Replicator tracks writes in the correct order and guarantees that the data will arrive at the secondary site in that same order, whether operating in synchronous or asynchronous mode.
The SRL can also be used with synchronous or asynchronous replication to protect against network outages. Should a network outage occur when replicating in synchronous mode, Volume Replicator can automatically switch to asynchronous mode and the writes can be stored in the SRL for later transmission to secondaries when the network is restored. This functionality protects the primary location from being impacted should a network outage occur.
Rlinks

An RLINK is a VVR replication link to a secondary RVG. Each RLINK on a primary RVG represents the communication link from the primary RVG to a corresponding secondary RVG, via an IP connection. RLINKs are configured to communicate between specific host names/IP addresses and can use either the TCP or UDP communication protocol between systems.
Data Change Maps

Data Change Maps (DCM) are used to mark sections of volumes on the primary that have changed during extended network outages, in order to minimize the amount of data that must be synchronized to the secondary site once the outage is over.
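To make these components concrete, here is a minimal setup sketch using the vradmin utility. The disk group, RVG, volume and host names (hrdg, hr_rvg, hr_dv01, hr_dv02, hr_srl, seattle, london) are hypothetical, and exact syntax may vary by VVR release:

# vradmin -g hrdg createpri hr_rvg hr_dv01,hr_dv02 hr_srl
# vradmin -g hrdg addsec hr_rvg seattle london
# vradmin -g hrdg -a startrep hr_rvg london

The first command builds the primary RVG from two data volumes and an SRL volume; addsec creates the corresponding secondary RVG and the RLINKs on both hosts; startrep with -a performs the initial over-the-wire synchronization (autosync) and begins replication.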
VOLUME REPLICATOR TECHNICAL DETAILS
OPERATIONAL MODES AND DATA FLOW: SYNCHRONOUS

In synchronous mode, all data writes are first posted to the SRL and then sent to the secondary location(s). The posting of the data to the actual data volumes also happens in this time period. When the secondary location(s) receive the data write, an acknowledgement is sent back to the primary location, and the application's write is then acknowledged as complete. Therefore, synchronous replication should be used in environments that cannot afford any data loss and can afford some application performance impact. Overall performance in synchronous mode is governed by the amount of time it takes to write to the SRL plus the round-trip time to send data to the secondary and receive acknowledgement.
The secondary acknowledges receipt as soon as the full write transaction (as sent by the application on the primary) is received into VVR kernel memory space. This removes actual writing to the secondary data volumes from the application latency. The primary tracks these writes in its SRL until a second acknowledgement is received from the secondary, signalling that the data has been written to physical storage. Both acknowledgements have an associated timeout so if a packet is not acknowledged, VVR will resend that packet.
In order to maximize performance, VVR does not wait for data to be written at the secondary, only received. This improves application performance. However, VVR tracks all acknowledged, uncommitted transactions and can replay any necessary transactions if the secondary were to crash prior to actually writing its data to physical storage.
Synchronous mode has two possible RLINK settings, "fail" and "override". These settings deal with the behavior of VVR when the network connection to a secondary site is lost. With synchronous=fail, writes are returned to the calling application as failed if contact is lost with the secondary. This is used in rare situations where the primary and secondary storage must never differ by even one write. It is typically not used, as it will cause an application failure at the primary side whenever anything happens to the secondary or the interconnecting network. The setting synchronous=override is the most common. It keeps replication in synchronous mode unless contact with the secondary is lost, then shifts to asynchronous mode and begins tracking the writes in the SRL. This allows the primary application to continue to run and provide service to customers while the backup capability is restored.
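For reference, this setting is applied per secondary; with the hypothetical names used earlier, it would look something like:

# vradmin -g hrdg set hr_rvg london synchronous=override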
OPERATIONAL MODES AND DATA FLOW: ASYNCHRONOUS

In asynchronous mode, an application data write is first placed in the SRL, then immediately acknowledged to the calling application at the primary site. Data is then sent as soon as possible to the secondary location, based on available bandwidth. Therefore, asynchronous replication will not have any impact on the performance of the application, but the secondary location may be a few write requests behind the primary site should a disaster occur. Asynchronous replication should be used in environments where minimal data loss (typically measured in seconds for adequately sized networks) can be tolerated but where applications cannot afford the performance impact of synchronous replication. In asynchronous mode, if a disaster does occur at the primary site, the secondary location will be able to recover, but it may come up a few write requests behind the state of the primary systems.
Using Asynchronous Replication to Decouple Application Latency

One of the most compelling features of VVR in real-world environments is its ability to maintain full consistency at the secondary site while operating in asynchronous mode. Maintaining write order fidelity in asynchronous mode allows VVR to truly make use of the performance benefits available from asynchronous replication. By providing a high-bandwidth connection, customers can completely remove the latency penalty from replication and still maintain near up-to-the-second data at the remote site. At the primary site, the application is acknowledged as soon as data is placed in the SRL. The application can continue to function as if replication were not occurring on the host. The data is then sent out almost instantaneously over the network to the secondary site. With adequate bandwidth, the SRL will not fill, so the actual data outstanding between primary and secondary is realistically whatever data is currently on the wire. This means a company can have near up-to-the-second replication, at an arbitrary distance, with no application penalty.
Latency Protection

Latency protection allows administrators to define how far a secondary is allowed to fall behind a primary in asynchronous mode. The latency protection feature allows automatic control of excessive lag between primary and secondary nodes. Latency protection gives the administrator the option to set the maximum number of updates that are allowed in the SRL, referred to as the latency_high_mark. When this number is reached, all update activity is delayed until the update backlog has fallen to a preset level, the latency_low_mark. Latency protection ensures that the number of recent updates that could be lost in a disaster does not exceed a predetermined maximum. Latency protection is typically used to prevent the secondary from falling too far behind the primary, in order to meet the recovery point objectives of the organization.
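Latency protection is also set per RLINK; a sketch using the hypothetical names from above (the mark values are illustrative only):

# vradmin -g hrdg set hr_rvg london latencyprot=override
# vradmin -g hrdg set hr_rvg london latency_high_mark=10000
# vradmin -g hrdg set hr_rvg london latency_low_mark=9950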
No Distance Limitations

Because VVR is truly unique in its ability to replicate data either synchronously or asynchronously over any standard IP network, there are no distance limitations. This allows organizations to utilize data centers that are already in place, regardless of the distance between the locations.
INITIALIZING SECONDARY SYSTEMS FOR REPLICATION

In order to begin replication of changed blocks, a replication solution must first begin with a known duplicate data set at each site. VVR offers several ways to get a secondary site up and running in order to begin the process of replication.
Empty

The simplest method to begin replication is to start with completely empty systems at each side. This can be done if VVR is installed while initially constructing a data center, prior to production. For VVR to use an empty data set, both sides must be identically empty.
Over the Wire (Autosync)

Over-the-wire initialization is essentially using VVR to move all data from the primary to the secondary over the network connection. Overall this is a very simple process; however, with larger data sets it can take a prohibitively long time, especially if the primary is active while attempting to initialize the secondary. In addition, in situations where organizations are leasing bandwidth, this process can become fairly expensive. The items that must be taken into consideration are:
• The network bandwidth
• The amount of write activity on the system
• The size of the SRL
This is the optimum way of doing the initial synchronization, because it can be repeated at any given time if the solution is designed to allow for this process.
Local Mirroring

Local mirroring is an option for very large data sets. In this method, the data storage array for the secondary site is initially placed at the primary site, and Volume Manager is used to mirror the data between the two storage devices. Once the mirror is complete, the Volume Manager plex is split off and the array is shipped to the secondary site. This method allows large amounts of data to be initialized at SAN speeds, while subsequent data written during the shipping period is spooled to the primary SRL. However, it can be fairly expensive if the array must be shipped over a long distance.
Tape Backup and Restore Initialization

The final option for initialization is through the use of a tape backup and restore. This is a unique feature of VVR and is not an available option for other replication technologies on the market today. It allows huge data sets to be synchronized using tape technology and to immediately begin replication.
Checkpoint initialization is a hot backup of the primary side, with the SRL providing the map of what was changed during the backup. When a checkpoint initialization is started, a check-start pointer is placed in the SRL. A full block-level backup is then taken of the primary volumes. When complete, a check-end pointer is placed in the SRL. The data written between the check-start and check-end pointers represents data that was modified while the backup was taking place. This constitutes a hot backup.
The tapes are then transported to the secondary site and the data is restored to the secondary systems using the tapes. When the tape load is complete, the secondary site is connected to the primary site with a “checkpoint attach” of the rlink. The primary will then forward any data that had been written during the backup (that data between the check-start and check-end). Once this data is written to the secondary, the secondary is an exact duplicate of the primary, at the time the backup completed. At this point the secondary is consistent, and simply out of date. The SRL is then replayed to bring the secondary up to date. Therefore, tape backup and restore initialization is ideal for organizations that wish to perform an initialization of a large data set or for environments where their secondary site is located over longer distances.
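In command terms, the checkpoint workflow looks roughly like this (hypothetical names again; consult the VVR documentation for your release):

# vxrvg -g hrdg -c checkpt1 checkstart hr_rvg
(take the block-level backup of the data volumes)
# vxrvg -g hrdg checkend hr_rvg
(ship and restore the tapes at the secondary, then attach using the checkpoint)
# vradmin -g hrdg -c checkpt1 startrep hr_rvg london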
RECOVERY AFTER PROBLEMS
VVR is very robust in terms of tolerating outages of the network as well as secondary systems. The SRL provides the key to recovering from outages, while maintaining a consistent secondary.
SECONDARY/NETWORK OUTAGE

VVR can be configured to handle network outages. An outage of a secondary system and an outage of the network to the secondary are identical as far as VVR is concerned. When the secondary is no longer available, as evidenced by the loss of the VVR heartbeat on the RLINK, the primary will simply track the data write changes in the SRL. When the secondary is repaired or the network problems are resolved, the SRL will then send all the changes to the secondary location(s). In addition, VVR can be configured to stop operations at the primary site should the network fail. In this case, the primary will not allow writes until network connectivity is re-established or the secondary is available for write activity.
SECONDARY FAILURE

A failure of the secondary would be better defined as a failure of the secondary storage, resulting in data loss on the secondary side.
There are several methods to recover from a secondary loss. The first is to rebuild the secondary storage and re-initialize using one of the methods discussed above. The second method is to take regular backups of the secondary environment using a VVR feature called “Secondary Checkpoints”. Secondary Checkpoints allow a pointer to be placed in the primary SRL to designate a location where a backup was last taken on the secondary. Assuming the primary has a large enough SRL and secondary backups are routinely taken, a failure at the secondary can be repaired by reloading the last backup and rolling the SRL forward from the last secondary checkpoint.
PRIMARY FAILURE

A failure of the primary can be broken into several possible problems. A complete failure of the primary site is handled by promotion of a secondary to a primary, effecting a disaster recovery takeover. This is exactly the scenario VVR was built for.
For primary outages, such as server failure, or server panic, the customer has the choice to wait for the primary to recover, or shift operations to a secondary server or location.
For situations involving actual data loss at the primary, the customer can shift operations to a secondary, or restore data on the primary.
VOLUME REPLICATOR ROLE CHANGES

Role changes are actions that promote a system that was previously a secondary to a primary. This can be due to a complete site outage where the primary site is not available, or simply a role reversal to allow a secondary site to take over operations. For simple failures of a primary or secondary server that is a member of a VCS cluster, VCS can move the primary or secondary to a new server without a role change. For example, imagine a two-node cluster at Site A acting as the VVR primary, and a two-node cluster at Site B acting as the VVR secondary. If a node at Site A dies, the VVR primary will simply move to the second node under VCS control. The same is true of a single system failure at the secondary site: VCS would restart the VVR secondary on the other system. For situations such as a complete failure of Site A, the VVR secondary at Site B can be promoted to a primary and applications started to access the underlying data in read-write mode. This is an example of using VVR to facilitate disaster tolerance for a data center. To automate this entire procedure, VERITAS offers VERITAS Global Cluster Manager with the Disaster Recovery option to monitor and control applications at separate sites connected by replication.
Primary Migration

A migration from primary to secondary systems is a controlled shift of primary responsibility for an RVG. Data is flushed from the existing primary SRL if necessary, and then control is handed to the existing secondary. The original primary is demoted to a secondary, and the original secondary is promoted to a primary. This is a very simple operation carried out with one or two commands, and it allows a rapid shift of the replication primary between sites. There is zero chance of data loss, as all data outstanding at the primary site is sent to the secondary before the migration is allowed to take place.
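With the hypothetical names from earlier, a migration is a single command run on the primary:

# vradmin -g hrdg migrate hr_rvg london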
Secondary Takeover

A secondary takeover is a somewhat less graceful event, in that the secondary is promoted to a primary without a corresponding demotion of the original primary. These types of migrations typically happen in the event of a complete site outage without any prior notification. When a takeover is performed, the secondary is brought up in read-write mode in the exact state it was in at the time of takeover. Any data written by the primary in asynchronous mode and not yet sent to the secondary is not available. After a secondary takeover, the original primary must be re-synchronized to be an exact duplicate of the new primary.
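A takeover is initiated from the secondary, since the primary is presumed gone:

# vradmin -g hrdg takeover hr_rvg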
Returning to the Primary Location

After a takeover has shifted operations to a secondary location, returning to the original primary can be accomplished using easy failback, a feature introduced in VVR 3.2 that allows rapid resynchronization of an old primary after a secondary takeover. This removes the need for a complete over-the-wire synchronization of all the data or a differential-based synchronization.
When a secondary is promoted in a takeover operation, it immediately begins utilizing the Data Change Map (DCM) associated with each volume. This tracks where data has been written on the new primary. When the old primary comes back online and a failback operation is requested, the new primary communicates with the old primary and determines any data blocks that had been changed on the old primary but were not committed on the old secondary (the new primary). These block locations are added to the Data Change Maps on the new primary. This means that any blocks written on the new primary, plus any blocks that were different on the old primary, will be sent from the new primary to the old primary. This results in the old primary being made an exact duplicate of the new primary in a very short time. It also means that any data written on the old primary after the takeover point is permanently overwritten; VVR makes no attempt at "merging" differences between systems.
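Once the old primary is back online and demoted, the failback resynchronization is again a single command (hypothetical names):

# vradmin -g hrdg fbsync hr_rvg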
USING THE SECONDARY SYSTEM
FlashSnap is an option for VERITAS Foundation Suite that allows a complete mirror break-off (snapshot) of a VVR volume to be taken and mounted for operation. The benefit of this option is that it allows organizations to access the data at the secondary site for off-host processing, such as reporting and backups.
USING IN BAND CONTROL MESSAGES TO CONTROL FLASHSNAP

VVR provides an advanced messaging capability to control specific events at a secondary from the primary. It does this with In Band Control (IBC) messages sent from the primary to the secondary. IBC messages are placed in the SRL like any other write traffic and are processed in SRL order. For example, consider a database running at the primary site, with VVR in asynchronous mode. The administrator places the database in hot backup mode to perform an onsite backup. An IBC can be placed in the SRL at this time to signal the secondary when to snap off a mirror, knowing that the IBC will not be received until all data ahead of it in the SRL (right up to the time of shifting to hot backup mode) has been received. As soon as the IBC is entered into the SRL, the database can be taken out of hot backup mode at the primary site. This allows operations at the primary and secondary sites to be coordinated to occur at exactly the same time in terms of data consistency.
VOLUME REPLICATOR IN THE CUSTOMER ENVIRONMENT
EFFECTS OF VOLUME REPLICATOR ON HOST RESOURCES AND APPLICATION PERFORMANCE
VVR typically has very little effect on host CPU and memory resources; the impact has been measured in the 2-5% range. This CPU usage is similar to the impact of running a simple find command in UNIX. VVR also converts all write activity to sequential writes (to the SRL). This is the fastest write pattern in most cases, and some customers have observed an increase in write performance using VVR when compared to not using replication within their environment.
UNDERSTANDING BANDWIDTH NEEDS USING VRADVISOR In order to replicate data to another location, bandwidth must be available. For environments utilizing Fibre Channel connectivity, VERITAS Volume Manager can be used to mirror the data between the two locations. For environments with IP connectivity, VERITAS Volume Replicator should be used. VERITAS Volume Replicator does not require a dedicated network, is resilient to temporary network outages, and includes error-handling capabilities to alert the administrator of critical events.
A very common question is “How much bandwidth do I need?” The answer is that enough bandwidth must be provided to move all write traffic to each secondary site in a given time period. For example, if 10 Gigabytes of data are written in a 24-hour period, then enough bandwidth must be provided to move 10 Gigabytes of data in 24 hours. If the configuration is set to synchronous, attention should be paid to peak write activity, because insufficient network bandwidth at peak times will directly impact application performance.
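As a rough worked example of the sustained-rate calculation above (ignoring protocol overhead and peak bursts): 10 GB per day is 10 x 1024^3 x 8, or about 8.6 x 10^10 bits, and 8.6 x 10^10 bits spread over 86,400 seconds is roughly 1 Mbit/s of sustained bandwidth. Peak write periods must still be sized separately, as noted above.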
The SRL can be used to spool data during periods when write traffic exceeds replication bandwidth. To assist in the proper determination of bandwidth and SRL size, the VERITAS Volume Replicator Advisor (VRAdvisor) utility is available. VRAdvisor assists the user in determining the optimal size of the SRL by taking into account the rate of data writes over a given time period, network bandwidth, and different outage durations.
Collection of Data VRAdvisor can collect sample data write statistics based on various parameters. The data is collected over a period of time into a file that you specify. If VxVM is installed, the vxstat command is used for collecting data; otherwise, the iostat command is used. After the data change rate has been collected, the data can be analyzed to determine the optimal size of the SRL based on the parameters you supplied. The result is the optimal SRL size for immediate requirements; you can also size the SRL for future requirements, changes, and other factors you know of that may affect it.
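A hedged sketch of collecting the raw write statistics by hand on a VxVM host, assuming the db2dg diskgroup used later in this document; the interval and count values are illustrative (one sample per minute for 24 hours), and the output file is then fed to VRAdvisor for analysis:

sunsrv01:# vxstat -g db2dg -i 60 -c 1440 > /var/tmp/db2dg_writes.out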
Analysis Results The analysis results are generated based on the inputs you specified and are displayed as two graphs. The x-axis of both graphs consists of the data write duration values based on the information collected on the system. The y-axis of the top graph shows the SRL fill rate over the data collection period; the peak SRL fill-up size is indicated against a maximum outage window, which is displayed in yellow and indicates a worst-case scenario. The second graph shows the different write rates at periods throughout the collection period.
VOLUME REPLICATOR INTEGRATION WITH OTHER VERITAS PRODUCTS Volume Replicator is a component of an overall high availability and disaster recovery solution. It fits very well into an overall high availability infrastructure provided by VERITAS Cluster Server and VERITAS Global Cluster Manager.
VERITAS CLUSTER SERVER AND VERITAS GLOBAL CLUSTER MANAGER The full integration of VERITAS Cluster Server, VERITAS Volume Replicator and VERITAS Global Cluster Manager provides a powerful disaster recovery solution. VERITAS Cluster Server handles local availability issues. VERITAS Volume Replicator replicates critical data to a remote site, and VERITAS Global Cluster Manager monitors and manages the clusters at each site. In the event of a site failure or complete failure of applications at the primary site, Global Cluster Manager will control the shift of replication roles to the secondary site, bring up critical applications, and redirect client traffic with a single command or mouse click.
VERITAS DATABASE EDITIONS VERITAS Database Editions offers raw device performance with the manageability of the VERITAS File System, online administration of storage, and the flexibility of storage hardware independence. Database Editions can be used within the local environment to maximize the performance of the database while Volume Replicator can be used to replicate data to the secondary site for disaster recovery protection.
VERITAS NETBACKUP In order to have a complete disaster recovery solution, every environment should be backed up on a regular basis using VERITAS NetBackup. By combining VERITAS NetBackup and VERITAS Volume Replicator, organizations can be assured that their data is protected.
VERITAS REPLICATION CAPABILITIES SUMMARY The following sections summarize the capabilities of VVR.
STORAGE ARCHITECTURE INDEPENDENCE VERITAS Volume Replicator replicates between any major hardware platforms, eliminating vendor-specific storage limitations. For example, customers can replicate between a single vendor's identical arrays, between a single vendor's dissimilar arrays, or between two different vendors' arrays. The only architectural restriction is that volume sizes must match at each end: in order to replicate a 200 Gigabyte volume, the customer must create a 200 Gigabyte volume on the primary and on each secondary. This allows the customer to build a secondary site using older or less expensive hardware, and to replicate only critical data, chosen on a per-volume basis, saving on bandwidth and storage costs. In addition, VVR provides the flexibility to change the storage configuration as data grows, shrinks, changes, or is moved, without impacting replication.
MAINTAINING WRITE ORDER FIDELITY Volume Replicator maintains write order fidelity, even in asynchronous mode to guarantee data consistency on the secondary. This is critical to providing a complete, consistent copy of data at a remote site, without requiring synchronous replication.
HIGH-PERFORMING REPLICATION TECHNOLOGIES Volume Manager and Volume Replicator are high-performing replication technologies. Third-party performance testing has shown that VERITAS is up to 72% faster than the leading hardware replication vendor on the market today.
NATIVE REPLICATION OVER FIBRE CHANNEL AND IP NETWORKS VERITAS technologies can replicate over Fibre Channel and IP networks natively. Volume Manager can replicate over Fibre Channel and Volume Replicator can replicate over IP networks without the need for any expensive specialized networking devices. In addition, native replication over IP allows organizations to replicate data over any distance.
SCALABLE Volume Replicator can scale to up to 31 separate locations for many to one and one to many replication scenarios.
INITIALIZATION OPTIONS Volume Replicator can assist in getting your disaster recovery site up and running quickly. There are three initialization options. The first option is to send all of the data over the wire. The second option is to mirror locally between arrays and ship an array to the disaster recovery site. The third option, unique to Volume Replicator, combines replication and backup: the organization performs a normal backup at the primary site and inserts a checkpoint, the tapes are sent to the disaster recovery site and restored, and only the data that has changed since the checkpoint was inserted is sent over the wire. This allows organizations to get up and running without sending large datasets over the wire or paying expensive shipping costs for storage arrays. All three operations can be performed completely online.
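A hedged sketch of the checkpoint-based option, using the object names from the implementation later in this document; the checkpoint name ckpt1 is hypothetical, and the commands assume the RDS has already been created:

Bracket the block-level backup at the primary with a checkpoint:
sunsrv01:# vxrvg -g db2dg -c ckpt1 checkstart db2dg_rvg
(run the backup of the data volumes, then:)
sunsrv01:# vxrvg -g db2dg checkend db2dg_rvg

After the tapes are restored at the DR site, start replication from the checkpoint so only post-checkpoint changes cross the wire:
sunsrv01:# vradmin -g db2dg -c ckpt1 startrep db2dg_rvg db2instdr-vipvr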
SUMMARY VERITAS Volume Replicator can effectively and efficiently replicate data to another location in order to provide protection from disaster scenarios. VERITAS Volume Replicator allows organizations to replicate their data between any storage devices, over a standard IP connection, and across any distance for the ultimate in disaster recovery protection.
Saturday, December 1, 2007
at
3:02 PM
|
This entry is mainly for testing the different parameters in this template.
Sample text below....
Lesson 1: (using class:post_heading1)
A man is getting into the shower (post_text) just as his wife is finishing up her shower, when the doorbell rings.
The wife quickly wraps herself in a towel and runs downstairs.
When she opens the door, there stands Bob, the next-door neighbour.
Before she says a word, Bob says, "I'll give you $800 to drop that towel."
After thinking for a moment, the woman drops her towel and stands naked in front of Bob. After a few seconds, Bob hands her $800 and leaves.
The woman wraps back up in the towel and goes back upstairs. When she gets to the bathroom, her husband asks, "Who was that?" "It was Bob the next door neighbour," she replies.
"Great," the husband says, "did he say anything about the $800 he owes me?"
Friday, November 2, 2007
at
4:36 PM
|
Here’s one of the VVR projects I designed and implemented. I documented the whole process so I could use it as a reference for future projects.
The document has a detailed implementation (with screenshots). The changes were done on both primary and secondary sites. Downtime was not necessary but the customer decided to shut down the apps and database.
This implementation involves VCS, CCA, and snapshots. I will not discuss CCA in this document.
Project Description
As part of the disaster recovery project, Veritas Volume Replicator (VVR) will be implemented on the EMM servers to handle the replication between the Phoenix servers and the DR servers in Minneapolis. VVR will run on top of Veritas Cluster Server (VCS). VCS is already installed and running on the servers. Replicated volumes in the secondary site will be configured to have snapshot volumes.
Server Configurations
E3 (Clustered)
- SUNSRV01 (Solaris 8, SunFire V440)
- SUNSRV02 (Solaris 8, SunFire V440)
DR (Clustered)
- SUNSRV01-DR (Solaris 8, SunFire V440)
- SUNSRV02-DR (Solaris 8, SunFire V440)
Network Configurations
IP Address
- SUNSRV01
- 10.10.231.130 sunsrv01 Host IP
- 10.11.196.192 sunsrv01-DRN E3/DR Link IP
- SUNSRV02
- 10.10.231.131 sunsrv02 Host IP
- 10.11.196.193 sunsrv02-DRN E3/DR Link IP
- SUNSRV01-DR
- 10.10.231.130 sunsrv01-dr Host IP
- 10.12.94.192 sunsrv01dr-DRN E3/DR Link IP
- SUNSRV02-DR
- 10.10.231.131 sunsrv02-dr Host IP
- 10.12.94.193 sunsrv02dr-DRN E3/DR Link IP
VVR Config
- RVG db2dg_rvg
- SRL db2dg_srl
- RLINKs rlk_db2instdr-vipvr_db2dg_rvg (E3)
rlk_db2inste3-vipvr_db2dg_rvg (DR)
Virtual IPs
- SUNSRV01/SUNSRV02
- Application 10.10.231.191
- E3/DR Link 10.11.196.191
- SUNSRV01-DR/SUNSRV02-DR
- Application 10.10.231.191
- E3/DR Link 10.12.94.191
VCS Heartbeats
- SUNSRV01 ce0/ce4
- SUNSRV02 ce0/ce4
- SUNSRV01-DR ce4
- SUNSRV02-DR ce4
Essential Terminology
Data Change Map (DCM) - An object containing a bitmap that can be optionally associated with a data volume on the Primary RVG. The bits represent regions of data that are different between the Primary and the Secondary. DCMs are used to mark sections of volumes on the primary that have changed during extended network outages in order to minimize the amount of data that must be synchronized to the secondary site during the outage.
Disk Change Object (DCO) - DCO volumes are used by Volume Manager FastResync (FR). FR is used to quickly resynchronize mirrored volumes that have been temporarily split and rejoined. FR works by copying only changes to the newly reattached volume using FR logging. This process reduces the time required to rejoin a split mirror, and requires less processing power than full mirror resynchronization without logging.
Data Volume - Volumes that are associated with an RVG and contain application data.
Primary Node - The node on which the primary RVG resides.
Replication Link (RLINK) - RLINKs represent the communication link to the counterpart of the RVG on another node. At the Primary node, a replicated volume object has one RLINK for each of its network mirrors. On the Secondary node, a replicated volume has a single RLINK object that links it to its Primary. Each RLINK on a Primary RVG represents the communication link from the Primary RVG to a corresponding Secondary RVG, via an IP connection.
Replicated Data Set (RDS) - The group of the RVG on a Primary and its corresponding Secondary hosts.
Replicated Volume Group (RVG) - A component of VVR that is made up of a set of data volumes, one or more RLINKs and an SRL. An RVG is a subset of volumes within a given VxVM disk group configured for replication to one or more secondary systems. Data is replicated from a primary RVG to a secondary RVG. The primary RVG is in use by an application, while the secondary RVG receives replication and writes to local disk. The concept of primary and secondary is per RVG, not per system. A system can simultaneously be a primary RVG for some RVGs and secondary RVG for others. The RVG also contains the storage replicator log (SRL) and replication link (RLINK).
Storage Replication Log (SRL) - Writes to the Primary RVG are saved in the SRL on the Primary side. The SRL is used to aid in recovery, as well as to buffer writes when the system operates in asynchronous mode. Each write to a data volume in the RVG generates two write requests: one to the Primary SRL and one to the data volume itself.
Secondary Node - The node to which the primary RVG replicates.
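On a live system, each of these object types can be inspected with vxprint record selectors. A minimal sketch, assuming the db2dg diskgroup configured below:

sunsrv01:# vxprint -g db2dg -Vl (long listing of RVGs)
sunsrv01:# vxprint -g db2dg -Pl (long listing of RLINKs)
sunsrv01:# vxprint -g db2dg -vl (long listing of volumes)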
Volumes to be Replicated
These are the volumes that will be configured for replication. These volumes belong to the same diskgroup db2dg.
v bcp - ENABLED ACTIVE 585105408 SELECT - fsgen
v db - ENABLED ACTIVE 1172343808 SELECT - fsgen
v dba - ENABLED ACTIVE 98566144 SELECT - fsgen
v db2 - ENABLED ACTIVE 8388608 SELECT - fsgen
v lg1 - ENABLED ACTIVE 396361728 SELECT - fsgen
v tp01 - ENABLED ACTIVE 192937984 SELECT - fsgen
Storage Replication Log (SRL)
All data writes destined for volumes configured for replication are first persistently queued in a log called the Storage Replicator Log. VVR implements the SRL at the primary side to store all changes for transmission to the secondary(s). The SRL is a VxVM volume configured as part of an RVG. The SRL gives VVR the ability to associate writes to specific volumes within the replicated configuration in a specific order, maintaining write order fidelity at the secondary. All writes sent to the VxVM volume layer, whether from an application such as a database writing directly to storage or from an application accessing storage via a file system, are faithfully replicated in application write order to the secondary.
v db2dg_srl - ENABLED ACTIVE 856350720 SELECT - fsgen
Data Change Map (DCM)
Data Change Maps (DCM) are used to mark sections of volumes on the primary that have changed during extended network outages in order to minimize the amount of data that must be synchronized to the secondary site during the outage. Only one disk will be used for DCM logs at this time.
In both primary and secondary sites, the same disk device name is used.
GROUP DISK DEVICE TAG OFFSET LENGTH FLAGS
db2dg db2dg95 EMC0_94 EMC0_94 0 35681280 -
Disk Change Object (DCO)
Disk Change Object (DCO) volumes are used by Volume Manager FastResync (FR). FR is used to quickly resynchronize mirrored volumes that have been temporarily split and rejoined. FR works by copying only changes to the newly reattached volume using FR logging. This process reduces the time required to rejoin a split mirror, and requires less processing power than full mirror resynchronization without logging.
GROUP DISK DEVICE TAG OFFSET LENGTH FLAGS
db2dg db2dg95 EMC0_94 EMC0_94 0 35681280 -
Technical Implementation Steps/Procedures
Detailed Implementation
Install VVR License on all nodes for both sites
# /opt/VRTSvlic/sbin/vxlicinst
Install VVR packages on all nodes for both sites
List of Required and Optional Packages for VVR
Required software packages for VVR:
VRTSvlic VERITAS Licensing Utilities.
VRTSvxvm VERITAS Volume Manager and Volume Replicator.
VRTSob VERITAS Enterprise Administrator Service.
VRTSvmpro VERITAS Volume Manager Management Services Provider.
VRTSvrpro VERITAS Volume Replicator Management Services Provider.
VRTSvcsvr VERITAS Cluster Server Agents for VERITAS Volume Replicator.
Optional software packages for VVR:
VRTSjre VERITAS JRE Redistribution.
VRTSweb VERITAS Java Web Server.
VRTSvrw VERITAS Volume Replicator Web Console.
VRTSobgui VERITAS Enterprise Administrator.
VRTSvmdoc VERITAS Volume Manager documentation.
VRTSvrdoc VERITAS Volume Replicator documentation.
VRTSvmman VERITAS Volume Manager manual pages.
VRTSap VERITAS Action Provider.
VRTStep VERITAS Task Execution Provider.
Installing the VVR Packages Using the pkgadd Command
1. Log in as root.
2. Mount from the software repository:
# mount software:/repos /mnt
# cd /mnt/software/os/SunOS/middleware/Veritas4.1/volume_replicator
3. Alternatively, you may run the install script from the same directory; refer to the VVR Installation Guide for more information:
# ./installvvr
4. This document, however, follows the manual process. Change to the package directory:
# cd /mnt/software/os/SunOS/middleware/Veritas4.1/volume_replicator
Note: Install the packages in the order specified below to ensure proper installation.
5. Use the following command to install the required software packages in the specified order. Some of these may have already been installed.
# pkgadd -d . VRTSvlic VRTSvxvm VRTSob VRTSvcsvr
6. Install the following patch:
# patchadd 115209-<latest>
7. Install the following required packages in the specified order:
# pkgadd -d . VRTSvmpro VRTSvrpro
8. Use the following command to install the optional software packages:
# pkgadd -d . VRTSobgui VRTSjre VRTSweb VRTSvrw VRTSvmdoc \ VRTSvrdoc VRTSvmman VRTSap VRTStep
The system prints out a series of status messages as the installation progresses and prompts you for any required information, such as the license key.
Create a Diskgroup and volumes in the secondary site
Initialize all LUNs and assign them to diskgroup db2dg.
sunsrv01-dr:# vxdiskadm
Create volumes with the same sizes as the primary site’s volumes.
sunsrv01-dr:# vxassist -g db2dg make bcp 585105408 layout=concat
sunsrv01-dr:# vxassist -g db2dg make db 1172343808 layout=concat
sunsrv01-dr:# vxassist -g db2dg make dba 98566144 layout=concat
sunsrv01-dr:# vxassist -g db2dg make db2 8388608 layout=concat
sunsrv01-dr:# vxassist -g db2dg make lg1 396361728 layout=concat
sunsrv01-dr:# vxassist -g db2dg make tp01 192937984 layout=concat
Create the SRL volume. Allocate disks that are dedicated to the SRL volume only; no other volumes should use those disks.
sunsrv01-dr:# vxassist -g db2dg make db2dg_srl 856350720 layout=concat \ > db2dg71 db2dg72 db2dg73 db2dg74 db2dg75 db2dg76 db2dg77 db2dg78 \ > db2dg79 db2dg80 db2dg81 db2dg82 db2dg83 db2dg84 db2dg85 db2dg86 \ > db2dg87 db2dg88 db2dg89 db2dg90 db2dg91 db2dg92 db2dg93 db2dg94
Add DCM logs to the volumes. Use the disk that’s been assigned only for logs.
sunsrv01-dr:# vxassist -g db2dg addlog bcp logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog db logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog dba logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog db2 logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog lg1 logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog tp01 logtype=dcm nlog=1 db2dg95
Update the .rdg file on both nodes in the secondary site
The Secondary Node must be given permission to manage the disk group created on the Primary Node. To do this, add the diskgroup ID to /etc/vx/vras/.rdg. The diskgroup ID is the value of dgid in the output of vxprint -l db2dg on the Primary Node.
sunsrv01:# vxprint -l db2dg | grep dgid
info: dgid=1138140445.1393.sunsrv01 noautoimport
sunsrv01-dr:# echo "1138140445.1393.sunsrv01" > /etc/vx/vras/.rdg
sunsrv02-dr:# echo "1138140445.1393.sunsrv01" > /etc/vx/vras/.rdg
Stop all applications (both clustered and non-clustered)
Stop all clustered and non-clustered applications that are using the volumes that will be replicated.
Note: You may leave the applications running while replicating.
# su - db2inst -c "db2stop"
Unmount all filesystems whose underlying volumes will be replicated.
# umount ` mount | grep '/dev/vx/dsk/db2dg/' | awk '{ print $1 }'`
Forcibly stop VCS so that it leaves all running resources online, especially the volumes and the diskgroup
# hastop -all -force
Bring up the virtual IPs on all nodes on both sites
These are the VVR replication IPs (RLINK IPs). They must be unique: one at the primary site and one at the secondary site.
Primary (sunsrv01 or sunsrv02):
ce3:1 - 10.11.196.191 netmask fffffe00 broadcast 10.11.197.255
Secondary (sunsrv01-dr or sunsrv02-dr):
ce5:1 - 10.12.94.191 netmask fffffe00 broadcast 10.12.95.255
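Outside of VCS control, such a logical interface is typically plumbed by hand. A hedged sketch of the Solaris 8 form that creates the ce3:1 instance shown above:

sunsrv01:# ifconfig ce3 addif 10.11.196.191 netmask 255.255.254.0 up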
Create the SRL Volume in the primary site
Allocate disks that are dedicated to the SRL volume only; no other volumes should use those disks.
sunsrv01:# vxassist -g db2dg make db2dg_srl 856350720 layout=concat \ > db2dg71 db2dg72 db2dg73 db2dg74 db2dg75 db2dg76 db2dg77 db2dg78 \ > db2dg79 db2dg80 db2dg81 db2dg82 db2dg83 db2dg84 db2dg85 db2dg86 \ > db2dg87 db2dg88 db2dg89 db2dg90 db2dg91 db2dg92 db2dg93 db2dg94
Add DCM logs to all the primary site volumes that will be replicated
Use the disk that is dedicated for logs only.
sunsrv01:# vxassist -g db2dg addlog bcp logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog db logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog dba logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog db2 logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog lg1 logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog tp01 logtype=dcm nlog=1 db2dg95
Update the .rdg file on both nodes in the primary site
The diskgroup ID is the value of dgid in the output of vxprint -l db2dg on the Secondary Node.
sunsrv02-dr:# vxprint -l db2dg | grep dgid
info: dgid=1182230506.2373.sunsrv01-dr noautoimport
sunsrv01:# echo "1182230506.2373.sunsrv01-dr" > /etc/vx/vras/.rdg
sunsrv02:# echo "1182230506.2373.sunsrv01-dr" > /etc/vx/vras/.rdg
Ensure that VVRTypes.cf exists in /etc/VRTSvcs/conf
This types file must exist on all nodes in the cluster.
sunsrv01:# ls -l /etc/VRTSvcs/conf/VVRTypes.cf
-rw-rw-r-- 1 root sys 1811 Jan 20 2005 /etc/VRTSvcs/conf/VVRTypes.cf
Ensure that other application type definitions, e.g. Db2udbTypes.cf, exist in /etc/VRTSvcs/conf
This types file must exist on all nodes in the cluster.
sunsrv01:# ls -l /etc/VRTSvcs/conf/Db2udbTypes.cf
-rw-rw-r-- 1 root sys 1080 Jun 26 2006 /etc/VRTSvcs/conf/Db2udbTypes.cf
Create the Primary RVG
First, ensure that /usr/sbin/vradmind is running. If not, start it with this command:
sunsrv01:# /etc/init.d/vras-vradmind.sh start
Run this from the primary site.
sunsrv01:# vradmin -g db2dg createpri db2dg_rvg bcp,db,dba,db2,lg1,tp01 \
db2dg_srl
Usage:
vradmin -g <diskgroup> createpri <RVGname> <vol1,vol2...> <SRLname>
Where:
<diskgroup> is the VxVM diskgroup name
<RVGname> is the name for the RVG, usually <diskgroupname>_rvg
<vol1,vol2...> is a comma-separated list of all volumes in the diskgroup to be replicated
<SRLname> is the name of the SRL volume
Create the Secondary RVG
After creating the Primary RVG, add a Secondary RVG using the vradmin addsec command. The VIP hostnames are in the /etc/hosts file.
sunsrv01:# vradmin -g db2dg addsec db2dg_rvg db2inste3-vipvr db2instdr-vipvr
Usage:
vradmin -g <diskgroup> addsec <RVGname> <primaryhost> <secondaryhost>
Where:
<diskgroup> is the VxVM diskgroup name
<RVGname> is the name of the RVG created on the Primary Node
<primaryhost> is the hostname of the Primary Node; this could be a VCS ServiceGroup name
<secondaryhost> is the hostname of the Secondary Node; this could be a VCS ServiceGroup name
Note: The vradmin addsec command performs the following operations:
- Creates and adds a Secondary RVG of the same name as the Primary RVG to the specified RDS on the Secondary host. By default, the Secondary RVG is added to the disk group with the same name as the Primary disk group. Use the -sdg option with the vradmin addsec command to specify a different disk group on the Secondary.
- Automatically adds DCMs to the Primary and Secondary data volumes if they do not have DCMs.
- Associates existing data volumes of the same names and sizes as the Primary data volumes with the Secondary RVG; it also associates an existing volume with the same name as the Primary SRL as the Secondary SRL.
- Creates and associates to the Primary and Secondary RVGs respectively, the Primary and Secondary RLINKs with default RLINK names rlk_remotehost_rvgname.
Configure VCS to integrate VVR objects – Primary Site
There will be at least two VCS service groups: a replication group and an application (DB) group. The application group contains the RVGPrimary resource. The replication group contains the IP, RVG, and DiskGroup resources.
In this particular environment, the VVR service group and the application service group must be online on the same node because the diskgroup is in the VVR service group.
The VVR service group must be started before the application service group; the dependencies are configured in the VCS configuration file.
The VVR service group maintains the synchronization between the primary and secondary sites.
VCS Service Groups:
db2inst_vvr VVR Service Group
db2dgAgent (RVG)
vvrnic (NIC)
vvrip (VIP for VVR communications)
db2dg_dg (Diskgroup)
db2inst_grp Application Service Group
db2dg_rvg_primary (RVG Primary)
db2inst_db (DB2 Database)
db2inst_ce1 (NIC)
db2inst_ip (VIP for the application)
db2inst__mnt (Filesystems)
adsm_db2inst (TSM Backup)
Primary Site VCS Configuration File:
include "types.cf" include "Db2udbTypes.cf" include "VVRTypes.cf"
cluster e3clus177 ( UserNames = { admin = ajkCjeJgkFkkIskEjh } ClusterAddress = "10.10.231.191" Administrators = { admin } CredRenewFrequency = 0 CounterInterval = 5 )
system sunsrv01 ( )
system sunsrv02 ( )
group db2inst_grp ( SystemList = { sunsrv01 = 0, sunsrv02 = 1 } AutoStart = 0 )
Application adsm_db2inst ( Critical = 0 User = root StartProgram = "/bcp/db2inst/tsm/adsmcad.db start" StopProgram = "/bcp/db2inst/tsm/adsmcad.db stop" MonitorProcesses = { "/usr/bin/dsmcad -optfile=/bcp/db2inst/tsm/dsm.opt.db2inst" } )
Db2udb db2inst_db ( Critical = 0 DB2InstOwner = db2inst DB2InstHome = "/db2/db2inst" MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl" )
DiskGroup db2dgB_dg ( DiskGroup = db2dgB )
IP db2inst_ip ( Device = ce1 Address = "10.10.231.191" )
Mount db2dgB_bkup_mnt ( MountPoint = "/backup/db2inst" BlockDevice = "/dev/vx/dsk/db2dgB/bkp" FSType = vxfs FsckOpt = "-y" )
Mount db2inst_bcp_mnt ( MountPoint = "/bcp/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/bcp" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_db2_mnt ( MountPoint = "/db2/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/db2" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_db_mnt ( MountPoint = "/db/db2inst/PEMMP00P/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/db" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_dba_mnt ( MountPoint = "/dba/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/dba" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_lg1_mnt ( MountPoint = "/db/db2inst/log1" BlockDevice = "/dev/vx/dsk/db2dg/lg1" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_tp01_mnt ( MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/tp01" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
NIC db2inst_ce1 ( Device = ce1 NetworkType = ether )
RVGPrimary db2dg_rvg_primary ( Critical = 0 RvgResourceName = db2dgAgent )
requires group db2inst_vvr online local hard
adsm_db2inst requires db2inst_bcp_mnt
db2dgB_bkup_mnt requires db2dgB_dg
db2inst_bcp_mnt requires db2dg_rvg_primary
db2inst_db requires db2inst_bcp_mnt
db2inst_db requires db2inst_db2_mnt
db2inst_db requires db2inst_db_mnt
db2inst_db requires db2inst_dba_mnt
db2inst_db requires db2inst_ip
db2inst_db requires db2inst_lg1_mnt
db2inst_db requires db2inst_tp01_mnt
db2inst_db2_mnt requires db2dg_rvg_primary
db2inst_db_mnt requires db2dg_rvg_primary
db2inst_dba_mnt requires db2dg_rvg_primary
db2inst_ip requires db2inst_ce1
db2inst_lg1_mnt requires db2dg_rvg_primary
db2inst_tp01_mnt requires db2dg_rvg_primary
group db2inst_vvr ( SystemList = { sunsrv01 = 0, sunsrv02 = 1 } )
DiskGroup db2dg_dg ( DiskGroup = db2dg )
IP vvrip ( Device = ce3 Address = "10.11.196.191" NetMask = "255.255.254.0" )
NIC vvrnic ( Device = ce3 NetworkType = ether )
RVG db2dgAgent ( Critical = 0 RVG = db2dg_rvg DiskGroup = db2dg )
db2dgAgent requires db2dg_dg
vvrip requires vvrnic
Configure VCS to integrate VVR objects – Secondary Site
In this particular environment, there will be four VCS service groups: a CCA group, a replication group, an application (DB) group, and a SNAP group. The CCA group is for remote cluster administration. The application group contains the RVGPrimary resource. The replication group contains the IP, RVG, and DiskGroup resources. The SNAP group contains the resources for the snap copy. This document will not discuss Veritas CCA.
In this particular environment, the VVR service group, the application service group, and the SNAP service group must be online on the same node because the diskgroup is in the VVR service group. The SNAP volumes are in the same diskgroup as the application/DB volumes.
SNAP Configuration is discussed later in this document.
The VVR service group must be started before the application service group; the dependencies are configured in the VCS configuration file.
The VVR service group maintains the synchronization between the primary and secondary sites.
VCS Service Groups:
db2inst_vvr VVR Service Group
Resources:
· db2dgAgent (RVG)
· vvrnic (NIC)
· vvrip (VIP for VVR communications)
· db2dg_dg (Diskgroup)
db2inst_grp Application Service Group
Resources:
· db2dg_rvg_primary (RVG Primary)
· db2inst_db (DB2 Database)
· db2inst_ce1 (NIC)
· db2inst_ip (VIP for the application)
· db2inst__mnt (Filesystems)
· adsm_db2inst (TSM Backup)
snap_db2inst_grp Snap Copy Service Group
Resources:
· snap_db2inst_db (DB2 Database)
· snap_db2inst_ce1 (NIC)
· snap_db2inst_ip (VIP for the application)
· snap_db2inst__mnt (Filesystems)
Secondary Site VCS Configuration File:
include "types.cf" include "ClusterMonitorConfigType.cf" include "Db2udbTypes.cf" include "VVRTypes.cf"
cluster e3clus177 ( UserNames = { admin = ajkCjeJgkFkkIskEjh } ClusterAddress = "10.10.231.191" Administrators = { admin } CredRenewFrequency = 0 CounterInterval = 5 )
system sunsrv01-dr ( )
system sunsrv02-dr ( )
group CCAvail ( SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 } AutoStartList = { sunsrv01-dr, sunsrv02-dr } )
ClusterMonitorConfig CCAvail_ClusterConfig ( MSAddress = "10.11.198.53" ClusterId = 1183482234 VCSLoggingLevel = TAG_A Logging = "/opt/VRTSccacm/conf/k2_logging.properties" ClusterMonitorVersion = "4.1.2272.1" )
Process CCAvail_ClusterMonitor ( PathName = "/opt/VRTSccacm/bin/ClusterMonitor" Arguments = "-config" )
CCAvail_ClusterMonitor requires CCAvail_ClusterConfig
group db2inst_grp ( SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 } Enabled @sunsrv01-dr = 0 Enabled @sunsrv02-dr = 0 AutoStart = 0 )
Application adsm_db2inst ( Enabled = 0 Critical = 0 User = root StartProgram = "/bcp/db2inst/tsm/adsmcad.db start" StopProgram = "/bcp/db2inst/tsm/adsmcad.db stop" MonitorProcesses = { "/usr/bin/dsmcad -optfile=/bcp/db2inst/tsm/dsm.opt.db2inst" } )
Db2udb db2inst_db ( Enabled = 0 Critical = 0 DB2InstOwner = db2inst DB2InstHome = "/db2/db2inst" MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl" )
DiskGroup db2dgB_dg ( Enabled = 0 DiskGroup = db2dgB )
IP db2inst_ip ( Enabled = 0 Device = ce1 Address = "10.10.231.191" )
Mount db2dgB_bkup_mnt ( Enabled = 0 MountPoint = "/backup/db2inst" BlockDevice = "/dev/vx/dsk/db2dgB/bkp" FSType = vxfs FsckOpt = "-y" )
Mount db2inst_bcp_mnt ( Enabled = 0 MountPoint = "/bcp/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/bcp" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_db2_mnt ( Enabled = 0 MountPoint = "/db2/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/db2" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_db_mnt ( Enabled = 0 MountPoint = "/db/db2inst/PEMMP00P/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/db" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_dba_mnt ( Enabled = 0 MountPoint = "/dba/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/dba" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_lg1_mnt ( Enabled = 0 MountPoint = "/db/db2inst/log1" BlockDevice = "/dev/vx/dsk/db2dg/lg1" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_tp01_mnt ( Enabled = 0 MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/tp01" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
NIC db2inst_ce1 ( Enabled = 0 Device = ce1 NetworkType = ether )
RVGPrimary db2dg_rvg_primary ( Enabled = 0 Critical = 0 RvgResourceName = db2dgAgent )
requires group db2inst_vvr online local firm
adsm_db2inst requires db2inst_bcp_mnt
db2dgB_bkup_mnt requires db2dgB_dg
db2inst_bcp_mnt requires db2dg_rvg_primary
db2inst_db requires db2inst_bcp_mnt
db2inst_db requires db2inst_db2_mnt
db2inst_db requires db2inst_db_mnt
db2inst_db requires db2inst_dba_mnt
db2inst_db requires db2inst_ip
db2inst_db requires db2inst_lg1_mnt
db2inst_db requires db2inst_tp01_mnt
db2inst_db2_mnt requires db2dg_rvg_primary
db2inst_db_mnt requires db2dg_rvg_primary
db2inst_dba_mnt requires db2dg_rvg_primary
db2inst_ip requires db2inst_ce1
db2inst_lg1_mnt requires db2dg_rvg_primary
db2inst_tp01_mnt requires db2dg_rvg_primary
group db2inst_vvr ( SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 } )
DiskGroup db2dg_dg ( DiskGroup = db2dg )
IP vvrip ( Device = ce5 Address = "10.12.94.191" NetMask = "255.255.254.0" )
NIC vvrnic ( Device = ce5 NetworkType = ether )
RVG db2dgAgent ( Critical = 0 RVG = db2dg_rvg DiskGroup = db2dg )
db2dgAgent requires db2dg_dg
vvrip requires vvrnic
group snap_db2inst_grp ( SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 } AutoStart = 0 )
Db2udb snap_db2inst_db ( Critical = 0 DB2InstOwner = db2inst DB2InstHome = "/db2/db2inst" MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl" )
DiskGroup snap_db2dgB_dg ( DiskGroup = db2dgB )
IP snap_db2inst_ip ( Device = ce1 Address = "10.10.231.191" )
Mount snap_db2dgB_bkup_mnt ( MountPoint = "/backup/db2inst" BlockDevice = "/dev/vx/dsk/db2dgB/bkp" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_bcp_mnt ( MountPoint = "/bcp/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/bcp_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_db2_mnt ( MountPoint = "/db2/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/db2_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_db_mnt ( MountPoint = "/db/db2inst/PEMMP00P/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/db_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_dba_mnt ( MountPoint = "/dba/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/dba_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_lg1_mnt ( MountPoint = "/db/db2inst/log1" BlockDevice = "/dev/vx/dsk/db2dg/lg1_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_tp01_mnt ( MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/tp01_snapvol" FSType = vxfs FsckOpt = "-y" )
NIC snap_db2inst_ce1 ( Device = ce1 NetworkType = ether )
requires group db2inst_vvr online local firm
snap_db2dgB_bkup_mnt requires snap_db2dgB_dg
snap_db2inst_db requires snap_db2inst_bcp_mnt
snap_db2inst_db requires snap_db2inst_db2_mnt
snap_db2inst_db requires snap_db2inst_db_mnt
snap_db2inst_db requires snap_db2inst_dba_mnt
snap_db2inst_db requires snap_db2inst_lg1_mnt
snap_db2inst_db requires snap_db2inst_tp01_mnt
snap_db2inst_ip requires snap_db2inst_ce1
Bring up VCS engine and bring up the vvr service group in the Secondary Site
Start VCS on both nodes
sunsrv01-dr:# hastart
sunsrv02-dr:# hastart
Bring up the VVR Service Group on one node
sunsrv01-dr:# hagrp -online db2inst_vvr -sys sunsrv01-dr
Bring up VCS engine and bring up the vvr service group in the Primary Site
Start VCS on both nodes
sunsrv01:# hastart
sunsrv02:# hastart
Bring up the VVR Service Group on one node
sunsrv01:# hagrp -online db2inst_vvr -sys sunsrv01
Bring up the Application Service Group
sunsrv01:# hagrp -online db2inst_grp -sys sunsrv01
Check the Rlink and RVG status
Make sure that the flags show attached and connected. If the links are detached or disconnected, verify that communications between the primary site and the secondary site are working; you should be able to ping, ssh, or telnet from each site through the VIPs. If communications are good, you may restart VVR (see the next step for restarting the VVR engine).
sunsrv01:# vxprint -Pl
Disk group: db2dg
Rlink: rlk_db2instdr-vipvr_db2dg_rvg
info: timeout=500 packet_size=8400 rid=0.2007
latency_high_mark=10000 latency_low_mark=9950
bandwidth_limit=none
state: state=ACTIVE
synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=db2dg_rvg remote_host=db2instdr-vipvr IP_addr=10.12.94.191 port=4145
remote_dg=db2dg remote_dg_dgid=1182230506.2373.sunsrv01-dr
remote_rvg_version=21
remote_rlink=rlk_db2inste3-vipvr_db2dg_rvg remote_rlink_rid=0.2127
local_host=db2inste3-vipvr IP_addr=10.11.196.191 port=4145
protocol: UDP/IP
flags: write enabled attached consistent cant_sync connected asynchronous autosync
sunsrv01:# vradmin -l printrvg
Replicated Data Set: db2dg_rvg
Primary:
HostName: db2inste3-vipvr
RvgName: db2dg_rvg
DgName: db2dg
datavol_cnt: 6
srl: db2dg_srl
RLinks:
name=rlk_db2instdr-vipvr_db2dg_rvg, detached=off, synchronous=off
Secondary:
HostName: db2instdr-vipvr
RvgName: db2dg_rvg
DgName: db2dg
datavol_cnt: 6
srl: db2dg_srl
RLinks:
name=rlk_db2inste3-vipvr_db2dg_rvg, detached=off, synchronous=off
Restart VVR engine (if the link is detached or disconnected)
This is only required if you see the links between primary and secondary as detached or disconnected. If you think that the communication between the two is working fine, then run the following commands in the primary site:
sunsrv01:# vxstart_vvr stop
sunsrv01:# vxstart_vvr start
Then, check the status again.
sunsrv01:# vxprint -Pl
Start the replication
Initiate the command from the primary site.
sunsrv01:# vradmin -g db2dg -a startrep db2dg_rvg db2instdr-vipvr
Message from Primary:
VxVM VVR vxrlink WARNING V-5-1-3359 Attaching rlink to non-empty rvg. Autosync will be performed.
VxVM VVR vxrlink INFO V-5-1-3614 Secondary data volumes detected with rvg db2dg_rvg as parent:
VxVM VVR vxrlink INFO V-5-1-6183 bcp: len=585105408 primary_datavol=bcp
VxVM VVR vxrlink INFO V-5-1-6183 db: len=1172343808 primary_datavol=db
VxVM VVR vxrlink INFO V-5-1-6183 db2: len=8388608 primary_datavol=db2
VxVM VVR vxrlink INFO V-5-1-6183 dba: len=98566144 primary_datavol=dba
VxVM VVR vxrlink INFO V-5-1-6183 lg1: len=396361728 primary_datavol=lg1
VxVM VVR vxrlink INFO V-5-1-6183 tp01: len=192937984 primary_datavol=tp01
VxVM VVR vxrlink INFO V-5-1-3365 Autosync operation has started
Usage:
vradmin -g <diskgroup> -a startrep <RVGname> <secondaryhost>
Where:
<diskgroup> is the VxVM diskgroup name
<RVGname> is the name for the RVG, usually <diskgroupname>_rvg
<secondaryhost> is the hostname of the Secondary Site; check the /etc/hosts file
Check the Replication status
Initiate the command from the primary site. You can tell it is syncing by watching the number of kilobytes remaining; if the value keeps dropping, replication is working.
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg
Fri Jun 22 00:01:37 MST 2007
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226649984 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226644224 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226638464 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226632128 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226626368 Kbytes remaining.
Another way to check is to use the vradmin repstatus command. The key bits of information here are Data status and Replication status.
sunsrv01:# vradmin -g db2dg repstatus db2dg_rvg
Replicated Data Set: db2dg_rvg
Primary:
Host name: db2inste3-vipvr
RVG name: db2dg_rvg
DG name: db2dg
RVG state: enabled for I/O
Data volumes: 6
SRL name: db2dg_srl
SRL size: 952.84 G
Total secondaries: 1
Secondary:
Host name: db2instdr-vipvr
RVG name: db2dg_rvg
DG name: db2dg
Data status: consistent, up-to-date
Replication status: replicating (connected)
Current mode: asynchronous
Logging to: SRL
Timestamp Information: N/A
Mount the Replicated Volumes
When replication is 100% complete, you may check the volumes on the secondary site by mounting the filesystems. But first, you need to check the status of replication.
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg
Thu Jun 28 08:54:42 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_db2instdr-vipvr_db2dg_rvg is up to date
Use VCS to mount the replicated Volumes
sunsrv01-dr:# hagrp -online db2inst_grp -sys sunsrv01-dr
Display the filesystems and compare them with the primary.
sunsrv01-dr:# df -k | grep db2dg
/dev/vx/dsk/db2dg/db2 4194304 78059 3859030 2% /db2/db2inst
/dev/vx/dsk/db2dg/bcp 292552704 1527508 272836173 1% /bcp/db2inst
/dev/vx/dsk/db2dg/dba 49283072 28555 46176114 1% /dba/db2inst
/dev/vx/dsk/db2dg/db 586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000
/dev/vx/dsk/db2dg/tp01 96468992 40176 90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000
/dev/vx/dsk/db2dg/lg1 198180864 1779853 184126012 1% /db/db2inst/log1
sunsrv01:# df -k | grep db2dg
/dev/vx/dsk/db2dg/db2 4194304 78059 3859030 2% /db2/db2inst
/dev/vx/dsk/db2dg/bcp 292552704 1527508 272836173 1% /bcp/db2inst
/dev/vx/dsk/db2dg/dba 49283072 28555 46176114 1% /dba/db2inst
/dev/vx/dsk/db2dg/db 586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000
/dev/vx/dsk/db2dg/tp01 96468992 40176 90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000
/dev/vx/dsk/db2dg/lg1 198180864 1779853 184126012 1% /db/db2inst/log1
This is the best time to test VCS for switching the service group.
sunsrv01-dr:# hagrp -switch db2inst_grp -to sunsrv02-dr
Prepare the Replicated Volumes for Snapshot in the secondary site
First, display the volume information
sunsrv02-dr:# vxprint -ht bcp
Disk group: db2dg
v bcp db2dg_rvg ENABLED ACTIVE 585105408 SELECT - fsgen
pl bcp-01 bcp ENABLED ACTIVE 585108480 CONCAT - RW
sd db2dg214-01 bcp-03 db2dg214 5376 285473280 0 EMC0_219 ENA
sd db2dg215-01 bcp-03 db2dg215 5376 285473280 285473280 EMC0_220 ENA
sd db2dg219-01 bcp-03 db2dg219 5376 14161920 570946560 EMC0_224 ENA
pl bcp-02 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
sd db2dg95-06 bcp-02 db2dg95 0 512 LOG EMC0_154 ENA
Prepare the volume for snapshot by adding a DCO log. The same disk is used for the DCO and DCM logs.
sunsrv02-dr:# vxsnap -g db2dg prepare bcp ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
Display the volume information to see the changes. Note that the DCO volume has been added with two logs.
sunsrv02-dr:# vxprint -ht bcp
Disk group: db2dg
v bcp db2dg_rvg ENABLED ACTIVE 585105408 SELECT - fsgen
pl bcp-01 bcp ENABLED ACTIVE 585108480 CONCAT - RW
sd db2dg214-01 bcp-03 db2dg214 5376 285473280 0 EMC0_219 ENA
sd db2dg215-01 bcp-03 db2dg215 5376 285473280 285473280 EMC0_220 ENA
sd db2dg219-01 bcp-03 db2dg219 5376 14161920 570946560 EMC0_224 ENA
pl bcp-02 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
sd db2dg95-06 bcp-02 db2dg95 0 512 LOG EMC0_154 ENA
dc bcp_dco bcp bcp_dcl
v bcp_dcl - ENABLED ACTIVE 40368 SELECT - gen
pl bcp_dcl-01 bcp_dcl ENABLED ACTIVE 40368 CONCAT - RW
sd db2dg95-07 bcp_dcl-01 db2dg95 960 40368 0 EMC0_154 ENA
Run the same steps on the rest of the volumes
sunsrv02-dr:# vxsnap -g db2dg prepare db ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g db2dg prepare dba ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g db2dg prepare db2 ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g db2dg prepare lg1 ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g db2dg prepare tp01 ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
Create the SNAP Volumes in the secondary site
sunsrv02-dr:# vxassist -g db2dg make bcp_snapvol 585105408 layout=concat
sunsrv02-dr:# vxassist -g db2dg make db_snapvol 1172343808 layout=concat
sunsrv02-dr:# vxassist -g db2dg make dba_snapvol 98566144 layout=concat
sunsrv02-dr:# vxassist -g db2dg make db2_snapvol 8388608 layout=concat
sunsrv02-dr:# vxassist -g db2dg make lg1_snapvol 396361728 layout=concat
sunsrv02-dr:# vxassist -g db2dg make tp01_snapvol 192937984 layout=concat
Prepare the SNAP Volumes
Identify the region size of the main volumes. The region size must be the same for both the main volume and the snap volume.
sunsrv02-dr:# for DCONAME in `vxprint -tg db2dg | grep "^dc" | awk '{ print $2 }'`
> do
> echo "$DCONAME\t\c"
> vxprint -g db2dg -F%regionsz $DCONAME
> done
bcp_dco 128
db_dco 128
dba_dco 128
db2_dco 128
lg1_dco 128
tp01_dco 128
Display the volume information
sunsrv02-dr:# vxprint -ht bcp_snapvol
Disk group: db2dg
v bcp_snapvol fsgen ENABLED 585105408 - ACTIVE - -
pl bcp_snapvol-01 bcp_snapvol ENABLED 585105600 - ACTIVE - -
sd db2dg114-01 bcp_snapvol-01 ENABLED 35726400 0 - - -
sd db2dg106-01 bcp_snapvol-01 ENABLED 35681280 35726400 - - -
sd db2dg101-01 bcp_snapvol-01 ENABLED 35681280 71407680 - - -
sd db2dg102-01 bcp_snapvol-01 ENABLED 35681280 107088960 - - -
sd db2dg103-01 bcp_snapvol-01 ENABLED 35681280 142770240 - - -
sd db2dg104-01 bcp_snapvol-01 ENABLED 35681280 178451520 - - -
sd db2dg105-01 bcp_snapvol-01 ENABLED 35681280 214132800 - - -
sd db2dg107-01 bcp_snapvol-01 ENABLED 35681280 249814080 - - -
sd db2dg108-01 bcp_snapvol-01 ENABLED 35681280 285495360 - - -
sd db2dg109-01 bcp_snapvol-01 ENABLED 13844160 321176640 - - -
sd db2dg110-01 bcp_snapvol-01 ENABLED 35726400 335020800 - - -
sd db2dg111-01 bcp_snapvol-01 ENABLED 35726400 370747200 - - -
sd db2dg112-01 bcp_snapvol-01 ENABLED 35726400 406473600 - - -
sd db2dg113-01 bcp_snapvol-01 ENABLED 35726400 442200000 - - -
sd db2dg115-01 bcp_snapvol-01 ENABLED 35726400 477926400 - - -
sd db2dg116-01 bcp_snapvol-01 ENABLED 35726400 513652800 - - -
sd db2dg117-01 bcp_snapvol-01 ENABLED 35726400 549379200 - - -
Prepare the snapshot volume by adding a DCO log with the same region size as the main volume. The same disk is used for the DCO and DCM logs.
sunsrv02-dr:# vxsnap -g db2dg prepare bcp_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
Display the volume information to see the changes. Note that the DCO volume has been added.
sunsrv02-dr:# vxprint -ht bcp_snapvol
Disk group: db2dg
v bcp_snapvol fsgen ENABLED 585105408 - ACTIVE - -
pl bcp_snapvol-01 bcp_snapvol ENABLED 585105600 - ACTIVE - -
sd db2dg114-01 bcp_snapvol-01 ENABLED 35726400 0 - - -
sd db2dg106-01 bcp_snapvol-01 ENABLED 35681280 35726400 - - -
sd db2dg101-01 bcp_snapvol-01 ENABLED 35681280 71407680 - - -
sd db2dg102-01 bcp_snapvol-01 ENABLED 35681280 107088960 - - -
sd db2dg103-01 bcp_snapvol-01 ENABLED 35681280 142770240 - - -
sd db2dg104-01 bcp_snapvol-01 ENABLED 35681280 178451520 - - -
sd db2dg105-01 bcp_snapvol-01 ENABLED 35681280 214132800 - - -
sd db2dg107-01 bcp_snapvol-01 ENABLED 35681280 249814080 - - -
sd db2dg108-01 bcp_snapvol-01 ENABLED 35681280 285495360 - - -
sd db2dg109-01 bcp_snapvol-01 ENABLED 13844160 321176640 - - -
sd db2dg110-01 bcp_snapvol-01 ENABLED 35726400 335020800 - - -
sd db2dg111-01 bcp_snapvol-01 ENABLED 35726400 370747200 - - -
sd db2dg112-01 bcp_snapvol-01 ENABLED 35726400 406473600 - - -
sd db2dg113-01 bcp_snapvol-01 ENABLED 35726400 442200000 - - -
sd db2dg115-01 bcp_snapvol-01 ENABLED 35726400 477926400 - - -
sd db2dg116-01 bcp_snapvol-01 ENABLED 35726400 513652800 - - -
sd db2dg117-01 bcp_snapvol-01 ENABLED 35726400 549379200 - - -
dc bcp_snapvol_dco bcp_snapvol - - - - - -
v bcp_snapvol_dcl gen ENABLED 40368 - ACTIVE - -
pl bcp_snapvol_dcl-01 bcp_snapvol_dcl ENABLED 40368 - ACTIVE - -
sd db2dg95-01 bcp_snapvol_dcl-01 ENABLED 40368 0 - - -
Run the same steps on the rest of the volumes
sunsrv02-dr:# vxsnap -g db2dg prepare db_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare dba_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare db2_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare lg1_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare tp01_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
Verify the region sizes are the same
sunsrv02-dr:# for DCONAME in `vxprint -tg db2dg | grep "^dc" | awk '{ print $2 }'`
> do
> echo "$DCONAME\t\c"
> vxprint -g db2dg -F%regionsz $DCONAME
> done
bcp_dco 128
bcp_snapvol_dco 128
db_dco 128
db_snapvol_dco 128
dba_dco 128
dba_snapvol_dco 128
db2_dco 128
db2_snapvol_dco 128
lg1_dco 128
lg1_snapvol_dco 128
tp01_dco 128
tp01_snapvol_dco 128
Run a point-in-time snapshot
Verify the RLINK and RVG are active and up to date. Run this command from the primary site.
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg
Thu Jun 28 08:54:42 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_db2instdr-vipvr_db2dg_rvg is up to date
sunsrv02-dr:# vxprint -Pl
Disk group: db2dg
Rlink: rlk_db2inste3-vipvr_db2dg_rvg
info: timeout=500 packet_size=8400 rid=0.2127
latency_high_mark=10000 latency_low_mark=9950
bandwidth_limit=none
state: state=ACTIVE
synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=db2dg_rvg remote_host=db2inste3-vipvr IP_addr=10.11.196.191 port=4145
remote_dg=db2dg remote_dg_dgid=1138140445.1393.sunsrv01
remote_rvg_version=21
remote_rlink=rlk_db2instdr-vipvr_db2dg_rvg remote_rlink_rid=0.2007
local_host=db2instdr-vipvr IP_addr=10.12.94.191 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected
Start the Snap Copy
sunsrv02-dr:# vxsnap -g db2dg make source=bcp/snapvol=bcp_snapvol \ source=db/snapvol=db_snapvol source=dba/snapvol=dba_snapvol \ source=db2/snapvol=db2_snapvol source=lg1/snapvol=lg1_snapvol \ source=tp01/snapvol=tp01_snapvol
Display the sync status
sunsrv02-dr:# vxtask list
TASKID PTID TYPE/STATE PCT PROGRESS
168 SNAPSYNC/R 00.07% 0/585105408/430080 SNAPSYNC bcp_snapvol db2dg
170 SNAPSYNC/R 00.05% 0/1172343808/567296 SNAPSYNC db_snapvol db2dg
172 SNAPSYNC/R 00.46% 0/98566144/452608 SNAPSYNC dba_snapvol db2dg
174 SNAPSYNC/R 04.30% 0/8388608/360448 SNAPSYNC db2_snapvol db2dg
176 SNAPSYNC/R 00.16% 0/396361728/641024 SNAPSYNC lg1_snapvol db2dg
178 SNAPSYNC/R 00.18% 0/192937984/346112 SNAPSYNC tp01_snapvol db2dg
sunsrv02-dr:# vxsnap -g db2dg print
NAME SNAPOBJECT TYPE PARENT SNAPSHOT %DIRTY %VALID
bcp -- volume -- -- -- 100.00
  bcp_snapvol_snp volume -- bcp_snapvol 0.00 --
db -- volume -- -- -- 100.00
  db_snapvol_snp volume -- db_snapvol 0.00 --
dba -- volume -- -- -- 100.00
  dba_snapvol_snp volume -- dba_snapvol 0.00 --
db2 -- volume -- -- -- 100.00
  db2_snapvol_snp volume -- db2_snapvol 0.00 --
lg1 -- volume -- -- -- 100.00
  lg1_snapvol_snp volume -- lg1_snapvol 0.00 --
tp01 -- volume -- -- -- 100.00
  tp01_snapvol_snp volume -- tp01_snapvol 0.00 --
bcp_snapvol bcp_snp volume bcp -- 0.00 0.11
db_snapvol db_snp volume db -- 0.00 0.02
dba_snapvol dba_snp volume dba -- 0.00 0.77
db2_snapvol db2_snp volume db2 -- 0.00 9.13
lg1_snapvol lg1_snp volume lg1 -- 0.00 0.09
tp01_snapvol tp01_snp volume tp01 -- 0.00 0.10
Mount the SNAP Volumes
When the sync is complete, bring up the snap service group in VCS to verify the changes. But first, unmount the filesystems on the replicated volumes. On this particular server, the replicated volumes and the snap volumes use the same mountpoints.
sunsrv02-dr:# umount ` mount | grep '/dev/vx/dsk/db2dg/' | awk '{ print $1 }'`
sunsrv02-dr:# vxtask list
TASKID PTID TYPE/STATE PCT PROGRESS
sunsrv02-dr:# hastatus -sum
-- SYSTEM STATE
-- System State Frozen
A sunsrv01-dr RUNNING 0
A sunsrv02-dr RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B CCAvail sunsrv01-dr Y N ONLINE
B CCAvail sunsrv02-dr Y N OFFLINE
B db2inst_grp sunsrv01-dr Y N OFFLINE
B db2inst_grp sunsrv02-dr Y N OFFLINE
B db2inst_vvr sunsrv01-dr Y N OFFLINE
B db2inst_vvr sunsrv02-dr Y N ONLINE
B snap_db2inst_grp sunsrv01-dr Y N OFFLINE
B snap_db2inst_grp sunsrv02-dr Y N OFFLINE
sunsrv02-dr:# hagrp -online snap_db2inst_grp -sys sunsrv02-dr
sunsrv02-dr:# hastatus -sum
-- SYSTEM STATE
-- System State Frozen
A sunsrv01-dr RUNNING 0
A sunsrv02-dr RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B CCAvail sunsrv01-dr Y N ONLINE
B CCAvail sunsrv02-dr Y N OFFLINE
B db2inst_grp sunsrv01-dr Y N OFFLINE
B db2inst_grp sunsrv02-dr Y N OFFLINE
B db2inst_vvr sunsrv01-dr Y N OFFLINE
B db2inst_vvr sunsrv02-dr Y N ONLINE
B snap_db2inst_grp sunsrv01-dr Y N OFFLINE
B snap_db2inst_grp sunsrv02-dr Y N ONLINE
Verify the filesystems and compare the sizes with the primary site.
sunsrv02-dr:# df -k | grep snapvol
/dev/vx/dsk/db2dg/db2_snapvol 4194304 78059 3859030 2% /db2/db2inst
/dev/vx/dsk/db2dg/bcp_snapvol 292552704 1527508 272836173 1% /bcp/db2inst
/dev/vx/dsk/db2dg/dba_snapvol 49283072 28555 46176114 1% /dba/db2inst
/dev/vx/dsk/db2dg/db_snapvol 586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000
/dev/vx/dsk/db2dg/tp01_snapvol 96468992 40176 90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000
/dev/vx/dsk/db2dg/lg1_snapvol 198180864 1779853 184126012 1% /db/db2inst/log1
sunsrv01:# df -k | grep db2dg
/dev/vx/dsk/db2dg/db2 4194304 78059 3859030 2% /db2/db2inst
/dev/vx/dsk/db2dg/bcp 292552704 1527508 272836173 1% /bcp/db2inst
/dev/vx/dsk/db2dg/dba 49283072 28555 46176114 1% /dba/db2inst
/dev/vx/dsk/db2dg/db 586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000
/dev/vx/dsk/db2dg/tp01 96468992 40176 90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000
/dev/vx/dsk/db2dg/lg1 198180864 1779853 184126012 1% /db/db2inst/log1
Posted by
JAUGHN
Labels:
Veritas Volume Replicator
|