The prtpicl command outputs the information needed to accurately determine the make and model of an HBA.

QLogic HBAs have a PCI vendor identifier of 1077. Search the output of the command prtpicl -v for the number 1077. A section similar to the following appears:

QLGC,qla (scsi, 44000003ac)
:DeviceID 0x4
:UnitAddress 4
:vendor-id 0x1077
:device-id 0x2300
:revision-id 0x1
:subsystem-vendor-id 0x1077
:subsystem-id 0x9
:min-grant 0x40
:max-latency 0
:cache-line-size 0x10
:latency-timer 0x40
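
On a system with many devices you can filter the output for the QLogic vendor ID first, then read the full section around each match (prtpicl and grep are standard Solaris commands):

# prtpicl -v | grep 1077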

The subsystem-id value determines the model of the HBA. Use the following chart to identify it:

Vendor   HBA model               Vendor ID   Device ID   Subsys Vendor ID   Subsys Device ID
QLogic   QCP2340                 1077        2312        1077               109
QLogic   QLA200                  1077        6312        1077               119
QLogic   QLA210                  1077        6322        1077               12F
QLogic   QLA2300/QLA2310         1077        2310        1077               9
QLogic   QLA2340                 1077        2312        1077               100
QLogic   QLA2342                 1077        2312        1077               101
QLogic   QLA2344                 1077        2312        1077               102
QLogic   QLE2440                 1077        2422        1077               145
QLogic   QLA2460                 1077        2422        1077               133
QLogic   QLA2462                 1077        2422        1077               134
QLogic   QLE2360                 1077        2432        1077               117
QLogic   QLE2362                 1077        2432        1077               118
QLogic   QLE2440                 1077        2432        1077               147
QLogic   QLE2460                 1077        2432        1077               137
QLogic   QLE2462                 1077        2432        1077               138
QLogic   QSB2340                 1077        2312        1077               104
QLogic   QSB2342                 1077        2312        1077               105
Sun      SG-XPCI1FC-QLC          1077        6322        1077               132
Sun      6799A                   1077        2200A       1077               4082
Sun      SG-XPCI1FC-QF2/x6767A   1077        2310        1077               106
Sun      SG-XPCI2FC-QF2/x6768A   1077        2312        1077               10A
Sun      X6727A                  1077        2200A       1077               4083
Sun      SG-XPCI1FC-QF4          1077        2422        1077               140
Sun      SG-XPCI2FC-QF4          1077        2422        1077               141
Sun      SG-XPCIE1FC-QF4         1077        2432        1077               142
Sun      SG-XPCIE2FC-QF4         1077        2432        1077               143
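
For example, the prtpicl output shown earlier reports subsystem-vendor-id 0x1077 and subsystem-id 0x9, which the chart identifies as a QLogic QLA2300/QLA2310.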
Details:

Here's an actual server I worked on, adding NIC and IP resources to a VCS service group.

# haconf -makerw


# hares -add vvrnic NIC db2inst_grp
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors

# hares -modify vvrnic Device ce3

# hares -modify vvrnic NetworkType ether


# hares -add vvrip IP db2inst_grp
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors

# hares -modify vvrip Device ce3
# hares -modify vvrip Address "10.67.196.191"
# hares -modify vvrip NetMask "255.255.254.0"


# hares -link vvrip vvrnic


# hagrp -enableresources db2inst_grp


# hares -online vvrip -sys server620


# haconf -dump -makero
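
After bringing the IP online, I like to verify the new resources with the standard VCS status commands (resource and group names as in the example above):

# hares -state vvrnic
# hares -state vvrip
# hastatus -sum | grep db2inst_grp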
How to create a file system using VERITAS Volume Manager, controlled under VERITAS Cluster Server

Details:


The following is the procedure to create a volume and file system and place them under VERITAS Cluster Server (VCS) control:

    1. Create a disk group
    2. Create a mount point and file system
    3. Deport a disk group
    4. Create a service group


Add the following resources and modify their attributes:

Resource        Attributes to modify
1. DiskGroup    DiskGroup (the disk group name)
2. Mount        BlockDevice, FSType, MountPoint

Create a dependency between the following resources:

1. Mount requires DiskGroup

Enable all resources in this service group.

The following example shows how to create a RAID-5 volume with a VxFS file system and put it under VCS control.

Method 1 - Using the command line

1. Create a disk group using Volume Manager with a minimum of 4 disks:

# vxdg init datadg disk01=c1t1d0s2 disk02=c1t2d0s2 disk03=c1t3d0s2 disk04=c1t4d0s2
# vxassist -g datadg make vol01 2g layout=raid5
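
Before moving on, it can be worth confirming the new volume's layout; vxprint with the standard -ht flags prints the disk group hierarchy:

# vxprint -g datadg -ht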

2. Create a mount point for this volume:

# mkdir /vol01

3. Create a file system on this volume:

# mkfs -F vxfs /dev/vx/rdsk/datadg/vol01

4. Deport this disk group:

# vxdg deport datadg
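
Once deported, datadg disappears from plain vxdg list output; it still shows up, marked as deported, when all disk groups are listed:

# vxdisk -o alldgs list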

5. Create a service group:

# haconf -makerw
# hagrp -add newgroup
# hagrp -modify newgroup SystemList <sysa> 0 <sysb> 1
# hagrp -modify newgroup AutoStartList <sysa>

6. Create a disk group resource and modify its attributes:

# hares -add data_dg DiskGroup newgroup
# hares -modify data_dg DiskGroup datadg

7. Create a mount resource and modify its attributes:

# hares -add vol01_mnt Mount newgroup
# hares -modify vol01_mnt BlockDevice /dev/vx/dsk/datadg/vol01
# hares -modify vol01_mnt FSType vxfs
# hares -modify vol01_mnt MountPoint /vol01
# hares -modify vol01_mnt FsckOpt %-y

8. Link the mount resource to the disk group resource:

# hares -link vol01_mnt data_dg

9. Enable the resources and close the configuration:

# hagrp -enableresources newgroup
# haconf -dump -makero



Method 2 - Editing /etc/VRTSvcs/conf/config/main.cf

# hastop -all
# cd /etc/VRTSvcs/conf/config
# vi main.cf

Note that haconf -makerw is not needed here; it talks to the running VCS engine and will fail once VCS has been stopped.


Add the following lines to the end of this file:

group newgroup (
    SystemList = { sysA = 0, sysB = 1 }
    AutoStartList = { sysA }
    )

DiskGroup data_dg (
    DiskGroup = datadg
    )

Mount vol01_mnt (
    MountPoint = "/vol01"
    BlockDevice = "/dev/vx/dsk/datadg/vol01"
    FSType = vxfs
    FsckOpt = "-y"
    )

vol01_mnt requires data_dg


# hacf -verify /etc/VRTSvcs/conf/config
# hastart

Start VCS on the system holding the modified main.cf first, then on the remaining systems.


Check status of the new service group.
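
For example, using the group name from the examples above:

# hagrp -state newgroup
# hastatus -sum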


------------------------------------------------------------------------------------

Here's an actual example.

# umount /backup/pdpd415

# vxdg deport bkupdg

# haconf -makerw

# hares -add bkup_dg DiskGroup pdpd415_grp
# hares -modify bkup_dg DiskGroup bkupdg

# hares -add bkupdg_bkup_mnt Mount pdpd415_grp
# hares -modify bkupdg_bkup_mnt BlockDevice /dev/vx/dsk/bkupdg/bkupvol
# hares -modify bkupdg_bkup_mnt FSType vxfs
# hares -modify bkupdg_bkup_mnt MountPoint /backup/pdpd415
# hares -modify bkupdg_bkup_mnt FsckOpt %-y

# hares -link bkupdg_bkup_mnt bkup_dg

# hagrp -enableresources pdpd415_grp

# hares -online bkup_dg -sys sppwd620
# hares -online bkupdg_bkup_mnt -sys sppwd620

# haconf -dump -makero
To verify whether an HBA is connected to a fabric:

# /usr/sbin/luxadm -e port

Found path to 4 HBA ports

/devices/pci@1e,600000/SUNW,qlc@3/fp@0,0:devctl CONNECTED
/devices/pci@1e,600000/SUNW,qlc@3,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@1e,600000/SUNW,qlc@4/fp@0,0:devctl CONNECTED
/devices/pci@1e,600000/SUNW,qlc@4,1/fp@0,0:devctl NOT CONNECTED



Your SAN administrator will ask for the WWNs for Zoning. Here are some steps I use to get that information:

# prtconf -vp | grep wwn
port-wwn: 210000e0.8b1d8d7d
node-wwn: 200000e0.8b1d8d7d
port-wwn: 210100e0.8b3d8d7d
node-wwn: 200000e0.8b3d8d7d
port-wwn: 210000e0.8b1eaeb0
node-wwn: 200000e0.8b1eaeb0
port-wwn: 210100e0.8b3eaeb0
node-wwn: 200000e0.8b3eaeb0
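
Note that prtconf prints each WWN as two dot-separated groups of hex digits; the WWN your SAN administrator needs is the 16 digits with the dot removed, for example 210000e08b1d8d7d for the first port-wwn above.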


Or you may use fcinfo, if installed.

# fcinfo hba-port
HBA Port WWN: 210000e08b8600c8
OS Device Name: /dev/cfg/c11
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b8600c8
HBA Port WWN: 210100e08ba600c8
OS Device Name: /dev/cfg/c12
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200100e08ba600c8
HBA Port WWN: 210000e08b86a1cc
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b86a1cc
HBA Port WWN: 210100e08ba6a1cc
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200100e08ba6a1cc



Here are some commands you can use for QLogic Adapters:

# modinfo | grep qlc
76 7ba9e000 cdff8 282 1 qlc (SunFC Qlogic FCA v20060630-2.16)

# prtdiag | grep qlc
pci 66 PCI5 SUNW,qlc-pci1077,2312 (scsi-+
okay /ssm@0,0/pci@18,600000/SUNW,qlc@1
pci 66 PCI5 SUNW,qlc-pci1077,2312 (scsi-+
okay /ssm@0,0/pci@18,600000/SUNW,qlc@1,1
pci 33 PCI2 SUNW,qlc-pci1077,2312 (scsi-+
okay /ssm@0,0/pci@19,700000/SUNW,qlc@1
pci 33 PCI2 SUNW,qlc-pci1077,2312 (scsi-+
okay /ssm@0,0/pci@19,700000/SUNW,qlc@1,1

# luxadm qlgc

Found Path to 4 FC100/P, ISP2200, ISP23xx Devices

Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1,1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@18,600000/SUNW,qlc@1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04
Complete


# luxadm -e dump_map /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1,1/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 1f0112 0 5006048accab4f8d 5006048accab4f8d 0x0 (Disk device)
1 1f011f 0 5006048accab4e0d 5006048accab4e0d 0x0 (Disk device)
2 1f012e 0 5006048acc7034cd 5006048acc7034cd 0x0 (Disk device)
3 1f0135 0 5006048accb4fc0d 5006048accb4fc0d 0x0 (Disk device)
4 1f02ef 0 50060163306043b6 50060160b06043b6 0x0 (Disk device)
5 1f06ef 0 5006016b306043b6 50060160b06043b6 0x0 (Disk device)
6 1f0bef 0 5006016330604365 50060160b0604365 0x0 (Disk device)
7 1f19ef 0 5006016b30604365 50060160b0604365 0x0 (Disk device)
8 1f0e00 0 210100e08ba6a1cc 200100e08ba6a1cc 0x1f (Unknown Type,Host Bus Adapter)



# prtpicl -v
.
.
SUNW,qlc (scsi-fcp, 7f0000066b) <--- use the IDs below, with the chart at the top of this post or the QLogic website, to get the model number
:_fru_parent (7f0000dc86H)
:DeviceID 0x1
:UnitAddress 1
:vendor-id 0x1077
:device-id 0x2312
:revision-id 0x2
:subsystem-vendor-id 0x1077
:subsystem-id 0x10a
:min-grant 0x40
:max-latency 0
:cache-line-size 0x10
:latency-timer 0x40

.
.


#### The subsystem-id value determines the model of HBA.
#### Per the reference chart at the top of this post, device-id 0x2312 with subsystem-id 0x10a is a Sun SG-XPCI2FC-QF2 (x6768A).


Configuring NEW LUNs:


spdma501:# format < /dev/null
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b2fca,0
1. c1t1d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b39cf,0
Specify disk (enter its number):


spdma501:# cfgadm -o show_FCP_dev -al
Ap_Id Type Receptacle Occupant Condition
c1 fc-private connected configured unknown
c1::2100000c506b2fca,0 disk connected configured unknown
c1::2100000c506b39cf,0 disk connected configured unknown
c3 fc-fabric connected unconfigured unknown
c3::50060482ccaae5a3,61 disk connected unconfigured unknown
c3::50060482ccaae5a3,62 disk connected unconfigured unknown
c3::50060482ccaae5a3,63 disk connected unconfigured unknown
c3::50060482ccaae5a3,64 disk connected unconfigured unknown
c3::50060482ccaae5a3,65 disk connected unconfigured unknown
c3::50060482ccaae5a3,66 disk connected unconfigured unknown
c3::50060482ccaae5a3,67 disk connected unconfigured unknown
c3::50060482ccaae5a3,68 disk connected unconfigured unknown
c3::50060482ccaae5a3,69 disk connected unconfigured unknown
c3::50060482ccaae5a3,70 disk connected unconfigured unknown
c3::50060482ccaae5a3,71 disk connected unconfigured unknown
c3::50060482ccaae5a3,72 disk connected unconfigured unknown
c4 fc connected unconfigured unknown
c5 fc-fabric connected unconfigured unknown
c5::50060482ccaae5bc,61 disk connected unconfigured unknown
c5::50060482ccaae5bc,62 disk connected unconfigured unknown
c5::50060482ccaae5bc,63 disk connected unconfigured unknown
c5::50060482ccaae5bc,64 disk connected unconfigured unknown
c5::50060482ccaae5bc,65 disk connected unconfigured unknown
c5::50060482ccaae5bc,66 disk connected unconfigured unknown
c5::50060482ccaae5bc,67 disk connected unconfigured unknown
c5::50060482ccaae5bc,68 disk connected unconfigured unknown
c5::50060482ccaae5bc,69 disk connected unconfigured unknown
c5::50060482ccaae5bc,70 disk connected unconfigured unknown
c5::50060482ccaae5bc,71 disk connected unconfigured unknown
c5::50060482ccaae5bc,72 disk connected unconfigured unknown
c6 fc connected unconfigured unknown


spdma501:# cfgadm -c configure c3
Nov 16 17:32:25 spdma501 last message repeated 54 times
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48 (ssd2):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,47 (ssd3):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,46 (ssd4):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,45 (ssd5):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,44 (ssd6):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,43 (ssd7):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,42 (ssd8):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,41 (ssd9):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,40 (ssd10):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3f (ssd11):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3e (ssd12):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3d (ssd13):

spdma501:# cfgadm -c configure c5
Nov 16 17:32:55 spdma501 last message repeated 5 times
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,48 (ssd14):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,47 (ssd15):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,46 (ssd16):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,45 (ssd17):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,44 (ssd18):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,43 (ssd19):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,42 (ssd20):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,41 (ssd21):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,40 (ssd22):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3f (ssd23):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3e (ssd24):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,3d (ssd25):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number


spdma501:# format < /dev/null
Searching for disks...Nov 16 17:33:04 spdma501 last message repeated 1 time
Nov 16 17:33:07 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48 (ssd2):
Nov 16 17:33:07 spdma501 corrupt label - wrong magic numberdone

c3t50060482CCAAE5A3d61: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d62: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d63: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d64: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d65: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d66: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d67: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d68: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d69: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d70: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d71: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d72: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd67: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd68: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd69: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd70: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd71: configured with capacity of 17.04GB
c5t50060482CCAAE5BCd72: configured with capacity of 17.04GB


AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b2fca,0
1. c1t1d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b39cf,0
2. c3t50060482CCAAE5A3d61
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3d
3. c3t50060482CCAAE5A3d62
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3e
4. c3t50060482CCAAE5A3d63
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3f
5. c3t50060482CCAAE5A3d64
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,40
6. c3t50060482CCAAE5A3d65
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,41
7. c3t50060482CCAAE5A3d66
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,42
8. c3t50060482CCAAE5A3d67
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,43
9. c3t50060482CCAAE5A3d68
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,44
10. c3t50060482CCAAE5A3d69
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,45
11. c3t50060482CCAAE5A3d70
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,46
12. c3t50060482CCAAE5A3d71
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,47
13. c3t50060482CCAAE5A3d72
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48
14. c5t50060482CCAAE5BCd67
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,43
15. c5t50060482CCAAE5BCd68
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,44
16. c5t50060482CCAAE5BCd69
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,45
17. c5t50060482CCAAE5BCd70
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,46
18. c5t50060482CCAAE5BCd71
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,47
19. c5t50060482CCAAE5BCd72
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,48
Specify disk (enter its number):



IF YOU DON'T SEE THE NEW LUNS IN FORMAT, RUN devfsadm !!!!


# /usr/sbin/devfsadm
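
devfsadm rebuilds the /dev and /devices trees so the new LUNs get device links; if stale links from removed LUNs linger afterwards, devfsadm -C (cleanup mode) removes them.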



Label the new disks !!!!



# cd /tmp


# cat format.cmd
label
quit


The following loop picks out the newly configured disks from the format output (their summary lines begin with the controller name, e.g. c3...) and labels each one non-interactively:

# for disk in `format < /dev/null 2> /dev/null | grep "^c" | cut -d: -f1`
do
format -s -f /tmp/format.cmd $disk
echo "labeled $disk ....."
done
This document is aimed at providing a solid architectural and technical overview of VERITAS Volume Manager’s IP option, VERITAS Volume Replicator.

INTRODUCTION: REPLICATION AND DISASTER RECOVERY PLANNING

In today’s business environment, reliance on information processing systems continues to grow on an almost daily basis. Information systems, which once aided a company in doing business, have now become the business itself. As companies become more reliant on critical information systems, the potential disruption to the business due to a loss of data becomes even greater.

There are many threats that organizations face today when it comes to the reliability and viability of their data. Logical corruption, or loss of data due to threats such as viruses and software bugs, can be protected against by ensuring there is a viable copy of the data available at all times. Performing regularly scheduled backups of the organization's data typically protects against logical types of data loss. Another threat that may result in unavailable data is component failure. While most devices now build in redundancy, there are also technologies, such as application clustering, that can protect against the failure of a component while keeping applications available.

Just as the levels of protection for logical and component failures have grown, so has the reliance on the information systems being protected. Many companies now realize that logical and local protection is no longer enough to guarantee that the organization will remain accessible. Loss of access can stem from planned downtime such as complete site maintenance, from unplanned downtime such as power or cooling loss, from natural disasters such as fire and flooding, or from acts of terrorism or war. The loss of a complete data center facility would so greatly affect an organization's capability to continue to function that protection must be established at the data center level.

Many companies have implemented significant Disaster Recovery (DR) plans to protect against the complete loss of a facility. Complete plans are in place to recover voice and data capability in a remote location. One issue that is common among many DR plans is the information processing recovery plan. For many years companies have been taking regular data backups at the primary data center, then duplicating these tapes on a regular basis for shipment offsite to the DR facility. While a tape-based backup solution is the needed safety net for disaster recovery planning, there is still a need in the IT environment to provide higher levels of protection for critical data. While a tape backup approach may meet the needs of much of the data within an organization, there are many data types that cannot afford the levels of data loss inherent in a tape backup approach.

The first step in deciding which technologies are necessary for particular data types is to understand when, and whether, those technologies are needed. The key measures for disaster recovery technologies are the recovery point objective and the recovery time objective.

Recovery Point Objective (RPO) – The point in time to which application data must be recovered in order to resume business transactions.
Recovery Time Objective (RTO) – The maximum elapsed time allowed before the lack of a business function severely impacts an organization.

A complete disaster recovery plan is not delivered by any one technology, service, or vendor, but rather by a combination of products implemented to provide the needed RPO and RTO for an application. When analyzing a disaster recovery solution, many components must be implemented in order to guarantee application availability.

[Diagram: software recovery technologies mapped against RPO and RTO, with a disaster event at the center]

The diagram shown above outlines software technologies that map to a customer's RPO and RTO requirements. The burst in the middle represents a disaster. To the left of the burst is the recovery point objective, with the appropriate software technology based on business needs. For example, if a particular application can afford a day or more of data loss, then a tape backup approach is all that is needed for that application. However, if a day or more of data loss would cause substantial business impact, then replication technologies must be implemented in the IT environment to protect against substantial data loss. To the right of the burst is the recovery time objective. If a business can afford to take a day or more to resume normal business activity, then manual tape restore will satisfy its business needs. Organizations can improve on this RTO by using bare metal restore technologies that dramatically reduce the amount of time it takes to get a server up and running. However, if those technologies do not meet the application's RTO, then clustering technologies must be implemented in the IT environment to protect against substantial downtime.

UNDERSTANDING THE NEED FOR REPLICATION

Replication is a technology designed to maintain a duplicate data set on a completely independent storage system at a different geographical location. Replication differs from tape backup and restore methods because replication is completely automatic and far less labor intensive. In addition, replication technologies can be used to reduce the recovery point objective of critical applications.

Whether motivated by disaster, site failure, or a planned site migration, VERITAS’ replication technologies provide the ability to distribute data for seamless availability across sites. VERITAS Volume Manager provides remote mirroring capabilities natively over Fibre Channel protocols. For organizations that wish to replicate their data natively over a standard IP network, an optional capability of VERITAS Volume Manager called VERITAS Volume Replicator can reliably, efficiently and consistently replicate data to remote locations. VERITAS’ replication technologies provide a robust storage-independent disaster recovery solution when data loss and prolonged downtime cannot be tolerated.

REPLICATION MODES
The two main types of replication are synchronous and asynchronous. Both have their advantages and disadvantages and should be available options for the IT administrator. Each uses a different process to arrive at the same goal, and each deals somewhat differently with network conditions. The performance and effectiveness of both depend ultimately on business requirements such as how soon updates must be reflected at the target location. Performance is strongly determined by the available bandwidth, network latency, the number of participating servers, the amount of data to be replicated and the geographical distance between the hosts.

Synchronous Replication
Synchronous replication ensures that a write update has been posted to the secondary location(s) and the primary location before the write operation is acknowledged as complete at the application level. This way, in the event of a disaster at the primary location, the data recovered at the secondary location will be an exact copy of the data at the primary location. Synchronous replication produces exactly the same data at both the primary and secondary location(s), which means the RPO of applications using synchronous replication is zero. However, since the application transaction must travel to the secondary location(s) and back to the primary location before the application can continue with the next transaction, there will be some application performance impact. Synchronous replication is most effective in metropolitan area networks with application environments that require zero data loss and can afford some application performance impact. For all other applications, asynchronous replication should be a viable alternative.

There are many scenarios that could affect the performance of replication in synchronous mode including the amount of write activity on the system, the network pipe connecting the primary and secondary sites, and the distance between the two sites. A good rule of thumb to use is 3ms of latency for every 100 miles of distance between the primary and secondary systems. Most configurations that use synchronous replication have it set to change to asynchronous mode if the network link is lost between the primary and secondary site. This is so that the primary application is not affected by a network outage.
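
For example, by that rule of thumb a secondary site 1,000 miles away adds roughly 30ms of latency to every synchronous write, on top of the local write service time.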

Asynchronous Replication
Asynchronous replication eliminates the potential performance problems of synchronous methods. The secondary site may lag behind the primary site, typically by less than one minute, offering essentially real-time replication without the application performance impact. During asynchronous replication, application updates are written at the primary and queued for forwarding to each secondary location as network bandwidth allows. Unlike synchronous replication, the writing application does not suffer the application performance impact of replication and can function as if replication were not occurring. Asynchronous replication should be used by organizations that can afford minimal data loss but want to eliminate application performance impact, or organizations that would like to replicate data over a wide-area network. It can also be the right choice if the network bandwidth between the two sites is large enough to handle the average amount of data, but insufficient to handle the peak write activity.

Write order fidelity
Whatever mode you select, you should ensure that the data at the secondary site is never corrupted or inconsistent. The last thing you need is a non-recoverable replicated data set at the secondary location the very moment you need it most. The only way to ensure that the data is recoverable at the secondary location(s) is to ensure that the data arrives in the same order as it was written at the primary location. This is called write order fidelity. Without write order fidelity, no guarantee exists that a secondary will have consistent recoverable data. In a database environment, updates are made to both the log and data spaces of a database management system in a fixed sequence. The log and data space are usually in different volumes, and the data itself can be spread over several additional volumes. A well-designed replication solution needs to consistently safeguard write order fidelity. This may be accomplished by a logical grouping of data volumes so the order of updates in that group is preserved within and among all secondary copies of these volumes.

Replication solutions running at the hardware level typically lack the ability to maintain write order fidelity when running in asynchronous mode. This is due to the lack of a persistent queue of writes that have not yet been sent to the secondary. If a user of hardware replication wishes to avoid the application performance impact imposed by synchronous replication, they lose recoverability at the remote site, rendering the remote copy essentially useless. Therefore, maintaining write order fidelity when using replication technologies should be an absolute requirement to ensure the recoverability of data at a remote location.

TECHNICAL REQUIREMENTS OF A REPLICATION SOLUTION

Architecturally, a complete replication solution must provide a copy of all data at the primary and secondary locations, including database files as well as any other necessary binary and control files, and the replication technology must ensure that the data is accurate and recoverable.

The replication solution must be capable of being configured to support a secondary site over any distance. In today's environment, organizations must be allowed to use their current infrastructure, including current data centers, regardless of distance. The replication solution must operate at any distance, whether the data centers are a few kilometers apart or thousands of kilometers apart, without adding undue cost or complexity. This means the replication technology must provide asynchronous support, over a long distance, without additional high-cost items such as communication converters or additional disk space for staging data. The replication solution must be flexible enough to allow the customer to change the configuration of the data sets that are being replicated. This could include having volumes that are not replicated because they hold temp files or files that wouldn't be needed in a disaster. There should also be the ability to grow and shrink the volumes with no application or customer downtime. The solution should also allow testing at the remote site in order to validate that the data at the secondary is recoverable.

VERITAS REPLICATION AND REMOTE MIRRORING OVERVIEW

VERITAS replication and remote mirroring technologies can dramatically speed recovery time and eliminate data loss by making current data available immediately at an alternate location. Organizations can replicate or mirror data via a storage area network or over any IP network in order to meet their disaster recovery needs. Unlike proprietary, inflexible hardware approaches, VERITAS' replication and remote mirroring technologies are not dependent on any specific storage hardware platform. For example, replication can occur between storage arrays from the same vendor, regardless of array model or size, or between different storage vendors' arrays. The only requirement is that there are matching Volume Manager volume sizes on each side.

VERITAS’ software-based replication provides a reliable, efficient and cost-effective solution for geographically mirroring data sets. It also has full database management system support, including DB2, Exchange, Oracle, SQL Server and Sybase.

VERITAS VOLUME MANAGER
VERITAS Volume Manager is the industry leader in storage virtualization. It provides an easy-to-use, online storage management tool for heterogeneous enterprise environments. Organizations can extend their storage management functionality with Volume Manager's remote mirroring capability in order to deliver a metropolitan area disaster recovery solution. VERITAS Volume Manager can synchronously mirror data natively over storage protocols such as Fibre Channel, which makes it an ideal solution for disaster recovery within a metropolitan area network. Customers wishing to implement a disaster recovery solution over Fibre Channel can create "just another mirror" of their data over an extended distance, using VERITAS Volume Manager, to be made available should a complete site outage occur. This solution allows for using different storage arrays at the two sites and is seamless to the storage administrators and users of the data.

VERITAS VOLUME REPLICATOR
For organizations who wish to replicate data natively over an IP network, VERITAS Volume Manager has an optional capability called Volume Replicator. VERITAS Volume Replicator reliably, efficiently and consistently replicates data to remote locations over an IP network for maximum business continuity, removing the need for expensive proprietary network hardware, and the need to have the exact same storage hardware at every site.

Since Volume Replicator (VVR) is just an optional capability to VERITAS Volume Manager (VxVM), VVR allows VxVM volumes on one system to be exactly replicated to identically sized volumes on another system. For example, an Oracle database may use several different volumes for various tablespaces, redo logs, archived redo logs, indexes and other storage. Each component is typically stored in a VxVM volume, or multiple volumes. The database may also use data stored in VERITAS File Systems, created in these volumes for better manageability. VVR can provide an exact duplicate of these volumes to another system, at another site and since the replication occurs at the volume level VVR can replicate data between any storage hardware arrays. VERITAS Volume Replicator can scale to support up to 32 secondary data storage sites and Volume Replicator elegantly handles one-to-one, one-to-many and many-to-one data replication configurations.
There are four main components added to the VERITAS Volume Manager code base to provide VERITAS Volume Replicator: replicated volume groups (RVG), the storage replicator log (SRL), RLINKs, and data change maps (DCM).

[Diagram: VERITAS Volume Replicator architecture, showing the RVG, SRL, RLINKs and DCM]

The above diagram shows the architecture of VERITAS Volume Replicator. VERITAS Volume Replicator is based on the same code base as VERITAS Volume Manager and adds the components described below: the RLINK, the SRL, the RVG and the DCM.

Replicated Volume Groups
VVR extends the concept of a disk group (found in VERITAS Volume Manager) to provide the concept of a replicated volume group (RVG). An RVG is a subset of volumes within a given VxVM disk group configured for replication to one or more secondary systems. An RVG can contain one or more data volumes in the disk group, up to and including all data volumes, but cannot span multiple disk groups. Multiple RVGs can be configured inside one disk group, but not the other way around. Volumes that are associated with an RVG and that contain application data are called replicated data volumes. The concept of an RVG gives the user the ability to pick and choose what data is replicated to the secondary site. Therefore, organizations need only replicate their mission-critical data to a secondary location, which saves money on replication implementations because far less storage is needed at the secondary site. In addition, organizations that pay for bandwidth usage can save on bandwidth costs because they are only replicating data that has stringent recovery point objectives. All other data can be protected by simply using a tape backup approach.

The data volumes in the RVG are under the control of an application, such as a database management system, that requires write-order fidelity among the updates to the volumes. Write ordering is strictly maintained within an RVG during replication to ensure that each remote volume is always consistent, both internally and with all other volumes of the group. At the simplest level, VVR exists within the VxVM code base and has the capability to intercept any write destined for a VxVM volume within an RVG and replicate the write, in the correct order, to designated secondaries before the write is passed on to the actual data volumes.

Data is replicated from a primary RVG to a secondary RVG. The primary RVG is in use by an application, while the secondary RVG receives replication and writes to local disk. The concept of primary and secondary is per RVG, not per system. A system can simultaneously be a primary RVG for some RVGs and a secondary RVG for others. The RVG also contains the storage replicator log (SRL) and replication link (RLINK), explained in the following sections.

Storage Replicator Log
All data writes destined for volumes configured for replication are first persistently queued in a log called the Storage Replicator Log. VVR implements the SRL at the primary side to store all changes for transmission to the secondary(s). The SRL is a VxVM volume configured as part of an RVG. The SRL gives VVR the ability to associate writes to specific volumes within the replicated configuration in a specific order, maintaining write order fidelity at the secondary. All writes sent to the VxVM volume layer, whether from an application such as a database writing directly to storage, or an application accessing storage via a file system are faithfully replicated in application write order to the secondary.

When implementing asynchronous replication, data integrity must be guaranteed at the remote site; that is, write order fidelity must be preserved. This essentially means that writes applied to the secondary storage must occur in the exact same order they were applied at the primary. Asynchronous replication without this capability compromises data consistency at the disaster recovery site and may jeopardize the recoverability of the data. The SRL within Volume Replicator tracks writes in the correct order and guarantees that the data will arrive at the secondary site in that same order, whether operating in synchronous or asynchronous mode.

The SRL can also be used with synchronous or asynchronous replication to protect against network outages. Should a network outage occur when replicating in synchronous mode, Volume Replicator can automatically switch to asynchronous mode and the writes can be stored in the SRL for later transmission to secondaries when the network is restored. This functionality protects the primary location from being impacted should a network outage occur.

RLINKs
An RLINK is a VVR replication link to a secondary RVG. Each RLINK on a primary RVG represents the communication link from the primary RVG to a corresponding secondary RVG, via an IP connection. RLINKs are configured to communicate between specific host names/IP addresses and can support both TCP and UDP communication protocols between systems.

Data Change Maps
Data Change Maps (DCM) are used to mark sections of volumes on the primary that have changed during extended network outages, in order to minimize the amount of data that must be synchronized to the secondary site once the outage is over.
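
To make these four components concrete, here is a minimal sketch of how a replicated configuration is typically built with the vradmin utility: create the SRL volume, create the primary RVG over the data volume(s) and SRL, add a secondary (which creates the rlink pair), and start replication with automatic synchronization. The disk group hrdg, RVG hr_rvg, data volume hr_dv01, SRL hr_srl, and host names below are hypothetical, and exact syntax may vary by VVR release:

# vxassist -g hrdg make hr_srl 1g
# vradmin -g hrdg createpri hr_rvg hr_dv01 hr_srl
# vradmin -g hrdg addsec hr_rvg pri_host sec_host
# vradmin -g hrdg -a startrep hr_rvg sec_host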

VOLUME REPLICATOR TECHNICAL DETAILS

OPERATIONAL MODES AND DATA FLOW: SYNCHRONOUS
In synchronous mode, all data writes are first posted to the SRL and then sent to the secondary location(s). The posting of the data to the actual data volumes also happens in this time period. When the secondary location(s) receives the data write, an acknowledgement is sent back to the primary location, and the write is then acknowledged to the application as complete. Therefore, synchronous replication should be used in environments that cannot afford any data loss and can afford some application performance impact. Overall performance in synchronous mode is governed by the amount of time it takes to write to the SRL plus the round trip time to send data to the secondary and receive acknowledgement.

The secondary acknowledges receipt as soon as the full write transaction (as sent by the application on the primary) is received into VVR kernel memory space. This removes actual writing to the secondary data volumes from the application latency. The primary tracks these writes in its SRL until a second acknowledgement is received from the secondary, signalling that the data has been written to physical storage. Both acknowledgements have an associated timeout so if a packet is not acknowledged, VVR will resend that packet.

In order to maximize performance, VVR does not wait for data to be written at the secondary, only received. This improves application performance. But it tracks all acknowledged, uncommitted transactions and can replay any necessary transactions if the secondary were to crash prior to actually writing its data to the physical storage.

Synchronous mode has two possible rlink settings, "fail" and "override". These settings determine the behavior of VVR when the network connection to a secondary site is lost. With synchronous=fail, writes are returned as failed to the calling application if contact is lost with the secondary. This is used in rare situations where the primary and secondary storage must never differ by even one write. It is typically not used, as it causes an application failure at the primary side when anything happens to the secondary or the interconnecting network. The synchronous=override setting is the most common. It keeps replication in synchronous mode unless contact with the secondary is lost, then shifts to asynchronous mode and begins tracking the writes in the SRL. This allows the primary application to continue to run and provide service to customers while the backup capability is restored.
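
With vradmin, the mode is set per secondary; a sketch using the same hypothetical names as above:

# vradmin -g hrdg set hr_rvg sec_host synchronous=override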

OPERATIONAL MODES AND DATA FLOW: ASYNCHRONOUS
In asynchronous mode, an application data write is first placed in the SRL, then immediately acknowledged to the calling application at the primary site. Data is then sent as soon as possible to the secondary location, based on available bandwidth. Therefore, asynchronous replication will not have any impact on the performance of the application, but the secondary location may be a few write requests behind the primary site should a disaster occur. Asynchronous replication should be used in environments where minimal data loss (typically measured in seconds for adequately sized networks) can be tolerated but where the application cannot afford the performance impact of synchronous replication. In asynchronous mode, if a disaster does occur at the primary site, the secondary location will be able to recover, but it may come up a few write requests behind the state of the primary systems.

Using Asynchronous Replication to Decouple Application Latency
One of the most compelling features of VVR in real-world environments is its ability to maintain full consistency at the secondary site while operating in asynchronous mode. Maintaining write order fidelity in asynchronous mode allows VVR to truly make use of the performance benefits available from asynchronous replication. By providing a high-bandwidth connection, customers can completely remove the latency penalty from replication and still maintain near up-to-the-second data at the remote site. At the primary site, the application is acknowledged as soon as data is placed in the SRL. The application can continue to function as if replication is not occurring on the host. The data is then sent out almost instantaneously over the network to the secondary site. With adequate bandwidth, the SRL will not fill, so the actual data outstanding between primary and secondary is realistically whatever data is currently on the wire. This means a company can have near up-to-the-second replication, at an arbitrary distance, with no application penalty.

Latency Protection
Latency protection allows administrators to define how far a secondary is allowed to fall behind a primary in asynchronous mode. It provides automatic control of excessive lag between primary and secondary nodes. Latency protection gives the administrator the option to set the maximum number of updates that are allowed to accumulate in the SRL, referred to as the latency_high_mark. When this number is reached, all update activity is delayed until the update backlog has dropped to a preset level, the latency_low_mark. Latency protection ensures that the number of recent updates that could be lost in a disaster does not exceed a predetermined maximum. It is typically used to prevent the secondary from falling too far behind the primary, in order to meet the recovery point objectives of the organization.
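
These thresholds are rlink attributes; as a sketch with the hypothetical names used earlier, the following would enable latency protection and cap the backlog at 10,000 outstanding updates, draining to 9,500 before writes resume:

# vradmin -g hrdg set hr_rvg sec_host latencyprot=override
# vradmin -g hrdg set hr_rvg sec_host latency_high_mark=10000
# vradmin -g hrdg set hr_rvg sec_host latency_low_mark=9500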

No Distance Limitations
Because VVR is truly unique in its ability to replicate data either synchronously or asynchronously over any standard IP network, there are no distance limitations. This allows organizations to utilize data centers that are already in place regardless of the distance between the locations.

INITIALIZING SECONDARY SYSTEMS FOR REPLICATION
In order to begin replication of changed blocks, a replication solution must first begin with a known duplicate data set at each site. VVR offers several ways to get a secondary site up and running in order to begin the process of replication.

Empty
The simplest method to begin replication is to start with completely empty systems at each side. This can be done if VVR is installed while initially constructing a data center, prior to production. For VVR to use an empty data set, both sides must be identically empty.

Over the Wire (Autosync)
Over-the-wire initialization essentially uses VVR to move all data from the primary to the secondary over the network connection. Overall this is a very simple process; however, with larger data sets it can take a prohibitively long time, especially if the primary is active while attempting to initialize the secondary. In addition, in situations where organizations lease bandwidth, this process can become fairly expensive. The items that must be taken into consideration are:

• The network bandwidth

• The amount of write activity on the system

• The size of the SRL

This is the optimum way for doing the initial synchronization, because it can be repeated at any given time, if the solution is designed to allow for this process.

Local Mirroring
Local mirroring is an option for very large data sets. In this method, the data storage array for the secondary site is initially placed at the primary site and Volume Manager is used to mirror the data between the two storage devices. Once the mirror is complete, the Volume Manager plex is split off and the array is shipped to the secondary site. This mode will allow large amounts of data to be initialized at SAN speeds, as well as allowing subsequent data written during the shipping period to be spooled to the primary SRL. However, this can be fairly expensive if the array must be shipped over a long distance.

Tape Backup and Restore Initialization
The final option for initialization is through the use of a tape backup and restore. This is a unique feature of VVR and is not an available option for other replication technologies on the market today. It allows huge data sets to be synchronized using tape technology and immediately begin replication.

Checkpoint initialization is a hot backup of the primary side, with the SRL providing the map of what was changed during the backup. When a checkpoint initialization is started, a "check-start" pointer is placed in the SRL. A full block-level backup is then taken of the primary volumes. When complete, a "check-end" pointer is placed into the SRL. The data written between the check-start and check-end pointers represents data that was modified while the backup was taking place. This constitutes a hot backup.

The tapes are then transported to the secondary site and the data is restored to the secondary systems using the tapes. When the tape load is complete, the secondary site is connected to the primary site with a “checkpoint attach” of the rlink. The primary will then forward any data that had been written during the backup (that data between the check-start and check-end). Once this data is written to the secondary, the secondary is an exact duplicate of the primary, at the time the backup completed. At this point the secondary is consistent, and simply out of date. The SRL is then replayed to bring the secondary up to date. Therefore, tape backup and restore initialization is ideal for organizations that wish to perform an initialization of a large data set or for environments where their secondary site is located over longer distances.
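
As a sketch (hypothetical names again; verify the exact syntax against your VVR release), checkpoint initialization is driven with vxrvg and vradmin along these lines:

# vxrvg -g hrdg -c ckpt1 checkstart hr_rvg
(run the block-level backup of the primary data volumes)
# vxrvg -g hrdg checkend hr_rvg
(restore the tapes at the secondary, then attach using the checkpoint)
# vradmin -g hrdg -c ckpt1 startrep hr_rvg sec_host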

RECOVERY AFTER PROBLEMS

VVR is very robust in terms of tolerating outages of the network as well as secondary systems. The SRL provides the key to recovering from outages, while maintaining a consistent secondary.

SECONDARY/NETWORK OUTAGE
VVR can be configured to handle network outages. An outage of a secondary system and an outage of the network to the secondary are identical as far as VVR is concerned. When the secondary is no longer available, as evidenced by loss of the VVR heartbeat on the rlink, the primary will simply track the data write changes in the SRL. When the secondary is repaired or the network problems are resolved, the SRL sends all the changes to the secondary location(s). In addition, VVR can be configured to stop operations at the primary site should the network fail. In this case, the primary will not allow writes until network connectivity is re-established or the secondary is available for write activity.

SECONDARY FAILURE
A failure of the secondary would be better defined as a failure of the secondary storage, resulting in a data loss on the secondary side.

There are several methods to recover from a secondary loss. The first is to rebuild the secondary storage and re-initialize using one of the methods discussed above. The second method is to take regular backups of the secondary environment using a VVR feature called “Secondary Checkpoints”. Secondary Checkpoints allow a pointer to be placed in the primary SRL to designate a location where a backup was last taken on the secondary. Assuming the primary has a large enough SRL and secondary backups are routinely taken, a failure at the secondary can be repaired by reloading the last backup and rolling the SRL forward from the last secondary checkpoint.

PRIMARY FAILURE
A failure of the primary can be broken into several possible problems. A complete failure of the primary site is handled by promotion of a secondary to a primary, effecting a disaster recovery takeover. This is exactly the scenario VVR was built for.

For primary outages, such as server failure, or server panic, the customer has the choice to wait for the primary to recover, or shift operations to a secondary server or location.

For situations involving actual data loss at the primary, the customer can shift operations to a secondary, or restore data on the primary.

VOLUME REPLICATOR ROLE CHANGES
Role changes are actions to promote a system that was previously a secondary to a primary. This can be due to a complete site outage where the primary site is not available, or simply a role reversal to allow a secondary site to take over operations. For simple failures of a primary or secondary server that is a member of a VCS cluster, VCS can move the primary or secondary to a new server without a role change. For example, imagine a two-node cluster at Site A, acting as the VVR primary, and a two-node cluster at Site B acting as the VVR secondary. If a node in Site A dies, the VVR primary will simply move to the second node under VCS control. The same is true of a single system failure at the secondary site: VCS would restart the VVR secondary on the opposite system. For situations such as a complete failure of Site A, the VVR secondary at Site B can be promoted to a primary and applications started to access the underlying data in read-write mode. This is an example of using VVR to facilitate disaster tolerance for a data center. To automate this entire procedure, VERITAS offers VERITAS Global Cluster Manager with the Disaster Recovery option to monitor and control applications at separate sites connected by replication.

Primary Migration
A migration of Primary to Secondary systems is a controlled shift of Primary responsibility for an RVG. Data is flushed from the existing primary SRL if necessary, and then control is handed to the existing secondary. The original primary is demoted to a secondary, and the original secondary is promoted to a primary. This is a very simple operation carried out with one or two commands and allows rapid shift of replication primary between sites. There is zero chance for data loss, as all data outstanding at the primary site is sent to the secondary prior to allowing the migration to take place.
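
With vradmin, a controlled migration is a single command run at the primary (hypothetical names as before):

# vradmin -g hrdg migrate hr_rvg sec_host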

Secondary Takeover
A secondary takeover is a somewhat less graceful event, in that the secondary is promoted to a primary without a corresponding demotion of the original primary. These types of migrations typically happen in the event of a complete site outage without any prior notification. When a takeover is accomplished, the secondary is brought up in read-write mode in the exact state it was in at the time of takeover. Any data written by the primary in asynchronous mode and not yet sent to the secondary is not available. After a secondary takeover, the original primary must be re-synchronized to be an exact duplicate of the new primary.
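
A takeover, by contrast, is initiated from the surviving secondary (hypothetical names as before):

# vradmin -g hrdg takeover hr_rvg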

Returning to the Primary Location
After a migration has occurred to a secondary location, returning to the original primary can be accomplished using easy failback, a feature introduced in VVR 3.2 that allows for rapid resynchronization of an old primary after a secondary takeover. This removes the need for a complete over-the-wire synchronization of all the data or a differential-based synchronization.

When a secondary is promoted in a takeover operation, it immediately begins utilizing the Data Change Map (DCM) associated with each volume. This tracks where data has been written on the new primary. When the old primary comes back online and a failback operation is requested, the new primary communicates with the old primary and determines any data blocks that had been changed on the old primary but were not committed on the old secondary/new primary. These block locations are added to the Data Change Maps on the new primary. This means that any blocks written on the new primary, plus any blocks that were different on the old primary, will be sent from the new primary to the old primary. This results in the old primary being made an exact duplicate of the new primary in a very short time. It also means that any data that was written on the old primary but never replicated is permanently overwritten; VVR makes no attempt at "merging" differences between systems.
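
In VVR releases that support easy failback, the resynchronization of the old primary is typically triggered from the new primary with a single command (hypothetical names; confirm availability in your release):

# vradmin -g hrdg fbsync hr_rvg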

USING THE SECONDARY SYSTEM

FlashSnap is an option to VERITAS Foundation Suite and it allows a complete mirror break-off (snapshot) of a VVR volume to be taken and mounted for operation. The benefit of this option to VERITAS Volume Manager is that it allows organizations to access the data at the secondary sites for off host processing, such as reporting and backups.

USING IN BAND CONTROL MESSAGES TO CONTROL FLASHSNAP
VVR provides an advanced messaging capability to control specific events at a secondary from the primary. It does this with In Band Control (IBC) messages sent from the primary to the secondary. IBC messages are placed in the SRL like any other write traffic and are processed in SRL order. For example, consider a database running at the primary site, with VVR in asynchronous mode. The administrator places the database in hot backup mode to perform an onsite backup. An IBC can be placed in the SRL at this time to signal the secondary when to snap off a mirror, knowing that the IBC will not be received until all data ahead of it in the SRL (right to the time of shifting to hot backup) has been received. As soon as the IBC is entered into the SRL, the database can be taken out of hot backup mode at the primary site. This allows operations at the primary and secondary sites to be coordinated to occur at the exact same time in terms of data consistency.

VOLUME REPLICATOR IN THE CUSTOMER ENVIRONMENT

EFFECTS OF VOLUME REPLICATOR ON HOST RESOURCES AND APPLICATION PERFORMANCE

VVR typically has very little effect on host CPU and memory resources; the impact has been measured in the 2-5% range, similar to the CPU impact of running a simple find command in UNIX. VVR also converts all write activity to sequential writes (to the SRL). This is the fastest write pattern in most cases, and some customers have observed an increase in write performance using VVR when compared to not using replication within their environment.

UNDERSTANDING BANDWIDTH NEEDS USING VRADVISOR
In order to replicate data to another location, bandwidth must be available. For environments utilizing Fibre Channel connectivity VERITAS Volume Manager can be used to mirror the data between the two locations. For environments with IP connectivity, VERITAS Volume Replicator should be used. VERITAS Volume Replicator does not specifically require a network dedicated to itself, is resilient to temporary network outages and includes error-handling capabilities to alert the administrator of critical events.

A very common question is “How much bandwidth do I need?” The answer is that enough bandwidth must be provided to move all write traffic to each secondary site within a given time period. For example, if 10 Gigabytes of data are written in a 24-hour period, then enough bandwidth must be provided to move 10 Gigabytes of data in 24 hours. If the configuration is set to synchronous mode, then attention should be paid to peak write activity, which can impact application performance if the network bandwidth is not sufficient for the peak traffic.
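
As a rough back-of-the-envelope check of the example above:

10 GB/day = 10,240 MB / 86,400 seconds ≈ 0.12 MB/s ≈ 1 Mbps sustained

So a T1 line (1.544 Mbps) could roughly keep pace with the average write rate, but it would fall behind during write peaks, with the excess spooling into the SRL until the link catches up.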

The SRL can be used to spool data during periods when write traffic exceeds replication bandwidth. To assist in properly determining bandwidth and SRL size, the VERITAS Volume Replicator Advisor (VRAdvisor) utility is available. VRAdvisor helps determine the optimal size of the SRL by taking into account the rate of data writes over a given time period, the network bandwidth, and outages of different durations.

Collection of Data
VRAdvisor collects sample data-write statistics based on various parameters. The data is collected over a period of time into a file that you specify. If VxVM is installed, the vxstat command is used to collect the data; otherwise, the iostat command is used. After the data change rate has been collected, the data can be analyzed to determine the optimal size of the SRL based on the parameters you supplied. The result is the optimum SRL size for immediate requirements. You can also calculate the size of the SRL based on future requirements, growth, and any other factors you know of that may affect it.
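
As a minimal sketch of what the collection phase amounts to on a VxVM host (datadg and the output path are placeholder names), vxstat can sample write statistics at a fixed interval, here every 120 seconds for 24 hours (720 samples):

# vxstat -g datadg -i 120 -c 720 > /var/tmp/datadg_writes.out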

Analysis Results
The analysis results screen displays the results generated from the inputs you specified, using two graphs. The x-axis of both graphs shows the data-write duration values based on the information collected on the system. The y-axis of the top graph shows the SRL fill rate over the data collection period; in addition, the peak SRL fill-up size is indicated against a maximum outage window, displayed in yellow, which represents a worst-case scenario. The second graph shows the write rates at different points throughout the collection period.

VOLUME REPLICATOR INTEGRATION WITH OTHER VERITAS PRODUCTS
Volume Replicator is a component of an overall high availability and disaster recovery solution. It fits very well into the overall high availability infrastructure provided by VERITAS Cluster Server and VERITAS Global Cluster Manager.

VERITAS CLUSTER SERVER AND VERITAS GLOBAL CLUSTER MANAGER
The full integration of VERITAS Cluster Server, VERITAS Volume Replicator, and VERITAS Global Cluster Manager provides a powerful disaster recovery solution. VERITAS Cluster Server handles local availability issues, VERITAS Volume Replicator replicates critical data to a remote site, and VERITAS Global Cluster Manager monitors and manages the clusters at each site. In the event of a site failure or a complete failure of applications at the primary site, Global Cluster Manager will control the shift of replication roles to the secondary site, bring up critical applications, and redirect client traffic with a single command or mouse click.

VERITAS DATABASE EDITIONS
VERITAS Database Editions offers raw device performance with the manageability of the VERITAS File System, online administration of storage, and the flexibility of storage hardware independence. Database Editions can be used within the local environment to maximize the performance of the database while Volume Replicator can be used to replicate data to the secondary site for disaster recovery protection.

VERITAS NETBACKUP
In order to have a complete disaster recovery solution, every environment should be backed up on a regular basis using VERITAS NetBackup. By combining VERITAS NetBackup and VERITAS Volume Replicator, organizations can be assured that their data is protected.

VERITAS REPLICATION CAPABILITIES SUMMARY
The following sections summarize the capabilities of VVR.

STORAGE ARCHITECTURE INDEPENDENCE
VERITAS Volume Replicator replicates between any major hardware platforms, eliminating vendor-specific storage limitations. For example, customers can replicate between a single vendor's identical arrays, between a single vendor's dissimilar arrays, or between two different vendors' arrays. The only architectural restriction is that volume sizes must match at each end: in order to replicate a 200 Gigabyte volume, the customer must create a 200 Gigabyte volume on the primary and on each secondary. This allows the customer to build a secondary site with older or less expensive hardware, and to replicate only critical data, chosen on a per-volume basis, to the secondary site, saving on bandwidth and storage costs. In addition, VVR provides the flexibility to change the storage configuration as data grows, shrinks, changes, or is moved, without impacting replication.

MAINTAINING WRITE ORDER FIDELITY
Volume Replicator maintains write order fidelity, even in asynchronous mode to guarantee data consistency on the secondary. This is critical to providing a complete, consistent copy of data at a remote site, without requiring synchronous replication.

HIGH PERFORMING REPLICATION TECHNOLOGIES
Volume Manager and Volume Replicator are high-performing replication technologies. Third-party performance testing has shown VERITAS to be up to 72% faster than the leading hardware replication vendor on the market today.

NATIVE REPLICATION OVER FIBRE CHANNEL AND IP NETWORKS
VERITAS technologies can replicate over Fibre Channel and IP networks natively. Volume Manager can replicate over Fibre Channel and Volume Replicator can replicate over IP networks without the need for any expensive specialized networking devices. In addition, native replication over IP allows organizations to replicate data over any distance.

SCALABLE
Volume Replicator can scale up to 31 separate locations for many-to-one and one-to-many replication scenarios.

INITIALIZATION OPTIONS
Volume Replicator can assist in getting your disaster recovery site up and running quickly. There are three initialization options available with Volume Replicator. The first option sends all of the data over the wire. The second option uses local mirroring between arrays, after which an array is shipped to the disaster recovery site. The third option, unique to Volume Replicator, combines replication and backup: the organization performs a normal backup at the primary site and inserts a checkpoint, the tapes are sent to the disaster recovery site, and a tape restore is performed there. Only the data that has changed since the checkpoint was inserted is sent over the wire. This allows organizations to get up and running without sending large datasets over the wire or paying expensive shipping costs for storage arrays. All three operations can be performed completely online, as sketched below.
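
A rough sketch of the third (backup and checkpoint) method, assuming a primary RVG datarvg in disk group datadg and an RLINK named rlk_to_dr; exact steps vary by release. On the primary, bracket the backup with a named checkpoint:

# vxrvg -g datadg -c ckpt1 checkstart datarvg
... perform the full backup of the data volumes ...
# vxrvg -g datadg checkend datarvg

After restoring the tapes at the disaster recovery site, attach the RLINK from the checkpoint so that only changes written since the checkpoint are sent over the wire:

# vxrlink -g datadg -c ckpt1 att rlk_to_dr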

SUMMARY
VERITAS Volume Replicator can effectively and efficiently replicate data to another location in order to provide protection from disaster scenarios. VERITAS Volume Replicator allows organizations to replicate their data between any storage devices, over a standard IP connection, and across any distance for the ultimate in disaster recovery protection.
Posted by JAUGHN
This entry is mainly for testing the different parameters in this template.

Sample text below....

Lesson 1: (using class:post_heading1)

A man is getting into the shower (post_text)
just as his wife is finishing up
her shower, when the doorbell rings.

The wife quickly wraps herself in
a towel and runs downstairs.

When she opens the door,
there stands Bob,
the next-door neighbour.

Before she says a word, Bob says,
"I'll give you $800 to drop that towel."

After thinking for a moment,
the woman drops her towel
and stands naked in front of Bob,
after a few seconds,
Bob hands her $800 and leaves.

The woman wraps back up in
the towel and goes back upstairs.
When she gets to the bathroom,
her husband asks, "Who was that?"
"It was Bob the next door neighbour,"
she replies.

"Great," the husband says,
"did he say anything about
the $800 he owes me?"
Posted by JAUGHN