Friday, November 2, 2007, 4:36 PM
Here’s one of the VVR projects I designed and implemented. I documented the whole process so I could use it as a reference for future projects.
The document has a detailed implementation (with screenshots). The changes were done on both primary and secondary sites. Downtime was not necessary but the customer decided to shut down the apps and database.
This implementation involves VCS, CCA, and snapshots. CCA is not discussed in this document.
Project Description
As part of the disaster recovery project, Veritas Volume Replicator (VVR) will be implemented on the EMM servers to handle replication between the Phoenix servers and the DR servers in Minneapolis. VVR will run on top of Veritas Cluster Server (VCS), which is already installed and running on the servers. Replicated volumes at the secondary site will be configured with snapshot volumes.
Server Configurations
E3 (Clustered)
- SUNSRV01 (Solaris 8, SunFire V440)
- SUNSRV02 (Solaris 8, SunFire V440)
DR (Clustered)
- SUNSRV01-DR (Solaris 8, SunFire V440)
- SUNSRV02-DR (Solaris 8, SunFire V440)
Network Configurations
IP Address
- SUNSRV01
- 10.10.231.130 sunsrv01 Host IP
- 10.11.196.192 sunsrv01-DRN E3/DR Link IP
- SUNSRV02
- 10.10.231.131 sunsrv02 Host IP
- 10.11.196.193 sunsrv02-DRN E3/DR Link IP
- SUNSRV01-DR
- 10.10.231.130 sunsrv01-dr Host IP
- 10.12.94.192 sunsrv01dr-DRN E3/DR Link IP
- SUNSRV02-DR
- 10.10.231.131 sunsrv02-dr Host IP
- 10.12.94.193 sunsrv02dr-DRN E3/DR Link IP
VVR Config
- RVG db2dg_rvg
- SRL db2dg_srl
- RLINKs rlk_db2instdr-vipvr_db2dg_rvg (E3)
rlk_db2inste3-vipvr_db2dg_rvg (DR)
Virtual IPs
- SUNSRV01/SUNSRV02
- Application 10.10.231.191
- E3/DR Link 10.11.196.191
- SUNSRV01-DR/SUNSRV02-DR
- Application 10.10.231.191
- E3/DR Link 10.12.94.191
VCS Heartbeats
- SUNSRV01 ce0/ce4
- SUNSRV02 ce0/ce4
- SUNSRV01-DR ce4
- SUNSRV02-DR ce4
Essential Terminology
Data Change Map (DCM) - An object containing a bitmap that can optionally be associated with a data volume on the Primary RVG. The bits represent regions of data that differ between the Primary and the Secondary. DCMs mark sections of primary volumes that change during extended network outages, minimizing the amount of data that must be synchronized to the secondary site once the outage ends.
Disk Change Object (DCO) - DCO volumes are used by Volume Manager FastResync (FR). FR is used to quickly resynchronize mirrored volumes that have been temporarily split and rejoined. FR works by copying only changes to the newly reattached volume using FR logging. This process reduces the time required to rejoin a split mirror, and requires less processing power than full mirror resynchronization without logging.
Data Volume - Volumes that are associated with an RVG and contain application data.
Primary Node - The node on which the primary RVG resides.
Replication Link (RLINK) - RLINKs represent the communication link to the counterpart of the RVG on another node. At the Primary node, a replicated volume object has one RLINK for each of its network mirrors. On the Secondary node, a replicated volume has a single RLINK object that links it to its Primary. Each RLINK on a Primary RVG represents the communication link from the Primary RVG to a corresponding Secondary RVG, via an IP connection.
Replicated Data Set (RDS) - The group of the RVG on a Primary and its corresponding Secondary hosts.
Replicated Volume Group (RVG) - A component of VVR that is made up of a set of data volumes, one or more RLINKs and an SRL. An RVG is a subset of volumes within a given VxVM disk group configured for replication to one or more secondary systems. Data is replicated from a primary RVG to a secondary RVG. The primary RVG is in use by an application, while the secondary RVG receives replication and writes to local disk. The concept of primary and secondary is per RVG, not per system. A system can simultaneously be a primary RVG for some RVGs and secondary RVG for others. The RVG also contains the storage replicator log (SRL) and replication link (RLINK).
Storage Replication Log (SRL) - Writes to the Primary RVG are first saved in the SRL on the Primary side. The SRL is used to aid in recovery, as well as to buffer writes when the system operates in asynchronous mode. Each write to a data volume in the RVG generates two write requests: one to the SRL and one to the data volume itself.
Secondary Node - The node to which the primary RVG replicates.
Volumes to be Replicated
These are the volumes that will be configured for replication. These volumes belong to the same diskgroup db2dg.
v  bcp   -  ENABLED  ACTIVE  585105408   SELECT  -  fsgen
v  db    -  ENABLED  ACTIVE  1172343808  SELECT  -  fsgen
v  dba   -  ENABLED  ACTIVE  98566144    SELECT  -  fsgen
v  db2   -  ENABLED  ACTIVE  8388608     SELECT  -  fsgen
v  lg1   -  ENABLED  ACTIVE  396361728   SELECT  -  fsgen
v  tp01  -  ENABLED  ACTIVE  192937984   SELECT  -  fsgen
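As a quick cross-check, the sector counts above can be totaled to estimate how much data the initial synchronization will have to move. VxVM reports lengths in 512-byte sectors, so the arithmetic is simple shell:

```shell
# Sum the replicated data-volume sizes from the vxprint listing above.
# Lengths are 512-byte sectors; divide by 2 for KB, then by 1024^2 for GB.
TOTAL=$(( 585105408 + 1172343808 + 98566144 + 8388608 + 396361728 + 192937984 ))
echo "${TOTAL} sectors = $(( TOTAL / 2 )) KB (~$(( TOTAL / 2 / 1024 / 1024 )) GB)"
```

This works out to roughly 1.2 TB, which lines up with the ~1226 million Kbytes reported as remaining when autosync first starts later in this document.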
Storage Replication Log (SRL)
All data writes destined for volumes configured for replication are first persistently queued in a log called the Storage Replicator Log. VVR implements the SRL at the primary side to store all changes for transmission to the Secondaries. The SRL is a VxVM volume configured as part of an RVG. The SRL gives VVR the ability to apply writes to specific volumes within the replicated configuration in a specific order, maintaining write-order fidelity at the secondary. All writes sent to the VxVM volume layer, whether from an application such as a database writing directly to storage or from an application accessing storage via a file system, are faithfully replicated in application write order to the secondary.
v db2dg_srl - ENABLED ACTIVE 856350720 SELECT - fsgen
Data Change Map (DCM)
Data Change Maps (DCM) are used to mark sections of volumes on the primary that have changed during extended network outages in order to minimize the amount of data that must be synchronized to the secondary site during the outage. Only one disk will be used for DCM logs at this time.
In both primary and secondary sites, the same disk device name is used.
GROUP  DISK     DEVICE   TAG      OFFSET  LENGTH    FLAGS
db2dg  db2dg95  EMC0_94  EMC0_94  0       35681280  -
Disk Change Object (DCO)
Disk Change Object (DCO) volumes are used by Volume Manager FastResync (FR). FR is used to quickly resynchronize mirrored volumes that have been temporarily split and rejoined. FR works by copying only changes to the newly reattached volume using FR logging. This process reduces the time required to rejoin a split mirror, and requires less processing power than full mirror resynchronization without logging.
GROUP  DISK     DEVICE   TAG      OFFSET  LENGTH    FLAGS
db2dg  db2dg95  EMC0_94  EMC0_94  0       35681280  -
Technical Implementation Steps/Procedures
Detailed Implementation
Install VVR License on all nodes for both sites
# /opt/VRTSvlic/sbin/vxlicinst
Install VVR packages on all nodes for both sites
List of Required and Optional Packages for VVR
Required software packages for VVR:
VRTSvlic   VERITAS Licensing Utilities
VRTSvxvm   VERITAS Volume Manager and Volume Replicator
VRTSob     VERITAS Enterprise Administrator Service
VRTSvmpro  VERITAS Volume Manager Management Services Provider
VRTSvrpro  VERITAS Volume Replicator Management Services Provider
VRTSvcsvr  VERITAS Cluster Server Agents for VERITAS Volume Replicator
Optional software packages for VVR:
VRTSjre    VERITAS JRE Redistribution
VRTSweb    VERITAS Java Web Server
VRTSvrw    VERITAS Volume Replicator Web Console
VRTSobgui  VERITAS Enterprise Administrator
VRTSvmdoc  VERITAS Volume Manager documentation
VRTSvrdoc  VERITAS Volume Replicator documentation
VRTSvmman  VERITAS Volume Manager manual pages
VRTSap     VERITAS Action Provider
VRTStep    VERITAS Task Execution Provider
Installing the VVR Packages Using the pkgadd Command
1. Log in as root.
2. Mount from the software repository:
# mount software:/repos /mnt
# cd /mnt/software/os/SunOS/middleware/Veritas4.1/volume_replicator
3. Alternatively, you can run the install script from that directory; refer to the VVR Installation Guide for more information.
# ./installvvr
4. This document follows the manual pkgadd process instead:
# cd /mnt/software/os/SunOS/middleware/Veritas4.1/volume_replicator
Note: Install the packages in the order specified below to ensure proper installation.
5. Use the following command to install the required software packages in the specified order. Some of these may have already been installed.
# pkgadd -d . VRTSvlic VRTSvxvm VRTSob VRTSvcsvr
6. Install the following patch:
# patchadd 115209-<latest>
7. Install the following required packages in the specified order:
# pkgadd -d . VRTSvmpro VRTSvrpro
8. Use the following command to install the optional software packages:
# pkgadd -d . VRTSobgui VRTSjre VRTSweb VRTSvrw VRTSvmdoc \ VRTSvrdoc VRTSvmman VRTSap VRTStep
The system prints out a series of status messages as the installation progresses and prompts you for any required information, such as the license key.
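After the installation finishes, a quick way to confirm that the required packages registered is to loop over them with pkginfo. This is a sketch only: the commands are echoed so the loop can be exercised anywhere, and on a live Solaris host you would drop the leading echo:

```shell
# Print a pkginfo query for each required VVR package; remove the
# leading 'echo' to actually query the Solaris package database.
for p in VRTSvlic VRTSvxvm VRTSob VRTSvcsvr VRTSvmpro VRTSvrpro; do
  echo pkginfo -l "$p"
done
```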
Create a Diskgroup and volumes in the secondary site
Initialize all LUNs and assign them to the diskgroup db2dg.
sunsrv01-dr:# vxdiskadm
Create volumes with the same sizes as the primary site's volumes.
sunsrv01-dr:# vxassist -g db2dg make bcp 585105408 layout=concat
sunsrv01-dr:# vxassist -g db2dg make db 1172343808 layout=concat
sunsrv01-dr:# vxassist -g db2dg make dba 98566144 layout=concat
sunsrv01-dr:# vxassist -g db2dg make db2 8388608 layout=concat
sunsrv01-dr:# vxassist -g db2dg make lg1 396361728 layout=concat
sunsrv01-dr:# vxassist -g db2dg make tp01 192937984 layout=concat
Create the SRL Volume. Allocate disks that are dedicated just for SRL volume. No other volumes should use those disks.
sunsrv01-dr:# vxassist -g db2dg make db2dg_srl 856350720 layout=concat \ > db2dg71 db2dg72 db2dg73 db2dg74 db2dg75 db2dg76 db2dg77 db2dg78 \ > db2dg79 db2dg80 db2dg81 db2dg82 db2dg83 db2dg84 db2dg85 db2dg86 \ > db2dg87 db2dg88 db2dg89 db2dg90 db2dg91 db2dg92 db2dg93 db2dg94
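Sanity check: assuming the 24 dedicated SRL disks (db2dg71 through db2dg94) are the same size as the log disk db2dg95 shown in the DCM/DCO tables earlier (35681280 sectors), the SRL length requested above should match 24 times that per-disk length. Simple shell arithmetic confirms it:

```shell
# 24 dedicated SRL disks x 35681280 sectors per disk should equal the
# 856350720-sector db2dg_srl volume created above.
echo $(( 24 * 35681280 ))
```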
Add DCM logs to the volumes. Use the disk that’s been assigned only for logs.
sunsrv01-dr:# vxassist -g db2dg addlog bcp logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog db logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog dba logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog db2 logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog lg1 logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog tp01 logtype=dcm nlog=1 db2dg95
Update the .rdg file on both nodes in the secondary site
The Secondary nodes must be given permission to manage the disk group created on the Primary. To grant it, add the diskgroup ID to /etc/vx/vras/.rdg. The diskgroup ID is the dgid value in the output of vxprint -l db2dg on the Primary Node.
sunsrv01:# vxprint -l db2dg | grep dgid
info: dgid=1138140445.1393.sunsrv01 noautoimport
sunsrv01-dr:# echo "1138140445.1393.sunsrv01" > /etc/vx/vras/.rdg
sunsrv02-dr:# echo "1138140445.1393.sunsrv01" > /etc/vx/vras/.rdg
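A small, hypothetical verification sketch: after writing the file, confirm its contents exactly match the Primary's dgid, since a stray character here silently breaks the later addsec step. The sketch uses a stand-in temp file so it can run anywhere; on the Secondary nodes the real file is /etc/vx/vras/.rdg:

```shell
RDG=/tmp/rdg.example               # stand-in for /etc/vx/vras/.rdg
DGID="1138140445.1393.sunsrv01"    # dgid from the Primary's vxprint output
echo "$DGID" > "$RDG"
grep -q "^${DGID}\$" "$RDG" && echo "rdg OK"
```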
Stop all applications (both clustered and non-clustered)
Stop all clustered and non-clustered applications that are using the volumes, which will be replicated.
Note: Alternatively, applications can be left running while replication is set up; in this case the customer chose to take downtime.
# su - db2inst -c "db2stop"
Unmount all filesystems whose underlying volumes will be replicated.
# umount ` mount | grep '/dev/vx/dsk/db2dg/' | awk '{ print $1 }'`
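The one-liner above unmounts in whatever order mount reports. If any mount points nest, children must be unmounted before their parents to avoid "busy" failures. A hedged sketch using reverse bytewise sort, which places a child path before its parent; the sample list stands in for live mount output:

```shell
# Sample mount points from this configuration; on a live system feed in:
#   mount | grep '/dev/vx/dsk/db2dg/' | awk '{ print $1 }'
mounts="/db/db2inst/log1
/db/db2inst/PEMMP00P/NODE0000
/db2/db2inst
/bcp/db2inst"
# LC_ALL=C gives bytewise order, so child paths sort before parents.
for mp in $(printf '%s\n' "$mounts" | LC_ALL=C sort -r); do
  echo umount "$mp"    # drop 'echo' to actually unmount
done
```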
Forcibly stop VCS so it leaves all running resources online, especially the volumes and the diskgroup
# hastop -all -force
Bring up the virtual IPs on all nodes on both sites
These are the VVR replication IPs (RLINK IPs). They must be unique: one at the primary site and one at the secondary site.
Primary (sunsrv01 or sunsrv02):
ce3:1 - 10.11.196.191 netmask fffffe00 broadcast 10.11.197.255
Secondary (sunsrv01-dr or sunsrv02-dr):
ce5:1 - 10.12.94.191 netmask fffffe00 broadcast 10.12.95.255
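For reference, interface states like the ones above would typically be produced with Solaris logical-interface commands along these lines. This is a hedged sketch: the interface names come from the configuration above, netmask fffffe00 is 255.255.254.0, and the commands are echoed here rather than run (drop the leading echo and run as root on the live hosts):

```shell
# Primary: ce3 carries the E3/DR replication VIP
echo ifconfig ce3 addif 10.11.196.191 netmask 255.255.254.0 up
# Secondary: ce5 carries the replication VIP
echo ifconfig ce5 addif 10.12.94.191 netmask 255.255.254.0 up
```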
Create the SRL Volume in the primary site
Allocate disks that are dedicated just for SRL volume. No other volumes should use those disks.
sunsrv01:# vxassist -g db2dg make db2dg_srl 856350720 layout=concat \ > db2dg71 db2dg72 db2dg73 db2dg74 db2dg75 db2dg76 db2dg77 db2dg78 \ > db2dg79 db2dg80 db2dg81 db2dg82 db2dg83 db2dg84 db2dg85 db2dg86 \ > db2dg87 db2dg88 db2dg89 db2dg90 db2dg91 db2dg92 db2dg93 db2dg94
Add DCM logs to all the primary site volumes that will be replicated
Use the disk that is dedicated for logs only.
sunsrv01:# vxassist -g db2dg addlog bcp logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog db logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog dba logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog db2 logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog lg1 logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog tp01 logtype=dcm nlog=1 db2dg95
Update the .rdg file on both nodes in the primary site
The diskgroup ID is the value of dgid in the output of vxprint -l db2dg on the Secondary Node.
sunsrv02-dr:# vxprint -l db2dg | grep dgid
info: dgid=1182230506.2373.sunsrv01-dr noautoimport
sunsrv01:# echo "1182230506.2373.sunsrv01-dr" > /etc/vx/vras/.rdg
sunsrv02:# echo "1182230506.2373.sunsrv01-dr" > /etc/vx/vras/.rdg
Ensure that VVRTypes.cf exists in /etc/VRTSvcs/conf
This types file must exist on all nodes in the cluster.
sunsrv01:# ls -l /etc/VRTSvcs/conf/VVRTypes.cf
-rw-rw-r--   1 root   sys   1811 Jan 20  2005 /etc/VRTSvcs/conf/VVRTypes.cf
Ensure that other application type files, e.g. Db2udbTypes.cf, exist in /etc/VRTSvcs/conf
This types file must exist on all nodes in the cluster.
sunsrv01:# ls -l /etc/VRTSvcs/conf/Db2udbTypes.cf
-rw-rw-r--   1 root   sys   1080 Jun 26  2006 /etc/VRTSvcs/conf/Db2udbTypes.cf
Create the Primary RVG
First, ensure that /usr/sbin/vradmind is running. If not, start it with this command:
sunsrv01:# /etc/init.d/vras-vradmind.sh start
Run this from the primary site.
sunsrv01:# vradmin -g db2dg createpri db2dg_rvg bcp,db,dba,db2,lg1,tp01 \
db2dg_srl
Usage:
vradmin -g <diskgroup> createpri <RVGname> <vol1,vol2...> <SRLname>
Where:
<diskgroup>      the VxVM diskgroup name
<RVGname>        the name for the RVG, usually <diskgroupname>_rvg
<vol1,vol2...>   a comma-separated list of all volumes in the diskgroup to be replicated
<SRLname>        the name of the SRL volume
Create the Secondary RVG
After creating the Primary RVG, add a Secondary RVG using the vradmin addsec command. The VIP hostnames are defined in the /etc/hosts file.
sunsrv01:# vradmin -g db2dg addsec db2dg_rvg db2inste3-vipvr db2instdr-vipvr
Usage:
vradmin -g <diskgroup> addsec <RVGname> <primaryhost> <secondaryhost>
Where:
<diskgroup>      the VxVM diskgroup name
<RVGname>        the name of the RVG created on the Primary Node
<primaryhost>    the hostname of the Primary Node; this can be a VCS ServiceGroup name
<secondaryhost>  the hostname of the Secondary Node; this can be a VCS ServiceGroup name
Note: The vradmin addsec command performs the following operations:
- Creates and adds a Secondary RVG of the same name as the Primary RVG to the specified RDS on the Secondary host. By default, the Secondary RVG is added to the disk group with the same name as the Primary disk group. Use the -sdg option with the vradmin addsec command to specify a different disk group on the Secondary.
- Automatically adds DCMs to the Primary and Secondary data volumes if they do not have DCMs.
- Associates existing data volumes of the same names and sizes as the Primary data volumes to the Secondary RVG; it also associates an existing volume with the same name as the Primary SRL as the Secondary SRL.
- Creates and associates to the Primary and Secondary RVGs respectively, the Primary and Secondary RLINKs with default RLINK names rlk_remotehost_rvgname.
Configure VCS to integrate VVR objects – Primary Site
There are at least two VCS service groups: one for replication and one for the application (DB). The application group contains the RVGPrimary resource; the replication group contains the IP, RVG, and DiskGroup resources.
In this particular environment, VVR Service Group and Application Service Group must be online on the same node because the diskgroup is on the VVR Service Group.
The VVR service group must be started before the Application service group. The dependencies are configured in the VCS configuration file.
VVR Service group maintains the synchronization between the primary and secondary sites.
VCS Service Groups:
db2inst_vvr VVR Service Group
- db2dgAgent (RVG)
- vvrnic (NIC)
- vvrip (VIP for VVR communications)
- db2dg_dg (Diskgroup)
db2inst_grp Application Service Group
- db2dg_rvg_primary (RVG Primary)
- db2inst_db (DB2 Database)
- db2inst_ce1 (NIC)
- db2inst_ip (VIP for the application)
- db2inst__mnt (Filesystems)
- adsm_db2inst (TSM Backup)
Primary Site VCS Configuration File:
include "types.cf"
include "Db2udbTypes.cf"
include "VVRTypes.cf"
cluster e3clus177 (
    UserNames = { admin = ajkCjeJgkFkkIskEjh }
    ClusterAddress = "10.10.231.191"
    Administrators = { admin }
    CredRenewFrequency = 0
    CounterInterval = 5
    )
system sunsrv01 ( )
system sunsrv02 ( )
group db2inst_grp ( SystemList = { sunsrv01 = 0, sunsrv02 = 1 } AutoStart = 0 )
Application adsm_db2inst ( Critical = 0 User = root StartProgram = "/bcp/db2inst/tsm/adsmcad.db start" StopProgram = "/bcp/db2inst/tsm/adsmcad.db stop" MonitorProcesses = { "/usr/bin/dsmcad -optfile=/bcp/db2inst/tsm/dsm.opt.db2inst" } )
Db2udb db2inst_db ( Critical = 0 DB2InstOwner = db2inst DB2InstHome = "/db2/db2inst" MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl" )
DiskGroup db2dgB_dg ( DiskGroup = db2dgB )
IP db2inst_ip ( Device = ce1 Address = "10.10.231.191" )
Mount db2dgB_bkup_mnt ( MountPoint = "/backup/db2inst" BlockDevice = "/dev/vx/dsk/db2dgB/bkp" FSType = vxfs FsckOpt = "-y" )
Mount db2inst_bcp_mnt ( MountPoint = "/bcp/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/bcp" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_db2_mnt ( MountPoint = "/db2/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/db2" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_db_mnt ( MountPoint = "/db/db2inst/PEMMP00P/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/db" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_dba_mnt ( MountPoint = "/dba/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/dba" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_lg1_mnt ( MountPoint = "/db/db2inst/log1" BlockDevice = "/dev/vx/dsk/db2dg/lg1" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_tp01_mnt ( MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/tp01" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
NIC db2inst_ce1 ( Device = ce1 NetworkType = ether )
RVGPrimary db2dg_rvg_primary ( Critical = 0 RvgResourceName = db2dgAgent )
requires group db2inst_vvr online local hard
adsm_db2inst requires db2inst_bcp_mnt
db2dgB_bkup_mnt requires db2dgB_dg
db2inst_bcp_mnt requires db2dg_rvg_primary
db2inst_db requires db2inst_bcp_mnt
db2inst_db requires db2inst_db2_mnt
db2inst_db requires db2inst_db_mnt
db2inst_db requires db2inst_dba_mnt
db2inst_db requires db2inst_ip
db2inst_db requires db2inst_lg1_mnt
db2inst_db requires db2inst_tp01_mnt
db2inst_db2_mnt requires db2dg_rvg_primary
db2inst_db_mnt requires db2dg_rvg_primary
db2inst_dba_mnt requires db2dg_rvg_primary
db2inst_ip requires db2inst_ce1
db2inst_lg1_mnt requires db2dg_rvg_primary
db2inst_tp01_mnt requires db2dg_rvg_primary
group db2inst_vvr ( SystemList = { sunsrv01 = 0, sunsrv02 = 1 } )
DiskGroup db2dg_dg ( DiskGroup = db2dg )
IP vvrip ( Device = ce3 Address = "10.11.196.191" NetMask = "255.255.254.0" )
NIC vvrnic ( Device = ce3 NetworkType = ether )
RVG db2dgAgent ( Critical = 0 RVG = db2dg_rvg DiskGroup = db2dg )
db2dgAgent requires db2dg_dg
vvrip requires vvrnic
Configure VCS to integrate VVR objects – Secondary Site
In this particular environment, there are four VCS service groups: the CCA group, the replication group, the application (DB) group, and the SNAP group. The CCA group is for remote cluster administration. The application group contains the RVGPrimary resource; the replication group contains the IP, RVG, and DiskGroup resources; the SNAP group contains the resources for the snap copy. This document will not discuss Veritas CCA.
In this particular environment, VVR Service Group, Application Service Group, and SNAP Service Group must be online on the same node because the diskgroup is on the VVR Service Group. SNAP volumes are on the same diskgroup as the application/DB volumes.
SNAP Configuration is discussed later in this document.
The VVR service group must be started before the Application service group. The dependencies are configured in the VCS configuration file.
VVR Service group maintains the synchronization between the primary and secondary sites.
VCS Service Groups:
db2inst_vvr VVR Service Group Resources:
- db2dgAgent (RVG)
- vvrnic (NIC)
- vvrip (VIP for VVR communications)
- db2dg_dg (Diskgroup)
db2inst_grp Application Service Group Resources:
- db2dg_rvg_primary (RVG Primary)
- db2inst_db (DB2 Database)
- db2inst_ce1 (NIC)
- db2inst_ip (VIP for the application)
- db2inst__mnt (Filesystems)
- adsm_db2inst (TSM Backup)
snap_db2inst_grp Snap Copy Service Group Resources:
- snap_db2inst_db (DB2 Database)
- snap_db2inst_ce1 (NIC)
- snap_db2inst_ip (VIP for the application)
- snap_db2inst__mnt (Filesystems)
Secondary Site VCS Configuration File:
include "types.cf"
include "ClusterMonitorConfigType.cf"
include "Db2udbTypes.cf"
include "VVRTypes.cf"
cluster e3clus177 (
    UserNames = { admin = ajkCjeJgkFkkIskEjh }
    ClusterAddress = "10.10.231.191"
    Administrators = { admin }
    CredRenewFrequency = 0
    CounterInterval = 5
    )
system sunsrv01-dr ( )
system sunsrv02-dr ( )
group CCAvail ( SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 } AutoStartList = { sunsrv01-dr, sunsrv02-dr } )
ClusterMonitorConfig CCAvail_ClusterConfig ( MSAddress = "10.11.198.53" ClusterId = 1183482234 VCSLoggingLevel = TAG_A Logging = "/opt/VRTSccacm/conf/k2_logging.properties" ClusterMonitorVersion = "4.1.2272.1" )
Process CCAvail_ClusterMonitor ( PathName = "/opt/VRTSccacm/bin/ClusterMonitor" Arguments = "-config" )
CCAvail_ClusterMonitor requires CCAvail_ClusterConfig
group db2inst_grp ( SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 } Enabled @sunsrv01-dr = 0 Enabled @sunsrv02-dr = 0 AutoStart = 0 )
Application adsm_db2inst ( Enabled = 0 Critical = 0 User = root StartProgram = "/bcp/db2inst/tsm/adsmcad.db start" StopProgram = "/bcp/db2inst/tsm/adsmcad.db stop" MonitorProcesses = { "/usr/bin/dsmcad -optfile=/bcp/db2inst/tsm/dsm.opt.db2inst" } )
Db2udb db2inst_db ( Enabled = 0 Critical = 0 DB2InstOwner = db2inst DB2InstHome = "/db2/db2inst" MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl" )
DiskGroup db2dgB_dg ( Enabled = 0 DiskGroup = db2dgB )
IP db2inst_ip ( Enabled = 0 Device = ce1 Address = "10.10.231.191" )
Mount db2dgB_bkup_mnt ( Enabled = 0 MountPoint = "/backup/db2inst" BlockDevice = "/dev/vx/dsk/db2dgB/bkp" FSType = vxfs FsckOpt = "-y" )
Mount db2inst_bcp_mnt ( Enabled = 0 MountPoint = "/bcp/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/bcp" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_db2_mnt ( Enabled = 0 MountPoint = "/db2/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/db2" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_db_mnt ( Enabled = 0 MountPoint = "/db/db2inst/PEMMP00P/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/db" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_dba_mnt ( Enabled = 0 MountPoint = "/dba/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/dba" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_lg1_mnt ( Enabled = 0 MountPoint = "/db/db2inst/log1" BlockDevice = "/dev/vx/dsk/db2dg/lg1" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
Mount db2inst_tp01_mnt ( Enabled = 0 MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/tp01" FSType = vxfs MountOpt = rw FsckOpt = "-y" )
NIC db2inst_ce1 ( Enabled = 0 Device = ce1 NetworkType = ether )
RVGPrimary db2dg_rvg_primary ( Enabled = 0 Critical = 0 RvgResourceName = db2dgAgent )
requires group db2inst_vvr online local firm
adsm_db2inst requires db2inst_bcp_mnt
db2dgB_bkup_mnt requires db2dgB_dg
db2inst_bcp_mnt requires db2dg_rvg_primary
db2inst_db requires db2inst_bcp_mnt
db2inst_db requires db2inst_db2_mnt
db2inst_db requires db2inst_db_mnt
db2inst_db requires db2inst_dba_mnt
db2inst_db requires db2inst_ip
db2inst_db requires db2inst_lg1_mnt
db2inst_db requires db2inst_tp01_mnt
db2inst_db2_mnt requires db2dg_rvg_primary
db2inst_db_mnt requires db2dg_rvg_primary
db2inst_dba_mnt requires db2dg_rvg_primary
db2inst_ip requires db2inst_ce1
db2inst_lg1_mnt requires db2dg_rvg_primary
db2inst_tp01_mnt requires db2dg_rvg_primary
group db2inst_vvr ( SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 } )
DiskGroup db2dg_dg ( DiskGroup = db2dg )
IP vvrip ( Device = ce5 Address = "10.12.94.191" NetMask = "255.255.254.0" )
NIC vvrnic ( Device = ce5 NetworkType = ether )
RVG db2dgAgent ( Critical = 0 RVG = db2dg_rvg DiskGroup = db2dg )
db2dgAgent requires db2dg_dg
vvrip requires vvrnic
group snap_db2inst_grp ( SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 } AutoStart = 0 )
Db2udb snap_db2inst_db ( Critical = 0 DB2InstOwner = db2inst DB2InstHome = "/db2/db2inst" MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl" )
DiskGroup snap_db2dgB_dg ( DiskGroup = db2dgB )
IP snap_db2inst_ip ( Device = ce1 Address = "10.10.231.191" )
Mount snap_db2dgB_bkup_mnt ( MountPoint = "/backup/db2inst" BlockDevice = "/dev/vx/dsk/db2dgB/bkp" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_bcp_mnt ( MountPoint = "/bcp/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/bcp_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_db2_mnt ( MountPoint = "/db2/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/db2_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_db_mnt ( MountPoint = "/db/db2inst/PEMMP00P/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/db_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_dba_mnt ( MountPoint = "/dba/db2inst" BlockDevice = "/dev/vx/dsk/db2dg/dba_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_lg1_mnt ( MountPoint = "/db/db2inst/log1" BlockDevice = "/dev/vx/dsk/db2dg/lg1_snapvol" FSType = vxfs FsckOpt = "-y" )
Mount snap_db2inst_tp01_mnt ( MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000" BlockDevice = "/dev/vx/dsk/db2dg/tp01_snapvol" FSType = vxfs FsckOpt = "-y" )
NIC snap_db2inst_ce1 ( Device = ce1 NetworkType = ether )
requires group db2inst_vvr online local firm
snap_db2dgB_bkup_mnt requires snap_db2dgB_dg
snap_db2inst_db requires snap_db2inst_bcp_mnt
snap_db2inst_db requires snap_db2inst_db2_mnt
snap_db2inst_db requires snap_db2inst_db_mnt
snap_db2inst_db requires snap_db2inst_dba_mnt
snap_db2inst_db requires snap_db2inst_lg1_mnt
snap_db2inst_db requires snap_db2inst_tp01_mnt
snap_db2inst_ip requires snap_db2inst_ce1
Bring up VCS engine and bring up the vvr service group in the Secondary Site
Start VCS on both nodes
sunsrv01-dr:# hastart
sunsrv02-dr:# hastart
Bring up the VVR Service Group on one node
sunsrv01-dr:# hagrp -online db2inst_vvr -sys sunsrv01-dr
Bring up VCS engine and bring up the vvr service group in the Primary Site
Start VCS on both nodes
sunsrv01:# hastart
sunsrv02:# hastart
Bring up the VVR Service Group on one node
sunsrv01:# hagrp -online db2inst_vvr -sys sunsrv01
Bring up the Application Service Group
sunsrv01:# hagrp -online db2inst_grp -sys sunsrv01
Check the Rlink and RVG status
Make sure that the flags show attached and connected. If the links are detached or disconnected, verify that communications between the primary and secondary sites are working; you should be able to ping, ssh, or telnet between the sites through the VIPs. If communications are good but the links remain down, restart the VVR engine (see the next step).
sunsrv01:# vxprint -Pl
Disk group: db2dg

Rlink: rlk_db2instdr-vipvr_db2dg_rvg
info:     timeout=500 packet_size=8400 rid=0.2007
          latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state:    state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm
assoc:    rvg=db2dg_rvg
          remote_host=db2instdr-vipvr IP_addr=10.12.94.191 port=4145
          remote_dg=db2dg remote_dg_dgid=1182230506.2373.sunsrv01-dr
          remote_rvg_version=21
          remote_rlink=rlk_db2inste3-vipvr_db2dg_rvg remote_rlink_rid=0.2127
          local_host=db2inste3-vipvr IP_addr=10.11.196.191 port=4145
protocol: UDP/IP
flags:    write enabled attached consistent cant_sync connected asynchronous autosync
sunsrv01:# vradmin -l printrvg
Replicated Data Set: db2dg_rvg
Primary:
  HostName: db2inste3-vipvr
  RvgName: db2dg_rvg
  DgName: db2dg
  datavol_cnt: 6
  srl: db2dg_srl
  RLinks:
    name=rlk_db2instdr-vipvr_db2dg_rvg, detached=off, synchronous=off
Secondary:
  HostName: db2instdr-vipvr
  RvgName: db2dg_rvg
  DgName: db2dg
  datavol_cnt: 6
  srl: db2dg_srl
  RLinks:
    name=rlk_db2inste3-vipvr_db2dg_rvg, detached=off, synchronous=off
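The two states to watch in the flags line are attached and connected. A tiny sketch of the kind of check that can be scripted against the vxprint output; the flags string below is copied from the output above, and the commented capture command is an assumption about how you would feed it live data:

```shell
# On a live system, something like this would capture the flags line:
#   FLAGS=$(vxprint -g db2dg -Pl | grep '^flags:')
FLAGS="flags: write enabled attached consistent cant_sync connected asynchronous autosync"
case "$FLAGS" in
  *attached*connected*) echo "rlink healthy" ;;
  *)                    echo "rlink detached or disconnected" ;;
esac
```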
Restart VVR engine (if the link is detached or disconnected)
This is only required if you see the links between primary and secondary as detached or disconnected. If you think that the communication between the two is working fine, then run the following commands in the primary site:
sunsrv01:# vxstart_vvr stop
sunsrv01:# vxstart_vvr start
Then, check the status again.
sunsrv01:# vxprint -Pl
Start the replication
Initiate the command from the primary site.
sunsrv01:# vradmin -g db2dg -a startrep db2dg_rvg db2instdr-vipvr
Message from Primary:
VxVM VVR vxrlink WARNING V-5-1-3359 Attaching rlink to non-empty rvg. Autosync will be performed.
VxVM VVR vxrlink INFO V-5-1-3614 Secondary data volumes detected with rvg db2dg_rvg as parent:
VxVM VVR vxrlink INFO V-5-1-6183 bcp: len=585105408 primary_datavol=bcp
VxVM VVR vxrlink INFO V-5-1-6183 db: len=1172343808 primary_datavol=db
VxVM VVR vxrlink INFO V-5-1-6183 db2: len=8388608 primary_datavol=db2
VxVM VVR vxrlink INFO V-5-1-6183 dba: len=98566144 primary_datavol=dba
VxVM VVR vxrlink INFO V-5-1-6183 lg1: len=396361728 primary_datavol=lg1
VxVM VVR vxrlink INFO V-5-1-6183 tp01: len=192937984 primary_datavol=tp01
VxVM VVR vxrlink INFO V-5-1-3365 Autosync operation has started
Usage:
vradmin -g <diskgroup> -a startrep <RVGname> <secondaryhost>
Where:
<diskgroup>      the VxVM diskgroup name
<RVGname>        the name for the RVG, usually <diskgroupname>_rvg
<secondaryhost>  the hostname of the Secondary Site; check the /etc/hosts file
Check the Replication status
Initiate the command from the primary site. You can tell whether it is syncing by watching the number of Kbytes remaining; if it decreases, replication is working.
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg
Fri Jun 22 00:01:37 MST 2007
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226649984 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226644224 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226638464 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226632128 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226626368 Kbytes remaining.
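With two consecutive samples from the interval output, the effective throughput and a rough time-to-completion can be estimated. A sketch using the first two samples above; it assumes a steady rate, which an autosync over a WAN rarely sustains, so treat the result as a ballpark:

```shell
S1=1226649984   # Kbytes remaining, first sample
S2=1226644224   # Kbytes remaining, 2 seconds later (-i 2)
INTERVAL=2
RATE=$(( (S1 - S2) / INTERVAL ))        # throughput in KB/s
echo "rate: ${RATE} KB/s"
echo "ETA: ~$(( S2 / RATE / 3600 )) hours"
```

At roughly 2.8 MB/s, the initial sync of ~1.2 TB takes several days, which is one reason the SRL is sized so generously.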
Another way to check is to use the vradmin repstatus command. The key bits of information here are Data status and Replication status.
sunsrv01:# vradmin -g db2dg repstatus db2dg_rvg
Replicated Data Set: db2dg_rvg
Primary:
  Host name:             db2inste3-vipvr
  RVG name:              db2dg_rvg
  DG name:               db2dg
  RVG state:             enabled for I/O
  Data volumes:          6
  SRL name:              db2dg_srl
  SRL size:              952.84 G
  Total secondaries:     1
Secondary:
  Host name:             db2instdr-vipvr
  RVG name:              db2dg_rvg
  DG name:               db2dg
  Data status:           consistent, up-to-date
  Replication status:    replicating (connected)
  Current mode:          asynchronous
  Logging to:            SRL
  Timestamp Information: N/A
Mount the Replicated Volumes
When replication is 100% complete, you can verify the volumes on the secondary site by mounting the filesystems. First, check the status of replication:
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg Thu Jun 28 08:54:42 MST 2007 VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_db2instdr-vipvr_db2dg_rvg is up to date
Use VCS to mount the replicated Volumes
sunsrv01-dr:# hagrp -online db2inst_grp -sys sunsrv01-dr
Display the filesystems and compare them with the primary.
sunsrv01-dr:# df -k | grep db2dg
/dev/vx/dsk/db2dg/db2 4194304 78059 3859030 2% /db2/db2inst /dev/vx/dsk/db2dg/bcp 292552704 1527508 272836173 1% /bcp/db2inst /dev/vx/dsk/db2dg/dba 49283072 28555 46176114 1% /dba/db2inst /dev/vx/dsk/db2dg/db 586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000 /dev/vx/dsk/db2dg/tp01 96468992 40176 90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000 /dev/vx/dsk/db2dg/lg1 198180864 1779853 184126012 1% /db/db2inst/log1
sunsrv01:# df -k | grep db2dg
/dev/vx/dsk/db2dg/db2 4194304 78059 3859030 2% /db2/db2inst /dev/vx/dsk/db2dg/bcp 292552704 1527508 272836173 1% /bcp/db2inst /dev/vx/dsk/db2dg/dba 49283072 28555 46176114 1% /dba/db2inst /dev/vx/dsk/db2dg/db 586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000 /dev/vx/dsk/db2dg/tp01 96468992 40176 90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000 /dev/vx/dsk/db2dg/lg1 198180864 1779853 184126012 1% /db/db2inst/log1
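Eyeballing two long df listings is error-prone, so the comparison can be scripted. A hedged sketch, assuming each node's `df -k | grep db2dg` output has been saved to a file first (the file names here are hypothetical):

```shell
# Compare two saved df -k listings on the device and total-size columns
# only (mount order and free space may legitimately differ).
compare_df() {
    awk '{ print $1, $2 }' "$1" | sort > /tmp/cmp_a.$$
    awk '{ print $1, $2 }' "$2" | sort > /tmp/cmp_b.$$
    diff /tmp/cmp_a.$$ /tmp/cmp_b.$$
    rc=$?
    rm -f /tmp/cmp_a.$$ /tmp/cmp_b.$$
    return $rc
}
# usage: compare_df /tmp/df.primary /tmp/df.dr && echo "sizes match"
```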
This is also a good time to test switching the service group in VCS.
sunsrv01-dr:# hagrp -switch db2inst_grp -to sunsrv02-dr
Prepare the Replicated Volumes for Snapshot in the secondary site
First, display the volume information
sunsrv02-dr:# vxprint -ht bcp Disk group: db2dg
v bcp db2dg_rvg ENABLED ACTIVE 585105408 SELECT - fsgen pl bcp-01 bcp ENABLED ACTIVE 585108480 CONCAT - RW sd db2dg214-01 bcp-03 db2dg214 5376 285473280 0 EMC0_219 ENA sd db2dg215-01 bcp-03 db2dg215 5376 285473280 285473280 EMC0_220 ENA sd db2dg219-01 bcp-03 db2dg219 5376 14161920 570946560 EMC0_224 ENA pl bcp-02 bcp ENABLED ACTIVE LOGONLY CONCAT - RW sd db2dg95-06 bcp-02 db2dg95 0 512 LOG EMC0_154 ENA
Prepare the volume for snapshots by adding a DCO log. The same disk is used for the DCO and DCM logs.
sunsrv02-dr:# vxsnap -g db2dg prepare bcp ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
Display the volume information to see the changes. Note that the DCO volume has been added with two logs.
sunsrv02-dr:# vxprint -ht bcp Disk group: db2dg
v bcp db2dg_rvg ENABLED ACTIVE 585105408 SELECT - fsgen pl bcp-01 bcp ENABLED ACTIVE 585108480 CONCAT - RW sd db2dg214-01 bcp-03 db2dg214 5376 285473280 0 EMC0_219 ENA sd db2dg215-01 bcp-03 db2dg215 5376 285473280 285473280 EMC0_220 ENA sd db2dg219-01 bcp-03 db2dg219 5376 14161920 570946560 EMC0_224 ENA pl bcp-02 bcp ENABLED ACTIVE LOGONLY CONCAT - RW sd db2dg95-06 bcp-02 db2dg95 0 512 LOG EMC0_154 ENA dc bcp_dco bcp bcp_dcl v bcp_dcl - ENABLED ACTIVE 40368 SELECT - gen pl bcp_dcl-01 bcp_dcl ENABLED ACTIVE 40368 CONCAT - RW sd db2dg95-07 bcp_dcl-01 db2dg95 960 40368 0 EMC0_154 ENA
Run the same steps on the rest of the volumes
sunsrv02-dr:# vxsnap -g db2dg prepare db ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g db2dg prepare dba ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g db2dg prepare db2 ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g db2dg prepare lg1 ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g db2dg prepare tp01 ndcomirs=1 alloc=db2dg95 VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
Create the SNAP Volumes in the secondary site
sunsrv02-dr:# vxassist -g db2dg make bcp_snapvol 585105408 layout=concat
sunsrv02-dr:# vxassist -g db2dg make db_snapvol 1172343808 layout=concat
sunsrv02-dr:# vxassist -g db2dg make dba_snapvol 98566144 layout=concat
sunsrv02-dr:# vxassist -g db2dg make db2_snapvol 8388608 layout=concat
sunsrv02-dr:# vxassist -g db2dg make lg1_snapvol 396361728 layout=concat
sunsrv02-dr:# vxassist -g db2dg make tp01_snapvol 192937984 layout=concat
Prepare the SNAP Volumes
Identify the region size of the main volumes. The region size must be the same for both the main volume and the snap volume.
sunsrv02-dr:# for DCONAME in `vxprint -tg db2dg | grep "^dc" | awk '{ print $2 }'` > do > echo "$DCONAME\t\c" > vxprint -g db2dg -F%regionsz $DCONAME > done bcp_dco 128 db_dco 128 dba_dco 128 db2_dco 128 lg1_dco 128 tp01_dco 128
Display the volume information
sunsrv02-dr:# vxprint -ht bcp_snapvol Disk group: db2dg
v bcp_snapvol fsgen ENABLED 585105408 - ACTIVE - - pl bcp_snapvol-01 bcp_snapvol ENABLED 585105600 - ACTIVE - - sd db2dg114-01 bcp_snapvol-01 ENABLED 35726400 0 - - - sd db2dg106-01 bcp_snapvol-01 ENABLED 35681280 35726400 - - - sd db2dg101-01 bcp_snapvol-01 ENABLED 35681280 71407680 - - - sd db2dg102-01 bcp_snapvol-01 ENABLED 35681280 107088960 - - - sd db2dg103-01 bcp_snapvol-01 ENABLED 35681280 142770240 - - - sd db2dg104-01 bcp_snapvol-01 ENABLED 35681280 178451520 - - - sd db2dg105-01 bcp_snapvol-01 ENABLED 35681280 214132800 - - - sd db2dg107-01 bcp_snapvol-01 ENABLED 35681280 249814080 - - - sd db2dg108-01 bcp_snapvol-01 ENABLED 35681280 285495360 - - - sd db2dg109-01 bcp_snapvol-01 ENABLED 13844160 321176640 - - - sd db2dg110-01 bcp_snapvol-01 ENABLED 35726400 335020800 - - - sd db2dg111-01 bcp_snapvol-01 ENABLED 35726400 370747200 - - - sd db2dg112-01 bcp_snapvol-01 ENABLED 35726400 406473600 - - - sd db2dg113-01 bcp_snapvol-01 ENABLED 35726400 442200000 - - - sd db2dg115-01 bcp_snapvol-01 ENABLED 35726400 477926400 - - - sd db2dg116-01 bcp_snapvol-01 ENABLED 35726400 513652800 - - - sd db2dg117-01 bcp_snapvol-01 ENABLED 35726400 549379200 - - -
Prepare the snapshot volume by adding a DCO log with the same region size as the main volume. The same disk is used for the DCO and DCM logs.
sunsrv02-dr:# vxsnap -g db2dg prepare bcp_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
Display the volume information to see the changes. Note that the DCO volume has been added.
sunsrv02-dr:# vxprint -ht bcp_snapvol Disk group: db2dg
v bcp_snapvol fsgen ENABLED 585105408 - ACTIVE - - pl bcp_snapvol-01 bcp_snapvol ENABLED 585105600 - ACTIVE - - sd db2dg114-01 bcp_snapvol-01 ENABLED 35726400 0 - - - sd db2dg106-01 bcp_snapvol-01 ENABLED 35681280 35726400 - - - sd db2dg101-01 bcp_snapvol-01 ENABLED 35681280 71407680 - - - sd db2dg102-01 bcp_snapvol-01 ENABLED 35681280 107088960 - - - sd db2dg103-01 bcp_snapvol-01 ENABLED 35681280 142770240 - - - sd db2dg104-01 bcp_snapvol-01 ENABLED 35681280 178451520 - - - sd db2dg105-01 bcp_snapvol-01 ENABLED 35681280 214132800 - - - sd db2dg107-01 bcp_snapvol-01 ENABLED 35681280 249814080 - - - sd db2dg108-01 bcp_snapvol-01 ENABLED 35681280 285495360 - - - sd db2dg109-01 bcp_snapvol-01 ENABLED 13844160 321176640 - - - sd db2dg110-01 bcp_snapvol-01 ENABLED 35726400 335020800 - - - sd db2dg111-01 bcp_snapvol-01 ENABLED 35726400 370747200 - - - sd db2dg112-01 bcp_snapvol-01 ENABLED 35726400 406473600 - - - sd db2dg113-01 bcp_snapvol-01 ENABLED 35726400 442200000 - - - sd db2dg115-01 bcp_snapvol-01 ENABLED 35726400 477926400 - - - sd db2dg116-01 bcp_snapvol-01 ENABLED 35726400 513652800 - - - sd db2dg117-01 bcp_snapvol-01 ENABLED 35726400 549379200 - - - dc bcp_snapvol_dco bcp_snapvol - - - - - - v bcp_snapvol_dcl gen ENABLED 40368 - ACTIVE - - pl bcp_snapvol_dcl-01 bcp_snapvol_dcl ENABLED 40368 - ACTIVE - - sd db2dg95-01 bcp_snapvol_dcl-01 ENABLED 40368 0 - - -
Run the same steps on the rest of the volumes
sunsrv02-dr:# vxsnap -g db2dg prepare db_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare dba_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare db2_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare lg1_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare tp01_snapvol ndcomirs=1 regionsize=128 \ alloc=db2dg95
Verify the region sizes are the same
sunsrv02-dr:# for DCONAME in `vxprint -tg db2dg | grep "^dc" | awk '{ print $2 }'` > do > echo "$DCONAME\t\c" > vxprint -g db2dg -F%regionsz $DCONAME > done bcp_dco 128 bcp_snapvol_dco 128 db_dco 128 db_snapvol_dco 128 dba_dco 128 dba_snapvol_dco 128 db2_dco 128 db2_snapvol_dco 128 lg1_dco 128 lg1_snapvol_dco 128 tp01_dco 128 tp01_snapvol_dco 128
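A quick way to catch a mismatch in the listing above is to compare every size against the first. A small sketch, assuming the loop output was saved to a file with one name/size pair per line (the function name is my own, not a VxVM command):

```shell
# Exit non-zero if any region size in the file differs from the first one.
check_regionsz() {
    awk 'NR == 1 { first = $2 } $2 != first { bad = 1 } END { exit bad }' "$1"
}
# usage: check_regionsz /tmp/regionsz.out && echo "all region sizes match"
```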
Run a point-in-time snapshot
Verify the RLINK and RVG are active and up to date. Run this command from the primary site.
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg Thu Jun 28 08:54:42 MST 2007 VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_db2instdr-vipvr_db2dg_rvg is up to date
sunsrv02-dr:# vxprint -Pl Disk group: db2dg
Rlink: rlk_db2inste3-vipvr_db2dg_rvg info: timeout=500 packet_size=8400 rid=0.2127 latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none state: state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm assoc: rvg=db2dg_rvg remote_host=db2inste3-vipvr IP_addr=10.11.196.191 port=4145 remote_dg=db2dg remote_dg_dgid=1138140445.1393.sunsrv01 remote_rvg_version=21 remote_rlink=rlk_db2instdr-vipvr_db2dg_rvg remote_rlink_rid=0.2007 local_host=db2instdr-vipvr IP_addr=10.12.94.191 port=4145 protocol: UDP/IP flags: write enabled attached consistent connected
Start the Snap Copy
sunsrv02-dr:# vxsnap -g db2dg make source=bcp/snapvol=bcp_snapvol \ source=db/snapvol=db_snapvol source=dba/snapvol=dba_snapvol \ source=db2/snapvol=db2_snapvol source=lg1/snapvol=lg1_snapvol \ source=tp01/snapvol=tp01_snapvol
Display the sync status
sunsrv02-dr:# vxtask list TASKID PTID TYPE/STATE PCT PROGRESS 168 SNAPSYNC/R 00.07% 0/585105408/430080 SNAPSYNC bcp_snapvol db2dg 170 SNAPSYNC/R 00.05% 0/1172343808/567296 SNAPSYNC db_snapvol db2dg 172 SNAPSYNC/R 00.46% 0/98566144/452608 SNAPSYNC dba_snapvol db2dg 174 SNAPSYNC/R 04.30% 0/8388608/360448 SNAPSYNC db2_snapvol db2dg 176 SNAPSYNC/R 00.16% 0/396361728/641024 SNAPSYNC lg1_snapvol db2dg 178 SNAPSYNC/R 00.18% 0/192937984/346112 SNAPSYNC tp01_snapvol db2dg
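Rather than re-running vxtask list by hand, the wait can be scripted with a small polling helper. A hedged sketch (the helper name is my own, not a VxVM command):

```shell
# Poll a command until its output no longer contains a pattern, e.g. to
# wait for SNAPSYNC tasks to drain before bringing up the snap group.
wait_no_match() {
    while eval "$1" | grep "$2" > /dev/null
    do
        sleep 60
    done
}
# usage: wait_no_match "vxtask list" SNAPSYNC
```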
sunsrv02-dr:# vxsnap -g db2dg print
NAME SNAPOBJECT TYPE PARENT SNAPSHOT %DIRTY %VALID
bcp -- volume -- -- -- 100.00 bcp_snapvol_snp volume -- bcp_snapvol 0.00 --
db -- volume -- -- -- 100.00 db_snapvol_snp volume -- db_snapvol 0.00 --
dba -- volume -- -- -- 100.00 dba_snapvol_snp volume -- dba_snapvol 0.00 --
db2 -- volume -- -- -- 100.00 db2_snapvol_snp volume -- db2_snapvol 0.00 --
lg1 -- volume -- -- -- 100.00 lg1_snapvol_snp volume -- lg1_snapvol 0.00 --
tp01 -- volume -- -- -- 100.00 tp01_snapvol_snp volume -- tp01_snapvol 0.00 --
bcp_snapvol bcp_snp volume bcp -- 0.00 0.11
db_snapvol db_snp volume db -- 0.00 0.02
dba_snapvol dba_snp volume dba -- 0.00 0.77
db2_snapvol db2_snp volume db2 -- 0.00 9.13
lg1_snapvol lg1_snp volume lg1 -- 0.00 0.09
tp01_snapvol tp01_snp volume tp01 -- 0.00 0.10
Mount the SNAP Volumes
When the sync is complete, bring up the snap service group in VCS to verify the changes. First, unmount the filesystems on the replicated volumes: on these servers, the replicated volumes and the snap volumes use the same mountpoints.
sunsrv02-dr:# umount `mount | grep '/dev/vx/dsk/db2dg/' | awk '{ print $1 }'`
sunsrv02-dr:# vxtask list TASKID PTID TYPE/STATE PCT PROGRESS
sunsrv02-dr:# hastatus -sum
-- SYSTEM STATE -- System State Frozen
A sunsrv01-dr RUNNING 0 A sunsrv02-dr RUNNING 0
-- GROUP STATE -- Group System Probed AutoDisabled State
B CCAvail sunsrv01-dr Y N ONLINE B CCAvail sunsrv02-dr Y N OFFLINE B db2inst_grp sunsrv01-dr Y N OFFLINE B db2inst_grp sunsrv02-dr Y N OFFLINE B db2inst_vvr sunsrv01-dr Y N OFFLINE B db2inst_vvr sunsrv02-dr Y N ONLINE B snap_db2inst_grp sunsrv01-dr Y N OFFLINE B snap_db2inst_grp sunsrv02-dr Y N OFFLINE
sunsrv02-dr:# hagrp -online snap_db2inst_grp -sys sunsrv02-dr
sunsrv02-dr:# hastatus -sum
-- SYSTEM STATE -- System State Frozen
A sunsrv01-dr RUNNING 0 A sunsrv02-dr RUNNING 0
-- GROUP STATE -- Group System Probed AutoDisabled State
B CCAvail sunsrv01-dr Y N ONLINE B CCAvail sunsrv02-dr Y N OFFLINE B db2inst_grp sunsrv01-dr Y N OFFLINE B db2inst_grp sunsrv02-dr Y N OFFLINE B db2inst_vvr sunsrv01-dr Y N OFFLINE B db2inst_vvr sunsrv02-dr Y N ONLINE B snap_db2inst_grp sunsrv01-dr Y N OFFLINE B snap_db2inst_grp sunsrv02-dr Y N ONLINE
Verify the filesystems and compare the sizes with the primary site.
sunsrv02-dr:# df -k | grep snapvol
/dev/vx/dsk/db2dg/db2_snapvol 4194304 78059 3859030 2% /db2/db2inst /dev/vx/dsk/db2dg/bcp_snapvol 292552704 1527508 272836173 1% /bcp/db2inst /dev/vx/dsk/db2dg/dba_snapvol 49283072 28555 46176114 1% /dba/db2inst /dev/vx/dsk/db2dg/db_snapvol 586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000 /dev/vx/dsk/db2dg/tp01_snapvol 96468992 40176 90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000 /dev/vx/dsk/db2dg/lg1_snapvol 198180864 1779853 184126012 1% /db/db2inst/log1
sunsrv01:# df -k | grep db2dg
/dev/vx/dsk/db2dg/db2 4194304 78059 3859030 2% /db2/db2inst /dev/vx/dsk/db2dg/bcp 292552704 1527508 272836173 1% /bcp/db2inst /dev/vx/dsk/db2dg/dba 49283072 28555 46176114 1% /dba/db2inst /dev/vx/dsk/db2dg/db 586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000 /dev/vx/dsk/db2dg/tp01 96468992 40176 90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000 /dev/vx/dsk/db2dg/lg1 198180864 1779853 184126012 1% /db/db2inst/log1
Posted by JAUGHN | Labels: Veritas Volume Replicator
Thursday, November 1, 2007 at 8:17 AM
I just completed a project for migrating and resizing replicated volumes, and I documented the whole process as I implemented it.
The document has a detailed implementation (with screenshots) for moving all replicated volumes to different storage and then resizing them afterwards. The changes were done on both the primary and secondary sites. No downtime was necessary for this change.
It took me three nights to complete the project.
Table of Contents
Project Description
Server Configurations
Network Configurations
Essential Terminology
Existing Storage Area Network (SAN) Configuration
New SAN Allocations
Primary - NEW LUNs/Devices Distribution
DR – NEW LUNs/Devices Distribution
Technical Implementation Steps/Procedures
Detailed Implementation
- Note down the existing LUN and Volume configurations
- SAN Team to present/zone new LUNs to the servers
- Initialize the new LUNs
- Mirror all volumes including the SRL
- Disassociate the SNAP volumes in the DR environment
- Mirror the SNAP volumes in the DR environment
- Add DCM Logs with the new 17Gb LUNs
- Split the mirrors and remove the plexes on the original 17Gb LUNs
- Remove the OLD DCM logs.
- Remove the SRL mirror with the old 17Gb disks.
- Remove the OLD 17Gb disks from the diskgroup and from Volume Manager control.
- Resize the SRL Volume in Primary
- Resize the SRL Volume in DR
- Resize the Replicated Volumes
- Resize the DCM Logs in DR
- Prepare the DR Volumes for Snapshot
- Resize the SNAP Volumes in DR
- Prepare the SNAP Volumes in DR
- Run a point-in-time snapshot in DR
Project Description
Migrate all volumes in the Replicated Volume Group from smaller 17Gb LUNs to new, larger 136Gb LUNs. This is required because the volumes are to be grown to a total SAN storage size of 7.2TB in Primary and 11.3TB in DR, which would exceed the systems' maximum number of LUNs per HBA (devices per controller) if 17Gb LUNs were used.
Server Configurations
Primary (Clustered)
- sunsrv01 (Solaris 8, SunFire V440)
- sunsrv02 (Solaris 8, SunFire V440)
Secondary (Clustered)
- sunsrv01-DR (Solaris 8, SunFire V440)
- sunsrv02-DR (Solaris 8, SunFire V440)
Network Configurations
IP Address
- sunsrv01
  - 10.10.231.130 sunsrv01 Host IP
  - 10.11.196.192 sunsrv01-DRN Primary/DR Link IP
- sunsrv02
  - 10.10.231.131 sunsrv02 Host IP
  - 10.11.196.193 sunsrv02-DRN Primary/DR Link IP
- sunsrv01-DR
  - 10.10.231.130 sunsrv01-dr Host IP
  - 10.12.94.192 sunsrv01dr-DRN Primary/DR Link IP
- sunsrv02-DR
  - 10.10.231.131 sunsrv02-dr Host IP
  - 10.12.94.193 sunsrv02dr-DRN Primary/DR Link IP
VVR Config
- RVG dvgy415_rvg
- SRL dvgy415_srl
- RLINKs rlk_pdpd415dr-vipvr_dvgy415_rvg (Primary)
  rlk_pdpd415Primary-vipvr_dvgy415_rvg (DR)
Virtual IPs
- sunsrv01/sunsrv02
  - Application 10.10.231.191
  - Primary/DR Link 10.11.196.191
- sunsrv01-DR/sunsrv02-DR
  - Application 10.10.231.191
  - Primary/DR Link 10.12.94.191
Essential Terminology
VVR - Veritas Volume Replicator. Software that replicates data to remote locations over an IP network for maximum business continuity.
RVG - Replication Volume Group. An RVG is a subset of volumes within a given VxVM disk group configured for replication to one or more secondary systems. Data is replicated from a primary RVG to a secondary RVG. The primary RVG is in use by an application, while the secondary RVG receives replication and writes to local disk. The concept of primary and secondary is per RVG, not per system. A system can simultaneously be a primary RVG for some RVGs and a secondary RVG for others. The RVG also contains the Storage Replicator Log (SRL) and Replication Link (RLINK).
SRL - Storage Replicator Log. All data writes destined for volumes configured for replication are first persistently queued in a log called the Storage Replicator Log. VVR implements the SRL at the primary side to store all changes for transmission to the secondary(s). The SRL is a VxVM volume configured as part of an RVG. The SRL gives VVR the ability to associate writes to specific volumes within the replicated configuration in a specific order, maintaining write-order fidelity at the secondary. All writes sent to the VxVM volume layer, whether from an application such as a database writing directly to storage or an application accessing storage via a file system, are faithfully replicated in application write order to the secondary.
RLINK - An RLINK is a VVR Replication Link to a secondary RVG. Each RLINK on a Primary RVG represents the communication link from the Primary RVG to a corresponding Secondary RVG, via an IP connection.
DCM - Data Change Maps (DCM) are used to mark sections of volumes on the primary that have changed during extended network outages in order to minimize the amount of data that must be synchronized to the secondary site during the outage.
DCO - Disk Change Object (DCO) volumes are used by Volume Manager FastResync (FR). FR is used to quickly resynchronize mirrored volumes that have been temporarily split and rejoined. FR works by copying only changes to the newly reattached volume using FR logging. This process reduces the time required to rejoin a split mirror, and requires less processing power than full mirror resynchronization without logging.
Existing Storage Area Network (SAN) Configuration
- Each Primary server has 2 HBAs attached to the fabric.
- Primary servers are using Veritas DMP for multipathing.
- Each Primary server is presented with shared 95 x 17Gb EMC LUNs (IBM managed).
- Each DR server has 4 HBAs (2 dual ports) attached to the fabric.
- DR servers are using Veritas DMP for multipathing.
- Each DR server is presented with shared 8 x 136Gb EMC LUNs and 198 x 17Gb EMC LUNs.
- DR SAN is managed by EMC.
New SAN Allocations
Primary - NEW LUNs/Devices Distribution
Note: Device Names may change during server reboot with reconfig option.
VOLUMES DEVICES DISK NAMES
DB          EMC0_139 dvgy41596
            EMC0_140 dvgy41597
            EMC0_142 dvgy41598
            EMC0_143 dvgy41599
            EMC0_144 dvgy415100
            EMC0_146 dvgy415101
            EMC0_150 dvgy415102
            EMC0_153 dvgy415103
            EMC0_154 dvgy415104
            EMC0_164 dvgy415105
LG1         EMC0_148 dvgy415106
            EMC0_155 dvgy415107
            EMC0_156 dvgy415108
BCP         EMC0_157 dvgy415109
            EMC0_158 dvgy415110
TP01        EMC0_161 dvgy415111
            EMC0_138 dvgy415112
            EMC0_141 dvgy415113
DB2/DBA/BCP EMC0_145 dvgy415114
SRL AND DCM LOGS DEVICES DISK NAMES
SRL         EMC0_147 dvgy415115
            EMC0_149 dvgy415116
            EMC0_151 dvgy415117
            EMC0_152 dvgy415118
            EMC0_159 dvgy415119
            EMC0_160 dvgy415120
            EMC0_163 dvgy415121
DCM LOG     EMC0_130 dvgy415122
            EMC0_162 dvgy415123
DR – NEW LUNs/Devices Distribution
Note: Device Names may change during server reboot with reconfig option.
Technical Implementation Steps/Procedures (NO DOWNTIME!)
1. Note down the existing LUN configurations.
2. Present new LUNs to the servers.
   - 53 x 136Gb LUNs to Primary for the Replicating Dataset.
   - 2 x 17Gb LUNs to Primary for DCM Logs.
   - 50 x 136Gb LUNs to DR for the Landing area, Snap Copy, and Backup filesystems. The existing 17Gb LUNs will be reformatted by EMC to change the LUN sizes to 136Gb.
   - 2 x 17Gb LUNs to DR for DCM Logs.
3. Initialize the new LUNs and assign them to the diskgroup for replicated volumes. This step is for both Primary and DR.
4. Mirror each replicated volume and the SRL volume using the new LUNs as the mirror disks. This step is for both Primary and DR.
5. Disassociate the SNAP volumes (this may not be necessary).
   a. Disassociate the snap volumes from the snapshot hierarchy.
   b. Unprepare the snap volumes.
6. Mirror the snap volumes in the DR environment using the new LUNs as the mirror disks.
7. For each volume, add a DCM log using the 2 new 17Gb LUNs.
8. When all volumes are completely mirrored, remove the mirror copies that were on the 17Gb disks.
9. Remove the old DCM logs that were on the old LUNs. For both Primary and DR.
10. Remove both Primary and DR SRL mirror copies that are on the 17Gb disks.
11. Remove all the old 17Gb disks from the diskgroup and from Veritas Volume Manager control. For both Primary and DR.
12. Resize the SRL volume in Primary.
13. Resize the SRL volume in DR.
14. Resize the replicated volumes.
15. Remove and re-add the DCM logs on the Primary and DR volumes to have bigger log sizes. This step is necessary because the original logs were too small for the new volume sizes.
16. Prepare the DR volumes for snapshot.
17. Resize the SNAP volumes in DR (or re-create them if you do not want to preserve the data on the snapshot volumes).
18. Prepare the SNAP volumes in DR.
19. Run a point-in-time snapshot in DR (this will overwrite the previous data in the snap volumes).
Detailed Implementation
1. Note down the existing LUN and Volume configurations
# format
# vxdisk list
# vxprint -ht
2. SAN Team to present/zone new LUNs to the servers
Verify the connectivity to the fabric
# luxadm -e port
Found path to 4 HBA ports
/devices/pci@1e,600000/SUNW,qlc@3/fp@0,0:devctl CONNECTED /devices/pci@1e,600000/SUNW,qlc@3,1/fp@0,0:devctl CONNECTED /devices/pci@1e,600000/SUNW,qlc@4/fp@0,0:devctl CONNECTED /devices/pci@1e,600000/SUNW,qlc@4,1/fp@0,0:devctl CONNECTED
Discover the new LUNs (look for “connected unconfigured”)
# cfgadm -o show_FCP_dev -al Ap_Id Type Receptacle Occupant Condition c3::50060482ccaad688,1 unavailable connected unconfigured unknown c3::50060482ccaad688,2 unavailable connected unconfigured unknown c3::50060482ccaad688,3 unavailable connected unconfigured unknown . . c4 fc-fabric connected unconfigured unknown c4::50060482ccaad686 disk connected unconfigured unknown c4::50060482ccaad688 disk connected unconfigured unknown . . c5::50060482ccaad689,1 unavailable connected unconfigured unknown c5::50060482ccaad689,2 unavailable connected unconfigured unknown c5::50060482ccaad689,3 unavailable connected unconfigured unknown . . c6 fc-fabric connected unconfigured unknown c6::50060482ccaad687 disk connected unconfigured unknown c6::50060482ccaad689 disk connected unconfigured unknown
Configure the new LUNs so the operating system can see them. Run on all Cluster Nodes.
# cfgadm -c configure c3 # cfgadm -c configure c4 # cfgadm -c configure c5 # cfgadm -c configure c6
Verify if the new LUNs have been configured
# cfgadm -o show_FCP_dev -al Ap_Id Type Receptacle Occupant Condition c3::50060482ccaad688,1 disk connected configured unknown c3::50060482ccaad688,2 disk connected configured unknown c3::50060482ccaad688,3 disk connected configured unknown . . c4 fc-fabric connected configured unknown c4::50060482ccaad686,0 disk connected configured unknown c4::50060482ccaad686,43 disk connected configured unknown c4::50060482ccaad686,44 disk connected configured unknown . . c5::50060482ccaad689,1 disk connected configured unknown c5::50060482ccaad689,2 disk connected configured unknown c5::50060482ccaad689,3 disk connected configured unknown . . . c6::50060482ccaad689,21 disk connected configured unknown c6::50060482ccaad689,22 disk connected configured unknown c6::50060482ccaad689,23 disk connected configured unknown
Verify if you can see the new devices in format. If not, run devfsadm. Run on all Cluster Nodes.
# /usr/sbin/devfsadm
Label the new disks. Create a file /tmp/format.cmd. Run on all Cluster Nodes.
# cat /tmp/format.cmd label quit
# for disk in `format < /dev/null 2> /dev/null | grep "^c" | cut -d: -f1` > do > format -s -f /tmp/format.cmd $disk > echo "labeled $disk ....." > done
Update volume manager. Run on all Cluster Nodes.
# vxdisk scandisks # vxdctl enable
Verify the number of LUNs, the LUN sizes, and the number of paths for each LUN
# vxdmpadm listctlr all # vxdmpadm getsubpaths ctlr=c3 [run for c3 c4 c5 c6 ] # vxdisk list # vxdisk path
Verify the LUN sizes. If some of the LUNs have the wrong size, notify the SAN admin.
# for i in `vxdisk list | grep EMC | grep - | awk '{ print $1 }'` > do > echo "$i \c" > disk=`vxdisk list $i | grep "^c3" | awk '{ print $1 }'` > echo "$disk\t\c" > prtvtoc /dev/rdsk/$disk | grep "^ 2 " > done
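The sector count in the slice-2 line can be converted to gigabytes for a quick sanity check. A minimal sketch, assuming 512-byte sectors (the function name is my own):

```shell
# Convert a sector count (512-byte sectors) to gigabytes, one decimal place.
sectors_to_gb() {
    echo "$1" | awk '{ printf "%.1f\n", $1 * 512 / 1024 / 1024 / 1024 }'
}
sectors_to_gb 35681280    # roughly a 17Gb LUN
```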
3. Initialize the new LUNs
# vxdiskadm
4. Mirror all volumes including the SRL
Primary: Specify the target disks (refer to page 6)
sunsrv01:# vxassist -g dvgy415 mirror db dvgy41596 dvgy41597 dvgy41598 \ dvgy41599 dvgy415100 sunsrv01:# vxassist -g dvgy415 mirror lg1 dvgy415106 dvgy415107 sunsrv01:# vxassist -g dvgy415 mirror bcp dvgy415109 dvgy415110 dvgy415114 sunsrv01:# vxassist -g dvgy415 mirror tp01 dvgy415111 sunsrv01:# vxassist -g dvgy415 mirror dba dvgy415114 sunsrv01:# vxassist -g dvgy415 mirror db2 dvgy415114
sunsrv01:# vxassist -g dvgy415 mirror dvgy415_srl dvgy415115 dvgy415116 dvgy415117
DR: Specify the target disks (refer to page 7)
sunsrv02-dr:# vxassist -g dvgy415 mirror db dvgy415201 dvgy415202 dvgy415203 \ dvgy415204 dvgy415205 sunsrv02-dr:# vxassist -g dvgy415 mirror lg1 dvgy415211 dvgy415212 sunsrv02-dr:# vxassist -g dvgy415 mirror bcp dvgy415214 dvgy415215 dvgy415219 sunsrv02-dr:# vxassist -g dvgy415 mirror tp01 dvgy415216 dvgy415217 dvgy415218 sunsrv02-dr:# vxassist -g dvgy415 mirror dba dvgy415219 sunsrv02-dr:# vxassist -g dvgy415 mirror db2 dvgy415219
sunsrv02-dr:# vxassist -g dvgy415 mirror dvgy415_srl dvgy415239 dvgy415240 \ dvgy415241
5. Disassociate the SNAP volumes in the DR environment
sunsrv02-dr:# vxsnap -g dvgy415 dis db_snapvol sunsrv02-dr:# vxsnap -g dvgy415 -f unprepare db_snapvol sunsrv02-dr:# vxsnap -g dvgy415 unprepare db
sunsrv02-dr:# vxsnap -g dvgy415 dis lg1_snapvol sunsrv02-dr:# vxsnap -g dvgy415 -f unprepare lg1_snapvol sunsrv02-dr:# vxsnap -g dvgy415 unprepare lg1
sunsrv02-dr:# vxsnap -g dvgy415 dis db2_snapvol sunsrv02-dr:# vxsnap -g dvgy415 -f unprepare db2_snapvol sunsrv02-dr:# vxsnap -g dvgy415 unprepare db2
sunsrv02-dr:# vxsnap -g dvgy415 dis bcp_snapvol sunsrv02-dr:# vxsnap -g dvgy415 -f unprepare bcp_snapvol sunsrv02-dr:# vxsnap -g dvgy415 unprepare bcp
sunsrv02-dr:# vxsnap -g dvgy415 dis tp01_snapvol sunsrv02-dr:# vxsnap -g dvgy415 -f unprepare tp01_snapvol sunsrv02-dr:# vxsnap -g dvgy415 unprepare tp01
sunsrv02-dr:# vxsnap -g dvgy415 dis dba_snapvol sunsrv02-dr:# vxsnap -g dvgy415 -f unprepare dba_snapvol sunsrv02-dr:# vxsnap -g dvgy415 unprepare dba
6. Mirror the SNAP volumes in the DR environment
sunsrv02-dr:# vxassist -g dvgy415 mirror db_snapvol dvgy415220 dvgy415221 \ dvgy415222 dvgy415223 dvgy415224 sunsrv02-dr:# vxassist -g dvgy415 mirror lg1_snapvol dvgy415230 dvgy415231 sunsrv02-dr:# vxassist -g dvgy415 mirror bcp_snapvol dvgy415233 dvgy415234 \ dvgy415238 sunsrv02-dr:# vxassist -g dvgy415 mirror tp01_snapvol dvgy415235 dvgy415236 \ dvgy415237 sunsrv02-dr:# vxassist -g dvgy415 mirror dba_snapvol dvgy415238 sunsrv02-dr:# vxassist -g dvgy415 mirror db2_snapvol dvgy415238
7. Add DCM Logs with the new 17Gb LUNs
Primary: Specify the target disks (refer to page 6)
sunsrv01:# vxassist -g dvgy415 addlog bcp logtype=dcm nlog=2 dvgy415122 dvgy415123 sunsrv01:# vxassist -g dvgy415 addlog db logtype=dcm nlog=2 dvgy415122 dvgy415123 sunsrv01:# vxassist -g dvgy415 addlog dba logtype=dcm nlog=2 dvgy415122 dvgy415123 sunsrv01:# vxassist -g dvgy415 addlog db2 logtype=dcm nlog=2 dvgy415122 dvgy415123 sunsrv01:# vxassist -g dvgy415 addlog lg1 logtype=dcm nlog=2 dvgy415122 dvgy415123 sunsrv01:# vxassist -g dvgy415 addlog tp01 logtype=dcm nlog=2 dvgy415122 dvgy415123
DR: Specify the target disks (refer to page 7)
sunsrv02-dr:# vxassist -g dvgy415 addlog bcp logtype=dcm nlog=2 dvgy415199 \ dvgy415200 sunsrv02-dr:# vxassist -g dvgy415 addlog db logtype=dcm nlog=2 dvgy415199 \ dvgy415200 sunsrv02-dr:# vxassist -g dvgy415 addlog dba logtype=dcm nlog=2 dvgy415199 \ dvgy415200 sunsrv02-dr:# vxassist -g dvgy415 addlog db2 logtype=dcm nlog=2 dvgy415199 \ dvgy415200 sunsrv02-dr:# vxassist -g dvgy415 addlog lg1 logtype=dcm nlog=2 dvgy415199 \ dvgy415200 sunsrv02-dr:# vxassist -g dvgy415 addlog tp01 logtype=dcm nlog=2 dvgy415199 \ dvgy415200
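Since the addlog command is identical for every volume, the six invocations above can be generated with a loop: echo them first for review, then pipe to sh. A minimal sketch using the DR disk names from this step:

```shell
# Generate (not run) the addlog command for each replicated volume.
# Review the output, then pipe it to sh to execute.
for vol in bcp db dba db2 lg1 tp01
do
    echo "vxassist -g dvgy415 addlog $vol logtype=dcm nlog=2 dvgy415199 dvgy415200"
done
```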
8. Split the mirrors and remove the plexes on the original 17Gb LUNs
First, verify that all volumes and plexes are in the ENABLED ACTIVE state.
Notice the three new plexes (03, 04, and 05). These are the mirrored copy of the data and the two new logs.
sunsrv01:# vxprint -htg dvgy415 | egrep "^v|^pl"
v bcp dvgy415_rvg ENABLED ACTIVE 585105408 SELECT - fsgen pl bcp-01 bcp ENABLED ACTIVE 585105600 CONCAT - RW pl bcp-02 bcp ENABLED ACTIVE LOGONLY CONCAT - RW pl bcp-03 bcp ENABLED ACTIVE 585108480 CONCAT - RW pl bcp-04 bcp ENABLED ACTIVE LOGONLY CONCAT - RW pl bcp-05 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
v dba dvgy415_rvg ENABLED ACTIVE 98566144 SELECT - fsgen pl dba-01 dba ENABLED ACTIVE 98567040 CONCAT - RW pl dba-02 dba ENABLED ACTIVE LOGONLY CONCAT - RW pl dba-03 dba ENABLED ACTIVE 98572800 CONCAT - RW pl dba-04 dba ENABLED ACTIVE LOGONLY CONCAT - RW pl dba-05 dba ENABLED ACTIVE LOGONLY CONCAT - RW
v db2 dvgy415_rvg ENABLED ACTIVE 8388608 SELECT - fsgen pl db2-01 db2 ENABLED ACTIVE 8389440 CONCAT - RW pl db2-02 db2 ENABLED ACTIVE LOGONLY CONCAT - RW pl db2-03 db2 ENABLED ACTIVE 8394240 CONCAT - RW pl db2-04 db2 ENABLED ACTIVE LOGONLY CONCAT - RW pl db2-05 db2 ENABLED ACTIVE LOGONLY CONCAT - RW
v tp01 dvgy415_rvg ENABLED ACTIVE 192937984 SELECT - fsgen pl tp01-01 tp01 ENABLED ACTIVE 192938880 CONCAT - RW pl tp01-02 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW pl tp01-03 tp01 ENABLED ACTIVE 192944640 CONCAT - RW pl tp01-04 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW pl tp01-05 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
v lg1 dvgy415_rvg ENABLED ACTIVE 396361728 SELECT - fsgen pl lg1-01 lg1 ENABLED ACTIVE 396361920 CONCAT - RW pl lg1-02 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW pl lg1-03 lg1 ENABLED ACTIVE 396364800 CONCAT - RW pl lg1-04 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW pl lg1-05 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
v db dvgy415_rvg ENABLED ACTIVE 1172343808 SELECT - fsgen pl db-01 db ENABLED ACTIVE 1172344320 CONCAT - RW pl db-02 db ENABLED ACTIVE LOGONLY CONCAT - RW pl db-03 db ENABLED ACTIVE 1172344320 CONCAT - RW pl db-04 db ENABLED ACTIVE LOGONLY CONCAT - RW pl db-05 db ENABLED ACTIVE LOGONLY CONCAT - RW
v dvgy415_srl dvgy415_rvg ENABLED ACTIVE 856350720 SELECT - SRL pl dvgy415_srl-01 dvgy415_srl ENABLED ACTIVE 856350720 CONCAT - RW pl dvgy415_srl-02 dvgy415_srl ENABLED ACTIVE 856350720 CONCAT - RW
In the DR environment, take note that the SRL has not completed the mirroring at this time.
sunsrv02-dr:# vxprint -htg dvgy415 | egrep "^v|^pl"
v bcp_snapvol - ENABLED ACTIVE 585105408 SELECT - fsgen
pl bcp_snapvol-01 bcp_snapvol ENABLED ACTIVE 585105600 CONCAT - RW
pl bcp_snapvol-02 bcp_snapvol ENABLED ACTIVE 585108480 CONCAT - RW

v db_snapvol - ENABLED ACTIVE 1172343808 SELECT - fsgen
pl db_snapvol-01 db_snapvol ENABLED ACTIVE 1172344320 CONCAT - RW
pl db_snapvol-02 db_snapvol ENABLED ACTIVE 1172344320 CONCAT - RW

v dba_snapvol - ENABLED ACTIVE 98566144 SELECT - fsgen
pl dba_snapvol-01 dba_snapvol ENABLED ACTIVE 98567040 CONCAT - RW
pl dba_snapvol-02 dba_snapvol ENABLED ACTIVE 98572800 CONCAT - RW

v db2_snapvol - ENABLED ACTIVE 8388608 SELECT - fsgen
pl db2_snapvol-01 db2_snapvol ENABLED ACTIVE 8389440 CONCAT - RW
pl db2_snapvol-02 db2_snapvol ENABLED ACTIVE 8394240 CONCAT - RW

v lg1_snapvol - ENABLED ACTIVE 396361728 SELECT - fsgen
pl lg1_snapvol-01 lg1_snapvol ENABLED ACTIVE 396361920 CONCAT - RW
pl lg1_snapvol-02 lg1_snapvol ENABLED ACTIVE 396364800 CONCAT - RW

v tp01_snapvol - ENABLED ACTIVE 192937984 SELECT - fsgen
pl tp01_snapvol-01 tp01_snapvol ENABLED ACTIVE 192938880 CONCAT - RW
pl tp01_snapvol-02 tp01_snapvol ENABLED ACTIVE 192944640 CONCAT - RW
v bcp dvgy415_rvg ENABLED ACTIVE 585105408 SELECT - fsgen
pl bcp-01 bcp ENABLED ACTIVE 585105600 CONCAT - RW
pl bcp-02 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
pl bcp-03 bcp ENABLED ACTIVE 585108480 CONCAT - RW
pl bcp-04 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
pl bcp-05 bcp ENABLED ACTIVE LOGONLY CONCAT - RW

v db dvgy415_rvg ENABLED ACTIVE 1172343808 SELECT - fsgen
pl db-01 db ENABLED ACTIVE 1172344320 CONCAT - RW
pl db-02 db ENABLED ACTIVE LOGONLY CONCAT - RW
pl db-03 db ENABLED ACTIVE 1172344320 CONCAT - RW
pl db-04 db ENABLED ACTIVE LOGONLY CONCAT - RW
pl db-05 db ENABLED ACTIVE LOGONLY CONCAT - RW

v dba dvgy415_rvg ENABLED ACTIVE 98566144 SELECT - fsgen
pl dba-01 dba ENABLED ACTIVE 98567040 CONCAT - RW
pl dba-02 dba ENABLED ACTIVE LOGONLY CONCAT - RW
pl dba-03 dba ENABLED ACTIVE 98572800 CONCAT - RW
pl dba-04 dba ENABLED ACTIVE LOGONLY CONCAT - RW
pl dba-05 dba ENABLED ACTIVE LOGONLY CONCAT - RW

v db2 dvgy415_rvg ENABLED ACTIVE 8388608 SELECT - fsgen
pl db2-01 db2 ENABLED ACTIVE 8389440 CONCAT - RW
pl db2-02 db2 ENABLED ACTIVE LOGONLY CONCAT - RW
pl db2-03 db2 ENABLED ACTIVE 8394240 CONCAT - RW
pl db2-04 db2 ENABLED ACTIVE LOGONLY CONCAT - RW
pl db2-05 db2 ENABLED ACTIVE LOGONLY CONCAT - RW

v lg1 dvgy415_rvg ENABLED ACTIVE 396361728 SELECT - fsgen
pl lg1-01 lg1 ENABLED ACTIVE 396361920 CONCAT - RW
pl lg1-02 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
pl lg1-03 lg1 ENABLED ACTIVE 396364800 CONCAT - RW
pl lg1-04 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
pl lg1-05 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW

v tp01 dvgy415_rvg ENABLED ACTIVE 192937984 SELECT - fsgen
pl tp01-01 tp01 ENABLED ACTIVE 192938880 CONCAT - RW
pl tp01-02 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
pl tp01-03 tp01 ENABLED ACTIVE 192944640 CONCAT - RW
pl tp01-04 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
pl tp01-05 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW

v dvgy415_srl dvgy415_rvg ENABLED ACTIVE 856350720 SELECT - SRL
pl dvgy415_srl-01 dvgy415_srl ENABLED ACTIVE 856350720 CONCAT - RW
pl dvgy415_srl-02 dvgy415_srl ENABLED TEMPRMSD 856350720 CONCAT - WO
When all volumes and plexes are in ENABLED ACTIVE state, remove the old mirrors.
sunsrv01:# vxplex -g dvgy415 -o rm dis dba-01
sunsrv01:# vxplex -g dvgy415 -o rm dis db2-01
sunsrv01:# vxplex -g dvgy415 -o rm dis bcp-01
sunsrv01:# vxplex -g dvgy415 -o rm dis tp01-01
sunsrv01:# vxplex -g dvgy415 -o rm dis lg1-01
sunsrv01:# vxplex -g dvgy415 -o rm dis db-01
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis dba-01
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis db2-01
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis bcp-01
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis tp01-01
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis lg1-01
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis db-01
9. Remove the OLD DCM logs.
sunsrv01:# vxplex -g dvgy415 -o rm dis dba-02
sunsrv01:# vxplex -g dvgy415 -o rm dis db2-02
sunsrv01:# vxplex -g dvgy415 -o rm dis bcp-02
sunsrv01:# vxplex -g dvgy415 -o rm dis tp01-02
sunsrv01:# vxplex -g dvgy415 -o rm dis lg1-02
sunsrv01:# vxplex -g dvgy415 -o rm dis db-02
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis dba-02
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis db2-02
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis bcp-02
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis tp01-02
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis lg1-02
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis db-02
10. Remove the SRL mirror with the old 17Gb disks.
First, verify the state of the RVG. Make sure it is still active and up to date.
sunsrv01:# vxrlink -g dvgy415 status rlk_pdpd415dr-vipvr_dvgy415_rvg
Thu Dec 13 10:31:40 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_pdpd415dr-vipvr_dvgy415_rvg is up to date
sunsrv01:# vxprint -Pl
Disk group: dvgy415

Rlink: rlk_pdpd415dr-vipvr_dvgy415_rvg
info: timeout=500 packet_size=8400 rid=0.2007 latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=dvgy415_rvg remote_host=pdpd415dr-vipvr IP_addr=10.12.94.191 port=4145 remote_dg=dvgy415 remote_dg_dgid=1182230506.2373.sunsrv01-dr remote_rvg_version=21 remote_rlink=rlk_pdpd415Primary-vipvr_dvgy415_rvg remote_rlink_rid=0.2127 local_host=pdpd415Primary-vipvr IP_addr=10.11.196.191 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected asynchronous
Remove the SRL mirror. Remember that the first copy is the older copy.
sunsrv01:# vxplex -g dvgy415 -o rm dis dvgy415_srl-01
sunsrv02-dr:# vxplex -g dvgy415 -o rm dis dvgy415_srl-01
Again, verify the state of the RVG. Make sure it is still active and up to date.
sunsrv01:# vxrlink -g dvgy415 status rlk_pdpd415dr-vipvr_dvgy415_rvg
Thu Dec 13 10:35:06 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_pdpd415dr-vipvr_dvgy415_rvg is up to date
sunsrv01:# vxprint -Pl
Disk group: dvgy415

Rlink: rlk_pdpd415dr-vipvr_dvgy415_rvg
info: timeout=500 packet_size=8400 rid=0.2007 latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=dvgy415_rvg remote_host=pdpd415dr-vipvr IP_addr=10.12.94.191 port=4145 remote_dg=dvgy415 remote_dg_dgid=1182230506.2373.sunsrv01-dr remote_rvg_version=21 remote_rlink=rlk_pdpd415Primary-vipvr_dvgy415_rvg remote_rlink_rid=0.2127 local_host=pdpd415Primary-vipvr IP_addr=10.11.196.191 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected asynchronous
11. Remove the OLD 17Gb disks from the diskgroup and from Volume Manager control.
There were originally 95 17Gb LUNs on the Primary server (see the third page of this document under “Existing SAN Configuration”), and by now all 95 should be unused. Use vxdg free to verify this; a 17Gb LUN shows up as 35681280 blocks.
sunsrv01:# vxdg -g dvgy415 free | grep 35681280 | wc -l
95
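The 35681280 figure can be sanity-checked: VxVM reports lengths in 512-byte sectors, so converting back gives roughly 17Gb. A minimal sketch, assuming a POSIX shell with 64-bit arithmetic:

```shell
# VxVM lengths are in 512-byte sectors.
blocks=35681280
bytes=$((blocks * 512))
gb=$((bytes / 1024 / 1024 / 1024))   # integer gigabytes
echo "$blocks blocks = ${gb}Gb (plus a small remainder from the LUN geometry)"
```

The remainder above the even 17Gb comes from how the array carves its hypervolumes, which is why grepping for the exact block count is more reliable than grepping for a size label.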
Now a small script to remove them:
sunsrv01:# vxdg -g dvgy415 free | grep 35681280 | awk '{ print $1" "$2 }' | while read disk device
> do
> vxdg -g dvgy415 rmdisk $disk
> vxdisk rm $device
> echo "disk:$disk device:$device REMOVED!"
> done
disk:dvgy41501 device:EMC0_36 REMOVED!
disk:dvgy41502 device:EMC0_29 REMOVED!
disk:dvgy41503 device:EMC0_25 REMOVED!
.
.
.
disk:dvgy41590 device:EMC0_89 REMOVED!
disk:dvgy41591 device:EMC0_90 REMOVED!
disk:dvgy41592 device:EMC0_91 REMOVED!
disk:dvgy41593 device:EMC0_92 REMOVED!
disk:dvgy41594 device:EMC0_93 REMOVED!
disk:dvgy41595 device:EMC0_94 REMOVED!
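Before running a destructive loop like the one above, the parsing stage can be exercised on its own. Here is a dry-run sketch fed with canned output in the shape of vxdg free (the rows and device names are illustrative, not from the live environment):

```shell
# Canned rows shaped like `vxdg -g dvgy415 free` output (illustrative).
sample="dvgy41501 EMC0_36 EMC0_36s2 0 35681280 -
dvgy41502 EMC0_29 EMC0_29s2 0 35681280 -
dvgy415115 EMC0_147 EMC0_147s2 0 285473280 -"

# Same grep/awk pipeline as the real loop, but only echoing the targets.
result=$(echo "$sample" | grep 35681280 | awk '{ print $1" "$2 }' | \
while read disk device
do
    echo "would remove disk:$disk device:$device"
done)
echo "$result"
```

Only the two 17Gb rows survive the grep; the larger LUN is left alone, which is exactly the behavior you want to confirm before swapping the echo back for vxdg rmdisk and vxdisk rm.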
Since the server is part of a cluster, the other node must be cleaned up as well. The logic is slightly different this time: instead of walking the disk group, remove every EMC device that is not associated with a dvgy disk group.
sunsrv02:# for i in `vxdisk -o alldgs list | grep EMC | grep -v dvgy | awk '{ print $1 }'`
> do
> vxdisk rm $i
> done
On the DR server, there were originally 198 17Gb disks, and all of them should be removed. Note that two slightly different 17Gb sizes exist on these servers.
sunsrv02-dr:# vxdg -g dvgy415 free | egrep "35726400|35681280" | wc -l
198
While removing the disks, I also displayed the WWN/LUN numbers so I could document which LUNs the SAN team should remove and reconfigure to the new 136Gb size (see the project description on the first page).
sunsrv02-dr:# vxdg -g dvgy415 free | egrep "35726400|35681280" | awk '{ print $1" "$2 }' | while read disk device
> do
> physical=`vxdisk list $device | egrep "^c3|^c5" | awk '{ print $1 }'`
> vxdg -g dvgy415 rmdisk $disk
> vxdisk rm $device
> echo "$physical disk:$disk device:$device REMOVED!"
> done
c3t50060482CC3722B9d204s2 c5t50060482CC3722B6d204s2 disk:dvgy41501 device:EMC0_21 REMOVED!
c3t50060482CC3722B9d203s2 c5t50060482CC3722B6d203s2 disk:dvgy41502 device:EMC0_29 REMOVED!
.
.
.
.
c5t50060482CC3722B6d101s2 disk:dvgy415194 device:EMC0_10 REMOVED!
c3t50060482CC3722B9d102s2 c5t50060482CC3722B6d102s2 disk:dvgy415195 device:EMC0_147 REMOVED!
c3t50060482CC3722B9d103s2 c5t50060482CC3722B6d103s2 disk:dvgy415196 device:EMC0_148 REMOVED!
c3t50060482CC3722B9d104s2 c5t50060482CC3722B6d104s2 disk:dvgy415197 device:EMC0_149 REMOVED!
c3t50060482CC3722B9d105s2 c5t50060482CC3722B6d105s2 disk:dvgy415198 device:EMC0_150 REMOVED!
Since the server is part of a cluster, the other node must be cleaned up as well.
sunsrv01-dr:# for i in `vxdisk -o alldgs list | grep EMC | grep -v dvgy | awk '{ print $1 }'`
> do
> vxdisk rm $i
> done
12. Resize the SRL Volume in Primary
First, display the SRL volume information
sunsrv01:# vxprint -ht dvgy415_srl
v dvgy415_srl dvgy415_rvg ENABLED ACTIVE 856350720 SELECT - SRL
pl dvgy415_srl-02 dvgy415_srl ENABLED ACTIVE 856350720 CONCAT - RW
sd dvgy415115-01 dvgy415_srl-02 dvgy415115 5376 285473280 0 EMC0_147 ENA
sd dvgy415116-01 dvgy415_srl-02 dvgy415116 5376 285473280 285473280 EMC0_149 ENA
sd dvgy415117-01 dvgy415_srl-02 dvgy415117 5376 285404160 570946560 EMC0_151 ENA
Check the rlink status and make sure it’s up to date
sunsrv01:# vxrlink -g dvgy415 status rlk_pdpd415dr-vipvr_dvgy415_rvg
Fri Dec 14 21:24:49 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_pdpd415dr-vipvr_dvgy415_rvg is up to date
Display the maximum space we can add using the remaining disks allotted for the SRL.
sunsrv01:# vxassist -g dvgy415 -p maxsize dvgy415118 dvgy415119 dvgy415120 dvgy415121
1141893120
Increase the SRL size by 1141893120 blocks.
sunsrv01:# vradmin -g dvgy415 resizesrl dvgy415_rvg +1141893120
Check the volume information again and verify that the size has increased.
sunsrv01:# vxprint -ht dvgy415_srl
Disk group: dvgy415

v dvgy415_srl dvgy415_rvg ENABLED ACTIVE 1998243840 SELECT - SRL
pl dvgy415_srl-02 dvgy415_srl ENABLED ACTIVE 1998243840 CONCAT - RW
sd dvgy415115-01 dvgy415_srl-02 dvgy415115 5376 285473280 0 EMC0_147 ENA
sd dvgy415116-01 dvgy415_srl-02 dvgy415116 5376 285473280 285473280 EMC0_149 ENA
sd dvgy415117-01 dvgy415_srl-02 dvgy415117 5376 285473280 570946560 EMC0_151 ENA
sd dvgy415118-01 dvgy415_srl-02 dvgy415118 5376 285473280 856419840 EMC0_152 ENA
sd dvgy415119-01 dvgy415_srl-02 dvgy415119 5376 285473280 1141893120 EMC0_159 ENA
sd dvgy415120-01 dvgy415_srl-02 dvgy415120 5376 285473280 1427366400 EMC0_160 ENA
sd dvgy415121-01 dvgy415_srl-02 dvgy415121 5376 285404160 1712839680 EMC0_163 ENA
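The new length is simply the old SRL length plus the increment passed to vradmin, which makes for a quick arithmetic cross-check:

```shell
old_len=856350720      # SRL length before the resize, in 512-byte sectors
increment=1141893120   # value passed to vradmin resizesrl
new_len=$((old_len + increment))
echo "expected SRL length: $new_len sectors"   # 1998243840, matching vxprint
```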
13. Resize the SRL Volume in DR
First, display the SRL volume information
sunsrv02-dr:# vxprint -ht dvgy415_srl
Disk group: dvgy415

v dvgy415_srl dvgy415_rvg ENABLED ACTIVE 856350720 SELECT - SRL
pl dvgy415_srl-02 dvgy415_srl ENABLED ACTIVE 856350720 CONCAT - RW
sd dvgy415239-01 dvgy415_srl-02 dvgy415239 5376 285473280 0 EMC0_244 ENA
sd dvgy415240-01 dvgy415_srl-02 dvgy415240 5376 285473280 285473280 EMC0_245 ENA
sd dvgy415241-01 dvgy415_srl-02 dvgy415241 5376 285404160 570946560 EMC0_246 ENA
Detach the Rlink so the SRL can be disassociated.
sunsrv02-dr:# vxrlink -g dvgy415 det rlk_pdpd415Primary-vipvr_dvgy415_rvg
Disassociate the SRL volume from the RVG
sunsrv02-dr:# vxvol -g dvgy415 dis dvgy415_srl
Verify that the volume has been disassociated; the third column should no longer show the RVG.
sunsrv02-dr:# vxprint -ht dvgy415_srl
Disk group: dvgy415

v dvgy415_srl - ENABLED ACTIVE 856350720 SELECT - fsgen
pl dvgy415_srl-02 dvgy415_srl ENABLED ACTIVE 856350720 CONCAT - RW
sd dvgy415239-01 dvgy415_srl-02 dvgy415239 5376 285473280 0 EMC0_244 ENA
sd dvgy415240-01 dvgy415_srl-02 dvgy415240 5376 285473280 285473280 EMC0_245 ENA
sd dvgy415241-01 dvgy415_srl-02 dvgy415241 5376 285404160 570946560 EMC0_246 ENA
Display the Rlink status. Note that the state is now STALE and the flags show detached.
sunsrv02-dr:# vxprint -Pl
Disk group: dvgy415

Rlink: rlk_pdpd415Primary-vipvr_dvgy415_rvg
info: timeout=500 packet_size=8400 rid=0.2127 latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=STALE synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=dvgy415_rvg remote_host=pdpd415Primary-vipvr IP_addr=10.11.196.191 port=4145 remote_dg=dvgy415 remote_dg_dgid=1138140445.1393.sunsrv01 remote_rvg_version=21 remote_rlink=rlk_pdpd415dr-vipvr_dvgy415_rvg remote_rlink_rid=0.2007 local_host=pdpd415dr-vipvr IP_addr=10.12.94.191 port=4145
protocol: UDP/IP
flags: write enabled detached consistent disconnected
Increase the size of the SRL by the same amount used for the Primary SRL.
sunsrv02-dr:# vxassist -g dvgy415 growby dvgy415_srl 1141893120 dvgy415241 dvgy415242 dvgy415243 dvgy415244 dvgy415245
Check the volume information again and verify that the size has increased and matches the Primary SRL volume.
sunsrv02-dr:# vxprint -ht dvgy415_srl
Disk group: dvgy415

v dvgy415_srl - ENABLED ACTIVE 1998243840 SELECT - fsgen
pl dvgy415_srl-02 dvgy415_srl ENABLED ACTIVE 1998243840 CONCAT - RW
sd dvgy415239-01 dvgy415_srl-02 dvgy415239 5376 285473280 0 EMC0_244 ENA
sd dvgy415240-01 dvgy415_srl-02 dvgy415240 5376 285473280 285473280 EMC0_245 ENA
sd dvgy415241-01 dvgy415_srl-02 dvgy415241 5376 285473280 570946560 EMC0_246 ENA
sd dvgy415242-01 dvgy415_srl-02 dvgy415242 5376 285473280 856419840 EMC0_247 ENA
sd dvgy415243-01 dvgy415_srl-02 dvgy415243 5376 285826560 1141893120 EMC0_248 ENA
sd dvgy415244-01 dvgy415_srl-02 dvgy415244 5376 285473280 1427719680 EMC0_249 ENA
sd dvgy415245-01 dvgy415_srl-02 dvgy415245 5376 285050880 1713192960 EMC0_250 ENA
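The plex length can also be verified by summing the subdisk lengths (the sixth column of the sd rows in the vxprint -ht output):

```shell
# Subdisk lengths of dvgy415_srl-02 on the DR node, taken from vxprint -ht.
total=0
for len in 285473280 285473280 285473280 285473280 285826560 285473280 285050880
do
    total=$((total + len))
done
echo "sum of subdisk lengths: $total"   # 1998243840, equal to the plex length
```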
Re-associate the SRL volume with the RVG
sunsrv02-dr:# vxvol -g dvgy415 aslog dvgy415_rvg dvgy415_srl
Re-attach the Rlink
sunsrv02-dr:# vxrlink -g dvgy415 att rlk_pdpd415Primary-vipvr_dvgy415_rvg
Check the state of the RVG. It should be active and attached.
sunsrv02-dr:# vxprint -Pl
Disk group: dvgy415

Rlink: rlk_pdpd415Primary-vipvr_dvgy415_rvg
info: timeout=500 packet_size=8400 rid=0.2127 latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=dvgy415_rvg remote_host=pdpd415Primary-vipvr IP_addr=10.11.196.191 port=4145 remote_dg=dvgy415 remote_dg_dgid=1138140445.1393.sunsrv01 remote_rvg_version=21 remote_rlink=rlk_pdpd415dr-vipvr_dvgy415_rvg remote_rlink_rid=0.2007 local_host=pdpd415dr-vipvr IP_addr=10.12.94.191 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected
14. Resize the Replicated Volumes
Resize the volume using vradmin, run from the primary site. The change is applied on both the primary and the secondary site (Primary and DR).
sunsrv01:# vradmin -g dvgy415 resizevol dvgy415_rvg db 1361g
Message from Host pdpd415dr-vipvr:
VxVM vxassist WARNING V-5-1-9592 DCM log size is smaller than recommended due to increased volume size.
The DCM log will be fixed later.
Verify that both sites show the same size and that the volumes are in ENABLED ACTIVE state.
sunsrv01:# vxprint -ht db
Disk group: dvgy415

v db dvgy415_rvg ENABLED ACTIVE 2854223872 SELECT - fsgen
pl db-03 db ENABLED ACTIVE 2854225920 CONCAT - RW
sd dvgy41596-01 db-03 dvgy41596 5376 285473280 0 EMC0_139 ENA
sd dvgy41597-01 db-03 dvgy41597 5376 285473280 285473280 EMC0_140 ENA
sd dvgy41598-01 db-03 dvgy41598 5376 285473280 570946560 EMC0_142 ENA
sd dvgy41599-01 db-03 dvgy41599 5376 285473280 856419840 EMC0_143 ENA
sd dvgy415100-01 db-03 dvgy415100 5376 285473280 1141893120 EMC0_144 ENA
sd dvgy415101-01 db-03 dvgy415101 5376 285473280 1427366400 EMC0_146 ENA
sd dvgy415102-01 db-03 dvgy415102 5376 285473280 1712839680 EMC0_150 ENA
sd dvgy415103-01 db-03 dvgy415103 5376 285473280 1998312960 EMC0_153 ENA
sd dvgy415104-01 db-03 dvgy415104 5376 285473280 2283786240 EMC0_154 ENA
sd dvgy415105-01 db-03 dvgy415105 5376 284966400 2569259520 EMC0_164 ENA
pl db-04 db ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415122-02 db-04 dvgy415122 576 512 LOG EMC0_130 ENA
pl db-05 db ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415123-02 db-05 dvgy415123 576 512 LOG EMC0_162 ENA
sunsrv02-dr:# vxprint -ht db
Disk group: dvgy415

v db dvgy415_rvg ENABLED ACTIVE 2854223872 SELECT - fsgen
pl db-03 db ENABLED ACTIVE 2854225920 CONCAT - RW
sd dvgy415201-01 db-03 dvgy415201 5376 285473280 0 EMC0_206 ENA
sd dvgy415202-01 db-03 dvgy415202 5376 285473280 285473280 EMC0_207 ENA
sd dvgy415203-01 db-03 dvgy415203 5376 285473280 570946560 EMC0_208 ENA
sd dvgy415204-01 db-03 dvgy415204 5376 285473280 856419840 EMC0_209 ENA
sd dvgy415205-01 db-03 dvgy415205 5376 285473280 1141893120 EMC0_210 ENA
sd dvgy415206-01 db-03 dvgy415206 5376 285473280 1427366400 EMC0_211 ENA
sd dvgy415207-01 db-03 dvgy415207 5376 285473280 1712839680 EMC0_212 ENA
sd dvgy415208-01 db-03 dvgy415208 5376 285473280 1998312960 EMC0_213 ENA
sd dvgy415209-01 db-03 dvgy415209 5376 285473280 2283786240 EMC0_214 ENA
sd dvgy415210-01 db-03 dvgy415210 5376 284966400 2569259520 EMC0_215 ENA
pl db-04 db ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415199-02 db-04 dvgy415199 64 64 LOG EMC0_256 ENA
pl db-05 db ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415200-02 db-05 dvgy415200 64 64 LOG EMC0_257 ENA
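The 1361g argument maps directly onto the 2854223872-sector length reported on both sites, since a gigabyte here is 2^30 bytes and VxVM lengths are 512-byte sectors:

```shell
gb=1361
sectors=$((gb * 1024 * 1024 * 2))   # 1Gb = 2097152 sectors of 512 bytes
echo "${gb}g = $sectors sectors"
```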
Check the status of the Rlink
sunsrv01:# vxrlink -g dvgy415 status rlk_pdpd415dr-vipvr_dvgy415_rvg
Fri Dec 14 22:16:21 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_pdpd415dr-vipvr_dvgy415_rvg is up to date
Verify that the filesystem has the new size
sunsrv01:# df -k /dev/vx/dsk/dvgy415/db
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dvgy415/db 1427111936 30333310 1309480015 3% /db/pdpd415/PEMMP00P/NODE0000
Follow the same steps for the rest of the volumes in the RVG
sunsrv01:# df -k /dev/vx/dsk/dvgy415/lg1
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dvgy415/lg1 198180864 3231502 182765091 2% /db/pdpd415/log1
sunsrv01:# vxprint -ht lg1
Disk group: dvgy415

v lg1 dvgy415_rvg ENABLED ACTIVE 396361728 SELECT - fsgen
pl lg1-03 lg1 ENABLED ACTIVE 396364800 CONCAT - RW
sd dvgy415106-01 lg1-03 dvgy415106 5376 285473280 0 EMC0_148 ENA
sd dvgy415107-01 lg1-03 dvgy415107 5376 110891520 285473280 EMC0_155 ENA
pl lg1-04 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415122-05 lg1-04 dvgy415122 2496 512 LOG EMC0_130 ENA
pl lg1-05 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415123-05 lg1-05 dvgy415123 2496 512 LOG EMC0_162 ENA
sunsrv01:# vradmin -g dvgy415 resizevol dvgy415_rvg lg1 +460054528
Message from Host pdpd415dr-vipvr:
VxVM vxassist WARNING V-5-1-9592 DCM log size is smaller than recommended due to increased volume size.
sunsrv01:# df -k /dev/vx/dsk/dvgy415/lg1
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dvgy415/lg1 428208128 3287884 398362793 1% /db/pdpd415/log1
sunsrv01:# vxprint -ht lg1
Disk group: dvgy415

v lg1 dvgy415_rvg ENABLED ACTIVE 856416256 SELECT - fsgen
pl lg1-03 lg1 ENABLED ACTIVE 856419840 CONCAT - RW
sd dvgy415106-01 lg1-03 dvgy415106 5376 285473280 0 EMC0_148 ENA
sd dvgy415107-01 lg1-03 dvgy415107 5376 285473280 285473280 EMC0_155 ENA
sd dvgy415108-01 lg1-03 dvgy415108 5376 285473280 570946560 EMC0_156 ENA
pl lg1-04 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415122-05 lg1-04 dvgy415122 2496 512 LOG EMC0_130 ENA
pl lg1-05 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415123-05 lg1-05 dvgy415123 2496 512 LOG EMC0_162 ENA
sunsrv02-dr:# vxprint -ht lg1
Disk group: dvgy415

v lg1 dvgy415_rvg ENABLED ACTIVE 856416256 SELECT - fsgen
pl lg1-03 lg1 ENABLED ACTIVE 856419840 CONCAT - RW
sd dvgy415211-01 lg1-03 dvgy415211 5376 285473280 0 EMC0_216 ENA
sd dvgy415212-01 lg1-03 dvgy415212 5376 285473280 285473280 EMC0_217 ENA
sd dvgy415213-01 lg1-03 dvgy415213 5376 285473280 570946560 EMC0_218 ENA
pl lg1-04 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415199-05 lg1-04 dvgy415199 256 64 LOG EMC0_256 ENA
pl lg1-05 lg1 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415200-05 lg1-05 dvgy415200 256 64 LOG EMC0_257 ENA
sunsrv01:# df -k /dev/vx/dsk/dvgy415/tp01
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dvgy415/tp01 96468992 40177 90402085 1% /db/pdpd415/PEMMP00P/tempspace01/NODE0000

sunsrv01:# vxprint -ht tp01
Disk group: dvgy415

v tp01 dvgy415_rvg ENABLED ACTIVE 192937984 SELECT - fsgen
pl tp01-03 tp01 ENABLED ACTIVE 192944640 CONCAT - RW
sd dvgy415111-01 tp01-03 dvgy415111 5376 192944640 0 EMC0_161 ENA
pl tp01-04 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415122-06 tp01-04 dvgy415122 3456 512 LOG EMC0_130 ENA
pl tp01-05 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415123-06 tp01-05 dvgy415123 3456 512 LOG EMC0_162 ENA
sunsrv01:# vxrlink -g dvgy415 status rlk_pdpd415dr-vipvr_dvgy415_rvg
Fri Dec 14 22:44:56 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_pdpd415dr-vipvr_dvgy415_rvg is up to date
sunsrv01:# vradmin -g dvgy415 resizevol dvgy415_rvg tp01 +663474176
Message from Host pdpd415dr-vipvr:
VxVM vxassist WARNING V-5-1-9592 DCM log size is smaller than recommended due to increased volume size.
sunsrv01:# df -k /dev/vx/dsk/dvgy415/tp01
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/dvgy415/tp01 428206080 121489 401329375 1% /db/pdpd415/PEMMP00P/tempspace01/NODE0000
sunsrv01:# vxprint -ht tp01
Disk group: dvgy415

v tp01 dvgy415_rvg ENABLED ACTIVE 856412160 SELECT - fsgen
pl tp01-03 tp01 ENABLED ACTIVE 856412160 CONCAT - RW
sd dvgy415111-01 tp01-03 dvgy415111 5376 285473280 0 EMC0_161 ENA
sd dvgy415112-01 tp01-03 dvgy415112 5376 285473280 285473280 EMC0_138 ENA
sd dvgy415113-01 tp01-03 dvgy415113 5376 285465600 570946560 EMC0_141 ENA
pl tp01-04 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415122-06 tp01-04 dvgy415122 3456 512 LOG EMC0_130 ENA
pl tp01-05 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415123-06 tp01-05 dvgy415123 3456 512 LOG EMC0_162 ENA
sunsrv02-dr:# vxprint -ht tp01
Disk group: dvgy415

v tp01 dvgy415_rvg ENABLED ACTIVE 856412160 SELECT - fsgen
pl tp01-03 tp01 ENABLED ACTIVE 856412160 CONCAT - RW
sd dvgy415216-01 tp01-03 dvgy415216 5376 285473280 0 EMC0_221 ENA
sd dvgy415217-01 tp01-03 dvgy415217 5376 285473280 285473280 EMC0_222 ENA
sd dvgy415218-01 tp01-03 dvgy415218 5376 285465600 570946560 EMC0_223 ENA
pl tp01-04 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415199-06 tp01-04 dvgy415199 320 64 LOG EMC0_256 ENA
pl tp01-05 tp01 ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415200-06 tp01-05 dvgy415200 320 64 LOG EMC0_257 ENA
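The lg1 and tp01 growth amounts can be checked the same way as the SRL: the old volume length plus the +increment passed to vradmin should equal the new length reported by vxprint on both sites:

```shell
lg1_new=$((396361728 + 460054528))    # lg1 length before resize + increment
tp01_new=$((192937984 + 663474176))   # tp01 length before resize + increment
echo "lg1=$lg1_new tp01=$tp01_new"    # 856416256 and 856412160 sectors
```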
15. Resize the DCM Logs in DR.
This is necessary because the original logs were too small for the new volume sizes.
Display the current LOG sizes.
sunsrv02-dr:# vxprint -tg dvgy415 | grep " LOG "
sd dvgy415199-01 bcp-04 dvgy415199 0 64 LOG EMC0_256 ENA
sd dvgy415199-02 db-04 dvgy415199 64 64 LOG EMC0_256 ENA
sd dvgy415199-03 dba-04 dvgy415199 128 64 LOG EMC0_256 ENA
sd dvgy415199-04 db2-04 dvgy415199 192 64 LOG EMC0_256 ENA
sd dvgy415199-05 lg1-04 dvgy415199 256 64 LOG EMC0_256 ENA
sd dvgy415199-06 tp01-04 dvgy415199 320 64 LOG EMC0_256 ENA

sd dvgy415200-01 bcp-05 dvgy415200 0 64 LOG EMC0_257 ENA
sd dvgy415200-02 db-05 dvgy415200 64 64 LOG EMC0_257 ENA
sd dvgy415200-03 dba-05 dvgy415200 128 64 LOG EMC0_257 ENA
sd dvgy415200-04 db2-05 dvgy415200 192 64 LOG EMC0_257 ENA
sd dvgy415200-05 lg1-05 dvgy415200 256 64 LOG EMC0_257 ENA
sd dvgy415200-06 tp01-05 dvgy415200 320 64 LOG EMC0_257 ENA
Turn off the SRL protection
sunsrv02-dr:# vxedit -g dvgy415 set srlprot=off rlk_pdpd415Primary-vipvr_dvgy415_rvg
Verify the state of srlprot
sunsrv02-dr:# vxprint -Pl
Disk group: dvgy415

Rlink: rlk_pdpd415Primary-vipvr_dvgy415_rvg
info: timeout=500 packet_size=8400 rid=0.2127 latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=ACTIVE synchronous=off latencyprot=off srlprot=off
assoc: rvg=dvgy415_rvg remote_host=pdpd415Primary-vipvr IP_addr=10.11.196.191 port=4145 remote_dg=dvgy415 remote_dg_dgid=1138140445.1393.sunsrv01 remote_rvg_version=21 remote_rlink=rlk_pdpd415dr-vipvr_dvgy415_rvg remote_rlink_rid=0.2007 local_host=pdpd415dr-vipvr IP_addr=10.12.94.191 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected
Remove the DCM logs on the resized volumes. Run each command twice because each volume has two logs. (This is a different method from the one used in step 9.)
sunsrv02-dr:# vxassist -g dvgy415 remove log tp01
sunsrv02-dr:# vxassist -g dvgy415 remove log tp01
sunsrv02-dr:# vxassist -g dvgy415 remove log lg1
sunsrv02-dr:# vxassist -g dvgy415 remove log lg1
sunsrv02-dr:# vxassist -g dvgy415 remove log db
sunsrv02-dr:# vxassist -g dvgy415 remove log db
Recreate the DCM Logs on resized volumes.
sunsrv02-dr:# vxassist -g dvgy415 addlog bcp logtype=dcm nlog=2 dvgy415199 dvgy415200
sunsrv02-dr:# vxassist -g dvgy415 addlog db logtype=dcm nlog=2 dvgy415199 dvgy415200
sunsrv02-dr:# vxassist -g dvgy415 addlog dba logtype=dcm nlog=2 dvgy415199 dvgy415200
sunsrv02-dr:# vxassist -g dvgy415 addlog db2 logtype=dcm nlog=2 dvgy415199 dvgy415200
sunsrv02-dr:# vxassist -g dvgy415 addlog lg1 logtype=dcm nlog=2 dvgy415199 dvgy415200
sunsrv02-dr:# vxassist -g dvgy415 addlog tp01 logtype=dcm nlog=2 dvgy415199 dvgy415200
Display the new LOG sizes.
sunsrv02-dr:# vxprint -tg dvgy415 | grep " LOG "
sd dvgy415199-01 bcp-01 dvgy415199 0 512 LOG EMC0_256 ENA
sd dvgy415199-02 db-01 dvgy415199 576 512 LOG EMC0_256 ENA
sd dvgy415199-03 dba-01 dvgy415199 1536 512 LOG EMC0_256 ENA
sd dvgy415199-04 db2-01 dvgy415199 1088 132 LOG EMC0_256 ENA
sd dvgy415199-05 lg1-01 dvgy415199 2496 512 LOG EMC0_256 ENA
sd dvgy415199-06 tp01-01 dvgy415199 3456 512 LOG EMC0_256 ENA

sd dvgy415200-01 bcp-02 dvgy415200 0 512 LOG EMC0_257 ENA
sd dvgy415200-02 db-02 dvgy415200 576 512 LOG EMC0_257 ENA
sd dvgy415200-03 dba-02 dvgy415200 1536 512 LOG EMC0_257 ENA
sd dvgy415200-04 db2-02 dvgy415200 1088 132 LOG EMC0_257 ENA
sd dvgy415200-05 lg1-02 dvgy415200 2496 512 LOG EMC0_257 ENA
sd dvgy415200-06 tp01-02 dvgy415200 3456 512 LOG EMC0_257 ENA
Set the SRL protection back to autodcm
sunsrv02-dr:# vxedit -g dvgy415 set srlprot=autodcm rlk_pdpd415Primary-vipvr_dvgy415_rvg
Display the Rlink status
sunsrv02-dr:# vxprint -Pl
Disk group: dvgy415

Rlink: rlk_pdpd415Primary-vipvr_dvgy415_rvg
info: timeout=500 packet_size=8400 rid=0.2127 latency_high_mark=10000 latency_low_mark=9950 bandwidth_limit=none
state: state=ACTIVE synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=dvgy415_rvg remote_host=pdpd415Primary-vipvr IP_addr=10.11.196.191 port=4145 remote_dg=dvgy415 remote_dg_dgid=1138140445.1393.sunsrv01 remote_rvg_version=21 remote_rlink=rlk_pdpd415dr-vipvr_dvgy415_rvg remote_rlink_rid=0.2007 local_host=pdpd415dr-vipvr IP_addr=10.12.94.191 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected
16. Prepare the DR Volumes for Snapshot
First, display the volume information
sunsrv02-dr:# vxprint -ht bcp
Disk group: dvgy415

v bcp dvgy415_rvg ENABLED ACTIVE 585105408 SELECT - fsgen
pl bcp-01 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415199-01 bcp-01 dvgy415199 0 512 LOG EMC0_256 ENA
pl bcp-02 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415200-01 bcp-02 dvgy415200 0 512 LOG EMC0_257 ENA
pl bcp-03 bcp ENABLED ACTIVE 585108480 CONCAT - RW
sd dvgy415214-01 bcp-03 dvgy415214 5376 285473280 0 EMC0_219 ENA
sd dvgy415215-01 bcp-03 dvgy415215 5376 285473280 285473280 EMC0_220 ENA
sd dvgy415219-01 bcp-03 dvgy415219 5376 14161920 570946560 EMC0_224 ENA
Prepare the volume for snapshot by adding a DCO log. The same disks are used for the DCO and DCM logs.
sunsrv02-dr:# vxsnap -g dvgy415 prepare bcp ndcomirs=2 alloc=dvgy415199,dvgy415200
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
Display the volume information to see the changes. Note that the DCO volume has been added with two logs.
sunsrv02-dr:# vxprint -ht bcp
Disk group: dvgy415

v bcp dvgy415_rvg ENABLED ACTIVE 585105408 SELECT - fsgen
pl bcp-01 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415199-01 bcp-01 dvgy415199 0 512 LOG EMC0_256 ENA
pl bcp-02 bcp ENABLED ACTIVE LOGONLY CONCAT - RW
sd dvgy415200-01 bcp-02 dvgy415200 0 512 LOG EMC0_257 ENA
pl bcp-03 bcp ENABLED ACTIVE 585108480 CONCAT - RW
sd dvgy415214-01 bcp-03 dvgy415214 5376 285473280 0 EMC0_219 ENA
sd dvgy415215-01 bcp-03 dvgy415215 5376 285473280 285473280 EMC0_220 ENA
sd dvgy415219-01 bcp-03 dvgy415219 5376 14161920 570946560 EMC0_224 ENA
dc bcp_dco bcp bcp_dcl
v bcp_dcl - ENABLED ACTIVE 40368 SELECT - gen
pl bcp_dcl-01 bcp_dcl ENABLED ACTIVE 40368 CONCAT - RW
sd dvgy415199-07 bcp_dcl-01 dvgy415199 4416 40368 0 EMC0_256 ENA
pl bcp_dcl-02 bcp_dcl ENABLED ACTIVE 40368 CONCAT - RW
sd dvgy415200-07 bcp_dcl-02 dvgy415200 4416 40368 0 EMC0_257 ENA
Run the same steps on the rest of the volumes
sunsrv02-dr:# vxsnap -g dvgy415 prepare db ndcomirs=2 alloc=dvgy415199,dvgy415200
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g dvgy415 prepare dba ndcomirs=2 alloc=dvgy415199,dvgy415200
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g dvgy415 prepare db2 ndcomirs=2 alloc=dvgy415199,dvgy415200
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g dvgy415 prepare lg1 ndcomirs=2 alloc=dvgy415199,dvgy415200
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
sunsrv02-dr:# vxsnap -g dvgy415 prepare tp01 ndcomirs=2 alloc=dvgy415199,dvgy415200
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
17. Resize the SNAP Volumes in DR
If you do not need to preserve the data on the snapshot volumes, you can recreate them instead of resizing.
Resize the SNAP volumes using the same sizes as the replicated volumes
sunsrv02-dr:# /etc/vx/bin/vxresize -g dvgy415 db_snapvol 1361g dvgy415224 \
dvgy415225 dvgy415226 dvgy415227 dvgy415228
sunsrv02-dr:# /etc/vx/bin/vxresize -g dvgy415 lg1_snapvol +460054528 dvgy415231 \
dvgy415232
sunsrv02-dr:# /etc/vx/bin/vxresize -g dvgy415 tp01_snapvol +663474176 dvgy415235 \
dvgy415236 dvgy415237
sunsrv02-dr:# df -k | grep snapvol
/dev/vx/dsk/dvgy415/db_snapvol 1427111936 3555377 1334584344 1% /db/pdpd415/PEMMP00P/NODE0000
/dev/vx/dsk/dvgy415/lg1_snapvol 428208128 3078084 398559480 1% /db/pdpd415/log1
/dev/vx/dsk/dvgy415/tp01_snapvol 428206080 121489 401329375 1% /db/pdpd415/PEMMP00P/tempspace01/NODE0000
18. Prepare the SNAP Volumes in DR
Identify the region size of the main volumes. The region size must be the same for both the main volume and the snap
volume.
sunsrv02-dr:# for DCONAME in `vxprint -tg dvgy415 | grep "^dc" | awk '{ print $2 }'`
> do
> echo "$DCONAME\t\c"
> vxprint -g dvgy415 -F%regionsz $DCONAME
> done
bcp_dco 128
db_dco 128
dba_dco 128
db2_dco 128
lg1_dco 128
tp01_dco 128
Display the volume information
sunsrv02-dr:# vxprint -ht bcp_snapvol
Disk group: dvgy415

v bcp_snapvol - ENABLED ACTIVE 585105408 SELECT - fsgen
pl bcp_snapvol-02 bcp_snapvol ENABLED ACTIVE 585108480 CONCAT - RW
sd dvgy415233-01 bcp_snapvol-02 dvgy415233 5376 285473280 0 EMC0_238 ENA
sd dvgy415234-01 bcp_snapvol-02 dvgy415234 5376 285473280 285473280 EMC0_239 ENA
sd dvgy415238-01 bcp_snapvol-02 dvgy415238 5376 14161920 570946560 EMC0_243 ENA
Prepare the snapshot volume by adding a DCO log with the same region size as the main volume. The same disks are used for the DCO and DCM logs.
sunsrv02-dr:# vxsnap -g dvgy415 prepare bcp_snapvol ndcomirs=2 regionsize=128 \
alloc=dvgy415199,dvgy415200
Display the volume information to see the changes. Note that the DCO volume has been added with two logs.
sunsrv02-dr:# vxprint -ht bcp_snapvol
Disk group: dvgy415

v bcp_snapvol - ENABLED ACTIVE 585105408 SELECT - fsgen
pl bcp_snapvol-02 bcp_snapvol ENABLED ACTIVE 585108480 CONCAT - RW
sd dvgy415233-01 bcp_snapvol-02 dvgy415233 5376 285473280 0 EMC0_238 ENA
sd dvgy415234-01 bcp_snapvol-02 dvgy415234 5376 285473280 285473280 EMC0_239 ENA
sd dvgy415238-01 bcp_snapvol-02 dvgy415238 5376 14161920 570946560 EMC0_243 ENA
dc bcp_snapvol_dco bcp_snapvol bcp_snapvol_dcl
v bcp_snapvol_dcl - ENABLED ACTIVE 40368 SELECT - gen
pl bcp_snapvol_dcl-01 bcp_snapvol_dcl ENABLED ACTIVE 40368 CONCAT - RW
sd dvgy415199-13 bcp_snapvol_dcl-01 dvgy415199 370176 40368 0 EMC0_256 ENA
pl bcp_snapvol_dcl-02 bcp_snapvol_dcl ENABLED ACTIVE 40368 CONCAT - RW
sd dvgy415200-13 bcp_snapvol_dcl-02 dvgy415200 370176 40368 0 EMC0_257 ENA
Run the same steps on the rest of the volumes
sunsrv02-dr:# vxsnap -g dvgy415 prepare db_snapvol ndcomirs=2 regionsize=128 \
alloc=dvgy415199,dvgy415200
sunsrv02-dr:# vxsnap -g dvgy415 prepare dba_snapvol ndcomirs=2 regionsize=128 \
alloc=dvgy415199,dvgy415200
sunsrv02-dr:# vxsnap -g dvgy415 prepare db2_snapvol ndcomirs=2 regionsize=128 \
alloc=dvgy415199,dvgy415200
sunsrv02-dr:# vxsnap -g dvgy415 prepare lg1_snapvol ndcomirs=2 regionsize=128 \
alloc=dvgy415199,dvgy415200
sunsrv02-dr:# vxsnap -g dvgy415 prepare tp01_snapvol ndcomirs=2 regionsize=128 \
alloc=dvgy415199,dvgy415200
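The five prepare commands differ only in the volume name, so they can be driven from a loop. A minimal sketch (not part of the original procedure; VXSNAP defaults to echo so the loop dry-runs the commands for review before you point it at the real vxsnap):

```shell
#!/bin/sh
# Sketch: drive the repeated "vxsnap prepare" commands from a loop.
# Defaults to a dry run (echo); set VXSNAP=vxsnap to actually execute.
VXSNAP=${VXSNAP:-echo}
DG=dvgy415
DCO_DISKS=dvgy415199,dvgy415200   # disks holding the DCO/DCM logs in this setup

prepare_snapvols() {
    for vol in "$@"; do
        $VXSNAP -g "$DG" prepare "${vol}_snapvol" \
            ndcomirs=2 regionsize=128 alloc="$DCO_DISKS"
    done
}

prepare_snapvols db dba db2 lg1 tp01
```

Keeping the disk group, region size, and allocation disks in variables at the top means a future project only needs to edit three lines.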
Verify the region sizes are the same
sunsrv02-dr:# for DCONAME in `vxprint -tg dvgy415 | grep "^dc" | awk '{ print $2 }'`
> do
> echo "$DCONAME\t\c"
> vxprint -g dvgy415 -F%regionsz $DCONAME
> done
bcp_dco 128
bcp_snapvol_dco 128
db_dco 128
db_snapvol_dco 128
dba_dco 128
dba_snapvol_dco 128
db2_dco 128
db2_snapvol_dco 128
lg1_dco 128
lg1_snapvol_dco 128
tp01_dco 128
tp01_snapvol_dco 128
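Instead of eyeballing the pairs, the check can be scripted. A hypothetical helper (not in the original document) that reads the "dconame size" pairs from the loop above on stdin and exits non-zero if any *_snapvol_dco region size differs from its base *_dco:

```shell
#!/bin/sh
# Sketch: fail if any snapshot DCO region size differs from its base DCO.
# Reads "dconame size" pairs on stdin, e.g. piped from the vxprint loop.
check_regionsz() {
    awk '
        { sz[$1] = $2 }
        END {
            bad = 0
            for (n in sz) {
                if (n ~ /_snapvol_dco$/) {
                    base = n
                    sub(/_snapvol_dco$/, "_dco", base)
                    if (sz[n] != sz[base]) { print "MISMATCH: " n; bad = 1 }
                }
            }
            exit bad
        }'
}
```

Pipe the verification loop into it, e.g. `... | check_regionsz && echo "region sizes OK"`.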
19. Run a point-in-time snapshot in DR (this will overwrite the previous data on the snap volumes)
Verify the RLINK and RVG are active and up to date
sunsrv01:# vxrlink -g dvgy415 status rlk_pdpd415dr-vipvr_dvgy415_rvg
Mon Dec 17 17:40:19 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_pdpd415dr-vipvr_dvgy415_rvg is up to date
sunsrv02-dr:# vxprint -Pl
Disk group: dvgy415

Rlink: rlk_pdpd415Primary-vipvr_dvgy415_rvg
info: timeout=500 packet_size=8400 rid=0.2127
      latency_high_mark=10000 latency_low_mark=9950
      bandwidth_limit=none
state: state=ACTIVE
      synchronous=off latencyprot=off srlprot=autodcm
assoc: rvg=dvgy415_rvg
      remote_host=pdpd415Primary-vipvr IP_addr=10.11.196.191 port=4145
      remote_dg=dvgy415
      remote_dg_dgid=1138140445.1393.sunsrv01
      remote_rvg_version=21
      remote_rlink=rlk_pdpd415dr-vipvr_dvgy415_rvg
      remote_rlink_rid=0.2007
      local_host=pdpd415dr-vipvr IP_addr=10.12.94.191 port=4145
protocol: UDP/IP
flags: write enabled attached consistent connected
Start the Snap Copy
sunsrv02-dr:# vxsnap -g dvgy415 make source=bcp/snapvol=bcp_snapvol \
source=db/snapvol=db_snapvol source=dba/snapvol=dba_snapvol \
source=db2/snapvol=db2_snapvol source=lg1/snapvol=lg1_snapvol \
source=tp01/snapvol=tp01_snapvol
Display the sync status
sunsrv02-dr:# vxtask list
TASKID PTID TYPE/STATE PCT PROGRESS
286 SNAPSYNC/R 00.02% 0/585105408/118784 SNAPSYNC bcp_snapvol dvgy415
288 SNAPSYNC/R 00.00% 0/2854223872/76160 SNAPSYNC db_snapvol dvgy415
290 SNAPSYNC/R 00.13% 0/98566144/126976 SNAPSYNC dba_snapvol dvgy415
292 SNAPSYNC/R 01.46% 0/8388608/122624 SNAPSYNC db2_snapvol dvgy415
294 SNAPSYNC/R 00.01% 0/856416256/108544 SNAPSYNC lg1_snapvol dvgy415
296 SNAPSYNC/R 00.01% 0/856412160/122880 SNAPSYNC tp01_snapvol dvgy415
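Rather than re-running vxtask list by hand until the SNAPSYNC tasks disappear, the wait can be scripted. A sketch (my own helper, not from the original document; VXTASK and the poll interval are overridable so the loop can be exercised without VxVM):

```shell
#!/bin/sh
# Sketch: block until "vxtask list" reports no running SNAPSYNC tasks.
# VXTASK/POLL are overridable for testing outside a VxVM host.
VXTASK=${VXTASK:-vxtask}
POLL=${POLL:-60}   # seconds between polls

wait_for_snapsync() {
    while $VXTASK list | grep SNAPSYNC >/dev/null; do
        sleep "$POLL"
    done
    echo "all SNAPSYNC tasks finished"
}
```

Useful when the next step (bringing the snapvol service group online) must not start until the copy is complete.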
sunsrv02-dr:# vxsnap -g dvgy415 print
NAME SNAPOBJECT TYPE PARENT SNAPSHOT %DIRTY %VALID
bcp -- volume -- -- -- 100.00
  bcp_snapvol_snp volume -- bcp_snapvol 0.00 --
db -- volume -- -- -- 100.00
  db_snapvol_snp volume -- db_snapvol 0.00 --
dba -- volume -- -- -- 100.00
  dba_snapvol_snp volume -- dba_snapvol 0.00 --
db2 -- volume -- -- -- 100.00
  db2_snapvol_snp volume -- db2_snapvol 0.00 --
lg1 -- volume -- -- -- 100.00
  lg1_snapvol_snp volume -- lg1_snapvol 0.00 --
tp01 -- volume -- -- -- 100.00
  tp01_snapvol_snp volume -- tp01_snapvol 0.00 --
bcp_snapvol bcp_snp volume bcp -- 0.00 0.11
db_snapvol db_snp volume db -- 0.00 0.02
dba_snapvol dba_snp volume dba -- 0.00 0.77
db2_snapvol db2_snp volume db2 -- 0.00 9.13
lg1_snapvol lg1_snp volume lg1 -- 0.00 0.09
tp01_snapvol tp01_snp volume tp01 -- 0.00 0.10
When the sync is complete, bring the snapvol service group online in VCS to verify the changes
sunsrv02-dr:# vxtask list
TASKID PTID TYPE/STATE PCT PROGRESS
sunsrv02-dr:# hastatus -sum

-- SYSTEM STATE
-- System State Frozen

A sunsrv01-dr RUNNING 0
A sunsrv02-dr RUNNING 0

-- GROUP STATE
-- Group System Probed AutoDisabled State

B CCAvail sunsrv01-dr Y N ONLINE
B CCAvail sunsrv02-dr Y N OFFLINE
B pdpd415_grp sunsrv01-dr Y N OFFLINE
B pdpd415_grp sunsrv02-dr Y N OFFLINE
B pdpd415_vvr sunsrv01-dr Y N OFFLINE
B pdpd415_vvr sunsrv02-dr Y N ONLINE
B snap_pdpd415_grp sunsrv01-dr Y N OFFLINE
B snap_pdpd415_grp sunsrv02-dr Y N OFFLINE
sunsrv02-dr:# hagrp -online snap_pdpd415_grp -sys sunsrv02-dr
sunsrv02-dr:# hastatus -sum

-- SYSTEM STATE
-- System State Frozen

A sunsrv01-dr RUNNING 0
A sunsrv02-dr RUNNING 0

-- GROUP STATE
-- Group System Probed AutoDisabled State

B CCAvail sunsrv01-dr Y N ONLINE
B CCAvail sunsrv02-dr Y N OFFLINE
B pdpd415_grp sunsrv01-dr Y N OFFLINE
B pdpd415_grp sunsrv02-dr Y N OFFLINE
B pdpd415_vvr sunsrv01-dr Y N OFFLINE
B pdpd415_vvr sunsrv02-dr Y N ONLINE
B snap_pdpd415_grp sunsrv01-dr Y N OFFLINE
B snap_pdpd415_grp sunsrv02-dr Y N ONLINE
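If this step is scripted, the online can be confirmed by polling the group state instead of reading hastatus output. A sketch (my own helper; HAGRP is overridable so the loop can be tested without VCS):

```shell
#!/bin/sh
# Sketch: poll VCS until a service group reports ONLINE on a given system.
# HAGRP is overridable for testing outside a VCS host.
HAGRP=${HAGRP:-hagrp}

wait_group_online() {
    group=$1; sys=$2
    while :; do
        state=$($HAGRP -state "$group" -sys "$sys")
        case "$state" in *ONLINE*) break ;; esac
        sleep 10
    done
    echo "$group is ONLINE on $sys"
}
```

For example: `wait_group_online snap_pdpd415_grp sunsrv02-dr` after the hagrp -online above.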
Verify the filesystems and compare the sizes with the primary site
sunsrv02-dr:# df -k | grep snapvol
/dev/vx/dsk/dvgy415/bcp_snapvol 292552704 2059883 272337156 1% /bcp/pdpd415
/dev/vx/dsk/dvgy415/db_snapvol 1427111936 30338312 1309475336 3% /db/pdpd415/PEMMP00P/NODE0000
/dev/vx/dsk/dvgy415/db2_snapvol 4194304 79861 3857337 3% /db2/pdpd415
/dev/vx/dsk/dvgy415/dba_snapvol 49283072 6830907 39799049 15% /dba/pdpd415
/dev/vx/dsk/dvgy415/lg1_snapvol 428208128 3171480 398471922 1% /db/pdpd415/log1
/dev/vx/dsk/dvgy415/tp01_snapvol 428206080 121489 401329375 1% /db/pdpd415/PEMMP00P/tempspace01/NODE0000
sunsrv01:# df -k | grep dvgy415 | sort
/dev/vx/dsk/dvgy415/bcp 292552704 2059883 272337156 1% /bcp/pdpd415
/dev/vx/dsk/dvgy415/db 1427111936 30337038 1309476520 3% /db/pdpd415/PEMMP00P/NODE0000
/dev/vx/dsk/dvgy415/db2 4194304 79401 3857768 3% /db2/pdpd415
/dev/vx/dsk/dvgy415/dba 49283072 6830907 39799049 15% /dba/pdpd415
/dev/vx/dsk/dvgy415/lg1 428208128 3183012 398461110 1% /db/pdpd415/log1
/dev/vx/dsk/dvgy415/tp01 428206080 121489 401329375 1% /db/pdpd415/PEMMP00P/tempspace01/NODE0000
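The side-by-side comparison above can also be automated: the filesystem totals should match exactly between the primary volumes and their DR snap volumes, while used blocks will drift slightly on the live database. A hypothetical helper (not from the original document) that joins two df -k captures on mount point and flags any size difference:

```shell
#!/bin/sh
# Sketch: compare filesystem totals (column 2 of df -k) between two captures,
# joined on mount point (last column). Exits non-zero on any size mismatch.
compare_df() {
    awk '
        NR == FNR { total[$NF] = $2; next }    # first file: primary site
        {                                       # second file: DR snap volumes
            if ($NF in total && total[$NF] != $2) {
                print "SIZE DIFFERS: " $NF
                bad = 1
            }
        }
        END { exit bad }
    ' "$1" "$2"
}
```

Usage would be to save `df -k | grep dvgy415` from the primary and `df -k | grep snapvol` from DR to files, then `compare_df primary.df dr.df && echo "sizes match"`.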