Abstract for DFSMS Advanced Copy Services. This information supports z/OS® and describes Data Facility Storage Management Subsystem (DFSMS) Advanced Copy Services, which comprises the following functions and enhancements. Mirror functions provide a consistent point-in-time copy of data at the recovery site. The purpose of this publication is to help you understand and use IBM Advanced Copy Services functions.


This three-site mirroring solution is often simply called MzGM. High availability requirements have become as important as the need to recover quickly after a disaster affecting a data processing center and its services.

Over the years, IT service providers, whether internal or external to a company, have exploited the capabilities of new developments in disk storage subsystems. Solutions have evolved from a two-site approach to include a third site, thus transforming the relationship between the primary and secondary site into a high availability configuration. All Copy Services functions are handled within the DS firmware.

Conceptually, MzGM is a multiple target configuration with two copy relationships off the same primary or source device. The benefits of zGM compared to GM include less disk space (zGM requires only a single copy of the data at the remote site, compared to two copies with GM) and greater scalability (zGM can handle much larger configurations than GM).

Volume B is the cascaded volume, and it has the following two roles: volume B is the secondary or target volume for volume A, and at the same time, volume B is also the primary or source volume for volume C. Combining Metro Mirror with Global Mirror in this way places the middle volume in a configuration that is known as cascaded.
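The dual role of volume B can be modeled with a small sketch. The class and field names here are illustrative only, not an actual DFSMS or DS8000 interface:

```python
# Hypothetical model of a cascaded Metro Mirror / Global Mirror topology.
# Volume B is simultaneously the target of one relationship and the
# source of the next.

class Volume:
    def __init__(self, name):
        self.name = name
        self.source_of = None   # relationship in which this volume is primary
        self.target_of = None   # relationship in which this volume is secondary

class Relationship:
    def __init__(self, kind, primary, secondary):
        self.kind = kind        # "MetroMirror" (sync) or "GlobalMirror" (async)
        self.primary = primary
        self.secondary = secondary
        primary.source_of = self
        secondary.target_of = self

a, b, c = Volume("A"), Volume("B"), Volume("C")
mm = Relationship("MetroMirror", a, b)    # A -> B, synchronous
gm = Relationship("GlobalMirror", b, c)   # B -> C, asynchronous

# Volume B plays both roles in the cascade:
assert b.target_of is mm and b.source_of is gm
```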

During normal operations, no software is involved in driving this configuration. This situation applies especially to the Global Mirror relationship, where the firmware autonomically manages the formation of consistency groups and thus guarantees consistent data volumes at all times at the remote site.

MzGM is a three-site multiple target configuration in a System z environment. Figure 2 shows an MzGM configuration. A is the active application volume with the following two Copy Services relationships: a Metro Mirror relationship between A and B, in which volumes A and B are in a synchronous copy relationship and are the basis for potentially swapping both devices through HyperSwap.

Using the Advanced Copy Services enhancements

The journal or D volumes are used by SDM only. The number of D volumes depends on the number of zGM volume pairs and the amount of data that is replicated from A to C, because all replicated data is written twice in the remote location: first as writes to the SDM journal data sets on these D volumes, and then to the secondary volumes. We recommend that you dedicate the D volumes to SDM journals only. You must ensure that sufficient alias devices are available.
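The double-write behavior at the remote site can be sketched as follows. This is an illustrative model, not the real SDM implementation: each consistency group is first hardened in the journal before it is applied to the secondary volumes:

```python
# Illustrative sketch of why remote capacity is written twice in zGM:
# every consistency group goes to the journal (D volumes) first, and is
# only then applied to the secondary (C volumes).

journal = []      # stands in for the SDM journal data sets on the D volumes
secondary = {}    # stands in for the C volumes

def replicate(consistency_group):
    journal.append(consistency_group)       # write 1: harden in the journal
    for track, data in consistency_group:
        secondary[track] = data             # write 2: apply to the C volumes

replicate([("trk1", b"aaa"), ("trk2", b"bbb")])
replicate([("trk1", b"ccc")])

assert secondary == {"trk1": b"ccc", "trk2": b"bbb"}
assert len(journal) == 2   # both groups were journaled before being applied
```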

CONTENTS Table of Contents

Allocate journals as striped data sets across two devices. Apply all writes on the secondary in the same order as they occur on the primary volumes. Thus, you must create a new zGM session and perform a full initial copy between volume A and volume C.

Several HyperSwap interface alternatives are available. The first of these was intended to provide continuous application availability for planned swaps. Among other state and condition information, the UCB also contains a link to the path or group of paths needed to connect to the device itself. This quiescence operation is performed by the user. The swap operation first triggers a validation process to determine whether the primary and secondary PPRC (Metro Mirror) volumes are in a state that guarantees that the actual swap is going to be successful and does not cause any integrity exposure to the data volumes.
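The validate-then-swap logic can be sketched as follows. This is a hypothetical model, not the actual IOS code: every pair must be fully synchronized (DUPLEX) before the path pointers are exchanged, and a single non-duplex pair vetoes the whole swap:

```python
# Hedged sketch of HyperSwap validation: refuse the swap unless every
# Metro Mirror pair is in DUPLEX state; otherwise swap the path pointers
# so that I/O is redirected to the former secondary devices.

def validate_and_swap(pairs):
    """pairs: dicts with 'state', 'ucb' (primary path), 'alt' (secondary path)."""
    if any(p["state"] != "DUPLEX" for p in pairs):
        return False                              # integrity exposure: no swap
    for p in pairs:
        p["ucb"], p["alt"] = p["alt"], p["ucb"]   # redirect I/O to the secondary
    return True

pairs = [{"state": "DUPLEX", "ucb": "pathA1", "alt": "pathB1"},
         {"state": "DUPLEX", "ucb": "pathA2", "alt": "pathB2"}]
assert validate_and_swap(pairs) is True
assert pairs[0]["ucb"] == "pathB1"

bad = [{"state": "PENDING", "ucb": "pathA1", "alt": "pathB1"}]
assert validate_and_swap(bad) is False
assert bad[0]["ucb"] == "pathA1"   # left untouched when validation fails
```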


In our example in Figure 4 on page 8, the reverse direction runs from the secondary device back to the primary device. We assume that the PPRC path exists in either direction, which is required for the swap operation to succeed.

The next operation is a PPRC suspend of the volume pair. Changed tracks are only masked in this bitmap and are not copied. An even more severe problem was the time the process itself required to complete. We call it Basic because TPC-R does not provide anything beyond a planned swap trigger and avoids the entire swap process once a HyperSwap is triggered in an unplanned fashion.
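The suspend-time bitmap behavior can be illustrated with a sketch. This is not DS firmware code; it only models the idea that, while suspended, writes merely flag the changed track, and a later resynchronization copies just the flagged tracks:

```python
# Illustrative model of the out-of-sync bitmap kept for a suspended pair:
# host writes set one flag per changed track and move no data; resync
# then copies only the flagged tracks to the secondary.

primary, secondary = {}, {}
changed = set()        # the bitmap: one flag per changed track
suspended = True

def host_write(track, data):
    primary[track] = data
    if suspended:
        changed.add(track)      # mask the track in the bitmap; copy nothing

def resync():
    global suspended
    for track in sorted(changed):
        secondary[track] = primary[track]   # copy only the flagged tracks
    changed.clear()
    suspended = False

host_write(7, b"x")
host_write(9, b"y")
assert changed == {7, 9} and secondary == {}   # suspended: nothing copied yet
resync()
assert secondary == {7: b"x", 9: b"y"} and not changed
```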

In contrast, GDPS provides a controlling supervisor through its controlling LPARs, which manage not only the result of a device swap operation but also offer complete Parallel Sysplex server management for planned and unplanned events. Basic HyperSwap is not a disaster recovery function.

The process also includes adding new volumes or triggering a planned HyperSwap. TPC-R manages the following two routes to the disk storage subsystems: An Ethernet route to the disk storage subsystem. Figure 8 shows only components that are relevant to HyperSwap. HyperSwaps can occur in a planned or an unplanned fashion. IOS then handles and manages all subsequent activities following the swap of all device pairs defined within the TPC-R HyperSwap session, which includes a freeze and failover in the context of Metro Mirror management.
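The order of operations that IOS drives after a swap trigger can be summarized in a sketch. The step names here are ours for illustration; the real processing is internal to IOS and the DS firmware:

```python
# Hedged sketch of the IOS-driven HyperSwap sequence: quiesce I/O,
# fail over the Metro Mirror pairs, swap the device pairs, resume I/O.

events = []

def freeze_io():         events.append("freeze")    # quiesce I/O to all pairs
def metro_failover():    events.append("failover")  # Metro Mirror failover
def swap_device_pairs(): events.append("swap")      # exchange UCB contents
def resume_io():         events.append("resume")    # I/O resumes on ex-secondaries

def hyperswap():
    freeze_io()
    metro_failover()
    swap_device_pairs()
    resume_io()

hyperswap()
assert events == ["freeze", "failover", "swap", "resume"]
```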

IOS performs the actual swap activity. In the present discussion, we are still referring to Basic HyperSwap, even when describing the process of incrementally swapping back to the original configuration.

In this context, we recommend that you configure the implicated disk storage subsystems to be as identical as possible in regard to base volumes and alias volumes. Following this recommendation simplifies the handling of the configuration.

Figure 9 shows a schematic view of a GDPS-controlled configuration.


In Figure 9, this system is SY1P. In addition to handling all Copy Services functions, it also manages all server-related aspects, including handling couple data sets during a planned or unplanned site switch. IOS handles the swap activities and provides some error recovery when the swap cannot successfully finish. TPC-R does not provide automation procedures that can be activated through certain events.

However, this table is not a complete list of all the relevant points of comparison. The list is derived from observation and analysis over time of situations without HyperSwap that had the potential to cause Parallel Sysplex outages or that actually caused an outage.

As the name implies, it consists of a Metro Mirror relationship and a zGM relationship. Both relationships copy from the same primary or source volume but to different secondary or target volumes. The HyperSwap function is made possible through the Metro Mirror relationship. At the same time, volume A is also primary to the zGM session, with volume C as the corresponding zGM secondary volume. In this discussion, we leave open whether SY10 and SY11 are in the same or in different sites and whether Figure 10 depicts a single or multiple site workload configuration.

In this configuration, the following differences exist between these two management approaches: the distance between the metropolitan sites and the remote site is not limited when replicating volumes through zGM. This contributes to high availability, and even to continuous availability when performing a HyperSwap in a planned fashion for whatever reason, and there are plenty of reasons.

The numbers in the figure correspond to the following steps. A first step is most likely the HyperSwap operation itself. During that time, in the course of reading changed records out of disk subsystem A, SDM cannot access the A volumes due to the freeze operation when the HyperSwap operation starts. As a result, zGM stalls and no longer replicates data. The current volume set in C still represents a consistent set of volumes. Consequently, a new zGM session has to be created between B and C.


Shifting the connectivity usually is not very difficult, because a FICON-switched infrastructure might exist and the required connectivity might already be configured.

Before starting a new full initial copy between B and C, which is required, you might want to consider first saving a consistent set of volumes in D and then starting the initial copy between B and C. This initial copy might run for several hours, and if any failure occurs with the B volumes before the initial copy is completed and with all zGM volume pairs in DUPLEX state, the remote site still has consistent source data volumes in D to restart from. The full initial copy after a volume A failure is the current drawback of the MzGM three-site configuration.

High availability is achieved through HyperSwap between the A and B volumes. However, this is a drawback for the zGM session, because the zGM session does not incrementally follow the HyperSwap, which automatically shifts the primary volumes from A to B. However, the failure of volume B disables the HyperSwap function. A failure at the remote site, volume C, also does not affect the application A volumes. Depending on what happens to the disk storage subsystems in C, an incremental resync of the zGM session can be handled through the hardware bitmaps maintained in the zGM primary disk subsystem for each A volume.

Figure 13 is almost the same as Figure 10 on page 18; the only difference is the IP-based connectivity between the two GDPS instances that execute in this environment. The communication between the channel extenders can be any Telco protocol and does not have to be IP based, as shown in the figure. The scenario shown in the figure applies to an unplanned HyperSwap as well as to a planned HyperSwap between volumes A and B.

The SDM reader tasks resume the process, at this point reading record sets from the B storage subsystems, but only those record sets accumulated from the time of the HyperSwap until SDM was reconnected to the B volumes.
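Because zGM record sets carry timestamps, this filtering can be sketched as follows. The names here are illustrative, not SDM internals:

```python
# Illustrative model of the resumed SDM reader tasks: after reconnecting
# to the B subsystems, only record sets time-stamped after the HyperSwap
# are processed, so nothing replicated before the swap is read again.

hyperswap_time = 100
# (timestamp, payload) pairs accumulated in the B subsystem
record_sets = [(90, "rs1"), (105, "rs2"), (120, "rs3")]

resumed = [payload for ts, payload in record_sets if ts > hyperswap_time]
assert resumed == ["rs2", "rs3"]   # rs1 predates the swap and is skipped
```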

However, HyperSwap is disabled because the Metro Mirror secondary volumes, the B volumes, are no longer available for a potential HyperSwap. The zGM session suspends. Later, the session between A and C can be resynchronized through the zGM hardware bitmaps that are maintained in the primary disk subsystem for each zGM suspended volume. Note that SDM might already have suspended its session and stopped reading from disk storage subsystem A.

The next step is an SDM internal swap operation to swap its primary volumes from volume A to volume B. This command starts the actual resynchronization. All the Metro Mirror volumes have to match all the volumes within the zGM session. RMZ Resync offers prompt restart capabilities at the remote site when the primary site totally fails.

He holds a degree in economics from the University of Heidelberg and in mechanical engineering from FH Heilbronn. He has written many IBM Redbooks and has developed and taught technical workshops. He holds a degree in electrical engineering. Thanks to the following people for their contributions to this project:

IBM may not offer the products, services, or features discussed in this document in other countries.

Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.