Tommy,

This should not be a surprise. The name "Synchronous Remote Copy" implies the overhead that you are seeing, namely the time for the synchronous write to the remote site.
PPRC can more than double the response time of random writes because the host write to cache incurs the additional time of controller latency, round-trip delay, and block transfer before the write is complete. On IBM and HDS (not sure about EMC) the impact is greater for single blocks, as chained sequential writes have some overlap between the host write and the synchronous write.

Some things to check:

1) Buffer credits on the ISLs between the sites. If there are no ISLs, then check the settings on the storage host ports to cater for 30km of B2B credits.
2) Channel speed step-down. If your FICON channels are 8Gb and the FCP connections are 2Gb, then PPRC writes will take up to four times longer to transfer. It depends on the block size.
3) Unbalanced ISLs. ISLs do not automatically rebalance after one drops. The more concurrent I/O there is on an ISL, the longer the transfer time for each PPRC write. There may be one or more ISLs that are not being used while others are overloaded.
4) Switch board connections not optimal - talk to your switch vendor.
5) Host adapter port connections not optimal - talk to your storage vendor.
6) Sysplex tuning may identify I/O that can be converted from disk to Sysplex caching. Not my expertise, but I'm sure there are some Redbooks.

There is good information on PPRC activity in the RMF Type 78 records. You may want to do some analysis of these to see how transfer rates and PPRC write response time correlate with your DASD disconnect time.

Final comment: do you really need synchronous remote copy? If your company requires zero data loss, then you don't get it from synchronous replication alone; you must use the Critical=Yes option, which has its own set of risks and challenges. And if you are not using GDPS and HyperSwap for hot failover, then synchronous is not much better than asynchronous.
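To put rough numbers on the round-trip and speed step-down points, here is a back-of-envelope sketch. The ~5 microseconds/km figure for light in fibre and the half-track block size are my illustrative assumptions, not measured values:

```python
# Back-of-envelope PPRC synchronous write overhead.
# Illustrative figures only, not vendor-measured values.

KM = 30                 # site separation (from this thread)
FIBER_US_PER_KM = 5.0   # ~5 microseconds per km in glass (c divided by ~1.5)

def propagation_us(km):
    """Round-trip light time over the fibre, in microseconds."""
    return 2 * km * FIBER_US_PER_KM

def transfer_us(block_bytes, gbps):
    """Time to clock one block over a link, in microseconds.
    Treats Gbps as raw line rate; encoding and framing overhead
    would make the real payload rate somewhat lower."""
    return block_bytes * 8 / (gbps * 1000)  # bits / (bits per microsecond)

block = 27_000  # assumed half-track-sized block, purely for illustration
print(f"round trip:        {propagation_us(KM):7.1f} us")   # 300.0 us
print(f"transfer @ 8 Gbps: {transfer_us(block, 8):7.1f} us")  #  27.0 us
print(f"transfer @ 2 Gbps: {transfer_us(block, 2):7.1f} us")  # 108.0 us
```

On these assumptions the 30km round trip alone adds roughly 300 microseconds to every synchronous write, and the 2Gb leg quadruples the per-block transfer time relative to 8Gb, consistent with the "up to four times longer" step-down point above.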
Rolling disasters, transaction rollback, and options that turn off in-flight data set recovery can all leave synchronous replication with the same effective RPO as asynchronous.

Ron

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Tommy Tsui
Sent: Thursday, February 15, 2018 12:41 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: [IBM-MAIN] DASD problem

Hi,

The distance is around 30km. Do you know of any settings in the sysplex environment, such as GRS and the JES2 checkpoint, that we need to be aware of? Direct DASD via SAN switch to the DR site, 2Gbps interface. We checked with the vendor; they didn't find any problem on the SAN switch or DASD. I suspect the system settings.

Alan(GMAIL)Watthey <a.watt...@gmail.com> wrote on Thursday, 15 February 2018:
> Tommy,
>
> This sounds like the PPRC links might be a bit slow or there are not
> enough of them.
>
> What do you have? Direct DASD to DASD, or via a single SAN switch, or
> even cascaded? What settings (Gbps) are all the interfaces running at
> (you can ask the switch for the switch, and RMF for the DASD)?
>
> What type of fibre are they? LX or SX? What kind of length are they?
>
> Any queueing?
>
> There are so many variables that can affect the latency. Are there
> any of the above that you can improve on?
>
> I can't remember what IBM recommends, but 80% sounds a little high to me.
> They are only used for writes (not reads).
>
> Regards,
> Alan Watthey
>
> -----Original Message-----
> From: Tommy Tsui [mailto:tommyt...@gmail.com]
> Sent: 15 February 2018 12:15 am
> Subject: DASD problem
>
> Hi all,
>
> Our shop found that most jobs' elapsed times are prolonged by PPRC
> synchronization; it's almost 4 times faster without PPRC
> synchronization. Are there any parameters we need to tune on the
> z/OS or disk subsystem side? We found the % disk util in the RMF
> report is over 80. Any help will be appreciated.
> Many thanks
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions, send
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN