We use HDS storage, not IBM; we have reported this case to IBM. Performing the
same operation on a monoplex LPAR takes around 7 minutes to write 28 GB of data
using the IEBDG utility, but takes 12 minutes on a sysplex LPAR with the same
DASD. The only anomaly we can find is high disconnect time in the RMF report.

Ron hawkins <ronjhawk...@sbcglobal.net> wrote on Wednesday, 21 February 2018:

> Tommy,
>
> The RTD at 30km is quite small, and the benefit of write spoofing will be
> small.
>
> There is an option to turn on write spoofing with the FCP PPRC links on
> IBM storage, but you should check with them whether it is a benefit at small
> distances on your model of storage at all write rates.
>
> Ron
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Tommy Tsui
> Sent: Tuesday, February 20, 2018 8:22 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: [IBM-MAIN] DASD problem
>
> Is there any way to improve the PPRC command latency and round-trip delay
> time?
> Is there anything we can tune on the DASD hardware or switch side?
> Anything we can tune on the z/OS side, such as BUFNO?
>
Rob Schramm <rob.schr...@gmail.com> wrote on Wednesday, 21 February 2018:
>
> > It used to be 20 or 25 buffers to hit the I/O sweet spot.  Maybe
> > with faster DASD the number is different.
> >
> > Rob
> >
> > On Tue, Feb 20, 2018, 7:53 PM Tommy Tsui <tommyt...@gmail.com> wrote:
> >
> > > Hi Ron,
> > > You are right. When I changed BUFNO to 255, the overall elapsed time
> > > dropped from 12 minutes to 6 minutes. So what should I do now? Change
> > > BUFNO only? What about VSAM or DB2 performance?
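> > >
> > > For reference, the change was just the BUFNO subparameter in the DCB
> > > on the output DD; a minimal sketch (data set name and space values
> > > are placeholders):
> > >
> > > //SYSUT2   DD DSN=HLQ.OUTPUT.DATA,DISP=(NEW,CATLG),UNIT=SYSDA,
> > > //            SPACE=(CYL,(1000,100)),
> > > //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920,BUFNO=255)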
> > >
> > >
> > > Ron hawkins <ronjhawk...@sbcglobal.net> wrote on Wednesday, 21 February 2018:
> > >
> > > > Tommy,
> > > >
> > > > With PPRC, TrueCopy, or SRDF synchronous, the FICON and FCP speeds
> > > > are independent of one another, but the stepped-down speed elongates
> > > > the remote I/O.
> > > >
> > > > In simple terms, a block that you write from the host to the P-VOL
> > > > takes 0.5 ms to transfer on 16Gb FICON, but when you then do the
> > > > synchronous write to the S-VOL on 2Gb FCP it takes 4 ms, or 8 times
> > > > longer to transfer. This time is in addition to command latency and
> > > > round-trip delay time. As described below, this impact will be less
> > > > for long, chained writes because of the host/PPRC overlap.
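> > > >
> > > > To make the ratio explicit (same numbers as above): transfer time
> > > > for a given block scales inversely with link speed, so
> > > >
> > > >     t(2Gb FCP) = t(16Gb FICON) x (16 / 2) = 0.5 ms x 8 = 4 ms
> > > >
> > > > which is where the "8 times longer" comes from.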
> > > >
> > > > I'm not sure how you simulate this on your monoplex, but I assume
> > > > you set up a PPRC pair to the remote site. If you are testing with
> > > > BSAM or QSAM (like OLDGENER), then set SYSUT2 BUFNO=1 to see the
> > > > single-block impact. If you are using zHPF, I think you can vary
> > > > the BUFNO or NCP to get up to 255 chained blocks.
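> > > >
> > > > A minimal sketch of that single-block test with IEBGENER (data set
> > > > names and space values are placeholders; raise BUFNO toward 255 to
> > > > see the chained-write overlap):
> > > >
> > > > //COPYTEST EXEC PGM=IEBGENER
> > > > //SYSPRINT DD SYSOUT=*
> > > > //SYSIN    DD DUMMY
> > > > //SYSUT1   DD DSN=HLQ.INPUT.DATA,DISP=SHR
> > > > //SYSUT2   DD DSN=HLQ.OUTPUT.DATA,DISP=(NEW,CATLG),UNIT=SYSDA,
> > > > //            SPACE=(CYL,(1000,100)),
> > > > //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920,BUFNO=1)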
> > > >
> > > > I'm not aware of anything in GRS that adds to remote I/O disconnect
> > > > time.
> > > >
> > > > Ron
> > > >
> > > > -----Original Message-----
> > > > From: IBM Mainframe Discussion List
> > > > [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> > On
> > > > Behalf Of Tommy Tsui
> > > > Sent: Tuesday, February 20, 2018 2:42 AM
> > > > To: IBM-MAIN@LISTSERV.UA.EDU
> > > > Subject: Re: [IBM-MAIN] DASD problem
> > > >
> > > > Hi Ron,
> > > > What happens if our FICON card is 16Gb and the FCP connection is
> > > > 2Gb? I tried the simulation on a monoplex LPAR and the result is
> > > > fine. Now we suspect GRS or some other system parameter is
> > > > increasing the disconnect time.
> > > >
> > > > Ron hawkins <ronjhawk...@sbcglobal.net> wrote on Thursday, 15 February 2018:
> > > >
> > > > > Tommy,
> > > > >
> > > > > This should not be a surprise. The name "Synchronous Remote Copy"
> > > > > implies the overhead that you are seeing, namely the time for
> > > > > the synchronous write to the remote site.
> > > > >
> > > > > PPRC will more than double the response time of random writes
> > > > > because the host write to cache has the additional time of
> > > > > controller latency, round-trip delay, and block transfer before
> > > > > the write is complete. On IBM and HDS (not sure about EMC) the
> > > > > impact is greater for single blocks, as chained sequential writes
> > > > > have some overlap between the host write and the synchronous
> > > > > write.
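> > > > >
> > > > > As a back-of-envelope model (my sketch, not vendor math):
> > > > >
> > > > >     sync write response ~= local write + controller latency
> > > > >                            + round-trip delay + remote transfer
> > > > >
> > > > > and at 30 km the round-trip delay alone is about 0.3 ms per
> > > > > protocol exchange (light in fibre covers roughly 5 us per km,
> > > > > so a 60 km round trip is ~300 us).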
> > > > >
> > > > > Some things to check:
> > > > >
> > > > > 1) Buffer credits on the ISLs between the sites. If there are no
> > > > > ISLs, then settings on the storage host ports to cater for 30 km
> > > > > of B2B credits (see the worked estimate after this list).
> > > > > 2) Channel speed step-down. If your FICON channels are 8Gb and
> > > > > the FCP connections are 2Gb, then PPRC writes will take up to
> > > > > four times longer to transfer. It depends on the block size.
> > > > > 3) Unbalanced ISLs. ISLs do not automatically rebalance after one
> > > > > drops. The more concurrent I/O there is on an ISL, the longer the
> > > > > transfer time for each PPRC write. There may be one or more ISLs
> > > > > that are not being used while others are overloaded.
> > > > > 4) Switch board connections not optimal: talk to your switch
> > > > > vendor.
> > > > > 5) Host adapter port connections not optimal: talk to your
> > > > > storage vendor.
> > > > > 6) Sysplex tuning may identify I/O that can convert from disk to
> > > > > sysplex caching. Not my expertise, but I'm sure there are some
> > > > > Redbooks.
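> > > > >
> > > > > On item 1, a rough credit estimate (assuming full-size ~2 KB FC
> > > > > frames; your switch vendor's guidance takes precedence): at
> > > > > 2 Gbps a frame takes about 10 us to serialize, during which light
> > > > > covers about 2 km of fibre, so each frame in flight occupies
> > > > > roughly 2 km. Keeping a 30 km link full means covering the 60 km
> > > > > round trip:
> > > > >
> > > > >     60 km / 2 km per frame = ~30 B2B credits
> > > > >
> > > > > i.e. about one credit per km at 2 Gbps; smaller frames need
> > > > > proportionally more.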
> > > > >
> > > > > There is good information on PPRC activity in the RMF Type 78
> > > > > records. You may want to do some analysis of these to see how
> > > > > transfer rates and PPRC write response time correlate with your
> > > > > DASD disconnect time.
> > > > >
> > > > > Final comment: do you really need synchronous remote copy? If
> > > > > your company requires zero data loss, then you don't get this
> > > > > from synchronous replication alone. You must use the Critical=Yes
> > > > > option, which has its own set of risks and challenges. If you are
> > > > > not using GDPS and HyperSwap for hot failover, then synchronous
> > > > > is not much better than asynchronous. Rolling disasters,
> > > > > transaction rollback, and options that turn off in-flight data
> > > > > set recovery can all leave synchronous replication with the same
> > > > > RPO as asynchronous.
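> > > > >
> > > > > For context, the critical attribute is set when the PPRC pair is
> > > > > established. An illustrative TSO sketch only: every device
> > > > > number, SSID, serial, and LSS below is a made-up placeholder,
> > > > > and operand formats vary by controller generation, so check the
> > > > > DFSMS Advanced Copy Services manual before use:
> > > > >
> > > > >   CESTPAIR DEVN(X'8000') PRIM(X'2100' 12345 X'00' X'01') -
> > > > >            SEC(X'3100' 67890 X'00' X'01') MODE(COPY) CRIT(YES)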
> > > > >
> > > > > Ron
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > -----Original Message-----
> > > > > From: IBM Mainframe Discussion List
> > > > > [mailto:IBM-MAIN@LISTSERV.UA.EDU
> > ]
> > > > > On Behalf Of Tommy Tsui
> > > > > Sent: Thursday, February 15, 2018 12:41 AM
> > > > > To: IBM-MAIN@LISTSERV.UA.EDU
> > > > > Subject: Re: [IBM-MAIN] DASD problem
> > > > >
> > > > > Hi,
> > > > > The distance is around 30 km. Do you know of any settings in the
> > > > > sysplex environment, such as GRS and the JES2 checkpoint, that we
> > > > > need to be aware of? It is direct DASD via a SAN switch to the DR
> > > > > site, with a 2 Gbps interface. We checked with the vendor; they
> > > > > didn't find any problem on the SAN switch or the DASD. I suspect
> > > > > the system settings.
> > > > >
> > > > > Alan(GMAIL)Watthey <a.watt...@gmail.com> wrote on Thursday, 15 February 2018:
> > > > >
> > > > > > Tommy,
> > > > > >
> > > > > > This sounds like the PPRC links might be a bit slow, or there
> > > > > > are not enough of them.
> > > > > >
> > > > > > What do you have?  Direct DASD to DASD, via a single SAN
> > > > > > switch, or even cascaded?  What settings (Gbps) are all the
> > > > > > interfaces running at? (You can ask the switch for the switch,
> > > > > > and RMF for the DASD.)
> > > > > >
> > > > > > What type of fibre are they?  LX or SX?  What lengths are
> > > > > > they?
> > > > > >
> > > > > > Any queueing?
> > > > > >
> > > > > > There are so many variables that can affect the latency.  Are
> > > > > > there any of the above that you can improve on?
> > > > > >
> > > > > > I can't remember what IBM recommends, but 80% sounds a little
> > > > > > high to me.  They are only used for writes (not reads).
> > > > > >
> > > > > > Regards,
> > > > > > Alan Watthey
> > > > > >
> > > > > > -----Original Message-----
> > > > > > From: Tommy Tsui [mailto:tommyt...@gmail.com]
> > > > > > Sent: 15 February 2018 12:15 am
> > > > > > Subject: DASD problem
> > > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Our shop found that most jobs' elapsed time is prolonged with
> > > > > > PPRC synchronization versus without PPRC. It is almost 4 times
> > > > > > faster without PPRC synchronization. Are there any parameters
> > > > > > we need to tune on the z/OS or disk subsystem side? We found
> > > > > > the % disk utilization in the RMF report is over 80. Any help
> > > > > > will be appreciated. Many thanks.
> > > > > >
> > > > >
> > > >
> > >
> > --
> >
> > Rob Schramm
> >
>

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
