Kenneth Kripke wrote:
>1.      How to recover if there is a failure in deflation of a compressed
>dataset.  We have a mixture of z/14 and z/15 processors.

What failure(s) do you have in mind? Of zEDC Express adapters (cards) on your 
IBM z14 machines?

There should be software fallback on machine models that lack zEDC hardware 
capabilities — assuming you've got the prerequisite PTFs in place. IBM z15 
machines (and higher) are guaranteed to have zEDC functionality in hardware 
since it's integrated on the main processor chips.

You could test the software fallback on your IBM z14 machines, both for function 
and performance, if you're concerned about card failures. More realistically, 
you could simulate the effects of a single card failure (configure the card 
offline) and test that failure scenario. Hardware failures beyond a single card 
probably qualify as DR scenarios; at some point the double (and triple, and 
quadruple...) failure scenarios must qualify as whole site losses.
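If it helps, the functional side of such a test can start with something as simple as a round trip through zlib. This is only a minimal sketch and not z/OS-specific: whether the zlib underneath your Python (or application) actually exploits zEDC hardware depends on how it was built, so this verifies the deflate/inflate path works correctly, not which path it took:

```python
import zlib

def roundtrip(data: bytes, level: int = 6) -> float:
    """Compress, decompress, verify integrity, and return the compression ratio."""
    compressed = zlib.compress(data, level)
    assert zlib.decompress(compressed) == data, "round-trip mismatch"
    return len(data) / len(compressed)

# Compressible sample: repetitive records, typical of many batch datasets.
sample = b"HEADER RECORD 0001 " * 5000
ratio = roundtrip(sample)
print(f"compression ratio {ratio:.1f}:1")
```

Running the same check with and without the hardware path available (or on z14 with a card configured offline) is one way to confirm the fallback behaves identically from the application's point of view.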

>2.      For the z/15 processor, the footnote for the SMF 30 record indicates
>that compression statistics are no longer recorded.  How do you measure
>compression?  Is this also true for the SMF 14 and 15 records?

No, that's not quite true. On z15 machines and higher, certain parts of the SMF 
Type 30 record are moot because of the vastly improved hardware; that's all. 
You still get the zEDC usage statistics that are relevant on the newer 
machines: the number of compression and decompression requests, plus the byte 
counts (compressed/decompressed, in/out). See here for reference:

https://www.ibm.com/docs/en/zos/2.5.0?topic=mapping-zedc-usage-statistics-section
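As a rough sketch of what "measuring compression" from those byte counts looks like, the arithmetic is just uncompressed bytes over compressed bytes. The numbers below are illustrative only, not real SMF field values; consult the mapping at the link above for the actual section layout and field names:

```python
def compression_ratio(bytes_in: int, bytes_out: int) -> float:
    """Ratio of uncompressed to compressed bytes; > 1.0 means the data shrank."""
    if bytes_out == 0:
        raise ValueError("no compressed output recorded")
    return bytes_in / bytes_out

# Illustrative numbers only: 1 GiB in, 250 MiB out after compression.
ratio = compression_ratio(1_073_741_824, 262_144_000)
print(f"{ratio:.2f}:1")  # → 4.10:1
```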

>3.      Regarding deflation, is there a noticeable performance/delay ?

"Probably not," especially on IBM z15 and higher, but that'll be 
configuration-dependent and something you'll want to test to a reasonable degree.
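For "test to a reasonable degree," a crude software-side timing harness is one starting point. Again just a sketch: it times whatever zlib implementation you happen to have, hardware-assisted or not, so run it on the actual configurations you care about rather than trusting numbers from anywhere else:

```python
import time
import zlib

def time_deflate(data: bytes, level: int, repeats: int = 5) -> float:
    """Return the best-of-N wall-clock seconds to deflate `data` at `level`."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        zlib.compress(data, level)
        best = min(best, time.perf_counter() - start)
    return best

payload = bytes(range(256)) * 4096  # ~1 MiB of repetitive sample data
for level in (1, 6, 9):
    print(f"level {level}: {time_deflate(payload, level) * 1000:.2f} ms")
```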

Sometimes/often your performance *improves* when you use zEDC. In particular, 
batch elapsed times can decrease. There are fewer bytes to fetch from 
disk/flash storage (and storage cache) when those bytes are compressed, so if 
you've got something(s) I/O intensive and compressible you tend to do quite 
well. I recall that one of the customers I work with shaved about 25 minutes off 
their typical batch cycle. That might not seem like a lot, but in fact it's a 
big deal. It's up to them what they do with those extra 25 minutes, but usually 
it means they can absorb more business growth than expected. They can
handle more batch and online transaction processing within the same computing 
resources they have today. And/or they have more margin for errors in their 
batch cycles.

—————
Timothy Sipples
Senior Architect
Digital Assets, Industry Solutions, and Cybersecurity
IBM zSystems/LinuxONE, Asia-Pacific
sipp...@sg.ibm.com


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
