Re: Hardware-assisted compression: not CPU-efficient?

2010-12-08 Thread Yifat Oren
Pardon my bringing back an old thread, but -

I wanted to see how much better the COMPRESS option is than HWCOMPRESS in
terms of CPU time, and was pretty surprised when my results suggested
that HWCOMPRESS is consistently more efficient (in both CPU time and channel
utilization) than COMPRESS:

DFDSS DUMP with OPT(4) of a VSAM data set to disk (basic format):

STEPNAME         PROCSTEP  RC  EXCP   CONN    TCB  SRB  CLOCK  Output
DUMP-HWCOMPRESS            00  14514   93575  .25  .07   2.3     958 cyls.
DUMP-COMPRESS              00  14819   92326  .53  .07   2.5     978 cyls.
DUMP-NOCOMP                00  15283    103K  .13  .08   2.4   1,017 cyls.


DFDSS DUMP with OPT(4) of a PS (basic format) data set to disk (basic format):

STEPNAME         PROCSTEP  RC  EXCP   CONN    TCB  SRB  CLOCK  Output
DUMP-HWCOMPRESS            00  13317    154K  .44  .19   6.2     877 cyls.
DUMP-COMPRESS              00  14692    157K  .68  .19   5.1     969 cyls.
DUMP-NOCOMP                00  35827    238K  .14  .21   7.9   2,363 cyls.


Running on a 2098-I04. DFSMSDSS V1R09.0. 
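
For reference, a minimal sketch of the kind of step behind each row above (data
set names, filter and space figures are hypothetical; the COMPRESS and NOCOMP
steps are identical except that HWCOMPRESS is replaced by COMPRESS or omitted):

//DUMPHW   EXEC PGM=ADRDSSU,REGION=0M
//SYSPRINT DD SYSOUT=*
//OUT1     DD DSN=BACKUP.TEST.HWCOMP,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(1000,100),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(DBTEST.**)) -
       OUTDDNAME(OUT1)             -
       OPTIMIZE(4)                 -
       HWCOMPRESS
/*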


So how come I get different results from the original poster?
The test data consisted of database-type data sets.

Best Regards,
Yifat



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-08 Thread Hal Merritt
I think your test was too small. I did not see any meaningful differences among
your results. I'd go for test data at least 100 times that size.

 



Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Johnny Luo
Hi,

DSS DUMP supports the COMPRESS and HWCOMPRESS keywords, and I found in my test
that HWCOMPRESS costs more CPU than COMPRESS.

Is that normal?

Currently we're dumping huge amounts of production data to tape, and in order to
reduce tape channel utilization we need to compress the data before
writing it to tape. It works well, but the CPU usage is a problem because we have
many such backup jobs running simultaneously.

If hardware-assisted compression cannot reduce the CPU overhead, I will
consider using a resource group to cap those jobs.

Best Regards,
Johnny Luo



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Johnny,

The saving in hardware-assisted compression is in decompression - when you read
the data back. Look at what should be a much lower CPU cost to decompress the files
during restore, and decide whether the speed of restoring the data concurrently is
worth the increase in CPU required to back it up in the first place.
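
For a concrete point of comparison, a minimal ADRDSSU restore sketch (data set,
DD and volume names are hypothetical); no decompression keyword is needed on
RESTORE, since DFSMSdss works out how the dump data set was written:

//RESTORE  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//INDD     DD DSN=BACKUP.PROD.DUMP,DISP=OLD
//SYSIN    DD *
  RESTORE DATASET(INCLUDE(**)) -
          INDDNAME(INDD)       -
          OUTDYNAM(PRD001)     -
          REPLACE CATALOG
/*

The CPU difference between a dump written with COMPRESS and one written with
HWCOMPRESS would show up in the TCB time of this step.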

Ron



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Martin Packer
Ron, is it generally the case that CPU is saved on read? I'm seeing QSAM 
jobsteps with HDC (hardware data compression) showing very high CPU. But then 
they seem to both write and read - enough CPU to potentially suffer from queuing.

(And, yes, I know you were talking about a different category of HDC 
usage.)

Martin Packer,
Mainframe Performance Consultant, zChampion
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker







Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Miklos Szigetvari

Hi

A few years ago I tried hardware compression, as we use the zlib library
(http://www.ietf.org/rfc/rfc1950.txt) intensively to compress and expand data.
I never got a proper answer, and it is still not clear to me in which cases
hardware compression would bring some CPU reduction.




Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Johnny Luo
Miklos,

What do you mean by 'zlib'? Is it free on z/OS?

Best Regards,
Johnny Luo




Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Miklos Szigetvari

Hi

Yes, it is a C library.



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Paul Gilmartin
On Thu, 2 Dec 2010 02:53:17 -0800, Ron Hawkins wrote:

The saving in hardware assisted compression is in decompression - when you 
read it. Look at what should be a much lower CPU cost to decompress the files 
during restore and decide if the speed of restoring the data concurrently is 
worth the increase in CPU required to back it up in the first place.

So if you restore more frequently than you back up, you come out ahead?

-- gil



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Yifat Oren
Hi Johnny, 

I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are
synonymous. Are you saying they are not?


If you are writing to tape, why not use the drive compaction (DCB=TRTCH=COMP)
instead?
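
For example (a hypothetical output DD; with TRTCH=COMP the compaction is done in
the drive/control unit, so it costs no host CPU):

//TAPEOUT  DD DSN=BACKUP.PROD.DUMP,DISP=(NEW,CATLG),
//            UNIT=TAPE,LABEL=(1,SL),
//            DCB=TRTCH=COMP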

Best Regards,
Yifat



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Vernooij, CP - SPLXM
Yifat Oren yi...@tmachine.com wrote in message
news:3d0c19e6913742b282eeb9a7c4ae3...@yifato...

 Hi Johnny,

 I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS
 are synonymous; are you saying they are not?

 If you are writing to tape why not use the drive compaction
 (DCB=TRTCH=COMP) instead?

Because he is trying to lower channel utilization, he must compress the data
before sending it over the channel.

Kees.



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Norbert Friemel
On Thu, 2 Dec 2010 16:29:56 +0200, Yifat Oren wrote:


I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are
synonymous;

Are you saying they are not?



Yes, they are not synonymous. HWCOMPRESS uses the CMPSC instruction
(dictionary-based compression). COMPRESS uses RLE (run-length encoding).

Norbert Friemel



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tony Harminc
On 2 December 2010 05:53, Ron Hawkins ron.hawkins1...@sbcglobal.net wrote:
 Johnny,

 The saving in hardware assisted compression is in decompression - when you 
 read it. Look at what should be a much lower CPU cost to decompress the files 
 during restore and decide if the speed of restoring the data concurrently is 
 worth the increase in CPU required to back it up in the first place.

I am a little surprised at this. Certainly for most of the current
dynamic dictionary based algorithms (and many more as well),
decompression will always, except in pathological cases, be a good
deal faster than compression. This is intuitively obvious, since the
compression code must not only go through the mechanics of
transforming input data into the output codestream, but must do it
with some eye to actually compressing as best it can with the
knowledge available to it, rather than making things worse. The
decompression simply takes what it is given, and algorithmically
transforms it back with no choice.

Whether hardware-assisted decompression - which in this case means decompression
using the tree-manipulation instructions - is disproportionately faster than the
corresponding compression, I don't know, but I'd be surprised
if it's much different.

But regardless, surely it is a strange claim that an installation
would use hardware assisted compression in order to make their
restores faster, particularly at the expense of their dumps. What
would be the business case for such a thing? How many installations do
restores on any kind of regular basis? How many have a need to have
them run even faster than they do naturally when compared to the
dumps?

Tony H.



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tom Marchant
On Thu, 2 Dec 2010 12:09:23 -0500, Tony Harminc wrote:

On 2 December 2010 05:53, Ron Hawkins wrote:

 The saving in hardware assisted compression is in 
decompression - when you read it. Look at what should be a 
much lower CPU cost to decompress the files during restore 
and decide if the speed of restoring the data concurrently is 
worth the increase in CPU required to back it up in the first place.

I am a little surprised at this

But regardless, surely it is a strange claim that an installation
would use hardware assisted compression in order to make their
restores faster, particularly at the expense of their dumps.

Increased CPU time to do the dump does not necessarily mean that 
the elapsed time is longer.  In fact, by compressing the data, I would 
expect that the time required to write it out (the I/O time) would be 
less.
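
For a concrete illustration using the DFDSS figures quoted elsewhere in this
digest (Yifat Oren's PS data set test): HWCOMPRESS cut the output from 2,363 to
877 cylinders and the connect time from 238K to 154K (roughly 35% less), and
elapsed time dropped from 7.9 to 6.2, while TCB time rose from .14 to .44.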

-- 
Tom Marchant



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Staller, Allan
Unfortunately, IBM et al. *DO NOT* bill on elapsed time.

More CPU used for the dump is less CPU available for productive work - or,
worse yet, a bigger software bill!


snip
Increased CPU time to do the dump does not necessarily mean that 
the elapsed time is longer.  In fact, by compressing the data, I would 
expect that the time required to write it out (the I/O time) would be 
less.



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Martin,

Except for when the compression assist instructions were in millicode on the
G4 and G5, the hardware compression from Compression Services has always had
an asymmetric cost for DFSMS compression. I remember some early IBM
documentation, from when it was first introduced in DFSMS, that quoted 12
instructions per byte to compress and two instructions per byte to
decompress.
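
Taking those figures at face value just for scale: compressing one gigabyte at
12 instructions per byte is roughly 12 x 10^9 instructions, while decompressing
it at 2 instructions per byte is roughly 2 x 10^9 - about a 6:1 CPU ratio in
favour of the read side, before any cache or millicode effects.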

Ron




Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Gil,

I was thinking that a faster restore would have some value as a reduction
in recovery time, as opposed to back-up duration, which is usually outside of
any business critical path.

This would have value in business continuance whether it was a small
application recovery or a full disaster recovery situation. I don't think
the frequency of recovery is a factor in this case.

Ron



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Hal Merritt
Conversely, sometimes it is hard to get the backups all done in a low activity 
window, so one might compromise in favor of faster backups even at the expense 
of more CPU consumption. 

Depending on the shop's strategy, getting a logically consistent point-in-time
(PIT) copy just might put the backups in the business critical path. That is,
they all have to complete before the next business day starts.






Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Stephen Mednick
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Hal Merritt
Sent: Friday, 3 December 2010 6:44 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Hardware-assisted compression: not CPU-efficient?

Conversely, sometimes it is hard to get the backups all done in a low
activity window, so one might compromise in favor of faster backups even at
the expense of more CPU consumption. 

Depending on shop's strategy, getting a logically consistent PIT copy just
might put the backups in the business critical path. That is, all have to
complete before the next business day starts. 
---

Doesn't have to be if you combine the backups with hardware vendor
replication technologies such as SHADOWIMAGE, TIMEFINDER and FLASHCOPY. 

Read how Innovation's FDRINSTANT solution gets around the issue of taking
backups off the critical path:

http://www.innovationdp.fdr.com/products/fdrinstant/


Stephen Mednick
Computer Supervisory Services
Sydney, Australia
 
Asia/Pacific representatives for
Innovation Data Processing, Inc.



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ted MacNEIL
opposed to back-up duration which is usually outside of
any business critical path. 

It shouldn't be, especially if back-ups have to complete before sub-systems can 
come up.

If we ran out of window, we had senior IT management and business contacts 
decide which was more critical: back-up; availability.

Sometimes, like during the Christmas shopping season, the decision was
availability.

But, that was mortgaging the future.
Recovering without back-ups during your peak season takes longer than during 
'normal' times.

We never had to run recovery when we made the decision, but I was glad I didn't 
have the responsibility to make the choice.

Back-ups are insurance premiums.
If you pay and nothing happens, it's a business expense.
If you don't pay and something happens, it may be a career event!

-
Ted MacNEIL
eamacn...@yahoo.ca



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Tony,

You are surprised, and then you explain your surprise by agreeing with me.
I'm confused.

I'm not sure if you realized that the Huffman encoding technique used by
the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a
symmetrical CPU cost for compression and decompression.

Finally, as I mentioned in another email, there may be intrinsic Business
Continuance value in taking advantage of the asymmetric CPU cost to speed up
local recovery of an application, or Disaster Recovery that is based on
DFSMSdss restores. An improvement in Recovery time may be worth the
increased cost of the backup.

Ron



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Ted,

I think that's why we DASD vendors invented Concurrent Copy, Snapshot,
Shadowimage, Timefinder and FlashCopy. The backup is done relatively
quickly, and copying the backup to tape can be completed outside the
business critical path.

I'm not suggesting for a moment that everyone uses these products.

I did say that the increased cost and time for backup need to be evaluated
against any improvement in restoration time with hardware compression. Thank
you to all those who reinforced the need for this evaluation in their
responses.



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Andrew N Wilt
Ron,
Thank you for the good response. It is true that the DFSMSdss
COMPRESS keyword and the HWCOMPRESS keyword do not perform the same types of
compression. As Ron said, the COMPRESS keyword uses a Huffman
encoding technique and works amazingly well for repeated bytes (just the sort of
thing you see on system volumes). The HWCOMPRESS keyword uses a
dictionary-based method and reportedly works well on customer-type data.
The CPU utilization of HWCOMPRESS (dictionary-based) is indeed higher
because of the work it is doing, so you should choose the type of compression
that suits your CPU utilization needs and data type.
It was mentioned elsewhere in this thread that you could use the tape
hardware compaction. If you have it available, that's what I would go for.
The main intent of the HWCOMPRESS keyword was to provide the dictionary-based
compression for the cases where you were using the software
encryption, and thus couldn't utilize the compaction of the tape device.
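
For illustration, a sketch of that encryption case (data set and key-label names
are hypothetical, and the exact ENCRYPT/RSA keyword spellings should be checked
against the DFSMSdss reference for your release): the data is compressed by
HWCOMPRESS before it is encrypted, which is why the tape drive's own compaction
no longer helps downstream:

//DUMPENC  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//TAPE     DD DSN=BACKUP.PROD.ENC,DISP=(NEW,CATLG),
//            UNIT=TAPE,LABEL=(1,SL)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(PROD.**)) -
       OUTDDNAME(TAPE)           -
       HWCOMPRESS                -
       ENCRYPT(CLRAES128)        -
       RSA(BACKUP.KEY.LABEL)
/*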

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona




Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tony Harminc
On 2 December 2010 18:20, Ron Hawkins ron.hawkins1...@sbcglobal.net wrote:
 Tony,

 You are surprised, and then you explain your surprise by agreeing with me.
 I'm confused.

Well now I'm confused; I'm not sure how I did what you say.

 I'm not sure if you realized that the Huffman encoding technique used by
 the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a
 symmetrical CPU cost for compression and decompression.

No, I didn't know anything about the compression methods triggered by
these two keywords until this thread. But I do know to some extent how
both Huffman and the LZW-style dictionary compression schemes work,
and that there is a differential between encoding and decoding speed
when an inherently adaptive scheme like LZW is used, vs a usually
static Huffman scheme.

But I'm afraid I'm missing your point. You said  that the saving in
hardware assisted compression is in decompression, and I took this to
be a claim that hardware assisted decompression is somehow speeded up
- when compared to a plain software implementation - relatively more
than is compression, and I said that I doubt that that is the case.
But if it is indeed the case under some circumstances, then I don't
see why most shops would care in most cases.

 Finally, as I mentioned in another email, there may be intrinsic Business
 Continuance value in taking advantage of the asymmetric CPU cost to speed up
 local recovery of an application, or Disaster Recovery that is based on
 DFSMSdss restores. An improvement in Recovery time may be worth the
 increased cost of the backup.

It's certainly possible, but I think it is unlikely to be the common case.

Tony H.



Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Tony,

Then the misunderstanding is that Compression Services, as called by DFSMSdfp
and DFSMSdss with HWCOMPRESS, uses an LZW compression scheme, while DFSMShsm
and DFSMSdss with the COMPRESS keyword use the Huffman technique.

The asymmetric cost of HWCOMPRESS I was referring to, and that apparently
confused you, is the same differential for LZW that you mention. I
suspected that you did not know the difference in encoding techniques, which
is why I pointed it out.

I'm aware of customers with Disaster Recovery schemes that rely on restores
from DFSMSdss back-ups, and spend the first 12-24 hours of the DR drill
restoring data from port and channel constrained cartridge drives.
HWCOMPRESS would relieve that situation and potentially speed up the restore
process by 50% or more without creating a CPU bottleneck that may occur with
COMPRESS.
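
To put rough numbers on that: if the first 12-24 hours of a drill are spent
reading dump data from saturated cartridge paths, then a dump that is (say) half
the size reads back in roughly half the time - 12 hours becomes about 6 - for as
long as the restore remains I/O-bound rather than CPU-bound.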

My own finding a few years ago was that dumping to disk with COMPRESS, in the
absence of channel and hardware buffer saturation (i.e. disk output), runs
slower than NOCOMPRESS. A decade and a half ago, when I looked at this with
ASTEX trace and GTFPARS, it looked like read/write overlap was disabled when
COMPRESS was used. I have not run any tests with HWCOMPRESS.

Ron
