Re: Netview FTP - Hardware or software compression?

2011-11-04 Thread Chris Mason
Fred

>...> And sorry if my question confused ...

Unlike Hal Merritt, I wasn't *confused* by the question, only intrigued ...

> ... does VTAM compression apply only for SNA traffic or also IP traffic?

VTAM provides compression as a service to applications using the VTAM API. 
Because it is architected within SNA, it is a service which may also be 
supported by other implementations of SNA.

By "IP traffic" I assume you mean applications supported by - these days - the 
sockets API. This has nothing whatsoever to do with VTAM. Enterprise Extender 
is a technique whereby the IP network is employed as a logical SNA connection 
but I don't believe that qualifies as your "IP traffic".

Where I would expect you to look for compression support is in what can be done 
by individual IP-based applications.

Not so long ago in August we had the thread "FTP Question"[1] - not the most 
imaginative subject line - which involved a discussion of how FTP supported 
compression. It was concluded - not without some supercilious comments from Mr 
Gilmartin[2] - that, in essence, what FTP offered as "compression", FTP 
subcommand COMPRESS[3] or MODE C[4], corresponded to VTAM's level 1 
compression, RLE, as described before.
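
Since RLE comes up repeatedly below, a toy sketch of the idea in Python - 
illustrative only, as SNA and FTP MODE C each define their own block formats:

def rle_compress(data):
    # Toy run-length encoder: emit one (count, byte) pair per run.
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

print(rle_compress(b"AAAABBBCXX"))  # [(4, 65), (3, 66), (1, 67), (2, 88)]

Long runs of blanks or zeros - common in mainframe records - collapse to a 
single pair, which is why even this simple scheme pays off.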

Incidentally, I verified these points by using "compression" as a search word 
in the "Search text" box of the "Communications Server" bookshelf at, say, the 
following URL:

http://www-03.ibm.com/systems/z/os/zos/bkserv/zshelves13.html

Using this technique, I also searched the JES2 bookshelf in order to check on 
the picture for NJE over IP. It seems that compression is simply *not* supported 
- perhaps not yet ...

-

[1] http://aime.ua.edu/cgi-bin/wa?A1=ind1108&L=ibm-main#97

[2] He may still be picking felt fragments from his teeth!

[3] http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/F1A1B9A0/5.14

[4] http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/F1A1B9A0/5.44

-

Chris Mason

On Thu, 3 Nov 2011 03:19:13 -0500, Fred Schmidt  wrote:

>Chris (or anyone)... does VTAM compression apply only for SNA traffic or also 
>IP traffic? In case you haven't already guessed, I am not a comm's person, so 
>please excuse my ignorance.
>
>Regards, Fred



Re: Netview FTP - Hardware or software compression?

2011-11-03 Thread Fred Schmidt
Chris (or anyone)... does VTAM compression apply only for SNA traffic or also 
IP traffic? In case you haven't already guessed, I am not a comm's person, so 
please excuse my ignorance.

Regards, Fred



Re: Netview FTP - Hardware or software compression?

2011-11-03 Thread Fred Schmidt
Thanks, Chris, for the most helpful and informative reply. And sorry if my 
question confused - it must be an Australianism.

Regards, Fred



Re: Netview FTP - Hardware or software compression?

2011-11-02 Thread Chris Mason
Fred

I assume that your question is to be interpreted as "Would someone be able to 
tell me whether NetView FTP uses hardware or software compression?"

-

According to 

http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd&subtype=sm&appname=ShopzSeries&htmlfid=897/ENUS5685-108

NetView FTP for MVS V2R2.1 is probably what you have.

I downloaded some of the product manuals and I see that, assuming you are not 
using the OSI interface, your NetView FTP will be using the VTAM API.

By finding references to compression in various of the manuals, it becomes 
evident that the three types of compression used by NetView FTP mean the 
following:

- NONE - No compression is used - actually the easiest to work out!

- SNA - SNA-type compression is used which, anywhere else in the world, would be 
described as "Run-Length Encoding (RLE) or Compaction"

- ADAPT - Adaptive compression is used - strong hints that the good gentlemen 
Lempel, Ziv and Welch were the authors of the technique; a sketch follows this list
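
To make "adaptive" concrete, here is a minimal LZW compressor in Python - a 
sketch of the technique only, not NetView FTP's actual ADAPT implementation:

def lzw_compress(data):
    # The dictionary grows as the data is read - hence "adaptive" - so any
    # repeated sequence, not just a run of one byte, collapses to one code.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for b in data:
        wb = w + bytes([b])
        if wb in dictionary:
            w = wb
        else:
            out.append(dictionary[w])
            dictionary[wb] = next_code
            next_code += 1
            w = bytes([b])
    if w:
        out.append(dictionary[w])
    return out

print(lzw_compress(b"ABABABAB"))  # [65, 66, 256, 258, 66]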

Nothing is said in the NetView FTP manuals about whether there is any use of 
the hardware capability to assist with compression, which I would tend to assume 
means that there is no use of hardware. Note that, if hardware were used, there 
would need to be a statement somewhere in the documentation regarding the 
appropriate processor models.

I happen to have some presentation notes on when this hardware capability 
became available and the notes are as follows:



Note: Hardware compression is available with certain ES/9000 models from 
February 1993. It also requires a minimum MVS level of SP V4R3 with PTF UY91011.



What I think has happened is that the IBM developers in one of those 
Baden-Württemberg locations implemented this pair of compression options in 
their product without having checked with their colleagues in the Research 
Triangle Park location whether or not the product managing the API they were 
using was going to set about implementing the self-same pair of compression 
options in the very near future, thus making their efforts essentially pointless!

Thus what I actually recommend is that you abandon using the NetView FTP 
options, that is, specify "NONE", and arrange for VTAM to do what you were 
thinking of asking NetView FTP to do. I realise this is not very patriotic but 
it may be more sensible.

What I can now say is that the VTAM mechanisms *do* use the hardware capability 
where it seems appropriate.

Furthermore there is a VTAM start option, CMPMIPS, which must be set in order 
to indicate when hardware compression "switches in". I'm pretty sure there 
would need to be specification of a similar parameter by NetView FTP if it were 
to be capable of using hardware compression.
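
To illustrate the style of decision only - a hypothetical sketch, with the 
threshold semantics simplified rather than VTAM's documented algorithm:

def choose_compression(cpu_mips, cmpmips):
    # Hypothetical: on a slower processor, software compression costs
    # relatively more CPU, so hardware assist (where installed) wins.
    return "hardware" if cpu_mips < cmpmips else "software"

print(choose_compression(cpu_mips=80, cmpmips=100))   # hardware
print(choose_compression(cpu_mips=400, cmpmips=100))  # software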

I'm sending you my presentation on the topic which I happen to have converted 
to a document from a GML form. It is intended to cover everything you need to 
know.

-

I find this technique of starting the question with "Anyone know ...", already 
an abbreviation of "Does anyone know ...", most curious! In many cases, such as 
this one, a perfectly valid reply, answering the question fully, would be "I 
imagine the developers of the product would know."!

-

Chris Mason

On Wed, 2 Nov 2011 11:28:44 +0100, Fred Schmidt  wrote:

>Anyone know whether Netview FTP uses hardware or software compression?
> 
>Regards, Fred Schmidt



Re: Netview FTP - Hardware or software compression?

2011-11-02 Thread Hal Merritt
Netview and FTP are generally considered two separate things. 

The FTP supplied with z/OS uses software compression if certain conditions are 
met. Hardware compression may occur downstream in the network appliances. 
 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
Fred Schmidt
Sent: Wednesday, November 02, 2011 5:29 AM
To: IBM-MAIN@bama.ua.edu
Subject: Netview FTP - Hardware or software compression?

Anyone know whether Netview FTP uses hardware or software compression?
 
Regards, Fred Schmidt




Netview FTP - Hardware or software compression?

2011-11-02 Thread Fred Schmidt
Anyone know whether Netview FTP uses hardware or software compression?
 
Regards, Fred Schmidt




Re: where 2 find SMS compression code

2011-08-02 Thread Rick Fochtman

--
Thanks. These are small datasets. I don't know why they were compressed 
with Data Accelerator. We greatly overused that product. Management at 
the time said: "Great! Compress everything and we don't need to get any 
more DASD!" Management today says: "Use SMS compression and eliminate 
the cost of Data Accelerator!" We did no testing to see how this will 
affect CPU usage or compression ratio. Just say "save money!" and eyes 
glisten like a child in a candy shop.

---
Funny how quickly that "kid in a candy shop" gets a pain in his tum-tum. :-)

I jerked sodas one summer as a teenager; I was told to help myself to 
anything I could see, without limits. By the end of a week, the mere 
THOUGHT of ice-cream made me physically ill.


Rick



Re: where 2 find SMS compression code

2011-08-02 Thread Norbert Friemel
On Tue, 2 Aug 2011 06:32:22 -0500, Scott Chapman wrote:

>I realize you said you aren't testing for compression ratio or CPU usage, but 
>you might still want to take a quick look at those with both tailored and 
>generic/standard compression.   I just recently found that switching my SMF 
>data to tailored compression saved about 40% of the space, but at the cost of 
>about a 25% increase in CPU time.  Everybody probably has different views 
>about that trade-off, but those percentages are big enough to make it worth 
>looking at regardless of which resource is more precious to you at the moment.
>

Tailored compression is not supported for VSAM KSDSes (AFAIK).

Norbert Friemel



Re: where 2 find SMS compression code

2011-08-02 Thread McKown, John
I'll need to "hit the books" to see how to do that. 

--
John McKown
Systems Engineer IV, HealthMarkets

 

> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Scott Chapman
> Sent: Tuesday, August 02, 2011 6:32 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: where 2 find SMS compression code
> 
> I realize you said you aren't testing for compression ratio 
> or CPU usage, but you might still want to take a quick look 
> at those with both tailored and generic/standard compression. 
>   I just recently found that switching my SMF data to 
> tailored compression saved about 40% of the space, but at the 
> cost of about a 25% increase in CPU time.  Everybody probably 
> has different views about that trade-off, but those 
> percentages are big enough to make it worth looking at 
> regardless of which resource is more precious to you at the moment.  
> 
> Your mileage may vary.  Past performance is not indicative of 
> future gains.  
> 
> Scott Chapman
> 
> >Thanks. These are small datasets. I don't know why they were compressed
> >with Data Accelerator. We greatly overused that product. Management at
> >the time said: "Great! Compress everything and we don't need to get any
> >more DASD!" Management today says: "Use SMS compression and eliminate
> >the cost of Data Accelerator!" We did no testing to see how this will
> >affect CPU usage or compression ratio. Just say "save money!" and eyes
> >glisten like a child in a candy shop.
> 



Re: where 2 find SMS compression code

2011-08-02 Thread Scott Chapman
I realize you said you aren't testing for compression ratio or CPU usage, but 
you might still want to take a quick look at those with both tailored and 
generic/standard compression.   I just recently found that switching my SMF 
data to tailored compression saved about 40% of the space, but at the cost of 
about a 25% increase in CPU time.  Everybody probably has different views about 
that trade-off, but those percentages are big enough to make it worth looking 
at regardless of which resource is more precious to you at the moment.  

Your mileage may vary.  Past performance is not indicative of future gains.  

Scott Chapman

>Thanks. These are small datasets. I don't know why they were compressed
>with Data Accelerator. We greatly overused that product. Management at
>the time said: "Great! Compress everything and we don't need to get any
>more DASD!" Management today says: "Use SMS compression and eliminate
>the cost of Data Accelerator!" We did no testing to see how this will
>affect CPU usage or compression ratio. Just say "save money!" and eyes
>glisten like a child in a candy shop.



Re: where 2 find SMS compression code

2011-08-02 Thread John McKown
Thanks. These are small datasets. I don't know why they were compressed
with Data Accelerator. We greatly overused that product. Management at
the time said: "Great! Compress everything and we don't need to get any
more DASD!" Management today says: "Use SMS compression and eliminate
the cost of Data Accelerator!" We did no testing to see how this will
affect CPU usage or compression ratio. Just say "save money!" and eyes
glisten like a child in a candy shop.

On Mon, 2011-08-01 at 09:33 -0500, Norbert Friemel wrote:
> On Mon, 1 Aug 2011 09:16:52 -0500, McKown, John wrote:
> 
> >Thanks. Of course, I was really hoping to learn WHY it is "no benefit". Guess 
> >I'll need to double check the allocation / max lrecl / cisize.
> >
> 
> Primary space < 5 or 8MB or *minimum* lrecl (w/o key) < 40
> 
> Norbert Friemel  
> 
-- 
John McKown
Maranatha! <><



Re: where 2 find SMS compression code

2011-08-01 Thread Norbert Friemel
On Mon, 1 Aug 2011 09:16:52 -0500, McKown, John wrote:

>Thanks. Of course, I was really hoping to learn WHY it is "no benefit". Guess 
>I'll need to double check the allocation / max lrecl / cisize.
>

Primary space < 5 or 8MB or *minimum* lrecl (w/o key) < 40
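
Roughly, as a Python sketch of those criteria (whether the 5 or the 8 MB 
boundary applies depends on the allocation details - an assumption to verify 
in the DFSMS manuals for your release):

def compression_eligible(primary_mb, min_lrecl_without_key):
    # Too small an allocation, or records too short to compress usefully,
    # and Compression Services returns RS_NO_BENEFIT.
    if primary_mb < 5:
        return False
    if min_lrecl_without_key < 40:
        return False
    return True

print(compression_eligible(primary_mb=2, min_lrecl_without_key=80))  # False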

Norbert Friemel  



Re: where 2 find SMS compression code

2011-08-01 Thread McKown, John
Thanks. Of course, I was really hoping to learn WHY it is "no benefit". Guess I'll 
need to double check the allocation / max lrecl / cisize. 

--
John McKown
Systems Engineer IV, HealthMarkets

 

> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Norbert Friemel
> Sent: Monday, August 01, 2011 5:17 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: where 2 find SMS compression code
> 
> On Mon, 1 Aug 2011 04:48:05 -0500, John McKown wrote:
> 
> >
> >I don't seem to be able to find the 5F01083F code.
> >
> >
> 
> X'5F' = Compression Management Services
> X'01' = CMPSVCAL (allocation)
> http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R171/5.1.2.2
> 
> X'083F' = 2111 (DEC) = RS_NO_BENEFIT
> http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R171/5.1.5
> 
> Norbert Friemel
> 



Re: where 2 find SMS compression code

2011-08-01 Thread Norbert Friemel
On Mon, 1 Aug 2011 04:48:05 -0500, John McKown wrote:

>
>I don't seem to be able to find the 5F01083F code.
>
>

X'5F'  = Compression Management Services
X'01' = CMPSVCAL (allocation)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R171/5.1.2.2

X'083F' = 2111 (DEC) = RS_NO_BENEFIT
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R171/5.1.5
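
The decomposition is mechanical, e.g. in Python:

def decode_reason(code):
    # Split an IGD17162I reason such as 5F01083F into its parts:
    # X'5F' = Compression Management Services, X'01' = CMPSVCAL,
    # X'083F' = 2111 decimal = RS_NO_BENEFIT.
    return {
        "component": code[0:2],
        "function": code[2:4],
        "reason_hex": code[4:8],
        "reason_dec": int(code[4:8], 16),
    }

print(decode_reason("5F01083F"))  # reason_dec: 2111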

Norbert Friemel



where 2 find SMS compression code

2011-08-01 Thread John McKown
In my SMS conversion to compress some VSAM datasets, I am getting a
message like:

IGD17162I RETURN CODE (12) REASON CODE (5F01083F) RECEIVED FROM
COMPRESSION SERVICES WHILE PROCESSING DATA SET
PRITV.PR.GCR26KSD , COMPRESSION REQUEST NOT
HONORED BECAUSE DATA SET CHARACTERISTICS DO NOT MEET COMPRESSION
CRITERIA,
ALLOCATION CONTINUES
IGD17070I DATA SET PRITV.PR.GCR26KSD
ALLOCATED SUCCESSFULLY WITH 1 STRIPE(S).
IGD17172I DATA SET PRITV.PR.GCR26KSD
IS ELIGIBLE FOR EXTENDED ADDRESSABILITY

I don't seem to be able to find the 5F01083F code.


-- 
John McKown
Maranatha! <><



Re: where is ESA/390 Data Compression manual SA22-7208

2011-05-12 Thread Steve Horein
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR602/CCONTENTS?DT=19961127103547

On Thu, May 12, 2011 at 8:02 PM, Tom Simons  wrote:

> We're looking into using the CMPSC "Compression Call" instruction, and the
> z/Arch POP says ".. assumes knowledge of the introductory information and
> information about dictionary formats in *Enterprise Systems
> Architecture/390
> Data Compression, SA22-7208-01*."
>
> I find lots of references to SA22-7208, but have been unable to locate that
> manual.
>



where is ESA/390 Data Compression manual SA22-7208

2011-05-12 Thread Tom Simons
We're looking into using the CMPSC "Compression Call" instruction, and the
z/Arch POP says ".. assumes knowledge of the introductory information and
information about dictionary formats in *Enterprise Systems Architecture/390
Data Compression, SA22-7208-01*."

I find lots of references to SA22-7208, but have been unable to locate that
manual.



Re: RES: using SMS compression - how to manage

2011-04-28 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Ron Hawkins
> Sent: Thursday, April 28, 2011 10:03 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: RES: using SMS compression - how to manage
> 
> John,
> 
> I didn't think &MAXSIZE took multivolume into account. Isn't 
> it just primary
> + (15 * secondary)?
> 
> I've often thought that compression products should come with a sampling
> utility to read one CYL of a dataset and provide a compression report. This
> could be used to find the best compression candidates.
> 
> If you're willing to write out that sample CYL one could probably write
> something to do this with REXX, SAS or a lower-level language.
> 
> Ron

Well, I'm basically giving up. There is not a "simple" way to do this. So we'll 
just stay with Tech Services maintaining a table. In the old product, it was a 
"registration database". With SMS, it will be a set of FILTLIST statements in 
the DATACLAS ACS routine. I'm not too keen on any of this. But I just checked. 
Our largest VSAM file is spread onto 30 volumes. It is getting over 2:1 
compression. So it would blow the 59 volume limit unless we change to something 
other than 3390-3 volumes. 

--
John McKown
Systems Engineer IV, HealthMarkets

 



Re: RES: using SMS compression - how to manage

2011-04-28 Thread Ron Hawkins
John,

I didn't think &MAXSIZE took multivolume into account. Isn't it just primary
+ (15 * secondary)?

I've often thought that compression products should come with a sampling
utility to read one CYL of a dataset and provide a compression report. This
could be used to find the best compression candidates.

If you're willing to write out that sample CYL one could probably write
something to do this with REXX, SAS or a lower-level language.
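
For what it's worth, that sampling idea as a Python sketch - zlib standing in 
for CMPSC/SMS compression, so the ratio is only a rough screen, and the path 
is a hypothetical flat copy of the dataset:

import zlib

def estimate_ratio(path, sample_bytes=849960):
    # Read roughly one 3390 cylinder's worth of data
    # (15 tracks x 56,664 bytes) and see what zlib achieves.
    with open(path, "rb") as f:
        sample = f.read(sample_bytes)
    if not sample:
        return 1.0
    return len(sample) / len(zlib.compress(sample))

# e.g. estimate_ratio("/u/ron/sample.copy")  # hypothetical path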

Ron

> to maintain. Is there an easier way which does not require that the individual
> DEFINE decks specify the DCEXTC data class? I cannot depend on the programmers
> to do this correctly. And they would rise up in arms if I tried, anyway. Also,



RES: using SMS compression - how to manage

2011-04-28 Thread ITURIEL DO NASCIMENTO NETO
John,

If they really Love Windows, make it similar.
Create a Filtlist with only one entry like '*.**.COMPRESS'.
Only datasets that match this filter will be compressed...
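
The effect of that single entry, sketched in Python (a simplified reading of 
the SMS wildcards - '**' spans qualifiers, '*' matches one - and not a full 
ACS FILTLIST implementation):

import re

def dsn_matches(dsn, filt):
    # Only names fitting the pattern get the compressing data class;
    # here the last qualifier must be COMPRESS.
    pattern = re.escape(filt)
    pattern = pattern.replace(r"\*\*", r"[A-Z0-9.@#$]+")  # any qualifiers
    pattern = pattern.replace(r"\*", r"[A-Z0-9@#$]+")     # one qualifier
    return re.fullmatch(pattern, dsn) is not None

print(dsn_matches("PROD.PAYROLL.COMPRESS", "*.**.COMPRESS"))  # True
print(dsn_matches("PROD.PAYROLL.MASTER", "*.**.COMPRESS"))    # False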

Atenciosamente / Regards / Saludos

Ituriel do Nascimento Neto
BANCO BRADESCO S.A.
4254 / DPCD Engenharia de Software
Sistemas Operacionais Mainframes
Tel: +55 11 4197-2021 R: 22021
Fax: +55 11 4197-2814

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
McKown, John
Sent: Wednesday, April 27, 2011 3:14 PM
To: IBM-MAIN@bama.ua.edu
Subject: using SMS compression - how to manage

We are replacing BMC's Data Accelerator compression with SMS compression. I 
have a DATACLAS (named DCEXTC) created which implements this. The DATACLAS 
works. At present, Data Accelerator works by having a list of dataset names and 
patterns which are used to determine if (and how) a dataset is to be 
compressed. The only way that I have found to duplicate this is by using a 
FILTLIST (actually a number of them) to list the DSNs and patterns to be 
assigned the DCEXTC DATACLAS. I consider this to be very clumsy and difficult 
to maintain. Is there an easier way which does not require that the individual 
DEFINE decks specify the DCEXTC data class? I cannot depend on the programmers 
to do this correctly. And they would rise up in arms if I tried, anyway. Also, 
trying to use the &MAXSIZE is likely to be a failure due to the way that our 
previous storage person set up SMS. Every dataclas has a DVC count of 59. And 
the storage admin, under pressure from the programming management, just 
gave the programmers some vague sizing guidelines. Basically, the DVC count 
is used the way that StopX37 was used in the past. To make sizing datasets 
unnecessary. After all, they don't need to size their files on Windows or UNIX, 
so why do they need to on z/OS? Just more proof that z/OS is obsolete. OOPS, 
I'm whining again.

John McKown
Systems Engineer IV, HealthMarkets






Re: using SMS compression - how to manage

2011-04-27 Thread Walt Farrell
Well, it might be as much work to manage, and so might not be what you want,
but you could make use of the RACF DFP segments that everyone seems to
ignore, which were designed originally to eliminate the need for programming
ACS exit routines and things like FILTLISTs.

The DFP segment for the generic profile that will protect a new data set
specifies a RESOWNER, which may be a user or group. Or, in the absence of a
DFP segment, the high-level qualifier of the data set profile provides the
RESOWNER value.

The RESOWNER (a user or group) also has a DFP segment, which specifies the
default management class, storage class, and data class for all new data
sets created on behalf of that RESOWNER. 

The defaults can be overridden by JCL or IDCAMS constructs, or the ACS
routines if you have them, but they could easily assign appropriate default
values without the need to keep updating ACS routines or FILTLISTs if you
have a data set naming convention that is amenable to this use. You might
need to assign a few dummy user IDs or group names to hold DFP information,
but then you can simply set the DFP segment for an appropriate DATASET
profile to the proper RESOWNER, and get defaults for anything new protected
by that profile, and you're using RACF's generics rather than coding FILTLISTs.
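
The resolution order, restated as a sketch (the dictionaries are hypothetical 
stand-ins for RACF profiles and DFP segments, not RACF's API):

def default_dataclas(dsn, covering_profile, dfp_segments):
    # 1. Take RESOWNER from the covering profile's DFP segment,
    #    falling back to the dataset's high-level qualifier.
    # 2. Read that owner's DFP segment for the default data class.
    resowner = (dfp_segments.get(covering_profile, {}).get("RESOWNER")
                or dsn.split(".")[0])
    return dfp_segments.get(resowner, {}).get("DATACLAS")

dfp = {"PROD.**": {"RESOWNER": "PRODGRP"},
       "PRODGRP": {"DATACLAS": "DCEXTC"}}
print(default_dataclas("PROD.PAYROLL.MASTER", "PROD.**", dfp))  # DCEXTC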

(For some reason, it always seemed that the first (and possibly later)
generations of DFSMS educators seemed to be more RACF-phobic than the DFSMS
designers were, and so they seemed to actively discourage usage of these
functions, instead preferring that storage management folks become
programmers. Of course, it may be that they simply didn't want to have to
understand RACF, and felt that storage administrators shouldn't have to,
either. Thus my comment above that everyone seems to ignore these functions.
But especially if you're in a shop where you do both RACF and storage
functions, or if you're in a shop where you actually talk to your colleagues
and have good working relationships with them, the use of DFP segments might
help you.)

-- 
Walt Farrell
IBM STSM, z/OS Security Design



Re: using SMS compression - how to manage

2011-04-27 Thread Greg Shirey
From z/OS V1R9.0 DFSMS Storage Administration Reference: 

"For a VSAM data set definition, the &SIZE and &MAXSIZE read-only variables 
reflect the space value specified in the CLUSTER component. If one is not 
specified in the CLUSTER component, then the space value specified in the DATA 
component is used. If a space value also is specified for the INDEX component 
and it is of the same type of space unit; for example, both are in tracks, 
cylinders, KB or MB, it is added to what was specified for the DATA component."
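
Restated as code - my reading of the quoted paragraph, with units left 
implicit:

def vsam_acs_size(cluster=None, data=None, index=None, same_units=False):
    # CLUSTER space wins if specified; otherwise use the DATA component's,
    # adding the INDEX component's only when it is stated in the same
    # unit (tracks, cylinders, KB or MB).
    if cluster is not None:
        return cluster
    size = data or 0
    if index is not None and same_units:
        size += index
    return size

print(vsam_acs_size(data=100, index=10, same_units=True))  # 110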

DVC doesn't seem to enter into it (unless it's changed since 1.9).  

You're right about CICS not supporting extended format ESDS, at least at one 
time - a newer release may have lifted that restriction.  

Greg Shirey
Ben E. Keith Co. 


-Original Message-
From: IBM Mainframe Discussion List On Behalf Of McKown, John
Sent: Wednesday, April 27, 2011 2:06 PM

Extended is the default for all non-ESDS VSAM files. I can't remember what it 
was, but we had some problem with ESDS files with Extended Addressing (perhaps 
in CICS). I may try to reduce the number of files in the FILTLIST. I was trying 
to duplicate the current compression environment, but that is likely overkill. 
Of course, if a file expands "for no reason", people will complain. And the 
DASD allocation report will show a sudden spike. Which will cause "weeping and 
wailing and gnashing of teeth" in management.
 



Re: using SMS compression - how to manage

2011-04-27 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Gibney, Dave
> Sent: Wednesday, April 27, 2011 1:59 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: using SMS compression - how to manage
> 
> Tell them to stop spending zCycles on compression. :) Simplify the
> FILTLIST to just the one VSAM file.
> 
> Or, do as I did. Extended, striped, compressed is the default. With a
> DATACLAS=NOEXTEND for the few cases that can't handle it.
> 
> Dave Gibney

Extended is the default for all non-ESDS VSAM files. I can't remember what it 
was, but we had some problem with ESDS files with Extended Addressing (perhaps 
in CICS). I may try to reduce the number of files in the FILTLIST. I was trying 
to duplicate the current compression environment, but that is likely overkill. 
Of course, if a file expands "for no reason", people will complain. And the 
DASD allocation report will show a sudden spike. Which will cause "weeping and 
wailing and gnashing of teeth" in management.

--
John McKown
Systems Engineer IV, HealthMarkets

 



Re: using SMS compression - how to manage

2011-04-27 Thread Gibney, Dave
Tell them to stop spending zCycles on compression. :) Simplify the
FILTLIST to just the one VSAM file.

Or, do as I did. Extended, striped, compressed is the default. With a
DATACLAS=NOEXTEND for the few cases that can't handle it.

Dave Gibney
Information Technology Services
Washington State University


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
> Behalf Of McKown, John
> Sent: Wednesday, April 27, 2011 11:52 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: using SMS compression - how to manage
> 
> Well, I guess it is not "more difficult" than maintaining the list. Except that
> there is a limit to the number of entries in a single FILTLIST. To duplicate
> the current list requires 3 FILTLISTs and a statement like:
> 
> WHEN (&DSN EQ &F1 OR
>   &DSN EQ &F2 OR
>   &DSN EQ &F3) SET DATACLAS='DCEXTC'
> 
> If F3 FILTLIST gets too long, I'll need to create F4 and update the WHEN. I
> just don't like it. I would like something better. If, as you say, DVC does
> not influence &MAXSIZE, then I might end up not assigning DCEXTC when I really
> should. I would really like to eliminate all compression. But we actually do
> have one VSAM KSDS which, uncompressed, might get very close to the limit of
> 59 volumes (of 3390-3 space). We do not have any RDBMS on z/OS and won't get
> one. Management is doing their best to find a justification to eliminate z/OS
> and convert to a 100% Windows shop (they want to convert all Linux, AIX, and
> Solaris servers to Windows too). They won't do __anything__ to make z/OS
> better, unless it also reduces the cost to run z/OS. Wish I could find a new
> vocation. Like professional Tiddley-Winks player (except that I have
> arthritis) or "mattress tester".
> 
> --
> John McKown
> Systems Engineer IV, HealthMarkets
> 
> 
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
> > Behalf Of Gibney, Dave
> > Sent: Wednesday, April 27, 2011 1:39 PM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: using SMS compression - how to manage
> >
> > How is a FILTLIST substantially different/harder to maintain than a
> > "list of dataset names and patterns"?
> >
> > I don't remember DVC being part of &MAXSIZE when I did this. I also
> > found that setting DVC to a more reasonable number like 4 or 8 works
> > just as well as 59 without the serious impacts 59 can have elsewhere.
> >
> > Dave Gibney
> > Information Technology Services
> > Washington State University
> >
> >
> > > -Original Message-
> > > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
> > > Behalf Of McKown, John
> > > Sent: Wednesday, April 27, 2011 11:14 AM
> > > To: IBM-MAIN@bama.ua.edu
> > > Subject: using SMS compression - how to manage
> > >
> > > We are replacing BMC's Data Accelerator compression with SMS
> > > compression. I have a DATACLAS (named DCEXTC) created which implements
> > > this. The DATACLAS works. At present, Data Accelerator works by having
> > > a list of dataset names and patterns which are used to determine if
> > > (and how) a dataset is to be compressed. The only way that I have
> > > found to duplicate this is by using a FILTLIST (actually a number of
> > > them) to list the DSNs and patterns to be assigned the DCEXTC
> > > DATACLAS. I consider this to be very clumsy and difficult to maintain.
> > > Is there an easier way which does not require that the individual
> > > DEFINE decks specify the DCEXTC data class? I cannot depend on the
> > > programmers to do this correctly. And they would rise up in arms if I
> > > tried, anyway. Also, t

Re: using SMS compression - how to manage

2011-04-27 Thread McKown, John
Well, I guess it is not "more difficult" than maintaining the list. Except that 
there is a limit to the number of entries in a single FILTLIST. To duplicate 
the current list requires 3 FILTLISTs and a statement like:

WHEN (&DSN EQ &F1 OR
  &DSN EQ &F2 OR
  &DSN EQ &F3) SET DATACLAS='DCEXTC'

If F3 FILTLIST gets too long, I'll need to create F4 and update the WHEN. I just 
don't like it. I would like something better. If, as you say, DVC does not 
influence &MAXSIZE, then I might end up not assigning DCEXTC when I really 
should. I would really like to eliminate all compression. But we actually do 
have one VSAM KSDS which, uncompressed, might get very close to the limit of 59 
volumes (of 3390-3 space). We do not have any RDBMS on z/OS and won't get one. 
Management is doing their best to find a justification to eliminate z/OS and 
convert to a 100% Windows shop (they want to convert all Linux, AIX, and 
Solaris servers to Windows too). They won't do __anything__ to make z/OS 
better, unless it also reduces the cost to run z/OS. Wish I could find a new 
vocation. Like professional Tiddley-Winks player (except that I have 
arthritis) or "mattress tester". 

--
John McKown
Systems Engineer IV, HealthMarkets

 

> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Gibney, Dave
> Sent: Wednesday, April 27, 2011 1:39 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: using SMS compression - how to manage
> 
> How is a FILTLIST substantially different/harder to maintain than a "
> list of dataset names and patterns"?
> 
> I don't remember DVC being part of &MAXSIZE when I did this. I also
> found that setting DVC to a more reasonable number like 4 or 8 works
> just as well as 59 without the serious impacts 59 can have elsewhere.
> 
> Dave Gibney
> Information Technology Services
> Washington State University
> 
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
> > Behalf Of McKown, John
> > Sent: Wednesday, April 27, 2011 11:14 AM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: using SMS compression - how to manage
> > 
> > We are replacing BMC's Data Accelerator compression with SMS
> > compression. I have a DATACLAS (named DCEXTC) created which implements
> > this. The DATACLAS works. At present, Data Accelerator works by having
> > a list of dataset names and patterns which are used to determine if
> > (and how) a dataset is to be compressed. The only way that I have found
> > to duplicate this is by using a FILTLIST (actually a number of them) to
> > list the DSNs and patterns to be assigned the DCEXTC DATACLAS. I
> > consider this to be very clumsy and difficult to maintain. Is there an
> > easier way which does not require that the individual DEFINE decks
> > specify the DCEXTC data class? I cannot depend on the programmers to do
> > this correctly. And they would rise up in arms if I tried, anyway.
> > Also, trying to use the &MAXSIZE is likely to be a failure due to the
> > way that our previous storage person set up SMS. Every dataclas has a
> > DVC count of 59. And the storage admin, under pressure from the
> > programming management, just gave the programmers some vague sizing
> > guidelines. Basically, the DVC count is used the way that StopX37 was
> > used in the past. To make sizing datasets unnecessary. After all, they
> > don't need to size their files on Windows or UNIX, so why do they need
> > to on z/OS? Just more proof that z/OS is obsolete. OOPS, I'm whining
> > again.
> > 
> > John McKown
> > Systems Engineer IV
> > IT
> > 
> > Administrative Services Group
> > 
> > HealthMarkets(r)
> > 
> > 9151 Boulevard 26 * N. Richland Hills * TX 76010
> > (817) 255-3225 phone *
> > john.mck...@healthmarkets.com * www.HealthMarkets.com
&

Re: using SMS compression - how to manage

2011-04-27 Thread Gibney, Dave
How is a FILTLIST substantially different/harder to maintain than a "
list of dataset names and patterns"?

I don't remember DVC being part of &MAXSIZE when I did this. I also
found that setting DVC to a more reasonable number like 4 or 8 works
just as well as 59 without the serious impacts 59 can have elsewhere.

Dave Gibney
Information Technology Services
Washington State University


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
> Behalf Of McKown, John
> Sent: Wednesday, April 27, 2011 11:14 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: using SMS compression - how to manage
> 
> We are replacing BMC's Data Accelerator compression with SMS
> compression. I have a DATACLAS (named DCEXTC) created which implements
> this. The DATACLAS works. At present, Data Accelerator works by having
> a list of dataset names and patterns which are used to determine if
> (and how) a dataset is to be compressed. The only way that I have found
> to duplicate this is by using a FILTLIST (actually a number of them) to
> list the DSNs and patterns to be assigned the DCEXTC DATACLAS. I
> consider this to be very clumsy and difficult to maintain. Is there an
> easier way which does not require that the individual DEFINE decks
> specify the DCEXTC data class? I cannot depend on the programmers to do
> this correctly. And they would rise up in arms if I tried, anyway.
> Also, trying to use the &MAXSIZE is likely to be a failure due to the
> way that our previous storage person set up SMS. Every dataclas has a
> DVC count of 59. And the storage admin, under pressure from the
> programming management, just gave the programmers some vague sizing
> guidelines. Basically, the DVC count is used the way that StopX37 was
> used in the past. To make sizing datasets unnecessary. After all, they
> don't need to size their files on Windows or UNIX, so why do they need
> to on z/OS? Just more proof that z/OS is obsolete. OOPS, I'm whining
> again.
> 
> John McKown
> Systems Engineer IV, HealthMarkets
> 
> 



using SMS compression - how to manage

2011-04-27 Thread McKown, John
We are replacing BMC's Data Accelerator compression with SMS compression. I 
have a DATACLAS (named DCEXTC) created which implements this. The DATACLAS 
works. At present, Data Accelerator works by having a list of dataset names and 
patterns which are used to determine if (and how) a dataset is to be 
compressed. The only way that I have found to duplicate this is by using a 
FILTLIST (actually a number of them) to list the DSNs and patterns to be 
assigned the DCEXTC DATACLAS. I consider this to be very clumsy and difficult 
to maintain. Is there an easier way which does not require that the individual 
DEFINE decks specify the DCEXTC data class? I cannot depend on the programmers 
to do this correctly. And they would rise up in arms if I tried, anyway. Also, 
trying to use the &MAXSIZE is likely to be a failure due to the way that our 
previous storage person set up SMS. Every dataclas has a DVC count of 59. And 
the storage admin, under pressure from the programming management, just gave 
the programmers some vague sizing guidelines. Basically, the DVC count is used 
the way that StopX37 was used in the past. To make sizing datasets unnecessary. 
After all, they don't need to size their files on Windows or UNIX, so why do 
they need to on z/OS? Just more proof that z/OS is obsolete. OOPS, I'm whining 
again.

John McKown
Systems Engineer IV, HealthMarkets




Re: Encryption, compression, etc.

2011-04-05 Thread Kirk Wolf
On Tue, Apr 5, 2011 at 10:20 AM, Hal Merritt  wrote:
> Certificate based TLS FTP is native to the z/OS platform. While certificates 
> are very secure, they do carry a pretty good learning curve. Any z/OS 
> hardware features installed on the box are exploited by default, I think.
>
> Typically encryption defeats compression.  It seems that you can have one or 
> the other but not both. I haven't looked, but z/OS FTP may compress before 
> encryption. (I think the compression occurs in the application layer and the 
> encryption occurs in the transport layer.)

IBM Ported Tools OpenSSH supports both compression and encryption at
the same time - it's part of the ssh2 RFCs.

>
> The z/OS client/server software is pretty stable and behaves very close to 
> the RFC's that govern such things. Not so much with Windows based 
> client/server software. Finding compatible software for the far end may be a 
> bit of a challenge.

Conformance to the FTP/S RFCs (FTP with TLS, RFC 2228, etc)  isn't
really the problem.   The real issue is that the architecture of FTP
IMO is crap and to get two implementations to talk together in the
context of firewalls and NAT routers can be quite complicated.   See
IBM's recent SHARE presentation if you don't believe me:
http://share.confex.com/share/116/webprogram/Session8239.html

>
> There is also the SSH option using the free ported tools. However, SSH is a 
> lot more difficult to automate. If your shop speaks fluent *NIX, this may be 
> an attractive option. Not so much in a crusty old MVS shop.

I agree that a *little* knowledge of *nix helps with OpenSSH on z/OS,
but fluency is not really required.
Lots of "crusty MVS shops" use our free Co:Z SFTP product along with
IBM Ported Tools OpenSSH successfully.  We support exits that are
compatible with IBM FTP and cut SMF 119 records that are compatible.
Several third party FTP automation products have found it easy to
support it along with IBM FTP, and we have customers that have been
able to use their existing FTP automation exits.

Many shops will use both FTP/S and OpenSSH on z/OS, but ssh/sftp is
becoming more dominate in open systems environments since it is
available by default on all Unix/Linux distros and its single-socket
architecture is superior.

Kirk Wolf
Dovetailed Technologies
http://dovetail.com



Re: Encryption, compression, etc.

2011-04-05 Thread Mark Jacobs
Encrypted data is usually thought to be non-compressible. If you want 
compression in addition to encryption, you'd compress first and then 
encrypt the compressed data file.
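
A quick demonstration in Python - zlib plus Fernet from the third-party 
'cryptography' package (assumed available), standing in for whatever 
compression and cipher are actually in play:

import zlib
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())
data = b"some highly repetitive record data " * 500

good = f.encrypt(zlib.compress(data))  # compress, then encrypt
bad = zlib.compress(f.encrypt(data))   # ciphertext won't compress

print(len(data), len(good), len(bad))
# the first order stays small; the second ends up near len(data) or larger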


Mark Jacobs

On 04/05/11 11:20, Hal Merritt wrote:

Certificate based TLS FTP is native to the z/OS platform. While certificates 
are very secure, they do carry a pretty good learning curve. Any z/OS hardware 
features installed on the box are exploited by default, I think.

Typically encryption defeats compression. It seems that you can have one or 
the other but not both. I haven't looked, but z/OS FTP may compress before 
encryption. (I think the compression occurs in the application layer and the 
encryption occurs in the transport layer.)

The z/OS client/server software is pretty stable and behaves very close to the 
RFC's that govern such things. Not so much with Windows based client/server 
software. Finding compatible software for the far end may be a bit of a 
challenge.

There is also the SSH option using the free ported tools. However, SSH is a lot 
more difficult to automate. If your shop speaks fluent *NIX, this may be an 
attractive option. Not so much in a crusty old MVS shop.

HTH and good luck.




-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
R.S.
Sent: Tuesday, April 05, 2011 8:31 AM
To: IBM-MAIN@bama.ua.edu
Subject: Encryption, compression, etc.

I'm looking for some solution for file exchange between z/OS and Windows/Linux 
platform.

The only requirement is to encrypt the file (PS dataset) on z/OS side and 
decrypt it on distributed side and vice versa.

Nice to have:
- hash calculation
- compression
- exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce cost of 
CPU)

Any clues and suggestions including both home-grown (DIY) solutions and 
commercial products are welcome.

--
Radoslaw Skorupka
Lodz, Poland


P.S. If one feels uncomfortable with "advertising" commercial products,
please write to me directly.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

   



--
Mark Jacobs
Time Customer Service
Tampa, FL


A schlemiel is a waiter who spills hot soup, and
the schlimazel is the one who gets it in his lap.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Encryption, compression, etc.

2011-04-05 Thread Staller, Allan

Is z/OS Encryption Facility different from ICSF ? A link to the app prog
guide here :
http://publib.boulder.ibm.com/infocenter/zos/v1r10/topic/com.ibm.zos.r10
.csfb400/toc.htm


YES!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Encryption, compression, etc.

2011-04-05 Thread Hal Merritt
Certificate based TLS FTP is native to the z/os platform. While certificates 
are very secure, they do carry a pretty good learning curve. Any z/os hardware 
features installed on the box are exploited by default, I think. 

Typically encryption defeats compression.  It seems that you can have one or 
the other but not both. I haven't looked, but z/os FTP may compress before 
encryption. (I think the compression occurs in the application layer and the 
encryption occurs in the transport layer.) 

The z/os client/server software is pretty stable and behaves very close to the 
RFC's that govern such things. Not so much with Windows based client/server 
software. Finding compatible software for the far end may be a bit of a 
challenge. 

There is also the SSH option using the free ported tools. However, SSH is a lot 
more difficult to automate. If your shop speaks fluent *NIX, this may be an 
attractive option. Not so much in a crusty old MVS shop. 

HTH and good luck.   


 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
R.S.
Sent: Tuesday, April 05, 2011 8:31 AM
To: IBM-MAIN@bama.ua.edu
Subject: Encryption, compression, etc.

I'm looking for some solution for file exchange between z/OS and Windows/Linux 
platform.

The only requirement is to encrypt the file (PS dataset) on z/OS side and 
decrypt it on distributed side and vice versa.

Nice to have:
- hash calculation
- compression
- exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce cost of 
CPU)

Any clues and suggestions including both home-grown (DIY) solutions and 
commercial products are welcome.

--
Radoslaw Skorupka
Lodz, Poland


P.S. If one feels uncomfortable with "advertising" commercial products, 
please write to me directly.
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Encryption, compression, etc.

2011-04-05 Thread Tony Harminc
2011/4/5 R.S. :
> I'm looking for some solution for file exchange between z/OS and
> Windows/Linux platform.
>
> The only requirement is to encrypt the file (PS dataset) on z/OS side and
> decrypt it on distributed side and vice versa.
>
> Nice to have:
> - hash calculation
> - compression
> - exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce cost of
> CPU)
>
> Any clues and suggestions including both home-grown (DIY) solutions and
> commercial products are welcome.

The company I used to work for (Proginet - acquired last year by
Tibco) has a comprehensive managed file transfer product that does all
of the above, and a lot more. The z/OS portion was written by
long-time mainframe people, so it's not some port of a UNIX or Windows
product.

I don't work for them, know nothing about pricing, and have no special
contacts there anymore. Certainly I don't get a cut for suggesting
it... But it was a good product last time I looked.

Tony H.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Encryption, compression, etc.

2011-04-05 Thread Nagesh S
Is z/OS Encryption Facility different from ICSF ? A link to the app prog
guide here :
http://publib.boulder.ibm.com/infocenter/zos/v1r10/topic/com.ibm.zos.r10.csfb400/toc.htm

N

2011/4/5 Kirk Wolf 

> Thanks for the kind plug John
>
> A few comments -
>
> - With IBM Ported Tools OpenSSH, you can put your SSH keys in a SAF /
> RACF Keyring, which is much better than having them in a file
> (regardless of the protection of that file).
>
> - Co:Z Launcher and Co:Z SFTP definitely work with Windows using the
> free OpenSSH server available through cygwin.
>
> - With our new "OpenSSH Accelerator for z/OS, you can exploit CPACF on
> z/OS for SSH encryption.  Also, with Co:Z Launcher you can disable
> encryption of data connections which is even faster/cheaper and a
> killer solution if the partner machines are on a secure network,
> hipersocket, zBX IEDN, etc.
>
>  (see: http://dovetail.com/webinars.html for slides and a recording
> of a recent webinar)
>
> Either Co:Z Launcher or Co:Z SFTP use z/OS OpenSSH - the choice
> depends on whether you want simple file transfer or more cooperative
> processing.
>
> For a quick comparison of z/OS SFTP with FTP/S that focuses on crypto
> hardware exploitation, see slide 14 in the webinar:
> http://dovetail.com/docs/oshxl/openssh-accelerator-webinar.pdf
>
> Kirk Wolf
> Dovetailed Technologies
> http://dovetail.com
>
>
> 2011/4/5 McKown, John :
> > Why encrypt and decrypt? Does it need to be on Linux in encrypted form?
> If not, and if it were me, I'd use Dovetailed Technologies' Co:Z dspipes
> utilities and simply transfer the files over an SSH tunnel. Using Co:Z, it
> is easy. And the product is free to download. It contains some Linux
> programs as well as z/OS programs. Go here:
> http://dovetail.com/products/dspipes.html
> >
> > What is nice is that Co:Z can transfer the data from/to z/OS over an SSH
> tunnel and do code conversion at the same time! And it does it to/from z/OS
> legacy datasets or z/OS UNIX files. Excellent product. Totally cost free!
> Support does cost. But they host a no cost support forum for informal
> support.
> >
> > Example JCL:
> >
> > //PROCLIB JCLLIB ORDER=SYS1.COZ.SAMPJCL
> > //EX1 EXEC PROC=COZPROC,
> > // ARGS='linux-user@linux-server'
> > //STDIN DD *
> > fromdsn '//DD:INPUT ' >linux.file
> > //INPUT DD DISP=SHR,DSN=MY.INPUT.PS.FILE
> > //
> >
> > Now one thing you may notice is that I didn't include any kind of
> password or passphrase. That's because on my z/OS system, I have the ssh key
> for the linux system user, and that ssh key does not have a passphrase (null
> passphrase). This is not the best idea, but I'm lazy. The documentation on
> Co:Z shows how to use an ssh key which has a passphrase.
> >
> > I know this doesn't answer your question. But I'm hoping that maybe it is
> a possible solution to your need - securely transferring data from z/OS to
> Linux. You also mentioned Windows. I think this will work if you install an
> SSH server on your Windows server. Perhaps Cygwin's would do - it is free
> for the download.
> >
> > --
> > John McKown
> > Systems Engineer IV
> > IT
> >
> > Administrative Services Group
> >
> > HealthMarkets®
> >
> > 9151 Boulevard 26 . N. Richland Hills . TX 76010
> > (817) 255-3225 phone .
> > john.mck...@healthmarkets.com . www.HealthMarkets.com
> >
> >
> >
> >
> >> -Original Message-
> >> From: IBM Mainframe Discussion List
> >> [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of R.S.
> >> Sent: Tuesday, April 05, 2011 8:31 AM
> >> To: IBM-MAIN@bama.ua.edu
> >> Subject: Encryption, compression, etc.
> >>
> >> I'm looking for some solution for file exchange between z/OS and
> >> Windows/Linux platform.
> >>
> >> The only requirement is to encrypt the file (PS dataset) on z/OS side
> >> and decrypt it on distributed side and vice versa.
> >>
> >> Nice to have:
> >> - hash calculation
> >> - compression

Re: Encryption, compression, etc.

2011-04-05 Thread Kirk Wolf
Thanks for the kind plug John

A few comments -

- With IBM Ported Tools OpenSSH, you can put your SSH keys in a SAF /
RACF Keyring, which is much better than having them in a file
(regardless of the protection of that file).

- Co:Z Launcher and Co:Z SFTP definitely work with Windows using the
free OpenSSH server available through cygwin.

- With our new "OpenSSH Accelerator for z/OS, you can exploit CPACF on
z/OS for SSH encryption.  Also, with Co:Z Launcher you can disable
encryption of data connections which is even faster/cheaper and a
killer solution if the partner machines are on a secure network,
hipersocket, zBX IEDN, etc.

  (see: http://dovetail.com/webinars.html for slides and a recording
of a recent webinar)

Either Co:Z Launcher or Co:Z SFTP use z/OS OpenSSH - the choice
depends on whether you want simple file transfer or more cooperative
processing.

For a quick comparison of z/OS SFTP with FTP/S that focuses on crypto
hardware exploitation, see slide 14 in the webinar:
http://dovetail.com/docs/oshxl/openssh-accelerator-webinar.pdf

Kirk Wolf
Dovetailed Technologies
http://dovetail.com


2011/4/5 McKown, John :
> Why encrypt and decrypt? Does it need to be on Linux in encrypted form? If 
> not, and if it were me, I'd use Dovetailed Technologies' Co:Z dspipes 
> utilities and simply transfer the files over an SSH tunnel. Using Co:Z, it is 
> easy. And the product is free to download. It contains some Linux programs as 
> well as z/OS programs. Go here: http://dovetail.com/products/dspipes.html
>
> What is nice is that Co:Z can transfer the data from/to z/OS over an SSH 
> tunnel and do code conversion at the same time! And it does it to/from z/OS 
> legacy datasets or z/OS UNIX files. Excellent product. Totally cost free! 
> Support does cost. But they host a no cost support forum for informal support.
>
> Example JCL:
>
> //PROCLIB JCLLIB ORDER=SYS1.COZ.SAMPJCL
> //EX1 EXEC PROC=COZPROC,
> // ARGS='linux-user@linux-server'
> //STDIN DD *
> fromdsn '//DD:INPUT ' >linux.file
> //INPUT DD DISP=SHR,DSN=MY.INPUT.PS.FILE
> //
>
> Now one thing you may notice is that I didn't include any kind of password 
> or passphrase. That's because on my z/OS system, I have the ssh key for the 
> linux system user, and that ssh key does not have a passphrase (null 
> passphrase). This is not the best idea, but I'm lazy. The documentation on 
> Co:Z shows how to use an ssh key which has a passphrase.
>
> I know this doesn't answer your question. But I'm hoping that maybe it is a 
> possible solution to your need - securely transferring data from z/OS to 
> Linux. You also mentioned Windows. I think this will work if you install an 
> SSH server on your Windows server. Perhaps Cygwin's would do - it is free for 
> the download.
>
> --
> John McKown
> Systems Engineer IV
> IT
>
> Administrative Services Group
>
> HealthMarkets®
>
> 9151 Boulevard 26 . N. Richland Hills . TX 76010
> (817) 255-3225 phone .
> john.mck...@healthmarkets.com . www.HealthMarkets.com
>
>
>
>
>> -Original Message-
>> From: IBM Mainframe Discussion List
>> [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of R.S.
>> Sent: Tuesday, April 05, 2011 8:31 AM
>> To: IBM-MAIN@bama.ua.edu
>> Subject: Encryption, compression, etc.
>>
>> I'm looking for some solution for file exchange between z/OS and
>> Windows/Linux platform.
>>
>> The only requirement is to encrypt the file (PS dataset) on z/OS side
>> and decrypt it on distributed side and vice versa.
>>
>> Nice to have:
>> - hash calculation
>> - compression
>> - exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce
>> cost of CPU)
>>
>> Any clues and suggestions including both home-grown (DIY)
>> solutions and
>> commercial products are welcome.
>>
>> --
>> Radoslaw Skorupka
>> Lodz, Poland
>>
>>
>> P.S. If one feels uncomfortable with "advertising" commercial
>> products,
>> please write to me directly.
>>
>>

Re: Encryption, compression, etc.

2011-04-05 Thread McKown, John
Why encrypt and decrypt? Does it need to be on Linux in encrypted form? If not, 
and if it were me, I'd use Dovetailed Technologies' Co:Z dspipes utilities and 
simply transfer the files over an SSH tunnel. Using Co:Z, it is easy. And the 
product is free to download. It contains some Linux programs as well as z/OS 
programs. Go here: http://dovetail.com/products/dspipes.html

What is nice is that Co:Z can transfer the data from/to z/OS over an SSH tunnel 
and do code conversion at the same time! And it does it to/from z/OS legacy 
datasets or z/OS UNIX files. Excellent product. Totally cost free! Support does 
cost. But they host a no cost support forum for informal support.

Example JCL:

//PROCLIB JCLLIB ORDER=SYS1.COZ.SAMPJCL
//EX1 EXEC PROC=COZPROC,
// ARGS='linux-user@linux-server'
//STDIN DD *
fromdsn '//DD:INPUT ' >linux.file
//INPUT DD DISP=SHR,DSN=MY.INPUT.PS.FILE
//
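
For the reverse direction, the Co:Z dataset pipes also include a todsn
command; a hedged sketch along the same lines (the DD name, dataset name
and redirection are illustrative, not taken from the product manual):

//PROCLIB JCLLIB ORDER=SYS1.COZ.SAMPJCL
//EX2 EXEC PROC=COZPROC,
// ARGS='linux-user@linux-server'
//STDIN DD *
todsn '//DD:OUTPUT' <linux.file
//OUTPUT DD DISP=SHR,DSN=MY.OUTPUT.PS.FILE
//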

Now one thing you may notice is that I didn't include any kind of password or 
passphrase. That's because on my z/OS system, I have the ssh key for the linux 
system user, and that ssh key does not have a passphrase (null passphrase). 
This is not the best idea, but I'm lazy. The documentation on Co:Z shows how to 
use an ssh key which has a passphrase.

I know this doesn't answer your question. But I'm hoping that maybe it is a 
possible solution to your need - securely transferring data from z/OS to Linux. 
You also mentioned Windows. I think this will work if you install an SSH server 
on your Windows server. Perhaps Cygwin's would do - it is free for the download.

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets®

9151 Boulevard 26 . N. Richland Hills . TX 76010
(817) 255-3225 phone . 
john.mck...@healthmarkets.com . www.HealthMarkets.com


 

> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of R.S.
> Sent: Tuesday, April 05, 2011 8:31 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Encryption, compression, etc.
> 
> I'm looking for some solution for file exchange between z/OS and 
> Windows/Linux platform.
> 
> The only requirement is to encrypt the file (PS dataset) on z/OS side 
> and decrypt it on distributed side and vice versa.
> 
> Nice to have:
> - hash calculation
> - compression
> - exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce 
> cost of CPU)
> 
> Any clues and suggestions including both home-grown (DIY) 
> solutions and 
> commercial products are welcome.
> 
> -- 
> Radoslaw Skorupka
> Lodz, Poland
> 
> 
> P.S. If one feels uncomfortable with "advertising" commercial 
> products, 
> please write to me directly.
> 
> 

Re: Encryption, compression, etc.

2011-04-05 Thread Jóhannes Magnússon
z/OS Encryption facility might be just the right thing for you.
It is based on OpenPGP and can utilize the Crypto coprocessor.

http://www-03.ibm.com/systems/z/os/zos/encryption_facility/
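
Since it speaks OpenPGP, the distributed side should be able to use stock
GnuPG for its end - a minimal sketch (key IDs and file names are
placeholders; interoperability should of course be verified):

  # encrypt on Linux for a key whose private half lives on z/OS
  gpg --recipient zos-key-id --output payload.pgp --encrypt payload.dat

  # decrypt a file produced by the z/OS side
  gpg --output payload.dat --decrypt payload.pgp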

Cheers, Johannes

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
> Behalf Of R.S.
> Sent: 5. apríl 2011 13:31
> To: IBM-MAIN@bama.ua.edu
> Subject: Encryption, compression, etc.
> 
> I'm looking for some solution for file exchange between z/OS and
> Windows/Linux platform.
> 
> The only requirement is to encrypt the file (PS dataset) on z/OS side
> and decrypt it on distributed side and vice versa.
> 
> Nice to have:
> - hash calculation
> - compression
> - exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce
> cost of CPU)
> 
> Any clues and suggestions including both home-grown (DIY) solutions and
> commercial products are welcome.
> 
> --
> Radoslaw Skorupka
> Lodz, Poland
> 
> 
> P.S. If one feels uncomfortable with "advertising" commercial products,
> please write to me directly.
> 
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Encryption, compression, etc.

2011-04-05 Thread Mark Jacobs

On 04/05/11 09:31, R.S. wrote:
I'm looking for some solution for file exchange between z/OS and 
Windows/Linux platform.


The only requirement is to encrypt the file (PS dataset) on z/OS side 
and decrypt it on distributed side and vice versa.


Nice to have:
- hash calculation
- compression
- exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce 
cost of CPU)


Any clues and suggestions including both home-grown (DIY) solutions 
and commercial products are welcome.




I have a home grown application that uses ICSF services to encrypt an 
entire file with a secure 3DES key that I'd be willing to share.


The problem is going to be the key exchange with the target servers, 
since if the shared encryption key gets compromised your data can 
easily be decrypted.

--

Mark Jacobs
Time Customer Service
Tampa, FL


A schlemiel is a waiter who spills hot soup, and
the schlimazel is the one who gets it in his lap.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Encryption, compression, etc.

2011-04-05 Thread Staller, Allan
z/OS Encryption Facility. Should be distributed with z/OS 1.9 and above.
I believe this is zIIP enabled.
FTPS or SFTP (can never remember which is which). Both should be
available with z/OS Ported Tools.
AT-TLS feature of z/OS Comm Server. (I believe this is zIIP enabled.)

HTH,


I'm looking for some solution for file exchange between z/OS and 
Windows/Linux platform.

The only requirement is to encrypt the file (PS dataset) on z/OS side 
and decrypt it on distributed side and vice versa.

Nice to have:
- hash calculation
- compression
- exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce 
cost of CPU)


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Encryption, compression, etc.

2011-04-05 Thread Chase, John
> -Original Message-
> From: IBM Mainframe Discussion List On Behalf Of R.S.
> 
> I'm looking for some solution for file exchange between z/OS and
> Windows/Linux platform.
> 
> The only requirement is to encrypt the file (PS dataset) on z/OS side
> and decrypt it on distributed side and vice versa.
> 
> Nice to have:
> - hash calculation
> - compression
> - exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce
> cost of CPU)
> 
> Any clues and suggestions including both home-grown (DIY) solutions
and
> commercial products are welcome.

Why isn't FTP over SSL desirable?

-jc-

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Encryption, compression, etc.

2011-04-05 Thread R.S.
I'm looking for some solution for file exchange between z/OS and 
Windows/Linux platform.


The only requirement is to encrypt the file (PS dataset) on z/OS side 
and decrypt it on distributed side and vice versa.


Nice to have:
- hash calculation
- compression
- exploitation of CPACF or CryptoExpress or zIIP hardware (to reduce 
cost of CPU)


Any clues and suggestions including both home-grown (DIY) solutions and 
commercial products are welcome.


--
Radoslaw Skorupka
Lodz, Poland


P.S. If one feels uncomfortable with "advertising" commercial products, 
please write to me directly.





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-08 Thread Hal Merritt
I think your test was too small. I did not see any meaningful differences among 
your results. I'd go for test data of at least 100x in size. 

 

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Yifat Oren
Sent: Wednesday, December 08, 2010 12:22 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Hardware-assisted compression: not CPU-efficient?

Pardon my bringing back an old thread, but -

I wanted to see how much better the COMPRESS option is than HWCOMPRESS
with regard to CPU time, and was pretty surprised when my results suggested
that HWCOMPRESS is consistently more efficient (both CPU and channel
utilization -wise) than COMPRESS:

DFDSS DUMP with OPT(4) of a VSAM basic format to disk (basic format):

STEPNAME         PROCSTEP  RC    EXCP    CONN  TCB  SRB  CLOCK
DUMP-HWCOMPRESS            00   14514   93575  .25  .07    2.3   output was 958 cyls.
DUMP-COMPRESS              00   14819   92326  .53  .07    2.5   output was 978 cyls.
DUMP-NOCOMP                00   15283    103K  .13  .08    2.4   output was 1,017 cyls.

DFDSS DUMP with OPT(4) of a PS basic format to disk (basic format):

STEPNAME         PROCSTEP  RC    EXCP    CONN  TCB  SRB  CLOCK
DUMP-HWCOMPRESS            00   13317    154K  .44  .19    6.2   output was 877 cyls.
DUMP-COMPRESS              00   14692    157K  .68  .19    5.1   output was 969 cyls.
DUMP-NOCOMP                00   35827    238K  .14  .21    7.9   output was 2,363 cyls.

Running on a 2098-I04. DFSMSDSS V1R09.0.

So, how come I get different results than the original poster?
The test data was database-type data sets.

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Andrew N Wilt
Sent: Friday, December 03, 2010 1:45 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Hardware-assisted compression: not CPU-efficient?

Ron,
Thank you for the good response. It is true that the DFSMSdss
COMPRESS keyword and HWCOMPRESS keyword do not perform the same types of
compression. Like Ron said, the COMPRESS keyword is using a Huffman encoding
technique, and works amazingly well for repeated bytes (just the types of things
you see on system volumes). The HWCOMPRESS keyword utilizes a dictionary
based method, and works well, supposedly, on customer type data.
The CPU utilization of the HWCOMPRESS (dictionary based) is indeed larger
due to what it is doing. So you should choose the type of compression that
suits your CPU utilization needs and data type.
It was mentioned elsewhere in this thread about using the Tape
Hardware compaction. If you have it available, that's what I would go for.
The main intent of the HWCOMPRESS keyword was to provide the dictionary
based compression for the cases where you were using the software
encryption, and thus couldn't utilize the compaction of the tape device.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
NOTICE: This electronic mail message and any files transmitted with it are 
intended
exclusively for the individual or entity to which it is addressed. The message, 
together with any attachment, may contain confidential and/or privileged 
information.
Any unauthorized review, use, printing, saving, copying, disclosure or 
distribution 
is strictly prohibited. If you have received this message in error, please 
immediately advise the sender by reply email and delete all copies.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-08 Thread Yifat Oren
Pardon my bringing back an old thread, but -

I wanted to see how much better the COMPRESS option is than HWCOMPRESS
with regard to CPU time, and was pretty surprised when my results suggested
that HWCOMPRESS is consistently more efficient (both CPU and channel
utilization -wise) than COMPRESS:

DFDSS DUMP with OPT(4) of a VSAM basic format to disk (basic format):

STEPNAME         PROCSTEP  RC    EXCP    CONN  TCB  SRB  CLOCK
DUMP-HWCOMPRESS            00   14514   93575  .25  .07    2.3   output was 958 cyls.
DUMP-COMPRESS              00   14819   92326  .53  .07    2.5   output was 978 cyls.
DUMP-NOCOMP                00   15283    103K  .13  .08    2.4   output was 1,017 cyls.

DFDSS DUMP with OPT(4) of a PS basic format to disk (basic format):

STEPNAME         PROCSTEP  RC    EXCP    CONN  TCB  SRB  CLOCK
DUMP-HWCOMPRESS            00   13317    154K  .44  .19    6.2   output was 877 cyls.
DUMP-COMPRESS              00   14692    157K  .68  .19    5.1   output was 969 cyls.
DUMP-NOCOMP                00   35827    238K  .14  .21    7.9   output was 2,363 cyls.

Running on a 2098-I04. DFSMSDSS V1R09.0.

So, how come I get different results than the original poster?
The test data was database-type data sets.

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Andrew N Wilt
Sent: Friday, December 03, 2010 1:45 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Hardware-assisted compression: not CPU-efficient?

Ron,
Thank you for the good response. It is true that the DFSMSdss
COMPRESS keyword and HWCOMPRESS keyword do not perform the same types of
compression. Like Ron said, the COMPRESS keyword is using a Huffman encoding
technique, and works amazingly well for repeated bytes (just the types of things
you see on system volumes). The HWCOMPRESS keyword utilizes a dictionary
based method, and works well, supposedly, on customer type data.
The CPU utilization of the HWCOMPRESS (dictionary based) is indeed larger
due to what it is doing. So you should choose the type of compression that
suits your CPU utilization needs and data type.
It was mentioned elsewhere in this thread about using the Tape
Hardware compaction. If you have it available, that's what I would go for.
The main intent of the HWCOMPRESS keyword was to provide the dictionary
based compression for the cases where you were using the software
encryption, and thus couldn't utilize the compaction of the tape device.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Tony,

Then the misunderstanding is that Compression Services as called by DFSMSdfp
and DFSMSdss with HWCOMPRESS uses an LZW compression scheme, while DFSMShsm
and DFSMSdss with the COMPRESS keyword use a Huffman technique.

The Asymmetric cost of HWCOMPRESS I was referring to, and that apparently
confused you, is the same differential for LZW that you mention below. I
suspected that you did not know the difference in encoding techniques, which
is why I pointed it out.

I'm aware of customers with Disaster Recovery schemes that rely on restores
from DFSMSdss back-ups, and spend the first 12-24 hours of the DR drill
restoring data from port and channel constrained cartridge drives.
HWCOMPRESS would relieve that situation and potentially speed up the restore
process by 50% or more without creating a CPU bottleneck that may occur with
COMPRESS.

My own findings a few years ago were that dumping to disk in the absence of
channel and hardware buffer saturation (i.e. disk output) using COMPRESS
runs slower than NOCOMPRESS. A decade and a half ago when I looked at
this with ASTEX trace and GTFPARS it looked like read/write overlap was
disabled when COMPRESS was used. I have not run any tests with HWCOMPRESS.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Tony Harminc
> Sent: Thursday, December 02, 2010 4:51 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] Hardware-assisted compression: not CPU-efficient?
> 
> On 2 December 2010 18:20, Ron Hawkins 
wrote:
> > Tony,
> >
> > You are surprised, and then you explain your surprise by agreeing with
me.
> > I'm confused.
> 
> Well now I'm confused; I'm not sure how I did what you say.
> 
> > I'm not sure if you realized that the Huffman encoding technique used by
> > the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a
> > symmetrical CPU cost for compression and decompression.
> 
> No, I didn't know anything about the compression methods triggered by
> these two keywords until this thread. But I do know to some extent how
> both Huffman and the LZW-style dictionary compression schemes work,
> and that there is a differential between encoding and decoding speed
> when an inherently adaptive scheme like LZW is used, vs a usually
> static Huffman scheme.
> 
> But I'm afraid I'm missing your point. You said  that the saving in
> hardware assisted compression is in decompression, and I took this to
> be a claim that hardware assisted decompression is somehow speeded up
> - when compared to a plain software implementation - relatively more
> than is compression, and I said that I doubt that that is the case.
> But if it is indeed the case under some circumstances, then I don't
> see why most shops would care in most cases.
> 
> > Finally, as I mentioned in another email, there may be intrinsic
Business
> > Continuance value in taking advantage of the asymmetric CPU cost to
speed up
> > local recovery of an application, or Disaster Recovery that is based on
> > DFSMSdss restores. An improvement in Recovery time may be worth the
> > increased cost of the backup.
> 
> It's certainly possible, but I think it is unlikely to be the common case.
> 
> Tony H.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tony Harminc
On 2 December 2010 18:20, Ron Hawkins  wrote:
> Tony,
>
> You are surprised, and then you explain your surprise by agreeing with me.
> I'm confused.

Well now I'm confused; I'm not sure how I did what you say.

> I'm not sure if you realized that the Huffman encoding technique used by
> the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a
> symmetrical CPU cost for compression and decompression.

No, I didn't know anything about the compression methods triggered by
these two keywords until this thread. But I do know to some extent how
both Huffman and the LZW-style dictionary compression schemes work,
and that there is a differential between encoding and decoding speed
when an inherently adaptive scheme like LZW is used, vs a usually
static Huffman scheme.
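
That encode/decode differential is easy to observe with everyday tools -
an illustrative sketch only (gzip's DEFLATE stands in for a
dictionary-style scheme; exact timings will vary by box):

  # build ~200 MB of repetitive, customer-style test data
  yes "record 000123 ACME WIDGETS LTD 2010-12-02" | head -c 200000000 > big

  # compressing is the expensive direction...
  time gzip -c big > big.gz

  # ...decompressing the same data is typically several times cheaper
  time gunzip -c big.gz > /dev/null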

But I'm afraid I'm missing your point. You said  that the saving in
hardware assisted compression is in decompression, and I took this to
be a claim that hardware assisted decompression is somehow speeded up
- when compared to a plain software implementation - relatively more
than is compression, and I said that I doubt that that is the case.
But if it is indeed the case under some circumstances, then I don't
see why most shops would care in most cases.

> Finally, as I mentioned in another email, there may be intrinsic Business
> Continuance value in taking advantage of the asymmetric CPU cost to speed up
> local recovery of an application, or Disaster Recovery that is based on
> DFSMSdss restores. An improvement in Recovery time may be worth the
> increased cost of the backup.

It's certainly possible, but I think it is unlikely to be the common case.

Tony H.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Andrew N Wilt
Ron,
Thank you for the good response. It is true that the DFSMSdss
COMPRESS keyword and HWCOMPRESS keyword do not perform the same types of
compression. Like Ron said, the COMPRESS keyword is using a Huffman
encoding technique, and works amazingly well for repeated bytes (just the types of
things you see on system volumes). The HWCOMPRESS keyword utilizes a
dictionary based method, and works well, supposedly, on customer type data.
The CPU utilization of the HWCOMPRESS (dictionary based) is indeed larger
due to what it is doing. So you should choose the type of compression that
suits your CPU utilization needs and data type.
It was mentioned elsewhere in this thread about using the Tape
Hardware compaction. If you have it available, that's what I would go for.
The main intent of the HWCOMPRESS keyword was to provide the dictionary
based compression for the cases where you were using the software
encryption, and thus couldn't utilize the compaction of the tape device.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List  wrote on 12/02/2010
04:20:15 PM:

> From:
>
> Ron Hawkins 
>
> To:
>
> IBM-MAIN@bama.ua.edu
>
> Date:
>
> 12/02/2010 04:21 PM
>
> Subject:
>
> Re: Hardware-assisted compression: not CPU-efficient?
>
> Tony,
>
> You are surprised, and then you explain your surprise by agreeing with
me.
> I'm confused.
>
> I'm not sure if you realized that the Huffman encoding technique used by
> the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a
> symmetrical CPU cost for compression and decompression.
>
> Finally, as I mentioned in another email, there may be intrinsic Business
> Continuance value in taking advantage of the asymmetric CPU cost to speed
up
> local recovery of an application, or Disaster Recovery that is based on
> DFSMSdss restores. An improvement in Recovery time may be worth the
> increased cost of the backup.
>
> Ron
>
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of
> > Tony Harminc
> > Sent: Thursday, December 02, 2010 9:09 AM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: [IBM-MAIN] Hardware-assisted compression: not
CPU-efficient?
> >
> > On 2 December 2010 05:53, Ron Hawkins 
> wrote:
> > > Johnny,
> > >
> > > The saving in hardware assisted compression is in decompression -
when
> you
> > read it. Look at what should be a much lower CPU cost to decompress the
> files
> > during restore and decide if the speed of restoring the data
concurrently
> is
> > worth the increase in CPU required to back it up in the first place.
> >
> > I am a little surprised at this. Certainly for most of the current
> > dynamic dictionary based algorithms (and many more as well),
> > decompression will always, except in pathological cases, be a good
> > deal faster than compression. This is intuitively obvious, since the
> > compression code must not only go through the mechanics of
> > transforming input data into the output codestream, but must do it
> > with some eye to actually compressing as best it can with the
> > knowledge available to it, rather than making things worse. The
> > decompression simply takes what it is given, and algorithmically
> > transforms it back with no choice.
> >
> > Whether a hardware assisted - which in this case means one using the
> > tree manipulation instructions - decompression is disproportionately
> > faster than a similar compression, I don't know, but I'd be surprised
> > if it's much different.
> >
> > But regardless, surely it is a strange claim that an installation
> > would use hardware assisted compression in order to make their
> > restores faster, particularly at the expense of their dumps. What
> > would be the business case for such a thing? How many installations do
> > restores on any kind of regular basis? How many have a need to have
> > them run even faster than they do naturally when compared to the
> > dumps?
> >
> > Tony H.
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> > Search the archives at http://bama.ua.edu/archives/ibm-main.html
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Ted,

I think that's why us DASD vendors invented Concurrent Copy, Snapshot,
Shadowimage, Timefinder and FlashCopy. The backup is done relatively
quickly, and copying the backup to tape can be completed outside the
Business Critical Path.

I'm not suggesting for a moment that everyone uses these products.

I did say that the increased cost and time for backup need to be evaluated
against any improvement in restoration time with hardware compression. Thank
you to all those who reinforced the need for this evaluation in their
response.

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Ted MacNEIL
> Sent: Thursday, December 02, 2010 1:14 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] Hardware-assisted compression: not CPU-efficient?
> 
> >opposed to back-up duration which is usually outside of
> any business critical path.
> 
> It shouldn't be, especially if back-ups have to complete before
sub-systems
> can come up.
> 
> If we ran out of window, we had senior IT management and business contacts
> decide which was more critical: back-up; availability.
> 
> Sometimes, like during the Christmas shopping season the decision was
> availability.
> 
> But, that was mortgaging the future.
> Recovering without back-ups during your peak season takes longer than
during
> 'normal' times.
> 
> We never had to run recovery when we made the decision, but I was glad I
> didn't have the responsibility to make the choice.
> 
> Back-ups are insurance premiums.
> If you pay and nothing happens, it's a business expense.
> If you don't pay and something happens, it may be a career event!
> 
> -
> Ted MacNEIL
> eamacn...@yahoo.ca
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Tony,

You are surprised, and then you explain your surprise by agreeing with me.
I'm confused.

I'm not sure if you realized that the Huffman encoding technique used by
the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a
symmetrical CPU cost for compression and decompression.

Finally, as I mentioned in another email, there may be intrinsic Business
Continuance value in taking advantage of the asymmetric CPU cost to speed up
local recovery of an application, or Disaster Recovery that is based on
DFSMSdss restores. An improvement in Recovery time may be worth the
increased cost of the backup.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Tony Harminc
> Sent: Thursday, December 02, 2010 9:09 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] Hardware-assisted compression: not CPU-efficient?
> 
> On 2 December 2010 05:53, Ron Hawkins 
wrote:
> > Johnny,
> >
> > The saving in hardware assisted compression is in decompression - when
you
> read it. Look at what should be a much lower CPU cost to decompress the
files
> during restore and decide if the speed of restoring the data concurrently
is
> worth the increase in CPU required to back it up in the first place.
> 
> I am a little surprised at this. Certainly for most of the current
> dynamic dictionary based algorithms (and many more as well),
> decompression will always, except in pathological cases, be a good
> deal faster than compression. This is intuitively obvious, since the
> compression code must not only go through the mechanics of
> transforming input data into the output codestream, but must do it
> with some eye to actually compressing as best it can with the
> knowledge available to it, rather than making things worse. The
> decompression simply takes what it is given, and algorithmically
> transforms it back with no choice.
> 
> Whether a hardware assisted - which in this case means one using the
> tree manipulation instructions - decompression is disproportionately
> faster than a similar compression, I don't know, but I'd be surprised
> if it's much different.
> 
> But regardless, surely it is a strange claim that an installation
> would use hardware assisted compression in order to make their
> restores faster, particularly at the expense of their dumps. What
> would be the business case for such a thing? How many installations do
> restores on any kind of regular basis? How many have a need to have
> them run even faster than they do naturally when compared to the
> dumps?
> 
> Tony H.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ted MacNEIL
>opposed to back-up duration which is usually outside of
any business critical path. 

It shouldn't be, especially if back-ups have to complete before sub-systems can 
come up.

If we ran out of window, we had senior IT management and business contacts 
decide which was more critical: back-up; availability.

Sometimes, like during the Christmas shopping season the decision was 
availability.

But, that was mortgaging the future.
Recovering without back-ups during your peak season takes longer than during 
'normal' times.

We never had to run recovery when we made the decision, but I was glad I didn't 
have the responsibility to make the choice.

Back-ups are insurance premiums.
If you pay and nothing happens, it's a business expense.
If you don't pay and something happens, it may be a career event!

-
Ted MacNEIL
eamacn...@yahoo.ca

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Stephen Mednick
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Hal Merritt
Sent: Friday, 3 December 2010 6:44 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Hardware-assisted compression: not CPU-efficient?

Conversely, sometimes it is hard to get the backups all done in a low
activity window, so one might compromise in favor of faster backups even at
the expense of more CPU consumption. 

Depending on shop's strategy, getting a logically consistent PIT copy just
might put the backups in the business critical path. That is, all have to
complete before the next business day starts. 
---

Doesn't have to be if you combine the backups with hardware vendor
replication technologies such as SHADOWIMAGE, TIMEFINDER and FLASHCOPY. 

Read how Innovation's FDRINSTANT solution gets around the issue of taking
backups off the critical path:

http://www.innovationdp.fdr.com/products/fdrinstant/


Stephen Mednick
Computer Supervisory Services
Sydney, Australia
 
Asia/Pacific representatives for
Innovation Data Processing, Inc.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Hal Merritt
Conversely, sometimes it is hard to get the backups all done in a low activity 
window, so one might compromise in favor of faster backups even at the expense 
of more CPU consumption. 

Depending on shop's strategy, getting a logically consistent PIT copy just 
might put the backups in the business critical path. That is, all have to 
complete before the next business day starts. 




-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Ron Hawkins
Sent: Thursday, December 02, 2010 12:58 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Hardware-assisted compression: not CPU-efficient?

Gil,

I was thinking that a faster restore would have some value as a reduction
in recovery time, as opposed to back-up duration which is usually outside of
any business critical path. 

This would have value in business continuance whether it was a small
application recovery or a full disaster recovery situation. I don't think
the frequency of recovery is a factor in this case.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Paul Gilmartin
> Sent: Thursday, December 02, 2010 6:09 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] Hardware-assisted compression: not CPU-efficient?
> 
> On Thu, 2 Dec 2010 02:53:17 -0800, Ron Hawkins wrote:
> >
> >The saving in hardware assisted compression is in decompression - when
you
> read it. Look at what should be a much lower CPU cost to decompress the
files
> during restore and decide if the speed of restoring the data concurrently
is
> worth the increase in CPU required to back it up in the first place.
> >
> So if you restore more frequently than you backup, you come out ahead?
> 
> -- gil
> 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Gil,

I was thinking that a faster restore would have some value as a reduction
in recovery time, as opposed to back-up duration which is usually outside of
any business critical path. 

This would have value in business continuance whether it was a small
application recovery or a full disaster recovery situation. I don't think
the frequency of recovery is a factor in this case.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Paul Gilmartin
> Sent: Thursday, December 02, 2010 6:09 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] Hardware-assisted compression: not CPU-efficient?
> 
> On Thu, 2 Dec 2010 02:53:17 -0800, Ron Hawkins wrote:
> >
> >The saving in hardware assisted compression is in decompression - when you
> >read it. Look at what should be a much lower CPU cost to decompress the files
> >during restore and decide if the speed of restoring the data concurrently is
> >worth the increase in CPU required to back it up in the first place.
> >
> So if you restore more frequently than you backup, you come out ahead?
> 
> -- gil
> 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Martin,

Except for when the compression assist instructions were in millicode on the
G4 and G5, the hardware compression from Compression Services has always had
an asymmetric cost for DFSMS compression. I remember some early
documentation from IBM when it was first introduced in DFSMS that quoted 12
instructions per byte to compress, and two instructions per byte to
decompress. 
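
In round numbers that asymmetry works out as follows (a back-of-envelope
sketch in Python; the 12-vs-2 figures are the quoted ones above, not a
measurement):

compress_ipb = 12    # instructions per byte to compress (quoted above)
decompress_ipb = 2   # instructions per byte to decompress
print(f"decompress/compress = {decompress_ipb / compress_ipb:.0%}")
# prints "decompress/compress = 17%", i.e. roughly 83% less CPU to decompress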

Ron


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Martin Packer
> Sent: Thursday, December 02, 2010 3:36 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] Hardware-assisted compression: not CPU-efficient?
> 
> Ron, is it generally the case that CPU is saved on read? I'm seeing QSAM
> with HDC jobsteps showing very high CPU. But then they seem to both write
> and read. Enough CPU to potentially suffer from queuing.
> 
> (And, yes, I know you were talking about a different category of HDC
> usage.)
> 
> Martin Packer,
> Mainframe Performance Consultant, zChampion
> Worldwide Banking Center of Excellence, IBM
> 
> +44-7802-245-584
> 
> email: martin_pac...@uk.ibm.com
> 
> Twitter / Facebook IDs: MartinPacker
> 
> 
> 
> 
> 
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
> 
> 
> 
> 
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Staller, Allan
Unfortunately, IBM, et al. *DO NOT* bill on elapsed time.

More CPU used for Dump is less CPU available for productive work, or
worse yet, a bigger software bill!



Increased CPU time to do the dump does not necessarily mean that 
the elapsed time is longer.  In fact, by compressing the data, I would 
expect that the time required to write it out (the I/O time) would be 
less.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tom Marchant
On Thu, 2 Dec 2010 12:09:23 -0500, Tony Harminc wrote:

>On 2 December 2010 05:53, Ron Hawkins wrote:
>>
>> The saving in hardware assisted compression is in 
>>decompression - when you read it. Look at what should be a 
>>much lower CPU cost to decompress the files during restore 
>>and decide if the speed of restoring the data concurrently is 
>>worth the increase in CPU required to back it up in the first place.
>
>I am a little surprised at this
>
>But regardless, surely it is a strange claim that an installation
>would use hardware assisted compression in order to make their
>restores faster, particularly at the expense of their dumps.

Increased CPU time to do the dump does not necessarily mean that 
the elapsed time is longer.  In fact, by compressing the data, I would 
expect that the time required to write it out (the I/O time) would be 
less.

-- 
Tom Marchant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Tony Harminc
On 2 December 2010 05:53, Ron Hawkins  wrote:
> Johnny,
>
> The saving in hardware assisted compression is in decompression - when you 
> read it. Look at what should be a much lower CPU cost to decompress the files 
> during restore and decide if the speed of restoring the data concurrently is 
> worth the increase in CPU required to back it up in the first place.

I am a little surprised at this. Certainly for most of the current
dynamic dictionary based algorithms (and many more as well),
decompression will always, except in pathological cases, be a good
deal faster than compression. This is intuitively obvious, since the
compression code must not only go through the mechanics of
transforming input data into the output codestream, but must do it
with some eye to actually compressing as best it can with the
knowledge available to it, rather than making things worse. The
decompression simply takes what it is given, and algorithmically
transforms it back with no choice.
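
As an illustration, here is a minimal sketch in Python that times the two
directions on the same buffer, using the deflate-based zlib module as a
stand-in for a dynamic dictionary scheme (not CMPSC itself; the ratio will
vary with data and level):

import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 50000

t0 = time.process_time()
compressed = zlib.compress(data, 6)   # moderate compression level
t1 = time.process_time()
restored = zlib.decompress(compressed)
t2 = time.process_time()

assert restored == data
print(f"compress:   {t1 - t0:.4f}s CPU, {len(data)} -> {len(compressed)} bytes")
print(f"decompress: {t2 - t1:.4f}s CPU")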

Whether a hardware assisted - which in this case means one using the
tree manipulation instructions - decompression is disproportionately
faster than a similar compression, I don't know, but I'd be surprised
if it's much different.

But regardless, surely it is a strange claim that an installation
would use hardware assisted compression in order to make their
restores faster, particularly at the expense of their dumps. What
would be the business case for such a thing? How many installations do
restores on any kind of regular basis? How many have a need to have
them run even faster than they do naturally when compared to the
dumps?

Tony H.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Norbert Friemel
On Thu, 2 Dec 2010 16:29:56 +0200, Yifat Oren wrote:

>
>I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are
>synonymous;
>
>Are you saying they are not?
>
>

Yes, they are not synonymous. HWCOMPRESS uses the CMPSC instruction
(dictionary-based compression). COMPRESS uses RLE (run-length encoding).
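
A toy illustration of the difference (Python; the encodings below are
illustrative, not the actual DFSMSdss formats): RLE only collapses runs of
a repeated byte, while a dictionary-based scheme can also exploit recurring
substrings.

import zlib

def rle_encode(data):
    """Encode as (count, byte) pairs, counts capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

blanks = b"A" + b" " * 200                 # a long run: RLE shines
text = b"the cat sat on the mat " * 40     # recurring words: dictionary shines

for label, buf in (("run of blanks", blanks), ("repeating text", text)):
    print(label, len(buf), "->", "RLE", len(rle_encode(buf)),
          "| deflate", len(zlib.compress(buf)))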

Norbert Friemel

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Vernooij, CP - SPLXM
"Yifat Oren"  wrote in message
news:<3d0c19e6913742b282eeb9a7c4ae3...@yifato>...
> Hi Johnny, 
> 
> I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are
> synonymous;
> 
> Are you saying they are not?
> 
> 
> If you are writing to tape why not use the drive compaction (DCB=TRTCH=COMP)
> instead?

Because he is trying to lower channel utilization, so he must compress
before sending it over the channel.

Kees.



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Yifat Oren
Hi Johnny, 

I was under the impression that for DFDSS DUMP, COMPRESS and HWCOMPRESS are
synonymous;

Are you saying they are not?


If you are writing to tape why not use the drive compaction (DCB=TRTCH=COMP)
instead?

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Johnny Luo
Sent: Thursday, 02 December 2010 12:13
To: IBM-MAIN@bama.ua.edu
Subject: Hardware-assisted compression: not CPU-efficient?

Hi,

DSS DUMP supports COMPRESS/HWCOMPRESS keyword and I found out in my test
that HWCOMPRESS costs more CPU than COMPRESS.

Is it normal?

Currently we're dumping huge production data to tape and in order to
alleviate the tape channel utilization we need to compress the data before
writing to tape.  It works well, but the CPU usage is a problem because we have
many such backup jobs running simultaneously.

If hardware-assisted compression cannot reduce the cpu overhead,  I will
consider using resource group to cap those jobs.

Best Regards,
Johnny Luo

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the
archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Paul Gilmartin
On Thu, 2 Dec 2010 02:53:17 -0800, Ron Hawkins wrote:
>
>The saving in hardware assisted compression is in decompression - when you 
>read it. Look at what should be a much lower CPU cost to decompress the files 
>during restore and decide if the speed of restoring the data concurrently is 
>worth the increase in CPU required to back it up in the first place.
>
So if you restore more frequently than you backup, you come out ahead?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Miklos Szigetvari

Hi

Yes, it is a C library
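
zlib itself is freely licensed, and for instance Python's standard zlib
module wraps the same RFC 1950 format, so a minimal round trip looks like
this (a sketch, not our production code):

import zlib

payload = b"DSS DUMP output, or any other byte stream. " * 1000
squeezed = zlib.compress(payload)            # deflate in a zlib wrapper
assert zlib.decompress(squeezed) == payload  # lossless round trip
print(len(payload), "->", len(squeezed), "bytes")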

On 12/2/2010 1:26 PM, Johnny Luo wrote:

Miklos,

What do you mean by 'zlib'? Is it free on z/OS?

Best Regards,
Johnny Luo


On Thu, Dec 2, 2010 at 8:10 PM, Miklos Szigetvari<
miklos.szigetv...@isis-papyrus.com>  wrote:


Hi

A few years ago I tried hardware compression, as we use the "zlib"
library (http://www.ietf.org/rfc/rfc1950.txt) intensively to
compress/expand. I never got a proper answer, and it is still not clear
in which cases hardware compression would bring some CPU reduction.


On 12/2/2010 12:36 PM, Martin Packer wrote:


Ron, is it generally the case that CPU is saved on read? I'm seeing QSAM
with HDC jobsteps showing very high CPU. But then they seem to both write
and read. Enough CPU to potentially suffer from queuing.

(And, yes, I know you were talking about a different category of HDC
usage.)

Martin Packer,
Mainframe Performance Consultant, zChampion
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker





Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number
741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU






--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Johnny Luo
Miklos,

What do you mean by 'zlib'? Is it free on z/OS?

Best Regards,
Johnny Luo


On Thu, Dec 2, 2010 at 8:10 PM, Miklos Szigetvari <
miklos.szigetv...@isis-papyrus.com> wrote:

>Hi
>
> A few years ago I tried hardware compression, as we use the "zlib"
> library (http://www.ietf.org/rfc/rfc1950.txt) intensively to
> compress/expand. I never got a proper answer, and it is still not clear
> in which cases hardware compression would bring some CPU reduction.
>
>
> On 12/2/2010 12:36 PM, Martin Packer wrote:
>
>> Ron, is it generally the case that CPU is saved on read? I'm seeing QSAM
>> with HDC jobsteps showing very high CPU. But then they seem to both write
>> and read. Enough CPU to potentially suffer from queuing.
>>
>> (And, yes, I know you were talking about a different category of HDC
>> usage.)
>>
>> Martin Packer,
>> Mainframe Performance Consultant, zChampion
>> Worldwide Banking Center of Excellence, IBM
>>
>> +44-7802-245-584
>>
>> email: martin_pac...@uk.ibm.com
>>
>> Twitter / Facebook IDs: MartinPacker
>>
>>
>>
>>
>>
>> Unless stated otherwise above:
>> IBM United Kingdom Limited - Registered in England and Wales with number
>> 741598.
>> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>>
>>
>>
>>
>>
>>
>> --
>> For IBM-MAIN subscribe / signoff / archive access instructions,
>> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
>> Search the archives at http://bama.ua.edu/archives/ibm-main.html
>>
>>
>>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Miklos Szigetvari

Hi

A few years ago I tried hardware compression, as we use the "zlib"
library (http://www.ietf.org/rfc/rfc1950.txt) intensively to
compress/expand. I never got a proper answer, and it is still not clear
in which cases hardware compression would bring some CPU reduction.


On 12/2/2010 12:36 PM, Martin Packer wrote:

Ron, is it generally the case that CPU is saved on read? I'm seeing QSAM
with HDC jobsteps showing very high CPU. But then they seem to both write
and read. Enough CPU to potentially suffer from queuing.

(And, yes, I know you were talking about a different category of HDC
usage.)

Martin Packer,
Mainframe Performance Consultant, zChampion
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker





Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number
741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU






--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Martin Packer
Ron, is it generally the case that CPU is saved on read? I'm seeing QSAM 
with HDC jobsteps showing very high CPU. But then they seem to both write 
and read. Enough CPU to potentially suffer from queuing.

(And, yes, I know you were talking about a different category of HDC 
usage.)

Martin Packer,
Mainframe Performance Consultant, zChampion
Worldwide Banking Center of Excellence, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker





Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU






--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Ron Hawkins
Johnny,

The saving in hardware assisted compression is in decompression - when you read 
it. Look at what should be a much lower CPU cost to decompress the files during 
restore and decide if the speed of restoring the data concurrently is worth the 
increase in CPU required to back it up in the first place.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of
> Johnny Luo
> Sent: Thursday, December 02, 2010 2:13 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: [IBM-MAIN] Hardware-assisted compression: not CPU-efficient?
> 
> Hi,
> 
> DSS DUMP supports COMPRESS/HWCOMPRESS keyword and I found out in my test
> that HWCOMPRESS costs more CPU than COMPRESS.
> 
> Is it normal?
> 
> Currently we're dumping huge production data to tape and in order to
> alleviate the tape channel utilization we need to compress the data before
> writing to tape.  It works well, but the CPU usage is a problem because we have
> many such backup jobs running simultaneously.
> 
> If hardware-assisted compression cannot reduce the cpu overhead,  I will
> consider using resource group to cap those jobs.
> 
> Best Regards,
> Johnny Luo
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Johnny Luo
Hi,

DSS DUMP supports COMPRESS/HWCOMPRESS keyword and I found out in my test
that HWCOMPRESS costs more CPU than COMPRESS.

Is it normal?

Currently we're dumping huge production data to tape and in order to
alleviate the tape channel utilization we need to compress the data before
writing to tape.  It works well, but the CPU usage is a problem because we have
many such backup jobs running simultaneously.

If hardware-assisted compression cannot reduce the cpu overhead,  I will
consider using resource group to cap those jobs.

Best Regards,
Johnny Luo

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-03-01 Thread Tobias Cafiero
Ron,
 Thanks, and I believe I'm having firewall issues with some of our
e-mails, so please be patient.

Regards,
Tobias Cafiero
Data Resource Management 

Tel: (212) 855-1117




Ron Hawkins  
Sent by: IBM Mainframe Discussion List 
03/01/2010 09:35 AM
Please respond to
IBM Mainframe Discussion List 


To
IBM-MAIN@bama.ua.edu
cc

Subject
Re: SMS compression cost & size





Tobias,

There's no magic number. It pretty much depends on the site, the
compression method and how datasets are accessed. It's like asking whether
there is an ideal wheel rim size for every car.

Why not raise the bar in some increment and measure the effect. You probably
have some idea of where you want to be, so get there in four incremental
increases and measure changes in space, CPU usage and IO rates as you go.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Tobias Cafiero
> Sent: Monday, March 01, 2010 6:19 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] SMS compression cost & size
>
> Ron,
>   We compress our GDG's and don't send them to ML1. However, at the
> time we were trying to save DASD and used the 8 MB and 5 MB values. We
> want to raise the bar for compression because the DASD issues have
> subsided. Is there an ideal threshold for the DASD/Compression size value?
>
> Regards,
> Tobias Cafiero
> Data Resource Management
>
> Tel: (212) 855-1117
>
>
>
>
> Ron Hawkins 
> Sent by: IBM Mainframe Discussion List 
> 02/27/2010 10:45 AM
> Please respond to
> IBM Mainframe Discussion List 
>
>
> To
> IBM-MAIN@bama.ua.edu
> cc
>
> Subject
> Re: SMS compression cost & size
>
>
>
>
>
> Reza,
>
> For me it's not an exception, because I'm in the business of saving IO and
> not space (my employer would not like me to save space).
>
> I see nothing wrong with compressed datasets going to ML2, especially if
> they are unlikely to be touched again before they are deleted. ML2 is
> usually on media that is cheaper than ML0.
>
> Actually what I think is quite smart is that they are sending it straight
> to ML2 without migrating to ML1 first. There's almost zero net sum gain
> migrating compressed datasets to ML1.
>
> Ron
>
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of
> > R Hey
> > Sent: Wednesday, February 24, 2010 12:16 AM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: [IBM-MAIN] SMS compression cost & size
> >
> > Ron,
> >
> > Your example is an 'exception': it was decided to do it for that DS to
> > gain the benefit. That's OK by me.
> >
> > It wasn't decided to COMP all VSAM/PS(FB/VB) DS that are over 5 CYL,
> > which is the case I question. I've seen thousands of GDS, under 5 Cyl,
> > being COMP'd, & they quickly go to ML2, so they are not read many times.
> > This doesn't make sense to me, if one is short on CPU.
> >
> > I should have said:
> > I don't see why anyone would compress ALL DS under 500 Cyl these days,
> > just to save space, when one is short on CPU.
> >
> > > there is more to compression than just the size of the dataset.
> >
> > Amen.
> >
> > Rgds,
> > Rez
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> > Search the archives at http://bama.ua.edu/archives/ibm-main.html
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html
>
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

Re: SMS compression cost & size

2010-03-01 Thread Tobias Cafiero
Ron,
 Thanks


Regards,
Tobias Cafiero
Data Resource Management 

Tel: (212) 855-1117




Ron Hawkins  
Sent by: IBM Mainframe Discussion List 
03/01/2010 09:35 AM
Please respond to
IBM Mainframe Discussion List 


To
IBM-MAIN@bama.ua.edu
cc

Subject
Re: SMS compression cost & size





Tobias,

There's no magic number. It pretty much depends on the site, the
compression method and how datasets are accessed. It's like asking whether
there is an ideal wheel rim size for every car.

Why not raise the bar in some increment and measure the effect. You probably
have some idea of where you want to be, so get there in four incremental
increases and measure changes in space, CPU usage and IO rates as you go.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Tobias Cafiero
> Sent: Monday, March 01, 2010 6:19 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] SMS compression cost & size
>
> Ron,
>   We compress our GDG's and don't send them to ML1. However, at the
> time we were trying to save DASD and used the 8 MB and 5 MB values. We
> want to raise the bar for compression because the DASD issues have
> subsided. Is there an ideal threshold for the DASD/Compression size value?
>
> Regards,
> Tobias Cafiero
> Data Resource Management
>
> Tel: (212) 855-1117
>
>
>
>
> Ron Hawkins 
> Sent by: IBM Mainframe Discussion List 
> 02/27/2010 10:45 AM
> Please respond to
> IBM Mainframe Discussion List 
>
>
> To
> IBM-MAIN@bama.ua.edu
> cc
>
> Subject
> Re: SMS compression cost & size
>
>
>
>
>
> Reza,
>
> For me it's not an exception, because I'm in the business of saving IO and
> not space (my employer would not like me to save space).
>
> I see nothing wrong with compressed datasets going to ML2, especially if
> they are unlikely to be touched again before they are deleted. ML2 is
> usually on media that is cheaper than ML0.
>
> Actually what I think is quite smart is that they are sending it straight
> to ML2 without migrating to ML1 first. There's almost zero net sum gain
> migrating compressed datasets to ML1.
>
> Ron
>
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of
> > R Hey
> > Sent: Wednesday, February 24, 2010 12:16 AM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: [IBM-MAIN] SMS compression cost & size
> >
> > Ron,
> >
> > Your example is an 'exception': it was decided to do it for that DS to
> > gain the benefit. That's OK by me.
> >
> > It wasn't decided to COMP all VSAM/PS(FB/VB) DS that are over 5 CYL,
> > which is the case I question. I've seen thousands of GDS, under 5 Cyl,
> > being COMP'd, & they quickly go to ML2, so they are not read many times.
> > This doesn't make sense to me, if one is short on CPU.
> >
> > I should have said:
> > I don't see why anyone would compress ALL DS under 500 Cyl these days,
> > just to save space, when one is short on CPU.
> >
> > > there is more to compression than just the size of the dataset.
> >
> > Amen.
> >
> > Rgds,
> > Rez
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> > Search the archives at http://bama.ua.edu/archives/ibm-main.html
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html
>
>
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

Re: SMS compression cost & size

2010-03-01 Thread Ron Hawkins
Tobias,

There's no magic number. It pretty much depends on the site, the
compression method and how datasets are accessed. It's like asking whether
there is an ideal wheel rim size for every car.

Why not raise the bar in some increment and measure the effect. You probably
have some idea of where you want to be, so get there in four incremental
increases and measure changes in space, CPU usage and IO rates as you go.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Tobias Cafiero
> Sent: Monday, March 01, 2010 6:19 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] SMS compression cost & size
> 
> Ron,
>   We compress our GDG's and don't send them to ML1. However, at the
> time we were trying to save DASD and used the 8 MB and 5 MB values. We
> want to raise the bar for compression because the DASD issues have
> subsided. Is there an ideal threshold for the DASD/Compression size value?
> 
> Regards,
> Tobias Cafiero
> Data Resource Management
> 
> Tel: (212) 855-1117
> 
> 
> 
> 
> Ron Hawkins 
> Sent by: IBM Mainframe Discussion List 
> 02/27/2010 10:45 AM
> Please respond to
> IBM Mainframe Discussion List 
> 
> 
> To
> IBM-MAIN@bama.ua.edu
> cc
> 
> Subject
> Re: SMS compression cost & size
> 
> 
> 
> 
> 
> Reza,
> 
> For me it's not an exception, because I'm in the business of saving IO and
> not space (my employer would not like me to save space).
> 
> I see nothing wrong with compressed datasets going to ML2, especially if
> they are unlikely to be touched again before they are deleted. ML2 is
> usually on media that is cheaper than ML0.
> 
> Actually what I think is quite smart is that they are sending it straight
> to
> ML2 without migrating to ML1 first. There's almost zero net sum gain
> migrating compressed datasets to ML1.
> 
> Ron
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of
> > R Hey
> > Sent: Wednesday, February 24, 2010 12:16 AM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: [IBM-MAIN] SMS compression cost & size
> >
> > Ron,
> >
> > Your example is an 'exception': it was decided to do it for that DS to
> > gain the benefit. That's OK by me.
> >
> > It wasn't decided to COMP all VSAM/PS(FB/VB) DS that are over 5 CYL,
> > which
> > is the case I question. I've seen thousands of GDS,  under 5 Cyl, being
> > COMP'd, & they quickly go to ML2, so they are not read many times. This
> > doesn't make sense to me, if one is short on CPU.
> >
> > I should have said:
> > I don't see why anyone would compress ALL DS under 500 Cyl these days,
> > just to save space, when one is short on CPU.
> >
> > > there is more to compression than just the size of the dataset.
> >
> > Amen.
> >
> > Rgds,
> > Rez
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> > Search the archives at http://bama.ua.edu/archives/ibm-main.html
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html
> 
> 
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-03-01 Thread Tobias Cafiero
Ron, 
  We compress our GDG's and don't send them to ML1. However, at the
time we were trying to save DASD and used the 8 MB and 5 MB values. We
want to raise the bar for compression because the DASD issues have
subsided. Is there an ideal threshold for the DASD/Compression size value?

Regards,
Tobias Cafiero
Data Resource Management 

Tel: (212) 855-1117




Ron Hawkins  
Sent by: IBM Mainframe Discussion List 
02/27/2010 10:45 AM
Please respond to
IBM Mainframe Discussion List 


To
IBM-MAIN@bama.ua.edu
cc

Subject
Re: SMS compression cost & size





Reza,

For me it's not an exception, because I'm in the business of saving IO and
not space (my employer would not like me to save space).

I see nothing wrong with compressed datasets going to ML2, especially if
they are unlikely to be touched again before they are deleted. ML2 is
usually on media that is cheaper than ML0.

Actually what I think is quite smart is that they are sending it straight 
to
ML2 without migrating to ML1 first. There's almost zero net sum gain
migrating compressed datasets to ML1.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> R Hey
> Sent: Wednesday, February 24, 2010 12:16 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] SMS compression cost & size
>
> Ron,
>
> Your example is an 'exception': it was decided to do it for that DS to
> gain the benefit. That's OK by me.
>
> It wasn't decided to COMP all VSAM/PS(FB/VB) DS that are over 5 CYL,
> which
> is the case I question. I've seen thousands of GDS,  under 5 Cyl, being
> COMP'd, & they quickly go to ML2, so they are not read many times. This
> doesn't make sense to me, if one is short on CPU.
>
> I should have said:
> I don't see why anyone would compress ALL DS under 500 Cyl these days,
> just to save space, when one is short on CPU.
>
> > there is more to compression than just the size of the dataset.
>
> Amen.
>
> Rgds,
> Rez
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-27 Thread Ron Hawkins
Reza,

For me it's not an exception, because I'm in the business of saving IO and
not space (my employer would not like me to save space).

I see nothing wrong with compressed datasets going to ML2, especially if
they are unlikely to be touched again before they are deleted. ML2 is
usually on media that is cheaper than ML0.

Actually what I think is quite smart is that they are sending it straight to
ML2 without migrating to ML1 first. There's almost zero net sum gain
migrating compressed datasets to ML1.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> R Hey
> Sent: Wednesday, February 24, 2010 12:16 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] SMS compression cost & size
> 
> Ron,
> 
> Your example is an 'exception': it was decided to do it for that DS to
> gain the benefit. That's OK by me.
> 
> It wasn't decided to COMP all VSAM/PS(FB/VB) DS that are over 5 CYL, which
> is the case I question. I've seen thousands of GDS,  under 5 Cyl, being
> COMP'd, & they quickly go to ML2, so they are not read many times. This
> doesn't make sense to me, if one is short on CPU.
> 
> I should have said:
> I don't see why anyone would compress ALL DS under 500 Cyl these days,
> just to save space, when one is short on CPU.
> 
> > there is more to compression than just the size of the dataset.
> 
> Amen.
> 
> Rgds,
> Rez
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-24 Thread Tobias Cafiero
Rez,
   Do you have an analysis of what compression costs per DSN?

Regards,
Tobias Cafiero
Data Resource Management 

Tel: (212) 855-1117




R Hey  
Sent by: IBM Mainframe Discussion List 
02/24/2010 03:16 AM
Please respond to
IBM Mainframe Discussion List 


To
IBM-MAIN@bama.ua.edu
cc

Subject
Re: SMS compression cost & size





Ron,

Your example is an 'exception': it was decided to do it for that DS to
gain the benefit. That's OK by me.

It wasn't decided to COMP all VSAM/PS(FB/VB) DS that are over 5 CYL, which
is the case I question. I've seen thousands of GDS,  under 5 Cyl, being
COMP'd, & they quickly go to ML2, so they are not read many times. This
doesn't make sense to me, if one is short on CPU.

I should have said:
I don't see why anyone would compress ALL DS under 500 Cyl these days,
just to save space, when one is short on CPU.

> there is more to compression than just the size of the dataset.

Amen.

Rgds,
Rez

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-24 Thread R Hey
Ron,

Your example is an 'exception': it was decided to do it for that DS to gain the
benefit. That's OK by me.

It wasn't decided to COMP all VSAM/PS(FB/VB) DS that are over 5 CYL, which 
is the case I question. I've seen thousands of GDS,  under 5 Cyl, being 
COMP'd, & they quickly go to ML2, so they are not read many times. This 
doesn't make sense to me, if one is short on CPU.

I should have said:
I don't see why anyone would compress ALL DS under 500 Cyl these days, 
just to save space, when one is short on CPU.

> there is more to compression than just the size of the dataset.

Amen.

Rgds,
Rez

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-22 Thread Ron Hawkins
Reza,

I can think of a KSDS that was being accessed by LSR, but every CI was
touched until it was loaded into the LSR pool. Over 150 programs touched
that 200 CYL baby in the same 60 minutes. Compression reduced the size by
55% and reduced the IO by 55% as well.

In terms of IO savings it was the same as compressing a 30,000 (150x200) CYL
file that is read once.
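
Spelled out (a quick sketch in Python, using the numbers from the example
above):

readers, dataset_cyl, ratio = 150, 200, 0.55
total_reads = readers * dataset_cyl     # 30,000 CYL of reads in the hour
saved = total_reads * ratio             # CYL of reads avoided at 55%
print(f"{saved:.0f} of {total_reads} CYL of reads avoided")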

My point: there is more to compression than just the size of the dataset.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> R Hey
> Sent: Monday, February 22, 2010 9:23 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] SMS compression cost & size
> 
> Thanks for the replies.
> 
> Redbook on vsam says:
> 
> There are some types of data sets that are not suitable for ZL compression,
> resulting in rejection, such as these:
> 
> - Small data sets.
> - Data sets in which the pattern of the data does not produce a good
>  compression rate, such as image data.
> - Small records. Compression is performed on a logical record basis. When
> logical records are very short, the cost of additional work may become
>  excessive compared to the reduction in size that compression achieves.
> 
> It seems very complicated & the book talks about:
> 
> A compression-eligible data set can exist in one of three forms, depending
> on the moment:
> 
> - Dictionary selection: ...
> - Mated: The data set is mated with the appropriate dictionary; this
> concludes the sampling and interrogation processes.
> - Rejected: A suitable dictionary match was not found during sampling and
> interrogation, so compression is bypassed, ...
> 
> It has all sorts of don'ts & restrictions ...
> I don't see why anyone would compress anything under 500 Cyl these days.
> 
> Salute,
> Rez
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-22 Thread R Hey
Thanks for the replies.

Redbook on vsam says:

There are some types of data sets that are not suitable for ZL compression,
resulting in rejection, such as these:

- Small data sets.
- Data sets in which the pattern of the data does not produce a good
compression rate, such as image data.
- Small records. Compression is performed on a logical record basis. When
logical records are very short, the cost of additional work may become
excessive compared to the reduction in size that compression achieves.

It seems very complicated & the book talks about:

A compression-eligible data set can exist in one of three forms, depending on 
the
moment:

- Dictionary selection: ...
- Mated: The data set is mated with the appropriate dictionary; this concludes
the sampling and interrogation processes.
- Rejected: A suitable dictionary match was not found during sampling and
interrogation, so compression is bypassed, ...

It has all sorts of don'ts & restrictions ...
I don't see why anyone would compress anything under 500 Cyl these days.

Salute,
Rez

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-17 Thread Rick Fochtman

---
By 'killer apps' you mean good ones to COMP for, right?

Would you COMP regardless of size, if short on CPU already, with lots of 
DASD?


(even for less than 50 cyls)

If size matters, what should the MIN size be?
---
Reza, if you're running tight on CPU and got lots of DASD acreage, I'd 
set the MIN fairly high, say up around 3G and experiment for a "best 
fit" between DASD space and CPU.


Just my $0.02 worth.

Rick

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-17 Thread R Hey
Ron,

> What is the objective of compressing the dataset?

Nobody remembers.
It was done in a time far, far away ...
My client is short on CPU, so I (new sysFROG) started wondering why ...

>  regardless of size.

So, size is not everything after all ;-)

Rez

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-17 Thread David Andrews
On Wed, 2010-02-17 at 01:28 -0500, Ron Hawkins wrote:
> What is the objective of compressing the dataset?

In my environment (cycles to burn) reads for certain long sequential
datasets are faster for compressed data.  So my ACS routines look for
specific larger datasets that are written once, read many times, and
cause them to be compressed.

Nota bene: you can't rewrite compressed records in place.

-- 
David Andrews
A. Duda and Sons, Inc.
david.andr...@duda.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-16 Thread Ron Hawkins
Reza,

Yes, killer Apps are the good ones.

I wouldn't compress a 50 CYL dataset just for compression's sake. If it was
being read by all and sundry with no updates then I'd load it into
Hiperbatch.

I've never looked at compression to save space. I've always viewed it as an
IO reduction technique. So no, I would not compress datasets regardless of
size.

What is the objective of compressing the dataset?

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> R Hey
> Sent: Tuesday, February 16, 2010 10:02 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] SMS compression cost & size
> 
> Ron,
> 
> By 'killer apps' you mean good ones to COMP for, right?
> 
> Would you COMP regardless of size, if short on CPU already, with lots of
DASD?
> (even for less than 50 cyls)
> 
> If size matters, what should the MIN size be?
> 
> Cheers,
> Rez
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-16 Thread R Hey
Ron,

By 'killer apps' you mean good ones to COMP for, right?

Would you COMP regardless of size, if short on CPU already, with lots of DASD?
(even for less than 50 cyls)

If size matters, what should the MIN size be?

Cheers,
Rez

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compression cost & size

2010-02-16 Thread Ron Hawkins
Reza,

It's LZW compression which has an asymmetric cost by design - compressing
always costs more than decompressing.

Back when the compression assist instructions were announced IBM were saying
the difference was around 6:1 compression vs decompression.

The compression and decompression costs will depend on how
you use your dictionaries.
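
For a feel of why the asymmetry is there by design, here is a compact LZW
sketch (Python, illustrative only - CMPSC's format and dictionaries differ):
the encoder must search and grow a string table as it scans the input, while
the decoder merely indexes into a table it rebuilds as a side effect.

def lzw_encode(data):
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                     # keep extending the current match
        else:
            out.append(table[w])       # emit code for the longest match
            table[wc] = len(table)     # grow the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes):
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]
        out += entry                        # no searching: just table lookups
        table[len(table)] = w + entry[:1]   # mirror the encoder's growth
        w = entry
    return bytes(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_encode(data)
assert lzw_decode(codes) == data
print(len(data), "bytes ->", len(codes), "codes")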

Personally I've found two killer apps for DFSMSdfp compression:

1) Large datasets updated/created in the batch critical path when
   using Synchronous Remote Copy.
2) Large datasets that are read over and over again after they are
   created (backups, reporting, DW extract, etc.)

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> R Hey
> Sent: Tuesday, February 16, 2010 5:04 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: [IBM-MAIN] SMS compression cost & size
> 
> Hi,
> 
> Are there any figures for the cost of SMS compression out there,
> or is it YMMV?
> 
> (I've checked the archive to find cost is higher for W than for R ...,
> seen many who decided not to do it with a lot of YMMV ...)
> 
> Also, are there any ROT for the min size to compress for?
> 
> One client I had compressed data if it was > 5 Cyl !
> This doesn't make sense to me.
> 
> What's the min size you use for compression?
> 
> Book says:
> 
> Compress when an existing data set is approaching the 4 gigabyte VSAM size
> limit or when you have capacity constraints
> 
> The data set must have a primary allocation of at least 5 MBs, or 8 MBs if
no
> secondary allocation is specified.
> 
> TIA,
> Rez
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


SMS compression cost & size

2010-02-16 Thread R Hey
Hi,

Are there any figures for the cost of SMS compression out there,
or is it YMMV?

(I've checked the archive to find cost is higher for W than for R ...,  
seen many who decided not to do it with a lot of YMMV ...)

Also, are there any ROT for the min size to compress for?

One client I had compressed data if it was > 5 Cyl !
This doesn't make sense to me.

What's the min size you use for compression?

Book says:

Compress when an existing data set is approaching the 4 gigabyte VSAM size
limit or when you have capacity constraints  

The data set must have a primary allocation of at least 5 MBs, or 8 MBs if no 
secondary allocation is specified.   

TIA,
Rez

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS Compression - Software or Hardware

2010-01-28 Thread Ron Hawkins
Mike,

It is the hardware assisted compression that I was referring to. There
were/are products that do software compression without using the Hardware
assist. DFSMSdss, DFSMShsm and old versions of SAS and IAM spring to mind.

Then there were the IBM G4 and G5, which moved the hardware assist to millicode
- very, very ugly CPU time. That started blowing batch windows, and I had to
regress the compression that I had spent a year getting into place :-(

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> Mike Bell
> Sent: Thursday, January 28, 2010 12:38 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] SMS Compression - Software or Hardware
> 
> There are 2 kinds of compression.
> The outboard kind that takes place in the tape unit is one example;
> there is no difference in the z/OS CPU time for writing a compressed
> tape.
> 
> The other is the operating system kind, which is always software. The
> software compression can be either plain software or hardware-assisted
> software. The case for SMS is that it is hardware-assisted software.
> This means that the CPU used is much less than with plain software
> compression, but it does affect z/OS CPU time.
> --
> Mike
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS Compression - Software or Hardware

2010-01-28 Thread Mike Bell
There are 2 kinds of compression.
The outboard kind that takes place in the tape unit is one example;
there is no difference in the z/OS CPU time for writing a compressed
tape.

The other is the operating system kind, which is always software. The
software compression can be either plain software or hardware-assisted
software. The case for SMS is that it is hardware-assisted software.
This means that the CPU used is much less than with plain software
compression, but it does affect z/OS CPU time.
-- 
Mike

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS Compression - Software or Hardware

2010-01-28 Thread O'Brien, David W. (NIH/CIT) [C]
Thanks Ron

Dave O'Brien
NIH Contractor

From: Ron Hawkins [ron.hawkins1...@sbcglobal.net]
Sent: Thursday, January 28, 2010 1:51 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: SMS Compression - Software or Hardware

David,

SMS uses hardware compression. It has an asymmetric CPU cost, where
decompressing the data uses 80% less CPU than compressing it.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> O'Brien, David W. (NIH/CIT) [C]
> Sent: Thursday, January 28, 2010 10:18 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: [IBM-MAIN] SMS Compression - Software or Hardware
>
> If a Dataclass with the following attributes is invoked:
>
> Data Set Name Type  . . . . . : EXTENDED
>   If Extended . . . . . . . . : REQUIRED
>   Extended Addressability . . : YES
>   Record Access Bias  . . . . : USER
> Space Constraint Relief . . . : YES
>   Reduce Space Up To (%)  . . : 50
>   Dynamic Volume Count  . . . : 20
> Compaction  . . . . . . . . . : YES
>
> Is the resulting compaction software or hardware driven?
>
> I told my user software; I just want to confirm or correct that. The ISMF
> panels weren't much help. Neither is the 1.9 DFSMS Storage Administration
> Reference. Not sure where else to look.
>
> Thank You,
> Dave O'Brien
>  NIH Contractor
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS Compression - Software or Hardware

2010-01-28 Thread Ron Hawkins
David,

SMS uses hardware compression. It has an asymmetric CPU cost, where
decompressing the data uses 80% less CPU than compressing it.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
> O'Brien, David W. (NIH/CIT) [C]
> Sent: Thursday, January 28, 2010 10:18 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: [IBM-MAIN] SMS Compression - Software or Hardware
> 
> If a Dataclass with the following attributes is invoked:
> 
> Data Set Name Type  . . . . . : EXTENDED
>   If Extended . . . . . . . . : REQUIRED
>   Extended Addressability . . : YES
>   Record Access Bias  . . . . : USER
> Space Constraint Relief . . . : YES
>   Reduce Space Up To (%)  . . : 50
>   Dynamic Volume Count  . . . : 20
> Compaction  . . . . . . . . . : YES
> 
> Is the resulting compaction software or hardware driven?
> 
> I told my user software; I just want to confirm or correct that. The ISMF
> panels weren't much help. Neither is the 1.9 DFSMS Storage Administration
> Reference. Not sure where else to look.
> 
> Thank You,
> Dave O'Brien
>  NIH Contractor
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


SMS Compression - Software or Hardware

2010-01-28 Thread O'Brien, David W. (NIH/CIT) [C]
If a Dataclass with the following attributes is invoked:

Data Set Name Type  . . . . . : EXTENDED
  If Extended . . . . . . . . : REQUIRED
  Extended Addressability . . : YES 
  Record Access Bias  . . . . : USER
Space Constraint Relief . . . : YES 
  Reduce Space Up To (%)  . . : 50  
  Dynamic Volume Count  . . . : 20  
Compaction  . . . . . . . . . : YES 

Is the resulting compaction software or hardware driven?

I told my user software; I just want to confirm or correct that. The ISMF panels
weren't much help. Neither was the 1.9 DFSMS Storage Administration Reference.
Not sure where else to look.

Thank You,
Dave O'Brien
 NIH Contractor

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-28 Thread Bill Fairchild
Pawel,

You should inform the vendor of DFSORT that using BSAM is not a good idea and 
see if any other sort vendors do a better job of handling compressed data if it 
is important enough in your shop.  Since BSAM uses EXCP internally, BSAM is 
evidently building channel programs that are less efficient given the exact 
combination of hardware involved.  Anything that BSAM does badly can be redone 
with a more judicious use of EXCP.  Also FICON with non-MIDAW channel programs 
may be exacerbating the problem of too many EXCPs as well.  Perhaps it's time 
that DFSORT, presumably a strategic product, began using a strategic access 
method like Media Manager or even STARTIO.

Bill Fairchild

Software Developer 
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Pawel Leszczynski
Sent: Wednesday, January 27, 2010 9:23 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: why compression costs additional I/O?

Hi Yifat,

Thanks for the answer - you are right! I've checked in the joblog:

for compressed output:

 0 SORTOUT  : BSAM USED

but for non-compressed output:

SORTOUT  : EXCP USED 

Generally, all of this probably means that using DFSORT for compressed datasets
is not a good idea.

Regards,
Pawel

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-28 Thread Yifat Oren
Ron,

Just to be sure someone mentions this;

Compressed Format sequential data sets are a special case of PS-E's.

>From "Macro Instructions for Data Sets':

"Recommendation: For compressed format data sets, do not specify NCP (thus,
allowing the system to default it to 1) or specify NCP=1.  This 
 is the optimal value for NCP for a compressed format data set since the
system handles all buffering internally for these data sets. "
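
(In JCL terms that simply means leaving NCP off the DD for compressed-format
output - a minimal sketch, with illustrative data set and data class names;
DATACLAS=COMPEXT stands in for a hypothetical data class with COMPACTION=YES:

//SORTOUT  DD DSN=MY.COMPRESSED.OUT,DISP=(NEW,CATLG),
//            DATACLAS=COMPEXT
//*  No DCB=NCP=... - the system defaults NCP to 1 and handles all
//*  buffering internally for compressed format data sets.

Ron's n = 16-times-stripes tuning elsewhere in the thread would then apply
only to non-compressed extended format data sets.)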

Best Regards,
Yifat

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Ron Hawkins
Sent: Wednesday, January 27, 2010 10:28 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: why compression costs additional I/O?

Peter,

Yes for your example I am recommending NCP=96, which means BUFNO=96. I
habitually put both NCP and BUFNO on BSAM files because I've never been sure
if BSAM calculates BUFNO using the NCP value from JCL.

Many years ago I tested this to death on uncached DASD and found that a
BUFNO/NCP of 16 was the point of diminishing returns for QSAM and BSAM.
While I don't think these double-buffer by design like EFS, I think it fits
well with the chain-length limit of eight blocks for BSAM and QSAM.

I should revisit this as a study on FICON and cached DASD, as it is likely
that the knee in the curve now happens at eight buffers: I've noticed
CPU-intensive utilities like IEBDG writing short chains when volumes are
SIMPLEX, and full chains when TrueCopy synchronous delays are added with
DUPLEX. It suggests to me that 16 is still a good number for when I/O is
delayed. Thirty-one is something I would recommend for BUFND on a VSAM file
with half-track CISZ, but I don't think it does any harm on DSORG=PS.

As far as I recall, BSAM and QSAM for PS-E do not have the same SSCH data
length and #CCW restrictions as PS, and Media Manager is probably limited to
a CYL. I only wish I had time to research this as a "science project" right
now, but at the moment I can only offer past experience with a smattering of
senior moments.

Ron

 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread Ron Hawkins
Peter,

Yes for your example I am recommending NCP=96, which means BUFNO=96. I
habitually put both NCP and BUFNO on BSAM files because I've never been sure
if BSAM calculates BUFNO using the NCP value from JCL.

Many years ago I tested this to death on uncached DASD and found that a
BUFNO/NCP of 16 was the point of diminishing returns for QSAM and BSAM.
While I don't think these double-buffer by design like EFS, I think it fits
well with the chain-length limit of eight blocks for BSAM and QSAM.

I should revisit this as a study on FICON and cached DASD, as it is likely
that the knee in the curve now happens at eight buffers: I've noticed
CPU-intensive utilities like IEBDG writing short chains when volumes are
SIMPLEX, and full chains when TrueCopy synchronous delays are added with
DUPLEX. It suggests to me that 16 is still a good number for when I/O is
delayed. Thirty-one is something I would recommend for BUFND on a VSAM file
with half-track CISZ, but I don't think it does any harm on DSORG=PS.

As far as I recall, BSAM and QSAM for PS-E do not have the same SSCH data
length and #CCW restrictions as PS, and Media Manager is probably limited to
a CYL. I only wish I had time to research this as a "science project" right
now, but at the moment I can only offer past experience with a smattering of
senior moments.

Ron
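
(A concrete rendering of that BUFND remark - a minimal sketch, with an
illustrative data set name; AMP supplies the buffer count from JCL without
touching the program:

//KSDSIN   DD DSN=MY.VSAM.KSDS,DISP=SHR,
//            AMP=('BUFND=31')
//*  31 data buffers = one cylinder's worth of half-track CIs
//*  (2 per track x 15 tracks on 3390) plus one.

Whether 31 is still the right number on modern cached DASD is exactly the
open question Ron raises above.)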

 



> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Farley, Peter x23353
> Sent: Wednesday, January 27, 2010 11:51 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: [IBM-MAIN] why compression costs additional I/O?
> 
> Ron,
> 
> If a PS-E dataset has 6 stripes, are you recommending using NCP=96 (=16
> * 6)?  If so, what BUFNO should be used in that case?
> 
> A long time ago in a galaxy far, far away, a performance guru told me an
> ideal combination for PS datasets was to use half-track blocking and
> BUFNO=31 (1 cylinder's worth of buffers + 1).  I'd appreciate updated
> advice for the PS-E and compressed data world.
> 
> Peter
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> > Behalf Of Ron Hawkins
> > Sent: Wednesday, January 27, 2010 1:19 PM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: why compression costs additional I/O?
> >
> > Pawel,
> >
> > For a regular DSORG=PS dataset DFSORT and SYNCSORT use their own
> access
> > method to read and write the SORTIN and SORTOUT using very efficient
> long
> > chained Start Sub-Channels. The EXCP count reported for these datasets
> is
> > the Start SubChannel count.
> >
> > For DSORG=PS-E the sort products will use BSAM to read and write the
> > SORTIN and SORTOUT datasets. BSAM on Extended Format Datasets can be
> > efficient if you increase BUFNO and NCP, but the default of five is
> not
> > the worst thing that can happen. More importantly the EXCP count
> reported
> > for these datasets is the Block Count, and not the SSCH count. These
> are
> > usually mult-Cyl chains.
> >
> > One of the few problems with Extended Format datasets is that the
> block
> > chaining defaults are lousy. This is probably why your job is taking
> > longer with compression. BSAM, and QSAM, always use double buffering,
> so
> > whatever you specify is halved for chaining. I suggest that you add
> > DCB=NCP=n to your SORTIN and SORTOUT, where n=16 times number of
> stripes.
> >
> > If you want to check the actual IO count look at the SSCH count in the
> SMF
> > Type 46 subtype 6 records.
> >
> > One last thing is make sure that your SORTIN is compressed and
> buffered so
> > you get the benefit at the start and end of the SORT.
> >
> > Ron
> 
> 
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread Farley, Peter x23353
Ron,

If a PS-E dataset has 6 stripes, are you recommending using NCP=96 (=16
* 6)?  If so, what BUFNO should be used in that case?

A long time ago in a galaxy far, far away, a performance guru told me an
ideal combination for PS datasets was to use half-track blocking and
BUFNO=31 (1 cylinder's worth of buffers + 1).  I'd appreciate updated
advice for the PS-E and compressed data world.

Peter

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Ron Hawkins
> Sent: Wednesday, January 27, 2010 1:19 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: why compression costs additional I/O?
> 
> Pawel,
> 
> For a regular DSORG=PS dataset DFSORT and SYNCSORT use their own access
> method to read and write the SORTIN and SORTOUT using very efficient
> long-chained Start Sub-Channels. The EXCP count reported for these
> datasets is the Start SubChannel count.
> 
> For DSORG=PS-E the sort products will use BSAM to read and write the
> SORTIN and SORTOUT datasets. BSAM on Extended Format Datasets can be
> efficient if you increase BUFNO and NCP, but the default of five is not
> the worst thing that can happen. More importantly, the EXCP count
> reported for these datasets is the Block Count, and not the SSCH count.
> These are usually multi-Cyl chains.
> 
> One of the few problems with Extended Format datasets is that the block
> chaining defaults are lousy. This is probably why your job is taking
> longer with compression. BSAM and QSAM always use double buffering, so
> whatever you specify is halved for chaining. I suggest that you add
> DCB=NCP=n to your SORTIN and SORTOUT, where n=16 times the number of
> stripes.
> 
> If you want to check the actual IO count look at the SSCH count in the
> SMF Type 46 subtype 6 records.
> 
> One last thing: make sure that your SORTIN is compressed and buffered so
> you get the benefit at the start and end of the SORT.
> 
> Ron




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread Ron Hawkins
Pawel,

For a regular DSORG=PS dataset, DFSORT and SYNCSORT use their own access
method to read and write the SORTIN and SORTOUT using very efficient
long-chained Start Sub-Channels. The EXCP count reported for these datasets
is the Start SubChannel count.

For DSORG=PS-E, the sort products will use BSAM to read and write the SORTIN
and SORTOUT datasets. BSAM on Extended Format datasets can be efficient if
you increase BUFNO and NCP, but the default of five is not the worst thing
that can happen. More importantly, the EXCP count reported for these datasets
is the Block Count, and not the SSCH count. These are usually multi-Cyl
chains.

One of the few problems with Extended Format datasets is that the block
chaining defaults are lousy. This is probably why your job is taking longer
with compression. BSAM and QSAM always use double buffering, so whatever you
specify is halved for chaining. I suggest that you add DCB=NCP=n to your
SORTIN and SORTOUT, where n = 16 times the number of stripes.
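
(For a 6-stripe data set that rule of thumb gives NCP=96 - a minimal sketch,
with illustrative data set names and DATACLAS=EXT6STR standing in for a
hypothetical 6-stripe extended-format data class:

//SORTIN   DD DSN=MY.BIG.INPUT,DISP=SHR,
//            DCB=(NCP=96,BUFNO=96)
//SORTOUT  DD DSN=MY.BIG.OUTPUT,DISP=(NEW,CATLG),
//            DATACLAS=EXT6STR,
//            DCB=(NCP=96,BUFNO=96)

Note Yifat Oren's caveat elsewhere in the thread: for compressed-format data
sets, NCP should be left to default to 1.)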

If you want to check the actual I/O count, look at the SSCH count in the SMF
Type 46 subtype 6 records.

One last thing: make sure that your SORTIN is compressed and buffered so
you get the benefit at the start and end of the SORT.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Pawel Leszczynski
> Sent: Wednesday, January 27, 2010 2:56 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: [IBM-MAIN] why compression costs additional I/O?
> 
> Hello everybody,
> Recently we have been reviewing our EndOfDay jobs looking for potential
> performance improvements (reducing CPU/elapsed time).
> We have several jobs sorting big datasets where output is SMS-compressible
> (type: EXTENDED) datasets.
> When we compare such sorting with sorting on non-compressible output we
> can see this:
>                            EXCP   TCB   SRB   el.time
> TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
> TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression
> 
> We guess that the big SRB in (2) goes for compression (that we understand -
> we will probably quit compression altogether), but we don't understand the
> 2 times bigger EXCP in the second case.
> 
> Any ideas will be appreciated,
> Regards,
> Pawel Leszczynski
> PKO BP SA
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread Edward Jaffe

Pawel Leszczynski wrote:

Generally, all of this probably means that using DFSORT for compressed
datasets is not a good idea.


The EXCP access method is not supported for extended sequential data 
sets--whether compressed or not, striped or not. I/O for these data sets 
is performed by Media Manager which uses STARTIO and fully understands 
PREFIX, MIDAWs, zHPF, etc.


In general, Media Manager is the smartest, most efficient I/O service 
available on z/OS. Its I/O driver updates the EXCP counts "manually" 
(using the SMFIOCNT service) to try to give you something to measure. 
But, these are--in effect--made up numbers whereas a real EXCP exploiter 
gets the EXCP counts updated by the EXCP driver--one per EXCP SVC 
issued. Neither measures blocks transferred unless you're 
reading/writing only one block at a time.


The bottom line is that what you're looking at is an apples-to-oranges 
comparison of EXCP counts faked by Media Manager servicing BSAM requests 
vs DFSORT doing its own EXCP. IMHO, this comparison is meaningless.


If you want to see what's really being done, I suggest a GTF trace of 
the I/O against the input/output data sets. But beware, if you have a 
z10 and a DS8100 with System z High Performance FICON support, you will 
be looking at TRANSPORT MODE channel programs for the Media Manager I/O 
which you might not understand...


[...unless you come to SHARE in Seattle and attend Session 2253: zHPF 
Channel Programming - The Bits and Bytes, presented by David Bond and 
Yours Truly at 8:00 AM Thursday morning.] This will be a very technical 
session. Attend at your own risk! :-)
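
(For reference, a minimal sketch of starting such a GTF I/O trace - the
procedure, member, and trace data set names are illustrative, and the exact
trace options will vary:

//GTF      PROC MEMBER=GTFIO
//IEFPROC  EXEC PGM=AHLGTF,REGION=2880K,TIME=1440,
//             PARM='MODE=EXT,DEBUG=NO,TIME=YES'
//IEFRDER  DD DSN=SYS1.TRACE,DISP=OLD
//SYSLIB   DD DSN=SYS1.PARMLIB(&MEMBER),DISP=SHR

with the GTFIO member containing something like TRACE=IO,SSCH to capture the
I/O-interrupt and start-subchannel events, and IPCS GTFTRACE to format the
collected trace afterwards.)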


--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
310-338-0400 x318
edja...@phoenixsoftware.com
http://www.phoenixsoftware.com/

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread David Betten
> Generally, all of this probably means that using DFSORT for compressed
> datasets is not a good idea.

I'm not sure I would agree with a general statement such as that.

First, there is a CPU overhead associated with compression, and it affects
ALL applications, not just sort. The overhead is generally higher for write
than it is for read. In some cases, that overhead can be offset by reduced
data transfer, but that depends on how well the data compresses. You also
need to look at how the data set is used. If it's written once but read many
times, then you may get enough benefit on all those reads to warrant the
negative impact on the write.

Second, higher EXCPs do not necessarily mean higher I/Os. For BSAM, buffers
are used to store the blocks, which are then chained together into single
I/Os. So the increase in I/Os is likely much smaller than the increase in
EXCPs.
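
(Illustrative arithmetic only: if the 1462K "EXCPs" reported for the
compressed case are really block counts, and BSAM chains, say, 16 blocks per
start subchannel, the actual I/O count would be on the order of 1462K / 16,
roughly 91K starts - nowhere near double the uncompressed workload.)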

Third, you may want to consider multiple stripes so that data is transferred
in parallel. This won't reduce the I/Os, but it would allow multiple I/Os to
be done in parallel and reduce elapsed time.


I've never really considered compression as a means of improving
performance. I've heard all the arguments about less data being transferred,
but in all my years of batch tuning I never really saw a great enough impact
to offset the CPU cost. To me, compression is great for avoiding
out-of-space conditions and managing very large files. When performance is
the sole concern, I've always recommended extended format with multiple
stripes, but not compressed. Of course, that requires that you have the disk
space available to support storing the large data sets!


Have a nice day,
Dave Betten
DFSORT Development, Performance Lead
IBM Corporation
email:  bet...@us.ibm.com
DFSORT/MVSontheweb at http://www.ibm.com/storage/dfsort/

IBM Mainframe Discussion List  wrote on 01/27/2010 10:23:22 AM:

>
> Hi Yifat,
>
> Thanks for the answer - you are right! I've checked in the joblog:
>
> for compressed output:
>
>  0 SORTOUT  : BSAM USED
>
> but for non-compressed output:
>
> SORTOUT  : EXCP USED
>
> Generally, all of this probably means that using DFSORT for compressed
> datasets is not a good idea.
>
> Regards,
> Pawel
>
>
>
>
>
> On Wed, 27 Jan 2010 15:55:24 +0200, Yifat Oren 
> wrote:
>
> >Hi Pawel,
> >
> >The reason is the sort product can not use the EXCP access method with
the
> >compressed data set and instead chooses BSAM as the access method.
> >The EXCP access method usually reads or writes on a cylinder (or more)
> >boundary while BSAM, as its name suggests, reads or writes block by
block.
> >
> >Hope that helps,
> >Yifat Oren.
> >
> >-Original Message-
> >From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> >Behalf Of Pawel Leszczynski
> >Sent: Wednesday, January 27, 2010 12:56 PM
> >To: IBM-MAIN@bama.ua.edu
> >Subject: why compression costs additional I/O?
> >
> >Hello everybody,
> >Recently we have been reviewing our EndOfDay jobs looking for potential
> >performance improvements (reducing CPU/elapsed time).
> >We have several jobs sorting big datasets where output is
> >SMS-compressible (type: EXTENDED) datasets.
> >When we compare such sorting with sorting on non-compressible output we
> >can see this:
> >                           EXCP   TCB   SRB   el.time
> >TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
> >TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression
> >
> >We guess that the big SRB in (2) goes for compression (that we
> >understand - we will probably quit compression altogether), but we don't
> >understand the 2 times bigger EXCP in the second case.
> >
> >Any ideas will be appreciated,
> >Regards,
> >Pawel Leszczynski
> >PKO BP SA
> >

Re: why compression costs additional I/O?

2010-01-27 Thread Pawel Leszczynski
Hi Yifat,

Thanks for the answer - you are right! I've checked in the joblog:

for compressed output:

 0 SORTOUT  : BSAM USED

but for non-compressed output:

SORTOUT  : EXCP USED 

Generally, all of this probably means that using DFSORT for compressed datasets
is not a good idea.

Regards,
Pawel





On Wed, 27 Jan 2010 15:55:24 +0200, Yifat Oren  
wrote:

>Hi Pawel,
>
>The reason is the sort product can not use the EXCP access method with the
>compressed data set and instead chooses BSAM as the access method.
>The EXCP access method usually reads or writes on a cylinder (or more)
>boundary while BSAM, as its name suggests, reads or writes block by block.
>
>Hope that helps,
>Yifat Oren.
>
>-Original Message-
>From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>Behalf Of Pawel Leszczynski
>Sent: Wednesday, January 27, 2010 12:56 PM
>To: IBM-MAIN@bama.ua.edu
>Subject: why compression costs additional I/O?
>
>Hello everybody,
>Recently we have been reviewing our EndOfDay jobs looking for potential
>performance improvements (reducing CPU/elapsed time).
>We have several jobs sorting big datasets where output is SMS-compressible
>(type: EXTENDED) datasets.
>When we compare such sorting with sorting on non-compressible output we
>can see this:
>                           EXCP   TCB   SRB   el.time
>TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
>TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression
>
>We guess that the big SRB in (2) goes for compression (that we understand -
>we will probably quit compression altogether), but we don't understand the
>2 times bigger EXCP in the second case.
>
>Any ideas will be appreciated,
>Regards,
>Pawel Leszczynski
>PKO BP SA
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread Yifat Oren
Hi Pawel,

The reason is that the sort product cannot use the EXCP access method with a
compressed data set and instead chooses BSAM as the access method.
The EXCP access method usually reads or writes on a cylinder (or larger)
boundary, while BSAM, as its name suggests, reads or writes block by block.
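
(Rough numbers, assuming 3390 geometry and half-track blocking: at 2 blocks
per track and 15 tracks per cylinder, a cylinder-at-a-time channel program
moves about 30 blocks per start, where a strictly block-at-a-time access
method would need 30 separate operations for the same data - the flavor of
difference showing up in Pawel's EXCP figures.)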

Hope that helps,
Yifat Oren. 

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf
Of Pawel Leszczynski
Sent: Wednesday, January 27, 2010 12:56 PM
To: IBM-MAIN@bama.ua.edu
Subject: why compression costs additional I/O?

Hello everybody,
Recently we have been reviewing our EndOfDay jobs looking for potential
performance improvements (reducing CPU/elapsed time).
We have several jobs sorting big datasets where output is SMS-compressible
(type: EXTENDED) datasets.
When we compare such sorting with sorting on non-compressible output we can
see this:
                           EXCP   TCB   SRB   el.time
TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression

We guess that the big SRB in (2) goes for compression (that we understand -
we will probably quit compression altogether), but we don't understand the
2 times bigger EXCP in the second case.

Any ideas will be appreciated,
Regards,
Pawel Leszczynski
PKO BP SA


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread Pawel Leszczynski
On Wed, 27 Jan 2010 12:28:56 +0100, R.S. 
 wrote:

>On 2010-01-27 11:55, Pawel Leszczynski wrote:
>> Hello everybody,
>> Recently we have been reviewing our EndOfDay jobs looking for potential
>> performance improvements (reducing CPU/elapsed time).
>> We have several jobs sorting big datasets where output is
>> SMS-compressible (type: EXTENDED) datasets.
>> When we compare such sorting with sorting on non-compressible output we
>> can see this:
>>                            EXCP   TCB   SRB   el.time
>> TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
>> TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression
>>
>> We guess that the big SRB in (2) goes for compression (that we
>> understand - we will probably quit compression altogether), but we don't
>> understand the 2 times bigger EXCP in the second case.
>>
>> Any ideas will be appreciated,
>
>EXCP doesn't mean there are more data. EXCP depends on BLKSIZE. A simple
>test with IEBGENER will show the larger BLKSIZE the smaller number of
>EXCP's.
>Surely the amount of data measured in MB is smaller when compression is
>ON, because compression takes place in CPC, before data is sent to the
>channel (assumed compressible data).
>
>--
>Radoslaw Skorupka
>Lodz, Poland
>--
>BRE Bank SA
>ul. Senatorska 18
>00-950 Warszawa
>www.brebank.pl
>

Radek,

Thanks for your fast answer (one can always count on you).
Of course, I realize that if I had a different blocksize then the EXCP count
would be different, but here the situation is like this:

non-compressed output blocksize: 27903
compressed output blocksize:     32750

so in the second case the number of EXCPs should be smaller, unless (as you
suggested) for Extended format the real low-level blocksize is much smaller
(4kB or so).
Anyway, do you agree that the SRB time comes from compression?

Regards,
Pawel Leszczynski
PKO BP SA

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread John Kington
Pawel,


>Hello everybody,
>Recently we have been reviewing our EndOfDay jobs looking for potential
>performance improvements (reducing CPU/elapsed time).
>We have several jobs sorting big datasets where output is SMS-compressible
>(type: EXTENDED) datasets.
>When we compare such sorting with sorting on non-compressible output we
>can see this:
>                           EXCP   TCB   SRB   el.time
>TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
>TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression
>
>We guess that the big SRB in (2) goes for compression (that we understand -
>we will probably quit compression altogether), but we don't understand the
>2 times bigger EXCP in the second case.

I recommend you ask your sort vendor to get the answer. I am sure they would be
happy to explain how their EXCP routines are more efficient.

Regards,
John

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread NIGEL WOLFENDALE
Radoslaw,

I can understand from your explanation that we would get the same number of
EXCPs, but not twice as many. If, say, we are backing up 1000 30K blocks, and
compression reduces the size of each block to, say, 10K, then we will be
backing up 1000 10K blocks and the number of EXCPs will stay the same. I could
imagine it might go up slightly to allow for some 'control info' saying 'this
data is compressed', but I would hope that such numbers would be low. I would
also hope - perhaps naively - that z/OS / IOS (or whoever is doing the
compressing) would write blocks in big lumps of 30K or more, since it is now
in control of the data, rather than the user.

Perhaps it does something 'stupid' like only being able to send 4K blocks, or
use its own buffering, rather than the default of 5 - or any larger number you
specified originally.

Nigel
 Nigel Wolfendale
nigel.wolfend...@btinternet.com
+44(0)1494 723092
+966(0)540217367 





From: R.S. 
To: IBM-MAIN@bama.ua.edu
Sent: Wednesday, 27 January, 2010 14:28:56
Subject: Re: why compression costs additional I/O?

On 2010-01-27 11:55, Pawel Leszczynski wrote:
> Hello everybody,
> Recently we have been reviewing our EndOfDay jobs looking for potential
> performance improvements (reducing CPU/elapsed time).
> We have several jobs sorting big datasets where output is SMS-compressible
> (type: EXTENDED) datasets.
> When we compare such sorting with sorting on non-compressible output we
> can see this:
>                            EXCP   TCB   SRB   el.time
> TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
> TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression
> 
> We guess that the big SRB in (2) goes for compression (that we understand -
> we will probably quit compression altogether), but we don't understand the
> 2 times bigger EXCP in the second case.
> 
> Any ideas will be appreciated,

Paweł,
EXCP doesn't necessarily reflect the amount of data. If you copy the same
data set to tape with IEBGENER using different blocksizes, you will get
different EXCP counts (smaller with a larger BLKSIZE). So a larger EXCP count
doesn't necessarily translate into a heavier load on the disk subsystem. You
are certainly writing less data in [MB], because the compression takes place
in the CPC, before the data is sent to the channel (assuming the data
compresses).
BTW: I haven't checked, but I suspect that a COMPRESSED PS-ext data set may
have physical 4kB blocks (invisible to the application) - like a PDSE.
HTH

--
Radoslaw Skorupka
Lodz, Poland


--
BRE Bank SA
ul. Senatorska 18
00-950 Warszawa
www.brebank.pl




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: why compression costs additional I/O?

2010-01-27 Thread R.S.

On 2010-01-27 11:55, Pawel Leszczynski wrote:

Hello everybody,
Recently we have been reviewing our EndOfDay jobs looking for potential
performance improvements (reducing CPU/elapsed time).
We have several jobs sorting big datasets where output is SMS-compressible
(type: EXTENDED) datasets.
When we compare such sorting with sorting on non-compressible output we
can see this:
                           EXCP   TCB   SRB   el.time
TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression

We guess that the big SRB in (2) goes for compression (that we understand - we
will probably quit compression altogether), but we don't understand the 2 times
bigger EXCP in the second case.

Any ideas will be appreciated,


Paweł,
EXCP doesn't necessarily reflect the amount of data. If you copy the same
data set to tape with IEBGENER using different blocksizes, you will get
different EXCP counts (smaller with a larger BLKSIZE). So a larger EXCP count
doesn't necessarily translate into a heavier load on the disk subsystem. You
are certainly writing less data in [MB], because the compression takes place
in the CPC, before the data is sent to the channel (assuming the data
compresses).
BTW: I haven't checked, but I suspect that a COMPRESSED PS-ext data set may
have physical 4kB blocks (invisible to the application) - like a PDSE.

HTH
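
(A minimal sketch of that IEBGENER experiment - data set names are
illustrative; run the copy twice, changing only the BLKSIZE on SYSUT2, and
compare the EXCP counts reported for the two steps:

//COPY1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.TEST.DATA,DISP=SHR
//SYSUT2   DD DSN=MY.TEST.COPY1,DISP=(NEW,CATLG),UNIT=SYSDA,
//            SPACE=(CYL,(50,10)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//SYSIN    DD DUMMY
//COPY2    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.TEST.DATA,DISP=SHR
//SYSUT2   DD DSN=MY.TEST.COPY2,DISP=(NEW,CATLG),UNIT=SYSDA,
//            SPACE=(CYL,(50,10)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//SYSIN    DD DUMMY

Assuming FB/80 input, the half-track BLKSIZE of 27920 should report roughly
35 times fewer EXCPs than the 800-byte version for the same data.)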


--
Radoslaw Skorupka
Lodz, Poland


--
BRE Bank SA
ul. Senatorska 18
00-950 Warszawa
www.brebank.pl

Sąd Rejonowy dla m. st. Warszawy 
XII Wydział Gospodarczy Krajowego Rejestru Sądowego, 
nr rejestru przedsiębiorców KRS 025237

NIP: 526-021-50-88
Według stanu na dzień 01.01.2009 r. kapitał zakładowy BRE Banku SA (w całości 
wpłacony) wynosi 118.763.528 złotych. W związku z realizacją warunkowego 
podwyższenia kapitału zakładowego, na podstawie uchwały XXI WZ z dnia 16 marca 
2008r., oraz uchwały XVI NWZ z dnia 27 października 2008r., może ulec 
podwyższeniu do kwoty 123.763.528 zł. Akcje w podwyższonym kapitale zakładowym 
BRE Banku SA będą w całości opłacone.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


why compression costs additional I/O?

2010-01-27 Thread Pawel Leszczynski
Hello everybody,
Recently we have been reviewing our EndOfDay jobs looking for potential
performance improvements (reducing CPU/elapsed time).
We have several jobs sorting big datasets where output is SMS-compressible
(type: EXTENDED) datasets.
When we compare such sorting with sorting on non-compressible output we
can see this:
                           EXCP   TCB   SRB   el.time
TESTXWP5   STEP110 00      757K   3.51   .70    9.01  <-- w/o compression
TESTXWP5   STEP120 00     1462K   3.62  2.89   10.45  <-- w/ compression

We guess that the big SRB in (2) goes for compression (that we understand - we
will probably quit compression altogether), but we don't understand the 2 times
bigger EXCP in the second case.

Any ideas will be appreciated,
Regards,
Pawel Leszczynski
PKO BP SA

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

