Hi,
couldn't you copy the sequential datasets as members into one large PO dataset and TERSE that one? On the other side you would UNTERSE it back into a PO dataset and unload the members with IEBGENER.
Tim Hare wrote:
We need to TERSE a fairly large (for us) amount of data. This data is in
multiple separate datasets now, but needs to be sent as one large sequential
dataset. We can TERSE the concatenated sequential input of course; but out
of curiosity I'm wondering: can you TERSE the individual components,
concatenate the results via IEBGENER, and then UNTERSE the resulting file on
the other end?
From what I remember about Lempel-Ziv, the "dictionary" is built as you go
along, so wouldn't the second and subsequent concatenated files be read with
incomplete dictionary information, resulting in erroneous decompression
results?
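Tim's suspicion can be demonstrated with any LZ-family compressor. Below is a minimal Python sketch using zlib (DEFLATE, an LZ77 derivative) as a stand-in for TERSE; this is an illustrative assumption, not TERSE itself. It shows that concatenating two independently compressed streams and decompressing in one pass recovers only the first stream, because the decompressor treats the first end-of-stream marker as the end of the data:

```python
import zlib

# Two datasets, each compressed ("TERSEd") independently.
first = b"first sequential dataset " * 50
second = b"second sequential dataset " * 50
stream_a = zlib.compress(first)
stream_b = zlib.compress(second)

# Concatenate the compressed results, as IEBGENER would.
combined = stream_a + stream_b

# A single decompression pass stops at the end of the first stream:
d = zlib.decompressobj()
out = d.decompress(combined)
assert out == first                # only the first dataset comes back
assert d.eof                       # decompressor hit end-of-stream
assert d.unused_data == stream_b   # second stream is left untouched
```

So the failure mode is not so much a corrupted dictionary (each compressed stream carries or rebuilds its own) as the fact that each compressed file is a self-contained stream with its own header and end marker; a single UNTERSE pass would stop at the first one.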
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
--
___________________________________________________________________
Freundliche Gruesse / Kind regards
Dipl.Math. Juergen Kehr, IT Schulung & Beratung, IT Education + Consulting
Tel. +49-561-9528788 Fax +49-561-9528789 Mobil +49-172-5129389
ICQ 292-318-696 (JKehr)
mailto:[EMAIL PROTECTED]
mailto:[EMAIL PROTECTED]
___________________________________________________________________