Re: AWS and IDRC/compression
As an aside, my z/Appliance VTA/VTS has its own format that compresses, then encrypts, image data both at rest and in flight across TCP networks. We also provide tools to convert several well-known formats to our format and back. We add 128 bits of CRC to each block to ensure data integrity and prevent man-in-the-middle alteration of the data. Our method also reduces server disk service times, because we shrink the data by as much as 75%, which means less data going through the disk drivers and the OS. This is unique to our server, as we do not depend on the disk controller to cache and encrypt the data. The payback is large: when you retrieve the data, it is passed in its smallest form, whether for reading or for mirroring to a DR site.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
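The appliance format described above is proprietary, but the per-block pipeline it sketches (compress, encrypt, then attach a 128-bit integrity check) can be illustrated generically. This is a rough stdlib-only sketch, not the vendor's actual format: zlib stands in for the compressor, a 128-bit BLAKE2b tag stands in for the 128-bit CRC, and the encryption step is only noted in a comment since it would need a crypto library such as AES from `cryptography`.

```python
import hashlib
import struct
import zlib

def pack_block(data: bytes) -> bytes:
    """Compress a tape block, then append a 128-bit integrity tag.

    Stand-in for the appliance's proprietary format. A real
    implementation would encrypt the compressed payload (e.g. AES)
    before computing the tag.
    """
    payload = zlib.compress(data, level=6)
    tag = hashlib.blake2b(payload, digest_size=16).digest()  # 128 bits
    # 4-byte length prefix so blocks can be streamed back-to-back
    return struct.pack("<I", len(payload)) + payload + tag

def unpack_block(blob: bytes) -> bytes:
    """Verify the tag, then decompress. Raises on tampering."""
    (plen,) = struct.unpack_from("<I", blob, 0)
    payload = blob[4:4 + plen]
    tag = blob[4 + plen:4 + plen + 16]
    if hashlib.blake2b(payload, digest_size=16).digest() != tag:
        raise ValueError("block integrity check failed")
    return zlib.decompress(payload)

block = b"HDR1 SAMPLE DATASET " * 100   # repetitive data compresses well
packed = pack_block(block)
assert unpack_block(packed) == block
assert len(packed) < len(block)          # large shrink for text-like data
```

Because the retrieved block stays in its packed form until the final `unpack_block`, the same small payload can be shipped to a DR site without re-expanding it, which is the payback the post describes.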
Re: AWS and IDRC/compression
IBM zPDT and all of its variants added compression as an option to the AWS tape format, much as the HET format did. The question is which compression mechanism they used. IBM has recently switched from LZ4 to zSTD compression in various mainframe components, such as DDR, so I would guess it is probably one of those. Keep in mind that zPDT can read either without an issue; which one is used at tape creation is a configuration option.
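The post doesn't say how zPDT marks which codec a tape was written with. If blocks are stored as standard compressed frames, though, the two candidates can be told apart by their frame magic numbers (from the zstd and LZ4 frame format specifications); this sketch assumes that layout and is only illustrative of how a reader could "read either without an issue".

```python
# Sniff which codec produced a compressed chunk by its frame magic.
# Assumes the chunk starts with a standard frame header; zPDT's real
# container layout may wrap these differently.
ZSTD_MAGIC = bytes.fromhex("28b52ffd")  # zstd frame, LE 0xFD2FB528
LZ4_MAGIC = bytes.fromhex("04224d18")   # LZ4 frame, LE 0x184D2204

def sniff_codec(chunk: bytes) -> str:
    if chunk.startswith(ZSTD_MAGIC):
        return "zstd"
    if chunk.startswith(LZ4_MAGIC):
        return "lz4"
    return "unknown"

assert sniff_codec(bytes.fromhex("28b52ffd00000000")) == "zstd"
assert sniff_codec(bytes.fromhex("04224d18deadbeef")) == "lz4"
assert sniff_codec(b"\x1f\x8bxx") == "unknown"  # gzip, for contrast
```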
Re: AWS and IDRC/compression
And I want data interchange with IBM and Broadcom, both of which accept .AWS files based on z/VSE's VTAPE implementation (which uses AWS). Currently, I just take an .AWS tape off my VTA and FTP it to them, and they can read it. I really don't want to have to convert files before sending them. HET looks promising.

Tony Thigpen
Re: AWS and IDRC/compression
I think Tony was looking for a way to do it transparently, so that no changes were needed to the programs reading or writing what they see as a real tape. Your approach requires modifying the host program or jobstream.
Re: AWS and IDRC/compression
I don't even understand what the problem is. AWS is simply a disk format for a sequential file of tape blocks. What is inside those blocks is for the program that reads/writes them to decide; AWS does not decide that. You can easily compress, slice into blocks, and write to AWS. Sam Golob's article on AWSTAPE makes that pretty clear: https://www.cbttape.org/awstape.htm

Joe
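Joe's point that AWS is just a container of length-prefixed chunks is easy to see in code. Per the Hercules AWSTAPE description, each chunk carries a 6-byte header: current chunk length (2 bytes, little-endian), previous chunk length (2 bytes, little-endian), and two flag bytes, with a tapemark being a zero-length chunk. The flag values below follow that write-up but should be treated as illustrative; check the format reference before relying on them.

```python
import io
import struct

# AWSTAPE chunk header: <curlen:2 LE> <prvlen:2 LE> <flags1> <flags2>
NEWREC, TAPEMARK, ENDREC = 0x80, 0x40, 0x20  # flags1 bits (illustrative)

def write_block(f, data: bytes, prvlen: int) -> int:
    """Write one tape block as a single AWS chunk; return its length."""
    f.write(struct.pack("<HHBB", len(data), prvlen, NEWREC | ENDREC, 0))
    f.write(data)
    return len(data)

def write_tapemark(f, prvlen: int) -> int:
    f.write(struct.pack("<HHBB", 0, prvlen, TAPEMARK, 0))
    return 0

def read_chunks(f):
    while (hdr := f.read(6)):
        curlen, prvlen, f1, f2 = struct.unpack("<HHBB", hdr)
        yield ("tapemark" if f1 & TAPEMARK else "data"), f.read(curlen)

buf = io.BytesIO()
prv = write_block(buf, b"VOL1TEST01", 0)
prv = write_block(buf, b"HDR1SAMPLE", prv)
write_tapemark(buf, prv)

buf.seek(0)
kinds = [k for k, _ in read_chunks(buf)]
assert kinds == ["data", "data", "tapemark"]
```

Nothing here cares what the data bytes are, which is exactly why compressed payloads fit inside AWS blocks without changing the container.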
Re: AWS and IDRC/compression
Why is IDRC compatibility an issue when you're not using a physical cartridge or a physical drive that supports IDRC? If you modify AWS to support compression, use whatever format gives you the best results.

--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
Re: AWS and IDRC/compression
Yes, I concur. Not the same, but somewhat alike. IDRC can also be software based; I found the same compaction algorithm as IDRC in z/VSE one time. I think it was in Power Pnet, but it's been a while.

Tony Thigpen
Re: AWS and IDRC/compression
Tony,

Compression (as in ZIP) and compaction (as in IDRC tapes) are not the same process. IDRC exploits the nature of IDRC-compatible physical tape cartridges by writing everything using the cartridge's internal optimal physical block size. This is done by the cartridge drive's controller. So ZIP-based VTS compression, such as HET, is software, while IDRC compaction is hardware. There is no compression involved in IDRC, just hardware compaction. IDRC uses a process called "autoblocking" to transparently optimize how much data can fit on the cartridge's media by exploiting its optimal physical block size.

Harry
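The software side of Harry's contrast, HET-style per-block compression in the emulated drive, can be sketched in a few lines. This shows only the principle (compress between the channel program and the disk file, decompress on the way back, invisible to the writing/reading program); HET's real headers and flag bits are richer, and zlib is just the codec HET is commonly described as using.

```python
import zlib

def drive_write(store: list, block: bytes) -> None:
    """Emulated drive: compress each block before it hits the disk file."""
    store.append(zlib.compress(block))

def drive_read(store: list, n: int) -> bytes:
    """Decompress on read-back, so the program sees the original block."""
    return zlib.decompress(store[n])

tape = []
drive_write(tape, b"CUSTOMER RECORD " * 200)      # one 3200-byte block
assert drive_read(tape, 0) == b"CUSTOMER RECORD " * 200
assert len(tape[0]) < 3200                         # stored smaller on disk
```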
Re: AWS and IDRC/compression
Yeah, there is no compression at all in the native AWS format. If your VTA won't do HET or something similar, then you're stuck with AWS and compressing after the fact.

--
Jay Maynard
Re: AWS and IDRC/compression
I am working with my VTA vendor to reduce storage usage on the appliance. Currently, they can compress after the unmount and uncompress before the mount. But this takes time, especially when servicing the mount request if the tape is large.

I was thinking that an IDRC-style implementation, which is stream based and performed during write/read, might be faster even if it doesn't compress as much as their current method.

But if the AWS file is not compatible with IBM's implementation, then it's going to add a step to send them the file. The current compressed files can be uncompressed using standard Linux tools.

Tony Thigpen
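The difference Tony is after, whole-file compression at unmount versus stream-based compression during write/read, is the difference between a batch pass and an incremental compressor. A hypothetical sketch using Python's streaming zlib objects (zlib is a stand-in; IDRC's actual algorithm is different) shows how blocks can be compressed as they arrive, so nothing has to be re-processed at mount or unmount time:

```python
import zlib

# Stream-based compression: each block is compressed as it is written
# (IDRC-like timing), so no whole-image pass is needed at unmount/mount.
co = zlib.compressobj()
stored = [
    co.compress(b"PAYROLL RECORD " * 300),  # block 1, compressed on write
    co.compress(b"LEDGER RECORD " * 300),   # block 2, compressed on write
    co.flush(),                             # drain the compressor at close
]

# On mount, blocks decompress incrementally as they are read back.
do = zlib.decompressobj()
restored = b"".join(do.decompress(chunk) for chunk in stored)
assert restored == b"PAYROLL RECORD " * 300 + b"LEDGER RECORD " * 300
```

The mount-time win is that the drive only decompresses the blocks actually read, instead of expanding the entire image before the first read can be serviced.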
Re: AWS and IDRC/compression
One of the vendors that uses AWS is Optica; their z/VT uses the AWS format. It can also compress internally, but you don't have to.

Brian
Re: AWS and IDRC/compression
I'm curious. What are you trying to accomplish with it? If it's just a matter of faster transmission of entire tape images, AWS tapes compress very well.

On Fri, Jul 29, 2022 at 8:38 PM Tony Thigpen wrote:
> Yes. But, it sounds like nobody else will support it as a data
> interchange, so it may be unusable for us.
>
> I will go look at it.
>
> Tony Thigpen
>
> Jay Maynard wrote on 7/29/22 06:38:
> > Are you talking about the tape data being compressed inside the AWS image?
> > Hercules has a format that does this, upwardly compatible with AWS, called
> > HET (Hercules Emulated Tape), but I don't know of any other implementations
> > of it. Each block is compressed after being received from the program
> > writing the tape but before being written to the file and uncompressed
> > after being read but before being returned to the program reading the tape.
> >
> > On Fri, Jul 29, 2022 at 3:56 AM Tony Thigpen wrote:
> >
> >> Does anyone know of any 'standard' for stream based (during file
> >> creation) compression of AWS tapes?
> >>
> >> Tony Thigpen

--
Jay Maynard
Re: AWS and IDRC/compression
For quite a while I tried to talk with my MFaaS provider about using .aws files as a transportable archive, potentially readable on Windows as read-only. I know that some Virtual Tape Appliances use this format. After several months of confusion with the bookstore, there was some understanding, but no viable proposals.

> -----Original Message-----
> From: IBM Mainframe Discussion List On Behalf Of Tony Thigpen
> Sent: Friday, July 29, 2022 6:38 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: AWS and IDRC/compression
>
> Yes. But it sounds like nobody else will support it as a data interchange, so it may be unusable for us.
>
> I will go look at it.
-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AWS and IDRC/compression
Yes. But it sounds like nobody else will support it as a data interchange, so it may be unusable for us.

I will go look at it.

Tony Thigpen

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: AWS and IDRC/compression
Are you talking about the tape data being compressed inside the AWS image? Hercules has a format that does this, upwardly compatible with AWS, called HET (Hercules Emulated Tape), but I don't know of any other implementations of it. Each block is compressed after being received from the program writing the tape but before being written to the file, and uncompressed after being read but before being returned to the program reading the tape.

On Fri, Jul 29, 2022 at 3:56 AM Tony Thigpen wrote:
> Does anyone know of any 'standard' for stream based (during file creation) compression of AWS tapes?
>
> Tony Thigpen

-- Jay Maynard

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
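For readers unfamiliar with the container being discussed: an AWS image frames each tape block with a 6-byte header (current block length and previous block length as little-endian 16-bit values, then two flag bytes; a tape mark is a zero-length block with a flag bit set). The sketch below walks those headers; the specific flag values follow the common Hercules convention but are an assumption here and should be verified against Sam Golob's AWSTAPE write-up before relying on them. A HET-style writer would compress each payload before emitting it and record that in the flag bytes:

```python
import struct

# AWS block header: cur_len, prev_len (little-endian u16), 2 flag bytes
AWS_HDR = struct.Struct("<HH2s")
FLAG_TAPEMARK = 0x40  # flags1 bit marking a tape mark (assumed value)

def read_aws_blocks(data: bytes):
    """Yield (payload, is_tapemark) for each framed block in an AWS image.

    This only parses the container. Per-block compression in the HET
    style would compress `payload` before writing (and set a flag bit),
    then decompress on read -- the framing itself is unchanged, which
    is why HET can stay upwardly compatible with AWS.
    """
    pos = 0
    while pos + AWS_HDR.size <= len(data):
        cur_len, _prev_len, flags = AWS_HDR.unpack_from(data, pos)
        pos += AWS_HDR.size
        payload = data[pos:pos + cur_len]
        pos += cur_len
        yield payload, bool(flags[0] & FLAG_TAPEMARK)
```

Because each block is self-describing, a stream-based compressor can process blocks as they are written, which is exactly the "during file creation" behavior the original question asks about.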