The first (maybe only) hardware I know of that claimed no wasted space was the 
STK Iceberg, which was touted as being so virtual that an emulated 3390 track 
left no unused track bits at all. I never worked with one, but I heard horror 
stories about *all* the data becoming unreachable when an Iceberg lost its 
brains and couldn't find anything. ;-(

For any sort of conventional emulation, I stand by my earlier point about the 
tradeoff between massive and minuscule blocksizes. Agreed that this applies to 
sequential processing, but there's plenty of that in current applications, 
especially for shops intent on eliminating all tape in favor of all DASD. 
There's a tremendous amount of overhead in performing I/O: the more I/Os you 
do, the longer an application runs for a given volume of data. Every 
(sequential) I/O transfers one physical record, a.k.a. block. Hence the larger 
the block--physical or emulated--the fewer I/Os you have to perform for a 
given file.
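
To put rough numbers on that, here is a back-of-the-envelope sketch in plain 
Python (nothing IBM-specific; the LRECL and record count are made-up 
assumptions, purely for illustration):

# Back-of-the-envelope sketch: sequential I/O count vs. blocksize for FB.
# LRECL and file size are assumed values, not taken from any real dataset.
LRECL = 80               # assumed fixed logical record length
NUM_RECORDS = 1_000_000  # assumed file size: one million records

for blksize in (80, 3_120, 27_920, 32_720):
    recs_per_block = blksize // LRECL             # FB: whole records per block
    io_count = -(-NUM_RECORDS // recs_per_block)  # ceiling division
    print(f"BLKSIZE={blksize:>6}: {io_count:>9,} sequential I/Os")

Unblocked (BLKSIZE=80) takes a million I/Os; half-track blocking (27,920 for 
LRECL 80) takes fewer than three thousand.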

So why not define all blocks as 32K? For RECFMs other than FB, that makes 
sense, and the SDB algorithms take it into account. For RECFM=FB, however, 
there would be an egregious amount of unusable space on each 3390 track: 
nothing in z/Architecture can split a fixed block across tracks, and leftover 
space on a track cannot be used for anything else. So for FB data, SDB 
generally recommends half-track blocking: maximum data transfer per I/O, 
minimum wasted track space.
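
As a rough illustration, here is a deliberately simplified Python sketch (it 
ignores inter-block gaps and other device overhead, which cost proportionally 
more as blocks shrink) of how much of a 3390 track's nominal 56,664 bytes 
various FB blocksizes can actually occupy:

# Simplified model: track utilization for FB blocks on a 3390.
# Ignores inter-block overhead; numbers are illustrative, not exact.
TRACK_BYTES = 56_664  # nominal 3390 user data per track (single-block case)

for blksize in (32_760, 27_998, 18_452):
    blocks = TRACK_BYTES // blksize   # a fixed block cannot span tracks
    used = blocks * blksize
    print(f"BLKSIZE={blksize:>6}: {blocks} block(s)/track, "
          f"{used:,}/{TRACK_BYTES:,} bytes used ({used/TRACK_BYTES:.0%})")

A single 32,760-byte block per track strands roughly 42% of the track; two 
half-track blocks of 27,998 bytes use about 99% of it.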

   

J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Charles Mills
Sent: Sunday, May 21, 2017 6:53 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: SDB (system determined Blksize)

As I said, I am no expert. My point was simply to give an example to illustrate 
the answer to

> Where would it be assigned or accounted for?  If you ignored such 
> waste, you could have more capacity available than the volumes you've 
> defined.

and illustrate that defined apparent 3390 space could be greater than actual 
occupied hardware space.

Good discussion of CoW here: 
http://stackoverflow.com/questions/628938/what-is-copy-on-write 

Charles

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Saturday, May 20, 2017 7:46 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SDB (system determined Blksize)

On Sat, 20 May 2017 13:33:09 -0700, Charles Mills wrote:

>Consider for example "flash copy" and similar technologies. The DASD 
>subsystem is able to make a "copy" of an entire volume without using 
>any significant amount of actual honest-to-gosh disk space.
>
>It's a little hard to explain the technology in a quick e-mail 
>paragraph but basically the controller makes a "pretend" copy of the 
>disk by making a duplicate copy of an "index" to all of the volume's 
>tracks. Whenever a track changes, it creates the track image in new 
>disk space and updates the index to point to that track. Lets companies 
>make an internally consistent backup of an entire DB2 volume while only 
>having to "freeze" DB2 for a second or so.
>
The technique is known as "Copy on Write".  CoW is also used by quality 
implementations of fork(), by ZFS (not zFS; the real one; GIYF), by btrfs, and 
by old StorageTek products, Iceberg and EchoView.
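
For anyone who wants the index-duplication trick described above in miniature, 
here is a toy Python sketch (pure illustration; it bears no relation to actual 
FlashCopy or Iceberg internals):

# Toy copy-on-write "flash copy": the snapshot duplicates only the index.
class Volume:
    def __init__(self, index):
        self.index = index            # track number -> track image (shared)

    def snapshot(self):
        # "Flash copy": duplicate the index, not the track data themselves.
        return Volume(dict(self.index))

    def write(self, track, data):
        # Copy on write: only a changed track consumes new space.
        self.index[track] = data

source = Volume({0: "track-0 v1", 1: "track-1 v1"})
backup = source.snapshot()     # near-instant, near-zero extra space
source.write(0, "track-0 v2")  # a new track image is allocated only now
print(backup.index[0])         # -> track-0 v1  (consistent point-in-time copy)
print(source.index[0])         # -> track-0 v2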

