LUW works similarly to z/OS UNIX file systems. That is, there is a "file
system" which is formatted using some utility (mkfs in the Linux/UNIX
world, format in Windows). This sets up all the internals. In today's LUW,
it is usually possible for a single file to be as big as the file system
upon which it resides, but no bigger. There is nothing like a "multi file
system" file (which would be vaguely like a multivolume data set). I'm not
too Windows literate any more, but it used to be that a file system had to
reside on a single disk (or in a partition of that disk). Linux/UNIX used
to be that way too, but Linux/UNIX now implements something called LVM
(Logical Volume Manager). In short, LVM can "stitch together" a number of
physical disk volumes (each called a PV, for Physical Volume), or
partitions, into a Volume Group (VG), and then subdivide that aggregate
space into one or more Logical Volumes (LVs). A Logical Volume can be
created in many ways, such as using software RAID or striping. The admin
then formats a file system on the Logical Volume. Even after creating the
Volume Group, the storage admin can add another disk to it as a new PV
(like adding a volume to a storage group in SMS). The storage admin can
then use the new space for another LV or to extend an existing LV.
Depending on the file system formatted on the LV, it might even be
possible to tell the file system to start using the newly added space.
Most of the current Linux file systems can at least be extended while they
are unmounted (not actively used). So, like a zFS file system on z/OS,
using something like EXT4 (or BTRFS) and LVM, it is possible to
dynamically expand the size of a file system. I don't know for certain,
but I doubt that Windows can do this kind of dynamic expansion at all. To
control allocation, Linux and UNIX can use disk quotas. I don't know much
about that since I don't use it on my personal systems.
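To make the above concrete, the pool-then-grow workflow looks roughly like
this in shell. The device names (/dev/sdb etc.) and the VG/LV names are
made up for illustration; every command needs root and will destroy data
on the named disks, so treat it as a sketch, not something to paste in
verbatim:

```shell
# Initialize two disks as Physical Volumes and pool them into a
# Volume Group named vg_data.
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc

# Carve a 100 GiB Logical Volume out of the pool, format it ext4,
# and mount it.
lvcreate --name lv_data --size 100G vg_data
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /data

# Later: add a third disk to the pool (analogous to adding a volume
# to an SMS storage group), then extend the LV and the file system.
pvcreate /dev/sdd
vgextend vg_data /dev/sdd
lvextend --size +50G /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data   # ext4 can be grown while mounted
```

The last step is the zFS-like part: the file system picks up the newly
added space without re-creating it.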

On Mon, Jun 10, 2013 at 11:15 AM, Blaicher, Christopher Y. <
[email protected]> wrote:

> I am not an LUW person, other than I use a Windows machine for simple
> things, so I am curious how external storage is allocated and controlled in
> that environment.  I think we have all heard the complaints about the
> shortcomings of MVS in this area, but what would be a realistic solution?
>
> I would imagine the people at IBM have spent a little time on this, and if
> it were easy they would have started transitioning us from ECKD to 'the new
> way' a long time ago.  The idea of a 'runaway' program is the hang-up.
>
> Chris Blaicher
> Principal Software Engineer, Software Development
> Syncsort Incorporated
> 50 Tice Boulevard, Woodcliff Lake, NJ 07677
> P: 201-930-8260  |  M: 512-627-3803
> E: [email protected]
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[email protected]] On
> Behalf Of Farley, Peter x23353
> Sent: Monday, June 10, 2013 10:38 AM
> To: [email protected]
> Subject: Storage paradigm [was: RE: Data volumes]
>
> <Rant>
> Like a few others on this list, I have often gritted my teeth at the
> necessity to estimate, in a fixed manner (i.e., SPACE in JCL), disk
> storage quantities that vary widely over time, when the true need is
> just to match output volume to input volume each day.
>
> Why is it that IBM (and organizations that use their mainframe systems) so
> vigorously resist a conversion off of the ECKD "standard"?  (Yes, I know
> it's all about "conversion cost", but in the larger picture that is a red
> herring.)  Not that I'm likely to see such a transition in my lifetime, but
> in this dawning time of soi-disant "big data", perhaps it is past time to
> change the storage paradigm entirely, not from ECKD to FBA but to
> transition instead to something like the Multics model where every object
> in the system (whether in memory or on external storage, whether data or
> program) has an address, and all addresses are unique.  Let the storage
> subsystem decide how to optimally position and aggregate the various parts
> of objects, and how to organize them for best performance.  Such decisions
> should not require human guesstimate input to be optimal, or nearly so.
>  Characteristics of application access are far more critical specifications
> than mere size.  The ability to specify just the desired application access
> characteristics (random, sequential, growing, shrinking,
> response-time-critical, etc.) should be necessary and sufficient.
>
> EAV or not EAV, guaranteed space or not, candidate volumes, striped or not
> striped, compressed or not compressed - all of that baggage is clearly
> non-optimal for getting the job done in a timely manner.  Why should
> allocating a simple sequential file require a team of "Storage
> Administration" experts to accomplish effectively?
> </Rant>
>
> Peter
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[email protected]] On
> Behalf Of Ed Jaffe
> Sent: Sunday, June 09, 2013 10:47 AM
> To: [email protected]
> Subject: Re: Data volumes
>
> On 6/9/2013 7:12 AM, Scott Ford wrote:
> > We need bigger dasd ...ouch
>
> The largest 3390 volumes in our tiny shop hold 3,940,020 tracks or 262,668
> cylinders. That is the maximum size supported by the back-level DASD we are
> running. Newer DASD hardware can support volumes up to 1TB in size. I
> assume nearly all zEC12 and z196 customers are capable of exploiting these
> large sizes. But, do they?
>
> I spent three years dealing with, and eventually helping IBM to solve (via
> OA40210 - HIPER, DATALOSS), a serious EAV bug that should have been seen in
> most every shop in the world that uses the DFSMSdss CONSOLIDATE function
> (with or without DEFRAG). The experience was a real eye-opener for me and I
> concluded that almost nobody is using EAV!
>
> Why not? Personally, I would find it embarrassing if the Corsair thumb
> drive in my pocket held more data than our largest mainframe volumes.
> But, that's just me...
> --
>
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions, send email
> to [email protected] with the message: INFO IBM-MAIN
>
>
>
>
>



-- 
This is a test of the Emergency Broadcast System. If this had been an
actual emergency, do you really think we'd stick around to tell you?

Maranatha! <><
John McKown

