DOS/VS did indeed have VTOCs on each volume with entries for all
datasets allocated on the volume. MVS could see and potentially read
compatible DASD datasets created by DOS/VS, but you didn't want to share
drives between DOS/VS and MVS because DOS/VS didn't track changes to
free space and would corrupt the free space info required by MVS.
The explicit track information given with DOS/VS DLBL statements in JCL was used to
manually allocate or extend space for datasets, because DOS did not have the
smarts to track volume free space and do automatic allocation based on
space requirements. For as long as a dataset was physically allocated on a
volume, it had a VTOC entry. Typically an application area had its
own disk volumes or disk packs and was responsible for keeping a manual
record of the tracks allocated to each of its datasets, and possibly for
moving everything around if some dataset in the middle of a volume
needed to grow. A common practice in environments where disk
space was scarce was to have "work" drives associated with a batch job
partition: restore application files to the work drive, run a processing
cycle, then stage the modified application files back to tape, so there
might be no permanent datasets on the work drive at all. There were also some
system-level DLBL definitions, defined globally, so that important
installation datasets could be referenced without the JCL for individual
jobs needing to know their physical location.
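To make that bookkeeping burden concrete, here is a rough sketch in modern Python of the kind of extent ledger an application area kept by hand (all dataset names and track numbers are invented for illustration); the overlap check is exactly what OS/360's VTOC free-space tracking did automatically and DOS/VS did not:

    # A hand-maintained ledger of disk extents per dataset, as
    # (start_track, number_of_tracks). Names and numbers are invented.
    extents = {
        "AR.MASTER":  (0,   400),    # tracks 0-399
        "AR.HISTORY": (400, 800),    # tracks 400-1199
    }

    def overlaps(a_start, a_len, b_start, b_len):
        """True if two track ranges intersect."""
        return a_start < b_start + b_len and b_start < a_start + a_len

    def check_new_extent(start, length):
        """Verify a hand-picked extent doesn't collide with an existing
        dataset -- the check DOS/VS left to humans."""
        for name, (s, n) in extents.items():
            if overlaps(start, length, s, n):
                raise ValueError(f"tracks {start}-{start+length-1} collide with {name}")

    check_new_extent(1200, 300)    # fine: free area beyond AR.HISTORY
    # check_new_extent(350, 100)   # would raise: overlaps AR.MASTER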
PC file systems are an example of an alternative design. Each physical disk or
logical disk (partition) contains a File Allocation Table (FAT) with an
entry for each allocatable unit of space on the disk. Originally that
allocatable unit was one sector, but it grew to multiple contiguous
sectors (FAT16, FAT32) to keep the FAT size manageable as disk capacities
increased. Each FAT entry indicates whether its unit of disk is free space
or, for files and directories that occupy more than one unit, where the
next part is located.
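The arithmetic behind the growing allocation unit is straightforward: 16-bit FAT entries can address about 65,536 units, so 512-byte units cap a volume at 32 MB, while 32 KB clusters stretch that to 2 GB. The chain-following itself is simple enough to sketch in a few lines of Python (the marker values here are placeholders, not the real on-disk FAT encodings):

    # Toy FAT: fat[n] holds either the number of the next cluster in a
    # file's chain or a marker value. FREE and END_OF_CHAIN are
    # illustrative placeholders, not actual FAT encodings.
    FREE = 0x0000
    END_OF_CHAIN = 0xFFFF

    def cluster_chain(fat, first_cluster):
        """Yield a file's clusters in order by following FAT entries."""
        cluster = first_cluster
        while cluster != END_OF_CHAIN:
            yield cluster
            cluster = fat[cluster]

    # A file occupying clusters 5 -> 9 -> 10, with cluster 6 free:
    fat = {5: 9, 9: 10, 10: END_OF_CHAIN, 6: FREE}
    print(list(cluster_chain(fat, 5)))   # [5, 9, 10]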
There is a fixed first directory block (the root), where files or
sub-directories can be "cataloged". Some descriptive information about each
file is saved -- minimal on PCs, which puts the onus of understanding data
organization on the accessing application rather than the Operating System --
but there's no logical reason why you couldn't design a directory structure
that carried the level of detail contained in z/OS Catalogs, the VTOC, and
the VVDS.
The high-level qualifier of a multi-level z/OS dataset name could be
implemented as a sub-directory name, with the lowest-level qualifier as the
file name and all the intermediate qualifiers giving the directory path to
where the actual file data can be found, as the sketch below illustrates.
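A minimal sketch of that mapping, assuming POSIX-style paths (the dataset name is invented):

    from pathlib import PurePosixPath

    def dataset_to_path(dsn, root="/"):
        """Map a multi-level dataset name to a directory path: every
        qualifier but the last becomes a directory, the last the file name."""
        return PurePosixPath(root, *dsn.split("."))

    print(dataset_to_path("PAYROLL.PROD.MASTER.DATA"))
    # /PAYROLL/PROD/MASTER/DATA -- the HLQ becomes the top-level directory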
So things could have evolved that way, but they didn't; and the z/OS
super-power of upward compatibility pretty well says it's not going to
change. There are also some good performance and availability arguments for
the current z/OS design.
There will always be limits on the maximum size of a physical or logical
disk, and one of the things PC file system structures can't handle is
the creation of files that exceed the size of one disk volume. z/OS
catalog structures, VTOC and VVDS support multi-volume datasets in ways
that are transparent to applications by allowing the catalog entry to
point to an ordered list of volumes.
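A toy model of the idea, with everything about the real catalog and VSAM record formats abstracted away (volume serials and contents are invented):

    # The catalog entry carries an ordered volume list; a reader walks the
    # list, so the application never sees the volume boundaries.
    volumes = {"VOL001": b"first extent ",
               "VOL002": b"second extent ",
               "VOL003": b"last extent"}
    catalog = {"PAYROLL.MASTER": ["VOL001", "VOL002", "VOL003"]}

    def read_dataset(name):
        """Concatenate the pieces of a multi-volume dataset in catalog order."""
        return b"".join(volumes[volser] for volser in catalog[name])

    print(read_dataset("PAYROLL.MASTER"))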
JC Ewing
On 5/24/24 05:02, Lennie Bradshaw wrote:
When I started on IBM System/370 the shop I was at used DOS/VS. DOS/VS at that
time did not have VTOCs. We used //DLBL statements in JCL which specified the
exact locations of datasets on disk. This changed with the introduction of VSAM
on DOS/VS, but only for VSAM datasets.
Fortunately I soon moved to a company using OS/VS2 and got to use VTOCs and
CVOLs there.
As regards why both VTOCs and Catalogs exist, what would be the alternative?
The more pertinent question, I think, is: why do we have both VVDS datasets
and VTOCs? Historically it's understandable, but merging them could improve
efficiency.
It would probably break too many existing interfaces, though.
Lennie Dymoke-Bradshaw
https://rsclweb.com
-----Original Message-----
From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> On Behalf Of
Joel C. Ewing
Sent: 24 May 2024 06:02
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VTOCs vs. catalogs
VTOCs did come first. The original DOS/360 Operating System did not have catalogs.
VTOCs contain not only information about the physical location and organization of
datasets on the volume but also (for OS/360 and its MVS and z/OS descendants) a list
of all the free extents on the volume, to support automated allocation of new extents
for datasets. It makes sense to still keep that level of information at the volume
level and not in some centralized "catalog": an individual volume can be varied
online or offline, added to or deleted from the system, and any hardware failure
that might affect data availability tends to affect specific volumes, so it
simplifies many things to keep volume-level descriptive information on the related volume.
As the total number of DASD volumes on a system increases, having that
VTOC-level information distributed across all volumes, versus putting it all
in a centralized location, improves performance by distributing read/write
activity for that data across all the volumes, and prevents a single point of
failure that could cause loss of all datasets.
Without a catalog to map dataset names to volumes, it was necessary to
manually maintain a record of which volume(s) contained each dataset.
That was practical for a few volumes and a small number of datasets, but
obviously impractical when talking about hundreds of volumes and thousands of
datasets. OS/360 was designed to support very large systems; hence it included
a catalog, though its use was optional for application datasets. These days the
recommended practice is that all z/OS application DASD datasets should be under
SMS, and SMS datasets must be cataloged.
The original CVOL catalog evolved into multi-level ICF catalogs, and an
eventual need to save additional dataset attributes to support SMS and VSAM
datasets resulted in an additional VVDS dataset to store that info on each
volume.
As the capacity and maximum number of datasets on a volume increased, a serial
search through a VTOC became a performance bottleneck, and an optional VTOCIX
(VTOC Index) was added to each volume for more efficient access.
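A toy model of the performance difference (this says nothing about the real DSCB or VTOCIX record formats):

    # Without an index, finding a dataset's VTOC entry means reading every
    # entry in turn; the index maps names straight to entries.
    vtoc = [("SYS1.LINKLIB", "extent info..."),
            ("PAYROLL.MASTER", "extent info...")]

    def serial_search(name):              # O(n): one read per VTOC entry
        for dsname, entry in vtoc:
            if dsname == name:
                return entry
        return None

    vtocix = dict(vtoc)                   # built once, maintained with the VTOC

    def indexed_search(name):             # direct lookup via the index
        return vtocix.get(name)

    assert serial_search("PAYROLL.MASTER") == indexed_search("PAYROLL.MASTER")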
There is some redundancy in having VTOCs, VVDSs, and Catalogs, but that aids
error detection and recovery by allowing cross-checking among them to find and
resolve inconsistencies.
On z/OS it is typical to use multi-level catalogs for security and availability
reasons and to keep application and personal datasets in catalogs distinct from
those containing system-level datasets essential to the operating system.
To reduce I/O and improve catalog performance, z/OS accesses catalogs via a
system Catalog address space that provides additional in-memory caching for all
open ICF catalogs.
JC Ewing
On 5/23/24 21:32, Phil Smith III wrote:
I'm curious whether any of you old-timers can explain why we have both
VTOCs and catalogs. I'm guessing it comes down to (a) VTOCs came first
and catalogs were added to solve some problem (what?) and/or (b) catalogs were
added to save some I/O and/or memory, back when a bit of those mattered. But
I'd like to understand. Anyone?
...
--
Joel C. Ewing
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN