The callable services benchmarks we ran against BFS took between 8 and 10
times longer than the same test set run against the CMS file system.

Assuming a cluster of 125 Windows® 2000 z/VM guests, and using I/O counts
generated by Windows 2000 on native Intel hardware, extrapolating the I/O
overhead spooked us a bit.  In effect, all of our instruction pipeline and
translated instruction segment reuse optimizations would be negated by the
I/O overhead.
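
To give a feel for the arithmetic, the extrapolation amounts to something
like the sketch below.  Every figure in it is a hypothetical placeholder
(per-guest I/O rate, per-I/O cost, and the midpoint of the 8-10x factor),
not one of our measured values.

    #include <stdio.h>

    /* Back-of-the-envelope I/O overhead extrapolation.  All figures
     * are illustrative placeholders, not measured values.           */
    int main(void)
    {
        const int    guests          = 125;    /* guests in the cluster       */
        const double ios_per_sec     = 400.0;  /* assumed per-guest I/O rate  */
        const double cms_usec_per_io = 50.0;   /* assumed CMS FS cost per I/O */
        const double bfs_factor      = 9.0;    /* midpoint of the 8-10x range */

        double cms_load = guests * ios_per_sec * cms_usec_per_io / 1e6;
        double bfs_load = cms_load * bfs_factor;

        printf("CMS path: %.2f CPU-seconds of I/O overhead per second\n", cms_load);
        printf("BFS path: %.2f CPU-seconds of I/O overhead per second\n", bfs_load);
        return 0;
    }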

We have a callable file system for z/OS that can handle an array of 128
pools, each containing up to 255 volumes.  That system would be a bear to
convert owing to its OS-specific interface code, but it appears from your
comments that a conversion may have to be seriously considered to achieve
the desired results.
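
For what it's worth, the bookkeeping that would have to come across in a
port boils down to something like the sketch below.  The structure names
and field widths are illustrative guesses, not our actual z/OS layout.

    #include <stdint.h>

    #define MAX_POOLS    128   /* pools in the array */
    #define MAX_VOLUMES  255   /* volumes per pool   */

    /* Hypothetical descriptor for one volume in a pool. */
    struct volume {
        char     volser[7];        /* volume serial, NUL-terminated */
        uint32_t cylinders;        /* capacity                      */
        uint32_t next_free_track;  /* simple high-water allocator   */
    };

    /* Hypothetical pool: a fixed slot array of volumes. */
    struct pool {
        uint16_t      volume_count;
        struct volume volumes[MAX_VOLUMES];
    };

    /* The array of pools the file system manages. */
    static struct pool pools[MAX_POOLS];

The conversion pain would mostly be in the OS-specific interface code that
sits on top of structures like these.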

Thank you.


Gary Dennis
Mantissa

On 3/25/08 9:55 AM, "Alan Altmark" <[EMAIL PROTECTED]> wrote:

> On Tuesday, 03/25/2008 at 04:26 EDT, "Gary M. Dennis"
> <[EMAIL PROTECTED]> wrote:
> 
>> Is anyone aware of a VM open source file system port with some of the
>> characteristics listed below. Such a system might enable us to add the
>> functionality needed to support these guests without starting at zero.
> 
> It isn't Open Source, but CMS has a POSIX file system (Byte File System,
> BFS) that is managed by the SFS server, allocating space only as used.  I
> don't know that I would classify it as "lightweight", though from the CMS
> user's point of view, it is, since the I/O takes place in the SFS server,
> but it takes APPC/VM (IUCV on steroids) calls to make it happen.  You can
> talk to it in assembler using the BPX1xxxx callable services.  It could
> provide you a "jump start" while you develop your own file system.
> 
> And just in case you haven't discovered it already, there's no "pluggable"
> file system interface in CMS.  You will need to write your file system
> from the bottom up.  The only help CMS will provide to you is in the form
> of HNDIO, HNDSVC, NUCEXT, and NUCXLOAD.
> 
> Alan Altmark
> z/VM Development
> IBM Endicott
> 
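
Following up on the BPX1xxxx suggestion: if I understand the OpenExtensions
support correctly, the same services surface as the usual POSIX calls from
C, so a first experiment against BFS can be as small as the sketch below.
The path name and flags are made-up examples, not anything from a real
setup.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Illustrative only: write a small file into BFS through the
     * POSIX interface (the same services the BPX1xxxx assembler
     * entry points expose).                                        */
    int main(void)
    {
        const char *path = "/u/test/hello.txt";   /* hypothetical BFS path */
        const char *msg  = "hello from CMS OpenExtensions\n";

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, msg, strlen(msg)) < 0)
            perror("write");
        close(fd);
        return 0;
    }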
