> Would lin_tape (non-OCO, see
> ftp://index.storsys.ibm.com/devdrvr/Linux) help?

Nope. Lin_tape solves a different problem. 

Lin_tape remains device-specific (in terms of the actual end device),
and it still doesn't let the Linux guests take advantage of a common TMS
(like VM:Tape or DFSMS). It doesn't really help with getting drives
attached to the right guest, either, and it depends on specific
distributions, which makes it a less useful long-term solution. 

The iSCSI approach works better: the Linux iSCSI device driver is
supported in the base code for *all* distributions, and, properly
implemented, it would supply a generic interface for drives and tape
changers that could be adapted to any of the currently supported
channel-attached tape library management tools without the Linux guest
having to know or care. The actual tape device at the end of the chain
is no longer particularly important, and iSCSI gives you an architected,
standard way of telling the Linux guest when a feature it asks for
isn't implemented on that drive.
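
The guest side really is just discovery and login with the stock
open-iscsi tools, after which the kernel surfaces the drives as
ordinary SCSI devices. A rough sketch in Python (the portal address
and the output parsing are mine; the target names are invented):

import subprocess

PORTAL = "172.20.0.1"   # made-up library manager address

def discover_targets(portal):
    # SendTargets discovery: ask the portal what it exports.
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True)
    # each line looks like "172.20.0.1:3260,1 iqn.2005-01.example:tape0"
    return [line.split()[1] for line in out.stdout.splitlines() if line.strip()]

def login(target, portal):
    # After login the kernel surfaces the drive as /dev/stN (and /dev/sgN).
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"],
        check=True)

for target in discover_targets(PORTAL):
    login(target, PORTAL)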

iSCSI also obviates the need to develop custom tape device drivers for
other operating systems (such as Solaris), and it makes the drives
available to any IP-connected host with an iSCSI stack, adding value to
the whole enterprise, not just one system. 

One downside of the iSCSI approach is that I don't think the packet
processing for VSWITCH traffic gets offloaded to the I/O processors, so
you'd burn more CPU to drive tape operations. Of course, I've been
lobbying for a specialized network processor engine for a while; this
would be an ideal reason to implement it, IMHO. 

Possible simpleminded implementation: build a VSWITCH for tape traffic
using 172.20.x.x space, and attach one virtual machine as the "library
manager" interface plus one virtual machine per tape drive you want to
permit in that "virtual library". Define an additional VNIC on the tape
VSWITCH for the guests that need to do tape I/O. Each Linux guest maps
an iSCSI target from the library manager as /dev/sg0, and sends normal
SCSI mount/dismount commands to the library manager to load/unload
tapes. Map additional iSCSI targets in each guest to the virtual
machines associated with the tape drives (/dev/tape0, /dev/tape1, etc.).
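
The plumbing for that topology is small; here's a throwaway Python
generator for the CP side. The machine names and device number 600 are
placeholders, and you should check the exact DEFINE VSWITCH/NICDEF
syntax against your level of VM:

DRIVES = ["TAPED0", "TAPED1"]     # one server virtual machine per real drive
GUESTS = ["LINUX1", "LINUX2"]     # guests permitted to do tape I/O

print("DEFINE VSWITCH TAPESW")    # no real device needed; guest-to-guest only

for vm in ["TAPELIB"] + DRIVES + GUESTS:
    print(f"* directory entry for {vm}:")
    print("  NICDEF 600 TYPE QDIO LAN SYSTEM TAPESW")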

The library controller machine manages the interface to the TMS and
keeps track of which drive machines are busy. A SCSI mount command gets
transformed into 'VMTAPE MOUNTFOR VIRT1 volser 181 (NOWAIT', and a
polling thread monitors the mount process and returns status to the
mounting host. The Linux guest then does normal
open/read/write/close/rew/fsf/bsf operations on the iSCSI device.
Dismount is a normal SCSI dismount to /dev/sg0, which causes the
library manager machine to detach the real tape device from the iSCSI
server for that drive. 
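
The control flow in the library manager is simple enough to sketch.
The real thing would be RSK services on CMS, not Python, and the
status-query command and poll interval below are invented; but
roughly:

import threading, time

def cms(command):
    """Placeholder: issue a CMS command, return (rc, output)."""
    raise NotImplementedError

def handle_scsi_mount(guest, volser, vdev="181"):
    # Fire the TMS request without waiting, then watch it complete.
    cms(f"VMTAPE MOUNTFOR {guest} {volser} {vdev} (NOWAIT")
    threading.Thread(target=poll_mount, args=(guest, volser, vdev)).start()

def poll_mount(guest, volser, vdev):
    while not mount_complete(guest, vdev):
        time.sleep(5)                        # placeholder poll interval
    send_scsi_status(guest, good=True)       # complete the SCSI mount command

def mount_complete(guest, vdev):
    rc, _ = cms(f"VMTAPE QUERY MOUNT {guest}")   # hypothetical query form
    return rc == 0

def send_scsi_status(guest, good):
    """Placeholder: return status to the initiator over the iSCSI session."""
    raise NotImplementedError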

I think this could be done reasonably efficiently with the RSK and some
clever thinking, although it does assume that CMS can set and clear
device features such as encryption directly (hint, hint). 

A similar approach could be used with disk space -- you could do some
very interesting things by emulating iSCSI volumes with big files in
SFS. You lose the overhead and security hassles of NFS, and you gain
full exploitation of all the I/O goodies that SFS currently supports.
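
The server side of that reduces to mapping READ/WRITE logical block
addresses onto offsets in the backing file. A minimal sketch, assuming
512-byte blocks and hand-waving the SFS access as an ordinary open
file:

BLOCK = 512   # assumed block size

class FileBackedLun:
    """Serve SCSI READ/WRITE against one big backing file."""

    def __init__(self, path):
        self.f = open(path, "r+b")

    def read(self, lba, nblocks):
        self.f.seek(lba * BLOCK)
        return self.f.read(nblocks * BLOCK)

    def write(self, lba, data):
        self.f.seek(lba * BLOCK)
        self.f.write(data)
        self.f.flush()   # don't lie to the initiator about persistence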

RFC 3720 (how appropriate!) has lots of nice detail on how the protocol
works and some of the related goodies on security, etc. 
