Shared XSTORE is a non-starter. What happens if the "maintenance" is,
for example, to replace a z990 with a z9? DASD would be the only
solution. And the lowest-common-denominator approach would dictate that
shared XSTORE not be chosen as the answer.

Regards, 
Richard Schuh 


-----Original Message-----
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Rob van der Heij
Sent: Tuesday, January 23, 2007 7:28 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Keep VM 24X7 365 days

On 1/23/07, David Boyes <[EMAIL PROTECTED]> wrote:

> From the discussions that Perry Ruiter and I had back in 2000 about
> this, the biggest problems are primarily in device state and
> connectivity, followed by getting memory pages that have been flushed
> out to disk transferred, which implies some controlled effort to do
> the
> process migration. Uncontrolled migration attempts (e.g., crash recovery
> situations) probably haven't a prayer of working.

I guess we're still at the "SCIDS design level" of this...  I'd say
that forcing the entire virtual machine to page out and letting it
page back in on the other side sounds logical, and is maybe not the
most complicated part. But remember it would go to DASD unless we also
architect shared XSTORE while we're at it. For the average Linux
server, paging out 2 GB and paging it back in would take a few
minutes, even if we can build large swap sets (and that ignores the
fact that the receiving system may need to page out others to make
room). Pumping the pages through HiperSockets could reduce the wait
time by two orders of magnitude, but that's restrictive since both
systems have to be in the same box.
So most likely you would need something like PPRC over ISFC to build a
copy of the working set at the receiving side. The actual delay would
then be just packing up the stuff that changed very recently and
restoring it on the other side.
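
To put rough numbers on that (a back-of-envelope sketch in Python; the
throughput figures are assumptions for illustration, not measurements):

    # Back-of-envelope only; the MB/s figures are assumed, not measured.
    guest_size_mb   = 2048    # the "average" 2 GB Linux server from above
    dasd_paging_mbs = 15      # assumed effective DASD paging throughput
    hipersocket_mbs = 1500    # assumed HiperSockets memory-to-memory rate

    # Page everything out on one box and back in on the other: two passes.
    dasd_seconds = 2 * guest_size_mb / dasd_paging_mbs   # ~270 s, "a few minutes"
    hs_seconds   = 2 * guest_size_mb / hipersocket_mbs   # ~3 s, about two orders less

    print("via DASD paging:  %.0f s" % dasd_seconds)
    print("via HiperSockets: %.0f s" % hs_seconds)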

Among the relatively easy parts would be integrity of accounting and
monitor data during such a move ;-)

> One area that might be interesting would be to investigate a Linux
> device driver using CP *BLOCKIO services for disk I/O instead of
> directly addressing the disks. There would be a performance impact,
> but
> the additional layer of isolation would effectively remove any disk

Most certainly. A high-level interface gives much more flexibility for
running the guest. A "Virtual SAN" that offers the guest "some blocks
of disk" would make things much easier: VM would then decide if and
where to put the data in the SAN.
The current approach, where intimate details about the disk device
(like the WWPN, etc.) are kept inside Linux configuration files, also
makes things very hard for what we're talking about here.
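
A crude sketch of the idea (the names and interface below are purely
illustrative, not any existing CP or Linux API):

    # Hypothetical host-side view of a "Virtual SAN": the guest asks for
    # logical blocks, and VM decides where those blocks physically live.
    class VirtualSAN:
        def __init__(self):
            # (guest, logical_block) -> (physical_device, physical_block)
            self.placement = {}

        def map_block(self, guest, logical_block, device, physical_block):
            self.placement[(guest, logical_block)] = (device, physical_block)

        def read(self, guest, logical_block):
            device, physical_block = self.placement[(guest, logical_block)]
            return device.read(physical_block)  # guest never sees a WWPN

    # Moving a guest would only change the placement table on the host side;
    # the guest's view ("some blocks of disk") stays exactly the same.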

-- 
Rob van der Heij
Velocity Software, Inc
http://velocitysoftware.com/
