Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-09 Thread Ledbetter, Scott E

Exactly correct. It depends. Most RVAs will have no problem keeping up with
write activity.  The controller is self-tuning in that under extreme
conditions (the definition of this has varied over the life of the product,
but generally less than 10% freespace) it will delay I/O completion in order
to dedicate processor resources to freespace collection.  This shows up as
high disconnect time in the CKD architecture.  If you have an RVA/SVA that
runs fine, and suddenly you notice higher than normal disconnect times, a
likely reason is that the box overall has under 10% freespace.  The 9500 and
later SVAs have really had no problem with freespace collection keeping up
with any write load.  9200/9393/RVA had a few issues, but only under very
unusual high-write-activity scenarios when the box was very full to start
with.


Scott Ledbetter
StorageTek



-Original Message-
From: David Andrews [mailto:[EMAIL PROTECTED]]
Sent: September 09, 2002 9:45 AM
To: [EMAIL PROTECTED]
Subject: Re: The redpaper for cloning zLinux images via VQDIO is
available


On Mon, 2002-09-09 at 10:27, Ledbetter, Scott E wrote:
> A physical backup to tape followed by a restore will use backend space on
> RVA/SVA, because it is physical I/O, and each track is written into the
> box in a new location.

So how long does it take until the "before" image of the rewritten
tracks go through collection and are freed?

(I suspect the answer is "it depends"?  A lightly - or even moderately -
loaded 'berg/RVA/SVA probably has enough background resources to free
old track images as quickly as we can write new ones, no?)

--
David Andrews
A. Duda and Sons, Inc.
[EMAIL PROTECTED]



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-09 Thread David Andrews

On Mon, 2002-09-09 at 10:27, Ledbetter, Scott E wrote:
> A physical backup to tape followed by a restore will use backend space on
> RVA/SVA, because it is physical I/O, and each track is written into the box
> in a new location.

So how long does it take until the "before" image of the rewritten
tracks go through collection and are freed?

(I suspect the answer is "it depends"?  A lightly - or even moderately -
loaded 'berg/RVA/SVA probably has enough background resources to free
old track images as quickly as we can write new ones, no?)

--
David Andrews
A. Duda and Sons, Inc.
[EMAIL PROTECTED]



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-09 Thread Ledbetter, Scott E

A physical backup to tape followed by a restore will use backend space on
RVA/SVA, because it is physical I/O, and each track is written into the box
in a new location.

Scott Ledbetter
StorageTek

-Original Message-
From: John Summerfield [mailto:[EMAIL PROTECTED]]
Sent: September 06, 2002 7:27 PM
To: [EMAIL PROTECTED]
Subject: Re: The redpaper for cloning zLinux images via VQDIO is
available


On Thu, 5 Sep 2002 05:27, you wrote:
> The
> same source can be Snapped a 'virtually' unlimited number of times, with
> no physical data being copied, and no additional backend being used.  Only
> updated tracks will cause backend storage to be used at the point in time
> that the updates are written onto the virtual disk.

How does the process of backup/restore affect this? I see a space problem
looming if I backup and restore volumes under Linux.




--
Cheers
John Summerfield


Microsoft's most solid OS: http://www.geocities.com/rcwoolley/
Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-06 Thread John Summerfield

On Thu, 5 Sep 2002 05:27, you wrote:
> The
> same source can be Snapped a 'virtually' unlimited number of times, with no
> physical data being copied, and no additional backend being used.  Only
> updated tracks will cause backend storage to be used at the point in time
> that the updates are written onto the virtual disk.

How does the process of backup/restore affect this? I see a space problem
looming if I backup and restore volumes under Linux.




--
Cheers
John Summerfield


Microsoft's most solid OS: http://www.geocities.com/rcwoolley/
Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-06 Thread John Summerfield

On Fri, 6 Sep 2002 17:16, you wrote:
>  creating its first sshd key
> (takes a while) and having me make an ssh client connection to it.

Could you just give it a key, perhaps created on a PC where MIPS are cheap? Or
is it not worth the effort?

--
Cheers
John Summerfield


Microsoft's most solid OS: http://www.geocities.com/rcwoolley/
Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-06 Thread Ledbetter, Scott E

A clarification,

STK's SnapShot (used on RVA and SVA) is significantly different from most
point-in-time copy facilities.  We copy pointers instead of actual data.
And only pointers.  Because of the Log-structured File architecture, we do
not have to do a background copy operation.  When the Snap is complete
(under 300 milliseconds generally) the operation is really complete.  After
a Snap, an access to the same location on the source and target virtual disk
will read the same physical data on the backend.  If either source or target
are updated, the new data is written in a different location on the backend,
and the pointers for the updated tracks on the virtual disk are changed to
point to the new location on the backend.  At that point, a read for the
updated tracks on the source and target will return different data.  The
same source can be Snapped a 'virtually' unlimited number of times, with no
physical data being copied, and no additional backend being used.  Only
updated tracks will cause backend storage to be used at the point in time
that the updates are written onto the virtual disk.

A key fact: the source/target relationship is not maintained in the control
unit.  From the host's view, each track on each virtual disk is totally
independent of every other track. A simple reference count for each track of
backend data is maintained.  When the data is originally written, the reference
count is one.  A Snap of that track will bump the reference count to two.
Another Snap (of either the original source or target) bumps the count to
three, and so on.  If the reference count for a track of data is three, for
example, and any one of the 'owning' virtual tracks is updated, the
reference count for that backend track is decremented to two, and a new
track with the updates is written to the backend in a different location.
The updated data will have its own reference count, initially set to one.
There is no fixed relationship between a host track, and any location on the
backend.  A track may be in array 0, disk 7 one second, and after an update,
it may reside in array 3 disk 2.  We can Snap any track on any virtual
volume to any other track anywhere in the box of 4096 volumes.  This allows
us to operate seamlessly with minidisk pooling managers like
VMDirect/VMSecur or DIRMAINT.
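
The reference-counting behavior described above can be sketched as a toy
model (an editor's illustration only, not StorageTek's actual implementation;
all names here are invented):

```python
# Editor's toy model of the reference counting described above; names and
# structure are invented for illustration, not StorageTek's implementation.

class SnapBox:
    def __init__(self):
        self.next_loc = 0   # next free backend location
        self.refcnt = {}    # backend location -> reference count
        self.ptr = {}       # (virtual volume, track) -> backend location

    def write(self, vol, trk):
        """A host write lands in a new backend location; the old location
        loses a reference and, at zero, becomes collectable freespace."""
        old = self.ptr.get((vol, trk))
        if old is not None:
            self.refcnt[old] -= 1
            if self.refcnt[old] == 0:
                del self.refcnt[old]        # freespace collection candidate
        loc, self.next_loc = self.next_loc, self.next_loc + 1
        self.ptr[(vol, trk)] = loc
        self.refcnt[loc] = 1                # updated data starts at one

    def snap(self, src, tgt, trk):
        """A Snap copies only the pointer and bumps the reference count."""
        loc = self.ptr[(src, trk)]
        self.ptr[(tgt, trk)] = loc
        self.refcnt[loc] += 1

box = SnapBox()
box.write("A", 0)       # original write: reference count 1
box.snap("A", "B", 0)   # count 2
box.snap("A", "C", 0)   # count 3
box.write("B", 0)       # B diverges: shared track drops to 2, new track at 1
```

After the last write, A and C still share one backend track (count two) while
B points at its own track with a count of one, matching the example in the text.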

The reason that most other architectures require a background copy is due to
the fact that each host track is hard mapped to a unique physical location
on the backend.  A fast replicate is accomplished via a pointer copy, but
each source track must be physically written into the equivalent target
location for the replication to be complete.  For example, let's say a
source volume is fast replicated to a target volume.  Initially a pointer
copy is made, and a reference to the same track on either volume will be
directed to the source track.  Until the source volume is background copied
to the target volume, an update to the source volume may require two writes
on the backend:  the original source data must be moved to the target
volume, and the updated data must be written to the source.  With some
schemes, only one source-target relationship is allowed to exist at any
time.
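
For contrast, the hard-mapped scheme described in the paragraph above can be
sketched the same way (again purely an editor's illustration):

```python
# Editor's toy contrast: a hard-mapped controller where every host track owns
# a fixed backend slot, so a fast replicate needs a real background copy.
# Purely illustrative.

def fast_replicate(source, target, updates):
    """source/target: lists of track contents; updates: {track: new_data}.
    Returns the number of backend writes caused by the host updates."""
    copied = [False] * len(source)   # background-copy progress bitmap
    writes = 0
    for trk, data in updates.items():
        if not copied[trk]:
            # Original data must first be moved to the target slot...
            target[trk] = source[trk]
            copied[trk] = True
            writes += 1
        # ...then the update is written to the source slot: two writes total.
        source[trk] = data
        writes += 1
    # The background task eventually copies the remaining tracks.
    for trk in range(len(source)):
        if not copied[trk]:
            target[trk] = source[trk]
            copied[trk] = True
    return writes
```

An update to a not-yet-copied track costs two backend writes, which is exactly
the penalty the log-structured pointer scheme avoids.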

The bottom line to the VM end-user:  With SnapShot, any volume or minidisk
can be instantly copied, at any time, to any other volume or minidisk in the
same box of 4096 devices (like geometry: 3380->3380, 3390->3390) as long as
the target is big enough to hold the source.

I apologize for the length of this post, but this is some good stuff, and
it takes a little explanation to convince people that it is really
different, and it really works.  It is unbeatable for cloning Linux images.


Scott Ledbetter
StorageTek



-Original Message-
From: Phil Payne [mailto:[EMAIL PROTECTED]]
Sent: September 04, 2002 1:55 PM
To: [EMAIL PROTECTED]
Subject: Re: The redpaper for cloning zLinux images via VQDIO is
available


>   It'd be nice if the DASD boxes could have a "copy on write" feature
> akin to
>   the Linux memory manager;  This kind of technology would be
> reasonable for
>   handling things like R/O images.  Or am I confusing this with GPFS?

No, that's essentially how the feature works.  You ask for a point-in-time
copy of a DASD image - the controller says 'Done' immediately and you can
start using it.  The controller starts a real copy operation in the
background.  If you read a page that hasn't been copied yet, you get the
original.  If you read one that's been copied, you get the copy.  If you
write one, it's written where it should go in the copy and the controller
makes a note not to overwrite it with the original when it gets around to
that page.

Various brand names - Storage Technology was first with 'Snapshot'.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-06 Thread Malcolm Beattie

John Summerfield writes:
> On Thu, 5 Sep 2002 01:01, you wrote:
> > However, using a VQDIO Guest LAN on a z900 (31-bit mode), and sharing
> > /usr but nothing else, it was taking us (when the machine wasn't doing
> > anything else much) about 3 minutes.  Almost all of that was spent
> > copying /; if we had had FlashCopy or something like that, I'd guess,
> > based on the way we did it, which relied on DIRMAINT and 2 IPLs on Linux
> > per guest (first time it comes up generic; it uses CMSFS to read stuff
> > the cloning process put on its 191-disk to determine hostname, IP
> > address, etc., and then write that onto the real filesystem and reIPL)
> > about 30 or 45 seconds.
>
> You should be able to do that configuration on the first boot and keep right
> on going.
>
> Two ideas. This is part of my inittab from RHL 7.3 on the desktop.
> id:5:initdefault:
>
> # System initialization.
> si::sysinit:/etc/rc.d/rc.sysinit
>
> Change it to read:
> id:5:initdefault:
> # One-time initialisation
> 1t:12345:once:/etc/rc.d/first-time
>
> # System initialization.
> si::sysinit:/etc/rc.d/rc.sysinit
>
> /etc/rc.d/first-time is a script that exits immediately if the system's
> already been set up; otherwise it does the stuff defined by the 191-disk.
>
>
> Second, front-end /etc/rc.d/rc.sysinit by changing the line:
> si::sysinit:/etc/rc.d/rc.sysinit
>
> Note too that the kernel can be built with DHCP support.

I'm recently back from a redbook residency where we wrote a redbook
on "Linux on IBM zSeries and S/390: Large Scale Linux Deployment".
Two of the parts of that redbook are particularly relevant to topics
which have been raised on this list recently.

One is about resource sharing and cloning and introduces a new way
of splitting up the readonly and readwrite parts of a Linux guest
such that you can use a single small readwrite volume (e.g. 20 cyls,
15 MB) for a small guest. (It also handles RPM management across
the readonly and readwrite parts, although still not ideally.) An
auto-configuration part (handled by a small VM configuration server
guest using PROP) lets a new clone come up, find out its management
network address, couple itself to the management GuestLAN, use that
to query a central configuration LDAP server, pick up its complete
"service" configuration information (IP address, role, etc.) from
there and boot fully into service. On our particular hardware, it
took 30 seconds from hitting Enter on the initial "create new guest"
command until the appearance of the root prompt after ssh'ing to the
new guest. That broke down as about 10 secs for the DIRMAINT guest
creation, 10 secs DDR copy of the "gold" guestvol (the readwrite
volume) and 10 secs of the guest booting, finding its network
information, configuring the network, creating its first sshd key
(takes a while) and having me make an ssh client connection to it.

The basevol+guestvol system does indeed work by changing the
rc.sysinit step of boot time. A cloned guest actually boots from the
readonly basevol it's linked to (no more problems with corrupting
your bootable root fs) which looks for its guestvol at address 777.
If found, it mounts it in a special way and binds in parts of that
filesystem as all the necessary writable parts of the main filesystem
hierarchy (the book explains about how "mount --bind" works). It
then kicks init to continue with the rest of the boot process in the
normal way, now from the guestvol's /etc/inittab configuration. It's
the next step which then asks the configuration server guest (via
CP SMSG) for its basic configuration info so that it can couple to
the management GuestLAN.
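
The guestvol bind-in described above amounts to a path-resolution rule; here
is an editor's toy sketch of it (the directory list is invented, not the
redbook's actual set):

```python
# Editor's sketch of the basevol+guestvol split: a few writable directories
# from the per-guest guestvol are bound over the shared readonly hierarchy.
# The directory list is invented, not the redbook's actual set.

WRITABLE_BINDS = ("/etc", "/var", "/home")  # served from the guestvol

def backing_volume(path):
    """Decide which volume a path resolves to after the bind mounts."""
    for prefix in WRITABLE_BINDS:
        if path == prefix or path.startswith(prefix + "/"):
            return "guestvol"   # small per-guest readwrite volume
    return "basevol"            # shared readonly volume (e.g. /usr, /bin)
```

Everything outside the bound-in directories stays on the shared readonly
basevol, which is why one small readwrite volume per guest suffices.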


The other part particularly relevant to a topic here is the section
on Hipersockets and z/VM GuestLAN (and associated chapters on TCP/IP
routing, network high availability and such like.) Those sections
were mostly written by Vic Cross and build up on the ISP/ASP redbook
now that GuestLAN/Hipersockets is around and has mostly taken over
from other options as the recommended connectivity method for Linux
guests.

I don't yet know when a redpiece will be available or if there is a
provisional publishing date.

--Malcolm

--
Malcolm Beattie <[EMAIL PROTECTED]>
Linux Technical Consultant
IBM EMEA Enterprise Server Group...
...from home, speaking only for myself



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-05 Thread John Summerfield

On Thu, 5 Sep 2002 01:01, you wrote:
> However, using a VQDIO Guest LAN on a z900 (31-bit mode), and sharing
> /usr but nothing else, it was taking us (when the machine wasn't doing
> anything else much) about 3 minutes.  Almost all of that was spent
> copying /; if we had had FlashCopy or something like that, I'd guess,
> based on the way we did it, which relied on DIRMAINT and 2 IPLs on Linux
> per guest (first time it comes up generic; it uses CMSFS to read stuff
> the cloning process put on its 191-disk to determine hostname, IP
> address, etc., and then write that onto the real filesystem and reIPL)
> about 30 or 45 seconds.

You should be able to do that configuration on the first boot and keep right
on going.

Two ideas. This is part of my inittab from RHL 7.3 on the desktop.
id:5:initdefault:

# System initialization.
si::sysinit:/etc/rc.d/rc.sysinit

Change it to read:
id:5:initdefault:
# One-time initialisation
1t:12345:once:/etc/rc.d/first-time

# System initialization.
si::sysinit:/etc/rc.d/rc.sysinit

/etc/rc.d/first-time is a script that exits immediately if the system's
already been set up; otherwise it does the stuff defined by the 191-disk.


Second, front-end /etc/rc.d/rc.sysinit by changing the line:
si::sysinit:/etc/rc.d/rc.sysinit

Note too that the kernel can be built with DHCP support.


--
Cheers
John Summerfield


Microsoft's most solid OS: http://www.geocities.com/rcwoolley/
Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-05 Thread Scott Chapman

The potential issue being of course that while the copy is running in the
background it's consuming backend bandwidth, thus the "background" copy
does have the potential for degrading other workloads.  I've seen it happen
when the background copy involves several volumes worth of data.  Plus you
still have to buy DASD capacity to support each of those copies.  Snapshot
in the RVAs and SVAs of course doesn't do any background data moves (just
updates pointers) until the data is updated so it avoids the potential
performance problem and means you don't have to buy capacity for each copy
of your R/O data.  Even if you snap R/W data but only update a tiny portion
of most copies, the amount of capacity you need to support those copies is
greatly reduced.

I thought IBM had released a version of their software for the Shark that
also deferred the background copy until the data was actually updated.  It
sounded to me like a way of avoiding the potential performance penalty,
but I'm not sure; you still might need reserved capacity for each copy.

I'm not aware of EMC having that feature (they always start a background
copy, unless of course you're just splitting an already established
mirror), and I really don't have a clue about any other vendor.

I must say that the virtual architecture of the RVAs/SVAs is certainly
impressive and made (IMO, EMC will argue otherwise) DASD management much
easier.  It was a sad day when IBM announced that they weren't going to
bring that technology forward into the Shark.

Scott Chapman
American Electric Power




Phil Payne
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
09/04/02 03:55 PM
Please respond to Linux on 390 Port
Subject: Re: The redpaper for cloning zLinux images via VQDIO is available






>   It'd be nice if the DASD boxes could have a "copy on write" feature
> akin to
>   the Linux memory manager;  This kind of technology would be
> reasonable for
>   handling things like R/O images.  Or am I confusing this with GPFS?

No, that's essentially how the feature works.  You ask for a point-in-time
copy of a DASD image - the controller says 'Done' immediately and you can
start using it.  The controller starts a real copy operation in the
background.  If you read a page that hasn't been copied yet, you get the
original.  If you read one that's been copied, you get the copy.  If you
write one, it's written where it should go in the copy and the controller
makes a note not to overwrite it with the original when it gets around to
that page.

Various brand names - Storage Technology was first with 'Snapshot'.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-05 Thread Phil Payne

>  The redpaper for cloning zLinux images by using VQDIO is available from
> http://www.ibm.com/redbooks/abstracts/redp0301.html. This redpaper is based
> from the LinuxWorld zLinux cloning example (using IUCV) that I released in
> 5/5/2002 (http://www.vm.ibm.com/devpages/chongts/tscdemo.html) .  Have Fun!

Quick 'analyst' question (I'll download and read it when I have the time,
but not until next week at the earliest):

Roughly how quickly could a new Linux image be established using this
technique on, e.g., a z800?

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-05 Thread Tung-Sing Chong

Phil,

I was running the cloning demo (using IUCV) at the last LinuxWorld. The
demo was running on a z800 with the latest Shark. The images that were
created had 64M virtual memory, 150 cylinders of R/W "/", a 100-cylinder
swap disk, R/O /usr and R/O /usr/src. The first images took about 15
seconds. Throughout the day we created more than 500 images, and it
averaged out to between 30-45 seconds.

The redpaper uses larger minidisks and VQDIO, and I have not had a chance
to run it on the same z800 with the Shark.

Chong

Tung-Sing Chong
Software Engineer
zSeries Software Development
Endicott, NY
T/l : 852-5342 Outside Phone: 607-752-5342
E-mail: [EMAIL PROTECTED]


Phil Payne <[EMAIL PROTECTED]> on 09/04/2002 12:32:17 PM

Please respond to Linux on 390 Port <[EMAIL PROTECTED]>

Sent by: Linux on 390 Port <[EMAIL PROTECTED]>

To: [EMAIL PROTECTED]
cc:
Subject: Re: The redpaper for cloning zLinux images via VQDIO is available



>  The redpaper for cloning zLinux images by using VQDIO is available from
> http://www.ibm.com/redbooks/abstracts/redp0301.html. This redpaper is
> based from the LinuxWorld zLinux cloning example (using IUCV) that I
> released in 5/5/2002 (http://www.vm.ibm.com/devpages/chongts/tscdemo.html).
> Have Fun!

Quick 'analyst' question (I'll download and read it when I have the time,
but not until next week at the earliest):

Roughly how quickly could a new Linux image be established using this
technique on, e.g., a z800?

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-05 Thread Adam Thornton

On Wed, Sep 04, 2002 at 06:32:17PM +0200, Phil Payne wrote:
> >  The redpaper for cloning zLinux images by using VQDIO is available from
> > http://www.ibm.com/redbooks/abstracts/redp0301.html. This redpaper is based
> > from the LinuxWorld zLinux cloning example (using IUCV) that I released in
> > 5/5/2002 (http://www.vm.ibm.com/devpages/chongts/tscdemo.html) .  Have Fun!

> Quick 'analyst' question (I'll download and read it when I have the
> time, but not until next week at the earliest):

> Roughly how quickly could a new Linux image be established using
> this technique on, e.g., a z800?

I don't know yet, not having read the paper.

However, using a VQDIO Guest LAN on a z900 (31-bit mode), and sharing
/usr but nothing else, it was taking us (when the machine wasn't doing
anything else much) about 3 minutes.  Almost all of that was spent
copying /; if we had had FlashCopy or something like that, I'd guess,
based on the way we did it, which relied on DIRMAINT and 2 IPLs on Linux
per guest (first time it comes up generic; it uses CMSFS to read stuff
the cloning process put on its 191-disk to determine hostname, IP
address, etc., and then write that onto the real filesystem and reIPL)
about 30 or 45 seconds.

If raw speed is what you care about, you precreate a small inventory of
machines, and hand those out on demand.  As you IPL each one the first
time, it can send a message indicating you need to refill your
inventory.  Then it turns into the time to IPL a Linux guest, typically
(if few services are running) twenty seconds or so.

Adam



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-05 Thread Nix, Robert P.

You could also automate the process of building the inventory by having a VM service 
machine dedicated to that task. IPLing a machine from the inventory could recognize 
that it was the first time it had been used, and smsg the service machine to cause it 
to create a new userid. This way, taking machines off the front of the queue would 
automatically add new machines to the end of the queue. You'd need to balance the 
number of queued machines with the intensity of requests you wanted to be able to 
handle.


Robert P. Nix        internet: [EMAIL PROTECTED]
Mayo Clinic          phone: 507-284-0844
200 1st St. SW       page: 507-255-3450
Rochester, MN 55905

"In theory, theory and practice are the same,
 but in practice, theory and practice are different."


> -Original Message-
> From: Adam Thornton [SMTP:[EMAIL PROTECTED]]
> Sent: Wednesday, September 04, 2002 12:01 PM
> To:   [EMAIL PROTECTED]
> Subject:  Re: The redpaper for cloning zLinux images via VQDIO is available
>
>


> If raw speed is what you care about, you precreate a small inventory of
> machines, and hand those out on demand.  As you IPL each one the first
> time, it can send a message indicating you need to refill your
> inventory.  Then it turns into the time to IPL a Linux guest, typically
> (if few services are running) twenty seconds or so.
>



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-05 Thread Rob van der Heij

At 20:50 04-09-02, Tung-Sing Chong wrote:

>I did look into using Flashcopy on the SHARK and decided to use DDR
>instead.  I think you can only use flashcopy to copy data within the same
>RAID array. The DASD I used (>100 3390-3s) in my demo are on a few RAID
>arrays.   Currently I do not have any plan to change my cloning example to
>use flashcopy.

The big problem with FlashCopy is that it copies to the same set of
tracks on another volume (or a full volume). In real life this is a
PITA but in a demo where you have packs full of identical Linux disks
it would work. Say your disk to copy is 330 cylinders, then you only
need to set up one pack with 10 x 330 cylinders as your master images
and pick the right one to copy from.

If your disks contain some white space to allow for the Linux guest to
write its own log files etc, then there may be things faster than DDR
(as we published in the ISP/ASP redbook), but since you also need to do
the formatting it will be a close race, mainly saving resources other
than time (so only relevant if you clone multiple servers in parallel).

Rob



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-05 Thread Phil Payne

> The potential issue being of course that while the copy is running in the
> background it's consuming backend bandwidth, thus the "background" copy
> does have the potential for degrading other workloads.

Depends how long the Linux image lives for.  If it's only created for a
transient workload (like an address space for a JOB under z/OS) then the
copy doesn't have to complete.  But you're right - with everything except
the RVA/SVA, the capacity must be available.

> I must say that the virtual architecture of the RVAs/SVAs is certainly
> impressive and made (IMO, EMC will argue otherwise) DASD management much
> easier.  It was a sad day when IBM announced that they weren't going to
> bring that technology forward into the Shark.

The decision was on where to virtualise - inside the box or outside.  StorageTank won.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Simon Fischer

Hi,

what about the updating problem with a shared /usr and /usr/src? Do you
know a nice way to do that - or are there any tools (commercial or not)
available for such an environment - perhaps from the clustering community?

I bet it will not work with "YaST online update", does it? ;-)

Regards

Simon
--
(at home)
Linux on zSeries team
becom informationssysteme GmbH
www.becom.com

Tung-Sing Chong wrote:
> Hi,
>
>  The redpaper for cloning zLinux images by using VQDIO is available from
> http://www.ibm.com/redbooks/abstracts/redp0301.html. This redpaper is based
> from the LinuxWorld zLinux cloning example (using IUCV) that I released in
> 5/5/2002 (http://www.vm.ibm.com/devpages/chongts/tscdemo.html) .  Have Fun!
>
> Chong
>
> Tung-Sing Chong
> Software Engineer
> zSeries Software Development
> Endicott, NY
> T/l : 852-5342 Outside Phone: 607-752-5342
> E-mail: [EMAIL PROTECTED]
>



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Tung-Sing Chong

Phil,

I did look into using Flashcopy on the SHARK and decided to use DDR
instead.  I think you can only use flashcopy to copy data within the same
RAID array. The DASD I used (>100 3390-3s) in my demo are on a few RAID
arrays.   Currently I do not have any plan to change my cloning example to
use flashcopy.

Chong

Tung-Sing Chong
Software Engineer
zSeries Software Development
Endicott, NY
T/l : 852-5342 Outside Phone: 607-752-5342
E-mail: [EMAIL PROTECTED]


Phil Payne <[EMAIL PROTECTED]> on 09/04/2002 02:27:33 PM

Please respond to Linux on 390 Port <[EMAIL PROTECTED]>

Sent by: Linux on 390 Port <[EMAIL PROTECTED]>

To: [EMAIL PROTECTED]
cc:
Subject: Re: The redpaper for cloning zLinux images via VQDIO is available



> To Phil:  The amount of time to clone is highly dependent (naturally) on
> the DASD subsystem.  Different technologies can and do yield drastically
> different times.  So, asking "what's the fastest" is different than
> "what's the fastest Chong achieved using his environment and techniques".

Yes, but they were two separate questions:

a) How long does it take at present - answered.

b) Are there plans to support Snapshot/Flashcopy/ShadowImage copying?

Obviously the creation of the DASD images is currently the largest
component, by an order of magnitude.  Exploitation of the near-instantaneous
copy facilities supported by current DASD subsystems would convert this
'quick' process into one that could be used, e.g., even on a
transaction-by-transaction basis.  A completely fresh Linux system for each
transaction?

Given the length of time it takes to establish a Linux environment on a
discrete server, this kind of 'throw-away' Linux system is a powerful
differentiator.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Phil Payne

>   It'd be nice if the DASD boxes could have a "copy on write" feature
> akin to
>   the Linux memory manager;  This kind of technology would be
> reasonable for
>   handling things like R/O images.  Or am I confusing this with GPFS?

No, that's essentially how the feature works.  You ask for a point-in-time
copy of a DASD image - the controller says 'Done' immediately and you can
start using it.  The controller starts a real copy operation in the
background.  If you read a page that hasn't been copied yet, you get the
original.  If you read one that's been copied, you get the copy.  If you
write one, it's written where it should go in the copy and the controller
makes a note not to overwrite it with the original when it gets around to
that page.

Various brand names - Storage Technology was first with 'Snapshot'.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Tung-Sing Chong

Alan,

Yes. I do mean z/VM V4R3 guest Lans with virtual OSA Express (QDIO mode).

Chong

Tung-Sing Chong
Software Engineer
zSeries Software Development
Endicott, NY
T/l : 852-5342 Outside Phone: 607-752-5342
E-mail: [EMAIL PROTECTED]


Alan Altmark/Endicott/IBM <[EMAIL PROTECTED]> on 09/04/2002 02:04:37 PM

Please respond to Linux on 390 Port <[EMAIL PROTECTED]>

Sent by: Linux on 390 Port <[EMAIL PROTECTED]>

To: [EMAIL PROTECTED]
cc:
Subject: Re: The redpaper for cloning zLinux images via VQDIO is available



On Wednesday, 09/04/2002 at 12:53 AST, Tung-Sing Chong/Endicott/IBM@IBMUS
wrote:
> Phil,
>
> I was running the cloning demo (using IUCV) in the last Linuxworld. The
> demo was running on a z800  with the latest shark. The images that was
> created have 64m virtual memory and 150 cylinders R/W "/", 100 cylinders
> swap disk,  R/O /usr  and R/O /usr/src. The first images take about 15
> seconds. Through out the day we created more than 500 images and it
> average out between 30-45 seconds.
>
> The redpaper using larger minidisk and VQDIO and I have not have a
> chance to running on the same z800 with the shark.

Chong, I'm going to go out on a limb here and assume that by "VQDIO" you
mean z/VM V4R3 guest LANs with virtual OSA Express (QDIO mode).

To Phil:  The amount of time to clone is highly dependent (naturally) on
the DASD subsystem.  Different technologies can and do yield drastically
different times.  So, asking "what's the fastest" is different than
"what's the fastest Chong achieved using his environment and techniques".

Alan Altmark
Sr. Software Engineer
IBM z/VM Development



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Dave Jones

- Original Message -
From: "Phil Payne" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, September 04, 2002 1:27 PM
Subject: Re: The redpaper for cloning zLinux images via VQDIO is available


> > To Phil:  The amount of time to clone is highly dependent (naturally) on
> > the DASD subsystem.  Different technologies can and do yield drastically
> > different times.  So, asking "what's the fastest" is different than
> > "what's the fastest Chong achieved using his environment and techniques".
>
> Yes, but they were two separate questions:
>
> a) How long does it take at present - answered.
>
> b) Are there plans to support Snapshot/Flashcopy/ShadowImage copying?
>

See http://www.storagetek.com/prodserv/products/software/svan/ for details
on a product that does exactly that.

> Obviously the creation of the DASD images is currently the largest
> component, by an order of magnitude.  Exploitation of the
> near-instantaneous copy facilities supported by current DASD subsystems
> would convert this 'quick' process into one that could be used, e.g.,
> even on a transaction-by-transaction basis.  A completely fresh Linux
> system for each transaction?
>
> Given the length of time it takes to establish a Linux environment on a
discrete server, this
> kind of 'throw-away' Linux system is a powerful differentiator.
>
A very powerful differentiator, indeed...

Dave Jones
Sine Nomine Associates
Houston, TX



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread John Campbell

From: Phil Payne
Sent by: Linux on 390 Port
Date: 09/04/2002 02:27 PM
Subject: Re: [LINUX-390] The redpaper for cloning zLinux images via VQDIO is available

> To Phil:  The amount of time to clone is highly dependent (naturally) on
> the DASD subsystem.  Different technologies can and do yield drastically
> different times.  So, asking "what's the fastest" is different than
> "what's the fastest Chong achieved using his environment and techniques".

Yes, but they were two separate questions:

a) How long does it take at present - answered.

b) Are there plans to support Snapshot/Flashcopy/ShadowImage copying?

Obviously the creation of the DASD images is currently the largest
component, by an order of magnitude.  Exploitation of the
near-instantaneous copy facilities supported by current DASD subsystems
would convert this 'quick' process into one that could be used, e.g.,
even on a transaction-by-transaction basis.  A completely fresh Linux
system for each transaction?

Given the length of time it takes to establish a Linux environment on a
discrete server, this kind of 'throw-away' Linux system is a powerful
differentiator.


  It'd be nice if the DASD boxes could have a "copy on write" feature
  akin to the Linux memory manager; this kind of technology would be
  reasonable for handling things like R/O images.  Or am I confusing
  this with GPFS?
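The copy-on-write idea John mentions can be illustrated with a toy model. This is a minimal Python sketch of the concept only, not any DASD vendor's actual implementation: a snapshot is "instant" because it shares the source's blocks, and back-end space is consumed only when one side writes to a block.

```python
class CowVolume:
    """Toy copy-on-write volume: blocks are shared until written,
    then duplicated privately (the 'delta' map)."""

    def __init__(self, blocks):
        self._base = blocks   # shared, never modified in place
        self._delta = {}      # privately rewritten blocks

    def snapshot(self):
        # "Instant" copy: the child shares the parent's base blocks
        # and gets a cheap copy of the delta map -- no data movement.
        child = CowVolume(self._base)
        child._delta = dict(self._delta)
        return child

    def read(self, i):
        # Prefer the private copy if this block has diverged.
        return self._delta.get(i, self._base[i])

    def write(self, i, data):
        # Only now is back-end space consumed for this block.
        self._delta[i] = data


vol = CowVolume(["boot", "usr", "home"])
snap = vol.snapshot()        # instantaneous, no blocks copied
vol.write(1, "usr-patched")  # diverge one block on the original
# snap.read(1) still sees the old "usr"; vol.read(1) sees the new data
```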



John R. Campbell, Speaker to Machines (GNUrd)  {813-356|697}-5322
Adsumo ergo raptus sum
IBM Certified: IBM AIX 4.3 System Administration, System Support



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Phil Payne

> To Phil:  The amount of time to clone is highly dependent (naturally) on
> the DASD subsystem.  Different technologies can and do yield drastically
> different times.  So, asking "what's the fastest" is different than
> "what's the fastest Chong achieved using his environment and techniques".

Yes, but they were two separate questions:

a) How long does it take at present - answered.

b) Are there plans to support Snapshot/Flashcopy/ShadowImage copying?

Obviously the creation of the DASD images is currently the largest component,
by an order of magnitude.  Exploitation of the near-instantaneous copy
facilities supported by current DASD subsystems would convert this 'quick'
process into one that could be used, e.g., even on a
transaction-by-transaction basis.  A completely fresh Linux system for each
transaction?

Given the length of time it takes to establish a Linux environment on a
discrete server, this kind of 'throw-away' Linux system is a powerful
differentiator.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Alan Altmark

On Wednesday, 09/04/2002 at 12:53 AST, Tung-Sing Chong/Endicott/IBM@IBMUS
wrote:
> Phil,
>
> I was running the cloning demo (using IUCV) at the last LinuxWorld. The
> demo was running on a z800 with the latest Shark. The images that were
> created had 64M virtual memory, a 150-cylinder R/W "/", a 100-cylinder
> swap disk, R/O /usr, and R/O /usr/src. The first image took about 15
> seconds. Throughout the day we created more than 500 images, and it
> averaged out between 30-45 seconds.
>
> The redpaper uses larger minidisks and VQDIO, and I have not had a
> chance to run it on the same z800 with the Shark.

Chong, I'm going to go out on a limb here and assume that by "VQDIO" you
mean z/VM V4R3 guest LANs with virtual OSA Express (QDIO mode).

To Phil:  The amount of time to clone is highly dependent (naturally) on
the DASD subsystem.  Different technologies can and do yield drastically
different times.  So, asking "what's the fastest" is different than
"what's the fastest Chong achieved using his environment and techniques".

Alan Altmark
Sr. Software Engineer
IBM z/VM Development



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Dave Jones

As with all questions about performance, the answer is: "it depends"
(tm--Bill Bitner, IBM Endicott). The time it takes to clone a Linux image
can be broken down into the following parts:
1) create the new virtual machine definition
2) copy whatever read/write disks are needed
3) do the Linux network configuration for the new Linux image

Steps 1 and 3 usually take at most 1 or 2 seconds of processor time, even on
the smallest S/390 systems, as these steps don't involve much work. Step 2,
copying the read/write disk(s), can take anywhere from a few tens of seconds
to 10 or 20 minutes, depending on the disk hardware in use and the method of
disk copying employed (DDR, PIPE trackread/trackwrite, etc.). If the disks
are "small" and the DASD hardware a Shark or similar caching device, then
the chances are good that the entire disk will fit inside the device's cache,
making copying it very fast the 2nd and later times. If the device is a
real 3380/3390 and the disk is an entire pack, then the copy time will be
measured in minutes, not seconds. On the other hand, newer devices (e.g.,
StorageTek's DASD systems) support an almost "instant" or "flashcopy"
function, so that creating a new Linux clone image can be completed in 1-2
seconds on a z800 processor.
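The three steps above can be sketched as a toy driver. Every helper name here (define_guest, copy_minidisk, configure_network) is a hypothetical placeholder for the real work (a directory entry via DIRMAINT or similar, a DDR or CMS Pipelines disk copy, per-guest network setup), not an actual API:

```python
def define_guest(name):
    # Step 1: create the new virtual machine definition.
    # Cheap: a second or two of processor time.
    return f"USER {name} ..."

def copy_minidisk(vdev, method="pipe-trackread"):
    # Step 2: copy one read/write minidisk. This dominates total clone
    # time: tens of seconds (cached Shark) to many minutes (a full
    # 3380/3390 pack via DDR), or 1-2 seconds with an instant-copy
    # (snapshot/flashcopy) capable subsystem.
    return (vdev, method)

def configure_network(name, ip):
    # Step 3: per-clone network configuration. Also cheap.
    return {"host": name, "ip": ip}

def clone(name, rw_disks, ip):
    # Run the three steps in order and return what was done.
    return [
        ("define", define_guest(name)),
        ("copy", [copy_minidisk(md) for md in rw_disks]),
        ("network", configure_network(name, ip)),
    ]

plan = clone("LINUX01", ["0191", "0201"], "10.0.0.42")
```

The point of the sketch is simply that steps 1 and 3 are fixed small costs, while the cost of step 2 is entirely a property of the DASD subsystem and copy method chosen.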

Hope this helps.

Dave Jones
Sine Nomine
Houston, TX

- Original Message -
From: "Phil Payne" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, September 04, 2002 11:32 AM
Subject: Re: The redpaper for cloning zLinux images via VQDIO is available


> > The redpaper for cloning zLinux images by using VQDIO is available from
> > http://www.ibm.com/redbooks/abstracts/redp0301.html. This redpaper is
> > based on the LinuxWorld zLinux cloning example (using IUCV) that I
> > released on 5/5/2002
> > (http://www.vm.ibm.com/devpages/chongts/tscdemo.html).  Have Fun!
>
> Quick 'analyst' question (I'll download and read it when I have the
> time, but not until next week at the earliest):
>
> Roughly how quickly could a new Linux image be established using this
> technique on, e.g., a z800?
>
> --
>   Phil Payne
>   http://www.isham-research.com
>   +44 7785 302 803
>



Re: The redpaper for cloning zLinux images via VQDIO is available

2002-09-04 Thread Phil Payne

> I was running the cloning demo (using IUCV) at the last LinuxWorld. The
> demo was running on a z800 with the latest Shark. The images that were
> created had 64M virtual memory, a 150-cylinder R/W "/", a 100-cylinder
> swap disk, R/O /usr, and R/O /usr/src. The first image took about 15
> seconds. Throughout the day we created more than 500 images, and it
> averaged out between 30-45 seconds.

Are there any plans to use flash-copy-style functions to create the disks?
Theoretically you could get down to sub-second image creation.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803