Gregg-

I've often wondered if Solomon Short was a real person or just a character David G. 
made up.

"An Optimist is just a pessimist with no job experience."  - Scott Adams
Gordon W. Wolfe, Ph.D. Boeing Enterprise Servers 425-865-5940

> ----------
> From:         Gregg C Levine
> Reply To:     Linux on 390 Port
> Sent:         Thursday, July 8, 2004 1:54 PM
> To:   [EMAIL PROTECTED]
> Subject:      Re: Linux under VM and Cloning
> 
> Hello from Gregg C Levine
> So far you all have good ideas. 
> 
> However, about that quote you chose, Gordon: yes, David did write it
> for his novel, as you've noted, except it was Solomon Short who said
> the actual quote. (And he's been suggesting that Solomon is a real
> person, no less!)
> -------------------
> Gregg C Levine [EMAIL PROTECTED]
> ------------------------------------------------------------
> "The Force will be with you...Always." Obi-Wan Kenobi
> "Use the Force, Luke."  Obi-Wan Kenobi
> 
> > -----Original Message-----
> > From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
> > Wolfe, Gordon W
> > Sent: Thursday, July 08, 2004 3:46 PM
> > To: [EMAIL PROTECTED]
> > Subject: Re: [LINUX-390] Linux under VM and Cloning
> > 
> > > > How much DASD does this really save you? Is it
> > > > worth the time and effort it takes to set this up
> > >
> > A lot of people do it this way.  I gave a paper on it at SHARE.  You
> > can see a copy at
> > 
> > http://linuxvm.org/present/SHARE101/S9343GWa.pdf
> > 
> > It covers a wide gamut of the problems of managing multiple Linux
> > servers running under VM.
> > 
> > Lately, however, we've begun to have second thoughts.  When we started
> > the scheme detailed in the presentation, we thought a whole 3390-3 was
> > a lot of DASD and were trying to save as much as possible.  Now that
> > we're running about 40 servers, maintenance is becoming a real pain in
> > the patootie.  Getting our users to allow us to take down their server
> > so we can change /usr disks is really hard.  More and more, our users
> > want full 24x7 availability.
> >         Our current offering is:
> >                 1500 cylinders for /, /opt, /bin, /var, etc.
> >                 3338 cylinders for /usr, read-only and shared
> >                 swap in V-DISK
> >                 a separate disk, as large as they need, for /home
> >                 a read-only disk with the Oracle code on it, plus a
> >                     separate LVM volume for Oracle databases, if they
> >                     want Oracle
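> >
> > To give the flavor, a hypothetical directory fragment for that
> > offering (userid, device addresses, block counts and volsers all
> > invented for illustration):
> >
> >     * 1500 cylinders for /, /opt, /bin, /var, etc.
> >     MDISK 0201 3390 0001 1500 LNX001 MR
> >     * read-only link to the shared 3338-cylinder /usr disk
> >     LINK LNXGOLD 0202 0202 RR
> >     * swap on VDISK
> >     MDISK 0300 FB-512 V-DISK 512000 MR
> >     * /home, sized as needed
> >     MDISK 0400 3390 0001 2000 LNX002 MR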
> > 
> > We are considering changing our offering when SLES9 comes out.  The
> > current proposal is to combine the / and /usr disks into one single
> > minidisk of about 5000 cylinders, half of a 3390-9, everything
> > read-write.  Then, for service, we can just use ssh to send commands
> > to the server, mount an NFS disk with all the RPMs, and install the
> > RPMs directly on the server.  We can even send ssh commands to recycle
> > various daemons.  The only thing we'd have to bring the server all the
> > way down for would be to bring in a new kernel.  What we give up to do
> > this is disk space: each server goes from 1500 cylinders to 5000
> > cylinders.  However, we're beginning to think that disk is cheaper
> > than labor, especially when it comes to dozens of servers to maintain.
> > YMMV.
> > 
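> > As a minimal sketch of that flow (server list, NFS server name, paths
> > and the daemon chosen are all made up; assumes root ssh keys are in
> > place):
> >
> >     for srv in $(cat servers.list); do
> >       ssh root@$srv 'mount -o ro nfssrv:/export/rpms /mnt &&
> >                      rpm -Fvh /mnt/updates/*.rpm &&
> >                      /etc/init.d/syslog restart &&
> >                      umount /mnt'
> >     done
> >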
> > Give it some thought.  What does your management want to spend its
> > money on?  Disk arrays or headcount?
> > 
> > "Good. Fast. Cheap.  Pick any two."   (David Gerrold, "A Matter for
> Men")
> > 
> > "An Optimist is just a pessimist with no job experience."  - Scott
> Adams
> > Gordon W. Wolfe, Ph.D. Boeing Enterprise Servers 425-865-5940
> > 
> > > ----------
> > > From:         David Boyes
> > > Reply To:     Linux on 390 Port
> > > Sent:         Thursday, July 8, 2004 7:35 AM
> > > To:   [EMAIL PROTECTED]
> > > Subject:      Re: Linux under VM and Cloning
> > >
> > > > We are going with a lot of Linux guests under VM, close to 20
> > > > per IFL, and are wondering about experiences with the
> > > > basevol/guestvol scenario. How many people actually use this
> > > > scenario?
> > >
> > > At least a dozen of our customers do. CA obviously does (see Bill's
> > > paper). It's proven to be a pretty good choice when you need a lot of
> > > fairly similar machines that don't change configuration too often.
> > >
> > > > How much DASD does this really save you? Is it
> > > > worth the time and effort it takes to set this up?
> > >
> > > It depends a lot on how well-behaved your applications are in terms
> > > of keeping all their files together. A lot of ISVs violate Mother's
> > > Second Rule (Thou Shalt Not Mix Your Code and System Code), which
> > > makes it more a maintenance issue than a disk space issue.
> > >
> > > If your application is well-behaved enough to keep all its pieces
> > > together, then it makes a fair amount of difference.
> > >
> > >
> > > > Could you
> > > > just setup Links to specific disks in VM for a Guest like if
> > > > you just wanted to share the binaries for say Oracle, or the
> > > > /usr directory or /home directory?
> > >
> > > No, because in the case of Oracle and a lot of the ISV software you
> > > mentioned, the application wants to dump code outside the directory
> > > that holds the main binaries. You also need write access to the RPM
> > > catalog and some other stuff (something I consider to be a design
> > > flaw in RPM -- no provision for concatenated user and system software
> > > catalogs).
> > >
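> > > (As an aside: rpm does accept an alternate database location via
> > > --dbpath, e.g.
> > >
> > >     rpm --dbpath /var/lib/rpm-local --initdb
> > >     rpm --dbpath /var/lib/rpm-local -ivh some-package.rpm
> > >
> > > but that replaces the system catalog rather than concatenating with
> > > it, which is exactly the flaw described above. Paths here are
> > > invented.)
> > >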
> > > /home is difficult because of the multi-system caching problem --
> > > virtual machines are really separate systems, and each keeps its own
> > > cache of R/W data, completely ignorant of the others. Without
> > > something like AFS or GFS to coordinate writes, you get bad
> > > corruption problems. The best solution there is to use NFS or one of
> > > the more sophisticated filesystems mentioned previously to handle
> > > /home.
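> > >
> > > For instance, each guest could take /home from an NFS server guest
> > > with an /etc/fstab line along these lines (server and export names
> > > invented):
> > >
> > >     nfssrv:/export/home  /home  nfs  rw,hard,intr  0 0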
> > >
> > > > Wouldn't this accomplish
> > > > the same thing? What would be the best way to go about this?
> > >
> > > See above. You're somewhat mixing up two problems: shared resources
> > > and operational maintenance.  The basevol/guestvol concept attempts
> > > to handle the operational maintenance problem (i.e., how do you
> > > distribute fixes and do configuration control). Sharing disks takes
> > > it into configuration management (how do I share binaries, but deal
> > > with the fact that the applications expect stuff to appear in places
> > > outside the directory holding the binaries).
> > >
> > > "Best" is a hard thing to define. What I'd suggest is:
> > >
> > > 1) Use the basevol/guestvol setup to manage the system configuration
> > > information and /usr, which tend not to be particularly volatile
> > > (changes are usually infrequent).
> > >
> > > 2) Set up a separate guest LAN and create some dedicated guests for
> > > the function of NFS file servers. Mount the "shared" application
> > > information via NFS, i.e., /opt, /usr/local, etc. Same thing for
> > > shared R/W filesystems like /home. (A sketch follows this list.)
> > >
> > > 2a) (more sophisticated variation) Set up a separate guest LAN and
> > > use some dedicated guests as AFS servers. This takes more
> > > sophistication at the start, but it's more scalable and secure than
> > > NFS, and allows better disk management from an enterprise standpoint.
> > >
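> > > As a rough sketch of 2) under z/VM (LAN, guest and network names
> > > invented; check the CP syntax for your VM level), define the guest
> > > LAN:
> > >
> > >     CP DEFINE LAN LNXLAN OWNERID SYSTEM TYPE QDIO
> > >
> > > give each guest a virtual NIC coupled to it via a directory
> > > statement:
> > >
> > >     NICDEF 0600 TYPE QDIO LAN SYSTEM LNXLAN
> > >
> > > and export the shared trees from the NFS server guest (/etc/exports):
> > >
> > >     /export/opt   10.0.1.0/24(ro)
> > >     /export/home  10.0.1.0/24(rw,sync)
> > >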
> > > > Does SUSE support basevol/guestvol out of the box, or are there
> > > > some packages that need installing to make this work?
> > >
> > > Not out of the box -- it's a configuration decision, not a code
> > > thing.  It requires modifying two of the SuSE startup scripts to
> > > overmount the common /etc with the system-specific one during boot.
> > >
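> > > Very roughly, the overmount amounts to something like this early in
> > > boot (device name and mount point invented; the real change lives in
> > > those two SuSE init scripts):
> > >
> > >     # bring the guest-specific minidisk online, then bind its /etc
> > >     # over the shared read-only copy
> > >     mount /dev/dasdb1 /guestvol
> > >     mount --bind /guestvol/etc /etc
> > >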
> > > > How stable is it running under this basevol/guestvol scenario?
> > >
> > > Very. Maintenance planning is the key to making this work. Day to
> > > day, it's a no-brainer.
> > >
> > > I'd venture to say that if more ISVs used a method like this, we'd
> > > finally be able to stamp out the assumption that it's OK to scatter
> > > files around the filesystem, and end up with a lot better world.  I'm
> > > *very* glad that CA has adopted this approach. Now we need to work on
> > > IBM, and Tivoli, and ...
> > >
> > > 8-(
> > >
> > > -- db
> > >

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
