Warning: another long post...
What we are looking for, I think, is "software virtualization", and it
just is not there yet.
It's about giving each participant the illusion that he has the entire
thing for himself, while under the covers you take advantage of the
architecture to use fewer resources th
On Thursday, 07/27/2006 at 02:46 AST, David Boyes <[EMAIL PROTECTED]>
wrote:
> > > >Then again, if you have a z9, you probably don't much care. 8-)
> > what about performance and CPU overhead (z/VM) in processing I/Os? Is
> > it an issue?
>
> It (the difference between dedicated and minidisk I/O)
On Jul 27, 2006, at 5:54 PM, Dominic Coulombe wrote:
Is there a reason to use SSH without encryption over telnet?
Just wondering.
X11 port forwarding, when you know the environment's reasonably
trustworthy, comes to mind immediately.
Passwordless key-based login for automation, if, again, you
Is there a reason to use SSH without encryption over telnet?
Just wondering.
On 27-Jul-2006, at 20:00, John Summerfield wrote:
There is a patch to openssh that allows you to turn encryption off; I
think it's been mentioned in TH's nahant list in the past three
months or so.
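With such a patch applied, the null cipher would typically be requested explicitly on the command line; note this is an assumption about how those patches are used, and a stock OpenSSH build will refuse it:

```shell
# Requires an OpenSSH client and server both built with a "none"
# cipher patch; unmodified builds will reject the cipher.
ssh -c none user@somehost
```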
---
Yes, you're right, I posted this a little quickly...
What I was thinking of was more like:
All of your machines share the same /usr disk; then you take the
master down, clone its /usr disk, apply patches to the new disk, and
then do a little testing on the results.
If everything goes right, your tak
If you want to remain supported by Red Hat, postfix and exim are the
other two MTAs they ship with RHEL4.
Mark Post
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Jon Brock
Sent: Thursday, July 27, 2006 2:08 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Small Mail Transport Agent
Long response here. Shared disk is the way to go!!
On Thu, 27 Jul 2006, MOEUR TIM C wrote:
> I'm pursuing an architecture for multiple guests under VM
> and I'd like to know if anyone else has done the same,
> or if this is just an accident waiting to happen. ...
Yes, done the same here. Acc
Rick Troth wrote:
On Thu, 27 Jul 2006, Yu Safin wrote:
If you are not trying to save disk (we use about 1 Gb for all system
files), why not use something simpler such as unison/rsync to keep all
your files synchronized to a master. That way, if the disk takes a
hit you won't see all your syste
I think Lea means:
For cluster takeover to work seamlessly, your application has to keep
session data in some common location between the servers. If that's the
case, then when the shutdown of the second server commences, it takes
itself out of the load balancer queue, completes whatever transacti
Nix, Robert P. wrote:
Not only would you have to shut down all the guests to introduce your maintenance
(although not during the actual "apply"; you could allocate new disks, copy the
old ones, and apply your maintenance there, then switch everybody over),
Robert
Can you check that your email
Dominic Coulombe wrote:
For example, you can share the /usr filesystem RO. When you apply a
patch on the main system, which owns the disk RW, your other machines ARE
NOT aware of the changes until you re-mount the filesystem on each Linux
machine.
I will put that a little more strongly:
> On 7/27/06, Marcy Cortes <[EMAIL PROTECTED]> wrote:
> > Did you find any equipment checks on the VM console? We had a
> > situation where Linux didn't do well at all under equipment checks
> > (some
> > looped, some hung, some crashed).
> I will ask the VM guy. He said that he could not see any mes
Stahr, Lea wrote:
A piece of cake! Use VMUTIL on VM to do the shutdowns and startups and
have the backups scheduled appropriately. Or get the CONTROL-M agent and
have that do it all from z/OS.
I don't understand how that addresses my concern.
Stahr, Lea wrote:
With clustering, you shut down
> We are running RHEL4 on these particular guests, and the default
MTA
> is sendmail. I would prefer to run something with a smaller memory
> footprint if I can; it seems rather pointless to take up much room for
> something which will only process a couple of messages per day.
Configure it
On Thu, 27 Jul 2006, Adam Thornton wrote:
> OTOH, unionfs is a much easier approach.
UNIONFS is cool. Way cool.
But it's not the only way to do shared disks.
Read only is really effective. It works. It also does require
some rolling up of the sleeves and care and feeding of vendors.
-- R;
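For reference, a unionfs mount along the lines Adam suggests might look like this; the paths are hypothetical and this assumes the classic unionfs 1.x module is loaded:

```shell
# Stack a small local writable layer over the shared read-only /usr;
# writes land in the rw branch, everything else reads through to the
# shared branch.
mount -t unionfs -o dirs=/local/usr-rw=rw:/usr=ro unionfs /mnt/usr
```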
---
Equipment checks coming from DASD. It was something called a flapping
link condition, whatever that means. Microcode patches to the z9 were
required, but we were told it could happen on other processors as well
(not that anyone here seemed to believe that ;).
Unfortunately, we didn't have the ti
On 7/27/06, Marcy Cortes <[EMAIL PROTECTED]> wrote:
Did you find any equipment checks on the VM console? We had a
situation where Linux didn't do well at all under equipment checks (some
looped, some hung, some crashed).
I will ask the VM guy. He said that he could not see any messages but
he
On 7/27/06, Rick Troth <[EMAIL PROTECTED]> wrote:
On Thu, 27 Jul 2006, Yu Safin wrote:
> If you are not trying to save disk (we use about 1 Gb for all system
> files), why not use something simpler such as unison/rsync to keep all
> your files synchronized to a master. That way, if the disk take
Did you find any equipment checks on the VM console? We had a
situation where Linux didn't do well at all under equipment checks (some
looped, some hung, some crashed).
Marcy Cortes
On Jul 27, 2006, at 1:47 PM, Nix, Robert P. wrote:
because you can't just use the tools supplied by the vendor to do
the maintenance; you have to do something extra to catch all the
extra fallout. I think, for this reason, most people have abandoned
the shared /usr concept, and are just allocatin
We are running z/VM 5.2, SLES 9, Oracle 10.2, and EMC Symmetrix.
Our SLES guest reported the following errors. The two DASD volumes are
DEDICATED; we use LVM without striping, with ext3 mount points used by
Oracle.
Jul 24 21:50:13 lnoaesd kernel: dasd_erp(3990): 0.0.2537: Overrun -
service overrun or
On Thursday 27 July 2006 16:50, Nix, Robert P. wrote:
>Actually, I don't think you want a shared filesystem r/w to any image while
> it is r/o to several other images. Subtle things change on a read-write
> disk; accessed dates get touched, and things in the directory float. These
> things could ma
Another great thing with a RO /usr is that you can harden the
permissions of some commands and be sure that your stuff stays
intact. And you are sure nobody installs their own stuff system-wide.
But you don't need to share a /usr to benefit from a RO /usr...
On 27-Jul-2006, at 16:47, Nix, Robert P.
On Thu, 27 Jul 2006, Yu Safin wrote:
> If you are not trying to save disk (we use about 1 Gb for all system
> files), why not use something simpler such as unison/rsync to keep all
> your files synchronized to a master. That way, if the disk takes a
> hit you won't see all your systems go down.
G
On 27-Jul-2006, at 16:25, Nix, Robert P. wrote:
Work-around: Start yast as "yast &", so that it runs in the
background, and leaves you with a command prompt.
This will only work if using the GUI version of YaST. If using the
CLI version (ncurses), just pop up a new terminal window to launch
t
On Wed, Jul 26, 2006 at 03:04:34PM -0400, Alan Altmark wrote:
> On Wednesday, 07/26/2006 at 01:27 EST, J Leslie Turriff
> <[EMAIL PROTECTED]> wrote:
> > Okay, now, wait; are you saying that the storage device _does_ have a
> > mechanism for communicating with the Linux filesystem to determine what
On Wed, Jul 26, 2006 at 01:27:06PM -0500, J Leslie Turriff wrote:
> Okay, now, wait; are you saying that the storage device _does_ have a
> mechanism for communicating with the Linux filesystem to determine what
> filesystem pages are still cached in main storage and have not yet been
> committed to
On Wed, Jul 26, 2006 at 12:50:09PM -0400, Alan Altmark wrote:
> You're right, however, and as we've been discussing, that these features
> can be misused or misinterpreted to provide an *application*-consistent
> view of the data. They don't do that. That applies to any operating
> system, not ju
On 7/27/06, Nix, Robert P. <[EMAIL PROTECTED]> wrote:
Not only would you have to shut down all the guests to introduce your maintenance
(although not during the actual "apply"; you could allocate new disks, copy the
old ones, and apply your maintenance there, then switch everybody over), you'd
On Wed, Jul 26, 2006 at 06:21:03PM +0200, Rob van der Heij wrote:
> On 7/26/06, Mark Perry <[EMAIL PROTECTED]> wrote:
>
> >One point not mentioned yet, is that FLASHCOPY is an asynchronous process.
> >You can start a FLASHCOPY operation and it *can* return an error status
> >asynchronously. 90+% of
Actually, I don't think you want a shared filesystem r/w to any image while it
is r/o to several other images. Subtle things change on a read-write disk;
accessed dates get touched, and things in the directory float. These things
could make your r/o systems unstable, even if you aren't actively
Not only would you have to shut down all the guests to introduce your
maintenance (although not during the actual "apply"; you could allocate new
disks, copy the old ones, and apply your maintenance there, then switch
everybody over), you'd also have to find a way of tracking changes the
mainte
Work-around: Start yast as "yast &", so that it runs in the background, and
leaves you with a command prompt. Activate and format the disks as normal, but
then drop back to your command prompt and, for each disk, enter the command
"fdasd -a /dev/dasd[abc...]". This will add a partition to the di
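In practice that work-around might look like the following; the device names are hypothetical:

```shell
# Start YaST in the background so the command prompt stays usable.
yast &
# After activating and formatting the disks in YaST, drop back to the
# shell and let fdasd's auto mode create one partition spanning each
# disk, non-interactively.
for d in dasdb dasdc dasdd; do
    fdasd -a /dev/$d
done
```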
For that purpose, I use postfix listening only on the localhost.
It was pretty simple to configure.
On 7/27/06, Jon Brock <[EMAIL PROTECTED]> wrote:
Any suggestions? Or am I heading in the wrong direction
entirely?
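A minimal /etc/postfix/main.cf fragment along those lines; the relay host name is hypothetical, and everything outbound is handed to the company gateway rather than delivered directly:

```
# Listen only on the loopback interface -- local submission only.
inet_interfaces = localhost
# Hand all outbound mail to the internal gateway.
relayhost = [mailhub.example.com]
```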
On Thursday, 2006-07-27 at 13:19 -0500, McKown, John wrote:
> True, but the MTA does not need to run on the same system as the MUA
> (email client).
Nor does it need to listen on the internet side. This is one reason the
default MTA setup on Red Hat boxes is not to listen to the internet
merely
> > >Then again, if you have a z9, you probably don't much care. 8-)
> what about performance and CPU overhead (z/VM) in processing I/Os? Is
> it an issue?
It (the difference between dedicated and minidisk I/O) is measurable,
but the impact is smaller on the z9 systems due to the increased
proces
> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On
> Behalf Of Jon Brock
> Sent: Thursday, July 27, 2006 1:08 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Small Mail Transport Agent
>
>
> I am planning on starting up TripWire and a couple of
> other things
On Jul 27, 2006, at 11:08 AM, Jon Brock wrote:
I can't think of any reason I would ever need to send any mail
from one of my VM guests to the world outside of our firewall -- I
would always be able to forward it from my Exchange email -- so any
small, efficient, easy to set up MTA should d
I am planning on starting up TripWire and a couple of other things on
some of my guests, and I want to have them mail their output to my company
email address. It appears that I need an MTA for this purpose.
We are running RHEL4 on these particular guests, and the default MTA i
On 7/26/06, Marcy Cortes <[EMAIL PROTECTED]> wrote:
>> >Negatives:
>> >Lose hardware IOASSIST feature for non-dedicated volumes. Not as
>> >important as it used to be, but still noticeable.
>> That's gone anyway on a z9, right?
>Not sure, but I think so. Most of the other assorted hardware assis
On Thursday, 07/27/2006 at 09:57 ZE2, Carsten Otte <[EMAIL PROTECTED]>
wrote:
> I am sorry, but I have to disagree with Alan's statement. They _are_
> currently dangerous to use with Linux volumes that are being accessed
> _because_ unlike dm-snapshot the filesystem is not frozen in Linux
> (lockfs
You won't get a lot of messages from YaST.
It would be easier to debug using these commands:
pvcreate to initialize the PVs (the disks, once formatted and partitioned)
vgcreate to create the VG
lvcreate to create the LV
mkfs to format the LV
See the man page or the LVM howto ( http
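Assuming the five formatted, partitioned 3390-9s from the original post (device names hypothetical), the command-line equivalent would be roughly:

```shell
# Initialize each partition as an LVM physical volume.
pvcreate /dev/dasdb1 /dev/dasdc1 /dev/dasdd1 /dev/dasde1 /dev/dasdf1
# Create the volume group with a 4M physical extent size.
vgcreate -s 4M system /dev/dasdb1 /dev/dasdc1 /dev/dasdd1 \
    /dev/dasde1 /dev/dasdf1
# Create a logical volume using all free extents (on older LVM2
# tools, pass the free-extent count from vgdisplay instead).
lvcreate -l 100%FREE -n test system
# Put a filesystem on it.
mkfs -t ext3 /dev/system/test
```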
It should work, but you will need to do some planning around doing updates
on the master copy (Bill Scully's paper on how CA does this is instructive
as to the various issues).
One thing that I've been tinkering with is whether this sort of
configuration is really more like setting up diskless cli
Hi Tim,
What you want to do is feasible, but requires good planning.
For example, you can share the /usr filesystem RO. When you apply a
patch on the main system, which owns the disk RW, your other machines ARE
NOT aware of the changes until you re-mount the filesystem on each Linux
machine
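A sketch of what each guest has to do to pick up the changes, assuming /usr comes from the shared minidisk; the device name is hypothetical:

```shell
# On each guest sharing /usr read-only: unmount and mount again so
# metadata cached from before the patch is discarded.
# Anything still using /usr must be stopped first, or the umount
# will fail; with files open, a reboot may be the safer route.
umount /usr
mount -o ro /dev/dasdc1 /usr
```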
Hello List,
I'm pursuing an architecture for multiple guests under VM and I'd like
to know if anyone else has done the same, or if this is just an accident
waiting to happen. I invite your thoughts, comments, and witty remarks.
Here's what I'm considering: I'd like to create multiple VM Linux
I'm attempting to create an LVM on SLES 10 (the recent GA DVD). I took
the following steps:
Activated 5 3390-9 devices
Formatted all 5 devices
Start LVM
Create volume group "system" with 4M Physical Extent size.
Add 5 physical volumes to group "system"
Add logical volume 'test' with size=max and
A piece of cake! Use VMUTIL on VM to do the shutdowns and startups and
have the backups scheduled appropriately. Or get the CONTROL-M agent and
have that do it all from z/OS.
Lea Stahr
Sr. System Administrator
Linux/Unix Team
630-753-5445
[EMAIL PROTECTED]
-Original Message-
From: Linux
Funny thing, testing! I tested it and it worked four times in a row. Then,
when I actually needed it, it failed. Thank you, fuzzy backups!
Lea Stahr
Sr. System Administrator
Linux/Unix Team
630-753-5445
[EMAIL PROTECTED]
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] O
J Leslie Turriff wrote:
Sounds to me, then, like the
snapshot/mirror/peer-to-peer copy features of storage devices, e.g.
Shark, SATABeast, etc., are currently dangerous to use with Linux
filesystems. They would need to be able to coordinate their activities
with the filesystem lock
Carsten Otte wrote:
Fargusson.Alan wrote:
I agree. I think you should make your backups with the Linux system down. You
should test this to make sure that there is not some other operational error
causing problems.
I think we got close to the bottom of the stack now: If one can take
down
Alan Altmark wrote:
On Wednesday, 07/26/2006 at 01:27 EST, J Leslie Turriff
<[EMAIL PROTECTED]> wrote:
Okay, now, wait; are you saying that the storage device _does_ have a
mechanism for communicating with the Linux filesystem to determine what
filesystem pages are still cached in main storage
Stahr, Lea wrote:
With clustering, you shut down one image and do an OFFLINE backup while
the application runs on the second image. Then bring up the primary
image and shutdown the secondary system for backup.
which sounds every bit as tricky to me as getting good backups from a
live Linux sys
On Wed, Jul 26, 2006 at 07:07:49PM +0200, Mark Perry wrote:
> Rob van der Heij wrote:
> >On 7/26/06, Brian France <[EMAIL PROTECTED]> wrote:
> >
> >>I have an image that is 1280m. Its swap space is 464m. Is that a
> >>good ratio? The image has chewed up according to Perftoolkit 75% of
> >>it.
J Leslie Turriff wrote:
> Okay, now, wait; are you saying that the storage device _does_ have a
> mechanism for communicating with the Linux filesystem to determine what
> filesystem pages are still cached in main storage and have not yet been
> committed to external storage?
No, it does not. Invent
> On Wednesday, 07/26/2006 at 10:33 EST, J Leslie Turriff
> <[EMAIL PROTECTED]> wrote:
>> Sounds to me, then, like the
>> snapshot/mirror/peer-to-peer copy features of storage devices, e.g.
>> Shark, SATABeast, etc., are currently dangerous to use with Linux
>> filesystems. They would nee