Re: Max # 3390s

2010-01-25 Thread Mauro Souza
Wow... what an amazingly high number of dasds... Makes me crazy...

Some time ago I saw a box with 400GB of storage on 3390-3s, and each dasd
was divided in 3. Linux loaded them all, but took some 30 minutes to boot.
But 8TB is a whole different figure... I guess your boot would take half a
day... If fsck kicks in, half a month...
In z/VM you can have 64K addresses (0000 to FFFF), and that would be more
than enough for you. But as Linux uses 8-bit device minor numbers, you
will end up with at most 256 PVs in each VG, as Mark said. AND you can
have at most 99 VGs.
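To put those limits in perspective, here's a quick back-of-the-envelope sketch in Python. The capacities are approximate raw 3390 figures (15 tracks per cylinder, 56,664 bytes per track); filesystem and LVM overhead will reduce the usable numbers:

```python
# Raw 3390 geometry: 15 tracks/cylinder, 56,664 bytes/track.
BYTES_PER_CYL = 15 * 56664            # 849,960 bytes per cylinder
MOD9_BYTES = 10017 * BYTES_PER_CYL    # 3390-9:  ~8.5 GB raw
MOD54_BYTES = 60102 * BYTES_PER_CYL   # 3390-54: ~51.1 GB raw

MAX_PVS_PER_VG = 256                  # the 8-bit minor-number limit above

# Largest single VG you could build from each model:
print(f"mod 9 VG cap:  {MAX_PVS_PER_VG * MOD9_BYTES / 1e12:.1f} TB")   # ~2.2 TB
print(f"mod 54 VG cap: {MAX_PVS_PER_VG * MOD54_BYTES / 1e12:.1f} TB")  # ~13.1 TB
```

So an 8TB database can't fit in one VG built from mod 9s, but fits easily in a single VG of mod 54s.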


What about building a JBOD? Or an LVM JBOD?
What about RAID-0?
What about Drivespace? Stacker? No, just kidding...


Really, can't you find any way to reformat your storage to create mod 54's
and export/import the database? Messing with such a large pool of dasds may
work, but it may end up working about as well as Gnome 2.2 runs on my ancient
Pentium 166. Yeah, I have one, and IT DOES run Gnome...

1200 dasds? Sounds crazy...

Mauro
http://mauro.limeiratem.com - registered Linux User: 294521
Scripture is both history, and a love letter from God.


On Mon, Jan 25, 2010 at 10:45 PM, Lee Stewart <
lstewart.dsgr...@attglobal.net> wrote:

> Thanks to all
>
> That's enough to sway the sales guys... And alas, we only have mod 9s
> to work with.  If we had mod 54s, I'd consider it...  But there is
> the SAN space...
>
> Thanks,
> Lee
>
>
> Marcy Cortes wrote:
>
>> (snip)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Max # 3390s

2010-01-25 Thread Lee Stewart

Thanks to all

That's enough to sway the sales guys... And alas, we only have mod 9s
to work with.  If we had mod 54s, I'd consider it...  But there is
the SAN space...

Thanks,
Lee

Marcy Cortes wrote:

(snip)




--

Lee Stewart, Senior SE
Sirius Computer Solutions
Phone: (303) 996-7122
Email: lee.stew...@siriuscom.com
Web:   www.siriuscom.com



Re: Max # 3390s

2010-01-25 Thread Marcy Cortes
Wow.  That's way too many :)
Of course, 64K should be enough for anybody.
I hope we have bigger disk sizes before we need 250,000.
I'd think you'd have a 64K limit under z/VM.
That's still too many :)




Marcy 

"This message may contain confidential and/or privileged information. If you 
are not the addressee or authorized to receive this for the addressee, you must 
not use, copy, disclose, or take any action based on this message or any 
information herein. If you have received this message in error, please advise 
the sender immediately by reply e-mail and delete this message. Thank you for 
your cooperation."


-Original Message-
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of Rich 
Smrcina
Sent: Monday, January 25, 2010 3:15 PM
To: LINUX-390@vm.marist.edu
Subject: Re: [LINUX-390] Max # 3390s

The device drivers manual tells all, but I think the number is somewhere
north of 250,000 dasd devices.



Re: 2010 IBM System z Technical Conferences, zExpo, Technical University

2010-01-25 Thread Pamela Christina (It has stopped raining)
| Fixed typo.

| Answers to some of the questions that were asked.
|
| Renaming:  as I understand it, many of the conferences are renamed
|  to Technical University.
|
| Registration: z Boston TU is not open for enrollment yet so I don't
|  know the registration fee.  Sorry.
|
- Note follows --
Subject: 2010 IBM System z Technical Conferences, zExpo, Technical University
Hi everyone,
Someone asked on IBM-MAIN about the tech conferences--sorry I should
have posted this sooner. My boo-boo.

Here are the dates for two upcoming IBM System z Technical Conferences
(renamed to Technical University).

Here's the link to the events calendar where you can find these
conferences and many other events:
 http://www.vm.ibm.com/events/

 IBM System z Technical University (formerly Technical Conference)
 May 17-21, 2010
 Berlin, Germany - Hotel Berlin
 (Open for enrollments)
 http://www.ibm.com/training/conf/europe/systemz


|  IBM System z Technical University (fondly known as zExpo)
 October 4-8, 2010
 Boston, MA - Boston Marriott Copley Place
 (enrollment/registration coming soon)
 http://www.ibm.com/training/us/conf/systemz

Also, remember that you have tech ed opportunities at SHARE and WAVV

   SHARE - March 14-19, 2010 in Seattle
http://www.share.org

   WAVV  - April 9-13, 2010, Covington, KY (near Cincinnati)
 (held over a weekend, minimizes time away from the office)
http://www.wavv.org

   SHARE - August 1-6, 2010 in Boston, MA
http://www.share.org



Regards,
Pam C



Re: Max # 3390s

2010-01-25 Thread Marcy Cortes
First, give up on mod 9's and do 54's for that stuff :)

Our biggest is about 5TB.  It's about 110 mod 54's.   It's divided into 6 file 
systems on 6 different volume groups.   (Not a DB, but just files).

We did adjust the boot time interval of fsck so that all 6 don't get fsck'd on 
the same startup.  Takes about 10 minutes on a file system at boot time (and I 
don't know if that is because of the size or the number of files).
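The stagger Marcy describes can be set per filesystem with tune2fs; a sketch, assuming ext3, with hypothetical device names and illustrative mount counts/intervals:

```
# Give each filesystem a different forced-fsck schedule so they
# don't all get checked on the same boot:
tune2fs -c 30 -i 30d /dev/vgdb01/lv01
tune2fs -c 36 -i 36d /dev/vgdb02/lv01
# ... and so on for the remaining volume groups
```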

I don't think 8TB would be a problem.  I'm not sure I'd put it all in the same 
volume group though.   There probably isn't much of a need to, though, with a 
DB.


Marcy 



-Original Message-
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of Lee 
Stewart
Sent: Monday, January 25, 2010 2:54 PM
To: LINUX-390@vm.marist.edu
Subject: [LINUX-390] Max # 3390s

(snip)


Re: Max # 3390s

2010-01-25 Thread Mark Post
>>> On 1/25/2010 at 05:53 PM, Lee Stewart wrote:
> The catch is that the d/b is about 8TB, and to my rough math
> that seems like 1100-1200 3390 mod 9s.
> 
> Does Linux even support that many DASD devices?   Does LVM?

Now that I think about it some more, I'm pretty sure LVM has a limit of 256 PVs 
in a single VG.


Mark Post



Re: Max # 3390s

2010-01-25 Thread Rich Smrcina

The device drivers manual tells all, but I think the number is somewhere
north of 250,000 dasd devices.

On 01/25/2010 04:53 PM, Lee Stewart wrote:

(snip)




--
Rich Smrcina
Phone: 414-491-6001
http://www.linkedin.com/in/richsmrcina

Catch the WAVV! http://www.wavv.org
WAVV 2010 - Apr 9-13, 2010 Covington, KY



Re: Max # 3390s

2010-01-25 Thread Patrick Spinler

or about 150 mod 54s, if I calculate right.  That's still a very large
number, but much more manageable.   That being said, it's possible that
a direct SAN connection would be faster - at least test it and see for
your application.

Regarding the max # of volumes in Linux, I seem to recall that the scsi
driver imposed the first limit, by default 128 devices.  That could be
easily raised by a boot-time parameter, though.  Does the dasd driver
impose any max-number-of-devices limitation?

You might also find this article useful in planning large lvm logical
volumes:

http://www.walkernews.net/2007/07/02/maximum-size-of-a-logical-volume-in-lvm/

-- Pat

Lee Stewart wrote:
> (snip)




Re: Max # 3390s

2010-01-25 Thread Rodger Donaldson
On Mon, Jan 25, 2010 at 03:53:41PM -0700, Lee Stewart wrote:
> (snip)
> Does Linux even support that many DASD devices?   Does LVM?

LVM does, but only for certain values of "does".  By default it keeps
copies of the LVM metadata on every volume, which means that onlining,
offlining, and other LVM ops will take forever.
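One way to soften that, if your LVM2 version supports it (a sketch; the device names are hypothetical): keep full metadata copies on only a few designated PVs and skip the rest, so LVM operations don't have to read and rewrite metadata on every volume.

```
# Full metadata on a couple of designated PVs:
pvcreate --metadatacopies 1 /dev/dasdb1
# No metadata copy on the bulk of the volumes:
pvcreate --metadatacopies 0 /dev/dasdc1
```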

--
Rodger Donaldsonrodg...@diaspora.gen.nz
I just had this vision of a young boy cowering in terror, whispering:
"I see dumb people"
-- Steve VanDevender



Re: Max # 3390s

2010-01-25 Thread Mark Post
>>> On 1/25/2010 at 05:53 PM, Lee Stewart wrote:
> The catch is that the d/b is about 8TB, and to my rough math
> that seems like 1100-1200 3390 mod 9s.
> 
> Does Linux even support that many DASD devices?   Does LVM?

I can't say for sure, but I can pretty much guarantee that system startup will 
_really_ stink, and then when LVM startup happens, it will get worse.  Plus, 
any LVM query commands that are issued later will take forever.  Just don't go 
there.


Mark Post



Max # 3390s

2010-01-25 Thread Lee Stewart

Hi all...
We're working with a customer to whom someone has suggested that, as
we move their Oracle d/b from brand x to Linux on z, we also move
the actual d/b from their old SAN box(es) to mainframe disk (3390
images).  The catch is that the d/b is about 8TB, and to my rough math
that seems like 1100-1200 3390 mod 9s.
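A sketch of the rough math, using raw 3390 geometry (15 tracks/cylinder at 56,664 bytes per track; real counts run higher once formatting, filesystem, and LVM overhead are subtracted, which is how roughly 940 raw mod 9s becomes an 1100-1200 estimate):

```python
import math

BYTES_PER_CYL = 15 * 56664       # 849,960 bytes per 3390 cylinder
MOD9 = 10017 * BYTES_PER_CYL     # 3390-9:  ~8.5 GB raw
MOD54 = 60102 * BYTES_PER_CYL    # 3390-54: ~51.1 GB raw
DB = 8e12                        # the 8 TB database

print(math.ceil(DB / MOD9))      # 940 mod 9s (raw)
print(math.ceil(DB / MOD54))     # 157 mod 54s (raw)
```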

Does Linux even support that many DASD devices?   Does LVM?

And yes, we are trying to reverse that decision and put the Linux OSes
and Oracle code and swap space on 3390s, but put the d/b and DBA work
spaces on the SAN (where big things fit better).

Lee
--

Lee Stewart, Senior SE
Sirius Computer Solutions
Phone: (303) 996-7122
Email: lee.stew...@siriuscom.com
Web:   www.siriuscom.com



Re: "The Naked Mainframe" (Forbes Security Article)

2010-01-25 Thread Leslie Turriff
On Monday 25 January 2010 11:25:30 Shockley, Gerard C wrote:
> Here is an experiment.
>
> Go here http://cve.mitre.org/compatible/vulnerability_management.html
>
> Click search >
>
> Enter : s390x
>
> > You will receive a page asking you to check spelling. Try zlinux also.
>
> Then enter : windows
>
> Any questions
>
> ://Gerard

From the CVE FAQ at the above page:

A10. Does CVE contain all vulnerabilities and exposures?

No. The intention of CVE is to be comprehensive with respect to all
*publicly known* vulnerabilities and exposures. While CVE is designed to
contain mature information, our primary focus is on identifying
vulnerabilities and exposures that are detected by security tools and any
new problems that become public, and then addressing any older security
problems that require validation.

(Emphasis is mine.) :-)

Leslie



2010 IBM System z Technical Conferences, zExpo, Technical University

2010-01-25 Thread Pamela Christina (a rainy day in Endicott)
Hi everyone,
Someone asked on IBM-MAIN about the tech conferences--sorry I should
have posted this sooner. My boo-boo.

Here are the dates for two upcoming IBM System z Technical Conferences
(renamed to Technical University).

Here's the link to the events calendar where you can find these
conferences and many other events:
 http://www.vm.ibm.com/events/

 IBM System z Technical University (formerly Technical Conference)
 May 17-21, 2010
 Berlin, Germany - Hotel Berlin
 (Open for enrollments)
 http://www.ibm.com/training/conf/europe/systemz


 IBM System z Technical University (fondly known as zExpo)
 October 4-8, 2010
 Boston, MA - Boston Marriott Copley Place
 (enrollment/registration coming soon)
 http://www.ibm.com/training/us/conf/systemz

Also, remember that you have tech ed opportunities at SHARE and WAVV

   SHARE - March 14-19, 2010 in Seattle
http://www.share.org

   WAVV  - April 9-13, 2010, Covington, KY (near Cincinnati)
 (held over a weekend, minimizes time away from the office)
http://www.wavv.org

   SHARE - August 1-6, 2010 in Boston, MA
http://www.share.org



Regards,
Pam C



Re: OOM-Killer shut down SSh

2010-01-25 Thread Rob van der Heij
On Mon, Jan 25, 2010 at 7:00 PM, Mark Post  wrote:
> On 1/25/2010 at 12:53 PM, Robert Giordano wrote:
>
>>
>>
>> Can anyone provide any documentation or opinion on the use and management
>> of the OOM-Killer process on SUSE Linux SLES 10 SP1?
>
> Not really.  Its algorithms are hard-coded into the kernel.  I'm not aware
> of any way to influence it, other than adding virtual storage to the system, 
> either via paging space, or memory.  I will provide an opinion on running 
> SLES10 SP1.  Don't.  It's been out of service quite a while now.

I don't think there's a good reason for driving your Linux server at
the edge. Folks with discrete servers get enough real memory that they
don't need to swap, so whatever interesting rules of thumb they have
for sizing that (unused) swap space don't really matter.
When you squeeze the penguins so that they do swap, you must look at
sizing. As long as you don't use the VDISK, there's hardly a cost for
having it. When your server starts to use it, you investigate what
changed and make your adjustments in a planned outage.

The most common case for Linux running out of memory that I see with
customers are configuration mistakes that made the server start
without swap disk and failure to set up the performance monitor to
detect that and issue alerts...

When the application is allocating huge amounts of memory, there's
little help possible. It will probably run out of swap space whether
you define 1G or 2G. If you have services running that you don't mind
getting kicked off for some reason, then maybe you should not run them
in the first place ;-)  When your infrastructure relies on some basic
provisioning processes to even recover the server, then you'd probably
want to have those killed last. But when running on z/VM you could
have more robust methods to reboot the server.

Rob
-- 
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/



Re: OOM-Killer shut down SSh

2010-01-25 Thread Alan Cox
On Mon, 25 Jan 2010 11:14:34 -0700
David Kreuter  wrote:

> My experience is with the effects of the OOM killer. Perhaps with Linux
> on a desktop it is ok as it may pick on non critical processes but in a
> virtual machine server environment it represents a drastic out of
> storage condition requiring immediate action. I have seen this a few
> times on Oracle servers and some emergency relief in the form of
> dynamically adding swap helped but eventually the servers needed to be
> rebooted with storage adjustments to virtual machine size and Oracle
> SGA.  It was unpleasant, like stopping a broken dam with a stick of gum.

Firstly: you can configure the machine to forbid overcommit. That was a
much-demanded feature Red Hat added a long time ago and contributed back.
On PC servers, with RAM and disk being cheap, that makes a lot of sense -
then the process gets out-of-memory errors on allocation, not OOM (but
you'll need more swap 'just in case'). Not sure how it plays out on a
mainframe.

Secondly: on a modern kernel you can weight processes for killing.
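For reference, the strict-overcommit switch Alan mentions is a sysctl, and per-process weighting on kernels of that era was done through /proc. The ratio value here is illustrative; tune it to your RAM/swap mix:

```
# /etc/sysctl.conf -- forbid overcommit (strict accounting):
vm.overcommit_memory = 2   # allocations fail instead of invoking the OOM killer
vm.overcommit_ratio = 80   # percent of RAM (plus all swap) that may be committed

# Shield a critical process from the OOM killer (older kernels; the PID is hypothetical):
#   echo -17 > /proc/1234/oom_adj
```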

Ask your vendor for advice and how much of the stuff is in your system.

Alan



Re: OOM-Killer shut down SSh

2010-01-25 Thread David Kreuter
My experience is with the effects of the OOM killer. Perhaps with Linux
on a desktop it is ok as it may pick on non critical processes but in a
virtual machine server environment it represents a drastic out of
storage condition requiring immediate action. I have seen this a few
times on Oracle servers and some emergency relief in the form of
dynamically adding swap helped but eventually the servers needed to be
rebooted with storage adjustments to virtual machine size and Oracle
SGA.  It was unpleasant, like stopping a broken dam with a stick of gum.

Interestingly enough it has happened twice after applying Oracle
patches! So beware of those patches!
David Kreuter 
VM RESOURCES LTD


 Original Message 
Subject: OOM-Killer shut down SSh
From: Robert Giordano 
Date: Mon, January 25, 2010 12:53 pm
To: LINUX-390@VM.MARIST.EDU




Can anyone provide any documentation or opinion on the use and
management
of the OOM-Killer process on SUSE Linux SLES 10 SP1?

Regards,

Robert Giordano
System z IT Architect
IBM Sales & Distribution, Software Sales
930 Sylvan Ave, Englewood Cliffs 07632-3301, USA
Phone: +1-201-607-8047
Mobile: +1-201-214-7466
e-mail: rgio...@us.ibm.com




Re: OOM-Killer shut down SSh

2010-01-25 Thread Mark Post
>>> On 1/25/2010 at 12:53 PM, Robert Giordano  wrote: 

> 
> 
> Can anyone provide any documentation or opinion on the use and management
> of the OOM-Killer process on SUSE Linux SLES 10 SP1?

Not really.  Its algorithms are hard-coded into the kernel.  I'm not aware of
any way to influence it, other than adding virtual storage to the system, 
either via paging space, or memory.  I will provide an opinion on running 
SLES10 SP1.  Don't.  It's been out of service quite a while now.


Mark Post



OOM-Killer shut down SSh

2010-01-25 Thread Robert Giordano



Can anyone provide any documentation or opinion on the use and management
of the OOM-Killer process on SUSE Linux SLES 10 SP1?

Regards,

Robert Giordano
System z IT Architect
IBM Sales & Distribution, Software Sales
930 Sylvan Ave, Englewood Cliffs 07632-3301, USA
Phone: +1-201-607-8047
Mobile: +1-201-214-7466
e-mail: rgio...@us.ibm.com




Re: "The Naked Mainframe" (Forbes Security Article)

2010-01-25 Thread David Boyes
On 1/25/10 12:01 PM, "Patrick Spinler"  wrote:

> I fear that while you're right, in the end it may not make a large
> difference.  Specifically, in our world, free or low up front cost often
> seems to trump most other considerations, including TCO over expected
> system lifetime.

No kidding. That's why I think the CERN ELFms project is so interesting.
Those tools could be very attractive (and we have s390x binaries available
to our support customers 8-)).
 
> As a result, I kind of expect to see linux/KVM hypervisor installations
> on Z spreading in a few years.

So do I, and that's not a bad thing. Harder than necessary, but not bad.
What we have to do is assume that we need to solve the management problems
and use the knowledge we have from z/VM to not reinvent the wheel. 40+ years
is a long time and a lot of knowledge to toss because it's "old".


> At least, if the idea of Z series
> virtualization doesn't get prematurely killed by commodity intel
> hardware virtualization first. :-(

They got the same management problems, only worse because they didn't design
for them. 

-- db



Re: "The Naked Mainframe" (Forbes Security Article)

2010-01-25 Thread Shockley, Gerard C
Here is an experiment.

Go here http://cve.mitre.org/compatible/vulnerability_management.html

Click search >

Enter : s390x

> You will receive a page asking you to check spelling. Try zlinux also.

Then enter : windows

Any questions

://Gerard 
 
-Original Message-
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of Patrick 
Spinler
Sent: Monday, January 25, 2010 12:01 PM
To: LINUX-390@vm.marist.edu
Subject: Re: "The Naked Mainframe" (Forbes Security Article)

(snip)



Re: "The Naked Mainframe" (Forbes Security Article)

2010-01-25 Thread Patrick Spinler
David Boyes wrote:
>
> (Snip discussion of management tools, and large advantage existing
> z/VM tools have v. non-existent or very poor linux / kvm ones)
>

I fear that while you're right, in the end it may not make a large
difference.  Specifically, in our world, free or low up front cost often
seems to trump most other considerations, including TCO over expected
system lifetime.

As a result, I kind of expect to see linux/KVM hypervisor installations
on Z spreading in a few years.  At least, if the idea of Z series
virtualization doesn't get prematurely killed by commodity intel
hardware virtualization first. :-(

-- Pat

