Re: Question about zVM and MF configuration

2012-01-18 Thread Michael Simms
Thanks to all for your input. I really appreciate it!

Take care,
Michael





Re: Question about zVM and MF configuration

2012-01-15 Thread Michael Simms
I want to take this time to thank all who have answered with suggestions and 
opinions. All have been very valuable and I thank you, bunches.
It turns out that those working behind my back scheduled a meeting with several 
zVM and 'zLinux' specialists from IBM, and, even including the 'gotchas' and 
'think about this...' items, they agreed with my original assessment and with 
those who have responded here.
Chalk it up to this: those with the experience sometimes know what they are 
talking about. :-) Not bitter, just find it humorous!! The IBM folks kept 
saying, '...but you've got an experienced VM system programmer there!' It felt 
good. Ha ha ha.

Again THANKS A BUNCH






Re: Question about zVM and MF configuration

2012-01-09 Thread David Boyes
 I have found myself having to defend the way we are now configured vs. a
 co-worker who has come back from class saying his instructor said we should
 run 2 LPARs, one with zVM and zVSE and the other to house our production
 zLinux and DB2 images.

This sounds like a recommendation from the time before z/VM could deal with 
LPARs containing mixed kinds of processors. 

As others have pointed out, there's no good technical reason to separate the 
two workloads (if you understand how to use the SET SHARE command in all its 
myriad features), but there might be a political reason to put a big wall 
between the two setups. There are some pricing advantages to having the Linux 
workload _really_ separated in terms of the price of z/VM and DB2, but with 
only 1 CPU you may not be able to take advantage of them, and it's probably not 
worth the effort and annoyance to try to share the CPU at the LPAR level.
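
To make that concrete, SET SHARE tuning looks roughly like the sketch below. 
The guest names are invented for illustration, and the exact options vary by 
release, so check the CP Commands and Utilities reference:

   CP SET SHARE VSEPROD ABSOLUTE 40% LIMITSOFT   <- soft cap at 40% of the LPAR
   CP SET SHARE LNXDB21 RELATIVE 300             <- 3x the default share of 100
   CP QUERY SHARE LNXDB21                        <- verify the setting

A guest with no explicit setting defaults to RELATIVE 100, so the usual habit 
is to weight the important guests up rather than cap everything else down.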

 I have tried to explain how mainframe architecture and zVM have been
 designed as a sharing environment while at the same time protecting against
 influences from any given guest machine, should the system be configured
 just right. I might have partially agreed with his instructor had zVM not
 come to support all manner of CPU in recent years, for example accommodating
 both CPs and IFLs. We are also on a limited budget and I don't know if we'd
 be able to purchase more storage or chpids. Based on my years' experience, I
 have poked, prodded and received advice for our system to the point where we
 have great performance today, for both traditional and non-traditional
 workloads.

Ain't broke, don't mess with it. One LPAR is probably the way to go with your 
configuration, and your reasoning is sound in that if/when you add capacity, 
everybody wins. The only argument for an additional LPAR would be if/when you 
plan to go to z/VM 6.2 and turn on SSI (essentially, that would turn your 
existing physical system into a 2-node cluster). The 2nd LPAR would help with 
availability in that case (move the workload over and take the other system out 
of the cluster for maintenance). But that's a different problem than the one 
originally proposed. Unless you give guest systems privileges beyond class G 
and don't look regularly at your performance monitor, there's no way you're 
going to be able to do something rude to the rest of the system from a single 
guest. The management simplification of having only one system to deal with is 
worth any risk that setup will entail.
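
To make the class G point concrete, a minimal directory entry for a 
locked-down guest might look like the sketch below; the name, password, sizes 
and device numbers are all placeholders:

   USER LNXDB21 XXXXXXXX 2G 4G G
      MACHINE ESA
      IPL 150
      CONSOLE 0009 3215
      MDISK 150 3390 0001 3338 LNX001 MR

The lone G at the end of the USER statement is the privilege class; a class G 
user can only issue commands that affect its own virtual machine.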


Re: Question about zVM and MF configuration

2012-01-04 Thread Barton Robinson
One LPAR works well: it shares storage, shares other resources, reduces 
overhead, reduces system programming effort, and reduces costs. And I can 
show you other mixed-mode LPARs with both VSE and Linux in the same LPAR. 
Was it a hardware vendor recommending increasing your hardware costs?


Michael Simms wrote:

I hope everyone had a safe and happy Holiday Season.

I need some guidance, advice and/or ammunition on an issue that has come up regarding mainframe configuration.  


I have found myself having to defend the way we are now configured vs. a 
co-worker who has come back from class saying his instructor said we should 
run 2 LPARs, one with zVM and zVSE and the other to house our production 
zLinux and DB2 images. My co-worker has no mainframe experience and does not 
know our hardware or complete software configuration. Apparently my co-worker 
fears VSE 'interference' with his zLinux images, and also fears that he would 
crash the zVM system. Not sure what he has in his plans that would cause such 
zVM instability.
 
We currently have: a z114 with 1 partial CP, 1 IFL, 24 GB storage (18/6), 
2 FICON cards, 2 OSA cards, and 1 zVM LPAR with both CPUs, running zVM V6.1, 
zVSE, zLinux of various SuSE flavors, and DB2 running in several or more of 
the zLinuxes. Don't know how many DB2 zLinux images yet. We already have a 
couple of production zLinux images and are exploring another set of zLinux 
images that would maybe use the zVSE VSAM Redirector and DB2. I suggest that 
we add to our current configuration, as it would better share resources such 
as memory and I/O. I also feel that it would be easier to manage 1 LPAR 
instead of 2 LPARs and all their various pieces and parts, which would also 
include zVM test machines and 2 test VSE machines.
 
I have tried to explain how mainframe architecture and zVM have been designed 
as a sharing environment while at the same time protecting against influences 
from any given guest machine, should the system be configured just right. I 
might have partially agreed with his instructor had zVM not come to support 
all manner of CPU in recent years, for example accommodating both CPs and 
IFLs. We are also on a limited budget and I don't know if we'd be able to 
purchase more storage or chpids. Based on my years' experience, I have poked, 
prodded and received advice for our system to the point where we have great 
performance today, for both traditional and non-traditional workloads.
 
Does anyone have suggestions/points to argue one way or the other? Do you 
have examples of something similar, one way or another, to what we have or 
will soon have? You probably would like some more input variables? Just let 
me know and I'll provide.


I appreciate any and all feedback!

Thanks.


Re: Question about zVM and MF configuration

2012-01-04 Thread Alan Altmark
On Wednesday, 01/04/2012 at 04:53 EST, Michael Simms
simmsmichael1...@yahoo.com wrote:

 I have tried to explain how mainframe architecture and zVM have been
 designed as a sharing environment while at the same time protecting against
 influences from any given guest machine, should the system be configured
 just right. I might have partially agreed with his instructor had zVM not
 come to support all manner of CPU in recent years, for example accommodating
 both CPs and IFLs. We are also on a limited budget and I don't know if we'd
 be able to purchase more storage or chpids. Based on my years' experience, I
 have poked, prodded and received advice for our system to the point where we
 have great performance today, for both traditional and non-traditional
 workloads.

 Does anyone have suggestions/points to argue one way or the other? Do you
 have examples of something similar, one way or another, to what we have or
 will soon have?

The issue around one vs. multiple LPARs is about control and paranoia
management.  Though just as high, the walls between virtual machines in a
single LPAR are more flexible than the walls between LPARs, so the effects
of other virtual machines are more easily felt.

If you have old Workload A that hums along at a predictable and consistent
utilization rate (CPU and memory) and then you introduce new Workload B that
has large spikes of demand at random times, you might want to isolate
Workload B while you develop some historical data, so that you can determine
whether the spikes would create a problem for Workload A.

Once you're happy with the resource consumption patterns, you can join the
workloads into a single LPAR.  It really depends on what you know about the
workloads, the resources you have available to feed them, and what the
consequences are if Workload B is hostile to Workload A.  (A new 2nd LPAR
will suck MIPS and memory out of the current LPAR.)

This is why folks have separate LPARs for Dev/Test vs. Pre-prod vs.
Production.  There are risks you simply may not be willing to take with
Production.

But that's all philosophical.  With only one CP and one IFL, I think I'd
try to limit myself to one LPAR.  Your VSE workload will run only on the
CP, and you can have Linux run only on the IFL.  Unless they do things that
require a lot of Master Processor work (e.g. spooling), they won't really
compete with each other on the CPU.  They *will* compete for chpids when
they are in the same LPAR.  In different LPARs, you can isolate chpids to
reserve bandwidth.
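
(If you want to watch how the two engine types are doing from a privileged
userid, the usual tools are QUERY PROCESSORS and INDICATE LOAD; the output
below is a made-up illustration, not from a real system:

   CP QUERY PROCESSORS
   PROCESSOR 00 MASTER    CP
   PROCESSOR 01 ALTERNATE IFL
   CP INDICATE LOAD

INDICATE LOAD reports utilization per processor, so a CP that is busy while
the IFL idles, or vice versa, shows up right away.)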

So at the end of the day, it's a considered choice, not an obvious one.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott



Re: Question about zVM and MF configuration

2012-01-04 Thread Tom Duerbusch
In my experience, it is Linux that interferes with zVSE, but that is only
in a scarce-memory configuration.

Consider that zVSE workloads really don't change the amount of memory
needed month to month.
However, add a WebSphere Linux image (or any Linux application that is
large by your standards) to a VM system that is actively using most of
its memory, and you page.  You page everyone else out.  Oops.
Did I do a SET RESERVED on my production guests?  No?
Did I have plenty of paging devices available for this new spike?  No?
What is the largest partition in VSE that will feel the effect of being
paged out?  CICS/TS.
Users are unhappy until the system settles down in 5-10 minutes.
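
(For reference, the knob in question is the CP SET RESERVED command. A rough
sketch, with an invented guest name; on releases of this vintage it takes a
count of 4 KB pages:

   CP SET RESERVED VSEPROD 262144    <- keep about 1 GB of VSEPROD resident
   CP QUERY RESERVED                 <- show current reserved settings

262144 pages x 4 KB = 1 GB.  Reserved frames are the last ones CP steals when
it starts paging, which is exactly the protection the CICS/TS scenario above
needed.)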

Other reasons for running a system in its own LPAR instead of under zVM:
1.  It needs more resources than zVM can supply (i.e. more than 256 GB of
real storage).
2.  It needs hardware that zVM can't manage (a sysplex timer, for example).
3.  If a guest is running at 90%+ CPU, you might want to move it out from
under zVM and dedicate processors to it.

If you are not having any problems with performance, then running most
things under a single LPAR is great.

You might want to have a small LPAR for testing (VM installs, operator
training, etc.) or to provide Live Guest Relocation of your near-24x7
Linux images for when you apply maintenance or have any other scheduled
outage of your primary zVM system.
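
(For the LGR case, the relocation itself is driven with the VMRELOCATE
command.  A rough sketch, with invented guest and member names, and syntax
worth double-checking against the CP reference for your release:

   VMRELOCATE TEST LINUX01 MEMBER2   <- check eligibility without moving
   VMRELOCATE MOVE LINUX01 MEMBER2   <- relocate the running guest

Running TEST first is the usual habit, since it reports anything that would
make the guest ineligible to move before you commit to it.)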

Tom Duerbusch
THD Consulting
 Michael Simms simmsmichael1...@yahoo.com 1/4/2012 3:50 PM 
[snip -- original note quoted in full earlier in the thread]



Re: Question about zVM and MF configuration

2012-01-04 Thread Marcy Cortes
Alan and Tom have very valid points.

Here are some more thoughts.

You may have security/audit reasons or management policy for not running test 
and production on the same HW.  (We don't even run them in the same state :)

You may also find that some production applications are much more sensitive to 
performance variability.  We choose not to overcommit memory in production 
but do overcommit in test, to a pretty high degree.

If availability is of utmost importance, you may choose to buy fancy HW to 
support that (local or remote replication).  That's $ you might not want to 
spend in a test environment, so that's another reason to separate them.

You may find disaster recovery is easier if you keep the production systems 
separate.

You may have a requirement to test out any VM maintenance for a certain time 
period before introducing (inflicting?) it upon the production environment 
(patch policy).  You can't do that well with only 1 VM system.

But given you have only 1 CP and 1 IFL, it doesn't sound like you have any of 
these concerns *yet*, and I would have a hard time suggesting that you do 
another LPAR with the resources available to you.

But you can put them in a PowerPoint and drop it in your boss's lap, so that 
when issues do arise you can always say, 'I told you so.'


Marcy 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/