Re: zvm directions

2011-05-26 Thread Rob van der Heij
On Thu, May 26, 2011 at 5:06 AM, Philip Tully tull...@optonline.net wrote:
 With all due respect: Contacting our IBM rep under NDA does not fit "public
 road map".

 I think the customers are letting IBM know, that they are not ready to
 relinquish control of this asset.  It may not be the story IBM mgmt wants to
 hear but it is the one that is being told.   I may no longer go onsite to
 customers on a regular basis, but when I was, I often needed access to the
 HMC and it was pretty consistent that there was significant access control
 for the HMC.

Parts of IBM may not be ready for that either. At least two installations
told me that IBM requires that the original HMC user/pw combinations
remain in place so that the (different) IBM support person is able to
support them. I suppose that a more persuasive customer could convince
their support person otherwise.

Some large shops have a separate LAN for delicate stuff and implement
access control with RSA gear. That includes a process to expire access
when people change roles, etc. This is where you find their HMC as
well as the local consoles for the LPARs. You can't seriously tell them
to move some of that back into the public LAN and do local password
management again.

Rob


Re: zvm directions

2011-05-26 Thread Alan Altmark
On Wednesday, 05/25/2011 at 11:07 EDT, Philip Tully 
tull...@optonline.net wrote:
 With all due respect: Contacting our IBM rep under NDA does not fit
 "public road map".

I'm not trying to be contrary or anything, Phil, just practical.  If you 
or anyone else feels you need more information about IBM's plans for the 
future than is publicly available (on pretty much any subject), there's a 
way to deal with that.

 I think the customers are letting IBM know that they are not ready to
 relinquish control of this asset.  It may not be the story IBM mgmt wants
 to hear, but it is the one that is being told.  I may no longer go onsite
 to customers on a regular basis, but when I was, I often needed access to
 the HMC, and it was pretty consistent that there was significant access
 control for the HMC.

No one disputes that there should be significant access control for the 
HMC.  Hence my statements about improvements to HMC security management 
and the recommendation to put a firewall in front of it.  You may even 
require some form of authentication at the firewall.  And you certainly do 
NOT allow remote access into the HMC-SE LAN itself except when you have a 
remote HMC.  And for those I would seriously consider a VPN-style 
connection into the HMC-SE LAN, even though:
- All communication between an HMC and an SE is encrypted.  This is 
managed via Domain Security.
- All communication between a browser and the HMC is via HTTPS.
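
To make the firewall recommendation concrete, here is a minimal sketch of 
the kind of rule set described above, written as Linux iptables commands 
on a box guarding the HMC-SE LAN.  The subnet and addresses are invented 
for illustration, not a statement of how any particular shop should do it:

   # Illustrative only; all addresses are made up.
   # Permit HTTPS to the HMC from the admin subnet...
   iptables -A FORWARD -s 10.1.1.0/24 -d 10.2.2.10 -p tcp --dport 443 -j ACCEPT
   # ...and refuse everything else aimed at the HMC-SE LAN.
   iptables -A FORWARD -d 10.2.2.0/24 -j DROP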

Over time, expect to see the HMC continue to expand its role as a 
management endpoint in your System z world.  Naturally, this is an 
evolving story, so keep your 3270 emulator handy.

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: zvm directions

2011-05-25 Thread PHILIP TULLY
Thanks for the replies; as soon as I sent that I was swamped and couldn't 
really reply.


I have thrown my hat in toward SHARE; thanks for the suggestion.

Some good points. 10G is definitely the way we are going, but with the 
number of LPARs being thrust on us by the severe memory limitations, we 
are concerned about the number of OSAs required to meet our network 
configs.


Memory is our most pressing issue: with 4 current and 5 defined but not 
yet active prod LPARs, all at 256G, we are under major stress here.


IBM needs to step up to a more definitive public road map for the z/VM 
operating system, with a multiple-year outlook.


Instrumentation and management of SCSI-attached disk would be very 
welcome.


One area where I see no value is the use of Unified Resource Manager as 
long as it requires opening up access to the HMC.


Tools to provide cross-system management.  (I know about SSI, but that is 
so limited at 4 systems as to be almost useless, and btw it is still 
unannounced.)


Re: zvm directions

2011-05-25 Thread Philip Tully
With all due respect: Contacting our IBM rep under NDA does not fit "public
road map".

I think the customers are letting IBM know that they are not ready to
relinquish control of this asset.  It may not be the story IBM mgmt wants to
hear, but it is the one that is being told.  I may no longer go onsite to
customers on a regular basis, but when I was, I often needed access to the
HMC, and it was pretty consistent that there was significant access control
for the HMC.


Re: zvm directions

2011-05-19 Thread William . Mongan

z/VM directions, an interesting subject that we also discussed at the
Technical University in Vienna, where I also got the tip to join this list.

As a long-time z/VM user, my main concern is NOT exploiting new areas and
new technologies; it is rather exploiting existing or new hardware
functions and making them available to the guest systems.
I do not know of many users that take an LPAR, run z/VM in it, and then
run z/OS guests, but that is where the problems for us begin.  z/VM
exploits the hardware but in several cases cannot pass this capability on
to a guest z/OS system.  For example: z/VM supports HyperPAV and HyperPAV
aliases, even if they use the same address (using channel subsets), but
cannot pass channel subsets to the z/OS guest.  I can understand that
there is pressure to allow new workloads to run under z/VM, but the
current workloads and use cases must also be supported.  When I asked why
our main shop does not try to do anything with z/VM, the answer shot back
was: because z/OS capacity pricing is not supported under z/VM, be that by
z/VM or by vendors that do not even offer licenses for hard-capped guests
under z/VM.  I would like to add that we discussed these issues in Vienna,
and I have high hopes that these hindrances for z/VM will be resolved,
hopefully before I retire.  Being a long-time user, I love the stability
and ease of use of this great product, z/VM, and am looking forward to
its future.
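
For reference, the z/VM side of the HyperPAV picture can be inspected from
a privileged user with a CP query.  A minimal sketch, assuming a class B
user; it is exactly this visibility that stops at the guest boundary:

   CP QUERY PAV ALL
   # Lists HyperPAV base and alias volumes as CP sees them; the
   # channel-subset aliases cannot be passed along to a z/OS guest.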

Regards,
William Kim Mongan


Re: zvm directions

2011-05-18 Thread Martin Zimelis
Phil,
   Have you considered getting involved with the Linux & VM Program (LVM) at
SHARE?  In particular, the LVM Technical Steering Committee has been working
with IBM on this sort of topic for a number of years.  I know they're always
looking for interested members from the user community.

   Marty

On Wed, May 18, 2011 at 11:31 AM, PHILIP TULLY tull...@optonline.net wrote:

 I see that the list traffic is kind of light right now and thought I would
 toss out a topic for all of us to chew on.

 I am looking for your thoughts on the current direction of zVM in
 particular where development needs to be focused.


 I sense that z/VM 6.2 with SSI will ease the burden of medium to large
 shops in the area of multi-system maintenance, and hopefully will be
 extended beyond its current meager 4-system max size, sooner rather than
 later.

 Given the difficulty in making any changes to production workloads, I don't
 see SSI with Live Guest Migration (LGM) as a panacea for issues related to
 load balancing amongst LPARs.  Without more direct Linux interaction I am
 concerned about the migration of workloads using dedicated FCP, with or
 without NPIV, as well as ARP issues.

 The area where I would like to see development is the utilization of the
 hardware some of us are lucky enough to have, the z196.  With a machine
 that can be delivered with 3TB of memory (1.5TB on a z10), having a maximum
 size z/VM system of 256GB is very limiting.  In reviewing presentations on
 memory limits, I have read comments that the system has been tested to more
 than 400GB central storage but no indication (statement of
 direction... rumor) that the current limit will be increased.  So I am
 pushing for increasing the max z/VM LPAR to at least 512GB if not larger.

 Expansion of the link aggregation implementation allowing for shared OSA
 cards.


 In general I am focused on larger VM systems, so that is where I would like
 to see development.

 Phil Tully

 Viewpoints presented here are my own and not my employer's



Re: zvm directions

2011-05-18 Thread Marcy Cortes
Phil, I'll 2nd your opinion that 4 systems in the SSI is meager.  I'm already 
in a quandary there with 4 prod systems and capacity planning asking where we 
put the next ones.  So now I'm not sure if we step into SSI with all 4 or have 
to immediately start with 2 plexes.  If two, we're giving up something.

I don't see LGR as a load balancing solution at all.  We will continue to use 
our F5 load balancers as well as the WAS IHS plugin for that effort.  I see it 
more for a planned outage move for things you want to move away for a while 
without the reboot.

512G seems like a good next target given our z196s can do 3TB.  We leave half 
for failover, so that would mean we would do 3 prod LPARs on the box, with 3 
standby.  That seems reasonable.  Avoiding VMware-type sprawl I think is a 
good thing :)

We've just moved to the 10Gig OSAs and away from LACP for a couple of 
reasons, so that is not as important to us.  The cost of OSA ports IMHO 
probably doesn't justify VM developer time.

Replication, large ECKD minidisks, and zHPF (or any I/O-related things to keep 
ECKD perf on par with FCP) are things that are important here.

With the z196s being the fastest thing out there now, I see an avalanche of new 
workload coming.  Sounds the same for you.

(PS. I'll 2nd Marty's idea of getting involved in SHARE if you can!)

Marcy


-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of PHILIP TULLY
Sent: Wednesday, May 18, 2011 8:31 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: [IBMVM] zvm directions

[... Phil's original post snipped; quoted in full earlier in the thread ...]


Re: zvm directions

2011-05-18 Thread Alan Altmark
On Wednesday, 05/18/2011 at 11:33 EDT, PHILIP TULLY 
tull...@optonline.net wrote:
 I sense that z/VM 6.2 with SSI will ease the burden of medium to large
 shops in the area of multi-system maintenance, and hopefully will be
 extended beyond its current meager 4-system max size, sooner rather
 than later.
 
 Given the difficulty in making any changes to production workloads, I
 don't see SSI with Live Guest Migration (LGM) as a panacea for issues
 related to load balancing amongst LPARs.  Without more direct Linux
 interaction I am concerned about the migration of workloads using
 dedicated FCP, with or without NPIV, as well as ARP issues.

- Guests that use FCP (with or without NPIV) are not expected to have 
issues as long as the guest is configured for multipathing, as the System 
z WWPNs will not move with the guest.  That means you need to be thorough 
in your zoning.  (A sketch of the guest-side check follows this list.)

- Guests that use the VSWITCH or OSAs are not expected to have ARP issues.

- It is possible that a Linux patch will be needed for dedicated OSAs.

- There is planned to be a "color matching" mechanism to let you manage 
the cross-system equivalence relationships for certain dedicated device 
types (e.g. FCP and OSA).  Your job will be to ensure that all FCP (e.g.) 
subchannels of the same color have the same access rights into the fabric, 
both in terms of zoning and masking.
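
As a sketch of the guest-side multipathing check referenced above: this is 
roughly what a relocatable Linux guest with two dedicated FCP subchannels 
might run.  The device numbers are invented and the tooling varies by 
distribution:

   # All device numbers below are made up for illustration.
   chccwdev -e 0.0.b000      # bring the first FCP subchannel online
   chccwdev -e 0.0.b100      # and the second, zoned to the other fabric
   lszfcp -D                 # list the SCSI LUNs attached on each path
   multipath -ll             # confirm dm-multipath sees two paths per LUN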

Disclaimer:  The above statements represent IBM's intent, but are not a 
commitment.  The implementation is subject to change without notice.  When 
the next release of z/VM is announced, we'll be able to give more details 
and have a firm understanding of any guest patch requirements.

 The area where I would like to see development is the utilization of the
 hardware some of us are lucky enough to have, the z196.  With a machine
 that can be delivered with 3TB of memory (1.5TB on a z10), having a
 maximum size z/VM system of 256GB is very limiting.  In reviewing
 presentations on memory limits, I have read comments that the system has
 been tested to more than 400GB central storage but no indication
 (statement of direction... rumor) that the current limit will be
 increased.  So I am pushing for increasing the max z/VM LPAR to at
 least 512GB if not larger.

IBM is actively investing in memory scalability for z/VM.  When CP can 
reliably use more than 256GB, the support limit will be raised.

 Expansion of the link aggregation implementation allowing for shared OSA
 cards.

This isn't possible with System z's shared I/O model.  There is only one 
cable on the port, and any host putting data on the cable must abide by the 
LACP protocol, which requires knowledge of the state and relationships of 
all packets in transit on the cable.  Only one cook is allowed in the 
kitchen.  Hence the "no sharing" requirement.
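
For anyone who has not configured it, the exclusive-ownership point shows 
up directly in how a link aggregation group is defined today.  A hedged 
sketch with invented names and device numbers; check the CP commands 
reference for the exact syntax at your z/VM level:

   # Group name, VSWITCH name and rdevs are made up.
   CP SET PORT GROUP ETHGRP LACP ACTIVE
   CP SET PORT GROUP ETHGRP JOIN 2000 2100
   CP DEFINE VSWITCH VSWPROD ETHERNET GROUP ETHGRP
   # While grouped, OSAs 2000 and 2100 belong to this one VSWITCH;
   # no other LPAR or VSWITCH can share them.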

If System z were to switch to a System p PowerVM-like virtual I/O server 
model, then what you describe could happen.  All I/O from all LPARs would 
be intercepted and routed to another LPAR that does the real I/O on 
everyone's behalf.  It would look like a CEC-wide VSWITCH.  But it's not 
clear to me that System z should invest in that model at the expense of 
other capabilities.  (I get complaints from people about OSA latency 
*now*.)

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: zvm directions

2011-05-18 Thread Alan Altmark
On Wednesday, 05/18/2011 at 12:07 EDT, Marcy Cortes 
marcy.d.cor...@wellsfargo.com wrote:

 I don't see LGR as a load balancing solution at all.  We will continue
 to use our F5 load balancers as well as the WAS IHS plugin for that
 effort.  I see it more for a planned outage move for things you want to
 move away for a while without the reboot.

An excellent assessment, Marcy.  :-)  LGR was not designed to replace any 
application-level workload balancing solutions (F5).  Those balancing 
solutions provide the needed HA in case you lose a VM LPAR unexpectedly.

LGR will let you take back control of your VM LPARs.  No longer will you 
need to get 15 application owners to agree on a time for you to take down 
and service the VM system.  Their servers keep running and the application 
monitor dashboard shows green.
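
As a sketch of what taking back control looks like at the console, using 
the VMRELOCATE command shown in IBM's SSI preview materials (the guest 
and member names are invented):

   CP VMRELOCATE TEST LINUX01 TO MEMBER2   # dry run: report any blockers
   CP VMRELOCATE MOVE LINUX01 TO MEMBER2   # relocate the running guest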

Oh, and I suppose there is an additional benefit in that if someone says, 
"*I* can relocate a server to a different rack in case it starts to 
overheat!" you can stick out your tongue and then say "*I* can relocate a 
server when I want to.  My machine doesn't overheat."  :-)

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: zvm directions

2011-05-18 Thread Schuh, Richard
Too bad it will not work for geographically dispersed LPARs :-(

Regards, 
Richard Schuh 

 

 -Original Message-
 From: The IBM z/VM Operating System 
 [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Alan Altmark
 Sent: Wednesday, May 18, 2011 11:28 AM
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: Re: zvm directions
 
 [... Alan's reply snipped; quoted in full earlier in the thread ...]
 

Re: zvm directions

2011-05-18 Thread Marcy Cortes
The tongue benefit is huge.  Gotta keep up with them other guys ;)

The other really useful case I see is in the dev/test environment.
Say we want to get some good measurements from an app before it goes 
production, or to size it properly for its prod server purchase, but we have 
some pigs (uh, I mean very active developers writing code that is still in 
the early stages) skewing the results.  We shove them off to the other LPAR 
until our target LPAR looks the way we want it to ... and put them back 
later.  No one is the wiser, and no "what happened to my server!" emails.

Marcy 


-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of Alan Altmark
Sent: Wednesday, May 18, 2011 11:28 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] zvm directions

[... Alan's reply snipped; quoted in full earlier in the thread ...]


Re: zvm directions

2011-05-18 Thread Marcy Cortes
Depends on how far, right?
You have to share DASD, so PPRC distances apply.
You probably need the same subnet, so you need a consultation with your network 
folks.
But it should be doable if you do those things (at least that's the plan here).


Marcy 

-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of Schuh, Richard
Sent: Wednesday, May 18, 2011 11:35 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] zvm directions

Too bad it will not work for geographically dispersed LPARs :-(

Regards, 
Richard Schuh 

 

 [... nested quote of Alan's reply snipped ...]


Re: zvm directions

2011-05-18 Thread Alan Altmark
On Wednesday, 05/18/2011 at 02:46 EDT, Marcy Cortes 
marcy.d.cor...@wellsfargo.com wrote:
 Depends on how far, right?
 You have to share DASD, so PPRC distances apply.
 You probably need the same subnet, so you need a consultation with your
 network folks.
 But it should be doable if you do those things (at least that's the plan
 here).

Indeed, the flat layer 2 LAN requirement is very likely going to be the 
limiting factor.

Most sites are unwilling to extend LANs very far.  There is some validity 
in that position, since the subnet numbers are usually architected along 
some sort of physical boundary (e.g. city, site, building, floor).  If two 
halves of a LAN each have a router in them, you can end up with a "split 
horizon" if the bridge connection between the halves goes down.  That 
isn't pretty, as both routers give the battle cry "To me! To me! Death to 
the other!"

So rather than get into this situation, the network architects typically 
don't allow a LAN segment to extend beyond a single wiring closet.  That 
doesn't mean you shouldn't ask, but it does mean you shouldn't be 
surprised if the answer comes back "No."

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: zvm directions

2011-05-18 Thread Austin, Alyce (CIV)
Has z/VM 6.2 been released?

Regards,
Alyce


-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of PHILIP TULLY
Sent: Wednesday, May 18, 2011 8:31 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: zvm directions

[... Phil's original post snipped; quoted in full earlier in the thread ...]


Re: zvm directions

2011-05-18 Thread Dave Jones
no.

On 05/18/2011 05:32 PM, Austin, Alyce (CIV) wrote:
 Has z/VM 6.2 been released?
 
 Regards,
 Alyce
 
 
 [... Phil's original post snipped; quoted in full earlier in the thread ...]
 

-- 
Dave Jones
V/Soft Software
www.vsoft-software.com
Houston, TX
281.578.7544


Re: zvm directions

2011-05-18 Thread Marcy Cortes
No, nor announced.  It's a statement of direction thus far.  Might not even be 
called 6.2, perhaps :)
But go to share.org and look at Anaheim session 8453 (Franciscovich).

Marcy 

-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of Austin, Alyce (CIV)
Sent: Wednesday, May 18, 2011 3:33 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] zvm directions

Has z/VM 6.2 been released?

Regards,
Alyce


[... Phil's original post snipped; quoted in full earlier in the thread ...]


Re: zvm directions

2011-05-18 Thread Richard Troth
Wow ... so many possible directions *this* thread could go.

For fifty years, the platform now known as z has been all about scalability.
For more than forty years, the environment we call z/VM has been all
about resource sharing.

Multi-system maint is something most people in the industry (whether
vendors or customers) don't seem to get.  I hold up CMS, with shared
190, 19E, and the rest, as an example of "they get it".  CMS may be
the *only* such example.  Of course, it presumes one's definition of
multi-system includes the concept of shared disks.  But that's kind of
the point:  No matter how good your install scheme, sharing a
pre-installed copy scales better than re-installing over and over.

I guess we all want IBM to put development into things which will
continue to make the platform viable.

So ... what's coming?  Dunno.  Some of what's already here is ...

* SAN (FBA in general)
* IPv6

These are both infrastructure things, so they're not flashy.  Bill-paying
executives won't be impressed ... until there is a crisis (or unless they
get proactive).  But these things are important, so if VM is going to
inter-operate, then it must embrace them.

CMS components need to be prepared for IPv6.  The stack is.  The apps
are not.  I haven't checked VSwitch readiness.  (Alan will hopefully
chime in.)

VM already supports SAN, but ... two huge gaps:  performance and
instrumentation.  (I will defer additional comments.  Comparison with
ECKD really warrants discussion, but at another time and with
mandatory cool heads.)  EDEV makes managing SAN on VM a *lot* easier.
But it introduces CP overhead.  Using DIAG 250 should help.  Not clear
how much better it is, so maybe there is opportunity for IBM in the
CP Nuc for this.  And don't get me started about instrumentation.  The
single reason (some) people use ECKD is because you can measure what
it is doing.  I wish I knew FCP well enough to say where the numbers
are.  As it stands, most of the useful info seems to be proprietary.
What is the value of a standard (FCP) if the vendors continue to fight
over vital info like performance numbers??
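
For anyone who has not touched EDEV: an emulated FBA device is carved from
a SCSI LUN with a CP command along these lines.  A sketch only; every
identifier is invented, and the operands depend on your controller and
z/VM level:

   # All identifiers are made up; see CP Planning and Administration.
   CP SET EDEVICE 0999 TYPE FBA ATTR SCSI FCP_DEV B000 WWPN 5005076300C70CA5 LUN 4010400500000000
   CP VARY ONLINE 0999
   # CP now drives the SCSI I/O for 0999 itself (the overhead noted
   # above); DIAG 250 block I/O against the EDEV is the lighter path.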

The excellent thing about SAN is that it is common to other platforms.
Everyone else uses FBA.  Whether SAN (which we can do) or IDE or ATA
or USB or Firewire or SATA ... storage is all fixed blocks of stuff.

-- R;   
speaking only for myself





On Wed, May 18, 2011 at 11:31, PHILIP TULLY tull...@optonline.net wrote:
 [... Phil's original post snipped; quoted in full earlier in the thread ...]