We ran into an issue during our just-completed DR test using a HiperSockets
guest LAN to simulate a real HiperSockets connection for multiple LINUX
guests.

At our production site we use a real HiperSockets connection to allow
multiple LINUX guests in a VM 5.2 LPAR to communicate with a z/OS LPAR.

At DR we run 2nd level, so we used a 1st level VM HiperSockets guest LAN as
a stand-in for the real HiperSockets connection between the 2nd level VM
system and the 2nd level z/OS system.  The directory entry for the 2nd level
VM system had a single NICDEF with 64 devices, which were then dedicated to
the various LINUX guests and VM TCPIP stacks.
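
For reference, the setup was along these lines (device numbers and LAN name
are illustrative, not the actual values we used).  In the 1st level directory
entry for the 2nd level VM system:

   NICDEF 7000 TYPE HIPERS DEVICES 64 LAN SYSTEM HIPERLAN

Then in the 2nd level directory, each LINUX guest and TCPIP stack had three
of those devices dedicated to it, e.g.:

   DEDICATE 7000 7000
   DEDICATE 7001 7001
   DEDICATE 7002 7002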

The issue was that only the first LINUX guest to initialize its HiperSockets
connection was successful; the subsequent LINUX guests all got what looked
like (virtual) hardware errors.  The VM TCPIP stacks had no trouble
connecting to the virtual HiperSockets LAN.

We resolved the problem by adding additional NICDEFs on the 2nd level VM
system for the additional LINUX guests.
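
That is, instead of one 64-device NIC shared by all the guests, the 2nd level
VM system's directory entry got one NICDEF per LINUX guest, all coupled to
the same guest LAN, roughly (illustrative device numbers again):

   NICDEF 7000 TYPE HIPERS DEVICES 3 LAN SYSTEM HIPERLAN
   NICDEF 7100 TYPE HIPERS DEVICES 3 LAN SYSTEM HIPERLAN
   NICDEF 7200 TYPE HIPERS DEVICES 3 LAN SYSTEM HIPERLAN

with each set of three devices then dedicated to a different LINUX guest.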

The questions are:

   Is this a problem with the 1st level VM system not simulating the
HiperSockets connection correctly to the 2nd level VM system, or is it a
known restriction?
   What would a LINUX guest be doing/setting during initialization of a
virtual HiperSockets NIC that would prevent other LINUX guests from
initializing interfaces on the same virtual HiperSockets NIC?  And why is
that okay for a real HiperSockets connection but not a virtual one?
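
For what it's worth, the LINUX guests bring the interface up with the usual
qeth sysfs sequence, something like this (device numbers illustrative):

   # group the read/write/data subchannels into one qeth device
   echo 0.0.7000,0.0.7001,0.0.7002 > /sys/bus/ccwgroup/drivers/qeth/group
   # bring the grouped device online; the interface then shows up as hsi0
   echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.7000/online

It is during this sequence that the second and later guests see the errors.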

Brian Nielen
