Agreed. The config from 5.1 is the simplest and works well. The other two
options have issues: 5.2 requires two service domains, which costs you a lot
of flexibility due to hardware constraints, and 5.3 requires proper routing to
be configured, which can complicate network configurations.

The big issue is that both 5.1 and 5.2 consume additional IP addresses,
because link status is not propagated up through the vsws. If this were
corrected, option 5.1 would be highly effective. The other options would only
be desirable if the hardware limitations were removed and the routing issues
could be reduced.
 
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
Octave J. Orgeron
Solaris Systems Engineer
http://www.opensolaris.org/os/community/sysadmin/
http://unixconsole.blogspot.com
unixconsole at yahoo.com
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

----- Original Message ----
From: "TEASDEL, Paul, GBM" <[email protected]>
To: Michael.Demuro at Sun.COM; ldoms-discuss at opensolaris.org
Sent: Tuesday, October 30, 2007 7:46:03 AM
Subject: Re: [ldoms-discuss] Solaris IPMP w/ LDOMS





IPMP works fine within LDoms, and both the primary and guest domains correctly
detect and recover from network faults in the same way that physical hosts
would.

 

Figure 5.1 - two vnets connected to separate physical switches. Works fine,
but be aware that the primary and guest domains will each require three IP
addresses. IPMP needs to be configured in the traditional way - you can't use
link-based failure detection and a single IP.
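
For reference, a minimal sketch of that traditional probe-based setup in the
guest domain. The interface names (vnet0, vnet1), addresses and group name
(ipmp0) are examples only - one data address plus a non-failover test address
per vnet is where the three IPs go:

  # Run in the guest domain; addresses, netmask and group name are examples.
  # Data address plus a dedicated test address on vnet0.
  ifconfig vnet0 plumb 192.168.1.10 netmask 255.255.255.0 broadcast + group ipmp0 up
  ifconfig vnet0 addif 192.168.1.11 netmask 255.255.255.0 broadcast + deprecated -failover up
  # Test-address-only standby on vnet1.
  ifconfig vnet1 plumb 192.168.1.12 netmask 255.255.255.0 broadcast + deprecated -failover group ipmp0 standby up

To make this persistent, the same arguments go into /etc/hostname.vnet0 and
/etc/hostname.vnet1. The primary domain needs an equivalent group on its own
pair of interfaces, which accounts for its three addresses.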

 

Figure 5.2 - I assume this works fine but haven't tested it. I would suggest
reviewing whether multiple service domains are right for your environment. Is
it delivering resiliency or additional complexity?

 

Figure 5.3 - Again, I assume this works fine, but I would question whether
introducing this kind of routing configuration at the host level is desirable.

 

There may be specific cases where 5.2 and 5.3 are suitable, but as a general
rule I'd opt for the 5.1 configuration.

 
Paul Teasdel
RBS Global Banking & Markets
Office: +44 20 7085 1030 


 




From: [email protected] [mailto:ldoms-discuss-bounces at opensolaris.org] On Behalf Of Michael De Muro - Sun Microsystems
Sent: 30 October 2007 12:11
To: ldoms-discuss at opensolaris.org
Subject: [ldoms-discuss] Solaris IPMP w/ LDOMS




Can anyone share use cases, best practices, or references for utilizing IP
Multipathing (IPMP) with Logical Domains (LDoms), or ideally validate the
three examples from the LDoms Administration Guide 1.0.1 (820-3268-10) below?

Thanks & Kind Regards,



  
    
Michael De Muro
Solutions Architect
JPMorgan Chase Account
Sun Microsystems, Inc.
6000 Midlantic Drive, Suite 102N
Mount Laurel, NJ 08054 USA
Phone: 1.856.231.5735 / x52735
Mobile: 1.856.220.6328
Email: michael.demuro at sun.com


    http://docs.sun.com/source/820-3268-10/chapter5.html#d0e9046

Configuring IPMP in a Logical Domains Environment

Internet Protocol Network Multipathing (IPMP) provides fault-tolerance and
load balancing across multiple network interface cards. By using IPMP, you can
configure one or more interfaces into an IP multipathing group. After
configuring IPMP, the system automatically monitors the interfaces in the IPMP
group for failure. If an interface in the group fails or is removed for
maintenance, IPMP automatically migrates, or fails over, the failed
interface's IP addresses. In a Logical Domains environment, either the
physical or virtual network interfaces can be configured for failover using
IPMP.



Configuring Virtual Network Devices into an IPMP Group in a Logical Domain

A logical domain can be configured for fault tolerance by configuring its
virtual network devices into an IPMP group. When setting up an IPMP group with
virtual network devices in an active-standby configuration, set up the group
to use probe-based detection. Link-based detection and failover are not
currently supported for virtual network devices in Logical Domains 1.0.1
software.

The following diagram shows two virtual networks (vnet1 and vnet2) connected
to separate virtual switch instances (vsw0 and vsw1) in the service domain,
which, in turn, use two different physical interfaces (e1000g0 and e1000g1).
In the event of a physical interface failure, the IP layer in LDom_A detects
the failure and loss of connectivity on the corresponding vnet through
probe-based detection, and automatically fails over to the secondary vnet
device.


FIGURE 5-1   Two Virtual Networks Connected to Separate Virtual Switch Instances
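
The corresponding LDoms setup might look like the following sketch; the
switch, vnet and domain names (primary-vsw0, primary-vsw1, ldom_a) are example
names rather than ones taken from the guide:

  # Run from the control domain; e1000g0 and e1000g1 are the two physical NICs.
  ldm add-vsw net-dev=e1000g0 primary-vsw0 primary
  ldm add-vsw net-dev=e1000g1 primary-vsw1 primary
  # Give the guest one vnet on each virtual switch.
  ldm add-vnet vnet1 primary-vsw0 ldom_a
  ldm add-vnet vnet2 primary-vsw1 ldom_a

Inside the guest, the two devices show up as vnet instances (typically vnet0
and vnet1) and are then grouped with IPMP using probe-based detection as
described above.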




Further reliability can be achieved in the logical domain by connecting each
virtual network device (vnet0 and vnet1) to virtual switch instances in
different service domains (as shown in the following diagram). Two service
domains (Service_1 and Service_2) with virtual switch instances (vsw1 and
vsw2) can be set up using a split-PCI configuration. In this case, in addition
to network hardware failure, LDom_A can detect virtual network failure and
trigger a failover following a service domain crash or shutdown.


FIGURE 5-2   Each Virtual Network Device Connected to Different Service Domains
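
A rough sketch of what the two-service-domain variant could look like,
assuming a second I/O domain (here called "secondary") already owns one PCI
bus and its NIC via a split-PCI assignment; all names are examples, and
e1000g0 is the interface name as seen from within each service domain:

  # Run from the control domain. Each service domain hosts one virtual
  # switch backed by a NIC it owns.
  ldm add-vsw net-dev=e1000g0 primary-vsw0 primary
  ldm add-vsw net-dev=e1000g0 secondary-vsw0 secondary
  # The guest gets one vnet from each service domain's switch.
  ldm add-vnet vnet0 primary-vsw0 ldom_a
  ldm add-vnet vnet1 secondary-vsw0 ldom_a

The IPMP configuration inside the guest is unchanged; the difference is that a
crash or shutdown of one service domain now looks like a vnet failure and
triggers the same probe-based failover.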




Refer to the Solaris 10 System Administration Guide: IP Services for more
information about how to configure and use IPMP groups.




Configuring and Using IPMP in the Service Domain

Network failure detection and recovery can also be set up in a Logical Domains
environment by configuring the physical interfaces in the service domain into
an IPMP group. To do this, configure the virtual switch in the service domain
as a network device, and configure the service domain itself to act as an IP
router. (Refer to the Solaris 10 System Administration Guide: IP Services for
information on setting up IP routing.)

Once configured, the virtual switch sends all packets originating from virtual
networks (and destined for an external machine) to its IP layer, instead of
sending the packets directly via the physical device. In the event of a
physical interface failure, the IP layer detects the failure and automatically
re-routes packets through the secondary interface.

Since the physical interfaces are configured directly into an IPMP group, the
group can be set up for either link-based or probe-based detection. The
following diagram shows two network interfaces (e1000g0 and e1000g1)
configured as part of an IPMP group. The virtual switch instance (vsw0) has
been plumbed as a network device to send packets to its IP layer.


FIGURE 5-3   Two Network Interfaces Configured as Part of IPMP Group
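
As a sketch of that service-domain configuration, assuming interface names
e1000g0/e1000g1 and example addresses (the referenced IP Services guide covers
the details):

  # In the service domain. Plumb the virtual switch itself so guest traffic
  # reaches the service domain's IP layer; the address is an example, and
  # guests on this subnet would use it as their default route.
  ifconfig vsw0 plumb 192.168.2.1 netmask 255.255.255.0 broadcast + up

  # Let the service domain route between the vsw subnet and the outside.
  routeadm -e ipv4-forwarding
  routeadm -u

  # Group the physical NICs; with no test addresses configured,
  # in.mpathd relies on link-based failure detection.
  ifconfig e1000g0 plumb 192.168.1.10 netmask 255.255.255.0 broadcast + group ipmp0 up
  ifconfig e1000g1 plumb 192.168.1.11 netmask 255.255.255.0 broadcast + group ipmp0 up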









  
  
  
  
  
  
Logical Domains (LDoms) 1.0.1 Administration Guide
820-3268-10
    

