Hi John-Paul,
OK, so you are trying to set up a cluster between two guest LDoms running
on the same T1000, and the IO domain servicing the two guest domains
was first running SXCE b85 and is now running S10U5.
Sorry that I did not parse that correctly.
The messages you see point to fairly low-level problems. In order to
debug this, did you try to boot the guest LDoms in non-cluster mode
(boot -x), plumb the interfaces manually and configure a test network,
to see whether the two domains can communicate in principle over the
virtual switch connection?
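For a quick test, something along these lines should do - vnet1/vnet2 and
the 192.168.100.x addresses are only examples, so adjust them to your setup:

  # from the OBP prompt of each guest, boot outside of cluster mode
  ok boot -x

  # on cluster1: plumb one of the interconnect interfaces, test address
  ifconfig vnet1 plumb
  ifconfig vnet1 192.168.100.1 netmask 255.255.255.0 up

  # on cluster2: same interface, second test address
  ifconfig vnet1 plumb
  ifconfig vnet1 192.168.100.2 netmask 255.255.255.0 up

  # from cluster1, check basic reachability over the virtual switch
  ping 192.168.100.2

  # then repeat the same test over vnet2
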
If they cannot communicate, then I would recommend verifying the virtual
switch setup.
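For example, from the IO/control domain - I am assuming here that the
service domain is called "primary" and that the guest domain names match
the cluster1/cluster2 hostnames in your logs:

  # list the virtual switch services configured in the primary domain
  ldm list-services primary

  # check which vsw each guest vnet device is bound to
  ldm list-bindings cluster1
  ldm list-bindings cluster2

Both guests' vnet1 devices should be attached to the same vsw, and likewise
for vnet2, otherwise the two interconnect paths cannot come up.
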
If they can communicate, then I would suspect differences between the vsw
kernel modules in SXCE b85 (which has more recent code) and S10U5. I noticed
you also asked on ldoms-discuss, so the experts there might be able to
give more details.
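To at least see which driver revisions are involved on both sides, you
could compare the module information, e.g.:

  # in the IO domain (virtual switch driver)
  modinfo | grep -i vsw

  # in each guest domain (virtual network driver)
  modinfo | grep -i vnet
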
Greets
Thorsten
jpd wrote:
> Thorsten Frueauf wrote:
>> Hi John-Paul,
>>
>> in order to help better, could you provide more specific information on
>> your configuration and the exact error message(s) you see?
>>
>> Just for clarification, if you want to use Solaris Cluster Express
>> (current is 02/08), then you must use Solaris Express Developer Edition
>> 01/08 (build 79b) with it. Any other build is likely not to work.
>>
>> If you want to use Solaris Cluster 3.2U1, then you need a supported
>> Solaris 10 Update release. Solaris Cluster 3.2U1 will not work properly
>> with Solaris Express.
>
> I can read :)
>
>
>> With regard to currently supported ldom configurations with Solaris
>> Cluster, please have a look at
>> http://blogs.sun.com/SC/entry/announcing_solaris_cluster_support_in
>>
>
> I know that what I am doing is not supported - but I doubt that what
> people are doing with VMware is either.
>
> I don't have VMware, but I do have a T1000.
>
>
> OK, history... I have posted about this before.
>
> I want to do stuff with HA clusters but only have one box - a shiny T1000.
>
> A T1000 does not run VMware, so I have to make do with LDoms.
>
> Before, I had SXCE b85-ish running as the master domain and slave domains
> running Solaris 10 U4 + patches with SC3.2U1 - the cluster would run, but
> iSCSI was flaky.
>
> So with the Solaris 10 U5 release I changed the master domain to that and
> the slave domains to SXDE 1/08 with Cluster Express 2/08. The two can't
> seem to talk to each other; I get this error:
> On cluster1:
> NOTICE: clcomm: Path cluster1.drawnet:vnet2 - cluster2.drawnet:vnet2
> errors during initiation
> WARNING: Path cluster1.drawnet:vnet2 - cluster2.drawnet:vnet2 initiation
> encountered errors, errno = 62. Remote node may be down or unreachable
> through this path.
> NOTICE: clcomm: Path cluster1.drawnet:vnet1 - cluster2.drawnet:vnet1
> errors during initiation
> WARNING: Path cluster1.drawnet:vnet1 - cluster2.drawnet:vnet1 initiation
> encountered errors, errno = 62. Remote node may be down or unreachable
> through this path.
>
> similar stuff on cluster2
>
>
>
>> Greets
>> Thorsten
>>
>> John-Paul Drawneek wrote:
>>> OK, got SXDE 1/08 to work and installed Cluster Express.
>>>
>>> The problem at the moment is that I can't get the two nodes to speak to
>>> each other.
>>>
>>> I get an interconnect error and am not too sure why, as this worked
>>> with SC3.2U1.
>>>
>>> Help :(
>>>
>>> The host domain is Solaris 10 U5 now - it was SXCE b85 with SC3.2U1.
>>> Is there much difference in the LDom network stack between these two?