Hi
Adding back the zones-discuss list.
Possibly something around CR 6522362, which is due to how patchadd in
119254-34 handles patches that are THISZONEONLY=true, in particular JES
patches/packages.
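For anyone following along, one way to spot such packages is to look for the flag in each package's pkginfo file inside the unpacked patch. This is just a sketch: the spool path below is an example location, and the variable name is taken from this thread; adjust both to your environment.

```shell
# Hypothetical check: list packages inside an unpacked patch that are marked
# this-zone-only. /var/sadm/spool/119254-34 is an example spool location;
# point it at wherever the patch was actually unpacked.
grep -l 'THISZONEONLY=true' /var/sadm/spool/119254-34/*/pkginfo
```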
I will try to replicate this. What rev of 119254 was installed prior to -34?
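For reference, a quick way to answer that question on a Solaris 10 box (a sketch, assuming it is run in the global zone; showrev -p lists one line per installed patch):

```shell
# List installed revisions of the patch-utilities patch in the global zone.
# showrev -p prints one "Patch: <id>-<rev> ..." line per installed patch,
# so this shows every 119254 revision recorded on the system.
showrev -p | grep '^Patch: 119254'
```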
Enda
Freund, Phil wrote:
I believe it's JES Identity Suite 2005 Update 4, and so far we've installed
only the Directory Server portion. I intend to update to current soon,
but so far no time! These are T2000 servers running Solaris 10 01/06,
patched at a minimum with the Sol 10 Recommended clusters through the
beginning of 11/06.
Phil
-----Original Message-----
From: Enda O'Connor [mailto:[EMAIL PROTECTED]
Sent: Wed 3/14/2007 11:37 AM
To: Freund, Phil
Subject: Re: [zones-discuss] Re: Re: Re: Re: Patching problem with whole
root zones
Hi
Looks fine, nothing special really. What version of JES IS have you
installed, though?
I take it this is a T2000 on 6/06?
Enda
Freund, Phil wrote:
> I'd have sent this to the list, but I've waited for almost an hour and
> this message hasn't shown up on it yet.
>
> Here's the info for the four whole-root zones:
>
> clv1udw11:root> zonecfg -z cludsim001 info
> zonepath: /zones/cludsim001
> autoboot: false
> pool: pool_default
> fs:
>     dir: /cdrom
>     special: /cdrom
>     raw not specified
>     type: lofs
>     options: [ro,nodevices]
> net:
>     address: 10.1.51.11
>     physical: e1000g1
> net:
>     address: 10.20.51.11
>     physical: e1000g5
> rctl:
>     name: zone.cpu-shares
>     value: (priv=privileged,limit=1,action=none)
> attr:
>     name: comment
>     type: string
>     value: "Zone cludsim001"
> clv1udw11:root>
>
> clv1udw11:root> zonecfg -z cludsim002 info
> zonepath: /zones/cludsim002
> autoboot: false
> pool: pool_default
> fs:
>     dir: /cdrom
>     special: /cdrom
>     raw not specified
>     type: lofs
>     options: [ro,nodevices]
> net:
>     address: 10.1.51.12
>     physical: e1000g7
> net:
>     address: 10.20.51.12
>     physical: e1000g5
> rctl:
>     name: zone.cpu-shares
>     value: (priv=privileged,limit=1,action=none)
> attr:
>     name: comment
>     type: string
>     value: "Zone cludsim002"
> clv1udw11:root>
>
> clv1upw11:root> zonecfg -z clupsim001 info
> zonepath: /zones/clupsim001
> autoboot: false
> pool: pool_default
> fs:
>     dir: /cdrom
>     special: /cdrom
>     raw not specified
>     type: lofs
>     options: [ro,nodevices]
> net:
>     address: 10.1.50.19
>     physical: e1000g0
> net:
>     address: 10.20.50.19
>     physical: e1000g6
> rctl:
>     name: zone.cpu-shares
>     value: (priv=privileged,limit=1,action=none)
> attr:
>     name: comment
>     type: string
>     value: "Zone clupsim001"
> clv1upw11:root>
>
> clv1upw12:root> zonecfg -z clupsim002 info
> zonepath: /zones/clupsim002
> autoboot: false
> pool: pool_default
> fs:
>     dir: /cdrom
>     special: /cdrom
>     raw not specified
>     type: lofs
>     options: [ro,nodevices]
> net:
>     address: 10.1.50.49
>     physical: e1000g0
> net:
>     address: 10.20.50.49
>     physical: e1000g6
> rctl:
>     name: zone.cpu-shares
>     value: (priv=privileged,limit=1,action=none)
> attr:
>     name: comment
>     type: string
>     value: "Zone clupsim002"
> clv1upw12:root>
>
> All very consistent. All whole root zones because they all have JES
> Identity Suite installed on them.
>
> Phil
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 14, 2007 9:44 AM
> To: Freund, Phil
> Cc: zones-discuss@opensolaris.org
> Subject: Re: [zones-discuss] Re: Re: Re: Re: Patching problem with whole
> root zones
>
> Phil Freund wrote:
>
>> Enda,
>>
>> All of the zones were booted to milestone all when 119254-34 was
>> applied. Since the patch didn't require a reboot, I did it with all
>> zones and global fully operational.
>>
>> Phil
>>
>>
>> This message posted from opensolaris.org
>> _______________________________________________
>> zones-discuss mailing list
>> zones-discuss@opensolaris.org
>>
>>
> Hi
> What is the layout of the failing zones, i.e. zonecfg -z <zone> info?
>
>
> Enda
>