Re: [zones-discuss] Zones, clusters, and maintainability

2006-06-29 Thread James Carlson
Mike Gerdts writes:
> > - Zone roots can be set up on SAN LUNs as long as the HBA drivers are in the
> > Solaris 10 DVD miniroot. This allows for OS upgrades. We are using the
> > Sun-branded Qlogic HBAs, which are handled by the Leadville driver, which is
> > part of the miniroot, so this criterion is covered.
> 
> Zone roots should have nothing to do with HBA drivers.  Zones do not
> run their own kernel, and as such this is a non-issue.

It's true that non-global zones don't have their own kernel, but the
rest is not true.

When you upgrade the system, the installer needs to mount the file
systems that hold the non-global zones.  Since only standard upgrade
(not Live Upgrade) is supported right now, that mounting must take
place in the miniroot environment.

Thus, you must make certain that the HBA drivers needed to reach the
required file systems are in the miniroot you plan to use.  If
they're not, then you'll need to modify the miniroot to add them.
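
For example, on x86 install media you can loopback-mount a copy of the
miniroot and check for the qlc (Leadville) driver.  This is only a rough
sketch -- the image name and paths are illustrative and vary by release
and architecture (SPARC media lays out the miniroot differently):

# Work on a copy; the x86 miniroot is a gzip-compressed UFS image.
cp /cdrom/cdrom0/boot/x86.miniroot /var/tmp/miniroot.gz
gunzip /var/tmp/miniroot.gz
lofiadm -a /var/tmp/miniroot        # prints the lofi device, e.g. /dev/lofi/1
mount -F ufs -o ro /dev/lofi/1 /mnt
grep '^qlc' /mnt/etc/name_to_major  # is the driver registered?
ls /mnt/kernel/drv | grep qlc       # driver binary and qlc.conf present?
umount /mnt; lofiadm -d /dev/lofi/1

Actually adding a missing driver to the image is more involved, but the
check above tells you whether you need to.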

-- 
James Carlson, KISS Network <[EMAIL PROTECTED]>
Sun Microsystems / 1 Network Drive 71.232W   Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757   42.496N   Fax +1 781 442 1677


Re: [zones-discuss] Zones, clusters, and maintainability

2006-06-29 Thread Mike Gerdts

On 6/29/06, James Carlson <[EMAIL PROTECTED]> wrote:

Mike Gerdts writes:
> > - Zone roots can be set up on SAN LUNs as long as the HBA drivers are in the
> > Solaris 10 DVD miniroot. This allows for OS upgrades. We are using the
> > Sun-branded Qlogic HBAs, which are handled by the Leadville driver, which is
> > part of the miniroot, so this criterion is covered.
>
> Zone roots should have nothing to do with HBA drivers.  Zones do not
> run their own kernel, and as such this is a non-issue.

It's true that non-global zones don't have their own kernel, but the
rest is not true.


Somehow I translated upgrade -> patching in my mind and hence my
response.  After clearing that up, it makes a lot more sense.

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zones-discuss] Zones, clusters, and maintainability

2006-06-29 Thread Jeff Victor

Apologies for responding to my own post, but I need to clarify something I said:

Jeff Victor wrote:

Hi Phil,

Advice below...

Phil Freund wrote:

...
Here's where I need advice:

Each of the failover-capable zones must have the same name on each
hardware node, which in turn implies that they must use the same IP
address, at least when being set up initially. Once the zones are
set up, is there any reason the individual zone configurations cannot
be modified so that each zone has a different maintenance IP address
on each hardware node? There would be both a DNS entry for the
failover zone name that matches a virtual address managed by VCS, and
maintenance names for the zone addresses on each target node to
provide network accessibility for testing after patching/upgrades.
(Rather like the test addresses in an IPMP configuration.) If this is
true, are the only places that would need the changes the zone xml
file and the entries in the zone's /etc/hosts (and hostname.xxx)
files?


Advice?


There is no reason that you cannot (or should not) add a second IP
address to each zone, as you describe.  This address would be 'static',
i.e. it would not move around during failover (or any other operation).
It can be on the same NIC as the virtual address or on another NIC,
although IPMP restrictions might limit your choices; it's been a while
since I've used IPMP.


I would do this with zonecfg, either when the zone is being created or 
afterward. If you can do that, the zones framework will take care of the 
rest.


I seem to recall that Symantec's zone-creation process requires manual 
editing of the zone xml file, which is unfortunate as it puts you in 
shaky Solaris-support territory.  We have discussed this with them, and 
understand that there isn't a good alternative.  If manual editing is 
required, it sounds like you know what you're doing - the xml file, 
/etc/inet/hosts and hostname.xxx files will need 'help.'


But since the "maintenance names" and addresses would not be managed by VCS, there 
is no need to manually edit those files.  Use zonecfg as it was intended and it 
will manage the files appropriately.
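
As a sketch (the zone name, NIC, and address here are made up), adding a
static maintenance address to an existing zone would look something like:

# zonecfg -z myzone
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=ce0
zonecfg:myzone:net> set address=192.168.10.21/24
zonecfg:myzone:net> end
zonecfg:myzone> commit
zonecfg:myzone> exit

On the next zone boot the framework plumbs that address as a logical
interface on ce0 for you, so there is nothing to hand-edit.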



--
Jeff VICTOR  Sun Microsystemsjeff.victor @ sun.com
OS AmbassadorSr. Technical Specialist
Solaris 10 Zones FAQ:http://www.opensolaris.org/os/community/zones/faq
--


Re: [zones-discuss] Re: improved zones/RM integration

2006-06-29 Thread Renaud Manus



Jerry Jelinek wrote:

You don't have to reboot to make FSS the default and have all
of the global zones processes running under FSS.  Instead, you can
do something like this:

# dispadmin -d FSS
# dispadmin -u
# priocntl -s -c FSS -i all
# priocntl -s -c FSS -i pid 1



Since snv_29, you can also restart the scheduler service:

# dispadmin -d FSS
# svcadm restart system/scheduler
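
Either way, you can verify afterward that processes actually moved; for
example (the exact mix of remaining non-FSS classes will vary):

# ps -e -o class | sort | uniq -c

Kernel threads in the SYS class (and a few other special cases) will
remain, but ordinary user processes should now report FSS.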

-- Renaud


[zones-discuss] Re: Zones, clusters, and maintainability

2006-06-29 Thread Phil Freund
Thanks to everyone for the sanity check and confirmation of the approach. I'm
going to test this on my lab cluster servers and get it down pat. It may take a
while before I get there, though; I still have to migrate another 4 servers from
boxes going off lease to my new Solaris 10 servers. (So far: 5 servers migrated,
with 18 zones created on 4 new servers -- 5 zones came from the migration and
the rest handle new requirements... sigh)

Phil

Phil Freund
Lead Systems and Storage Administrator
Kichler Lighting
 
 