Hi John, 
  to clarify, I don't want to mount a filesystem exported by the global zone; 
rather, from within the zone, I simply want to leverage the global zone's 
automount tree to give each non-global zone access to our (separate) NFS 
fileservers. 

I think the technical discussion above gets into the details of this scenario.
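
To make that concrete, here is roughly the shape of it (just a sketch; the 
zone name "lx1" and the /prj path are placeholders for our real setup). 
Whether the global zone's automounter keeps triggering mounts on demand once 
the tree is looped into the zone this way is exactly the open question:

    # in the global zone: loop an automount-managed tree into a zone
    zonecfg -z lx1
    zonecfg:lx1> add fs
    zonecfg:lx1:fs> set dir=/prj
    zonecfg:lx1:fs> set special=/prj
    zonecfg:lx1:fs> set type=lofs
    zonecfg:lx1:fs> end
    zonecfg:lx1> commit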

Hi Niko, 
   I'm glad you feel the same and also want to speak up about it!   
(I started a thread in the brandz forum; it's now on the topic of 64-bit lx, 
and there are more details about our environment there.)
Here are the details of my goals (two flavors):
  I would like to build a display server with 100 users, each with their own 
personal brandz Linux zone.  Their /pkg or /app space would be a ZFS 
filesystem in the global zone, replicated to the display server (lofs this; 
the simple part).  Their /prj or /project space would be the same 
auto.projects map that all our other Solaris and Linux boxes have (the heart 
of this discussion: I don't want 100 autofs5 daemons running).
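
For the /prj piece, the map I mean is just an ordinary indirect autofs map, 
shared by every box; the entries below are illustrative (server and export 
names are made up):

    # /etc/auto_master entry, same on the Solaris and Linux boxes
    /prj    auto.projects

    # sample auto.projects entries (hypothetical fileservers)
    chipA   -rw,hard    nfs-svr1:/export/projects/chipA
    chipB   -rw,hard    nfs-svr2:/export/projects/chipB

The point is that the global zone already resolves this map; I just want each 
zone to see the same tree without running its own automounter.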

The other flavor goes like this:
In our LSF farm we generally have two kinds of queues (or job profiles): 
large-memory (32-64 GB) single-core, and multi-core (<= 4 cores) small-memory 
(< 4 GB). 

With a 16-thread, 144 GB Nehalem server, we need to have job slots for both 
kinds; currently, with LSF it is impossible for us to put a single machine in 
both kinds of queues.  To take the idea further, I want to use Solaris 
resource controls to help buffer the load, in addition to having LSF or Sun 
Grid Engine buffer and schedule the jobs before dispatch.
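
By resource controls I mean something along these lines (a rough sketch; the 
zone names and limits are placeholders for the two job profiles):

    # large-memory, single-core profile
    zonecfg -z bigmem1
    zonecfg:bigmem1> add capped-memory
    zonecfg:bigmem1:capped-memory> set physical=64g
    zonecfg:bigmem1:capped-memory> end
    zonecfg:bigmem1> add dedicated-cpu
    zonecfg:bigmem1:dedicated-cpu> set ncpus=1
    zonecfg:bigmem1:dedicated-cpu> end
    zonecfg:bigmem1> commit

    # multi-core, small-memory profile: set physical=4g and ncpus=4 instead

LSF or SGE would still queue and dispatch the jobs; the zone caps just keep 
one profile from starving the other on the same box.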

Spread those two flavors over a bunch of modern hardware, and you have a new 
model which (IMHO) implements "virtualization done right".
Cheers, Rob