On Wed, Apr 26, 2006 at 07:00:21PM +1200, Sam Vilain wrote:
> Chuck wrote:
> 
>> we are completely restructuring our entire physical network around   
>> the vserver concept.                                                 
  
>> it has proven itself in stability and performance in production to
>> the point that we no longer see the need for dedicated servers except
>> in the most demanding instances (mostly our email server, which
>> cannot be run as a guest until there is no slowdown using more than
>> 130 IP addresses).

could you describe the scheme behind those 130 IPs
in your case? I'm trying to get an idea of what addresses
such large-IP setups typically use ...
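
just to illustrate what i mean by 'scheme': e.g. a contiguous block
added as secondary addresses on the host with a small loop, roughly
like this (interface name and address range are invented):

    # assign 130 secondary addresses (hypothetical range) to eth0
    for i in $(seq 1 130); do
        ip addr add 10.1.1.$i/24 dev eth0
    done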
  
>> in our network restructuring, we wish to use our large NFS storage
>> system, place all the vserver guests on it, and share those
>> directories to be mounted as /vservers on the proper dual-Opteron
>> front-end machine.
  
>> i am seriously thinking of also making /etc/vservers an NFS mount,
>> so that each host's configuration and guests live in a particular
>> area on the NFS server, making it a breeze to switch machines if
>> needed.

>> does anyone see a problem with this idea? we will be using dual
>> gigabit NICs from each machine into this NFS system over a private
>> network to handle the large data flow. public IP traffic will still
>> use 100 Mbit NICs.

that is basically what Lycos is doing, and together with
them we implemented the xid tagging over NFS (requires
a patched filer, though), which seems to work reasonably
well ...
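
just to sketch what that looks like on each front end (filer name
and export paths are invented; the xid tagging itself needs the
patched kernel/filer mentioned above, no extra option shown here):

    # shared guest roots plus the shared configuration, both over nfs
    mount -t nfs -o rw,hard,intr,tcp filer01:/export/vservers /vservers
    mount -t nfs -o rw,hard,intr,tcp filer01:/export/etc-vservers /etc/vservers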
  
>> if this can work efficiently (most of our guests are not disk-I/O
>> bound; those with ultra-heavy disk I/O will live on each front-end
>> machine), we can consolidate more than 100 machines into 2 front-end
>> machines and one SAN system. this would free enough rack space that,
>> if we don't need any dedicated machines in the future, we could
>> easily add more than 1500 servers in host/guest config in the same
>> space 100 took up. it would also hugely simplify backups and cut our
>> electric bill in half or more.

yes, it just requires really careful tuning; otherwise
NFS will be the bottleneck
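
the knobs that usually matter are the transfer sizes, tcp vs. udp,
and atime updates; e.g. as a starting point (the numbers are just a
guess, measure before settling on anything):

    # bigger reads/writes over tcp, no atime updates on guest data
    mount -t nfs -o rw,hard,intr,tcp,rsize=32768,wsize=32768,noatime \
        filer01:/export/vservers /vservers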

> Nice idea; certainly NFS is right for /etc/vservers, but consider
> using a network block device, like iSCSI or ATA over Ethernet, for
> the filesystems used by the vservers themselves. You'll save yourself
> a lot of headaches and the thing will probably run a *lot* faster.

this is a viable alternative too; at least iSCSI and AoE have
already been tested with Linux-VServer, so they should work
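
e.g. with AoE the exported disk simply shows up under /dev/etherd/
once the driver is loaded (shelf/slot numbers depend on the target
config, so treat e0.0 as a placeholder):

    # ATA over Ethernet: discover the device, then mount it
    modprobe aoe
    aoe-discover                  # from the aoetools package
    mkfs.ext3 /dev/etherd/e0.0    # first-time setup only!
    mount /dev/etherd/e0.0 /vservers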

> Unification would be impractical on top of all of this, but this is
> probably not a huge problem.

why would that be so? if it is the same block device, the
filesystem on top can use unification just as well, though not
across different filesystems ...
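
i.e. once the guests share a single filesystem, the usual hashify
run applies; roughly like this, assuming i remember the util-vserver
invocation correctly and hashify is configured (guest names invented):

    # unify identical files across guests on the same filesystem
    vserver guest1 hashify
    vserver guest2 hashify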

HTH,
Herbert

> Sam.
_______________________________________________
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver
