Edward Ned Harvey wrote:
>> From: Richard Chycoski [mailto:rskiad...@chycoski.com]
>>
>> If the server that is currently pointed at fails, NIS takes too long to
>> fail over to a new server. 
>>     
>
> This hasn't been my experience.  At one site, we have 2 NIS servers (a
> master & slave) and we regularly alternate reboots of the servers, to apply
> updates to one, and then the other.  (Or for various other reasons.)  There
> have never been any problems with this, unless it manifests itself in some
> transparent way that we've never noticed.
>
> Maybe what you're talking about was a bygone problem that has since been
> solved?  All of our NIS servers and clients are RHEL4, RHEL5, CentOS 4 & 5,
> and Solaris 10.
>
>   
No, the TCP timeout issues are inherent in the RPC layer beneath NIS.
It's something you're more likely to see in large installations where
servers are frequently making many new mounts. We had a seriously large
set of automount maps, which didn't help the situation...
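
Concretely, each unreachable server costs the client a full RPC timeout
before ypbind gives up and tries the next binding, so the stalls add up.
A rough sketch of that arithmetic in Python (the hostnames and the
25-second timeout are assumptions for illustration, not values taken
from any real ypbind):

    #!/usr/bin/env python3
    # Illustrative only: models how per-server timeouts stack up into a
    # long client-side stall before a rebind succeeds.
    import socket
    import time

    NIS_SERVERS = ["nis1.example.com", "nis2.example.com"]  # hypothetical
    RPC_TIMEOUT = 25  # seconds; the stall grows with each dead server

    def bind_to_server(servers):
        for host in servers:
            start = time.monotonic()
            try:
                # rpcbind/portmapper listens on TCP 111; a host that is
                # down but not refusing connections makes connect() wait
                # out the full timeout before we can move on.
                with socket.create_connection((host, 111),
                                              timeout=RPC_TIMEOUT):
                    return host
            except OSError:
                print(f"{host}: gave up after "
                      f"{time.monotonic() - start:.0f}s")
        return None

    print("bound to:", bind_to_server(NIS_SERVERS))

With two dead servers ahead of a live one, that's most of a minute of
stall before a single bind succeeds.
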
>   
>> There's nothing stopping anyone from
>> building a caching NIS client, which would erase much of this
>> advantage,
>> but I've yet to see a caching NIS client. 
>>     
>
> At one point, I wanted to support laptops and network outages, so at
> another job, each machine became a NIS slave.  It worked fine, except for
> the pushing of updates.  Before too long, we abandoned the idea.  So I
> agree, a caching NIS client would be very nice.
>   
Using slaves as client caches doesn't work well with large maps, as you 
apparently discovered. :-)
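
The core of such a cache would be small, for what it's worth. Here's a
minimal sketch of the idea in Python, where the fetch callable stands
in for a real yp_match()-style RPC and the 300-second TTL is an
arbitrary assumption:

    #!/usr/bin/env python3
    # Minimal sketch of a caching NIS client: a TTL cache in front of a
    # map lookup. fetch() is a stand-in, not an existing NIS API.
    import time

    CACHE_TTL = 300  # seconds; an arbitrary choice for illustration
    _cache = {}      # (map_name, key) -> (expires_at, value)

    def cached_lookup(map_name, key, fetch):
        now = time.monotonic()
        entry = _cache.get((map_name, key))
        if entry and entry[0] > now:
            return entry[1]           # cache hit: no RPC round trip
        value = fetch(map_name, key)  # miss or expired: ask the server
        _cache[(map_name, key)] = (now + CACHE_TTL, value)
        return value

The obvious trade-off is serving answers up to CACHE_TTL seconds stale.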

There are lots of gotchas when scaling NIS to very large maps, large
numbers of clients, or high transaction rates. Modern, fast processors
help mitigate the issues, but, unfortunately, they don't eliminate
them.

- Richard
_______________________________________________
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
