50,000 files in a single directory will be difficult for any filesystem. I would recommend breaking them out into groups of fewer than 10,000 per directory. For better performance, spread those directories across different spindles; the parallelization of seeks (and with thousands of small files that can each be read in one or two reads, your disks will spend most of their time seeking) should show a noticeable performance improvement.
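A minimal sketch of one way to do the bucketing (the paths and the two-hex-digit fan-out are assumptions; adjust for your layout):

    #!/usr/bin/env python3
    # bucket_zones.py - spread zone files across 256 subdirectories,
    # keyed by the first two hex digits of an MD5 of the file name.
    # 50,000 zones / 256 buckets ~= 200 files per directory.
    import hashlib
    import os
    import shutil

    SRC = "/var/named/slaves"        # assumed: current single directory
    DST = "/var/named/slaves-split"  # assumed: new bucketed root

    for name in os.listdir(SRC):
        bucket = hashlib.md5(name.encode("utf-8")).hexdigest()[:2]
        dst_dir = os.path.join(DST, bucket)
        if not os.path.isdir(dst_dir):
            os.makedirs(dst_dir)
        shutil.move(os.path.join(SRC, name), os.path.join(dst_dir, name))

You would then point each zone's "file" statement in named.conf at the new path (and, if spindle separation matters, mount some of the bucket directories on different disks).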
Do only some of the zones update in any given 15-minute cycle? If so, you may see an even bigger improvement by reloading only those that have changed (a rough sketch of this follows after the quoted message below).

On Sat, Feb 26, 2011 at 8:56 PM, Dennis Perisa <dennis.per...@gmail.com> wrote:
> Hi folks,
> I'm looking for suggestions to substantially improve reload times on a slave
> that is serving 50,000 zones (mostly customer zones).
> 'rndc reload' is being executed on the slave every 15 minutes. Due to the
> large number of zones to trawl through, the reload process is causing
> intermittent outages and/or significant delays to zone transfers.
> Here are some ideas I have:
> - use rndc reconfig instead
> - separate zone files into separate dirs to improve O/S performance
> (currently, all zone files are in a single dir)
> Are these viable options? Any other thoughts/suggestions?
> This is expected to be a short-term fix while we consider the brute force
> approach of throwing more cpu/mem/IO at this.
> DP

--
david t. klein
Cisco Certified Network Associate (CSCO11281885)
Linux Professional Institute Certification (LPI000165615)
Redhat Certified Engineer (805009745938860)

Quis custodiet ipsos custodes?
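P.S. As promised above, here is a rough sketch of reloading only the zones whose files have changed since the last cycle, using per-zone "rndc reload <zone>" instead of a full reload. It assumes each zone file is named after its zone, which may not match your setup:

    #!/usr/bin/env python3
    # reload_changed.py - reload only zones whose files changed since
    # the last run; run from cron instead of a blanket "rndc reload".
    import os
    import subprocess

    ZONE_DIR = "/var/named/slaves"          # assumed zone file directory
    STAMP = "/var/run/reload-changed.stamp" # records when we last ran

    last = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0.0
    for name in os.listdir(ZONE_DIR):
        if os.path.getmtime(os.path.join(ZONE_DIR, name)) > last:
            # "rndc reload <zone>" reloads a single zone
            subprocess.check_call(["rndc", "reload", name])

    # touch the stamp so the next pass only sees newer changes
    with open(STAMP, "w"):
        pass

Note this only helps for zones already in named.conf; picking up newly added or removed zones is what "rndc reconfig" is for, as mentioned in the quoted message.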