On Sun, Oct 09, 2011 at 02:31:19PM -0500, Stan Hoeppner wrote:
> On 10/9/2011 8:36 AM, Bron Gondwana wrote:
> > How many people are running their mail servers on 24-32 SAS spindles
> > versus those running them on two spindles in RAID1?
>
> These results are for a maildir type workload, i.e. POP/IMAP, not a
> spool workload. I believe I already stated previously that XFS is not
> an optimal filesystem for a spool workload but would work well enough
> if set up properly. There are typically not enough spindles or enough
> concurrency in a spool workload to take advantage of XFS' strengths.
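(For reference, "set up properly" would look something like the sketch below. The device path and the specific values are assumptions for illustration, not recommendations; tune them to your spindle count and kernel.)

```shell
# Hypothetical XFS setup for a mail filesystem. /dev/sdb1 and the
# mount point are placeholders; agcount and log size should be sized
# to the storage actually underneath.
mkfs.xfs -d agcount=16 -l size=128m /dev/sdb1

# On 2.6.35-2.6.38 delayed logging is opt-in via the delaylog mount
# option (it became the default in 2.6.39). logbsize raises the
# in-memory log buffer size, which helps metadata-heavy work such as
# mass unlinks; noatime avoids per-read inode updates on mail files.
mount -o delaylog,logbsize=256k,noatime /dev/sdb1 /var/spool/mail
```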
I'm honestly more interested in the maildir type workload too; the
spool usually doesn't get enough traffic for the IO to matter. (Sorry,
getting a bit off topic for the postfix list.)

> > Wow - just what I love doing. Building intimate knowledge of the
> > XFS allocation group architecture to run up a mail server. I'll
> > get right on it.
>
> As with anything you pick the right tool for the job. If your job
> requires the scalability of XFS you'd learn to use it. Apparently
> your workload doesn't.

We went with lots of small filesystems to reduce single points of
failure rather than one giant filesystem across all our spools. I'm
still convinced that it's a better way to do it, despite people trying
to convince me to throw all my eggs in one basket again. SANs are
great, they say; never had any problems, they say.

> > Sarcasm aside - if you ship with stupid-ass defaults, don't be
> > surprised if people say the product isn't a good choice for
> > regular users.
>
> I think you missed the point.

No, not really. I'm not going to advise people to use something that
requires a lot of tuning.

> > I tried XFS for our workload (RAID1 sets, massive set of unlinks
> > once per week when we do the weekly expunge cleanup) - and the
> > unlinks were just so nasty that we decided not to use it. I was
> > really hoping for btrfs to be ready for prime-time by now, but
> > that's looking unlikely to happen any time soon.
>
> Take another look at XFS. See below, specifically the unlink numbers
> in the 2nd linked doc.

Hmm...

> > Maybe my tuning fu was bad - but you know what, I did a bit of
> > reading and chose options that provided similar consistency
> > guarantees to the options we were currently using with reiserfs.
> > Besides, 2.6.17 was still recent memory at the time, and it didn't
> > encourage me much.
>
> It was not your lack of tuning fu. XFS metadata write performance was
> abysmal before 2.6.35. For example, deleting a kernel source tree
> took 10+ times longer than on EXT3/4.
> Look at the performance since the delayed logging patch was
> introduced in 2.6.35. With a pure unlink workload it's now up to par
> with EXT4 performance up to 4 threads, and surpasses it by a factor
> of two or more at 8 threads and greater. XFS' greatest strength,
> parallelism, now covers unlink performance, where it was severely
> lacking for many years, both on IRIX and Linux.
>
> The design document:
> http://xfs.org/index.php/Improving_Metadata_Performance_By_Reducing_Journal_Overhead
>
> Thread discussing the performance gains:
> http://oss.sgi.com/archives/xfs/2010-05/msg00329.html

My goodness. That's REALLY recent in filesystem time. Something that
recent, combined with the "all my eggs in one basket" risk of moving
to one large multi-spindle filesystem (the kind that would actually
see XFS' benefits), is more danger than I'm willing to take on. That
patch is barely a year old. At least we're not still running Debian's
2.6.32 any more, but still. I'll run up some tests again some time,
but I'm not thinking of switching soon.

Bron.
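(When I do run those tests, it'll be something along these lines: a
rough micro-benchmark of the unlink-heavy expunge pattern, timing
parallel deletes at a few concurrency levels. The paths and counts
below are made up for illustration; point it at a scratch directory on
the filesystem under test.)

```shell
#!/bin/sh
# Hypothetical parallel-unlink micro-benchmark (maildir-ish: many
# small files per directory, one rm worker per directory).
DIR=${1:-/tmp/unlink-bench}   # assumed scratch location on the fs under test
THREADS=${2:-4}               # number of parallel unlink workers
FILES=1000                    # files created per worker

# Create THREADS directories of FILES empty "messages" each.
mkdir -p "$DIR"
i=0
while [ "$i" -lt "$THREADS" ]; do
    mkdir -p "$DIR/worker$i"
    j=0
    while [ "$j" -lt "$FILES" ]; do
        : > "$DIR/worker$i/msg$j"
        j=$((j + 1))
    done
    i=$((i + 1))
done

# Time only the unlink phase: one background rm per worker directory.
start=$(date +%s)
i=0
while [ "$i" -lt "$THREADS" ]; do
    rm -rf "$DIR/worker$i" &
    i=$((i + 1))
done
wait
end=$(date +%s)
echo "unlinked $((THREADS * FILES)) files in $((end - start))s with $THREADS workers"
```

Run it once per thread count (1, 2, 4, 8) on each candidate filesystem
and compare the unlink-phase times; that should show whether the
post-2.6.35 scaling claims hold on our hardware.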