Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Gary Mills
On Fri, May 04, 2007 at 06:35:41PM -0500, Gary Mills wrote:
> There was a question earlier regarding ZFS for Cyrus IMAP storage.  We
> recently converted to that filesystem here.  I'm extremely pleased
> with it.  Our server has about 30,000 users with over 200,000
> mailboxes.  It peaks at about 1900 IMAP sessions.  It currently has 1
> TB of storage with about 300 GB in use.  The server is a Sun T2000
> with 16 GB of memory, running Solaris 10.  It uses iSCSI to provide two
> 500 GB devices from our Netapp filer.  Disk redundancy is all on the
> Netapp because it's currently superior to that provided by ZFS.

I see that I need to clarify this statement.  ZFS's disk redundancy,
which is excellent, is simply not used in this configuration.
Neither is ZFS's disk management, which is in any case currently
incomplete.  Disk management on the Netapp works very nicely.
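
For reference, attaching the filer's LUNs on the Solaris side looks
roughly like this (a sketch; the filer address and port are
hypothetical):

    # Point SendTargets discovery at the Netapp filer and enable it.
    iscsiadm add discovery-address 192.168.10.5:3260
    iscsiadm modify discovery --sendtargets enable
    # Create device nodes for the newly discovered LUNs.
    devfsadm -i iscsi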

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Vincent Fox


I originally brought up the ZFS question.

We seem to have arrived at a similar solution after much
experimentation: use ZFS for the things it already does well, and
leverage proven hardware to fill in the weak spots.


I have a pair of Sun 3510FC arrays, each exporting two RAID5 LUNs (5
disks each), one on the primary controller and one on the secondary.
This is to exploit the active-active controller feature of the
3510FC.  We are also doing multipathing through a pair of SAN fabric
switches.


On top of that, we use ZFS to join a LUN from each array into a
mirror pair, and then add the second pair of LUNs as another mirror.
I guess you could call it RAID 5+1+0.  This architecture allows us to
add more storage to the pool while online, by adding more 3510FC
array pairs to the pool.
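
A minimal sketch of that layout; the cXtYdZ device names are
hypothetical stand-ins for the four RAID5 LUNs:

    # Mirror each LUN from array A with its counterpart on array B.
    zpool create mailpool \
        mirror c2t0d0 c3t0d0 \
        mirror c2t1d0 c3t1d0

    # Later, grow the pool online by adding another array pair:
    zpool add mailpool mirror c4t0d0 c5t0d0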


Performance in benchmarking (Bonnie++ etc.) has shown little
difference from turning the arrays into JBODs and doing everything
with ZFS.  The behavior is more predictable to me, since I know the
3510 firmware knows how to rebuild a RAID5 set using the spare drive
assigned within that array.  With ZFS I see no way to assign a
specific disk as the spare for a particular set of disks, which could
mean a spare is pulled from another array.
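
For anyone following along: ZFS hot spares are attached to the pool
as a whole, not to a particular mirror or raidz group, which is
exactly the limitation above (device name hypothetical):

    # The spare belongs to the whole pool; ZFS may resilver onto it
    # from any vdev, even one whose disks live in a different array.
    zpool add mailpool spare c2t2d0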


It's pretty nifty to be able to walk into the machine room, flip off
the power to an entire array, and have things keep working without a
blip.  It's not the most efficient use of disk space, but with
performance and safety this promising for an EMAIL SERVER, it will
definitely be welcome.  I dread the idea of silent data corruption,
or of a long fsck on a 1+ TB mail spool, both of which ZFS should
save us from.  I have atime=off and compression=on.  Our setup is
slightly different from yours in that we are clustering two T2000s
with 8 GB of RAM each, and we are currently setting up Solaris
Cluster 3.2 in a failover configuration so we can patch without
downtime.
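
Those two properties are per-dataset settings, e.g. (pool and
filesystem names hypothetical):

    # Skip access-time updates on the spool (a big win for a mail
    # workload) and compress mail data transparently.
    zfs set atime=off mailpool/spool
    zfs set compression=on mailpool/spool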


Thanks for the idea about daily snapshots for recovering recent data;
I like that idea a lot.  I'll tinker around with it.  I wonder if
there'd be much penalty to upping the snapshots to every 8 hours.
Depends on how much churn there is in your mail spool, I suppose.
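
As a sketch, an 8-hourly schedule is just a root crontab entry (names
hypothetical); ZFS snapshots are cheap to take, and the real cost is
the churned blocks they keep referenced:

    # Snapshot the spool at 00:00, 08:00, and 16:00, stamped with
    # the date and hour (% must be escaped in crontab entries).
    0 0,8,16 * * * /usr/sbin/zfs snapshot mailpool/spool@`date +\%Y\%m\%d\%H`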


This system should move into production later this month.  We have
70,000 accounts that we'll slowly migrate from our UW-IMAP pool of
servers.  We have an existing Perdition proxy setup, which will allow
us to migrate users transparently.  Hopefully I'll have more good
things to say about it sometime thereafter.






Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Vincent Fox

Gary Mills wrote:

There was a question earlier regarding ZFS for Cyrus IMAP storage.  We
recently converted to that filesystem here.  I'm extremely pleased
with it.  Our server has about 30,000 users with over 200,000
mailboxes.  It peaks at about 1900 IMAP sessions.  It currently has 1
TB of storage with about 300 GB in use.  The server is a Sun T2000
with 16 GB of memory, running Solaris 10.  It uses iSCSI to provide two
500 GB devices from our Netapp filer.  Disk redundancy is all on the
Netapp because it's currently superior to that provided by ZFS.
  


Our T2000s have 8 GB each for the cluster we are building here at the
University of California, Davis.

Would you expect we'll have to upgrade RAM, or is it just gravy?




Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Gary Mills
On Sat, May 05, 2007 at 10:14:07AM -0700, Vincent Fox wrote:
> Gary Mills wrote:
> >There was a question earlier regarding ZFS for Cyrus IMAP storage.  We
> >recently converted to that filesystem here.  I'm extremely pleased
> >with it.  Our server has about 30,000 users with over 200,000
> >mailboxes.  It peaks at about 1900 IMAP sessions.  It currently has 1
> >TB of storage with about 300 GB in use.  The server is a Sun T2000
> >with 16 GB of memory, running Solaris 10.  It uses iSCSI to provide two
> >500 GB devices from our Netapp filer.  Disk redundancy is all on the
> >Netapp because it's currently superior to that provided by ZFS.
> 
> Our T2000s have 8 GB each for the cluster we are building here at
> the University of California, Davis.
> Would you expect we'll have to upgrade RAM, or is it just gravy?

I really don't know.  IMAP sessions need lots of memory.  ZFS will use
all it can get for its ARC cache.  For a one-off server, I find it
cheaper to overdesign it than to devote a lot of time to full load
testing.
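
If the ARC ever crowds out the imapd processes, it can be capped with
a tunable in /etc/system; a sketch, with the 4 GB value purely
illustrative, not a recommendation:

    * Cap the ZFS ARC at 4 GB (value in bytes); takes effect on reboot.
    set zfs:zfs_arc_max = 0x100000000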

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-



Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Rudy Gevaert

Vincent Fox wrote:


> This system should move into production later this month.  We have
> 70,000 accounts that we'll slowly migrate from our UW-IMAP pool of
> servers.  We have an existing Perdition proxy setup, which will
> allow us to migrate users transparently.

Are you going to do this with "1" Perdition server?  Make sure you
have compiled Perdition with /dev/urandom, or another sort of
non-blocking entropy-providing device :)


Rudy




Re: ZFS for Cyrus IMAP storage

2007-05-05 Thread Vincent Fox

Rudy Gevaert wrote:
> Are you going to do this with "1" Perdition server?  Make sure you
> have compiled Perdition with /dev/urandom, or another sort of
> non-blocking entropy-providing device :)


You perhaps think we are adding Perdition to the mix and assume we
have a single box that might get overloaded.  No, we've had Perdition
running on a load-balanced pool of 4 Linux boxes for about a year and
a half.  This was our abstraction to hide the 10+ UW-IMAP servers
from the user population and to ease account migration between
mail-stores.  I expect this migration will take some months to
complete, and the ability to hide from users which mail-store they
are on will be essential.  I imagine we'll keep using Perdition; I
can't see a compelling reason at the moment to move to a Murder
setup.






Re: perdition (was ZFS for Cyrus IMAP storage)

2007-05-05 Thread Rob Mueller



> You perhaps think we are adding Perdition to the mix and assume we
> have a single box that might get overloaded.  No, we've had
> Perdition running on a load-balanced pool of 4 Linux boxes for
> about a year and a half.  This was our abstraction to hide the 10+
> UW-IMAP servers from the user population and to ease account
> migration between mail-stores.  I expect this migration will take
> some months to complete, and the ability to hide from users which
> mail-store they are on will be essential.  I imagine we'll keep
> using Perdition; I can't see a compelling reason at the moment to
> move to a Murder setup.


I'd recommend switching from Perdition to nginx.  Although you'll
have to write your own authentication server, that's pretty easy with
Perl + Net::Server, and the overhead of nginx with a large number of
connections is a lot smaller than Perdition's.


http://blog.fastmail.fm/?p=592

Rob

