Re: Using a SAN/GPFS with cyrus
On 21 Jan 2004, [EMAIL PROTECTED] spake:

> I'm installing Cyrus on a system that will have access to an IBM
> FAStT SAN with GPFS (a parallel filesystem allowing multiple servers
> to share a filesystem on a SAN).
>
> For redundancy, I was thinking of creating the IMAP folder dir and
> spool dir on the SAN and then having two mailservers set up
> identically using Cyrus. If the primary server goes down for any
> reason, the secondary would automatically begin receiving/delivering
> mail based on the MX records in DNS.
>
> Would this present any problems with Cyrus if two servers are
> accessing the same directories/files? GPFS should manage file
> sharing, but I'm wondering if there are any known problems with
> Cyrus in this configuration.
>
> Has anyone done this before?

FOR THE LOVE OF GOD, RUN AWAY!

We had our Cyrus message store on GPFS[1] for just about a year. I've been a Unix systems administrator for almost 15 years; it was the worst single judgment of my professional career. Period.

During the 18 months when we had GPFS deployed, my unit had TWO crit sits[2] and uncovered over 30 bugs[3] in the GPFS software alone (not counting stuff we found in VSD, AIX, et cetera). The situation ended with the GPFS architect suggesting that we do something else. He's a great guy, and he helped us many times, but the product just doesn't do what we wanted.

GPFS is the successor to the MultiMedia Filesystem, which was used in IBM's Videocharger product. It's *excellent* at streaming small numbers of large files (like, say, movies). It's horrible when you get above a few hundred thousand files, as the systems can't possibly have enough memory to keep track of the filesystem metadata. Our Cyrus installation has about 80K users, close to 1TB of disk, and many millions of files. The number of files alone would be enough to kill the idea of using GPFS.

Cyrus makes pretty extensive use of mmap(), and so does BerkeleyDB.
While GPFS implements[4] mmap(), the GPFS architect had some words about the way certain operations are accomplished in Cyrus. I think there are (or used to be) places where an mmap'd file is opened for write with another file handle (or from another process). GPFS doesn't handle this well. This technique works accidentally on non-clustered filesystems because AIX (also) mmap's things for you behind your back (in addition to whatever you do) and then serializes all of the access to regions of those files. That's really the only reason why Cyrus works on JFS.

Also note that the other groups/developers within IBM (especially the group that does the automounter) have their collective heads up their asses with respect to supporting "after market" filesystems on AIX. After two freakin' years of PMRs they still couldn't figure out how to make autofs interact predictably with locally-mounted GPFSs. I constantly had to employ workarounds in my automounter maps.

If you just want failover, then use the HACMP[5] product to deal with the failover scenario. If you need to scale beyond one system image, try out a Cyrus Murder. That's what we're using now, and it works great. Note that in the Murder scenario you can still use HACMP to have one of your back-ends take over for another if it fails. You just have to carefully craft your cyrus.conf files to bind to only a single IP address, so that you can run two separate instances of Cyrus on the same machine during the failover.

I will be happy to discuss our particular situation, implementation choices, and details with you if you'd like to contact me out-of-band.

We're currently running our Murder on:

  2 x p630 [backends]
      4 x 1.4GHz Power4+ CPU
      8GB Real Memory
  4 x p615 [frontends]
      2 x 1.2GHz Power4+ CPU
      4GB Real Memory

The frontends are also the web servers for our VirtualHosting cluster. We're running version 2.1.x of Cyrus. Now that 2.2.x is stable we'll upgrade, but you can imagine that it'll take some planning.
;)

[1] GPFS: using CVSD then RVSD in our SP
[2] crit sit: Critical Situation: IBM's tool for managing barely-tenable customer relationship situations
[3] Something like 90% of our PMRs resulted in code changes
[4] In typical IBM fashion, they implemented *exactly* the POSIX specification, and not a penny more. I'm not convinced that this is bad, but it bites me a lot.
[5] High Availability Cluster Multi-Processing(tm), IBM

Regards,
--
Stephen L. Ulmer                    [EMAIL PROTECTED]
Senior Systems Programmer           http://www.ulmer.org/
Computing and Network Services      VOX: (352) 392-2061
University of Florida               FAX: (352) 392-9440
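[Editor's note] The "bind each instance to a single IP" arrangement mentioned above might look like the following cyrus.conf SERVICES fragment. The address and prefork counts are invented for illustration; the point is that no listener binds the wildcard address, so a surviving HACMP node can start a second Cyrus instance bound to the failed node's service address.

```
# Hypothetical cyrus.conf fragment: bind only this instance's service
# address, never the wildcard, so two instances can coexist on one
# node during failover.
SERVICES {
  imap  cmd="imapd"  listen="192.0.2.10:imap"  prefork=5
  lmtp  cmd="lmtpd"  listen="192.0.2.10:lmtp"  prefork=1
}
```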
Re: Using a SAN/GPFS with cyrus
Prentice Bisbal wrote:

> For redundancy, I was thinking of creating the IMAP folder dir and
> spool dir on the SAN and then having two mailservers set up
> identically using Cyrus. If the primary server goes down for any
> reason, the secondary would automatically begin receiving/delivering
> mail based on the MX records in DNS.

Please bear in mind that the MX priority in DNS is not honored by all mail systems, so your secondary server could receive traffic while your primary server also has traffic, possibly delivering to the same mailbox at the same time.

Gr,
Nils.
--
Simple guidelines to happiness: Work like you don't need the money, love like your heart has never been broken, and dance like no one can see you.
Re: Using a SAN/GPFS with cyrus
Prentice Bisbal wrote:

> Ken,
>
> I'm not too familiar with QFS SANs. Does that have a filesystem
> interface where the filesystem itself allows multiple SAN clients to
> access the same filesystem, etc?

Yes, it's a shared filesystem. Multiple clients can r/w simultaneously.

> What if the 2nd system was treated as a hot spare, and wouldn't
> actually do any mailserving functions until the primary server is
> shut down? (i.e., the switchover wouldn't be automatic; it would
> require human intervention.)

If you stop Cyrus on the primary, then start Cyrus on the spare, you should be OK. In this case you're essentially using the SAN as shared storage, not a shared filesystem (no different from physically moving the FC connection from one box to the other).

> Ken Murchison wrote:
>
>> Prentice Bisbal wrote:
>>
>>> I'm installing Cyrus on a system that will have access to an IBM
>>> FAStT SAN with GPFS (a parallel filesystem allowing multiple
>>> servers to share a filesystem on a SAN).
>>>
>>> For redundancy, I was thinking of creating the IMAP folder dir and
>>> spool dir on the SAN and then having two mailservers set up
>>> identically using Cyrus. If the primary server goes down for any
>>> reason, the secondary would automatically begin
>>> receiving/delivering mail based on the MX records in DNS.
>>>
>>> Would this present any problems with Cyrus if two servers are
>>> accessing the same directories/files? GPFS should manage file
>>> sharing, but I'm wondering if there are any known problems with
>>> Cyrus in this configuration.
>>>
>>> Has anyone done this before?
>>
>> Sharing mailboxes.db is prone to problems; if one machine trashes
>> it, then the other machine(s) need to be halted while the db is
>> reconstructed.
>>
>> I'm involved in something similar using 3 or 4 load-balanced Sun
>> machines on a QFS SAN. The current setup keeps separate
>> mailboxes.db, deliver.db, and tls_sessions.db on each machine
>> (which means that duplicate delivery and Sieve aren't foolproof
>> across machines). I have modified imapd and mupdate to keep
>> mailboxes.db in sync across the machines.
>>
>> This code is currently being beta-tested, and I haven't heard any
>> complaints for weeks. If you are interested in looking at it, check
>> out the unified-imapd branch from CVS.

--
Kenneth Murchison     Oceana Matrix Ltd.
Software Engineer     21 Princeton Place
716-662-8973 x26      Orchard Park, NY 14127
--PGP Public Key-- http://www.oceana.com/~ken/ksm.pgp
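[Editor's note] The split Ken describes (private databases per machine, shared message store) corresponds roughly to Cyrus's standard configuration layout. A hypothetical imapd.conf for one of the machines (paths invented) would keep configdirectory, and therefore the per-machine databases, on local disk while pointing the default partition at the shared QFS mount:

```
# Hypothetical per-machine imapd.conf: databases local, spool shared.
configdirectory: /var/imap           # local disk: mailboxes.db,
                                     # deliver.db, tls_sessions.db
partition-default: /qfs/spool/imap   # shared QFS filesystem
```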
Re: Using a SAN/GPFS with cyrus
Ken,

I'm not too familiar with QFS SANs. Does that have a filesystem interface where the filesystem itself allows multiple SAN clients to access the same filesystem, etc?

What if the 2nd system was treated as a hot spare, and wouldn't actually do any mailserving functions until the primary server is shut down? (i.e., the switchover wouldn't be automatic; it would require human intervention.)

Prentice

Ken Murchison wrote:

> Prentice Bisbal wrote:
>
>> I'm installing Cyrus on a system that will have access to an IBM
>> FAStT SAN with GPFS (a parallel filesystem allowing multiple
>> servers to share a filesystem on a SAN).
>>
>> For redundancy, I was thinking of creating the IMAP folder dir and
>> spool dir on the SAN and then having two mailservers set up
>> identically using Cyrus. If the primary server goes down for any
>> reason, the secondary would automatically begin
>> receiving/delivering mail based on the MX records in DNS.
>>
>> Would this present any problems with Cyrus if two servers are
>> accessing the same directories/files? GPFS should manage file
>> sharing, but I'm wondering if there are any known problems with
>> Cyrus in this configuration.
>>
>> Has anyone done this before?
>
> Sharing mailboxes.db is prone to problems; if one machine trashes
> it, then the other machine(s) need to be halted while the db is
> reconstructed.
>
> I'm involved in something similar using 3 or 4 load-balanced Sun
> machines on a QFS SAN. The current setup keeps separate
> mailboxes.db, deliver.db, and tls_sessions.db on each machine (which
> means that duplicate delivery and Sieve aren't foolproof across
> machines). I have modified imapd and mupdate to keep mailboxes.db in
> sync across the machines.
>
> This code is currently being beta-tested, and I haven't heard any
> complaints for weeks. If you are interested in looking at it, check
> out the unified-imapd branch from CVS.
Re: Using a SAN/GPFS with cyrus
Prentice Bisbal wrote:

> I'm installing Cyrus on a system that will have access to an IBM
> FAStT SAN with GPFS (a parallel filesystem allowing multiple servers
> to share a filesystem on a SAN).
>
> For redundancy, I was thinking of creating the IMAP folder dir and
> spool dir on the SAN and then having two mailservers set up
> identically using Cyrus. If the primary server goes down for any
> reason, the secondary would automatically begin receiving/delivering
> mail based on the MX records in DNS.
>
> Would this present any problems with Cyrus if two servers are
> accessing the same directories/files? GPFS should manage file
> sharing, but I'm wondering if there are any known problems with
> Cyrus in this configuration.
>
> Has anyone done this before?

Sharing mailboxes.db is prone to problems; if one machine trashes it, then the other machine(s) need to be halted while the db is reconstructed.

I'm involved in something similar using 3 or 4 load-balanced Sun machines on a QFS SAN. The current setup keeps separate mailboxes.db, deliver.db, and tls_sessions.db on each machine (which means that duplicate delivery and Sieve aren't foolproof across machines). I have modified imapd and mupdate to keep mailboxes.db in sync across the machines.

This code is currently being beta-tested, and I haven't heard any complaints for weeks. If you are interested in looking at it, check out the unified-imapd branch from CVS.

--
Kenneth Murchison     Oceana Matrix Ltd.
Software Engineer     21 Princeton Place
716-662-8973 x26      Orchard Park, NY 14127
--PGP Public Key-- http://www.oceana.com/~ken/ksm.pgp
Using a SAN/GPFS with cyrus
I'm installing Cyrus on a system that will have access to an IBM FAStT SAN with GPFS (a parallel filesystem allowing multiple servers to share a filesystem on a SAN).

For redundancy, I was thinking of creating the IMAP folder dir and spool dir on the SAN and then having two mailservers set up identically using Cyrus. If the primary server goes down for any reason, the secondary would automatically begin receiving/delivering mail based on the MX records in DNS.

Would this present any problems with Cyrus if two servers are accessing the same directories/files? GPFS should manage file sharing, but I'm wondering if there are any known problems with Cyrus in this configuration.

Has anyone done this before?

Prentice