Re: [Dovecot] Best Cluster Storage
Jonathan,

> -Original Message-
>
> I really wish NFS didn't have the caching issue, as it's the most simple
> to set up

Don't give up on the simplest solution too easily - lots of us run NFS with quite large installs. As a matter of fact, I think all of the large installs run NFS; hence the need for the Director in 2.0.

-Brad
Re: [Dovecot] Best Cluster Storage
> -Original Message-
>
> > I'm sorry I don't follow this. It would be appreciated if you could
> > include a simpler example. The way I see it, a VM disk is just a small
> > chunk "LVM LV in my case" of a real disk.
>
> Perhaps if you were to compare and contrast a virtual disk to a raw
> disk, that would help. If you wanted to use drbd with a raw disk being
> accessed via a VM guest, that would probably be all right. Might not be
> "supported" though.

Depending on your virtualization method, raw device passthrough would probably be OK. Otherwise, think about what you're doing - putting a filesystem, on a replicated block device, that's presented through a virtualization layer, that's on a filesystem, that's on a block device. If you're running GFS/GlusterFS/etc on the DRBD disk, and the VM is on VMFS, then you're actually using two clustered filesystems! Each layer adds a bit of overhead, and each block-on-filesystem layering adds the potential for block misalignments and other issues that will affect your overall performance and throughput. It's just hard to do right.

-Brad
Re: [Dovecot] SSD drives are really fast running Dovecot
>> The reason is that few if any organizations actually need 28TB (14
>> 2TB Caviar Green drives--popular with idiots today) of mail storage in a
>> single mail store. That's 50 years worth of mail storage for a 50,000
>> employee company, assuming your employees aren't allowed porn/video
>> attachments, which most aren't.
>
> WTF? 28TB of mail storage for some is rather small. Good to see you're
> still posting without a clue Stanley.
> Remember there is a bigger world out there from your tiny SOHO

I'm with you Noel. We just bought 252TB of raw disk for about 5k users. Granted, this is going into Exchange on Netapp with multi-site database replication, so this cooks down to about 53TB of usable space with room for recovery databases, defragmentation, archives, etc, but still... 28TB is not much anymore.

Of course, Exchange has also gone in a different direction than folks have been indicating. 2010 has some pretty high memory requirements, but the actual IOPS demands are quite low compared to earlier versions. We're using 1TB 7200RPM SATA drives, and at the number of spindles we've got, combined with the cache in the controllers, expect to have quite a good bit of excess IOPS.

Even on the Dovecot side though - if you use the Director to group your users properly, and equip the systems with enough memory, disk should not be a bottleneck if you do anything reasonably intelligent. We support 12k concurrent IMAP users at ~0.75 IOPS per user. POP3, SMTP, and shell access on top of that is negligible.

I'm also surprised by the number of people trying to use DRBD to make local disk look like a SAN so they can turn around and put a cluster filesystem on it - with all those complex moving parts, how do you diagnose poor performance? Who is going to be able to support it if you get hit by a bus? Seems like folks would be better off building or buying a sturdy NFS server.
Heck, even at larger budgets, you're probably just going to end up with something that's essentially a clustered NFS server with a SAN behind it.

-Brad
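A quick back-of-the-envelope check of the sizing figure above (12k concurrent users at roughly 0.75 IOPS each - figures from the post, arithmetic only):

```shell
# 12,000 concurrent users * ~0.75 IOPS/user ~= 9,000 IOPS aggregate.
# 0.75 is scaled by 100 to stay within shell integer arithmetic.
echo $(( 12000 * 75 / 100 ))   # prints 9000
```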
Re: [Dovecot] Question about "slow" storage but fast cpus, plenty of ram and dovecot
On Dec 12, 2010, at 23:26, Javier de Miguel Rodríguez wrote:

> My SAN(s) (HP LeftHand Networks) do not support SSD, though. But I have
> several LeftHand nodes, some of them with raid5, others with raid 1+0.
> Maildirs+indexes are now in raid5, maybe I can separate the indexes to a
> raid 1+0 iscsi target in a different san
>
> I have two raid5 (7 disks+1 spare) and I have joined them via LVM
> striping. Each disk is SAS 15k rpm 450GB, and the SANs have 512
> MB-battery-backed-cache. In our real workload (imapsync), each raid5 gives
> around 1700-1800 IOPS, combined 3,500 IOPS.

Your 'slow' storage is running against 16 15k RPM SAS drives? Those LeftHand controllers must be terrible.

We have Maildir on NFS on a Netapp with 15k RPM 450GB FC disks and have never had performance problems, even when running the controllers up against the wall by mounting with the noac option (60k NFS IOPS!). We were using 500GB 4500 RPM ATA disks at that point - doesn't get much slower than that. Our current environment actually houses POP/IMAP/SMTP/web for 60k accounts, and an ESX cluster (12k NFS IOPS) without breaking a sweat. We'll soon be adding 128 1TB disks to the same controllers for Exchange, and should still have capacity to spare.

Not particularly helpful to your situation I know, but next time you are looking at storage you might reevaluate your current strategy.

-Brad
[Dovecot] ioloop.c panic
Timo,

Just this morning I upgraded from 2.0.6 to 2.0.7 (hg changeset 66a523135836). A few hours later we had some power problems that caused the networking to drop out at one site. The directors on the surviving site all had a few imap-login processes crash with the following error:

Nov 23 09:15:30 cc-director1 dovecot: imap-login: Panic: file ioloop.c: line 35 (io_add): assertion failed: (fd >= 0)
Nov 23 09:15:31 cc-director1 dovecot: imap-login: Panic: file ioloop.c: line 35 (io_add): assertion failed: (fd >= 0)
Nov 23 09:55:48 cc-director1 dovecot: imap-login: Panic: file ioloop.c: line 35 (io_add): assertion failed: (fd >= 0)
Nov 23 09:55:51 cc-director2 dovecot: imap-login: Panic: file ioloop.c: line 35 (io_add): assertion failed: (fd >= 0)

I don't have core dumps enabled, so I can't provide any more information than that. I could enable cores, but I honestly would be hard-pressed to replicate the network cut-out anyway, so I'm not optimistic that any cores for this message would ever be forthcoming.

Thanks,

-Brad

---
Brandon 'Brad' Davidson
Virtualization Systems Administrator
University of Oregon Information Services
(541) 346-8098
brand...@uoregon.edu
Re: [Dovecot] dovecot startup error message
Spyros,

> -Original Message-
>
> ---
> Restarting DovecotFatal: ssl_listen: Can't resolve address required: Name
> or service not known
> ---
>
> ---
> ssl_listen: required
> ---

ssl_listen should be a port or address:port to listen on with SSL enabled. 'required' is not a valid hostname on your network. You should probably read this: http://wiki.dovecot.org/MainConfig?highlight=ssl_listen#line-20

-Brad
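For reference, a hedged sketch of what a valid v1.x setting could look like (the addresses and port here are illustrative, not taken from the poster's config):

```
# Listen for SSL/IMAPS on all interfaces, port 993:
ssl_listen = *:993
# Or bind a specific address:
# ssl_listen = 192.168.0.1:993
```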
[Dovecot] Namespace subscription issue
I'm a little confused about how public namespaces work with subscriptions. If I set subscriptions=no and subscribe to the folders, the subscription entries go into the user's private subscription file and that's fine. However, I am unable to pull their status with the LIST-EXTENDED extension.

Normal listing works OK:

A0004 LIST "" "*" RETURN (STATUS (MESSAGES UNSEEN))
* LIST (\NonExistent) "/" "Public"
* LIST () "/" "Public/Public One"
* STATUS "Public/Public One" (MESSAGES 0 UNSEEN 0)
* LIST () "/" "Public/Public One/Sub 1"
* STATUS "Public/Public One/Sub 1" (MESSAGES 0 UNSEEN 0)
* LIST () "/" "Public/Public One/Sub 2"
* STATUS "Public/Public One/Sub 2" (MESSAGES 0 UNSEEN 0)
* LIST () "/" "Public/Public Two"
* STATUS "Public/Public Two" (MESSAGES 0 UNSEEN 0)
* LIST () "/" "Public/Read-only"
* STATUS "Public/Read-only" (MESSAGES 1 UNSEEN 0)
A0004 OK List completed.

Listing SUBSCRIBED (or LSUB) fails:

A0003 LIST (SUBSCRIBED) "" "*" RETURN (STATUS (MESSAGES UNSEEN))
* LIST (\Subscribed) "/" "Public/Public One"
* NO Mailbox doesn't exist: Public.Public One
* LIST (\Subscribed) "/" "Public/Public One/Sub 1"
* NO Mailbox doesn't exist: Public.Public One.Sub 1
* LIST (\Subscribed) "/" "Public/Public One/Sub 2"
* NO Mailbox doesn't exist: Public.Public One.Sub 2
* LIST (\Subscribed) "/" "Public/Public Two"
* NO Mailbox doesn't exist: Public.Public Two
* LIST (\Subscribed) "/" "Public/Read-only"
* NO Mailbox doesn't exist: Public.Read-only
A0003 OK List completed.

If I set subscriptions=yes it works OK, but then the subscriptions are shared unless I override SUBSCRIPTIONS in the mail location. Is this working as designed? Here's what I ended up doing - is there a better way?
namespace {
  inbox = yes
  location =
  prefix =
  separator = /
  type = private
}

namespace {
  hidden = yes
  list = no
  location = maildir:%h/Maildir
  prefix = ~/mail/
  separator = /
  type = private
}

namespace {
  type = public
  separator = /
  prefix = Public/
  location = maildir:/var/mail/public:INDEX=~/Maildir/public:SUBSCRIPTIONS=~/Maildir/public/subscriptions
  subscriptions = yes
}

---
Brandon 'Brad' Davidson
Virtualization Systems Administrator
University of Oregon Information Services
(541) 346-8098
brand...@uoregon.edu
Re: [Dovecot] dovecot genesis v2.0.X
We're using 2.0.5 Director in front of a 1.2.15 POP/IMAP cluster for 60k accounts. I figure we might look at upgrading the backend to 2.0.x sometime in December after some additional shake-down and testing. Timo - I noticed in the TODO you've got: doveadm director assign That would sure be nice to have for testing - add a test host with weight 0 and assign guinea pig users to it on the fly! -Brad > -Original Message- > From: dovecot-bounces+brandond=uoregon@dovecot.org [mailto:dovecot- > > So therefore I await ver 2.0.6 with a mix of conflicting anticipation and > trepidation. :-)
Re: [Dovecot] Significant performance problems
Chris, > -Original Message- > Subject: Re: [Dovecot] Significant performance problems > > Try bumping up the RAM on both servers to 8+GB, and make sure that you > don't have any mount options that would prevent the client from caching > data - noac for example is a killer. You could also try mounting with > noac, and disabling or turning down speculative readahead on the NFS > server. Sorry - don't try noac, try noatime! Big difference! As an additional data point, I will say that we see Dovecot processes for 900 concurrent users consume about 3GB of memory. If your system is anything like ours, you've probably got less than 1GB of memory left for the kernel to use as filesystem cache. Throw as much memory as you can spare at the Dovecot and NFS servers, and see what happens. -Brad
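To make the mount-option advice concrete, here is a hedged /etc/fstab sketch. The server name, export path, and mount point are illustrative, not from the poster's setup; the option names are standard Linux NFS-client options:

```
# noatime avoids an attribute update on every read, while attribute caching
# (actimeo) is left at its default so the client can cache normally.
# Do NOT add noac here - it disables client-side caching entirely.
nfs-server:/export/mail  /var/mail  nfs  rw,hard,intr,noatime,vers=3  0 0
```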
Re: [Dovecot] Significant performance problems
Chris, > -Original Message- > Subject: [Dovecot] Significant performance problems > > I'm sure my issues are a result of misconfiguration, but I'm hoping > someone can point me in the right direction. I'm getting pressure to > move us back to GroupWise, which I desperately want to avoid :-/ > > We're running dovecot 1.2.9 on Ubuntu 10.4 LTS+postfix. The server is a > VM with 1 vCPU and 4GB of RAM. We serve about 10,000 users with anywhere > from 500-1000 logged in at any one time. Messages are stored in Maildir > format on two NFS servers (one for staff, the other for students). Is the webmail interface and imap proxy also running on this server? What does memory utilization look like on the server? How much is being used by applications, and how much is free for filesystem cache? What mount options are you using on your NFS exports (on the NFS client side)? We run 60k accounts with about 10k concurrent sessions across 12 servers. Each server has 4 cores and 8GB of RAM, and mounts 16 NFS exports spread across two servers. The servers handle close to 1k concurrent sessions each without breaking a load of 1. The keys seem to be keeping NFS IO latency down, and allowing the server to cache as much as possible. If the Dovecot server is always having to go back to NFS for client data, and the NFS server doesn't have enough memory to cache filesystem metadata and/or spindles to access the data in a timely manner, you're going to hit a pain point pretty quick. Try bumping up the RAM on both servers to 8+GB, and make sure that you don't have any mount options that would prevent the client from caching data - noac for example is a killer. You could also try mounting with noac, and disabling or turning down speculative readahead on the NFS server. Have you followed all of your storage vendor's block alignment guidelines when setting up the LUNs and virtual disks? 
-Brad --- Brandon 'Brad' Davidson Virtualization Systems Administrator University of Oregon Information Services (541) 346-8098 brand...@uoregon.edu
Re: [Dovecot] Can we retrieve Dovecot Proxys 'hostName' from Director instead of LDAP?
Timo, > -Original Message- > From: Timo Sirainen [mailto:t...@iki.fi] > > The whole userdb. Director doesn't do userdb lookups at all. (Also if > there is no userdb defined, Dovecot actually creates a default static > userdb with empty args.) Awesome, good to know. -Brad
Re: [Dovecot] Can we retrieve Dovecot Proxys 'hostName' from Director instead of LDAP?
> -Original Message- > From: Timo Sirainen [mailto:t...@iki.fi] > > On Wed, 2010-09-29 at 11:46 -0700, Brad Davidson wrote: > > > userdb { > > driver = static > > args = uid=dovenull gid=dovenull home=/var/run/dovecot/empty > > } > > This shouldn't be necessary. > Which bit? The args, or the whole userdb? What happens if I don't have a userdb at all? The mailservers use PAM, but I wasn't sure what to use on the Director proxies. -Brad
Re: [Dovecot] Can we retrieve Dovecot Proxys 'hostName' from Director instead of LDAP?
Edward, > -Original Message- > > Adding this to my 10-director.conf fixed it > > passdb { > driver = static > args = nopassword=y proxy=y > } > userdb { > driver = static > args = uid=dovenull gid=dovenull home=/var/run/dovecot/empty > } > > Do I still need "someAttribute=proxy" in pass_attrs? I believe that having it in the static passdb is sufficient. > > If I want to use proxy_maybe, is the LDAP value changed from "proxy" to > "proxy_maybe" or in pass_attrs "someAttribute=proxy_maybe"? The Director does not support proxy_maybe. When using it, all logins are proxied. Additionally, you can get rid of any other passdb/userdb sections you've got on the Directors; the LDAP directory should not be queried at all since the Director can just proxy everything through to the backends and let them figure out whether or not the user/pass are valid. > I'll take a look at poolmon for node failures. Let me know how it works for you, or if there are any enhancements you'd find useful. -Brad
Re: [Dovecot] Can we retrieve Dovecot Proxys 'hostName' from Director instead of LDAP?
Edward,

> -Original Message-
> So far all examples I've seen on the dovecot site require the proxy to
> know the exact mail server to pass the user to by way of an LDAP lookup.
>
> Does anyone know of a way to have Dovecot Proxy pick a server from
> Directors status list instead of looking it up from LDAP?

Automatically setting the proxy destination is actually the core function of the Director. It maintains an internal list of available backend servers, and uses a hash algorithm to balance logins across them. All you need to do to enable this is:

director_servers =
director_mail_servers =

service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
}

passdb {
  driver = static
  args = nopassword=y proxy=y
}

userdb {
  driver = static
  args = uid=dovenull gid=dovenull home=/var/run/dovecot/empty
}

service imap-login {
  executable = imap-login director
}

service pop3-login {
  executable = pop3-login director
}

This tells the login processes to talk to the Director, and the static passdb/userdb tells the Director to proxy all connections and let the backend node handle authentication. Note that this won't work if specific users need to be on specific servers - the Director makes sure that all of a user's sessions end up on the same host, but it does not care which host it is.

> Also, how does Director discover that an IMAP server is up or down so that
> it can adjust in the case of a server failure? Is this something that
> Director does automatically or do we need to manually change the mail
> servers vhost count in case of an IMAP node failure?

It does not handle failure on its own. Several of us are using this to detect and react to node failures: http://github.com/brandond/poolmon

-Brad
[Dovecot] Command to get proxy connection count
Timo, I'm trying to get a count of active proxy sessions on a given Director. I can of course enable verbose_proctitle and parse the 'N connections' string out of ps output or /proc/pid/cmdline. Is there a better way to do that, perhaps with doveadm? Goes back to needing a command for proxy and director ring status, I guess. -Brad
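Until such a command exists, the ps-parsing approach mentioned above can be sketched like this. The exact process-title format is an assumption about what verbose_proctitle shows; the sample text stands in for real `ps` output so the sketch is self-contained:

```shell
# Sum the 'N connections' counts from login process titles.
# The two sample lines below stand in for `ps ax | grep imap-login` output.
ps_output='12345 ?  S  0:01 dovecot/imap-login [5 connections]
12346 ?  S  0:02 dovecot/imap-login [3 connections]'
total=$(printf '%s\n' "$ps_output" \
  | sed -n 's/.*\[\([0-9][0-9]*\) connections.*/\1/p' \
  | awk '{s += $1} END {print s}')
echo "$total"   # prints 8
```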
Re: [Dovecot] nfs director
> -Original Message- > From: Edward avanti > > have you been told where you might go lately and do with some part your > anatomy? > this Timo list, not you list, best remember this since you nobody this list Seriously? Grow up and/or take it off-list. -Brad
Re: [Dovecot] Broken SELECT ""/EXAMINE ""
Charles, > -Original Message- > > On 2010-09-01 3:50 AM, Brandon Davidson wrote: > > Imapproxy is naive and only reads capabilities from the initial > > banner - it doesn't refresh them after login. If you make sure > > they're in the initial capability list it will behave properly. > > Hopefully you or someone opened a bug with them to fix it? ;) At the time I noticed it, imapproxy was essentially unmaintained. It appears that the squirrelmail guys are taking over the project; I should see what they've been doing with it. -Brad
Re: [Dovecot] Static passdb support?
Awesome! I was just looking at wiki2 and didn't see it there. Any special caveats? > -Original Message- > From: Timo Sirainen [mailto:t...@iki.fi] > > > Do you have any plans to add a static passdb? > > v2.0 actually has it: > > args = nopassword=y proxy=y >
[Dovecot] Static passdb support?
Timo,

Do you have any plans to add a static passdb? I'm essentially emulating one with sqlite on my director - have it connect to /dev/null and return three static fields for all queries. Works fine, but it would seem a little cleaner to me if I could just do:

passdb {
  driver = static
  args = password='' nopasswd='Y' proxy='Y'
}

-Brad
Re: [Dovecot] Director mailserver health monitoring script
Timo, > -Original Message- > From: Timo Sirainen [mailto:t...@iki.fi] > > Why HOST-SET 0 + HOST-FLUSH, rather than HOST-REMOVE? The script gets the mailserver list from the director at the beginning of each poll cycle. If I remove a downed host, I'll never check it again to know when it recovers. Instead, I just disable it (vhost=0) and disassociate any active mappings, and then bump the vhost count back up when it's OK again. -Brad
Re: [Dovecot] Disable APOP challenge in POP3 login greeting
Timo, > -Original Message- > From: Timo Sirainen [mailto:t...@iki.fi] > > Yeah, I removed the check because it wasn't anymore needed. I didn't > realize it was also there to check if APOP was disabled. Added back in > another way: http://hg.dovecot.org/dovecot-2.0/rev/eed1426f55a9 Awesome, thanks! Looks good. -Brad
[Dovecot] Director mailserver health monitoring script
Timo et al;

The last bit of functionality that the Dovecot director is missing compared to our existing load balancers is mailserver health monitoring. As I understand it, if a mailserver goes down, Dovecot does not take any action to route connections around the offline node, and will keep trying to proxy clients to it.

Since we're hoping to cut over to Directors soon, but don't want to lose any functionality, I've hacked up a script that:

* Polls the local director for a list of mailservers
* Performs health checks against a list of ports on each mailserver
* Disables or enables mailservers (by altering the vhost count) as necessary

I've published the script on github in hopes that it might be useful to others: http://github.com/brandond/poolmon/

Maybe someday Dovecot will do something like this internally and I won't need the script any more, but for now I'm pretty happy with it.

-Brad
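The disable/enable step above can be sketched as follows. This is an illustrative translation of the decision logic, not the actual poolmon code: check_port is stubbed so the sketch is self-contained (a real monitor would probe each host's ports, e.g. with nc), the restored vhost count of 100 is arbitrary, and the echoed HOST-SET/HOST-FLUSH lines stand in for the director protocol commands a real script would send.

```shell
# Stub health check: pretend only mail2 is down.
check_port() {
  [ "$1" != "mail2" ]
}

# For a healthy host, restore its vhost count; for a down host, set the
# count to 0 (no new logins) and flush its existing user->host mappings.
decide() {
  if check_port "$1"; then
    echo "HOST-SET $1 100"
  else
    echo "HOST-SET $1 0"
    echo "HOST-FLUSH $1"
  fi
}

decide mail1   # prints: HOST-SET mail1 100
decide mail2   # prints: HOST-SET mail2 0, then HOST-FLUSH mail2
```

Disabling rather than removing a host (vhost=0 instead of HOST-REMOVE) matters because the monitor re-reads the host list each cycle; a removed host would never be re-checked for recovery.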
Re: [Dovecot] OT [Fwd: Fwd: Re: EVERYONE USING DOVECOT PLEASE SIGN: Thanks, Administrators of Dovecot!]
Seriously guys, can we at least keep the flame wars off-list? It's getting rather annoying. -Brad
[Dovecot] Disable APOP challenge in POP3 login greeting
Timo, It looks like Dovecot 2.0 appends an APOP challenge to the POP3 greeting even if APOP is not an enabled auth mechanism. Is there any way to disable this? We don't support APOP, and the challenge includes the private hostname of the server, which we'd rather not have in the banner. It looks like get_apop_challenge in 1.2 returns NULL if APOP isn't supported, which causes auth_client_ready to omit the banner... but I see no such check (in fact, no way for get_apop_challenge to return NULL) in 2.0, even though pop3_client_send_greeting tests for it. Thanks, -Brad
Re: [Dovecot] Login process connection routing
Timo, > -Original Message- > From: Timo Sirainen [mailto:t...@iki.fi] > > Dovecot lets kernel assign it. Whichever process grabs it first, handles > it. Makes sense. > You probably have too many login processes. process_min_avail should be > set to about the same as the number of CPU cores. Ahh, OK - that's good guidance, I didn't remember hearing that. I had just left the old setting in place from when we were forking off a process per session, where it made sense as more of a prefork. > Since there are two > processes handling most of the connections, do you also happen to have 2 > cores? :) Two cores with HT (looks like four), but yes ;) Thanks! -Brad
Re: [Dovecot] Doveadm director flush/remove
Timo,

> -Original Message-
> From: Timo Sirainen [mailto:t...@iki.fi]
>
> Did several fixes related to this in different parts of code. Now it
> should work? :)

No more crashes! But it still does fail eventually:

[r...@cc-popmap7 ~]# doveadm director map
doveadm(root): Error: User listing returned failure
doveadm(root): Error: user listing failed

Jul 20 11:15:04 cc-popmap7 dovecot: auth: Error: getpwent() failed: No such file or directory

This might just be an artifact of our environment, I'm not sure. Dumping users to a file and then feeding that back in works great.

-Brad
Re: [Dovecot] Doveadm director flush/remove
Timo,

> -Original Message-
> From: Timo Sirainen [mailto:t...@iki.fi]
>
> See what it says in logs.

It times out after a minute:

[r...@cc-popmap7 ~]# time doveadm director map
doveadm(root): Error: User listing returned failure
doveadm(root): Error: user listing failed

real    1m0.028s
user    0m0.088s
sys     0m0.072s

Jul 15 13:46:24 cc-popmap7 dovecot: auth: Error: auth worker: Aborted request: Lookup timed out
Jul 15 13:53:25 cc-popmap7 dovecot: auth: Error: getpwent() failed: No such file or directory

> Are you using userdb passwd or userdb ldap? With userdb ldap you need to
> configure iterate_attrs and iterate_filter in your LDAP config. With
> passwd I think it should work directly..

userdb passwd. Our LDAP directory might not be optimally configured. The group that administers it only really cares about binds; iteration can be rather slow:

[r...@cc-popmap7 ~]# time getent passwd | wc -l
51552

real    8m0.120s
user    0m2.507s
sys     0m1.093s

That comes out to just over 100 entries a second.

-Brad
Re: [Dovecot] Doveadm director flush/remove
Timo, > -Original Message- > From: Timo Sirainen [mailto:t...@iki.fi] > > > I suppose it could get a list of all users and then list all users whose > hash matches what director has.. Hmm. I guess that would be usable too, > yes. :) > > See if this works: http://hg.dovecot.org/dovecot-2.0/rev/4138737f41e6 I get: [r...@cc-popmap7 ~]# doveadm director map doveadm(root): Error: User listing returned failure doveadm(root): Error: user listing failed Our environment might be a little weird. We're using LDAP accounts via pam_ldap on the backend servers, so they are essentially local accounts (not virtual). I'm using passthrough auth with NOPASSWORD in the director proxy query. The accounts are also available on the directors, but since there's about 45k of them it would take quite a while to iterate and test hashes for all of them, if that's what it's trying to do. Sounds like this might just not work in our environment due to the size of our account base. I do appreciate very much that you added the feature though! -Brad
Re: [Dovecot] Doveadm director flush/remove
Timo, > -Original Message- > From: Timo Sirainen [mailto:t...@iki.fi] > > Yes, that's what it was intended to do. OK. I guess I had figured that removing it from the director would also kill any active proxy sessions, but that's obviously not the case.. it just removes the host from the list and any mappings from the hash. > Hmh.. I guess that would be nice, but also a bit annoying to do. It > would require each login process to have a connection to director > process, and currently there's no such connection (except for the notify > fifo, but that's wrong way). Maybe something as simple as killing any login proxies that are talking to the selected backend, or are proxying for users that are mapped to the selected backends? Or maybe the Directors don't know enough to do that? I'm thinking like 'doveadm kick' for proxy connections, since who/kick doesn't work on the Director, just backends. While I'm making a wishlist... 'doveadm director status ' to show list of users mapped to a host? Or maybe just 'doveadm director status -v' to show list of users instead of just user count. > > like I'd done 'doveadm direct add HOSTNAME 0 && doveadm director > > flush HOSTNAME' before removing it? > > But that does almost the same thing as remove. You're right. I see that FLUSH just does the 'remove any mappings' bit that REMOVE does, and ADD with a 0 count is effectively the same as removing it from the list. For some reason I was thinking of this as 'flush out (kill) any active proxy sessions'. -Brad
Re: [Dovecot] Director proxy timeout
Timo,

> -Original Message-
> From: Timo Sirainen [mailto:t...@iki.fi]
>
> >> I can easily change this write to be nonblocking and just retry later
> >> instead of hanging, but there is still a bug if director never reads it..
>
> This should fix it: http://hg.dovecot.org/dovecot-2.0/rev/510b627687f8

So far so good! I upgraded the directors and switched the webmail system back over a few hours ago, and it has yet to get stuck. So glad it was something fairly easy to fix; I was beginning to despair that it was an imapproxy problem that I'd have to track down and fix myself.

Makes sense that it was the notify socket. I had noticed that the 'doveadm director status' user numbers would drop off significantly after it had been running for a while, but I didn't know what to correlate that to.

-Brad
Re: [Dovecot] help on migrating some old Maildirs
> -Original Message- > > > 2) is there any way of having dovecot to calculating the S= and W= > > parameters and renaming those files and, thus, avoiding some negative > > impact caused by the lack of them ? > > Anyway, the filenames themselves can't be renamed, because > the ,S=xx,W=yy is part of the "maildir base filename", and changing it > assigns a new UID for the message. If he's restoring from an old machine (they're not currently indexed by Dovecot on the new server), they're going to get a new UID when Dovecot finds them anyway, right? If he copied them from the old server into folder/new on the new one, would Dovecot add the S and W flags when it moves them to folder/cur? -Brad
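For readers unfamiliar with the naming convention under discussion: a Dovecot-style maildir filename embeds the sizes after the unique base name, roughly as in the made-up example below, where S= is the file size in bytes and W= is the RFC822.SIZE (virtual, CRLF-terminated) size:

```shell
# Made-up example filename: the part before the first comma is the unique
# maildir base name, and the :2,S suffix holds the standard maildir flags.
fn='1287758246.M20993P2529.mailhost,S=1234,W=1258:2,S'
# Extract the S= (file size) field:
size=$(printf '%s' "$fn" | sed -n 's/.*,S=\([0-9][0-9]*\),.*/\1/p')
echo "$size"   # prints 1234
```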
Re: [Dovecot] Director proxy timeout
Timo,

> -Original Message-
> From: Timo Sirainen [mailto:t...@iki.fi]
>
> >> Is the director proxy inactivity timeout configurable?
>
> Proxy has no inactivity timeout.
> Maybe it's a firewall or something disconnecting them?

That's odd. I'm trying to figure out what's going on. I didn't run into any problems with it on my test box, but within a few hours of enabling it on the production system I had Apache processes backing up. It's rather hard to troubleshoot now that there are so many moving parts.

It does look like all of the "Maximum execution time of 120 seconds exceeded" errors logged by Apache are within Roundcube's GetCapability readLine loop. There have been problems in the past with Roundcube's IMAP code going off the deep end if it loses its IMAP connection. I'll see if I can't do something about that, but it would also be nice if I could figure out why I'm getting disconnected between LOGIN and CAPABILITY in the first place.

-Brad
[Dovecot] Director proxy timeout
Timo, Is the director proxy inactivity timeout configurable? I just recently attempted to switch our production webmail's imapproxy system to use a pool of two directors and ran into problems with it apparently disconnecting them unexpectedly. I don't believe that it IDLEs, I think it just UNSELECTs to reset the state and then keeps the connection open until it's used again, or until a configurable delay has elapsed, after which it logs out on its own. I've got the imapproxy inactivity delay set to 300 seconds, and it seems like the director is disconnecting them before that time is up. I haven't had the same problem when imapproxy connects directly to the backend servers, which are running 1.2.12. On the webmail system, imapproxy is logging a LOT of: Jul 11 21:10:21 cc-mailapps1 in.imapproxyd[28477]: IMAP_Line_Read(): connection closed prematurely. The director shows: Jul 11 21:10:14 cc-popmap7p dovecot: imap-login: proxy(jacintha): disconnecting 172.25.142.164 The backend server shows: Jul 11 21:10:14 cc-popmap2p dovecot: imap: user=, rip=10.142.0.162, pid=8629: Connection closed bytes=855/46109 Roundcube seems to handle the disconnects pretty badly, leaving a bunch of Apache processes chewing up CPU time. -Brad
[Dovecot] Director error during sync
Just an FYI - if I have a two-node director ring, I get this when I start up the second node: Jul 7 16:59:38 oh-popmap7 dovecot: director: Error: Received SYNC from 10.142.0.162:1234/left (seq=6) while already synced Totally expected since it's already received the same SYNC from the right side, but since I thought I heard you say that "any errors that Dovecot logs are bugs" I thought I'd report it ;) For that matter, it also logs errors when I shut down / restart the second node: Jul 7 16:59:37 oh-popmap7 dovecot: director: Error: Director 10.142.0.162:1234/left disconnected Jul 7 16:59:37 oh-popmap7 dovecot: director: Error: Director 10.142.0.162:1234/right disconnected Jul 7 16:59:37 oh-popmap7 dovecot: director: Error: director(10.142.0.162:1234/out): connect() failed: Connection refused -Brad
[Dovecot] Erroneous comment in sample Director configuration
The sample Director configuration (10-director.conf) says that director_servers and director_mail_servers can be lists of either IPs or hostnames:

# List of IPs or hostnames to all director servers, including ourself.
# Ports can be specified as ip:port. The default port is the same as
# what director service's inet_listener is using.
#director_servers =

# List of IPs or hostnames to all backend mail servers. Ranges are allowed
# too, like 10.0.0.10-10.0.0.30.
#director_mail_servers =

However, if I use hostnames, it bails out with an "Invalid IP address" error. It also seems to be confusing the dashes in my hostnames for address range specifications. Obviously hostnames are not currently supported, but I thought I'd mention it so the documentation could be updated.

-Brad
Re: [Dovecot] mdbox: Cannot create subfolder called "dbox-Mails" (2.0beta5)
Bill, > -Original Message- > > Taking into account the additional requirement to make things easy for > the sysadmin, one idea would be to make the special value be something > like "DbOx-mAiLs". IMO, I'd rather have to explain to users why they can't create a particular quite unlikely folder name, than deal with having strange directories (either silly caps, or some totally random string of characters) all over the place. Either way, it sounds like Timo's not convinced that it's worth changing. -Brad
Re: [Dovecot] dovecot 1.2.11/ thunderbird 3.1 - moving folders
> >> At this point Thunderbird shows an error message when I start to delete
> >> folder1. It says:
> >> [CANNOT] Mailbox isn't selectable: folder1.
> >> AND
> >> [NONEXISTENT] Directory folder1 isn't empty, can't delete it.
> >
> > Yeah, it's a bug. Fixed in v2.0 now .. but since v1.2's code is entirely
> > different here, I'm not sure if I should bother touching it anymore..
>
> Is there any workaround for me?
> I think a lot of people would be happy if this bug could also be fixed in
> the 1.2 branch.
> v2 is still beta, and as an ISP you can't switch to new software within a
> few days, and our customers make trouble.

+1, I'd appreciate a patch for 1.2 if it's not a total pain to fix. I don't see us being able to go to 2.0 until after it's been out of beta for a few months. I hate to see the 'current' branch being deprecated before we have a workable 'stable' alternative to upgrade to. I can see saying no to fixes for 1.0 and 1.1, but there are a fair number of folks who don't feel comfortable running beta releases in production.

-Brad
Re: [Dovecot] 'doveadm who' enhancement request
> -Original Message-
> From: Timo Sirainen [mailto:t...@iki.fi]
> ..
> > password_query = SELECT null AS password, 'Y' AS nopassword, 'Y' AS
> > proxy WHERE '%{lip}' NOT LIKE '10.142.0.%%' AND '%{lip}' != '%{rip}'
>
> This query no longer works, because both lip and rip are replaced with the
> original ones from proxy..

Ah, OK. The looping was unexpected though, as in the past it has complained "Proxying loops to itself"; I guess that check fails as well due to the aforementioned lip/rip replacement.

I'll stick with untrusted for now then, and just live with some extra SSL overhead and doveadm not showing the proxied endpoint. Maybe someday, if you feel like adding proxy_maybe to the director, it'll work right.

I know I'm trying to shoehorn the director into an infrastructure it's not really meant for. A better choice would probably be to bring a new dedicated director online in each location, and put those behind the load balancer. I wonder if they can stand up to 10k+ concurrent proxied connections, though?

-Brad
Re: [Dovecot] 'doveadm who' enhancement request
Timo,

> > Is there any chance 'doveadm who'
> > could use this to display the original connection source?
>
> If login_trusted_networks contains proxies, I think it should already do
> that?..

Interesting. I'd tried putting the private network in login_trusted_networks, but it got stuck in a loop until the director process ran out of file handles, so I took it back out. This is probably a little weird in that it's proxying to itself, and also trusting the looped connection. I guess it's running the original endpoints through the authdb for validation, which then proxies, causes another authdb lookup, etc.?

/etc/dovecot/dovecot.conf:

director_servers = 10.142.0.162
director_mail_servers = 10.142.0.162
login_trusted_networks = 10.142.0.0/24
passdb {
  driver = sql
  args = /etc/dovecot/proxy-sqlite.conf
}
passdb {
  driver = pam
}
userdb {
  driver = passwd
}

/etc/dovecot/proxy-sqlite.conf:

driver = sqlite
connect = /dev/null
password_query = SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy WHERE '%{lip}' NOT LIKE '10.142.0.%%' AND '%{lip}' != '%{rip}'

The verbose auth log during the loop looked like this (with lots more of the following omitted):

Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: new auth connection: pid=19120
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: client in: AUTH 1 PLAIN service=imapsecured lip=128.223.142.138 rip=128.223.157.45 lport=993 rport=60872 resp=
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: client out: OK 1 user=brandond proxy pass=
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: sql(brandond,128.223.157.45): query: SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy WHERE '128.223.142.138' NOT LIKE '10.142.0.%' AND '128.223.142.138' != '128.223.157.45'
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: new auth connection: pid=19121
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: client in: AUTH 1 PLAIN service=imapsecured lip=128.223.142.138 rip=128.223.157.45 lport=993 rport=60872 resp=
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: client out: OK 1 user=brandond proxy pass=
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: sql(brandond,128.223.157.45): query: SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy WHERE '128.223.142.138' NOT LIKE '10.142.0.%' AND '128.223.142.138' != '128.223.157.45'
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: client in: AUTH 1 PLAIN service=imapsecured lip=128.223.142.138 rip=128.223.157.45 lport=993 rport=60872 resp=
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: client out: OK 1 user=brandond proxy pass=
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: sql(brandond,128.223.157.45): query: SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy WHERE '128.223.142.138' NOT LIKE '10.142.0.%' AND '128.223.142.138' != '128.223.157.45'
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: new auth connection: pid=19123
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: new auth connection: pid=19124
Jun 2 13:48:58 cc-popmap7 dovecot: director: Error: socket(/var/run/dovecot//auth-login) failed: Too many open files
Jun 2 13:48:58 cc-popmap7 dovecot: director: Error: connect(/var/run/dovecot//auth-login) failed: Too many open files
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: client in: AUTH 1 PLAIN service=imapsecured lip=128.223.142.138 rip=128.223.157.45 lport=993 rport=60872 resp=
Jun 2 13:48:58 cc-popmap7 dovecot: director: Error: socket(/var/run/dovecot//auth-login) failed: Too many open files
Jun 2 13:48:58 cc-popmap7 dovecot: director: Error: connect(/var/run/dovecot//auth-login) failed: Too many open files
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: client out: OK 1 user=brandond proxy pass=
Jun 2 13:48:58 cc-popmap7 dovecot: auth: Debug: sql(brandond,128.223.157.45): query: SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy WHERE '128.223.142.138' NOT LIKE '10.142.0.%' AND '128.223.142.138' != '128.223.157.45'
Jun 2 13:48:58 cc-popmap7 dovecot: director: Error: socket(/var/run/dovecot//auth-login) failed: Too many open files
Jun 2 13:48:58 cc-popmap7 dovecot: director: Error: connect(/var/run/dovecot//auth-login) failed: Too many open files
Jun 2 13:48:58 cc-popmap7 dovecot: imap-login: Warning: Error sending handshake to auth server: Broken pipe

-Brad
[Dovecot] 'doveadm who' enhancement request
When Dovecot is in proxy mode, the client sends along the original connection endpoints in an ID command. Is there any chance 'doveadm who' could use this to display the original connection source? As it currently stands, all I see is a bunch of connections from the proxy.

Thanks!

---
Brandon 'Brad' Davidson
Virtualization Systems Administrator
University of Oregon Information Services
(541) 346-8098
brand...@uoregon.edu
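[Editor's note: for context, RFC 2971 ID commands carry a parenthesized list of quoted key/value strings, and Dovecot's proxy reuses that mechanism to forward the original endpoints. A rough parsing sketch - the "x-originating-ip"/"x-originating-port" field names are my assumption of what the proxy sends and may not be exact:

```python
import re

def parse_id_params(line):
    """Parse the parenthesized key/value list from an IMAP ID command,
    e.g.  ID ("x-originating-ip" "1.2.3.4" "x-originating-port" "60872")
    Returns a dict mapping each key to its value."""
    body = re.search(r'\((.*)\)', line).group(1)
    parts = re.findall(r'"([^"]*)"', body)          # quoted strings, in order
    return dict(zip(parts[0::2], parts[1::2]))      # pair them up: key, value

params = parse_id_params(
    'ID ("x-originating-ip" "128.223.157.45" "x-originating-port" "60872")')
print(params["x-originating-ip"])  # 128.223.157.45
```

Something along these lines is presumably what 'doveadm who' would need to consume on the backend side to show the real client address instead of the proxy's.]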
Re: [Dovecot] A new director service in v2.0 for NFS installations
Timo,

> -Original Message-
>
> > That's too bad! Any hope of getting support for this
>
> I wasn't really planning on implementing it soon.
>
> > and director+proxy_maybe anytime soon?
>
> I tried looking into it today, but it's an annoyingly difficult change,
> so probably won't happen soon either.

That's too bad! I guess I'll see what I can do within the current constraints. You'd gotten my hopes up though ;)

I was just playing around a bit, and came up with something like:

passdb {
  driver = sql
  args = /etc/dovecot/proxy-sqlite.conf
}
passdb {
  driver = pam
}

driver = sqlite
connect = /dev/null
password_query = SELECT null AS password, 'Y' AS nopassword, 'Y' AS proxy WHERE '%{lip}' LIKE '10.142.0.%%' AND '%{lip}' != '%{rip}'

This is a hack, but it should be roughly equivalent to proxy_maybe and a local block around the sql passdb, right?

-Brad
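[Editor's note: the decision that WHERE clause encodes is easy to sanity-check outside SQL. A minimal simulation of the logic (my own sketch, not Dovecot code): proxy only when the connection arrived on the 10.142.0.x network and isn't a loop back to itself.

```python
def should_proxy(lip, rip):
    """Mirror the password_query's WHERE clause: return True (proxy the
    connection) only when the local IP is on the 10.142.0.0/24 network
    and differs from the remote IP (so we never proxy to ourselves)."""
    return lip.startswith("10.142.0.") and lip != rip

print(should_proxy("10.142.0.162", "128.223.157.45"))    # True: came in on the director network -> proxy
print(should_proxy("128.223.142.138", "128.223.157.45")) # False: public interface, serve locally
print(should_proxy("10.142.0.162", "10.142.0.162"))      # False: would loop to itself
```

That matches the intent of proxy_maybe: when the SQL passdb returns no row, Dovecot falls through to the next passdb (pam here) and handles the login locally.]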
Re: [Dovecot] A new director service in v2.0 for NFS installations
Timo,

> -Original Message-
> From: dovecot-bounces+brandond=uoregon@dovecot.org [mailto:dovecot-
>
> The company here in Italy didn't really like such idea, so I thought about
> making it more transparent and simpler to manage. The result is a new
> "director" service, which does basically the same thing, except without an
> SQL database. The idea is that your load balancer can redirect connections
> to one or more Dovecot proxies, which internally then figure out where the
> user should go. So the proxies act kind of like a secondary load balancer
> layer.

This looks very cool! We run a basic two-site active-active configuration with 6 Dovecot hosts in each location, an Active/Standby load balancer cluster in front, and a cluster of geographically distributed NFS servers in the back. I'm sure I've described it before. We'd like to keep failover as simple as possible while also avoiding single points of failure. I have some questions about the suggested configuration, as well as the current implementation:

* Does this work for POP3 as well as IMAP?
* Is there any reason not to use all 12 of our servers as proxies as well as mailbox servers, and let the director communication route connections to the appropriate endpoint?
* Does putting a host into 'directed proxy' mode prevent it from servicing local mailbox requests?
* How is initial synchronization handled? If a new host is added, is it sent a full copy of the user->host mapping database?
* What would you think about using multicast for the notifications instead of a ring structure? If we did set up all 12 hosts in a ring, it's conceivable that a site failure plus the failure of a single host at the surviving site would segment the ring. Multicast would prevent this, as well as (conceivably) simplify dynamic resizing of the pool.

Thanks!

-Brad
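[Editor's note: the core of "figure out where the user should go" is just a stable user-to-backend mapping, so that every proxy sends all of a user's sessions to the same mailbox server (which is what keeps NFS caching safe). A toy sketch - illustrative only; the real director maintains an explicit shared mapping with timeouts rather than a bare hash, and the backend IPs here are hypothetical:

```python
import hashlib

BACKENDS = ["10.142.0.10", "10.142.0.11", "10.142.0.12"]  # hypothetical mail servers

def backend_for(user, backends=BACKENDS):
    """Stable mapping: any proxy hashing the same username against the
    same backend list picks the same mailbox server, so concurrent IMAP
    and POP3 sessions for one user all land on one host."""
    h = int(hashlib.md5(user.encode("utf-8")).hexdigest(), 16)
    return backends[h % len(backends)]

# The same user always lands on the same backend, no matter which proxy asks:
assert backend_for("brandond") == backend_for("brandond")
```

A bare hash like this also shows why the real director needs more machinery: when a backend is added or removed, the modulus changes and users get remapped, which is exactly what the shared mapping and its timeouts are there to smooth over.]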
Re: [Dovecot] Problem with ACL and rename folder
Timo,

> -Original Message-
> From: Timo Sirainen
>
> Fixed: http://hg.dovecot.org/dovecot-1.2/rev/6f25b20b8367
>
> (It was already fixed in v2.0.)

I know you were hoping to make 1.2.11 the last release in that branch, but it seems like we've seen a few patches since then. Are we due for a 1.2.12 sometime soon?

-Brad