Re: [Dovecot] Synchronization error in NFS
OK. One more question. Now that the director and backend server are running on the same machines, set up according to http://wiki2.dovecot.org/RunningDovecot#Running_Multiple_Invocations_of_Dovecot , how do I use doveadm to manage the different instances? I know there is dovecot -c, but how about doveadm?

Timo Sirainen wrote on 02/09/2012 08:55 PM:

> On 9.2.2012, at 10.36, Andy YB Hu wrote:
>> I just tried out the Director. One question is about the re-redirection. I know the director will redirect all simultaneous requests from the same user to a single server at the same time. The question is how to manage the time period after the last connection, to re-decide which machine to redirect to? director_user_expire? Looks like not.
>>
>> I did one test: set director_user_expire = 1 min, then kept sending requests to the director at a 2 min interval; the result is that it keeps redirecting to the same backend server.
>
> In normal operation the user is always redirected to the same server. http://blog.dovecot.org/2010/05/new-director-service-in-v20-for-nfs.html has some more details. If you have enough connections, it shouldn't matter that the connections aren't constantly going to random backends. In practice they get distributed well enough.
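For what it's worth, doveadm accepts a -c option just like the dovecot binary, so each instance can be addressed through its own config file. A hedged sketch (the config paths are illustrative; substitute whatever your two instances actually use):

```
# Query the director instance:
doveadm -c /etc/dovecot/dovecot-director.conf director status

# Query the backend instance:
doveadm -c /etc/dovecot/dovecot-backend.conf user '*'
```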
Re: [Dovecot] dsync deleting too many emails (sdbox)
On Thu, 9 Feb 2012, Timo Sirainen wrote:

> On 9.2.2012, at 21.47, Timo Sirainen wrote:
>> I've anyway done several fixes in v2.1. Can you try if these problems happen with it too? And in any case clean up the dbox from the *.broken files, so that "doveadm force-resync" won't give any errors.
>
> A bit more specifically: The last such dbox bug was fixed only today, so you'd need the v2.1 hg version or wait for v2.1.rc6, which should happen this week. And in general: It would be helpful to have a clean, fully working dbox, and then know the *first* error(s) that gets printed about dsync corrupting it. Otherwise it's difficult to guess which are old problems, which are new ones, and which problems happen only because of another problem.

Good to know. This weekend I can try to set up something of a 'lab' for testing dsync + (s)dbox, both to see if I can reproduce the errors with the old versions and to see if the new versions fix them. I'll keep in mind the consideration of knowing the first error that gets printed!

-- Asheesh.
Re: [Dovecot] dsync deleting too many emails (sdbox)
On 9.2.2012, at 21.47, Timo Sirainen wrote:

> I've anyway done several fixes in v2.1. Can you try if these problems happen with it too? And in any case clean up the dbox from the *.broken files, so that "doveadm force-resync" won't give any errors.

A bit more specifically: The last such dbox bug was fixed only today, so you'd need the v2.1 hg version or wait for v2.1.rc6, which should happen this week. And in general: It would be helpful to have a clean, fully working dbox, and then know the *first* error(s) that gets printed about dsync corrupting it. Otherwise it's difficult to guess which are old problems, which are new ones, and which problems happen only because of another problem.
Re: [Dovecot] dsync deleting too many emails (sdbox)
On Thu, 2012-02-02 at 14:59 -0500, Asheesh Laroia wrote:

> I'm guessing this is some bad interaction with sdbox and partial file downloads?
>
> I haven't read the code for this, but I would guess the dsync process isn't being atomic about file transfers, so it is leaving half-completed transfers in place, which results in corrupt files when they're next examined.

There were some problems related to this in dbox, although in your case it seems to be worse than it should be.. I've anyway done several fixes in v2.1. Can you try if these problems happen with it too? And in any case clean up the dbox from the *.broken files, so that "doveadm force-resync" won't give any errors.
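The atomicity concern raised above is usually addressed with the write-to-temp-then-rename pattern, since rename(2) is atomic within one filesystem. A minimal Python sketch of the general technique — illustrative only, not Dovecot's actual (C) implementation; names are made up:

```python
import os
import tempfile

def atomic_save(directory, filename, data):
    """Write data to directory/filename so readers never see a partial file.

    The body is first written to a uniquely named temp file in the SAME
    directory (rename is only atomic within one filesystem), flushed and
    fsync'd, and only then renamed into place.
    """
    fd, tmp_path = tempfile.mkstemp(prefix=".temp.", dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.rename(tmp_path, os.path.join(directory, filename))
    except BaseException:
        # On any failure, remove the half-written temp file so no
        # corrupt message is left behind for the next examination.
        os.unlink(tmp_path)
        raise
```

A crash between the write and the rename leaves only a `.temp.*` file behind, never a truncated message under its final name.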
Re: [Dovecot] Strange behavior from shared namespaces and INBOX, probably a bug
On Fri, 2011-09-23 at 14:13 +0200, Christoph Bussenius wrote:

> Some folders of user1, including the INBOX, have been shared using these IMAP commands:
> . login user1 XX
> . setacl INBOX user2 lrwstiekx
> . setacl box-a user2 lrwstiekx
>
> Now if we use telnet to log in as user2 and select "shared/user1", it will contain the same mails as "shared/user1/INBOX".
>
> The really strange thing is that "SELECT"-ing "shared/user1" succeeds only if it is the first command after logging in. If it is not the first command (if e.g. the "LIST" or "SELECT" command has already been used), then dovecot will report that the mailbox does not exist.

v2.1 always fails to select "shared/user1". I don't think I'll bother figuring out why v2.0 doesn't; it might not be an easy fix. Much of the code related to this was rewritten in v2.1.
Re: [Dovecot] Segfaul probably during dsync
On Tue, 2011-04-12 at 14:52 +0200, Matthias Rieber wrote:

> Hi,
>
> it's caused by a virtual folder:
>
>   INBOX.IBX.Folder1
>   INBOX.Ordner.Folder1
>   INBOX.Ordner.Folder1.*
>     OR (OR (OR HEADER FROM bar.com HEADER FROM bar.de) HEADER FROM foo.com) HEADER FROM barfoos.net NOT HEADER FROM root@ NOT HEADER FROM www-data@ SINCE 1-Jan-2010
>
> When I delete the dovecot.index.search* files it works for a while but fails again.

Are you still getting these crashes? I tried to reproduce but couldn't.
Re: [Dovecot] 2.1.rc1 (8a63f621bd2e): SiS permission issue + crash
On Sat, 2011-12-10 at 04:35 +0100, Pascal Volk wrote:

> dsync -u tes...@example.com mirror maildir:/tmp/Maildir
> rm -rf Maildir && cp -a Maildir_org Maildir && chown -R 70010:70002 Maildir
> dsync -vu tes...@example.com mirror maildir:/tmp/Maildir
> dsync(tes...@example.com): Error: stat(/srv/mail/.SiS/70002/a2/7b/.temp.blau.819.4f06409857c627e0) failed: Permission denied
> dsync(tes...@example.com): Error: safe_mkstemp(/srv/mail/.SiS/70002/a2/7b/.temp.blau.819.) failed: Permission denied
> dsync(tes...@example.com): Panic: file dsync-worker-local.c: line 1644 (local_worker_save_msg_continue): assertion failed: (ret == -1)

I couldn't reproduce this crash, but I guess this should fix it: http://hg.dovecot.org/dovecot-2.1/rev/e29bc3eb0ba6

Also fixed a related problem where if dbox failed to save a message it still added it to the index: http://hg.dovecot.org/dovecot-2.1/rev/98a59ac1f3d0
Re: [Dovecot] Quota Calculation seems to be wrong when using dsync
On Sat, 2010-12-25 at 10:08 +0100, Thomas Leuxner wrote:

> plugin {
>   quota = dict:user::file:%h/mdbox/dovecot-quota
>   quota_rule = *:storage=1GB
>   quota_rule2 = Trash:storage=+10%%
> }
>
> Kick off a manual backup:
>
> $ dsync -u u...@domain.tld backup mdbox://mdbox
>
> This results in doubling the quota for the backed up user.

This is problematic. With dict quota you'll have this problem, because both source and destination use the same file. So it would kind of make sense to disable quota for the destination dsync.. Except with Maildir++ the quota is stored in the Maildir root directory. There are no problems with dsyncing it, and you most likely wouldn't want quota disabled there. So I'm not really sure what I can do about this. There are some workarounds you could do, like:

  dsync -u u...@domain.tld backup -o mail=mdbox://mdbox -o plugin/quota=

(works only with the latest 2.0/2.1 hg, but with older versions you could do e.g. -o mail_plugins=)

Still, it would be nice if there was some generic solution to this. Perhaps the destination username should be something different, like "backup". In the dict-sql case it would then modify the "backup" user's quota. For dict-file the %h could maybe expand to the backup user's homedir.. The backup username probably should be a parameter to dsync, I guess.. But an extra parameter wouldn't fix this automatically..
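The two workarounds mentioned above could look like this as complete commands. This is a sketch only — the user and destination location are illustrative, and the -o plugin/quota= form needs the latest 2.0/2.1 hg as noted:

```
# Newer versions: override just the quota plugin setting for this run,
# so the destination side doesn't touch the shared dict quota file.
dsync -u user@example.com -o plugin/quota= backup mdbox:~/mdbox-backup

# Older versions: clear the plugin list entirely for the run instead.
dsync -u user@example.com -o mail_plugins= backup mdbox:~/mdbox-backup
```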
Re: [Dovecot] Crash on mail folder delete
On Wed, 2012-01-25 at 16:04 -0800, Daniel L. Miller wrote: > On 1/25/2012 3:43 PM, Daniel L. Miller wrote: > > On 1/25/2012 3:42 PM, Timo Sirainen wrote: > >> On 26.1.2012, at 1.37, Daniel L. Miller wrote: > >> > >>> Attempting to delete a folder from within the trash folder using > >>> Thunderbird. I see the following in the log: > >> Dovecot version? > >> > > 2.1.rc3. I'm compiling rc5 now... > > > Error still there on rc5. > > Jan 25 16:03:47 bubba dovecot: imap(dmil...@amfes.com): Panic: file > mailbox-list-fs.c: line 156 (fs_list_get_path): assertion failed: > (mailbox_list_is_valid_pattern(_list, name)) Fixed: http://hg.dovecot.org/dovecot-2.1/rev/95a9428fe68b
Re: [Dovecot] BUG(?): Incorrect responses for ACL prohibited actions
Hi, continuing this old thread:

On Tue, 2011-05-10 at 22:46 -0600, Michael M Slusarz wrote:

> But as far as the NO for a non-silent STORE, it seems that RFC 4314 [4] disagrees with you:
>
>    STORE operation SHOULD NOT fail if the user has rights to modify
>    at least one flag specified in the STORE, as the tagged NO
>    response to a STORE command is not handled very well by deployed
>    clients.
>
> To me, the negative inference from this statement would be: "STORE operation SHOULD fail if the user has no rights to modify at least one flag specified in the STORE."

That's not the negative of it. :)

> At a minimum, a NOPERM response should be thrown, or else there is no feedback at all why the flag was not set (without parsing ACLs).

Perhaps OK [NOPERM] or some other kind of informational message about it.. But there's no way to do it with Dovecot's current API. Also, RFC 3501 recommends implementing "session flags" for flags that cannot be permanently stored. So even if the user doesn't have access to set any flags, a "well behaving IMAP server" (so not Dovecot :( ) would set those flags for the duration of the current session. Anyway, you can look at the PERMANENTFLAGS reply to see if it's possible to set a flag; there's no need to parse ACLs.

>>> My reading of this is that NOPERM should be returned for ANY ACL prohibited action, not just for selecting or creating a mailbox. Dovecot 2.0.12 does not return NOPERM for DELETE/EXPUNGE actions (at a minimum) that are prohibited.
>>
>> I'm not really sure. Maybe for EXPUNGE a NO would be okay. For flag changes it's just annoying to see clients popup pointless error messages when trying to set a \Seen flag (or \Answered flag when replying).
Apparently I've tried this earlier, since there's a comment in the code:

    ret = acl_mailbox_right_lookup(_mail->box, ACL_STORAGE_RIGHT_EXPUNGE);
    if (ret <= 0) {
            /* if we don't have permission, silently return success so
               users won't see annoying error messages in case their
               clients try automatic expunging. */
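The advice above — check the PERMANENTFLAGS response code rather than ACLs — can be sketched as a small client-side parser. Illustrative Python, not Dovecot code; the example response line is hypothetical:

```python
import re

def permanent_flags(untagged_line):
    """Extract the flag list from an IMAP [PERMANENTFLAGS (...)] response code.

    Returns the flags the server lets this session store permanently.
    The special token \\* means arbitrary client keywords may be created.
    """
    m = re.search(r"\[PERMANENTFLAGS \(([^)]*)\)\]", untagged_line)
    return m.group(1).split() if m else []

def can_store(flag, untagged_line):
    """True if `flag` can be stored permanently per the server's reply."""
    flags = permanent_flags(untagged_line)
    # System flags start with a backslash; \* only covers keywords.
    return flag in flags or ("\\*" in flags and not flag.startswith("\\"))
```

A client holding the SELECT response can then decide locally whether a STORE of, say, \Deleted could possibly succeed, with no ACL parsing at all.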
Re: [Dovecot] [dovecot] Getting duplicates when using snarf plugin with mbox backend
On Mon, 2011-06-13 at 08:59 -0400, Jonathan SIegle wrote: > Running dovecot version 2.0.11. To reproduce, open two imap sessions and > issue a check command from each at the same time with new mail in the queue. > > 0 login testuser testpw > 1 select inbox > -- Deliver mail -- > 2 check Finally fixed: http://hg.dovecot.org/dovecot-2.0/rev/76220f2b5966
Re: [Dovecot] POP3 UIDLs with virtual INBOX and migration from maildir->mdbox
On Thu, 2012-02-09 at 15:35 +0100, Peter Mogensen wrote: > Hi, > > Considering the scenario, where you have some old account with a > different POP3 UIDL format and you migrate them to dovecot. > > So these old UIDLs would be saved to dovecot-uidlist. > > At some later time you want to introduce a virtual POP3 INBOX like > described on: > http://wiki.dovecot.org/Plugins/Virtual > > So you decide to make the new UIDL format "%f" - to make them unique > across folders. > > So far so good. Assuming the messages are in the same order, so far so good. > But then you decide to migrate to mdbox with all your old UIDLs. > The docs says that saving old UIDLs is only supported in Maildir and > that %f is only supported in Maildir. > > So is this at all possible? > > Would pop3_uidl_format = %g solve this (except for the old legacy UIDL's) ? %g and %f are equal with Maildir. And if you migrated with dsync from maildir to mdbox, then all GUIDs and POP3 UIDLs are preserved. But test it first! The main potential problem is that although UIDLs are preserved, their order isn't and POP3 clients don't like the order changing. With Maildir uidlist you can reorder POP3 mails to different than IMAP mails, but with mdbox you can't currently.
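Since %g and %f are equal with Maildir, and dsync preserves GUIDs across a maildir-to-mdbox migration, the format change can be made before migrating. A hedged dovecot.conf sketch:

```
# %g = the message's global UID (GUID). With Maildir it yields the same
# values as %f, and GUIDs survive a dsync migration to mdbox -- but test
# first: the UIDL *order* is not guaranteed to be preserved.
pop3_uidl_format = %g
```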
Re: [Dovecot] vsz_limit
Hello Timo,

There is no other problem as far as I know. That's why I think it has something to do with dovecot, specifically with the imap and imap-login processes, as I observe in the process status. Anyway, the system load is not high enough to cause these problems. However, the imap service doesn't work properly. So, in order to enable the login process in high-performance mode I add the parameter service_count = 0. Right? I'll let you know if this helps.

Kind regards.

Héctor Moreno Blanco
División de Seguridad e Infraestructuras / Security and Infrastructures Division
GMV
Isaac Newton, 11 P.T.M. Tres Cantos E-28760 Madrid
Tel. +34 91 807 21 00 Fax +34 91 807 21 99
www.gmv.com

-----Original Message-----
From: Timo Sirainen [mailto:t...@iki.fi]
Sent: Thursday, February 9, 2012 13:53
To: Héctor Moreno Blanco
Cc: dovecot@dovecot.org
Subject: Re: [Dovecot] vsz_limit

On 9.2.2012, at 10.41, Héctor Moreno Blanco wrote:

> I can see these errors, but I'm not sure if they have something to do with my problem:
>
> ...
> Feb 8 12:04:57 XX dovecot: imap-login: Error: read(imap) failed: Connection reset by peer
> Feb 8 12:04:57 XX dovecot: imap-login: Error: read(imap) failed: Remote closed connection (process_limit reached?)
> Feb 8 12:04:57 XX dovecot: imap-login: Error: fd_send(imap, 16) failed: Broken pipe

imap service isn't responding.

> Feb 8 12:08:09 XX dovecot: imap-login: Error: master(imap): Auth request timed out (received 0/12 bytes)

imap process isn't responding because auth process isn't responding.

> Do you see anything wrong?

Yes. Is the system load very high? That could explain this. Or do you see any other error messages? Those errors you pasted above show that something is wrong, but not the root cause of what's wrong.

> Anyway, I'm going to investigate what David Warden told me about the "High Security" mode, just in case it is related to my problem.

It could at least help reduce the load. Also it would be a good idea to upgrade to the latest v2.0.
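The service_count = 0 change mentioned above goes in the login service block of dovecot.conf, roughly like this:

```
service imap-login {
  # 0 = unlimited: each login process serves many connections
  # ("high-performance mode"). The default, 1, forks a new process
  # per connection ("high-security mode"), which costs more under load.
  service_count = 0
}
```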
__ This message including any attachments may contain confidential information, according to our Information Security Management System, and intended solely for a specific individual to whom they are addressed. Any unauthorised copy, disclosure or distribution of this message is strictly forbidden. If you have received this transmission in error, please notify the sender immediately and delete it. __ Este mensaje, y en su caso, cualquier fichero anexo al mismo, puede contener informacion clasificada por su emisor como confidencial en el marco de su Sistema de Gestion de Seguridad de la Informacion siendo para uso exclusivo del destinatario, quedando prohibida su divulgacion copia o distribucion a terceros sin la autorizacion expresa del remitente. Si Vd. ha recibido este mensaje erroneamente, se ruega lo notifique al remitente y proceda a su borrado. Gracias por su colaboracion. __
[Dovecot] POP3 UIDLs with virtual INBOX and migration from maildir->mdbox
Hi,

Consider the scenario where you have some old account with a different POP3 UIDL format and you migrate it to dovecot. These old UIDLs would be saved to dovecot-uidlist. At some later time you want to introduce a virtual POP3 INBOX like described on: http://wiki.dovecot.org/Plugins/Virtual

So you decide to make the new UIDL format "%f", to make the UIDLs unique across folders. So far so good. But then you decide to migrate to mdbox with all your old UIDLs. The docs say that saving old UIDLs is only supported in Maildir and that %f is only supported in Maildir.

So is this at all possible? Would pop3_uidl_format = %g solve this (except for the old legacy UIDLs)?

/Peter
Re: [Dovecot] fts (lucene): indexing of virtual mailboxes?
Hi, On Fri, 2011-09-23 at 16:49 +0200, Lutz Preßler wrote: > Hello, > > (recent 2.1alpha2 variant - my test setup known to Timo). > No time to diagnose in depth at the moment, but I just noticed > that SEARCHing in virtual mailboxes seems not to create lucene > index content of its own but use those of referenced mailboxes? > The problem is that no new indexing takes place. > Example: with > INBOX > INBOX.in% > all > in dovecot-virtual, for a given query I only get matches from > those mailboxes searched in previously. This was a long time ago, but I just tested and looks like it works nowadays.
Re: [Dovecot] Performance of Maildir vs sdbox/mdbox
On 9.2.2012, at 14.56, Jan-Frode Myklebust wrote:

>>> Should I try increasing LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS, or do you have any other ideas for what might be causing it?
>>
>> The backend server didn't reply within LMTP_PROXY_DEFAULT_TIMEOUT_MSECS (30 secs).
>
> It's actually 60 sec in v2.0
> http://hg.dovecot.org/dovecot-2.0/file/750db4b4c7d3/src/lmtp/lmtp-proxy.c#l13

LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS is not LMTP_PROXY_DEFAULT_TIMEOUT_MSECS.

>> It still shouldn't have crashed of course, and that crash is already fixed in v2.1 (in the LMTP simplification change).
>
> Do you think we should rather run v2.1-rc* on our dovecot directors (for IMAP, POP3 and LMTP), even if we keep the backend servers on v2.0?

Yes, I've done a lot of improvements to proxying and error handling/logging in v2.1. Also I'm planning on finishing my email backlog soon and making the last v2.1-rc before renaming it to v2.1.0.
Re: [Dovecot] Performance of Maildir vs sdbox/mdbox
On Thu, Feb 09, 2012 at 01:48:09AM +0200, Timo Sirainen wrote:

> On 7.2.2012, at 10.25, Jan-Frode Myklebust wrote:
>> Feb 6 16:13:10 loadbalancer2 dovecot: lmtp(6601): Panic: file lmtp-proxy.c: line 376 (lmtp_proxy_output_timeout): assertion failed: (proxy->data_input->eof)
>> ..
>> Should I try increasing LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS, or do you have any other ideas for what might be causing it?
>
> The backend server didn't reply within LMTP_PROXY_DEFAULT_TIMEOUT_MSECS (30 secs).

It's actually 60 sec in v2.0:
http://hg.dovecot.org/dovecot-2.0/file/750db4b4c7d3/src/lmtp/lmtp-proxy.c#l13

> It still shouldn't have crashed of course, and that crash is already fixed in v2.1 (in the LMTP simplification change).

Do you think we should rather run v2.1-rc* on our dovecot directors (for IMAP, POP3 and LMTP), even if we keep the backend servers on v2.0?

> Anyway, you can fix this without recompiling by returning e.g. "proxy_timeout=60" passdb extra field for 60 secs timeout.

Thanks, we'll consider that option if it crashes too often... Have only seen this problem once in the last week.

-jf
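The proxy_timeout=60 extra field Timo mentions could be returned like this from a passwd-file passdb. A hedged sketch only — the file path, user, password, and backend address are all made up:

```
# /etc/dovecot/passwd.proxy (passwd-file format:
# user:password:uid:gid:gecos:home:shell:extra_fields)
jfm@example.com:{PLAIN}secret::::::proxy=y host=10.1.2.3 proxy_timeout=60
```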
Re: [Dovecot] Synchronization error in NFS
On 9.2.2012, at 10.36, Andy YB Hu wrote: > I just tried out the Director. One question is about the re-redirection. I > know director will redirect all the simultaneous requests from the same > user to only a single server at the same time. The question is how to > manage the time period after last connection to re-decide to redirect which > machine? director_user_expire? Look like not. > > I did one test, set director_user_expire = 1 min, then keep sending > requests to the director in 2 min interval, the result is it keeps redirect > to the same back end server. In normal operation the user is always redirected to the same server. http://blog.dovecot.org/2010/05/new-director-service-in-v20-for-nfs.html has some more details. If you have enough connections, it shouldn't matter that the connections aren't constantly going to random backends. In practice they get distributed well enough.
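The "same user always lands on the same server" behavior comes from the director mapping each user deterministically onto a backend. A toy Python sketch of the idea — not Dovecot's actual algorithm or wire protocol, and the plain modulo mapping is a deliberate simplification of the director's ring:

```python
import hashlib

def pick_backend(username, backends):
    """Map a username deterministically onto one backend host.

    Every director node computing the same hash picks the same backend,
    so concurrent sessions for one user all land on one server, which
    keeps that user's index files on a single NFS client.
    """
    h = int(hashlib.md5(username.encode("utf-8")).hexdigest(), 16)
    return backends[h % len(backends)]
```

With enough users the hash spreads the load evenly across backends, which is why per-connection randomness isn't needed.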
Re: [Dovecot] vsz_limit
On 9.2.2012, at 10.41, Héctor Moreno Blanco wrote: > I can see these errors, but I'm not sure if they have something to do with my > problem: > > ... > Feb 8 12:04:57 XX dovecot: imap-login: Error: read(imap) failed: > Connection reset by peer > Feb 8 12:04:57 XX dovecot: imap-login: Error: read(imap) failed: Remote > closed connection (process_limit reached?) > Feb 8 12:04:57 XX dovecot: imap-login: Error: fd_send(imap, 16) failed: > Broken pipe imap service isn't responding. > Feb 8 12:08:09 XX dovecot: imap-login: Error: master(imap): Auth request > timed out (received 0/12 bytes) imap process isn't responding because auth process isn't responding. > Do you see anything wrong? Yes. Is the system load very high? That could explain this. Or do you see any other error messages? Those errors you pasted above show that something is wrong, but not the root cause of what's wrong. > Anyway, I'm going to investigate what David Warden told me about the "High > Security" mode, just in case it is related to my problem. It could at least help reduce the load. Also it would be a good idea to upgrade to latest v2.0.
Re: [Dovecot] [PATCH] Bad boundary check in client_find_namespace
Hi, I'm glad to see my report finally arrive, thank you :)

On 09.02.2012 04:02, Timo Sirainen wrote:

> Fixed now slightly differently than you:

No problem. I agree that my code was a bit kludgy. I noticed that my original mail might be a bit unclear:

> while trying to investigate the bug I reported last week, I found that there is a broken boundary check

So I just want to make clear that this patch does not fix the other problem that I reported at http://www.dovecot.org/list/dovecot/2011-September/061316.html ("Strange behavior from shared namespaces and INBOX, probably a bug").

Cheers, Christoph

-- Christoph Bußenius, Rechnerbetriebsgruppe der Fakultäten Informatik und Mathematik, TU München, +49 89-289-18519 <> Raum 00.05.055 <> Boltzmannstr. 3 <> Garching
Re: [Dovecot] vsz_limit
Hello Timo,

I can see these errors, but I'm not sure if they have something to do with my problem:

  ...
  Feb 8 12:04:57 XX dovecot: imap-login: Error: read(imap) failed: Connection reset by peer
  Feb 8 12:04:57 XX dovecot: imap-login: Error: read(imap) failed: Remote closed connection (process_limit reached?)
  Feb 8 12:04:57 XX dovecot: imap-login: Error: fd_send(imap, 16) failed: Broken pipe
  ...
  Feb 8 12:08:09 XX dovecot: imap-login: Error: master(imap): Auth request timed out (received 0/12 bytes)
  ...

Do you see anything wrong? Anyway, I'm going to investigate what David Warden told me about the "High Security" mode, just in case it is related to my problem. I appreciate your answers.

Kind regards.

Héctor Moreno Blanco
División de Seguridad e Infraestructuras / Security and Infrastructures Division
GMV
Isaac Newton, 11 P.T.M. Tres Cantos E-28760 Madrid
Tel. +34 91 807 21 00 Fax +34 91 807 21 99
www.gmv.com

-----Original Message-----
From: Timo Sirainen [mailto:t...@iki.fi]
Sent: Thursday, February 9, 2012 0:29
To: Héctor Moreno Blanco
Cc: dovecot@dovecot.org
Subject: Re: [Dovecot] vsz_limit

On 8.2.2012, at 10.58, Héctor Moreno Blanco wrote:

> The problem is at the moment of maximum load of the system.

What problem? Does Dovecot log any errors?
Re: [Dovecot] Synchronization error in NFS
Thanks Timo,

I just tried out the Director. One question is about the re-redirection. I know the director will redirect all simultaneous requests from the same user to a single server at the same time. The question is how to manage the time period after the last connection, to re-decide which machine to redirect to? director_user_expire? Looks like not.

I did one test: set director_user_expire = 1 min, then kept sending requests to the director at a 2 min interval; the result is that it keeps redirecting to the same backend server. Actually what I want is for the "secondary load balancer layer" to be able to redirect requests to a random backend. How do I manage that? Only after the files on the previous backend are expired? Thanks.

Timo Sirainen wrote on 02/09/2012 07:49 AM:

> On 7.2.2012, at 8.26, Andy YB Hu wrote:
>> I am running some concurrent testings under NFS.
>> ..
>> Here is what I am doing: one session running a loop of COPY commands (while(1) COPY...) connects to one dovecot server; the other session running a loop of SELECT commands (while(1) SELECT...) connects to the other dovecot server. Both are accessing the same mailbox (/tmp/NFS).
>
> I don't even attempt to support this kind of configuration anymore. Use http://wiki2.dovecot.org/Director