Re: [Dovecot] Configuration problem?
On 6 February 2012 17:49, Dennis Guhl d...@dguhl.org wrote: On Mon, Feb 06, 2012 at 05:33:01PM +, Anne Wilson wrote:

I have a new Scientific Linux 6.1 mail server (dovecot -n below) and am seeing the following in the logs, with no idea what is happening:

- Dovecot Begin
Dovecot was killed, and not restarted afterwards.

You shut Dovecot down and did not restart it.

It appears to be doing things without my intervention. Despite the reports that it kept shutting down, Dovecot continued to serve messages throughout the day.

**Unmatched Entries**
dovecot: imap(anne): Connection closed bytes=205614/894243: 1 Time(s)
[..]
dovecot: imap(anne): Disconnected: Logged out bytes=7914/89868: 1 Time(s)

The user closed the connection.

dovecot: imap: Server shutting down. bytes=1309821/4473013: 1 Time(s)
[..]
dovecot: imap: Server shutting down. bytes=3146/79269: 1 Time(s)

The server closed the connection due to a shutdown command.

dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled): 1 Time(s)

Dovecot did what it just said: it started. Your logwatch is too old to know about the messages Dovecot emits to syslog.

HTH
Dennis

[..] You mean the version of logwatch is too old? I'm beginning to wonder whether running an Enterprise version is such a good idea after all.

Anne
Re: [Dovecot] Performance of Maildir vs sdbox/mdbox
On Mon, Feb 06, 2012 at 10:01:03PM +0100, Jan-Frode Myklebust wrote:

Your fsyncs can run over 60 seconds? Hopefully not.. maybe just me being confused by the error message about lmtp_proxy_output_timeout. After adding http://hg.dovecot.org/dovecot-2.0/rev/71084b799a6c on Friday, we hadn't seen any problems, so it looked like this problem was solved.

Crap, I saw 6 "message might be sent more than once" messages from postfix yesterday, all at the time of this crash on the director that postfix/lmtp was talking with:

Feb 6 16:13:10 loadbalancer2 dovecot: lmtp(6601): Panic: file lmtp-proxy.c: line 376 (lmtp_proxy_output_timeout): assertion failed: (proxy->data_input->eof)
Feb 6 16:13:10 loadbalancer2 dovecot: lmtp(6601): Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0 [0x2ab6f193d680] ->
/usr/lib64/dovecot/libdovecot.so.0 [0x2ab6f193d6d6] ->
/usr/lib64/dovecot/libdovecot.so.0 [0x2ab6f193cb93] ->
dovecot/lmtp [0x406d75] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handle_timeouts+0xcd) [0x2ab6f194859d] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x68) [0x2ab6f1949558] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x2d) [0x2ab6f194820d] ->
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x2ab6f1936a83] ->
dovecot/lmtp(main+0x144) [0x403fa4] ->
/lib64/libc.so.6(__libc_start_main+0xf4) [0x35f8a1d994] ->
dovecot/lmtp [0x403da9]
Feb 6 16:13:10 loadbalancer2 dovecot: master: Error: service(lmtp): child 6601 killed with signal 6 (core dumps disabled)

Should I try increasing LMTP_PROXY_DATA_INPUT_TIMEOUT_MSECS, or do you have any other ideas for what might be causing it?

-jf
Re: [Dovecot] dsync error Mailbox has children, delete them first
On 13.12.2011 11:47, Jürgen Obermann wrote:

Hi, I use dsync to back up mailboxes from mbox format to mdbox on a remote system. The first run for a user with dsync is OK, but during the second there are lots of the following errors:

dsync-remote(user): Error: Can't delete mailbox directory Example: Mailbox has children, delete them first

I see no way to influence the order in which dsync deletes mailboxes. This happens with dovecot version 2.0.16. Thank you, Juergen Obermann

Hello, after upgrading to dovecot 2.0.17 this problem went away.

Greetings,
Jürgen Obermann
Hochschulrechenzentrum der Justus-Liebig-Universität Gießen
Heinrich-Buff-Ring 44
Tel. 0641-9913054
[Dovecot] Multiple userdb possible?
Hello, I am running v2.0.13. In my dovecot.conf I have:

userdb {
  args = /etc/dovecot/dovecot-usrdb-ldap.conf
  driver = ldap
}
passdb {
  args = /etc/dovecot/dovecot-passdb-ldap.conf
  driver = ldap
}

Is it legitimate to include multiple LDAP userdbs, like:

userdb {
  args = /etc/dovecot/dovecot-usrdb-ldap1.conf
  driver = ldap
}
passdb {
  args = /etc/dovecot/dovecot-passdb-ldap1.conf
  driver = ldap
}
userdb {
  args = /etc/dovecot/dovecot-usrdb-ldap2.conf
  driver = ldap
}
passdb {
  args = /etc/dovecot/dovecot-passdb-ldap2.conf
  driver = ldap
}

If it is legitimate (in case the configuration should be different, please correct me), in which sequence are the userdbs evaluated?

Thanks, Nick
Re: [Dovecot] Configuration problem?
On Tue, Feb 07, 2012 at 08:08:24AM +, Anne Wilson wrote: On 6 February 2012 17:49, Dennis Guhl d...@dguhl.org wrote: On Mon, Feb 06, 2012 at 05:33:01PM +, Anne Wilson wrote:

I have a new Scientific Linux 6.1 mail server (dovecot -n below) and am seeing the following in the logs, with no idea what is happening:

- Dovecot Begin
Dovecot was killed, and not restarted afterwards.

You shut Dovecot down and did not restart it.

It appears to be doing things without my intervention. Despite the reports that it kept shutting down, Dovecot continued to serve messages throughout the day.

The messages logwatch shows appeared at some time within the analysed period and are not necessarily in time-sorted order. Btw, do not rely on any summary of log files, but look into the log yourself.

[..] You mean the version of logwatch is too old? I'm beginning to wonder

Yes, the current version is 7.4.0 from March 2011 (http://www.logwatch.org).

whether running an Enterprise version is such a good idea after all.

I don't know Scientific Linux, but I use Debian stable on all my servers and I'm very happy with it. Nonetheless, I manually upgrade some packages which add needed features or are maintained by upstream. It is crucial to know and understand the philosophy behind a distribution, to decide whether this works for you, and whether you can live with the resulting caveats.

Dennis
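Dennis's advice here — read the raw log rather than a logwatch summary — amounts to a one-liner. As a sketch, the sample lines below are invented stand-ins for the real mail log (on RHEL-style systems usually /var/log/maillog; Debian uses /var/log/mail.log):

```shell
# Write a few invented syslog-style lines to stand in for the real mail log:
cat > /tmp/maillog.sample <<'EOF'
Feb  7 09:00:01 mail dovecot: master: Dovecot v2.0.9 starting up (core dumps disabled)
Feb  7 09:05:12 mail dovecot: imap(anne): Disconnected: Logged out bytes=7914/89868
Feb  7 09:06:40 mail postfix/smtpd[1234]: connect from unknown
EOF

# Pull out only the Dovecot entries, preserving their real time order
# (unlike logwatch's grouped, unordered summary):
grep ' dovecot: ' /tmp/maillog.sample
```

Run against the real log file, this shows at a glance whether a "starting up" message really follows a shutdown, which the logwatch summary obscures.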
Re: [Dovecot] Slightly more intelligent way of handling issues in sdbox?
On 06-02-2012 22:47, Timo Sirainen wrote: On 3.2.2012, at 16.16, Mark Zealey wrote:

I was doing some testing on sdbox yesterday. Basically I did the following procedure:

1) Create a new sdbox; deliver 2 messages into it (u.1, u.2)
2) Create a copy of the index file (no cache file created yet)
3) Deliver another message to the mailbox (u.3)
4) Copy back the index file from stage (2)
5) Deliver new mail

Then the message delivered in stage 3, i.e. u.3, gets replaced with the message delivered in (5), also called u.3.

http://hg.dovecot.org/dovecot-2.1/rev/a765e0a895a9 fixes this.

I've not actually tried this patch yet, but looking at it, it is perhaps useful for the situation I described below, where the index is corrupt. In the case I am describing, however, the index is NOT corrupt - it is simply an older version (i.e. it only knows about the first 2 mails in the directory, not the 3rd). This could happen, for example, when mails are being stored on different storage than indexes; say you have 2 servers with remote NFS-stored mails but local indexes that rsync between the servers every hour. You manually fail over from one server to the other, and you then have a copy of the correct indexes, but only from an hour ago. The mails are all there on the shared storage, but because the indexes are out of date, when a new message comes in it will be automatically overwritten. (Speaking of which, it would be great if force-resync also rebuilt the cache files if there are valid cache files around, rather than just doing away with them.)

Well, ideally there shouldn't be so much corruption that this matters..

That's true, but in our experience we usually get corruption in batches rather than as a one-off occurrence. Our most common case is something like this: say there's an issue with the NFS server (assuming we are storing indexes on there as well now) and so we have to killall -9 dovecot processes or similar. In that case you get a number of corrupted indexes on the server.
Rebuilding the indexes generates an IO storm (say via lmtp or a pop3 access); then the clients log in via imap and we have to re-read all the messages to generate the cache files, which is a second IO storm. If the caches were rebuilt at least semi-intelligently (i.e. you could extract from the cache files a list of things that had previously been cached) that would reduce the effects of rare storage-level issues such as this.

Mark
Re: [Dovecot] Configuration problem?
On 07/02/12 11:57, Dennis Guhl wrote: On Tue, Feb 07, 2012 at 08:08:24AM +, Anne Wilson wrote: On 6 February 2012 17:49, Dennis Guhl d...@dguhl.org wrote: On Mon, Feb 06, 2012 at 05:33:01PM +, Anne Wilson wrote:

I have a new Scientific Linux 6.1 mail server (dovecot -n below) and am seeing the following in the logs, with no idea what is happening:

- Dovecot Begin
Dovecot was killed, and not restarted afterwards.

You shut Dovecot down and did not restart it.

It appears to be doing things without my intervention. Despite the reports that it kept shutting down, Dovecot continued to serve messages throughout the day.

The messages logwatch shows appeared at some time within the analysed period and are not necessarily in time-sorted order. Btw, do not rely on any summary of log files, but look into the log yourself.

[..]

Actually, this morning there aren't the same messages, so perhaps I was restarting services while trying to get it right - in fact it seems very likely that that was so. Today there are a few like

dovecot: imap(anne): Disconnected: Logged out bytes=11892/21219: 1 Time(s)

I presume that refers to clients logging out of the imap connection? In which case, I can forget about that. I normally read the summary each morning and refer directly to the logs if I see something that looks unusual. Occasionally, as in this case, there are entries that I don't understand and I ask those who do :-)

You mean the version of logwatch is too old? I'm beginning to wonder

Yes, the current version is 7.4.0 from March 2011 (http://www.logwatch.org).

whether running an Enterprise version is such a good idea after all.

I don't know Scientific Linux, but I use Debian stable on all my servers and I'm very happy with it. Nonetheless, I manually upgrade some packages which add needed features or are maintained by upstream.
It is crucial to know and understand the philosophy behind a distribution, to decide whether this works for you, and whether you can live with the resulting caveats.

I've run CentOS for maybe 4 years, and it's similar to SL, both being RHEL clones, but maintained by different communities. On a server (even though this is a very mild server, being only file and print serving) the older packages are rarely a problem. I appreciate the time and trouble you are taking to educate me :-)

Anne
[Dovecot] Fedora 16 configuration
Hello, I am trying to get dovecot to work on a Fedora 16 install with sendmail. I have been able to get it to work in the past with dovecot.conf, but not with the new conf.d directory and associated config files. I keep seeing this in the maillog:

Feb 7 14:28:59 sendmail dovecot: pop3-login: Aborted login (no auth attempts): rip=x.x.x.x, lip=x.x.x.x

And the mail client comes back with "username or password invalid". Are there instructions somewhere regarding Fedora 16 installs? I found this one and tried it to no avail: http://www.server-world.info/en/note?os=Fedora_16&p=mail&f=2

Thanks in advance, Cliff
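For what it's worth, "Aborted login (no auth attempts)" usually means the client disconnected before even trying to authenticate — commonly because Dovecot refuses plaintext authentication on a non-TLS connection. A hedged sketch of the relevant lines in conf.d/10-auth.conf (only relax this on a trusted network; the better fix is enabling SSL/TLS):

```
# conf.d/10-auth.conf -- sketch only; compare with your doveconf -n output
disable_plaintext_auth = no        # or keep "yes" and configure SSL/TLS
auth_mechanisms = plain login
```

If plaintext auth is already allowed, the next thing to check is whether the passdb in conf.d/auth-*.conf matches how your accounts are actually stored.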
Re: [Dovecot] Multiple userdb possible?
On 7/2/2012 6:00 PM, /dev/rob0 wrote:

... Having two LDAP searches is conceptually no different than having system users and SQL users. ... In the order specified. A /etc/dovecot/dovecot-usrdb-ldap1.conf match prevents searching in /etc/dovecot/dovecot-usrdb-ldap2.conf; keep this in mind when setting up the queries ...

Thank you for the clarifications! Regards, Nick
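To make the ordering concrete, a minimal sketch (file names are the hypothetical ones from the question; the comments state the behaviour described above):

```
# Checked first; a match here stops the search.
userdb {
  driver = ldap
  args = /etc/dovecot/dovecot-usrdb-ldap1.conf
}
# Only consulted if ldap1 returned no match.
userdb {
  driver = ldap
  args = /etc/dovecot/dovecot-usrdb-ldap2.conf
}
```

So if a user exists in both directories, the entry from ldap1 always wins; queries should be written so the databases are disjoint, or ordered so the preferred one comes first.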
Re: [Dovecot] user login on behalf of another user
Hello,

On 06.02.2012 16:05, Timo Sirainen wrote:

A master user doesn't necessarily have access to all users' mailboxes. In the passdb lookup you can decide whether this master user is allowed to be this destination user. For example, if you used passdb checkpassword, you could look at the USER and MASTER_USER environment variables to figure out whether this combination should be allowed or not. The checkpassword script can also do the actual authentication via PAM (I'd think there's a way to call it somehow).

Thank you. Now I have an idea of how I could configure this.

Ingo
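The authorization step Timo describes could look roughly like this. This is a hedged sketch of only the decision logic, not a complete checkpassword script (the real script must also read the credentials from file descriptor 3 and exec the reply binary on success); the allow_master helper and the list of trusted master users are hypothetical:

```shell
#!/bin/sh
# Hypothetical fragment of a checkpassword script: decide whether the
# authenticated MASTER_USER may log in as the destination USER.
allow_master() {
    user="$1"; master="$2"
    case "$master" in
        helpdesk|postmaster) return 0 ;;   # assumed list of trusted masters
        *) return 1 ;;
    esac
}

if [ -n "${MASTER_USER:-}" ]; then
    # A master login: refuse any master user not on the trusted list.
    allow_master "${USER:-}" "$MASTER_USER" || exit 1
fi
```

A real version would more likely consult a file or PAM instead of a hard-coded case list, and could also restrict which destination users each master may impersonate.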
Re: [Dovecot] Possible broken indexer(lucene/solr)?
On 06/02/12 22:26, Ingo Thierack wrote:

Hello, I tried the new 2.1rc and don't get any data into the search index. I tried lucene first, then switched back to solr. If I do a search in a mail folder, I get this in the dovecot log:

2012-02-06 22:17:11 | dovecot:| indexer-worker(xx): Indexed 0 messages in INBOX/dovecot

Log from solr:

Feb 6, 2012 10:17:11 PM org.apache.solr.core.SolrCore execute
INFO: [] webapp=/solr path=/select params={fl=uid,score&sort=uid+asc&fq=%2Bbox:120ed10bbe9dcd4c8d2ef8146a47+%2Buser:xxx&q=body:solr&rows=9159} hits=0 status=0 QTime=1

Maybe I missed something. I upgraded from 2.0.15 to 2.1 (head from the repository yesterday). With 2.0 I could see Solr working on the mail whenever I started a search; now nothing happens. I upgraded the schema.xml and deleted the old index.

Regards, Ingo Thierack

Same thing here. Tried with 2.1-rc1 and rc5. No results.
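For comparison, the usual fts_solr plugin configuration is along these lines (a sketch; the URL is a placeholder for your Solr instance). If this is already in place and the indexer-worker still reports "Indexed 0 messages", the problem is likely in the indexer rather than the query side:

```
mail_plugins = $mail_plugins fts fts_solr

plugin {
  fts = solr
  fts_solr = url=http://localhost:8983/solr/
}
```

The Solr log excerpt above shows the search query arriving with hits=0, which is consistent with documents never having been submitted for indexing.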
Re: [Dovecot] Slightly more intelligent way of handling issues in sdbox?
On Tue, Feb 7, 2012 at 4:08 AM, Mark Zealey mark.zea...@webfusion.com wrote: On 06-02-2012 22:47, Timo Sirainen wrote: On 3.2.2012, at 16.16, Mark Zealey wrote:

I was doing some testing on sdbox yesterday. Basically I did the following procedure:

1) Create a new sdbox; deliver 2 messages into it (u.1, u.2)
2) Create a copy of the index file (no cache file created yet)
3) Deliver another message to the mailbox (u.3)
4) Copy back the index file from stage (2)
5) Deliver new mail

Then the message delivered in stage 3, i.e. u.3, gets replaced with the message delivered in (5), also called u.3.

http://hg.dovecot.org/dovecot-2.1/rev/a765e0a895a9 fixes this.

I've not actually tried this patch yet, but looking at it, it is perhaps useful for the situation I described below, where the index is corrupt. In the case I am describing, however, the index is NOT corrupt - it is simply an older version (i.e. it only knows about the first 2 mails in the directory, not the 3rd). This could happen, for example, when mails are being stored on different storage than indexes; say you have 2 servers with remote NFS-stored mails but local indexes that rsync between the servers every hour. You manually fail over from one server to the other, and you then have a copy of the correct indexes, but only from an hour ago. The mails are all there on the shared storage, but because the indexes are out of date, when a new message comes in it will be automatically overwritten. (Speaking of which, it would be great if force-resync also rebuilt the cache files if there are valid cache files around, rather than just doing away with them.)

Well, ideally there shouldn't be so much corruption that this matters..

That's true, but in our experience we usually get corruption in batches rather than as a one-off occurrence.
Our most common case is something like this: say there's an issue with the NFS server (assuming we are storing indexes on there as well now) and so we have to killall -9 dovecot processes or similar. In that case you get a number of corrupted indexes on the server. Rebuilding the indexes generates an IO storm (say via lmtp or a pop3 access); then the clients log in via imap and we have to re-read all the messages to generate the cache files, which is a second IO storm. If the caches were rebuilt at least semi-intelligently (i.e. you could extract from the cache files a list of things that had previously been cached) that would reduce the effects of rare storage-level issues such as this.

Mark

What about something like: a writer to an index/cache file checks for the existence of file name.1. If that snapshot doesn't exist or is over a day old, and the current index/cache file is not corrupt, take a snapshot of it as file name.1. Then, if an index/cache file is corrupt, the rebuild can check for file name.1 and use that as its basis, so at most a day's worth of email is reverted to its previous state (instead of all of it), assuming the file has been modified within the last day. Clearly it'd take up a bit more disk space, though the various dovecot.* files are pretty modest in size, even for big mailboxes.

Or it might be a decent use case for some sort of journaling, so that the actual index/cache files never get written to except during a consolidation, rolling up journals once they've reached some threshold. There'd definitely be a performance price to pay, though, not to mention breaking backwards compatibility. And I'm just throwing stuff out to see if any of it sticks, so don't mistake these for even remotely well thought-out suggestions :)
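The daily-snapshot idea sketched above could look roughly like this. This is a hedged illustration of the proposal, not Dovecot code; the .1 suffix and the one-day threshold are the poster's own assumptions:

```python
import os
import shutil
import time

SNAPSHOT_AGE = 24 * 60 * 60  # refresh the snapshot at most once a day

def maybe_snapshot(index_path, is_corrupt):
    """Before writing to index_path, keep a <name>.1 copy no older than
    a day, so a later rebuild can start from a recent known-good state."""
    snap = index_path + ".1"
    try:
        fresh = time.time() - os.path.getmtime(snap) < SNAPSHOT_AGE
    except OSError:          # snapshot does not exist yet
        fresh = False
    if not fresh and not is_corrupt:
        shutil.copy2(index_path, snap)  # only snapshot a healthy file
    return snap
```

On corruption, the rebuild would prefer the .1 copy over starting from scratch, limiting the cache loss (and the resulting IO storm) to at most a day's worth of changes.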