Re: [Dovecot] exceeded mail_max_userip_connections
On Sun, May 8, 2011 12:03 pm, Voytek Eymont wrote: SSL: Connection secure. IMAP Server: Maximum number of connections from user+IP exceeded (mail_max_userip_connections) so if I have SquirrelMail logged in all the time, plus K-9 running, plus occasionally use an IMAP client on my Palm, how many connections should I allow? -- Voytek
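For reference, the limit in question lives in dovecot.conf; a minimal sketch of raising it (the value 10 is only an illustration, not a recommendation from this thread):

# dovecot.conf -- allow more concurrent IMAP connections per user+IP pair
protocol imap {
  mail_max_userip_connections = 10
}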
Re: [Dovecot] Building Pigeonhole
Hello, you might need to point the build at the Dovecot 2 include files at compile time (e.g. -I /usr/.../somewhere/include/dovecot). On 08/05/2011 03:39, Peter Bell wrote: I'm attempting to build Pigeonhole 0.2.3 for use with my Dovecot 2.0.12 installation on Slackware. I've downloaded the sources and unzipped them into a folder which sits alongside the folder in which I built Dovecot. When I ./configure, the configuration appears to complete without error. However, when I make, the compiler throws lots of errors, all stemming, I believe, from its failure to find a set of include files:
cmd-vacation.c:4:17: error: lib.h: No such file or directory
cmd-vacation.c:5:17: error: str.h: No such file or directory
cmd-vacation.c:6:22: error: strfuncs.h: No such file or directory
cmd-vacation.c:7:17: error: md5.h: No such file or directory
cmd-vacation.c:8:21: error: hostpid.h: No such file or directory
cmd-vacation.c:9:26: error: str-sanitize.h: No such file or directory
cmd-vacation.c:10:29: error: message-address.h: No such file or directory
cmd-vacation.c:11:26: error: message-date.h: No such file or directory
cmd-vacation.c:12:20: error: ioloop.h: No such file or directory
I believe that these files are all part of the main Dovecot - what am I meant to be doing so that the Pigeonhole build process can find them? Peter.
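In practice Pigeonhole's ./configure is usually pointed at the Dovecot tree explicitly rather than by hand-adding -I flags; a sketch assuming Dovecot was built in a sibling directory (the paths are illustrative):

# Point configure at either the Dovecot source/build tree or, for an
# installed Dovecot, at $prefix/lib/dovecot (where dovecot-config lives):
./configure --with-dovecot=../dovecot-2.0.12
make
make install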
[Dovecot] sieve filters not being invoked
Hi all, Similar to someone who posted here yesterday, I am having trouble getting sieve filters working. I have installed pigeonhole. I can create, edit, and save scripts from both the Thunderbird sieve extension as well as the Roundcube sieve plugin via managesieve running on port 4190. The .sieve file is properly saved in ~/sieve with a symlink from ~/.dovecot.sieve. But the filters are not being invoked on incoming mail. I have mail_debug enabled, but I don't see anything useful in /var/log/dovecot. Anyone have any ideas? Thanks $ dovecot -n # 2.0.11: /usr/local/etc/dovecot/dovecot.conf # OS: FreeBSD 8.1-RELEASE amd64 ufs auth_verbose = yes base_dir = /var/run/dovecot/ disable_plaintext_auth = no first_valid_gid = 0 info_log_path = /var/log/dovecot log_path = /var/log/dovecot mail_access_groups = mail mail_debug = yes mail_location = maildir:/home/colin/vmail/%d/%n managesieve_notify_capability = mailto managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date passdb { args = /usr/local/etc/dovecot-passwd driver = passwd-file } plugin/sieve = ~/.dovecot.sieve plugin/sieve_dir = ~/sieve plugin/sieve_global_dir = /var/lib/dovecot/sieve/global/ plugin/sieve_global_path = /var/lib/dovecot/sieve/default.sieve protocols = imap sieve service auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0660 user = postfix } unix_listener auth-master { group = mail mode = 0660 user = vmail } user = root } service imap-login { executable = /usr/local/libexec/dovecot/imap-login vsz_limit = 64 M } service managesieve-login { inet_listener sieve { port = 4190 } } service pop3-login { executable = /usr/local/libexec/dovecot/imap-login vsz_limit = 64 M } ssl_cert = /etc/ssl/venus.crt ssl_key = /etc/ssl/venus.key userdb { driver = passwd } userdb { args = uid=vmail gid=vmail home=/home/colin/vmail/%d/%n driver = 
static } verbose_proctitle = yes protocol lda { mail_plugins = sieve } - Colin Brace Amsterdam http://lim.nl -- View this message in context: http://old.nabble.com/sieve-filters-not-being-invoked-tp31569757p31569757.html Sent from the Dovecot mailing list archive at Nabble.com.
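The config above does set protocol lda { mail_plugins = sieve }, but Sieve only runs when mail actually passes through the Dovecot LDA (or LMTP); if the MTA writes straight to the mail store, the scripts are never invoked. A sketch of routing deliveries through the LDA in Postfix, assuming the install prefix above (the exact flags are illustrative):

# /etc/postfix/main.cf -- hand local deliveries to dovecot-lda
mailbox_command = /usr/local/libexec/dovecot/dovecot-lda -f "$SENDER" -a "$RECIPIENT"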
[Dovecot] ntp revisited (so what to do ?)
OK, So what you people say is : 1. Run ntpdate during startup only once 2. After that, keep time with ntpd Right ? Regards, spyros I merely function as a channel that filters music through the chaos of noise - Vangelis
Re: [Dovecot] compressed mboxes very slow
Stan Hoeppner s...@hardwarefreak.com writes: On 5/6/2011 3:07 PM, Kamil Jońca wrote: I have some archive mails in gzipped mboxes. I could use them with dovecot 1.x without problems. But recently I have installed dovecot 2.0.12, and they are slow. very slow. Creating index files takes about 10 minutes for a ~20M file with 560 messages for a bzipped mbox; for gzipped it's a little better but still unusable :( What other software, if any, was also upgraded/changed when you upgraded to Dovecot 2.0.12? Libraries? Filesystem? Daemons? What IIRC only dovecot - I simply upgraded the Debian package via aptitude. they have been mildly corrupted along the way? Did this bad behavior start directly after the upgrade or did 2.0.12 run the zipped mbox Yes, immediately after upgrade. files at acceptable speed for a while? Did you add/enable any new Dovecot plugins that you weren't running in 1.2.x? No, the only thing was converting the old config to a dovecot-2 config. Stracing the dovecot process shows that every ~20 messages it rereads the complete mbox file. Can you be a bit more specific here? What do you mean by rereads complete mbox file? I'm not a dev, but that sounds suspiciously like Sorry, my fault, it is more correct to say: regularly. an error handling mechanism. I.e. an error occurred while processing, or the file may have changed while processing, so we start over. I'm almost sure that file is not changed. Could you have a buggy inotify/dnotify or something along those lines? How to check it? Do you now have something else running, say at the filesystem level, that is making Dovecot think the file has changed even though it hasn't? Are you zipping these mbox files via a cron job that is running every few seconds instead of every few hours or days? No. These files were compressed once by mutt, and then only read as archives via dovecot. Something is apparently causing Dovecot to reread these files regularly, and I'd guess it's probably not a Dovecot bug. 
Did you run strace when accessing a non-compressed mbox file for comparison? http://strony.aster.pl/kjonca/dovecot.log.gz - uncompressed mbox http://strony.aster.pl/kjonca/dovecot.gz.log.gz - gzipped mbox KJ -- http://blogdebart.pl/2009/12/22/mamy-chorych-dzieci/ CRETINISM - an infirmity often predisposing one to vampirism (J. Collin de Plancy, Słownik wiedzy tajemnej)
Re: [Dovecot] ntp revisited (so what to do ?)
On Sun, 8 May 2011 11:07:04 +0100 (BST) Spyros Tsiolis sts...@yahoo.co.uk articulated: So what you people say is : 1. Run ntpdate during startup only once 2. After that, keep time with ntpd As I posted earlier using the technique I showed, on a FreeBSD system, there would be absolutely no reason to do so; however, I cannot vouch for that on other systems. -- Jerry ✌ dovecot.u...@seibercom.net Disclaimer: off-list followups get on-list replies or get ignored. Please do not ignore the Reply-To header. __
Re: [Dovecot] ntp revisited (so what to do ?)
On Sun, May 08, 2011 at 06:45:01AM -0400, Jerry wrote: On Sun, 8 May 2011 11:07:04 +0100 (BST) Spyros Tsiolis sts...@yahoo.co.uk articulated: So what you people say is : 1. Run ntpdate during startup only once 2. After that, keep time with ntpd As I posted earlier using the technique I showed, on a FreeBSD system, there would be absolutely no reason to do so; however, I cannot vouch for that on other systems. Right. As for running ntpdate, the years have passed and the Debian manual now says: -g Normally, ntpd exits with a message to the system log if the offset exceeds the panic threshold, which is 1000 s by default. This option allows the time to be set to any value without restriction; however, this can happen only once. If the threshold is exceeded after that, ntpd will exit with a message to the system log. This option can be used with the -q and -x options. -q Exit the ntpd just after the first time the clock is set. This behavior mimics that of the ntpdate program, which is to be retired. So, ntpdate is to be retired. In boot scripts either simply run ntpd -g or, probably better: ntpd -gqx followed by ntpd. In FreeBSD, AFAICS, setting ntpd_enable="YES" # Start time server and ntpd_sync_on_start="YES" # Synchronize on start in /etc/rc.conf corresponds to the second of the two, at least as of FreeBSD 6.4, since before 6.4 the -x was apparently missing, which would not correct big offsets, see: http://lists.freebsd.org/pipermail/freebsd-bugs/2009-March/034439.html
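The two boot-time alternatives described can be written as boot-script fragments; a sketch (the Linux lines are illustrative, not from any particular distro's init scripts):

# Option 1: ntpd alone, allowed to step a large initial offset once
ntpd -g

# Option 2: a one-shot ntpdate-style step, then the daemon
ntpd -gqx
ntpd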
Re: [Dovecot] ntp revisited (so what to do ?)
On Dom, 2011-05-08 at 11:07 +0100, Spyros Tsiolis wrote: OK, So what you people say is : 1. Run ntpdate during startup only once 2. After that, keep time with ntpd Right ? Right, that ensures that time is correct (ntpdate run at startup) and that it is kept correct without the clock going back (ntp running as daemon). -- Jose Celestino | http://japc.uncovering.org/files/japc-pgpkey.asc Assumption is the Mother of Screw-Up -- Mr. John Elwood Hale
[Dovecot] mail_max_lock_timeout setup
Hi all, in which section must mail_max_lock_timeout be set up? Thanks.
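mail_max_lock_timeout is a mail setting, so it can go at the top level of dovecot.conf (applying to all protocols) or inside a protocol block; a minimal sketch (the values are only examples):

# dovecot.conf -- global, applies everywhere
mail_max_lock_timeout = 30 secs

# or scoped to a single protocol
protocol lda {
  mail_max_lock_timeout = 10 secs
}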
Re: [Dovecot] DOVECOT v2.0.11 using SIEVE not working
Thanks for your response. Regarding the ports, I was referring to the services, which I verified are running. What would cause the scripts to run but do nothing? service managesieve-login { inet_listener sieve { port = 4190 } inet_listener sieve_deprecated { port = 2000 } The LDA is enabled and the changes have been made to sendmail. I created a new sendmail.cf file and the added lines are in there. When I receive an email the .dovecot.sieve is executed but does nothing. I've attached the output of dovecot -n. # 2.0.12: /usr/local/etc/dovecot/dovecot.conf # OS: Linux 2.6.35.12-88.fc14.i686 i686 Fedora release 14 (Laughlin) auth_mechanisms = plain login disable_plaintext_auth = no listen = * mail_location = mbox:~/mail:INBOX=/var/mail/%u mail_privileged_group = mail maildir_very_dirty_syncs = yes mbox_write_locks = fcntl passdb { driver = pam } service imap-login { inet_listener imap { port = 143 } } service pop3-login { inet_listener pop3 { port = 110 } inet_listener pop3s { ssl = no } } ssl_cert = /etc/pki/dovecot/certs/dovecot.pem ssl_key = /etc/pki/dovecot/private/dovecot.pem userdb { driver = passwd } - Original Message - From: Stephan Bosch step...@rename-it.nl To: dovecot@dovecot.org Sent: Saturday, May 07, 2011 3:31 AM Subject: Re: [Dovecot] DOVECOT v2.0.11 using SIEVE not working On 5/7/2011 12:54 AM, Matt Mc Namara wrote: Hi, I'm trying to get sieve working with dovecot. I seem to have everything enabled but my scripts don't seem to work. Both sieve-filter (2000) and sieve (4190) are running Uh, what do you mean by sieve-filter in this case? Regarding your problem: - Make sure you are using the Dovecot LDA (http://wiki2.dovecot.org/LDA) and/or LMTP (http://wiki2.dovecot.org/LMTP). - Make sure the LDA Sieve plugin is enabled (http://wiki2.dovecot.org/Pigeonhole/Sieve/Configuration) - Make sure that the sieve scripts are found. You can obtain more information by enabling mail_debug. 
If the above does not solve your problem, it is important to post your dovecot -n output here. Regards, Stephan.
Re: [Dovecot] exceeded mail_max_userip_connections
On 11:59 AM, Voytek Eymont wrote: On Sun, May 8, 2011 12:03 pm, Voytek Eymont wrote: SSL: Connection secure. IMAP Server: Maximum number of connections from user+IP exceeded (mail_max_userip_connections) so if I have SquirrelMail logged in all the time, plus K-9 running, plus occasionally use an IMAP client on my Palm, how many connections should I allow? As many as one per client per subscribed folder, but ... Possibly SquirrelMail is using a different IP (localhost, 127.0.0.1) and doesn't count. I suspect the issue is with K-9. I had similar issues with older versions of K-9. They went away at some point. I'm currently using K-9 3.604. If you are using an older version of K-9, particularly a 2.xxx version, I suggest you upgrade. -- Mark Sapiro m...@msapiro.net San Francisco Bay Area, California The highway is for gamblers, better use your sense - B. Dylan
Re: [Dovecot] ntp revisited (so what to do ?)
On 5/8/2011 5:07 AM, Spyros Tsiolis wrote: OK, So what you people say is : 1. Run ntpdate during startup only once 2. After that, keep time with ntpd Right ? Yes, or run ntpd with the -g option. You don't want to use the -x option (as some might have suggested) as that can cause ntpd to take up to 2 weeks to synchronize the time. Detailed ntp setup is OT for this list, but make sure your ntp.conf lists at least three servers. Typically the ntp.org pool servers will work fine, e.g. server 0.uk.pool.ntp.org server 1.uk.pool.ntp.org server 2.uk.pool.ntp.org server 3.uk.pool.ntp.org Then once in a while make sure ntp is running and synchronised. I like ntpq -p, which will show the peer list with a * next to the current master. ntpd works best on a long-running server, and typically shouldn't be used on a virtual server. Virtual environments have their own time service. -- Noel Jones
Re: [Dovecot] ntp revisited (so what to do ?)
On 5/8/2011 5:07 AM, Spyros Tsiolis wrote: OK, So what you people say is : 1. Run ntpdate during startup only once 2. After that, keep time with ntpd Right ? When running ntpd don't run ntpdate at startup, or any time. Use one or the other, not both (if you incorrectly use both, ntpdate will throw off drift calculations in ntpd). This is the proper setup for bare metal hosts. I didn't pay attention to earlier posts in this thread. So, if you're talking about a guest running inside a virtual machine then the setup is entirely different, and may vary depending on your underlying hypervisor and other factors. -- Stan
[Dovecot] Issues with authentication failure delays
There are two rather clear issues with the state of authentication failure delays. First, the delay length isn't what was (presumably) intended. Second, there is a new way of doing failure delays in Dovecot 2 which was added *in addition to* the old method, rather than replacing it. As a result delays may not be the expected length and settings don't have the expected effect. First, the length of the failure delays. Based on auth/auth-penalty.c and auth/auth-penalty.h, it seems rather clear that the delay time (for the newer type of failure delay) was intended to start at 2 seconds and double for each failure (see auth_penalty_to_secs), but be capped at 15 seconds. However, a simple test which tries to log in 5 times with a random password and times each attempt shows something different:

$ cat authtest.py
import imaplib
import time
import random

conn = imaplib.IMAP4('localhost')
for i in range(5):
    try:
        start = time.time()
        conn.login('testusers', str(random.random()))
    except Exception, e:
        print e
        print time.time() - start

$ python authtest.py
[AUTHENTICATIONFAILED] Authentication failed.
0.502058982849
[AUTHENTICATIONFAILED] Authentication failed.
4.50464391708
[AUTHENTICATIONFAILED] Authentication failed.
8.50679802895
[AUTHENTICATIONFAILED] Authentication failed.
15.5040819645
[AUTHENTICATIONFAILED] Authentication failed.
15.5039038658

(Note that these results are with auth_failure_delay set to 0, more on that in a bit.) Aside from the extra half second on each attempt (which I have no clue about), there is no delay on the first attempt. Subsequent delays seem to have the correct timing. I *think* this is because auth_penalty_lookup is called from auth_request_handler_auth_begin, that is, at the *beginning* of an authentication attempt, therefore not affecting the first failed attempt. This may be too minor an issue to worry much about, but it certainly looks to me like it's not doing quite what was intended. Moving on to the second issue. 
Revision fbff8ca77d2e added a new style of authentication failure delay, but left the existing failure delay mechanism in place. The old failure delay uses the auth_failure_delay setting, and could be disabled by using a value of 0 for that setting. Its remnants are in auth/auth-request-handler.c in the function auth_request_handler_flush_failures. It looks like much of the code in that file could be removed or simplified by eliminating this older failure delay system. Better still, I would like to see the auth_failure_delay setting retained and used in the new system. The value of the setting could be used in place of AUTH_PENALTY_INIT_SECS, allowing similar configurability to what the old system offered. -Kevin
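The observed delay schedule can be modeled in a few lines of Python. This is a sketch reconstructed from the measurements above: the constant names mirror auth/auth-penalty.h, but expected_delay is my reconstruction of the behavior, not Dovecot's code.

```python
# Model of the observed delay schedule (reconstruction, not Dovecot source).
AUTH_PENALTY_INIT_SECS = 2
AUTH_PENALTY_MAX_SECS = 15


def expected_delay(prior_failures):
    """Delay applied at the start of an attempt, given prior failed attempts."""
    if prior_failures == 0:
        # The penalty lookup happens at the *beginning* of the attempt,
        # so the very first attempt sees no delay at all.
        return 0
    # Doubling per failure, capped at the maximum.
    return min(AUTH_PENALTY_INIT_SECS << prior_failures, AUTH_PENALTY_MAX_SECS)


# Matches the measured timings above, minus the constant ~0.5 s overhead:
print([expected_delay(n) for n in range(5)])  # [0, 4, 8, 15, 15]
```

The cap kicks in at the fourth attempt (2 << 3 = 16 exceeds 15), which is why the last two measured delays are identical.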
Re: [Dovecot] ntp revisited (so what to do ?)
On 5/8/2011 7:36 AM, Jose Celestino wrote: On Dom, 2011-05-08 at 11:07 +0100, Spyros Tsiolis wrote: OK, So what you people say is : 1. Run ntpdate during startup only once 2. After that, keep time with ntpd Right ? Right, that ensures that time is correct (ntpdate run at startup) and that it is kept correct without the clock going back (ntp running as daemon). This is not correct. You're assuming that ntpd doesn't perform sanity checks on the system time when the daemon starts, which is not the case. Again, use ntpd or ntpdate, not both. Preferably, today, in 2011, and for many years now, only use ntpd, except in guests sitting atop a hypervisor. In the virtual environment case you run ntpd in the hypervisor and configure the guest kernels appropriately. There is a plethora of platform specific documentation out there covering the VM time keeping case so I won't attempt to repeat it all here, except to say that with Linux the first/best step is running a tickless kernel, which is now the default on many distros, as it helps both laptops/netbooks when in sleep mode and VM guests when they get time sliced into what is in essence a sleep state as far as the kernel sees system clock ticks. -- Stan
Re: [Dovecot] ntp revisited (so what to do ?)
On Sun, 8 May 2011, Stan Hoeppner wrote: On 5/8/2011 5:07 AM, Spyros Tsiolis wrote: So, if you're talking about a guest running inside a virtual machine then the setup is entirely different, and may vary depending on your underlying hypervisor and other factors. Certainly I run ntpd on all my KVM-based virtual machines, since KVM provides each guest with a virtualized hardware clock. With Xen, this can also be done if using a Xen-enabled kernel in the guest, using the Xen independent wallclock. Otherwise you usually have to run ntpdate periodically through cron. Steve
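For the Xen-without-independent-wallclock case, the periodic ntpdate fallback mentioned above is typically a cron entry; a sketch (the interval, path, and server are assumptions, not from this thread):

# /etc/crontab -- hourly one-shot clock sync, logging to syslog via -s
0 * * * *  root  /usr/sbin/ntpdate -s pool.ntp.org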
Re: [Dovecot] compressed mboxes very slow
On 5/8/2011 5:21 AM, Kamil Jońca wrote: Stan Hoeppner s...@hardwarefreak.com writes: On 5/6/2011 3:07 PM, Kamil Jońca wrote: I have some archive mails in gzipped mboxes. I could use them with dovecot 1.x without problems. But recently I have installed dovecot 2.0.12, and they are slow. very slow. Creating index files takes about 10 minutes for a ~20M file with 560 messages for a bzipped mbox; for gzipped it's a little better but still unusable :( What other software, if any, was also upgraded/changed when you upgraded to Dovecot 2.0.12? Libraries? Filesystem? Daemons? What IIRC only dovecot - I simply upgraded the Debian package via aptitude. The latest Debian stable dovecot package is 1.2.15-4. If 'aptitude upgrade' pulled 2.0.12 then you are running either testing or unstable, or you're using non-official mirrors. Either way, you can expect to have some problems. Also, you probably need to be asking on debian-user or asking the maintainers directly. And you need to be able to give them an actual bug report. I'm guessing the problem is Debian specific and not vanilla Dovecot 2.0.12 specific. Timo hasn't responded to you yet, which may be a good indication of this. they have been mildly corrupted along the way? Did this bad behavior start directly after the upgrade or did 2.0.12 run the zipped mbox Yes, immediately after upgrade. Look at your aptitude and/or dpkg logs to see what other packages, if any, got upgraded/replaced when you installed dovecot. files at acceptable speed for a while? Did you add/enable any new Dovecot plugins that you weren't running in 1.2.x? No, the only thing was converting the old config to a dovecot-2 config. Stracing the dovecot process shows that every ~20 messages it rereads the complete mbox file. Can you be a bit more specific here? What do you mean by rereads complete mbox file? I'm not a dev, but that sounds suspiciously like Sorry, my fault, it is more correct to say: regularly. an error handling mechanism. I.e. 
an error occurred while processing, or the file may have changed while processing, so we start over. I'm almost sure that file is not changed. It probably didn't, given the fact that Dovecot won't write to zipped mbox files, period. But if you have a broken inotify/dnotify setup it may appear to Dovecot that the file has changed. Such things are common with testing/unstable distros. Changes to the kernel, APIs, and apps occur rapidly. Such distros are meant for developers and end users with the knowledge and ability to file concise bug reports after identifying problems. Inotify may not be the problem at all, but it seems a possibility given that Dovecot is apparently stopping decompression and rereading the file multiple times until finished. I've not looked at the Dovecot source, but this seems a likely cause of the reread. Could you have a buggy inotify/dnotify or something along those lines? How to check it? If you're running testing/unstable you should already know how to check this. Inotify is a kernel API. For Debian Dovecot to use inotify it must be compiled with the build option 'notify=inotify'. You'll need to see the package maintainer's build script. You'll also need to look at the kernel .config used to build your kernel, as inotify must be built into your kernel. Do you now have something else running, say at the filesystem level, that is making Dovecot think the file has changed even though it hasn't? Are you zipping these mbox files via a cron job that is running every few seconds instead of every few hours or days? No. These files were compressed once by mutt, and then only read as archives via dovecot. Was mutt upgraded along with dovecot when you ran 'aptitude --safe upgrade'? Have you tested any other IMAP client such as Thunderbird to eliminate mutt as the cause of the problem? Something is apparently causing Dovecot to reread these files regularly, and I'd guess it's probably not a Dovecot bug. 
Did you run strace when accessing a non-compressed mbox file for comparison? http://strony.aster.pl/kjonca/dovecot.log.gz - uncompressed mbox http://strony.aster.pl/kjonca/dovecot.gz.log.gz - gzipped mbox I didn't ask for the files but the results of your analysis. This is your system and it's your job to troubleshoot it. I'm simply trying to assist you in your efforts. If this is a production system I suggest you downgrade to your previous Dovecot version that was working properly, then build a test rig to troubleshoot this. If that's not in the cards, I suggest sticking with Debian Stable and newer Dovecot backports as they become available. -- Stan
Re: [Dovecot] ntp revisited (so what to do ?)
Spyros wrote: OK, So what you people say is : 1. Run ntpdate during startup only once 2. After that, keep time with ntpd Right ? https://support.ntp.org/bin/view/Support/StartingNTP4 says:
- Start ntpd as early as possible
- ntpd -g ... is better than ntpdate ... ; ntpd ...
- Wait before starting time-sensitive services
- As late as possible in the boot sequence, run 'ntp-wait -v', and start time-sensitive services after it successfully returns.
I'm fairly certain the above is excellent advice, and BCP. H
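The ordering recommended above amounts to something like this in an init script (the service path is illustrative; adapt to your init system):

# Early in boot: start ntpd, letting it step a large initial offset once
ntpd -g

# As late as possible: gate time-sensitive daemons on synchronization
ntp-wait -v && /usr/local/etc/rc.d/dovecot start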
Re: [Dovecot] exceeded mail_max_userip_connections
On Mon, May 9, 2011 11:51 am, Mark Sapiro wrote: Voytek Eymont wrote: I thought it was 3.6x, I installed it off the market about a week ago If you got it a week ago from the market, it's probably 3.604. thanks, 3.605 -- Voytek