[Dovecot] quota_warning script
Hello. I'm reading http://wiki.dovecot.org/Quota/1.1 one last time before my upgrade, and I have a question about the quota_warning option. What kind of script should we use for this example?

quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95

What must the script do? Does it send an email? Does it delete some messages? Thanks!

--
-Nicolas.
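For what it's worth, the wiki's example script simply mails the user a warning; a minimal sketch along those lines, assuming Dovecot's deliver binary lives in /usr/libexec/dovecot (the postmaster address is a placeholder):

  #!/bin/sh
  # quota-warning.sh -- mail the user a warning when a quota limit is crossed.
  # Dovecot invokes it as "quota-warning.sh 95" and sets $USER to the account.
  PERCENT=$1
  cat << EOF | /usr/libexec/dovecot/deliver -d "$USER"
  From: postmaster@example.com
  Subject: Quota warning

  Your mailbox is now $PERCENT% full.
  EOF

So it sends an email; it does not delete anything.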
Re: [Dovecot] fd limit 1024 is lower in dovecot-1.1.1
On Sun, 29 Jun 2008 03:53:54 pm Zhang Huangbin wrote:
> Hi, all.
>
> I just upgraded from 1.0.15 to 1.1.1 on a test box (RHEL 5.2, x86_64).
> After the upgrade, I got this warning message:
>
> 8<
> # /etc/init.d/dovecot restart
> Stopping Dovecot Imap:                                     [  OK  ]
> Starting Dovecot Imap: Warning: fd limit 1024 is lower than what Dovecot
> can use under full load (more than 1280). Either grow the limit or
> change login_max_processes_count and max_mail_processes settings
>                                                            [  OK  ]
> 8<
>
> But I changed both login_max_processes_count and max_mail_processes
> to 2048, and it raised the same message.

A change may not mean an increase: raising those settings only raises the number of fds Dovecot needs. Edit /etc/security/limits.conf to increase "nofile", or possibly decrease the process counts.

--
Daniel Black -- Proudly a Gentoo Linux User.
Gnu-PG/PGP signed and encrypted email preferred
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x76677097
GPG Signature D934 5397 A84A 6366 9687 9EB2 861A 4ABA 7667 7097
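As a sketch of the limits.conf route (the item is spelled "nofile", and these values are examples only, not tuned recommendations):

  # /etc/security/limits.conf -- example values only
  *    soft    nofile    4096
  *    hard    nofile    4096

Note that limits.conf applies to PAM sessions; for a daemon started from an init script, you may instead need a "ulimit -n 4096" in the script before dovecot starts.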
[Dovecot] fd limit 1024 is lower in dovecot-1.1.1
Hi, all.

I just upgraded from 1.0.15 to 1.1.1 on a test box (RHEL 5.2, x86_64).
After the upgrade, I got this warning message:

8<
# /etc/init.d/dovecot restart
Stopping Dovecot Imap:                                     [  OK  ]
Starting Dovecot Imap: Warning: fd limit 1024 is lower than what Dovecot
can use under full load (more than 1280). Either grow the limit or
change login_max_processes_count and max_mail_processes settings
                                                           [  OK  ]
8<

But I changed both login_max_processes_count and max_mail_processes to 2048,
and it raised the same message. How can I solve this issue?

Thanks very much. My dovecot -n output:

8<
# dovecot -n
# 1.1.1: /etc/dovecot.conf
Warning: fd limit 1024 is lower than what Dovecot can use under full load
(more than 1280). Either grow the limit or change login_max_processes_count
and max_mail_processes settings
log_path: /var/log/dovecot.log
protocols: pop3 pop3s imap imaps
listen: *
ssl_cert_file: /etc/pki/dovecot/certs/dovecotCert.pem
ssl_key_file: /etc/pki/dovecot/private/dovecotKey.pem
login_dir: /var/run/dovecot/login
login_executable(default): /usr/libexec/dovecot/imap-login
login_executable(imap): /usr/libexec/dovecot/imap-login
login_executable(pop3): /usr/libexec/dovecot/pop3-login
max_mail_processes: 1024
mail_uid: 2000
mail_gid: 2000
mail_location: maildir:/%Lh/%Ld/%Ln/:INDEX=/%Lh/%Ld/%Ln/
mail_executable(default): /usr/libexec/dovecot/imap
mail_executable(imap): /usr/libexec/dovecot/imap
mail_executable(pop3): /usr/libexec/dovecot/pop3
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(pop3): quota
mail_plugin_dir(default): /usr/lib64/dovecot/imap
mail_plugin_dir(imap): /usr/lib64/dovecot/imap
mail_plugin_dir(pop3): /usr/lib64/dovecot/pop3
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
auth default:
  mechanisms: plain login
  user: vmail
  passdb:
    driver: sql
    args: /etc/dovecot-mysql.conf
  userdb:
    driver: sql
    args: /etc/dovecot-mysql.conf
  socket:
    type: listen
    client:
      path: /var/spool/postfix/private/auth
      mode: 432
      user: postfix
      group: postfix
    master:
      path: /var/run/dovecot/auth-master
      mode: 432
      user: vmail
      group: vmail
8<

--
Best Regards.

Zhang Huangbin

- Mail Server Solution for Red Hat(R) Enterprise Linux & CentOS 5.x:
  http://rhms.googlecode.com/
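If raising the fd limit is not an option, the warning's other suggestion is to lower the process counts rather than raise them; roughly, login_max_processes_count plus max_mail_processes plus some overhead must stay under the limit. A pair of hypothetical values that should fit under a 1024 fd limit:

  # dovecot.conf -- hypothetical values that fit under a 1024 fd limit
  login_max_processes_count = 64
  max_mail_processes = 512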
Re: [Dovecot] Dovecot corrupted index cache
On Sat, 2008-06-28 at 10:00 -0700, Michael D Godfrey wrote:
> > > > A guess would be that this is likely due to the endianness of the
> > > > multiple architectures that the index is being accessed with. We have
> > > > the same issue here across i686/x86_64/sparc. I'm about to post to an
> > > > older email thread about this as well.
> > >
> > > This is a good guess. We use a mixture of i386 and x86_64. This is not
> > > an "endianness" conflict, but could be the problem for other reasons.
> >
> > So you use NFS?
>
> Clients mount off of the server, but dovecot (IMAP) connections are made
> directly. Home directories are not mounted, so I do not think NFS can be
> affecting dovecot. I think the problem is due to 2 users using the same
> account making simultaneous accesses. It could be that the users need to
> be on machines with differing architectures (i386 and x86_64 in our case).

I don't really understand. If there is no NFS, that means you have only one
Dovecot server. So how can one Dovecot server be both i386 and x86_64? Or if
you mean the client machines are i386/x86_64, Dovecot doesn't even know about
them, so that doesn't matter.
Re: [Dovecot] Dovecot corrupted index cache
> On Wed, 2008-06-25 at 09:40 -0700, Michael D. Godfrey wrote:
> > > A guess would be that this is likely due to the endianness of the
> > > multiple architectures that the index is being accessed with. We have
> > > the same issue here across i686/x86_64/sparc. I'm about to post to an
> > > older email thread about this as well.
> >
> > This is a good guess. We use a mixture of i386 and x86_64. This is not
> > an "endianness" conflict, but could be the problem for other reasons.
>
> So you use NFS?

Clients mount off of the server, but dovecot (IMAP) connections are made
directly. Home directories are not mounted, so I do not think NFS can be
affecting dovecot. I think the problem is due to 2 users using the same
account making simultaneous accesses. It could be that the users need to be
on machines with differing architectures (i386 and x86_64 in our case).

Anything more I can tell you?

Michael
Re: [Dovecot] Dovecot index, NFS, and multiple architectures
> > This was starting from a clean index, first opening pine on the NFS
> > Solaris 9 sparc machine, and then at the same time opening pine on my
> > Fedora 9 i386 workstation.
>
> Why does it matter where you run Pine? Does it directly execute Dovecot
> on the local machine instead of connecting via TCP?

Correct. We have dovecot executing locally in each instance, with the index
being shared. I'll try the TCP method and get back to you.

By the way, the only reason I'm specifically doing it this way is to test
what might possibly happen to our user group. We have approximately 50,000
student accounts and 20,000 staff accounts that all access mail in multiple
ways. We want to be able to roll out dovecot everywhere, but to do this it
has to be resilient enough to handle multiple instances of dovecot on
multiple architectures. For example, a student logs into a webmail machine
(sparc) and then ssh's into a linux frontend server and opens pine at the
same time. This scenario isn't likely to happen, but it could. We're just
trying to cover all possibilities, hence why we're running the local
dovecot/pine and the server-side dovecot/pine... trying to see how it holds
up. So far it's been great apart from the endianness issue.

By the way, we're trying out separating the index by architecture and it's
working pretty well right now. The only concern is how it's going to scale
with regard to disk usage if we have double the number of indexes per
account. We figure a maximum of 10 MB per index, multiplied by 2, multiplied
by 70,000 accounts: roughly 1.4 TB. Not a small number at all, but that's
for us to worry about. ;) Of course, that is a worst-case scenario.

> I'd suggest not running Dovecot on different architectures. For example,
> if you're on a non-x86 machine, make it connect via TCP to an x86 Dovecot
> server.

I'm going to try that out and get back to you.

By the way, I don't think this is related to the corruption, but we also
have tons of these in the logs:

Jun 25 11:52:32 host IMAP(user): Created dotlock file's timestamp is
different than current time (1214409234 vs 1214409152):
/dovecot-index/control/user/.INBOX/dovecot-uidlist
Jun 25 11:52:32 host IMAP(user): Created dotlock file's timestamp is
different than current time (1214409235 vs 1214409152):
/dovecot-index/control/user/.INBOX/dovecot-uidlist

> Dovecot really wants the clocks to be synchronized between the NFS clients
> and the server. If the clock difference is more than 1 second, you'll get
> problems.

I figured. Looks like we need to be a little more strict with ntp. ;)
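For reference, a sketch of that per-architecture index split (paths and variables here are hypothetical, not this site's actual config): the dovecot-uidlist under CONTROL is a plain-text file, so it can stay shared, while the binary INDEX files get an architecture-specific directory:

  # dovecot.conf on the i386 frontends (hypothetical paths)
  mail_location = maildir:%h/Maildir:INDEX=/dovecot-index/%u/i386:CONTROL=/dovecot-index/control/%u

  # dovecot.conf on the sparc machines (identical except for the INDEX dir)
  mail_location = maildir:%h/Maildir:INDEX=/dovecot-index/%u/sparc:CONTROL=/dovecot-index/control/%u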
Re: [Dovecot] courier IMAP to dovecot migration: folders not showing up
On 6/27/2008 8:27 PM, Jacob Yocom-Piatt wrote:
> any clues on how to fix this issue would be welcome.

It will probably be helpful to provide the output of dovecot -n so we can
see your config...

--
Best regards,
Charles
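In the meantime, one guess worth checking (an assumption until the dovecot -n output appears): Courier keeps folders under an "INBOX." prefix with "." as the separator, so a Dovecot namespace matching that layout is the usual first thing to try after a migration:

  # dovecot.conf -- Courier-compatible namespace (a sketch, pending the config)
  namespace private {
    separator = .
    prefix = INBOX.
    inbox = yes
  }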
Re: [Dovecot] Keeping \Seen flag private
On Saturday 28 June 2008 07:25:31, Timo Sirainen wrote:
> On Fri, 2008-06-27 at 15:09 +0100, Imobach González Sosa wrote:
> > Hi all,
> >
> > I want to set up shared folders for a couple of users, and I'd like
> > everyone to keep the \Seen flag as private. So if user #1 reads some
> > messages and user #2 does not, those messages appear as "unseen" to #2
> > and "seen" to #1.
> >
> > I've implemented shared folders using namespaces, with every user having
> > their own "control" and "private" directories. But all the flags (\Seen
> > included) are shared.
> >
> > Am I on the right path? Any tips or documentation?
>
> I updated http://wiki.dovecot.org/SharedMailboxes now to mention flag
> sharing.

Ah, great! Thank you very much, Timo!

--
Imobach González Sosa
Banot.net
http://www.banot.net/
Re: [Dovecot] Keeping \Seen flag private
On Friday 27 June 2008 21:08:30, Asheesh Laroia wrote:
> On Fri, 27 Jun 2008, Imobach González Sosa wrote:
> > There's no problem in disallowing the users to update the \Seen flag.
> > What I want is for every user to have their own \Seen flags.
>
> Timo and others will know more; but for me, I would just deliver the
> messages to multiple people and see that Dovecot deliver will use
> hardlinks for the multiple deliveries - that way, the message flags in
> the filename are still canonical.

Yes, it could be a solution. But our users' requirements are a bit...
strange? They want one of them to manage the folder (with subfolders) while
the rest of them can only read messages.

Thanks anyway for your suggestion!

--
Imobach González Sosa
Banot.net
http://www.banot.net/
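For that manager-plus-readers split, the ACL plugin is one approach (it isn't mentioned in this thread, and the rights strings below are illustrative, not a tested config): a dovecot-acl file inside the shared folder's maildir can grant full rights to one user and read-only access to everyone else:

  # dovecot-acl in the shared folder's maildir (illustrative names and rights)
  user=manager lrwstipe
  anyone lr

Here l=lookup, r=read, w=write flags, s=seen, t=deleted, i=insert, p=post, and e=expunge, so "anyone" can list and read the folder but not change it.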