Re: [Dovecot] v2.0.alpha2 released

2009-10-22 Thread Timo Sirainen
On Fri, 2009-10-23 at 05:30 +0200, Pascal Volk wrote:
> I also had the idea that verbose_proctitle could additionally display
> the currently executed command.
> BUT, is there still a need for verbose_proctitle support? (It currently
> doesn't work on Linux.) 

I'm trying to get a setproctitle() syscall added to Linux, and in the
meantime it could use the ugly, hackish way that OpenSSH and sendmail
also use.
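
For the curious, the hack in question overwrites the memory holding the
process's original argv[] strings, which is what ps reads on Linux. A
minimal standalone sketch of the idea (not Dovecot's actual code; real
implementations also relocate environ to gain more room, and the title
string below is invented):

#include <string.h>
#include <unistd.h>

static char *argv_area;   /* start of the original argv[] strings */
static size_t argv_size;  /* number of bytes we may safely overwrite */

static void proctitle_init(int argc, char *argv[])
{
	/* argv strings are laid out contiguously, so everything from
	   argv[0] to the end of the last argument is reusable */
	char *last = argv[argc - 1];

	argv_area = argv[0];
	argv_size = (size_t)(last + strlen(last) + 1 - argv[0]);
}

static void setproctitle_hack(const char *title)
{
	size_t len = strlen(title);

	if (len > argv_size - 1)
		len = argv_size - 1;
	/* clear the whole area first so ps doesn't show stale junk */
	memset(argv_area, '\0', argv_size);
	memcpy(argv_area, title, len);
}

int main(int argc, char *argv[])
{
	proctitle_init(argc, argv);
	setproctitle_hack("imap [jane 127.0.0.1 IDLE]");
	pause();	/* verify with: ps -p <pid> -o args= */
	return 0;
}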

> I think it could be replaced by the new doveadm who command.

Hmm. Perhaps. But that would need some way for doveadm to figure out
what each process is doing, and that could be annoying to do. Basically
there are two possibilities:

a) doveadm connects to each process and asks what it's doing (and how
would it connect?)

b) each process constantly reports what it's doing to some
state-gathering process (a waste of CPU, etc.)


signature.asc
Description: This is a digitally signed message part


Re: [Dovecot] v2.0.alpha2 released

2009-10-22 Thread Pascal Volk
On 10/23/2009 05:14 AM Timo Sirainen wrote:
> http://dovecot.org/releases/2.0/alpha/dovecot-2.0.alpha2.tar.gz
> http://dovecot.org/releases/2.0/alpha/dovecot-2.0.alpha2.tar.gz.sig
> 
> Changes since alpha1:
> 
>  …
>  - Added doveadm who command for listing currently logged in users.
> 
> v2.0.beta1 TODO list:
> 
>  - verbose_proctitle=yes is now quite useless because of the imap/pop3
> process creation change. Probably enable linux-proctitle-hack and start
> changing the proctitle much more often. For example, it could contain
> the command that's currently being run by the user. Login processes
> could show how many client connections they currently have, and other
> processes could do something similar.

I also had the idea that verbose_proctitle could additionally display
the currently executed command.
BUT, is there still a need for verbose_proctitle support? (It currently
doesn't work on Linux.) I think it could be replaced by the new doveadm
who command.


Regards,
Pascal
-- 
The trapper recommends today: cafebabe.0929...@localdomain.org


[Dovecot] v2.0.alpha2 released

2009-10-22 Thread Timo Sirainen
http://dovecot.org/releases/2.0/alpha/dovecot-2.0.alpha2.tar.gz
http://dovecot.org/releases/2.0/alpha/dovecot-2.0.alpha2.tar.gz.sig

Changes since alpha1:

 - All debug messages are now logged to debug log (debug_log_path
setting, defaults to info_log_path). Patch by Pascal Volk.
 - Added support for SORT=DISPLAY IMAP extension.
 - Added doveadm who command for listing currently logged in users.
 - ssl_ciphers_list: Disable anonymous and export ciphers by default.
 - mail_chroot can now contain %variables (e.g. /home/%u).
 - Fixed a bad maildir crashing bug.
 - ssl-params is now run with +15 priority (nice level) when generating
the SSL parameters.
 - Redesigned how login processes log in and create imap/pop3 processes.
This fixes several broken settings and some other problems, and it's
now also possible for a single imap/pop3 process to handle multiple
connections.
If you want to try this, modify master.conf:

service imap {
  # comment out:
  #service_count = 1

  # This should be at least the number of CPUs, maybe more (2-3x?)
  process_min_avail = 4

  # Limit the number of clients a single process can handle.
  # Disk IO is blocking, so some commands can take a while.
  client_limit = 5
}

It currently doesn't even try to group the same user's connections into
the same process; they just go randomly to whichever process manages to
accept them first. Some day perhaps there should be a new service in the
middle that keeps track of existing connections and forwards new ones to
the right mail processes.

v2.0.beta1 TODO list:

 - master.conf has to be somehow put into defaults, so doveconf -n
doesn't show its contents. An example master.conf would be minimal and
show only those settings that an admin might want to change (and those
would show up in doveconf -n)
 - verbose_proctitle=yes is now quite useless because of the imap/pop3
process creation change. Probably enable linux-proctitle-hack and start
changing the proctitle much more often. For example, it could contain
the command that's currently being run by the user. Login processes
could show how many client connections they currently have, and other
processes could do something similar.
 - Change the plugin hook design to allow multiple users in the same
process to run different plugins.

See the alpha1 release announcement for more about these TODO items:

 - v1.x config backwards compatibility
 - dsync is buggy
 - config process is slow
 - tcp-wrappers support



signature.asc
Description: This is a digitally signed message part


Re: [Dovecot] HA Dovecot Config?

2009-10-22 Thread Charles Sprickman
This is veering a bit OT, hence the top-post, but it looks like another HA 
option may be available in a few months:


http://lists.freebsd.org/pipermail/freebsd-announce/2009-October/001279.html

In short, you can stack any geom-aware FS on top of this.  Combined with 
CARP, you've got a decent/simple/cheap option.


Pawel does really good work, I look forward to playing around with this.

Charles

___
Charles Sprickman
NetEng/SysAdmin
Bway.net - New York's Best Internet - www.bway.net
sp...@bway.net - 212.655.9344


On Thu, 22 Oct 2009, Steve wrote:



 Original Message 

Date: Thu, 22 Oct 2009 12:57:56 +0100
From: Ed W
To: Dovecot Mailing List
Subject: Re: [Dovecot] HA Dovecot Config?



Steve wrote:


Hello Ed,



I have never used FileReplicationPro, but looking at what it offers, it
reminds me of GlusterFS. I use GlusterFS for all www data of the domains
I host. I don't yet use GlusterFS for IMAP/POP storage for all the
domains I host; only a small subset of them use GlusterFS for IMAP/POP,
but so far it works without issues, and sooner or later I will migrate
the other domains to GlusterFS as well.


The setup of GlusterFS is ultra easy (compared to other clustering FS
solutions I have seen) and it offers some very nice functions.


In the past I burned my fingers with older GlusterFS releases when I
tried to use them as storage for IMAP/POP, but the later 2.0.x releases
of GlusterFS are more stable.





The thread moved on and no one seemed to bite, but I also have watched
glusterfs for a long while now and been very attracted by the basic
principle. I would be very interested to hear more about how it's worked
out for you?


I have been using GlusterFS for a long time. For mail hosting I waited
for release 2, and when it was out I switched all mail domains to
GlusterFS 2.0.1. That was a huge failure for me: the process took so
much CPU that I could barely run anything else on the system, and
stability was very, very bad. That forced me to switch back to NFS for
all the domains. Later, around 2.0.4, I looked at it again and things
were more stable. I started running a bunch of domains on top of
GlusterFS 2.0.4, moved on to GlusterFS GIT, and somewhere around 2.0.7
the GIT version broke so horribly in my setup that I switched back to
the 2.0.7 release. That was some weeks ago, and since then I have been
running around 1/3 of my domains on top of GlusterFS 2.0.7 with two
active nodes using server-side replicate, io-threads, write-behind and
io-cache. On the client side I have no performance translators or
anything like that, just the bare client. I used server-side
replication because I wanted the shared storage to behave like a SAN,
rather than using the servers as dumb bricks where the client is
responsible for the replication. So far both nodes have 2 x 1TB disks
in RAID 1 mode, and those disks are exported as a GlusterFS brick doing
replicate. The setup in Dovecot is more or less what you would use for
NFS.



Got any benchmarks, perhaps comparing against local
storage and NFS?


I have benchmarked, but I have nothing prepared to show others. In
general I can say that using GlusterFS on a gigabit network gives about
1/3 to 1/2 of the raw disk speed. If you add performance translators,
the speed is somewhere between 1/2 and 1/1 of raw disk speed (depending
on what block size I use and whether I use Booster or not). With the
performance translators you can easily saturate a gigabit connection. I
have not tried anything faster than gigabit.
Of course local storage is faster, since I don't have the theoretical
125MB/s limit I have when using gigabit. The newer releases of GlusterFS
are comparable to NFS in terms of speed. In terms of CPU usage GlusterFS
is way behind NFS and local storage. In terms of flexibility GlusterFS
is better than anything else.



It seems intuitively very appealing to use something heavily distributed
in the style of gluster/mogile for the file data, and something fast
(like local storage) for the indexes.  Perhaps even dovecot proxying
could be used to help force users onto a persistent server to avoid
re-creating index data...


I have never tried MogileFS. What I don't like about it is that it uses
HTTP for transport and operations.



For smaller setups it would be appealing to find something simpler than
DRBD for a two server active/active setup (eg both servers acting as
both storage & imap and failover for each other)


That is easily done with GlusterFS. Very easy.



Cheers

Ed W


// Steve
--
FREE for all GMX members: the maxdome Movie-FLAT!
Activate it now at http://portal.gmx.net/de/go/maxdome01



[Dovecot] doveadm who

2009-10-22 Thread Timo Sirainen
Pascal Volk suggested this, and it was pretty quick to implement for
Dovecot v2.0. Ideas welcome on how to improve it, or if it's already
perfect :)

The first line is written to stderr, so |sort can be used:

# doveadm who|sort 
username  # (ips) (pids)
timo      1 (127.0.0.1) (2457)
tss       2 (127.0.0.1) (617 1345)
tss2      2 (127.0.0.2 127.0.0.1) (2392 2799)

# doveadm who|sort -k2 -nr
username  # (ips) (pids)
tss2      2 (127.0.0.2 127.0.0.1) (2392 2799)
tss       2 (127.0.0.1) (617 1345)
timo      1 (127.0.0.1) (2457)

You can filter connections:

# doveadm who 127.0.0.2
username  # (ips) (pids)
tss2      2 (127.0.0.2 127.0.0.1) (2392 2799)

# doveadm who 127.0.0.0/24
username  # (ips) (pids)
tss2      2 (127.0.0.2 127.0.0.1) (2392 2799)
timo      1 (127.0.0.1) (2457)
tss       2 (127.0.0.1) (617 1345)

# doveadm who tss
username  # (ips) (pids)
tss2      2 (127.0.0.2 127.0.0.1) (2392 2799)
tss       2 (127.0.0.1) (617 1345)

# doveadm who tss 127.0.0.1
username  # (ips) (pids)
tss2      2 (127.0.0.2 127.0.0.1) (2392 2799)
tss       2 (127.0.0.1) (617 1345)


signature.asc
Description: This is a digitally signed message part


[Dovecot] 1.1 Quota Question

2009-10-22 Thread Marty Anstey
Hi,
I'm not sure if this question has been asked before but some googling
hasn't turned up anything really relevant.

We are currently running Dovecot 1.1.16 & Postfix; maildir++.

When a message arrives for a mailbox which is over quota, it is bounced.
Obviously, this isn't very desirable; the primary downside is that when
junk mail hits a full mailbox, the bounce goes back to the (often
forged) sender of the message. On a busy mail system, that could
potentially get us blacklisted pretty quickly. Ideally we would like to
reject the messages during the SMTP transaction instead. Is there an
easy way to set this up?
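
One common way to get an SMTP-time rejection (sketched here purely as an
illustration; the service name and script path are hypothetical, and the
policy script itself is left out) is a Postfix policy service that
checks the recipient's Maildir++ quota at RCPT time:

# main.cf -- consult a quota policy service before accepting a recipient
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service unix:private/quota-policy

# master.cf -- run the policy script via the spawn daemon
quota-policy unix  -       n       n       -       0       spawn
    user=nobody argv=/usr/local/bin/quota-policy

The script would read the name=value attributes Postfix sends, look up
the recipient's maildirsize file, and answer action=DUNNO or
"action=552 5.2.2 Mailbox full" following the Postfix policy protocol.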

Thanks,
Marty





Re: [Dovecot] Need a little shadow to MySQL conversion help

2009-10-22 Thread Timo Sirainen
On Thu, 2009-10-22 at 11:31 -0700, Marc Perkel wrote:
> But - slightly off topic. Suppose I wanted to add some kind of
> date/time field to MySQL so that I can record the date and time of the
> last login. Is there an easy way to do that?

http://wiki.dovecot.org/PostLoginScripting#Last-login_tracking is the
way to do it currently.
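
Roughly, the wiki approach means pointing mail_executable at a small
wrapper that records the login and then execs the real imap binary. A
minimal sketch, assuming the users table from this thread plus an added
last_login column (database name and credentials are invented):

#!/bin/sh
# post-login wrapper: Dovecot exports the login name in $USER.
# Beware of shell/SQL quoting if usernames can contain odd characters.
mysql -u dovecot -pSECRET mailusers -e \
  "UPDATE users SET last_login = NOW() \
   WHERE user_name = '${USER%@*}' AND domain_name = '${USER#*@}'"

exec /usr/local/libexec/dovecot/imap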



signature.asc
Description: This is a digitally signed message part


Re: [Dovecot] Need a little shadow to MySQL conversion help

2009-10-22 Thread Marc Perkel

This seems to work fine for me. Thanks to everyone for your help.

default_pass_scheme = CRYPT

password_query = \
 SELECT user_name, domain_name, password \
 FROM users WHERE user_name = '%n' AND domain_name = '%d'


But - slightly off topic. Suppose I wanted to add some kind of
date/time field to MySQL so that I can record the date and time of the
last login. Is there an easy way to do that?
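
The schema side of that is a one-liner; a sketch against the users
table from the query above (the column name is invented):

ALTER TABLE users ADD COLUMN last_login DATETIME DEFAULT NULL;

Populating it at login time is the part that needs Dovecot's help; see
the post-login scripting pointer in Timo's reply above.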




Re: [Dovecot] pop3-login: Fatal: io_loop_handle_add: epoll_ctl(1, 5):

2009-10-22 Thread Marco Nenciarini

Timo Sirainen wrote:

On Thu, 2009-10-22 at 11:44 +0200, Marco Nenciarini wrote:
This morning it happened another time, another time during the daily 
cron execution.


Oct 22 06:26:57 server dovecot: pop3-login: Panic: Leaked file fd 5: dev 
0.12 inode 1005


Can you apply the attached patch and see what it logs the next time it
happens?



I've applied the patch (with a little modification because I use
managesieve).


At this moment, on all my systems I have 1.2.6 plus the two debug
patches, and core dumps are enabled.


Marco

--
-
|Marco Nenciarini| Debian/GNU Linux Developer - Plug Member |
| mnen...@prato.linux.it | http://www.prato.linux.it/~mnencia   |
-
Key fingerprint = FED9 69C7 9E67 21F5 7D95  5270 6864 730D F095 E5E4



Re: [Dovecot] Solved.. VZW Blackberry BIS problems?

2009-10-22 Thread B. Cook

Turns out this was a self-inflicted wound..

Seems that NOD32 (version 4) and Thunderbird with multiple IMAP accounts
have a problem with 'duplication of emails'..

So lo and behold, disabling NOD32 inside TB solved the problems..

Nothing to do with dovecot..

Seems on Monday everyone had time to upgrade/update things at home..

Thanks for the help.

On 10/20/09 8:03 AM, B. Cook wrote:

Ever since Columbus day things have been strange with my two dovecot
servers, according to my Blackberry users.

We do not have a BES just the standard BIS that comes with a personal
account.

Several people have told me that since that Monday that they get
occasional duplicate copies of messages which only appear singularly in
their Maildir inboxes.

I myself even called VZW and got transferred to RIM themselves, where
they 'reset' my account (same as me going into the BIS server and
deleting and adding it again), only to have the same thing happen again,
occasionally..



Re: [Dovecot] sieve + redirect + as attachment

2009-10-22 Thread Jerry
On Thu, 22 Oct 2009 12:38:45 -0400
Charles Marcus  wrote:

> On 10/22/2009, Stephan Bosch (step...@rename-it.nl) wrote:
> > What is your exact application? I can take this issue to the Sieve
> > mailinglist.
> 
> One use case I would need is for a spam bucket.
> 
> We have an out-sourced anti-spam service, and I have more than a few
> ancient email addresses that get nothing but spam, and I'd like to be
> able to forward all inbound messages (server side, as they come in)
> to a designated address(es), but our provider requires them to be
> forwarded as an attachment.

Interestingly enough, that is one of the reasons I would benefit from
sieve having the ability to forward-as-attachment, possibly with the
ability to use multiple addresses in the process.

-- 
Jerry
ges...@yahoo.com


One way to stop a run away horse is to bet on him.


Re: [Dovecot] pop3-login: Fatal: io_loop_handle_add: epoll_ctl(1, 5):

2009-10-22 Thread Timo Sirainen
On Thu, 2009-10-22 at 11:44 +0200, Marco Nenciarini wrote:
> This morning it happened again, once more during the daily
> cron execution.
> 
> Oct 22 06:26:57 server dovecot: pop3-login: Panic: Leaked file fd 5: dev 
> 0.12 inode 1005

Can you apply the attached patch and see what it logs the next time it
happens?
diff -r ab32d7e2c0d6 src/login-common/main.c
--- a/src/login-common/main.c	Tue Oct 20 15:49:01 2009 -0400
+++ b/src/login-common/main.c	Thu Oct 22 12:50:52 2009 -0400
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 
 bool disable_plaintext_auth, process_per_connection;
 bool verbose_proctitle, verbose_ssl, verbose_auth, auth_debug;
@@ -427,6 +428,13 @@
 	   restrict_access_by_env() is called */
 	lib_init();
 
+	for (i = 3; i < 20; i++) {
+		struct stat st;
+
+		if (fstat(i, &st) == 0 && major(st.st_dev) == 0 && minor(st.st_dev) == 12)
+			i_panic("login fd %d is ino %ld", i, (long)st.st_ino);
+	}
+
 	if (is_inetd) {
 		/* running from inetd. create master process before
 		   dropping privileges. */
diff -r ab32d7e2c0d6 src/master/listener.c
--- a/src/master/listener.c	Tue Oct 20 15:49:01 2009 -0400
+++ b/src/master/listener.c	Thu Oct 22 12:50:52 2009 -0400
@@ -8,6 +8,41 @@
 
 #include 
 #include 
+#include 
+
+static void check_listeners(struct settings *set)
+{
+	const struct listener *listens;
+	unsigned int i, listen_count;
+
+	if (array_is_created(&set->listens)) {
+		listens = array_get(&set->listens, &listen_count);
+		for (i = 0; i < listen_count; i++) {
+			struct stat st;
+
+			if (listens[i].fd < 0) continue;
+			if (net_getsockname(listens[i].fd, NULL, NULL) == 0) continue;
+
+			if (fstat(listens[i].fd, &st) < 0) i_panic("fstat(%d) failed: %m", listens[i].fd);
+			i_panic("listener %d is dev %d.%d ino %ld", i,
+				major(st.st_dev), minor(st.st_dev), (long)st.st_ino);
+		}
+	}
+
+	if (array_is_created(&set->ssl_listens)) {
+		listens = array_get(&set->ssl_listens, &listen_count);
+		for (i = 0; i < listen_count; i++) {
+			struct stat st;
+
+			if (listens[i].fd < 0) continue;
+			if (net_getsockname(listens[i].fd, NULL, NULL) == 0) continue;
+
+			if (fstat(listens[i].fd, &st) < 0) i_panic("fstat(ssl %d) failed: %m", listens[i].fd);
+			i_panic("ssl listener %d is dev %d.%d ino %ld", i,
+				major(st.st_dev), minor(st.st_dev), (long)st.st_ino);
+		}
+	}
+}
 
 static void resolve_ip(const char *set_name, const char *name,
 		   struct ip_addr *ip, unsigned int *port)
@@ -345,10 +380,14 @@
 	}
 
 	for (server = settings_root; server != NULL; server = server->next) {
-		if (server->imap != NULL)
+		if (server->imap != NULL) {
 			listener_listen_missing(server->imap, "imap", retry);
-		if (server->pop3 != NULL)
+			check_listeners(server->imap);
+		}
+		if (server->pop3 != NULL) {
 			listener_listen_missing(server->pop3, "pop3", retry);
+			check_listeners(server->pop3);
+		}
 	}
 }
 
diff -r ab32d7e2c0d6 src/master/login-process.c
--- a/src/master/login-process.c	Tue Oct 20 15:49:01 2009 -0400
+++ b/src/master/login-process.c	Thu Oct 22 12:50:52 2009 -0400
@@ -698,15 +698,31 @@
 	cur_fd = LOGIN_MASTER_SOCKET_FD + 1;
 	if (array_is_created(&group->set->listens)) {
 		listens = array_get(&group->set->listens, &listen_count);
-		for (i = 0; i < listen_count; i++, cur_fd++)
+		for (i = 0; i < listen_count; i++, cur_fd++) {
+			struct stat st;
+
+			if (net_getsockname(listens[i].fd, NULL, NULL) < 0) {
+				if (fstat(listens[i].fd, &st) < 0) i_panic("fstat(%d) failed: %m", listens[i].fd);
+				i_panic("listener %d is dev %d.%d ino %ld", i,
+					major(st.st_dev), minor(st.st_dev), (long)st.st_ino);
+			}
 			dup2_append(&dups, listens[i].fd, cur_fd);
+		}
 	}
 
 	if (array_is_created(&group->set->ssl_listens)) {
 		listens = array_get(&group->set->ssl_listens,
 &ssl_listen_count);
-		for (i = 0; i < ssl_listen_count; i++, cur_fd++)
+		for (i = 0; i < ssl_listen_count; i++, cur_fd++) {
+			struct stat st;
+
+			if (net_getsockname(listens[i].fd, NULL, NULL) < 0) {
+				if (fstat(listens[i].fd, &st) < 0) i_panic("fstat(%d) failed: %m", listens[i].fd);
+				i_panic("ssl listener %d is dev %d.%d ino %ld", i,
+					major(st.st_dev), minor(st.st_dev), (long)st.st_ino);
+			}
 			dup2_append(&dups, listens[i].fd, cur_fd);
+		}
 	}
 
 	/* make sure we don't leak syslog fd. try to do it as late as possible,


signature.asc
Description: This is a digitally signed message part


Re: [Dovecot] sieve + redirect + as attachment

2009-10-22 Thread Charles Marcus
On 10/22/2009, Stephan Bosch (step...@rename-it.nl) wrote:
> What is your exact application? I can take this issue to the Sieve 
> mailinglist.

One use case I would need is for a spam bucket.

We have an out-sourced anti-spam service, and I have more than a few
ancient email addresses that get nothing but spam, and I'd like to be
able to forward all inbound messages (server side, as they come in) to a
designated address(es), but our provider requires them to be forwarded
as an attachment.

-- 

Best regards,

Charles


[Dovecot] Public Folder Quotas

2009-10-22 Thread Peter Fraser
Hi All

I'm really busy adding features to dovecot running on my dev box to
later move into prod. I saw that public mailbox quotas were added in
1.2. Does anyone have this working? I haven't been able to find docs on
that yet.


Re: [Dovecot] Public Folders

2009-10-22 Thread Thomas Leuxner

On 22.10.2009 at 16:14, Peter Fraser wrote:


Hi All
I'm trying to implement public folders. My dovecot -n readout is at
the bottom. I created a maildir called resumes in /home/public

Its contents are:
mail# ls -la /home/public/resumes
total 6
drwx------  3 vmail  vmail  512 Oct 22 08:58 .
drwx------  4 vmail  vmail  512 Oct 22 08:47 ..
drwx------  5 vmail  vmail  512 Oct 22 08:58 Maildir
-rw-------  1 vmail  vmail    0 Oct 21 18:30 dovecot-acl-list



Hi,

The layout in the public namespace is supposed to be in Maildir++
flavour:

/home/public/.resumes


namespace:
 type: public
 separator: /
 prefix: public/
 location: maildir:/home/public
 list: yes
 subscriptions: yes


So your shared folder should be called '.resumes', without a 'Maildir'
subdirectory in it. Instead of the file 'dovecot-acl-list', use
'dovecot-acl' inside that folder. The list ACL file 'dovecot-acl-list'
is then automatically created in the root directory.
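
In other words, the layout should end up roughly like this (a sketch):

/home/public/.resumes/             <- the shared folder, Maildir++ style
/home/public/.resumes/{cur,new,tmp}
/home/public/.resumes/dovecot-acl  <- per-folder ACLs go here
/home/public/dovecot-acl-list      <- maintained by Dovecot itself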


Regards
Thomas



Re: [Dovecot] HA Dovecot Config?

2009-10-22 Thread Anthony
On Thu, Oct 22, 2009 at 12:57:56PM +0100, Ed W wrote:
> Steve wrote:
>> ...
>
> The thread moved on and no one seemed to bite, but I also have watched  
> glusterfs for a long while now and been very attracted by the basic  
> principle. I would be very interested to hear more about how it's worked  
> out for you?  Got any benchmarks, perhaps comparing against local  
> storage and vs NFS?

I'd also recommend testing DRBD.

I tested and ran a glusterfs cluster on 3 production servers 
for several months, however with only a small load, nothing
substantial. Through testing and production, I didn't run into
a single issue, stability or otherwise.

However, having said that, and as long as we're talking HA...

Complaints about stability are very common on the glusterfs
mailing lists. I also have issues with the fuse layer; it has
always been unstable in my experience. Because of this, I
dropped the idea of a glusterfs cluster. IMO, it isn't quite
production ready.

Maybe that's just my pain threshold though, and it doesn't mean
I find glusterfs unusable either. But, mail users are finicky, 
and waste enough of my time already.

I am actively pursuing a DRBD cluster now. Benchmarks on DRBD +
OCFS2 showed similar throughput, and the project appears to 
be more stable.
-- 
Anthony


[Dovecot] Public Folders

2009-10-22 Thread Peter Fraser
Hi All
I'm trying to implement public folders. My dovecot -n readout is at
the bottom. I created a maildir called resumes in /home/public

Its contents are:
mail# ls -la /home/public/resumes
total 6
drwx------  3 vmail  vmail  512 Oct 22 08:58 .
drwx------  4 vmail  vmail  512 Oct 22 08:47 ..
drwx------  5 vmail  vmail  512 Oct 22 08:58 Maildir
-rw-------  1 vmail  vmail    0 Oct 21 18:30 dovecot-acl-list

Then I created a directory called resumes in
/usr/local/etc/dovecot-acls and in it, I created the file dovecot-acl

I put the following entries in dovecot-acl
owner lrwstiekxa
user=user1 rwl

When I connect as user1 in Thunderbird, right-click on the inbox and
go to Subscribe, I see a greyed-out 'public' which I cannot subscribe
to. I do not see resumes. Any ideas on why this is?

mail# dovecot -n
# 1.2.4: /usr/local/etc/dovecot.conf
# OS: FreeBSD 7.2-RELEASE-p1 i386
protocols: imap imaps pop3 pop3s
ssl_cert_file: /usr/local/etc/dovecot/ssl/certs/dovecot.pem
ssl_key_file: /usr/local/etc/dovecot/ssl/private/dovecot.pem
ssl_cipher_list: ALL:!ADH!LOW:!SSLv2:!EXP:+HIGH:+MEDIUM
disable_plaintext_auth: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/local/libexec/dovecot/imap-login
login_executable(imap): /usr/local/libexec/dovecot/imap-login
login_executable(pop3): /usr/local/libexec/dovecot/pop3-login
login_greeting: Mail Server ready.
verbose_proctitle: yes
first_valid_uid: 1000
first_valid_gid: 1000
mail_privileged_group: vmail
mail_location: maildir:~/Maildir:INBOX=~/Maildir/:INDEX=~/Maildir/tmp/index
mail_executable(default): /usr/local/libexec/dovecot/imap
mail_executable(imap): /usr/local/libexec/dovecot/imap
mail_executable(pop3): /usr/local/libexec/dovecot/pop3
mail_plugins(default): quota imap_quota acl imap_acl
mail_plugins(imap): quota imap_quota acl imap_acl
mail_plugins(pop3): quota
mail_plugin_dir(default): /usr/local/lib/dovecot/imap
mail_plugin_dir(imap): /usr/local/lib/dovecot/imap
mail_plugin_dir(pop3): /usr/local/lib/dovecot/pop3
imap_client_workarounds(default): delay-newmail netscape-eoh
tb-extra-mailbox-sep
imap_client_workarounds(imap): delay-newmail netscape-eoh tb-extra-mailbox-sep
imap_client_workarounds(pop3):
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
dict_db_config: /usr/local/etc/dovecot-db.conf
namespace:
  type: public
  separator: /
  prefix: public/
  location: maildir:/home/public
  list: yes
  subscriptions: yes
namespace:
  type: private
  separator: /
  location: maildir:/home/vmail/%u/Maildir
  inbox: yes
  list: yes
  subscriptions: yes
lda:
  mail_plugins: quota acl
  postmaster_address: postmas...@example.com
  sendmail_path: /usr/sbin/sendmail
auth default:
  mechanisms: plain login
  username_format: %Lu
  debug: yes
  passdb:
driver: pam
args: session=yes dovecot
  passdb:
driver: ldap
args: /usr/local/etc/dovecot-ldap.conf
  userdb:
driver: static
args: uid=1002 gid=1002 home=/home/vmail/%u allow_all_users=yes
  socket:
type: listen
client:
  path: /var/run/dovecot/auth-client
  mode: 432
master:
  path: /var/run/dovecot/auth-master
  mode: 384
plugin:
  quota: maildir
  quota2: maildir:user quota
  quota_rule: *:storage=512M
  quota_rule2: Trash:storage=10M
  quota_rule3: SPAM:ignore
  quota_warning: storage=95%% /usr/local/etc/dovecot/quota-warning.sh 95
  quota_warning2: storage=80%% /usr/local/etc/dovecot/quota-warning.sh 80
  acl: vfile:/usr/local/etc/dovecot-acls:cache_secs=300


Re: [Dovecot] HA Dovecot Config?

2009-10-22 Thread Steve

 Original Message 
> Date: Thu, 22 Oct 2009 12:57:56 +0100
> From: Ed W
> To: Dovecot Mailing List
> Subject: Re: [Dovecot] HA Dovecot Config?

> Steve wrote:
>
Hello Ed,


> > I have never used FileReplicationPro, but looking at what it offers,
> > it reminds me of GlusterFS. I use GlusterFS for all www data of the
> > domains I host. I don't yet use GlusterFS for IMAP/POP storage for
> > all the domains I host; only a small subset of them use GlusterFS
> > for IMAP/POP, but so far it works without issues, and sooner or
> > later I will migrate the other domains to GlusterFS as well.
> >
> > The setup of GlusterFS is ultra easy (compared to other clustering
> > FS solutions I have seen) and it offers some very nice functions.
> >
> > In the past I burned my fingers with older GlusterFS releases when I
> > tried to use them as storage for IMAP/POP, but the later 2.0.x
> > releases of GlusterFS are more stable.
> 
> 
> The thread moved on and no one seemed to bite, but I also have watched 
> glusterfs for a long while now and been very attracted by the basic 
> principle. I would be very interested to hear more about how it's worked 
> out for you?
>
I have been using GlusterFS for a long time. For mail hosting I waited
for release 2, and when it was out I switched all mail domains to
GlusterFS 2.0.1. That was a huge failure for me: the process took so
much CPU that I could barely run anything else on the system, and
stability was very, very bad. That forced me to switch back to NFS for
all the domains. Later, around 2.0.4, I looked at it again and things
were more stable. I started running a bunch of domains on top of
GlusterFS 2.0.4, moved on to GlusterFS GIT, and somewhere around 2.0.7
the GIT version broke so horribly in my setup that I switched back to
the 2.0.7 release. That was some weeks ago, and since then I have been
running around 1/3 of my domains on top of GlusterFS 2.0.7 with two
active nodes using server-side replicate, io-threads, write-behind and
io-cache. On the client side I have no performance translators or
anything like that, just the bare client. I used server-side
replication because I wanted the shared storage to behave like a SAN,
rather than using the servers as dumb bricks where the client is
responsible for the replication. So far both nodes have 2 x 1TB disks
in RAID 1 mode, and those disks are exported as a GlusterFS brick doing
replicate. The setup in Dovecot is more or less what you would use for
NFS.
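
For reference, a server-side replicate stack of that kind looks roughly
like this in a GlusterFS 2.0-style volfile. This is a from-memory sketch
rather than my actual config: the host name, export path and auth rule
are invented, and the second node mirrors it with the roles swapped:

volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

# client translator pointing at the brick on the other node
volume node2
  type protocol/client
  option transport-type tcp
  option remote-host node2.example.com
  option remote-subvolume locks
end-volume

# server-side AFR: the server replicates to both bricks itself
volume replicate
  type cluster/replicate
  subvolumes locks node2
end-volume

volume iothreads
  type performance/io-threads
  subvolumes replicate
end-volume

volume writebehind
  type performance/write-behind
  subvolumes iothreads
end-volume

volume iocache
  type performance/io-cache
  subvolumes writebehind
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.iocache.allow 10.0.0.*
  subvolumes iocache
end-volume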


> Got any benchmarks, perhaps comparing against local
> storage and NFS?
> 
I have benchmarked, but I have nothing prepared to show others. In
general I can say that using GlusterFS on a gigabit network gives about
1/3 to 1/2 of the raw disk speed. If you add performance translators,
the speed is somewhere between 1/2 and 1/1 of raw disk speed (depending
on what block size I use and whether I use Booster or not). With the
performance translators you can easily saturate a gigabit connection. I
have not tried anything faster than gigabit.
Of course local storage is faster, since I don't have the theoretical
125MB/s limit I have when using gigabit. The newer releases of GlusterFS
are comparable to NFS in terms of speed. In terms of CPU usage GlusterFS
is way behind NFS and local storage. In terms of flexibility GlusterFS
is better than anything else.


> It seems intuitively very appealing to use something heavily distributed 
> in the style of gluster/mogile for the file data, and something fast 
> (like local storage) for the indexes.  Perhaps even dovecot proxying 
> could be used to help force users onto a persistent server to avoid 
> re-creating index data...
> 
I have never tried MogileFS. What I don't like about it is that it uses
HTTP for transport and operations.


> For smaller setups it would be appealing to find something simpler than 
> DRBD for a two server active/active setup (eg both servers acting as 
> both storage & imap and failover for each other)
> 
That is easily done with GlusterFS. Very easy.


> Cheers
> 
> Ed W
>
// Steve
-- 
FREE for all GMX members: the maxdome Movie-FLAT!
Activate it now at http://portal.gmx.net/de/go/maxdome01


Re: [Dovecot] sieve + redirect + as attachment

2009-10-22 Thread Jerry
On Thu, 22 Oct 2009 14:01:59 +0200
Stephan Bosch  wrote:

> What is your exact application? I can take this issue to the Sieve 
> mailinglist.

Cool. I will elaborate more fully then. Since this is not a dovecot
specific problem, I will reply to you directly.

-- 
Jerry
ges...@yahoo.com


genlock, n: Why he stays in the bottle.


Re: [Dovecot] sieve + redirect + as attachment

2009-10-22 Thread Stephan Bosch

Jerry wrote:

On Wed, 21 Oct 2009 23:01:14 +0200
Stephan Bosch  wrote:


Jerry wrote:

Is it possible to use the 'redirect' function in 'sieve' to forward
a message as an attachment rather than in-line?


Unfortunately, no. I thought the (draft) enclose extension could
provide this new feature, but the specification explicitly excludes
redirect from being affected:

http://ietfreport.isoc.org/idref/draft-ietf-sieve-mime-loop/#page-11

I am not sure exactly why. To my knowledge this is not provided by
any other Sieve feature/extension.

Regards,

Stephan


Honestly, I get confused reading RFCs. Since there is an 'enclose'
extension, exactly how is it intended to be used? In any case, sieve
seriously needs a way to forward a message as an attachment.

What is your exact application? I can take this issue to the Sieve 
mailinglist.


Regards,

Stephan


Re: [Dovecot] HA Dovecot Config?

2009-10-22 Thread Ed W

Steve wrote:

I have never used FileReplicationPro, but looking at what it offers, it
reminds me of GlusterFS. I use GlusterFS for all www data of the domains
I host. I don't yet use GlusterFS for IMAP/POP storage for all the
domains I host; only a small subset of them use GlusterFS for IMAP/POP,
but so far it works without issues, and sooner or later I will migrate
the other domains to GlusterFS as well.

The setup of GlusterFS is ultra easy (compared to other clustering FS
solutions I have seen) and it offers some very nice functions.

In the past I burned my fingers with older GlusterFS releases when I
tried to use them as storage for IMAP/POP, but the later 2.0.x releases
of GlusterFS are more stable.



The thread moved on and no one seemed to bite, but I also have watched 
glusterfs for a long while now and been very attracted by the basic 
principle. I would be very interested to hear more about how it's worked 
out for you? Got any benchmarks, perhaps comparing against local
storage and NFS?


It seems intuitively very appealing to use something heavily distributed 
in the style of gluster/mogile for the file data, and something fast 
(like local storage) for the indexes.  Perhaps even dovecot proxying 
could be used to help force users onto a persistent server to avoid 
re-creating index data...


For smaller setups it would be appealing to find something simpler than 
DRBD for a two server active/active setup (eg both servers acting as 
both storage & imap and failover for each other)


Cheers

Ed W



Re: [Dovecot] sieve + redirect + as attachment

2009-10-22 Thread Jerry
On Wed, 21 Oct 2009 23:01:14 +0200
Stephan Bosch  wrote:

> Jerry wrote:
> > Is it possible to use the 'redirect' function in 'sieve' to forward
> > a message as an attachment rather than in-line?
> > 
> Unfortunately, no. I thought the (draft) enclose extension could
> provide this new feature, but the specification explicitly excludes
> redirect from being affected:
> 
> http://ietfreport.isoc.org/idref/draft-ietf-sieve-mime-loop/#page-11
> 
> I am not sure exactly why. To my knowledge this is not provided by
> any other Sieve feature/extension.
> 
> Regards,
> 
> Stephan

Honestly, I get confused reading RFCs. Since there is an 'enclose'
extension, exactly how is it intended to be used? In any case, sieve
seriously needs a way to forward a message as an attachment.
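
For reference, the draft's enclose action looks roughly like this (a
sketch based on draft-ietf-sieve-mime-loop; the address is invented).
The catch, as noted above, is that the draft explicitly leaves redirect
unaffected, so the redirected copy would still go out unmodified:

require ["enclose"];

# wrap the original message as an attachment to a new cover message
enclose :subject "Original message attached"
        "Please see the attached message.";

# ...but per the draft, redirect still forwards the original as-is
redirect "spam-bucket@example.com";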

-- 
Jerry
ges...@yahoo.com


I THINK MAN INVENTED THE CAR by instinct.

Jack Handey, The New Mexican, 1988


Re: [Dovecot] simple steps with sieve

2009-10-22 Thread Gavin Hamill
On Tue, 2009-10-20 at 17:34 +0200, Stephan Bosch wrote:
> Peter Borg wrote:
> > I find it really hard to believe that Gavin and I are the only ones to hit
> > this issue. That said I've probably been hacking at this particular system
> > too long and am missing something very obvious!
> > 
> You're definitely not the only one. Finding a good solution is difficult 
> however. The intention of this check within the vacation action is to 
> prevent spurious vacation responses to for example Bcc:'ed deliveries 
> (and perhaps multi-drop aliases).

For my own situation, I solved this issue entirely from the 'roundcube'
webmail package. Help on the roundcubeforums site suggested changes to
the 'sieverules' plugin so :addresses would always be included in a
vacation rule. I extended this to the :from parameter.

Also used roundcube's new_user_identity plugin to populate the primary
identity of an 'ar-wbim' username with 'wim@ourdomain.com', so that
the :from and :addresses use the canonical format.
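
The generated rule then ends up looking something like this (a sketch;
the exact output of the sieverules plugin may differ, and the reason
text is invented):

require ["vacation"];

vacation :days 7
    :addresses ["ar-wbim@ourdomain.com", "wim@ourdomain.com"]
    :from "wim@ourdomain.com"
    "I am out of the office at the moment.";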

Fairly ugly, but it works for us. I'll document the changes if I get a
moment.

Cheers,
Gavin.




Re: [Dovecot] pop3-login: Fatal: io_loop_handle_add: epoll_ctl(1, 5):

2009-10-22 Thread Marco Nenciarini
This morning it happened again, once more during the daily
cron execution.


Oct 22 06:26:57 server dovecot: pop3-login: Panic: Leaked file fd 5: dev 
0.12 inode 1005
Oct 22 06:26:57 server dovecot: dovecot: Temporary failure in creating 
login processes, slowing down for now
Oct 22 06:26:57 server dovecot: dovecot: child 21311 (login) killed with 
signal 6 (core dumps disabled)


I have dovecot 1.2.6 with Timo's patch to check leaked descriptors.

Marco

--
-
|Marco Nenciarini| Debian/GNU Linux Developer - Plug Member |
| mnen...@prato.linux.it | http://www.prato.linux.it/~mnencia   |
-
Key fingerprint = FED9 69C7 9E67 21F5 7D95  5270 6864 730D F095 E5E4



Re: [Dovecot] pop3-login: Fatal: io_loop_handle_add: epoll_ctl(1, 5):

2009-10-22 Thread Marco Nenciarini

Brandon Davidson wrote:

Hi Marco,

Let's see what Timo has to say about that log file bit. Since it seems to
happen to you fairly frequently, it might be worth enabling core dumps as
well?



You are right. I've just rebuilt my package with -g -O0 and enabled core 
dumps.
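
For anyone else chasing this, "enabling core dumps" for the login
processes typically means something like this on Linux (a sketch; the
core directory is arbitrary):

# allow cores from the dovecot processes (e.g. in the init script)
ulimit -c unlimited

# login processes drop privileges, so allow dumps from setuid processes
sysctl -w fs.suid_dumpable=2
sysctl -w kernel.core_pattern=/var/core/core.%e.%p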


Marco

--
-
|Marco Nenciarini| Debian/GNU Linux Developer - Plug Member |
| mnen...@prato.linux.it | http://www.prato.linux.it/~mnencia   |
-
Key fingerprint = FED9 69C7 9E67 21F5 7D95  5270 6864 730D F095 E5E4



Re: [Dovecot] pop3-login: Fatal: io_loop_handle_add: epoll_ctl(1, 5):

2009-10-22 Thread Brandon Davidson
Hi Marco,

On 10/22/09 1:50 AM, "Marco Nenciarini"  wrote:
> This morning it happened again, once more during the daily
> cron execution.
> 
> Oct 22 06:26:57 server dovecot: pop3-login: Panic: Leaked file fd 5: dev
> 0.12 inode 1005
> Oct 22 06:26:57 server dovecot: dovecot: Temporary failure in creating
> login processes, slowing down for now
> Oct 22 06:26:57 server dovecot: dovecot: child 21311 (login) killed with
> signal 6 (core dumps disabled)
> 
> I have dovecot 1.2.6 with Timo's patch to check leaked descriptors.

I rebuilt the binaries on our hosts with optimization disabled, and I'm
still waiting for it to reoccur so I can gather file descriptor information
and a core. I don't have the leak-detect patch applied.

Let's see what Timo has to say about that log file bit. Since it seems to
happen to you fairly frequently, it might be worth enabling core dumps as
well?

-Brad



Re: [Dovecot] NFS random redirects

2009-10-22 Thread Brandon Davidson
Thomas,

On 10/22/09 1:29 AM, "Thomas Hummel"  wrote:
> On Wed, Oct 21, 2009 at 09:39:22AM -0700, Brandon Davidson wrote:
>> As a contrasting data point, we run NFS + random redirects with almost no
>> problems. 
> 
> Thanks for your answer as well.
> 
> What mailbox format are you using?

We switched to Maildir a while back due to performance issues with mbox,
primarily centered around locking and the cost of rewriting the entire file
when one message changes. Haven't looked back since.

Our config is pretty vanilla - users in LDAP (via pam_ldap), standard UNIX
home directory layout, Sendmail on the MTA hosts.

-Brad 



Re: [Dovecot] NFS random redirects

2009-10-22 Thread Thomas Hummel
On Wed, Oct 21, 2009 at 09:39:22AM -0700, Brandon Davidson wrote:

> As a contrasting data point, we run NFS + random redirects with almost no
> problems. 

Thanks for your answer as well.

What mailbox format are you using?

-- 
Thomas Hummel   | Institut Pasteur
 | Pôle informatique - systèmes et réseau


Re: [Dovecot] NFS random redirects

2009-10-22 Thread Thomas Hummel
On Wed, Oct 21, 2009 at 04:59:50PM +0100, Guy wrote:

> Our current setup uses two NFS mounts accessed simultaneously by two
> servers.

[...]

Thanks for sharing your experience.
Are you using mbox, dbox or maildir?
What % of IMAP and POP3 clients?

-- 
Thomas Hummel   | Institut Pasteur
 | Pôle informatique - systèmes et réseau