Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Stan Hoeppner
Henrique Fernandes put forth on 1/21/2011 1:38 AM:

 We are out of ideas to make it faster. We only came up with making more
 ocfs2 clusters with smaller disks. With this we are getting better
 performance. We now have 2 clusters, one with 4 TB and the other with 1 TB,
 and are migrating some of the emails from the 4 TB to the 1 TB; another
 1 TB cluster is already prepared. So we have 3 machines, and those 3 each
 mount 3 disks from the storage, forming 3 ocfs2 clusters. So we think each
 DLM gets less work.  Are we right?

That's impossible to say without me having an understanding of how this is
actually set up.  From your description I'm unable to understand what you have.

-- 
Stan


Re: [Dovecot] utility to update indexes ?

2011-01-21 Thread Jan-Frode Myklebust
On Fri, Jan 21, 2011 at 01:08:09AM +0200, Timo Sirainen wrote:
 On Thu, 2011-01-20 at 23:50 +0100, Jan-Frode Myklebust wrote:
 
  But this won't work if the maildir has been modified outside of
  dovecot (i.e. webmail usage). Is there any simple interface I can use
  in this short snippet for noticing that the index is out of sync, and
  updating it?
 
 With v2.0 you could use doveadm easily (you can also just use doveadm
 binary from v2.0 and keep using v1.2 elsewhere):
 
 doveadm mailbox status unseen inbox
 

This sounds great, but I'm struggling to get it working... It complains
about:

$ doveadm -v mailbox status -u u...@example.com unseen inbox
doveadm(u...@example.com): Error: userdb lookup: 
connect(/usr/local/dovecot-2.0.9/var/run/dovecot/auth-userdb) failed: No such 
file or directory
doveadm(u...@example.com): Fatal: User lookup failed: Internal error occurred. 
Refer to server log for more information.

Will I need to have the dovecot-2 daemon running for this to work?


My config was quickly converted from v1.2 with dovecot -n > new.conf
and very few modifications..

<dovecot -n>
# 2.0.9: /usr/local/dovecot-2.0.9/etc/dovecot/dovecot.conf
# OS: Linux 2.6.9-89.0.9.ELsmp x86_64 Red Hat Enterprise Linux ES
# release 4 (Nahant Update 8) 
auth_verbose = yes
disable_plaintext_auth = no
mail_gid = 3000
mail_uid = 3000
mmap_disable = yes
namespace {
  inbox = yes
  location = 
  prefix = INBOX.
  type = private
}
passdb {
  args = /usr/local/dovecot2/etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
plugin {
  quota = maildir
}
protocols = imap pop3
service auth {
  unix_listener /var/run/dovecot/auth-master {
group = atmail
mode = 0660
user = root
  }
  user = dovecot-auth
}
service imap-login {
  inet_listener imap {
address = *
port = 143
  }
  user = dovecot
}
service imap {
  executable = /usr/local/dovecot2/sbin/imap-wrapper.sh
  process_limit = 300
}
service pop3-login {
  inet_listener pop3 {
address = *
port = 110
  }
  user = dovecot
}
service pop3 {
  executable = /usr/local/dovecot2/sbin/pop-wrapper.sh
  process_limit = 300
}
ssl = no
userdb {
  args = /usr/local/dovecot2/etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
protocol imap {
  imap_client_workarounds = delay-newmail
  mail_plugins = quota imap_quota
}
protocol pop3 {
  mail_plugins = quota
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_uidl_format = UID%u-%v
}
protocol lda {
  auth_socket_path = /var/run/dovecot/auth-master
  mail_plugins = quota
  postmaster_address = mailer-dae...@example.com
  sendmail_path = /usr/sbin/sendmail
}
</dovecot -n>

$ grep -v ^# /usr/local/dovecot2/etc/dovecot/dovecot-ldap.conf|grep -v ^$
hosts = ldapm1.example.com:389 ldapm2.example.com:389
auth_bind = yes
auth_bind_userdn = uid=%n,ou=people,o=%d,o=ISP,o=example,c=com
base = ou=people,o=%d,o=ISP,o=example,c=com
deref = never
scope = onelevel
user_filter = (&(objectClass=mailPerson)(uid=%n))
user_attrs =
mailMessageStore=mail=maildir:%$:INDEX=/indexes/%1u/%1.1u/%u,mailQuota=quota_rule=*:storage=%$


Also tried a minimal dovecot.conf:

$ ../../sbin/dovecot -n
# 2.0.9: /usr/local/dovecot-2.0.9/etc/dovecot/dovecot.conf
# OS: Linux 2.6.9-89.0.9.ELsmp x86_64 Red Hat Enterprise Linux ES
# release 4 (Nahant Update 8) 
mail_gid = 3000
mail_uid = 3000
mmap_disable = yes
ssl = no
userdb {
  args = /usr/local/dovecot2/etc/dovecot/dovecot-ldap.conf
  driver = ldap
}

But get the exact same errors..


  -jf


Re: [Dovecot] Delivered-To header without +extension ?

2011-01-21 Thread Per Jessen
Charles Marcus wrote:

 On 2011-01-20 4:06 AM, Per Jessen wrote:
 I've been reading
 a bit, and I think the issue is that postfix adds X-Original-To when
 delivering to a mailbox - which delivery via smtp/lmtp isn't.
 
 I'm not sure if postfix should be adding it - postfix applies
 virtual_aliases_maps, then delivers to dovecot via lmtp (set up via
 virtual_transport) - without X-Original-To, the information
 of original recipient is lost at this point.
 
 Yikes... I've been planning on switching to LMTP for delivery, but
 this would be a show-stopper...
 
 Please keep us updated on what you find out...

It looks like the issue was discussed here:

http://marc.info/?l=postfix-users&m=118852762117587

Wietse concludes that the virtual aliasing would be better done on the
final station, i.e. dovecot.  Personally I don't need the X-Original-To
header, but it does seem like it ought to be written by whatever is
chosen as virtual_transport, rather than only virtual or pipe.
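
For context, the handoff being discussed is the usual one - roughly this in
postfix's main.cf (the LMTP socket path here is only an assumption; it depends
on how the dovecot lmtp service is configured):

virtual_transport = lmtp:unix:private/dovecot-lmtp
virtual_alias_maps = hash:/etc/postfix/virtual

virtual_alias_maps rewrites the recipient before the lmtp transport runs, so
the original address is already gone by the time dovecot sees the message.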


/Per Jessen, Zürich



Re: [Dovecot] LMTP home, chroot, mail userdb fields.

2011-01-21 Thread Lev Serebryakov
Hello, Timo.
You wrote on 21 January 2011 at 0:10:12:

 On Thu, 2011-01-20 at 15:21 +0300, Lev Serebryakov wrote:
 Jan 20 12:19:25 lmtp(38939, l...@domain.com): Error: mkdir(./cur) in 
 directory /var/run/dovecot failed: Permission denied (euid=3(v-mail) 
 egid=3(v-mail) missing +w perm: ., euid is not dir owner)
 Fixed: http://hg.dovecot.org/dovecot-2.0/rev/0fc2d00f83df
  Sorry, it doesn't. I've added some logging via i_error() (I know, it
looks more like i_debug()), and now the log shows me:

Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: (LEV-ADDITION) Replace 
home (/) with chroot (/usr/home/hosted/v-mail/domain.com/lev)
Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: (LEV-ADDITION) Set 
mail_home to (/usr/home/hosted/v-mail/domain.com/lev)
Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: mkdir(./cur) in directory 
/var/run/dovecot failed: Permission denied (euid=3(v-mail) 
egid=3(v-mail) missing +w perm: ., euid is not dir owner)
Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: Opening INBOX failed: 
Mailbox doesn't exist: INBOX
Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: mkdir(./cur) in directory 
/var/run/dovecot failed: Permission denied (euid=3(v-mail) 
egid=3(v-mail) missing +w perm: ., euid is not dir owner)
Jan 21 14:01:36 lmtp(17650, l...@domain.com): Info: gJIWCJBnOU3yRAAAWL5c8Q: 
msgid=unspecified: save failed to INBOX: Internal error occurred. Refer to 
server log for more information. [2011-01-21 14:01:36]
Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: BUG: Saving failed to 
unknown storage
Jan 21 14:01:36 lmtp(17650): Info: Disconnect from local: Client quit


-- 
// Black Lion AKA Lev Serebryakov l...@serebryakov.spb.ru



Re: [Dovecot] utility to update indexes ?

2011-01-21 Thread Timo Sirainen
On 21.1.2011, at 10.47, Jan-Frode Myklebust wrote:

 doveadm mailbox status unseen inbox
 This sounds great, but I'm struggling to get it working... It complains
 about:
 
 $ doveadm -v mailbox status -u u...@example.com unseen inbox
 doveadm(u...@example.com): Error: userdb lookup: 
 connect(/usr/local/dovecot-2.0.9/var/run/dovecot/auth-userdb) failed: No such 
 file or directory

This needs to be able to connect to v1.2's auth-master socket. So the first 
problem is to make it look in the correct directory:

base_dir = /var/run/dovecot

(or wherever your v1.2's base_dir is)

Then you'll also need to configure v1.2 to add auth-userdb socket on top of 
auth-master socket (if you didn't already have it). http://wiki.dovecot.org/LDA 
describes how to add auth-master socket. Just name it auth-userdb instead of 
auth-master.
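
Roughly, that means adding something like this to the v1.2 config (a sketch
only - the mode/user values are placeholders, reuse whatever your existing
auth-master socket has):

auth default {
  # existing mechanisms/passdb/userdb settings stay as they are
  socket listen {
    master {
      path = /var/run/dovecot/auth-userdb
      mode = 0600
      user = vmail
    }
  }
}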



Re: [Dovecot] Delivered-To header without +extension ?

2011-01-21 Thread Charles Marcus
On 2011-01-21 4:30 AM, Per Jessen wrote:
 Charles Marcus wrote:
 On 2011-01-20 4:06 AM, Per Jessen wrote:
 I've been reading
 a bit, and I think the issue is that postfix adds X-Original-To when
 delivering to a mailbox - which delivery via smtp/lmtp isn't.

 I'm not sure if postfix should be adding it - postfix applies
 virtual_aliases_maps, then delivers to dovecot via lmtp (set up via
 virtual_transport) - without X-Original-To, the information
 of original recipient is lost at this point.

 Yikes... I've been planning on switching to LMTP for delivery, but
 this would be a show-stopper...

 Please keep us updated on what you find out...

 It looks like the issue was discussed here:
 
 http://marc.info/?l=postfix-users&m=118852762117587
 
 Wietse concludes that the virtual aliasing would be better done on the
 final station, i.e. dovecot.  Personally I don't need the X-Original-To
 header, but it does seem like it ought to be written by whatever is
 chosen as virtual_transport, rather than only virtual or pipe.

Thanks...

Thoughts Timo?

-- 

Best regards,

Charles


Re: [Dovecot] utility to update indexes ?

2011-01-21 Thread Jan-Frode Myklebust
On Fri, Jan 21, 2011 at 03:08:26PM +0200, Timo Sirainen wrote:
  
  $ doveadm -v mailbox status -u u...@example.com unseen inbox
  doveadm(u...@example.com): Error: userdb lookup: 
  connect(/usr/local/dovecot-2.0.9/var/run/dovecot/auth-userdb) failed: No 
  such file or directory
 
 
 Then you'll also need to configure v1.2 to add auth-userdb socket on top of 
 auth-master socket (if you didn't already have it). 
 http://wiki.dovecot.org/LDA describes how to add auth-master socket. Just 
 name it auth-userdb instead of auth-master.
 

I already had a /var/run/dovecot/auth-master, so I just created a 
symlink to it from /usr/local/dovecot-2.0.9/var/run/dovecot/auth-userdb
and now doveadm works. Thanks again!
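
In other words, roughly:

$ ln -s /var/run/dovecot/auth-master \
    /usr/local/dovecot-2.0.9/var/run/dovecot/auth-userdb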



  -jf


Re: [Dovecot] SSD drives are really fast running Dovecot

2011-01-21 Thread Jerry
Seriously, isn't it time this thread died a peaceful death? It has long
since failed to have any real relevance to Dovecot, except in the
most extreme sense. It has evolved into a few testosterone poisoned
individuals attempting to make this forum a theater for some mating
ritual. If they seriously want to continue this convoluted thread,
perhaps they would be so kind as to take it off-list and find a platform
better suited for this public display. At the very least, I would hope
that Timo might consider closing this thread. I know that Wietse would
never have let this thread reach this point on the Postfix forum.

In any case, I am now creating a kill filter to dispense with it.

-- 
Jerry ✌
dovecot.u...@seibercom.net

Disclaimer: off-list followups get on-list replies or get ignored.
Please do not ignore the Reply-To header.



Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Stan Hoeppner
Jan-Frode Myklebust put forth on 1/21/2011 5:49 AM:
 On Thu, Jan 20, 2011 at 10:14:42PM -0600, Stan Hoeppner wrote:

 Have you considered SGI CXFS?  It's the fastest cluster FS on the planet
 by an order of magnitude.  It uses dedicated metadata servers instead of
 a DLM, which is why it's so fast.  Directory traversal operations would
 be orders of magnitude faster than what you have now.
 
 That sounds quite impressive. Order of magnitude improvements would be
 very welcome. Do you have any data to back up that statement? Are you
 talking streaming performance, IOPS or both?

Both.

 I've read that CXFS has bad metadata performance, and that the 
 metadata-server can become a bottleneck.. Is the metadata-server 
 function only possible to run on one node (with passive standby node for
 availability) ?

Where did you read this?  I'd like to take a look.  The reason CXFS is faster
than other cluster filesystems is _because of_ the metadata broker.  It is much
faster than distributed lock manager schemes at high loads, and equally fast at
low loads.  There is one active metadata broker server _per filesystem_ with as
many standby backup servers per filesystem as you want.  So for a filesystem
seeing heavy IOPS you'd want a dedicated metadata broker.  For filesystems
storing large amounts of data but with low metadata IOPS you would use one
broker server for multiple such filesystems.

Using GbE for the metadata network yields excellent performance.  Using
Infiniband is even better, especially with large CXFS client node counts under
high loads, due to the dramatically lower packet latency through the switches,
and a typical 20 or 40 Gbit signaling rate for 4x DDR/QDR.  Using Infiniband for
the metadata network actually helps DLM cluster filesystems more than those with
metadata brokers.

 Do you know anything about the pricing of CXFS? I'm quite satisfied with
 GPFS, but know I might be a bit biased since I work for IBM :-) If CXFS
 really is that good for maildir-type storage, I probably should have
 another look..

Given the financial situation SGI has found itself in the last few years, I have
no idea how they're pricing CXFS or the SAN arrays.  One acquisition downside to
CXFS is that you have to deploy the CXFS metadata brokers on SGI hardware only,
and their servers are more expensive than most nearly identical competing 
products.

Typically, they only sell CXFS as an add-on to their fiber channel SAN products.
 So it's not an inexpensive solution.  It's extremely high performance, but you
pay for it.  Honestly, for most organizations doing mail clusters, unless you
have a _huge_ user base and lots of budget, you might not be able to afford an
SGI solution for mail cluster data storage.  It never hurts to ask though, and
sales people's time is free to potential customers.  If your current cluster
filesystem+SAN isn't cutting it, it can't hurt to ask an SGI salesperson.

At minimum you're probably looking at the cost of an Altix UV10 for the metadata
broker server, an SGI InfiniteStorage 4100 Array, and the CXFS licenses for each
cluster node you connect.  Obviously you'll need other things such as a fiber
channel switch, HBAs, etc, but that's the same for any other fiber channel
cluster setup.

Even though you may pay a small price premium, SGI's fiber channel arrays are
truly some of the best available.  The specs on their lowest end model, the
4100, are pretty darn impressive for the _bottom_ of the line card:
http://www.sgi.com/pdfs/4180.pdf

If/when deploying such a solution, it really pays to use fewer fat Dovecot nodes
instead of lots of thin nodes.  Fewer big core count boxes with lots of memory
and a single FC HBA cost less in the long run than many lower core count boxes
with low memory and an HBA.  The cost of a single port FC HBA is typically more
than a white box 1U single socket quad core server with 4GB RAM.  Add the FC HBA
and CXFS license to each node and you should see why fewer larger nodes is 
better.

-- 
Stan


Re: [Dovecot] SSD drives are really fast running Dovecot

2011-01-21 Thread Stan Hoeppner
Jerry put forth on 1/21/2011 7:53 AM:
 Seriously, isn't it time this thread died a peaceful death? It has long
 since failed to have any real relevance to Dovecot, except in the
 most extreme sense. It has evolved into a few testosterone poisoned
 individuals attempting to make this forum a theater for some mating
 ritual. If they seriously want to continue this convoluted thread,
 perhaps they would be so kind as to take it off-list and find a platform
 better suited for this public display. At the very least, I would hope
 that Timo might consider closing this thread. I know that Wietse would
 never have let this thread reach this point on the Postfix forum.
 
 In any case, I am now creating a kill filter to dispense with it.

I'm guilty as charged.  Consider it dead.  Sorry for the noise Jerry, everyone.

-- 
Stan




[Dovecot] Does dsync handle client-side deletions?

2011-01-21 Thread Patrick Schoenfeld
Hi there,

I'm currently evaluating the idea of a multi-master setup where each
node shall hold a full copy of the mailboxes. Basically the idea is
to use NFS and dsync to keep those copies in sync. So I did some
tests with dsync and ran into a problem. Consider the following
scenario:

1) Location1 and Location2 are in sync
2) A mail gets deleted on Location1 (via IMAP)
3) dsync mirror run to sync the two locations

Expected behaviour:
dsync notices that the mail was deleted on Location1 and also deletes
it on Location2 to get the locations in sync.

What I experience, however, is:
dsync notices that the mail is missing on Location2 and copies it
from Location1 to get the locations in sync.

(At least) in debug mode it emits a warning:

dsync(test2): Info: INBOX: highest_modseq changed: 8 != 11
dsync(test2): Info: INBOX: Couldn't keep all uids
dsync(test2): Info: INBOX: Ignored 1 modseq changes
dsync(test2): Warning: Mailbox changes caused a desync. You may want to
run dsync again.

Now the question is: Doesn't dsync handle deletions or is there something
I missed? Dovecot version is 2.0.9.
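
For reference, the mirror runs are essentially this (the mailbox path here is
just illustrative):

$ dsync -u test2 mirror maildir:/var/spool/mail2/test2/Maildir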

Thanks in advance
and best Regards,

Patrick


Re: [Dovecot] How to enable COPY and APPEND commands separately

2011-01-21 Thread Charles Marcus
On 2011-01-20 6:51 PM, Alex Cherniak wrote:
 I'd like to allow a user to move messages between his folders on Dovecot
 IMAP account, but prevent move/copy from different accounts (Exchange in
 particular).
 Outlook uses xx UID COPY 1 folder and then xx UID STORE 1 +FLAGS
 (\Deleted \Seen) for internal moves and xx APPEND folder for external
 ones.
 I tried to achieve this with ACL, but i (insert) seems to control both.
 Do I miss something? Should I look somewhere else?
 Please help.

Better to describe the actual *problem* and the end result you are
actually trying to achieve, rather than asking for help with a
pre-conceived solution that may or may not be appropriate or necessary...

-- 

Best regards,

Charles


Re: [Dovecot] utility to update indexes ?

2011-01-21 Thread Charles Marcus
On 2011-01-20 6:08 PM, Timo Sirainen wrote:
 On Thu, 2011-01-20 at 23:50 +0100, Jan-Frode Myklebust wrote:
 But this won't work if the maildir has been modified outside of
 dovecot (i.e. webmail usage). Is there any simple interface I can use
 in this short snippet for noticing that the index is out of sync, and
 updating it?

 With v2.0 you could use doveadm easily (you can also just use doveadm
 binary from v2.0 and keep using v1.2 elsewhere):
 
 doveadm mailbox status unseen inbox
 
 If you're actually running v2.0 you can also ask this information via
 UNIX/TCP socket.

Easiest would be to just use a webmail app that talks IMAP and let it
talk directly to dovecot... ?

-- 

Best regards,

Charles


Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Henrique Fernandes
[]'sf.rique


On Fri, Jan 21, 2011 at 5:59 AM, Stan Hoeppner s...@hardwarefreak.com wrote:

 Henrique Fernandes put forth on 1/21/2011 1:38 AM:

  We are out of ideas to make it faster. We only came up with making more
  ocfs2 clusters with smaller disks. With this we are getting better
  performance. We now have 2 clusters, one with 4 TB and the other with
  1 TB, and are migrating some of the emails from the 4 TB to the 1 TB;
  another 1 TB cluster is already prepared. So we have 3 machines, and
  those 3 each mount 3 disks from the storage, forming 3 ocfs2 clusters.
  So we think each DLM gets less work.  Are we right?

 That's impossible to say without me having an understanding of how this is
 actually set up.  From your description I'm unable to understand what you
 have.


Let me try to explain better.

We have 3 virtual machines with this set up:

/dev/sda1 3.6T  2.4T  1.3T  66% /A
/dev/sdb1 1.0T   36G  989G   4% /B
/dev/sdc1 1.0T  3.3G 1021G   1% /C

/dev/sda1 on /A type ocfs2 (rw,_netdev,heartbeat=local)
/dev/sdb1 on /B type ocfs2 (rw,_netdev,heartbeat=local)
/dev/sdc1 on /C type ocfs2 (rw,_netdev,heartbeat=local)

My question is: what is faster? Configuring just one big disk with ocfs2
(sda1), or using more, smaller disks like sdb1 and sdc1?

Is that clearer now?

All our emails are in sda1 and we are having many performance problems. So
we are migrating some of the email to sdb1 and eventually to sdc1. Right
now, performance seems much better on sdb1 than on sda1, but we are not
sure if that is because it holds far fewer emails with less concurrency,
or because it is actually better.




 --
 Stan



Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Ed W

On 20/01/2011 16:20, Henrique Fernandes wrote:


Same question!

I have about 1TB used and it takes 22 hrs to back up the maildirs!

I have problems with ocfs2 finding the files!


Just an idea, but have you evaluated performance of mdbox (new dovecot 
format) on your storage devices?  It appears to be a gentle hybrid of 
mbox and maildir, with many mails packed into a single file (which might 
increase your performance due to fewer stat calls), but there is more 
than one file per folder, so some of the mbox limitations are avoided?


I haven't personally tried it, but I think you can see the theoretical 
appeal?
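
If my reading of the docs is right, trying it is a one-line mail location 
change on a test account, e.g.:

mail_location = mdbox:~/mdbox

but check the docs first - I haven't run it myself.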


Good luck

Ed W


Re: [Dovecot] COMPRESS bug?

2011-01-21 Thread Timo Sirainen
On Tue, 2011-01-04 at 23:14 +, Ed W wrote:

 - Zlib enabled
 - COMPRESS initiated in the connection
 - 30,000 messages in a single maildir folder
 - IMAP SEARCH SINCE requested
 - Dovecot 2.0.7 hangs without sending the closing stanza of the SEARCH 

Fixed: http://hg.dovecot.org/dovecot-2.0/rev/b71834419ea3





Re: [Dovecot] Does dsync handle client-side deletions?

2011-01-21 Thread Timo Sirainen
On Fri, 2011-01-21 at 16:15 +0100, Patrick Schoenfeld wrote:

 1) Location1 and Location2 are in sync
 2) A mail gets deleted on Location1 (via IMAP)

Via Dovecot v2.0 IMAP? What mailbox format? You haven't disabled index
files, right?

 3) dsync mirror run to sync the two locations
 
 Expected behaviour:
 dsync notices that the mail was deleted on Location1 and also deletes
 it on Location2 to get the locations in sync.

Yes, this should happen.

 What I experience, however, is:
 dsync notices that the mail is missing on Location2 and copies it
 from Location1 to get the locations in sync.

This shouldn't happen. Although I've heard that this actually does
happen randomly and I haven't really debugged it much yet. But it should
be a rare occurrence, not reproducible.





Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Henrique Fernandes
I have considered the idea, but we just changed from mbox to maildir about 4
months ago, and we had many problems with some accounts. We were using dsync
to migrate.

But once we choose mdbox we are stuck with dovecot, or are going to have to
migrate all users again if we choose to use another imap server.

But  thanks!

[]'sf.rique


On Fri, Jan 21, 2011 at 3:16 PM, Ed W li...@wildgooses.com wrote:

 On 20/01/2011 16:20, Henrique Fernandes wrote:


 Same question!

 I have about 1TB used and it takes 22 hrs to back up the maildirs!
 
 I have problems with ocfs2 finding the files!


 Just an idea, but have you evaluated performance of mdbox (new dovecot
 format) on your storage devices?  It appears to be a gentle hybrid of mbox
 and maildir, with many mails packed into a single file (which might increase
 your performance due to fewer stat calls), but there is more than one file
 per folder, so some of the mbox limitations are avoided?

 I haven't personally tried it, but I think you can see the theoretical
 appeal?

 Good luck

 Ed W



Re: [Dovecot] LMTP home, chroot, mail userdb fields.

2011-01-21 Thread Timo Sirainen
On Fri, 2011-01-21 at 14:03 +0300, Lev Serebryakov wrote:

 Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: (LEV-ADDITION) Replace 
 home (/) with chroot (/usr/home/hosted/v-mail/domain.com/lev)
 Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: (LEV-ADDITION) Set 
 mail_home to (/usr/home/hosted/v-mail/domain.com/lev)
 Jan 21 14:01:36 lmtp(17650, l...@domain.com): Error: mkdir(./cur) in 
 directory /var/run/dovecot failed: Permission denied (euid=3(v-mail) 
 egid=3(v-mail) missing +w perm: ., euid is not dir owner)

Well, I'm not entirely sure why, since it works for me.. But setting
mail=maildir:~/ rather than mail=maildir:. probably fixes this.
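
With a static userdb that would look something like this (the uid/gid/home
values below are placeholders matching your layout):

userdb {
  driver = static
  args = uid=v-mail gid=v-mail home=/usr/home/hosted/v-mail/%d/%n mail=maildir:~/
}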





Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Ed W

Hi

I have considered the idea, but we just changed from mbox to maildir 
about 4 months ago, and we had many problems with some accounts. We 
were using dsync to migrate.


Out of curiosity - how did the backup times change between mbox and 
maildir?  I would suggest that this gives you a baseline for how much 
performance you could recover by switching back to something which is 
kind of an mbox/maildir hybrid?


But once we choose mdbox we are stuck with dovecot, or are going to have 
to migrate all users again if we choose to use another imap server.


True, but seriously, what are your options these days?  Dovecot, Cyrus 
and ...?  If you switch to cyrus then I think you need to plan your 
migration carefully due to it's own custom indexes (so maildir buys you 
little).  If you move to MS Exchange then you still can't use raw 
maildir.  Actually apart from Courier is there another big name IMAP 
server using raw maildir?


With that in mind perhaps you just bite the bullet and assume that 
future migration will need dsync again?  It's likely to only get easier 
as dsync matures?


Good luck

Ed W


Re: [Dovecot] utility to update indexes ?

2011-01-21 Thread Jan-Frode Myklebust
On Fri, Jan 21, 2011 at 10:36:11AM -0500, Charles Marcus wrote:
 
 Easiest would be to just use a webmail app that talks IMAP and let it
 talk directly to dovecot... ?

Yes, we want to implement that as soon as possible. Looking forward to
getting all maildirs completely managed by dovecot.


  -jf


[Dovecot] restarting director

2011-01-21 Thread Cor Bosman
Hi all, anyone having any problems with restarting the director? Every time I 
bring down 1 of the director servers, reboot it, or just restart it for 
whatever reason, I'm seeing all kinds of problems. Dovecot generally always 
gives me this error:

Jan 20 22:49:55 imapdirector3 dovecot: director: Error: Director 
194.109.26.173:444/right disconnected before handshake finished

It seems the directors can't agree on forming a ring anymore, and this may be 
leading to problems with clients. I mostly have to resort to bringing down all 
directors, and restarting them all at once. Not really a workable solution.  As 
an example, last night for a few hours we were getting complaints from 
customers about being disconnected, and the only obvious error in the log was 
the one above, after one of my colleagues had to restart a director because of 
some changes in the syslog daemon. After I restarted all directors within a 
few seconds of each other, all complaints disappeared.

Timo, I know I've asked similar questions before, but the answer just eludes me. 

If I have 3 director servers, and need to take one down and restart it, what is 
the proper method to reconnect the ring? In practice, I can't seem to work it 
out and I mostly end up with the above error until I just restart them all. Not 
fun with 20.000 clients connected.
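
For what it's worth, the only state inspection I know of is doveadm's director
command (assuming I have the v2.0 subcommand right):

$ doveadm director status

but that only shows the mail server mappings; it doesn't repair a broken ring.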

Cor



Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Henrique Fernandes
[]'sf.rique


On Fri, Jan 21, 2011 at 3:29 PM, Ed W li...@wildgooses.com wrote:

 Hi


  I have considered the idea, but we just changed from mbox to maildir about
 4 months ago, and we had many problems with some accounts. We were using
 dsync to migrate.


 Out of curiousity - how did the backup times change between mbox vs
 maildir?  I would suggest that this gives you a baseline for how much
 performance you could recover by switching back to something which is kind
 of an mbox/maildir hybrid?


I don't know if I got your question right, but before, while using mbox, we
had fewer users and much less quota; it was only 200MB, now it is about 1GB.
And before we did not have a good backup system and had many problems.
We pretty much changed to maildir to make incremental backups easier,
etc.

And we are considering testing mdbox or sdbox, but it is still too early to
make another big change like this.



  But once we choose mdbox we are stuck with dovecot, or are going to have to
 migrate all users again if we choose to use another imap server.


 True, but seriously, what are your options these days?  Dovecot, Cyrus and
 ...?  If you switch to cyrus then I think you need to plan your migration
  carefully due to its own custom indexes (so maildir buys you little).  If
 you move to MS Exchange then you still can't use raw maildir.  Actually
 apart from Courier is there another big name IMAP server using raw maildir?

 With that in mind perhaps you just bite the bullet and assume that future
 migration will need dsync again?  It's likely to only get easier as dsync
 matures?


Yeah, I know there are no better choices, but it is still on my mind. I had
problems in dsync with accounts that were written by dovecot.
I am studying dovecot dbox!

It is still an alternative.



 Good luck

 Ed W



Re: [Dovecot] restarting director

2011-01-21 Thread Timo Sirainen
On Fri, 2011-01-21 at 19:59 +0200, Timo Sirainen wrote:

 I can take a look at it, but it would help if you were able to reproduce
 the problem.

More clearly: Reliably reproduce this in a test setup :)





Re: [Dovecot] restarting director

2011-01-21 Thread Timo Sirainen
On Fri, 2011-01-21 at 13:42 -0400, Cor Bosman wrote:
 Hi all, anyone having any problems with restarting the director? Every
 time I bring down 1 of the director servers, reboot it, or just
 restart it for whatever reason, I'm seeing all kinds of problems.
 Dovecot generally always gives me this error:
 
 Jan 20 22:49:55 imapdirector3 dovecot: director: Error: Director
 194.109.26.173:444/right disconnected before handshake finished

I'm not sure if that itself is a problem..

 It seems the directors can't agree on forming a ring anymore, and this
 may be leading to problems with clients. I mostly have to resort to
 bringing down all directors, and restarting them all at once. Not
 really a workable solution.  As an example, last night for a few hours
 we were getting complaints from customers about being disconnected,
 and the only obvious error in the log was the one above, after one of
 my colleagues had to restart a director because of some changes in the
 syslog daemon. After I restarted all directors within a few seconds
 of each other, all complaints disappeared.

I can take a look at it, but it would help if you were able to reproduce
the problem. I'm still lagging a lot behind in emails (=bugfixes)..





Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Stan Hoeppner
Henrique Fernandes put forth on 1/21/2011 9:50 AM:

 Let me try explain better.
 
 We have 3 virtual machines with this set up:
 
 /dev/sda1 3.6T  2.4T  1.3T  66% /A
 /dev/sdb1 1.0T   36G  989G   4% /B
 /dev/sdc1 1.0T  3.3G 1021G   1% /C
 
 /dev/sda1 on /A type ocfs2 (rw,_netdev,heartbeat=local)
 /dev/sdb1 on /B type ocfs2 (rw,_netdev,heartbeat=local)
 /dev/sdc1 on /C type ocfs2 (rw,_netdev,heartbeat=local)
 
 My question is: what is faster? Configuring just one big disk with ocfs2
 (sda1), or using more, smaller disks like sdb1 and sdc1?
 
 Is that clearer now?
 
 All our emails are in sda1 and we are having many performance problems. So
 we are migrating some of the email to sdb1 and eventually to sdc1. Right
 now, performance seems much better on sdb1 than on sda1, but we are not
 sure if that is because it holds far fewer emails with less concurrency,
 or because it is actually better.

None of this means much in the absence of an accurate ESX host hardware and iSCSI
network layout description.  You haven't stated how /dev/sd[abc]1 are physically
connected to the ESX hosts.  You haven't given a _physical hardware description_
of /dev/sd[abc]1 or the connections to the EMC CX4.

For instance, if /dev/sda1 is a 10 disk RAID5 group in the CX4, but /dev/sdb1 is
a 24 disk RAID10 group in the CX4, *AND*

/dev/sda1 is LUN mapped out of an iSCSI port on the CX4 along with many many
other LUNS which are under constant heavy use, *AND* /dev/sdb1 is LUN mapped out
of an iSCSI port that shares no other LUNs, *then*

I would say the reason /dev/sdb1 is much faster is due to:

A.  24 drive RAID10 vs 10 drive RAID6 will yield ~10x increase in random IOPS
B.  Zero congestion on the /dev/sdb1 iSCSI port will decrease latency

We need to know the physical characteristics of the hardware.  SAN performance
issues are not going to be related (most of the time) to how you have Dovecot 
set up.

Do you have any iostat data to share with us?  Any data/graphs from the EMC
controller showing utilization per port and per array?
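
Even something as simple as this on each node during peak load would be a good
start (sysstat's extended per-device stats, 5 second intervals, 12 samples):

$ iostat -x 5 12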

If you're unable to gather such performance metric data it will be difficult to
assist you.

-- 
Stan


Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Ed W

On 21/01/2011 17:50, Henrique Fernandes wrote:
I don't know if I got your question right, but before, while using 
mbox, we had fewer users and much less quota; it was only 200MB, now it 
is about 1GB.  And before we did not have a good backup system and had 
many problems.
We pretty much changed to maildir to make incremental backups easier, 
etc.


Sorry, the point of the question was simply whether you could use your 
old setup to help estimate whether there is actually any point switching 
from maildir?  Sounds like you didn't have the same backup service back 
then, so you can't compare though?


Just pointing out that it's completely unproven whether moving to mdbox 
will actually make a difference anyway...


And we are considering testing mdbox or sdbox. But it is still too early to 
make another big change like this.




Sure - by the way I believe you can mix mailbox storage formats to a 
large extent?  I'm not using this stuff so please check the docs before 
believing me, but I believe you can mix storage formats even down to the 
folder level under some conditions?  I dare say you did exactly this 
during your migration so I doubt I'm telling you anything new...?
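
If I read the docs right, the mixing is done per namespace, roughly like this
(prefixes and paths invented for illustration - verify before use):

namespace {
  separator = /
  prefix =
  location = maildir:~/Maildir
  inbox = yes
}
namespace {
  separator = /
  prefix = Archive/
  location = mdbox:~/mdbox
}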


The only point of mentioning that is that you could do something as 
simple as duplicating some proportion of the mailboxes to new dummy 
accounts, simply for the purpose of padding out some new format 
directories - users wouldn't really access them.  Then you could try and 
compare the backup times of the original mailboxes (that the users 
actually use) with the duplicated ones in whatever format you are testing?


Just an idea?

Good luck

Ed W



Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Henrique Fernandes
[]'sf.rique


On Fri, Jan 21, 2011 at 4:31 PM, Ed W li...@wildgooses.com wrote:

 On 21/01/2011 17:50, Henrique Fernandes wrote:

 I don't know if I got your question right, but before, while using mbox,
 we had fewer users and much less quota; it was only 200MB, now it is about
 1GB.  And before we did not have a good backup system and had many problems.
 We pretty much changed to maildir to make incremental backups easier,
 etc.


 Sorry, the point of the question was simply whether you could use your old
 setup to help estimate whether there is actually any point switching from
 maildir?  Sounds like you didn't have the same backup service back then, so
 you can't compare though?


I am not comparing anything, because we reformulated ALL of the email system;
before, it was only one machine with local disk. So we bought an EMC and
started using it for the new mail system, in virtual machines, over iSCSI,
etc.


 Just pointing out that it's completely unproven whether moving to mdbox will
 actually make a difference anyway...


  And we are considering testing mdbox or sdbox. But it is still too early to
 make another big change like this.


 Sure - by the way I believe you can mix mailbox storage formats to a large
 extent?  I'm not using this stuff so please check the docs before believing
 me, but I believe you can mix storage formats even down to the folder level
 under some conditions?  I dare say you did exactly this during your
 migration so I doubt I'm telling you anything new...?


Yeah, I did like you said: a mix of mbox and maildir. Actually only active
users have maildir; inactive users are still on mbox.



 The only point of mentioning that is that you could do something as simple
 as duplicating some proportion of the mailboxes to new dummy accounts,
 simply for the purpose of padding out some new format directories - users
 wouldn't really access them.  Then you could try and compare the backup
 times of the original mailboxes (that the users actually use) with the
 duplicated ones in whatever format you are testing?

 Just an idea?

 We usually use one domain per test, like this other sdb1 we are testing.



 Good luck

 Ed W



But you asked before about hardware.

It is an EMC CX4, linked with ONE 1GbE link to ONE D-Link switch (I am not
sure, but I guess it is full gigabit), and from this D-Link it connects to 4
XEN machines at 1Gbit, and from the virtual machines over iSCSI to the EMC.

About the disks: sda is 8 disks in RAID 1+0,
and I guess sdb and sdc are RAID5 with 12 disks (those are tests).

Sorry, I don't know the specs of the disks.


We think it is the ocfs2 and the size of the partition, because we can
write a big file at an acceptable speed, but if we try to delete, create, or
read lots of small files, the speed is horrible. We think it is a DLM
problem in propagating the locks, etc.


Do you have any idea how to test the storage for maildir usage? We made a
bash script that writes some directories and lots of files and afterwards
removes them, etc.
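
Roughly, the script does something like this (paths and counts invented):

#!/bin/sh
# crude small-file test: create, stat, then delete lots of tiny files
DIR=/A/ocfs2-test
mkdir -p $DIR
i=1
while [ $i -le 10000 ]; do
    echo "test message $i" > $DIR/msg.$i
    i=$((i + 1))
done
ls -l $DIR > /dev/null   # force a directory scan / stat of every file
rm -rf $DIR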

Any better ideas?

Appreciate your help!


Re: [Dovecot] Panic in 2.0.9 imap-client

2011-01-21 Thread Mike Abbott
 Jan 17 12:06:20 server dovecot: imap(@YYY): Panic: file 
 imap-client.c: line 570 (client_continue_pending_input): assertion failed: 
 (!client-handling_input)

I can reproduce this every time by sending any data in the same packet after 
the tag IDLE CRLF.  For instance using nc:
$ nc localhost 143
... login, etc ...
x idle^M^Jfoo

Where I generate ^M^J by typing ctrl-V ctrl-M ctrl-V ctrl-J.

Re: [Dovecot] Panic in 2.0.9 imap-client

2011-01-21 Thread Timo Sirainen
On Fri, 2011-01-21 at 12:56 -0600, Mike Abbott wrote:
  Jan 17 12:06:20 server dovecot: imap(@YYY): Panic: file
 imap-client.c: line 570 (client_continue_pending_input): assertion
 failed: (!client-handling_input)
 
 I can reproduce this every time by sending any data in the same packet
 after the tag IDLE CRLF.  

The crash actually only started happening after I tried to fix this.
Previously the whole connection would hang. Of course, I can't think of
why any client would send IDLE+DONE in the same TCP packet. It doesn't
make any sense. Would have been easier to just send NOOP.

 For instance using nc:
 $ nc localhost 143
 ... login, etc ...
 x idle^M^Jfoo
 
 Where I generate ^M^J by typing ctrl-V ctrl-M ctrl-V ctrl-J.

Oh, that's nice. I've always been annoyed at testing these kinds of bugs
because I didn't know of any easy way to send multiple commands in the same
packet.

Fixed now: http://hg.dovecot.org/dovecot-2.0/rev/4741f1b4f9b3





Re: [Dovecot] Does dsync handle client-side deletions?

2011-01-21 Thread Patrick Schoenfeld
Hi,

thanks for the quick response.

On Fri, Jan 21, 2011 at 07:21:44PM +0200, Timo Sirainen wrote:
 On Fri, 2011-01-21 at 16:15 +0100, Patrick Schoenfeld wrote:
 
  1) Location1 and Location2 are in sync
  2) A mail gets deleted on Location1 (via IMAP)
 
 Via Dovecot v2.0 IMAP? 

Yes.

 What mailbox format? You haven't disabled index
 files, right?

Mailbox format is Maildir. I haven't disabled index files, at least not
knowingly. Basically I'm using a default configuration, based on
whats delivered with the Debian snapshot package of dovecot2
(from what I can tell, this is doc/example-config/* in the source
tarball) with the neccessary changes to authenticate against a LDAP
server.
However, from a look at the maildirs I cannot find a main index file as
described in [1], only the two other indexes.

hostname:/var/spool/mail1/test2/Maildir# ls -l *index*
-rw--- 1 vmail root 17408 21. Jan 21:12 dovecot.index.cache
-rw--- 1 vmail root  2080 21. Jan 21:12 dovecot.index.log

Is that normal?

  What I experience, however, is:
  dsync notices that the mail is missing on Location2 and copies it
  from Location1 to get the locations in sync.
 
 This shouldn't happen. Although I've heard that this actually does
 happen randomly and I haven't really debugged it much yet. But it should
 be a rare occurrence, not reproducible.

In my current setup it's reproducible.
Note: This is dovecot 2.0 on Debian Lenny.

Best Regards,
Patrick

[1] http://wiki2.dovecot.org/IndexFiles


Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Stan Hoeppner
Henrique Fernandes put forth on 1/21/2011 12:53 PM:

 But you asked before about haardware.

I asked about the hardware.

 It is an EMC CX4, linked with ONE 1gbE to ONE dlink ( i am not sure but i
 guess if full Gbit ) and from this dlink it conects to 4 XEN machines at
 1gbit and in the virtual machines over iSCSI to EMC.

OMG!?  A DLink switch?  Is it one of their higher end managed models or consumer
grade?  Which model is it?  Do you currently dedicate this DLink GbE switch to
*only* iSCSI SAN traffic?  What network/switch do you currently run OCFS
metadata traffic over?  Same as the client network?  If so, that's bad.

You *NEED* a *QUALITY* managed dedicated GbE switch for iSCSI and OCFS metadata
traffic.  You *NEED* to get a decent GbE managed switch if that DLink isn't one
of their top of line models.  You will setup link aggregation between the two
GbE ports on the CX4 and the managed switch.  Program the switch and HBAs, and
the ports on the CX4 for jumbo frame support.  Read the documentation that comes
with each product, and read the Linux ethernet docs to learn how to do link
aggregation.  You will need 3 GbE ports on each Xen host.  One will plug into
the network switch that carries client traffic.  Two will plug into the SAN
dedicated managed switch, one for OCFS metadata traffic and the other for iSCSI
SAN traffic.  If you don't separate these 3 types of traffic onto 3 dedicated
GbE links, your performance will always be low to horrible.
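
On the Linux side, the aggregation plus jumbo frames boils down to something
like this (interface names and the address are examples only, and the switch
ports must be configured for LACP; see your distro's bonding docs for the
persistent config):

# 802.3ad link aggregation of two GbE ports, jumbo frames enabled
modprobe bonding mode=802.3ad miimon=100
ifenslave bond0 eth1 eth2
ifconfig bond0 172.16.1.10 netmask 255.255.255.0 mtu 9000 up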

 About the disk is 8 disk in RAID 1+0  in sda
 and i guess in sdc and sdb is RAID5 with 12 disk ( those are test )

RAID 10 (1+0) is EXCELLENT for maildir.  Any parity RAID (5/6) will have less
than *half* the random write IOPs of RAID 10.  Currently you only have a stripe
width of *only 4* with your current RAID 10 which is a big part of your problem.
 You *NEED* to redo the CX4.  The maximum member count for RAID 10 on the CX4 is
16 drives.  That is your target.

Assign two spares.   If you still have 16 drives remaining, create a single RAID
10 array of those 16 drives with a stripe depth of 64.  If you have 14 drives
remaining, do it with 14.  You *NEED* to maximize the RAID 10 with as many
drives as you can.  Then, slice appropriately sized LUNs, one for maildir use,
one for testing, etc.  Export each as a separate LUN.

The reason for this is that you are currently spindle stripe starved.  You need
to use RAID 10, but your current stripe width of 4 doesn't yield enough IOPS to
keep up with your maildir data write load.  Moving to a stripe width of 7 (14/2)
or 8 (16/2) will double your sustained IOPs over what you have now.

 Sorry don't know spec form the disks.

That's ok as it's not critical information.

 We think it is the ocfs2 and the size of the partition, because ...
snip

With only 4 OCFS clients I'm pretty sure this is not the cause of your problems.
 The issues appear all hardware and network design related.  I've identified
what seem to be the problem areas and presented you the solutions above.
Thankfully none of them will be expensive, as all you need is one good quality
managed switch, if you don't already have one.

*BUT*, you will have a day, maybe two, of horrible user performance as you move
all the maildir data off the CX4 and reconfigure it for a 14 or 16 drive RAID
10.  Put a couple of fast disks in one of the Xen servers or a fast spare bare
metal server and run Dovecot on it while you're fixing the CX4.  You'll also
have to schedule an outage while you install the new switch and reconfigure all
the ports.  Sure, performance will suck for your users for a day or two, but
better that it sucks only one or two more days than for months into the future
if you don't take the necessary steps to solve the problem permanently.

 Do you have any idea how to test the storage from maildir usage ? We made a
 bashscript that write some diretores and lots of files and after it removes
 and etc.

I'm pretty sure I've already identified your problems without need for testing,
thanks to the information you provided about your hardware.  Here's an example
of a suitable managed switch with link aggregation and jumbo frame support, if
you don't already have one:

http://h10144.www1.hp.com/products/switches/HP_ProCurve_Switch_2810_Series/overview.htm
http://www.newegg.com/Product/Product.aspx?Item=N82E16833316041

This switch has plenty of processing power to handle your iSCSI and metadata
traffic on just one switch.  But remember, you need two GbE network links into
this switch from each Xen host--one for OCFS metadata and one for iSCSI.  You
should use distinct RFC1918 IP subnets for each, if you aren't already, such as
192.168.1.0/24 for the metadata network, and 172.16.1.0/24 for the iSCSI
network.  You'll need a third GbE connection to your user traffic network.
Again, keep metadata/iSCSI traffic on a separate physical network infrastructure
from client traffic.

Hope this helps.  I know you're going to cringe at the idea of reconfiguring the
CX4 

Re: [Dovecot] Best Cluster Storage

2011-01-21 Thread Stan Hoeppner
Henrique Fernandes put forth on 1/21/2011 12:53 PM:

 We think it is the ocfs2 and the size of the partition, because we can
 write a big file at an acceptable speed, but if we try to delete, create,
 or read lots of small files, the speed is horrible. We think it is a DLM
 problem in propagating the locks, etc.

It's not the size of the filesystem that's the problem.  But it is an issue with
the DLM, and with the small RAID 10 set.  This is why I recommended putting DLM
on its own dedicated network segment, same with the iSCSI traffic, and making
sure you're running full duplex GbE all round.  DLM doesn't require GbE
bandwidth, but GbE's latency is lower than fast ethernet's.  I'm also
assuming, since you didn't say, that you were running all your ethernet traffic
over a single GbE port on each Xen host.  That just doesn't scale when doing
filesystem clustering.  The traffic load is too great, unless you're idling all
the time, in which case, why did you go OCFS? :)

 Do you have any idea how to test the storage for maildir usage? We made a
 bash script that writes some directories and lots of files and afterwards
 removes them, etc.

This only does you any good if you have instrumentation set up to capture metrics
while you run your test.  You'll need to run iostat on the host running the
script tests, along with iftop, and any OCFS monitoring tools.  You'll need to
use the EMC software to gather IOPS and bandwidth metrics from the CX4 during
the test.  You'll also need to make sure your aggregate test data size is
greater than 6GB which is 2x the size of the cache in the CX4.  You need to hit
the disks, hard, not the cache.

The best test is to simply instrument your normal user load and collect the
performance data I mentioned.

 Any better ideias ?

Ditch iSCSI and move to fiber channel.  A Qlogic 14 port 4Gb FC switch with all
SFPs included is less than $2500 USD.  You already have the FC ports in your
CX4.  You'd instantly quadruple the bandwidth of the CX4 and that of each Xen
host, from 200 to 800 MB/s and 100 to 400 MB/s respectively.  Four single port
4Gb FC HBAs, one for each server, will run you $2500-3000 USD.  So for about $5k
USD you can quadruple your bandwidth, and lower your latency.

I don't recall if you ever told us what your user load is.  How many concurrent
Dovecot user sessions are you supporting on average?

 Apreciate your help!

No problem.  SANs are one of my passions. :)

-- 
Stan


Re: [Dovecot] Panic in 2.0.9 imap-client

2011-01-21 Thread Mike Abbott
 I can't think of why any client would send IDLE+DONE in the same TCP packet.

Maybe not in the same packet, but network congestion or server overloading 
could cause the IDLE and DONE to queue up together.

 Oh, that's nice.

Glad to help.

 Fixed now: http://hg.dovecot.org/dovecot-2.0/rev/4741f1b4f9b3

Yes that does fix the crash.  Thanks.



[Dovecot] expire plugin and sieve

2011-01-21 Thread cvb

Hi.

I am running dovecot 1.2.9 here, allowing users to filter their mails 
with the sieve plugin, and am using sieve to move mail tagged as 
probably spam into the spam folder.


I'm now looking to get the expire plugin working as well. It does work 
as described in the wiki: Once I manually move messages into other 
folders, the mysql database is filled with entries.


However, the combination of sieve and expire does not seem to be 
working: When sieve moves messages into a folder, no entry is created in 
the database. Don't these plugins work together, or did I misconfigure 
something?
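
For reference, the rule doing the move is nothing exotic - roughly this (the
header name depends on the spam filter, so treat it as illustrative):

require ["fileinto"];
if header :contains "X-Spam-Flag" "YES" {
    fileinto "INBOX.Spam";
}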


# dovecot -n

# 1.2.9: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-27-server x86_64 Ubuntu 10.04.1 LTS fuse.glusterfs
log_timestamp: %Y-%m-%d %H:%M:%S
protocols: imap imaps pop3 pop3s managesieve
ssl_cert_file: /etc/dovecot/imapd.pem
ssl_key_file: /etc/dovecot/imapd.pem
disable_plaintext_auth: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/lib/dovecot/imap-login
login_executable(imap): /usr/lib/dovecot/imap-login
login_executable(pop3): /usr/lib/dovecot/pop3-login
login_executable(managesieve): /usr/lib/dovecot/managesieve-login
login_user: postfix
login_process_per_connection: no
login_process_size: 128
first_valid_uid: 113
mail_privileged_group: mail
mail_location: maildir:/home/vmail/%Ld/%Ln:INDEX=/var/indexes/%u
mail_debug: yes
mail_nfs_storage: yes
mbox_write_locks: fcntl dotlock
mail_executable(default): /usr/lib/dovecot/imap
mail_executable(imap): /usr/lib/dovecot/imap
mail_executable(pop3): /usr/lib/dovecot/pop3
mail_executable(managesieve): /usr/lib/dovecot/managesieve
mail_plugins(default): expire
mail_plugins(imap): expire
mail_plugins(pop3): expire
mail_plugins(managesieve):
mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
mail_plugin_dir(managesieve): /usr/lib/dovecot/modules/managesieve
imap_client_workarounds(default): outlook-idle delay-newmail
imap_client_workarounds(imap): outlook-idle delay-newmail
imap_client_workarounds(pop3):
imap_client_workarounds(managesieve):
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
pop3_client_workarounds(managesieve):
managesieve_logout_format(default): bytes=%i/%o
managesieve_logout_format(imap): bytes=%i/%o
managesieve_logout_format(pop3): bytes=%i/%o
managesieve_logout_format(managesieve): bytes ( in=%i : out=%o )
namespace:
  type: private
  separator: .
  prefix: INBOX.
  inbox: yes
  list: yes
  subscriptions: yes
lda:
  postmaster_address: postmas...@example.com
  mail_plugins: expire
  mail_plugins: sieve
auth default:
  mechanisms: plain login
  user: nobody
  verbose: yes
  passdb:
driver: sql
args: /etc/dovecot/dovecot-sql.conf
  userdb:
driver: static
args: uid=5000 gid=5000 home=/home/vmail/%Ld/%Ln 
allow_all_users=yes

  socket:
type: listen
client:
  path: /var/spool/postfix/private/auth
  mode: 432
  user: postfix
  group: root
master:
  path: /var/run/dovecot/auth-master
  mode: 438
  user: vmail
  group: vmail
plugin:
  sieve: /home/vmail/%Ld/%Ln/.dovecot.sieve
  sieve_global_path: /home/vmail/globalsieverc
  sieve_dir: ~/sieve
  sieve_global_dir: /var/lib/dovecot/sieve/global/
  expire: INBOX.Trash 7 INBOX.Mailing-Lists.* 30 INBOX.Spam 14
  expire_dict: proxy::expire
dict:
  expire: mysql:/etc/dovecot/dovecot-dict-expire.conf

Thanks, Christian