Re: Overquota flag and auth caching

2022-09-13 Thread list

On 09-13-2022 5:17 am, Christian Rößner wrote:

The result is that mails are still accepted even after a user went over quota, 
resulting in bounces.
What is the correct way to use the over-quota flag, and which solutions can be 
used to invalidate the user?
Is it possible to do this in a Lua user backend? Any other method?



It is unclear from your message whether you are aware that you can have your MTA 
check quota during delivery, and reject before accepting, to prevent 
back-scatter. For example, if you are using Postfix you can use the policy 
service to do this in main.cf:

smtpd_recipient_restrictions =
...
check_policy_service unix:private/quota-status

And here are the docs on setting up quota for postfix on the dovecot side:

https://doc.dovecot.org/configuration_manual/quota_plugin/#quota-service
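
For the archive, a minimal sketch of the Dovecot side matching the unix socket 
referenced above (socket path, ownership and mode are assumptions based on a 
default Postfix chroot of /var/spool/postfix; the linked docs are authoritative):

service quota-status {
  executable = quota-status -p postfix
  unix_listener /var/spool/postfix/private/quota-status {
    group = postfix
    mode = 0660
  }
  client_limit = 1
}

With this in place Postfix rejects over-quota recipients at SMTP time instead 
of bouncing after acceptance.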


Re: Panic: file ostream.c: assertion failed when using mail_filter on RHEL8

2021-04-24 Thread Martijn Brinkers (list)
The same problem occurs when running 2.3.13 on RHEL8.

Is this a RHEL8 specific issue?

Log output

Apr 24 14:14:38 webmail dovecot[568870]: master: Dovecot v2.3.13
(89f716dc2) starting up for imap
Apr 24 14:15:13 webmail dovecot[568872]: imap-login: Login: user=<
test=40example.com@ciphermail.private>, method=PLAIN, rip=127.0.0.1,
lip=127.0.0.1, mpid=584990, secured, session=
Apr 24 14:15:13 webmail dovecot[568872]: imap(
test=40example.com@ciphermail.private)<584990>:
Logged out in=44 out=617 deleted=0 expunged=0 trashed=0 hdr_count=0
hdr_bytes=0 body_count=0 body_bytes=0
Apr 24 14:15:20 webmail dovecot[568872]: imap-login: Login: user=<
test=40example.com@ciphermail.private>, method=PLAIN, rip=127.0.0.1,
lip=127.0.0.1, mpid=589277, secured, session=<6ebjirjAbIJ/AAAB>
Apr 24 14:15:20 webmail dovecot[568872]: imap(
test=40example.com@ciphermail.private)<589277><6ebjirjAbIJ/AAAB>:
Panic: file ostream.c: line 204 (o_stream_flush): assertion failed:
(stream->stream_errno != 0)
Apr 24 14:15:20 webmail dovecot[568872]: imap(
test=40example.com@ciphermail.private)<589277><6ebjirjAbIJ/AAAB>:
Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x41)
[0x7fc62f5b7d41] ->
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x22) [0x7fc62f5b7e62]
-> /usr/lib64/dovecot/libdovecot.so.0(+0x1041eb) [0x7fc62f5c41eb] ->
/usr/lib64/dovecot/libdovecot.so.0(+0x104287) [0x7fc62f5c4287] ->
/usr/lib64/dovecot/libdovecot.so.0(+0x5a50b) [0x7fc62f51a50b] ->
/usr/lib64/dovecot/libdovecot.so.0(+0x5eb38) [0x7fc62f51eb38] ->
/usr/lib64/dovecot/libdovecot-storage.so.0(maildir_save_finish+0x9a)
[0x7fc62f91614a] -> /usr/lib64/dovecot/lib10_quota_plugin.so(+0x10658)
[0x7fc62eaea658] -> /usr/lib64/dovecot/libdovecot-
storage.so.0(mailbox_save_finish+0x77) [0x7fc62f8ee2b7] ->
dovecot/imap(+0x1350d) [0x56192db1350d] -> dovecot/imap(+0x13baf)
[0x56192db13baf] -> dovecot/imap(cmd_append+0x12c) [0x56192db13e0c] ->
dovecot/imap(command_exec+0x6c) [0x56192db2261c] ->
dovecot/imap(+0x206af) [0x56192db206af] -> dovecot/imap(+0x20761)
[0x56192db20761] -> dovecot/imap(client_handle_input+0x1c5)
[0x56192db20b45] -> dovecot/imap(client_input+0x76) [0x56192db21046] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x6d)
[0x7fc62f5da75d] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x139)
[0x7fc62f5dbd79] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x50)
[0x7fc62f5da800] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x48) [0x7fc62f5da978]
-> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x17)
[0x7fc62f54ec67] -> dovecot/imap(main+0x335) [0x56192db126c5] ->
/lib64/libc.so.6(__libc_start_main+0xf3) [0x7fc62f120803] ->
dovecot/imap(_start+0x2e) [0x56192db1287e]
Apr 24 14:15:20 webmail dovecot[568872]:
imap(test=40example.com@ciphermail.private)<589277><6ebjirjAbIJ/AAAB>:
Fatal: master: service(imap): child 589277 killed with signal 6 (core
not dumped - https://dovecot.org/bugreport.html#coredumps - set
/proc/sys/fs/suid_dumpable to 2)
On Fri, 2021-04-23 at 12:38 +0200, Martijn Brinkers (list) wrote:
> Hi,
> 
> I installed dovecot on a new up-to-date RHEL8 test system and
> configured a simple mail_filter plugin which only copies the input
> back
> to the output (i.e., the filter does nothing special).
> 
> On RHEL8 I get the following panic when saving the mail. The Dovecot
> version that comes with RHEL8 is dovecot-2.3.8-4.el8.x86_64
> 
> The mail_filter works correctly on an Ubuntu machine (2.2.33.2-
> 1ubuntu4.7).
> 
> Any idea what's causing this panic on RHEL8 and how to fix it?
> 
> Kind regards,
> 
> Martijn Brinkers
> 
> Log output (panic log is at the bottom)
> 
> Apr 23 10:35:54 ciphermail-webmail dovecot[18680]: master: Dovecot
> v2.3.8 (9df20d2db) starting up for imap
> Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug:
> Loading
> modules from directory: /usr/lib64/dovecot/auth
> Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug:
> Module
> loaded: /usr/lib64/dovecot/auth/lib20_auth_var_expand_crypt.so
> Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug:
> Module
> loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
> Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: Read
> auth token secret from /var/run/dovecot/auth-token-secret.dat
> Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: auth
> client connected (pid=19771)
> Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug:
> client
> in:
> AUTH 1 PLAIN service=imap secured session=i7avXaHAuLh/AAAB
> lip=127.0.0.1 rip=127.0.0.1 lport=143 rport=47288 resp=
> Apr 23 10:36:17 ciphermail-webmail dovecot[1

Why was mail-filter plugin removed in version 2.3.14?

2021-04-23 Thread Martijn Brinkers (list)
Hi,

The mail-filter plugin was removed in version 2.3.14.

Any idea why this was removed?

Is there a replacement for the mail-filter plugin?

Kind regards,

Martijn Brinkers



Panic: file ostream.c: assertion failed when using mail_filter on RHEL8

2021-04-23 Thread Martijn Brinkers (list)
Hi,

I installed dovecot on a new up-to-date RHEL8 test system and
configured a simple mail_filter plugin which only copies the input back
to the output (i.e., the filter does nothing special).
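
For context, a pass-through filter of this kind is wired up roughly as below, 
following the old wiki page for the mail-filter plugin; every service name, 
socket mode and script path here is an assumption for illustration, not taken 
from the original post:

mail_plugins = $mail_plugins mail_filter

plugin {
  mail_filter = mail-filter-script %u
}

service mail-filter-script {
  executable = script /usr/local/bin/mail-filter.sh
  unix_listener mail-filter-script {
    mode = 0600
    user = vmail
  }
}

And a no-op filter script, reading the message on stdin and writing it 
unchanged to stdout:

#!/bin/sh
# pass the message through untouched
exec cat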

On RHEL8 I get the following panic when saving the mail. The Dovecot
version that comes with RHEL8 is dovecot-2.3.8-4.el8.x86_64

The mail_filter works correctly on an Ubuntu machine (2.2.33.2-
1ubuntu4.7).

Any idea what's causing this panic on RHEL8 and how to fix it?

Kind regards,

Martijn Brinkers

Log output (panic log is at the bottom)

Apr 23 10:35:54 ciphermail-webmail dovecot[18680]: master: Dovecot
v2.3.8 (9df20d2db) starting up for imap
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: Loading
modules from directory: /usr/lib64/dovecot/auth
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: Module
loaded: /usr/lib64/dovecot/auth/lib20_auth_var_expand_crypt.so
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: Module
loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: Read
auth token secret from /var/run/dovecot/auth-token-secret.dat
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: auth
client connected (pid=19771)
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: client
in:
AUTH 1 PLAIN service=imap secured session=i7avXaHAuLh/AAAB
lip=127.0.0.1 rip=127.0.0.1 lport=143 rport=47288 resp=
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: dict(
t...@example.com,127.0.0.1,): Performing passdb
lookup
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: Loading modules from directory: /usr/lib64/dovecot/auth
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: Module loaded:
/usr/lib64/dovecot/auth/lib20_auth_var_expand_crypt.so
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: conn unix:auth-worker (pid=19773,uid=97): Server accepted
connection (fd=13)
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: conn unix:auth-worker (pid=19773,uid=97): Sending version
handshake
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: conn unix:auth-worker (pid=19773,uid=97): auth-worker<1>:
Handling PASSV request
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: dict(t...@example.com,127.0.0.1,): Performing
passdb lookup
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: dict(t...@example.com,127.0.0.1,): Lookup: 
shared/passdb/t...@example.com/test = {"userdb_email":"t...@example.com
","userdb_quota_rule":"*:bytes=1073741824","password":"{SSHA256.b64}SF5
hEeUZVO4ydasqNb040KHWGyin791L40BYR8cXf+I0EGeetzq5Cg==","userdb_password
":"test","userdb_home":"
/var/vmail/test=40example.com@ciphermail.private","user":"
test=40example.com@ciphermail.private"}
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: dict(t...@example.com,127.0.0.1,): username
changed t...@example.com -> test=40example.com@ciphermail.private
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: dict(test=40example.com@ciphermail.private,127.0.0.1,): Finished passdb lookup
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth-worker(19774):
Debug: conn unix:auth-worker (pid=19773,uid=97): auth-worker<1>:
Finished
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: dict(
t...@example.com,127.0.0.1,): username changed 
t...@example.com -> test=40example.com@ciphermail.private
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: dict(
test=40example.com@ciphermail.private,127.0.0.1,):
Finished passdb lookup
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: auth(
test=40example.com@ciphermail.private,127.0.0.1,):
Auth request finished
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: client
passdb out: OK 1 user=test=40example.com@ciphermail.private
original_user=t...@example.com
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: master
in:
REQUEST3373400065197711603cbf25103fc9a6
77ceabe45f077e40session_pid=19776request_auth_token
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug:
prefetch(test=40example.com@ciphermail.private,127.0.0.1,): Performing userdb lookup
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug:
prefetch(test=40example.com@ciphermail.private,127.0.0.1,): success
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug:
prefetch(test=40example.com@ciphermail.private,127.0.0.1,): Finished userdb lookup
Apr 23 10:36:17 ciphermail-webmail dovecot[18682]: auth: Debug: master
userdb out: USER

Re: Virtual question

2018-03-02 Thread List







Sorry, different email client on the road.
I created the folders under my dovecot install /usr/local/etc/dovecot/virtual/ 
and then created month and day folders with dovecot-virtual files in each 
folder. The folders show up in the mail clients, but are either inaccessible or 
empty.

namespace {
  prefix = @virtual.
  separator = .
  location = virtual:/usr/local/etc/dovecot/virtual:INDEX=~/Maildir/virtual:CONTROL=~/Maildir/virtual
}

On Mar 1, 2018, at 22:35, Aki Tuomi wrote:

/etc/dovecot/virtual/month/dovecot-virtual

Then in dovecot.conf  you put

namespace virtual {
  location = virtual:/etc/dovecot/virtual:INDEX=~/virtual:CONTROL=~/virtual
}
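
For reference, the dovecot-virtual files Aki refers to follow the pattern from 
the virtual plugin documentation; a sketch (the mailbox pattern and age are 
examples, not taken from this thread):

# /etc/dovecot/virtual/month/dovecot-virtual
# search all real mailboxes, show mails younger than 30 days
*
  all younger 2592000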






Re: Doveadm backup error.

2017-12-10 Thread Dovecot list
But on the other side I have the same settings (only on Dovecot 1). The UID and GID
are the same, and on the other side I don't have any info in the logs...
I think this is a Dovecot 2 problem, but I don't know how to solve it...
Best regards.

2017-12-01 8:02 GMT+01:00 Aki Tuomi <aki.tu...@dovecot.fi>:

> This is probably problem on the other end.
>
> Aki
>
>
> On 20.11.2017 19:36, Dovecot list wrote:
> > Hello. I am trying to migrate from Dovecot 1 to Dovecot 2 with doveadm backup.
> > But when I run doveadm backup I get:
> >
> > mx3:/root/dsync@[23:11] # doveadm -v -c ah.temp backup -R -u a...@test.pl
> > imapc:
> > doveadm(a...@test.pl): Error: Mail access for users with UID
> > 145 not permitted (see first_valid_uid in config file, uid from userdb
> > lookup).
> > doveadm(a...@test.pl): Error: User init failed
> >
> >
> > mx3:/root/dsync@[22:13] # ls -la /home/mail/vhosts/test.pl/a...@test.pl/
> > total 1
> > drwxr-xr-x  2 vmail  vmail  2 Nov 12 23:59 .
> >
> > mx3:/root/dsync@[22:14] # doveadm user a...@test.pl
> > field   value
> > uid 145
> > gid 145
> > home    /home/mail/vhosts/test.pl/a...@test.pl
> > mail    mdbox:~/mdbox
> > quota_rule  *:storage=5M
> >
> > mx3:/root/dsync@[22:14] # id 145
> > uid=145(vmail) gid=145(vmail) groups=145(vmail)
> >
> > mx3:/root/dsync@[1:14] # doveconf -n | grep 145
> > first_valid_uid = 145
> > last_valid_uid = 145
> >
> > I don't have any idea what the problem is.
> > Best regards.
>
>


Doveadm backup error.

2017-11-20 Thread Dovecot list
Hello. I am trying to migrate from Dovecot 1 to Dovecot 2 with doveadm backup.
But when I run doveadm backup I get:

mx3:/root/dsync@[23:11] # doveadm -v -c ah.temp backup -R -u a...@test.pl
 imapc:
doveadm(a...@test.pl): Error: Mail access for users with UID
145 not permitted (see first_valid_uid in config file, uid from userdb
lookup).
doveadm(a...@test.pl): Error: User init failed


mx3:/root/dsync@[22:13] # ls -la /home/mail/vhosts/test.pl/a...@test.pl/

total 1
drwxr-xr-x  2 vmail  vmail  2 Nov 12 23:59 .

mx3:/root/dsync@[22:14] # doveadm user a...@test.pl 
field   value
uid 145
gid 145
home    /home/mail/vhosts/test.pl/a...@test.pl
mail    mdbox:~/mdbox
quota_rule  *:storage=5M

mx3:/root/dsync@[22:14] # id 145
uid=145(vmail) gid=145(vmail) groups=145(vmail)

mx3:/root/dsync@[1:14] # doveconf -n | grep 145
first_valid_uid = 145
last_valid_uid = 145

I don't have any idea what the problem is.
Best regards.
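
One detail the error message points at: doveadm is invoked with -c ah.temp, so 
the UID check uses that file rather than the main dovecot.conf that doveconf -n 
reads. Assuming the temporary config is missing these lines (an assumption, not 
confirmed in the thread), they would need to be carried over:

# ah.temp -- same UID range as the main config
first_valid_uid = 145
last_valid_uid = 145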


Re: Migrating from Dovecot 1 to Dovecot 2

2017-11-09 Thread Dovecot list
Thanks for the reply.
I tried it, but at the start I get this error:
mx3:/root/dsync@[23:11] # doveadm -v -c ahuryn.temp backup -R -u
t...@test.pl imapc:
doveadm(t...@test.pl): Error: Mail access for users with UID 145 not
permitted (see first_valid_uid in config file, uid from userdb lookup).
doveadm(t...@test.pl): Error: User init failed

The mail directory is owned by 145:145.

Regards


Migrating from Dovecot 1 to Dovecot 2

2017-11-03 Thread Dovecot list
Hello.
I am trying to migrate about 200G of mail from one server to another.
On the old one I have Dovecot 1 with Maildirs (without a master password etc.); on the
new one I set up Dovecot 2 with mdbox. I now need to migrate mails from one to
the other (partially, not all at once).
I can't find any solution that I can use. I don't have a master password, and
I want to migrate all mail accounts one by one. Can anyone send me a working
config for this? Ideally the migrated mail would not have to be downloaded by
the mail clients one more time.
Thanks for any help.
Best regards.
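
A sketch of one common approach to this kind of staged migration (hostname and 
credentials are placeholders; it assumes each user's own password is known, 
since there is no master password). dsync tries to preserve UIDs, which is also 
what keeps clients from re-downloading everything:

# minimal imapc settings pointing at the old Dovecot 1 server
imapc_host = old.mailserver.example
imapc_port = 993
imapc_ssl = imaps

# then migrate one account at a time:
doveadm -o imapc_user=user@example.com -o imapc_password=secret \
  backup -R -u user@example.com imapc: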


Re: Locking Errors

2017-10-12 Thread List

> On Oct 12, 2017, at 10:43 AM, Bill Shirley <b...@knoxvillechristian.org> 
> wrote:
> 
> doveconf: Fatal: Error in configuration file 
> /etc/dovecot/conf.d/10-master.conf line 27: Invalid size: $default_vsz_limit
> 
> Have you thought about fixing the configuration error?
> 
> Bill
> 
> On 10/12/2017 10:05 AM, List wrote:
>>> On Oct 12, 2017, at 8:56 AM, Bill Shirley <b...@knoxvillechristian.org> 
>>> wrote:
>>> 
>>> Is everyone's email going to /mail/user/Maildir ?
>>> 
>>> What's the output of 'dovecot -n' ?
>>> 
>>> Bill
>>> 
>>> On 10/12/2017 9:47 AM, List wrote:
>>>> We recently upgraded to version 2.2.10-8 on Centos 7 (from 2.2.10-7) and 
>>>> immediately started experiencing errors in the log file that look like 
>>>> this:
>>>> 
>>>> Error: Broken file /mail/user/Maildir/dovecot-uidlist line 891: Invalid 
>>>> data:
>>>> Error: Timeout (180s) while waiting for lock for transaction log file 
>>>> /mail/user/Maildir/dovecot.index.log
>>>> Error: /mail/user/Maildir/.Trash/dovecot-uidlist: next_uid was lowered 
>>>> (421 -> 419, hdr=418)
>>>> Error: Corrupted index cache file /mail/user/Maildir/dovecot.index.cache: 
>>>> invalid record size
>>>> Error: Broken file /mail/user/Maildir/.Trash/dovecot-uidlist line 156: 
>>>> Invalid data:
>>>> Error: Corrupted index cache file /mail/user/Maildir/dovecot.index.cache: 
>>>> invalid record size
>>>> Error: Index /mail/user/Maildir/dovecot.index: Lost log for seq=2 
>>>> offset=2392
>>>> Error: fcntl(write-lock) locking failed for file 
>>>> /mail/user/Maildir/dovecot.index.log: Stale file handle
>>>> 
>>>> We are not sure what might be causing this, can anyone help illuminate 
>>>> what’s going on?
>> Bill, no I just changed the log output to mask the real usernames, these 
>> would be different users.  Here is dovecot -n
>> 
>> # 2.2.10: /etc/dovecot/dovecot.conf
>> doveconf: Fatal: Error in configuration file 
>> /etc/dovecot/conf.d/10-master.conf line 27: Invalid size: $default_vsz_limit
> 

I sent a second message after fixing it, with the full config.  Turns out the 
update from CentOS 7.3 to 7.4 introduced some kind of NFS bug that is causing this 
issue, not Dovecot. 
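
For the archive, that doveconf fatal usually means the $default_vsz_limit 
reference on line 27 cannot be expanded at that point. Either define the 
variable before it is used or put an explicit size on the offending line (a 
sketch; the value is an example):

# in dovecot.conf, before conf.d/10-master.conf is included:
default_vsz_limit = 256M

# or, on line 27 of conf.d/10-master.conf:
vsz_limit = 256M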

Re: Locking Errors

2017-10-12 Thread List

> On Oct 12, 2017, at 8:56 AM, Bill Shirley <b...@knoxvillechristian.org> wrote:
> 
> Is everyone's email going to /mail/user/Maildir ?
> 
> What's the output of 'dovecot -n' ?
> 
> Bill
> 
> On 10/12/2017 9:47 AM, List wrote:
>> We recently upgraded to version 2.2.10-8 on Centos 7 (from 2.2.10-7) and 
>> immediately started experiencing errors in the log file that look like this:
>> 
>> Error: Broken file /mail/user/Maildir/dovecot-uidlist line 891: Invalid data:
>> Error: Timeout (180s) while waiting for lock for transaction log file 
>> /mail/user/Maildir/dovecot.index.log
>> Error: /mail/user/Maildir/.Trash/dovecot-uidlist: next_uid was lowered (421 
>> -> 419, hdr=418)
>> Error: Corrupted index cache file /mail/user/Maildir/dovecot.index.cache: 
>> invalid record size
>> Error: Broken file /mail/user/Maildir/.Trash/dovecot-uidlist line 156: 
>> Invalid data:
>> Error: Corrupted index cache file /mail/user/Maildir/dovecot.index.cache: 
>> invalid record size
>> Error: Index /mail/user/Maildir/dovecot.index: Lost log for seq=2 offset=2392
>> Error: fcntl(write-lock) locking failed for file 
>> /mail/user/Maildir/dovecot.index.log: Stale file handle
>> 
>> We are not sure what might be causing this, can anyone help illuminate 
>> what’s going on?
> 

Bill, no I just changed the log output to mask the real usernames, these would 
be different users.  Here is dovecot -n

# 2.2.10: /etc/dovecot/dovecot.conf
doveconf: Fatal: Error in configuration file /etc/dovecot/conf.d/10-master.conf 
line 27: Invalid size: $default_vsz_limit

Re: OT: Central sieve management

2015-05-29 Thread list

 On May 29, 2015, at 7:46 AM, Robert Blayzor rblayzor.b...@inoc.net wrote:
 
 On May 28, 2015, at 4:24 PM, l...@airstreamcomm.net wrote:
 
 A bit off topic, but I was wondering if anyone here has a solution for 
 centrally managing sieve for multiple users from a custom web application?  
 We would like to implement pigeonhole sieve on our dovecot cluster, however 
 we need to be able to access user’s sieve configurations from a central 
 location for troubleshooting and support purposes.  
 
 
 Couldn’t this be done with ManageSieve and a master login?
 
 --
 Robert
 inoc.net!rblayzor
 Jabber: rblayzor.AT.inoc.net
 PGP Key: 78BEDCE1 @ pgp.mit.edu
 


Correct me if I am wrong, but isn’t ManageSieve just a protocol?  We were looking 
for a library or prebuilt tool that would talk to ManageSieve and that we could 
then hook into our in-house management application.
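
Besides standalone ManageSieve clients, newer Pigeonhole releases also ship 
doveadm sieve commands that manage users' scripts directly on the server side, 
which may be easier to call from a web application (a sketch; whether your 
installed version has them needs checking):

doveadm sieve list -u user@example.com
doveadm sieve get -u user@example.com default > default.sieve
doveadm sieve put -a -u user@example.com default < default.sieve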

OT: Central sieve management

2015-05-28 Thread list
A bit off topic, but I was wondering if anyone here has a solution for 
centrally managing sieve for multiple users from a custom web application?  We 
would like to implement pigeonhole sieve on our dovecot cluster, however we 
need to be able to access users’ sieve configurations from a central location 
for troubleshooting and support purposes.  

JMAP support

2015-03-13 Thread List
Just found http://jmap.io/ which is a JSON based RPC protocol for 
synchronizing messages/contacts/calendars.  Any plans to support this 
protocol in Dovecot?


Re: dovecot and glusterfs

2015-01-13 Thread List

On 1/13/15, 6:02 AM, Michael Schwartzkopff wrote:

Am Dienstag, 13. Januar 2015, 21:40:34 schrieb Nick Edwards:

On 1/13/15, Michael Schwartzkopff m...@sys4.de wrote:

Hi,

I did some experiments with dovecot on a glusterfs on the active nodes
without
a director. So I had concurrent access to the files.

With the help of the available documentation about NFS and fcntl locks I
managed to find out the following:

With the plain mbox format dovecot seems to apply and to honor the fcntl
locks.  But since this format is not used any more in real setups, it is
useless.

With mdbox and maildir format I could reliably crash my mail storage just by
delivering mails to both dovecots via LMTP to the same user. In maildir
dovecot seems not to set / respect the fcntl locks of the index file. dotlocks
do not seem to work either with mdbox.

So I think the only solution is to use a director in a real-world setup. Or is
there any non-obvious trick that I did not check?

Interesting. We use NFSv3 and the dovecot LDA with maildir. We have at present
two dozen front-end SMTP servers (using dovecot-lda) and, hrmm, we
added a few more over Christmas, so I think about 32 pop3 servers,
but with only 4 imap servers incl. webmail (IMAP is not heavily used
here due to government spy laws), talking to a NAS storage server
backend. *We do not use director* at all and it has never been an issue.
Director IIRC solves the problem of IMAP inconsistency, but we never
saw an advantage when we tested it; no doubt it solves some fancy setup
problem, but since director cannot help with pop3, it was not worth
the hassle. Never had any problems with webmail either; the load balancers
seem to look after it well.

Yes. NFS has its own locking. I wanted to use plain glusterfs client without
the detour of NFS. Thanks for your hint.

Mit freundlichen Grüßen,

Michael Schwartzkopff



The last time we experimented with Glusterfs (two years ago) the native 
client was actually not able to maintain consistency as well as NFS, 
for a reason that I cannot remember anymore.  We used maildir, and when 
using NFS we were able to deliver about a hundred thousand emails per 
hour and do a couple hundred thousand IMAP and POP3 retrievals per hour 
against a modest four-node Gluster cluster with four Dovecot/Postfix 
servers (running in vmware).


imapc only backing up INBOX

2015-01-07 Thread List
I am attempting to pull email from gmail IMAP to my local machine and 
with the configuration I have I only seem to get messages from the INBOX 
folder.  Hoping I could get some assistance getting all the gmail 
folders to download.


Here is the imapc config:

imapc_host = 64.233.171.108
imapc_user = %u
imapc_master_user = master
imapc_password = somepass
imapc_features = rfc822.size
# If you have Dovecot v2.2.8+ you may get a significant performance 
improvement with fetch-headers:

imapc_features = $imapc_features fetch-headers
# Read multiple mails in parallel, improves performance
mail_prefetch_count = 20

# If the old IMAP server uses INBOX. namespace prefix, set:
imapc_list_prefix = Gmail

# for SSL:
imapc_port = 993
imapc_ssl = imaps
#imapc_ssl_ca_dir = /etc/ssl
imapc_ssl_verify = no

And the doveadm command I am running:

doveadm -D -o imapc_user=$username -o imapc_password=$escaped_password 
backup -R -x '\All' -x '\Flagged' -x '\Important' -u $username imapc:
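
One possible cause, an assumption on my part rather than anything confirmed in 
the thread: Gmail does not use an "INBOX." style prefix, and its system folders 
live under "[Gmail]" (the English-locale default) rather than "Gmail", so the 
prefix above may filter everything out:

# leave the prefix unset so every folder is listed,
# including [Gmail]/Sent Mail, [Gmail]/Drafts etc.
#imapc_list_prefix =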


Re: Required SSL with exceptions

2014-12-09 Thread List

On 12/9/14, 12:50 AM, SATOH Fumiyasu wrote:

Hi,

At Mon, 08 Dec 2014 16:01:43 -0600,
List wrote:

Essentially we would like to host IMAP with SSL enforced for any connections 
coming from anywhere except the subnet where our other mail servers reside.  
The idea is to not install a local instance of dovecot on the 
webmail/carddav/caldav servers to reduce the number of instances that need to 
be managed.  Is it possible to have two imap listeners, where ssl is enforced 
on one port, and not on another?

Use login_trusted_networks parameter.



Excellent, that's exactly what I was looking for.  Thank you!
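
For the archive, a minimal sketch of that setup (the subnet is an example):

ssl = required
disable_plaintext_auth = yes
# connections from this subnet are treated as secure, so the
# webmail/caldav/carddav hosts may log in without STARTTLS:
login_trusted_networks = 192.168.10.0/24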


Required SSL with exceptions

2014-12-08 Thread List
I have a Dovecot cluster which is on separate machines from my 
webmail/caldav/carddav cluster, and I currently have the system set up 
with ssl = required.  Unfortunately the caldav/carddav server I am 
running doesn't support STARTTLS, so I was wondering if there is a way to 
still enforce ssl for every connection with the exception of a certain 
subnet, or if there is a better way to accomplish this without installing a 
local instance of Dovecot on each of my caldav/carddav servers.


Re: Required SSL with exceptions

2014-12-08 Thread List

On 12/8/14, 1:45 PM, Robert Schetterer wrote:

Am 08.12.2014 um 19:41 schrieb List:

I have a Dovecot cluster which is on separate machines from my
webmail/caldav/carddav cluster, and I currently have the system set up
with ssl = required.  Unfortunately the caldav/carddav server I am
running doesn't support STARTTLS, so I was wondering if there is a way to
still enforce ssl for every connection with the exception of a certain
subnet, or if there is a better way to accomplish this without installing a
local instance of Dovecot on each of my caldav/carddav servers.

perhaps this helps

http://wiki2.dovecot.org/SSL/DovecotConfiguration?highlight=%28trusted%29


There are a couple of different ways to specify when SSL/TLS is required:

 disable_plaintext_auth=yes allows plaintext authentication only when
SSL/TLS is used first.

 ssl = required requires SSL/TLS also for non-plaintext authentication.

 If you have only plaintext mechanisms enabled (auth { mechanisms =
plain login } ), you can use either (or both) of the above settings.
They behave exactly the same way then.

Note that plaintext authentication is always allowed (and SSL not
required) for connections from localhost, as they're assumed to be
secure anyway. This applies to all connections where the local and the
remote IP addresses are equal. Also IP ranges specified by
login_trusted_networks setting are assumed to be secure.



Best Regards
MfG Robert Schetterer



Essentially we would like to host IMAP with SSL enforced for any 
connections coming from anywhere except the subnet where our other mail 
servers reside.  The idea is to not install a local instance of dovecot 
on the webmail/carddav/caldav servers to reduce the number of instances 
that need to be managed.  Is it possible to have two imap listeners, 
where ssl is enforced on one port, and not on another?


doveadm backup gmail using imapc

2014-12-05 Thread List
I am trying to sync a gmail inbox with dovecot 2.2.10 using the 
following config:


imapc_host = 64.233.171.108
imapc_user = %u
imapc_master_user = master
imapc_password = secret
imapc_features = rfc822.size
imapc_features = $imapc_features fetch-headers
mail_prefetch_count = 20

# If the old IMAP server uses INBOX. namespace prefix, set:
#imapc_list_prefix = INBOX

# for SSL:
imapc_port = 993
imapc_ssl = imaps
#imapc_ssl_ca_dir = /etc/ssl
imapc_ssl_verify = no

And the doveadm command:

doveadm -D -o imapc_user=t...@domain.tld -o imapc_password=password 
backup -R -x '\All' -x '\Flagged' -x '\Important' -u t...@domain.tld imapc:


I am getting the error:

dsync(t...@domain.tld): Error: Mailbox INBOX sync: mailbox_delete failed: 
INBOX can't be deleted.


What I really want to do is just sync Gmail's inbox, drafts, sent, 
trash/archive, and spam folders to my new system.  Is this possible 
using imapc?
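
One variation worth trying, an assumption on my part since the thread records 
no confirmed fix: exclude Gmail's virtual folders by name rather than by 
special-use flag, since [Gmail]/All Mail duplicates every message (including 
the INBOX) and can confuse the sync:

doveadm -D -o imapc_user=t...@domain.tld -o imapc_password=password \
  backup -R -x '[Gmail]/All Mail' -x '[Gmail]/Important' -x '[Gmail]/Starred' \
  -u t...@domain.tld imapc: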


Exporting plain_pass using passwd driver

2014-12-03 Thread List
I am trying to get a post-login script running that can see 
plain_pass=%w while using the passwd driver.  This is running Dovecot 
2.0.7, and so far everything I have tried results in the value being empty.


Postlogin script in v1

2014-11-03 Thread List

Does dovecot-1.0.7-7.el5 support postlogin scripting?


Object storage

2014-08-26 Thread List

Timo,

We are contemplating building an S3 compatible cluster of storage 
servers using Pithos https://github.com/exoscale/pithos and Cassandra 
to get multi data center redundancy and availability. Would it be 
possible to have multiple Dovecot servers in disparate geographical 
locations all talking to the same object storage, assuming it's 
consistent between sites?  From what I remember about obox the dovecot 
servers all keep a copy of the index locally and merge it with a file in 
the object storage at intervals; is this an issue with our concept?  
Also, does your team have any performance analysis data on the obox 
plugin, or data on its use in production?


[Dovecot] NoSQL support

2014-06-04 Thread List
Is there any support for NoSQL databases such as Cassandra (CQL) or 
MongoDB now or planned in the future for userdb and passdb lookups?


Re: [Dovecot] Can't get authentication for masterusers on Mac OS X Server 10.6.8

2014-03-03 Thread list
Since you've defined verbose auth logging you should get some 
interesting log files about your failed login attempts that could point 
us in the right direction.

Matthijs

On Mon, Mar 03, 2014 at 03:37:31PM +0100, Gilles Celli wrote:
 Hi dovecot masters,
 
 This is my first post here, since I desperately need some advice from the 
 dovecot community.
 I've tried to get an answer on the Apple Forums but till now no luck... here 
 we go:
 
 I've tried to sync our users emails (Mac OS X Server 10.6.8 Snow Leopard with 
 dovecot 1.1.20-apple0.5) via imapsync
 to our new server by using the masterusers authentication method on the old 
 10.6.8 server...
 
 The main problem on OS X Server 10.6.8 is that dovecot 1.1.20 uses the OD 
 (OpenDirectory) driver (well I think),
 so that when following the directions for master users/passwords from this page 
 I can't log in:
 http://wiki1.dovecot.org/Authentication/MasterUsers
 
 I couldn't find anything on the OD driver directive... the dovecot 
 1.1.20-apple build doesn't even have the shadow driver built in (see below 
 the dovecot --build-options),
 so that passdb shadow {} won't work anyway
 
 
 I always get NO Authentication failed, when trying the following:
 telnet localhost 143
 Trying 127.0.0.1...
 Connected to localhost.
 Escape character is '^]'.
 * OK Dovecot ready.
 1 login user1*mailadmin PASSWORD
 1 NO Authentication failed.
  
 I've tried also to add a Post-login scripting like described here, but no 
 luck either:
 http://www.stefanux.de/wiki/doku.php/server/dovecot
 
 Does someone know how to fix my migration issue ?
 
 Any help is greatly appreciated.
 
 Gilles
 
 Here's my dovecot :
 
 dovecotd --build-options
 Build options: ioloop=kqueue notify=kqueue ipv6 openssl
 Mail storages: maildir mbox dbox cydir raw
 SQL drivers:
 Passdb: checkpassword od pam passwd passwd-file
 Userdb: od passwd passwd-file prefetch static
 
 
 Here's my dovecot -n output:
 
 dovecotd -n
 
 # 1.1.20apple0.5: /private/etc/dovecot/dovecot.conf
 Warning: fd limit 256 is lower than what Dovecot can use under full load 
 (more than 306). Either grow the limit or change login_max_processes_count 
 and max_mail_processes settings
 # OS: Darwin 10.8.0 i386  hfs
 base_dir: /var/run/dovecot
 syslog_facility: local6
 protocols: pop3 imap pop3s imaps
 ssl_ca_file: 
 /etc/certificates/Default.DB14D82BF89A0DDCE123137BC94AEA0C94DDD838.chain.pem
 ssl_cert_file: 
 /etc/certificates/Default.DB14D82BF89A0DDCE123137BC94AEA0C94DDD838.cert.pem
 ssl_key_file: 
 /etc/certificates/Default.DB14D82BF89A0DDCE123137BC94AEA0C94DDD838.key.pem
 ssl_cipher_list: ALL:!LOW:!SSLv2:!aNULL:!ADH:!eNULL
 disable_plaintext_auth: no
 login_dir: /var/run/dovecot/login
 login_executable(default): /usr/libexec/dovecot/imap-login
 login_executable(imap): /usr/libexec/dovecot/imap-login
 login_executable(pop3): /usr/libexec/dovecot/pop3-login
 login_user: _dovecot
 login_process_per_connection: no
 max_mail_processes: 50
 mail_max_userip_connections(default): 20
 mail_max_userip_connections(imap): 20
 mail_max_userip_connections(pop3): 10
 verbose_proctitle: yes
 first_valid_uid: 6
 first_valid_gid: 6
 mail_access_groups: mail
 mail_location: maildir:/var/spool/imap/dovecot/mail/%u
 mail_executable(default): /usr/libexec/dovecot/imap
 mail_executable(imap): /usr/libexec/dovecot/imap
 mail_executable(pop3): /usr/libexec/dovecot/pop3
 mail_process_sharing: full
 mail_max_connections(default): 10
 mail_max_connections(imap): 10
 mail_max_connections(pop3): 5
 mail_plugins(default): quota imap_quota
 mail_plugins(imap): quota imap_quota
 mail_plugins(pop3): quota
 mail_plugin_dir(default): /usr/lib/dovecot/imap
 mail_plugin_dir(imap): /usr/lib/dovecot/imap
 mail_plugin_dir(pop3): /usr/lib/dovecot/pop3
 lda:
   postmaster_address: postmas...@example.com
   hostname: mymailserver.example.com
   mail_plugins: quota
   quota_full_tempfail: yes
   sendmail_path: /usr/sbin/sendmail
   auth_socket_path: /var/run/dovecot/auth-master
   log_path: /var/log/mailaccess.log
   info_log_path: /var/log/mailaccess.log
 auth default:
   mechanisms: plain login gssapi apop cram-md5
   master_user_separator: *
   verbose: yes
   passdb:
 driver: passwd-file
 args: /etc/dovecot/passwd.masterusers
 pass: yes
 master: yes
   passdb:
 driver: od
   userdb:
 driver: od
 args: partition=/etc/dovecot/partition_map.conf enforce_quotas=no
   socket:
 type: listen
 master:
   path: /var/run/dovecot/auth-master
   mode: 384
   user: _dovecot
   group: mail
 plugin:
   quota_warning: storage=100%% /usr/libexec/dovecot/quota-exceeded.sh
   quota_warning2: storage=90%% /usr/libexec/dovecot/quota-warning.sh
   quota: maildir:User quota
   sieve: /var/spool/imap/dovecot/sieve-scripts/%u/dovecot.sieve


pgpVhy91fLKsH.pgp
Description: PGP signature


Re: [Dovecot] Can't get authentication for masterusers on Mac OS X Server 10.6.8

2014-03-03 Thread list
Try getting more verbose logs using dovecot's logging mechanisms.
auth_verbose = yes
auth_debug = yes
It seems that you aren't authenticating your master users against your 
passwd file, instead you are authenticating against your OpenDirectory.


Re: [Dovecot] 2 users database on same LDAP with different mail location

2014-02-25 Thread list
On Tue, Feb 25, 2014 at 11:42:33AM +0100, Francesco wrote:
 Hello,
 i know i know, i'm getting annoying but apparently i always come up
 with weird ideas and i can't seem to accomplish such a task.
 
 the scenario is that i have an LDAP server with a bunch of users.
 some of them are in a specific OU, and i'd like to define for all these
 users belonging to this OU an alternative mail location/storage.
 
 in details for all the users i'd like to use maildir storage in a
 directory, while for the users belonging to a specific OU i'd like to
 use dbox with an alternative storage attached.
 
 so i created 2 userdb like this:
 
 userdb {
   driver = ldap
   args = /etc/dovecot/dovecot-ldap-maildir.conf.ext
 }
 
 userdb {
   driver = ldap
   args = /etc/dovecot/dovecot-ldap-dbox.conf.ext
 }
 
 and then defined these 2 args files:
 maildir:
 
 hosts = localhost
 dn = CN=ldapadmin,OU=administrators,DC=plutone,DC=local
 dnpass = password
 auth_bind = yes
 ldap_version = 3
 base = DC=plutone,DC=local
 user_attrs = sAMAccountName=home=/var/vmail/%$
 
 dbox:
 
 hosts = localhost
 dn = CN=ldapadmin,OU=administrators,DC=plutone,DC=local
 dnpass = password
 auth_bind = yes
 ldap_version = 3
 base = OU=dboxusers,OU=lowpriority,DC=plutone,DC=local
 user_attrs = sAMAccountName=home=/var/local_dbox/%$,
 =mail=dbox:/var/local_dbox/%$:ALT=/var/iscsi_dbox/%$
 user_filter = (&(ObjectClass=person)(mail=%u))
 
 
 yet it doesn't matter how hard i try if i send an email to a user
 belonging to the dboxusers OU i still have the user to be addressed to
 the maildir storage in /var/vmail
 
 am i missing something?
 
 Thanks
 Francesco

You can use LDAP to search for an alternative mail attribute, and specify a 
default location using mail_location. In your example: mail_location = 
/var/vmail/%u. Then use one LDAP config file to override the mailbox 
location if the LDAP database specifies a maildir location.

By the way, aren't userdb's searched sequentially? Try switching those userdb's 
to make the one with the group 
lookup go first. LDAP users will always match the userdb without group lookup.

Matthijs


Re: [Dovecot] 2 users database on same LDAP with different mail location

2014-02-25 Thread list
On Tue, Feb 25, 2014 at 01:29:37PM +0100, l...@grootstyr.eu wrote:
 On Tue, Feb 25, 2014 at 11:42:33AM +0100, Francesco wrote:
  Hello,
  i know i know, i'm getting annoying but apparently i always come up
  with weird ideas and i can't seem to accomplish such a task.
  
  the scenario is that i have an LDAP server with a bunch of users.
  some of them are in a specific OU, and i'd like to define for all these
  users belonging to this OU an alternative mail location/storage.
  
  in details for all the users i'd like to use maildir storage in a
  directory, while for the users belonging to a specific OU i'd like to
  use dbox with an alternative storage attached.
  
  so i created 2 userdb like this:
  
  userdb {
driver = ldap
args = /etc/dovecot/dovecot-ldap-maildir.conf.ext
  }
  
  userdb {
driver = ldap
args = /etc/dovecot/dovecot-ldap-dbox.conf.ext
  }
  
  and then defined these 2 args files:
  maildir:
  
  hosts = localhost
  dn = CN=ldapadmin,OU=administrators,DC=plutone,DC=local
  dnpass = password
  auth_bind = yes
  ldap_version = 3
  base = DC=plutone,DC=local
  user_attrs = sAMAccountName=home=/var/vmail/%$
  
  dbox:
  
  hosts = localhost
  dn = CN=ldapadmin,OU=administrators,DC=plutone,DC=local
  dnpass = password
  auth_bind = yes
  ldap_version = 3
  base = OU=dboxusers,OU=lowpriority,DC=plutone,DC=local
  user_attrs = sAMAccountName=home=/var/local_dbox/%$,
  =mail=dbox:/var/local_dbox/%$:ALT=/var/iscsi_dbox/%$
  user_filter = (&(ObjectClass=person)(mail=%u))
  
  
  yet it doesn't matter how hard i try if i send an email to a user
  belonging to the dboxusers OU i still have the user to be addressed to
  the maildir storage in /var/vmail
  
  am i missing something?
  
  Thanks
  Francesco
 
 You can use LDAP to search for an alternative mail attribute, and specify a 
 default location using 
 mail_location. In your example; mail_location = /var/vmail/%u. Then use one 
 LDAP config file to override the 
 mailbox location if the LDAP database specifies a maildir location.
 
 By the way, aren't userdb's searched sequentially? Try switching those 
 userdb's to make the one with the group 
 lookup go first. LDAP users will always match the userdb without group lookup.
 
   Matthijs

An addition to my own comment: put the group-lookup userdb first, and add 
skip = found to the second userdb. This way it searches the group userdb first 
and, if it finds the user there (i.e. the user is in the group), it skips the 
second userdb and uses the answer from the first.

Matthijs
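
Putting both suggestions together, the userdb blocks from the original post 
would become (a sketch):

# group (dbox) lookup first; the maildir fallback is skipped on a match
userdb {
  driver = ldap
  args = /etc/dovecot/dovecot-ldap-dbox.conf.ext
}

userdb {
  driver = ldap
  args = /etc/dovecot/dovecot-ldap-maildir.conf.ext
  skip = found
}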


Re: [Dovecot] Basic clustered filesystem advice

2013-09-18 Thread List

On 9/17/13 3:23 PM, Andreas Gaiser wrote:

Does anybody know about GlusterFS & Dovecot?


...Andreas


Time marches on, and I need to continue the service migration. I'd still
like to use Dovecot (we're migrating away from Cyrus).  I'm assuming the
only other alternative without existing shared storage is to use DRBD
and a cluster file system to provide the replication, and to ensure
Director is enabled.  Are there any things to watch for surrounding
this?
We tested glusterfs 3.2 a while ago using four storage nodes, four 
Dovecot/Postfix machines, and a number of email client bots that 
generated upwards of 180k inbound messages per hour, and upwards of 
360k pop/imap connections per hour.  Unfortunately we did not grab any 
metrics on how long it took for a POP/IMAP session to 
open/read/delete/close or how long SMTP transactions took, we simply 
wanted to see how much load would be generated which was reasonable for 
the machines we used.  All storage and mail machines were virtual 
(vmware) and consisted of 2vcpu with 8gigs mem running Centos 6.1.  We 
tested both NFS and the gluster native client and didn't see much 
difference in perceived load on the system.  We did not run into any of 
the issues that are common with running Dovecot over NFS during our 
testing, which we attribute to a proper configuration for NFS and solid 
NTP.  We ran an extended test that lasted for about two months and 
nothing really hiccuped or failed to function so I would call it a 
success to that extent.


We also tested stretching the glusterfs cluster between our two data 
centers which are 100 miles apart as the fiber lays.  Our latency is 
very low and stable between sites, and resulted in a small increase in 
load on the cluster.  I would not recommend this concept over anything 
but the most stable and fault tolerant WAN imaginable, but it seemed to 
work reasonably well for the duration of the testing we did (about a day 
long test).


If I were to do it again obviously I would grab metrics and compare it 
to access times for a basic single server system on local disk and an 
NFS backed system using multiple servers, but alas we were just propping 
it up for fun and see how far we could abuse it.  If one could assume 
that Glusterfs does scale linearly with more nodes you could continue to 
add capacity to the storage layer and grow the cluster, but that's 
another level of testing all together.




Re: [Dovecot] Basic clustered filesystem advice

2013-09-18 Thread List

On 9/18/13 11:20 AM, Tim Groeneveld wrote:


- Original Message -

On 9/17/13 3:23 PM, Andreas Gaiser wrote:

Does anybody know about GlusterFS & Dovecot?


Time marches on, and I need to continue the service migration. I'd
still
like to use Dovecot (we're migrating away from Cyrus).  I'm
assuming the
only other alternative without existing shared storage is to use
DRBD
and a cluster file system to provide the replication, and to
ensure
Director is enabled.  Are there any things to watch for
surrounding
this?


I still want my dream of the perfect mail storage engine in
dovecot to one day to be true.

This magical mailbox format  storage engine would allow
storing emails in different geographical locations. There
would be an attempt to ensure that mail is always closest
to the user (ie, the mail server that the user connects
to retrieve email from).

Then you could define how many copies of each user's mail
would be stored on a per-user basis, but those copies could
be stored on any storage server, but not more then x times
per network location.

Unfortunately, this mystical engine does not sound like
it is going to be built in the next handful of years at
least.

A man can dream.

Regards,
Tim




Tim, I too have had this dream but it feels very much like people just 
don't care about geo-distributed messaging at scale.  Since Dovecot now 
supports storing messages in S3-compatible storage (using obox) I was 
thinking about extending an object storage app I developed using node.js 
on top of Cassandra to implement the S3 API and see if that could breathe 
some life into this concept.  When time permits, I suppose.




[Dovecot] Dovecot 2.1.1 crash

2013-09-16 Thread List

Dovecot RPM from atrpms crashed, here are the logs:

Aug 31 11:55:08 10.123.128.231 dovecot: imap(user@domain): Panic: 
Message count decreased
Aug 31 11:55:08 10.123.128.231 dovecot: imap(user@domain): Error: Raw 
backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0x4184a) [0x7f144f00384a] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x41896) [0x7f144f003896] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x1934a) [0x7f144efdb34a] -> 
dovecot/imap() [0x417ba9] -> dovecot/imap() [0x40a636] -> dovecot/imap() 
[0x40a96c] -> dovecot/imap(command_exec+0x3d) [0x410a5d] -> 
dovecot/imap(client_command_cancel+0x3a) [0x40f3da] -> 
dovecot/imap(client_destroy+0xdd) [0x41025d] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f144f00fd16] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x9f) [0x7f144f010d9f] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f144f00fcb8] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f144effc0d3] -> 
dovecot/imap(main+0x29d) [0x418a0d] -> 
/lib64/libc.so.6(__libc_start_main+0xfd) [0x7f144ec4dcdd] -> 
dovecot/imap() [0x408469]
Aug 31 11:55:09 10.123.128.231 dovecot: imap(user@domain): Fatal: 
master: service(imap): child 20986 killed with signal 6 (core dumps 
disabled)


Doveconf -n:

# 2.1.1: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-279.19.1.el6.x86_64 x86_64 CentOS release 6.3 (Final)
auth_master_user_separator = *
auth_mechanisms = plain login
disable_plaintext_auth = no
first_valid_uid = 300
mail_fsync = always
mail_location = maildir:~/Maildir
mail_nfs_index = yes
mail_nfs_storage = yes
mbox_write_locks = fcntl
mmap_disable = yes
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox Sent Messages {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/sql.conf.ext
  driver = sql
}
service imap-login {
  inet_listener imap {
port = 143
  }
  process_min_avail = 4
  service_count = 0
  vsz_limit = 128 M
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  process_min_avail = 4
  service_count = 0
  vsz_limit = 128 M
}
ssl_ca = /usr/local/admin/certs/wildcard.ca
ssl_cert = /usr/local/admin/certs/wildcard.crt
ssl_key = /usr/local/admin/certs/wildcard.key
userdb {
  args = /etc/dovecot/telco-sql.conf.ext
  driver = sql
}
protocol imap {
  mail_max_userip_connections = 30
}


Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing.

2013-01-26 Thread Torpey List
SOLVED.
It turns out it was SELinux that was causing this error (as well as others):
Jan 26 17:32:58 nala kernel: type=1400 audit(1359243178.285:5768): avc:  denied 
 { setgid } for  pid=30558 comm=dovecot-lda capability=6  
scontext=unconfined_u:system_r:dovecot_deliver_t:s0 
tcontext=unconfined_u:system_r:dovecot_deliver_t:s0 tclass=capability

The errors were combined into err.txt using the following command.
  grep audit /var/log/messages | grep dovecot-lda > err.txt

Then a SELinux policy module was generated using:
  audit2allow -i err.txt -M dovecot-lda

which made a file dovecot-lda.te that contained the following:
  module dovecot-lda 2.1;

  require {
  type var_log_t;
  type dovecot_deliver_t;
  type etc_runtime_t;
  class capability { setuid dac_read_search setgid dac_override };
  class file append;
  class dir write;
  }

  #= dovecot_deliver_t ==
  allow dovecot_deliver_t etc_runtime_t:file append;
  # This avc is allowed in the current policy

  allow dovecot_deliver_t self:capability setgid;
  allow dovecot_deliver_t self:capability { setuid dac_read_search dac_override 
};
  # The source type 'dovecot_deliver_t' can write to a 'dir' of the 
following types:
  # user_home_t, dovecot_deliver_tmp_t, user_home_dir_t, tmp_t, mail_spool_t, 
nfs_t

  allow dovecot_deliver_t var_log_t:dir write;

If you make any changes to dovecot-lda.te (like the version number, because you 
have already tried to load it into SELinux), then you have to run the following 
command:
   make

Finally, to get it incorporated into SELinux:
   semodule -i dovecot-lda.pp

This has been driving me crazy for a month; I am surprised that I could not 
find a straightforward solution.
I have to give credit to the following bugzilla that helped me use the 
audit2allow in an automated way that provided the necessary detail to generate 
dovecot-lda.te listed above.
   https://bugzilla.redhat.com/show_bug.cgi?id=667579

My mail is flowing in tests; now I need to make it work with a larger stream.

Thanks,
Steve


-Original Message- 
From: Steffen Kaiser 
Sent: Thursday, January 03, 2013 1:02 AM 
To: Dovecot Mailing List 
Cc: Torpey List 
Subject: Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing. 

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Tue, 1 Jan 2013, Torpey List wrote:

 Dovecot-lda – I have had issues getting it configured.

 What issues? If you were trying to get the LDA to deliver to /var/mail,
 it's possible you were running into permissions problems. The best
 solution is to deliver into the mdbox instead, or just leave Sendmail to
 deliver to /var/mail.

 Sendmail changes
 FEATURE(`local_procmail',
 `/usr/libexec/dovecot/dovecot-lda',`/usr/libexec/dovecot/dovecot-lda
 -d $u')
 MODIFY_MAILER_FLAGS(`LOCAL', `-f')
 MAILER(procmail)dnl


I do use:
FEATURE(`local_procmail', `/etc/mail/smrsh/dovecot-deliver', 
`/etc/mail/smrsh/dovecot-deliver -f $g -d $u -m $h')dnl

Note, you need a symlink in your smrsh-directory anyway.

 The option that has gone the furthest is *Making dovecot-lda setuid-root*.

I don't use a setuid-root LDA.

 However, I have errors.  Here are the permissions.

   -rwxr-xr-x. 1 root secmail 26512 Aug 18  2011 
 /usr/libexec/dovecot/dovecot-lda

Your LDA is not setuid-root ;-)

   srw---. 1 mail root 0 Jan  1 08:39 /var/run/dovecot/auth-userdb

Do you need to protect /var/run/dovecot/auth-userdb that tight? I mean, is 
this server used by users via ssh or something? Otherwise make the Unix 
permission of that socket so, that any system user can read from it (aka 
0666). 
Maybe, put all mail users into the same group and use 0660. Change group 
of auth-userdb to mail ... .


 Errors.
 == /var/log/maillog ==
 Jan  1 08:24:02 nala sendmail[20154]: r01EO2qc020154: from=u...@yahoo.com, 
 size=5723, class=0, nrcpts=1, 
 msgid=1357050226.83142.yahoomail...@web120205.mail.ne1.yahoo.com, 
 proto=ESMTP, daemon=MTA, relay=mail.example.com [192.168.1.152]
 Jan 01 08:24:02 lda: Error: userdb lookup: 
 connect(/var/run/dovecot/auth-userdb) failed: Permission denied (euid=0(root) 
 egid=0(root) missing +r perm: /var/run/dovecot/auth-userdb, euid is dir owner)
 Jan 01 08:24:02 lda: Fatal: Internal error occurred. Refer to server log for 
 more information.

That error seems to indicate a Dovecot permission check failure, but IMHO 
root is allowed to connect always. You could try to chmod +x 
/var/run/dovecot/auth-userdb, the x-perm disables the check of Dovecot.

 Jan  1 08:24:02 nala sendmail[20155]: r01EO2qc020154: to=u...@example.com, 
 delay=00:00:00, xdelay=00:00:00, mailer=local, pri=35889, dsn=4.0.0, 
 stat=Deferred: local mailer (/usr/libexec/dovecot/dovecot-lda) exited with 
 EX_TEMPFAIL

 == /var/log/messages ==
 Jan  1 08:24:02 nala kernel: type=1400 audit(1357050242.947:42): avc:  denied 
  { dac_override } for  pid=20156 comm=dovecot-lda capability=1  
 scontext

Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing.

2013-01-06 Thread Torpey List



-Original Message- 
From: Steffen Kaiser

Sent: Thursday, January 03, 2013 1:02 AM
To: Dovecot Mailing List
Cc: Torpey List
Subject: Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing.

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Tue, 1 Jan 2013, Torpey List wrote:


Dovecot-lda – I have had issues getting it configured.


What issues? If you were trying to get the LDA to deliver to /var/mail,
it's possible you were running into permissions problems. The best
solution is to deliver into the mdbox instead, or just leave Sendmail to
deliver to /var/mail.


Sendmail changes
FEATURE(`local_procmail',
`/usr/libexec/dovecot/dovecot-lda',`/usr/libexec/dovecot/dovecot-lda
-d $u')
MODIFY_MAILER_FLAGS(`LOCAL', `-f')
MAILER(procmail)dnl



I do use:
FEATURE(`local_procmail', `/etc/mail/smrsh/dovecot-deliver',
`/etc/mail/smrsh/dovecot-deliver -f $g -d $u -m $h')dnl

Note, you need a symlink in your smrsh-directory anyway.


This appears to have been my road block.  Mail has started moving, so now I 
need to do testing to make sure everything else is working.


I knew that I was missing a detail.

Thank you so much,
Steve 



Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing.

2013-01-01 Thread Torpey List


-Original Message- 
From: Ben Morrow 
Sent: Monday, December 31, 2012 8:52 PM 
To: Dovecot Mailing List 
Subject: Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing. 

At  5PM -0600 on 31/12/12 you (Torpey List) wrote:
 Sendmail 8.14.4
 dovecot 2.0.9

 I have sendmail working and it is sending mail to /var/mail/%u.
 I have dovecot working in that I can move emails into IMAP folders and
 I can send email through IMAP. I have set up dovecot to use mdbox
 based on the following:
 mail_location = mdbox:~/mail

 However, I seem to be lacking a key piece of information.
 Sendmail is sending the mail to /var/mail/%u as a mbox (single file
 for all emails) format.
 Dovecot wants to read the mail in mdbox (Multiple messages per file,
 but unlike mbox multiple files per mailbox.) So the two programs are
 not working together.

 So, I cannot get dovecot to read new emails at /var/mail/%u.
 So I tried changing to the following:
 mail_location = mdbox:~/mail:INBOX=/var/mail/%u
 However, dovecot complains that it is NOT a directory. That is
 because sendmail is sending as mbox format.

 I have tried two lines of “mail_location” but that did not work.
 example
 mail_location = mdbox:~/mail  for dovecot
 mail_location = mbox:INBOX=/var/mail/%u - for sendmail

No, that doesn't work: in fact, the second line will completely override
the first. If you run 'doveconf -n' or 'doveconf mail_location' you will
see that the first line doesn't have any effect.


I did not expect it to work, but I was trying all that I could before posting a 
question.

If you want to keep INBOX delivery to mboxes in /var/mail, you can do
this using two namespaces. One points to mdbox:~/mail, and holds the
users' ordinary IMAP folders in mdbox format, and the other has
INBOX=/var/mail/%u and just holds the INBOX. There is an example in
http://wiki2.dovecot.org/Namespaces of doing this with Maildir and mbox;
adjusting it for mdbox shouldn't be hard.

You will find you need a directory for each user to hold the other
folders in the INBOX namespace, since Dovecot doesn't know there won't
ever be any. This directory is also used to store Dovecot's index files
for that namespace, and it should *not* be the same as the mdbox
directory. According to http://wiki2.dovecot.org/MailLocation/mbox , you
can skip this if you use

   location = mbox:/var/empty:INBOX=/var/mail/%u:INDEX=MEMORY

(assuming /var/empty is a readonly root-owned empty directory), but
since this tells Dovecot not to store index files on disk it may make
INBOX access less efficient. If you use a real directory rather than
/var/empty you may want to consider enabling the acl plugin and setting
up a global ACL which prevents users from creating additional folders in
the INBOX namespace.

It's probably also a good idea to set mail_location = mdbox:~/mail and
omit the location parameter from the mdbox namespace, since IIRC
otherwise commands like 'doveadm purge' won't work correctly.
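
A sketch of the two-namespace layout described above, adapted from the wiki's 
Maildir/mbox example to mdbox (prefix, separator and the hidden/list settings 
are assumptions carried over from that example):

mail_location = mdbox:~/mail

# INBOX namespace: mbox in /var/mail, in-memory indexes
namespace {
  separator = /
  prefix = "#mbox/"
  location = mbox:/var/empty:INBOX=/var/mail/%u:INDEX=MEMORY
  inbox = yes
  hidden = yes
  list = no
}

# everything else: mdbox folders, location inherited from mail_location
namespace {
  separator = /
  prefix =
}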


I am going to try an option below.

 I have tried LMTP and dovecot-lda.

If you want to deliver mail into the mdbox INBOX, and forget about
/var/mail altogether, you will need to get one of these two working
since Sendmail doesn't understand mdbox. This is probably the best
option in the long run, unless you have other software which relies on
mail being in /var/mail. If you pick this option you need to remove all
references to /var/mail from dovecot.conf; with the two lines you had
above Dovecot will simply carry on delivering into /var/mail just as
Sendmail had been.


I would like to deliver new mail into the mdbox INBOX and forget about 
/var/mail but I did not see how to do this.  I think that was the piece of 
information that I am missing.

 LMTP – I could not see any difference with this added or not.

If you had configured Dovecot to deliver into /var/mail, that's hardly
surprising. Otherwise, are you sure you were delivering mail to the LMTP
server? If you were you should have seen entries in Dovecot's log file,
and the delivered mail should have ended up with a Received header from
the LMTP server.


I have used egrep and there is no line that has /var/mail that is uncommented 
in any of the config files.

Based on your comment, then no I do not believe the new mail was going through 
LMTP.

 Dovecot-lda – I have had issues getting it configured.

What issues? If you were trying to get the LDA to deliver to /var/mail,
it's possible you were running into permissions problems. The best
solution is to deliver into the mdbox instead, or just leave Sendmail to
deliver to /var/mail.

 Sendmail changes
 FEATURE(`local_procmail',
 `/usr/libexec/dovecot/dovecot-lda',`/usr/libexec/dovecot/dovecot-lda
 -d $u')
 MODIFY_MAILER_FLAGS(`LOCAL', `-f')
 MAILER(procmail)dnl

I know nothing at all about Sendmail configuration, but going by the
Dovecot wiki that looks correct. Are you sure mail for the appropriate
users was actually getting routed through that mailer? What did you see

Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing.

2013-01-01 Thread Torpey List



-Original Message- 
From: Thomas Leuxner

Sent: Tuesday, January 01, 2013 9:03 AM
To: Dovecot Mailing List
Subject: Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing.

On 01.01.2013 at 15:44, Torpey List l...@torpey.org wrote:

I don't use Sendmail myself so I can't really comment on its configuration. 
However the issue looks like a typical mismatch of UIDs on the socket:


http://wiki2.dovecot.org/LDA/Sendmail

As per the link above you could try running 'chown mail' on the LDA. This 
will match the ID to the 'userdb' socket unix_listener (user = mail):


-rwxr-xr-x. 1 root secmail 26512 Aug 18  2011 
/usr/libexec/dovecot/dovecot-lda
  srw---. 1 mail root 0 Jan  1 08:39 
/var/run/dovecot/auth-userdb


Good Luck
Thomas


I have changed the permissions to the following:
-rwxr-xr-x. 1 mail secmail 26512 Aug 18  2011 
/usr/libexec/dovecot/dovecot-lda

srw-rw-rw-. 1 mail secmail 0 Jan  1 09:41 /var/run/dovecot/auth-userdb

Then I get this error (steve is who the email is addressed to):

Jan 01 09:43:47 lda(steve): Fatal: setgid(501(steve)) failed with 
euid=0(root), gid=0(root), egid=0(root): Operation not permitted (This 
binary should probably be called with process group set to 501(steve) 
instead of 0(root))


Thanks,
Steve 



Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing.

2013-01-01 Thread Torpey List



-Original Message- 
From: Torpey List

Sent: Tuesday, January 01, 2013 9:50 AM
To: Dovecot Mailing List
Subject: Re: [Dovecot] From Sendmail to Dovecot mdbox, what is missing.



[...]


I was rereading man dovecot-lda, specifically the option -d username. 
It said that it is typically used with virtual users, but not necessarily 
with system users.  I am using system users; therefore I removed it from the 
sendmail feature, but then I get the following error in maillog:


Jan  1 10:28:39 nala sendmail[23041]: r01GScR4023040: smtpquit: mailer local 
exited with exit value 64


I googled, but did not find what value 64 meant.  Anyone have a list or a 
clue what this error means?
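For the record, sendmail interprets local mailer exit codes per <sysexits.h>;
64 there is EX_USAGE, a command-line usage error, which would be consistent
with dovecot-lda now being invoked with arguments it does not accept. The
relevant defines:

  #define EX_USAGE        64  /* command line usage error */
  #define EX_NOUSER       67  /* addressee unknown */
  #define EX_UNAVAILABLE  69  /* service unavailable */
  #define EX_TEMPFAIL     75  /* temp failure; user is invited to retry */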


Thanks,
Steve 



[Dovecot] From Sendmail to Dovecot mdbox, what is missing.

2012-12-31 Thread Torpey List
Sendmail 8.14.4
dovecot 2.0.9


I have sendmail working and it is sending mail to /var/mail/%u.
I have dovecot working in that I can move emails into IMAP folders and I can 
send email through IMAP.  I have set up dovecot to use mdbox based on the 
following:
mail_location = mdbox:~/mail

However, I seem to be lacking a key piece of information.  
Sendmail is sending the mail to /var/mail/%u in mbox format (a single file for 
all emails).
Dovecot wants to read the mail in mdbox (multiple messages per file but, unlike 
mbox, multiple files per mailbox).  So the two programs are not working together.

So, I cannot get dovecot to read new emails at /var/mail/%u.
So I tried changing to the following:
mail_location = mdbox:~/mail:INBOX=/var/mail/%u
However, dovecot complains that it is NOT a directory.  That is because 
sendmail is delivering in mbox format.

I have tried two lines of “mail_location”, but that did not work. For example:
mail_location = mdbox:~/mail             # for dovecot
mail_location = mbox:INBOX=/var/mail/%u  # for sendmail

I have tried LMTP and dovecot-lda.

LMTP – I could not see any difference with this added or not.

Dovecot-lda – I have had issues getting it configured.

Thanks for any help!

Sendmail changes
FEATURE(`local_procmail', 
`/usr/libexec/dovecot/dovecot-lda',`/usr/libexec/dovecot/dovecot-lda -d $u')
MODIFY_MAILER_FLAGS(`LOCAL', `-f')
MAILER(procmail)dnl

Here is dovecot configuration
[root@nala mail]# dovecot -n
# 2.0.9: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-279.14.1.el6.x86_64 x86_64 Scientific Linux release 6.3 
(Carbon)
auth_mechanisms = plain login
mail_gid = mail
mail_location = mdbox:~/mail
mail_uid = mail
mbox_write_locks = fcntl
passdb {
  driver = pam
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_group_events = yes
}
service auth {
  unix_listener auth-userdb {
mode = 0600
user = mail
  }
}
service lmtp {
  inet_listener lmtp {
address = 192.168.1.185 127.0.0.1 ::1
port = 24
  }
  user = mail
}
ssl_cert = </etc/pki/dovecot/certs/dovecot.pem
ssl_key = </etc/pki/dovecot/private/dovecot.pem
userdb {
  driver = passwd
}
protocol lda {
  info_log_path = /var/log/maillog
  log_path = /var/log/maillog
  postmaster_address = postmas...@torpey.org
}

Re: [Dovecot] maildir_copy_with_hardlinks on v.2.0.19

2012-07-30 Thread mailing list subscriber
On Sat, Jul 28, 2012 at 8:04 PM, Timo Sirainen t...@iki.fi wrote:
 On 23.7.2012, at 22.12, mailing list subscriber wrote:

 As requested, here is my update. As you can see, I am now running the
 latest release; however, emails delivered through lmtp get split into
 different files instead of the expected hardlinked files.
 ..
 userdb {
  driver = passwd
 }

 Looks like you're using system users. Each mail then needs to be written 
 using different permissions, so hard linking can't work.


I am afraid this is incorrect:

[root@email ~]# cd /tmp
[root@email tmp]# touch 1
[root@email tmp]# stat 1
  File: `1'
  Size: 0   Blocks: 0  IO Block: 4096   regular empty file
Device: 803h/2051d  Inode: 46923784Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2012-07-30 13:34:45.0 +0300
Modify: 2012-07-30 13:34:45.0 +0300
Change: 2012-07-30 13:34:45.0 +0300
[root@email tmp]# ln 1 2
[root@email tmp]# stat 2
  File: `2'
  Size: 0   Blocks: 0  IO Block: 4096   regular empty file
Device: 803h/2051d  Inode: 46923784Links: 2
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2012-07-30 13:34:45.0 +0300
Modify: 2012-07-30 13:34:45.0 +0300
Change: 2012-07-30 13:34:51.0 +0300
[root@email tmp]# chown xfs:xfs 1
[root@email tmp]# stat 1
  File: `1'
  Size: 0   Blocks: 0  IO Block: 4096   regular empty file
Device: 803h/2051d  Inode: 46923784Links: 2
Access: (0644/-rw-r--r--)  Uid: (   43/ xfs)   Gid: (   43/ xfs)
Access: 2012-07-30 13:34:45.0 +0300
Modify: 2012-07-30 13:34:45.0 +0300
Change: 2012-07-30 13:35:03.0 +0300
[root@email tmp]# chown ntp:ntp 2
[root@email tmp]# stat 2
  File: `2'
  Size: 0   Blocks: 0  IO Block: 4096   regular empty file
Device: 803h/2051d  Inode: 46923784Links: 2
Access: (0644/-rw-r--r--)  Uid: (   38/ ntp)   Gid: (   38/ ntp)
Access: 2012-07-30 13:34:45.0 +0300
Modify: 2012-07-30 13:34:45.0 +0300
Change: 2012-07-30 13:35:15.0 +0300
[root@email tmp]# echo test > 2
[root@email tmp]# cat 1
test
[root@email tmp]#


[Dovecot] Fwd: official dev team position regarding multiple times requested feature (global sieve)

2012-07-24 Thread mailing list subscriber
forwarding to the proper list address since your reply came with a
Reply-To header

-- Forwarded message --
From: mailing list subscriber mailinglist...@gmail.com
Date: Tue, Jul 24, 2012 at 10:24 AM
Subject: Re: official dev team position regarding multiple times
requested feature (global sieve)
To: awill...@opengroupware.us


On Mon, Jul 23, 2012 at 11:47 PM, Adam Tauno Williams
awill...@opengroupware.us wrote:
 On Mon, 2012-07-23 at 23:26 +0300, mailing list subscriber wrote:
 With above figure in mind, I'm looking at the history of request for
 ability to have a forced, default, site-wide sieve script that the
 user or any other sieve client is unable to alter (I'll just pick most
 relevant):

 Indeed, this would be terribly useful.

 2004, with reference to 2001:
 http://goo.gl/Gbo0k
 2006: A submitted patch
 http://www.irbs.net/internet/info-cyrus/0612/0231.html
 2007: A very good synthesis:
 http://goo.gl/Lo33b
 2010: sieve include implemented in v.2.3.0, still fails to meet above
 requirements

 But you fail to reference an open bug?!


The page at http://cyrusimap.web.cmu.edu/mediawiki/index.php/Report_A_Bug
says: "If you are absolutely positive you have discovered a bug[...]".
This might be misleading, since this is a feature request and not a
bug.

Leaving aside the technical details of the proper way to report this, I'm
still curious whether the developers are actually AWARE of these repeated
requests.

 With all due respect, what is the development's team position
 regarding this feature and how do the development team see a solution
 that meets both requirements?

 I find SIEVE scripts assigned to folders are not executed
 https://bugzilla.cyrusimap.org/show_bug.cgi?id=3617

 People forget to cancel their vacation message
 https://bugzilla.cyrusimap.org/show_bug.cgi?id=2985
 NOTE: this is really a UI issue, IMO, multiple web clients solve this by
 generating intelligent scripts.

That's another problem for the poor IT guy trying to assemble bricks
from different vendors: when he finally is heard, the answer is "uhm,
the feature you requested falls beyond the scope of my brick. I'm
sorry, I can't do that", even when there is a proposed patch
available!

If you read all my referenced links carefully, you'll find one that
lists other imap servers that support it. From what I've tested, at
least one implementation (dovecot) has done this as a global pre-user
execution of a script.


 I don't see any bugs about a default-user-sieve-script.  *BUT*
 imapd.conf does offer this option [IMPLEMENTED]:
   autocreate_sieve_script: none
 The  full path of a file that contains a sieve script. This script
 automatically becomes a user’s initial default sieve filter script.
 When this option is not defined, no default sieve filter is created.
 The file must be readable by the cyrus daemon.

 Of course, the user can override this.

Unfortunately that feature only addresses PROVISIONING of data, which
is not very useful since cyrus mail admins do this with the auto-*
uoa.gr patches (that's another pain that the devs refused to include for a
long time). What it lacks is the ability to handle CHANGE (changing the
default script, or altering existing user scripts).


Re: [Dovecot] Fwd: official dev team position regarding multiple times requested feature (global sieve)

2012-07-24 Thread mailing list subscriber
sorry folks, please ignore me. my head is spinning trying to get
hardlinks and a default sieve script working at the same time, writing
to dovecot and cyrus at the same time. one is doing the hardlink part
well, and the other the sieve. both fail to get both features right at
the same time. the previous message was intended for the other imap
server :)

On Tue, Jul 24, 2012 at 10:56 AM, Stephan Bosch step...@rename-it.nl wrote:
 What is the purpose of this posting?


 On 7/24/2012 9:27 AM, mailing list subscriber wrote:

 forwarding to the proper list address since your reply came with a
 Reply-To header


 Regards,

 Stephan.


Re: [Dovecot] maildir_copy_with_hardlinks on v.2.0.19

2012-07-23 Thread mailing list subscriber
On Sun, Jul 22, 2012 at 2:59 PM, mailing list subscriber
mailinglist...@gmail.com wrote:
 Hi,

 I'm trying to get the so-called single instance store (I think cyrus
 coined the name first) with the dovecot --version = 2.0.19
 binary package installed from the ubuntu 12.04 lts official repo.

As requested, here is my update. As you can see, I am now running the
latest release; however, emails delivered through lmtp get split into
different files instead of the expected hardlinked files.

Please pay attention as I'm commenting in-between different pasted outputs:

# 2.1.8 (30b0d6b1c581): /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-26-generic x86_64 Ubuntu 12.04 LTS
auth_username_format = %Ln
auth_verbose = yes
auth_verbose_passwords = plain
mail_location = maildir:~/Maildir
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave
namespace inbox {
  inbox = yes
  location =
  prefix =
}
passdb {
  driver = pam
}
plugin {
  autocreate = Inbox
  autocreate2 = Sent
  autocreate3 = Drafts
  autocreate4 = Spam
  autocreate5 = Trash
  autosubscribe = Inbox
  autosubscribe2 = Sent
  autosubscribe3 = Drafts
  autosubscribe4 = Spam
  autosubscribe5 = Trash
  mail_log_events = delete expunge mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
}
protocols = imap pop3 sieve lmtp
service auth {
  unix_listener /var/spool/postfix/private/dovecot-auth {
group = postfix
mode = 0660
user = postfix
  }
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0660
user = postfix
  }
}
ssl_cert = </etc/ssl/certs/dovecot.pem
ssl_cipher_list =
ALL:!LOW:!SSLv2:ALL:!aNULL:!ADH:!eNULL:!EXP:RC4+RSA:+HIGH:+MEDIUM
ssl_key = </etc/ssl/private/dovecot.pem
userdb {
  driver = passwd
}
verbose_proctitle = yes
protocol imap {
  imap_client_workarounds = delay-newmail
  mail_max_userip_connections = 10
  mail_plugins =  autocreate
}
protocol pop3 {
  mail_max_userip_connections = 10
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
}
protocol lda {
  deliver_log_format = msgid=%m: %$
  mail_plugins = sieve
  postmaster_address = postmaster
  quota_full_tempfail = yes
  rejection_reason = Your message to %t was automatically rejected:%n%r
}

for your pleasure, here is the transaction (note the same queue id and
a single lmtp session for all three recipients)

Jul 23 21:47:32 imap postfix/qmgr[27463]: 6746C27C687:
from=r...@imap.mydomain.ro, size=532, nrcpt=3 (queue active)
Jul 23 21:47:32 imap dovecot: lmtp(40609): Connect from local
Jul 23 21:47:32 imap dovecot: lmtp(40609, anotheruser):
pnuDHkScDVChngAA7nOI2A:
msgid=20120723184732.6746c27c...@imap.mydomain.ro: saved mail to
INBOX
Jul 23 21:47:32 imap postfix/lmtp[40608]: 6746C27C687:
to=anotheru...@mydomain.ro,
relay=imap.mydomain.ro[private/dovecot-lmtp], delay=0.26,
delays=0.15/0.01/0/0.1, dsn=2.0.0, status=sent (250 2.0.0
anotheru...@mydomain.ro pnuDHkScDVChngAA7nOI2A Saved)
Jul 23 21:47:32 imap dovecot: lmtp(40609, firstuser):
pnuDHkScDVChngAA7nOI2A:
msgid=20120723184732.6746c27c...@imap.mydomain.ro: saved mail to
INBOX
Jul 23 21:47:32 imap postfix/lmtp[40608]: 6746C27C687:
to=firstu...@mydomain.ro,
relay=imap.mydomain.ro[private/dovecot-lmtp], delay=0.37,
delays=0.15/0.01/0/0.2, dsn=2.0.0, status=sent (250 2.0.0
firstu...@mydomain.ro pnuDHkScDVChngAA7nOI2A Saved)
Jul 23 21:47:32 imap dovecot: lmtp(40609, firstuser):
pnuDHkScDVChngAA7nOI2A:
msgid=20120723184732.6746c27c...@imap.mydomain.ro: saved mail to
INBOX
Jul 23 21:47:32 imap postfix/lmtp[40608]: 6746C27C687:
to=firstu...@anothermydomain.ro,
relay=imap.mydomain.ro[private/dovecot-lmtp], delay=0.44,
delays=0.15/0.01/0/0.28, dsn=2.0.0, status=sent (250 2.0.0
firstu...@anothermydomain.ro pnuDHkScDVChngAA7nOI2A Saved)
Jul 23 21:47:32 imap dovecot: lmtp(40609): Disconnect from local:
Client quit (in reset)
Jul 23 21:47:32 imap postfix/qmgr[27463]: 6746C27C687: removed

for whoever wants to blame postfix 2.9.3 without reason, here is the postconf -n

alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
biff = no
broken_sasl_auth_clients = yes
config_directory = /etc/postfix
home_mailbox = Maildir/
html_directory = /usr/share/doc/postfix/html
inet_interfaces = all
mailbox_command = /usr/lib/dovecot/deliver -c
/etc/dovecot/conf.d/01-mail-stack-delivery.conf -m ${EXTENSION}
mailbox_size_limit = 0
mydestination = imap.mydomain.ro, localhost.mydomain.ro, localhost
myhostname = imap.mydomain.ro
mynetworks = 127.0.0.0/8 [:::127.0.0.0]/104 [::1]/128
readme_directory = /usr/share/doc/postfix
recipient_delimiter = +
relayhost =
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_use_tls = yes
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_recipient_restrictions

[Dovecot] maildir_copy_with_hardlinks on v.2.0.19

2012-07-22 Thread mailing list subscriber
Hi,

I'm trying to get the so-called single instance store (I think cyrus
coined the name first) with the dovecot --version = 2.0.19
binary package installed from the ubuntu 12.04 lts official repo.

I have checked that maildir_copy_with_hardlinks is enabled (dovecot
-a|grep hard shows yes) then I have installed and enabled the lmtp
component of dovecot. The configuration dovecot -n is pasted here:
http://paste.lug.ro/131180

Also in the same paste is a strace against dovecot and its children,
showing evidence of the MTA delivering a single copy of the message
via LMTP with multiple RCPT TO: commands.

However, when looking in the Maildir, I see the mail broken down into
three separate files instead of the expected hardlinked files (stat and
ls show a link count of one, and the inodes are different).
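For anyone reproducing this, a quick way to compare inodes and link counts
across the delivered copies (paths are examples; %i is the inode, %h the
hardlink count):

  $ stat -c '%i links=%h %n' /home/*/Maildir/new/*
  # hardlinked copies share an inode and show links=2 or more;
  # independent copies have distinct inodes and links=1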

Given the above data, what (am I | dovecot is) doing wrong?

Please cc me if you need additional input when replying as I'm not
subscribed to the list (I'll watch the thread online only)
Many thanks in advance.


Re: [Dovecot] Outlook 2010 very slow when using IMAP - are there any tweaks?

2012-07-02 Thread Mailing List SVR

On 02/07/2012 16:34, Kaya Saman wrote:

Hi,

though this is a bit of a side question, has anybody had an issue
while running Outlook 2010 with Dovecot?

The reason why I am asking is that I have set up a Dovecot 2.1.7 server
on FreeBSD which works fantastically with Thunderbird, but Outlook
seems to be twice as slow in transferring information across?


# dovecot -n
# 2.1.7: /usr/local/etc/dovecot/dovecot.conf
# OS: FreeBSD 8.2-RELEASE amd64
auth_debug = yes
auth_mechanisms = plain ntlm login
auth_use_winbind = yes
auth_username_format = %n
auth_verbose = yes
auth_winbind_helper_path = /usr/local/bin/ntlm_auth
disable_plaintext_auth = no
info_log_path = /var/log/dovecot-info.log
log_path = /var/log/dovecot.log
mail_gid = mail_user
mail_home = /mail/AD_Mail/%Ld/%Ln
mail_location = maildir:~/Maildir
mail_uid = mail_user
passdb {
   args = failure_show_msg=yes
   driver = pam
}
pop3_fast_size_lookups = yes
pop3_lock_session = yes
pop3_no_flag_updates = yes
protocols = imap pop3
ssl = no
userdb {
   driver = static
}




Since (like most corporate organizations out there) we solely run
Outlook coupled to Exchange, this exercise was meant to be a way of
getting rid of PST files. We don't run our own Exchange however, and
don't have any control over it either.


My workaround was to simply use the Outlook GUI to transfer
emails between Exchange and Dovecot running the IMAPv4 protocol.


For whatever reason Outlook is being really garbage about dealing with
stuff, and since I don't know Outlook or MS products very well (being
your typical average OpenSource guy) I was wondering if there were any
tweaks that could be made within Outlook to speed it up, or in Dovecot
to work better with Outlook?


I guess one could get sidetracked into the argument of mdbox vs.
Maildir from my config; however, Thunderbird is really fast and
transfers large amounts of data really easily. It reaches around 130 Mbps
according to the nload performance grapher, while Outlook only manages
~50 kbps, with spikes at 2-3 Mbps on occasion.


Try to work out (from the dovecot logs) whether there are differences
between outlook and thunderbird; for example, outlook may connect over
ssl while thunderbird does not, etc.
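One hedged way to get enough detail in the logs for that comparison is to
enable Dovecot's debug settings in dovecot.conf:

  mail_debug = yes
  auth_debug = yes
  verbose_ssl = yes

verbose_ssl in particular logs SSL-level errors and activity, which should
show whether one client negotiates SSL differently from the other.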


Nicola




Can anyone offer any guidance or assistance in this matter?


Actually wherever I run Dovecot, including my servers at home, it is
fast and reliable. Yes, I know MS is the polar opposite of anything
worth using; however, my company won't change and I'm stuck banging my
head against the wall while trying to get MS to interface with
ANYTHING.


Regards,


Kaya






Re: [Dovecot] moving from BSD to Ubuntu

2012-06-30 Thread Mailing List SVR

On 30/06/2012 22:19, spamv...@googlemail.com wrote:

hi..

I'm planning to move my mail server from a FreeBSD box to an Ubuntu
12.04 LTS box.


Hi, I recently migrated to Ubuntu 12.04 (not from FreeBSD); the only 
problem was this:


https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1016905

solved by patching the Ubuntu openssl package,

Nicola


Both boxes run Dovecot 2.0.

Has anyone done this before and experienced any problems?
Downtime is no problem; my plan is to stop Dovecot on the BSD box,
copy all mailbox files to the Ubuntu system, and start Dovecot.

Regards
Hans






[Dovecot] auth service: out of memory

2012-06-29 Thread Mailing List SVR

Hi,

I have some out of memory errors in my logs (file errors.txt attached)

I'm using dovecot 2.0.19. I can see some memory-leak fixes in hg after 
the 2.0.19 release, but they seem related to the imap-login service.


I attached my config too; is something wrong there? Should I really 
increase the limit based on my settings?


Can these commits fix the reported leak?

http://hg.dovecot.org/dovecot-2.0/rev/6299dfb73732
http://hg.dovecot.org/dovecot-2.0/rev/67f1cef07427

Please note that the auth service is restarted when it reaches the limit, 
so there are no real issues.


please advise

thanks
Nicola


cat /var/log/mail.log | grep "Out of memory"
Jun 28 11:48:24 server1 dovecot: master: Error: service(auth): child 31301 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:50:18 server1 dovecot: auth: Fatal: pool_system_realloc(8192): Out of 
memory
Jun 28 11:50:18 server1 dovecot: master: Error: service(auth): child 10782 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:52:43 server1 dovecot: master: Error: service(auth): child 16854 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:54:01 server1 dovecot: auth: Fatal: block_alloc(4096): Out of memory
Jun 28 11:54:01 server1 dovecot: master: Error: service(auth): child 23378 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:55:09 server1 dovecot: auth: Fatal: pool_system_realloc(8192): Out of 
memory
Jun 28 11:55:09 server1 dovecot: master: Error: service(auth): child 28203 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:56:07 server1 dovecot: master: Error: service(auth): child 32570 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:57:01 server1 dovecot: auth: Fatal: block_alloc(4096): Out of memory
Jun 28 11:57:01 server1 dovecot: master: Error: service(auth): child 5136 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:57:57 server1 dovecot: master: Error: service(auth): child 9245 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:58:52 server1 dovecot: master: Error: service(auth): child 13779 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:59:49 server1 dovecot: master: Error: service(auth): child 18260 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 12:01:03 server1 dovecot: auth: Fatal: pool_system_realloc(8192): Out of 
memory
Jun 28 12:01:03 server1 dovecot: master: Error: service(auth): child 22181 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 12:03:24 server1 dovecot: auth: Fatal: pool_system_malloc(3144): Out of 
memory
Jun 28 12:03:24 server1 dovecot: master: Error: service(auth): child 27253 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))

# 2.0.19: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-25-generic x86_64 Ubuntu 12.04 LTS ext4
auth_cache_size = 10 M
auth_mechanisms = plain login
auth_socket_path = /var/run/dovecot/auth-userdb
auth_worker_max_count = 128
base_dir = /var/run/dovecot/
default_process_limit = 200
default_vsz_limit = 128 M
disable_plaintext_auth = no
first_valid_gid = 2000
first_valid_uid = 2000
hostname = mail.example.com
last_valid_gid = 2000
last_valid_uid = 2000
listen = *
login_greeting = SVR ready.
mail_location = maildir:/srv/panel/mail/%d/%t/Maildir
mail_plugins =  quota trash autocreate
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  autocreate = Trash
  autocreate2 = Junk
  autocreate3 = Drafts
  autocreate4 = Sent
  autosubscribe = Trash
  autosubscribe2 = Junk
  autosubscribe3 = Drafts
  autosubscribe4 = Sent
  quota = maildir:User quota
  quota_rule = *:storage=300MB
  quota_rule2 = Trash:ignore
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = ~/.dovecot.sieve
  sieve_before = /etc/dovecot/sieve/move-spam.sieve
  sieve_dir = ~/sieve
  sieve_max_actions = 32
  sieve_max_redirects = 4
  sieve_max_script_size = 1M
  sieve_quota_max_scripts = 10
  sieve_quota_max_storage = 2M
  trash = /etc/dovecot/dovecot-trash.conf.ext
}
postmaster_address = postmas...@example.com
protocols = imap pop3 sieve
service auth-worker {
  user = $default_internal_user
}
service auth {
  

Re: [Dovecot] auth service: out of memory

2012-06-29 Thread Mailing List SVR

On 29/06/2012 09:19, Timo Sirainen wrote:

On 29.6.2012, at 9.35, Mailing List SVR wrote:


I have some out of memory errors in my logs (file errors.txt attached)

How large is your auth process's VSZ when it starts up and has handled a couple 
of logins? It's possible that it's not leaking at all, you're just not giving 
enough memory for its normal operation. Some Linux distros nowadays build 
binaries that eat up a lot of VSZ immediately when they start up.



ps aux report this:

dovecot   7454  0.0  0.0  85980  3776 ?S09:36   0:00 
dovecot/auth


before restarting dovecot the auth process was running since about 1 
hour and this is the output from ps aux


dovecot  25002  0.0  0.0  86112  3780 ?S08:24   0:00 
dovecot/auth


thanks
Nicola





Re: [Dovecot] auth service: out of memory

2012-06-29 Thread Mailing List SVR

On 29/06/2012 09:45, Timo Sirainen wrote:

On 29.6.2012, at 10.39, Mailing List SVR wrote:


On 29/06/2012 09:19, Timo Sirainen wrote:

On 29.6.2012, at 9.35, Mailing List SVR wrote:


I have some out of memory errors in my logs (file errors.txt attached)

How large is your auth process's VSZ when it starts up and has handled a couple 
of logins? It's possible that it's not leaking at all, you're just not giving 
enough memory for its normal operation. Some Linux distros nowadays build 
binaries that eat up a lot of VSZ immediately when they start up.



ps aux report this:

dovecot   7454  0.0  0.0  85980  3776 ?S09:36   0:00 dovecot/auth

before restarting dovecot the auth process was running since about 1 hour and 
this is the output from ps aux

dovecot  25002  0.0  0.0  86112  3780 ?S08:24   0:00 dovecot/auth

So you have 44 MB of VSZ available after startup. You also have 10 MB of auth 
cache, which could in reality take somewhat more than 10 MB. It doesn't leave a 
whole lot available for regular use. I'd increase the auth process's VSZ limit 
and see if it still crashes.


I increased the limit to 192MB; should I set the limit to 256MB or 
more? I'll wait some days to see if it still crashes.
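For reference, the limit being discussed can be raised for just this service
in dovecot.conf (the value itself is a site-specific guess):

  service auth {
    vsz_limit = 256 M
  }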




If you want to, you could also test with valgrind if there's a leak:

service auth {
   executable = /usr/bin/valgrind --leak-check=full -q /usr/libexec/dovecot/auth
}

You'd then need to restart the auth process to make valgrind output the leaks.


For now I prefer to avoid valgrind on a production server; if the crash 
persists with the new limit I'll set up a test environment and run 
valgrind there.


thanks
Nicola



[Dovecot] 2.0.19 segfault

2012-06-23 Thread Mailing List SVR

Hi,

after the upgrade from dovecot 2.0.13 (ubuntu oneiric) to dovecot 2.0.19 
(ubuntu precise), in my logs I have a lot of these errors:


Jun 23 00:20:29 server1 dovecot: master: Error: service(imap-login): 
child 6714 killed with signal 11 (core dumps disabled)


I tested 2.0.21 and the problem is still here. The problem seems to 
appear only when the client is ms outlook, thunderbird works fine


Here is the captured trace (I hope this is enough and I don't need to 
install debug symbols for everythings):


Core was generated by `dovecot/imap-login -D'.
Program terminated with signal 11, Segmentation fault.
#0  0x7f4d01c1a031 in RC4 () from 
/lib/x86_64-linux-gnu/libcrypto.so.1.0.0

(gdb) bt full
#0  0x7f4d01c1a031 in RC4 () from 
/lib/x86_64-linux-gnu/libcrypto.so.1.0.0

No symbol table info available.
#1  0x0134 in ?? ()
No symbol table info available.
#2  0x00cd in ?? ()
No symbol table info available.
#3  0x7f4d03e97470 in ?? ()
No symbol table info available.
#4  0x7f4d01c80629 in ?? () from 
/lib/x86_64-linux-gnu/libcrypto.so.1.0.0

No symbol table info available.
#5  0x7f4d01f82bcf in ?? () from /lib/x86_64-linux-gnu/libssl.so.1.0.0
No symbol table info available.
#6  0x7f4d01f79e04 in ?? () from /lib/x86_64-linux-gnu/libssl.so.1.0.0
No symbol table info available.
#7  0x7f4d01f7a134 in ?? () from /lib/x86_64-linux-gnu/libssl.so.1.0.0
No symbol table info available.
#8  0x7f4d027fed6f in ssl_write (proxy=0x7f4d03e7c0a0)
    at ssl-proxy-openssl.c:499
        ret = <optimized out>
#9  0x7f4d027fee68 in plain_read (proxy=0x7f4d03e7c0a0)
    at ssl-proxy-openssl.c:308
        ret = <optimized out>
        corked = true
---Type <return> to continue, or q <return> to quit---
#10 0x7f4d025b5c98 in io_loop_call_io (io=0x7f4d03e84b10) at ioloop.c:384
        ioloop = 0x7f4d03e3e680
        t_id = 2
#11 0x7f4d025b6d27 in io_loop_handler_run (ioloop=<optimized out>)
    at ioloop-epoll.c:213
        ctx = 0x7f4d03e505a0
        events = 0x6579351d
        event = 0x7f4d03e50610
        list = 0x7f4d03e93690
        io = <optimized out>
        tv = {tv_sec = 59, tv_usec = 999832}
        msecs = <optimized out>
        ret = 1
        i = <optimized out>
        call = <optimized out>
#12 0x7f4d025b5c28 in io_loop_run (ioloop=0x7f4d03e3e680) at ioloop.c:405
No locals.
#13 0x7f4d025a3e33 in master_service_run (service=0x7f4d03e3e550,
    callback=<optimized out>) at master-service.c:481
No locals.
#14 0x7f4d027f7cc2 in main (argc=2, argv=0x7f4d03e3e370) at main.c:371
        set_pool = 0x7f4d03e3e880
        allow_core_dumps = <optimized out>
---Type <return> to continue, or q <return> to quit---
        login_socket = 0x7f4d02800763 "login"
        c = <optimized out>
#15 0x7f4d021d676d in __libc_start_main ()
   from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#16 0x7f4d02c2d5a9 in _start ()
No symbol table info available.

Nicola





Re: [Dovecot] 2.0.19 segfault

2012-06-23 Thread Mailing List SVR

On 23/06/2012 22:39, Mailing List SVR wrote:

[...]


Here is a more detailed trace,

Core was generated by `dovecot/imap-login -D'.
Program terminated with signal 11, Segmentation fault.
#0  RC4 () at rc4-x86_64.s:343
343rc4-x86_64.s: File o directory non esistente.
(gdb) bt full
#0  RC4 () at rc4-x86_64.s:343
No locals.
#1  0x0134 in ?? ()
No symbol table info available.
#2  0x00cd in ?? ()
No symbol table info available.
#3  0x7f4d03e97470 in ?? ()
No symbol table info available.
#4  0x7f4d01c80629 in rc4_hmac_md5_cipher (ctx=<optimized out>,
out=0x7f4d03e8d0b8 
\314V\347\335Lc\024\205\221'µ\006\177\313\326ۢ\313\317\303c\266\360\347\364\263\242\316z\326\307\320\303Ω\242`\303\321ί\313Т\177\315\305\313̯\320\307u\307\320\320\303\316ѢzƢ\307\314\303\300\316v\242\313\306\316Ǣ\321c\030T 
SORT=DISPLAY\301\021\222RC\005D=R\244\237T\342\004\\020ES 
TH\003\246AD=\247\032FS 
\351ULTIA\315\025N8\032\341\255\364EZ\376\236\062 
CHILDREN\\\b{\250\240\255PACE U\216\331\nLUS LIST-EXTENDED I18NLEVEL=h 
CO...,

    in=<optimized out>, len=0) at e_rc4_hmac_md5.c:163
        key = 0x1a
        rc4_off = 139968754799079
        md5_off = <optimized out>
        blocks = <optimized out>
        l = <optimized out>
        plen = <optimized out>
#5  0x7f4d01f82bcf in tls1_enc (s=0x7f4d03e7b700, send=1) at 
t1_enc.c:828

---Type <return> to continue, or q <return> to quit---
rec = 0x7f4d03e7bcb8
ds = 0x7f4d03e95cf0
l = 308
bs = 1
        i = <optimized out>
        ii = <optimized out>
        j = <optimized out>
        k = <optimized out>
        pad = <optimized out>
enc = 0x7f4d01f4eae0
#6  0x7f4d01f79e04 in do_ssl3_write (s=0x7f4d03e7b700, type=23,
    buf=0x7f4d03e7c514 "A0 OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR
LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES
THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS
LIST-EXTENDED I18NLEVEL=1 CO"..., len=292,
    create_empty_fragment=0) at s3_pkt.c:815
p

Re: [Dovecot] 2.0.19 segfault

2012-06-23 Thread Mailing List SVR

On 24/06/2012 00:05, Timo Sirainen wrote:

On Sat, 2012-06-23 at 22:39 +0200, Mailing List SVR wrote:


after the upgrade from dovecot 2.0.13 (ubuntu oneiric) to dovecot 2.0.19
(ubuntu precise), in my logs I have a lot of these errors:

Jun 23 00:20:29 server1 dovecot: master: Error: service(imap-login):
child 6714 killed with signal 11 (core dumps disabled)

I tested 2.0.21 and the problem is still here. The problem seems to
appear only when the client is ms outlook, thunderbird works fine

Looks to me more like OpenSSL library bug. The only reason why it could
be Dovecot bug is if Dovecot is causing memory corruption. Could you run
imap-login via valgrind to see if this is the case?

service imap-login {
   executable = /usr/bin/valgrind -q --vgdb=no 
/usr/local/libexec/dovecot/imap-login
   chroot =
}

Also have you changed any ssl-related settings in dovecot.conf?



Attached is my complete configuration; I hope there is a mistake in my config.

I looked at the code and there was no relevant change between dovecot 
2.0.13 and dovecot 2.0.19; upgrading between ubuntu releases updated 
openssl too, and this could be the problem.


However, it is not clear to me why imap over ssl works fine with 
thunderbird while I see the crash in the logs for customers that seem to 
use ms outlook.


Nicola






# 2.0.19: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-25-generic x86_64 Ubuntu 12.04 LTS ext4
auth_cache_size = 10 M
auth_mechanisms = plain login
auth_socket_path = /var/run/dovecot/auth-userdb
auth_worker_max_count = 128
base_dir = /var/run/dovecot/
default_process_limit = 200
disable_plaintext_auth = no
first_valid_gid = 2000
first_valid_uid = 2000
hostname = mail.svrinformatica.it
last_valid_gid = 2000
last_valid_uid = 2000
listen = *
login_greeting = SVR ready.
mail_location = maildir:/srv/panel/mail/%d/%t/Maildir
mail_plugins =  quota trash autocreate
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  autocreate = Trash
  autocreate2 = Junk
  autocreate3 = Drafts
  autocreate4 = Sent
  autosubscribe = Trash
  autosubscribe2 = Junk
  autosubscribe3 = Drafts
  autosubscribe4 = Sent
  quota = maildir:User quota
  quota_rule = *:storage=300MB
  quota_rule2 = Trash:ignore
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = ~/.dovecot.sieve
  sieve_before = /etc/dovecot/sieve/move-spam.sieve
  sieve_dir = ~/sieve
  sieve_max_actions = 32
  sieve_max_redirects = 4
  sieve_max_script_size = 1M
  sieve_quota_max_scripts = 10
  sieve_quota_max_storage = 2M
  trash = /etc/dovecot/dovecot-trash.conf.ext
}
postmaster_address = postmas...@svrinformatica.it
protocols = imap pop3 sieve
service auth-worker {
  user = $default_internal_user
}
service auth {
  unix_listener /var/spool/postfix/private/dovecot-auth {
group = vmail
mode = 0660
user = postfix
  }
  unix_listener auth-userdb {
group = vmail
mode = 0660
user = vmail
  }
  user = $default_internal_user
}
service managesieve-login {
  inet_listener sieve {
port = 4190
  }
}
service quota-warning {
  executable = script 
/srv/panel/django/systemcp/systemutils/mail/quota-warning.py
  unix_listener quota-warning {
user = vmail
  }
  user = dovecot
}
ssl_cert = </etc/ssl/certs/dovecot.pem
ssl_cipher_list = 
ALL:!LOW:!SSLv2:ALL:!aNULL:!ADH:!eNULL:!EXP:RC4+RSA:+HIGH:+MEDIUM
ssl_key = </etc/ssl/private/dovecot.pem
userdb {
  driver = prefetch
}
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
protocol lda {
  mail_plugins =  quota trash autocreate sieve
}
protocol imap {
  imap_client_workarounds = delay-newmail
  mail_max_userip_connections = 10
  mail_plugins =  quota trash autocreate imap_quota
}
protocol sieve {
  mail_max_userip_connections = 10
  mail_plugins =  quota trash autocreate
  managesieve_max_compile_errors = 5
}
protocol pop3 {
  mail_max_userip_connections = 10
  mail_plugins =  quota trash autocreate
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
}



Re: [Dovecot] 2.0.19 segfault

2012-06-23 Thread Mailing List SVR

On 24/06/2012 00:05, Timo Sirainen wrote:

On Sat, 2012-06-23 at 22:39 +0200, Mailing List SVR wrote:


after the upgrade from dovecot 2.0.13 (ubuntu oneiric) to dovecot 2.0.19
(ubuntu precise), in my logs I have a lot of these errors:

Jun 23 00:20:29 server1 dovecot: master: Error: service(imap-login):
child 6714 killed with signal 11 (core dumps disabled)

I tested 2.0.21 and the problem is still here. The problem seems to
appear only when the client is ms outlook, thunderbird works fine

Looks to me more like OpenSSL library bug.


the bug seems related to this patch:

http://cvs.openssl.org/chngview?cn=22415

I'm applying it just now.


  The only reason why it could
be Dovecot bug is if Dovecot is causing memory corruption. Could you run
imap-login via valgrind to see if this is the case?

service imap-login {
   executable = /usr/bin/valgrind -q --vgdb=no 
/usr/local/libexec/dovecot/imap-login
   chroot =
}

Also have you changed any ssl-related settings in dovecot.conf?








Re: [Dovecot] 2.0.19 segfault

2012-06-23 Thread Mailing List SVR

On 24/06/2012 00:49, Mailing List SVR wrote:

On 24/06/2012 00:05, Timo Sirainen wrote:

On Sat, 2012-06-23 at 22:39 +0200, Mailing List SVR wrote:

after the upgrade from dovecot 2.0.13 (ubuntu oneiric) to dovecot 
2.0.19

(ubuntu precise), in my logs I have a lot of these errors:

Jun 23 00:20:29 server1 dovecot: master: Error: service(imap-login):
child 6714 killed with signal 11 (core dumps disabled)

I tested 2.0.21 and the problem is still here. The problem seems to
appear only when the client is ms outlook, thunderbird works fine

Looks to me more like OpenSSL library bug.


the bug seems related to this patch:

http://cvs.openssl.org/chngview?cn=22415

I'm applying just now


I can confirm that the patch listed above solves the problem; thanks for 
pointing me to openssl.


Nicola




  The only reason why it could
be Dovecot bug is if Dovecot is causing memory corruption. Could you run
imap-login via valgrind to see if this is the case?

service imap-login {
   executable = /usr/bin/valgrind -q --vgdb=no 
/usr/local/libexec/dovecot/imap-login

   chroot =
}

Also have you changed any ssl-related settings in dovecot.conf?












Re: [Dovecot] dovecot and unison

2012-04-03 Thread dm-list-email-dovecot
At Mon, 02 Apr 2012 19:02:17 -0400,
FZiegler wrote:
 
 I am successfully using dovecot purely as a personal local mail store on 
 my desktop. (There is only one account, and it's only ever accessed by 
 local mail clients on the machine. The point is to have a common store I 
 can use with any client; plus, I prefer dovecot's Mailbox storage to 
 Thunderbird's mboxes.)
 
 Now I'd like if possible, to replicate this setup on my laptop and keep 
 both in sync with unison (http://www.cis.upenn.edu/~bcpierce/unison/), 
 which I am already using to sync much of my home dir about once a day.
 
 I found at least one positive message regarding this topic 
 (http://dovecot.org/list/dovecot/2010-April/048092.html), but I feel I 
 could use some more advice.

I have a similar setup, but I use offlineimap instead of unison:

http://offlineimap.org/

It seems to work pretty well.  That's not to say that unison wouldn't
work as well.  However, offlineimap has the advantage that it
doesn't restrict you to a star topology.  You can, for instance, sync
to your laptop at work and from your laptop at home.

Note that offlineimap is slow if you don't use imap at both ends.
Therefore, I use it on the local end.  A simplified excerpt of my
.offlineimaprc looks like this:



[general]
accounts = DefaultAccount

[Account DefaultAccount]
localrepository = MyLocal
remoterepository = MyRemote

[Repository MyRemote]
type = IMAP
preauthtunnel = ssh -qax -oBatchMode=yes -oServerAliveInterval=60 
MY-MAIL-SERVER 'exec env CONFIG_FILE=/PATH/TO/PRIVATE/dovecot.conf 
/usr/lib/dovecot/imap'

[Repository MyLocal]
type = IMAP
preauthtunnel = CONFIG_FILE=$HOME/etc/dovecot.conf /usr/lib/dovecot/imap



Unfortunately, in dovecot 2.1, the full text search no longer seems to
work in pre-auth mode, but I don't think that has anything to do with
offlineimap.  I think maybe dovecot is deprecating pre-auth mode or
requires a more complicated setup.


[Dovecot] dovecot 2.1 breaks FTS + pre-auth?

2012-03-31 Thread dm-list-email-dovecot
Hi.  I use dovecot in the simplest possible way, as an IMAP server in
pre-auth mode over ssh or just locally over a unix-domain socket
(e.g., with offlineimap, which runs much faster using dovecot for the
local message store).  Ideally I would like to avoid running any extra
daemons or setting up anything as root.  Until recently, this has
worked fine by just setting the CONFIG_FILE environment variable to
something in my home directory.

Here is my configuration:

$ export CONFIG_FILE=$HOME/etc/dovecot.conf
$ dovecot -n
# 2.1.3: /home/dm/etc/dovecot.conf
# OS: Linux 3.2.13-1-ARCH x86_64  
mail_location = maildir:~/Mail/inbox
mail_plugins =  fts fts_squat
plugin {
  fts = squat
  fts_squat = partial=4 full=10
}
doveconf: Error: ssl enabled, but ssl_cert not set
doveconf: Fatal: Error in configuration file /home/dm/etc/dovecot.conf: ssl 
enabled, but ssl_cert not set

Full text search used to work just fine with this configuration, and
still does on a machine I have running dovecot 2.0.13.  However, on
the machine with 2.1, I get errors about /var/run/dovecot/index not
existing.

$ printf 'a select INBOX\nb search text xyzzy\nc logout\n' \
| /usr/lib/dovecot/imap
* PREAUTH [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE 
SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN 
NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT 
SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS SPECIAL-USE] Logged in as dm
imap(dm): Error: net_connect_unix(/var/run/dovecot/indexer) failed: No such 
file or directory
...

Needless to say, no dovecot.index.search or dovecot.index.search.uids
file is created after this error.

While I can't write /var/run/dovecot, this is not a permission issue.
For example, adding base_dir=/home/dm (my home directory) to the
configuration file yields the same error for /home/dm/indexer.  I'm
guessing something has changed where imap requires an indexer daemon
and doesn't launch it in pre-auth mode any more, but I can't find
anything about this in the documentation.

In short, if anyone can tell me how to use FTS in conjunction with
pre-auth mode or point me to a working example, I would appreciate it.


Re: [Dovecot] distributed mdbox

2012-03-23 Thread list
On Wed, 21 Mar 2012 09:56:12 -0600, James Devine fxmul...@gmail.com
wrote:
 Anyone know how to setup dovecot with mdbox so that it can be used
through
 shared storage from multiple hosts?  I've setup a gluster volume and am
 sharing it between 2 test clients.  I'm using postfix/dovecot LDA for
 delivery and I'm using postal to send mail between 40 users.  In doing
 this, I'm seeing these errors in the logs
 
 Mar 21 09:36:29 test-gluster-client2 dovecot: lda(testuser34): Error:
Fixed
 index file /mnt/testuser34/mdbox/storage/dovecot.map.index:
messages_count
 272 -> 271
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error:
Log
 synchronization error at seq=4,offset=3768 for
 /mnt/testuser28/mdbox/storage/dovecot.map.index: Append with UID 516,
but
 next_uid = 517
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error:
Log
 synchronization error at seq=4,offset=4220 for
 /mnt/testuser28/mdbox/storage/dovecot.map.index: Extension record update
 for invalid uid=517
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Error:
Log
 synchronization error at seq=4,offset=5088 for
 /mnt/testuser28/mdbox/storage/dovecot.map.index: Extension record update
 for invalid uid=517
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser28): Warning:
 fscking index file /mnt/testuser28/mdbox/storage/dovecot.map.index
 Mar 21 09:36:30 test-gluster-client2 dovecot: lda(testuser34): Warning:
 fscking index file /mnt/testuser34/mdbox/storage/dovecot.map.index
 
 
 This is my dovecot config currently:
 
 jdevine@test-gluster-client2:~ dovecot -n
 # 2.0.13: /etc/dovecot/dovecot.conf
 # OS: Linux 3.0.0-13-server x86_64 Ubuntu 11.10
 lock_method = dotlock
 mail_fsync = always
 mail_location = mdbox:~/mdbox
 mail_nfs_index = yes
 mail_nfs_storage = yes
 mmap_disable = yes
 passdb {
   driver = pam
 }
 protocols =  imap
 ssl_cert = </etc/ssl/certs/dovecot.pem
 ssl_key = </etc/ssl/private/dovecot.pem
 userdb {
   driver = passwd
 }

I was able to get dovecot working across a gluster cluster a few weeks ago
and it worked just fine.  I would recommend using the native gluster mount
option (need to install gluster software on clients), and using distributed
replicated as your replication mechanism.  If you're running two gluster
servers you should have a replica count of two with distributed replicated.
You should test first to make sure you can create a file in both mounts
and see it from every mount point in the cluster, as well as interact with
it.  It's also very important to make sure your servers are running with
synchronized clocks from an NTP server.  Very bad things happen to a
(dovecot or gluster) cluster out of sync with NTP.
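A quick sanity check for clock sync on each node (assuming classic ntpd;
chrony's tooling differs):

  $ ntpq -pn    # the offset column should stay within a few milliseconds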



Re: [Dovecot] distributed mdbox

2012-03-23 Thread list
On Fri, 23 Mar 2012 16:06:25 +0200, Timo Sirainen t...@iki.fi wrote:
 On 23.3.2012, at 15.39, l...@airstreamcomm.net
l...@airstreamcomm.net
 wrote:
 
 I was able to get dovecot working across a gluster cluster a few weeks
 ago
 and it worked just fine.  I would recommend using the native gluster
 mount
 option (need to install gluster software on clients), and using
 distributed
 replicated as your replication mechanism.
 
 Have you tried stress testing it with imaptest? Run in parallel for both
 servers:
 
 imaptest host=gluster1 user=testuser pass=testpass
 imaptest host=gluster2 user=testuser pass=testpass
 
 http://imapwiki.org/ImapTest
 
 And see if Dovecot logs any errors.

I did stress test it, using a mail botnet tool we have developed for the
purpose.  I should mention this was tested using dovecot 1.2, as this is
our current production version (hopefully we will be upgrading soon).  It's
comprised of a control server that starts a bot network of client machines
that create pop/imap connections (smtp as well) against our test cluster of
dovecot (and postfix) servers.
two node dovecot (/postfix) cluster back ended by glusterfs, which has SAN
storage attached to it.  I actually didn't change my configuration from
when I had a test NFS server connected to the test servers (mmap disabled,
fcntl locking, etc), because glusterfs was an afterthought when we were
stress testing our new netapp system using NFS.  We have everything in
VMware, including the glusterfs servers.  Using five bot servers and
connecting 7 times a second from each server (35 connections per second)
for both pop and imap (70 total connections per second) split between two
dovecot servers I was not seeing any big issues.  The load average was low,
and there were no errors to speak of in dovecot (or postfix).  I was
mounting the storage with the glusterfs native client, not using NFS (which
I have not tested).  I would like to do a more thorough test of glusterfs
using Dovecot 2.0 on some dedicated hardware and see how much further I can
push the system.



Re: [Dovecot] distributed mdbox

2012-03-23 Thread list
On Fri, 23 Mar 2012 23:03:01 +0200, Timo Sirainen t...@iki.fi wrote:
 On 23.3.2012, at 19.43, l...@airstreamcomm.net
l...@airstreamcomm.net
 wrote:
 
 Have you tried stress testing it with imaptest? Run in parallel for
both
 servers:
 I did stress test it, but we have developed a mail bot net tool for
the
 purpose.  I should mention this was tested using dovecot 1.2, as this
is
 our current production version (hopefully will be upgrading soon).  Its
 comprised of a control server that starts a bot network of client
 machines
 that creates pop/imap connections (smtp as well) on our test cluster of
 dovecot (and postfix) servers.  In my test I distributed the load
across
 a
 two node dovecot (/postfix) cluster back ended by glusterfs, which has
 SAN
 storage attached to it.  I actually didn't change my configuration from
 when I had a test NFS server connected to the test servers (mmap
 disabled,
 fcntl locking, etc), because glusterfs was an afterthought when we were
 stress testing our new netapp system using NFS.  We have everything in
 VMware, including the glusterfs servers.  Using five bot servers and
 connecting 7 times a second from each server (35 connections per
second)
 for both pop and imap (70 total connections per second) split between
two
 dovecot servers I was not seeing any big issues.  The load average was
 low,
 and there were no errors to speak of in dovecot (or postfix).  I was
 mounting the storage with the glusterfs native client, not using NFS
 (which
 I have not tested).  I would like to do a more thorough test of
glusterfs
 using Dovecot 2.0 on some dedicated hardware and see how much further I
 can
 push the system.
 
 What did the bots do? Add messages and delete messages as fast as they
 could? I guess that's mostly enough to see if things work. imaptest
anyway
 hammers the server as fast as it can with all kinds of commands.

We created two python scripts on the bots that listed all the messages in
the inbox then deleted all the messages in the inbox, one script doing pop
and the other doing imap.  The bots were also sending messages to the
server simultaneously to repopulate inboxes.  I didn't know about imaptest,
thanks!



[Dovecot] Dovecot and scalable database storage

2012-03-22 Thread list
I saw some interesting mails from Timo back in 2009 talking about the idea
of using something like Cassandra db or similar as a storage platform for
both email and index/logs.  I was wondering if this has been discussed
since then, and if there are any plans to support something like this in
the future?  I have been playing with Cassandra and found that their
RackAwareStrategy gives you the ability to replicate writes to as many
nodes as you would like, but more importantly to choose which nodes, and a
node can be selected by which rack or which data center it
lives in.  This means multi-site, highly available storage clusters;
seemingly a system that dovecot could benefit from in terms of performance,
redundancy and simplicity.  Any takers?



Re: [Dovecot] v2.1.2 released

2012-03-15 Thread list
On Thu, 15 Mar 2012 16:53:53 +0200, Timo Sirainen t...@iki.fi wrote:
 http://dovecot.org/releases/2.1/dovecot-2.1.2.tar.gz
 http://dovecot.org/releases/2.1/dovecot-2.1.2.tar.gz.sig
 
 There are a ton of proxying related improvements in this release. You
 should now be able to do pretty much anything you want with Dovecot
 proxy/director.
 
 This release also includes the initial version of dsync-based
 replication. I'm already successfully using it for @dovecot.fi mails,
 but it still has some problems. See
 http://dovecot.org/list/dovecot/2012-March/064243.html for some details
 how to configure it.
 
   + Initial implementation of dsync-based replication. For now this
 should be used only on non-critical systems.
   + Proxying: POP3 now supports sending remote IP+port from proxy to
 backend server via Dovecot-specific XCLIENT extension.
   + Proxying: proxy_maybe=yes with host=hostname (instead of IP)
 works now properly.
   + Proxying: Added auth_proxy_self setting
   + Proxying: Added proxy_always extra field (see wiki docs)
   + Added director_username_hash setting to specify what part of the
 username is hashed. This can be used to implement per-domain
 backends (which allows safely accessing shared mailboxes within
 domain).
   + Added a session ID string for imap/pop3 connections, available
 in %{session} variable. The session ID passes through Dovecot
  IMAP/POP3 proxying to backend server. The same session ID can be
  reused after a long time (currently a bit under 9 years).
   + passdb checkpassword: Support credentials lookups (for
 non-plaintext auth and for lmtp_proxy lookups)
   + fts: Added fts_index_timeout setting to abort search if indexing
 hasn't finished by then (default is to wait forever). 
   - doveadm sync: If mailbox was expunged empty, messages may have
  come back instead of also being expunged on the other side.
   - director: If user logged into two directors while near user
 expiration, the directors might have redirected the user to two
 different backends.
   - imap_id_* settings were ignored before login.
   - Several fixes to mailbox_list_index=yes
   - Previous v2.1.x didn't log all messages at shutdown.
   - mbox: Fixed accessing Dovecot v1.x mbox index files without errors.

Are there any performance metrics around dsync replication, such as how
many users this has been tested on, or how long replication takes to
occur?  Also I have not been able to determine from reading the mailing
list whether or not dsync replication works with different types of
mailboxes (maildir, dbox, mbox); what is supported?



[Dovecot] Post-login scripting - Trash cleanup

2012-02-28 Thread list
We are considering using the post-login scripting to clear trash older
than 90 days from user accounts.  Has anyone done this, and if so did it
cause logins to slow down too much waiting for the trash to purge?  One
idea was to execute the trash purge script once a day by tracking logins
and seeing that it has already run that day.  Another idea was to call the
trash purge script in the background and continue without waiting for it
to finish, to keep logins speedy.
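
For what it's worth, on Dovecot v2.x this kind of cleanup can also be done
entirely out of band with doveadm instead of at login time; a minimal cron
sketch, assuming the userdb supports iteration for -A and that the mailbox
is named Trash:

# /etc/cron.d style: expunge Trash mail older than 90 days, nightly
0 4 * * * root /usr/bin/doveadm expunge -A mailbox Trash savedbefore 90d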


Re: [Dovecot] Post-login scripting - Trash cleanup

2012-02-28 Thread list
On Tue, 28 Feb 2012 19:26:11 +0100, Robert Schetterer
rob...@schetterer.org wrote:
 Am 28.02.2012 19:11, schrieb l...@airstreamcomm.net:
 We are considering using the post-login scripting to clear trash older
 than 90 days from user accounts.  has anyone done this, and if so did
 this
 cause logins to slow down too much waiting for the trash to purge?  One
 idea was to execute the trash purge script once a day by tracking their
 logins and seeing that it has already ran that day.  Another idea was
to
 call the trash purge script in the background and continue without
 acknowledging that it has finished to keep logins speedy.
 
 look here if this match/solve your problem
 
 http://wiki2.dovecot.org/Plugins/Expire

Expire looks useful, but if I am reading it correctly it enhances the
expunging of messages rather than automating the process.  We would like
to make the process of expunging old Trash messages as automated and
inline as possible.
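
For reference, a rough sketch of the expire plugin configuration from that
wiki page (v2.0-era syntax; the dict path is a placeholder).  The plugin
records per-mailbox expiry times in a dict so a periodic expunge run can
skip mailboxes with nothing old enough:

mail_plugins = $mail_plugins expire
plugin {
  expire = Trash
  expire_dict = proxy::expire
}
dict {
  expire = db:/var/lib/dovecot/expire.db
}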


Re: [Dovecot] Multiple locations, 2 servers - planning questions...

2012-02-27 Thread list
On Mon, 27 Feb 2012 13:38:39 -0500, Charles Marcus
cmar...@media-brokers.com wrote:
 On 2012-02-27 1:34 PM, Sven Hartge s...@svenhartge.de wrote:
 Charles Marcuscmar...@media-brokers.com  wrote:
 Each location is an entire floor of a 6 story building. The remote
 location has the capacity for about 60 users, the new location about
 100. We only allow IMAP access to email, so if everyone is using email
 at the same time, that would be a lot of traffic over a single Gb link
 I think...
 
 Naa, most clients download mails only once and then keep them cached
 locally (at least Thunderbird and Outlook do).

 Looking at the used bandwidth of the mailserver of my small university
 (10.000 users, about 1000 concurrently active during the daytime)
 shows a steady amount of roughly 5MBit/s with peaks to 10MBit/s in and
 out.
 
 Interesting - thanks for the numbers...
 
 But, again, my main reason for 2 servers is not performance, it is for 
 redundancy...

I too have been tasked with multisite redundancy, and have been
experimenting with GlusterFS
(http://www.gluster.org/community/documentation/index.php/Main_Page), which
is a distributed file system.  In our network we have a dedicated 10Gb link
between two datacenters 100 miles apart, and I have a GlusterFS node at
each site set up in Distributed Replicated mode with 2 replicas, which
means the servers are mirrored.  File writes go to all the replica servers
(2 servers in this case), so depending on network latency the writes could
potentially slow down.  GlusterFS has its own file serving protocol that
allows automatic and immediate failover in case a storage node disappears,
but there are some caveats to restoring a failed storage node (it takes
forever to resync the data).  I have not put this experiment into
production, but I can say that it's extremely simple to manage, and
performance testing has shown that it could handle mail traffic just fine.
You could also look at GPFS (http://www-03.ibm.com/systems/software/gpfs/),
which is not open source but is apparently rock solid and I believe
supports multisite clustering.
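
For the curious, a rough sketch of how such a two-node replicated volume is
created with the GlusterFS CLI (hostnames and brick paths are placeholders,
not from the setup described):

# from site-a, join the peers and build a replica-2 volume:
gluster peer probe site-b
gluster volume create mailvol replica 2 \
    site-a:/export/mail-brick site-b:/export/mail-brick
gluster volume start mailvol

# native-client mount on each Dovecot node:
mount -t glusterfs site-a:/mailvol /mail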


Re: [Dovecot] Dovecot v2.2 plans

2012-02-16 Thread list
On Wed, 15 Feb 2012 20:51:59 +0200, Timo Sirainen t...@iki.fi wrote:
 On 15.2.2012, at 5.08, l...@airstreamcomm.net l...@airstreamcomm.net
 wrote:
 
 I know you mentioned you would cover this in a coming post, but we were
 curious what the new dsync replication will be capable of.  Would it
 monitor changes to mailboxes and push automatic replication to the
remote
 mail store,
 
 Yes.
 
 and if this is the case could it be an N-way replication setup
 in which any host in a cluster can participate in the replication?
 
 Initially 2-way, but I don't think anything prevents it being N-way.
 
 Do you consider this to be a high availability solution?
 
 
 The initial version is really about doing all of this with NFS. In NFS
 setup if two replaced storages are both mounted and the primary storage
 dies, Dovecot will start using the replica. So that's HA.
 
 The other possibility is to run Dovecot in two completely separate data
 centers and replicate through ssh. Here are more possibilities for how
to
 do HA, but some of them also have downsides.. dovecot.fi mails are
actually
 done this way, and can be accessed from either server at any time. I've
 been thinking about soon making half of my clients use one server and
half
 the other one to see if I can find any dsync bugs (I've always 3-4 IMAP
 clients connected).

Just to throw our thoughts into the mix, finding an open source multi-site
active/active mail solution that does not require building super expensive
multi-site storage systems would be a really refreshing way to pursue this
level of availability.  Maybe the only way to accurately get this level of
availability is to cluster the storage between sites?


Re: [Dovecot] Dovecot v2.2 plans

2012-02-14 Thread list
On Mon, 13 Feb 2012 13:47:06 +0200, Timo Sirainen t...@iki.fi wrote:
 Here's a list of things I've been thinking about implementing for
Dovecot
 v2.2. Probably not all of them will make it, but I'm at least interested
in
 working on these if I have time.
 
 Previously I've mostly been working on things that different companies
 were paying me to work on. This is the first time I have my own company,
 but the prioritization still works pretty much the same way:
 
  - 1. priority: If your company is highly interested in getting
something
  implemented, we can do it as a project via my company. This guarantees
  that you'll get the feature implemented in a way that integrates well
into
  your system.
  - 2. priority: Companies who have bought Dovecot support contract can
let
  me know what they're interested in getting implemented. It's not a
  guarantee that it gets implemented, but it does affect my priorities.
:)
  - 3. priority: Things other people want to get implemented.
 
 There are also a lot of other things I have to spend my time on, which
are
 before the 2. priority above. I guess we'll see how things work out.
 
 dsync-based replication
 ---
 
 I'll write a separate post about this later. Besides, it's coming for
 Dovecot v2.1 so it's a bit off topic, but I thought I'd mention it
anyway.
 
 Shared mailbox improvements
 ---
 
 Support for private flags for all mailbox formats:
 
 namespace {
   type = public
   prefix = Public/
   mail_location =
mdbox:/var/vmail/public:PVTINDEX=~/mdbox/indexes-public
 }
 
  - dsync needs to be able to replicate the private flags as well as
shared
  flags.
  - might as well add a common way for all mailbox formats to specify
which
  flags are shared and which aren't. $controldir/dovecot-flags would say
  which is the default (private or shared) and what flags/keywords are
the
  opposite.
  - easy way to configure shared mailboxes to be accessed via imapc
  backend, which would allow easy shared mailbox accesses across servers
or
  simply between two system users in same server. (this may be tricky to
  dsync.)
  - global ACLs read from a single file supporting wildcards, instead of
  multiple different files
  - default ACLs for each namespace/storage root (maybe implemented using
  the above..)
 
 Metadata / annotations
 --
 
 Add support for server, mailbox and mail annotations. These need to be
 dsyncable, so their changes need to be stored in various .log files:
 
 1. Per-server metadata. This is similar to subscriptions: Add changes to
 dovecot.mailbox.log file, with each entry name a hash of the metadata
key
 that was changed.
 
 2. Per-mailbox metadata. Changes to this belong inside
 mailbox_transaction_context, which write the changes to mailbox's
 dovecot.index.log files. Each log record contains a list of changed
 annotation keys. This gives each change a modseq, and also allows easily
 finding out what changes other clients have done, so if a client has
done
 ENABLE METADATA Dovecot can easily push metadata changes to client by
only
 reading the dovecot.index.log file.
 
 3. Per-mail metadata. This is pretty much equivalent to per-mailbox
 metadata, except changes are associated to specific message UIDs.
 
 The permanent storage is in dict. The dict keys have components:
  - priv/ vs. shared/ for specifying private vs. shared metadata
  - server/ vs mailbox/mailbox guid/ vs. mail/mailbox guid/uid
  - the metadata key name
 
 This would be a good time to improve the dict configuration to allow
 things like:
  - mixed backends for different hierarchies (e.g. priv/mailbox/* goes to
a
  file, while the rest goes to sql)
  - allow sql dict to be used in more relational way, so that mail
  annotations could be stored with tables: mailbox (id, guid) and
  mail_annotation (mailbox_id, key, value), i.e. avoid duplicating the
guid
  everywhere.
 
 Things to think through:
  - How to handle quota? Probably needs to be different from regular mail
  quota. Probably some per-user metadata quota bytes counter/limit.
  - Dict lookups should be done asynchronously and prefetched as much as
  possible. For per-mail annotation lookups mail_alloc() needs to include
a
  list of annotations that are wanted.
 
 Configuration
 -
 
 Copy all mail settings to namespaces, so it'll be possible to use
 per-namespace mailbox settings. Especially important for imapc_*
settings,
 but can be useful for others as well. Those settings that aren't
explicitly
 defined in the namespace will use the global defaults. (Should doveconf
-a
 show all of these values, or simply the explicitly set values?)
 
 Get rid of *.conf.ext files. Make everything part of dovecot.conf, so
 doveconf -n outputs ALL of the configuration. There are mainly 3 config
 files I'm thinking about: dict-sql, passdb/userdb sql, passdb/userdb
ldap.
 The dict-sql is something I think needs a bigger redesign (mentioned
above
 in Metadata section

[Dovecot] Vacation via database

2012-02-03 Thread list
We are moving our inbound mail to use dovecot LMTP in the near future, and
we are looking for suggestions on how to implement a MySQL-based vacation
system.  If anyone has experience with this, good or bad, please let us
know.
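
On the Dovecot side the usual building block for this is the Sieve
vacation extension, with per-user scripts generated from the database; a
minimal sketch of such a generated script (subject and message text are
placeholders, and the MySQL-to-script generation step is left out):

require ["vacation"];

vacation :days 7
  :subject "Out of office"
  "I am currently away and will reply when I return.";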



Re: [Dovecot] Fault tolerant architecture

2011-11-28 Thread list
On Mon, 28 Nov 2011 20:14:19 -0200, Marcelo Salhab Brogliato
msbrogli-dove...@vialink.com.br wrote:
 Hi,
 I'm new to this list and want your help.
 I'm the mail admin for some domains in Rio de Janeiro - Brazil. Today we
 have only one machine running dovecot (imap+pop3) with local mail.
 We are migrating to two virtual machines in kvm running in separate
hosts.
 Then we have two main problems:
 - How to share mail files to both dovecots? We've been thinking about
NFS
 using local indexes. Is this a good approach?
 - How do we have a fault tolerant mail servers? Our first solutions is
 using two IP addresses in our DNS records.
 
 About NFS using local indexes, I'm configuring a test server. But how to
 configure local indexes when my mail_location comes from sql
(userdb_home
 actually).
 I'm using dovecot 1.2.9.
 
 I guess you already had some of these problems or maybe in another
 architecture these neither exists.
 
 Thanks for any help,
 
 Marcelo

Marcelo,

There are a number of ways to bring HA to a cluster of mail servers, one
that we have experimented with lately is a bit exotic, but might work for
you.

At the base layer we are experimenting with GlusterFS, a distributed and
replicated file system that offers very simple management and high
availability.  It does run in userspace, which according to some will
suffer from performance bottlenecks, but so far we have not seen any
serious problems while running on 15k disks in RAID 10.  Assuming you have
two virtual machines, you could create a distributed file system between
them and have a mirrored copy of the data on both.

Next is dovecot/postfix/webmail, which would be set up to use the local
GlusterFS mount on the system containing the mail storage and indexes.

To provide HA on the connectivity side we used ucarp, which creates a
virtual IP address between two servers and fails that virtual IP over to
another server in the event of a server going down.  I personally would
never use DNS load balancing (two IPs for one DNS name) as it would round
robin to each server regardless of whether it is running or not.
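
For reference, a sketch of the ucarp invocation behind such a virtual IP
(addresses, vhid, password, and script paths are placeholders).  The same
command runs on both nodes; whichever becomes master holds the address
until it fails:

ucarp -i eth0 -s 192.0.2.11 -v 10 -p sharedsecret -a 192.0.2.100 \
    --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh

# vip-up.sh / vip-down.sh typically just add or remove the address:
#   ip addr add 192.0.2.100/24 dev eth0
#   ip addr del 192.0.2.100/24 dev eth0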

This is somewhat exotic, but it works and provides a very high level of
availability.  However with HA comes more complexity and management.

Good luck and let me know if you would like more specifics.



[Dovecot] ODBC support

2011-09-19 Thread list
I was wondering if ODBC support was on the road map for Dovecot, or if it
has ever been discussed?

Thanks.



[Dovecot] Converting from qpopper mbox to dovecot maildir

2011-07-05 Thread list
We have an older mail server using qpopper and the mbox format which we
need to update to dovecot and maildir format.  I have read through the docs
on migrating from mbox to maildir, as well as a few nuggets on how to
migrate from qpopper to dovecot, and I was wondering if I could get some
suggestions on best practices for this specific migration.  Would the built
in dovecot conversion plugin be a viable method to migrate users?  We will
be migrating to dovecot 2.0.12 from qpopper 4.0.5.

Thanks.



Re: [Dovecot] Converting from qpopper mbox to dovecot maildir

2011-07-05 Thread list
On Tue, 5 Jul 2011 13:41:02 -0700 (PDT), Joseph Tam jtam.h...@gmail.com
wrote:
 On Tue, 5 Jul 2011, l...@airstreamcomm.net wrote:
 
 We have an older mail server using qpopper and the mbox format which we
 need to update to dovecot and maildir format.  I have read through the
 docs
 on migrating from mbox to maildir, as well as a few nuggets on how to
 migrate from qpopper to dovecot, and I was wondering if I could get
some
 suggestions on best practices for this specific migration.  Would the
 built
 in dovecot conversion plugin be a viable method to migrate users?  We
 will
 be migrating to dovecot 2.0.12 from qpopper 4.0.5.
 
 I didn't do anything special other than to use
 
   pop3_reuse_xuidl = yes
 
 so that clients don't re-download all their messages.
 
 Joseph Tam jtam.h...@gmail.com

Joseph,

Did you convert the mbox emails to maildir format as well, or just put
dovecot in front of the mbox files and run with the config setting you
described above?  I am attempting to convert all the mbox files to maildir
format for use on a new system, and have found the mb2md.pl script on this
page:

http://wiki.dovecot.org/Migration/MailFormat

I am going to test the script against a test user and see if it functions
as expected.
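
For reference, typical mb2md invocations for this kind of migration look
roughly like the following (paths are placeholders; check the script's own
documentation before a real run):

# convert a user's spool inbox into a maildir:
mb2md -s /var/spool/mail/jdoe -d /home/jdoe/Maildir

# recurse through a directory of mbox folders:
mb2md -s /home/jdoe/mail -R -d /home/jdoe/Maildir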



[Dovecot] Mysql access denied

2011-06-23 Thread list
Currently using dovecot 2.0.12 and mysql server 4.0.20 (I know, it's
really old) and having issues getting Dovecot to authenticate to the mysql
server.  We have confirmed that the credentials are correct and the host
machine can access the database, however we are getting the following
error:

Jun 23 08:12:50 hostname dovecot: auth: Error: mysql(databaseserver.com):
Connect failed to database (database): Access denied for user:
'sqlad...@ip.of.host.machine' (Using password: YES) - waiting for 1 seconds
before retry

We are assuming this has something to do with the password hashing
algorithm in older versions of mysql, but we are hoping to confirm this
theory and find a solution.

Thanks.



Re: [Dovecot] Mysql access denied

2011-06-23 Thread list
On Thu, 23 Jun 2011 15:48:58 +0200, Johan Hendriks
joh.hendr...@gmail.com wrote:
 Op 23-6-2011 15:37, l...@airstreamcomm.net [1] schreef:  
 Currently using dovecot 2.0.12 and mysql server 4.0.20 (I know, it's
 really old) and having issues getting Dovecot to authenticate to the
 mysql
 server. We have confirmed that the credentials are correct and the host
 machine can access the database, however we are getting the following
 error:
 
 Jun 23 08:12:50 hostname dovecot: auth: Error:
mysql(databaseserver.com):
 Connect failed to database (database): Access denied for user:
 'sqlad...@ip.of.host.machine [2]' (Using password: YES) - waiting for 1
 seconds
 before retry
 
 We are assuming this has something to do with the password hashing
 algorithm in older versions of mysql, but we are hoping to confirm this
 theory and find a solution.
 
 Thanks.
 
   This has as far as i can see nothing to do with hashes.
  It is the mysql database that disallows the user sqladmin access to the
 database.
  Make sure the user sqladmin has the proper rights to access the
 database, from the ipadres.
 
  Gr
  Johan Hendriks
 

 
 Links:
 --
 [1] mailto:l...@airstreamcomm.net
 [2] mailto:sqlad...@ip.of.host.machine

When talking about hashes I was referring to this wiki article:
http://wiki1.dovecot.org/MysqlProblems.  As I stated in my email we have
confirmed that the host can access the database just fine, and the
credentials are correct in the config for Dovecot.

Thanks.



[Dovecot] SQL config error

2011-06-22 Thread list
Currently using 2.0.12, configured the auth-sql.conf to look like this for
password lookups (doing smtp auth with postfix, so I am not actually
running pop or imap, just auth):

passdb {
  driver = sql
  connect = host=server.net dbname=passwd user=sqluser password='password'
  default_pass_scheme = CRYPT
  password_query = SELECT CONCAT(username,'@domain.net') as user, pw as
password FROM passwd WHERE username = '%n'

  # Path for SQL configuration file, see
example-config/dovecot-sql.conf.ext
  args = /etc/dovecot/dovecot-sql.conf.ext
}

Starting Dovecot I am getting the following error:

# 2.0.12: /etc/dovecot/dovecot.conf
doveconf: Fatal: Error in configuration file
/etc/dovecot/conf.d/auth-sql.conf.ext line 8: Unknown setting: connect

This is my first time configuring SQL for Dovecot so I am not sure how
connect is recognized as an unknown setting?

Thanks.



Re: [Dovecot] SQL config error

2011-06-22 Thread list
On Wed, 22 Jun 2011 22:14:10 +0200, Pascal Volk
user+dove...@localhost.localdomain.org wrote:
 On 06/22/2011 07:35 PM l...@airstreamcomm.net wrote:
 Currently using 2.0.12, configured the auth-sql.conf to look like this
 for
 password lookups (doing smtp auth with postfix, so I am not actually
 running pop or imap, just auth):
 
 passdb {
   driver = sql
   connect = host=server.net dbname=passwd user=sqluser
   password='password'
   default_pass_scheme = CRYPT
   password_query = SELECT CONCAT(username,'@domain.net') as user, pw as
 password FROM passwd WHERE username = '%n'
 
   # Path for SQL configuration file, see
 example-config/dovecot-sql.conf.ext
   args = /etc/dovecot/dovecot-sql.conf.ext
 }
 
 Starting Dovecot I am getting the following error:
 
 # 2.0.12: /etc/dovecot/dovecot.conf
 doveconf: Fatal: Error in configuration file
 /etc/dovecot/conf.d/auth-sql.conf.ext line 8: Unknown setting: connect
 
 This is my first time configuring SQL for Dovecot so I am not sure how
 connect is recognized as an unknown setting?
 
 It's an unknown setting in the passdb {} section.
 
 ,--[ $sysconfdir/dovecot/dovecot-sql.conf.ext ]--
 | connect = …
 | [default_pass_scheme = …]
 | password_query = …
 | user_query = …
 | iterate_query = …
 `--
 
 ,--[ $sysconfdir/dovecot/conf.d/auth-sql.conf.ext ]--
 | passdb {
 |   driver = sql
 |   args = $sysconfdir/dovecot/dovecot-sql.conf.ext
  | }
 | userdb {
 |   driver = sql
 |   args = $sysconfdir/dovecot/dovecot-sql.conf.ext
 | }
 `--
 
 
 Regards,
 Pascal

Pascal,

I discovered looking at the config file again that the passdb section is
trying to reference the file /etc/dovecot/dovecot-sql.conf.ext for the
information.  I added the config options to that file, and it's working
now.  

Thanks for the reply.

Michael



[Dovecot] Isilon OneFS storage

2011-04-28 Thread list
I am wondering if anyone has any experience with the Isilon OneFS storage
system (http://www.isilon.com/) using NFS to host storage for Dovecot.

Thanks



Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-03-24 Thread list
Timo  friends,

We have not posted in a long time regarding this issue, but I wanted to
update everyone that it was indeed an NTP issue.  We have since put
together stratum 2 servers on physical hardware (as opposed to virtual
machines) which now act as the source for our cluster of dovecot machines
and everything is working flawlessly.  I just wanted to give my thanks to
everyone who helped us troubleshoot this issue and show my gratitude for
such a great community!

Michael

On Fri, 28 Jan 2011 03:40:38 +0200, Timo Sirainen t...@iki.fi wrote:
 On Tue, 2011-01-25 at 12:34 -0600, l...@airstreamcomm.net wrote:
 
 Jan 25 11:30:11 1295976611 POP3(4eagles): Warning: Created dotlock
file's
 timestamp is different than current time (1295976643 vs 1295976607):
 /mail/4/e/4eagles/Maildir/dovecot-uidlist
 
 We added the epoch time to the log_timestamp setting to compare it to
the
 dotlock error, and as you can see the Created dotlock epoch time is
32
 seconds in the future compared to the epoch time of the log event.  At
 this
 point I hope you can help us understand where a timestamp from the
future
 might be generated from. 
 
 The first timestamp comes from the .lock file's ctime as reported by
 fstat(). If that's in the future, then the possibilities are:
 
 a) Bug in the kernel's NFS client code (unlikely)
 b) Bug in NFS server reporting or setting wrong ctime
 c) NFS server's clock went into future
 
 No possibility of it being Dovecot's fault anymore.



Re: [Dovecot] NoSQL Storage Backend

2011-02-10 Thread list
On Thu, 10 Feb 2011 10:48:56 -0800, Marc Villemade
marc.villem...@scality.com wrote:
 Hey Marten,
 
 As a disclaimer, I work with Scality.
 
 Thank you guys for letting us know about the typo in the URL for dovecot
 on the website.
 I have tried to reply to the email that sent us the information to thank
 them, but it was bogus ;(
 
 We have also seen your request for information. Our Account Manager from
 Germany will be in touch with you very soon.
 
 Best,
 
 -Marc
 @mastachand
 http://linkd.in/heve30
 
 
 
 
 On Feb 10, 2011, at 10:26 AM, Marten Lehmann wrote:
 
 The Scality webpages mentions they´ve developed storage connector for
 dovecot:
 
 http://www.scality.com/storage-solutions/
 
 whatever that means..
 
 Not so great - they're linking to dovecot.COM instead of
dovecot.ORG...
 
 I asked them about the dovecot support two days ago but still no
reply...

Marc,

That must have been us (jesusj...@fu.com?), sorry we just wanted to get
the message over without any sales pitches or something similar coming
back.  Glad to hear it's fixed.

Michael



[Dovecot] Storing Dovecot index files on sql

2011-01-27 Thread list


We have been communicating with the mailing list on another thread
regarding dotlocking on the dovecot-uidlist over an NFS share, and I was
hoping to get some ideas on whether or not a SQL database would be a
suitable location for storing the Dovecot index files.  Our thought
process so far has been that storing the index files on NFS with the
maildir has not worked very effectively when we are using multiple Dovecot
servers (even using sticky IPs in our load balancer), and using local
storage is not possible with a cluster of machines, so we got to thinking
about what type of storage platform would be suitable for these index
files that would provide native locking of the data while multiple servers
accessed and wrote to it.  Our question to the community is whether SQL
storage for index files has ever been considered, what issues it would
introduce, and whether it's even feasible?

Thanks everyone! 

Michael 
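
As a point of reference, the index path is already separable from the mail
store through mail_location's INDEX parameter (the same mechanism used for
the INDEX/CONTROL split elsewhere on this list), so any backend with sane
locking could hold the indexes without touching the maildirs; a sketch:

mail_location = maildir:~/Maildir:INDEX=/var/dovecot-indexes/%u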

 

Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-25 Thread list
On Tue, 25 Jan 2011 01:11:53 +0200, Timo Sirainen t...@iki.fi wrote:
 On 25.1.2011, at 1.06, l...@airstreamcomm.net wrote:
 
 Multi-server setup that tries to flush NFS caches:
 
 dotlock_use_excl = no # only needed with NFSv2, NFSv3+ supports O_EXCL
 and
 it's faster
 
 You're probably using NFSv3, right? Then this isn't needed.
 
  Also note the tries word. It doesn't work perfectly, although in your
  case it seems to be working worse than expected. Still, these NFS
  problems are the reason I created director: http://wiki2.dovecot.org/Director

We are using NFSv3, and for five months the system worked with four
dovecot servers and three postfix servers all accessing the same NFS server
simultaneously.  We cannot pick out a change in our network or in the
virtual environment our machines reside in that would have impacted the
system this drastically.  We have also confirmed that the clocks on all
systems accessing the NFS server and the NFS server itself are within 1
second of each other.  It's confounding us why the logs show such strange
time stamps:

Jan 25 11:30:11 1295976611 POP3(4eagles): Warning: Created dotlock file's
timestamp is different than current time (1295976643 vs 1295976607):
/mail/4/e/4eagles/Maildir/dovecot-uidlist

We added the epoch time to the log_timestamp setting to compare it to the
dotlock error, and as you can see the Created dotlock epoch time is 32
seconds in the future compared to the epoch time of the log event.  At this
point I hope you can help us understand where a timestamp from the future
might be generated from.

As for the director, we will be considering the option after doing some
heavy testing.

Thanks,

Michael



Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-24 Thread list
Timo,

Thanks for the quick reply!  We are building an rpm with the patch and
will test this week and report back with our findings.

We are grateful for your help and the chance to communicate directly with
the author of dovecot!

Michael

On Thu, 20 Jan 2011 23:18:16 +0200, Timo Sirainen t...@iki.fi wrote:
 On Thu, 2011-01-20 at 08:32 -0600, l...@airstreamcomm.net wrote:
 
 Created dotlock file's timestamp is different than current time
 (1295480202 vs 1295479784): /mail/user/Maildir/dovecot-uidlist
 
 Hmm. This may be a bug that happens when dotlocking has to wait for a
 long time for dotlock. See if
 http://hg.dovecot.org/dovecot-1.2/raw-rev/9a50a9dc905f fixes this.



Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-24 Thread list
On Mon, 24 Jan 2011 08:41:31 -0600, l...@airstreamcomm.net wrote:
 Timo,
 
 Thanks for the quick reply!  We are building an rpm with the patch and
 will test this week and report back with our findings.
 
 We are grateful for your help and the chance to communicate directly
with
 the author of dovecot!
 
 Michael
 
 On Thu, 20 Jan 2011 23:18:16 +0200, Timo Sirainen t...@iki.fi wrote:
 On Thu, 2011-01-20 at 08:32 -0600, l...@airstreamcomm.net wrote:
 
 Created dotlock file's timestamp is different than current time
 (1295480202 vs 1295479784): /mail/user/Maildir/dovecot-uidlist
 
 Hmm. This may be a bug that happens when dotlocking has to wait for a
 long time for dotlock. See if
 http://hg.dovecot.org/dovecot-1.2/raw-rev/9a50a9dc905f fixes this.

Timo,

We tested the patch you suggested with no success.  We are seeing
timestamps straying into the hundreds of seconds of difference, which does
not reflect the perceivable drift on our system clocks (which is
negligible, sub 1 second).  Currently we run against two stratum 2 servers
that get their time from two stratum 1 servers, and per Stan's suggestions
earlier we are rebuilding the stratum 2 machines on bare metal hardware
(not in a virtual machine) to be sure the clocks will be super stable.  I
guess what I am asking is whether you have ever seen an issue similar to
this and whether NTP actually played a role.  I have spent some time
reviewing the mailing list archives and have not found a definitive answer
from other Dovecot users' experiences.

Thanks again,

Michael

Here's the error showing a difference of 239 seconds:

Created dotlock file's timestamp is different than current time
(1295900137 vs 1295899898): /mail/username/Maildir/dovecot-uidlist

Here's the output of dovecot -n after patching:

# 1.2.16: /etc/dovecot.conf
# OS: Linux 2.6.18-92.el5 x86_64 CentOS release 5.5 (Final) 
protocols: imap pop3
listen(default): *:143
listen(imap): *:143
listen(pop3): *:110
shutdown_clients: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/libexec/dovecot/imap-login
login_executable(imap): /usr/libexec/dovecot/imap-login
login_executable(pop3): /usr/libexec/dovecot/pop3-login
login_process_per_connection: no
login_process_size: 128
login_processes_count: 4
login_max_processes_count: 256
login_max_connections: 386
first_valid_uid: 300
mail_location: maildir:~/Maildir
mmap_disable: yes
dotlock_use_excl: no
mail_nfs_storage: yes
mail_nfs_index: yes
mail_executable(default): /usr/libexec/dovecot/imap
mail_executable(imap): /usr/libexec/dovecot/imap
mail_executable(pop3): /usr/libexec/dovecot/pop3
mail_plugin_dir(default): /usr/lib64/dovecot/imap
mail_plugin_dir(imap): /usr/lib64/dovecot/imap
mail_plugin_dir(pop3): /usr/lib64/dovecot/pop3
auth default:
  username_format: %Ln
  worker_max_count: 50
  passdb:
driver: pam
  userdb:
driver: passwd




Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-24 Thread list
On Tue, 25 Jan 2011 00:13:53 +0200, Timo Sirainen t...@iki.fi wrote:
 On 24.1.2011, at 23.06, l...@airstreamcomm.net wrote:
 
 We tested the patch you suggested with no success.  We are seeing
 timestamps straying into the 100's of seconds of difference, which does
 not
 reflect the perceivable drift that shows on our systems clock (which is
 negligible, sub 1 second).
 ..
 Created dotlock file's timestamp is different than current time
 (1295900137 vs 1295899898): /mail/username/Maildir/dovecot-uidlist
 
 I see you have:
 
 dotlock_use_excl: no
 
 There was another reason why the above error could happen with that
 setting. http://hg.dovecot.org/dovecot-1.2/rev/ab81fbb195e2 fixes it.
 
 But anyway, the main problem you have is that Dovecot is waiting for
 hundreds of seconds for the uidlist lock. It shouldn't. Meaning your NFS
 server is answering way too slowly to Dovecot.

Timo,

Thanks for the quick reply.  

Per the dovecot wiki on nfs http://wiki.dovecot.org/NFS:

Multi-server setup that tries to flush NFS caches:

mmap_disable = yes
dotlock_use_excl = no # only needed with NFSv2, NFSv3+ supports O_EXCL and
it's faster
mail_nfs_storage = yes # v1.1+ only
mail_nfs_index = yes # v1.1+ only

We configured our systems to match these settings, as we are trying to use
a multi-server setup.  

We certainly have not excluded our NFS server as the culprit for the long
wait, but at this point we are running two servers against that NFS share,
a postfix machine and a dovecot machine, both accessing the disk without
error or any latency.  It's only when we add a second dovecot machine to
the mix that we start to see the dotlock errors.  In our configuration we
have a load balancer that is distributing traffic automatically to the pop
and imap services on our cluster of machines.  Once we add a new server to
the mix the load balancer automatically directs new traffic to it, and we
start seeing the errors:

Jan 24 13:20:39 10.123.128.105 dovecot: pop3-login: Login:
user=mpbixler, method=PLAIN, rip=68.65.35.174, lip=64.33.128.105
Jan 24 13:20:42 10.123.128.105 dovecot: POP3(mpbixler): Disconnected:
Logged out top=0/0, retr=0/0, del=0/8234, size=1450555980
Jan 24 13:40:46 10.123.128.105 dovecot: pop3-login: Login:
user=mpbixler, method=PLAIN, rip=68.65.35.174, lip=64.33.128.105
Jan 24 13:40:47 10.123.128.105 dovecot: POP3(mpbixler): Disconnected:
Logged out top=0/0, retr=0/0, del=0/8234, size=1450555980
Jan 24 14:14:28 10.123.128.108 dovecot: pop3-login: Login:
user=mpbixler, method=PLAIN, rip=68.65.35.174, lip=64.33.128.108
Jan 24 14:14:28 10.123.128.108 dovecot: POP3(mpbixler): Created dotlock
file's timestamp is different than current time (1295900068 vs 1295899828):
/mail/m/p/mpbixler/Maildir/dovecot-uidlist
Jan 24 14:14:28 10.123.128.108 dovecot: POP3(mpbixler): Created dotlock
file's timestamp is different than current time (1295900068 vs 1295899828):
/mail/m/p/mpbixler/Maildir/dovecot-uidlist
Jan 24 14:14:32 10.123.128.108 dovecot: POP3(mpbixler): Created dotlock
file's timestamp is different than current time (1295900072 vs 1295899833):
/mail/m/p/mpbixler/Maildir/dovecot-uidlist
Jan 24 14:14:33 10.123.128.108 dovecot: POP3(mpbixler): Disconnected:
Logged out top=0/0, retr=1/7449, del=0/8235, size=1450563412
Jan 24 14:34:36 10.123.128.105 dovecot: pop3-login: Login:
user=mpbixler, method=PLAIN, rip=68.65.35.174, lip=64.33.128.105
Jan 24 14:34:40 10.123.128.105 dovecot: POP3(mpbixler): Disconnected:
Logged out top=0/0, retr=1/10774, del=0/8236, size=1450574168
Jan 24 14:54:53 10.123.128.105 dovecot: pop3-login: Login:
user=mpbixler, method=PLAIN, rip=68.65.35.174, lip=64.33.128.105
Jan 24 14:54:54 10.123.128.105 dovecot: POP3(mpbixler): Disconnected:
Logged out top=0/0, retr=0/0, del=0/8236, size=1450574168


As you can see here, our current mail server 10.123.128.105 is handling
the traffic without incident, and once we add 10.123.128.108 to the cluster
the dovecot service errors out.  From the last time the user was logged in
at 13:40:47 to the login on the new server at 14:14:28 is about 34 minutes;
from my understanding of the code it is not possible for a lock file to
exist that long, and that gap is not reflected in the timestamps of the
error log.


Pretty stumped at this point, as our NFS server appears to be running
smoothly.  All the packet captures we have run show that all the NFS
related traffic is moving back and forth quickly and without error.  

Thanks,

Michael



[Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread list
As of late our four node dovecot 1.2.13 cluster has been experiencing a
massive number of these dotlock errors:

Created dotlock file's timestamp is different than current time
(1295480202 vs 1295479784): /mail/user/Maildir/dovecot-uidlist

These dotlock errors correspond with very high load averages, and
eventually we have to turn off all but one server to stop them from
occurring.  We first assumed this trend was related to the NFS storage, but
we could not find a networking issue or NFS related problem to speak of. 
We run the mail storage on NFS which is hosted on a Centos 5.5 host, and
mounted with the following options:

udp,nodev,noexec,nosuid.  

Secondly we thought the issues were due to NTP, as the time stamps vary so
widely, so we rebuilt our NTP servers and found closer stratum 1 source
clocks to synchronize to, hoping it would alleviate the problem, but the
dotlock errors returned after about 12 hours.  We have fcntl locking set in
our configuration file, but it is our understanding from looking at the
source code that this file is locked with dotlock.

Any help troubleshooting is appreciated.

Thanks,

Michael


# 1.2.13: /etc/dovecot.conf
# OS: Linux 2.6.18-194.8.1.el5 x86_64 CentOS release 5.5 (Final) 
protocols: imap pop3
listen(default): *:143
listen(imap): *:143
listen(pop3): *:110
shutdown_clients: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/libexec/dovecot/imap-login
login_executable(imap): /usr/libexec/dovecot/imap-login
login_executable(pop3): /usr/libexec/dovecot/pop3-login
login_process_per_connection: no
login_process_size: 128
login_processes_count: 4
login_max_processes_count: 256
login_max_connections: 386
first_valid_uid: 300
mail_location: maildir:~/Maildir
mmap_disable: yes
dotlock_use_excl: no
mail_nfs_storage: yes
mail_nfs_index: yes
mail_executable(default): /usr/libexec/dovecot/imap
mail_executable(imap): /usr/libexec/dovecot/imap
mail_executable(pop3): /usr/libexec/dovecot/pop3
mail_plugin_dir(default): /usr/lib64/dovecot/imap
mail_plugin_dir(imap): /usr/lib64/dovecot/imap
mail_plugin_dir(pop3): /usr/lib64/dovecot/pop3
auth default:
  username_format: %Ln
  worker_max_count: 50
  passdb:
    driver: pam
  userdb:
    driver: passwd



[Dovecot] Fwd: Re: Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread list
Stan,

Thanks for the reply.  In our case we have actually already done most of
the work you suggested, to no avail.  We rebuilt two new NTP servers that
sync against two stratum 1 sources, and all our NFS clients, whether or
not they run dovecot, sync to those two machines.  You bring up the
difference between bare metal and hypervisor; we are running these
machines on VMware 4.0.  All the VMware knowledge base articles tend to
push us towards NTP, and since we are using CentOS 5.5 there are no kernel
modifications that need to be made regarding timing, from what we can
find.  I will give the ntpdate option a try and see what happens.

I was also hoping to understand why the uidlist file is the only file that
uses dotlock, and whether there are plans to give it the option of using
other locking mechanisms in the future.

Thanks again!

Michael

 Original Message 
Subject: Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load
Date: Thu, 20 Jan 2011 10:57:24 -0600
From: Stan Hoeppner s...@hardwarefreak.com
To: dovecot@dovecot.org

l...@airstreamcomm.net put forth on 1/20/2011 8:32 AM:

 Secondly we thought the issues were due to NTP as the time stamps vary
so
 widely, so we rebuilt our NTP servers and found closer stratum 1 source
 clocks to synchronize to hoping it would alleviate the problem but the
 dotlock errors returned after about 12 hours.  We have fcntl locking set
in
 our configuration file, but it is our understanding from look at the
source
 code that this file is locked with dotlock.  
 
 Any help troubleshooting is appreciated.

From your description it sounds as if you're ntpd-syncing each of the 4
servers against an external time source, first stratum 2/3 sources, then
stratum 1 sources, in an attempt to cure this problem.

In a clustered server environment, _always_ run a local physical
box/router ntpd server (preferably two) that queries a set of external
sources and services your internal machine queries.  With RTTs all on your
LAN, and using the same internal time sources for every query, this clock
drift issue should be eliminated.  Obviously, when you first set this up,
stop ntpd and run ntpdate to get an initial time sync for each cluster
host.

If, after setting this up, we're dealing with bare metal cluster member
servers, then I'd guess you've got a failed/defective clock chip on one
host.  If this is Linux, you can work around that by changing the local
time source.  There are something like 5 options.  Google for Linux time
or similar.  Or, simply replace the hardware--RTC chip, mobo, etc.

If any of these cluster members are virtual machines, regardless of
hypervisor, I'd recommend disabling ntpd and cron'ing ntpdate to run once
every 5 minutes, or once a minute, whatever it takes to get the times to
remain synced, against your local ntpd server mentioned above.  I got to
the point with VMware ESX that I could make any Linux distro VM of 2.4 or
2.6 stay within one minute a month before needing a manual ntpdate against
our local time source.  The time required to get to that point is a total
waste.  Cron'ing ntpdate as I mentioned is the quick, reliable way to
solve this issue, if you're using VMs.
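
A sketch of that cron'd ntpdate approach in /etc/crontab form (the internal
ntpd server name is a placeholder):

# resync the clock every 5 minutes against the local ntpd server
*/5 * * * * root /usr/sbin/ntpdate -s ntp1.internal.lan >/dev/null 2>&1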

-- 
Stan





[Dovecot] Upgrading from 1.2 to 2.0

2011-01-19 Thread list


Greetings Dovecot mailing list! 

We are an ISP that recently migrated from Courier to Dovecot, and are
looking at the next round of upgrades.  We are hoping to get pointed to
some documentation or a general list of best practices for an upgrade from
1.2.13 to 2.0.9.  Our current environment consists of four Dovecot
machines that share an NFS backend hosting POP and IMAP.  Our biggest
question is whether we can do a rolling upgrade from 1.2.13 to 2.0.9, or
whether it would need to be an all-at-once cutover.

Thanks in advance.
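
For the configuration side of that jump, Dovecot v2.0's doveconf can read a
v1.x configuration and emit it in v2 format, which makes a good starting
point for the upgrade (paths are placeholders):

doveconf -n -c /etc/dovecot-1.2/dovecot.conf > /etc/dovecot/dovecot.conf.new

Review the output by hand before putting it into service.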

[Dovecot] pop3-login: auth failure -- due to number as first character in login name

2010-04-27 Thread dovecot mail list account
We seem to have found an error in version 1.2.6 regarding usernames having
a number as the first character.  Customers with a number as the first
character in their user name cannot log in via pop3 clients; however, they
can log in through our webmail interface.  If we remove the number from
the username then the customer can log in.


# 1.2.6: /etc/dovecot.conf
# OS: Linux 2.6.31.12-desktop-3mnb x86_64 Mandriva Linux 2010.0
protocols: pop3
ssl_listen: *
ssl_cert_file: /etc/pki/tls/certs/dovecot.pem
ssl_key_file: /etc/pki/tls/private/dovecot.pem
disable_plaintext_auth: no
login_dir: /var/run/dovecot/login
login_executable: /usr/lib64/dovecot/pop3-login
mail_privileged_group: mail
mail_location: mbox:~/mail:INBOX=/home/mail/%u
mail_executable: /usr/lib64/dovecot/pop3
mail_plugin_dir: /usr/lib64/dovecot/modules/pop3
pop3_uidl_format: %08Xv%08Xu
lda:
  postmaster_address: postmas...@example.com
auth default:
  mechanisms: plain login
  verbose: yes
  passdb:
driver: pam
  userdb:
driver: passwd
  socket:
type: listen
client:
  path: /var/spool/postfix/private/auth
  mode: 432
  user: postfix
  group: postfix
plugin:
  quota_warning: storage=95%% /usr/local/bin/quota-warning.sh 95
  quota_warning2: storage=80%% /usr/local/bin/quota-warning.sh 80


[Dovecot] dovecot-uidlist: Duplicate file entry at line error

2009-02-23 Thread Dovecot List
This is with respect to an error that I am facing in dovecot.

The error that is seen in the logs is: Feb 23 00:04:46 mailblade1
dovecot: IMAP(USERNAME): /indexes/USERNAME/.INBOX/dovecot-uidlist:
Duplicate file entry at line 7:
1234776125.M559298P3988.s2o.qlc.co.in,S=13111,W=13470:2, (uid 94277 -
97805)

This error is seen for multiple users.

Once this error occurs for a user, the user downloads all their mails over
again in POP3.

Please help! And let us know what can be done.

Details of our setup
--

The setup has deployed 2 servers with Dovecot using the same storage
over NFS with the following options:

* Dovecot Version: v1.1.8 on both servers
* Server1 for retrieving mails using POP and imap Access
* Server2 for Mail Delivery via postfix and dovecot as LDA
* Number of users: 2
* Number of mails Handled: 500K per day

--Note: Based on a recommendation, the kernel was upgraded from 2.6.18
(see below for details) and yet the issue persists.


Specifications of the server
---
1) Server spec used for mail Access
    OS: Linux 2.6.28 i686 CentOS release 5.2 (Final)
    kernel : 2.6.28
    /proc/sys/fs/inotify/max_user_instances = 1024
    /proc/sys/fs/epoll/max_user_instances  = 1024
    Memory : 2GB

2) Server used for Dovecot Delivery
    OS: Linux 2.6.21-1.3194.fc7 i686 Fedora release 7 (Moonshine)
    kernel : 2.6.21
    /proc/sys/fs/inotify/max_user_instances = 128
    /proc/sys/fs/epoll/max_user_instances  = NA
    Memory : 2 GB


Conf on both the system as follows =
--
# 1.1.8: /usr/local/etc/dovecot.conf
#NFS Settings
mmap_disable: yes
mail_nfs_storage: yes
mail_nfs_index: yes

#Other special settings
pop3_no_flag_updates(pop3): yes
pop3_lock_session(pop3): yes



# all the settings
base_dir: /usr/local/var/run/dovecot
log_path:
info_log_path:
log_timestamp: %b %d %H:%M:%S
syslog_facility: mail
protocols: imap pop3
listen(default): *:143
listen(imap): *:143
listen(pop3): *:110
ssl_listen:
ssl_disable: yes
ssl_ca_file:
ssl_cert_file: /etc/ssl/certs/dovecot.pem
ssl_key_file: /etc/ssl/private/dovecot.pem
ssl_key_password:
ssl_parameters_regenerate: 168
ssl_cipher_list:
ssl_cert_username_field: commonName
ssl_verify_client_cert: no
disable_plaintext_auth: no
verbose_ssl: no
shutdown_clients: yes
nfs_check: yes
version_ignore: no
login_dir: /usr/local/var/run/dovecot/login
login_executable(default): /usr/local/libexec/dovecot/imap-login
login_executable(imap): /usr/local/libexec/dovecot/imap-login
login_executable(pop3): /usr/local/libexec/dovecot/pop3-login
login_user: dovecot
login_greeting: Welcome to MailServe Popserver.
login_log_format_elements: user=%u method=%m rip=%r lip=%l %c
login_log_format: %$: %s
login_process_per_connection: no
login_chroot: yes
login_greeting_capability: no
login_process_size: 64
login_processes_count: 3
login_max_processes_count: 128
login_max_connections: 256
valid_chroot_dirs:
mail_chroot:
max_mail_processes: 600
mail_max_userip_connections(default): 50
mail_max_userip_connections(imap): 50
mail_max_userip_connections(pop3): 8
verbose_proctitle: yes
first_valid_uid: 99
last_valid_uid: 0
first_valid_gid: 99
last_valid_gid: 0
mail_extra_groups:
mail_access_groups:
mail_privileged_group:
mail_uid:
mail_gid:
mail_location: maildir:~/Maildir:INDEX=/indexes/%u:CONTROL=/indexes/%u
mail_cache_fields:
mail_never_cache_fields: imap.envelope
mail_cache_min_mail_count: 0
mailbox_idle_check_interval: 30
mail_debug: no
mail_full_filesystem_access: no
mail_max_keyword_length: 50
mail_save_crlf: no
mmap_disable: yes
dotlock_use_excl: yes
fsync_disable: no
mail_nfs_storage: yes
mail_nfs_index: yes
mailbox_list_index_disable: yes
lock_method: fcntl
maildir_stat_dirs: no
maildir_copy_with_hardlinks: yes
maildir_copy_preserve_filename: no
mbox_read_locks: fcntl
mbox_write_locks: dotlock fcntl
mbox_lock_timeout: 300
mbox_dotlock_change_timeout: 120
mbox_min_index_size: 0
mbox_dirty_syncs: yes
mbox_very_dirty_syncs: no
mbox_lazy_writes: yes
dbox_rotate_size: 2048
dbox_rotate_min_size: 16
dbox_rotate_days: 1
umask: 63
mail_drop_priv_before_exec: no
mail_executable(default): /usr/local/libexec/dovecot/imap
mail_executable(imap): /usr/local/libexec/dovecot/imap
mail_executable(pop3): /usr/local/libexec/dovecot/pop3
mail_process_size: 256
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(pop3): quota
mail_plugin_dir(default): /usr/local/lib/dovecot/imap
mail_plugin_dir(imap): /usr/local/lib/dovecot/imap
mail_plugin_dir(pop3): /usr/local/lib/dovecot/pop3
mail_log_prefix: %Us(%u):
mail_log_max_lines_per_sec: 10
imap_max_line_length: 65536
imap_capability:
imap_client_workarounds:
imap_logout_format: bytes=%i/%o
pop3_no_flag_updates(default): no
pop3_no_flag_updates(imap): no
pop3_no_flag_updates(pop3): yes
pop3_enable_last: no
pop3_reuse_xuidl: no
pop3_lock_session(default): no
pop3_lock_session(imap): no
pop3_lock_session(pop3): yes
pop3_uidl_format: %08Xu%08Xv

[Dovecot] Public (Shared Folders) ACL Questions

2008-09-04 Thread Mailing List
I'm trying to set up a public namespace so that a set of IMAP folders
are available to all staff - similar to MS Exchange Public Folders.

I've managed to set up the namespace correctly but I'm having trouble
with the ACLs. The global ACL file is the only method I can get to work.

All I want to do is to allow one user admin privileges to create & delete
anything, but all other users should only be able to create, not delete.
Reading through the mailing list I thought a /etc/dovecot-acls/.DEFAULT
file would be suitable but what should be put in here to achieve what I
want? Are you able to use wildcards somehow within this file, i.e.:

owner lrwstiekxa
[EMAIL PROTECTED] lrwstiekxa
[EMAIL PROTECTED] lrw


Does this .DEFAULT file only apply to the public (shared) namespace or
will it affect private mailboxes also?

If I was to create a specific global acl file for a specific folder
which would take precedence, the .DEFAULT acls or the specific folder
acls?

Also an INBOX is shown within the public folders namespace but no
folder exists in the public folders maildir hierarchy - any ideas how I
can stop this?

Any help would be greatly appreciated.

Gavin
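
For reference, a sketch of what such a .DEFAULT file could contain,
assuming Dovecot's dovecot-acl syntax ("admin" is a placeholder username,
and this is untested against the setup described):

user=admin lrwstipekxa
authenticated lrwstik

"authenticated" matches all logged-in users; omitting e (expunge), x
(delete mailbox) and a (admin) from their rights gives roughly the
create-but-not-delete behaviour described.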



[Dovecot] Dovecot Sieve Broken with 1.1 RPMs?

2008-07-07 Thread Mailing List
We have been using dovecot v1:1.0.15-1_72 and dovecot-sieve v1.0.3-7
very successfully for many months but I've been unable to get
dovecot-sieve to work with the latest 1.1 release. The new release seems
to completely ignore our global sieve script no matter what we do.

Is anyone else using the atrpms-stable rpms packages and able to get
dovecot-sieve to work correctly?

Any help would be greatly appreciated.

Gavin



Re: [Dovecot] Dovecot Sieve Broken with 1.1 RPMs?

2008-07-07 Thread Mailing List
Thanks for taking the time to respond Patrick.

Previously (in v1.0) we did not need the plugin section:
sieve = mailfilter.sieve

Is this now required even when just wanting to use a global sieve
script? We have ~100 users so manually adding a mailfilter.sieve file to
each users directory will be very time consuming. Are we able to input
the full path name to our global script instead,
i.e. /etc/dovecot-sieve/global?

I tried using sieve_global_path = /etc/dovecot-sieve/global in the 'lda
protocol' section but, as stated, the script seems to be ignored. The
local delivery agent is working but everything gets delivered to the
users inbox instead of being filtered.

Gavin


On Tue, 2008-07-08 at 10:36 +0800, Patrick Nagel wrote:
 Hi Gavin,
 
 On Tuesday 08 July 2008, you wrote:
  We have been using dovecot v1:1.0.15-1_72 and dovecot-sieve v1.0.3-7
  very successfully for many months but I've been unable to get
  dovecot-sieve to work with the latest 1.1 release. The new release seems
  to completely ignore our global sieve script no matter what we do.
 
  Is anyone else using the atrpms-stable rpms packages and able to get
  dovecot-sieve to work correctly?
 
  Any help would be greatly appreciated.
 
  Gavin
 
 yes, we're using the following
 
 dovecot
 Version: 1.1.1
 Release: 2_76.el5
 
 and
 
 dovecot-sieve
 Version: 1.1.5
 Release: 8.el5
 
 from atrpms, and sieve works as it did before.
 
 Sieve relevant configuration looks as follows:
 
 * in section 'protocol lda':
 mail_plugins = cmusieve
 sieve_global_dir = /storage/sieve
 
 * in section 'plugin':
 sieve = mailfilter.sieve
 
 In each user's home directory who should have sieve filtering activated, we 
 place a mailfilter.sieve and in there we include one or more global and/or 
 personal sieve scripts.
 
 For example:
 * /home/john.doe/mailfilter.sieve:
 require ["include"];
 include :personal "vacation.sieve";
 include :global "spam.sieve";
 
 spam.sieve resides in the /storage/sieve directory.
 
 Hope this helps.
 Patrick.
 



Re: [Dovecot] netapp/maildir/dovecot performance

2007-03-22 Thread bofh list

On 3/22/07, Tom Bombadil [EMAIL PROTECTED] wrote:

...


What operating system are you using? Performance over NFS is
tremendously different. And each OS can be tweaked differently.

You should try noatime (noxattr on solaris) regardless.


RHEL4u3 64-bit.  I will try noatime.

Thanks
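
For reference, a noatime NFS mount for maildir storage would look roughly
like this in /etc/fstab (server and paths are placeholders):

filer:/vol/mail  /mail  nfs  rw,hard,intr,noatime,vers=3  0 0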


Re: [Dovecot] tb-negative-fetch workaround

2007-03-15 Thread bofh list

On 3/13/07, Timo Sirainen [EMAIL PROTECTED] wrote:
...


I'm running out of ideas. Check with rawlog what exactly Dovecot and
Thunderbird are talking to each others
(http://dovecot.org/bugreport.html). Are there any large UIDs either?
What is Thunderbird doing just before it sends that command? Maybe its
local cache is broken?


Can't reproduce the problem on the client.  Chalk it up to broken
Thunderbird cache.  Client is now logging but it seems like it was a
spurious event.