RE: altmove reverse doesn't work

2021-04-14 Thread JAVIER MIGUEL RODRIGUEZ
Same here.

doveadm altmove -r is broken and needs to be fixed. We want to recompress our ALT 
mdbox storage from LZMA to ZSTD and we cannot do it because doveadm altmove -r 
does not work.
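
The conversion we are blocked on is roughly this (a sketch, assuming a working
-r; the user mask and search query are illustrative):

    # 1. switch the zlib plugin writer from xz to zstd in dovecot.conf:
    #      plugin { zlib_save = zstd }
    # 2. pull old mail back from ALT into primary storage:
    doveadm altmove -u '*' -r mailbox '*' savedbefore 2w
    # 3. push it back out; new ALT m.* files get written with zstd:
    doveadm altmove -u '*' mailbox '*' savedbefore 2w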

Regards

From: dovecot  on behalf of Zdenek Zámecník
Sent: Friday, 9 April 2021 15:12
To: Aki Tuomi ; Dovecot Mailing List 

Subject: Re: altmove reverse doesn't work


I already tried doveadm purge, but with no luck. The debug parameter doesn't 
show any interesting output either, as you can see below. It shows that it's moving 
about 7 messages but in fact it doesn't do anything. If I repeat the 
command the output is still the same. I just found that a few other people have already 
described the same problem, for example here: 
https://dovecot.org/pipermail/dovecot/2021-February/121329.html

Is there any chance to get it fixed in upstream?

Apr 09 14:58:00 Debug: Loading modules from directory: /usr/lib/dovecot/modules

Apr 09 14:58:00 Debug: Module loaded: 
/usr/lib/dovecot/modules/lib01_acl_plugin.so

Apr 09 14:58:00 Debug: Module loaded: 
/usr/lib/dovecot/modules/lib10_quota_plugin.so

Apr 09 14:58:00 Debug: Module loaded: 
/usr/lib/dovecot/modules/lib20_fts_plugin.so

Apr 09 14:58:00 Debug: Module loaded: 
/usr/lib/dovecot/modules/lib20_zlib_plugin.so

Apr 09 14:58:00 Debug: Loading modules from directory: 
/usr/lib/dovecot/modules/doveadm

Apr 09 14:58:00 Debug: Module loaded: 
/usr/lib/dovecot/modules/doveadm/lib10_doveadm_acl_plugin.so

Apr 09 14:58:00 Debug: Module loaded: 
/usr/lib/dovecot/modules/doveadm/lib10_doveadm_quota_plugin.so

Apr 09 14:58:00 Debug: Module loaded: 
/usr/lib/dovecot/modules/doveadm/lib10_doveadm_sieve_plugin.so

Apr 09 14:58:00 Debug: Skipping module doveadm_fts_lucene_plugin, because 
dlopen() failed: 
/usr/lib/dovecot/modules/doveadm/lib20_doveadm_fts_lucene_plugin.so: undefined 
symbol: lucene_index_iter_deinit (this is usually intentional, so just ignore 
this message)

Apr 09 14:58:00 Debug: Module loaded: 
/usr/lib/dovecot/modules/doveadm/lib20_doveadm_fts_plugin.so

Apr 09 14:58:00 Debug: Skipping module doveadm_mail_crypt_plugin, because 
dlopen() failed: 
/usr/lib/dovecot/modules/doveadm/libdoveadm_mail_crypt_plugin.so: undefined 
symbol: mail_crypt_box_get_pvt_digests (this is usually intentional, so just 
ignore this message)

Apr 09 14:58:00 
doveadm(myu...@mydomain.yyy)<27721><>: Debug: 
auth-master: userdb lookup(myu...@mydomain.yyy): 
Started userdb lookup

Apr 09 14:58:00 
doveadm(myu...@mydomain.yyy)<27721><>: Debug: 
auth-master: conn unix:/var/run/dovecot/auth-userdb: Connecting

Apr 09 14:58:00 
doveadm(myu...@mydomain.yyy)<27721><>: Debug: 
auth-master: conn unix:/var/run/dovecot/auth-userdb (pid=14462,uid=0): Client 
connected (fd=8)

Apr 09 14:58:00 
doveadm(myu...@mydomain.yyy)<27721><>: Debug: 
auth-master: userdb lookup(myu...@mydomain.yyy): 
auth USER input: myu...@mydomain.yyy 
quota_rule=*:bytes=20GB

Apr 09 14:58:00 
doveadm(myu...@mydomain.yyy)<27721><>: Debug: 
auth-master: userdb lookup(myu...@mydomain.yyy): 
Finished userdb lookup 
(username=myu...@mydomain.yyy 
quota_rule=*:bytes=20GB)

Apr 09 14:58:00 
doveadm(myu...@mydomain.yyy)<27721><>: Debug: Added 
userdb setting: plugin/quota_rule=*:bytes=20GB

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Effective uid=2000, gid=2000, home=/var/vmail/mydomain.yyy.com/myuser

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Quota root: name=User quota backend=dict args=:proxy::quota

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Quota rule: root=User quota mailbox=* bytes=21474836480 messages=0

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Quota rule: root=User quota mailbox=Trash ignored

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Quota rule: root=User quota mailbox=Junk ignored

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Quota warning: bytes=17179869184 (80%) messages=0 reverse=no 
command=quota-warning 90 myu...@mydomain.yyy

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Quota warning: bytes=18253611008 (85%) messages=0 reverse=no 
command=quota-warning 95 myu...@mydomain.yyy

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Quota warning: bytes=20401094656 (95%) messages=0 reverse=no 
command=quota-warning 105 myu...@mydomain.yyy

Apr 09 14:58:00 doveadm(myu...@mydomain.yyy): 
Debug: Quota grace: root=User

Re: Question about doveadm altmove

2021-04-05 Thread Javier Miguel Rodríguez
Any update on this? Should I file a bug report for doveadm altmove -r 
not working?


Regards


Javier

On 28/03/2021 at 18:00, JAVIER MIGUEL RODRIGUEZ wrote:

Any update on this? Does Dovecot 2.3.14 make doveadm altmove -r functional?


*From:* dovecot  on behalf of María Arrea 


*Sent:* Monday, March 22, 2021 3:15:13 PM
*Cc:* dovecot@dovecot.org 
*Subject:* Re: Question about doveadm altmove
zlib plugin, as far as I know, only supports zstd, gzip, bzip2 and 
lzma/xz compression. The last one is being deprecated.
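
For what it's worth, switching the writer to zstd is presumably just this (a
sketch; assumes Dovecot >= 2.3.11, and the level is illustrative):

    mail_plugins = $mail_plugins zlib
    plugin {
      zlib_save = zstd
      zlib_save_level = 3
    }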

I have found this interesting post in the mailing list:
https://dovecot.org/pipermail/dovecot/2021-February/121329.html
Same problem here with Dovecot 2.3.13: "doveadm altmove -r" is not
moving anything from alternate to default storage. I fixed this by
reverting this commit:

https://github.com/dovecot/core/commit/2795f6183049a8a4cc489869b3e866dc20a8a732
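
For reference, reverting it locally before rebuilding presumably looks like
this (a sketch; assumes a git checkout of dovecot/core):

    git revert 2795f6183049a8a4cc489869b3e866dc20a8a732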
Is this fixed in 2.3.14? Does doveadm altmove -r work as expected in 
2.3.14?

Regards
*Sent:* Sunday, March 21, 2021 at 11:28 PM
*From:* "justina colmena ~biz" 
*To:* dovecot@dovecot.org
*Subject:* Re: Question about doveadm altmove
On Sunday, March 21, 2021 12:16:28 PM AKDT María Arrea wrote:
> Hello.
>
> We are running dovecot 2.3.13. Full doveconf -n output below
>
> In 2.3.14 Changelog I found this:
>
> * Remove XZ/LZMA write support. Read support will be removed in future
> release.
> We are using mdbox + XZ/LZMA for alternate storage (messages older than 2
> weeks are moved to ALT storage via cron job), so we must convert from XZ to
> another thing (maybe zstd or bz2).

Why can't you just pipe the output of "doveadm altmove" command through an
external command to do the XZ/LZMA compression if dovecot no longer supports
it internally?

From doveadm-altmove(1):
> This command can be used with sdbox or mdbox storage to move mails to
alternative
> storage path when :ALT= is specified for the mail location.

And that's set in stone.

https://en.wikipedia.org/wiki/XZ_Utils


So what are the issues with xz? Security? Crashes or viruses on expanding
invalid archives?


Re: Question about doveadm altmove

2021-03-28 Thread JAVIER MIGUEL RODRIGUEZ
Any update on this? Does Dovecot 2.3.14 make doveadm altmove -r functional?


From: dovecot  on behalf of María Arrea 

Sent: Monday, March 22, 2021 3:15:13 PM
Cc: dovecot@dovecot.org 
Subject: Re: Question about doveadm altmove

zlib plugin, as far as I know, only supports zstd, gzip, bzip2 and lzma/xz 
compression. The last one is being deprecated.

I have found this interesting post in the mailing list:

https://dovecot.org/pipermail/dovecot/2021-February/121329.html


Same problem here with Dovecot 2.3.13: "doveadm altmove -r" is not
moving anything from alternate to default storage. I fixed this by
reverting this commit:

https://github.com/dovecot/core/commit/2795f6183049a8a4cc489869b3e866dc20a8a732


Is this fixed in 2.3.14? Does doveadm altmove -r work as expected in 2.3.14?

Regards


Sent: Sunday, March 21, 2021 at 11:28 PM
From: "justina colmena ~biz" 
To: dovecot@dovecot.org
Subject: Re: Question about doveadm altmove
On Sunday, March 21, 2021 12:16:28 PM AKDT María Arrea wrote:
> Hello.
>
> We are running dovecot 2.3.13. Full doveconf -n output below
>
> In 2.3.14 Changelog I found this:
>
> * Remove XZ/LZMA write support. Read support will be removed in future
> release.
> We are using mdbox + XZ/LZMA for alternate storage (messages older than 2
> weeks are moved to ALT storage via cron job), so we must convert from XZ to
> another thing (maybe zstd or bz2).

Why can't you just pipe the output of "doveadm altmove" command through an
external command to do the XZ/LZMA compression if dovecot no longer supports
it internally?

From doveadm-altmove(1):
> This command can be used with sdbox or mdbox storage to move mails to
alternative
> storage path when :ALT= is specified for the mail location.

And that's set in stone.

https://en.wikipedia.org/wiki/XZ_Utils

So what are the issues with xz? Security? Crashes or viruses on expanding
invalid archives?


Re: [Dovecot-news] Headsup on feature removal

2020-04-17 Thread Javier Miguel Rodríguez

Hello Aki

Can you elaborate on the memory management issues in liblzma & dovecot?

Regards

On 19/03/2020 at 20:07, Aki Tuomi wrote:


After discussing it internally, we decided to postpone the xz removal for the 
time being. We understand the complexity of migrating away from it, so we want 
to give more time to do that.
However, beware that there are memory management issues in liblzma and we 
consider it unsafe to use. Feel free to use any of the other supported 
compression algorithms instead. (We are also adding zstandard support in 2.3.11.)




RE: [Dovecot-news] Headsup on feature removal

2020-03-18 Thread JAVIER MIGUEL RODRIGUEZ
I fully agree with this:

> Please consider holding off on removing features for the next major 
> release, 2.4.0 instead.  It makes sense to retain, in as much as is 
> possible, feature backwards compatibility across a major release.





Re: [Dovecot-news] Headsup on feature removal

2020-03-18 Thread Javier Miguel Rodríguez
xz compression support for mdbox is used extensively here. Why are 
you planning to remove it?


On 17/03/2020 at 7:50, Aki Tuomi wrote:

Hi!

Dovecot is now a nearly 20-year-old product, and during that time it has 
accumulated many different features and plugins in its core repository.

We are starting to gradually remove some of these parts, which are unused, 
untested or deprecated.
We will provide advance notification before removing anything.

To start, the following features are likely to be removed in the next few 
releases of Dovecot.

  - Authentication drivers: vpopmail, checkpassword, bsdauth, shadow, sia
  - Password schemes: HMAC-MD5, RPA, SKEY, PLAIN-MD4, LANMAN, NTLM, SMD5
  - Authentication mechanisms: ntlm, rpa, skey
  - Dict drivers: memcached, memcached-ascii (use redis instead)
  - postfix postmap support
  - autocreate & autosubscribe plugins (use the built-in auto=create/subscribe 
setting instead; see the sketch after this list)
  - expire plugin (use the built-in autoexpunge setting)
  - fts-squat plugin
  - mailbox alias plugin
  - mail-filter plugin
  - snarf plugin
  - xz compression algorithm
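
A minimal sketch of the suggested built-in replacements (the mailbox name and
values here are only illustrative):

    namespace inbox {
      mailbox Trash {
        auto = subscribe      # replaces the autocreate/autosubscribe plugins
        special_use = \Trash
        autoexpunge = 30d     # replaces the expire plugin
      }
    }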

For the authentication drivers that are being removed, we suggest using Lua as 
a replacement. See
https://doc.dovecot.org/configuration_manual/authentication/lua_based_authentication/

For information about converting between password schemes, see
https://wiki2.dovecot.org/HowTo/ConvertPasswordSchemes

If you are using any of these features, please start preparing for their 
removal in the near
future. Features will begin to be dropped as of v2.3.11.

Additionally, the mbox format will no longer receive new development. It will 
still be
maintained, however its use beyond migrations and other limited use cases will 
be discouraged.

Please contact us via the mailing list if you have any questions.

Regards,
Dovecot Team

___
Dovecot-news mailing list
dovecot-n...@dovecot.org
https://dovecot.org/mailman/listinfo/dovecot-news


Re: [Dovecot] v2.2.13 released

2014-05-13 Thread Javier Miguel Rodríguez


I think the new mdbox_purge_preserve_alt setting 
should be enforced by default in 2.3.0+.


Regards

Javier





+ mdbox: Added mdbox_purge_preserve_alt setting to keep the file
  within alt storage during purge. (Should become enforced in v2.3.0?)
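
Until it becomes the default, enabling it is presumably a single line in
dovecot.conf (a sketch):

    mdbox_purge_preserve_alt = yes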




--
Apoyo a la Docencia e Investigación
Servicio de Informática y Comunicaciones

Gestión de Incidencias: https://sicremedy.us.es/arsys


Re: [Dovecot] xz compression

2014-04-03 Thread Javier Miguel Rodríguez


On 03/04/2014 16:28, T.B. wrote:

Hello Timo,

I've successfully set up xz compression for my Dovecot installation 
using version 2.2.12 from Debian unstable.


Read the man page of xz(1). With the -9 compression level, 674 MiB of 
RAM is needed. If you use dovecot+xz, you really need to increase vsz_limit.


Personally, I would not use xz -9 for main storage on a busy 
site. If you get 20+ messages/second you need a lot of RAM just for 
compression. I would use xz -9 for alternate storage, though.
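
Raising the limit is presumably one line in dovecot.conf (a sketch; the value
is illustrative, sized for xz -9):

    default_vsz_limit = 1G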


Regards

Javier

--
Apoyo a la Docencia e Investigación
Servicio de Informática y Comunicaciones

Gestión de Incidencias: https://sicremedy.us.es/arsys


Re: [Dovecot] Architecture for large Dovecot cluster

2014-01-24 Thread Javier de Miguel Rodríguez
 

Great mail, Stan 

Another trick: you can save storage (both space & iops) by using mdbox and
compression. CPU power is far cheaper than iops; the less data you
read/write, the fewer iops you need.

You can use gzip, bzip2 or even LZMA/xz compression for LDA. If you also
use Single Instance Storage and alternate (cheap) storage for old mail,
you can save a lot of money on storage. Also consider using mdbox + ssd
for indexes (HP StoreVirtual VSA + a couple of ESXi hosts with ssd disks will
give you a real-time replicated ssd iscsi lun for indexes).
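
As a sketch, the alternate-storage side of that looks roughly like this (the
paths, hash layout and two-week policy are illustrative):

    mail_location = mdbox:~/mdbox:ALT=/cheapstorage/%d/%n/mdbox
    # nightly cron job implementing the "old mail to ALT" policy:
    doveadm altmove -A mailbox '*' savedbefore 2w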

Just my 2 cents.

Regards

Javier

 

Re: [Dovecot] mdbox - healthy rotation size vs default

2013-08-26 Thread Javier de Miguel Rodríguez
 

Another interesting thing for this thread: if you set a very high
value for the mdbox rotate settings, your incremental backups will be awful.
If you have hundreds of messages in an mdbox file and you doveadm purge one of
them, the full m.* file must be copied in the incremental / differential
backup.

I use 10 MB + zlib for "main storage" and 250 MB + bzip2 for
alternate storage.

Regards 

Javier 

 

Re: [Dovecot] Strange Dovecot 2.0.20 auth chokes and cores

2012-05-30 Thread Javier Miguel Rodríguez
 

There is a known problem with epoll, at least on Red Hat / CentOS;
this bugzilla may give you additional info (comments from Timo inside):


https://bugzilla.redhat.com/show_bug.cgi?id=681578

Regards 

Javier


On 30/05/2012 17:45, Konrad . wrote:

> When we upgraded our kernels from 2.6.32.2 to 3.2.16 something strange
> happened. On high traffic dovecot/auth looks like it is not responding.
>
> We found a lot of these lines in the log:
> dovecot: pop3-login: Error: net_connect_unix(pop3) failed: Resource
> temporarily unavailable
> (...) and clients stop authorizing
>
> Some other errors follow in the wake of:
> dovecot: pop3: Error: Raw backtrace:
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x373ca) [0x7768a3ca] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3743b) [0x7768a43b] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7766048b] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x4593a) [0x7769893a] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xaf) [0x7769757f] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0x19a)
> [0x77683c2a] -> dovecot/pop3(main+0xfc) [0x804a90c] ->
> /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x774c04d3] ->
> dovecot/pop3() [0x804aba9]
> dovecot: pop3: Error: Raw backtrace:
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x373ca) [0x7768a3ca] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3743b) [0x7768a43b] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7766048b] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x4593a) [0x7769893a] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xaf) [0x7769757f] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0x19a)
> [0x77683c2a] -> dovecot/pop3(main+0xfc) [0x804a90c] ->
> /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x774c04d3] ->
> dovecot/pop3() [0x804aba9]
> dovecot: master: Error: service(pop3): child 18756 killed with signal
> 6 (core dumped)
> dovecot: master: Error: service(pop3): child 18756 killed with signal
> 6 (core dumped)
> dovecot: master: Error: service(pop3): command startup failed, throttling
> dovecot: master: Error: service(pop3): command startup failed, throttling
> dovecot: pop3-login: Error: Raw backtrace:
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x373ca) [0x776b73ca] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3743b) [0x776b743b] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7768d48b] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x4593a) [0x776c593a] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xaf) [0x776c457f] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0x19a)
> [0x776b0c2a] ->
> /opt/dovecot2/lib/dovecot/libdovecot-login.so.0(main+0x143)
> [0x77705383] -> /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3)
> [0x774ed4d3] -> dovecot/pop3-login() [0x8049471]
> dovecot: pop3-login: Error: Raw backtrace:
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x373ca) [0x776fd3ca] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3743b) [0x776fd43b] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x776d348b] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x4593a) [0x7770b93a] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xaf) [0x7770a57f] ->
> /opt/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0x19a)
> [0x776f6c2a] ->
> /opt/dovecot2/lib/dovecot/libdovecot-login.so.0(main+0x143)
> [0x7774b383] -> /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3)
> [0x775334d3] -> dovecot/pop3-login() [0x8049471]
>
> And an example stack trace (from pop3, pop3-login throws almost the same):
> #0 0x776f6424 in __kernel_vsyscall ()
> No symbol table info available.
> #1 0x7744d1ef in __GI_raise (sig=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
> resultvar = 
> resultvar = 
> pid = 2002518004
> selftid = 25476
> #2 0x77450835 in __GI_abort () at abort.c:91
> save_stage = 2
> act = {__sigaction_handler = {sa_handler = 0x9bce4a8,
> sa_sigaction = 0x9bce4a8}, sa_mask = {__val = {163409408, 2002781570,
> 163374248, 603, 163374280, 604, 163374280, 2001703379, 0, 2002790760,
> 2140717252, 163374280, 0, 2003786736, 2002596704, 0, 2002618953,
> 2003087348, 2140717196, 0, 163409408, 2001286473, 163374248, 10,
> 2000834616, 2002534400, 604, 2003087348, 604, 2002791863, 4294967295,
> 10}}, sa_flags = 2140717316, sa_restorer = 0x7764bd84 }
> sigs = {__val = {32, 0 }}
> #3 0x77603390 in default_fatal_finish (type=, status=) at
> failures.c:187
> backtrace = 0x9bce098
> "/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3837a) [0x7760337a] ->
> /opt/dovecot2/lib/dove

Re: [Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.

2012-05-20 Thread Javier Miguel Rodríguez
 

I know that you are NOT running RHEL / CentOS, but this problem with >1000
child processes bit us hard; read this Red Hat kernel bugzilla (Timo has
comments inside):


https://bugzilla.redhat.com/show_bug.cgi?id=681578 

Maybe you are hitting the same limit? 

Regards 

Javier 

On 20/05/2012 11:59, Urban Loesch wrote:

> On 19.05.2012 21:05, Timo Sirainen wrote:
>
>> On Wed, 2012-05-16 at 08:59 +0200, Urban Loesch wrote:
>>
>>> The Server was running about 1 year without any problems. 15Min Load was
>>> between 0.5 and max 8. No high IOWAIT. CPU Idletime about 98%.
>> ..
>>
>>> # iostat -k Linux 3.0.28-vs2.3.2.3-rol-em64t (mailstore4) 16.05.2012 _x86_64_ (24 CPU)
>>
>> Did you change the kernel just before it broke? I'd try another version.
>
> The first time it broke with kernel 2.6.38.8-vs2.3.0.37-rc17.
> Then I tried it with 3.0.28 and it broke again.
> On Friday evening I disabled the cgroup feature completely and until now
> it seems to work normally.
> But this could be because it is the weekend and there are not many
> connections active. So I have to wait until Monday. If it happens again I
> will try version 3.2.17.
>
> On the other side it could be that the server is overloaded, because
> this problem happens only when there are more than 1000 tasks active.
> Sounds strange to me, because it has been working without problems for a
> year and we made no changes. Also there were almost always more than 1000
> tasks active over the last year and we had no problems.
>
> thanks
> Urban

 

Re: [Dovecot] index IO patterns

2012-05-11 Thread Javier de Miguel Rodríguez

Even without LDA/LMTP dovecot-imap needs to write, right? It would
need to update the index every time an imap connect happens and
new mails are found in the mail store.


Well, of course. Indexes are also updated when flags are modified, a message 
is moved, a message is deleted, etc. But in my setup there are 65% reads 
and the rest writes.


Regards

Javier





Re: [Dovecot] index IO patterns

2012-05-10 Thread Javier Miguel Rodríguez
 

Indexes are very random, mostly read, some writes if using
dovecot-lda (e.g. dbox). The average size is rather small, maybe 5 KB in
our setup. Bandwidth is rather low, 20-30 MB/sec.

We are using HP LeftHand for our replicated storage needs.

Regards

Javier

On 11/05/2012 08:41, Cor Bosman wrote:

> Hey all, we're in the process of checking out alternatives to our index
> storage. We're currently storing indexes on a NetApp Metrocluster which
> works fine, but is very expensive. We're planning a few different setups
> and doing some actual performance tests on them.
>
> Does anyone know some of the IO patterns of the indexes? For instance:
>
> - mostly random reads or linear reads/writes?
> - average size of reads and writes?
> - how many read/writes on average for a specific mailbox size?
>
> Anyone do any measurements of this kind?
>
> Alternatively, does anyone have any experience with other redundant
> storage options? Im thinking things like MooseFS, DRBD, etc?
>
> regards,
>
> Cor

 

Re: [Dovecot] Problem about dovecot Panic

2012-03-29 Thread Javier Miguel Rodríguez
  

We had the same problem. Reboot with an older kernel
(2.6.18-274.17.1.el5 works for us). It is a known bug of RHEL, see this
bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=681578 

Regards 

Javier

On Thu, 29 Mar 2012 10:15:32 +0200 (CEST), FABIO FERRARI wrote:

> Good morning,
> we have 2 Redhat Enterprise 5.7 machines, they are a cluster with some
> mail services in it (postfix and dovecot 2).
>
> The version of dovecot is dovecot-2.0.1-1_118.el5 (installed via rpm).
>
> From last week we have this dovecot problem: suddenly dovecot doesn't
> accept any new connections, the dovecot.log file reports lines like these
>
> Mar 15 12:38:54 secchia dovecot: imap: Panic: epoll_ctl(add, 5) failed:
> Invalid argument
> Mar 15 12:38:54 secchia dovecot: imap: Error: Raw backtrace:
> /usr/lib64/dovecot/libdovecot.so.0 [0x36ea436de0] ->
> /usr/lib64/dovecot/libdovecot.so.0 [0x36ea436e3a] ->
> /usr/lib64/dovecot/libdovecot.so.0 [0x36ea4362e8] ->
> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handle_add+0x118)
> [0x36ea441498] -> /usr/lib64/dovecot/libdovecot.so.0(io_add+0x8f)
> [0x36ea440b7f] ->
> /usr/lib64/dovecot/libdovecot.so.0(master_service_init_finish+0x1c6)
> [0x36ea430c16] -> dovecot/imap(main+0x10a) [0x41773a] ->
> /lib64/libc.so.6(__libc_start_main+0xf4) [0x36ea01d994] ->
> dovecot/imap [0x408179]
> Mar 15 12:38:54 secchia dovecot: master: Error: service(imap): child 14514
> killed with signal 6 (core dumps disabled)
> Mar 15 12:38:54 secchia dovecot: master: Error: service(imap): command
> startup failed, throttling
> Mar 15 12:39:50 secchia dovecot: imap-login: Error: master(imap): Auth
> request timed out (received 0/12 bytes)
> Mar 15 12:39:51 secchia dovecot: imap-login: Error: master(imap): Auth
> request timed out (received 0/12 bytes)
> Mar 15 12:39:51 secchia dovecot: imap-login: Error: master(imap): Auth
> request timed out (received 0/12 bytes)
> Mar 15 12:39:52 secchia dovecot: imap-login: Error: net_connect_unix(imap)
> failed: Resource temporarily unavailable
> Mar 15 12:39:52 secchia dovecot: imap-login: Error: net_connect_unix(imap)
> failed: Resource temporarily unavailable
> Mar 15 12:39:52 secchia dovecot: imap-login: Error: master(imap): Auth
> request timed out (received 0/12 bytes)
> Mar 15 12:39:53 secchia dovecot: imap-login: Error: net_connect_unix(imap)
> failed: Resource temporarily unavailable
> Mar 15 12:39:53 secchia dovecot: imap-login: Error: net_connect_unix(imap)
> failed: Resource temporarily unavailable
> Mar 15 12:39:54 secchia dovecot: imap-login: Error: net_connect_unix(imap)
> failed: Resource temporarily unavailable
> Mar 15 12:39:54 secchia dovecot: imap: Error: Login client disconnected
> too early
> Mar 15 12:39:54 secchia dovecot: imap: Error: Login client disconnected
> too early
> Mar 15 12:39:54 secchia dovecot: imap: Error: Login client disconnected
> too early
> Mar 15 12:39:54 secchia dovecot: imap: Error: Login client disconnected
> too early
> Mar 15 12:39:55 secchia dovecot: imap: Panic: epoll_ctl(add, 5) failed:
> Invalid argument
>
> and the kern.log file reports
>
> Mar 15 12:38:52 secchia kernel: dlm: closing connection to node 1
> Mar 15 12:39:04 secchia kernel: lpfc :83:00.0: 1:(0):2753 PLOGI
> failure DID:010400 Status:x9/x32900
> Mar 15 12:39:04 secchia kernel: lpfc :03:00.0: 0:(0):2753 PLOGI
> failure DID:010400 Status:x9/x32900
> Mar 15 12:41:14 secchia kernel: lpfc :03:00.0: 0:(0):2753 PLOGI
> failure DID:010400 Status:x9/x32900
> Mar 15 12:41:15 secchia kernel: lpfc :83:00.0: 1:(0):2753 PLOGI
> failure DID:010400 Status:x9/x32900
> Mar 15 12:42:11 secchia kernel: dlm: got connection from 1
>
> can you help us?
>
> thanks in advance
>
> Fabio Ferrari
 

Re: [Dovecot] Recalculate quota when quota=dict ?

2012-02-20 Thread Javier Miguel Rodríguez
I have seen this behaviour with a local ext4 iSCSI filesystem. When the 
system is hammered by I/O (for example, while performing a full backup), I also 
see those messages in the log.


Regards

Javier



On 17.2.2012, at 11.51, jos...@hybrid.pl wrote:


By the way: what might have caused such a warning?

r...@mail2.hybrid.pl /tmp/transfer>  doveadm quota recalc -u jos...@hybrid.pl
doveadm(jos...@hybrid.pl): Warning: Created dotlock file's timestamp is 
different than current time (1329464622 vs 1329464672): 
/var/mail/mail/hybrid.pl/joshua/.mailing.ekg/dovecot-uidlist

Does it keep happening? Is this a local filesystem or NFS? Shouldn't happen 
unless remote storage server's clock and local server's clock aren't synced.





[Dovecot] Question about mdbox alt storage in Dovecot 2.0

2012-02-12 Thread Javier Miguel Rodríguez
  

Hello 

Reading the 2.1rc6 changelog I see this: 

    mdbox: When saving to alt storage, Dovecot didn't append as much
    data to m.* files as it could have.

Could you elaborate more on this? Has it been ported to
Dovecot 2.0?

Regards

Javier

On Sun, 12 Feb 2012 23:01:10 +0200, Timo Sirainen wrote:

> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc6.tar.gz [1]
> http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc6.tar.gz.sig [2]
>
> I've finally finished all of my email backlog. If you haven't received
> an answer to some question/bugreport, resend the mail.
>
> This is hopefully the last v2.1 RC. If I don't receive any (serious)
> bug reports about this release in the next few days, I'll just change the
> version number to v2.1.0 (and maybe update man pages, some are still
> missing..)
>
> I'll also create the dovecot-2.2 hg repository today and add some pending
> patches from Stephan there and start doing some early spring cleaning in
> there. :)
>
> Since v2.1.rc5 there have been lots of small fixes and logging
> improvements, but I also did a few bigger things since they really had to
> be done soon and I didn't want the v2.2.0 release to be only a few months
> after v2.1.0 with barely any new features.
>
> * Added automatic mountpoint tracking and doveadm mount commands to
>   manage the list. If a mountpoint is unmounted, error handling is
>   done by assuming that the files are only temporarily lost. This is
>   especially helpful if dbox alt storage becomes unmounted.
> * Expire plugin: Only go through users listed by userdb iteration.
>   Delete dict rows for nonexistent users, unless
>   expire_keep_nonexistent_users=yes.
> * LDA's out-of-quota mails now include DSN report instead of MDN.
>
> + LDAP: Allow building passdb/userdb extra fields from multiple LDAP
>   attributes by using %{ldap:attributeName} variables in the template.
> + doveadm log errors shows the last 1000 warnings and errors since
>   Dovecot was started.
> + Improved multi-instance support: Track automatically which instances
>   are started up and manage the list with doveadm instance commands.
>   All Dovecot commands now support -i parameter to
>   select the instance (instead of having to use -c ). See
>   instance_name setting.
> + doveadm mailbox delete: Added -r parameter to delete recursively
> + doveadm acl: Added "add" and "remove" commands.
> + Updated to Unicode v6.1
> - mdbox: When saving to alt storage, Dovecot didn't append as much
>   data to m.* files as it could have.
> - dbox: Fixed error handling when saving failed or was aborted
> - IMAP: Using COMPRESS extension may have caused assert-crashes
> - IMAP: THREAD REFS sometimes returned invalid (0) nodes.
> - dsync: Fixed handling non-ASCII characters in mailbox names.
>
> ___
> Dovecot-news mailing list
> dovecot-n...@dovecot.org [3]
> http://dovecot.org/cgi-bin/mailman/listinfo/dovecot-news [4]



Links:
--
[1] http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc6.tar.gz
[2] http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc6.tar.gz.sig
[3] mailto:dovecot-n...@dovecot.org
[4] http://dovecot.org/cgi-bin/mailman/listinfo/dovecot-news


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-01-18 Thread Javier Miguel Rodríguez


Spanish edu site here: 80k users, 4.5 TB of email, 6,000 iops 
(indexes) + 9,000 iops (mdboxes) during working hours.


We evaluated mdbox against Maildir and we found that with these 
settings dovecot 2 performs better than Maildir:


mdbox_rotate_interval = 1d
mdbox_rotate_size = 60M
zlib_save_level = 9  # 1..9
zlib_save = gz  # or bz2

We detected 40% fewer iops with this setup *during working hours (more 
info below)*. Zlib saved some writes (15-30%). With mdbox, deletion of a 
message is only written to the indexes (use SSD for these), and a nightly cronjob 
deletes the real message from the mdbox; this saves us some iops during 
working hours. Also, the backup software is MUCH happier handling hundreds 
of thousands of files (mdbox) versus tens of millions (Maildir).


Mdbox also has drawbacks: you have to be VERY careful with your 
indexes, as they contain data that cannot be rebuilt from the mdboxes. The 
nightly cronjob "purging" the mdboxes hammers the SAN. Full backup time 
is reduced, but incremental backup space & time increase: if you delete 
a message, then after "purging" it from the mdbox the mdbox file changes 
(size and date), so the incremental backup has to copy it again.


Regards

Javier





Re: [Dovecot] resolve mail_home ?

2012-01-17 Thread Javier Miguel Rodríguez
That command/parameter would be great for the backup scripts for our 
hashed mdbox trees; we are using slocate now...


Regards

Javier





Nope..

Maybe a new command, or maybe a parameter to doveadm user that would
show mail_uid/gid/home. Or maybe something that dumps config output with
%vars expanded to the given user. Hmm.
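
For the archives: later doveadm versions appear to have grown exactly this via
a field parameter (an assumption to verify against your version; the address
is illustrative):

    doveadm user -f home user@example.com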





Re: [Dovecot] Performance-Tuning

2011-11-08 Thread Javier de Miguel Rodríguez


Another important thing to consider is message expunging. With mdbox 
you are "delaying" the I/O associated with deleting e-mails. We have a 
nightly cronjob that expunges messages from the mdboxes.
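
The nightly job is presumably a one-line crontab entry like this (a sketch;
the schedule is illustrative):

    # drop refcount-0 messages from the m.* files of all users at 03:00
    0 3 * * * root doveadm purge -A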


If you have an EVA (which one? 4400? 6400?) you can also consider 
RAID 1+0 or SSD for the indexes. Indexes are hammered in mdbox.


Regards

Javier


On Tuesday, 8 November 2011 at 15:15:39, Javier de Miguel Rodríguez wrote:


Hi,


  If you have CPU to spare, consider using zlib with mdbox. You are
trading CPU power (cheap) to get fewer IOPS (IOPS count is expensive).

Hey. This point is great. I hadn't realized that.

Sure. zlib will save IOPS and 2x6-CPUs aren't a problem. Good point -thanks.


compressed) and backup software is happier because there are fewer files
(100,000+ with mdbox) to back up instead of several million
(Maildir)

Yes, that's the main reason why I want to switch to mdbox. At the moment our
roundtrip time for the backup is > 24h...


Peer






Re: [Dovecot] Performance-Tuning

2011-11-08 Thread Javier de Miguel Rodríguez
We are very happy with mdbox + zlib + ext4 + an iSCSI SAN (HP Lefthand in 
our setup).


If you have CPU to spare, consider using zlib with mdbox. You are 
trading CPU power (cheap) for fewer IOPS (IOPS are expensive). 
Mdbox has halved our backup window (2.8 TB uncompressed mailboxes, 2 TB 
compressed) and the backup software is happier because there are fewer files 
(100,000+ with mdbox) to back up instead of several million (Maildir).
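
The zlib side of that setup presumably looks like this (a sketch; the
algorithm and level shown are illustrative):

    mail_plugins = $mail_plugins zlib
    plugin {
      zlib_save = gz
      zlib_save_level = 6
    }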


Regards

Javier

Hi,

I have > 11 TB of heavily used mail storage, saved as maildir in ext3 on an HP EVA.

I always wanted to take some measurements of several influences on the
performance (switch to ext4, switch to mdbox), but I never had enough time
to do that.

At the moment I *need* more speed; we have too much I/O wait on the system
and I have already used all the other performance and tuning tricks (separated
cache, noatime, fsync and all that stuff).

I have to change my setup, maybe somebody else here has hard facts:

*) Is ext4 faster? How much faster?
*) Is it faster because of the ext4 kernel module (which can be used on ext3
too) or because of the ext4 filesystem layout?


*) Is mdbox really faster? I'd like to have mdbox to get better performance
when running my backup processes. But does it bring some performance boost
too?


Thanks for any hints and tricks,

Peer






Re: [Dovecot] Dot Lock timestamp, users disconnections from roundcube

2011-11-04 Thread Javier de Miguel Rodríguez

Same problem here, any hint about a fix or workaround?

Regards

Javier



We follow the guidelines about timekeeping for RHEL in vmware vsphere located here:

  
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427

  These problems happen in peak hours. Is there any dovecot config parameter I could 
set to mitigate this problem?

  Regards

  Maria

- Original Message -
From: Ed W
Sent: 11/03/11 11:57 AM
To: Maria Arrea, Dovecot Mailing List
Subject: Re: [Dovecot] Dot Lock timestamp, users disconnections from roundcube

On 03/11/2011 10:49, Maria Arrea wrote:
> All the ESXs hosts and all the VM use the same NTP server.
>
> Any other idea?

Doesn't ESX have issues with the time drifting when certain kernel options are
set? Something to do with it rescheduling machines and them not counting idle
ticks or something..? Does this problem happen during idle hours or peak hours?
I should home in on clock problems... Probably vmware related issues to the
kernel you are using?

Good luck

Ed W





Re: [Dovecot] problem with dovecot and sieve

2011-06-27 Thread Javier
I'll be planning on upgrade soon then, if that cures the problem (ie:
i will use submission_host instead of sendmail binary)

Another (maybe) unrelated question.
It is possible to add extra parameters so the connection made to
submission_host uses user's credentials ? (for authenticated smtp)
Or I'm asking something ridiculous?
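
For reference, the setting itself is presumably a single line in dovecot.conf
(host and port here are illustrative):

    submission_host = smtp.example.com:587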

Thanks Timo, Thanks list.

Javier

On Mon, Jun 27, 2011 at 8:42 PM, Timo Sirainen  wrote:
> On Fri, 2011-06-24 at 18:23 +0300, Timo Sirainen wrote:
>> On 16.6.2011, at 19.24, Javier wrote:
>>
>> > Jun 16 13:18:27 mailstore5 dovecot: lmtp(8460, x...@xxx.com):
>> > Error: waitpid() failed: No child processes
>>
>> This is the main problem. It just shouldn't be happening. You could try 
>> stracing an lmtp process while it sends a mail, and see if there are two 
>> waitpid() calls or of the first one is giving this error. If there is only 
>> this one waitpid() call this would seem like a kernel problem.
>
> I think this fixes the bug:
> http://hg.dovecot.org/dovecot-2.0/rev/748b0fd169d1
>
> Of course, since you can't upgrade that's not very helpful.. You could
> try to figure out why your sendmail binary is forking and not make it do
> that..
>
>
>


Re: [Dovecot] problem with dovecot and sieve

2011-06-27 Thread Javier
Upgrading is an option, I just want to know how many options I have.
But, being a medium-sized mail system, I have to take small steps.

Thanks
Javier

On Mon, Jun 27, 2011 at 8:52 AM, Benny Pedersen  wrote:
> On Fri, 24 Jun 2011 11:33:26 -0300, Javier wrote:
>>
>> No other hint?
>> Only option is to upgrade to latest?
>
> or backport the needed things from later sources, its GPL v2 remember ? :)
>
>


Re: [Dovecot] problem with dovecot and sieve

2011-06-24 Thread Javier
No other hint?
Only option is to upgrade to latest?

Thanks
Javier

On Tue, Jun 21, 2011 at 4:38 PM, Javier  wrote:
> Thanks for the response.
> I'll try updating dovecot to latest version but this will take me some
> time, so other options are welcome.
> It seems like submission_host is from v2.0.10+ and I have 2.0.9 :(
>
> Sending mail with
>
> echo "test" | sendmail x...@xxxx.com
>
> works fine.
> Thanks
> Javier
>
> On Sun, Jun 19, 2011 at 8:17 AM, Stephan Bosch  wrote:
>> On 16-6-2011 18:24, Javier wrote:
>>>
>>> Maybe it is useful to know, that vacation does the same thing, an
>>> error in the logs but the response arrives to the sender.
>>>
>>> Log file also show some error with waitpid()
>>
>> First thing to notice is that your Dovecot is relatively old, so this
>> problem may be fixed already. Also, recent Dovecot (v2.0) versions support
>> sending messages directly to an SMTP (smart)host instead of using the
>> sendmail binary.
>>
>> Looks like your sendmail binary terminates inappropriately (or Dovecot
>> thinks it does). Can you successfully send mail from the command line using
>> the sendmail tool?
>>
>> Regards,
>>
>> Stephan.
>>
>>
>


Re: [Dovecot] UIDL and message migration

2011-06-21 Thread Javier
Hugo,

I have been there too.
Check if you are changing the server address in the e-mail client. If
you are doing so, most clients check UIDLs based on this (at least in
my tests).

If you keep the hostname intact and the UIDLs are the same, the
e-mails won't be downloaded again. You can do this by messing with the
hosts file on the client computer.

To check whether the UIDLs are the same, just launch a telnet session to
both POP servers, issue a UIDL command and compare the output.
Javier

On Tue, Jun 21, 2011 at 8:17 AM, Hugo Monteiro  wrote:
> Hello,
>
> I'm in the process of migrating accounts between two dovecot servers. An old
> server running ancient 1.0.15 and a new server running 1.2.15.
> Account migration is going to be gradual and so i would like to use imapsync
> to move messages and subscriptions from one server to the other. The only
> problem so far is with POP users. After doing an account migration with
> imapsync, and although both servers share the same pop3_uidl_format =
> %08Xu%08Xv, the client downloads every message that was already on the
> server.
>
> Any pointers would be very much appreciated.
>
> Best Regards,
>
> Hugo Monteiro.
>


Re: [Dovecot] problem with dovecot and sieve

2011-06-21 Thread Javier
Thanks for the response.
I'll try updating dovecot to latest version but this will take me some
time, so other options are welcome.
It seems like submission_host is from v2.0.10+ and I have 2.0.9 :(

Sending mail with

echo "test" | sendmail x...@.com

works fine.
Thanks
Javier

On Sun, Jun 19, 2011 at 8:17 AM, Stephan Bosch  wrote:
> On 16-6-2011 18:24, Javier wrote:
>>
>> Maybe it is useful to know, that vacation does the same thing, an
>> error in the logs but the response arrives to the sender.
>>
>> Log file also show some error with waitpid()
>
> First thing to notice is that your Dovecot is relatively old, so this
> problem may be fixed already. Also, recent Dovecot (v2.0) versions support
> sending messages directly to an SMTP (smart)host instead of using the
> sendmail binary.
>
> Looks like your sendmail binary terminates inappropriately (or Dovecot
> thinks it does). Can you successfully send mail from the command line using
> the sendmail tool?
>
> Regards,
>
> Stephan.
>
>


Re: [Dovecot] problem with dovecot and sieve

2011-06-16 Thread Javier
Maybe it is useful to know that vacation does the same thing: an
error in the logs, but the response arrives to the sender.

The log file also shows some errors with waitpid()

Jun 16 13:18:27 mailstore5 dovecot: lmtp(8460, x...@xxx.com):
Error: waitpid() failed: No child processes
Jun 16 13:18:27 mailstore5 dovecot: lmtp(8460, xxx...@xx.com):
Error: +F/dFJQm+k0MIQAAmtbU9A: sieve:
msgid=: failed to
send vacation response to  (refer to server log for
more information)
Jun 16 13:18:27 mailstore5 dovecot: lmtp(8460, xxx...@xx.com):
+F/dFJQm+k0MIQAAmtbU9A: sieve:
msgid=: sent
vacation response to 

Thanks
Javier

On Thu, Jun 16, 2011 at 12:01 PM, Javier  wrote:
> Hi
>
> We've been using dovecot with great success so far. We are trying to
> add sieve support for our users.
> We enabled managesieve and users can define rules from the webmail
> (roundcube) with sieverules plugin for roundcube.
>
> Everything goes ok, but here's a problem I couldn't figure yet.
>
> When I define a redirect rule, the mail is forwarded but a local copy
> is stored too. Weird thing is that the logs say redirecting failed but
> the mail gets forwarded. Let me show you some of this (personal data
> masked):
>
> # cat .dovecot.sieve
> ## Generated by Roundcube Webmail SieveRules Plugin ##
> # rule:[teste]
> if anyof (true)
> {
>        redirect "x@x";
> }
>
> And the log from the user's sieve log
>
> sieve: info: started log at Jun 15 18:05:49.
> error: msgid=:
> failed to redirect message to  (refer to server log
> for more information).
>
> syslog:
>
> Jun 16 11:40:26 mailstore5 dovecot: lmtp(8458, ...@xx.com):
> Error: /eMhMNoV+k0KIQAAmtbU9A: sieve: execution of script
> /var/maildir++/99/xx@/.dovecot.sieve failed, but implicit
> keep was successful (user logfile
> /var/maildir++/99/xxx...@x.com/.dovecot.sieve.log may reveal
> additional details)
>
> And gets redirected anyway
> Jun 16 11:40:26 mailstore5 postfix/smtp[13041]: CB4D1C79FE:
> to=, delay=0.12, delays=0.02/0/0.01/0.08, dsn=2.0.0,
> status=sent (250 2.0.0 Ok: queued as E24FAB0880)
>
> There's no explicit keep anywhere in the sieve rule, nor a global
> rule, so Im confused, the email should be forwarded only.  The error
> message confuses me too, as it says failed but the mail gets through.
>
> dovecot -n
> # 2.0.9: /opt/mail/dovecot/etc/dovecot.conf
> # OS: Linux 2.6.36.2 x86_64 Debian 5.0.8
> auth_mechanisms = plain login
> base_dir = /opt/mail/dovecot/var
> disable_plaintext_auth = no
> listen = *
> mail_location = maildir:~/Maildir
> mail_plugins = create_mbox quota
> managesieve_notify_capability = mailto
> managesieve_sieve_capability = fileinto reject envelope
> encoded-character vacation subaddress comparator-i;ascii-numeric
> relational regex imap4flags copy include varia
> passdb {
>  args = socket=/opt/mail/auth_server/var/socket timeout=10
>  driver = courier
> }
> plugin {
>  quota = maildir:User quota
>  quota_rule = Trash:ignore
>  quota_rule2 = Spam:ignore
>  sieve = ~/.dovecot.sieve
> }
> protocols = imap pop3 lmtp sieve
> service imap-login {
>  inet_listener imap {
>    port = 30143
>    ssl = no
>  }
> }
> service lmtp {
>  inet_listener lmtp {
>    address = 0.0.0.0
>    port = 30024
>  }
>  process_min_avail = 4
> }
> service pop3-login {
>  inet_listener pop3 {
>    port = 30110
>  }
> }
> ssl = no
> ssl_parameters_regenerate = 0
> userdb {
>  args = socket=/opt/mail/auth_server/var/socket timeout=10
>  driver = courier
> }
> protocol pop3 {
>  mail_plugins = create_mbox quota maildiraccess
>  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
>  pop3_reuse_xuidl = no
>  pop3_save_uidl = no
>  pop3_uidl_format = %f
> }
> protocol imap {
>  mail_plugins = create_mbox quota imap_quota maildiraccess
> }
> protocol lmtp {
>  mail_plugins = create_mbox quota sieve
> }
> protocol sieve {
>  mail_debug = yes
> }
>
> Your help is appreciated.
> Thanks!
> Javier
>


[Dovecot] problem with dovecot and sieve

2011-06-16 Thread Javier
Hi

We've been using dovecot with great success so far. We are trying to
add sieve support for our users.
We enabled managesieve and users can define rules from the webmail
(roundcube) with sieverules plugin for roundcube.

Everything goes ok, but here's a problem I couldn't figure out yet.

When I define a redirect rule, the mail is forwarded but a local copy
is stored too. The weird thing is that the logs say redirecting failed but
the mail gets forwarded. Let me show you some of this (personal data
masked):

# cat .dovecot.sieve
## Generated by Roundcube Webmail SieveRules Plugin ##
# rule:[teste]
if anyof (true)
{
redirect "x@x";
}

And the log from the user's sieve log

sieve: info: started log at Jun 15 18:05:49.
error: msgid=:
failed to redirect message to  (refer to server log
for more information).

syslog:

Jun 16 11:40:26 mailstore5 dovecot: lmtp(8458, ...@xx.com):
Error: /eMhMNoV+k0KIQAAmtbU9A: sieve: execution of script
/var/maildir++/99/xx@/.dovecot.sieve failed, but implicit
keep was successful (user logfile
/var/maildir++/99/xxx...@x.com/.dovecot.sieve.log may reveal
additional details)

And gets redirected anyway
Jun 16 11:40:26 mailstore5 postfix/smtp[13041]: CB4D1C79FE:
to=, delay=0.12, delays=0.02/0/0.01/0.08, dsn=2.0.0,
status=sent (250 2.0.0 Ok: queued as E24FAB0880)

There's no explicit keep anywhere in the sieve rule, nor a global
rule, so I'm confused; the email should be forwarded only. The error
message confuses me too, as it says it failed but the mail gets through.

dovecot -n
# 2.0.9: /opt/mail/dovecot/etc/dovecot.conf
# OS: Linux 2.6.36.2 x86_64 Debian 5.0.8
auth_mechanisms = plain login
base_dir = /opt/mail/dovecot/var
disable_plaintext_auth = no
listen = *
mail_location = maildir:~/Maildir
mail_plugins = create_mbox quota
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include varia
passdb {
  args = socket=/opt/mail/auth_server/var/socket timeout=10
  driver = courier
}
plugin {
  quota = maildir:User quota
  quota_rule = Trash:ignore
  quota_rule2 = Spam:ignore
  sieve = ~/.dovecot.sieve
}
protocols = imap pop3 lmtp sieve
service imap-login {
  inet_listener imap {
port = 30143
ssl = no
  }
}
service lmtp {
  inet_listener lmtp {
address = 0.0.0.0
port = 30024
  }
  process_min_avail = 4
}
service pop3-login {
  inet_listener pop3 {
port = 30110
  }
}
ssl = no
ssl_parameters_regenerate = 0
userdb {
  args = socket=/opt/mail/auth_server/var/socket timeout=10
  driver = courier
}
protocol pop3 {
  mail_plugins = create_mbox quota maildiraccess
  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
  pop3_reuse_xuidl = no
  pop3_save_uidl = no
  pop3_uidl_format = %f
}
protocol imap {
  mail_plugins = create_mbox quota imap_quota maildiraccess
}
protocol lmtp {
  mail_plugins = create_mbox quota sieve
}
protocol sieve {
  mail_debug = yes
}

Your help is appreciated.
Thanks!
Javier


Re: [Dovecot] Outlook Calendar Connector Question

2011-04-28 Thread Javier Miguel Rodríguez


With funambol (open source) you can connect your 
PDAs/iPhone/Outlook to have centralized calendars and contacts.


You can also read about davical.

Regards

Javier





Quoting Jake Johnson :


Is there a freeware or opensource calendar connector that will work with
Dovecot?

Any suggestions would be appreciated.

Thanks.








Re: [Dovecot] IO rate quotas?

2011-04-08 Thread Javier de Miguel Rodriguez

> I would hope that traffic shaping could be done in an affordable manner, like 
> you say with FOSS on the Dovecot server.
> 

Go to www.lartc.org (Linux Advanced Routing & Traffic Control). They have a lot 
of documentation and a great mailing list.

Traffic shaping is a bit tricky until you understand all the mess, but once you 
"get it" it is a very powerful tool.

Regards

Javier





Re: [Dovecot] 2.0.10 Auth failed while binding ldap

2011-03-05 Thread Javier de Miguel Rodríguez



http://hg.dovecot.org/dovecot-2.0/rev/b44ec48d9425 probably fixes it?
That patch solves the problem for me; now dovecot ldap auth works. Thank 
you, Timo.



(I was going to test that broken change when I made it, but then I realized I 
didn't have LDAP server installed and I just hate installing slapd. Today I 
thought I'd rather try writing my own really simple LDAP server, but after a 
few hours I gave up on it too. Maybe some day I'll try again. So the above 
patch is also untested.)





Re: [Dovecot] 2.0.10 Auth failed while binding ldap

2011-03-05 Thread Javier de Miguel Rodríguez

On 05/03/11 11:48, Stéphane Wartel wrote:

Dear all,

Since the new release has been installed, the auth process crashes with an io loop:


Same problem here: dovecot 2.0.9 works fine with ldap (RHEL 5.6 x64), 
but dovecot 2.0.10 crashes:


Mar  5 19:21:21 buzon dovecot: auth: Panic: file db-ldap.c: line 1113 
(db_ldap_result_change_attr): assertion failed: (ctx->vals == NULL)
Mar  5 19:21:21 buzon dovecot: auth: Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0 [0x2b5c03bded30] -> 
/usr/lib64/dovecot/libdovecot.so.0 [0x2b5c03bded86] -> 
/usr/lib64/dovecot/libdovecot.so.0 [0x2b5c03bde743] -> 
/usr/lib64/dovecot/auth/libauthdb_ldap.so(db_ldap_result_iterate_next+0x36e) 
[0x2b5c03e3f7ee] -> /usr/lib64/dovecot/auth/libauthdb_ldap.so 
[0x2b5c03e42130] -> /usr/lib64/dovecot/auth/libauthdb_ldap.so 
[0x2b5c03e40d6e] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x48) 
[0x2b5c03be9708] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0xd5) 
[0x2b5c03beaa75] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x2d) 
[0x2b5c03be969d] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) 
[0x2b5c03bd8163] -> dovecot/auth [0 wait, 0 passdb, 1 
userdb](main+0x2cc) [0x4151ec] -> 
/lib64/libc.so.6(__libc_start_main+0xf4) [0x3b0881d994] -> dovecot/auth 
[0 wait, 0 passdb, 1 userdb] [0x409ab9]
Mar  5 19:21:21 buzon dovecot: master: Error: service(auth): child 14082 
killed with signal 6 (core dumps disabled)


Any ideas?

Regards

Javier


[Dovecot] Error purging mdbox (damaged) mailbox

2011-02-28 Thread Javier Miguel Rodríguez


We are stress testing our preproduction system. One of the "evil" 
tests we made was putting our mailbox filesystem into read-only mode in the 
middle of smtp(+lda) delivery. When we try to purge one of the affected 
mailboxes we get errors like the following:


doveadm(lbandera@mysite): Panic: file mdbox-purge.c: line 225 
(mdbox_purge_save_msg): assertion failed: (ret == (off_t)msg_size)
doveadm(lbandera@mysite): Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0 [0x3b0943bab0] -> 
/usr/lib64/dovecot/libdovecot.so.0(default_fatal_handler+0x35) 
[0x3b0943bb95] -> /usr/lib64/dovecot/libdovecot.so.0 [0x3b0943b4c3] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mdbox_purge+0xe83) 
[0x3b0986f0c3] -> /usr/bin/doveadm [0x408d65] -> /usr/bin/doveadm 
[0x4093c1] -> /usr/bin/doveadm(doveadm_mail_single_user+0x9d) [0x4094ed] 
-> /usr/bin/doveadm [0x4096fe] -> 
/usr/bin/doveadm(doveadm_mail_try_run+0xb7) [0x409b37] -> 
/usr/bin/doveadm(main+0x2fc) [0x40dddc] -> 
/lib64/libc.so.6(__libc_start_main+0xf4) [0x3b0881d994] -> 
/usr/bin/doveadm [0x408c39]


doveadm(leonvela@mysite): Panic: file mdbox-purge.c: line 225 
(mdbox_purge_save_msg): assertion failed: (ret == (off_t)msg_size)
doveadm(leonvela@mysite): Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0 [0x3b0943bab0] -> 
/usr/lib64/dovecot/libdovecot.so.0(default_fatal_handler+0x35) 
[0x3b0943bb95] -> /usr/lib64/dovecot/libdovecot.so.0 [0x3b0943b4c3] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mdbox_purge+0xe83) 
[0x3b0986f0c3] -> /usr/bin/doveadm [0x408d65] -> /usr/bin/doveadm 
[0x4093c1] -> /usr/bin/doveadm(doveadm_mail_single_user+0x9d) [0x4094ed] 
-> /usr/bin/doveadm [0x4096fe] -> 
/usr/bin/doveadm(doveadm_mail_try_run+0xb7) [0x409b37] -> 
/usr/bin/doveadm(main+0x2fc) [0x40dddc] -> 
/lib64/libc.so.6(__libc_start_main+0xf4) [0x3b0881d994] -> 
/usr/bin/doveadm [0x408c39]


The mailboxes are damaged, but maybe doveadm should not crash on 
them; it should handle the error more gracefully and exit with an error status.


Regards

Javier


[Dovecot] Question about mdbox_preallocate_space and ext4

2011-02-13 Thread Javier de Miguel Rodríguez

Hello

Can anyone explain how mdbox_preallocate_space interacts with ext4?


Regards

Javier



Re: [Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-08 Thread Javier Miguel Rodríguez



The stats in the SAN after the change maildir->mdbox do not help; we have zlib 
enabled in lda & imap with mdbox, so our # of real IOPs is lower than with Maildir (we 
did not have zlib enabled before).


I wonder how large a write can be before it is split to two iops.. With NFS 
probably smaller I'd guess. Still, I would have thought that even if zlib 
writes only half as much, the disk iops difference wouldn't be nearly as much.



Without zlib our mailstore was 2.1 TB. With zlib enabled it is 1.4 TB. 
We use an iSCSI SAN with ext4. I am writing a document with some 
benchmarking of dovecot (postal & rabid software), with some graphs about 
# of iops, cpu load, and so on. I am still writing it; if you are 
interested I can post a link to the document on the list.


Regards

Javier




Re: [Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-08 Thread Javier Miguel Rodríguez



So with mdbox disk I/O usage increased compared to maildir+ramdisk indexes?



That is a "tricky" question to ask. It depends on usage, I think 
the following:



- LDA delivery: load is a bit lower (on disk) in Maildir vs mdbox: 
in both cases the message has to be written, indexes are updated, in 
Maildir indexes are in ram, so lower "disk" load in this case


- POP3 access: the same as the previous post

- IMAP access: this is tricky. In mdbox a /"delete message"/ 
command only lowers the refcount, indexes are updated and in the night a 
cron job runs doveadm purge. In Maildir, you really delete the message 
when MUA/webmail /"compacts"/ the folder, and indexes are updated. I 
think that mdbox has a /"delayed IO" /in this case, and has less load on 
disk on "production hours".


Am I missing anything? The stats in the SAN after the change 
maildir->mdbox do not help, we have zlib enabled in lda & imap with 
mdbox, so our # of real IOPs is lower than Maildir (we did not have zlib 
enabled)


Regards

Javier




Re: [Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-08 Thread Javier de Miguel Rodríguez

Hello

Hmm. I guess if you were doing backups 24h/day, then you can't really say how 
much faster mdbox performs than maildir (outside backups)?



No, 24 hours is for a FULL backup at the weekend. An incremental 
backup is only 2-3 hours in the night every day.


About performance... I cannot give you real numbers for Maildir vs 
mdbox. With Maildir our indexes were stored in a ram disk, but we cannot 
do that with mdbox (we cannot recreate them if power is lost).


Regards

Javier



Re: [Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-08 Thread Javier Miguel Rodríguez



Oh.. I envy you. Will probably need to do the same at some point, but
I'm having problems understanding how we will ever be able to make the
transition. Too many files -- too many users..



We did the transition via imapsync: we had "the old server" and 
a "new server", and we migrated all mailboxes
with imapsync and the master user feature. The first imapsync takes a lot of 
time, but the next ones are incremental and take much less time. When 
we were ready (one night), we stopped and switched from the "old server" to the 
"new server". Minimal downtime, and if everything goes wrong we can 
imapsync "in the other way", from new -> old instead of old -> new.



Our mail servers are virtualized in a vmware vsphere cluster. We 
have HA & DRS, and all the info is stored in the iSCSI SAN. In our setup 
we "only" have one virtualized mail server, but if the hw node fails the 
virtualized server starts automatically on another ESX host.


Regards

Javier


How long did it take to convert from maildir to mdbox, how much downtime ?

Do you have a clustered setup, or single node? I'm wondering how safe
mdbox will be on a clusterfs (GPFS), as we've had a bit of trouble with
the index-files when they're accessed from multiple nodes at the same
time (but that was with v1.0.15 -- so we should maybe trust that such
problems has since been fixed :-)


   -jf




[Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-07 Thread Javier Miguel Rodríguez


Hello

I am writing to this mailing list to thank Timo for dovecot 2 & 
mdbox. We have almost 30.000 active users and our life was sad with 
Maildir & backup: 24 hours for a full backup with bacula (zlib-enabled 
maildirs, 1.4 TB). After switching to mdbox, the backup time is under 12 
hours! Instead of backing up 17 million files, with mdbox our backup is 
only 1 million files, and that speeds up the backup operation a lot.



Timo, here you have detailed info about the bacula backup jobs; you 
can use it in the wiki if you desire. If you need additional info 
(hardware specs, dovecot config, etc.) I can share it.


Maildir:

  Job:                    Backup_Linux_buzon_us.2011-01-21_19.03.26_38
  Backup Level:           Full
  Client:                 "buzon_us" 2.0.3 (06Mar07) x86_64-redhat-linux-gnu,redhat,Enterprise release
  FileSet:                "Full Buzon" 2011-01-21 19:03:26
  Pool:                   "Pool_Linux_Buzones_US" (From Job resource)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "File" (From command line)
  Scheduled time:         21-ene-2011 19:03:20
  Start time:             21-ene-2011 19:03:29
  End time:               22-ene-2011 19:46:45
  Elapsed time:           1 day 43 mins 16 secs
  Priority:               10
  FD Files Written:       16,903,801
  SD Files Written:       16,903,801
  FD Bytes Written:       1,445,943,227,706 (1.445 TB)
  SD Bytes Written:       1,448,983,971,450 (1.448 TB)
  Rate:                   16247.3 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Accurate:               no
  Volume name(s):         Buzones_US_2
  Volume Session Id:      26
  Volume Session Time:    1295511704
  Last Volume Bytes:      1,450,628,892,676 (1.450 TB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK







mdbox with mdbox_rotate_size=10m:

  Build OS:               x86_64-redhat-linux-gnu redhat Enterprise release
  JobId:                  3587
  Job:                    Backup_Linux_buzon_us.2011-02-07_08.13.52_53
  Backup Level:           Full (upgraded from Incremental)
  Client:                 "buzon_us" 2.0.3 (06Mar07) x86_64-redhat-linux-gnu,redhat,Enterprise release
  FileSet:                "Full Buzon" 2011-01-21 19:03:26
  Pool:                   "Pool_Linux_Buzones_US" (From Job resource)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "File" (From command line)
  Scheduled time:         07-feb-2011 08:13:44
  Start time:             07-feb-2011 08:13:54
  End time:               07-feb-2011 19:43:50
  Elapsed time:           11 hours 29 mins 56 secs
  Priority:               10
  FD Files Written:       1,148,780
  SD Files Written:       1,148,780
  FD Bytes Written:       1,537,062,152,773 (1.537 TB)
  SD Bytes Written:       1,537,218,147,402 (1.537 TB)
  Rate:                   37130.7 KB/s
  Software Compression:   None
  VSS:                    no
  Encryption:             no
  Accurate:               yes
  Volume name(s):         Buzones_US_4|Buzones_US_5
  Volume Session Id:      101
  Volume Session Time:    1296724657
  Last Volume Bytes:      438,873,898,586 (438.8 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK



Regards

Javier de Miguel
University of Seville


Re: [Dovecot] Questions about dbox

2011-01-24 Thread Javier de Miguel Rodríguez



The intended way to restore stuff is to either restore the entire dbox to a 
temp directory, or at least all the important parts of it (indexes + the files 
that contain the wanted mails) and then use something like:

doveadm import sdbox:/tmp/restoredbox "" savedsince 2011-01-01



Thank you for your response, Timo. That was the answer I was 
looking for. The above example is for sdbox; mdbox should be exactly the 
same, am I right?
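(For what it's worth, a sketch of the mdbox variant -- only the mail location 
prefix should change; the path is illustrative:)

doveadm import mdbox:/tmp/restoredbox "" savedsince 2011-01-01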




- The previous question applies to sdbox and mdbox. In the case of mdbox, 
we can configure rotation of files using /mdbox_rotate_size/ . We would like to 
rotate daily, not based in size (our users ask us for yesterday's backup). How 
can we accomplish this?

mdbox_rotate_interval = 1d


Any known issues with mdbox and the zlib plugin in lda & imap? I have 
read that mbox is "read-only" with the zlib plugin. What about mdbox with 
a high rotate interval (almost an mbox)? How does this work? Is the 
entire mdbox file loaded into RAM and decompressed, or is a temp file in 
the filesystem used?


Another question: any hint about the "hot spot" of size for 
/mdbox_rotate_interval/?




We now have 17.000.000 messages in our maildir, almost 1.5 TB (zlib 
compression enabled). Our backup time with bacula is rather bad: 24 hours for 
a full backup; most of the time the backup is busy fstat'ing all those little 
messages.

In the case of Maildir there's no point in fstating any mail files. I'd guess it 
should be possible to patch bacula to not do that.


Good idea. I will write to bacula folks about that.


We think that mdbox can help us with this. Does anybody have good experiences migrating from 
maildir->mdbox in "large" environments? What about mdbox performance & reliability?

I haven't recently heard of corruption complaints about mdbox.. Previously when 
there were some, I didn't hear of complaints about losing mails or anything, so 
that's good :)


Any additional comments about this? We are seriously thinking about 
migrating to mdbox, but it is always scary "to be the first one".


Thank you for your support

Regards

Javier


[Dovecot] Questions about dbox

2011-01-24 Thread Javier de Miguel Rodríguez

Hello

I have read carefully about dbox 
(http://wiki2.dovecot.org/MailboxFormat/dbox) and I have some questions:


- One of the main advantages (speed-wise) of dbox over maildir is 
that index files are the only storage for message flags and keywords. 
What happens when we want to recover some messages from backup? With 
maildir we can rebuild message indexes, but I am not sure about dbox. 
Should we also restore the "old indexes" and merge them with the "new 
indexes" in order to restore the deleted messages?



- The previous question applies to sdbox and mdbox. In the case of 
mdbox, we can configure rotation of files using mdbox_rotate_size. We 
would like to rotate daily, not based on size (our users ask us for 
yesterday's backup). How can we accomplish this?



We now have 17.000.000 messages in our maildir, almost 1.5 TB (zlib 
compression enabled). Our backup time with bacula is rather bad: 24 
hours for a full backup; most of the time the backup is busy fstat'ing 
all those little messages. We think that mdbox can help us with this. Does 
anybody have good experiences migrating from maildir->mdbox in "large" 
environments? What about mdbox performance & reliability?


Thank you for your support

Javier





[Dovecot] Question about indexes and maildir/sdbox/mdbox

2011-01-17 Thread Javier de Miguel Rodríguez

Hello

We are now running dovecot 2.0.9 with indexes in a RAM disk and 
maildir storage on a test system. We have the following questions:


- If there is a power outage / kernel crash, we will lose the 
contents of the ramdisk. We have tested that indexes are regenerated when a 
user logs in via IMAP, so e-mail access will be "slower" after a power 
outage / kernel crash, but everything should work as expected (TM). Are 
we missing something?


- We are evaluating migrating from maildir to dbox. There are two 
alternatives: sdbox and mdbox. Reading about dbox in the wiki 
(http://wiki2.dovecot.org/MailboxFormat/dbox) we see that using a ram 
disk for indexes for mdbox is a really bad idea:


/"Note that with dbox the Index files actually contain significant data 
which is held nowhere else. Index files for both *single-dbox* and 
*multi-dbox* contain message flags and keywords. For *multi-dbox*, the 
index file also contains the map_uids which link (via the "map index") 
to the actual message data. This data cannot be automatically recreated, 
so it is important that Index files are treated with the same care as 
message data files."/


So with mdbox we should not use a ramdisk for indexes. But what about 
sdbox? Do sdbox indexes work like maildir indexes? Are sdbox indexes bigger 
than maildir indexes?


Thank you very much for your support

Regards

Javier


Re: [Dovecot] SSD drives are really fast running Dovecot

2011-01-16 Thread Javier de Miguel Rodríguez

On 13/01/11 17:01, David Woodhouse wrote:

On Wed, 2011-01-12 at 09:53 -0800, Marc Perkel wrote:

I just replaced my drives for Dovecot using Maildir format with a pair
of Solid State Drives (SSD) in a raid 0 configuration. It's really
really fast. Kind of expensive but it's like getting 20x the speed for
20x the price. I think the big gain is in the 0 seek time.

You may find ramfs is even faster :)

ramfs (tmpfs in linux-land) is useful for indexes. If you lose the 
indexes, they will be recreated automatically the next time a user logs in.


We are now trying the zlib plugin to lower the number of IOPS to our 
maildir storage systems. We are using gzip (bzip2 increases latency a 
lot). LZMA/xz seems interesting (high compression and rather good 
decompression speed) and lzo also seems interesting (blazing fast 
compression AND decompression, though not much compression savings).
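For reference, a minimal sketch of the settings being compared here; the 
level value is illustrative (the configs later in this archive use gz at 
level 9):

mail_plugins = " zlib"
plugin {
  zlib_save = gz       # compression algorithm for newly saved mails
  zlib_save_level = 6  # trades CPU for compression ratio
}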


What kind of "tricks" do you use to lower the number of IOPs of 
your dovecot servers?


    Regards

Javier





I hope you have backups.





Re: [Dovecot] Problem after migration dovecot 1.2 -> dovecot 2.0

2011-01-13 Thread Javier de Miguel Rodríguez



These two directories should have "dovenull" as group.. It should have 
automatically figured this out by looking up dovenull's group. I could send some debug 
patches to figure out what the problem is.. But you should be able to work around it by 
setting:

service imap-login {
   group = dovenull
}




That did the trick. But it is strange: on the other dovecot server I 
run I do not need dovenull for those dirs.


    Regards

Javier






Re: [Dovecot] Problem after migration dovecot 1.2 -> dovecot 2.0

2011-01-13 Thread Javier de Miguel Rodríguez


Still no luck with this. I have followed the debugging guidelines 
in the dovecot wiki (http://wiki2.dovecot.org/Debugging/ProcessTracing) and 
executed the following:


strace -f -tt -o strace_dovecot -p 15426


15426 is the PID of /usr/sbin/dovecot. I attach the compressed 
strace log. Hope this helps to solve this issue.


Regards

Javier






telnet 192.168.4.80 110

Trying 192.168.4.80...
Connected to 192.168.4.80.
Escape character is '^]'.


In syslog I got the following error:


Jan 12 12:14:44 buzon dovecot: imap-login: Error: auth: 
connect(login) in directory / failed: Permission denied 
(euid=107() egid=110() missing +x perm: /, euid is 
not dir owner)



My /var/run/dovecot directory listing is the following:


 ls -lhR /var/run/dovecot

/var/run/dovecot:
total 12K
srw------- 1 root    root    0 ene 12 11:40 anvil
srw------- 1 root    root    0 ene 12 11:40 anvil-auth-penalty
srw------- 1 root    root    0 ene 12 11:40 auth-client
srw------- 1 dovecot root    0 ene 12 11:40 auth-login
srw------- 1 entrega root    0 ene 12 11:40 auth-master
srw------- 1 entrega root    0 ene 12 11:40 auth-userdb
srw------- 1 dovecot root    0 ene 12 11:40 auth-worker
srw------- 1 root    root    0 ene 12 11:40 config
srw------- 1 root    root    0 ene 12 11:40 dict
srwxrwxrwx 1 root    root    0 dic 27 21:36 dict-server
srw------- 1 root    root    0 ene 12 11:40 director-admin
srw------- 1 root    root    0 ene 12 09:17 director-userdb
srw-rw-rw- 1 root    root    0 ene 12 11:40 dns-client
srw------- 1 root    root    0 ene 12 11:40 doveadm-server
lrwxrwxrwx 1 root    root   25 ene 12 11:40 dovecot.conf -> /etc/dovecot/dovecot.conf
drwxr-xr-x 2 root    root 4,0K ene 12 09:05 empty
drwxr-x--- 2 root    root 4,0K ene 12 11:40 login
-rw------- 1 root    root    6 ene 12 11:40 master.pid

/var/run/dovecot/empty:
total 0

/var/run/dovecot/login:
total 4,0K
srw-rw-rw- 1 root root   0 ene 12 11:40 dns-client
srw-rw-rw- 1 root root   0 ene 12 11:40 imap
srw-rw-rw- 1 root root   0 ene 12 11:40 login
srw-rw-rw- 1 root root   0 ene 12 11:40 pop3
srw-rw-rw- 1 root root   0 ene 12 11:40 sieve
-rw-r--r-- 2 root root 230 ene  9 20:56 ssl-parameters.dat
srw-rw-rw- 1 root root   0 ene 12 11:40 ssl-params


My doveconf -n is the following:
# OS: Linux 2.6.18-194.26.1.el5 x86_64 Red Hat Enterprise Linux Server 
release 5.5 (Tikanga) ext3

auth_debug = yes
auth_master_user_separator = *
auth_mechanisms = plain login
base_dir = /var/run/dovecot/
default_client_limit = 4096
default_process_limit = 2500
disable_plaintext_auth = no
dotlock_use_excl = yes
mail_fsync = never
mail_gid = entrega
mail_location = 
maildir:/buzones/us.es/%2.26Hn/%2.200Hn/%n:INDEX=/buzones/ramdisk/%2.26Hn/%2.200Hn/%n

mail_plugins = " zlib"
mail_uid = entrega
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date

passdb {
  driver = shadow
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
passdb {
  args = /etc/usuario_maestro.txt
  driver = passwd-file
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
plugin {
  quota = maildir:Cuota de usuario
  quota_rule2 = Trash:storage=+10%%
  quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95
  quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80
  sieve = /buzones/us.es/%2.26Hn/%2.200Hn/%n/dovecot.sieve
  sieve_dir = /buzones/us.es/%2.26Hn/%2.200Hn/%n/sieve/
  zlib_save = gz
  zlib_save_level = 9
}
protocols = pop3 imap sieve
service auth {
  unix_listener auth-master {
user = entrega
  }
  unix_listener auth-userdb {
user = entrega
  }
  user = root
}
service imap-login {
  executable = /usr/libexec/dovecot/imap-login
  process_limit = 2000
}
service imap {
  executable = /usr/libexec/dovecot/rawlog /usr/libexec/dovecot/imap
  process_limit = 2000
}
service managesieve-login {
  executable = /usr/libexec/dovecot/managesieve-login
  inet_listener sieve {
port = 2000
  }
  process_limit = 2000
}
service managesieve {
  executable = /usr/libexec/dovecot/managesieve
  process_limit = 2000
}
service pop3-login {
  executable = /usr/libexec/dovecot/pop3-login
  process_limit = 2000
}
service pop3 {
  executable = /usr/libexec/dovecot/pop3
  process_limit = 2000
}
ssl_ca = 

What am I doing wrong? I have migrated an identical server from 
dovecot 1.2 to dovecot 2.0 without this problem.


Regards

Javier





strace_dovecot.gz
Description: GNU Zip compressed data


[Dovecot] Problem after migration dovecot 1.2 -> dovecot 2.0

2011-01-12 Thread Javier de Miguel Rodríguez

I have migrated from dovecot 1.2 to dovecot 2.0. When I connect via 
telnet to port 110 of the dovecot server the client hangs:


telnet 192.168.4.80 110

Trying 192.168.4.80...
Connected to 192.168.4.80.
Escape character is '^]'.


In syslog I got the following error:


Jan 12 12:14:44 buzon dovecot: imap-login: Error: auth: connect(login) 
in directory / failed: Permission denied (euid=107() 
egid=110() missing +x perm: /, euid is not dir owner)



My /var/run/dovecot directory listing is the following:


 ls -lhR /var/run/dovecot

/var/run/dovecot:
total 12K
srw------- 1 root    root    0 ene 12 11:40 anvil
srw------- 1 root    root    0 ene 12 11:40 anvil-auth-penalty
srw------- 1 root    root    0 ene 12 11:40 auth-client
srw------- 1 dovecot root    0 ene 12 11:40 auth-login
srw------- 1 entrega root    0 ene 12 11:40 auth-master
srw------- 1 entrega root    0 ene 12 11:40 auth-userdb
srw------- 1 dovecot root    0 ene 12 11:40 auth-worker
srw------- 1 root    root    0 ene 12 11:40 config
srw------- 1 root    root    0 ene 12 11:40 dict
srwxrwxrwx 1 root    root    0 dic 27 21:36 dict-server
srw------- 1 root    root    0 ene 12 11:40 director-admin
srw------- 1 root    root    0 ene 12 09:17 director-userdb
srw-rw-rw- 1 root    root    0 ene 12 11:40 dns-client
srw------- 1 root    root    0 ene 12 11:40 doveadm-server
lrwxrwxrwx 1 root    root   25 ene 12 11:40 dovecot.conf -> /etc/dovecot/dovecot.conf
drwxr-xr-x 2 root    root 4,0K ene 12 09:05 empty
drwxr-x--- 2 root    root 4,0K ene 12 11:40 login
-rw------- 1 root    root    6 ene 12 11:40 master.pid

/var/run/dovecot/empty:
total 0

/var/run/dovecot/login:
total 4,0K
srw-rw-rw- 1 root root   0 ene 12 11:40 dns-client
srw-rw-rw- 1 root root   0 ene 12 11:40 imap
srw-rw-rw- 1 root root   0 ene 12 11:40 login
srw-rw-rw- 1 root root   0 ene 12 11:40 pop3
srw-rw-rw- 1 root root   0 ene 12 11:40 sieve
-rw-r--r-- 2 root root 230 ene  9 20:56 ssl-parameters.dat
srw-rw-rw- 1 root root   0 ene 12 11:40 ssl-params


My doveconf -n is the following:
# OS: Linux 2.6.18-194.26.1.el5 x86_64 Red Hat Enterprise Linux Server 
release 5.5 (Tikanga) ext3

auth_debug = yes
auth_master_user_separator = *
auth_mechanisms = plain login
base_dir = /var/run/dovecot/
default_client_limit = 4096
default_process_limit = 2500
disable_plaintext_auth = no
dotlock_use_excl = yes
mail_fsync = never
mail_gid = entrega
mail_location = 
maildir:/buzones/us.es/%2.26Hn/%2.200Hn/%n:INDEX=/buzones/ramdisk/%2.26Hn/%2.200Hn/%n

mail_plugins = " zlib"
mail_uid = entrega
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date

passdb {
  driver = shadow
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
passdb {
  args = /etc/usuario_maestro.txt
  driver = passwd-file
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
plugin {
  quota = maildir:Cuota de usuario
  quota_rule2 = Trash:storage=+10%%
  quota_warning = storage=95%% /usr/local/bin/quota-warning.sh 95
  quota_warning2 = storage=80%% /usr/local/bin/quota-warning.sh 80
  sieve = /buzones/us.es/%2.26Hn/%2.200Hn/%n/dovecot.sieve
  sieve_dir = /buzones/us.es/%2.26Hn/%2.200Hn/%n/sieve/
  zlib_save = gz
  zlib_save_level = 9
}
protocols = pop3 imap sieve
service auth {
  unix_listener auth-master {
user = entrega
  }
  unix_listener auth-userdb {
user = entrega
  }
  user = root
}
service imap-login {
  executable = /usr/libexec/dovecot/imap-login
  process_limit = 2000
}
service imap {
  executable = /usr/libexec/dovecot/rawlog /usr/libexec/dovecot/imap
  process_limit = 2000
}
service managesieve-login {
  executable = /usr/libexec/dovecot/managesieve-login
  inet_listener sieve {
port = 2000
  }
  process_limit = 2000
}
service managesieve {
  executable = /usr/libexec/dovecot/managesieve
  process_limit = 2000
}
service pop3-login {
  executable = /usr/libexec/dovecot/pop3-login
  process_limit = 2000
}
service pop3 {
  executable = /usr/libexec/dovecot/pop3
  process_limit = 2000
}
ssl_ca = 

What am I doing wrong? I have migrated an identical server from 
dovecot 1.2 to dovecot 2.0 without this problem.


Regards

Javier


Re: [Dovecot] Maildir feature I'd like to see - SSD for newer messages

2010-12-23 Thread Javier de Miguel Rodríguez
On Thu, 23 Dec 2010 11:27:45 -0800, Marc Perkel  
wrote:

SSD drives are very fast but expensive. So I have a crude idea that
I'd like to see. May not be practical but would like to get some
thoughts on it.



You are asking about automatic storage tiering. You can get what you 
want in a transparent way, independent 
of Dovecot. Some storage vendors (search for Fully Automated Storage 
Tiering - FAST - from EMC, or Compellent, recently 
bought by Dell) offer what you are asking for.

If the budget is low, you can achieve "poor man's" storage tiering 
with some shell scripting, cron and soft links (see the sketch below); or 
you can look at http://code.google.com/p/fscops/ for a more mature 
implementation. Or just use ZFS and "hybrid storage pools".


Merry Xmas

Javier


Re: [Dovecot] How to avoid "authenticated user not found" - messages when using multiple Ldap userdbs/passdbs?

2010-12-16 Thread Javier de Miguel Rodríguez

On 16/12/10 14:50, Sebastian Urbanneck wrote:

Following:

We've got a Dovecot 1.0.10 running on Ubuntu Hardy. Till now we used 
to have only mail accounts under our own domain, but since we're also 
a webhoster people began to ask if there is a possibility to use 
their own domains in their mail addresses.


Try a POP/IMAP proxy such as perdition, and route based on 
@domain1, @domain2 to the right mail backend (or route based on the mailHost 
attribute via an LDAP search).


Regards

Javier



Re: [Dovecot] Question about "slow" storage but fast cpus, plenty of ram and dovecot

2010-12-15 Thread Javier de Miguel Rodríguez

On 15/12/10 14:28, Patrick Westenberg wrote:
Won't 15k U320 SCSI disks also be faster than average SATA disks for 
the indexes?

I am using 2x raid5 arrays of 8 SAS 15k rpm disks for mailboxes & indexes.

I am evaluating migrating the indexes to 1x raid 1+0 array of 8 SAS 
15k rpm disks.


Regards

    Javier


Re: [Dovecot] Question about "slow" storage but fast cpus, plenty of ram and dovecot

2010-12-13 Thread Javier de Miguel Rodríguez

On 13/12/10 10:16, Brad Davidson wrote:


On Dec 12, 2010, at 23:26, Javier de Miguel Rodríguez 
wrote:


My SAN(s) (HP LeftHand Networks) do not support SSD, though. But I have 
several LeftHand nodes, some of them with raid5, others with raid 1+0. 
Maildirs+indexes are now on raid5; maybe I can separate the indexes to a raid 1+0 
iSCSI target in a different SAN.

I have two raid5 (7 disks+1 spare) and I have joined them via LVM 
striping. Each disk is SAS 15k rpm 450GB, and the SANs have a 512 
MB battery-backed cache. In our real workload (imapsync), each raid5 gives 
around 1700-1800 IOPS, combined 3.500 IOPS.

Your 'slow' storage is running against 16 15k RPM SAS drives? Those LeftHand 
controllers must be terrible. We have Maildir on NFS on a Netapp with 15k RPM 
450GB FC disks and have never had performance problems, even when running the 
controllers up against the wall by mounting with the noac option (60k NFS 
IOPS!). We were using 500GB 4500 RPM ATA disks at that point - doesn't get much 
slower than that.


Can you give me (off-list if you desire) more info about your 
setup? I am interested in the number and type of spindles you are using. 
We are using LeftHand because of their real-time replication 
capabilities, something very interesting to us, and each node pair is 
relatively cheap (8x450 GB @ 15k rpm SAS disks per node, real-time 
replication, 512 MB cache, about 25K € per node pair).


We can throw more hardware at this; let's see if we get better results 
using memory-based indexes (via ramdisk). Zlib compression on indexes 
should be great for this.


    Regards

Javier




Re: [Dovecot] Question about "slow" storage but fast cpus, plenty of ram and dovecot

2010-12-12 Thread Javier de Miguel Rodríguez

Thank you for your responses, Stan; I reply below.

For that many users I'm guessing you can't physically stuff enough RAM
into the machines in your ESX cluster to use a ramdisk for the index
files, and if you could, you probably couldn't, or wouldn't want to,
afford the DIMMs required to meet the need.

Yes, I have a cluster of 4 ESX servers. I am going to do some 
scripting to see how much space we are allocating to indexes (see the 
sketch below).
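(A trivial sketch of that kind of scripting; the index root below is the one 
from this setup's mail_location, so adjust to your layout:)

# Total space used by the ramdisk-hosted indexes
du -sh /buzones/ramdisk
# Per-user usage, sorted, to spot the heaviest mailboxes
du -s /buzones/ramdisk/*/*/* | sort -rn | head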





 - In my setup I have 25.000+ users, almost 7.000.000
messages in my maildir. How much memory should I need in
a ramdisk to hold that?

  - What happens if something fails? I think that if I
lose the indexes (e.g. kernel crash) the next time I boot
the system the ramdisk will be empty, so the indexes should be
recreated. Am I right?

Given the size of your mail user base, I'd probably avoid the ramdisk
option, and go with a couple of striped (RAID 0) 100+ GB SSDs connected
on the iSCSI SAN.  This is an ESX cluster of more than one machine
correct?  You never confirmed this, but it seems a logical assumption
based on what you've stated.  If it's a single machine you should
obviously go with locally attached SATA II SSDs as it's far cheaper with
much greater real bandwidth by a factor of 100:1 vs iSCSI connection.



My SAN(s) (HP LeftHand Networks) do not support SSD, though. But I 
have several LeftHand nodes, some of them with raid5, others with raid 
1+0. Maildirs+indexes are now on raid5; maybe I can separate the indexes 
onto a raid 1+0 iSCSI target in a different SAN.




 - If I buy an SSD system and export that little and fast
storage via iSCSI, does zlib compression apply
to indexes?

Timo will have to answer this regarding zlib on indexes.



That would be rather interesting.



 - Any additional filesystem info? I am using ext3 on RHEL 5.5, in
RHEL 5.6 ext4 will be supported. Any performance hint/tuning (I already
use noatime, 4k blocksize)?

I'm shocked you're running 25K mailboxen with 7 million messages on
maildir atop EXT3!  On your  fast iSCSI SAN array, I assume with at
least 14 spindles in the RAID group LUN where the mail is stored, you
should be using XFS.



I have two raid5 (7 disks+1 spare) and I have joined them via LVM 
striping. Each disk is SAS 15k rpm 450GB, and the SANs have a 512 
MB battery-backed cache. In our real workload (imapsync), each raid5 
gives around 1700-1800 IOPS, combined 3.500 IOPS.



Formatted with the correct parameters, and mounted with the correct
options, XFS will give you _at minimum_ a factor of 2 performance gain
over EXT3 with 128 concurrent users.  As you add more concurrent users,
this ratio will grow even greater in XFS' favor.


Sadly, Red Hat Enterprise Linux 5 does not natively support XFS. I 
can install it via CentOSPlus, but we need Red Hat support if something 
goes VERY wrong. Red Hat Enterprise Linux 6 supports XFS (and gives me 
dovecot 2.0), but maybe it is "too early" for a RHEL6 deployment for so 
many users (sigh).


I will continue investigating indexes. Any additional hints?

Regards

Javier



Re: [Dovecot] Question about "slow" storage but fast cpus, plenty of ram and dovecot

2010-12-12 Thread Javier de Miguel Rodríguez


Thank you very much for all the responses in this thread. Now I have 
more questions:


- I have "slow" I/O (about 3.5000-4.000 IOPS, measured via 
imapsync), if I enable zlib compression in my maildirs, that should 
lower the number the IOPS (less to read, less to write, less IOPS, more 
CPU). Dovecot 2.0 is better for zlib (lda support) than dovecot 1.2.X..


- I understand that indexes should go on the fastest storage I own. 
Somebody talked about storing them in a ramdisk and then backing them up 
to disk on shutdown. I have several questions about that:


- In my setup I have 25.000+ users, almost 7.000.000 
messages in my maildir. How much memory should I need in 
a ramdisk to hold that?


 - What happens if something fails? I think that if I 
lose the indexes (e.g. kernel crash) the next time I boot 
the system the ramdisk will be empty, so the indexes should be 
recreated. Am I right?


- If I buy an SSD system and export that little and fast 
storage via iSCSI, does zlib compression apply 
to indexes?


- Any additional filesystem info? I am using ext3 on RHEL 5.5, in 
RHEL 5.6 ext4 will be supported. Any performance hint/tuning (I already 
use noatime, 4k blocksize)?



Regards

Javier


mail_location = maildir:~/Maildir:INDEX=MEMORY

The ":INDEX=MEMORY" disables writing the index files to disk, and as the
name implies, I believe, simply keeps indexes in memory.

I think maybe I shouldn't have called it INDEX=MEMORY, but rather something more like 
INDEX=DISABLE.


"If you really want to, you can also disable the index files completely
by appending :INDEX=MEMORY."

My read of that is that indexing isn't disabled completely, merely
storing the indexes to disk is disabled. The indexes are still built
and maintained in memory.

Timo, is that correct?

It's a per-connection in-memory index. Also there is no kind of caching of 
anything (dovecot.index.cache file, which is where most of Dovecot performance 
usually comes from).


I don't know if, or how much, storing them in RAM via :INDEX=MEMORY
consumes, as compared to using a ramdisk.  The memory consumption may be
less or it may be more.  Timo should be able to answer this, and give a
recommendation as to whether this is even a sane thing to do.

I think INDEX=MEMORY performance is going to suck. http://imapwiki.org/Benchmarking explains IMAP 
performance a bit more. By default Dovecot is the "Dynamically caching server", but with 
INDEX=MEMORY it becomes "Non-caching server".




Re: [Dovecot] mailboxes and IMAP folders mirroring ?

2010-11-17 Thread Javier de Miguel Rodríguez
 
 

On 17 November 2010 at 13:30 Frank Bonnet wrote:

> Hello
>
> This is a bit off Dovecot but ... 
Hmm...
 
You can accomplish that in several ways:
 
1. Use inotify to rsync when the mbox file changes (I recommend maildir for this,
so you do not have to copy the whole file); see the sketch below
2. Use replicated storage (maybe this is not what you are looking for)
3. Search for "continuous data protection" (CDP) on Google
 
Regards
 
Javier
 

>
> I'm searching for some software to mirror mailboxes and IMAP folders
> from the mailhub to another (clone) computer.
>
> Actually I use rsync daily but I wonder if there exists some software
> that is capable of real-time mirroring?
>
> I'm using Dovecot 1.2.14 and Postfix with MBOX format.
>
> Thanks
>

[Dovecot] Question about LDA+spamassassin

2010-09-13 Thread Javier de Miguel Rodríguez

 Hello

I am using dovecot 1.2.11 (openexchange is still not fully supported 
with dovecot 2.0). In my setup I use ldap to store quota information, 
and I want to accomplish the following: per-user spamassassin rules (MDA 
style), but my setup has the following requirements:


mail_uid= entrega
mail_gid= entrega

LDA & SIEVE:

sieve_dir=/buzones/my_domain/%2.26Hn/%2.200Hn/%n/sieve/
mail_location=maildir:/buzones/my_domain/%2.26Hn/%2.200Hn/%n


I have been reading the wiki, but I am not sure how I can have 
a per-user .spamassassin directory in $mail_location/user1, and how 
lda+sieve can work together to apply spam rules per user.


Regards

Javier




[Dovecot] Dovecot and OpenSSO

2010-09-09 Thread Javier de Miguel Rodríguez
 Has anybody tried to use OpenSSO as an authentication source for 
dovecot? Maybe using PAM+OpenSSO?


Regards

Javier


[Dovecot] Question about directory hashing

2010-06-10 Thread Javier de Miguel Rodríguez

Hello

I have been reading http://wiki.dovecot.org/MailLocation and 
http://wiki.dovecot.org/Variables and I do not fully understand the 
directory hashing feature. I want to migrate > 70.000 users from Sun 
Messaging to Dovecot using imapsync. I have tested with 
mail_location=maildir:/buzones/us.es/%1Hu/%2.1u/%n but I get the 
following directory tree:


`-- 0
|-- -
|-- 1
|-- 2
|-- 3
|-- 8
|-- _
|-- a
|-- ab
|-- b
|-- c
|-- d
|-- e
|-- ev
|-- f
|-- g
|-- h
|-- i
|-- ie
|-- is
|-- j
|-- jb
|-- jj
|-- jp
|-- js
|-- k
|-- l
|-- m
|-- mb
|-- n
|-- o
|-- p
|-- pp
|-- q
|-- r
|-- rm
|-- s
|-- t
|-- u
|-- v
|-- w
|-- x
|-- y
`-- z

The ideal directory layout should be the following:

First directory level: Letters from a to z
Second directory level: Numbers from 0 to 200
Third directory level: %n (username without @domain)

How can I achieve this directory layout (or another one optimized for 
70.000 users)? I use ext3 on Red Hat Enterprise Linux 5 with directory 
indexes on.
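(For what it's worth, the doveconf output in the 2011 posts above ends up 
using Dovecot's modulo-hash variables for a two-level layout; a sketch, where 
26 and 200 are the bucket counts per level:)

mail_location = maildir:/buzones/us.es/%2.26Hn/%2.200Hn/%n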


Regards

    Javier de Miguel


Re: [Dovecot] Questions about migration Sun Messaging -> Dovecot+Postfix+Ldap

2010-06-07 Thread Javier de Miguel Rodríguez

On 07/06/10 19:00, Timo Sirainen wrote:

On Sun, 2010-06-06 at 14:38 +0200, Javier de Miguel Rodríguez wrote:

   

  1) We are unable to make dbox work with quota, but we have no
problem with maildir. Quota is stored in an ldap attribute called "mailQuota"
 

I don't really recommend using dbox in v1.2. It has much better
performance and stability in v2.0. Anyway, with dbox you'll have to use
dict quota instead of maildir quota.

   


Thank you





  4)  Some users could have their mailboxes "disabled". We use the
following line: user_filter =
(&(objectClass=inetorgperson)(uid=%n)(mailUserStatus=active)) but it
does NOT work as expected. Any idea?
 

You should do it also for pass_filter. But other than that, I'd guess it
should work.

   


Thank you.


  7) When we set vacation messages they work but we see this error in
the log:   dovecot: deliver(jorgelp):
file_dotlock_create(~/.dovecot.lda-dupes) failed: No such file or directory
 

Your userdb doesn't return a home directory for users.
http://wiki.dovecot.org/VirtualUsers/Home

   

Thank you.


  8) When a user logs in she uses her username "mary" (without @us.es
or @alum.us.es). Our dovecot searches the whole ldap tree until it finds
that uid. But we would like to store in our mail_location /buzones/us.es
or /buzones/alum.us.es. How can we accomplish this? We should use the
"upper branch name" as part of the mail_location.
 

pass_attrs = .., someField=domain, ..

where someField contains the us.es or alum.us.es. If there's no such
field, I guess there's no way to do it.

   


Thank you.


auth default_with_listener:
mechanisms: plain login
passdb:
  driver: ldap
  args: /etc/dovecot-ldap.conf
userdb:
  driver: ldap
  args: /etc/dovecot-ldap-userdb.conf
auth default:
mechanisms: plain login
passdb:
  driver: ldap
  args: /etc/dovecot-ldap.conf
userdb:
  driver: ldap
  args: /etc/dovecot-ldap.conf
 

Don't add more than one auth block, now it's sometimes (more or less
randomly) using dovecot-ldap-userdb.conf and other times
dovecot-ldap.conf for userdb lookups.

   

They are symbolic links to the same file, anyway...


Thank you Timo. If you ever come to Seville (Spain) you will have as 
much free beer as you can drink :)





[Dovecot] Questions about migration Sun Messaging -> Dovecot+Postfix+Ldap

2010-06-06 Thread Javier de Miguel Rodríguez



Hello.

We are planning a migration from Sun One Messaging Server to 
Dovecot+Postfix+Ldap. We are using Dovecot 1.2.11 with Sun One Directory 
Server 5.2 ldap (we will migrate to Directory Server 6.3.1 soon). At our 
University we have 65.000 students, 5.500 staff and 6.500 teachers.


Our main ldap realm is dc=us,dc=es (us means University of Seville, 
Spain). We have two e-mail domains, @us.es (staff+teachers) and 
@alum.us.es (students). We use Sun One Directory Server to load data to 
our ldap from several sources (like Oracle databases, flat files, etc)


Our ldap tree is like this:

  dc=us,dc=es
|
|->ou=People,dc=us,dc=es   // "special" users only used by apps
|
|
|->o=us.es,dc=us,dc=es // ldap branch for staff+teachers
|
|
|->o=alum.us.es,dc=us,dc=es // ldap branch for students


A user id is unique, so there is only a "john_doe" in the ldap tree 
(I repeat, there is NOT uid=john_doe,o=us.es,dc=us,dc=es and 
uid=john_doe,o=alum.us.es,dc=us,dc=es). Below you will find a copy of 
the dovecot.conf and dovecot-ldap.conf.


Our operating system is Red Hat Enteprise Linux 5 x64.

These are our questions:

1) We are unable to make dbox work with quota, but we have no 
problem with maildir. Quota is stored in an ldap attribute called "mailQuota"


2) A user can be in different branches at the same time: for 
example, a teacher called pepito would be in the 
uid=pepito,o=us.es,dc=us,dc=es branch, but if that teacher is also a 
student he should have another ldap entry 
uid=pepitosurname,o=us.es,dc=us,dc=es. Our identity management is the 
piece of software that "promotes" a user in that case. How should we use 
"mail_location" to address this?


3) We are planning to use two raid5 arrays of 8 SAS 15.000 rpm disks for 
these mailboxes. We will use a "2.0, ajax-based webmail" like roundcube. 
Most of our users will use webmail (imap based). How many IOPS should we 
expect in that environment? We would like to use dbox, but we are stuck 
with maildir because of 1).


4)  Some users could have their mailboxes "disabled". We use the 
following line: user_filter = 
(&(objectClass=inetorgperson)(uid=%n)(mailUserStatus=active)) but it 
does NOT work as expected. Any idea?


5) We are planning to use bacula to back up user mailboxes. Any 
known problems with this? (I will ask on the bacula mailing list anyway.)


6) I have read carefully the performance pages in the dovecot wiki. Can I use 
noatime in /etc/fstab safely with dovecot? Any performance hints apart 
from what we already have in our config files?


7) When we set vacation messages they work but we see this error in 
the log:   dovecot: deliver(jorgelp): 
file_dotlock_create(~/.dovecot.lda-dupes) failed: No such file or directory


8) When a user logs in she uses her username "mary" (without @us.es 
or @alum.us.es). Our dovecot searches the whole ldap tree until it finds 
that uid. But we would like to store in our mail_location /buzones/us.es 
or /buzones/alum.us.es. How can we accomplish this? We should use the 
"upper branch name" as part of the mail_location.


Thank you for your support (and for your patience).

Regards

Javier










<-- Config files -->


dovecot.conf:

# 1.2.11: /etc/dovecot.conf
# OS: Linux 2.6.18-194.3.1.el5 i686 Red Hat Enterprise Linux Server 
release 5.5 (Tikanga) ext3

base_dir: /var/run/dovecot/
protocols: pop3 imap imaps pop3s managesieve
listen(default): *, [::]
listen(imap): *, [::]
listen(pop3): *, [::]
listen(managesieve): *:2000
login_dir: /var/run/dovecot//login
login_executable(default): /usr/libexec/dovecot/imap-login
login_executable(imap): /usr/libexec/dovecot/imap-login
login_executable(pop3): /usr/libexec/dovecot/pop3-login
login_executable(managesieve): /usr/libexec/dovecot/managesieve-login
login_max_processes_count: 2000
max_mail_processes: 2000
verbose_proctitle: yes
mail_uid: prueba
mail_gid: prueba
mail_location: maildir:/buzones/us.es/%M/%n/
fsync_disable: yes
mail_executable(default): /usr/libexec/dovecot/rawlog 
/usr/libexec/dovecot/imap

mail_executable(imap): /usr/libexec/dovecot/rawlog /usr/libexec/dovecot/imap
mail_executable(pop3): /usr/libexec/dovecot/pop3
mail_executable(managesieve): /usr/libexec/dovecot/managesieve
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(pop3): quota
mail_plugins(managesieve):
mail_plugin_dir(default): /usr/lib/dovecot/imap
mail_plugin_dir(imap): /usr/lib/dovecot/imap
mail_plugin_dir(pop3): /usr/lib/dovecot/pop3
mail_plugin_dir(managesieve): /usr/lib/dovecot/managesieve
pop3_enable_last(default): no
pop3_enable_last(imap): no
pop3_enable_last(pop3): yes
pop3_enable_last(managesieve): no
pop3_uidl_format(default): %08Xu%08Xv
pop3_uidl_format(imap): %08Xu%08Xv
pop3_uidl_format(pop3): %08Xv%08Xu
pop3_uidl_format(managesieve): %08Xu%08Xv
lda:
 

Re: [Dovecot] Is it possible to prevent users from ever deleting anything?

2010-03-24 Thread jose javier parra sanchez
Not sure about Dovecot, but with postfix you can make a copy of every
mail that gets in and out of the system and send it to a 'control'
account (see the sketch below).
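(A minimal sketch of the Postfix side of that; always_bcc is the stock 
main.cf parameter, and the address is a placeholder:)

# /etc/postfix/main.cf -- copy every message passing through the system
always_bcc = control@example.com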

2010/3/24 Snaky Love :
> Hi, dear dovecot-users,
>
> is it possible to make dovecot ignore the DELETE command for some accounts?
>
> Basically what I want to achieve is: users shall not be able to delete their
> emails - but they should still be able to write emails.
>
> This is meant for a setup where different users are sending and receiving
> emails for a "support" account.
>
> I looked into IMAP virtual folders and ACL but I did not see a way of
> totally disabling the possibility to delete emails and share the whole
> account.
>
> I realize there may be race conditions with a setup like this and a web
> based ticket system might be a better solution, but it is only a very small
> team and we can always talk to each other to resolve conflicts - so using
> only one mail account for support seems practical and could keep the
> overhead small - but we do not want to delete anything from it
> (accidentaly).
>
> Of course, at one point in future somebody has to delete some mails from
> this accounts to get rid of old stuff - so I would like to implement a
> super-user that is able to do that kind of mailbox maintenance.
>
> How would I use ACL to setup such a scenario? Is it even possible?
> Or did I misunderstand IMAP shared folders completely???
>
> Thank you very much for your attention!
>
> Have a nice day,
> Snaky
>


Re: [Dovecot] Combination of default domain and username character translation problem in POP3 server configuration

2009-11-19 Thread Javier Vico Egea
In that case it works perfect but my problem are all the users using the old
vm-pop3 configuration with XXX!mysecondarydomain.es

Thank you for your interest.

-Original Message-
From: Steffen Kaiser [mailto:skdove...@smail.inf.fh-brs.de]
Sent: Thursday, 19 November 2009 14:47
To: dovecot@dovecot.org
CC: dovecot@dovecot.org
Subject: Re: [Dovecot] Combination of default domain and username character
translation problem in POP3 server configuration


On Thu, 19 Nov 2009, Javier Vico Egea wrote:

> auth default:
>  default_realm: myprincipaldomain.es
>  username_chars:
> abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz01234567890...@!
>  username_translation: !@
>  passdb:
>driver: passwd-file
>args: /etc/virtual/%d/passwd

Hmm, what happens if you log in with:

pru...@mysecondarydomain.es

? Note the @

Does it work?

>  userdb:
>driver: static
>args: uid=500 gid=500 home=/var/spool/virtual/%d

Each user should have a unique home dir, I think.

Regards,

-- 
Steffen Kaiser



Re: [Dovecot] Combination of default domain and username character translation problem in POP3 server configuration

2009-11-19 Thread Javier Vico Egea
Here is the configuration:

# 1.0.7: /etc/dovecot.conf
protocols: pop3
listen: *:10100
login_dir: /var/run/dovecot/login
login_executable: /usr/libexec/dovecot/pop3-login
login_greeting: Bienvenido al servidor de correo.
login_log_format_elements: user=<%u> method=%m rip=%r lip=%l %c domain=%d
nombre=%d
mail_location: mbox:~/mail:INBOX=/var/spool/virtual/%d/%n
mail_debug: yes
mail_executable: /usr/libexec/dovecot/pop3
mail_plugin_dir: /usr/lib/dovecot/pop3
auth default:
  default_realm: myprincipaldomain.es
  username_chars:
abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz01234567890...@!
  username_translation: !@
  verbose: yes
  debug: yes
  debug_passwords: yes
  passdb:
driver: passwd-file
args: /etc/virtual/%d/passwd
  userdb:
driver: static
args: uid=500 gid=500 home=/var/spool/virtual/%d


-Original Message-
From: Steffen Kaiser [mailto:skdove...@smail.inf.fh-brs.de]
Sent: Thursday, 19 November 2009 11:35
To: dovecot@dovecot.org
CC: dovecot@dovecot.org
Subject: Re: [Dovecot] Combination of default domain and username character
translation problem in POP3 server configuration


On Thu, 19 Nov 2009, Vico wrote:

What's your configuration, dovecot -n ?

-- 
Steffen Kaiser



Re: [Dovecot] Detect client application

2009-10-16 Thread jose javier parra sanchez
Does the 'mobile device' have an application that supports imap/pop3
?, or are you talking about to access via webmail ?
2009/10/16 Thu NGUYEN :
>
> Hello,
>
> Do you have any way to detect the client which is connecting to our IMAP
> server?
>
> I actually have a mail server which uses dovecot, but I just want to allow
> mobile device access to this server to get email, not desktop clients such
> as Outlook, Thunderbird ...
>
> Thanks for your advice.
>
> --
> Regards,
> Thu NGUYEN
>
>
>
>


Re: [Dovecot] dovecot Digest, Vol 74, Issue 36

2009-06-10 Thread javier . garcia
Dear Sirs,

I am currently out of the office until June 22nd. For any urgent matter, 
please contact Ibercom directly, either by phone or through the support 
e-mail account.

Apologies for the inconvenience.
Regards,

Javier García.





Re: [Dovecot] dovecot Digest, Vol 74, Issue 35

2009-06-10 Thread javier . garcia
Dear Sirs,

I am currently out of the office until June 22nd. For any urgent matter, 
please contact Ibercom directly, either by phone or through the support 
e-mail account.

Apologies for the inconvenience.
Regards,

Javier García.





[Dovecot] Using account alias as login

2009-03-08 Thread Javier Amor García
Hello,
 I use dovecot with an LDAP backend for user accounts and aliases. The aliases
are objects of the class couriermailalias.
 Some users would like to use the alias address as the POP/IMAP login instead of
the 'true' account.
Is this possible?
I am using dovecot version 1.0.10, from ubuntu hardy packages.

Thanks for any answer,
  Javier


[Dovecot] Dovecot-auth timeouts

2008-12-23 Thread Javier Fox

Hello,

I've unfortunately been unable to find anything relating to the problem I'm 
having specifically, in searching the list or google, and so I now plead to you 
for assistance.

I'm running Dovecot as an LDA and SASL auth for Postfix on a Debian 4 box.  
Dovecot is version 1.0.rc15 (the official debian pkg version).

The problem I'm running into is this.  After some time of running (lately it's 
been as little as 5 minutes), I start to see the following errors in 
dovecot.log:

deliver(u...@domain.com): "Dec 23 14:38:47 "Error: User request from 
dovecot-auth timed out
deliver(anotheru...@domain.com): "Dec 23 14:38:48 "Error: User request from 
dovecot-auth timed out

Postfix responds to these by simply deferring the messages.  Dovecot itself, 
however, begins to return 'Authentication failed' messages after significant 
lag time (sometimes greater than 30s):

Connected to localhost.
Escape character is '^]'.
+OK Dovecot-POP
user username
+OK
pass mypassword
-ERR Authentication failed.

Now, for authentication, Dovecot is using LDAP on the local server.  The only 
additional information I can find pertaining to these errors is the following 
from slapd.log:

slapd[22593]: connection_input: conn=6 deferring operation: pending operations

These messages correspond 1-to-1 to the above 'deliver' errors, where 'conn' is 
always the same number.  Restarting dovecot and ldap resolves the issue for a 
few minutes, but sure enough the errors start flowing again.

I'm really at the end of my rope on this, as nothing I do seems to help.  I 
have a good 500+ customers being affected by this as well, and they're all none 
too pleased by it.  If this is something that will absolutely be resolved by 
upgrading from source, that is doable, but we'd prefer to stick with the 
official package version if possible.

Dovecot configs follow

Thanks,
J. Fox

- configs follow -

dovecot.conf

auth_verbose = yes
auth_debug = yes
auth_debug_passwords = yes
mail_debug = no

base_dir = /var/run/dovecot/
protocols = imap imaps pop3 pop3s
protocol lda {
 postmaster_address = postmas...@spiritone.com
 auth_socket_path = /var/run/dovecot/auth-master
 log_path = /var/log/dovecot.log
 info_log_path = /var/log/mail.info
 }
listen = *
shutdown_clients = yes
mmap_disable = yes
lock_method = dotlock
maildir_copy_with_hardlinks = no
log_path = /var/log/dovecot.log
info_log_path = /var/log/mail.log
log_timestamp = "%b %d %H:%M:%S "
syslog_facility = mail
auth_default_realm = involved.com
disable_plaintext_auth = no
ssl_cert_file = /etc/ssl/certs/dovecot.pem
ssl_key_file = /etc/ssl/private/dovecot.pem
login_chroot = yes
valid_chroot_dirs = /home/vmail/
login_user = postfix
login_process_per_connection = yes
login_processes_count = 2
login_max_processes_count = 64
login_max_connections = 128
login_greeting = Involved
login_log_format_elements = user=<%u> method=%m rip=%r lip=%l %c
login_log_format = %$: %s
default_mail_env = maildir:/home/vmail/domains/%d/%u
first_valid_uid = 103
pop3_uidl_format = %08Xu%08Xv
auth_cache_size = 10485760
auth_cache_ttl = 3600
auth_worker_max_count = 10
#auth_worker_max_request_count = 50
auth default {
   mechanisms = PLAIN LOGIN
   passdb ldap {
   args = /etc/dovecot/dovecot-ldap.conf
   }
   userdb ldap {
   args = /etc/dovecot/dovecot-ldap.conf
   }
   socket listen {
   master {
   path = /var/run/dovecot/auth-master
   mode = 0666
   user = vmail
   group = vmail
   }
   client {
   path = /var/spool/postfix/private/auth
   mode = 0660
   user = postfix
   group = postfix
   }
   }
   user = vmail
}


dovecot-ldap.conf
-
hosts = localhost
auth_bind = yes
auth_bind_userdn = cn=%n,ou=%d,ou=mail,dc=domain,dc=com
ldap_version = 3
base = ou=mail,dc=domain,dc=com
dn = cn=Manager,dc=domain,dc=com
dnpass = secret
deref = never
scope = subtree
pass_attrs = mail=user,userPassword=password
user_filter = (&(objectClass=VirtualMailAccount)(accountActive=TRUE)(mail=%u))
pass_filter = (&(objectClass=VirtualMailAccount)(accountActive=TRUE)(mail=%u))
user_global_uid = 1001
user_global_gid = 1001

---end---


Re: [Dovecot] Allow_nets + MySQL failing when using range notation

2008-05-14 Thread Javier García

Hello,

Just a couple of lines to definitely confirm that the issue is present 
*only* on the sparc version of Debian, while on a box equipped with an 
Intel Pentium processor everything works as expected.

Thanks a lot for your support.

Javier

Javier García wrote:

x86, mmm..., in fact, I am testing on a sparc box; I will retry on an x86
system as my production environment is x86. I cannot guess the effect of
an architecture change, but maybe some library implementations might
matter... I'll let you know.
In the meantime, some more details on the actual installation:

prisni:~# uname -a
Linux prisni 2.6.18-6-sparc64 #1 Tue Feb 12 21:51:30 UTC 2008 sparc64
GNU/Linux

The installed packages along with dovecot are, in short, postfix, horde,
apache2 and mysql.

PS: While awaiting moderator approval (this message was originally 
too big) I have made a small test on my production environment,


polifemo:~# uname -a
Linux polifemo 2.6.18-5-686 #1 SMP Mon Dec 24 16:41:07 UTC 2007 i686 
GNU/Linux


which seems to work. So we could probably restrict the issue to linux 
on sparc... quite a special scenario, in my opinion.


As this latest test is just preliminary, I'll let you know about 
further results.


Regards,
Javier

Timo Sirainen wrote:

On Wed, 2008-05-07 at 18:15 +0200, Javier García wrote:
 
I am afraid that I must come back with this issue. Following advice 
from the Debian package maintainers, I installed a backported 1.0.13 
version which keeps behaving wrongly. To be more specific:



Do you use x86 or something else? Just wondering if this could be
because of some endianess issue.

 
I wonder if this option is rare enough for this issue to have remained 
undiscovered through versions...



Could be, but it worked in my tests.. I guess it's also possible I
messed up my tests somehow.

  








Re: [Dovecot] Allow_nets + MySQL failing when using range notation

2008-05-08 Thread Javier García

x86, mmm..., in fact, I am testing on a sparc box; I will retry on an x86
system as my production environment is x86. I cannot guess the effect of
an architecture change, but maybe some library implementations might
matter... I'll let you know.
In the meantime, some more details on the actual installation:

prisni:~# uname -a
Linux prisni 2.6.18-6-sparc64 #1 Tue Feb 12 21:51:30 UTC 2008 sparc64
GNU/Linux

The installed packages along with dovecot are, in short, postfix, horde,
apache2 and mysql.

PS: While awaiting moderator approval (this message was originally 
too big) I have made a small test on my production environment,


polifemo:~# uname -a
Linux polifemo 2.6.18-5-686 #1 SMP Mon Dec 24 16:41:07 UTC 2007 i686 
GNU/Linux


which seems to work. So we could probably restrict the issue to linux on 
sparc... quite a special scenario, in my opinion.


As this latest test is just preliminary, I'll let you know about further 
results.


Regards,
Javier

Timo Sirainen wrote:

On Wed, 2008-05-07 at 18:15 +0200, Javier García wrote:
  
I am afraid that I must come back with this issue. Following advice from 
the Debian package maintainers, I installed a backported 1.0.13 version 
which keeps behaving wrongly. To be more specific:



Do you use x86 or something else? Just wondering if this could be
because of some endianess issue.

  
I wonder if this option is rare enough for this issue to have remained 
undiscovered through versions...



Could be, but it worked in my tests.. I guess it's also possible I
messed up my tests somehow.

  





Re: [Dovecot] Allow_nets + MySQL failing when using range notation

2008-05-07 Thread Javier García

Hello again,

I am afraid that I must come back with this issue. Following advice from 
the Debian package maintainers, I installed a backported 1.0.13 version 
which keeps behaving wrongly. To be more specific:


My software version is now:
prisni:/# dovecot --version
1.0.13

My debian packages, just to be redundant:
prisni:/# dpkg -l dovecot*
ii  dovecot-common  1.0.13-1~bpo40+1secure mail 
server that supports mbox and maildir mailboxes
ii  dovecot-imapd   1.0.13-1~bpo40+1secure IMAP 
server that supports mbox and maildir mailboxes
ii  dovecot-pop3d   1.0.13-1~bpo40+1secure POP3 
server that supports mbox and maildir mailboxes


A login attempt from one IP in the allowed network...
prisni:/etc/postfix# telnet 10.34.133.64 143
Trying 10.34.133.64...
Connected to prisni.tiscali.red.
Escape character is '^]'.
* OK Bienvenido a prisni.inicia.es.
001 login [EMAIL PROTECTED] password
001 NO Authentication failed.
002 logout
* BYE Logging out
002 OK Logout completed.
Connection closed by foreign host.

... fails :-(
dovecot: 2008-05-07 17:58:34 Info: auth-worker(default): 
sql([EMAIL PROTECTED],10.34.133.64): query: select pd.contrasena as password, 
pd.allow_nets from v_permisos_direcciones pd where ( pd.imap = 1 ) and 
pd.correo = '[EMAIL PROTECTED]'
dovecot: 2008-05-07 17:58:34 Info: auth-worker(default): 
auth([EMAIL PROTECTED],10.34.133.64): allow_nets: Matching for network 
10.34.133.0/24
dovecot: 2008-05-07 17:58:34 Info: auth-worker(default): 
passdb([EMAIL PROTECTED],10.34.133.64): allow_nets check failed: IP not in 
allowed networks
dovecot: 2008-05-07 17:58:35 Info: auth(default): client out: FAIL  
1   [EMAIL PROTECTED]
dovecot: 2008-05-07 17:58:37 Info: imap-login: user=<[EMAIL PROTECTED]>, 
method=PLAIN, rip=10.34.133.64, lip=10.34.133.64, secured: Aborted login 
(1 authentication attempts)


I wonder if this option is rare enough for this issue to have remained 
undiscovered through versions... Is there anyone out there using 
allow_nets in the same way as I am trying to do? Note that using a list 
of single IPs has always worked in my environment (see the sketch below).
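(For clarity, a sketch of the two allow_nets value forms being compared; the 
addresses are illustrative:)

# Works in this environment: a comma-separated list of single IPs
allow_nets=10.34.133.64,10.34.133.65
# Fails here (on sparc): CIDR range notation
allow_nets=10.34.133.0/24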


Thanks in advance,
Javier

Javier García wrote:

Hello,

Thanks Timo for the response. I will then ask the Debian package 
maintainers about this specific issue.


Regards,
Javier

Timo Sirainen wrote:

On Mon, 2008-03-31 at 12:56 +0200, Javier García wrote:
 

Hello all,

I am testing my dovecot installation in order to restrict access via 
POP3 for IPs outside my network. I have read and understood the 
instructions in the wiki and I have reached a configuration that 
works ONLY when single IPs are listed in allow_nets but not when 
ranges in the notation x.x.x.x/y are listed. Some examples should be 
more explanatory. I am using 1.0.rc15 patched as of last week, as 
distributed in Debian etch.



I don't see any obvious entries in ChangeLog related to this, but it
seems to work correctly in v1.0.13 and v1.1.rc4, so maybe it was just
broken in rc15.

  







Re: [Dovecot] Allow_nets + MySQL failing when using range notation

2008-04-25 Thread Javier García

Hello,

Thanks Timo for the response. I will then ask the Debian package 
maintainers about this specific issue.


Regards,
Javier

Timo Sirainen wrote:

On Mon, 2008-03-31 at 12:56 +0200, Javier García wrote:
  

Hello all,

I am testing my dovecot installation in order to restrict access via 
POP3 for IPs outside my network. I have read and understood the 
instructions in the wiki and I have reached a configuration that works 
ONLY when single IPs are listed in allow_nets but not when ranges in the 
notation x.x.x.x/y are listed. Some examples should be more explanatory. 
I am using 1.0.rc15 patched as of last week, as distributed in Debian etch.



I don't see any obvious entries in ChangeLog related to this, but it
seems to work correctly in v1.0.13 and v1.1.rc4, so maybe it was just
broken in rc15.

  




Re: [Dovecot] feature request: deny IP address via database

2008-04-08 Thread Javier García

Written by Bill Cole on Apr 7, 2008, at 4:58 PM:
 Hey folks.  One feature I'd really like to see in dovecot is the  
ability to point it at a database (with a configurable query) and  
have it allow or deny a connection based on looking up the source  
IP address in that database.


... much stuff discarded.

I understand that the behaviour requested is similar to that of allow_nets 
(http://wiki.dovecot.org/PasswordDatabase/ExtraFields/AllowNets), but modified 
to explicitly deny some IPs (individually or as ranges). If so, some of the 
work may already be done. Sorry, I do not have enough programming ability 
to take this on myself.

Incidentally, I would like to note that I opened a thread a few days ago 
regarding allow_nets and databases (Bill's request needs an external database 
too), because I am not able to make allow_nets work properly when using an 
external DB *and IP ranges*. Maybe Bill only needs to block single IPs, in 
which case this (possible) bug would not apply if an extension or adaptation 
of allow_nets is done. (My thread, in case anyone out there is curious: 
"Allow_nets + MySQL failing when using range notation".)
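
Just thinking aloud: with an SQL passdb, a crude deny list can already be 
emulated by checking against a table of banned addresses in password_query. 
A sketch only (banned_ips is a made-up table here, and I am assuming %r 
expands to the client IP, as in Dovecot's variable expansion):

password_query = SELECT u.password AS password FROM users u WHERE u.mail = '%u' AND NOT EXISTS (SELECT 1 FROM banned_ips b WHERE b.ip = '%r')

A connection from a banned IP then simply fails authentication, as if the 
user did not exist. Ranges, however, would still need the CIDR matching 
that Bill describes.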

Regards,

Javier



Re: [Dovecot] outlook2007 shows frequent imap disconnect no matter what outlook-idle setting in dovecot.conf

2008-03-31 Thread Javier García

Hello all,

I configured

imap_client_workarounds = outlook-idle

after a customer's complaint (Windows Vista, Outlook 2007), and after a 
week she reports that the issue has not recurred. So it has worked 
for me. I am running 1.0.rc15 (as in the original post) on a Debian 
etch box.
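
In case it helps anyone reproducing this: on 1.0.x the setting goes inside 
the imap protocol block of dovecot.conf (a minimal excerpt, with everything 
else left at its defaults):

protocol imap {
  imap_client_workarounds = outlook-idle
}

and dovecot needs a restart (or reload) for the change to take effect.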


Regards,
Javier

Message: 2
Date: Mon, 31 Mar 2008 07:34:00 +0200
From: "Kielbasiewicz, Peter" <[EMAIL PROTECTED]>
Subject: Re: [Dovecot] outlook2007 shows frequent imap disconnect no
matter what outlook-idle setting in dovecot.conf
To: "dovecot@dovecot.org" 


I updated to the latest rev from atrpms.net, which is 1.0.13, and I see the 
same behaviour.
Whether I set "imap_client_workarounds = outlook-idle" or 
"imap_client_workarounds =", I get those disconnect popups. It seems that the 
setting has no effect.

Has anyone had success applying this setting with Outlook 2007?

  Peter



  

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On
Behalf Of Samuel HAMEAU
Sent: Friday, 28 March 2008 11:11
Cc: dovecot@dovecot.org
Subject: Re: [Dovecot] outlook2007 shows frequent imap disconnect no
matter what outlook-idle setting in dovecot.conf

Scott Silva wrote:


on 3-26-2008 1:33 AM Kielbasiewicz, Peter spake the following:
  

Hello,
I have seen this problem quite often in various posts but could not find a
real solution.
The problem is that I frequently get annoying popups from outlook 2007
about imap idle timeouts, which block working with outlook until the
popup is acknowledged.

The popup says:
 Your IMAP server closed the connection. This can occur if you leave
the connection idle for too long.
 Protocol: IMAP
 Server: testhost
 Port: 143
 Error Code: 0x800CCCDD

I tried to run dovecot with or without  "imap_client_workarounds =
outlook-idle" but do not see any difference.
I am running dovecot on Red Hat Enterprise Linux Client release 5.1
(Tikanga).
Below is the output of some commands to get a clue about my config.

Background info:
Our primary mailboxes are on exchange servers.
As there is a size limit on the server mailboxes I am evaluating a
local storage solution for our users.
Thus I set up an additional mailbox in outlook using IMAP.
Local .pst files can not be backed up easily (especially when outlook
is running), so I am investigating a local IMAP server that users can
move their mails to. This server can easily be backed up (even
incrementally when using Maildir), and users always have access to
their mails.
The drawback is that users must have an extra login to the imap
server, because I have no ldap access to the company AD server from our
local site domain.
The solution will not be accepted, though, if I can not prevent the
frequent disconnect popups.

Peter


#dovecot --version
1.0.rc15


First try upgrading to a more current version. 1.0.rc15 has got to be a
year old by now. You can get a newer rpm at atrpms.net.

I have the same problem with dovecot 1.0.10: disconnection popups
with Outlook 2003/2007, with this kind of line in the logs:
dovecot: 2008-03-26 15:16:01 Info: IMAP(user): Connection closed:
Connection reset by peer

sam








[Dovecot] Allow_nets + MySQL failing when using range notation

2008-03-31 Thread Javier García

Hello all,

I am testing my dovecot installation in order to restrict access via 
POP3 for IPs outside my network. I have read and understood the 
instructions in the wiki and I have reached a configuration that 
works ONLY when single IPs are listed in allow_nets, but not when 
ranges in the notation x.x.x.x/y are listed. Some examples should make 
this clearer. I am using 1.0.rc15, patched as of last week, as 
distributed in Debian etch.


First of all, everything related to this is stored in a MySQL database, 
here is my password query:


password_query = SELECT u.password as password, t.allow_nets as 
allow_nets FROM users u, access_type t WHERE u.ID_access_type = 
t.ID_access and ( t.%Ls = 1 ) and u.mail = '%u'


This one should validate a mail address when the protocol used is 
marked as 1 in table access_type and when the allow_nets value in that 
same table contains the IP used for the access request. Then, if 
access_type looks like:


ID_access   pop3   imap   allow_nets
3           0      1      10.34.128.0/23, 10.34.133.0/24, 192.168.0.0/24


users with ID_access=3 fail to login by either pop3 (normal, value is 0) 
or imap. Here is the corresponding excerpt from dovecot.log:


dovecot: 2008-03-31 11:29:04 Info: auth-worker(default): 
sql([EMAIL PROTECTED],10.34.133.104): query: SELECT u.password as 
password, t.allow_nets as allow_nets FROM users u, access_type t WHERE 
u.ID_access_type = t.ID_access and ( t.imap = 1 ) and u.mail = 
'[EMAIL PROTECTED]'
dovecot: 2008-03-31 11:26:39 Info: auth-worker(default): 
auth([EMAIL PROTECTED],10.34.133.104): allow_nets: Matching for network 
192.168.0.0/24
dovecot: 2008-03-31 11:26:39 Info: auth-worker(default): 
auth([EMAIL PROTECTED],10.34.133.104): allow_nets: Matching for network 
10.34.128.0/23
dovecot: 2008-03-31 11:26:39 Info: auth-worker(default): 
auth([EMAIL PROTECTED],10.34.133.104): allow_nets: Matching for network 
10.34.133.0/23
dovecot: 2008-03-31 11:26:39 Info: auth-worker(default): 
passdb([EMAIL PROTECTED],10.34.133.104): allow_nets check failed: IP not 
in allowed networks


but if it looks like

ID_access   pop3   imap   allow_nets
3           0      1      10.34.133.105, 10.34.133.104


then access is allowed via IMAP:

dovecot: 2008-03-31 11:34:01 Info: auth-worker(default): 
sql([EMAIL PROTECTED],10.34.133.104): query: SELECT u.password as 
password, t.allow_nets as allow_nets FROM users u, access_type t WHERE 
u.ID_access_type = t.ID_access and ( t.imap = 1 ) and u.mail = 
'[EMAIL PROTECTED]'
dovecot: 2008-03-31 11:34:01 Info: auth-worker(default): 
auth([EMAIL PROTECTED],10.34.133.104): allow_nets: Matching for network 
10.34.133.105
dovecot: 2008-03-31 11:34:01 Info: auth-worker(default): 
auth([EMAIL PROTECTED],10.34.133.104): allow_nets: Matching for network 
10.34.133.104
dovecot: 2008-03-31 11:34:01 Info: auth(default): client out: OK
1   [EMAIL PROTECTED]


while POP3 is still disallowed, as expected:

dovecot: 2008-03-31 11:34:25 Info: auth-worker(default): 
sql([EMAIL PROTECTED],10.34.133.104): query: SELECT u.password as 
password, t.allow_nets as allow_nets FROM users u, access_type t WHERE 
u.ID_access_type = t.ID_access and ( t.pop3 = 1 ) and u.mail = 
'[EMAIL PROTECTED]'
dovecot: 2008-03-31 11:34:25 Info: auth-worker(default): 
sql([EMAIL PROTECTED],10.34.133.104): unknown user


So, is there a bug related to the CIDR range notation, or am I doing 
something wrong? I have tried leaving only a single network 
(10.34.133.0/24) and explicitly erasing any spaces after the commas, 
but none of these worked. Also, note that using 0.0.0.0/0 behaves as 
expected, that is, access is allowed from any IP.
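
For anyone hitting the same thing, a possible interim workaround (a sketch 
only, untested here) is to skip allow_nets entirely and do the network check 
inside MySQL, where INET_ATON plus a netmask handles the /24 case. The mask 
0xFFFFFF00 below is the 32-bit netmask for /24, and %r is assumed to expand 
to the client IP (all on one line in dovecot-sql.conf):

password_query = SELECT u.password AS password FROM users u, access_type t WHERE u.ID_access_type = t.ID_access AND ( t.%Ls = 1 ) AND u.mail = '%u' AND (INET_ATON('%r') & 0xFFFFFF00) = INET_ATON('10.34.133.0')

This hardcodes one network per query, though, so it is only a stopgap until 
the range parsing is fixed.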


Thanks in advance,

Javier




Re: [Dovecot] How do I check the release #?

2007-03-28 Thread Javier Henderson


On Mar 28, 2007, at 5:04 PM, Randolph Kahle wrote:

I am a dovecot newbie and I am setting up a machine that already  
has dovecot installed.


Since bugs are being resolved rapidly, it appears that I need to  
know the release version of dovecot I am running.


Is there a command I can issue or a file I can examine to determine  
the release version of my installation?


At the shell prompt:

dovecot --version

(assuming dovecot is in your path; otherwise something like
/usr/local/bin/dovecot --version, etc.)


-jav




Re: [Dovecot] M-Box benchmark

2007-03-17 Thread Javier Henderson


On Mar 17, 2007, at 2:02 PM, Jonathan Stewart wrote:


Maykel Moya wrote:
A friend of mine passed me this[1] cause I'm recommending him  
Dovecot.

[snip]

[1] http://www.isode.com/whitepapers/mbox-benchmark.html


My first questions about this test: what version of dovecot was 
used, and did they take into account the fact that dovecot has to build 
indexes? A 10s ramp-up time seems rather short for that. What kind 
of auth backend was used for each product, and did they even attempt 
any kind of performance tuning on anything other than their own 
product? This "whitepaper" comes across as extremely biased and 
very short on important information.


It seems to me that it was a vendor-sponsored test, so one would  
think they designed the testing criteria to favor their own product.


-jav