Re: Question about doveadm altmove

2021-04-05 Thread Javier Miguel Rodríguez
Any update on this? Should I file a bug report for doveadm altmove -r
not working?


Regards


Javier

On 28/03/2021 at 18:00, JAVIER MIGUEL RODRIGUEZ wrote:

Any update on this? Does Dovecot 2.3.14 make doveadm altmove -r functional?


*From:* dovecot on behalf of María Arrea
*Sent:* Monday, March 22, 2021 3:15:13 PM
*Cc:* dovecot@dovecot.org
*Subject:* Re: Question about doveadm altmove
The zlib plugin, as far as I know, only supports zstd, gzip, bzip2 and
lzma/xz compression. The last one is being deprecated.
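For reference, a minimal zlib-plugin configuration sketch for writing new mails with zstd instead of xz (the setting names exist in Dovecot 2.3.11+; the compression level is illustrative):

```conf
# Enable the zlib plugin and compress newly saved mails with zstd.
mail_plugins = $mail_plugins zlib

plugin {
  zlib_save = zstd        # or: gz, bz2 (xz is deprecated)
  zlib_save_level = 3     # illustrative; zstd accepts higher levels too
}
```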

I have found this interesting post in the mailing list:
https://dovecot.org/pipermail/dovecot/2021-February/121329.html
Same problem here with Dovecot 2.3.13, "doveadm altmove -r" is not
moving anything from alternate to default storage. I fixed this by
reverting this commit:

https://github.com/dovecot/core/commit/2795f6183049a8a4cc489869b3e866dc20a8a732  

Is this fixed in 2.3.14? Does doveadm altmove -r work as expected in
2.3.14?

Regards
*Sent:* Sunday, March 21, 2021 at 11:28 PM
*From:* "justina colmena ~biz" 
*To:* dovecot@dovecot.org
*Subject:* Re: Question about doveadm altmove
On Sunday, March 21, 2021 12:16:28 PM AKDT María Arrea wrote:
> Hello.
>
> We are running dovecot 2.3.13. Full doveconf -n output below
>
> In 2.3.14 Changelog I found this:
>
> * Remove XZ/LZMA write support. Read support will be removed in a future
> release.
>
> We are using mdbox + XZ/LZMA for alternate storage (messages older than 2
> weeks are moved to ALT storage via cron job), so we must convert from XZ to
> another thing (maybe zstd or bz2).
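A sketch of the kind of cron job described above (the `doveadm altmove` syntax with `-A` for all users and the `savedbefore 2w` search key is the documented form; treat the exact crontab schedule as illustrative):

```shell
# Build the nightly command that moves mails older than two weeks
# to alternate storage for all users (-A).
SAVEDBEFORE="2w"
ALTMOVE_CMD="doveadm altmove -A savedbefore ${SAVEDBEFORE}"

# In a crontab this would run nightly, e.g.:
#   0 3 * * * doveadm altmove -A savedbefore 2w
echo "${ALTMOVE_CMD}"
```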

Why can't you just pipe the output of the "doveadm altmove" command through
an external command to do the XZ/LZMA compression if dovecot no longer
supports it internally?

From doveadm-altmove(1):
> This command can be used with sdbox or mdbox storage to move mails to
> alternative storage path when :ALT= is specified for the mail location.

And that's set in stone.

https://en.wikipedia.org/wiki/XZ_Utils 



So what are the issues with xz? Security? Crashes or viruses on expanding
invalid archives?


Re: [Dovecot-news] Headsup on feature removal

2020-04-17 Thread Javier Miguel Rodríguez

Hello Aki

Can you elaborate on the memory management issues in liblzma & dovecot?

Regards

On 19/03/2020 at 20:07, Aki Tuomi wrote:


After discussing it internally, we decided to postpone the xz removal for the 
time being. We understand the complexity of migrating away from it, so we want 
to give more time to do that.
However, beware that there are memory management issues in liblzma and we 
consider it unsafe to use. Feel free to use any of the other supported 
compression algorithms instead. (We are also adding zstandard support in 2.3.11.)




Re: [Dovecot-news] Headsup on feature removal

2020-03-18 Thread Javier Miguel Rodríguez
xz compression support for mdbox is used extensively here. Why are
you planning to remove it?


On 17/03/2020 at 7:50, Aki Tuomi wrote:

Hi!

Dovecot is now a nearly 20 year old product, and during that time it has 
accumulated many different features and plugins in its core repository.

We are starting to gradually remove some of these parts, which are unused, 
untested or deprecated.
We will provide advance notification before removing anything.

To start, the following features are likely to be removed in the next few 
releases of Dovecot.

  - Authentication drivers: vpopmail, checkpassword, bsdauth, shadow, sia
  - Password schemes: HMAC-MD5, RPA, SKEY, PLAIN-MD4, LANMAN, NTLM, SMD5
  - Authentication mechanisms: ntlm, rpa, skey
  - Dict drivers: memcached, memcached-ascii (use redis instead)
  - postfix postmap support
  - autocreate & autosubscribe plugins (use built-in auto=create/subscribe 
setting instead)
  - expire plugin (use built-in autoexpunge setting)
  - fts-squat plugin
  - mailbox alias plugin
  - mail-filter plugin
  - snarf plugin
  - xz compression algorithm
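As an illustration of the suggested built-in replacement for the expire plugin, autoexpunge is configured per mailbox (the setting name is real; the mailbox names and the 30-day retention value are just examples):

```conf
# Replaces the expire plugin: expunge old mails automatically per mailbox.
namespace inbox {
  mailbox Trash {
    autoexpunge = 30d   # illustrative retention period
  }
  mailbox Junk {
    autoexpunge = 30d
  }
}
```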

For the authentication drivers that are being removed, we suggest using Lua as 
a replacement. See
https://doc.dovecot.org/configuration_manual/authentication/lua_based_authentication/

For information about converting between password schemes, see
https://wiki2.dovecot.org/HowTo/ConvertPasswordSchemes

If you are using any of these features, please start preparing for their 
removal in the near
future. Features will begin to be dropped as of v2.3.11.

Additionally, the mbox format will no longer receive new development. It will 
still be
maintained, however its use beyond migrations and other limited use cases will 
be discouraged.

Please contact us via the mailing list if you have any questions.

Regards,
Dovecot Team

___
Dovecot-news mailing list
dovecot-n...@dovecot.org
https://dovecot.org/mailman/listinfo/dovecot-news


Re: [Dovecot] xz compression

2014-04-03 Thread Javier Miguel Rodríguez


On 03/04/2014 16:28, T.B. wrote:

Hello Timo,

I've successfully set up xz compression for my Dovecot installation 
using version 2.2.12 from Debian unstable.


Read the man page of xz(1). With the -9 compression level, 674 MiB of 
RAM are needed. If you use dovecot+xz, you really need to increase vsz_limit.


Personally, I would not use xz -9 for main storage on a busy 
site. If you get 20+ messages/second you need a lot of RAM just for 
compression. I would use xz -9 for alternate storage, though.
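A sketch of the vsz_limit adjustment being suggested (the service block and setting name are real Dovecot configuration; the 1 GB value is an illustrative guess sized to leave headroom over xz -9's ~674 MiB):

```conf
# Raise the per-process address-space limit so xz -9's memory use
# does not hit the default cap.
service imap {
  vsz_limit = 1G   # illustrative value
}
```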


Regards

Javier

--
Teaching and Research Support
Computing and Communications Service

Incident management: https://sicremedy.us.es/arsys


Re: [Dovecot] Strange Dovecot 2.0.20 auth chokes and cores

2012-05-30 Thread Javier Miguel Rodríguez
 

There is a known problem with epoll, at least on Red Hat / CentOS;
this bugzilla may give you additional info (comments from Timo inside):

https://bugzilla.redhat.com/show_bug.cgi?id=681578

Regards

Javier


On 30/05/2012 17:45, Konrad . wrote:

When we upgraded our kernels from 2.6.32.2 to 3.2.16 something strange
happened. On high traffic, dovecot/auth looks like it is not responding.

We found a lot of these lines in the log:
dovecot: pop3-login: Error: net_connect_unix(pop3) failed: Resource
temporarily unavailable
(...) and clients stop authorizing

Some other errors follow in the wake of:
dovecot: pop3: Error: Raw backtrace:
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x373ca) [0x7768a3ca] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3743b) [0x7768a43b] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7766048b] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x4593a) [0x7769893a] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xaf) [0x7769757f] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0x19a) [0x77683c2a] ->
dovecot/pop3(main+0xfc) [0x804a90c] ->
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x774c04d3] ->
dovecot/pop3() [0x804aba9]
dovecot: pop3: Error: Raw backtrace:
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x373ca) [0x7768a3ca] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3743b) [0x7768a43b] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7766048b] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x4593a) [0x7769893a] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xaf) [0x7769757f] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0x19a) [0x77683c2a] ->
dovecot/pop3(main+0xfc) [0x804a90c] ->
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x774c04d3] ->
dovecot/pop3() [0x804aba9]
dovecot: master: Error: service(pop3): child 18756 killed with signal
6 (core dumped)
dovecot: master: Error: service(pop3): child 18756 killed with signal
6 (core dumped)
dovecot: master: Error: service(pop3): command startup failed, throttling
dovecot: master: Error: service(pop3): command startup failed, throttling
dovecot: pop3-login: Error: Raw backtrace:
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x373ca) [0x776b73ca] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3743b) [0x776b743b] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7768d48b] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x4593a) [0x776c593a] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xaf) [0x776c457f] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0x19a) [0x776b0c2a] ->
/opt/dovecot2/lib/dovecot/libdovecot-login.so.0(main+0x143) [0x77705383] ->
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x774ed4d3] ->
dovecot/pop3-login() [0x8049471]
dovecot: pop3-login: Error: Raw backtrace:
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x373ca) [0x776fd3ca] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3743b) [0x776fd43b] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x776d348b] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x4593a) [0x7770b93a] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(io_add+0xaf) [0x7770a57f] ->
/opt/dovecot2/lib/dovecot/libdovecot.so.0(master_service_init_finish+0x19a) [0x776f6c2a] ->
/opt/dovecot2/lib/dovecot/libdovecot-login.so.0(main+0x143) [0x7774b383] ->
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x775334d3] ->
dovecot/pop3-login() [0x8049471]
 
And an example stack trace (from pop3; pop3-login throws almost the same):

#0  0x776f6424 in __kernel_vsyscall ()
No symbol table info available.
#1  0x7744d1ef in __GI_raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
        resultvar = 
        resultvar = 
        pid = 2002518004
        selftid = 25476
#2  0x77450835 in __GI_abort () at abort.c:91
        save_stage = 2
        act = {__sigaction_handler = {sa_handler = 0x9bce4a8,
          sa_sigaction = 0x9bce4a8}, sa_mask = {__val = {163409408, 2002781570,
          163374248, 603, 163374280, 604, 163374280, 2001703379, 0, 2002790760,
          2140717252, 163374280, 0, 2003786736, 2002596704, 0, 2002618953,
          2003087348, 2140717196, 0, 163409408, 2001286473, 163374248, 10,
          2000834616, 2002534400, 604, 2003087348, 604, 2002791863, 4294967295,
          10}}, sa_flags = 2140717316,
          sa_restorer = 0x7764bd84 }
        sigs = {__val = {32, 0 }}
#3  0x77603390 in default_fatal_finish (type=, status=) at failures.c:187
        backtrace = 0x9bce098
        /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x3837a) [0x7760337a] ->
        /opt/dovecot2/lib/dovecot/libdovecot.so.0(+0x383eb) [0x776033eb] ->
        /opt/dovecot2/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x775d877a...
#4  0x776033eb in i_internal_fatal_handler (ctx=0x7f98c174,
        format=0x9bd6db0 (25476) epoll_ctl(%s, %d) failed: %m,
        args=0x7f98c194 \323bw06) at failures.c:688
        status = 0
#5  0x775d877a in i_panic (format=0x7762d364 epoll_ctl(%s, %d)
        failed: %m) at failures.c:263
        ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0}
        args = 0x7f98c194 \323bw06
        f = 
 

Re: [Dovecot] Very High Load on Dovecot 2 and Errors in mail.err.

2012-05-20 Thread Javier Miguel Rodríguez
 

I know that you are NOT running RHEL / CentOS, but this problem with
more than 1000 child processes bit us hard; read this Red Hat kernel
bugzilla (Timo has comments inside):

https://bugzilla.redhat.com/show_bug.cgi?id=681578

Maybe you are hitting the same limit?

Regards

Javier

On 20/05/2012 11:59, Urban Loesch wrote:

On 19.05.2012 21:05, Timo Sirainen wrote:

> On Wed, 2012-05-16 at 08:59 +0200, Urban Loesch wrote:
>> The server was running about 1 year without any problems. 15-min load was
>> between 0.5 and max 8. No high IOWAIT. CPU idle time about 98%.
>> ..
>> # iostat -k Linux 3.0.28-vs2.3.2.3-rol-em64t (mailstore4) 16.05.2012 _x86_64_ (24 CPU)
> Did you change the kernel just before it broke? I'd try another version.

The first time it broke was with kernel 2.6.38.8-vs2.3.0.37-rc17.
Then I tried it with 3.0.28 and it broke again.
On Friday evening I disabled the cgroup feature completely and until now
it seems to work normally. But this could be because it is the weekend and
there are not many connections active. So I have to wait until Monday.
If it happens again I will try version 3.2.17.

On the other side it could be that the server is overloaded, because this
problem happens only when there are more than 1000 tasks active. That sounds
strange to me, because it had been working without problems for a year
and we made no changes. Also there were almost always more than 1000 tasks
active over the last year and we had no problems.

thanks
Urban

 

Re: [Dovecot] index IO patterns

2012-05-11 Thread Javier Miguel Rodríguez
 

Indexes are very random, mostly read, some writes if using
dovecot-lda (e.g. dbox). The average size is rather small, maybe 5 KB in
our setup. Bandwidth is rather low, 20-30 MB/sec.

We are using HP LeftHand for our replicated storage needs.

Regards

Javier

On 11/05/2012 08:41, Cor Bosman wrote:

Hey all, we're in the process of checking out alternatives to our index
storage. We're currently storing indexes on a NetApp Metrocluster which
works fine, but is very expensive. We're planning a few different setups
and doing some actual performance tests on them.

Does anyone know some of the IO patterns of the indexes? For instance:

- mostly random reads or linear reads/writes?
- average size of reads and writes?
- how many read/writes on average for a specific mailbox size?

Anyone do any measurements of this kind?

Alternatively, does anyone have any experience with other redundant
storage options? I'm thinking things like MooseFS, DRBD, etc?

regards,

Cor

 

Re: [Dovecot] Problem about dovecot Panic

2012-03-29 Thread Javier Miguel Rodríguez
  

We had the same problem. Reboot with an older kernel
(2.6.18-274.17.1.el5 works for us). It is a known bug of RHEL; see this
bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=681578

Regards

Javier

On Thu, 29 Mar 2012 10:15:32 +0200 (CEST), FABIO FERRARI wrote:

Good morning,
we have 2 Redhat Enterprise 5.7 machines, they are a cluster with some
mail services in it (postfix and dovecot 2).

The version of dovecot is dovecot-2.0.1-1_118.el5 (installed via rpm).

From last week we have this dovecot problem: suddenly dovecot doesn't
accept any new connections, the dovecot.log file reports lines like these

Mar 15 12:38:54 secchia dovecot: imap: Panic: epoll_ctl(add, 5) failed:
Invalid argument
Mar 15 12:38:54 secchia dovecot: imap: Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0 [0x36ea436de0] ->
/usr/lib64/dovecot/libdovecot.so.0 [0x36ea436e3a] ->
/usr/lib64/dovecot/libdovecot.so.0 [0x36ea4362e8] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handle_add+0x118) [0x36ea441498] ->
/usr/lib64/dovecot/libdovecot.so.0(io_add+0x8f) [0x36ea440b7f] ->
/usr/lib64/dovecot/libdovecot.so.0(master_service_init_finish+0x1c6) [0x36ea430c16] ->
dovecot/imap(main+0x10a) [0x41773a] ->
/lib64/libc.so.6(__libc_start_main+0xf4) [0x36ea01d994] ->
dovecot/imap [0x408179]
Mar 15 12:38:54 secchia dovecot: master: Error: service(imap): child 14514
killed with signal 6 (core dumps disabled)
Mar 15 12:38:54 secchia dovecot: master: Error: service(imap): command
startup failed, throttling
Mar 15 12:39:50 secchia dovecot: imap-login: Error: master(imap): Auth
request timed out (received 0/12 bytes)
Mar 15 12:39:51 secchia dovecot: imap-login: Error: master(imap): Auth
request timed out (received 0/12 bytes)
Mar 15 12:39:51 secchia dovecot: imap-login: Error: master(imap): Auth
request timed out (received 0/12 bytes)
Mar 15 12:39:52 secchia dovecot: imap-login: Error: net_connect_unix(imap)
failed: Resource temporarily unavailable
Mar 15 12:39:52 secchia dovecot: imap-login: Error: net_connect_unix(imap)
failed: Resource temporarily unavailable
Mar 15 12:39:52 secchia dovecot: imap-login: Error: master(imap): Auth
request timed out (received 0/12 bytes)
Mar 15 12:39:53 secchia dovecot: imap-login: Error: net_connect_unix(imap)
failed: Resource temporarily unavailable
Mar 15 12:39:53 secchia dovecot: imap-login: Error: net_connect_unix(imap)
failed: Resource temporarily unavailable
Mar 15 12:39:54 secchia dovecot: imap-login: Error: net_connect_unix(imap)
failed: Resource temporarily unavailable
Mar 15 12:39:54 secchia dovecot: imap: Error: Login client disconnected
too early
Mar 15 12:39:54 secchia dovecot: imap: Error: Login client disconnected
too early
Mar 15 12:39:54 secchia dovecot: imap: Error: Login client disconnected
too early
Mar 15 12:39:54 secchia dovecot: imap: Error: Login client disconnected
too early
Mar 15 12:39:55 secchia dovecot: imap: Panic: epoll_ctl(add, 5) failed:
Invalid argument

and the kern.log file reports

Mar 15 12:38:52 secchia kernel: dlm: closing connection to node 1
Mar 15 12:39:04 secchia kernel: lpfc :83:00.0: 1:(0):2753 PLOGI
failure DID:010400 Status:x9/x32900
Mar 15 12:39:04 secchia kernel: lpfc :03:00.0: 0:(0):2753 PLOGI
failure DID:010400 Status:x9/x32900
Mar 15 12:41:14 secchia kernel: lpfc :03:00.0: 0:(0):2753 PLOGI
failure DID:010400 Status:x9/x32900
Mar 15 12:41:15 secchia kernel: lpfc :83:00.0: 1:(0):2753 PLOGI
failure DID:010400 Status:x9/x32900
Mar 15 12:42:11 secchia kernel: dlm: got connection from 1

can you help us?

thanks in advance

Fabio Ferrari
 

Re: [Dovecot] Recalculate quota when quota=dict ?

2012-02-20 Thread Javier Miguel Rodríguez
I have seen this behaviour with a local ext4 iSCSI filesystem. When the 
system is hammered by I/O (for example, performing a full backup), I also 
see those messages in the log.


Regards

Javier



On 17.2.2012, at 11.51, jos...@hybrid.pl wrote:


By the way: what might have caused such a warning?

r...@mail2.hybrid.pl /tmp/transfer  doveadm quota recalc -u jos...@hybrid.pl
doveadm(jos...@hybrid.pl): Warning: Created dotlock file's timestamp is 
different than current time (1329464622 vs 1329464672): 
/var/mail/mail/hybrid.pl/joshua/.mailing.ekg/dovecot-uidlist

Does it keep happening? Is this a local filesystem or NFS? Shouldn't happen 
unless remote storage server's clock and local server's clock aren't synced.





[Dovecot] Question about mdbox alt storage in Dovecot 2.0

2012-02-12 Thread Javier Miguel Rodríguez
  

Hello

Reading the 2.1rc6 changelog I see this:

 mdbox: When saving to alt storage, Dovecot didn't append as much
 data to m.* files as it could have.

Could you elaborate more on this? Has it been ported to Dovecot 2.0?

Regards

Javier

On Sun, 12 Feb 2012 23:01:10 +0200, Timo Sirainen wrote:

http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc6.tar.gz [1]
http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc6.tar.gz.sig [2]

I've finally finished all of my email backlog. If you haven't received
an answer to some question/bugreport, resend the mail.

This is hopefully the last v2.1 RC. If I don't receive any (serious) bug
reports about this release in the next few days, I'll just change the
version number to v2.1.0 (and maybe update man pages, some are still
missing..)

I'll also create the dovecot-2.2 hg repository today and add some pending
patches from Stephan there and start doing some early spring cleaning in
there. :)

Since v2.1.rc5 there have been lots of small fixes and logging
improvements, but I also did a few bigger things since they really had to
be done soon and I didn't want the v2.2.0 release to be only a few months
after v2.1.0 with barely any new features.

* Added automatic mountpoint tracking and doveadm mount commands to
  manage the list. If a mountpoint is unmounted, error handling is
  done by assuming that the files are only temporarily lost. This is
  especially helpful if dbox alt storage becomes unmounted.
* Expire plugin: Only go through users listed by userdb iteration.
  Delete dict rows for nonexistent users, unless
  expire_keep_nonexistent_users=yes.
* LDA's out-of-quota mails now include a DSN report instead of an MDN.

+ LDAP: Allow building passdb/userdb extra fields from multiple LDAP
  attributes by using %{ldap:attributeName} variables in the template.
+ doveadm log errors shows the last 1000 warnings and errors since
  Dovecot was started.
+ Improved multi-instance support: Track automatically which instances
  are started up and manage the list with doveadm instance commands.
  All Dovecot commands now support an -i parameter to select the
  instance (instead of having to use -c ). See the instance_name setting.
+ doveadm mailbox delete: Added -r parameter to delete recursively
+ doveadm acl: Added add and remove commands.
+ Updated to Unicode v6.1
- mdbox: When saving to alt storage, Dovecot didn't append as much
  data to m.* files as it could have.
- dbox: Fixed error handling when saving failed or was aborted
- IMAP: Using COMPRESS extension may have caused assert-crashes
- IMAP: THREAD REFS sometimes returned invalid (0) nodes.
- dsync: Fixed handling non-ASCII characters in mailbox names.

___
Dovecot-news mailing list
dovecot-n...@dovecot.org [3]
http://dovecot.org/cgi-bin/mailman/listinfo/dovecot-news [4]



Links:
--
[1] http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc6.tar.gz
[2] http://dovecot.org/releases/2.1/rc/dovecot-2.1.rc6.tar.gz.sig
[3] mailto:dovecot-n...@dovecot.org
[4] http://dovecot.org/cgi-bin/mailman/listinfo/dovecot-news


Re: [Dovecot] Performance of Maildir vs sdbox/mdbox

2012-01-18 Thread Javier Miguel Rodríguez


Spanish edu site here: 80k users, 4.5 TB of email, 6,000 IOPS 
(indexes) + 9,000 IOPS (mdboxes) in working hours here.


We evaluated mdbox against Maildir and we found that with these 
settings dovecot 2 performs better than Maildir:


mdbox_rotate_interval = 1d
mdbox_rotate_size = 60m
zlib_save_level = 9 # 1..9
zlib_save = gz # or bz2

We detected 40% fewer IOPS with this setup *in working hours (more 
info below)*. Zlib saved some writes (15-30%). With mdbox, deletion of a 
message is written to indexes (use SSD for this), and a nightly cronjob 
deletes the real message from the mdbox; this saves us some IOPS in 
working hours. Also, backup software is MUCH happier handling hundreds 
of thousands of files (mdbox) versus tens of millions (maildir).


Mdbox also has drawbacks: you have to be VERY careful with your 
indexes, as they contain data that can not be rebuilt from mdboxes. The 
nightly cronjob purging the mdboxes hammers the SAN. Full backup time 
is reduced, but incremental backup space & time increase: if you delete 
a message, after purging it from the mdbox the mdbox file changes 
(size and date), so the incremental backup has to copy it again.


Regards

Javier





Re: [Dovecot] resolve mail_home ?

2012-01-17 Thread Javier Miguel Rodríguez
That command/parameter would be great for our backup scripts in our 
hashed mdbox tree; we are using slocate now...


Regards

Javier





Nope..

Maybe a new command, or maybe a parameter to doveadm user that would
show mail_uid/gid/home. Or maybe something that dumps config output with
%vars expanded to the given user. Hmm.
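For what it's worth, later Dovecot releases did grow something along these lines: `doveadm user -f <field> <username>` prints a single resolved userdb field such as home. A sketch of the lookup (the command form follows doveadm-user(1); the username is a placeholder):

```shell
# Compose the lookup that would print only the resolved home directory
# for one user (the username is illustrative).
USER="jdoe@example.com"
LOOKUP_CMD="doveadm user -f home ${USER}"
echo "${LOOKUP_CMD}"
```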





Re: [Dovecot] Outlook Calendar Connector Question

2011-04-28 Thread Javier Miguel Rodríguez


With Funambol (open source) you can connect your 
PDAs/iPhone/Outlook to have centralized calendars and contacts.


You can also read about DAViCal.

Regards

Javier





Quoting Jake Johnson jakej1...@gmail.com:


Is there a freeware or opensource calendar connector that will work with
Dovecot?

Any suggestions would be appreciated.

Thanks.








[Dovecot] Error purging mdbox (damaged) mailbox

2011-02-28 Thread Javier Miguel Rodríguez


We are stress testing our preproduction system. One of the evil 
tests we made was putting our mailbox filesystem in read-only mode in the 
middle of smtp(+lda) delivery. When we try to purge one of the affected 
mailboxes we get errors like the following:

doveadm(lbandera@mysite): Panic: file mdbox-purge.c: line 225
(mdbox_purge_save_msg): assertion failed: (ret == (off_t)msg_size)
doveadm(lbandera@mysite): Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0 [0x3b0943bab0] ->
/usr/lib64/dovecot/libdovecot.so.0(default_fatal_handler+0x35) [0x3b0943bb95] ->
/usr/lib64/dovecot/libdovecot.so.0 [0x3b0943b4c3] ->
/usr/lib64/dovecot/libdovecot-storage.so.0(mdbox_purge+0xe83) [0x3b0986f0c3] ->
/usr/bin/doveadm [0x408d65] -> /usr/bin/doveadm [0x4093c1] ->
/usr/bin/doveadm(doveadm_mail_single_user+0x9d) [0x4094ed] ->
/usr/bin/doveadm [0x4096fe] ->
/usr/bin/doveadm(doveadm_mail_try_run+0xb7) [0x409b37] ->
/usr/bin/doveadm(main+0x2fc) [0x40dddc] ->
/lib64/libc.so.6(__libc_start_main+0xf4) [0x3b0881d994] ->
/usr/bin/doveadm [0x408c39]

doveadm(leonvela@mysite): Panic: file mdbox-purge.c: line 225
(mdbox_purge_save_msg): assertion failed: (ret == (off_t)msg_size)
doveadm(leonvela@mysite): Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0 [0x3b0943bab0] ->
/usr/lib64/dovecot/libdovecot.so.0(default_fatal_handler+0x35) [0x3b0943bb95] ->
/usr/lib64/dovecot/libdovecot.so.0 [0x3b0943b4c3] ->
/usr/lib64/dovecot/libdovecot-storage.so.0(mdbox_purge+0xe83) [0x3b0986f0c3] ->
/usr/bin/doveadm [0x408d65] -> /usr/bin/doveadm [0x4093c1] ->
/usr/bin/doveadm(doveadm_mail_single_user+0x9d) [0x4094ed] ->
/usr/bin/doveadm [0x4096fe] ->
/usr/bin/doveadm(doveadm_mail_try_run+0xb7) [0x409b37] ->
/usr/bin/doveadm(main+0x2fc) [0x40dddc] ->
/lib64/libc.so.6(__libc_start_main+0xf4) [0x3b0881d994] ->
/usr/bin/doveadm [0x408c39]


The mailboxes are damaged, but maybe doveadm should not crash on 
them; it should handle the error more gracefully and exit with an error 
status.


Regards

Javier


Re: [Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-08 Thread Javier Miguel Rodríguez



Oh.. I envy you. Will probably need to do the same at some point, but
I'm having problems understanding how we will ever be able to make the
transition. Too many files -- too many users..



We did the transition via imapsync: we had the old server and 
a new server, and we migrated all mailboxes with imapsync and the master 
user feature. The first imapsync takes a lot of time, but the next ones 
are incremental, and take much less time. When we are ready (one night), 
we stop and switch from the old server to the new server. Minimal 
downtime, and if everything goes wrong, we can imapsync the other way, 
from new -> old instead of old -> new.
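A sketch of such a master-user imapsync invocation (imapsync's --authuser1 option performs the master/admin login described above; all hostnames and usernames here are placeholders, and passwords are omitted so imapsync would prompt for them):

```shell
# Compose an imapsync run that logs into the old server as the mailbox
# owner via the master user, and copies into the same account on the
# new server. Host and user values are illustrative.
HOST1="old.example.com"
HOST2="new.example.com"
MAILUSER="jdoe"
SYNC_CMD="imapsync --host1 ${HOST1} --user1 ${MAILUSER} --authuser1 masteruser --host2 ${HOST2} --user2 ${MAILUSER}"
echo "${SYNC_CMD}"
```

Re-running the same command later only transfers messages missing on the destination, which is what makes the repeated incremental passes cheap.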



Our mail servers are virtualized in a VMware vSphere cluster. We 
have HA & DRS, and all the info is stored in the iSCSI SAN. In our setup 
we only have one virtualized mail server, but if the hw node fails the 
virtual machine starts automatically on another ESX host.


Regards

Javier


How long did it take to convert from maildir to mdbox, how much downtime ?

Do you have a clustered setup, or single node? I'm wondering how safe
mdbox will be on a clusterfs (GPFS), as we've had a bit of trouble with
the index-files when they're accessed from multiple nodes at the same
time (but that was with v1.0.15 -- so we should maybe trust that such
problems has since been fixed :-)


   -jf




Re: [Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-08 Thread Javier Miguel Rodríguez



So with mdbox disk I/O usage increased compared to maildir+ramdisk indexes?



That is a tricky question to answer. It depends on usage. I think 
the following:


- LDA delivery: load is a bit lower (on disk) with Maildir vs mdbox: 
in both cases the message has to be written and indexes are updated; in 
Maildir indexes are in RAM, so lower disk load in this case.

- POP3 access: the same as the previous post.

- IMAP access: this is tricky. In mdbox a "delete message" 
command only lowers the refcount, indexes are updated, and in the night a 
cron job runs doveadm purge. In Maildir, you really delete the message 
when the MUA/webmail "compacts" the folder, and indexes are updated. I 
think that mdbox has a "delayed IO" in this case, and has less load on 
disk during production hours.


Am I missing anything? The stats in the SAN after the maildir -> mdbox 
change do not help: we have zlib enabled in lda & imap with 
mdbox, so our # of real IOPS is lower than Maildir (we did not have zlib 
enabled)
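The nightly cron job mentioned above can be sketched like this (doveadm purge and its -A all-users flag are real; the schedule is illustrative):

```shell
# Compose the nightly job that physically removes refcount-0 messages
# from all users' mdbox storage files.
PURGE_CMD="doveadm purge -A"

# e.g. in root's crontab:  30 2 * * * doveadm purge -A
echo "${PURGE_CMD}"
```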


Regards

Javier




Re: [Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-08 Thread Javier Miguel Rodríguez



The stats in the SAN after the maildir -> mdbox change do not help; we have zlib 
enabled in lda & imap with mdbox, so our # of real IOPS is lower than Maildir (we 
did not have zlib enabled)


I wonder how large a write can be before it is split to two iops.. With NFS 
probably smaller I'd guess. Still, I would have thought that even if zlib 
writes only half as much, the disk iops difference wouldn't be nearly as much.



Without zlib our mailstore was 2.1 TB. With zlib enabled it is 1.4 TB. 
We use an iSCSI SAN with ext4. I am writing a document with some 
benchmarking of dovecot (postal & rabid software) with some graphs about 
# of IOPS, CPU load, and so on... I am still writing it; if you are 
interested I can post a link to the document to the list.


Regards

Javier




[Dovecot] Great time savings backing a mdbox versus Maildir

2011-02-07 Thread Javier Miguel Rodríguez


Hello

I am writing to this mailing list to thank Timo for dovecot 2 & 
mdbox. We have almost 30,000 active users and our life was sad with 
Maildir & backup: 24 hours for a full backup with bacula (zlib-enabled 
maildirs, 1.4 TB). After switching to mdbox, the backup time is under 12 
hours! Instead of backing up 17 million files, with mdbox our backup is 
only 1 million files, and that greatly speeds up the backup operation.



Timo, here you have detailed info about the bacula backup jobs; you 
can use them in the wiki if you like. If you need additional info 
(hardware specs, dovecot config, etc.) I can share it.


*Maildir:*

Job:Backup_Linux_buzon_us.2011-01-21_19.03.26_38
  Backup Level:   Full
  Client: buzon_us 2.0.3 (06Mar07) 
x86_64-redhat-linux-gnu,redhat,Enterprise release
  FileSet:Full Buzon 2011-01-21 19:03:26
  Pool:   Pool_Linux_Buzones_US (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:File (From command line)
  Scheduled time: 21-ene-2011 19:03:20
  Start time: 21-ene-2011 19:03:29
  End time:   22-ene-2011 19:46:45
  Elapsed time:*1 day 43 mins 16 secs*
  Priority:   10
  FD Files Written:   16,903,801
  SD Files Written:*16,903,801*
  FD Bytes Written:   1,445,943,227,706 (1.445 TB)
  SD Bytes Written:   1,448,983,971,450 (1.448 TB)
  Rate:*16247.3 KB/s*
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   no
  Volume name(s): Buzones_US_2
  Volume Session Id:  26
  Volume Session Time:1295511704
  Last Volume Bytes:  1,450,628,892,676 (*1.450 TB*)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK







*mdbox with mdbox_rotate_size=10m:*


Build OS:   x86_64-redhat-linux-gnu redhat Enterprise release
  JobId:  3587
  Job:Backup_Linux_buzon_us.2011-02-07_08.13.52_53
  Backup Level:   Full (upgraded from Incremental)
  Client: buzon_us 2.0.3 (06Mar07) 
x86_64-redhat-linux-gnu,redhat,Enterprise release
  FileSet:Full Buzon 2011-01-21 19:03:26
  Pool:   Pool_Linux_Buzones_US (From Job resource)
  Catalog:MyCatalog (From Client resource)
  Storage:File (From command line)
  Scheduled time: 07-feb-2011 08:13:44
  Start time: 07-feb-2011 08:13:54
  End time:   07-feb-2011 19:43:50
  Elapsed time:*11 hours 29 mins 56 secs*
  Priority:   10
  FD Files Written:*1,148,780*
  SD Files Written:   1,148,780
  FD Bytes Written:*1,537,062,152,773 (1.537 TB)*
  SD Bytes Written:   1,537,218,147,402 (1.537 TB)
  Rate:*37130.7 KB/s*
  Software Compression:   None
  VSS:no
  Encryption: no
  Accurate:   yes
  Volume name(s): Buzones_US_4|Buzones_US_5
  Volume Session Id:  101
  Volume Session Time:1296724657
  Last Volume Bytes:  438,873,898,586 (438.8 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Backup OK



Regards

Javier de Miguel
University of Seville