Re: failed: Cached message size smaller than expected

2021-04-19 Thread FUSTE Emmanuel
On 19/04/2021 at 09:01, Aki Tuomi wrote:
>> On 17/04/2021 23:07 Michael Grant  wrote:
>>
>>   
>> On Fri, Apr 02, 2021 at 04:45:36PM -0400, Michael Grant wrote:
>>> Every few days, my mailbox seizes up.  No mail comes in to my IMAP clients.
>>>
>>> I'm getting these errors over and over with my mailbox:
>>>
>>>Error: Mailbox INBOX: Deleting corrupted cache record uid=371208: UID 
>>> 371208: Broken physical size in mailbox INBOX: read(/var/mail/mgrant) 
>>> failed: Cached message size smaller than expected (17212 < 17222, 
>>> box=INBOX, UID=371208)
>>>Error: Mailbox INBOX: UID=371208: read(/var/mail/mgrant) failed: Cached 
>>> message size smaller than expected (17212 < 17222, box=INBOX, UID=371208) 
>>> (FETCH BODY[])
>>>Error: Mailbox INBOX: Deleting corrupted cache record uid=371203: UID 
>>> 371203: Broken physical size in mailbox INBOX: read(/var/mail/mgrant) 
>>> failed: Cached message size smaller than expected (3904 < 3914, box=INBOX, 
>>> UID=371203)
>>>Error: Mailbox INBOX: UID=371203: read(/var/mail/mgrant) failed: Cached 
>>> message size smaller than expected (3904 < 3914, box=INBOX, UID=371203) 
>>> (FETCH BODY[])
>>>
>>> My inbox is an mbox file.  I'm running dovecot installed on Debian
>>> Bullseye, the dovecot packages are all: 1:2.3.13+dfsg1-1
>>>
>>> I am running sendmail and using procmail for local delivery.
>>>
>>> I suspect, but am not certain, that this may be some locking issue
>>> between procmail and dovecot but I have never been able to prove
>>> that. The final procmail rule which appends messages to my mailbox
>>> looks like this, the trailing ':' causes procmail to use a lockfile:
>>>
>>> :0:
>>> /var/mail/mgrant
>>>
>>> The locking config lines in 10-mail.conf are commented, but I have
>>> also tried uncommenting them, did not help:
>>>
>>> #mbox_read_locks = fcntl
>>> #mbox_write_locks = fcntl dotlock
>>>
>>> Though sometimes it seems to fix itself after a few hours, the only
>>> way I have found to fix this quickly is to manually remove the cache
>>> files and restart dovecot:
>>>
>>> rm ~/mail/.imap/INBOX/*
>>> systemctl restart dovecot
>>>
>>> I am not even sure this is a locking issue.  Something definitely gets
>>> corrupted though. I do have several IMAP clients hitting the same
>>> mailbox (phone, laptop, desktop).  On the phone, I run K9 and also the
>>> gmail client which talks imap.  Also using thunderbird, outlook, and
>>> w10 mail, though typically not all at the same time.  You could
>>> definitely say I am stress testing this setup a bit!
>>>
>>> Any ideas on how to resolve this?
>> I still see this corruption every day or so.  Anyone have any ideas how to 
>> debug this or resolve it?
>>
>> Michael Grant
> Hi!
>
> We don't really fix issues with mbox files anymore, other than read issues. 
> Our focus is enabling people to move to other formats, such as maildir. I 
> would strongly recommend that you consider using maildir instead of mbox.
>
> I would also recommend you use dovecot-lda in procmail to deliver mail, if 
> you are not already doing so.
>
> Aki
So please make the mbox code read-only, or kill it.
Corruption is not acceptable.
As it stands, it is not at the level of quality Dovecot used to have, or 
claims to have.

The mbox code is slow and you will do nothing to make it faster. OK, we 
can live with that.
Optimisations and features for the other formats may make the mbox code 
slower and slower, because nothing is invested in it. OK too, that makes 
sense.

But no, corruption is not acceptable. It is a bug.
Every time an mbox corruption report pops up, your only answer is to ask 
people to switch to another format. That makes me nervous, because in my 
opinion it is an unfair and incomplete answer.
This time I allow myself to react: please make this code read-only or 
disable it.
If what you need is funding for proper basic maintenance of the R/W (or 
even read-only) mbox code, that would make the need more obvious to your 
users and customers.

Regards,
Emmanuel.
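For reference, Aki's dovecot-lda suggestion would turn the final procmail rule quoted above into a pipe instead of a direct mbox append, so Dovecot itself handles locking and index/cache updates. A hedged sketch only; the binary path is the Debian default and may differ on other systems:

```
# Deliver through dovecot-lda instead of appending to /var/mail directly.
# The 'w' flag makes procmail wait for the exit code and keep the mail
# on failure instead of losing it.
:0 w
| /usr/lib/dovecot/dovecot-lda -d "$LOGNAME"
```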


Re: dovecot director and keepalived

2021-03-16 Thread FUSTE Emmanuel
On 16/03/2021 at 12:47, Eirik Rye wrote:
>
>
> On 03/15/2021 8:43 PM, Paterakis E. Ioannis wrote:
>> It's not keepalived's job to tell the directors which backend is 
>> up/down. You can use poolmon for that. keepalived will make sure the 
>> floating IP is always assigned to an alive haproxy. Then it's 
>> haproxy's job to check the aliveness of the directors, and it's the 
>> director's job to assign users to the same dovecot backend all the 
>> time, and so on.
>
> What is the purpose of HAProxy in this director setup? It seems like 
> an unnecessary extra layer of proxying in your example.
>
> We run a setup with keepalived directors, and a bunch of dovecot IMAP 
> servers, and this works well.
>
> The directors have two IPs each, one static and one floating 
> (keepalived). The IPs listed in the "director_servers" setting are the 
> static IPs. The floating IPs are listed in DNS.
>
> If you simply configure dovecot to bind to all interfaces, and instead 
> use iptables to limit IMAP/POP/director connections to the interfaces 
> you want, there is no need to set `net.ipv4.ip_nonlocal_bind=1`.
>
> With all that said, I do agree that there should be a way to 
> explicitly set the director's announce/listen address, instead of 
> using the net_try_bind() method.
>
> If you need this feature, I doubt it would be very hard to patch by 
> adding a new configuration option, and then modifying this code to 
> check said option value, and use it (if present) instead of trying to 
> determine the IP:
>
> https://github.com/dovecot/core/blob/fb6aa64435e0ffd66b81cd4895127187f28fa20b/src/director/director.c#L86
>  
>
>
> - Eirik
Seconded.
We run the same simple, perfectly working setup here too.

Emmanuel.
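Eirik's bind-to-all-interfaces approach above relies on the firewall to scope each service; a hypothetical iptables sketch (interface names and the director port are assumptions, not taken from the thread):

```
# IMAP/POP only on the public interface carrying the floating IP
iptables -A INPUT -i eth0 -p tcp -m multiport --dports 110,143,993,995 -j ACCEPT
# Director ring traffic only on the internal interface
iptables -A INPUT -i eth1 -p tcp --dport 9090 -j ACCEPT
# Drop the same ports everywhere else
iptables -A INPUT -p tcp -m multiport --dports 110,143,993,995,9090 -j DROP
```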

Re: Help on CRAM-MD5

2019-06-20 Thread FUSTE Emmanuel via dovecot
On 20/06/2019 at 12:25, @lbutlr via dovecot wrote:
> On 20 Jun 2019, at 04:14, Jorge Bastos via dovecot  
> wrote:
>> I don't disagree with your vision, but if the use of CRAM- requires
>> plain-text passwords on the server there's a dark side. Or is there a
>> CRAM-XXX that can use encrypted passwords on the server side? There's
>> always the issue that some clients don't support it.
> The “encrypted” password store that CRAM-MD5 supports is MD5 which cannot be 
> classified as encryption at this point.
>
> Not sure why  you are saying CRAM-XXX as there is only CRAM-MD5.
>
I think he is referring to my mention of the SCRAM-XXX class of mechanisms.

Re: Help on CRAM-MD5

2019-06-20 Thread FUSTE Emmanuel via dovecot
On 20/06/2019 at 11:59, @lbutlr via dovecot wrote:
> On 20 Jun 2019, at 02:53, FUSTE Emmanuel via dovecot  
> wrote:
>> There is plenty of context where TLS is not possible/desirable.
> I’d say that is terrible advice. There are no reasonable contexts where it 
> is acceptable to send mail credentials without encryption. My users have had 
> to use STARTTLS for submission for many, many years. Insecure connections from 
> users are not an option.
Please don't put words in my mouth.
I used the word "context". I did not talk about "sending mail 
credentials", nor was I talking only about the Internet.
And even so, don't reduce the world to your use case. The world is not 
only the Internet.
And SASL, and by extension the CRAM-MD5 mechanism, is not used only in 
email scenarios/protocols.

Even in email scenarios, I have to deal with equipment (scanners/copiers) 
that cannot do TLS, or cannot handle a private CA yet insists on 
verifying the SMTP server certificate before sending email, or that has 
a broken or outdated SSL implementation, etc. They do support CRAM-MD5. 
It is still better than clear text.
I have more than 4000 devices of this class behind my servers, each with 
its own problems, bugs and limitations. Yes, in 2019... And I am not 
even talking about the thousands of proprietary, outdated, custom, 
buggy (combine these as you like) applications that I have to deal 
with.

Emmanuel.


Re: Help on CRAM-MD5

2019-06-20 Thread FUSTE Emmanuel via dovecot
Hello,

The world is not black or white.
Yes, CRAM-MD5 is old, and it is sad that its successor, the SCRAM-XX 
family, is not widely available/implemented.
For your needs, use TLS and forget about it.
Thunderbird is conservative: if you don't configure TLS, or TLS is not 
available, it tries to use something that does not expose the password.
There are plenty of contexts where TLS is not possible or desirable.
And without client certificates, strong mutual authentication is not 
available, but it could be with TLS+SCRAM.
There is plenty of room for SASL mechanisms other than PLAIN/LOGIN.
They just don't fit your current needs. Just be sure not to allow 
PLAIN/LOGIN in the clear.

Emmanuel.

On 19/06/2019 at 18:58, Jorge Bastos via dovecot wrote:
> Howdy,
>
> Answering all: so CRAM-MD5 is old, I don't want it then!
> I only noticed Thunderbird using it by default, so I won't implement it!
>
> Thanks for the clarification,
>
> -Original Message-
> From: dovecot  On Behalf Of Aki Tuomi via dovecot
> Sent: 19 de junho de 2019 07:31
> To: Alexander Dalloz ; dovecot@dovecot.org
> Subject: Re: Help on CRAM-MD5
>
>
> On 19.6.2019 7.48, Alexander Dalloz via dovecot wrote:
>> On 19.06.2019 at 00:04, Jorge Bastos via dovecot wrote:
>>> Howdy,
>>>
>>> I'm using Dovecot with MySQL users, and I'm creating the password with:
>>>
>>> ENCRYPT('some-passwd',CONCAT('$6$', SUBSTRING(SHA(RAND()), -16)))
>>>
>>> So far so good, everything's fine.
>>> Today I saw that I hadn't enabled CRAM-MD5, but if I do, and the (at
>>> least) IMAP client (Roundcube/Thunderbird/etc.) issues CRAM-MD5, it
>>> doesn't authenticate.
>>> What am I doing wrong, or what can be done so that all types work
>>> (SASL PLAIN LOGIN + CRAM-MD5)?
>>>
>>> Thanks in advance,
>>>
>> For shared secret mechanisms like CRAM-MD5 to work the password must
>> be stored in plaintext AFAIK. That's a good reason not to offer that.
>>
>> Alexander
>>
> CRAM-MD5 can also be stored as a stage-1 MD5-hashed blob, which is only 
> marginally better than plaintext. But as pointed out, CRAM-MD5 and 
> DIGEST-MD5 cannot work with crypted passwords. If you want to use "secure 
> passwords", SCRAM-SHA1 is an option, but it is probably best to disable 
> mechanisms other than 'PLAIN' and 'LOGIN' unless you know what you are 
> doing.
>
>
> Aki
>
>
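As background to the shared-secret point above: CRAM-MD5 (RFC 2195) has the client return HMAC-MD5, keyed by the password, over a server-issued challenge, which is exactly why the server must keep the plaintext password (or, as Aki notes, the intermediate MD5 HMAC state). A minimal sketch of the client-side computation:

```python
import hashlib
import hmac

def cram_md5_response(username: str, password: str, challenge: str) -> str:
    """Compute the client's CRAM-MD5 reply (RFC 2195).

    The keyed digest is HMAC-MD5(password, challenge): the server can
    only verify it if it stores the plaintext password (or the MD5
    inner/outer HMAC state derived from it).
    """
    digest = hmac.new(password.encode(), challenge.encode(),
                      hashlib.md5).hexdigest()
    return f"{username} {digest}"

# The worked example from RFC 2195:
print(cram_md5_response("tim", "tanstaaftanstaaf",
                        "<1896.697170952@postoffice.reston.mci.net>"))
# -> tim b913a602c7eda7a495b4e6e7334d3890
```

Note that the digest is not reversible, but anyone who steals the stored secret can authenticate, which is why SCRAM was designed as the successor.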


doveadm-server core dump

2017-06-16 Thread FUSTE Emmanuel
Hello,

coredump found after a force-resync and a replicate

Emmanuel.
(dovecot-ee 2.2.30.2-1)

[New LWP 17168]
Core was generated by `dovecot/doveadm-server'.
Program terminated with signal SIGABRT, Aborted.
#0  0x7fa334127c37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#0  0x7fa334127c37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
 resultvar = 0
 pid = 17168
 selftid = 17168
#1  0x7fa33412b028 in __GI_abort () at abort.c:89
 save_stage = 2
 act = {__sigaction_handler = {sa_handler = 0x20d2b48, sa_sigaction = 
0x20d2b48}, sa_mask = {__val = {34255104, 0, 140338942616871, 1, 0, 181, 
140338929827120, 140725975364353, 0, 0, 140338942645461, 0, 140338930777984, 
140338937001888, 34203576, 140725975364140}}, sa_flags = 0, sa_restorer = 
0x7ffd51c5b1c8}
 sigs = {__val = {32, 0 }}
#2  0x7fa334546766 in default_fatal_finish (type=, 
status=status@entry=0) at failures.c:201
 backtrace = 0x209df68 "/usr/lib/dovecot/libdovecot.so.0(+0x8d770) 
[0x7fa334546770] -> /usr/lib/dovecot/libdovecot.so.0(+0x8d84e) [0x7fa33454684e] 
-> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7fa3344dcffb] -> 
dovecot/do"...
#3  0x7fa33454684e in i_internal_fatal_handler (ctx=0x7ffd51c5b3b0, 
format=, args=) at failures.c:670
 status = 0
#4  0x7fa3344dcffb in i_panic (format=format@entry=0x4530f8 "file %s: line 
%d (%s): assertion failed: (%s)") at failures.c:275
 ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0, 
timestamp_usecs = 0}
 args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 
0x7ffd51c5b4a0, reg_save_area = 0x7ffd51c5b3e0}}
#5  0x0043a1eb in dsync_brain_sync_mailbox_deinit 
(brain=brain@entry=0x20d2b48) at dsync-brain-mailbox.c:371
 last_common_uid = 0
 last_messages_count = 0
 last_common_modseq = 4428091
 changes_during_sync = 0x0
 last_common_pvt_modseq = 140725975364904
 require_full_resync = false
 error = MAIL_ERROR_NONE
 __FUNCTION__ = "dsync_brain_sync_mailbox_deinit"
#6  0x0043adc6 in dsync_brain_slave_recv_mailbox 
(brain=brain@entry=0x20d2b48) at dsync-brain-mailbox.c:841
 dsync_box = 0x20d0878
 local_dsync_box = {mailbox_guid = 
"\344B\277\017j\022\271T\002\253\000\000\367\362\265", , mailbox_lost = false, have_guids = true, have_save_guids = true, 
have_only_guid128 = false, uid_validity = 1497621508, uid_next = 10, 
messages_count = 9, first_recent_uid = 1, highest_modseq = 3, 
highest_pvt_modseq = 0, cache_fields = {arr = {buffer = 0x209d208, element_size 
= 24}, v = 0x209d208, v_modifiable = 0x209d208}}
 box = 0x20e0f88
 errstr = 0x2f 
 resync_reason = 0x466b58 "UIDVALIDITY changed during a stateful sync, 
need to restart"
 error = MAIL_ERROR_NONE
 ret = 
 resync = 255
 __FUNCTION__ = "dsync_brain_slave_recv_mailbox"
#7  0x004387db in dsync_brain_run_real (changed_r=0x7ffd51c5b67f, 
brain=0x20d2b48) at dsync-brain.c:651
 ret = true
 orig_state = DSYNC_STATE_SLAVE_RECV_MAILBOX
 orig_box_recv_state = DSYNC_BOX_STATE_MAILBOX
 orig_box_send_state = DSYNC_BOX_STATE_MAILBOX
 changed = false
#8  dsync_brain_run (brain=brain@entry=0x20d2b48, 
changed_r=changed_r@entry=0x7ffd51c5b67f) at dsync-brain.c:687
 _data_stack_cur_id = 5
#9  0x00438ae1 in dsync_brain_run_io (context=0x20d2b48) at 
dsync-brain.c:110
 brain = 0x20d2b48
 changed = false
 try_pending = true
#10 0x0044cc7f in dsync_ibc_stream_input (ibc=0x20c1a10) at 
dsync-ibc-stream.c:230
 ibc = 0x20c1a10
#11 0x7fa33455a802 in io_loop_call_io (io=0x20becc0) at ioloop.c:599
 ioloop = 0x20a9f20
 t_id = 4
 __FUNCTION__ = "io_loop_call_io"
#12 0x7fa33455bd47 in io_loop_handler_run_internal 
(ioloop=ioloop@entry=0x20a9f20) at ioloop-epoll.c:223
 ctx = 0x20ab100
 list = 0x20c14f0
 io = 
 tv = {tv_sec = 4, tv_usec = 12}
 events_count = 
 msecs = 
 ret = 1
 i = 0
 call = 
 __FUNCTION__ = "io_loop_handler_run_internal"
#13 0x7fa33455a89c in io_loop_handler_run (ioloop=ioloop@entry=0x20a9f20) 
at ioloop.c:648
No locals.
#14 0x7fa33455aa58 in io_loop_run (ioloop=0x20a9f20) at ioloop.c:623
 __FUNCTION__ = "io_loop_run"
#15 0x0042065e in cmd_dsync_server_run (_ctx=0x20b6168, user=) at doveadm-dsync.c:1169
 ctx = 0x20b6168
 ibc = 0x20c1a10
 brain = 0x20d2b48
 temp_prefix = 0x209cb68
 state_str = 0x0
 sync_type = 
 name = 0x20a9f00 "10.33.207.136"
 process_title_prefix = 0x209cb40 "10.33.207.136 "
 mail_error = MAIL_ERROR_NONE
#16 0x00421f96 in doveadm_mail_next_user (ctx=ctx@entry=0x20b6168, 

Re: v2.2.30 released

2017-05-31 Thread FUSTE Emmanuel
On 30/05/2017 at 20:16, Timo Sirainen wrote:
> https://dovecot.org/releases/2.2/dovecot-2.2.30.tar.gz
> https://dovecot.org/releases/2.2/dovecot-2.2.30.tar.gz.sig
>
>   * auth: Use timing safe comparisons for everything related to
> passwords. It's unlikely that these could have been used for
> practical attacks, especially because Dovecot delays and flushes all
> failed authentications in 2 second intervals. Also it could have
> worked only when passwords were stored in plaintext in the passdb.
>   * master process sends SIGQUIT to all running children at shutdown,
> which instructs them to close all the socket listeners immediately.
> This way restarting Dovecot should no longer fail due to some
> processes keeping the listeners open for a long time.
>
>   + auth: Add passdb { mechanisms=none } to match separate passdb lookup
>   + auth: Add passdb { username_filter } to use passdb only if user
> matches the filter. See https://wiki2.dovecot.org/PasswordDatabase
Shouldn't the wiki be corrected?
We have:
mechanisms: Skip, if non-empty and the current auth mechanism is listed 
here.

but the intended meaning is:
mechanisms: Skip, if non-empty and the current auth mechanism is not 
listed here.

Isn't it?

Emmanuel.
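Assuming the "skip unless listed" reading above is the correct one, the new setting scopes a passdb to specific mechanisms; a hedged configuration sketch (the driver and file path are illustrative only):

```
passdb {
  driver = passwd-file
  args = /etc/dovecot/cram-md5.pwd
  # With "skip if the mechanism is NOT listed" semantics, this passdb
  # is consulted only for CRAM-MD5 and DIGEST-MD5 attempts.
  mechanisms = cram-md5 digest-md5
}
```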

Problem on new ee repo (was: Have the Dovecot Enterprise Edition packages been discontinued?)

2015-12-10 Thread FUSTE Emmanuel
On 10/12/2015 at 08:56, Timo Sirainen wrote:
> On 10 Dec 2015, at 06:53, deoren  wrote:
>> Hi,
>>
>> I've been using the Dovecot EE packages for Ubuntu for some time now and
>> just recently started getting 404 errors. Have those packages been
>> discontinued or is this just a temporary issue?
> They have moved, see the new installation manual: 
> https://forum.open-xchange.com/showthread.php?9650-Dovecot-releases-Dovecot-Pro-v2-2-19-1
Hello,

On Debian testing / sid, the Release file is considered invalid:
"Invalid 'Date' entry in Release file"

The file:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Suite: jessie
Version: 6.0
Component: main
Origin: Dovecot Enterprise Edition
Label: Dovecot Enterprise Edition
Architecture: i386 amd64
Date: ma 26.10.2015 14.53.15 +0200
MD5Sum:
  b305a9011ae5576623b2a8465d0024ef 5161 main/binary-amd64/Packages.gz
  cbf66961b2146096cbbbf1db91a1829f 5053 main/binary-amd64/Packages.bz2
  8bbf2508b58741d266a658b9b1dba48a 24299 main/binary-amd64/Packages
  29de301c3726a777f1ad3f1261e06b53 625 main/binary-i386/Packages
  76a1b7378933cd49c89bd91aff9ac07e 450 main/binary-i386/Packages.bz2
  503cf971eee831bc3fd3e7b4b203edd6 410 main/binary-i386/Packages.gz
SHA1:
  18eceda91fa8d4b1a24ac25a7a1b7bbf933ed18c 5161 
main/binary-amd64/Packages.gz
  d1fe50e0e0cea742e02cab98ca31d2541765 5053 
main/binary-amd64/Packages.bz2
  41a25a6ab21aa35285dc4a781688b248d1326b39 24299 main/binary-amd64/Packages
  db7598d8b636434eb50f4c664f20d8f5522b8ce2 625 main/binary-i386/Packages
  26195a21cc9e476e539f896287edce1c6bd861e3 450 main/binary-i386/Packages.bz2
  b6b3bbd8afb527343fb7898679093989f7e4ef70 410 main/binary-i386/Packages.gz
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlYuIjsACgkQFxvbvtj2VWmlQgCgiwczBt5wHyzxdbf/Nlk7j97i
0f0AnjbSVUIGLRbnU2PF3HgfHW2zOhKm
=DZ8y
-END PGP SIGNATURE-

Emmanuel.


Re: Problems Converting from Cyrus to Dovecot (cyrus2dovecot)

2015-11-27 Thread FUSTE Emmanuel
Hello Timo,

Yes, I closely follow the commit messages on the dovecot-cvs list, and a 
lot has moved in this area.
I will try it, and I expect to be able to use dsync+imapc for our future 
migrations.

Best regards,
Emmanuel.

On 26/11/2015 at 21:37, Timo Sirainen wrote:
> v2.2.19 has many fixes related to dsync+imapc, which were found while 
> migrating several million users from GMail. I'm not aware of any problems 
> with it now. Also even before v2.2.19 dsync+imapc has been used to 
> successfully do many large migrations.
>
>> On 26 Nov 2015, at 17:49, FUSTE Emmanuel <emmanuel.fu...@thalesgroup.com> 
>> wrote:
>>
>> Hi,
>>
>> No, I tried fetching over imapc too, exactly as you suggested.
>> In my case it was not from Cyrus, but from CriticalPath.
>> isync was finally able to do the job, preserving flags and doing UID
>> mapping. The most tedious part was generating proper config files for
>> thousands of accounts.
>> A working imapc/dsync would have been better.
>>
>> Emmanuel.
>>
>> On 26/11/2015 at 15:24, Sami Ketola wrote:
>>> Hi,
>>>
>>> I think you tried to read the Cyrus mail folders directly. I was talking 
>>> about fetching mail from Cyrus over an imapc connection.
>>>
>>> Sami
>>>
>>>> On 26 Nov 2015, at 15:36, FUSTE Emmanuel <emmanuel.fu...@thalesgroup.com> 
>>>> wrote:
>>>>
>>>> Hello,
>>>>
>>>> Because it did not work?
>>>> In a similar situation, we were forced to use isync/mbsync in
>>>> IMAP-to-IMAP mode because dsync did not work.
>>>> It was reported here more than a year ago (May 2014).
>>>> From time to time, I see the same report from others trying to use
>>>> dsync to do a migration to Dovecot.
>>>> Dsync is a very appealing and elegant solution for this use case, but
>>>> it does not always work in the real world.
>>>>
>>>> Regards,
>>>> Emmanuel
>>>>
>>>> On 26/11/2015 at 12:30, Sami Ketola wrote:
>>>>> Hi,
>>>>>
>>>>> With imapsync you will lose message UIDs which means that IMAP clients 
>>>>> need to clear their local caches and redownload all messages. Why not use 
>>>>> dovecot dsync over imapc instead? It tries to preserve UIDs and Flags.
>>>>>
>>>>> http://wiki2.dovecot.org/Migration
>>>>>
>>>>> Sami
>>>>>
>>>>>
>>>>>> On 07 Nov 2015, at 23:35, Forrest <those.li...@gmail.com> wrote:
>>>>>>
>>>>>> Thank you for the reply.  I did find imapsync whilst perusing Google.  I 
>>>>>> will give it a shot, it sounds more realistic/reliable. I have a hoard 
>>>>>> of emails going back to 1999, so I want as few errors as possible :)
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 11/7/15 3:31 PM, Philon wrote:
>>>>>>> Hi there,
>>>>>>>
>>>>>>> I was in the same position, but for multiple accounts. Still you might 
>>>>>>> want to look at imapsync (https://github.com/imapsync/imapsync), isync 
>>>>>>> and offlineimap. There are more alternatives listed at the imapsync 
>>>>>>> homepage.
>>>>>>>
>>>>>>>
>>>>>>> Philon
>>>>>>>
>>>>>>>
>>>>>>>> On 04.11.2015 at 20:47, Forrest <those.li...@gmail.com> wrote:
>>>>>>>>
>>>>>>>> I have been attempting to use the cyrus2dovecot script, to no avail.
>>>>>>>>
>>>>>>>> I have many years of content that I want to convert from Cyrus to 
>>>>>>>> Dovecot; with the above not working, what are other options out there? 
>>>>>>>>  Another idea I had is simply set up another IMAP server (using 
>>>>>>>> Dovecot) and drag-and-drop and just wait, which I may end up doing.
>>>>>>>>
>>>>>>>> In the above, I copied over my entire /var/imap and /var/spool/imap to 
>>>>>>>> another system; there is only one account (mine), so calling the 
>>>>>>>> script was fairly easy; it just doesn't work.
>>>>>>>>
>>>>>>>>
>>>>>>>> inboxes=the "myaccount" that was copied over
>>>>>>>>
>>>>>>>> /home/myaccount/cyrus2dovecot --cyrus-inbox /home/myaccount/inboxes/%u 
>>>>>>>> \
>>>>>>>>   --cyrus-seen /home/myaccount/varimap/user/%h/%u.seen 
>>>>>>>>\
>>>>>>>>   --cyrus-sub /home/varimap/user/%h/%u.sub  \
>>>>>>>>   --dovecot-inbox /home/myaccount/dovecot/Maildir \
>>>>>>>>   myaccount
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> the log output complains of:
>>>>>>>>
>>>>>>>> cyrus2dovecot [myaccount]: (warning) Index record missing for: 
>>>>>>>> INBOX/62020.
>>>>>>>>
>>>>>>>> and correctly complains about squat indices, as that's not a file it 
>>>>>>>> would handle.  There is no output into the Maildir, however.
>>>>>>>>
>>>>>>>> All directory paths are correct.
>>>>>>>>
>>>>>>>>
>>>>>>>> Thanks.


Re: Problems Converting from Cyrus to Dovecot (cyrus2dovecot)

2015-11-26 Thread FUSTE Emmanuel
Hello,

Because it did not work?
In a similar situation, we were forced to use isync/mbsync in 
IMAP-to-IMAP mode because dsync did not work.
It was reported here more than a year ago (May 2014).
From time to time, I see the same report from others trying to use dsync 
to do a migration to Dovecot.
Dsync is a very appealing and elegant solution for this use case, but it 
does not always work in the real world.

Regards,
Emmanuel

On 26/11/2015 at 12:30, Sami Ketola wrote:
> Hi,
>
> With imapsync you will lose message UIDs which means that IMAP clients need 
> to clear their local caches and redownload all messages. Why not use dovecot 
> dsync over imapc instead? It tries to preserve UIDs and Flags.
>
> http://wiki2.dovecot.org/Migration
>
> Sami
>
>
>> On 07 Nov 2015, at 23:35, Forrest  wrote:
>>
>> Thank you for the reply.  I did find imapsync whilst perusing Google.  I 
>> will give it a shot, it sounds more realistic/reliable. I have a hoard of 
>> emails going back to 1999, so I want as few errors as possible :)
>>
>>
>>
>> On 11/7/15 3:31 PM, Philon wrote:
>>> Hi there,
>>>
>>> I was in the same position, but for multiple accounts. Still you might want 
>>> to look at imapsync (https://github.com/imapsync/imapsync), isync and 
>>> offlineimap. There are more alternatives listed at the imapsync homepage.
>>>
>>>
>>> Philon
>>>
>>>
 On 04.11.2015 at 20:47, Forrest wrote:

 I have been attempting to use the cyrus2dovecot script, to no avail.

 I have many years of content that I want to convert from Cyrus to Dovecot; 
 with the above not working, what are other options out there?  Another 
 idea I had is simply set up another IMAP server (using Dovecot) and 
 drag-and-drop and just wait, which I may end up doing.

 In the above, I copied over my entire /var/imap and /var/spool/imap to 
 another system; there is only one account (mine), so calling the script 
 was fairly easy; it just doesn't work.


 inboxes=the "myaccount" that was copied over

 /home/myaccount/cyrus2dovecot --cyrus-inbox /home/myaccount/inboxes/%u 
 \
   --cyrus-seen /home/myaccount/varimap/user/%h/%u.seen\
   --cyrus-sub /home/varimap/user/%h/%u.sub  \
   --dovecot-inbox /home/myaccount/dovecot/Maildir \
   myaccount



 the log output complains of:

 cyrus2dovecot [myaccount]: (warning) Index record missing for: 
 INBOX/62020.

 and correctly complains about squat indices, as that's not a file it would 
 handle.  There is no output into the Maildir, however.

 All directory paths are correct.


 Thanks.


Re: Problems Converting from Cyrus to Dovecot (cyrus2dovecot)

2015-11-26 Thread FUSTE Emmanuel
Hi,

No, I tried fetching over imapc too, exactly as you suggested.
In my case it was not from Cyrus, but from CriticalPath.
isync was finally able to do the job, preserving flags and doing UID 
mapping. The most tedious part was generating proper config files for 
thousands of accounts.
A working imapc/dsync would have been better.

Emmanuel.
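For context, the dsync-over-imapc pull attempted above is typically invoked along these lines (a hedged sketch based on the wiki Migration page; host, user and password values are placeholders):

```
# Pull one user's mail from the legacy IMAP server into the local store,
# trying to preserve UIDs and flags. All values below are placeholders.
doveadm -o imapc_host=old-imap.example.com \
        -o imapc_user=jane \
        -o imapc_password=secret \
        backup -R -u jane imapc:
```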

On 26/11/2015 at 15:24, Sami Ketola wrote:
> Hi,
>
> I think you tried to read the Cyrus mail folders directly. I was talking about 
> fetching mail from Cyrus over an imapc connection.
>
> Sami
>
>> On 26 Nov 2015, at 15:36, FUSTE Emmanuel <emmanuel.fu...@thalesgroup.com> 
>> wrote:
>>
>> Hello,
>>
>> Because it did not work?
>> In a similar situation, we were forced to use isync/mbsync in
>> IMAP-to-IMAP mode because dsync did not work.
>> It was reported here more than a year ago (May 2014).
>> From time to time, I see the same report from others trying to use
>> dsync to do a migration to Dovecot.
>> Dsync is a very appealing and elegant solution for this use case, but
>> it does not always work in the real world.
>>
>> Regards,
>> Emmanuel
>>
>> On 26/11/2015 at 12:30, Sami Ketola wrote:
>>> Hi,
>>>
>>> With imapsync you will lose message UIDs which means that IMAP clients need 
>>> to clear their local caches and redownload all messages. Why not use 
>>> dovecot dsync over imapc instead? It tries to preserve UIDs and Flags.
>>>
>>> http://wiki2.dovecot.org/Migration
>>>
>>> Sami
>>>
>>>
>>>> On 07 Nov 2015, at 23:35, Forrest <those.li...@gmail.com> wrote:
>>>>
>>>> Thank you for the reply.  I did find imapsync whilst perusing Google.  I 
>>>> will give it a shot, it sounds more realistic/reliable. I have a hoard of 
>>>> emails going back to 1999, so I want as few errors as possible :)
>>>>
>>>>
>>>>
>>>> On 11/7/15 3:31 PM, Philon wrote:
>>>>> Hi there,
>>>>>
>>>>> I was in the same position, but for multiple accounts. Still you might 
>>>>> want to look at imapsync (https://github.com/imapsync/imapsync), isync 
>>>>> and offlineimap. There are more alternatives listed at the imapsync 
>>>>> homepage.
>>>>>
>>>>>
>>>>> Philon
>>>>>
>>>>>
>>>>>> On 04.11.2015 at 20:47, Forrest <those.li...@gmail.com> wrote:
>>>>>>
>>>>>> I have been attempting to use the cyrus2dovecot script, to no avail.
>>>>>>
>>>>>> I have many years of content that I want to convert from Cyrus to 
>>>>>> Dovecot; with the above not working, what are other options out there?  
>>>>>> Another idea I had is simply set up another IMAP server (using Dovecot) 
>>>>>> and drag-and-drop and just wait, which I may end up doing.
>>>>>>
>>>>>> In the above, I copied over my entire /var/imap and /var/spool/imap to 
>>>>>> another system; there is only one account (mine), so calling the script 
>>>>>> was fairly easy; it just doesn't work.
>>>>>>
>>>>>>
>>>>>> inboxes=the "myaccount" that was copied over
>>>>>>
>>>>>> /home/myaccount/cyrus2dovecot --cyrus-inbox /home/myaccount/inboxes/%u   
>>>>>>   \
>>>>>>   --cyrus-seen /home/myaccount/varimap/user/%h/%u.seen   
>>>>>>  \
>>>>>>   --cyrus-sub /home/varimap/user/%h/%u.sub  \
>>>>>>   --dovecot-inbox /home/myaccount/dovecot/Maildir \
>>>>>>   myaccount
>>>>>>
>>>>>>
>>>>>>
>>>>>> the log output complains of:
>>>>>>
>>>>>> cyrus2dovecot [myaccount]: (warning) Index record missing for: 
>>>>>> INBOX/62020.
>>>>>>
>>>>>> and correctly complains about squat indices, as that's not a file it 
>>>>>> would handle.  There is no output into the Maildir, however.
>>>>>>
>>>>>> All directory paths are correct.
>>>>>>
>>>>>>
>>>>>> Thanks.


Re: doveadm sync out of memory

2015-02-17 Thread FUSTE Emmanuel
On 16/02/2015 at 20:40, Casey Stone wrote:
 On Feb 13, 2015, at 3:42 PM, FUSTE Emmanuel emmanuel.fu...@thalesgroup.com 
 wrote:

 On 13/02/2015 at 16:19, Casey Stone wrote:
 On Feb 5, 2015, at 10:39 PM, Casey Stone tcst...@caseystone.com wrote:

 Hello:

 I've been looking forward to getting my mail server up to Dovecot 2.2+ to 
 be able to use the sync mechanism. I run my own mail server just for 
 myself, with a few different accounts, and want to keep a master and 
 backup server in sync.

 I'm running the Ubuntu server 14.04.1 mail stack which features Dovecot 
 2.2.9 (and Postfix). My setup is to use system users (userdb passwd / 
 passdb pam) with ~/Maildir. I'll post full sanitized output of dovecot -n 
 if it seems necessary. I have not enabled any plugins (do I need the 
 replicator plugin active?) I have in my conf a doveadm_password defined.

 Anyway, after setting up an ssl listener on the main machine and after 
 considerable struggles with SSL, I was able to run doveadm sync from the 
 backup server successfully for a small mailbox (around 78 MB) with this 
 command:

 doveadm sync -R tcps:mainserver.example.com:12345

 Since I run this command as the system user on the backup server (same 
 system users as main server) it 'just works' for the correct single user 
 with no further options required. My plan is to run a daily cron job to 
 sync once daily for each user.

 The problem is when I try to sync a larger mailbox, say 1 GB, dsync-server 
 on the remote (master) machine throws fatal error 83 Out of Memory. I 
 already raised vsz_limit to 512 MB. Problems probably arise with mailboxes 
 around 200 MB though I haven't tested specifically. So my question is, is 
 this expected and I will need to give my VM much more memory to be able to 
 use dovecot sync, or do I have something set wrong, or is it a bug?

 Thanks for your help.
 No responses :-(

 Here is what it looks like when it crashes with an out of memory error:

 (start of the run)
 Feb 13 14:02:38 thepost dovecot: doveadm(10.0.1.22,tcstone): Debug: 
 Effective uid=1002, gid=1002, home=/home/tcstone
 Feb 13 14:02:38 thepost dovecot: doveadm(10.0.1.22,tcstone): Debug: 
 Namespace inbox: type=private, prefix=, sep=, inbox=yes, hidden=no, list$
 Feb 13 14:02:38 thepost dovecot: doveadm(10.0.1.22,tcstone): Debug: 
 maildir++: root=/data/tcstone/Maildir, index=, indexpvt=, control=, inbo$
 Feb 13 14:02:39 thepost dovecot: dsync-server(tcstone): Debug: Namespace : 
 Using permissions from /data/tcstone/Maildir: mode=0700 gid=defau$
 Feb 13 14:02:39 thepost dovecot: dsync-server(tcstone): Debug: brain S: out 
 state=send_mailbox_tree changed=1

 many, many more brain messages

 (end of the run)
 Feb 13 14:02:52 thepost dovecot: dsync-server(tcstone): Fatal: 
 pool_system_realloc(536870912): Out of memory
 Feb 13 14:02:52 thepost dovecot: dsync-server(tcstone): Error: Raw 
 backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x5e271) [0x7f9d2056b271] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x5e34e) [0x7f9d2056b34e] - 
 /usr/lib/dovecot/libdovecot.so.0(i_error+0) [0x7f9d20526bf8] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x72d53) [0x7f9d2057fd53] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x7792a) [0x7f9d2058492a] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x77be6) [0x7f9d20584be6] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x78748) [0x7f9d20585748] - 
 /usr/lib/dovecot/libdovecot.so.0(o_stream_sendv+0x8d) [0x7f9d20583d7d] - 
 /usr/lib/dovecot/libdovecot.so.0(o_stream_send+0x1a) [0x7f9d20583e1a] - 
 /usr/lib/dovecot/modules/libssl_iostream_openssl.so(+0x4c05) 
 [0x7f9d1f6a0c05] - 
 /usr/lib/dovecot/modules/libssl_iostream_openssl.so(openssl_iostream_bio_sync+0x21)
  [0x7f9d1f6a1881] - 
 /usr/lib/dovecot/modules/libssl_iostream_openssl.so(+0x7a4d) 
 [0x7f9d1f6a3a4d] - 
 /usr/lib/dovecot/modules/libssl_iostream_openssl.so(+0x7d69) 
 [0x7f9d1f6a3d69] - /usr/lib/dovecot/libdovecot.so.0(o_stream_sendv+0x8d) 
 [0x7f9d20583d7d] - /usr/lib/dovecot/libdovecot.so.0(o_stream_nsendv+0xf) 
 [0x7f9d20583e5f] - /usr/lib/dovecot/libdovecot.so.0(o_stream_nsend+0x1a) 
 [0x7f9d20583e8a] - dovecot/doveadm-server(+0x2b03f) [0x7f9d20d3003f] - 
 dovecot/doveadm-server(+0x2c768) [0x7f9d20d31768] - 
 dovecot/doveadm-server(dsync_ibc_send_mail+0x29) [0x7f9d20d2f309] - 
 dovecot/doveadm-server(dsync_brain_sync_mails+0x5fc) [0x7f9d20d24a1c] - 
 dovecot/doveadm-server(dsync_brain_run+0x523) [0x7f9d20d20f93] - 
 dovecot/doveadm-server(+0x1c270) [0x7f9d20d21270] - 
 dovecot/doveadm-server(+0x2de60) [0x7f9d20d32e60] - 
 /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x27) [0x7f9d2057b247] - 
 /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0xd7) [0x7f9d2057bfd7] 
 - /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f9d2057ade8] - 
 dovecot/doveadm-server(+0x1a189) [0x7f9d20d1f189] - 
 dovecot/doveadm-server(+0xebeb) [0x7f9d20d13beb]
 Feb 13 14:02:52 thepost dovecot: dsync-server(tcstone): Fatal: master: 
 service(doveadm): child 13232 returned error 83 (Out of memory (service 
 doveadm

Re: doveadm sync out of memory

2015-02-13 Thread FUSTE Emmanuel
On 13/02/2015 16:19, Casey Stone wrote:
 On Feb 5, 2015, at 10:39 PM, Casey Stone tcst...@caseystone.com wrote:

 Hello:

 I've been looking forward to getting my mail server up to Dovecot 2.2+ to be 
 able to use the sync mechanism. I run my own mail server just for myself, 
 with a few different accounts, and want to keep a master and backup server 
 in sync.

 I'm running the Ubuntu server 14.04.1 mail stack which features Dovecot 
 2.2.9 (and Postfix). My setup is to use system users (userdb passwd / passdb 
 pam) with ~/Maildir. I'll post full sanitized output of dovecot -n if it 
 seems necessary. I have not enabled any plugins (do I need the replicator 
 plugin active?) I have in my conf a doveadm_password defined.

 Anyway, after setting up an ssl listener on the main machine and after 
 considerable struggles with SSL, I was able to run doveadm sync from the 
 backup server successfully for a small mailbox (around 78 MB) with this 
 command:

 doveadm sync -R tcps:mainserver.example.com:12345

 Since I run this command as the system user on the backup server (same 
 system users as main server) it 'just works' for the correct single user 
 with no further options required. My plan is to run a daily cron job to sync 
 once daily for each user.
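
 That daily job could be as simple as a per-user crontab entry. A sketch,
 assuming the host/port from the command above and an arbitrary 03:15
 schedule:

 # Sketch: nightly one-way sync from the backup box, run from each
 # mailbox owner's own crontab (crontab -e as that user).
 15 3 * * *  doveadm sync -R tcps:mainserver.example.com:12345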

 The problem is when I try to sync a larger mailbox, say 1 GB, dsync-server 
 on the remote (master) machine throws fatal error 83 Out of Memory. I 
 already raised vsz_limit to 512 MB. Problems probably arise with mailboxes 
 around 200 MB though I haven't tested specifically. So my question is, is 
 this expected and I will need to give my VM much more memory to be able to 
 use dovecot sync, or do I have something set wrong, or is it a bug?

 Thanks for your help.
 No responses :-(

 Here is what it looks like when it crashes with an out of memory error:

 (start of the run)
 Feb 13 14:02:38 thepost dovecot: doveadm(10.0.1.22,tcstone): Debug: Effective 
 uid=1002, gid=1002, home=/home/tcstone
 Feb 13 14:02:38 thepost dovecot: doveadm(10.0.1.22,tcstone): Debug: Namespace 
 inbox: type=private, prefix=, sep=, inbox=yes, hidden=no, list$
 Feb 13 14:02:38 thepost dovecot: doveadm(10.0.1.22,tcstone): Debug: 
 maildir++: root=/data/tcstone/Maildir, index=, indexpvt=, control=, inbo$
 Feb 13 14:02:39 thepost dovecot: dsync-server(tcstone): Debug: Namespace : 
 Using permissions from /data/tcstone/Maildir: mode=0700 gid=defau$
 Feb 13 14:02:39 thepost dovecot: dsync-server(tcstone): Debug: brain S: out 
 state=send_mailbox_tree changed=1

 many, many more brain messages

 (end of the run)
 Feb 13 14:02:52 thepost dovecot: dsync-server(tcstone): Fatal: 
 pool_system_realloc(536870912): Out of memory
 Feb 13 14:02:52 thepost dovecot: dsync-server(tcstone): Error: Raw backtrace: 
 /usr/lib/dovecot/libdovecot.so.0(+0x5e271) [0x7f9d2056b271] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x5e34e) [0x7f9d2056b34e] - 
 /usr/lib/dovecot/libdovecot.so.0(i_error+0) [0x7f9d20526bf8] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x72d53) [0x7f9d2057fd53] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x7792a) [0x7f9d2058492a] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x77be6) [0x7f9d20584be6] - 
 /usr/lib/dovecot/libdovecot.so.0(+0x78748) [0x7f9d20585748] - 
 /usr/lib/dovecot/libdovecot.so.0(o_stream_sendv+0x8d) [0x7f9d20583d7d] - 
 /usr/lib/dovecot/libdovecot.so.0(o_stream_send+0x1a) [0x7f9d20583e1a] - 
 /usr/lib/dovecot/modules/libssl_iostream_openssl.so(+0x4c05) [0x7f9d1f6a0c05] 
 - 
 /usr/lib/dovecot/modules/libssl_iostream_openssl.so(openssl_iostream_bio_sync+0x21)
  [0x7f9d1f6a1881] - 
 /usr/lib/dovecot/modules/libssl_iostream_openssl.so(+0x7a4d) [0x7f9d1f6a3a4d] 
 - /usr/lib/dovecot/modules/libssl_iostream_openssl.so(+0x7d69) 
 [0x7f9d1f6a3d69] - /usr/lib/dovecot/libdovecot.so.0(o_stream_sendv+0x8d) 
 [0x7f9d20583d7d] - /usr/lib/dovecot/libdovecot.so.0(o_stream_nsendv+0xf) 
 [0x7f9d20583e5f] - /usr/lib/dovecot/libdovecot.so.0(o_stream_nsend+0x1a) 
 [0x7f9d20583e8a] - dovecot/doveadm-server(+0x2b03f) [0x7f9d20d3003f] - 
 dovecot/doveadm-server(+0x2c768) [0x7f9d20d31768] - 
 dovecot/doveadm-server(dsync_ibc_send_mail+0x29) [0x7f9d20d2f309] - 
 dovecot/doveadm-server(dsync_brain_sync_mails+0x5fc) [0x7f9d20d24a1c] - 
 dovecot/doveadm-server(dsync_brain_run+0x523) [0x7f9d20d20f93] - 
 dovecot/doveadm-server(+0x1c270) [0x7f9d20d21270] - 
 dovecot/doveadm-server(+0x2de60) [0x7f9d20d32e60] - 
 /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x27) [0x7f9d2057b247] - 
 /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0xd7) [0x7f9d2057bfd7] 
 - /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f9d2057ade8] - 
 dovecot/doveadm-server(+0x1a189) [0x7f9d20d1f189] - 
 dovecot/doveadm-server(+0xebeb) [0x7f9d20d13beb]
 Feb 13 14:02:52 thepost dovecot: dsync-server(tcstone): Fatal: master: 
 service(doveadm): child 13232 returned error 83 (Out of memory (service 
 doveadm { vsz_limit=512 MB }, you may need to increase it) - set 
 DEBUG_OUTOFMEM=1 environment to get core dump)
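
 As the error text itself suggests, the limit can be raised per service in
 the Dovecot config. A sketch; 1 GB is an arbitrary example value, not a
 tested recommendation:

 service doveadm {
   # dsync on large mailboxes can exceed the default address-space limit;
   # raise it for the doveadm service only.
   vsz_limit = 1 GB
 }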

 I haven't tested whether 

Re: TCP Cluster replication headache

2014-08-26 Thread FUSTE Emmanuel
On 07/05/2014 17:38, Emmanuel Fusté wrote:
 Hello,

 After going crazy building a dovecot cluster, I finally see the light ;-))
 But some things are strange and could probably be fixed/enhanced.

 First :
 I follow the wiki doc, setting global doveadm_port.
 Things did not work, I've got:

 dovecot: doveadm(X1234567): Error: sync: /var/run/dovecot/auth-userdb: 
 Configured passdbs don't support crentials lookups (to see if user is 
 proxied, because doveadm_port is set)

 I got the same kind of error when trying to use doveadm on the command
 line to get the replica status.
 My user/auth db is LDAP with auth_bind = yes, but I don't understand
 the message in this context and didn't know how to fix it.
 I tried to hardcode the proxy/proxy_maybe property in the passdb
 declaration, etc.
 Finally, I removed the global doveadm_port 12345, added :12345 at
 the end of my mail_replica = line, and everything began to work!
 Is this the intended error and fix?
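
 For reference, the two variants being compared look like this (a sketch
 reconstructed from the description above; the host name and port are the
 ones used elsewhere in this message, and mail_replica lives in the
 plugin {} block):

 # Variant 1 (wiki doc): global doveadm_port -- triggered the passdb error
 doveadm_port = 12345
 mail_replica = tcp:thsmytmbx02p.online.corp.thales

 # Variant 2 (working here): port appended directly to mail_replica
 mail_replica = tcp:thsmytmbx02p.online.corp.thales:12345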

Ok, this first point should be fixed by 
http://hg.dovecot.org/dovecot-2.2/rev/a2e0e89bc27d
Need to test it.

Thank you.
Emmanuel.


 Secondly:
 Now everything is working, and doveadm replicator status '*' correctly
 lists all my users and their status, but after a few seconds (once
 replication kicks in) I see every user listed twice:
 once as declared in the userdb, with letters in uppercase: X1234567,
 and once in lowercase: x1234567.
 On disk all is OK, with only one replica in an uppercase directory.
 I initially thought it was a mismatch between the userdb and passdb user
 lookups, but it was in fact the default value of auth_username_format
 that was the culprit. After changing it from the default %Lu to %u,
 doveadm replicator status shows only one entry per user, as expected.
 Is this wanted and expected too? Why is auth_username_format used by, or
 how does it interact with, the replication process and/or the replicator
 status command?

 Not everything is functionally tested yet; I'm going back to work.
 My conf is at the end of this message.

 Thanks Simo for this great piece of software.

 Emmanuel

 # 2.2.12.7 (f7731356530e+): /etc/dovecot/dovecot.conf
 # OS: Linux 3.11.0-19-generic x86_64 Ubuntu 12.04.4 LTS
 auth_master_user_separator = *
 auth_username_format = %u
 doveadm_password = xxx
 lda_mailbox_autocreate = yes
 listen = *
 mail_gid = vmail
 mail_location = maildir:~/Maildir
 mail_plugins = quota notify replication
 mail_uid = vmail
 managesieve_notify_capability = mailto
 managesieve_sieve_capability = fileinto reject envelope encoded-character 
 vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
 copy include variables body enotify environment mailbox date ihave
 namespace {
 hidden = no
 inbox = yes
 list = yes
 location =
 prefix =
 separator = /
 subscriptions = yes
 type = private
 }
 namespace {
 hidden = no
 inbox = no
 list = children
 location = maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u
 prefix = shared/%%u/
 separator = /
 subscriptions = no
 type = shared
 }
 passdb {
 args = /etc/dovecot/dovecot-ldap.conf.ext
 driver = ldap
 }
 plugin {
 acl = vfile
 acl_anyone = allow

 acl_shared_dict = file:/appli/vmail/shared-mailboxes
 mail_replica = tcp:thsmytmbx02p.online.corp.thales:12345
 quota = 
 dict:userquota::file:/appli/vmail/local_userquota/%%h/dovecot-quota
 quota_rule = *:storage=100M
 quota_rule2 = INBOX:storage=+20%%
 quota_rule3 = Trash:storage=+10%%
 sieve = ~/.dovecot.sieve
 sieve_dir = ~/sieve
 }
 protocols = imap sieve
 service aggregator {
 fifo_listener replication-notify-fifo {
   user = vmail
 }
 unix_listener replication-notify {
   user = vmail
 }
 }
 service auth {
 unix_listener auth-userdb {
   group = vmail
   mode = 0660
 }
 }
 service doveadm {
 inet_listener {
   port = 12345
 }
 user = vmail
 }
 service replicator {
 process_min_avail = 1
 unix_listener replicator-doveadm {
   mode = 0666
 }
 }
 ssl = no
 userdb {
 args = /etc/dovecot/dovecot-users-ldap.conf.ext
 driver = ldap
 }
 protocol lda {
 mail_plugins = quota sieve
 }
 protocol imap {
 mail_plugins = quota imap_quota
 }



 dovecot-users-ldap.conf.ext:
 dovecot-ldap.conf.ext:

 uris = ldapi:///
 dn = uid=dovecot,dc=mydomain,dc=com
 dnpass = 
 auth_bind = yes
 ldap_version = 3
 base = ou=users,dc=mydomain,dc=com
 user_attrs = =home=/appli/vmail/%{ldap:uid}
 user_filter = (&(objectClass=inetOrgPerson)(|(uid=%u)(mail=%u)))
 pass_attrs = =user=%{ldap:uid}
 pass_filter = (&(objectClass=inetOrgPerson)(uid=%u)(!(pwdReset=TRUE)))
 iterate_attrs = uid=user
 iterate_filter = (objectClass=inetOrgPerson)



[Dovecot] Sieve fileinto extension and redirect action

2014-05-21 Thread FUSTE Emmanuel
Hello,

Is there any way to limit the use of the redirect action (local users 
only, or silent ignore), as provisioned by the RFC, in the Pigeonhole 
implementation?
The only way I have found so far is to completely disable the 
fileinto extension, which badly hurts the user experience.
redirect is forbidden by my organization's policy.

Regards,
Emmanuel.


Re: [Dovecot] Sieve fileinto extension and redirect action

2014-05-21 Thread FUSTE Emmanuel
On 21/05/2014 17:49, Daniel Parthey wrote:
 Hi Emmanuel

 On 21.05.2014 16:04, FUSTE Emmanuel wrote:
 Is there any way to limit the use of the redirect action (local users
 only, or silent ignore), as provisioned by the RFC, in the Pigeonhole
 implementation?
 The only way I have found so far is to completely disable the
 fileinto extension, which badly hurts the user experience.
 redirect is forbidden by my organization's policy.
 Dovecot injects redirected messages through sendmail or smtp (depending on 
 your config).

 You might change the dovecot option sendmail_path to something different 
 from sendmail:

 before:
 sendmail_path = /usr/sbin/sendmail

 after:
 sendmail_path = /usr/local/sbin/your-mail-handler

 If you are using SMTP, then have a look at dovecot option submission_host.

 Kind regards
 Daniel
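
A minimal wrapper along those lines might look like this. This is a
hypothetical sketch, assuming the policy is to silently drop redirected
mail; the script name and path are illustrative, not part of Dovecot:

```shell
#!/bin/sh
# Hypothetical /usr/local/sbin/drop-redirect, installed as sendmail_path
# so Sieve redirect actions are swallowed instead of delivered.
# Dovecot pipes the full message on stdin and passes recipients as arguments.
cat > /dev/null        # consume the message so the caller sees success
exit 0                 # report success; the redirect is silently dropped
```

Note that sendmail_path is used wherever Dovecot invokes sendmail (e.g.
also for Sieve vacation replies on some setups), so a wrapper like this
would need to be more selective if those must keep working.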
Thank you Daniel.

Looking at the code, it seems that it could be addressed with a config 
parameter:

sieve_max_redirects = 0

And now, looking at the example config files, it is there.
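
In Pigeonhole that setting lives in the plugin block. A sketch, assuming
the goal here is to reject any use of redirect while leaving fileinto
available:

plugin {
  # Disallow the Sieve redirect action entirely; fileinto keeps working.
  sieve_max_redirects = 0
}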

Regards,
Emmanuel.