Re: dovecot HA setup across large geographic distances

2018-04-24 Thread Sophie Loewenthal
Hi,

Does the master/master plugin support user iteration with a mail_location like 
maildir:/var/vmail/%d/%u ?

For reference, the error message is: (Error: passwd-file: User iteration isn't 
currently supported with %variable paths)

Thanks.
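For context, that error comes from a passwd-file passdb/userdb whose path contains %variables: iteration (e.g. `doveadm user '*'`, or replication enumerating all users) needs a path Dovecot can open without already knowing the user. A minimal sketch of the two cases; the file paths are assumptions, not taken from Sophie's setup:

```
# Hypothetical passwd-file sketch: a %variable path cannot be iterated,
# because Dovecot cannot expand %d/%u before it knows which user it wants.
userdb {
  driver = passwd-file
  args = /etc/dovecot/users/%d/passwd   # per-domain file: NOT iterable
}

# A static path supports iteration:
userdb {
  driver = passwd-file
  args = /etc/dovecot/users.passwd      # single file: iteration works
}
```

With the static file, `doveadm user '*'` can list all users, which replication relies on.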


> On 24 Apr 2018, at 12:46, Sophie Loewenthal  wrote:
> 
> Hi,
> 
> I have a working postfix/dovecot config on Debian located in Europe[EU].  I 
> would like to provide either an HA/load balanced setup with a second site in 
> Australia[AU] because of users located between EU/Asia/AU. The site to site 
> traffic would be tunneled through either SSH or a VPN.
> 
> * Has this been done with Dovecot over such great distances? 
> 
> * How can the delay between mailstore synchronisation be handled? (Currently 
> looking for an open-source product like Netex's HyperIP.)
> 
> I'd be grateful for your thoughts.
> 
> Best wishes, Sophie
> Sent from a phone, thus brief.



Re: Pass UniqueID to sieve scripts

2018-04-24 Thread Aki Tuomi

> On 24 April 2018 at 21:24 Peter Blok <pb...@bsd4all.org> wrote:
> 
> Hi,
> 
> If a user moves a mail to a Junk folder, this e-mail is going through
> imap_sieve and ends up in sa-learn. I wrote some tools to immediately
> propagate that to the other node of the cluster as well as locally.
> 
> Everything works fine, but sometimes I see a storm of requests hogging the
> CPU and I would like to collect some debugging information. Preferably I
> would like the permanent unique id of the mail in the maildir, but am
> looking for some kind of unique id that will help me identify if an e-mail
> somehow passes the sieve filter multiple times.
> 
> Basically I have changed the pipe to sa-learn to my proxy that will send it
> across to the other node as well as call sa-learn locally. So something I
> can get in a sieve script and pass along.
> 
> Peter

You could try parsing the mail's Message-ID header and use it when present.

---
Aki Tuomi
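Aki's suggestion (parse the Message-ID header and pass it along) could be sketched in Sieve roughly as follows. This is a hedged sketch, not a tested script: the proxy program name `sa-learn-proxy` is an assumption, and Message-ID is neither guaranteed present nor guaranteed unique, hence the fallback value.

```
# Hypothetical imapsieve sketch: grab the Message-ID (when present) and
# hand it to the learning proxy alongside the message itself.
require ["variables", "imapsieve", "vnd.dovecot.pipe", "copy"];

set "msgid" "(none)";
if header :matches "message-id" "*" {
  set "msgid" "${1}";
}

# sa-learn-proxy is an assumed wrapper that calls sa-learn locally and
# forwards to the other node; it receives the Message-ID as an argument.
pipe :copy "sa-learn-proxy" ["${msgid}"];
```

The proxy can then log `msgid` on every invocation, making repeated passes of the same message visible.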



Pass UniqueID to sieve scripts

2018-04-24 Thread Peter Blok
Hi,

If a user moves a mail to a Junk folder, this e-mail is going through imap_sieve 
and ends up in sa-learn. I wrote some tools to immediately propagate that to 
the other node of the cluster as well as locally.

Everything works fine, but sometimes I see a storm of requests hogging the CPU 
and I would like to collect some debugging information. Preferably I would like 
the permanent unique id of the mail in the maildir, but am looking for some 
kind of unique id that will help me identify if an e-mail somehow passes the 
sieve filter multiple times.

Basically I have changed the pipe to sa-learn to my proxy that will send it 
across to the other node as well as call sa-learn locally. So something I can 
get in a sieve script and pass along.

Peter
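One way to spot a message passing the filter multiple times is to have the proxy log each Message-ID and flag repeats. A minimal shell sketch; the function names and flat-file "database" are assumptions, not part of any Dovecot tooling:

```shell
# extract_msgid: read an RFC822 message on stdin and print its Message-ID
# (without angle brackets); prints nothing if the header is absent.
extract_msgid() {
    sed -n 's/^[Mm]essage-[Ii][Dd]:[[:space:]]*<\{0,1\}\([^>]*\)>\{0,1\}.*$/\1/p' | head -n 1
}

# seen_before: record the id in a flat file and report whether it was
# already there, so repeated passes show up in the proxy's log.
seen_before() {
    db="$1"; id="$2"
    if grep -Fqx "$id" "$db" 2>/dev/null; then
        echo "DUPLICATE $id"
    else
        printf '%s\n' "$id" >> "$db"
        echo "NEW $id"
    fi
}
```

Calling `seen_before /var/tmp/seen.db "$(extract_msgid < "$msg")"` before invoking sa-learn would make the request storms attributable to specific messages.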








Re: imap-login segfaulting on 2.3.1

2018-04-24 Thread Grant Keller
Sorry about that, here it is:

(gdb) bt full
#0  i_stream_get_root_io (stream=0x0) at istream.c:911
No locals.
#1  0x7f963a47de39 in i_stream_set_input_pending (stream=, 
pending=pending@entry=true)
at istream.c:923
No locals.
#2  0x7f9637cb0a59 in openssl_iostream_bio_input 
(type=OPENSSL_IOSTREAM_SYNC_TYPE_HANDSHAKE,
ssl_io=0x5615d14844d0) at iostream-openssl.c:498
data = 0x7f963a4d00bd  ""
bytes = 17339
ret = 
bytes_read = true
size = 0
#3  openssl_iostream_bio_sync (ssl_io=ssl_io@entry=0x5615d14844d0,
type=OPENSSL_IOSTREAM_SYNC_TYPE_HANDSHAKE) at iostream-openssl.c:510
ret = false
#4  0x7f9637cb0c2a in openssl_iostream_more (ssl_io=0x5615d14844d0,
type=type@entry=OPENSSL_IOSTREAM_SYNC_TYPE_HANDSHAKE) at 
iostream-openssl.c:524
ret = 
#5  0x7f9637cb2f6c in o_stream_ssl_flush (stream=0x5615d14847d0) at 
ostream-openssl.c:128
sstream = 0x5615d14847d0
plain_output = 0x5615d146d570
ret = 
#6  0x7f963a4960fe in o_stream_flush (stream=stream@entry=0x5615d1484870) 
at ostream.c:200
_stream = 0x5615d14847d0
ret = 1
__func__ = "o_stream_flush"
#7  0x7f963a4961a0 in o_stream_close_full (stream=0x5615d1484870, 
close_parents=)
at ostream.c:53
No locals.
#8  0x7f963a496243 in o_stream_destroy (stream=stream@entry=0x5615d149af90) 
at ostream.c:75
No locals.
#9  0x7f963a72acbc in login_proxy_free_final (proxy=0x5615d149af50) at 
login-proxy.c:416
__func__ = "login_proxy_free_final"
#10 0x7f963a72b427 in login_proxy_free_full 
(_proxy=_proxy@entry=0x7ffea4c6c578,
reason=0x5615d14280c0 "Disconnected by server(0s idle, in=2015, out=10272)",
delayed=delayed@entry=true) at login-proxy.c:521
proxy = 0x5615d149af50
client = 0x5615d1469d08
ipstr = 
delay_ms = 
__func__ = "login_proxy_free_full"
#11 0x7f963a72be07 in login_proxy_free_delayed (reason=, 
_proxy=0x7ffea4c6c578)
at login-proxy.c:541
No locals.
#12 login_proxy_free_errstr (server=true, errstr=, 
_proxy=0x7ffea4c6c578)
at login-proxy.c:129
proxy = 0x5615d149af50
reason = 0x5615d1428088
#13 login_proxy_finished (side=, status=, 
proxy=0x0) at login-proxy.c:619
errstr = 
server_side = true
#14 0x7f963a487fb5 in io_loop_call_io (io=0x5615d149b190) at ioloop.c:674
ioloop = 0x5615d1430d70
t_id = 2
__func__ = "io_loop_call_io"
#15 0x7f963a48989f in io_loop_handler_run_internal 
(ioloop=ioloop@entry=0x5615d1430d70)
at ioloop-epoll.c:222
ctx = 0x5615d14612e0
events = 
list = 0x5615d1462150
io = 
tv = {tv_sec = 59, tv_usec = 664792}
events_count = 
msecs = 
ret = 1
i = 0
call = 
__func__ = "io_loop_handler_run_internal"
#16 0x7f963a4880b2 in io_loop_handler_run 
(ioloop=ioloop@entry=0x5615d1430d70) at ioloop.c:726
__func__ = "io_loop_handler_run"
#17 0x7f963a4882d8 in io_loop_run (ioloop=0x5615d1430d70) at ioloop.c:699
__func__ = "io_loop_run"
#18 0x7f963a404673 in master_service_run (service=0x5615d1430c00,
callback=callback@entry=0x7f963a72dd70 ) at 
master-service.c:767
No locals.
#19 0x7f963a72e532 in login_binary_run (binary=, argc=2, 
argv=0x5615d14308c0)
at main.c:549
set_pool = 0x5615d1431ef0
login_socket = 0x5615d14308eb "director"
c = 
#20 0x7f963a002c05 in __libc_start_main (main=0x5615cfbe84d0 , 
argc=2, ubp_av=0x7ffea4c6c898,
init=, fini=, rtld_fini=, 
stack_end=0x7ffea4c6c888)
at ../csu/libc-start.c:274
result = 
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -1836909874292919703, 
94651679671521,
140731662911632, 0, 0, 1836542576478188137, 
1850284778656821865}, mask_was_saved = 0}},
  priv = {pad = {0x0, 0x0, 0x7f963a9454c3 <_dl_init+275>, 
0x7f963ab59150}, data = {prev = 0x0,
  cleanup = 0x0, canceltype = 982799555}}}
not_first_call = 
#21 0x5615cfbe850a in _start ()
No symbol table info available.
(gdb)


Quoting Aki Tuomi (2018-04-23 23:55:24)
> 
> 
> On 24.04.2018 00:40, Grant Keller wrote:
> > Hello,
> >
> > I have a new director ring I am setting up on centos 7 with dovecot
> > 2.3.1. I haven't been able to replicate this in testing, but as soon as
> > I start pushing production traffic to the new ring I see dozens of these  
> > in the
> > logs:
> > Apr 18 00:34:00 d.director.imapd.sonic.net kernel: imap-login[163107]: 
> > segfault at 10 ip 7ff625698dd5sp 7ffe4b77bb28 error 4 in 
> > libdovecot.so.0.0.0[7ff6255bf000+16e000]
> >
> > My config:
> > # 2.3.1 (c5a5c0c82): /etc/dovecot/dovecot.conf
> > # OS: Linux 3.10.0-693.21.1.el7.x86_64 x86_64 CentOS Linux release 7.4.1708 
> > (Core)
> > # Hostname: c.director.imapd.sonic.net
> > auth_master_user_separa

Re: Merging mailboxes with doveadm

2018-04-24 Thread Carsten Schmitz

Hello Aki,

yes, that's it - thank you.
For reference this works:

sudo doveadm import -u destinat...@company.net 
maildir:/var/vmail/company.net/some.name/Maildir  some_name ALL


Thanks again

Carsten
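Carsten's one-user command generalizes to all users by iterating the user list. A hedged dry-run sketch that only prints the doveadm invocations; the account names and the maildir path layout are hypothetical, and in real use the user list would come from `doveadm user '*'`:

```shell
# Hypothetical dry-run: emit one "doveadm import" command per user read
# from stdin, importing each user's mail into a folder named after the
# user's local part in the archive account.
generate_import_cmds() {
    dest="$1"    # destination account receiving the archive copies
    while read -r user; do
        [ -z "$user" ] && continue
        echo "doveadm import -u $dest maildir:/var/vmail/$user/Maildir ${user%@*} ALL"
    done
}
```

Piping `doveadm user '*'` into this function (and then into `sh` once the output looks right) would run the whole one-time archival.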

On 23.04.2018 15:06, Aki Tuomi wrote:

The command you are looking for is doveadm import

---
Aki Tuomi
Dovecot oy

 Original message 
From: Carsten Schmitz 
Date: 23/04/2018 15:46 (GMT+02:00)
To: dovecot@dovecot.org
Subject: Merging mailboxes with doveadm

Hello,

I am trying to merge (i.e. copy) all mails of all user mailboxes into one 
mailbox for one-time archival purposes.


The command I am using is

 sudo doveadm -v copy -A arch...@somedomain.net ALL


The error I get for every mailbox is:

doveadm(someus...@somedomain.net): Error: Can't open mailbox 
'a...@somedomain.net': Mailbox doesn't exist: arch...@somedomain.net


doveadm(someus...@somedomain.net): Error: Can't open mailbox 
'a...@somedomain.net': Mailbox doesn't exist: arch...@somedomain.net


[etc...]

I don't exactly understand why. I think I don't understand the 
'destination' parameter correctly; could somebody enlighten me, please?


Thank you


Carsten







Re: Sieve "redirect" changes envelope sender in 2.3. / pigeonhole 0.5

2018-04-24 Thread Stephan Bosch



Op 24-4-2018 om 10:17 schreef Olaf Hopp:

On 04/23/2018 03:46 PM, Olaf Hopp wrote:

On 04/23/2018 03:22 PM, Stephan Bosch wrote:



Op 20-4-2018 om 14:01 schreef Olaf Hopp:

Hi (Stephan?),
is it a new feature of dovecot 2.3 / pigeonhole 0.5 that a sieve 
"redirect" changes the envelope sender of a redirected mail, or simply a bug?

A sends mail to B, B redirects to C
C sees B (not A!) as envelope sender.
It is not a problem if C gets the mail but if that mail bounces
for various reasons it goes back to B and A will never know about 
this.


I think this came with 2.3 / pigeonhole 0.5?

# 2.3.1 (c5a5c0c82): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.devel (61b47828)
# OS: Linux 2.6.32-696.23.1.el6.x86_64 x86_64 CentOS release 6.9 
(Final)


Probably same as issue in this thread:

https://www.dovecot.org/pipermail/dovecot/2018-April/111482.html



Yes, maybe.
But I didn't see any sieve errors in the logs.
In my case there is exim sitting in front of dovecot lmtp and, as said,
 trusted_users = exim:dovecot
in the exim.conf resolved this issue for me.

Regards, Olaf


I dug deeper: in 
https://www.dovecot.org/pipermail/dovecot/2018-April/111485.html 
Stephan wrote


| Yeah, this is likely due to the fact that sendmail is now invoked using
| the program-client (same as Sieve extprograms), which takes great care
| to drop any unwanted (seteuid) root privileges.

and that's the reason why my exim now needs the dovecot user as a trusted 
user, so that those redirects can retain the original envelope sender.


It could also be the Systemd issues reported in that thread. I haven't 
experimented with that.


Regards,

Stephan.



dovecot HA setup across large geographic distances

2018-04-24 Thread Sophie Loewenthal
Hi,

I have a working postfix/dovecot config on Debian located in Europe[EU].  I 
would like to provide either an HA/load balanced setup with a second site in 
Australia[AU] because of users located between EU/Asia/AU. The site to site 
traffic would be tunneled through either SSH or a VPN.

* Has this been done with Dovecot over such great distances? 

* How can the delay between mailstore synchronisation be handled? (Currently 
looking for an open-source product like Netex's HyperIP.)

I'd be grateful for your thoughts.

Best wishes, Sophie
Sent from a phone, thus brief.
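For the synchronisation question, Dovecot's own dsync-based replication (the notify and replication plugins) is the usual starting point; it replicates asynchronously per user, which tolerates the EU/AU round-trip time. A hedged two-node sketch, with the hostname, port, and vmail user as assumptions:

```
# Minimal replication sketch (hostname/port/user are assumptions).
mail_plugins = $mail_plugins notify replication

service aggregator {
  fifo_listener replication-notify-fifo {
    user = vmail
  }
  unix_listener replication-notify {
    user = vmail
  }
}

service doveadm {
  inet_listener {
    port = 12345
  }
}

plugin {
  # On the EU node, point at the AU node (and vice versa):
  mail_replica = tcp:au.mail.example.com:12345
}
```

The doveadm port carries the dsync traffic, so that is the connection to run through the SSH tunnel or VPN.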


Re: Sieve "redirect" changes envelope sender in 2.3. / pigeonhole 0.5

2018-04-24 Thread Olaf Hopp

On 04/23/2018 03:46 PM, Olaf Hopp wrote:

On 04/23/2018 03:22 PM, Stephan Bosch wrote:



Op 20-4-2018 om 14:01 schreef Olaf Hopp:

Hi (Stephan?),
is it a new feature of dovecot 2.3 / pigeonhole 0.5 that a sieve "redirect" 
changes the envelope sender of a redirected mail, or simply a bug?

A sends mail to B, B redirects to C
C sees B (not A!) as envelope sender.
It is not a problem if C gets the mail but if that mail bounces
for various reasons it goes back to B and A will never know about this.

I think this came with 2.3 / pigeonhole 0.5?

# 2.3.1 (c5a5c0c82): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.devel (61b47828)
# OS: Linux 2.6.32-696.23.1.el6.x86_64 x86_64 CentOS release 6.9 (Final)


Probably same as issue in this thread:

https://www.dovecot.org/pipermail/dovecot/2018-April/111482.html



Yes, maybe.
But I didn't see any sieve errors in the logs.
In my case there is exim sitting in front of dovecot lmtp and, as said,
 trusted_users = exim:dovecot
in the exim.conf resolved this issue for me.

Regards, Olaf


I dug deeper: in 
https://www.dovecot.org/pipermail/dovecot/2018-April/111485.html Stephan wrote

| Yeah, this is likely due to the fact that sendmail is now invoked using
| the program-client (same as Sieve extprograms), which takes great care
| to drop any unwanted (seteuid) root privileges.

and that's the reason why my exim now needs the dovecot user as a trusted user, 
so that those redirects can retain the original envelope sender.

Thanks, Olaf


--
Karlsruher Institut für Technologie (KIT)
ATIS - Abt. Technische Infrastruktur, Fakultät für Informatik

Dipl.-Geophys. Olaf Hopp
- Leitung IT-Dienste -

Am Fasanengarten 5, Gebäude 50.34, Raum 009
76131 Karlsruhe
Telefon: +49 721 608-43973
Fax: +49 721 608-46699
E-Mail: olaf.h...@kit.edu
atis.informatik.kit.edu

www.kit.edu

KIT – Die Forschungsuniversität in der Helmholtz-Gemeinschaft

Das KIT ist seit 2010 als familiengerechte Hochschule zertifiziert.






Re: dovecot vs. mutt: no full index sync on Maildir new/ mtime change

2018-04-24 Thread Sami Ketola


> On 24 Apr 2018, at 10.33, Michael Büker  wrote:
> 
> Hi, everyone!
> 
> This is a follow-up to "Looks like a bug to me: Dovecot ignores Maildir/new 
> timestamp" from Fredrik Roubert on 01.12.2015:
> https://www.dovecot.org/list/dovecot/2015-December/102585.html
> 
> I've run into the same problem as Fredrik: When manipulating my Maildir 
> locally with mutt, deleting a message from new/ doesn't cause a full update 
> of the index. Therefore, IMAP clients still see the deleted message.
> 
> I've read and understood Timo's reply saying that dovecot only performs a 
> "partial sync" of the index when the mtime of new/, but not of cur/, changes. 
> This makes perfect sense for performance reasons for most users:
> https://www.dovecot.org/list/dovecot/2015-December/102588.html
> 
> I, however, would be willing to take the performance hit of a full index sync 
> whenever the mtime of new/ changes. Therefore, I looked at the code and tried 
> to implement a config option (maildir_fullsync_on_new_mtime_change) for this 
> behavior. However, my understanding of 
> src/lib-storage/index/maildir/maildir-sync.c was not good enough – I probably 
> put the ctx->mbox->storage->set->maildir_fullsync_on_new_mtime_change check 
> in the wrong place, and all my patch did was ruin the index ;)
> 
> So, to summarize my question: I'd like dovecot to perform a full index sync 
> when the mtime of a Maildir's new/ has changed. I'm willing to take the 
> performance hit, because it would fix a problem I'm having with using mutt 
> and dovecot together. Can this be done in principle by adding a config option 
> check like ctx->mbox->storage->set->maildir_fullsync_on_new_mtime_change in 
> the right place in src/lib-storage/index/maildir/maildir-sync.c? If so, where 
> should it be put?


While this is probably doable with some code changes I personally "solved" the 
problem just by switching to IMAP for mutt.

Sami
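Sami's workaround takes only a few lines of muttrc; a hedged sketch with an assumed hostname, so that Dovecot stays the single writer of the Maildir and index syncs are never bypassed:

```
# Hypothetical muttrc fragment: talk to Dovecot over IMAP instead of
# touching the Maildir directly.
set folder    = "imaps://mail.example.com/"
set spoolfile = "+INBOX"
set record    = "+Sent"
set postponed = "+Drafts"
```

Deletions made in mutt then go through the IMAP session, so other clients see them immediately.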



dovecot vs. mutt: no full index sync on Maildir new/ mtime change

2018-04-24 Thread Michael Büker

Hi, everyone!

This is a follow-up to "Looks like a bug to me: Dovecot ignores 
Maildir/new timestamp" from Fredrik Roubert on 01.12.2015:

https://www.dovecot.org/list/dovecot/2015-December/102585.html

I've run into the same problem as Fredrik: When manipulating my Maildir 
locally with mutt, deleting a message from new/ doesn't cause a full 
update of the index. Therefore, IMAP clients still see the deleted 
message.


I've read and understood Timo's reply saying that dovecot only performs 
a "partial sync" of the index when the mtime of new/, but not of cur/, 
changes. This makes perfect sense for performance reasons for most 
users:

https://www.dovecot.org/list/dovecot/2015-December/102588.html

I, however, would be willing to take the performance hit of a full index 
sync whenever the mtime of new/ changes. Therefore, I looked at the code 
and tried to implement a config option 
(maildir_fullsync_on_new_mtime_change) for this behavior. However, my 
understanding of src/lib-storage/index/maildir/maildir-sync.c was not 
good enough – I probably put the 
ctx->mbox->storage->set->maildir_fullsync_on_new_mtime_change check in 
the wrong place, and all my patch did was ruin the index ;)


So, to summarize my question: I'd like dovecot to perform a full index 
sync when the mtime of a Maildir's new/ has changed. I'm willing to take 
the performance hit, because it would fix a problem I'm having with 
using mutt and dovecot together. Can this be done in principle by adding 
a config option check like 
ctx->mbox->storage->set->maildir_fullsync_on_new_mtime_change in the 
right place in src/lib-storage/index/maildir/maildir-sync.c? If so, 
where should it be put?


Thanks for your time,
Michael