Re: dovecot imap_zlib

2024-07-12 Thread Selena Thomas via dovecot
Hi,

I think you should follow these steps; I am fairly confident they will sort it out:

1. Check Documentation and Changelog: Look at the latest Dovecot documentation 
and changelogs to see if there are any notes about the imap_zlib plugin being 
moved, renamed, or deprecated.

2. Rebuild with Plugin: Ensure you have the necessary zlib dependencies 
installed and then rebuild Dovecot with plugin support by configuring it 
explicitly. You can pass the --with-zlib option to configure before compiling 
(see the sketch after this list).

3. Check for Alternative Repositories: Sometimes plugins are maintained 
separately. Check whether there is an independent repository or branch that 
includes the imap_zlib plugin.

4. Contact Dovecot Community: If the plugin has been removed, consider reaching 
out to the Dovecot community or mailing list for alternative solutions or to 
understand the reason for its removal.
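For step 2, a rough sketch when building from a source tarball (package names
vary by distro, and this assumes your Dovecot version still ships the plugin):

# install the zlib headers first (zlib1g-dev on Debian/Ubuntu, zlib-devel on RPM systems)
./configure --with-zlib
make
sudo make install
# then verify the plugin was actually built:
ls /usr/local/lib/dovecot/*zlib*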

Thanks


Re: dovecot imap_zlib

2024-07-09 Thread Selena Thomas via dovecot
Hi,
I think the IMAP COMPRESS plugin (imap_zlib) isn't in the latest git 
version of Dovecot. You might want to try using an older version of Dovecot 
from git where the plugin is still included, or search for other plugins that 
offer similar compression features.
Thanks


Re: dovecot imap_zlib

2024-07-09 Thread Selena Thomas via dovecot
Hi,
Thank you for this suggestion.
Regards


New Member Introduction

2024-07-09 Thread Selena Thomas via dovecot
Hi everyone,

I am new to this forum and excited to be here.
I'm interested in learning more about Dovecot and its features, and I'm eager 
to participate in the discussions here.
Could someone please guide me on how to ask questions here?
Where should I post if I have a query?

Looking forward to your advice and connecting with you all.

Thanks,


Re: Post-Auth external check?

2024-03-28 Thread Thomas Mechtersheimer
On Thu, Mar 28, 2024 at 09:19:56AM +0100, Stephan von Krawczynski wrote:
> is it possible to include some post-auth check in the password authentication?
> So, after dovecot has found a user allowed to login, to execute some external
> script checking additional conditions, which gives back a simple true or
> false whether the login should be allowed?

Have a look at  https://doc.dovecot.org/admin_manual/post_login_scripting/
There is an example for denying a connection in an external script.
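For the archives, a condensed sketch of that page's approach for IMAP (the
extra-check helper is hypothetical; see the documentation for the exact
service wiring):

#!/bin/sh
# /usr/local/bin/postlogin.sh -- Dovecot exports $USER and $IP to this script.
# Wired up roughly like this in dovecot.conf:
#   service imap {
#     executable = imap imap-postlogin
#   }
#   service imap-postlogin {
#     executable = script-login /usr/local/bin/postlogin.sh
#     unix_listener imap-postlogin {
#     }
#   }
if ! /usr/local/bin/extra-check "$USER" "$IP"; then
    echo "Login denied by policy."   # shown to the client
    exit 1
fi
exec "$@"                            # hand over to the real imap process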

-- 
Thomas Mechtersheimer - Necklenbroicher Str. 45a - D-40667 Meerbusch - Germany
EMail: thom...@wupper.com IRC-Nick: Mechti
  Of course I'm crazy, but that doesn't mean I'm wrong. I'm mad but not ill.


Documentation error for sieve_redirect_envelope_from

2024-03-11 Thread Thomas Mechtersheimer
Hi,

the documentation at https://doc.dovecot.org/settings/pigeonhole/ says
regarding sieve_redirect_envelope_from setting option user_email:
| The user’s primary address is used. This is configured with the
| sieve_user_email setting. If that setting is not configured, user_email
| is equal to sender.

But checking dovecot-2.3-pigeonhole-0.5.21/src/lib-sieve/sieve-address-source.c 
~line 79:
|	if ( type == SIEVE_ADDRESS_SOURCE_USER_EMAIL &&
|		svinst->user_email == NULL )
|		type = SIEVE_ADDRESS_SOURCE_RECIPIENT;

So if sieve_user_email is not configured, "user_email" is equal to "recipient"
(which is the observed behaviour).
Either the documentation is incorrect and should be updated, or the source has 
a bug.


btw: I was looking at this because I was searching for a way to change the 
envelope sender on redirect for one specific user only. But it looks like this 
is impossible: sieve_redirect_envelope_from is a server-wide setting, and the 
sieve script doesn't allow changing the envelope...

-- 
Thomas Mechtersheimer - Necklenbroicher Str. 45a - D-40667 Meerbusch - Germany
EMail: thom...@wupper.com IRC-Nick: Mechti
  Of course I'm crazy, but that doesn't mean I'm wrong. I'm mad but not ill.


doveadm-backup and quota question

2024-02-09 Thread Thomas Plant
Hello,

I executed a backup on my dovecot installation and saw that after the first
backup, the used space in quota doubled.
After executing a recalc all went to normal. For this small size mailbox it is
not a problem, but I did a backup on a bigger mailbox, and it started to reject
messages because of 'over quota'.

Here is an example:

# doveadm quota get -u i...@domain.abc
Quota name   Type    Value Limit %
User quota   STORAGE 16798 16777216 0
User quota   MESSAGE   628 - 0
Domain quota STORAGE 32282 67108864 0
Domain quota MESSAGE   645 - 0

[root@vmi792826 ~]# doveadm backup -u i...@domain.abc mdbox:/tmp/backup/info@plant-systems

[root@vmi792826 ~]# doveadm quota get -u i...@domain.abc
Quota name   Type    Value Limit %
User quota   STORAGE 33596 16777216 0
User quota   MESSAGE  1256 - 0
Domain quota STORAGE 64563 67108864 0
Domain quota MESSAGE  1290 - 0

[root@vmi792826 ~]# doveadm quota recalc -u i...@domain.abc

[root@vmi792826 ~]# doveadm quota get -u i...@domain.abc
Quota name   Type    Value Limit %
User quota   STORAGE 16798 16777216 0
User quota   MESSAGE   628 - 0
Domain quota STORAGE 32282 67108864 0
Domain quota MESSAGE   645 - 0

Is it normal to have to issue a recalc after backup, or can it be some
misconfiguration on my side?






Re: Can Dovecot Use Wildcard TLS Certificates?

2023-09-27 Thread Thomas Zajic

[ re-sent from my subscription address ... oops :-) ]


* duluxoz, 27.09.23 09:34


Quick Q: Can dovecot use wildcard TLS Certificates?
[...]


Both dovecot and mutt can handle wildcard certificates just fine.
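For completeness, the Dovecot side is just the standard ssl settings pointed
at the wildcard certificate files (paths here are examples):

ssl = required
ssl_cert = </etc/ssl/certs/wildcard.mydomain.net.pem
ssl_key = </etc/ssl/private/wildcard.mydomain.net.key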

HTH & HAND
Thomas
--
=-=
-   Thomas "ZlatkO" ZajicLinux-6.1 & Thunderbird-115   -
-"In layman's terms: speedy thing goes in, speedy thing comes out."   -
=-=



Re: Sieve vacation function sends auto-reply with generic From: Address

2023-09-25 Thread Thomas Boroske

Hello Markus,

Thank you for pointing me to the problem. It works after removing the 
wildcard rule.


For some reason I did not consider the postfix config to be the culprit 
at all; we also have the same rule file on our main SMTP server. But 
that one, of course, has additional configuration for the users' valid 
sender addresses from LDAP, which is not present on this local postfix. 
All understandable now.
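Concretely, the fix was dropping the catch-all rewrites from
/etc/postfix/sender-canonical (addresses elided as in the posts below),
leaving only the root alias:

r...@ida.ing.tu-bs.de  sysgr...@ida.ing.tu-bs.de
# removed -- these rewrote every local sender, including the vacation replies:
# @ida.ing.tu-bs.de    sysgr...@ida.ing.tu-bs.de
# @net.ida             sysgr...@ida.ing.tu.bs.de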



Many thanks,

Thomas


On 2023-09-25 15:27, Markus Schönhaber wrote:


If you have turned on postfix' canonical mapping with the above
configuration, then postfix will rewrite (in header and envelope)
@ida.ing.tu-bs.de to sysgr...@ida.ing.tu-bs.de.
To me, this seems to explain your observation just fine.



--
Dipl. Inf. Thomas Boroske

Institute of Computer and Network Engineering
TU Braunschweig
Hans-Sommer-Str. 66, D-38106 Braunschweig, Germany
www.ida.ing.tu-bs.de


Sieve vacation function sends auto-reply with generic From: Address

2023-09-25 Thread Thomas Boroske

Good day to you all.

I have a question regarding the dovecot sieve vacation function that I 
am unable to answer myself by reading the documentation.


A while ago we moved our dovecot server to a newer system (and a newer 
dovecot version, 2.3.16). This worked mostly fine, except there is now 
an issue with the auto-reply emails sent by sieve vacation scripts.


While the sending of the auto reply generally works fine, the emails 
always appear to come from the "sysgr...@ida.ing.tu-bs.de" address 
rather than the original recipient of the message.


This is bad since the recipient of the auto reply then has no idea who 
the person on vacation actually is (unless there is a name in the 
message body).


The sieve scripts are managed by roundcube webmail, an example entry for 
a vacation rule looks like this:



require ["vacation"];
# rule:[Vacation]
if true
{
	vacation :days 7 :subject "Nicht im Büro" :from "tbt...@ida.ing.tu-bs.de" "Ich bin nicht da!";
}

This looks OK to me. The problem is that the "tbtest" address is not 
used in the generated reply; instead sysgr...@ida.ing.tu-bs.de is used 
as the from address for all recipients.


Note that the sysgroup@ address appears nowhere in the dovecot config, 
but it does in

/etc/postfix/sender-canonical like that:

r...@ida.ing.tu-bs.de  sysgr...@ida.ing.tu-bs.de
@ida.ing.tu-bs.de  sysgr...@ida.ing.tu-bs.de
@net.ida   sysgr...@ida.ing.tu.bs.de


The /etc/dovecot/conf.d/90-sieve.conf is mostly at default values; the 
remaining config settings are these:



plugin {
  sieve = file:~/Maildir/sieve;active=~/Maildir/.dovecot.sieve

  sieve_extensions = +editheader

  #Send vacation auto responder with correct Address
  sieve_vacation_send_from_recipient = yes
  sieve_vacation_use_original_recipient = no
}


I have attached the output of "dovecot -n" below.


Many thanks for any ideas on what could be the issue!


Kind regards,

Thomas


--
Dipl. Inf. Thomas Boroske

Institute of Computer and Network Engineering
TU Braunschweig
Hans-Sommer-Str. 66, D-38106 Braunschweig, Germany
www.ida.ing.tu-bs.de

# 2.3.16 (7e2e900c1a): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.16 (09c29328)
# OS: Linux 5.14.0-316.el9.x86_64 x86_64 CentOS Stream release 9 ext4
# Hostname: letterbox.net.ida
auth_mechanisms = plain login
auth_username_format = %n
auth_verbose = yes
first_valid_uid = 1000
last_valid_uid = 1000
lda_original_recipient_header = X-Original-To
lmtp_save_to_detail_mailbox = yes
log_path = /var/log/dovecot.log
mail_location = maildir:/var/spool/imap/%u/Maildir
mail_plugins = " acl notify mail_log"
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext editheader
mbox_write_locks = fcntl
namespace {
  hidden = no
  ignore_on_failure = no
  list = children
  location = maildir:%%h/Maildir:INDEXPVT=~/Maildir/shared/%%u
  prefix = shared/%%u/
  separator = /
  subscriptions = yes
  type = shared
}
namespace inbox {
  inbox = yes
  location = 
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/spool/imap/shared-mailboxes.db
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  sieve = file:~/Maildir/sieve;active=~/Maildir/.dovecot.sieve
  sieve_extensions = +editheader
  sieve_vacation_send_from_recipient = yes
  sieve_vacation_use_original_recipient = no
}
pop3_uidl_format = %v.%u
protocols = imap pop3 lmtp sieve
service auth {
  inet_listener {
port = 12345
  }
  unix_listener auth-userdb {
group = vmail
user = vmail
  }
}
service lmtp {
  inet_listener lmtp {
port = 24
  }
}
service managesieve-login {
  inet_listener sieve {
port = 4190
  }
}
ssl = required
ssl_cert = 


Re: DOvecot requires both IPv4 and IPV6 to start

2023-09-04 Thread Thomas Schäfer

Am 04.09.23 um 13:24 schrieb TWHG Technical via dovecot:

Hello,

I hope this is the right place to start. On Ubuntu server, the default listener configuration in dovecot.conf uses both IPv4 and IPv6,


I think that is a good default value.



but on systems that have IPv6 disabled, dovecot will not start.


That's not so good.



Is it possible to set the default to:

listen = * to only bind to IPv4 for installation and initial start, rather than listen = *, :: which tries to bind to a non-existent IPv6 stack.


I don't like that idea. It could break other installations. The default 
should stay at dual stack.



Or simply fall back to IPv4 if IPv6 is not available.

Maybe that is a good idea. But you are responsible for the missing 
network connectivity / IPv6, so please adjust your settings.
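For the record, the per-host adjustment is a single line in dovecot.conf:

# bind IPv4 only, for hosts that have IPv6 disabled
listen = *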




Regards,
Thomas





--

There’s no place like ::1

Thomas Schäfer (Systemverwaltung)
Ludwig-Maximilians-Universität
Centrum für Informations- und Sprachverarbeitung
Oettingenstraße 67 Raum C109
80538 München ☎ +49/89/2180-9706  ℻ +49/89/2180-9701




Re: IMAP account can't save any email with attachment

2023-07-25 Thread Chris Thomas
: plugin/quota_rule=*:bytes=0
> Jul 20 15:06:21 imap(myu...@mydomain.com)<2181><7W6sfewAb8VfWumz>: Debug:
> Effective uid=8, gid=8, home=/mail/mydomain.com/myuser
> Jul 20 15:06:21 imap(myu...@mydomain.com)<2181><7W6sfewAb8VfWumz>: Debug:
> Namespace inbox: type=private, prefix=, sep=, inbox=yes, hidden=no,
> list=yes, subscriptions=yes location=maildir:/mail/mydomain.com/myuser
> Jul 20 15:06:21 imap(myu...@mydomain.com)<2181><7W6sfewAb8VfWumz>: Debug:
> maildir++: root=/mail/mydomain.com/myuser, index=, indexpvt=, control=,
> inbox=/mail/mydomain.com/myuser, alt=
> Jul 20 15:06:21 imap(myu...@mydomain.com)<2181><7W6sfewAb8VfWumz>: Debug:
> Mailbox Drafts: Mailbox opened because: SELECT


In the thunderbird client, I wait for ages before a popup appears saying
"Your draft message was not copied to your drafts folder (Drafts) due to
network or file access errors.
You can retry or save the draft locally to Local Folders"

I've tried searching around for information on what the problem could be,
but I've not found anything that would explain it. Do you have any ideas?

Chris

On Thu, Jul 20, 2023 at 3:20 PM William Edwards wrote:

>
> > Op 20 jul. 2023 om 14:26 heeft Chris Thomas 
> het volgende geschreven:
> >
> > 
> > Hi,
> >
> > I'm getting a curious problem where if I write a draft without an
> attachment and click save. It'll work without any issue at all.
> >
> > But if I do the same, then attach a file to the email, it'll sit there
> for a couple of minutes before timing out (I'm using thunderbird), it'll
> eventually give you a message saying
> >
> > "Your draft message was not copied to your drafts folder (Drafts) due to
> network or file access errors."
> >
> > I've got all of dovecots verbose logging turned on.
>
> Cool! So … where is it?
>
> > I'm using dovecot as a submission server through to the postfix server
> to do the actual sending. All the logging is turned on there too. But I
> can't figure out what the problem is.
> >
> > Is there anything I can look for in the logs that will help me out?
> >
> > chris
> > ___
> > dovecot mailing list -- dovecot@dovecot.org
> > To unsubscribe send an email to dovecot-le...@dovecot.org
>
>


Re: IMAP account can't save any email with attachment

2023-07-20 Thread Chris Thomas
Here is the info from dovecot -n

dovecot.mail-server and postfix.mail-server are valid DNS entries. It's
running on a Kubernetes cluster, so those hostnames are provided by the
namespace and pod name. They work too: you can ping them, and everything
works except emails with attachments, for some reason.

# 2.3.4.1 (f79e8e7e4): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.4 ()
# OS: Linux 4.9.0-9-amd64 x86_64 Debian 10.13 ext4
# Hostname: dovecot.mail-server.svc.cluster.local
auth_debug = yes
auth_debug_passwords = yes
auth_mechanisms = plain login
auth_verbose = yes
auth_verbose_passwords = yes
disable_plaintext_auth = no
first_valid_gid = 8
first_valid_uid = 8
haproxy_timeout = 5 secs
haproxy_trusted_networks = 10.0.0.0/8
hostname = s3.mydomain.com
log_path = /dev/stderr
mail_access_groups = mail
mail_debug = yes
mail_gid = mail
mail_home = /mail/%d/%n
mail_location = maildir:/mail/%d/%n
mail_plugins = " zlib"
mail_privileged_group = mail
mail_uid = mail
maildir_stat_dirs = yes
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Junk {
    auto = subscribe
    special_use = \Junk
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox Trash {
    auto = subscribe
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
postmaster_address = i...@mydomain.com
protocols = " imap lmtp pop3 submission"
service auth-worker {
  unix_listener auth-worker {
    group = mail
    mode = 0660
    user = $default_internal_user
  }
  user = mail
}
service auth {
  user = $default_internal_user
}
service dict {
  unix_listener dict {
    group = mail
    mode = 0660
  }
}
service imap-login {
  inet_listener imap {
    haproxy = yes
    port = 143
  }
  inet_listener imaps {
    haproxy = yes
    port = 993
    ssl = yes
  }
}
service lmtp {
  inet_listener lmtp {
    haproxy = no
    port = 24
  }
}
service pop3-login {
  inet_listener pop3 {
    haproxy = yes
    port = 110
  }
  inet_listener pop3s {
    haproxy = yes
    port = 995
    ssl = yes
  }
}
service submission-login {
  inet_listener submission {
    haproxy = yes
    port = 587
  }
}
ssl_cert = 

On Thu, Jul 20, 2023 at 3:20 PM William Edwards wrote:

>
> > Op 20 jul. 2023 om 14:26 heeft Chris Thomas 
> het volgende geschreven:
> >
> > 
> > Hi,
> >
> > I'm getting a curious problem where if I write a draft without an
> attachment and click save. It'll work without any issue at all.
> >
> > But if I do the same, then attach a file to the email, it'll sit there
> for a couple of minutes before timing out (I'm using thunderbird), it'll
> eventually give you a message saying
> >
> > "Your draft message was not copied to your drafts folder (Drafts) due to
> network or file access errors."
> >
> > I've got all of dovecots verbose logging turned on.
>
> Cool! So … where is it?
>
> > I'm using dovecot as a submission server through to the postfix server
> to do the actual sending. All the logging is turned on there too. But I
> can't figure out what the problem is.
> >
> > Is there anything I can look for in the logs that will help me out?
> >
> > chris
> > ___
> > dovecot mailing list -- dovecot@dovecot.org
> > To unsubscribe send an email to dovecot-le...@dovecot.org
>
>


IMAP account can't save any email with attachment

2023-07-20 Thread Chris Thomas
Hi,

I'm getting a curious problem: if I write a draft without an attachment
and click save, it works without any issue at all.

But if I do the same and then attach a file to the email, it sits there for
a couple of minutes before timing out (I'm using Thunderbird), and
eventually gives a message saying

"Your draft message was not copied to your drafts folder (Drafts) due to
network or file access errors."

I've got all of dovecots verbose logging turned on. I'm using dovecot as a
submission server through to the postfix server to do the actual sending.
All the logging is turned on there too. But I can't figure out what the
problem is.

Is there anything I can look for in the logs that will help me out?

chris


Re: GSSAPI auth Line too long

2023-05-31 Thread Thomas Lemarchand via dovecot

Hi !

Are you saying I should open a bug report for the Thunderbird developers?
I did not find a reference to a 998-byte limit; do you have something I 
can refer to?


Thank you.
--
Thomas Lemarchand

On 5/30/23 20:35, Aki Tuomi via dovecot wrote:

On 30/05/2023 20:54 EEST Thomas Lemarchand via dovecot  
wrote:

  
Hello,


On version 2.3.20 (80a5ac675d), I have a problem with submission-login
when using GSSAPI auth: it's not working, probably due to the AUTH line
being too long.
It appeared after I activated PAC on my Kerberos infrastructure. Now the
Kerberos tickets contain MS-PAC data and are bigger. It's part of the
RFC and is a valid use case:
https://datatracker.ietf.org/doc/html/rfc4120#section-5.2.6

Logs :


My guess is that it's due to
https://github.com/dovecot/core/blob/main/src/lib-smtp/smtp-common.h#L10
being too low (is it configurable?), but I didn't read the code thoroughly.
Red Hat IDM now activates MS-PAC by default, so any installation based
on IDM (or FreeIPA) may have the same problem.
What's your opinion? Bug?

Mail sent using password auth :'(

--
Thomas Lemarchand



Hi!

This is an RFC limitation. SASL-IR may not exceed 998 bytes including AUTH 
GSSAPI and \r\n.

If the SASL-IR exceeds this, then the client must use interactive SASL.

Aki





Re: GSSAPI auth Line too long

2023-05-30 Thread Thomas Lemarchand via dovecot
Thank you for this idea. I already had "imap_max_line_length = 256k"; 
I tried 2M, but unfortunately it still does not work.

--
Thomas

On 5/30/23 20:27, Kees van Vloten wrote:


On 30-05-2023 19:54, Thomas Lemarchand via dovecot wrote:

Hello,

On version 2.3.20 (80a5ac675d), I have a problem with 
submission-login when using GSSAPI auth: it's not working, probably 
due to the AUTH line being too long.
It appeared after I activated PAC on my Kerberos infrastructure. Now 
the Kerberos tickets contain MS-PAC data and are bigger. It's part 
of the RFC and is a valid use case: 
https://datatracker.ietf.org/doc/html/rfc4120#section-5.2.6



Correct, but you can and should increase line length:

imap_max_line_length = 2M

With this length it works for me with Samba-AD-DC.

- Kees.


Logs :

May 30 17:13:00 auth: Debug: auth client connected (pid=378)
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Sent: 220 mail.int.k8s.lemarchand.io Dovecot ready.
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Received new command: EHLO [192.168.202.16]
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: New command
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Execute command
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Pipeline blocked
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: 250 reply: Submitted
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Replied
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Ready to reply
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Trigger output
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Next to reply
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Sending replies
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Next to reply
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Completed
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Pipeline unblocked
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Connection state reset
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: 250 reply: Sent: 250-mail.int.k8s.lemarchand.io 8BITMIME AUTH GSSAPI PLAIN LOGIN BURL imap CHUNKING ENHANCEDSTATUSCODES SIZE PIPELINING
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Finished
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: Destroy
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command EHLO: 250 reply: Destroy
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Trigger output
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: No more commands pending
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Sending replies
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: No more commands pending
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Client sent invalid command: Command line is too long
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command [unknown]: Invalid command
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command [unknown]: 500 reply: Submitted
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command [unknown]: Replied
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command [unknown]: Ready to reply
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Trigger output
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Sending replies
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command [unknown]: Next to reply
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command [unknown]: Completed
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command [unknown]: 500 reply: Sent: 500 5.5.2 Line too long
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: command [unknown]: Finished
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1

GSSAPI auth Line too long

2023-05-30 Thread Thomas Lemarchand via dovecot
-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Sending replies
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: No more commands pending
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Remote closed connection: Connection closed
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Disconnected: Connection closed
May 30 17:13:00 submission-login: Debug: smtp-server: conn 10.200.114.128:13587 [1]: Connection state reset


My guess is that it's due to 
https://github.com/dovecot/core/blob/main/src/lib-smtp/smtp-common.h#L10 
being too low (is it configurable?), but I didn't read the code thoroughly.
Red Hat IDM now activates MS-PAC by default, so any installation based 
on IDM (or FreeIPA) may have the same problem.

What's your opinion? Bug?

Mail sent using password auth :'(

--
Thomas Lemarchand




amount of users

2023-05-05 Thread Thomas Schäfer

Hi,

is there somewhere a table/matrix/function, based on experience, for how many 
users require which values for


mail_max_userip_connections

default_client_limit

default_process_limit

...?

I ran into problems with a small group of users, since the defaults seem 
to be sized for one-man shows only.


The second assumption is: users are permanently online (connected to 
IMAP) and use two devices on average (one or two desktops, plus one 
mobile device).
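
For reference, my reading of the compiled-in 2.3 defaults behind those three 
settings (worth double-checking against your version's docs):

mail_max_userip_connections = 10   # per protocol, per user+IP pair
default_client_limit = 1000        # connections per process
default_process_limit = 100        # processes per service

So two always-on devices per user mostly stress default_process_limit on the 
imap service, since every connection gets its own imap process by default.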


Regards,

Thomas



--

There’s no place like ::1

Thomas Schäfer (Systemverwaltung)
Ludwig-Maximilians-Universität
Centrum für Informations- und Sprachverarbeitung
Oettingenstraße 67 Raum C109
80538 München ☎ +49/89/2180-9706  ℻ +49/89/2180-9701



Re: Message searching in Dovecot

2023-04-20 Thread Thomas Zajic

* Aki Tuomi via dovecot, 20.04.23 11:46


[...]
Biggest issue in my mind is that you will need to tell Solr to update
its indexes (somehow) when using version 8, before upgrading to 9,
because the older indexes are no longer compatible with 9.

If by that you mean migrating from [Fast]LRUCache to CaffeineCache, I found
PGNet Dev's post here [1] and Shawn Heisey's followup here [2] (including a
very handy script) extremely helpful when I did that last year:

[1] https://dovecot.org/pipermail/dovecot/2022-May/124701.html
[2] https://dovecot.org/pipermail/dovecot/2022-May/124711.html
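The gist of that migration, as I understand it, is swapping the removed cache
classes in solrconfig.xml for CaffeineCache, e.g.:

<!-- before (removed in Solr 9): -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<!-- after: -->
<filterCache class="solr.CaffeineCache" size="512" autowarmCount="0"/>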


[...]
Other than that, it was pretty simple to get it working in the end.


ACK, same here. :-) Have not yet upgraded to Solr 9.x, though, I'm currently
still at 8.11.2.

Bye,
Thomas


Re: How to use dovecot with more recent versions of solr (8x, 9x)?

2023-03-27 Thread Thomas Zajic

* j...@w3.org, 27.03.23 15:28


[...]
Could someone confirm me if those 7.7.0 configuration files are compatible with
8.11.2? A diff between those files and the ones from the 8.11.2 dist reveal some
changes, starting from the LuceneVersion.
[...]

Following https://dovecot.org/pipermail/dovecot/2022-May/124701.html and
its replies worked fine here upgrading to 8.11.2. Have not tried 9.x yet,
though.

HTH
Thomas



Dynamically set LDAP user namespace using two attributes

2022-10-26 Thread Thomas Leuxner
Hi,

I’m looking for a way to use two LDAP attributes to create a user-specific 
namespace autoexpunge setting, e.g.


mailExpungeTrash=namespace/inbox/mailbox/%{ldap:mailTrashMailbox}/autoexpunge

I can create the namespace fine returning the full string to be expunged:

mailExpungeTrash=namespace/inbox/mailbox/Deleted\_Messages/autoexpunge

However, I want to alter the namespace part using a second LDAP attribute as well, like:


mailExpungeTrash=namespace/inbox/mailbox/%{ldap:mailTrashMailbox}/autoexpunge

Regards
Thomas







Re: Interfacing mutt with Dovecot

2022-07-19 Thread Thomas Zajic

* Steve Litt, 18.07.22 19:20


All my email for the past 20 years is held on a Dovecot IMAP
server (version 2.3.19.1 (9b53102964)) on my desktop. I've been using
Claws-Mail but want to switch to Mutt.
[...]
I know some people have been very successful running Mutt to access an
IMAP server, so it appears to be possible. How should I run Mutt to
access my Dovecot?


With dovecot-2.3.19.1 and mutt-2.2.6 and/or neomutt-20220429, this is
all that's needed to make it work (your "imap_authenticators" setting
should match your own setup, obviously):


[zlatko@disclosure:~]$ grep -i imap .muttrc
set folder="imaps://myu...@imap.mydomain.net"
set imap_user="myuser"
set imap_pass="mypass"
set imap_authenticators="cram-md5"
set imap_check_subscribed=yes
set imap_idle=yes
set imap_keepalive=600
set imap_peek=no
set imap_passive=no
set spoolfile="imaps://myu...@imap.mydomain.net/"


No problems with missing folders whatsoever.

HTH
Thomas


Re: Random behavior with sieve extprograms

2022-06-04 Thread Thomas Sommer
After adding the sleep(3) to my php script, I observed the processing 
for the last couple of days.

At first it seemed fixed, but today it happened again.

Same story: sieve: Execution of script failed.
But again, the script ran correctly.

I ran the following test over the last 250 emails I received:
#!/bin/bash
set -x
set -v

find ./tests/Dabs -type f -name '*' -exec sh -c '
  for file do
    echo "$file"
    php artisan dabs:processEmail < "$file" --env=dabs
    echo "Exit code: $?"
  done
' exec-sh {} +

Exit code of the script is always 0.

I don't think it's a locking issue, as there are only 4 emails each day, 
and the script above runs at a much faster pace. I ran the test 
without the sleep(3).


Any other ideas?

Thanks
Thomas

On 2022-06-01 20:48, John Stoffel wrote:

"Thomas" == Thomas Sommer  writes:


Thomas> Hi John
Thomas> On 2022-06-01 02:50, John Stoffel wrote:

"Thomas" == Thomas Sommer  writes:



Thomas> I have a random behavior with dovecot and sieve extprograms.



Thomas> Here is my sieve file:
Thomas> require ["fileinto", "vnd.dovecot.pipe", "copy", "imap4flags"];
Thomas> # rule:[DABS]
Thomas> if header :contains "X-Original-To" "d...@mydomain.ch"
Thomas> {
Thomas> pipe "sieve-dabs-execute.sh";
Thomas> setflag "\\Seen";
Thomas> fileinto "acme.DABS";
Thomas> stop;
Thomas> }


Can you post the code of this script?  Are you trapping all 
exceptions

in that script and making sure you only return errors when there
really is an error?


Thomas> Emails matching the condition are processed by a laravel (php)

artisan

Thomas> command. See service sieve-pipe-script below.
Thomas> The exit code of this php command is 0.


You are calling the php command from a shell script, so there's
multiple places things could go wrong.  Why not just pipe directly to
the php script (which wasn't included unless I'm totally blind and
dumb tonight... :-) instead?


Thomas> "sieve-dabs-execute.sh" is just the socket name. It was a
Thomas> shell script previously and I never updated the socket
Thomas> name. See service sieve-pipe-script in the dovecot -n output.
Thomas> It calls the php script directly: executable = script
Thomas> /usr/bin/php /srv/www/mydomain/status/artisan
Thomas> dabs:processEmail

Thanks for the clarification, I missed that part before.

Thomas> When testing directly on the cli, it works flawlessly, return
Thomas> code is 0.  bash: php artisan dabs:processEmail < email.file

How about if you run multiple copies of the script at the same time on
the console?  You might be running into contention there.

Thomas> Here is the handle method of the php script:

Thomas> public function handle()
Thomas>  {
Thomas>      $fd = \fopen('php://stdin', 'rb');

Thomas>  $parser = new MailMimeParser();
Thomas>  $message = $parser->parse($fd, true);

Thomas>  $subject = $message->getHeader(HeaderConsts::SUBJECT);
Thomas>  $dabsDate = \substr(\trim($subject), -11, 8);
Thomas>  $date = \Carbon\Carbon::parse($dabsDate);
Thomas>  $version = 
\substr($message->getHeader(HeaderConsts::SUBJECT),

Thomas> -2);

Thomas>  $attachment = $message->getAttachmentPart(0);
Thomas>  $filename = $attachment->getFilename();

Thomas>      if (Storage::exists('/dabs/' . $filename)) {
Thomas>  Log::info('Processing DABS duplicate version: ' .
$version .
Thomas> ' of: ' . $date->format('Y-m-d'));
Thomas>  // increment number to filename
Thomas>  $a = 1;
Thomas>      do {
Thomas>  $filename_new = \basename($filename, '.pdf')
. '_' . $a
Thomas> . '.pdf';
Thomas>      $a++;
Thomas>  if ($a > 9) {
Thomas>      Log::error('DABS duplicate processing > 9.
Thomas> Exiting.');
Thomas>  $this->error('DABS duplicate processing > 
9.

Thomas> Exiting.');
Thomas>  exit(1);
Thomas>  }
Thomas>  $filename = $filename_new;
Thomas>  } while ($this->dabsFileExists($filename_new));
Thomas>  }

Thomas>  Storage::put('/dabs/' . $filename, 
$attachment->getContent());

Thomas>  $dabs = Dabs::create(
Thomas>  [
Thomas>  'date' => $date,
Thomas>  'version' => $version,
Thomas>  'file' => 'dabs/' . $filename,
Thomas>  ]
Thomas>  );


This part might break because you assume that you're the only
instance of the script running.  You really want to do some locking,
and one way to do that is to try and create a new file in a loop,
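
A rough PHP sketch of that create-a-file-in-a-loop locking idea (the lock
path is hypothetical):

$lockFile = '/tmp/dabs.lock';
$lock = false;
// fopen mode 'x' creates the file atomically and fails if it already exists
for ($tries = 0; $tries < 30; $tries++) {
    $lock = @fopen($lockFile, 'x');
    if ($lock !== false) {
        break;          // we own the lock now
    }
    sleep(1);           // another instance is running; wait and retry
}
if ($lock === false) {
    exit(1);            // could not acquire the lock in time
}
// ... process the message as in handle() above ...
fclose($lock);
unlink($lockFile);      // release the lock for the next instance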

Re: Random behavior with sieve extprograms

2022-06-01 Thread Thomas Sommer

Hi John

On 2022-06-01 02:50, John Stoffel wrote:

"Thomas" == Thomas Sommer  writes:


Thomas> I have a random behavior with dovecot and sieve extprograms.

Thomas> Here is my sieve file:
Thomas> require ["fileinto", "vnd.dovecot.pipe", "copy", "imap4flags"];
Thomas> # rule:[DABS]
Thomas> if header :contains "X-Original-To" "d...@mydomain.ch"
Thomas> {
Thomas>  pipe "sieve-dabs-execute.sh";
Thomas>  setflag "\\Seen";
Thomas>  fileinto "acme.DABS";
Thomas>  stop;
Thomas> }

Can you post the code of this script?  Are you trapping all exceptions
in that script and making sure you only return errors when there
really is an error?

Thomas> Emails matching the condition are processed by a laravel (php) 
artisan

Thomas> command. See service sieve-pipe-script below.
Thomas> The exit code of this php command is 0.

You are calling the php command from a shell script, so there's
multiple places things could go wrong.  Why not just pipe directly to
the php script (which wasn't included unless I'm totally blind and
dumb tonight... :-) instead?


"sieve-dabs-execute.sh" is just the socket name. It was a shell script 
previously and I never updated the socket name. See service 
sieve-pipe-script in the dovecot -n output.

It calls the php script directly:
executable = script /usr/bin/php /srv/www/mydomain/status/artisan 
dabs:processEmail


When testing directly on the cli, it works flawlessly, return code is 0.
bash: php artisan dabs:processEmail < email.file

Here is the handle method of the php script:

public function handle()
{
    $fd = \fopen('php://stdin', 'rb');

    $parser = new MailMimeParser();
    $message = $parser->parse($fd, true);

    $subject = $message->getHeader(HeaderConsts::SUBJECT);
    $dabsDate = \substr(\trim($subject), -11, 8);
    $date = \Carbon\Carbon::parse($dabsDate);
    $version = \substr($message->getHeader(HeaderConsts::SUBJECT), -2);

    $attachment = $message->getAttachmentPart(0);
    $filename = $attachment->getFilename();

    if (Storage::exists('/dabs/' . $filename)) {
        Log::info('Processing DABS duplicate version: ' . $version . ' of: ' . $date->format('Y-m-d'));

        // increment number to filename
        $a = 1;
        do {
            $filename_new = \basename($filename, '.pdf') . '_' . $a . '.pdf';
            $a++;
            if ($a > 9) {
                Log::error('DABS duplicate processing > 9. Exiting.');
                $this->error('DABS duplicate processing > 9. Exiting.');
                exit(1);
            }
            $filename = $filename_new;
        } while ($this->dabsFileExists($filename_new));
    }

    Storage::put('/dabs/' . $filename, $attachment->getContent());
    $dabs = Dabs::create(
        [
            'date' => $date,
            'version' => $version,
            'file' => 'dabs/' . $filename,
        ]
    );

    if ($date->eq(today()) || $date->eq(today()->addDay())) {
        event(new DabsReceived($dabs));
    }

    Log::info('Processing DABS email for DABS version: ' . $version . ' of: ' . $date->format('Y-m-d'));

    sleep(3);
    return 0;
}


It honestly sounds like a timing issue, maybe just putting a sleep
into your shell script at the end would be good?  Or maybe run with
the -vx switches so you log all the commands and their results?


I've added a 3 second sleep in my php script and will observe.

Could you explain where to add the -vx switch?


Thomas> I randomly get the following in my postfix logs:
Thomas> Sieve thinks that the command failed, but the email was always
processed
Thomas> correctly. In that case I get a copy in my Inbox.
Thomas> I'm wondering what could be the cause for this random behavior.
Thomas> My guess is that approximately 70% are processed correctly, 30% 
is as

Thomas> below.

Thomas> May 31 13:50:38 star dovecot[99425]:
Thomas> lda(user)<99425>: sieve:
Thomas> msgid=<62961d1c.5y4hr0vqi97jfnyb%dabs.zsmsv...@example.com>: 
fileinto

Thomas> action: stored mail into mailbox 'acme.DABS'
Thomas> May 31 13:50:39 star dovecot[99425]:
Thomas> lda(user)<99425>: sieve:
Thomas> msgid=<62961d1c.5y4hr0vqi97jfnyb%dabs.zsmsv...@example.com>:
stored mail
Thomas> into mailbox 'INBOX'
Thomas> May 31 13:50:39 star dovecot[99425]:
Thomas> lda(user)<99425>: sieve: Execution of 
script

Thomas> /home/user/sieve/.dovecot.sieve failed, but implicit keep was
successful
Thomas> (user logfile /home/user/sieve/.dovecot.sieve.log may reveal 
additional

Thomas> details)

Thomas> .dovecot.sieve.log:
Thomas> sieve: info

Random behavior with sieve extprograms

2022-05-31 Thread Thomas Sommer

Hi

I have a random behavior with dovecot and sieve extprograms.

Here is my sieve file:
require ["fileinto", "vnd.dovecot.pipe", "copy", "imap4flags"];
# rule:[DABS]
if header :contains "X-Original-To" "d...@mydomain.ch"
{
pipe "sieve-dabs-execute.sh";
setflag "\\Seen";
fileinto "acme.DABS";
stop;
}

Emails matching the condition are processed by a laravel (php) artisan 
command. See service sieve-pipe-script below.

The exit code of this php command is 0.

I randomly get the following in my postfix logs:
Sieve thinks that the command failed, but the email was always processed 
correctly. In that case I get a copy in my Inbox.

I'm wondering what could be the cause for this random behavior.
My guess is that approximately 70% are processed correctly, 30% is as 
below.


May 31 13:50:38 star dovecot[99425]: 
lda(user)<99425>: sieve: 
msgid=<62961d1c.5y4hr0vqi97jfnyb%dabs.zsmsv...@example.com>: fileinto 
action: stored mail into mailbox 'acme.DABS'
May 31 13:50:39 star dovecot[99425]: 
lda(user)<99425>: sieve: 
msgid=<62961d1c.5y4hr0vqi97jfnyb%dabs.zsmsv...@example.com>: stored mail 
into mailbox 'INBOX'
May 31 13:50:39 star dovecot[99425]: 
lda(user)<99425>: sieve: Execution of script 
/home/user/sieve/.dovecot.sieve failed, but implicit keep was successful 
(user logfile /home/user/sieve/.dovecot.sieve.log may reveal additional 
details)


.dovecot.sieve.log:
sieve: info: started log at May 31 13:50:39.
error: failed to pipe message to program `sieve-dabs-execute.sh': refer 
to server log for more information. [2022-05-31 13:50:39].


It's weird. "failed to pipe message to program" is simply not true. The 
command was processed correctly.


Any ideas where to look for clues or how to debug this?

Regards
Thomas

config:

# 2.3.14 (cee3cbc0d): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.14 (1b5c82b2)
# OS: Linux 5.17.5-x86_64-linode154 x86_64 Ubuntu 20.04.4 LTS
auth_mechanisms = plain login
auth_username_format = %n
auth_verbose = yes
mail_location = maildir:~/Maildir
mail_plugins = " quota"
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date index ihave duplicate mime foreverypart 
extracttext vnd.dovecot.pipe vnd.dovecot.execute

namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  driver = pam
}
plugin {
  quota = fs:User quota
  quota_grace = 1%%
  quota_status_nouser = DUNNO
  quota_status_overquota = 552 5.2.2 Mailbox is full
  quota_status_success = DUNNO
  sieve = file:~/sieve;active=~/sieve/.dovecot.sieve
  sieve_execute_socket_dir =
  sieve_extensions = +vnd.dovecot.pipe +vnd.dovecot.execute
  sieve_pipe_exec_timeout = 30s
  sieve_pipe_socket_dir =
  sieve_plugins = sieve_extprograms
  sieve_redirect_envelope_from = recipient
  sieve_trace_debug = no
  sieve_trace_dir = ~/sieve/trace
  sieve_trace_level = matching
}
protocols = imap sieve
service auth {
  unix_listener /var/spool/postfix/private/dovecot-auth {
group = postfix
mode = 0660
user = postfix
  }
}
service quota-status {
  client_limit = 1
  executable = /usr/lib/dovecot/quota-status -p postfix
  inet_listener {
address = 127.0.0.1
port = 8881
  }
}
service sieve-pipe-script {
  executable = script /usr/bin/php /srv/www/mydomain/status/artisan 
dabs:processEmail

  unix_listener sieve-dabs-execute.sh {
mode = 0660
user = user
  }
  user = www-data
  vsz_limit = 512 M
}
ssl = required
ssl_cert = 
  rejection_reason = Your message to <%t> was automatically rejected:%n%r

}
protocol imap {
  mail_max_userip_connections = 20
  mail_plugins = " quota mail_log notify imap_quota"
}


Re: Force TCP socket disconnect on imap login failure?

2022-05-24 Thread Thomas Zajic

* Hippo Man, 23.05.22 22:54


[...] However, this does not drop connections that are existing and
already open. It will only drop *future* connections from that IP
address to port 143.

This is why I want to kill the existing connection. Even after that
"iptables" command is issued, the entity which is connected to the
imap port can continue to send more and more imap commands. [...]

If your version of 'ss' is recent enough, you can use 'ss -K' to
instantly kill an open connection. Other tools you could try are
'killcx' and 'tcpkill' (part of the 'dsniff' toolkit):

http://killcx.sourceforge.net/
https://www.monkey.org/~dugsong/dsniff/
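
For example, assuming kernel support for socket destruction
(CONFIG_INET_DIAG_DESTROY), something along these lines should work;
the exact filter syntax may vary between ss versions:

# kill all established TCP connections from a given client address (example IP)
ss -K -t state established dst 203.0.113.7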

HTH
Thomas


master user namespace error?

2022-04-26 Thread Thomas Winterstein

Hello,

I've set up a master user for migration purposes, and doveadm auth tests 
succeed. If I test with telnet, the login as user also works and 
performs a user search on LDAP, but I get a namespace error:



Debug: ldap(testuser,1.2.3.4,<0o/OTasGHdfNCB>): Finished userdb lookup
Debug: master userdb out: USER#011911998977#011testuser#011uid=500#011gid=500#011nopassword=#011proxy_maybe=yes#011master_user=admin#011auth_mech=PLAIN#011auth_token=b8c3dfdfcc9ac33cb1c2f1d755g8e04db58a7aa0#011auth_user=admin
imap-login: Login: user=, method=PLAIN, rip=1.2.3.4, lip=5.6.7.8, mpid=22569, TLS, session=<0o/OTasGHdfNCB>
Error: Namespace '': Mail storage autodetection failed with home=(not set)
Disconnected: Namespace '': Mail storage autodetection failed with home=(not set) in=0 out=417 deleted=0 expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes
body_count=0 body_bytes


Anything special to consider in a proxy setup?

thank you
Thomas


Dovecot Docker Image Volumes

2022-01-02 Thread Thomas Bellebaum
Hello there and happy new year,

I have a question/request regarding the Docker Image hosted at 
https://hub.docker.com/r/dovecot/dovecot.
The Dockerfile itself declares two volumes:

- `/etc/dovecot` for configuration data
- `/srv/mail` for mail storage

It seems inconvenient in some cases to have the image create these volumes, 
especially in the case of the former.

Consider a minimal Dockerfile like the following:

```
FROM dovecot/dovecot:latest
COPY dovecot.conf /etc/dovecot/dovecot.conf
```

This creates a new image building on top of the official one,
which has statically configured configuration, and thus does not need to save 
its config in a volume.
Yet currently, since the base image exports volumes, a config volume is created.

A user might also choose to save mail in a different directory or even in a 
remote SQL database, rendering the second volume unnecessary.
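
For illustration, the explicit bind mounts that make the implicit volumes 
redundant (host paths are examples):

```
docker run -d \
  -v /srv/dovecot/etc:/etc/dovecot \
  -v /srv/dovecot/mail:/srv/mail \
  dovecot/dovecot:latest
```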

I would like to know a bit about the reasons for declaring the volumes,
and suggest removing the line, should this be an option.

Some impact considerations:

- Removing the volumes for future image versions will not impact existing 
deployments building on tags other than `latest`.
- As the default (example) configuration is not very useful for 
non-test-setups, most people have probably assigned the config volume 
explicitly, e.g. using docker's `-v` option. These people will also not be 
affected.
- The description explicitly states to mount `/srv/mail/`, but some people 
might rely on the automatic volume creation nonetheless.
- Some obscure proprietary scripts may rely on the current behavior.

In any case, if the volumes are no longer declared, the image description 
should mention that the mail storage location (the default being `/var/mail`) 
must be saved e.g. by using volumes, and probably also that the configuration 
is expected at `/etc/dovecot/dovecot.conf`.

Stay healthy and have a nice day,

-- 
Thomas Bellebaum 




Issue with SCRAM-SHA Authorization

2021-12-01 Thread Thomas Schmid
input)));
  str_append(str, "c=");
  base64_encode(cbind_input, strlen(cbind_input), str);

  if (strcmp(fields[0], str_c(str)) != 0) {
*error_r = "Invalid channel binding data";
return FALSE;
  }

As you can see, it uses the bind flag which was saved to the request. But
instead of the authzid, it always uses a hardcoded empty string.

Thus cbind-input is "n,," instead of "n,a=user,", which results in the
request being rejected with "Invalid channel binding data".

On the one hand this is a funny message, because dovecot does not
support channel binding at all, as described in
https://github.com/dovecot/core/blob/a5209c83c3a82386c94d466eec5fea394973e88f/src/auth/mech-scram.c#L164
but on the other hand it is somehow correct, because the cbind-input string
does not match. This is an illegal state during channel binding
negotiation which should not happen.

As said previously, it looks to me like a server-side bug. Or did I miss
some special case in the RFC?

Kind Regards

Thomas Schmid


Re: health check passthrough not 100% in combination with haproxy

2021-10-17 Thread Thomas Zajic
* Marc, 17.10.21 01:15

> I have been trying to get a simple health check in haproxy to work. But
> somehow the haproxy request is handled differently than a curl request,
> which generates a socket error in haproxy.
> [...]

For stats, you can use "socat" to talk to haproxy's stat socket instead of using
its HTTP interface. For example, with "stats socket /var/lib/haproxy/stats" in
haproxy's config, you can then do:

echo "show stat" | socat - "UNIX-CONNECT:/var/lib/haproxy/stats"

Maybe there is a way to also expose your health check status on a local socket
instead of an HTTP listener? This would at least eliminate having to deal with
missing or superfluous CR/LFs, spaces, etc.

Just a thought.

HTH,
Thomas


Re: Error and Panic (with coredump)

2021-04-28 Thread Thomas Knaute





On 27.4.2021 9.57, Thomas Knaute wrote:

On 26/04/2021 19:03, Thomas Knaute wrote:


Hi there,

i'm pretty new to this stuff, just tell me if you need more
information.

Apr 26 17:15:43 dilia dovecot: imap(u...@domain.de)<78561>: Error: i_stream_seekable_write_failed: close((>fd)) @ istream-seekable.c:246 failed (fd=21): Bad file descriptor

Regards, Thomas


Do you by change run out of disk space in /tmp or see any other errors?

Aki



it was
/dev/mapper/dilia-vg-tmp 360M 3,6M 334M 2% /tmp

now it is
/dev/mapper/dilia-vg-tmp 1,4G 2,8M 1,3G 1% /tmp

Other Errors:

The user has several thousand mails in the "Send" folder. Whenever the
webmailer tried to read the folder, this error occurred.
Apr 15 13:50:44 dilia dovecot:
imap(u...@domain.de)<11957>: Error: Raw backtrace:
/usr/lib/dovecot/libdovecot.so.0(+0xdb13b) [0x7fd9e90ce13b] ->
/usr/lib/dovecot/libdovecot.so.0(+0xdb1d1) [0x7fd9e90ce1d1] ->
/usr/lib/dovecot/libdovecot.so.0(+0x4a21b) [0x7fd9e903d21b] ->
/usr/lib/dovecot/libdovecot.so.0(+0x4dfc7) [0x7fd9e9040fc7] ->
/usr/lib/dovecot/libdovecot.so.0(+0xe6942) [0x7fd9e90d9942] ->
/usr/lib/dovecot/libdovecot.so.0(i_stream_alloc+0x88) [0x7fd9e90db098]
-> /usr/lib/dovecot/libdovecot.so.0(+0xee059) [0x7fd9e90e1059] ->
/usr/lib/dovecot/libdovecot.so.0(+0xee546) [0x7fd9e90e1546] ->
/usr/lib/dovecot/libdovecot.so.0(+0xe66d9) [0x7fd9e90d96d9] ->
/usr/lib/dovecot/libdovecot.so.0(i_stream_get_size+0x2a)
[0x7fd9e90da5aa] ->
/usr/lib/dovecot/modules/lib20_zlib_plugin.so(+0x417c)
[0x7fd9e8dfd17c] ->
/usr/lib/dovecot/libdovecot-storage.so.0(index_mail_set_seq+0x25)
[0x7fd9e924ceb5] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0xd19ce)
[0x7fd9e92539ce] ->
/usr/lib/dovecot/libdovecot-storage.so.0(index_storage_search_next_nonblock+0x10d)
[0x7fd9e925418d] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mailbox_search_next_nonblock+0x28)
[0x7fd9e91dce58] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mailbox_search_next+0x3f)
[0x7fd9e91dcedf] -> dovecot/imap [u...@domain.de 10.242.2.34 UID
fetch](+0x21847) [0x557be3f0e847] -> dovecot/imap [u...@domain.de
10.242.2.34 UID fetch](imap_fetch_more+0x39) [0x557be3f0f779] ->
dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](cmd_fetch+0x337)
[0x557be3f00c07] -> dovecot/imap [u...@domain.de 10.242.2.34 UID
fetch](command_exec+0x70) [0x557be3f0cd80] -> dovecot/imap
[u...@domain.de 10.242.2.34 UID fetch](+0x1e3f2) [0x557be3f0b3f2] ->
dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](+0x1e494)
[0x557be3f0b494] -> dovecot/imap [u...@domain.de 10.242.2.34 UID
fetch](client_handle_input+0x1b5) [0x557be3f0b845] -> dovecot/imap
[u...@domain.de 10.242.2.34 UID fetch](client_input+0x7e)
[0x557be3f0bd6e] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x6f)
[0x7fd9e90e45ef] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x136)
[0x7fd9e90e5be6] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x4c)
[0x7fd9e90e468c] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x40)
[0x7fd9e90e47f0]
Apr 15 13:50:44 dilia dovecot:
imap(u...@domain.de)<11957>: Fatal: master:
service(imap): child 11957 returned error 83 (Out of memory (service
imap { vsz_limit=512 MB }, you may need to increase it) - set
CORE_OUTOFMEM=1 environment to get core dump)

so i increased the limit:
/etc/dovecot/conf.d/10-master.conf:default_vsz_limit = 1024M



Did these actions fix the issues?



Increasing the size of /tmp seems to have solved the problem. I increased 
vsz_limit a few days ago; could it be that this is what caused the other 
error to occur (more often)?


thank you!

regards, thomas




Re: Error and Panic (with coredump)

2021-04-27 Thread Thomas Knaute








On 26/04/2021 19:03 Thomas Knaute  wrote:


Hi there,

i'm pretty new to this stuff, just tell me if you need more information.

Apr 26 17:15:43 dilia dovecot: imap(u...@domain.de)<78561>: Error: i_stream_seekable_write_failed: close((>fd)) @ istream-seekable.c:246 failed (fd=21): Bad file descriptor

Regards, Thomas


Do you by change run out of disk space in /tmp or see any other errors?

Aki



it was
/dev/mapper/dilia-vg-tmp 360M 3,6M 334M 2% /tmp

now it is
/dev/mapper/dilia-vg-tmp 1,4G 2,8M 1,3G 1% /tmp

Other Errors:

The user has several thousand mails in the "Send" folder. Whenever the 
webmailer tried to read the folder, this error occurred.


Apr 15 13:50:44 dilia dovecot:  
imap(u...@domain.de)<11957>: Error: Raw backtrace:  
/usr/lib/dovecot/libdovecot.so.0(+0xdb13b) [0x7fd9e90ce13b] ->  
/usr/lib/dovecot/libdovecot.so.0(+0xdb1d1) [0x7fd9e90ce1d1] ->  
/usr/lib/dovecot/libdovecot.so.0(+0x4a21b) [0x7fd9e903d21b] ->  
/usr/lib/dovecot/libdovecot.so.0(+0x4dfc7) [0x7fd9e9040fc7] ->  
/usr/lib/dovecot/libdovecot.so.0(+0xe6942) [0x7fd9e90d9942] ->  
/usr/lib/dovecot/libdovecot.so.0(i_stream_alloc+0x88) [0x7fd9e90db098]  
-> /usr/lib/dovecot/libdovecot.so.0(+0xee059) [0x7fd9e90e1059] ->  
/usr/lib/dovecot/libdovecot.so.0(+0xee546) [0x7fd9e90e1546] ->  
/usr/lib/dovecot/libdovecot.so.0(+0xe66d9) [0x7fd9e90d96d9] ->  
/usr/lib/dovecot/libdovecot.so.0(i_stream_get_size+0x2a)  
[0x7fd9e90da5aa] ->  
/usr/lib/dovecot/modules/lib20_zlib_plugin.so(+0x417c)  
[0x7fd9e8dfd17c] ->  
/usr/lib/dovecot/libdovecot-storage.so.0(index_mail_set_seq+0x25)  
[0x7fd9e924ceb5] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0xd19ce)  
[0x7fd9e92539ce] ->  
/usr/lib/dovecot/libdovecot-storage.so.0(index_storage_search_next_nonblock+0x10d) [0x7fd9e925418d] -> /usr/lib/dovecot/libdovecot-storage.so.0(mailbox_search_next_nonblock+0x28) [0x7fd9e91dce58] -> /usr/lib/dovecot/libdovecot-storage.so.0(mailbox_search_next+0x3f) [0x7fd9e91dcedf] -> dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](+0x21847) [0x557be3f0e847] -> dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](imap_fetch_more+0x39) [0x557be3f0f779] -> dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](cmd_fetch+0x337) [0x557be3f00c07] -> dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](command_exec+0x70) [0x557be3f0cd80] -> dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](+0x1e3f2) [0x557be3f0b3f2] -> dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](+0x1e494) [0x557be3f0b494] -> dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](client_handle_input+0x1b5) [0x557be3f0b845] -> dovecot/imap [u...@domain.de 10.242.2.34 UID fetch](client_input+0x7e) [0x557be3f0bd6e] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x6f) [0x7fd9e90e45ef] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x136) [0x7fd9e90e5be6] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x4c) [0x7fd9e90e468c] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x40)  
[0x7fd9e90e47f0]
Apr 15 13:50:44 dilia dovecot:  
imap(u...@domain.de)<11957>: Fatal: master:  
service(imap): child 11957 returned error 83 (Out of memory (service  
imap { vsz_limit=512 MB }, you may need to increase it) - set  
CORE_OUTOFMEM=1 environment to get core dump)


so i increased the limit:
/etc/dovecot/conf.d/10-master.conf:default_vsz_limit = 1024M




Error and Panic (with coredump)

2021-04-26 Thread Thomas Knaute
 }
 unix_listener auth-userdb {
 group = vmail
 user = vmail
 }
}
service imap-login {
 process_min_avail = 12
 service_count = 0
}

service imap {
 process_limit = 8192
}
service lmtp {
 inet_listener lmtp {
 address = 127.0.0.1
 port = 24
 }
}
service managesieve-login {
 inet_listener sieve {
 port = 4190
 }
 inet_listener sieve_deprecated {
 port = 2000
 }
}
ssl_cert = 
ssl_cipher_list = ALL:!kRSA:!SRP:!kDHd:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK:!RC4:!ADH:!LOW:!DH@STRENGTH

ssl_dh = # hidden, use -P to show it
ssl_key = # hidden, use -P to show it
ssl_min_protocol = TLSv1.2
ssl_prefer_server_ciphers = yes
userdb {
 driver = prefetch
}
userdb {
 args = /etc/dovecot/dovecot-ldap.conf.ext
 driver = ldap
}
verbose_proctitle = yes
protocol lmtp {
 mail_plugins = zlib acl sieve
}
protocol imap {
 mail_max_userip_connections = 20
 mail_plugins = zlib acl imap_zlib imap_acl
}


Regards, Thomas




Re: Dovecot v2.3.14 released

2021-03-08 Thread Thomas Zajic


* Aki Tuomi, 04.03.21 11:21

> Hi!
> 
> We are pleased to release v2.3.14 of Dovecot.
> [...]


Hi,

Just a minor thing I noticed by chance: the Wiki documentation that is
included in the source tarball is rather outdated. The timestamp of the
files in dovecot-2.3.14/doc/wiki is 2019-06-19, which would be somewhere
between 2.3.6 (2019-04-30) and 2.3.7 (2019-07-12), according to
dovecot-2.3.14/NEWS.

I suggest either refreshing it with the current content, or simply
replacing it with a small textfile pointing to wiki.dovecot.org and/or
doc.dovecot.org. While 90% of it is probably still valid, there has
been quite a bunch of tweaks, fixes and feature additions and drops
that might lead to WTF moments and a bit of head-scratching, if one
follows these offline docs rather than their corresponding online
version.

Bye,
Thomas


Re: migration with doveadm backup to new cluster running dovecot 2.2.36 and replicator

2021-01-12 Thread Thomas Winterstein
opened because: copy caching decisions

...


for user in USER2; do echo $user; for mailbox in `ls /srv/mail/v/$user/mailboxes/`; do echo $mailbox; doveadm search -u $user mailbox $mailbox ALL | head -n 1; done; done

USER2
Drafts
INBOX
c92f64f79f0d1ed01e6d5b314f04886c 28
Junk
Sent
bfb2e03fdce327671e82bf173b1ccb8b 1
Trash
7f5af7ba291b2df1a11d573bdb55d7e9 1



Which options in the dovecot config specifically must be set so that imapc 
adopts the GUID it gets from the server it's migrating from?
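
For comparing both sides, the mailbox GUID can also be read directly
with doveadm; a minimal sketch ($user and the mailbox name are
placeholders):

  doveadm mailbox status -u $user guid INBOX

Running this on both new backends shows quickly whether replication kept
the GUIDs in sync or regenerated them.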



thanks
Thomas


Re: migration with doveadm backup to new cluster running dovecot 2.2.36 and replicator

2021-01-10 Thread Thomas Winterstein

we were able to narrow down the cause of the problem.


After the initial dsync migration process the mailbox GUIDs are the same 
for each mailbox-name across all users.

Is this intended behaviour of dsync?
If not, how can this be changed?


After the first replication process the Inboxes of ~20% of users get 
different mailbox GUIDs.



During the next incremental dsync process the mailbox GUIDs of these 
~20% get overridden by the inital one.



Then the incremental replication duplicates those Inboxes where the 
mailbox GUIDs don't match.



Any ideas?


thanks
Thomas

On 07.01.2021 16:41, Thomas Winterstein wrote:
dsync is intended to be used to change mailbox format, so it should 
work just fine.


that's exactly what we thought and why we use dsync to migrate like 
described here


   https://wiki2.dovecot.org/Migration/Dsync


Our replication is configured according to

   https://wiki.dovecot.org/Replication


Both processes run separately in time.


Still on some accounts mails of Inbox or another folder get duplicated. 
We're currently trying to debug this.


what are we missing?

thanks
Thomas

On 07.01.2021 10:21, Aki Tuomi wrote:
dsync is intended to be used to change mailbox format, so it should 
work just fine.


Aki

On 07/01/2021 11:17 Andrea Gabellini 
 wrote:


Hello,

I had a similar problem some time ago, and the problem was the mailbox
format change.

Please try to migrate with the same format.

Andrea

On 05/01/21 15:02, Thomas Winterstein wrote:

No one?

If there are limitations in regards to how dsync in migration and
replication can operate together these should be stated clearly in the
documentation.

On 23.12.2020 20:33, Thomas Winterstein wrote:

Hello everyone,


we are working on migrating from dovecot 2.0.9 (maildir) to 2.2.36
(mdbox). The new cluster has two backend mail servers which replicate
through doveadm replicator. To move the data initially we use doveadm
backup (imapc).

Our migration command
   doveadm -o mail_fsync=never backup -R -u $user imapc:


To test the replication of new and purge of old mails with live data
changes we ran imapc on a daily basis but encountered the problem
that some mailboxes multiplied in size. We then made sure that imapc
and replication don't run at the same time but after the first
incremental imapc process, we still had the same problems.


The doveadm-backup man-page states that it's possible to run it
multiple times during migration. But is it also possible to have the
replicator running in between? From our understanding the doveadm
backup should just work as an imap connection between the servers,
synchronizing all changes made on the source to the destination. Or
does the conversion from maildir to mdbox format in our case produce
the problems?


If you're not supposed to run the replicator before having fully
migrated, how can we shorten the downtime? rsync? And how can we be
sure that similar problems don't occur after the migration if we
can't test all mechanisms together with live data?


thanks





--
__
Daddy, why doesn't this magnet pick up this floppy ?
__

TIM San Marino S.p.A.
Andrea Gabellini
Engineering R&D
TIM San Marino S.p.A. - https://www.telecomitalia.sm
Via Ventotto Luglio, 212 - Piano -2
47893 - Borgo Maggiore - Republic of San Marino
Tel: (+378) 0549 886237
Fax: (+378) 0549 886188







--
Thomas Winterstein  http://www.rz.uni-augsburg.de/
Universität Augsburg, Rechenzentrum . Tel. (0821) 598-2068
86135 Augsburg .. Fax. (0821) 598-2028


Re: migration with doveadm backup to new cluster running dovecot 2.2.36 and replicator

2021-01-07 Thread Thomas Winterstein

dsync is intended to be used to change mailbox format, so it should work just 
fine.


that's exactly what we thought and why we use dsync to migrate like 
described here


  https://wiki2.dovecot.org/Migration/Dsync


Our replication is configured according to

  https://wiki.dovecot.org/Replication


Both processes run separately in time.


Still on some accounts mails of Inbox or another folder get duplicated. 
We're currently trying to debug this.


what are we missing?

thanks
Thomas

On 07.01.2021 10:21, Aki Tuomi wrote:

dsync is intended to be used to change mailbox format, so it should work just 
fine.

Aki


On 07/01/2021 11:17 Andrea Gabellini  wrote:

  
Hello,


I had a similar problem some time ago, and the problem was the mailbox
format change.

Please try to migrate with the same format.

Andrea

On 05/01/21 15:02, Thomas Winterstein wrote:

No one?

If there are limitations in regards to how dsync in migration and
replication can operate together these should be stated clearly in the
documentation.

On 23.12.2020 20:33, Thomas Winterstein wrote:

Hello everyone,


we are working on migrating from dovecot 2.0.9 (maildir) to 2.2.36
(mdbox). The new cluster has two backend mail servers which replicate
through doveadm replicator. To move the data initially we use doveadm
backup (imapc).

Our migration command
   doveadm -o mail_fsync=never backup -R -u $user imapc:


To test the replication of new and purge of old mails with live data
changes we ran imapc on a daily basis but encountered the problem
that some mailboxes multiplied in size. We then made sure that imapc
and replication don't run at the same time but after the first
incremental imapc process, we still had the same problems.


The doveadm-backup man-page states that it's possible to run it
multiple times during migration. But is it also possible to have the
replicator running in between? From our understanding the doveadm
backup should just work as an imap connection between the servers,
synchronizing all changes made on the source to the destination. Or
does the conversion from maildir to mdbox format in our case produce
the problems?


If you're not supposed to run the replicator before having fully
migrated, how can we shorten the downtime? rsync? And how can we be
sure that similar problems don't occur after the migration if we
can't test all mechanisms together with live data?


thanks





--
__
Daddy, why doesn't this magnet pick up this floppy ?
__

TIM San Marino S.p.A.
Andrea Gabellini
Engineering R&D
TIM San Marino S.p.A. - https://www.telecomitalia.sm
Via Ventotto Luglio, 212 - Piano -2
47893 - Borgo Maggiore - Republic of San Marino
Tel: (+378) 0549 886237
Fax: (+378) 0549 886188





--
Thomas Winterstein  http://www.rz.uni-augsburg.de/
Universität Augsburg, Rechenzentrum . Tel. (0821) 598-2068
86135 Augsburg .. Fax. (0821) 598-2028


Dovecot Folder and file permissions.

2021-01-05 Thread Thomas Strike
While adding a website to apache on my server, something caused a 
blanket resetting of all file permissions on the server to 
apache:apache. I have most of the server running again, but my mail 
service is another story. I have configured vmail on a Postfix with 
Dovecot and mariadb install. What I need is help with reestablishing the 
correct file and folder permissions that dovecot uses. The following is 
my configuration:


# OS: Linux 4.18.0-147.3.1.el8_1.x86_64 x86_64 CentOS Linux release 
8.1.1911 (Core)  xfs

# Dovecot version: 2.2.36 (1f10bfa63)
# Hostname: sleepyvalley
auth_mechanisms = plain login
mail_home = /var/vmail/%d/%n
mail_location = maildir:/var/vmail/%d/%n
mail_privileged_group = mail
mail_uid = vmail
mbox_write_locks = fcntl
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
passdb {
  args = scheme=CRAM-MD5 username_format=%u /etc/dovecot/users
  driver = passwd-file
}
postmaster_address = postmas...@sleepyvalley.net
service auth-worker {
  user = vmail
}
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
  }
  unix_listener auth-userdb {
    mode = 0666
    user = vmail
  }
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0666
    user = postfix
  }
}
ssl_cert = 
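
Since mail_uid is vmail and the mailboxes live under /var/vmail,
restoring the mail store is mostly a recursive chown; a minimal sketch,
assuming vmail:vmail should own all of it (verify the paths and back up
first):

  chown -R vmail:vmail /var/vmail
  find /var/vmail -type d -exec chmod 700 {} +
  find /var/vmail -type f -exec chmod 600 {} +

The listener sockets under /var/spool/postfix/private and
/var/run/dovecot should be recreated with the configured owners and
modes when postfix and dovecot restart, so those should not need fixing
by hand.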

Re: migration with doveadm backup to new cluster running dovecot 2.2.36 and replicator

2021-01-05 Thread Thomas Winterstein

No one?

If there are limitations in regards to how dsync in migration and 
replication can operate together these should be stated clearly in the 
documentation.


On 23.12.2020 20:33, Thomas Winterstein wrote:

Hello everyone,


we are working on migrating from dovecot 2.0.9 (maildir) to 2.2.36 
(mdbox). The new cluster has two backend mail servers which replicate 
through doveadm replicator. To move the data initially we use doveadm 
backup (imapc).


Our migration command
  doveadm -o mail_fsync=never backup -R -u $user imapc:


To test the replication of new and purge of old mails with live data 
changes we ran imapc on a daily basis but encountered the problem that 
some mailboxes multiplied in size. We then made sure that imapc and 
replication don't run at the same time but after the first incremental 
imapc process, we still had the same problems.



The doveadm-backup man-page states that it's possible to run it multiple 
times during migration. But is it also possible to have the replicator 
running in between? From our understanding the doveadm backup should 
just work as an imap connection between the servers, synchronizing all 
changes made on the source to the destination. Or does the conversion 
from maildir to mdbox format in our case produce the problems?



If you're not supposed to run the replicator before having fully 
migrated, how can we shorten the downtime? rsync? And how can we be sure 
that similar problems don't occur after the migration if we can't test 
all mechanisms together with live data?



thanks



--
Thomas Winterstein


migration with doveadm backup to new cluster running dovecot 2.2.36 and replicator

2020-12-23 Thread Thomas Winterstein

Hello everyone,


we are working on migrating from dovecot 2.0.9 (maildir) to 2.2.36 
(mdbox). The new cluster has two backend mail servers which replicate 
through doveadm replicator. To move the data initially we use doveadm 
backup (imapc).



Our migration command
 doveadm -o mail_fsync=never backup -R -u $user imapc:


To test the replication of new and purge of old mails with live data 
changes we ran imapc on a daily basis but encountered the problem that 
some mailboxes multiplied in size. We then made sure that imapc and 
replication don't run at the same time but after the first incremental 
imapc process, we still had the same problems.



The doveadm-backup man-page states that it's possible to run it multiple 
times during migration. But is it also possible to have the replicator 
running in between? From our understanding the doveadm backup should 
just work as an imap connection between the servers, synchronizing all 
changes made on the source to the destination. Or does the conversion 
from maildir to mdbox format in our case produce the problems?



If you're not supposed to run the replicator before having fully 
migrated, how can we shorten the downtime? rsync? And how can we be sure 
that similar problems don't occur after the migration if we can't test 
all mechanisms together with live data?



thanks
--
Thomas Winterstein


Dovecot docker auto-responder - delegate to external SMTP

2020-10-19 Thread Thomas Pronold

Hi all,

I have dockerized my dovecot setup. Everything works fine besides 
auto-responder using sieve.


Issue is that my dovecot docker image does NOT come with an SMTP 
server which could be used for outgoing mail.


My SMTP setup (postfix) is in another image/container.

I want to keep it that way, so that dovecot and postfix have separate 
images.


So basically I have 2 containers running in the same internal network: 
a) dovecot IP 192.168.10.11 b) postfix IP 192.168.10.22


How do I have to configure dovecot so that it delegates all outgoing 
(autoresponder) mails to the SMTP server on IP 192.168.10.22?


The postfix image is configured in a way that certain IPs are allowed to 
send mails without authentication. So on postfix side I am set.


Currently I did a quickfix: I added postfix to the dovecot image and 
configured it to use 192.168.10.22 as relayhost. This works, but I 
would still prefer the dovecot image not to contain a postfix SMTP.
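
One dovecot-only way to do this could be the submission_host setting,
which makes sieve and the LDA/LMTP hand outgoing mail to an SMTP relay
instead of a local sendmail binary; a minimal sketch, assuming the
container IPs above:

  submission_host = 192.168.10.22:25

With that set, vacation replies and redirects generated by sieve are
submitted over SMTP to the postfix container, so the dovecot image needs
no MTA of its own.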


Thanks

Tom




AW: replicator: Panic: data stack: Out of memory

2020-09-01 Thread Thomas Tsiakalakis

I just found this post from 2017: 
http://dovecot.2317879.n4.nabble.com/replicator-crashing-oom-td59402.html
I removed the replication_sync_timeout setting and now it's working fine.
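
For reference, that knob lives in the plugin block; a sketch of the kind
of line that was dropped (the value here is only an example):

  plugin {
    # replication_sync_timeout = 2   # removed; with this set, the replicator hit the data-stack OOM
  }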


Thomas Tsiakalakis

Team Applikationsbetrieb
GDV Dienstleistungs-GmbH
Tel: +49(40)33449-4318
E-Mail: mailto:thomas.tsiakala...@gdv-dl.de

GDV Dienstleistungs-GmbH
Glockengießerwall 1
D-20095 Hamburg
www.gdv-dl.de

Niederlassungen:

Wilhelmstraße 43 / 43 G
10117 Berlin

Frankenstraße 18a
20097 Hamburg

Sitz und Registergericht: Hamburg
HRB 145291
USt.-IdNr : DE 205183123

Geschäftsführer:
Dr. Jens Bartenwerfer
Fred di Giuseppe Chiachiarella

Aufsichtsratsvorsitzender: Werner Schmidt


-Original Message-
From: Aki Tuomi 
Sent: Friday, 14 August 2020 12:21
To: Thomas Tsiakalakis ; dovecot@dovecot.org
Subject: Re: replicator: Panic: data stack: Out of memory

Try setting

service replicator {
  vsz_limit = 0
}

Aki

> On 14/08/2020 13:19 Thomas Tsiakalakis  wrote:
>
>
>
> So nobody has any idea why this could happen?
> Let me know if I should provide more Information Thanks
>
>  ThomasTsiakalakis
>
> Team Applikationsbetrieb
>  GDV Dienstleistungs-GmbH
>  Tel: +49(40)33449-4318
>  Fax:
>  E-Mail:thomas.tsiakala...@gdv-dl.de
>
>
>
>
>  GDV Dienstleistungs-GmbH
>  Glockengießerwall 1
>  D-20095 Hamburg
>  www.gdv-dl.de (http://www.gdv-dl.de)
>
>  Niederlassungen:
>
>  Wilhelmstraße 43 / 43 G
>  10117 Berlin
>
>  Frankenstraße 18
>  20097 Hamburg
>
>  Sitz und Registergericht: Hamburg
>  HRB 145291
>  USt.-IdNr : DE 205183123
>
>  Geschäftsführer:
>  Dr. Jens Bartenwerfer
>  Fred di Giuseppe Chiachiarella
>
>  Aufsichtsratsvorsitzender: Werner Schmidt
>
>
>
> From: Thomas Tsiakalakis
> Sent: Tuesday, 23 June 2020 11:25
> To: dovecot@dovecot.org
> Subject: AW: replicator: Panic: data stack: Out of memory
>
> I managed to convince the system that I really want a core dump
>
> From: Thomas Tsiakalakis
> Sent: Thursday, 18 June 2020 16:26
> To: 'dovecot@dovecot.org'
> Subject: replicator: Panic: data stack: Out of memory
>
> Hi, I have set
> up 2 SLES 15 Hosts with dovecot and replication. Everything seems to work 
> fine but every time a message is delivered, I get an out of memory error in 
> the logs. The Replication itself seems to work fine though.
> I increased default_vsz_limit to 512M but the only thing that changed
> was that dovecot was trying to allocate 1073741864 bytes instead of
> 268435496 As I said, I’m running SLES 15 SP1 and Dovecot 2.3.10 (0da0eff44) 
> (both Hosts have the same version) Each Host currently has 8GB of Memory.
> # free -h
> total used free shared buff/cache available
> Mem: 7.8Gi 278Mi 7.3Gi 47Mi 183Mi 7.2Gi
> Swap: 4.0Gi 0B 4.0Gi
> # journalctl -f
> Jun 18 15:55:48 mail1 postfix/pickup[3457]: 18533C009C8: uid=0
> from= Jun 18 15:55:48 mail1 postfix/cleanup[3669]: 18533C009C8:
> message-id=<20200618135548.18533C009C8@mail1>
> Jun 18 15:55:48 mail1 postfix/qmgr[3458]: 18533C009C8:
> from=, size=431, nrcpt=1 (queue active) Jun 18 15:55:48
> mail1 dovecot[1833]: lmtp(3673): Connect from local Jun 18 15:55:48
> mail1 dovecot[1833]: replicator: Panic: data stack: Out of memory when
> allocating 2684

AW: replicator: Panic: data stack: Out of memory

2020-09-01 Thread Thomas Tsiakalakis

Still the same error, just with more bytes:

replicator: Panic: data stack: Out of memory when allocating 17179869224 bytes


Thomas Tsiakalakis

Team Applikationsbetrieb
GDV Dienstleistungs-GmbH
Tel: +49(40)33449-4318
E-Mail: mailto:thomas.tsiakala...@gdv-dl.de

GDV Dienstleistungs-GmbH
Glockengießerwall 1
D-20095 Hamburg
www.gdv-dl.de

Niederlassungen:

Wilhelmstraße 43 / 43 G
10117 Berlin

Frankenstraße 18a
20097 Hamburg

Sitz und Registergericht: Hamburg
HRB 145291
USt.-IdNr : DE 205183123

Geschäftsführer:
Dr. Jens Bartenwerfer
Fred di Giuseppe Chiachiarella

Aufsichtsratsvorsitzender: Werner Schmidt


-Original Message-
From: Aki Tuomi 
Sent: Friday, 14 August 2020 12:21
To: Thomas Tsiakalakis ; dovecot@dovecot.org
Subject: Re: replicator: Panic: data stack: Out of memory

Try setting

service replicator {
  vsz_limit = 0
}

Aki

> On 14/08/2020 13:19 Thomas Tsiakalakis  wrote:
>
>
>
> So nobody has any idea why this could happen?
> Let me know if I should provide more Information Thanks
>
>  ThomasTsiakalakis
>
> Team Applikationsbetrieb
>  GDV Dienstleistungs-GmbH
>  Tel: +49(40)33449-4318
>  Fax:
>  E-Mail:thomas.tsiakala...@gdv-dl.de
>
>
>
>
>  GDV Dienstleistungs-GmbH
>  Glockengießerwall 1
>  D-20095 Hamburg
>  www.gdv-dl.de (http://www.gdv-dl.de)
>
>  Niederlassungen:
>
>  Wilhelmstraße 43 / 43 G
>  10117 Berlin
>
>  Frankenstraße 18
>  20097 Hamburg
>
>  Sitz und Registergericht: Hamburg
>  HRB 145291
>  USt.-IdNr : DE 205183123
>
>  Geschäftsführer:
>  Dr. Jens Bartenwerfer
>  Fred di Giuseppe Chiachiarella
>
>  Aufsichtsratsvorsitzender: Werner Schmidt
>
>
>
> From: Thomas Tsiakalakis
> Sent: Tuesday, 23 June 2020 11:25
> To: dovecot@dovecot.org
> Subject: AW: replicator: Panic: data stack: Out of memory
>
> I managed to convince the system that I really want a core dump
>
> From: Thomas Tsiakalakis
> Sent: Thursday, 18 June 2020 16:26
> To: 'dovecot@dovecot.org'
> Subject: replicator: Panic: data stack: Out of memory
>
> Hi, I have set
> up 2 SLES 15 Hosts with dovecot and replication. Everything seems to work 
> fine but every time a message is delivered, I get an out of memory error in 
> the logs. The Replication itself seems to work fine though.
> I increased default_vsz_limit to 512M but the only thing that changed
> was that dovecot was trying to allocate 1073741864 bytes instead of
> 268435496 As I said, I’m running SLES 15 SP1 and Dovecot 2.3.10 (0da0eff44) 
> (both Hosts have the same version) Each Host currently has 8GB of Memory.
> # free -h
> total used free shared buff/cache available
> Mem: 7.8Gi 278Mi 7.3Gi 47Mi 183Mi 7.2Gi
> Swap: 4.0Gi 0B 4.0Gi
> # journalctl -f
> Jun 18 15:55:48 mail1 postfix/pickup[3457]: 18533C009C8: uid=0
> from= Jun 18 15:55:48 mail1 postfix/cleanup[3669]: 18533C009C8:
> message-id=<20200618135548.18533C009C8@mail1>
> Jun 18 15:55:48 mail1 postfix/qmgr[3458]: 18533C009C8:
> from=, size=431, nrcpt=1 (queue active) Jun 18 15:55:48
> mail1 dovecot[1833]: lmtp(3673): Connect from local Jun 18 15:55:48
> mail1 dovecot[1833]: replicator: Panic: data stack: Out of memory when
> allocating 268435496 bytes Jun 18 15:55:48 mail1 dovecot[1833]: replicator: 

RHEL7/CentOS7 RPM of dovecot 2.3.11.3-3 seems to have dropped tcpwrap support

2020-08-20 Thread Thomas Scheunemann
Using the Repo http://repo.dovecot.org/ce-2.3-latest after upgrading from
2.3.10.1-3 to 2.3.11.3-3 we get numerous error messages like:

dovecot: imap-login: Error: connect(tcpwrap) failed: No such file or directory

We use tcpwrap support in dovecot, which worked flawlessly in the older version.
I can see that the socket /var/run/dovecot/login/tcpwrap is not created anymore.
And comparing with RPMs, the new one seems to be missing the file:

/usr/libexec/dovecot/tcpwrap

which leads me to the conclusion that the new version is just not 
compiled with tcpwrap support.
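
A quick way to double-check this on any host, assuming the stock
package name:

  rpm -ql dovecot | grep tcpwrap

On 2.3.10.1-3 this should list /usr/libexec/dovecot/tcpwrap; on
2.3.11.3-3 it apparently comes back empty.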

Thomas Scheunemann


Re: solr and dovecot 2.2.36

2020-08-18 Thread Thomas Zajic
* Maciej Milaszewski, 18.08.20 14:00

> I have dovecot-2.2.36.4 (director) + 5 nodes dovecot (dovecot-2.2.36.4)
> What version of Solr do you recommend ?

Don't know about 2.2.36.4, but for 2.3.11.3 both solr-7.7.3
and solr-8.6.0 appear to work fine. I'm only running a small
setup with a handful of users, though, so YMMV.

HTH nevertheless,
Thomas


Re: replicator: Panic: data stack: Out of memory

2020-08-14 Thread Thomas Tsiakalakis
So nobody has any idea why this could happen?
Let me know if I should provide more information.

Thanks


Thomas Tsiakalakis

Team Applikationsbetrieb
GDV Dienstleistungs-GmbH
Tel: +49(40)33449-4318
Fax:
E-Mail: thomas.tsiakala...@gdv-dl.de<mailto:thomas.tsiakala...@gdv-dl.de>



GDV Dienstleistungs-GmbH
Glockengießerwall 1
D-20095 Hamburg
www.gdv-dl.de<http://www.gdv-dl.de>

Niederlassungen:

Wilhelmstraße 43 / 43 G
10117 Berlin

Frankenstraße 18
20097 Hamburg

Sitz und Registergericht: Hamburg
HRB 145291
USt.-IdNr : DE 205183123

Geschäftsführer:
Dr. Jens Bartenwerfer
Fred di Giuseppe Chiachiarella

Aufsichtsratsvorsitzender: Werner Schmidt


From: Thomas Tsiakalakis
Sent: Tuesday, 23 June 2020 11:25
To: dovecot@dovecot.org
Subject: AW: replicator: Panic: data stack: Out of memory

I managed to convince the system that I really want a core dump


From: Thomas Tsiakalakis
Sent: Thursday, 18 June 2020 16:26
To: 'dovecot@dovecot.org' 
Subject: replicator: Panic: data stack: Out of memory

Hi,

I have set up 2 SLES 15 Hosts with dovecot and replication. Everything seems to 
work fine but every time a message is delivered, I get an out of memory error 
in the logs. The Replication itself seems to work fine though.

I increased default_vsz_limit to 512M but  the only thing that changed was that 
dovecot was trying to allocate 1073741864 bytes instead of 268435496

As I said, I’m running SLES 15 SP1 and Dovecot 2.3.10 (0da0eff44) (both Hosts 
have the same version)

Each Host currently has 8GB of Memory.
# free -h
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       278Mi       7.3Gi        47Mi       183Mi       7.2Gi
Swap:         4.0Gi          0B       4.0Gi

# journalctl -f
Jun 18 15:55:48 mail1 postfix/pickup[3457]: 18533C009C8: uid=0 from=
Jun 18 15:55:48 mail1 postfix/cleanup[3669]: 18533C009C8: 
message-id=<20200618135548.18533C009C8@mail1>
Jun 18 15:55:48 mail1 postfix/qmgr[3458]: 18533C009C8: from=, 
size=431, nrcpt=1 (queue active)
Jun 18 15:55:48 mail1 dovecot[1833]: lmtp(3673): Connect from local
Jun 18 15:55:48 mail1 dovecot[1833]: replicator: Panic: data stack: Out of 
memory when allocating 268435496 bytes
Jun 18 15:55:48 mail1 dovecot[1833]: replicator: Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7f8346d6d262] -> 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f8346d6d37e] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xebcee) [0x7f8346d77cee] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xebd91) [0x7f8346d77d91] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f8346cd1f8b] -> 
/usr/lib64/dovecot/libdovecot.so.0(t_pop_last_unsafe+0) [0x7f8346d73660] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xe7a50) [0x7f8346d73a50] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x10eb58) [0x7f8346d9ab58] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xe3ac5) [0x7f8346d6fac5] -> 
/usr/lib64/dovecot/libdovecot.so.0(buffer_write+0xf2) [0x7f8346d6fdb2] -> 
dovecot/replicator(replicator_queue_push+0xde) [0x5648989a5cde] -> 
dovecot/replicator(+0x5464) [0x5648989a5464] -> dovecot/replicator(+0x4a0b) 
[0x5648989a4a0b] -> dovecot/replicator(+0x4c02) [0x5648989a4c02] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x69) [0x7f8346d90c99] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x134) 
[0x7f8346d92574] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x4c) [0x7f8346d90d9c] 
-> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f8346d90fc8] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f8346d01873] -> 
dovecot/replicator(main+0x1a0) [0x5648989a3cb0] -> 
/lib64/libc.so.6(__libc_start_main+0xea) [0x7f83468f534a] -> 
dovecot/replicator(_start+0x2a) [0x5648989a3d5a]
Jun 18 15:55:48 mail1 dovecot[1833]: 
lmtp(t...@example.com)<3673>: Warning: 
replication(t...@example.com): Sync failure:
Jun 18 15:55:48 mail1 dovecot[1833]: 
lmtp(t...@example.com)<3673>: Warning: 
replication(t...@example.com): Remote sent invalid input: -
Jun 18 15:55:48 mail1 dovecot[1833]: 
lmtp(t...@example.com)<3673>: 
msgid=<20200618135548.18

replicator: Panic: data stack: Out of memory

2020-06-18 Thread Thomas Tsiakalakis
Hi,

I have set up 2 SLES 15 Hosts with dovecot and replication. Everything seems to 
work fine but every time a message is delivered, I get an out of memory error 
in the logs. The Replication itself seems to work fine though.

I increased default_vsz_limit to 512M but  the only thing that changed was that 
dovecot was trying to allocate 1073741864 bytes instead of 268435496
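
For what it's worth, both failing sizes are a power of two plus the same
40 bytes (presumably an allocation header): 268435496 = 256 MiB + 40 and
1073741864 = 1 GiB + 40. So the data stack seems to keep doubling until
it trips whatever limit applies, rather than the limit simply being a
bit too low.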

As I said, I’m running SLES 15 SP1 and Dovecot 2.3.10 (0da0eff44) (both Hosts 
have the same version)

Each Host currently has 8GB of Memory.
# free -h
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       278Mi       7.3Gi        47Mi       183Mi       7.2Gi
Swap:         4.0Gi          0B       4.0Gi

# journalctl -f
Jun 18 15:55:48 mail1 postfix/pickup[3457]: 18533C009C8: uid=0 from=
Jun 18 15:55:48 mail1 postfix/cleanup[3669]: 18533C009C8: 
message-id=<20200618135548.18533C009C8@mail1>
Jun 18 15:55:48 mail1 postfix/qmgr[3458]: 18533C009C8: from=, 
size=431, nrcpt=1 (queue active)
Jun 18 15:55:48 mail1 dovecot[1833]: lmtp(3673): Connect from local
Jun 18 15:55:48 mail1 dovecot[1833]: replicator: Panic: data stack: Out of 
memory when allocating 268435496 bytes
Jun 18 15:55:48 mail1 dovecot[1833]: replicator: Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) [0x7f8346d6d262] -> 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f8346d6d37e] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xebcee) [0x7f8346d77cee] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xebd91) [0x7f8346d77d91] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f8346cd1f8b] -> 
/usr/lib64/dovecot/libdovecot.so.0(t_pop_last_unsafe+0) [0x7f8346d73660] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xe7a50) [0x7f8346d73a50] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x10eb58) [0x7f8346d9ab58] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xe3ac5) [0x7f8346d6fac5] -> 
/usr/lib64/dovecot/libdovecot.so.0(buffer_write+0xf2) [0x7f8346d6fdb2] -> 
dovecot/replicator(replicator_queue_push+0xde) [0x5648989a5cde] -> 
dovecot/replicator(+0x5464) [0x5648989a5464] -> dovecot/replicator(+0x4a0b) 
[0x5648989a4a0b] -> dovecot/replicator(+0x4c02) [0x5648989a4c02] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x69) [0x7f8346d90c99] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x134) 
[0x7f8346d92574] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x4c) [0x7f8346d90d9c] 
-> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f8346d90fc8] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f8346d01873] -> 
dovecot/replicator(main+0x1a0) [0x5648989a3cb0] -> 
/lib64/libc.so.6(__libc_start_main+0xea) [0x7f83468f534a] -> 
dovecot/replicator(_start+0x2a) [0x5648989a3d5a]
Jun 18 15:55:48 mail1 dovecot[1833]: 
lmtp(t...@example.com)<3673>: Warning: 
replication(t...@example.com): Sync failure:
Jun 18 15:55:48 mail1 dovecot[1833]: 
lmtp(t...@example.com)<3673>: Warning: 
replication(t...@example.com): Remote sent invalid input: -
Jun 18 15:55:48 mail1 dovecot[1833]: 
lmtp(t...@example.com)<3673>: 
msgid=<20200618135548.18533C009C8@mail1>: saved mail to INBOX
Jun 18 15:55:48 mail1 dovecot[1833]: lmtp(3673): Disconnect from local: Client 
has quit the connection (state=READY)
Jun 18 15:55:48 mail1 postfix/lmtp[3672]: 18533C009C8: to=, 
orig_to=, relay=mail1[private/dovecot-lmtp], delay=0.17, 
delays=0.02/0.01/0.02/0.13, dsn=2.0.0, status=sent (250 2.0.0 
 q5sLCGRy615ZDgAATn/NZw Saved)
Jun 18 15:55:48 mail1 postfix/qmgr[3458]: 18533C009C8: removed
Jun 18 15:55:48 mail1 dovecot[1833]: replicator: Fatal: master: 
service(replicator): child 2948 killed with signal 6 (core not dumped - 
https://dovecot.org/bugreport.html#coredumps - set /proc/sys/fs/suid_dumpable 
to 2)

Thomas Tsiakalakis

Team Applikationsbetrieb
GDV Dienstleistungs-GmbH
Tel: +49(40)33449-4318
Fax:
E-Mail: thomas.tsiakala...@gdv-dl.de<mailto:thomas.tsiakala...@gdv-dl.de>



GDV Dienstleistungs-GmbH
Glockengießerwall 1
D-20095 Hamburg
www.gdv-dl.de<http://www.gdv-dl.de>

Niederlassungen:

Wilhelmstraße 43 / 43 G
10117 Berlin

Frankenstraße 18
20097 Hamburg

Sitz und Registergericht: Hamburg
HRB 145291
USt.-IdNr : DE 205183123

Geschäftsführer:
Dr. Jens Bartenwerfer
Fred di Giuseppe Chiachiarella

Aufsichtsratsvorsitzender: Werner Schmidt


Unable to set ssl_min_protocol=TLSv1.3

2020-04-13 Thread Thomas Schneider
Good $daytime,

as per the recommendations of Mozilla’s SSL config generator[0], I
wanted to set ssl_min_protocol=TLSv1.3 in my dovecot config.  This
produced the error:

  imap-login: Error: Failed to initialize SSL server context: Unknown
  ssl_min_protocol setting 'TLSv1.3'

After some digging, I found the function that parses this setting in
src/lib-ssl-iostream/iostream-openssl-common.c
(openssl_min_protocol_to_options()), which maps strings such as
SSL_TXT_TLSV1_2 == "TLSv1.2" (from openssl/ssl.h) to the appropriate
version and option defines of OpenSSL.

Said openssl/ssl.h does not contain a SSL_TXT_TLSV1_3, so it’s no
surprise that dovecot does not know this setting.  As a quick fix, I
could probably extend struct {…} protocol_versions[] (in
iostream-openssl-common.c again) with an appropriate "TLSv1.3" entry
(and send a patch), though I would also suggest to OpenSSL to add a
SSL_TXT_TLSV1_3 define.

Unfortunately, I have not found a config setting in dovecot to set
SSL_OP_NO_TLSv1_2, or in fact any way to enforce TLS >=1.3, except maybe
via the cipher list string.

I think that dovecot should support setting this, and I’d also gladly
provide a patch.

Thanks,
Thomas

[0]: 
https://ssl-config.mozilla.org/#server=dovecot=2.3.4.1=modern=1.1.1d=5.4




Re: .IMAP

2020-02-15 Thread Thomas Zajic
* Jos Chrispijn, 14.02.20 14:47

> On 14-2-20 13:39, Aki Tuomi wrote:
> 
>> This is why you put mail_location=driver:~/Mail and ensure the mails are 
>> under there, instead of mail_location=driver:~/
> 
> Yes, that is what I thought; when I use that setting, I get this error:
> [...]

Of course you shouldn't put "driver" there literally, but replace it with the 
actual
mailbox type (ie. "mbox", "maildir", "dbox", ...). The error message below 
contains
a hint to the problem, but admittedly it's easy to miss:

> Feb 14 14:32:15  dovecot[8549]: imap(jos)<8739><5ErUO4meZthSsH9H>: 
> Initializing mail storage from mail_location setting failed:
> Unknown mail storage driver driver in=0 out=375 deleted=0 expunged=0 
> trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes=0
  ^^
It would probably be easier to find if the actual driver name were put in 
quotes in the logging line (Unknown mail storage driver "driver" in=0 ...).

> [...]
> When I change
> 
> mail_location = mbox:/home/%u:INBOX=/var/mail/%u
> 
> into
> 
> mail_location = mbox:/home/%u/mail:INBOX=/var/mail/%u
> 
> I only get inbox (the /var/mail/%u content) and the Deleted mailbox.

That's probably because the second part of Aki's advice hasn't been followed yet
("... and ensure the mails are under there, ..."). You need to physically move 
all
mail related files and folders to the ~/mail subdir of each user. The "Deleted"
mailbox probably still shows up because your MUA has been configured to use a 
local
folder for it instead of an IMAP folder.
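
A sketch of that move for one user, assuming plain mbox folder files
sitting directly in the home directory (the folder names here are
placeholders; back up first):

  mkdir -p ~/mail
  mv ~/Sent ~/Drafts ~/mail/

After the move, mail_location = mbox:~/mail:INBOX=/var/mail/%u should
pick them up again.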

HTH,
Thomas


Re: [v 2.3.4.1][quota] recalculation

2020-02-11 Thread Thomas Criscione
Yes you are right !

Thanks

On 11/02/2020 11:34, Plutocrat wrote:
> On 11/02/2020 17.23, Sami Ketola wrote:
>> Does thunderbird even delete the mail from storage if you delete it from UI?
> 
> You have to right click on Trash and select Empty Trash to force it. 
> 
> In the Trash Folder Properties, you can also select a Retention Policy (eg 30 
> days, 2000 messages). 
> 
> P.
> 

-- 
Thomas Criscione
https://thomascriscione.fr
GPG: E521 FFF0 13C4 52FB 55EB  8C49 EBDD 0DC6 1ED9 2A4F


Re: [v 2.3.4.1][quota] recalculation

2020-02-11 Thread Thomas Criscione
Hi Sami,

You are right ! Even if emails are deleted from trash (and so: they do
not appear in thunderbird anywhere), they are not expunged !

Right click on trash > empty trash will send the expunge signal to the server !
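
The same can also be forced from the server side; a minimal sketch with
doveadm (the address is a placeholder):

  doveadm expunge -u user@example.com mailbox Trash all
  doveadm quota recalc -u user@example.com
  doveadm quota get -u user@example.com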

Thanks a lot !

Thomas

On 11/02/2020 10:23, Sami Ketola wrote:
> Hi,
> 
> Does thunderbird even delete the mail from storage if you delete it from UI?
> 
> Most imap clients with default configuration just flag the mail \Deleted but 
> do not actually expunge if from server.
> In Apple Mail.App that I use there is "Erase Deleted Items" to actually 
> expunge mails previously flagged \Deleted.
> 
> Sami
> 
> 
>> On 11 Feb 2020, at 11.18, Thomas Criscione  wrote:
>>
>> Hello,
>>
>> I can't find the information on the wiki :(
>>
>> When is the quota recalculated after a mail deletion ?
>>
>> For instance, I am running low of storage and I use Thunderbird to
>> delete large mail. I only notice a recalculation when I quit
>> Thunderbird and I relaunch it.
>>
>> Even, with doveadm CLI, as long as Thunderbird is not disconnected on
>> the client side, the server didn't recalculate the storage available.
>>
>> Thanks for the help !
>>
>> Thomas
>>
>> -- 
>> Thomas Criscione
>> https://thomascriscione.fr
>> GPG: E521 FFF0 13C4 52FB 55EB  8C49 EBDD 0DC6 1ED9 2A4F
> 

-- 
Thomas Criscione
https://thomascriscione.fr
GPG: E521 FFF0 13C4 52FB 55EB  8C49 EBDD 0DC6 1ED9 2A4F


[v 2.3.4.1][quota] recalculation

2020-02-11 Thread Thomas Criscione
Hello,

I can't find the information on the wiki :(

When is the quota recalculated after a mail deletion ?

For instance, I am running low on storage and I use Thunderbird to
delete large mails. I only notice a recalculation when I quit
Thunderbird and relaunch it.

Even, with doveadm CLI, as long as Thunderbird is not disconnected on
the client side, the server didn't recalculate the storage available.

Thanks for the help !

Thomas

-- 
Thomas Criscione
https://thomascriscione.fr
GPG: E521 FFF0 13C4 52FB 55EB  8C49 EBDD 0DC6 1ED9 2A4F


Re: slow logins over login_trusted_network

2019-12-16 Thread Thomas Zajic
* Wojciech Puchar, 16.12.19 18:04

>>> how to disable throttling (or better - put other limits) for 127.0.0.1?
>>
>> https://wiki2.dovecot.org/Upgrading/2.3 - look for "Localhost Auth Penalty"
>>
> that's certainly this.
> 
> but i am not an expert in this passdb system
> 
> my current config is
> [...]
> 
> where /usr/local/etc/dovecot/aliasy is a list of e-mail names to user account 
> names like this
> 
> woj...@puchar.net:::user=puchar-wojtek
> 
> how to properly do this?


I'm not an expert either, but I *think* you can just more or less literally 
copy/paste from
the example in the link.

Ie., right before your passdb{} entry pointing to /usr/local/etc/dovecot/aliasy 
you would just
insert another passdb{} entry as the very first one, namely the one from the 
link with exactly
the same content (you could probably name the file differently to make its 
purpose more clear,
like eg. "/usr/local/etc/dovecot/passdb-override-auth-penalty"). The key point 
in this entry
seems to be "noauthenticate=y", which I interpret as "read and use the file, 
but don't actually
use it for authentication purposes".

Then, in the file itself, you probably only need the first line containing 
"127.0.0.1", again
copy/pasted literally from the link. I interpret its contents as "for any 
connections coming
from 127.0.0.1, apply 'nodelay=yes'", ie. don't apply the default auth penalty 
delay.

Maybe an actual expert will prove me wrong, but at least my interpretation 
seems to make some
sort of sense to me. :-)

HTH,
Thomas


Re: slow logins over login_trusted_network

2019-12-16 Thread Thomas Zajic
* Wojciech Puchar, 16.12.19 15:54

> i've upgraded dovecot on my server to 2.3.9
> 
> works properly but saslauthd that uses it for rimap authentication over 
> 127.0.0.1 works SLOW. You need to wait 15-20 seconds before authentication.
> 
> only imap login over 127.0.0.1 is slowed down, while over any other IP is 
> quick.
> 
> i had this problem with older version of dovecot but it was about adding
> login_trusted_networks = 127.0.0.1
> 
> but i already have this and logins is slow.
> 
> how to disable throttling (or better - put other limits) for 127.0.0.1?

https://wiki2.dovecot.org/Upgrading/2.3 - look for "Localhost Auth Penalty"

HTH,
Thomas


Re: Perl was: JMAP: Re: http API for IMAP

2019-11-19 Thread Thomas Güttler via dovecot

On 18.11.19 at 16:18, Ralph Seichter via dovecot wrote:

* Thomas Güttler via dovecot:


https://github.com/guettli/programming-guidelines#regex-are-great---but-its-like-eating-rubbish


Thanks for including the disclaimer "It's my personal opinion and
feeling. No facts, no single truth." in your 'guidelines' (many of which
I disagree with). I just wish you had included the same disclaimer in
what you wrote in this thread, instead of presenting your personal
opinions and beliefs as facts.

Also, this has drifted far away from being related to Dovecot in any
useful way.



You disagree? Great! I am curious. What is wrong in my personal
guidelines?

Regards,
  Thomas Güttler



--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines


Perl was: JMAP: Re: http API for IMAP

2019-11-18 Thread Thomas Güttler via dovecot




On 16.11.19 at 08:15, Bron Gondwana via dovecot wrote:
proxy.jmap.io is very stale code at the moment.  I'm hoping to have enough time to hack on it at the IETF hackathon this 
weekend :)


I am a bit biased. AFAIK it is written in Perl. I am very happy that I have not 
needed to use Perl for 18 years now.
The regexes were great. But times have changed.
Every time I see regexes used today, it feels like being on the wrong track.

Related: 
https://github.com/guettli/programming-guidelines#regex-are-great---but-its-like-eating-rubbish

Regards,
  Thomas Güttler





Cheers,

Bron.

On Fri, Nov 15, 2019, at 00:44, Thomas Güttler via dovecot wrote:

On 14.11.19 at 14:03, Benny Pedersen via dovecot wrote:
> Thomas Güttler via dovecot wrote on 2019-11-14 08:55:
>
>> Is there already an open source imap2jmap server?
>
> why do you say imap here ?
>
> https://www.cyrusimap.org/imap/developer/jmap.html
>
> cyrus already have it, we just wait for dovecot :)


I used my favorite search engine (ecosia) and found

    https://proxy.jmap.io/

This way you can use JMAP even if you imap server does not
support it.


Regards,
   Thomas Güttler


--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines



--
   Bron Gondwana
   br...@fastmail.fm




--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines


Re: http API for IMAP

2019-11-15 Thread Thomas Güttler via dovecot




On 14.11.19 at 19:18, Ralph Seichter via dovecot wrote:

* Thomas Güttler via dovecot:


Stateless, http and URLs are the future.


A bold claim, and not worth anything without proof, which is impossible
to provide because you cannot predict the future.


Yes, you are right. I can't predict the future. But I can look at the current 
state of the art. AFAIK nobody would use CORBA today if they were starting 
from scratch.
Most people use http based APIs today.



JavaScript running in a browser or on a mobile phone can't connect to
IMAP/SMTP.


That's simply not true. There are JavaScript libraries like SmtpJS, a
low-level TCP/UDP socket API, and more.


Quoting this answer: https://stackoverflow.com/a/46886237/633961

> Note that smtpjs uses a service located at http://smtpjs. It's not truly a Javascript SMTP client. This "utility" 
means you are uploading your email credentials to the server smtpjs.com. Use with extreme caution.


JS running in the browser can't. JS running in Node.js can.




Please do your research before stating obvious falsehoods.


The above line is from you. Should I repeat it?


--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines


Re: http API for IMAP

2019-11-14 Thread Thomas Güttler via dovecot




On 14.11.19 at 14:21, Phillip Odam via dovecot wrote:

A HTTP API for IMAP and for that matter, POP3 and SMTP is exactly what we built 
where I work.



Did you build upon JMAP? If not, why not?

Regards,
 Thomas Güttler

--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines


Re: JMAP: Re: http API for IMAP

2019-11-14 Thread Thomas Güttler via dovecot

On 14.11.19 at 14:03, Benny Pedersen via dovecot wrote:

Thomas Güttler via dovecot wrote on 2019-11-14 08:55:


Is there already an open source imap2jmap server?


why do you say imap here ?

https://www.cyrusimap.org/imap/developer/jmap.html

cyrus already have it, we just wait for dovecot :)



I used my favorite search engine (ecosia) and found

   https://proxy.jmap.io/

This way you can use JMAP even if you imap server does not
support it.


Regards,
  Thomas Güttler


--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines


JMAP: Re: http API for IMAP

2019-11-13 Thread Thomas Güttler via dovecot




On 13.11.19 at 15:07, Benny Pedersen via dovecot wrote:

Thomas Güttler via dovecot wrote on 2019-11-13 14:40:

I would love to write a progressive web app for accessing dovecot (via
IMAP)


like all other webmail is using imap


But JavaScript in the browser can only use http/https.


so what ? :=)

hopefully you wont run webmail over http


Is there a way to access mails in dovecot via https?


google jmap, with is imho work on progress in dovecot


Yes, great. That's what I have been looking for.

Is there already an open source imap2jmap server?

Again, thank you for these four letters: jmap. That
was what I had on my mind.

Regards,
  Thomas

--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines


http API for IMAP

2019-11-13 Thread Thomas Güttler via dovecot

  
  
I would love to write a progressive web app for accessing dovecot
  (via IMAP)

But JavaScript in the browser can only use http/https.
Is there a way to access mails in dovecot via https?
Maybe by a third-party tool which I don't know yet

Regards,
  Thomas Güttler



-- 
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines
  



Re: imapsieve suddenly not working anymore

2019-08-21 Thread Thomas Stein via dovecot



Found the solution: mail was copied/moved to the mailbox INBOX.Spam, 
while the imapsieve rules only match `Spam'.
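
With the names aligned, the plugin block from the wiki howto would look
something like this (a sketch matching the rules logged below, with the
INBOX. prefix added):

  plugin {
    imapsieve_mailbox1_name = INBOX.Spam
    imapsieve_mailbox1_causes = COPY FLAG
    imapsieve_mailbox1_before = file:/usr/share/dovecot/sieve/report-spam.sieve
    imapsieve_mailbox2_name = *
    imapsieve_mailbox2_from = INBOX.Spam
    imapsieve_mailbox2_causes = COPY
    imapsieve_mailbox2_before = file:/usr/share/dovecot/sieve/report-ham.sieve
  }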


cheers, t.

On 2019-08-21 13:01, Thomas Stein via dovecot wrote:

On 2019-08-21 12:30, Thomas Stein via dovecot wrote:

Setting logging to debug reveals something is happening but the actual
scripts do not run i suppose.

Aug 21 11:54:23 imap(himbeere)<31571>: Debug:
Mailbox INBOX: Mailbox opened because: SELECT
Aug 21 11:54:23 imap(himbeere)<31569>: Debug:
imapsieve: mailbox INBOX.Spam: MOVE event


Maybe that's the problem? The "MOVE" event instead of a "COPY" event?



Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve:
Pigeonhole version 0.5.7.1 (db5c74be) initializing
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve:
include: sieve_global is not set; it is currently not possible to
include `:global' scripts.
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve:
Sieve imapsieve plugin for Pigeonhole version 0.5.7.1 (db5c74be)
loaded
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve:
Sieve Extprograms plugin for Pigeonhole version 0.5.7.1 (db5c74be)
loaded
Aug 21 11:54:23 imap(himbeere)<31569>: Debug:
imapsieve: Static mailbox rule [1]: mailbox=`Spam' from=`*'
causes=(COPY FLAG) =>
before=`file:/usr/share/dovecot/sieve/report-spam.sieve' after=(none)
Aug 21 11:54:23 imap(himbeere)<31569>: Debug:
imapsieve: Static mailbox rule [2]: mailbox=`*' from=`Spam'
causes=(COPY) =>
before=`file:/usr/share/dovecot/sieve/report-ham.sieve' after=(none)
Aug 21 11:54:24 imap(himbeere)<31571>: Debug:
Mailbox INBOX: UID 132668: Opened mail because: prefetch
Aug 21 11:54:24 imap(himbeere)<31571>: Debug:
Mailbox INBOX: UID 132668: Opened mail because: access
Aug 21 11:54:24 imap(himbeere)<31571>: Debug:
Mailbox INBOX: UID 132668: Opened mail because: MIME part
Aug 21 11:54:24 imap(himbeere)<31571>: Info: Logged
out in=427 out=4207 deleted=0 expunged=0 trashed=0 hdr_count=1
hdr_bytes=507 body_count=1 body_bytes

On 2019-08-20 17:33, Thomas Stein via dovecot wrote:

Hello one and all.

Dovecot version 2.3.7.1

I've configured imapsieve like
https://wiki.dovecot.org/HowTo/AntispamWithSieve a while a go and it
worked
for years now. Suddenly i noticed moving mails to the spamfolder does
not trigger the report-spam.sieve
script anymore.

sieve-test gives:

 ~/.maildir/.Spam/cur $ sieve-test
/usr/share/dovecot/sieve/report-spam.sieve
1542388745.M99384P16720.meine-oma.de\,S\=8173\,W\=8373\:2\,S -D
sieve-test(himbeere): Debug: sieve: Pigeonhole version 0.5.7.1
(db5c74be) initializing
sieve-test(himbeere): Debug: sieve: include: sieve_global is not set;
it is currently not possible to include `:global' scripts.
sieve-test(himbeere): Debug: sieve: Sieve imapsieve plugin for
Pigeonhole version 0.5.7.1 (db5c74be) loaded
sieve-test(himbeere): Debug: sieve: Sieve Extprograms plugin for
Pigeonhole version 0.5.7.1 (db5c74be) loaded
debug: file storage: Using Sieve script path:
/usr/share/dovecot/sieve/report-spam.sieve.
debug: file script: Opened script `report-spam' from
`/usr/share/dovecot/sieve/report-spam.sieve'.
debug: Script binary /usr/share/dovecot/sieve/report-spam.svbin
successfully loaded.
debug: binary save: not saving binary
/usr/share/dovecot/sieve/report-spam.svbin, because it is already
stored.
report-spam: error: the imapsieve extension cannot be used outside 
IMAP.

sieve-test(himbeere): Info: final result: failed; resolved with
successful implicit keep
 ~/.maildir/.Spam/cur $

I'm not sure whether "the imapsieve extension cannot be used outside 
IMAP" is already the actual error, or whether it only shows up because 
of the sieve-test script.

Any ideas on that?
cheers, t.


Re: imapsieve suddenly not working anymore

2019-08-21 Thread Thomas Stein via dovecot

On 2019-08-21 12:30, Thomas Stein via dovecot wrote:

Setting logging to debug reveals something is happening but the actual
scripts do not run i suppose.

Aug 21 11:54:23 imap(himbeere)<31571>: Debug:
Mailbox INBOX: Mailbox opened because: SELECT
Aug 21 11:54:23 imap(himbeere)<31569>: Debug:
imapsieve: mailbox INBOX.Spam: MOVE event


Maybe that's the problem? The "MOVE" event instead of a "COPY" event?



Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve:
Pigeonhole version 0.5.7.1 (db5c74be) initializing
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve:
include: sieve_global is not set; it is currently not possible to
include `:global' scripts.
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve:
Sieve imapsieve plugin for Pigeonhole version 0.5.7.1 (db5c74be)
loaded
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve:
Sieve Extprograms plugin for Pigeonhole version 0.5.7.1 (db5c74be)
loaded
Aug 21 11:54:23 imap(himbeere)<31569>: Debug:
imapsieve: Static mailbox rule [1]: mailbox=`Spam' from=`*'
causes=(COPY FLAG) =>
before=`file:/usr/share/dovecot/sieve/report-spam.sieve' after=(none)
Aug 21 11:54:23 imap(himbeere)<31569>: Debug:
imapsieve: Static mailbox rule [2]: mailbox=`*' from=`Spam'
causes=(COPY) =>
before=`file:/usr/share/dovecot/sieve/report-ham.sieve' after=(none)
Aug 21 11:54:24 imap(himbeere)<31571>: Debug:
Mailbox INBOX: UID 132668: Opened mail because: prefetch
Aug 21 11:54:24 imap(himbeere)<31571>: Debug:
Mailbox INBOX: UID 132668: Opened mail because: access
Aug 21 11:54:24 imap(himbeere)<31571>: Debug:
Mailbox INBOX: UID 132668: Opened mail because: MIME part
Aug 21 11:54:24 imap(himbeere)<31571>: Info: Logged
out in=427 out=4207 deleted=0 expunged=0 trashed=0 hdr_count=1
hdr_bytes=507 body_count=1 body_bytes

On 2019-08-20 17:33, Thomas Stein via dovecot wrote:

Hello one and all.

Dovecot version 2.3.7.1

I've configured imapsieve like
https://wiki.dovecot.org/HowTo/AntispamWithSieve a while ago and it
worked
for years now. Suddenly I noticed moving mails to the spamfolder does
not trigger the report-spam.sieve
script anymore.

sieve-test gives:

 ~/.maildir/.Spam/cur $ sieve-test
/usr/share/dovecot/sieve/report-spam.sieve
1542388745.M99384P16720.meine-oma.de\,S\=8173\,W\=8373\:2\,S -D
sieve-test(himbeere): Debug: sieve: Pigeonhole version 0.5.7.1
(db5c74be) initializing
sieve-test(himbeere): Debug: sieve: include: sieve_global is not set;
it is currently not possible to include `:global' scripts.
sieve-test(himbeere): Debug: sieve: Sieve imapsieve plugin for
Pigeonhole version 0.5.7.1 (db5c74be) loaded
sieve-test(himbeere): Debug: sieve: Sieve Extprograms plugin for
Pigeonhole version 0.5.7.1 (db5c74be) loaded
debug: file storage: Using Sieve script path:
/usr/share/dovecot/sieve/report-spam.sieve.
debug: file script: Opened script `report-spam' from
`/usr/share/dovecot/sieve/report-spam.sieve'.
debug: Script binary /usr/share/dovecot/sieve/report-spam.svbin
successfully loaded.
debug: binary save: not saving binary
/usr/share/dovecot/sieve/report-spam.svbin, because it is already
stored.
report-spam: error: the imapsieve extension cannot be used outside 
IMAP.

sieve-test(himbeere): Info: final result: failed; resolved with
successful implicit keep
 ~/.maildir/.Spam/cur $

I'm not sure whether "the imapsieve extension cannot be used outside IMAP"
is the error already, or whether that's only because of
the sieve-test script.

Any ideas on that?
cheers, t.


Re: imapsieve suddenly not working anymore

2019-08-21 Thread Thomas Stein via dovecot



Setting logging to debug reveals something is happening but the actual 
scripts do not run i suppose.


Aug 21 11:54:23 imap(himbeere)<31571>: Debug: Mailbox 
INBOX: Mailbox opened because: SELECT
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: 
imapsieve: mailbox INBOX.Spam: MOVE event
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve: 
Pigeonhole version 0.5.7.1 (db5c74be) initializing
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve: 
include: sieve_global is not set; it is currently not possible to 
include `:global' scripts.
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve: 
Sieve imapsieve plugin for Pigeonhole version 0.5.7.1 (db5c74be) loaded
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: sieve: 
Sieve Extprograms plugin for Pigeonhole version 0.5.7.1 (db5c74be) 
loaded
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: 
imapsieve: Static mailbox rule [1]: mailbox=`Spam' from=`*' causes=(COPY 
FLAG) => before=`file:/usr/share/dovecot/sieve/report-spam.sieve' 
after=(none)
Aug 21 11:54:23 imap(himbeere)<31569>: Debug: 
imapsieve: Static mailbox rule [2]: mailbox=`*' from=`Spam' 
causes=(COPY) => before=`file:/usr/share/dovecot/sieve/report-ham.sieve' 
after=(none)
Aug 21 11:54:24 imap(himbeere)<31571>: Debug: Mailbox 
INBOX: UID 132668: Opened mail because: prefetch
Aug 21 11:54:24 imap(himbeere)<31571>: Debug: Mailbox 
INBOX: UID 132668: Opened mail because: access
Aug 21 11:54:24 imap(himbeere)<31571>: Debug: Mailbox 
INBOX: UID 132668: Opened mail because: MIME part
Aug 21 11:54:24 imap(himbeere)<31571>: Info: Logged 
out in=427 out=4207 deleted=0 expunged=0 trashed=0 hdr_count=1 
hdr_bytes=507 body_count=1 body_bytes
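
(For reference, the two static rules in the debug output above correspond to
an imapsieve configuration along these lines; this is only a sketch, with the
script paths from the AntispamWithSieve howto assumed. Note also that, as far
as I can tell, imapsieve treats an IMAP MOVE as a COPY cause, so the MOVE
event above should still match causes=(COPY FLAG).)

plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms
  # rule [1]: mails moved/copied into the Spam folder
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox1_causes = COPY FLAG
  imapsieve_mailbox1_before = file:/usr/share/dovecot/sieve/report-spam.sieve
  # rule [2]: mails moved/copied out of the Spam folder
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Spam
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/usr/share/dovecot/sieve/report-ham.sieve
}
# plus, in the imap protocol block:
# protocol imap { mail_plugins = $mail_plugins imap_sieve }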


On 2019-08-20 17:33, Thomas Stein via dovecot wrote:

Hello one and all.

Dovecot version 2.3.7.1

I've configured imapsieve like
https://wiki.dovecot.org/HowTo/AntispamWithSieve a while ago and it
worked
for years now. Suddenly I noticed moving mails to the spamfolder does
not trigger the report-spam.sieve
script anymore.

sieve-test gives:

 ~/.maildir/.Spam/cur $ sieve-test
/usr/share/dovecot/sieve/report-spam.sieve
1542388745.M99384P16720.meine-oma.de\,S\=8173\,W\=8373\:2\,S -D
sieve-test(himbeere): Debug: sieve: Pigeonhole version 0.5.7.1
(db5c74be) initializing
sieve-test(himbeere): Debug: sieve: include: sieve_global is not set;
it is currently not possible to include `:global' scripts.
sieve-test(himbeere): Debug: sieve: Sieve imapsieve plugin for
Pigeonhole version 0.5.7.1 (db5c74be) loaded
sieve-test(himbeere): Debug: sieve: Sieve Extprograms plugin for
Pigeonhole version 0.5.7.1 (db5c74be) loaded
debug: file storage: Using Sieve script path:
/usr/share/dovecot/sieve/report-spam.sieve.
debug: file script: Opened script `report-spam' from
`/usr/share/dovecot/sieve/report-spam.sieve'.
debug: Script binary /usr/share/dovecot/sieve/report-spam.svbin
successfully loaded.
debug: binary save: not saving binary
/usr/share/dovecot/sieve/report-spam.svbin, because it is already
stored.
report-spam: error: the imapsieve extension cannot be used outside 
IMAP.

sieve-test(himbeere): Info: final result: failed; resolved with
successful implicit keep
 ~/.maildir/.Spam/cur $

I'm not sure whether "the imapsieve extension cannot be used outside IMAP"
is the error already, or whether that's only because of
the sieve-test script.

Any ideas on that?
cheers, t.


imapsieve suddenly not working anymore

2019-08-20 Thread Thomas Stein via dovecot

Hello one and all.

Dovecot version 2.3.7.1

I've configured imapsieve like 
https://wiki.dovecot.org/HowTo/AntispamWithSieve a while ago and it
worked
for years now. Suddenly I noticed moving mails to the spamfolder does
not trigger the report-spam.sieve

script anymore.

sieve-test gives:

 ~/.maildir/.Spam/cur $ sieve-test  
/usr/share/dovecot/sieve/report-spam.sieve 
1542388745.M99384P16720.meine-oma.de\,S\=8173\,W\=8373\:2\,S -D
sieve-test(himbeere): Debug: sieve: Pigeonhole version 0.5.7.1 
(db5c74be) initializing
sieve-test(himbeere): Debug: sieve: include: sieve_global is not set; it 
is currently not possible to include `:global' scripts.
sieve-test(himbeere): Debug: sieve: Sieve imapsieve plugin for 
Pigeonhole version 0.5.7.1 (db5c74be) loaded
sieve-test(himbeere): Debug: sieve: Sieve Extprograms plugin for 
Pigeonhole version 0.5.7.1 (db5c74be) loaded
debug: file storage: Using Sieve script path: 
/usr/share/dovecot/sieve/report-spam.sieve.
debug: file script: Opened script `report-spam' from 
`/usr/share/dovecot/sieve/report-spam.sieve'.
debug: Script binary /usr/share/dovecot/sieve/report-spam.svbin 
successfully loaded.
debug: binary save: not saving binary 
/usr/share/dovecot/sieve/report-spam.svbin, because it is already 
stored.

report-spam: error: the imapsieve extension cannot be used outside IMAP.
sieve-test(himbeere): Info: final result: failed; resolved with 
successful implicit keep

 ~/.maildir/.Spam/cur $

I'm not sure whether "the imapsieve extension cannot be used outside IMAP"
is the error already, or whether that's only because of
the sieve-test script.

Any ideas on that?
cheers, t.


Re: Autoexpunge not working for Junk?

2019-08-12 Thread Thomas Zajic via dovecot
* Amir Caspi via dovecot, 12.08.19 22:01

> [~]# doveadm mailbox status -u cepheid firstsaved Junk
> Junk firstsaved=1563154976
> 
> I can't tell how that timestamp corresponds to a human-readable date, however.

[zlatko@disclosure:~]$ date -d @1563154976
Mon Jul 15 03:42:56 CEST 2019

HTH,
Thomas


Re: Panic: Module context expire_mail_user_module missing (Debian 10)

2019-08-02 Thread Thomas Krause via dovecot

On 2019-08-02 10:39, Alexander Dalloz via dovecot wrote:

On 2019-08-02 10:31, Thomas Krause via dovecot wrote:

Hi all,
I tried to migrate a mailserver from Debian 9 (Dovecot 2.2.27) to
Debian 10 (Dovecot 2.3.4.1). I moved the mail-partition and
/etc/dovecot from the old to the new server. Dovecot started with
2 warnings. When trying to fetch mails via pop3 the server crashes:


Aug  1 16:56:47 mail19 dovecot: pop3-login: Login: user=,
rip=::1, mpid=22394, session=
Aug  1 16:56:47 mail19 dovecot:
pop3(testuser)<22394>: Panic: Module
context expire_mail_user_module missing


Disable the expire plugin in your configuration.


After disabling the expire module it works. But I need the expire
module. What can I do?
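
(For what it's worth: since v2.2.20 the autoexpunge mailbox setting covers the
most common expire-plugin use case without the plugin or its dict. A minimal
sketch, where the folder names and the 30-day period are assumptions:)

namespace inbox {
  mailbox Trash {
    autoexpunge = 30d
  }
  mailbox Junk {
    autoexpunge = 30d
  }
}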

Regards,
Thomas.


Panic: Module context expire_mail_user_module missing (Debian 10)

2019-08-02 Thread Thomas Krause via dovecot

Hi all,
I tried to migrate a mailserver from Debian 9 (Dovecot 2.2.27) to
Debian 10 (Dovecot 2.3.4.1). I moved the mail-partition and
/etc/dovecot from the old to the new server. Dovecot started with
2 warnings. When trying to fetch mails via pop3 the server crashes:


Aug  1 16:56:47 mail19 dovecot: pop3-login: Login: user=, 
rip=::1, mpid=22394, session=
Aug  1 16:56:47 mail19 dovecot: 
pop3(testuser)<22394>: Panic: Module 
context expire_mail_user_module missing
Aug  1 16:56:47 mail19 dovecot: 
pop3(testuser)<22394>: Error: Raw 
backtrace: /usr/lib/dovecot/libdovecot.so.0(+0xdb13b) [0x7fd14ea7713b] 
-> /usr/lib/dovecot/libdovecot.so.0
(+0xdb1d1) [0x7fd14ea771d1] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x4a001) [0x7fd14e9e6001] -> 
/usr/lib/dovecot/modules/lib20_expire_plugin.so(+0x23b2) 
[0x7fd14e7953b2] -> /usr/lib/dovecot/libdovecot-storage.so.0
(hook_mailbox_allocated+0x89) [0x7fd14eb89409] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(mailbox_alloc+0xda) 
[0x7fd14eb84c5a] -> dovecot/pop3(client_init_mailbox+0x58) 
[0x55781fb5fba8] -> dovecot/pop3(+0x5645)
 [0x55781fb5e645] -> dovecot/pop3(+0x5882) [0x55781fb5e882] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x6f601) [0x7fd14ea0b601] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x6f97b) [0x7fd14ea0b97b] -> 
/usr/lib/dovecot/libdo
vecot.so.0(+0x7030d) [0x7fd14ea0c30d] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x6f) [0x7fd14ea8d5ef] 
-> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x136) 
[0x7fd14ea8ebe6] -> /usr/l
ib/dovecot/libdovecot.so.0(io_loop_handler_run+0x4c) [0x7fd14ea8d68c] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x40) [0x7fd14ea8d7f0] -> 
/usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fd14e
a0dfd3] -> dovecot/pop3(main+0x2cf) [0x55781fb5e13f] -> 
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7fd14e7f909b] 
-> dovecot/pop3(_start+0x2a) [0x55781fb5e29a]
Aug  1 16:56:47 mail19 dovecot: 
pop3(testuser)<22394>: Fatal: master: 
service(pop3): child 22394 killed with signal 6 (core dumps disabled -
https://dovecot.org/bugreport.html#coredumps)

What can I do? I tried to enable core dumps (installed systemd-coredump
and modified limits.conf) but it didn't work.

Best regards,
Thomas.


email notification regression between 2.2 and 2.3

2019-05-14 Thread Thomas Capricelli via dovecot



Hi,


Hello.

For the last year, each time i've tried updating dovecot from 2.2.x to 
2.3.x, I had to rollback because of the following problem:


Users don't "see" new emails arriving in subdirs.

To be specific
* we mostly use/tested thunderbird client
* we use maildrop for delivery
* this is using imap, not pop3
* when a new email arrives, thunderbird renders the dir name in blue and 
appends a "(1)" (or more..). That's what we call 'see'

* this happens only on 'subdirs', not the main directory.
* the server is a typical linux amd64, using the gentoo distribution


More testing shows that
* if we switch delivery to dovecot-lda, it works ok (see the sketch after
this list). But we can't commit that change, or at least not now/easily.
Our users heavily use .mailfilter
* if the user is currently viewing the subdir, then email notification 
is working
* better : if the user went on the subdir and then in another dir, it 
keeps on working. It seems we need to 'open' it at least once for 
notification to work

* "maildir_very_dirty_syncs = no" doesn't help (we had =yes)
* "mailbox_list_index = no" doesn't help
* i can confirm inotify was detected/used in autoconf when installed 
(anyway it works well with 2.2)
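
A possible middle ground, assuming maildrop stays in place: let .mailfilter
hand the finished message to dovecot-lda, so Dovecot still performs the final
delivery and can notify waiting IMAP sessions. A sketch; the binary path and
the $LOGNAME variable are assumptions to adjust for your install:

# at the end of ~/.mailfilter
to "|/usr/libexec/dovecot/dovecot-lda -d $LOGNAME"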



I'm always on freenode (orzel). I'll be in #dovecot the next days at least.


greetings,
--
Thomas Capricelli


haproxy + submission services -> postfix failure

2019-04-19 Thread Chris Thomas via dovecot
Hi,

I have a nginx server which is using the proxy protocol to forward tcp
connections to dovecot. Dovecot is configured to be a submission
service for email to be sent. Then postfix should send the email
itself which is also using the ha proxy protocol. There are a few
moving parts in this problem so I'm not sure where the problem is. But
I want to ask if somebody can validate my dovecot configuration
somehow so I can start to tick off some things from the list.

Sending email fails; it seems to get to postfix, then dies.
Receiving emails succeeds and I don't have any problem picking them up.

I've figured out some stuff, like lmtp shouldn't use haproxy when
talking between postfix -> dovecot for receiving emails. If I enable
the protocol on lmtp, I can't receive any emails at all.

In order to get postfix to accept emails, I enabled haproxy protocol
and enabled postscreen and then postfix could access the source ip and
stop my server from being an open relay.

I've got tls certificates installed on dovecot and postfix, all
created by letsencrypt and I don't appear to have any problems with
them.

I will try to give as much information about the config as I can, I'm
not sure what other parts are good to have, but let me know if you are
missing something or want to check a value.

>> 10-master.conf:
service submission-login {
  inet_listener submission {
port = 587
haproxy = yes
  }
}

service lmtp {
  inet_listener lmtp {
port = 24
haproxy = no
  }
}


>> 20-submission.conf
submission_relay_host = postfix.mail-server
submission_relay_port = 25
submission_relay_ssl = starttls
submission_relay_ssl_verify = yes

Because it might help to see the other side of the connection, here is the
relevant postfix configuration:

>> master.cf:
smtp  inet  n   -   -   -   1   postscreen
smtpd pass  -   -   -   -   -   smtpd

>> main.cf

postscreen_upstream_proxy_protocol = haproxy
postscreen_upstream_proxy_timeout = 10s

That's it. I don't know what other information could be useful.

There are some logs, they are like this (I've got logging turned on
for pretty much every option I have:

Dovecot logs:

Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
Added userdb setting: plugin/quota_rule=*:bytes=0
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
Effective uid=8, gid=8, home=/mail/__DOMAIN_COM__/__USER__
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
Namespace inbox: type=private, prefix=, sep=, inbox=yes, hidden=no,
list=yes, subscriptions=yes
location=maildir:/mail/__DOMAIN_COM__/__USER__
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
maildir++: root=/mail/__DOMAIN_COM__/__USER__, index=, indexpvt=,
control=, inbox=/mail/__DOMAIN_COM__/__USER__, alt=
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
smtp-server: conn __IP_ADDR_1__:31217 [0]: Connection created
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
smtp-client: conn postfix.mail-server:25 [0]: Connection created
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
smtp-client: conn postfix.mail-server:25 [0]: Looking up IP address
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
smtp-client: conn postfix.mail-server:25 [0]: DNS lookup successful;
got 1 IPs
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
smtp-client: conn postfix.mail-server:25 [0]: Connecting to
10.104.211.161:25
Apr 19 17:54:47 submission(__EMAIL__)<497>: Debug:
smtp-client: conn postfix.mail-server:25 [0]: Connected
Apr 19 17:54:57 submission(__EMAIL__)<497>: Debug:
smtp-client: conn postfix.mail-server:25 [0]: Received greeting from
server: 421 4.3.2 No system resources
Apr 19 17:54:57 submission(__EMAIL__)<497>: Debug:
smtp-client: conn postfix.mail-server:25 [0]: Connection failed: 421
4.3.2 No system resources
Apr 19 17:54:57 submission(__EMAIL__)<497>: Error:
Failed to establish relay connection: 421 4.3.2 No system resources
Apr 19 17:54:57 submission(__EMAIL__)<497>: Debug:
smtp-client: conn postfix.mail-server:25 [0]: Disconnected
Apr 19 17:54:57 submission(__EMAIL__)<497>: Info:
Disconnect from __IP_ADDR_1__: Failed to establish relay connection
in=0 out=22 (state=GREETING)
Apr 19 17:54:57 submission(__EMAIL__)<497>: Debug:
smtp-server: conn __IP_ADDR_1__:31217 [0]: Disconnected: Failed to
establish relay connection

Postfix Logs:
postfix/postscreen[525]: warning: haproxy read: time limit exceeded
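
That postscreen line looks telling: postscreen waited for a PROXY protocol
header on the connection from Dovecot, and Dovecot's relay client does not
send one. One workaround sketch (the port number is an assumption): add a
second, plain smtpd listener that bypasses postscreen for the internal relay,
and point the submission relay at it.

>> master.cf (sketch):
2525  inet  n   -   -   -   -   smtpd

>> 20-submission.conf (sketch):
submission_relay_port = 2525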

If anybody could help out, I'd be grateful because I just can't see
what the problem is.

Chris


Migration to 2.3.36 using replication

2018-12-26 Thread Thomas Durand
Hello,

I have to migrate my dovecot box to a new one. I would like to know if it’s 
possible to do this migration with the replication process.

My plan is to activate replication between 2.2.10 server and the 2.3.4 new 
server.
Then I will update the DNS to redirect the name to the new box. Users will then
connect either to the new or the old box depending on the DNS update.

Has anyone already experienced such a process? Are there any
incompatibility issues?
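
(For reference, the usual pairing for dsync-based replication, as a minimal
sketch assuming TCP on port 2727 and the same doveadm settings on both boxes;
the replicator/aggregator service listeners are needed as well:)

mail_plugins = $mail_plugins notify replication
doveadm_port = 2727
doveadm_password = secret
service doveadm {
  inet_listener {
    port = 2727
  }
}
plugin {
  mail_replica = tcp:other-box.example.com:2727
}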

Thanks.

Thomas

Random issue on mailbox "assertion failed"

2018-12-05 Thread Thomas Durand
Hi,

Since few weeks, I have regularly the issue below with some mailboxes - This is 
happening randomly.  Users are complaining emails are not visible. The 
situation is recovered by removing indexes but I would like to have it fix 
permanently.

Ruuning dovecot 2.2.10 - Anyone to advise me what is wrong ?

Thanks.

Dec 05 08:03:37 justyna.rezoo.org dovecot[18684]: imap(XXX): Panic: file 
mail-index-sync-keywords.c: line 227 (keywords_update_records): assertion 
failed: (data_offset >= sizeof(struct mail_index_record))
Dec 05 08:03:37 justyna.rezoo.org dovecot[18684]: imap(xxx: Error: Raw 
backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0x6a06e) [0x7fb890be706e] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x6a14e) [0x7fb890be714e] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7fb890b9f52c] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_sync_keywords+0x808) 
[0x7fb890f205c8] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_sync_record+0xfd) 
[0x7fb890f20eed] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_sync_map+0x21e) 
[0x7fb890f21e8e] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_map+0x3e7) 
[0x7fb890f12b77] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0xb937d) 
[0x7fb890f0e37d] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0xb9990) 
[0x7fb890f0e990] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_index_open+0x8c) 
[0x7fb890f0ea7c] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_storage_mailbox_open+0x87) 
[0x7fb890eff9c7] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x4d882) 
[0x7fb890ea2882] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x4d963) 
[0x7fb890ea2963] -> /usr/lib64/dovecot/lib20_zlib_plugin.so(+0x2a4c) 
[0x7fb89018fa4c] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x802b4) 
[0x7fb890ed52b4] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_open+0x20) [0x7fb890ed5430] 
-> dovecot/imap(cmd_select_full+0x172) [0x7fb8913b0432] -> 
dovecot/imap(command_exec+0x3c) [0x7fb8913b601c] -> dovecot/imap(+0x17f1f) 
[0x7fb8913b4f1f] -> dovecot/imap(+0x18005) [0x7fb8913b5005] -> 
dovecot/imap(client_handle_input+0x14d) [0x7fb8913b52fd] -> 
dovecot/imap(client_input+0x85) [0x7fb8913b56c5] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x27) [0x7fb890bf7a87] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0xff) [0x7fb890bf890f] 
-> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7fb890bf75d8] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7fb890ba49e3] -> 
dovecot/imap(main+0x2c4) [0x7fb8913a9324] -> 
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7fb8907cf3d5]



Thomas Durand
+336 12 07 31 87
thomas.dur...@rezoo.org





Re: Dovecot crash

2018-11-30 Thread Thomas Durand
Exactly - I removed them with
find . -name "dovecot.index*" -type f -delete

There is no need to restart dovecot. IMAP clients will be forced to resync all
the emails from the server.
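
(An alternative that avoids deleting files by hand, assuming doveadm is
available; it rebuilds the indexes in place:)

doveadm force-resync -u username '*'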


> On 28 Nov 2018 at 23:20, JCA <1.41...@gmail.com> wrote:
> 
> Thanks. Assuming that the IMAP mail directory for the account affected is 
> under /home/xyz/mail, are you talking about the contents of the index 
> directory, excluding the log file therein?
> 
> On Wed, Nov 28, 2018 at 2:29 PM Thomas Durand <t...@rezoo.org> wrote:
> Hi,
> 
> I had the similar messages after an upgrade then downgrade. I was able to fix 
> by removing all indexes files. 
> 
> Thomas 
> 
> > On 28 Nov 2018 at 22:02, JCA <1.41...@gmail.com> wrote:
> > 
> >   This happening when my Thunderbird client is trying to establish a 
> > connection with a Dovecot server. Some background first:
> > 
> >   1) I am running Thunderbird 6.20.1 from a Linux client.
> >   2) Other clients (e.g. Maildroid in my Android phone) do not have any  
> > issues.
> >   3) The Dovecot software is version 2.2.9, running in a Linux server.
> >   4) This may be the root of the problem: at some point, the Dovecot 
> > software was changed to version 2.3.1, and then reverted to 2.2.9. The
> > crash started to appear when this reversion took place.
> > 
> >   My guess is that upgrading to 2.3.1 will eliminate this problem. However, 
> > in  case that does not solve the problem, I would be grateful if anybody 
> > could throw some light into exactly what is going on here, and whether it 
> > can be fixed even without the need of upgrading to 2.3.1.
> > 
> >   What follows are the traces generated by Dovecot  2.2.9 when a connection 
> > is attempted from Thunderbird, with usernames and IP addresses disguised 
> > for privacy, and some blank lines inserted for readability:
> > 
> > Nov 28 13:36:20 imap-login: Info: Login: user=, method=PLAIN, 
> > rip=xxx.yyy.z
> > zz.154, lip=aaa.bbb.ccc.7, mpid=29245, TLS, session=
> > 
> > Nov 28 13:36:20 imap(xyz): Panic: file mail-index-sync-keywords.c: line 227 
> > (key
> > words_update_records): assertion failed: (data_offset >= sizeof(struct 
> > mail_index_record))
> > 
> > Nov 28 13:36:20 imap(xyz): Error: Raw backtrace: 
> > /usr/lib/dovecot/libdovecot.so.
> > 0(+0x65626) [0xb75ae626] -> /usr/lib/dovecot/libdovecot.so.0(+0x6569f) 
> > [0xb75ae6
> > 9f] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0xb756180e] -> 
> > /usr/lib/dove
> > cot/libdovecot-storage.so.0(mail_index_sync_keywords+0x707) [0xb76cb6b7] -> 
> > /usr
> > /lib/dovecot/libdovecot-storage.so.0(mail_index_sync_record+0xf7) 
> > [0xb76cc167] -
> > > /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_sync_map+0x2a4) 
> > > [0xb76cd22
> > 4] -> /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_map+0x57c) 
> > [0xb76bcb9c
> > ] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0xb7c2a) [0xb76b7c2a] -> 
> > /usr/lib
> > /dovecot/libdovecot-storage.so.0(+0xb7daa) [0xb76b7daa] -> 
> > /usr/lib/dovecot/libd
> > ovecot-storage.so.0(mail_index_open+0x114) [0xb76b7f54] -> 
> > /usr/lib/dovecot/libd
> > ovecot-storage.so.0(index_storage_mailbox_open+0xab) [0xb76a808b] -> 
> > /usr/lib/do
> > vecot/libdovecot-storage.so.0(+0x5a481) [0xb765a481] -> 
> > /usr/lib/dovecot/libdove
> > cot-storage.so.0(+0x5a6f2) [0xb765a6f2] -> 
> > /usr/lib/dovecot/libdovecot-storage.s
> > o.0(+0x7a017) [0xb767a017] -> 
> > /usr/lib/dovecot/libdovecot-storage.so.0(mailbox_o
> > pen+0x26) [0xb767a1d6] -> dovecot/imap(client_open_save_dest_box+0x79) 
> > [0x805dca
> > 9] -> dovecot/imap() [0x8052678] -> dovecot/imap(command_exec+0x32) 
> > [0x805d7a2]
> > -> dovecot/imap() [0x805c746] -> dovecot/imap() [0x805c820] -> 
> > dovecot/imap(clie
> > nt_handle_input+0x125) [0x805cad5] -> dovecot/imap(client_input+0x71) 
> > [0x805ce91
> > ] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x42) [0xb75c1292] -> 
> > /usr
> > /lib/dovecot/libdovecot.so.0(io_loop_handler_run+0xdb) [0xb75c22cb] -> 
> > /usr/lib/
> > dovecot/libdovecot.so.0(io_loop_run+0x48) [0xb75c0cd8] -> 
> > /usr/lib/dovecot/libdo
> > vecot.so.0(master_service_run+0x2d) [0xb756728d] -> 
> > dovecot/imap(main+0x2bf) [0x
> > 8066fef] -> /lib/libc.so.6(__libc_start_main+0xe6) [0xb73cfdb6]
> > 
> > Nov 28 13:36:20 imap(xyz

Re: Dovecot crash

2018-11-28 Thread Thomas Durand
Hi,

I had the similar messages after an upgrade then downgrade. I was able to fix 
by removing all indexes files. 

Thomas 

> On 28 Nov 2018 at 22:02, JCA <1.41...@gmail.com> wrote:
> 
>   This happening when my Thunderbird client is trying to establish a 
> connection with a Dovecot server. Some background first:
> 
>   1) I am running Thunderbird 6.20.1 from a Linux client.
>   2) Other clients (e.g. Maildroid in my Android phone) do not have any  
> issues.
>   3) The Dovecot software is version 2.2.9, running in a Linux server.
>   4) This may be the root of the problem: at some point, the Dovecot software 
>  was changed to version 2.3.1, and then reverted to 2.2.9. The crash
> started to appear when this reversion took place.
> 
>   My guess is that upgrading to 2.3.1 will eliminate this problem. However, 
> in  case that does not solve the problem, I would be grateful if anybody 
> could throw some light into exactly what is going on here, and whether it can 
> be fixed even without the need of upgrading to 2.3.1.
> 
>   What follows are the traces generated by Dovecot  2.2.9 when a connection 
> is attempted from Thunderbird, with usernames and IP addresses disguised for 
> privacy, and some blank lines inserted for readability:
> 
> Nov 28 13:36:20 imap-login: Info: Login: user=, method=PLAIN, 
> rip=xxx.yyy.z
> zz.154, lip=aaa.bbb.ccc.7, mpid=29245, TLS, session=
> 
> Nov 28 13:36:20 imap(xyz): Panic: file mail-index-sync-keywords.c: line 227 
> (key
> words_update_records): assertion failed: (data_offset >= sizeof(struct 
> mail_index_record))
> 
> Nov 28 13:36:20 imap(xyz): Error: Raw backtrace: 
> /usr/lib/dovecot/libdovecot.so.
> 0(+0x65626) [0xb75ae626] -> /usr/lib/dovecot/libdovecot.so.0(+0x6569f) 
> [0xb75ae6
> 9f] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0xb756180e] -> 
> /usr/lib/dove
> cot/libdovecot-storage.so.0(mail_index_sync_keywords+0x707) [0xb76cb6b7] -> 
> /usr
> /lib/dovecot/libdovecot-storage.so.0(mail_index_sync_record+0xf7) 
> [0xb76cc167] -
> > /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_sync_map+0x2a4) 
> > [0xb76cd22
> 4] -> /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_map+0x57c) 
> [0xb76bcb9c
> ] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0xb7c2a) [0xb76b7c2a] -> 
> /usr/lib
> /dovecot/libdovecot-storage.so.0(+0xb7daa) [0xb76b7daa] -> 
> /usr/lib/dovecot/libd
> ovecot-storage.so.0(mail_index_open+0x114) [0xb76b7f54] -> 
> /usr/lib/dovecot/libd
> ovecot-storage.so.0(index_storage_mailbox_open+0xab) [0xb76a808b] -> 
> /usr/lib/do
> vecot/libdovecot-storage.so.0(+0x5a481) [0xb765a481] -> 
> /usr/lib/dovecot/libdove
> cot-storage.so.0(+0x5a6f2) [0xb765a6f2] -> 
> /usr/lib/dovecot/libdovecot-storage.s
> o.0(+0x7a017) [0xb767a017] -> 
> /usr/lib/dovecot/libdovecot-storage.so.0(mailbox_o
> pen+0x26) [0xb767a1d6] -> dovecot/imap(client_open_save_dest_box+0x79) 
> [0x805dca
> 9] -> dovecot/imap() [0x8052678] -> dovecot/imap(command_exec+0x32) 
> [0x805d7a2]
> -> dovecot/imap() [0x805c746] -> dovecot/imap() [0x805c820] -> 
> dovecot/imap(clie
> nt_handle_input+0x125) [0x805cad5] -> dovecot/imap(client_input+0x71) 
> [0x805ce91
> ] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x42) [0xb75c1292] -> 
> /usr
> /lib/dovecot/libdovecot.so.0(io_loop_handler_run+0xdb) [0xb75c22cb] -> 
> /usr/lib/
> dovecot/libdovecot.so.0(io_loop_run+0x48) [0xb75c0cd8] -> 
> /usr/lib/dovecot/libdo
> vecot.so.0(master_service_run+0x2d) [0xb756728d] -> dovecot/imap(main+0x2bf) 
> [0x
> 8066fef] -> /lib/libc.so.6(__libc_start_main+0xe6) [0xb73cfdb6]
> 
> Nov 28 13:36:20 imap(xyz): Fatal: master: service(imap): child 29245 killed 
> with
>  signal 6 (core dumps disabled)
> 
> Nov 28 13:36:20 imap-login: Info: Login: user=, method=PLAIN, 
> rip=xxx.yyy.z
> zz.154, lip=aaa.bbb.ccc.7, mpid=29247, TLS, session=
> 
> Nov 28 13:36:21 imap(xyz): Panic: file mail-index-sync-keywords.c: line 227 
> (key
> words_update_records): assertion failed: (data_offset >= sizeof(struct 
> mail_inde
> x_record))
> 
> Nov 28 13:36:21 imap(xyz): Error: Raw backtrace: 
> /usr/lib/dovecot/libdovecot.so.
> 0(+0x65626) [0xb7757626] -> /usr/lib/dovecot/libdovecot.so.0(+0x6569f) 
> [0xb77576
> 9f] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0xb770a80e] -> 
> /usr/lib/dove
> cot/libdovecot-storage.so.0(mail_index_sync_keywords+0x707) [0xb78746b7] -> 
> /usr
> /lib/dovecot/libdovecot-storage.so.0(mail_index_sync_record+0xf7) 
> [0xb7875167] -
> > /usr/lib/dovecot/libdovecot-storage.so.0(mail_index_sync_map+0x2a4) 
> > [0xb787622
> 4] -> /usr/lib/dovecot/libdovecot-stora

Same emails appearing multiple times after upgrade to version 2.3.3

2018-11-22 Thread Thomas Durand
Hi,

I have updated dovecot and dovecot-pigeonhole today to the latest versions
available on my CentOS 7.
It's a small email server with postfix/amavisd/clamav/spamassassin.

I had dovecot for IMAP/POP3 with spam plugins to store spam messages
automatically in the spam folder.

After the upgrade, some of my users told me that the same message appears
in their mailbox 2, 3, even 10 times.
I saw the same behavior when using Roundcube.

I was able to stop this after removing the pigeonhole rpm (removing/disabling
the config was not enough).

Would appreciate your help to understand what is wrong with my config.

Thanks
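
(One note on the config below: the sieve_duplicate_* settings there only
parametrize the duplicate test; they have no effect unless a script actually
uses it. A minimal sketch of such a guard, assuming the vnd.dovecot.duplicate
extension stays enabled:)

require "vnd.dovecot.duplicate";
if duplicate {
  discard;
  stop;
}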

dovecot conf :
[root@justyna conf.d]# dovecot -n
# 2.3.3 (dcead646b): /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-042stab127.2 x86_64 CentOS Linux release 7.5.1804 (Core)  
# Hostname: x
auth_mechanisms = plain login
auth_verbose = yes
disable_plaintext_auth = no
doveadm_password = # hidden, use -P to show it
doveadm_port = 2727
imap_client_workarounds = delay-newmail
mail_location = maildir:~/Maildir
mail_plugins = " zlib notify replication"
mbox_write_locks = fcntl
namespace inbox {
  inbox = yes
  location = 
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Junk {
auto = subscribe
special_use = \Junk
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Trash {
auto = subscribe
special_use = \Trash
  }
  prefix = 
}
passdb {
  driver = pam
}
passdb {
  args = scheme=CRYPT username_format=%u /etc/dovecot/users
  driver = passwd-file
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  mail_replica = tcp:62.210.220.186:2727
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_duplicate_default_period = 1h
  sieve_duplicate_max_period = 1d
  sieve_extensions = +vnd.dovecot.duplicate
  sieve_global_dir = /var/lib/dovecot/sieve/
  sieve_global_path = /var/lib/dovecot/sieve/default.sieve
  zlib_save = gz
  zlib_save_level = 6
}
protocols = imap pop3
service aggregator {
  fifo_listener replication-notify-fifo {
mode = 0666
user = vmail
  }
  unix_listener replication-notify {
mode = 0666
user = vmail
  }
}
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0666
user = postfix
  }
}
service doveadm {
  inet_listener {
port = 2727
  }
}
service imap-login {
  process_min_avail = 2
  service_count = 0
}
ssl_ca = 
/etc/letsencrypt/live/www.cr-avocats.com/lets-encrypt-x3-cross-signed.pem
ssl_cert = 

Re: Local access to IMAP mailboxes

2018-09-26 Thread Thomas Leuxner
* Victor Sudakov  2018.09.26 12:17:

> > >> However, I often read and modify the mailboxes locally with Mutt (e.g.
> > >> append and delete mails).

Why not use Mutt's IMAP capabilities and keep the indexes nice and clean?
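
A minimal sketch of that, with the host name as a placeholder:

# ~/.muttrc
set folder = imaps://mail.example.com/
set spoolfile = +INBOX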

Regards
Thomas




Re: Last_login plugin and mysql

2018-09-24 Thread Thomas Hooge

Hello,


last_login plugin uses dict interface, which does not support "update",
it only supports get, set, unset and atomic inc. Set is implemented with
'INSERT INTO foo ... ON DUPLICATE UPDATE'. There is no configuration
setting to change this, as dict cannot know without performing a SELECT
that a value already exists.


Ok, now i understand the dict behaviour.
Some clarification in the last-login wiki page would be nice:
  * needed Database rights
  * Example of last_login in separate table

In my special case I wanted to use the last_login field inside
the mailbox table of postfixadmin.
From the security point of view I don't want dovecot to be able
to insert records in that table. So the only additional right
should be update on the last_login field.
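
MySQL's column-level privileges would express exactly that; a sketch where
the database, table and account names are assumptions:

GRANT SELECT ON postfixadmin.mailbox TO 'dovecot'@'localhost';
GRANT UPDATE (x_last_login) ON postfixadmin.mailbox TO 'dovecot'@'localhost';
-- but the dict's INSERT ... ON DUPLICATE KEY UPDATE additionally needs
-- INSERT, which is exactly the right I would rather not grant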

In the case of the last_login plugin there is a very high probability
that the dict key exists, because of the previous successful login which
uses the same table.
If the key does not exist there is a serious problem :-)

Perhaps there shoud be a feature of different types of dicts:
  * normal dict (as implemented)
  * immutable dict (read only)
  * dict with immutable keys (only value writeable)

Kind regards,
Thomas


Last_login plugin and mysql

2018-09-23 Thread Thomas Hooge
Hello,

I have problems configuring the last_login plugin with mysql.

I have extended the postfixadmin database with a new field,
configured the plugin as described in the wiki.

Finally I got errors: table is not writable, INSERT failed.

Why is the insert required? The plugin should only UPDATE
the column for existing rows.
Or am I missing some configuration options?
Here is my dictionary config:

map {
  pattern = shared/last-login/$user
  table = mailbox
  value_field = x_last_login
  value_type = uint

  fields {
username = $user
  }
}
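
(For illustration, the statement the sql dict issues for this map should be
roughly the following, which is why INSERT privileges come into play; the
values are placeholders:)

INSERT INTO mailbox (username, x_last_login)
  VALUES ('someone@example.com', 1537700000)
  ON DUPLICATE KEY UPDATE x_last_login = 1537700000;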


Dovecot: 2.2.27 (c0f36b0)

Kind regards,
Thomas


LMTP Log

2018-07-26 Thread Thomas Kristensen
Hey

I have a server setup where Postfix and Dovecot are on separate servers.
In the Postfix log it goes like this on incoming mail:
Jul 26 10:23:13 edimailsl2 postfix/smtpd[12812]: 41blTs18Bzz40nl: 
client=unknown[172.0.0.12]
Jul 26 10:23:21 edimailsl2 postfix/cleanup[12825]: 41blTs18Bzz40nl: 
message-id=<>
Jul 26 10:23:21 edimailsl2 postfix/qmgr[10666]: 41blTs18Bzz40nl: 
from=, size=225, nrcpt=1 (queue active)
Jul 26 10:23:21 edimailsl2 postfix/lmtp[12827]: 41blTs18Bzz40nl: 
to=, relay=172.26.248.178[172.26.248.178]:2003, delay=8.6, 
delays=8.5/0.06/0.01/0.08, dsn=2.0.0, status=sent (250 2.0.0  
oCuyJ/mEWVvFfwAAbOxY/Q Saved)
Jul 26 10:23:21 edimailsl2 postfix/qmgr[10666]: 41blTs18Bzz40nl: removed

And the LMTP on the Dovecot server is:
Jul 26 10:23:21 vanslmtpsl1 dovecot: lmtp(32709): Connect from 172.0.0.1
Jul 26 10:23:21 vanslmtpsl1 dovecot: lmtp(test@domain.dk): msgid=unspecified:
saved mail to INBOX
Jul 26 10:23:21 vanslmtpsl1 dovecot: lmtp(32709): Disconnect from 172.0.0.1: 
Successful quit

How can I make LMTP log the ID it gives postfix (oCuyJ/mEWVvFfwAAbOxY/Q)?
So I can relate the logs from the 2 servers.
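
(That token in Postfix's status line is Dovecot's session ID. A sketch that
puts the same ID into Dovecot's own log prefix, assuming this version already
supports the %{session} variable:)

# 10-logging.conf
mail_log_prefix = "%s(%u)<%{pid}><%{session}>: "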

Med venlig hilsen
Thomas Kristensen
Storhaven 12 - 7100 Vejle
Tlf: 75 72 54 99 - Fax: 75 72 65 33
E-mail: t...@multimed.dk




Replication problems

2018-07-19 Thread Thomas Kristensen
Hey

I am trying to set up a dovecot cluster with 2 servers using replication/dsync.

In front of it I have a Fortinet ADC (load balancer) and I think it is messing
up the dsync.
I see mails duplicated in the sync process.

If I disable one of the servers in the ADC, it seems to work and the sync is
working without a problem.
But if I use both servers with round robin on the ADC, I see mails duplicated.
Ex. I sent 100 mails through SMTP (Postfix) and 107 mails are in both servers,
but as said before, if I disable one of the servers in the ADC, I see the
correct amount of mails in both dovecot servers.

In the header of the duplicated mails I see the exact same postfix id and LMTP
id from dovecot.

Also I can't seem to get any log from the sync process.

Med venlig hilsen
Thomas Kristensen
Storhaven 12 - 7100 Vejle
Tlf: 75 72 54 99 - Fax: 75 72 65 33
E-mail: t...@multimed.dk




Shared mailboxes, index files and 'per-user-seen' flags

2018-06-06 Thread Thomas Robers

Hello,

I have a dovecot server version 2.3.1 under CentOS 6.9 and we're
using shared mailboxes with index files shared. With this configuration
I can see a lot of error messages like:

   Jun  6 13:20:31 mail dovecot: Error: imap(us...@tutech.de)<4513>:
   /export/home/imap/us...@tutech.de/shared/us...@tutech.de/folder/dovecot.index.pvt
   view is inconsistent

In 10-mail.conf the location setting is:

   location = maildir:%%h/Maildir:INDEXPVT=%h/shared/%%u

I thought setting the index files to "not shared" might help to
get rid of the errors, so I changed the setting to:

   location = maildir:%%h/Maildir:INDEX=%h/shared/%%u:INDEXPVT=%h/shared/%%u

like it's mentioned in the Dovecot wiki. But that doesn't work as
I expected, because the per-user-seen flags no longer work correctly,
I think. If UserA, who has UserB as a shared mailbox,
changes the seen flags of UserB's INBOX, UserB's seen flags are also
changed. The other way around, if UserB changes seen flags in his INBOX,
they are not changed in UserA's shared view. Is this the way it is
supposed to work, or do I have an error in the configuration?
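
For comparison, the shared-namespace pattern from the wiki keeps only a
per-user private index inside the accessing user's own home; a sketch where
the paths are assumptions:

namespace {
  type = shared
  prefix = shared/%%u/
  location = maildir:%%h/Maildir:INDEXPVT=~/Maildir/shared/%%u
  list = children
  subscriptions = yes
}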

Any help is appreciated.

Thanks, Thomas.

Here's my currently used configuration:

# 2.3.1 (c5a5c0c82): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.devel (61b47828)
# OS: Linux 2.6.32-696.23.1.el6.x86_64 x86_64 CentOS release 6.9 (Final) 
ext4

# Hostname: mail.tutech.de
auth_master_user_separator = *
auth_mechanisms = plain login
auth_verbose = yes
disable_plaintext_auth = no
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
imap_max_line_length = 2 M
mail_debug = yes
mail_location = maildir:/export/home/imap/%Lu/Maildir
mail_plugins = acl zlib mail_log notify
mail_prefetch_count = 1
mailbox_idle_check_interval = 10 secs
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date index ihave duplicate mime foreverypart extracttext

namespace {
  hidden = no
  ignore_on_failure = no
  inbox = no
  list = children
  location = maildir:%%h/Maildir:INDEXPVT=%h/shared/%%u
  prefix = shared/%%u/
  separator = /
  subscriptions = yes
  type = shared
}
namespace inbox {
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  type = private
}

passdb {
  args = /etc/dovecot/master-users
  driver = passwd-file
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  acl = vfile:/etc/dovecot/global-acls:cache_secs=300
  acl_shared_dict = file:/export/home/shared-db/shared-mailboxes
  mail_log_events = append delete undelete expunge copy mailbox_delete 
mailbox_rename flag_change

  mail_log_fields = uid box msgid size from flags
  mail_replica = tcp:mail2.tutech.de
  sieve = ~/.dovecot.sieve
  sieve_dir = ~/sieve
  sieve_global = /var/lib/dovecot/sieve/global/
  sieve_user_log = ~/.dovecot.sieve.log
  zlib_save = gz
  zlib_save_level = 6
}
protocols = imap pop3 lmtp sieve sieve
service aggregator {
  fifo_listener replication-notify-fifo {
mode = 0666
user = vmail
  }
  unix_listener replication-notify {
mode = 0666
user = vmail
  }
}
service auth {
  unix_listener /var/spool/postfix/private/auth {
mode = 0666
  }
  unix_listener auth-userdb {
group = vmail
mode = 0660
user = vmail
  }
}
service config {
  unix_listener config {
user = vmail
  }
}
service doveadm {
  inet_listener {
port = 12345
  }
  user = vmail
}
service imap-login {
  inet_listener imaps {
port = 993
ssl = yes
  }
  process_limit = 500
  process_min_avail = 20
}
service imap {
  executable = imap
}
service lmtp {
  inet_listener lmtp {
address = 127.0.0.1
port = 24
  }
}
service managesieve-login {
  inet_listener sieve {
port = 4190
  }
  inet_listener sieve_deprecated {
port = 2000
  }
}
service pop3-login {
  inet_listener pop3s {
port = 995
ssl = yes
  }
}
service pop3 {
  executable = pop3
}
service replicator {
  unix_listener replicator-doveadm {
mode = 0666
  }
}
ssl = required
ssl_cert = 

Re: imapsieve: script not triggered

2018-05-08 Thread Thomas Leuxner
* Andreas Krischer <a.krisc...@akbyte.com> 2018.05.07 19:58:

>   sieve_global_extensions = +vnd.dovecot.pipe

Hi,

my working configuration looks like this:

sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.execute

Regards
Thomas





Re: two unrelated issues, lastlogin, and an out of memory fatal error

2018-04-11 Thread Thomas Zajic
* David Mehler, 2018-04-11 17:23

> [...]
> The issue is the 1523459718  I was expecting something like a time
> stamp. Is this fixable? Also, can I use last_login to see on which IP
> the user last logged in from?
> [...]

This is in fact a timestamp:

[zlatko@disclosure:~]$ date -d @1523459718
Mit Apr 11 17:15:18 CEST 2018

Your output might look different depending on your locale.

HTH,
Thomas


Re: Panic: data stack: Out of memory when allocating bytes

2018-01-29 Thread Thomas Robers

Any idea what the problem could be? Is there anything more i could do
to encircle the problem? Or perhaps is the information i provided
uncomplete?

On 25.01.2018 at 16:24, Thomas Robers wrote:

Hi,

On 24.01.2018 at 23:39, Josef 'Jeff' Sipek wrote:
It looks like the binaries are stripped.  There should be a "debug" 
package
you can install with symbol information.  Then, the backtrace should 
be much

more helpful.


I installed the debug package and the backtrace now is:

--- snip ---
(gdb) bt full
#0  0x7f73f1386495 in raise () from /lib64/libc.so.6
No symbol table info available.
#1  0x7f73f1387c75 in abort () from /lib64/libc.so.6
No symbol table info available.
#2  0x7f73f17ab822 in mem_block_alloc (min_size=520) at 
data-stack.c:356

     block = 
     prev_size = 
     alloc_size = 134217728
#3  0x7f73f17abc18 in t_malloc_real (size=<optimized out>,
permanent=true) at data-stack.c:415

     block = 
     ret = 
     alloc_size = 520
#4  0x7f73f17abdeb in t_malloc0 (size=513) at data-stack.c:468
     mem = 
#5  0x7f73f17a95dd in p_malloc (buf=0x7f73ebdcef28, size=513) at 
mempool.h:99

No locals.
#6  buffer_alloc (buf=0x7f73ebdcef28, size=513) at buffer.c:34
     __func__ = "buffer_alloc"
#7  0x7f73f17a967b in buffer_create_dynamic (pool=<optimized out>, init_size=512) at buffer.c:143

     buf = 0x7f73ebdcef28
#8  0x7f73f17a803b in backtrace_get (backtrace_r=0x7ffd0ddd8358) at 
backtrace-string.c:86

     str = 
#9  0x7f73f17af1da in default_fatal_finish (type=<optimized out>, status=0) at failures.c:221

     backtrace = 
     recursed = 1
#10 0x7f73f17af766 in i_internal_fatal_handler (ctx=0x7ffd0ddd83a0,
format=<optimized out>, args=<optimized out>) at failures.c:718

     status = 0
#11 0x7f73f1723e11 in i_panic (format=0x1310 <Address 0x1310 out of bounds>) at failures.c:306
     ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0, 
timestamp_usecs = 0, log_prefix = 0x0}
     args = {{gp_offset = 16, fp_offset = 48, overflow_arg_area = 
0x7ffd0ddd84a0, reg_save_area = 0x7ffd0ddd83e0}}
#12 0x7f73f17ab83a in mem_block_alloc (min_size=512) at 
data-stack.c:360

     block = 
     prev_size = 
     alloc_size = 134217728
#13 0x7f73f17abc18 in t_malloc_real (size=<optimized out>,
permanent=false) at data-stack.c:415

     block = 
     ret = 
     alloc_size = 512
#14 0x7f73f17abd3b in t_buffer_get (size=512) at data-stack.c:543
     ret = 0x0
#15 0x7f73f17e1d60 in vstrconcat (str1=0x7f73ebdceeb0 
"/export/home/imap/b...@tutech.de/Maildir/.bla_blub.foo_bar.John Doe", 
args=0x7ffd0ddd8550,

     ret_len=0x7ffd0ddd8570) at strfuncs.c:183
     str = 0x7f73ebdceeb0 
"/export/home/imap/b...@tutech.de/Maildir/.bla_blub.foo_bar.John Doe"

     temp = 
     bufsize = 512
     i = 
     len = 
     __func__ = "vstrconcat"
#16 0x7f73f17b7d03 in i_strconcat (str1=0x7f73ebdceeb0 
"/export/home/imap/b...@tutech.de/Maildir/.bla_blub.foo_bar.John Doe") 
at imem.c:65

     temp = 
     _data_stack_cur_id = 5
     args = {{gp_offset = 8, fp_offset = 48, overflow_arg_area = 
0x7ffd0ddd8650, reg_save_area = 0x7ffd0ddd8580}}

     ret = 
     len = 
     __func__ = "i_strconcat"
#17 0x7f73f0b1fa39 in acl_backend_vfile_object_init 
(_backend=0x7f73f255c498, name=0x7f73ebdceda8 "bla_blub.foo_bar.John 
Doe") at acl-backend-vfile.c:169

     _data_stack_cur_id = 4
     backend = 0x7f73f255c498
     aclobj = 0x7f73f25b5420
     dir = 
     vname = 0x7f73ebdcee68 
"shared/b...@tutech.de/bla_blub/foo_bar/John Doe"

     error = 0x0
#18 0x7f73f0b20740 in acllist_append (backend=0x7f73f255c498) at 
acl-backend-vfile-acllist.c:187

     iter = 0x0
     ret = 
     aclobj = 0x0
     rights = {id_type = ACL_ID_USER, identifier = 0x7f73f25c7930 
"be...@tutech.de", rights = 0x7f73f25c7948, neg_rights = 0x0, global = 
false}
     acllist = {mtime = 1512404820, name = 0x7f73f26369b8 
"bla_blub.foo_bar.Some_Text"}

     name = 0x7f73ebdceda8 "bla_blub.foo_bar.John Doe"
#19 acl_backend_vfile_acllist_try_rebuild (backend=0x7f73f255c498) at 
acl-backend-vfile-acllist.c:278

     list = 
     ns = 0x7f73f2547fd0
     iter = 0x7f73f25aad48
     type = MAILBOX_LIST_PATH_TYPE_DIR
     info = 
     rootdir = 0x7f73f255bf58 
"/export/home/imap/b...@tutech.de/Maildir"

     acllist_path = 
     output = 0x7f73f27d9990
     st = {st_dev = 140135963676352, st_ino = 140135948493088, 
st_nlink = 8192, st_mode = 333072, st_uid = 0, st_gid = 4068358896, 
__pad0 = 32627, st_rdev = 8192,
   st_size = 140135948493088, st_blksize = 140135945267666, 
st_blocks = 139, st_atim = {tv_sec = 8192, tv_nsec = 8192}, st_mtim = 
{tv_sec = 0, tv_nsec = 34}, st_ctim = {tv_sec = 34,
     tv_nsec = 1401359655646

Re: Panic: data stack: Out of memory when allocating bytes

2018-01-25 Thread Thomas Robers
7811000: load17 ALLOC LOAD 
HAS_CONTENTS
0x7f73f070f000->0x7f73f071 at 0x07812000: load18a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f071->0x7f73f071 at 0x07813000: load18b ALLOC 
READONLY CODE

0x7f73f0712000->0x7f73f0712000 at 0x07813000: load19 ALLOC READONLY
0x7f73f0911000->0x7f73f0912000 at 0x07813000: load20 ALLOC LOAD 
HAS_CONTENTS
0x7f73f0912000->0x7f73f0913000 at 0x07814000: load21a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f0913000->0x7f73f0913000 at 0x07815000: load21b ALLOC 
READONLY CODE

0x7f73f0916000->0x7f73f0916000 at 0x07815000: load22 ALLOC READONLY
0x7f73f0b15000->0x7f73f0b16000 at 0x07815000: load23 ALLOC LOAD 
HAS_CONTENTS
0x7f73f0b16000->0x7f73f0b17000 at 0x07816000: load24a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f0b17000->0x7f73f0b17000 at 0x07817000: load24b ALLOC 
READONLY CODE

0x7f73f0b2a000->0x7f73f0b2a000 at 0x07817000: load25 ALLOC READONLY
0x7f73f0d2a000->0x7f73f0d2b000 at 0x07817000: load26 ALLOC LOAD 
HAS_CONTENTS
0x7f73f0d2b000->0x7f73f0d2c000 at 0x07818000: load27a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f0d2c000->0x7f73f0d2c000 at 0x07819000: load27b ALLOC 
READONLY CODE

0x7f73f0d42000->0x7f73f0d42000 at 0x07819000: load28 ALLOC READONLY
0x7f73f0f42000->0x7f73f0f43000 at 0x07819000: load29 ALLOC LOAD 
READONLY HAS_CONTENTS
0x7f73f0f43000->0x7f73f0f44000 at 0x0781a000: load30 ALLOC LOAD 
HAS_CONTENTS
0x7f73f0f44000->0x7f73f0f48000 at 0x0781b000: load31 ALLOC LOAD 
HAS_CONTENTS
0x7f73f0f48000->0x7f73f0f49000 at 0x0781f000: load32a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f0f49000->0x7f73f0f49000 at 0x0782: load32b ALLOC 
READONLY CODE

0x7f73f0f4a000->0x7f73f0f4a000 at 0x0782: load33 ALLOC READONLY
0x7f73f114a000->0x7f73f114b000 at 0x0782: load34 ALLOC LOAD 
READONLY HAS_CONTENTS
0x7f73f114b000->0x7f73f114c000 at 0x07821000: load35 ALLOC LOAD 
HAS_CONTENTS
0x7f73f114c000->0x7f73f114d000 at 0x07822000: load36a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f114d000->0x7f73f114d000 at 0x07823000: load36b ALLOC 
READONLY CODE

0x7f73f1153000->0x7f73f1153000 at 0x07823000: load37 ALLOC READONLY
0x7f73f1352000->0x7f73f1353000 at 0x07823000: load38 ALLOC LOAD 
READONLY HAS_CONTENTS
0x7f73f1353000->0x7f73f1354000 at 0x07824000: load39 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1354000->0x7f73f1355000 at 0x07825000: load40a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f1355000->0x7f73f1355000 at 0x07826000: load40b ALLOC 
READONLY CODE

0x7f73f14de000->0x7f73f14de000 at 0x07826000: load41 ALLOC READONLY
0x7f73f16de000->0x7f73f16e2000 at 0x07826000: load42 ALLOC LOAD 
READONLY HAS_CONTENTS
0x7f73f16e2000->0x7f73f16e4000 at 0x0782a000: load43 ALLOC LOAD 
HAS_CONTENTS
0x7f73f16e4000->0x7f73f16e8000 at 0x0782c000: load44 ALLOC LOAD 
HAS_CONTENTS
0x7f73f16e8000->0x7f73f16e9000 at 0x0783: load45a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f16e9000->0x7f73f16e9000 at 0x07831000: load45b ALLOC 
READONLY CODE

0x7f73f184b000->0x7f73f184b000 at 0x07831000: load46 ALLOC READONLY
0x7f73f1a4a000->0x7f73f1a52000 at 0x07831000: load47 ALLOC LOAD 
READONLY HAS_CONTENTS
0x7f73f1a52000->0x7f73f1a53000 at 0x07839000: load48 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1a53000->0x7f73f1a56000 at 0x0783a000: load49 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1a56000->0x7f73f1a57000 at 0x0783d000: load50a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f1a57000->0x7f73f1a57000 at 0x0783e000: load50b ALLOC 
READONLY CODE

0x7f73f1b92000->0x7f73f1b92000 at 0x0783e000: load51 ALLOC READONLY
0x7f73f1d92000->0x7f73f1d9e000 at 0x0783e000: load52 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1d9e000->0x7f73f1d9f000 at 0x0784a000: load53a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f1d9f000->0x7f73f1d9f000 at 0x0784b000: load53b ALLOC 
READONLY CODE
0x7f73f1f0e000->0x7f73f1f2f000 at 0x0784b000: load54 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1f2f000->0x7f73f1f73000 at 0x0786c000: load55 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1fa4000->0x7f73f1fa8000 at 0x078b: load56 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1fbc000->0x7f73f1fbe000 at 0x078b4000: load57 ALLOC LOAD 
HAS_CONTENTS

---Type <return> to continue, or q <return> to quit---
0x7f73f1fbe000->0x7f73f1fbf000 at 0x078b6000: load58 ALLOC LOAD 
READONLY HAS_CONTENTS
0x7f73f1fbf000->0x7f73f1fc at 0x078b7000: load59 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1fc->0x7f73f1fc1000 at 0x078b8000: load60 ALLOC LOAD 
HAS_CONTENTS
0x7f73f1fc1000->0x7f73f1fc2000 at 0x078b9000: load61a ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0x7f73f1fc2000->0x7f73f1fc2000 at 0x078ba000: load61b ALLOC 
READONLY CODE
0x7f73f21fb000->0x7f73f21fd000 at 0x078ba000: load62 ALLOC LOAD 
READONLY HAS_CONTENTS
0x7f73f21fd000->0x7f73f21fe000 at 0x078bc000: load63 ALLOC LOAD 
HAS_CONTENTS
0x7f73f24c2000->0x7f73f2835000 at 0x078bd000: load64 ALLOC LOAD 
HAS_CONTENTS
0x7ffd0ddc5000->0x7ffd0dddb000 at 0x07c3: load65 ALLOC LOAD 
HAS_CONTENTS
0x7ffd0dde4000->0x7ffd0dde5000 at 0x07c46000: load66 ALLOC LOAD 
READONLY CODE HAS_CONTENTS
0xff60->0xff601000 at 0x07c47000: load67 ALLOC 
LOAD READONLY CODE HAS_CONTENTS

-- snip ---


Jeff.



Thomas


Re: Panic: data stack: Out of memory when allocating bytes

2018-01-24 Thread Thomas Robers

Hi,

On 23.01.2018 at 20:07, Josef 'Jeff' Sipek wrote:

On Tue, Jan 23, 2018 at 14:03:27 -0500, Josef 'Jeff' Sipek wrote:

On Tue, Jan 23, 2018 at 18:21:38 +0100, Thomas Robers wrote:

Hello,

I'm using Dovecot 2.3 and sometimes I get this:

--- snip ---
Jan 23 14:23:13 mail dovecot: imap(b...@tutech.de)<4880>:
Panic: data stack: Out of memory when allocating 134217768 bytes


Interesting... imap is trying to allocate 128MB and failing.  A couple of
questions:

0. Does this user have any unusually large emails?


No, not usually, but there are some mails which are larger than 15 MB.
But that's not the normal size. Most e-mails are between a few kB and
5 MB.



1. Do you have any idea what the imap process was doing at the time of the
allocation failure?


Yes, perhaps. We use shared mailboxes, and at the time of failure the
imap process is reading acl files in a shared mailbox (and subfolders).
This shared mailbox has about 2800 subfolders, and the acl files are read
in about 10 seconds, and during this reading the imap process dies with
the already mentioned error. It seems it has something to do with the
shared mailbox, because since yesterday morning 5 core dumps have been
created, 4 of them by one user accessing the shared mailbox and 1
by the user who is the owner of the shared mailbox. No other mailboxes
are affected until now.


2. You snipped all the important parts of the back trace. :)  It *starts* on
the line:
#0  0x7f73f1386495 in raise () from /lib64/libc.so.6


In case you haven't used gdb before...  after starting up gdb, run "bt full"
at the gdb prompt.  That'll print out a very detailed backtrace.  (You might
want to sanity check it to make sure there aren't any user passwords in it
before posting it here...)


Yes, sorry, I'm not very familiar with using gdb, but here is the full
backtrace:

--- snip ---
(gdb) bt full
#0  0x7f73f1386495 in raise () from /lib64/libc.so.6
No symbol table info available.
#1  0x7f73f1387c75 in abort () from /lib64/libc.so.6
No symbol table info available.
#2  0x7f73f17ab822 in ?? () from /usr/lib64/dovecot/libdovecot.so.0
No symbol table info available.
#3  0x7f73f17abc18 in ?? () from /usr/lib64/dovecot/libdovecot.so.0
No symbol table info available.
#4  0x7f73f17abdeb in t_malloc0 () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#5  0x7f73f17a95dd in ?? () from /usr/lib64/dovecot/libdovecot.so.0
No symbol table info available.
#6  0x7f73f17a967b in buffer_create_dynamic () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#7  0x7f73f17a803b in backtrace_get () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#8  0x7f73f17af1da in ?? () from /usr/lib64/dovecot/libdovecot.so.0
No symbol table info available.
#9  0x7f73f17af766 in ?? () from /usr/lib64/dovecot/libdovecot.so.0
No symbol table info available.
#10 0x7f73f1723e11 in i_panic () from /usr/lib64/dovecot/libdovecot.so.0
No symbol table info available.
#11 0x7f73f17ab83a in ?? () from /usr/lib64/dovecot/libdovecot.so.0
No symbol table info available.
#12 0x7f73f17abc18 in ?? () from /usr/lib64/dovecot/libdovecot.so.0
No symbol table info available.
#13 0x7f73f17abd3b in t_buffer_get () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#14 0x7f73f17e1d60 in vstrconcat () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#15 0x7f73f17b7d03 in i_strconcat () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#16 0x7f73f0b1fa39 in ?? () from /usr/lib64/dovecot/lib01_acl_plugin.so
No symbol table info available.
#17 0x7f73f0b20740 in ?? () from /usr/lib64/dovecot/lib01_acl_plugin.so
No symbol table info available.
#18 0x7f73f0b20b7d in acl_backend_vfile_acllist_rebuild () from 
/usr/lib64/dovecot/lib01_acl_plugin.so

No symbol table info available.
#19 0x7f73f0b21569 in acl_backend_vfile_object_update () from 
/usr/lib64/dovecot/lib01_acl_plugin.so

No symbol table info available.
#20 0x7f73f0b24bd8 in ?? () from /usr/lib64/dovecot/lib01_acl_plugin.so
No symbol table info available.
#21 0x7f73f1aa1973 in mailbox_create () from 
/usr/lib64/dovecot/libdovecot-storage.so.0

No symbol table info available.
#22 0x7f73f1fd1654 in cmd_create ()
No symbol table info available.
#23 0x7f73f1fde585 in command_exec ()
No symbol table info available.
#24 0x7f73f1fdb7b0 in ?? ()
No symbol table info available.
#25 0x7f73f1fdb848 in ?? ()
No symbol table info available.
#26 0x7f73f1fdbc35 in client_handle_input ()
No symbol table info available.
#27 0x7f73f1fdc17e in client_input ()
No symbol table info available.
#28 0x7f73f17c5ec5 in io_loop_call_io () from 
/usr/lib64/dovecot/libdovecot.so.0

No symbol table info available.
#29 0x7f73f17c7dcf in io_loop_handler_run_internal () from 
/usr/lib64/dovecot/libdovecot.so.0

No s

Panic: data stack: Out of memory when allocating bytes

2018-01-23 Thread Thomas Robers

Hello,

I'm using Dovecot 2.3 and sometimes I get this:

--- snip ---
Jan 23 14:23:13 mail dovecot: 
imap(b...@tutech.de)<4880>: Panic: data stack: Out of 
memory when allocating 134217768 bytes
Jan 23 14:23:13 mail dovecot: 
imap(b...@tutech.de)<4880>: Fatal: master: 
service(imap): child 4880 killed with signal 6 (core dumped)

--- snip ---

The gdb backtrace is:

--- snip ---
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from /usr/libexec/dovecot/imap...(no debugging symbols 
found)...done.

Attaching to program: /usr/libexec/dovecot/imap, process 4880
ptrace: Kein passender Prozess gefunden.
[New Thread 4880]
Reading symbols from /usr/lib64/dovecot/libdovecot-storage.so.0...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/libdovecot-storage.so.0
Reading symbols from /usr/lib64/dovecot/libdovecot.so.0...(no debugging 
symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/libdovecot.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/librt.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols 
found)...done.

[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /usr/lib64/dovecot/lib01_acl_plugin.so...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/lib01_acl_plugin.so
Reading symbols from /usr/lib64/dovecot/lib02_imap_acl_plugin.so...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/lib02_imap_acl_plugin.so
Reading symbols from /usr/lib64/dovecot/lib15_notify_plugin.so...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/lib15_notify_plugin.so
Reading symbols from /usr/lib64/dovecot/lib20_mail_log_plugin.so...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/lib20_mail_log_plugin.so
Reading symbols from /usr/lib64/dovecot/lib20_zlib_plugin.so...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/lib20_zlib_plugin.so
Reading symbols from /lib64/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/libbz2.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/libbz2.so.1
Reading symbols from /usr/lib64/dovecot/lib30_imap_zlib_plugin.so...(no 
debugging symbols found)...done.

Loaded symbols for /usr/lib64/dovecot/lib30_imap_zlib_plugin.so
Core was generated by `dovecot/imap'.
Program terminated with signal 6, Aborted.
#0  0x7f73f1386495 in raise () from /lib64/libc.so.6
--- snip ---

I searched for that error message but only found some entries regarding
older Dovecot versions and the "mail_process_size" setting; I couldn't
find anything regarding Dovecot version 2.3. Maybe it's something that
is already fixed, but if so I can't find it. Or is this a configuration
issue or a bug?
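
For reference, the allocation in the panic (134217768 bytes, i.e. 128 MB) is in the same ballpark as the default per-service address space limit, so one thing worth ruling out first is default_vsz_limit / vsz_limit. A minimal sketch, assuming the imap service is the one being killed; the 1G value is illustrative, not taken from this thread:

service imap {
  # overrides default_vsz_limit (256M by default) for imap only
  vsz_limit = 1G
}

After a restart, "doveconf -a | grep vsz_limit" shows the limits actually in effect.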


Here my Dovecot configuration:

--- snip ---
# 2.3.0 (c8b89eb): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.0.1 (d33dca20)
# OS: Linux 2.6.32-696.3.1.el6.x86_64 x86_64 CentOS release 6.9 (Final) ext4
auth_debug = yes
auth_debug_passwords = yes
auth_master_user_separator = *
auth_mechanisms = plain login
auth_verbose = yes
disable_plaintext_auth = no
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
imap_max_line_length = 2 M
mail_debug = yes
mail_location = maildir:/export/home/imap/%Lu/Maildir
mail_plugins = acl zlib mail_log notify
mailbox_idle_check_interval = 10 secs
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date index ihave duplicate mime foreverypart extracttext

namespace {
  hidden = no
  ignore_on_failure = no
  inbox = no
  list = children
  location = maildir:%%h/Maildir:INDEXPVT=%h/shared/%%u
  prefix = shared/%%u/
  separator = /
  subscriptions = yes
  type = shared
}
namespace inbox {
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  

Panic: file ostream-zlib.c: line 36 (o_stream_zlib_close): assertion failed:

2018-01-19 Thread Thomas Robers

Hi,

after updating Dovecot to version 2.3 I get a lot of core dumps like:

Jan 18 10:08:20 mail dovecot: imap(b...@tutech.de)<18200>: Panic: file ostream-zlib.c: line 36 (o_stream_zlib_close): assertion failed: (zstream->ostream.finished || zstream->ostream.ostream.stream_errno != 0)

Jan 18 10:08:20 mail dovecot: imap(b...@tutech.de)<18200>: Error: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0xc71da) [0x7f9b510c81da] -> /usr/lib64/dovecot/libdovecot.so.0(+0xc7766) [0x7f9b510c8766] -> /usr/lib64/dovecot/libdovecot.so.0(+0x3be11) [0x7f9b5103ce11] -> /usr/lib64/dovecot/lib20_zlib_plugin.so(+0x4ffb) [0x7f9b4fc21ffb] -> /usr/lib64/dovecot/libdovecot.so.0(+0xed3a6) [0x7f9b510ee3a6] -> dovecot/imap(client_disconnect+0x4f) [0x7f9b518f3a1f] -> dovecot/imap(cmd_logout+0x5b) [0x7f9b518ede5b] -> dovecot/imap(command_exec+0x65) [0x7f9b518f7585] -> dovecot/imap(+0x1a7b0) [0x7f9b518f47b0] -> dovecot/imap(+0x1a848) [0x7f9b518f4848] -> dovecot/imap(client_handle_input+0x1d5) [0x7f9b518f4c35] -> dovecot/imap(client_input+0x6e) [0x7f9b518f517e] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x55) [0x7f9b510deec5] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xbf) [0x7f9b510e0dcf] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x55) [0x7f9b510defb5] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) [0x7f9b510df1d8] -> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f9b5105eab3] -> dovecot/imap(main+0x33e) [0x7f9b51903cee] -> /lib64/libc.so.6(__libc_start_main+0xfd) [0x7f9b50c8bd1d] -> dovecot/imap(+0xe339) [0x7f9b518e8339]

Jan 18 10:08:20 mail dovecot: imap(b...@tutech.de)<18200>: Fatal: master: service(imap): child 18200 killed with signal 6 (core dumped)



The gdb backtrace is:



gdb /usr/libexec/dovecot/imap /var/core/18200
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/libexec/dovecot/imap...done.
[New Thread 18200]
Reading symbols from /usr/lib64/dovecot/libdovecot-storage.so.0...done.
Loaded symbols for /usr/lib64/dovecot/libdovecot-storage.so.0
Reading symbols from /usr/lib64/dovecot/libdovecot.so.0...done.
Loaded symbols for /usr/lib64/dovecot/libdovecot.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
[Thread debugging using libthread_db enabled]
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /usr/lib64/dovecot/lib01_acl_plugin.so...done.
Loaded symbols for /usr/lib64/dovecot/lib01_acl_plugin.so
Reading symbols from /usr/lib64/dovecot/lib02_imap_acl_plugin.so...done.
Loaded symbols for /usr/lib64/dovecot/lib02_imap_acl_plugin.so
Reading symbols from /usr/lib64/dovecot/lib15_notify_plugin.so...done.
Loaded symbols for /usr/lib64/dovecot/lib15_notify_plugin.so
Reading symbols from /usr/lib64/dovecot/lib20_mail_log_plugin.so...done.
Loaded symbols for /usr/lib64/dovecot/lib20_mail_log_plugin.so
Reading symbols from /usr/lib64/dovecot/lib20_zlib_plugin.so...done.
Loaded symbols for /usr/lib64/dovecot/lib20_zlib_plugin.so
Reading symbols from /lib64/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libz.so.1
Reading symbols from /lib64/libbz2.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libbz2.so.1
Reading symbols from /usr/lib64/dovecot/lib30_imap_zlib_plugin.so...done.
Loaded symbols for /usr/lib64/dovecot/lib30_imap_zlib_plugin.so
Reading symbols from /lib64/libgcc_s.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libgcc_s.so.1
Core was generated by `dovecot/imap'.
Program terminated with signal 6, Aborted.
#0  0x7f9b50c9f495 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install dovecot23-2.3.0-3.gf.el6.x86_64



My Dovecot configuration is:

---snip---
# 2.3.0 (c8b89eb): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.0.1 (d33dca20)
# OS: Linux 2.6.32-696.3.1.el6.x86_64 x86_64 CentOS release 6.9 (Final) ext4
auth_debug = yes
auth_debug_passwords = yes
auth_master_user_separator = *
auth_mechanisms = plain login
auth_verbose = yes

Aw: Re: Lmtp Memory Limit

2018-01-15 Thread Thomas Manninger

Hi,

thanks for your response!

Now I have already solved the problem, but could this be a bug?

If I use the default values:

default_vsz_limit = 256M

service lmtp {
  vsz_limit = $default_vsz_limit
}

After a restart, I check the limits of the lmtp process:

Max data size 268435456 268435456 bytes

The limit is 256KB instead of 256MB?

If I change the value to 512MB, the process limit is 512KB.

When I set "2M" as the limit, the process limit is really 2MB.

I am using CentOS 7.4 and dovecot 2.2.33.2 (d6601f4ec).

Who parses the values from the unit (MB) to bytes? Dovecot or a Linux library function?

Best regards,
Thomas Manninger




Sent: Monday, 15 January 2018 at 08:16
From: "Aki Tuomi" <aki.tu...@dovecot.fi>
To: "Thomas Manninger" <dbgtmas...@gmx.at>, Dovecot@dovecot.org
Subject: Re: Lmtp Memory Limit



On 14.01.2018 09:11, Thomas Manninger wrote:
> Hi,
>
> I am using dovecot 2.2.33.2 on CentOS 7.4.
>
> Since I upgraded from CentOS 7.2 to CentOS 7.4 (without upgrading dovecot), my dovecot sieve-pipe scripts crash with out of memory:
> Out of memory (allocated 262144) (tried to allocate 8793 bytes)
>
> Are there some memory limits in Dovecot or Sieve? Can I change this value?
>
> Kernel limits:
> [root@xxx software]# ulimit -a
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 26505
> max locked memory (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority (-r) 0
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 26505
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
>
>
> Dovecot is running as user mail:
> su mail
> bash-4.2$ ulimit -a
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 26505
> max locked memory (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority (-r) 0
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 4096
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
>
> Can someone help me?
>
> Thanks for your help!
>
> Best regards
> Thomas Manninger

Check 'doveconf service/lmtp' for dovecot imposed limits. If vsz_limit
is 18446744073709551615, it means default_vsz_limit is used. Check out
'doveconf default_vsz_limit'. You can then decide if you want to set
some limit for lmtp only or increase the default limit.
 
Aki
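
A concrete way to run those checks, as a sketch (the pgrep pattern assumes the default "dovecot/lmtp" process title; run it while an lmtp process is alive):

doveconf default_vsz_limit
doveconf service/lmtp

# compare against what the kernel actually enforces on a running lmtp process
grep -E 'Max (data size|address space)' /proc/$(pgrep -f 'dovecot/lmtp' | head -n1)/limits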





Lmtp Memory Limit

2018-01-13 Thread Thomas Manninger
Hi,

I am using dovecot 2.2.33.2 on CentOS 7.4.

Since I upgraded from CentOS 7.2 to CentOS 7.4 (without upgrading dovecot),
my dovecot sieve-pipe scripts crash with out of memory:
Out of memory (allocated 262144) (tried to allocate 8793 bytes)

Are there some memory limits in Dovecot or Sieve? Can I change this value?

Kernel limits:
[root@xxx software]# ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 26505
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 26505
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
 
 
Dovecot is running as user mail:
 su mail
bash-4.2$ ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 26505
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 4096
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
 
Can someone help me?

Thanks for your help!
 
Best regards
Thomas Manninger
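
If the crashes turn out to be a Dovecot-imposed limit rather than a kernel one, the usual knob looks like this; a sketch only, the value is illustrative:

# raise the limit just for lmtp; default_vsz_limit covers the other services
service lmtp {
  vsz_limit = 512M
}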


Dovecot 2.3-rc Logging Format

2017-12-20 Thread Thomas Leuxner
Hi,

the release candidate defaults to a log format with session IDs.

mail_log_prefix = "%s(%u)<%{pid}><%{session}>: "

As the LMTP service seems to have the session ID hardcoded, the IDs get 
duplicated in the logs:

Dec 21 08:48:03 edi dovecot: lmtp(26573): Connect from local
Dec 21 08:48:03 edi dovecot: lmtp(t...@leuxner.net)[26573]: : fCVaBjNnO1rNZwAAIROLbg: sieve: msgid=<2323281.OorJHhdMHM@ylum>, time=158ms, status=stored mail into mailbox ':public/Mailing-Lists/Debian-User'
Dec 21 08:48:03 edi dovecot: lmtp(26573): Disconnect from local: Client has quit the connection (state = READY)

Regards
Thomas
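
If the duplication is bothersome in the meantime, one workaround sketch, assuming the second ID really comes from the %{session} variable in the prefix, is simply to drop that variable again:

mail_log_prefix = "%s(%u)<%{pid}>: "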




Re: v2.3.0 release candidate released

2017-12-20 Thread Thomas Leuxner
* Timo Sirainen <t...@iki.fi> 2017.12.18 16:23:

Hi,

what is the correct way of implementing carbon stats with 2.3?

/etc/dovecot/conf.d/90-stats.conf: 
old_stats_carbon_server=127.0.0.1:2003
old_stats_carbon_name=host_domain_tld
old_stats_carbon_interval=60s

/etc/dovecot/conf.d/20-imap.conf:

mail_plugins =

I changed imap_stats to imap_old_stats, however this yields the following error:

Dec 20 10:20:30 edi dovecot: imap(t...@leuxner.net)<26352><9VA9GMJgns4FkqmS>: Error: module /usr/lib/dovecot/modules/lib95_imap_old_stats_plugin.so: dlsym(imap_old_stats_plugin_init) failed: /usr/lib/dovecot/modules/lib95_imap_old_stats_plugin.so: undefined symbol: imap_old_stats_plugin_init
Dec 20 10:20:30 edi dovecot: imap(t...@leuxner.net)<26352><9VA9GMJgns4FkqmS>: Error: module /usr/lib/dovecot/modules/lib95_imap_old_stats_plugin.so: dlsym(imap_old_stats_plugin_deinit) failed: /usr/lib/dovecot/modules/lib95_imap_old_stats_plugin.so: undefined symbol: imap_old_stats_plugin_deinit
Dec 20 10:20:30 edi dovecot: imap(t...@leuxner.net): Error: Couldn't load required plugin /usr/lib/dovecot/modules/lib95_imap_old_stats_plugin.so: Module doesn't have init function

Regards
Thomas
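
For comparison, a plugin-loading sketch believed to match the 2.3 rename (the plugin names are the assumption here; the old_stats_* settings are the ones already shown above):

mail_plugins = $mail_plugins old_stats
protocol imap {
  mail_plugins = $mail_plugins imap_old_stats
}

Given the "undefined symbol: imap_old_stats_plugin_init" error, it may also simply be that the module file was renamed while its init symbols were not, which would make this a packaging or rc bug rather than a configuration problem.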




Re: Dovecot lmtp doesn't log

2017-12-01 Thread Thomas Leuxner
* Tomislav Perisic  2017.12.01 15:30:

> Does anyone have a working configuration regarding this that they don't
> have a problem with LMTP logging? If yes could you please send me your
> config and dovecot version to compare.
2.2.devel (904765b05):

# doveconf deliver_log_format syslog_facility
deliver_log_format = msgid=%m, time=%{delivery_time}ms, status=%$
syslog_facility = local1

rsyslog.conf:
local1.*      -/var/log/dovecot/dovecot.log

local1.info   -/var/log/dovecot/dovecot.info
local1.warn   -/var/log/dovecot/dovecot.warn
local1.err    /var/log/dovecot/dovecot.err
if ($syslogfacility-text=='local1') and ($programname=='dovecot') and\
($msg contains 'lmtp') and ($msg contains 'stored mail into mailbox')\
 then -/var/log/dovecot/dovecot.lmtp




Re: rawlog segfaults (error 4 in libdovecot.so.0.0.0)

2017-11-10 Thread Thomas Robers - TUTECH

On 10.11.2017 at 11:19, Aki Tuomi wrote:

rawlog files are plain text, readable files. You do not need to dump
them with doveadm.


but the file command says

> file 20171110-101523-29744.in
> 20171110-101523-29744.in: data

and less says "[...] may be a binary file.  See it anyway? [...]".
When I do doveadm dump 20171110-101523-29744.in I get

Detected file type: imapzlib
2 COMPRESS DEFLATE
3 ID ("name" "Thunderbird" "version" "52.4.0")
4 list (subscribed) "" "INBOX/*"
5 list (subscribed) "" "shared/*"
Error: zlib.read((file)): unexpected EOF at 137



can you get gdb "bt full" for the core file?


  gdb /usr/libexec/dovecot/rawlog /var/core/12802
  GNU gdb (GDB) Red Hat Enterprise Linux (7.2-92.el6)
  Copyright (C) 2010 Free Software Foundation, Inc.
  License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
  This is free software: you are free to change and redistribute it.
  There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
  and "show warranty" for details.
  This GDB was configured as "x86_64-redhat-linux-gnu".
  For bug reporting instructions, please see:
  <http://www.gnu.org/software/gdb/bugs/>...
  Reading symbols from /usr/libexec/dovecot/rawlog...(no debugging symbols found)...done.
  [New Thread 12802]
  Reading symbols from /usr/lib64/dovecot/libdovecot.so.0...done.
  Loaded symbols for /usr/lib64/dovecot/libdovecot.so.0
  Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
  Loaded symbols for /lib64/libc.so.6
  Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
  Loaded symbols for /lib64/libdl.so.2
  Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
  Loaded symbols for /lib64/librt.so.1
  Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
  Loaded symbols for /lib64/ld-linux-x86-64.so.2
  Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
  [Thread debugging using libthread_db enabled]
  Loaded symbols for /lib64/libpthread.so.0
  Core was generated by `/usr/libexec/dovecot/rawlog [rob...@tutech.de:12801 rawlog]  ���'.
  Program terminated with signal 11, Segmentation fault.
  #0  o_stream_flush (stream=0x0) at ostream.c:175
  175   ostream.c: No such file or directory.
in ostream.c
  Missing separate debuginfos, use: debuginfo-install dovecot22-2.2.33.2-1.gf.el6.x86_64
  (gdb) bt full
  #0  o_stream_flush (stream=0x0) at ostream.c:175
  _stream = 
  ret = 
  __FUNCTION__ = "o_stream_flush"
  #1  0x00401c35 in proxy_flush_timeout ()
  No symbol table info available.
  #2  0x7fbed5387e0a in io_loop_handle_timeouts_real (ioloop=0x1bef760) at ioloop.c:568
  timeout = 0x1bf0a80
  item = 0x1bf0a80
  tv = {tv_sec = 0, tv_usec = 0}
  tv_call = {tv_sec = 1510312817, tv_usec = 931447}
  t_id = 3
  #3  io_loop_handle_timeouts (ioloop=0x1bef760) at ioloop.c:581
  _data_stack_cur_id = 2
  #4  0x7fbed5389357 in io_loop_handler_run_internal (ioloop=0x1bef760) at ioloop-epoll.c:196
  ctx = 0x1befbf0
  events = 
  event = 
  list = 
  io = 
  tv = {tv_sec = 0, tv_usec = 991646}
  events_count = 
  msecs = 
  ret = 0
  i = 
  call = 
  __FUNCTION__ = "io_loop_handler_run_internal"
  #5  0x7fbed53877ac in io_loop_handler_run (ioloop=0x1bef760) at ioloop.c:649
  No locals.
  #6  0x7fbed5387968 in io_loop_run (ioloop=0x1bef760) at ioloop.c:624
  __FUNCTION__ = "io_loop_run"
  #7  0x00402896 in main ()
  No symbol table info available.

Do I need to install the debuginfo package?


Aki


Thanks
Thomas
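
For reference, the exact command gdb itself suggests in the output above:

debuginfo-install dovecot22-2.2.33.2-1.gf.el6.x86_64

With the matching debug package installed, "bt full" resolves full symbol and line information instead of bare addresses.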


On 10.11.2017 11:35, T. Robers wrote:

Hello everybody,

I tried to debug imap sessions with the rawlog feature. rawlog
creates files, but when I try to dump them doveadm tells me
[...] Error: zlib.read((file)): unexpected EOF at [...].
I looked at the syslog files and I see that rawlog gets
terminated with a segfault, e.g.:

segfault at 10 ip 7ff6da362596 sp 7fffe725a080 error 4 in
libdovecot.so.0.0.0[7ff6da2a4000+122000]

Is there a way to debug why rawlog is terminated? I haven't found
anything. I would be very thankful if somebody could give a hint.

My system is:

# 2.2.33.2 (d6601f4ec): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.21 (92477967)
# OS: Linux 2.6.32-696.3.1.el6.x86_64 x86_64 CentOS release 6.9
(Final) ext4
auth_debug = yes
auth_debug_passwords = yes
auth_master_user_separator = *
auth_mechanisms = plain login
auth_verbose = yes
disable_plaintext_auth = no
imap_max_line_length = 2 M
mail_debug = yes
mail_location = maildir:/export/home/imap/%Lu/Maildir
mail_plugins = acl zlib mail_log noti

Re: stats module

2017-11-05 Thread Thomas Leuxner
* Jeff Abrahamson <j...@p27.eu> 2017.11.03 17:45:

> > >>     -rw-r--r-- 1 root root  1856 Nov  3 16:11 91-stats

Please take note of the include scheme:
!include conf.d/*.conf

Regards
Thomas
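
Given that include pattern, a file named 91-stats is never read at all; a likely fix, with the conf.d path assumed from the listing above:

mv /etc/dovecot/conf.d/91-stats /etc/dovecot/conf.d/91-stats.conf
doveadm reload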




Re: authenticate as userA, but get authorization to user userB's account

2017-10-25 Thread Thomas Leuxner
* Heiko Schlittermann <h...@schlittermann.de> 2017.10.25 12:58:

> Question: Is there any way to split the authentication from the
> authorization within common mail clients (as Thunderbird) in combination
> with Dovecot. That is, doing something like logging in to the
> account sa...@example.com, using the credentials of the very own account
> (say h...@example.com)?

Hi,

wouldn't this be a use case for acl_groups, where a user would belong to group 
"Sales" and this "role" would gain specific access?

Regards
Thomas
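
A rough sketch of that idea (all names here are invented for illustration): the userdb returns the user's groups, and the shared mailbox's ACL grants rights to the group instead of to individual users:

# userdb returns an extra field for the user, e.g.:
#   acl_groups=sales

# dovecot-acl file of the shared Sales mailbox then contains:
group=sales lrws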




Re: Securing postfix to dovecot (SASL) auth

2017-09-27 Thread Thomas Bauer


On 27.09.2017 at 09:35, Thomas Bauer wrote:
> On the postfix server in master.cf:
> 
> submission inet n   -   -   -   -   smtpd
>...
>-o smtpd_sasl_path=inet:192.0.0.1:10001
>...
You might use

 -o smtpd_tls_security_level=encrypt

as well, to ensure Postfix makes use of TLS.






Re: Securing postfix to dovecot (SASL) auth

2017-09-27 Thread Thomas Bauer


Hi,

On 27.09.2017 at 01:07, Raymond Sellars wrote:
> Is it possible to secure the Dovecot SASL auth provider for postfix?
> 
I'm using this configuration, which you've suggested.

> Has anyone managed to implement a secure internal approach they can share? 
> I'm wondering if Postfix with Cyrus against IMAP using STARTTLS is my best 
> alternative.
> 

My config is:
On the dovecot server:

service auth {
  inet_listener{
address=192.0.0.1
port=10001
ssl=yes
}
}

On the postfix server in master.cf:

submission inet n   -   -   -   -   smtpd
   ...
   -o smtpd_sasl_path=inet:192.0.0.1:10001
   ...

And in main.cf:

### SASL via dovecot ###
smtpd_sasl_auth_enable = yes
smtpd_sasl_path = inet:192.0.0.1:10001
smtpd_sasl_type = dovecot



> Thanks
> Raymond
> 
Greetings
Thomas





Re: maildir boxes directory mode upon creation

2017-08-24 Thread Thomas Leuxner
* vadim <va...@ideco.ru> 2017.08.23 16:04:

> I am unable to enforce dovecot to create mailboxes with 660 permissions.
> Output of dovecot -n is in the attachment.
> 
> Please tell me what's the right way to control mailbox permissions ?

Hi Vadim,

inject the mails via LMTP rather than having Postfix save them directly,
and let Dovecot worry about the permissions:

https://wiki2.dovecot.org/HowTo/PostfixDovecotLMTP

Regards
Thomas
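
The usual wiring for that, roughly (the socket path is the conventional one from the HowTo; adjust to your layout):

# Postfix main.cf:
virtual_transport = lmtp:unix:private/dovecot-lmtp

# Dovecot:
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    user = postfix
    group = postfix
    mode = 0600
  }
}

Deliveries then happen as the Dovecot mail user, so directory and file modes are under Dovecot's control instead of Postfix's.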



