dovecot and solr clustering

2024-03-13 Thread Maciej Milaszewski
Hi
I have 1 director and ~8 Dovecot nodes. I'm thinking about a Solr cluster because
a single Solr server is probably not enough.
I'm considering SolrCloud mode, but I don't know whether it will work with Dovecot.

And if not SolrCloud mode, then what?

I don't have much experience with Solr clustering and I want to choose the most
scalable and redundant solution
(bare-metal servers are not a problem).
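Right now the single Solr server is wired into Dovecot roughly like this (the hostname and
collection name below are placeholders, not my real setup), and as far as I understand
fts_solr only takes a single url=, so I assume I would simply point that URL at a load
balancer or a single SolrCloud endpoint in front of the collection:

plugin {
  fts = solr
  fts_autoindex = yes
  fts_solr = url=http://solr1.example.com:8983/solr/dovecot/
}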





Re: dovecot and argonid

2023-07-13 Thread Maciej Milaszewski

Hi
The problem was trivial and unrelated to Dovecot:
the programmer had pasted an incorrect hash into the database.
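For the record, a quick way to sanity-check such a hash is the generic doveadm workflow
(the user and password below are placeholders):

# generate a hash with the same scheme that is stored in SQL
doveadm pw -s ARGON2ID -p 'secret'
# then verify the stored credentials end to end
doveadm auth test user12 'secret'

Once the database row contained a hash actually generated from the user's password,
authentication worked.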

On 13.07.2023 at 09:50, Aki Tuomi wrote:

What was it?

Aki


On 13/07/2023 10:36 EEST Maciej Milaszewski  wrote:

  
Hi

Problem solved

On 12.07.2023 at 15:20, Maciej Milaszewski wrote:

Hi
For a test I tried to use the new auth scheme (ARGON2ID) with dovecot-2.3.20.

I changed only the SQL entry:
password: {ARGON2ID}

In dovecot.conf I set:
default_vsz_limit = 2000M

and I can't authenticate.

Jul 12 15:07:43 dovecot8 dovecot: auth-worker(44928): conn
unix:auth-worker (pid=23437,uid=114): auth-worker<4772376>:
sql(user12,xxx.xxx.xxx.12,): Password mismatch
Jul 12 15:07:43 dovecot8 dovecot: imap-login: Disconnected: Connection
closed (auth failed, 1 attempts in 105 secs): user=,
method=PLAIN, rip=xxx.xxx.xxx.12, lip=xxx.xxx.xxx.12, secured,
session=

I tried:
telnet ip 144
a login username pass

Any ideas?








Re: dovecot and argonid

2023-07-13 Thread Maciej Milaszewski

Hi
Problem solved

On 12.07.2023 at 15:20, Maciej Milaszewski wrote:

Hi
For a test I tried to use the new auth scheme (ARGON2ID) with dovecot-2.3.20.

I changed only the SQL entry:
password: {ARGON2ID}

In dovecot.conf I set:
default_vsz_limit = 2000M

and I can't authenticate.

Jul 12 15:07:43 dovecot8 dovecot: auth-worker(44928): conn 
unix:auth-worker (pid=23437,uid=114): auth-worker<4772376>: 
sql(user12,xxx.xxx.xxx.12,): Password mismatch
Jul 12 15:07:43 dovecot8 dovecot: imap-login: Disconnected: Connection 
closed (auth failed, 1 attempts in 105 secs): user=, 
method=PLAIN, rip=xxx.xxx.xxx.12, lip=xxx.xxx.xxx.12, secured, 
session=


I tried:
telnet ip 144
a login username pass

Any ideas?






dovecot and argonid

2023-07-12 Thread Maciej Milaszewski

Hi
For a test I tried to use the new auth scheme (ARGON2ID) with dovecot-2.3.20.

I changed only the SQL entry:
password: {ARGON2ID}

In dovecot.conf I set:
default_vsz_limit = 2000M

and I can't authenticate.

Jul 12 15:07:43 dovecot8 dovecot: auth-worker(44928): conn 
unix:auth-worker (pid=23437,uid=114): auth-worker<4772376>: 
sql(user12,xxx.xxx.xxx.12,): Password mismatch
Jul 12 15:07:43 dovecot8 dovecot: imap-login: Disconnected: Connection 
closed (auth failed, 1 attempts in 105 secs): user=, 
method=PLAIN, rip=xxx.xxx.xxx.12, lip=xxx.xxx.xxx.12, secured, 
session=


I tried:
telnet ip 144
a login username pass

Any ideas?




Re: Outlook Duplicate emails

2023-05-31 Thread Maciej Milaszewski

Hi
This is probably a problem with the Outlook clients and their configuration, or with
third-party add-ins/utilities.


Check whether the problem also occurs in Thunderbird;
it most likely does not exist in Thunderbird.

On 29.05.2023 at 10:43, Nick Lekkas wrote:


Hello there to all!

I have 2 Postfix/Dovecot mail servers. Those mail servers are kept in sync
using the replicator. The Dovecot version I use is 2.3.16 on
Rocky Linux 9. The configuration came from CentOS 7 mail servers with the
replicator enabled, running 2.2.36. The config is the same on both
pairs of servers.


I have an issue with POP3 and Outlook receiving duplicate emails in
many cases and for some accounts. Has anyone faced a similar issue?


Any help appreciated.

Best Regards

Nick








Re: Error: Broken file dovecot-uidlist

2023-02-16 Thread Maciej Milaszewski

Hi
I had the same problem

fstab:
Ip:/vmail /vmail nfs 
rw,sec=sys,noexec,noatime,tcp,soft,rsize=32768,wsize=32768,intr,nordirplus,nfsvers=3,actimeo=120


/vmail on /vmail type nfs 
(rw,noexec,noatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=120,acregmax=120,acdirmin=120,acdirmax=120,hard,nocto,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=xxx.xxx.xxx.xxx,mountvers=3,mountport=635,mountproto=tcp,local_lock=none,addr=xxx.xxx.xxx.xxx)


This happened on Debian 9, Debian 10 and Ubuntu...
My workaround was to downgrade the kernel on all nodes to 3.x; on any
4.x or 5.x kernel I get the broken file.


Linux dovecot16 Debian 3.x.68-2 (kernel-care)
I use a NetApp - I don't administer it and have no access to it, so only
the kernel downgrade solved the problem.



On 15.02.2023 at 18:17, Sohin Vyacheslav wrote:



On 15.02.2023 at 16:58, Maciej Milaszewski wrote:


Can you send me info about the mount options (fstab), and your kernel and
NFS versions on both the client and the storage?


Hi Maciej,

Client=>
fstab:
IP-address:/data  /data   nfs 
auto,nofail,noatime,intr,tcp,nordirplus,actimeo=1800    0   0


# uname -r
4.15.0-204-generic

nfs-common-1:1.3.4-2.1ubuntu5.5

currently mounted as NFSv4.2
# mount | grep nfs
IP-address-2:/data on /data type nfs4 
(rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,acregmin=1800,acregmax=1800,acdirmin=1800,acdirmax=1800,hard,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=IP-address-1,local_lock=none,addr=IP-address-2)



NFS storage =>

# uname -r
4.15.0-158-generic

nfs-common-1:1.3.4-2.1ubuntu5.5
nfs-kernel-server-1:1.3.4-2.1ubuntu5.5




Re: Error: Broken file dovecot-uidlist

2023-02-15 Thread Maciej Milaszewski

Hi
Can you send me info about the mount options (fstab), and your kernel and
NFS versions on both the client and the storage?


On 15.02.2023 at 12:32, Sohin Vyacheslav wrote:


Hi All,

In mail.log there are error messages "Error: Broken file 
/data/mail/vhosts/domain.com/u...@domain.com/Maildir/dovecot-uidlist 
line XX: Invalid data:" for some email accounts.


For example,
dovecot-uidlist line 21197: Invalid data:

When I open line 21197 in editor:
# vim +21197 dovecot-uidlist
1750207 :1676454069.M698819P4439.mail-b,S=963716,W=976359

and then check this file size
# ls -l 
/data/mail/vhosts/domain.com/u...@domain.com/Maildir/cur/1676454069.M698819P4439.mail-b,S=963716,W=976359:2,S

-rw--- 1 vmail vmail 963716 Feb 15 10:41


I see that the size is the same: 963716 bytes. So what exactly does the
error message "dovecot-uidlist line 21197: Invalid data:" refer to?








Re: The end of Dovecot Director?

2022-10-26 Thread Maciej Milaszewski

Hi
What will the planned replacement look like for:

doveadm director status
move / kick / flush
add / up / del

in 3.0?

Will there be a fork of Dovecot?










strange cache behavior

2021-10-07 Thread Maciej Milaszewski
Hi
I migrated users from LDAP to MySQL (director + nodes, 2.2.36.4).

Everything works fine, but when I turn on auth debugging I see cache differences.

Look, on dovecot6 (mysql) I run:
 
#doveadm user m...@domain.ltd
field    value
user    domain.ltd_mx8
uid    300
gid    300
home    /vmail2/na/domain.ltd_mx8
mail   
maildir:~/Maildir:INDEX=/var/dovecot_indexes/vmail2/na/domain.ltd_mx8
quota_rule    *:bytes=10485760

The second time (served from the cache) I get:

#doveadm user m...@domain.ltd
field    value
uid    300
gid    300
home    /vmail2/na/domain.ltd_mx8
mail   
maildir:~/Maildir:INDEX=/var/dovecot_indexes/vmail2/na/domain.ltd_mx8
quota_rule    *:bytes=10485760

The cache does not return the "user    domain.ltd_mx8" value.


With LDAP I get the same result every time.

I tried debugging and found the difference in the second and third log lines:

mysql:
Oct  7 16:42:04 dovecot6 dovecot: auth: Debug: master in:
USER#0111#011...@domain.ltd#011service=doveadm#011debug
Oct  7 16:42:04 dovecot6 dovecot: auth: Debug:
sql(m...@natan-test.iq.pl): userdb cache hit:
home=/vmail2/na/domain.ltd_mx8#011quota_rule=*:bytes=10485760
Oct  7 16:42:04 dovecot6 dovecot: auth: Debug: userdb out:
USER#0111#011...@domain.ltd#011home=/vmail2/na/natan-test.iq.pl_mx8_natan-test#011quota_rule=*:bytes=10485760

in ldap:
Oct  7 16:45:31 dovecot5 dovecot: auth: Debug: master in:
USER#0111#011...@domain.ltd#011service=doveadm#011debug
Oct  7 16:45:31 dovecot5 dovecot: auth: Debug:
ldap(m...@natan-test.iq.pl): userdb cache hit:
home=/vmail2/na/domain.ltd_mx8#011quota_rule=*:bytes=10485760#011user=domain.ltd_mx8
Oct  7 16:45:31 dovecot5 dovecot: auth: Debug:
ldap(m...@natan-test.iq.pl): username changed m...@domain.ltd-> domain.ltd_mx8
Oct  7 16:45:31 dovecot5 dovecot: auth: Debug: userdb out:
USER#0111#011domain.ltd_mx8#011home=/vmail2/na/natan-test.iq.pl_mx8_natan-test#011quota_rule=*:bytes=10485760


First query, without the cache:

ldap
Oct  7 16:45:21 dovecot5 dovecot: auth: Debug: userdb out:
USER#0111#011domain.ltd_mx8#011home=/vmail2/na/domain.ltd_mx8#011quota_rule=*:bytes=10485760

mysql
Oct  7 16:41:34 dovecot6 dovecot: auth: Debug: userdb out:
USER#0111#011domain.ltd_mx8#011home=/vmail2/na/domain.ltd_mx8#011quota_rule=*:bytes=10485760

And I don't know where the problem is; maybe in the cache settings?

All Dovecot nodes have the same config (apart from the auth query settings).






Re: dovecot ldap and mysql

2021-09-30 Thread Maciej Milaszewski
Hi
Sorry for the last e-mail :) Problem solved; the problem was in the MySQL query and the
iterate_query.
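For the archive, the gist of the fix (a simplified sketch, not the exact production query)
is that the query also has to return the canonical user name so the proxy rewrites the
login, the same thing the LDAP pass_attrs does with uid=user / uid=userdb_user:

password_query = SELECT a.user_name AS user, a.user_password_encoded AS password, \
  "y" AS proxy FROM account a WHERE a.user_name = "%u" OR a.mail = "%u"

Returning a user field from password_query makes Dovecot replace the login name, which is
what produces the user=uid_..._domain.ltd extra field on the LDAP-backed director.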

On 30.09.2021 at 15:10, Maciej Milaszewski wrote:
> Hi
> In ldap:
>
> 
> user_attrs = uid=user, mailMessageStore=home,
> mailQuotaSize=quota_rule=*:bytes=%$
> user_filter =
> (&(&(!(accountStatus=deleted))(objectClass=MailUser))(|(mail=%u)(uid=%u)(mailAlternateAddress=%u)))
> pass_attrs =
> uid=user,userPassword=password,=proxy=y,uid=userdb_user,mailQuotaSize=userdb_quota_rule=*:bytes=%$,mailMessageStore=userdb_home
>
> pass_filter =
> (&(objectClass=MailUser)(|(mail=%u)(uid=%u)(mailAlternateAddress=%u)))
>
> iterate_attrs = uid=user
> iterate_filter = (&(&(objectClass=mailUser)(!(accountStatus=deleted
> 
>
> in mysql is hard  (please do not judge)
>
> ...
> user_query = select a.user_name user, a.mail_message_store home,
> CONCAT('*:bytes=', mail_quota_size) as quota_rule, a.account_status from
> account a , account_mail_alternate_address amaa where amaa.account_id =
> a.id and ((a.account_status is null) or (a.account_status != "deleted"))
> and ( a.user_name = "%u" or a.mail = "%u" or amaa.mail_alternate_address
> = "%u" ) UNION select a.user_name user, a.mail_message_store home,
> CONCAT('*:bytes=', mail_quota_size*1048576) as quota_rule,
> a.account_status from account a where ((a.account_status is null) or
> (a.account_status != "deleted")) and ( a.user_name = "%u" or a.mail = "%u");
>
> password_query = select a.user_password_encoded password, "y" AS proxy
> from account a , account_mail_alternate_address amaa where
> amaa.account_id = a.id and ((a.account_status is null) or
> (a.account_status != "deleted")) and ( a.user_name = "%u" or a.mail =
> "%u" or amaa.mail_alternate_address = "%u" ) UNION select
> a.user_password_encoded password,"y" AS proxy from account a where
> ((a.account_status is null) or (a.account_status != "deleted")) and (
> a.user_name = "%u" or a.mail = "%u" );
> ...
>
> W dniu 30.09.2021 o 14:44, Aki Tuomi pisze:
>>> On 30/09/2021 15:01 Maciej Milaszewski  wrote:
>>>
>>>  
>>> Hi
>>> I have dovecot director + nodes and migrate users from ldap to mysql.
>>> I allow to auth via e-mail and alias and uid - thats i need
>>>
>>> In director ( where users is in ldap ) all works fine - user is proxy to
>>> UID like:
>>>
>>> ...
>>> doveadm auth test o...@domain.ltd passs
>>> passdb: o...@domain.ltd auth succeeded
>>> extra fields:
>>>   user=uid_122_ola_domain.ltd
>>>   proxy
>>>   original_user=o...@domain.ltd
>>> ...
>>>
>>> In lab director2 ( where users is in mysql) not:
>>> ...
>>> doveadm auth test o...@domain.ltd passs
>>> passdb: o...@domain.ltd auth succeeded
>>> extra fields:
>>>   user=o...@domain.ltd
>>>   proxy
>>> ...
>>>
>>> and I dont known where is a problem in mysql. Mayby subquery/other ?
>> Can you include the relevant bits of the dovecot ldap and mysql config files,
>> please?
>>
>> Aki
>




Re: dovecot ldap and mysql

2021-09-30 Thread Maciej Milaszewski
Hi
In ldap:


user_attrs = uid=user, mailMessageStore=home,
mailQuotaSize=quota_rule=*:bytes=%$
user_filter =
(&(&(!(accountStatus=deleted))(objectClass=MailUser))(|(mail=%u)(uid=%u)(mailAlternateAddress=%u)))
pass_attrs =
uid=user,userPassword=password,=proxy=y,uid=userdb_user,mailQuotaSize=userdb_quota_rule=*:bytes=%$,mailMessageStore=userdb_home

pass_filter =
(&(objectClass=MailUser)(|(mail=%u)(uid=%u)(mailAlternateAddress=%u)))

iterate_attrs = uid=user
iterate_filter = (&(&(objectClass=mailUser)(!(accountStatus=deleted


In MySQL it is ugly (please do not judge):

...
user_query = select a.user_name user, a.mail_message_store home,
CONCAT('*:bytes=', mail_quota_size) as quota_rule, a.account_status from
account a , account_mail_alternate_address amaa where amaa.account_id =
a.id and ((a.account_status is null) or (a.account_status != "deleted"))
and ( a.user_name = "%u" or a.mail = "%u" or amaa.mail_alternate_address
= "%u" ) UNION select a.user_name user, a.mail_message_store home,
CONCAT('*:bytes=', mail_quota_size*1048576) as quota_rule,
a.account_status from account a where ((a.account_status is null) or
(a.account_status != "deleted")) and ( a.user_name = "%u" or a.mail = "%u");

password_query = select a.user_password_encoded password, "y" AS proxy
from account a , account_mail_alternate_address amaa where
amaa.account_id = a.id and ((a.account_status is null) or
(a.account_status != "deleted")) and ( a.user_name = "%u" or a.mail =
"%u" or amaa.mail_alternate_address = "%u" ) UNION select
a.user_password_encoded password,"y" AS proxy from account a where
((a.account_status is null) or (a.account_status != "deleted")) and (
a.user_name = "%u" or a.mail = "%u" );
...

On 30.09.2021 at 14:44, Aki Tuomi wrote:
>> On 30/09/2021 15:01 Maciej Milaszewski  wrote:
>>
>>  
>> Hi
>> I have dovecot director + nodes and migrate users from ldap to mysql.
>> I allow to auth via e-mail and alias and uid - thats i need
>>
>> In director ( where users is in ldap ) all works fine - user is proxy to
>> UID like:
>>
>> ...
>> doveadm auth test o...@domain.ltd passs
>> passdb: o...@domain.ltd auth succeeded
>> extra fields:
>>   user=uid_122_ola_domain.ltd
>>   proxy
>>   original_user=o...@domain.ltd
>> ...
>>
>> In lab director2 ( where users is in mysql) not:
>> ...
>> doveadm auth test o...@domain.ltd passs
>> passdb: o...@domain.ltd auth succeeded
>> extra fields:
>>   user=o...@domain.ltd
>>   proxy
>> ...
>>
>> and I dont known where is a problem in mysql. Mayby subquery/other ?
> Can you include the relevant bits of the dovecot ldap and mysql config files,
> please?
>
> Aki




dovecot ldap and mysql

2021-09-30 Thread Maciej Milaszewski
Hi
I have a Dovecot director + nodes and I am migrating users from LDAP to MySQL.
I allow auth via e-mail, alias and uid - that's what I need.

On the director (where users are in LDAP) everything works fine - the user is proxied to
the UID, like:

...
doveadm auth test o...@domain.ltd passs
passdb: o...@domain.ltd auth succeeded
extra fields:
  user=uid_122_ola_domain.ltd
  proxy
  original_user=o...@domain.ltd
...

On the lab director2 (where users are in MySQL) it is not:
...
doveadm auth test o...@domain.ltd passs
passdb: o...@domain.ltd auth succeeded
extra fields:
  user=o...@domain.ltd
  proxy
...

and I don't know where the problem is in MySQL. Maybe a subquery or something else?





dovecot and irc

2021-09-30 Thread Maciej Milaszewski
Hi
Which is the official IRC server? Freenode, Libera, or another one?





Re: dovecot and auth cache

2021-07-23 Thread Maciej Milaszewski
Hi
Sorry for the noise - problem solved: the drain option in HAProxy does not
terminate existing connections.
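In other words: putting the Galera backend into drain in HAProxy only blocks new
connections, while the already-established connections from the auth workers stay up, so
the successful logins were served over a live connection rather than from the cache. To
force the failure properly the sessions have to be killed as well, e.g. via the runtime
API (backend/server names and the socket path below are placeholders from my lab):

echo "set server galera/node1 state drain" | socat stdio /run/haproxy/admin.sock
echo "shutdown sessions server galera/node1" | socat stdio /run/haproxy/admin.sock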

> Hi
> In my lab I have tested auth cache in dovecot like:
>
> 1) test auth
> doveadm auth login a...@domain.ltd pass
> passdb: a...@domain.ltd auth succeeded
>  
> - user a...@domain.ltd is in mysql (klater galera)
>
> 2)I stop the galley cluster
>
> 3)test auth - via cache
> doveadm auth login a...@domain.ltd pass
> passdb: a...@domain.ltd auth succeeded
>
> 4)flush cache
> root@msmtp3:~# doveadm auth cache flush
> 2 cache entries flushed
>
> root@msmtp3:~# doveadm auth cache flush
> 0 cache entries flushed
>
> - cache is emty
>
> 5)test auth
> doveadm auth login a...@domain.ltd pass
> passdb: a...@domain.ltd auth succeeded
>
> Why 5 step give - successed ?
>
> doveconf -n
> # 2.3.4.1 (f79e8e7e4): /etc/dovecot/dovecot.conf
> # Pigeonhole version 0.5.4 ()
> # OS: Linux 4.19.0-12-amd64 x86_64 Debian 10.9
> ...
> auth_cache_negative_ttl = 5 mins
> auth_cache_ttl = 5 mins
> ...
>
>
> If I restart dovecot works correctly  - I mean:
> doveadm auth login a...@domain.ltd pass
> not auth
>
> I dont have any idea
>






dovecot and auth cache

2021-07-22 Thread Maciej Milaszewski
Hi
In my lab I tested the auth cache in Dovecot like this:

1) test auth
doveadm auth login a...@domain.ltd pass
passdb: a...@domain.ltd auth succeeded
 
- user a...@domain.ltd is in MySQL (Galera cluster)

2) I stop the Galera cluster

3) test auth - via the cache
doveadm auth login a...@domain.ltd pass
passdb: a...@domain.ltd auth succeeded

4)flush cache
root@msmtp3:~# doveadm auth cache flush
2 cache entries flushed

root@msmtp3:~# doveadm auth cache flush
0 cache entries flushed

- the cache is empty

5)test auth
doveadm auth login a...@domain.ltd pass
passdb: a...@domain.ltd auth succeeded

Why does step 5 succeed?

doveconf -n
# 2.3.4.1 (f79e8e7e4): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.4 ()
# OS: Linux 4.19.0-12-amd64 x86_64 Debian 10.9
...
auth_cache_negative_ttl = 5 mins
auth_cache_ttl = 5 mins
...


If I restart Dovecot it works correctly - I mean:
doveadm auth login a...@domain.ltd pass
does not authenticate.

I don't have any idea.





Re: dovecot and broken uidlist

2021-01-29 Thread Maciej Milaszewski
Hi
Probably the NetApp FAS8200 does not support NFS 4.2, and NFS 4.1 does not support auth
via exports (only Kerberos).


On 28.01.2021 19:45, Tom Talpey wrote:
> On 1/28/2021 11:14 AM, Maciej Milaszewski wrote:
>> Hi
>> For a test I created a new director with 2.3.13 and a node with 2.3.13; I mounted
>> the storage via NFS with the same options:
>>
>> rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
>>
>>
>> I created a simple MTA and pointed the MX at it, the same as for director1.
>>
>> With kernel 5.8.0-0.bpo.2-amd64 the problem exists
>> With kernel 3.x it does not
>>
>> When the problem occurs, Maildir/dovecot-uidlist looks like this:
>>
>> 3 V1424432537 N16208 G92c4ee0d93aa1260c62909c4ba82
>> 16144 :1611352119.M505834P25597.dovecot2,S=18282,W=18620
>> 16145 :1611352123.M269121P19872.dovecot2,S=18266,W=18604
>> 16146 :1611762747.M502108P9747.dovecot7,S=6595,W=6726
>> 16150 :1611835594.M756718P9986.dovecot7,S=62439,W=63817
>> 16163 :1611828091.M231204P5202.dovecot7,S=19348,W=19855
>> 16208 :1611849420.M137743P24417.dovecot7,S=12064,W=12296
>> 16209 :1611828091.M144806P5202.dovecot7,S=2806,W=2865
>> 16210 :1611837438.M678475P12027.dovecot7,S=17713,W=18072
>> 16211 :1611757939.M493064P7136.dovecot7,S=30783,W=31520
>> ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@$
>>
>
> A block of zeros in a file opened for append is a classic NFSv3 race.
> Your mount options allow 120 seconds of attribute caching (actimeo=120).
> One of these attributes is the file size, which is also the end of file
> marker for append. If the file is changed by another client, the append
> mode writes will land on the wrong offset, possibly overwriting or
> punching holes.
>
> If you use the "noac" mount option, this will reduce the window of
> vulnerability, but it will not eliminate it. It's also possible there
> is some issue in attribute caching in the 5.8 kernel. Do you have
> other options between 3.16 and 5.8?
>
> The best fix is to use a more robust NFS dialect such as v4.2.
>
> Tom.
>
>> When the problem does not occur:
>>
>> 16144 :1611352119.M505834P25597.dovecot2,S=18282,W=18620
>> 16145 :1611352123.M269121P19872.dovecot2,S=18266,W=18604
>> 16146 :1611762747.M502108P9747.dovecot7,S=6595,W=6726
>> 16150 :1611835594.M756718P9986.dovecot7,S=62439,W=63817
>> 16163 :1611828091.M231204P5202.dovecot7,S=19348,W=19855
>> 16208 :1611849420.M137743P24417.dovecot7,S=12064,W=12296
>> 16209 :1611828091.M144806P5202.dovecot7,S=2806,W=2865
>> 16210 :1611837438.M678475P12027.dovecot7,S=17713,W=18072
>> 16211 :1611757939.M493064P7136.dovecot7,S=30783,W=31520
>>
>> On 23.01.2021 00:59, Alessio Cecchi wrote:
>>>
>>> Hi,
>>>
>>> after some tests I notice a difference in dovecot-uidlist line
>>> format when message is read from "old kernel" and "new kernel":
>>>
>>> 81184 G1611334252.M95445P32580.mail05.myserver.com
>>> :1611334252.M95445P32580.mail05.myserver.com,S=38689,W=39290
>>> 81185 G1611336004.M47750P3921.mail01.myserver.com
>>> :1611336004.M47750P3921.mail01.myserver.com,S=15917,W=16212
>>> 81186 G1611338535.M542784P10852.mail03.myserver.com
>>> :1611338535.M542784P10852.mail03.myserver.com,S=12651,W=12855
>>> 81187 G1611341375.M164702P13505.mail01.myserver.com
>>> :1611341375.M164702P13505.mail01.myserver.com,S=8795,W=8964
>>> 81189 G1611354389.M984432P14754.mail06.myserver.com
>>> :1611354389.M984432P14754.mail06.myserver.com,S=3038,W=3096
>>> 81191 :1611355746.M365669P10402.mail03.myserver.com,S=3049,W=3107
>>> 81193 :1611356442.M611719P20778.mail01.myserver.com,S=1203,W=1230
>>> 81194 G1611356752.M573233P27082.mail01.myserver.com
>>> :1611356752.M573233P27082.mail01.myserver.com,S=1210,W=1238
>>> 81195 G1611356991.M905681P30704.mail01.myserver.com
>>> :1611356991.M905681P30704.mail01.myserver.com,S=1220,W=1249
>>> 81197 :1611357210.M42178P1962.mail01.myserver.com,S=1220,W=1250
>>> 81199 :1611357560.M26894P7157.mail01.myserver.com,S=1233,W=1264
>>>
>>> With "old kernel" (where all works fine) UID number are incremental
>>> and in the line there is one more field that start with "G1611...".
>>>
>>> With "new kernel" (where error comes) UID number skip always a
>>> number and the field "G1611..." is missing.
>
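For reference, Tom's suggestion would only be an fstab change, something like the line
below (server name and export are placeholders), although as I wrote above our FAS8200
probably does not support 4.2 at all, and 4.1 would require Kerberos auth instead of
export-based auth:

storage:/vmail  /vmail  nfs  rw,sec=sys,noexec,noatime,proto=tcp,hard,rsize=65536,wsize=65536,nfsvers=4.2,actimeo=120  0  0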

Re: dovecot and broken uidlist

2021-01-28 Thread Maciej Milaszewski
Hi
For a test I created a new director with 2.3.13 and a node with 2.3.13; I mounted
the storage via NFS with the same options:

rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120

I created a simple MTA and pointed the MX at it, the same as for director1.

With kernel 5.8.0-0.bpo.2-amd64 the problem exists
With kernel 3.x it does not

When the problem occurs, Maildir/dovecot-uidlist looks like this:

3 V1424432537 N16208 G92c4ee0d93aa1260c62909c4ba82
16144 :1611352119.M505834P25597.dovecot2,S=18282,W=18620
16145 :1611352123.M269121P19872.dovecot2,S=18266,W=18604
16146 :1611762747.M502108P9747.dovecot7,S=6595,W=6726
16150 :1611835594.M756718P9986.dovecot7,S=62439,W=63817
16163 :1611828091.M231204P5202.dovecot7,S=19348,W=19855
16208 :1611849420.M137743P24417.dovecot7,S=12064,W=12296
16209 :1611828091.M144806P5202.dovecot7,S=2806,W=2865
16210 :1611837438.M678475P12027.dovecot7,S=17713,W=18072
16211 :1611757939.M493064P7136.dovecot7,S=30783,W=31520
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@$

When the problem does not occur:

16144 :1611352119.M505834P25597.dovecot2,S=18282,W=18620
16145 :1611352123.M269121P19872.dovecot2,S=18266,W=18604
16146 :1611762747.M502108P9747.dovecot7,S=6595,W=6726
16150 :1611835594.M756718P9986.dovecot7,S=62439,W=63817
16163 :1611828091.M231204P5202.dovecot7,S=19348,W=19855
16208 :1611849420.M137743P24417.dovecot7,S=12064,W=12296
16209 :1611828091.M144806P5202.dovecot7,S=2806,W=2865
16210 :1611837438.M678475P12027.dovecot7,S=17713,W=18072
16211 :1611757939.M493064P7136.dovecot7,S=30783,W=31520

On 23.01.2021 00:59, Alessio Cecchi wrote:
>
> Hi,
>
> after some tests I notice a difference in dovecot-uidlist line format
> when message is read from "old kernel" and "new kernel":
>
> 81184 G1611334252.M95445P32580.mail05.myserver.com
> :1611334252.M95445P32580.mail05.myserver.com,S=38689,W=39290
> 81185 G1611336004.M47750P3921.mail01.myserver.com
> :1611336004.M47750P3921.mail01.myserver.com,S=15917,W=16212
> 81186 G1611338535.M542784P10852.mail03.myserver.com
> :1611338535.M542784P10852.mail03.myserver.com,S=12651,W=12855
> 81187 G1611341375.M164702P13505.mail01.myserver.com
> :1611341375.M164702P13505.mail01.myserver.com,S=8795,W=8964
> 81189 G1611354389.M984432P14754.mail06.myserver.com
> :1611354389.M984432P14754.mail06.myserver.com,S=3038,W=3096
> 81191 :1611355746.M365669P10402.mail03.myserver.com,S=3049,W=3107
> 81193 :1611356442.M611719P20778.mail01.myserver.com,S=1203,W=1230
> 81194 G1611356752.M573233P27082.mail01.myserver.com
> :1611356752.M573233P27082.mail01.myserver.com,S=1210,W=1238
> 81195 G1611356991.M905681P30704.mail01.myserver.com
> :1611356991.M905681P30704.mail01.myserver.com,S=1220,W=1249
> 81197 :1611357210.M42178P1962.mail01.myserver.com,S=1220,W=1250
> 81199 :1611357560.M26894P7157.mail01.myserver.com,S=1233,W=1264
>
> With "old kernel" (where all works fine) UID number are incremental
> and in the line there is one more field that start with "G1611...".
>
> With "new kernel" (where error comes) UID number skip always a number
> and the field "G1611..." is missing.
>
> Maciej, do you also have this behavior?
>
> Why does Dovecot create a different uidlist line format with a different kernel?
>
> Il 22/01/21 17:50, Maciej Milaszewski ha scritto:
>> Hi
>> I using pop/imap and LMTP via director and user go back in dovecot node
>>
>> Current: 10.0.100.22 (expires 2021-01-22 17:42:44)
>> Hashed: 10.0.100.22
>> Initial config: 10.0.100.22
>>
>> I have 6 dovecot backands and index via local ssd disk
>> mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h
>>
>> user never log in two different nodes in this same time
>>
>> I update debian from 8 to 9 (and to 10) and tested via kerlnel 4.x and
>> 5.x and problem exists
>> If I change kernel to 3.16.x problem not exists
>> I tested like:
>>
>> problem exists:
>> dovecot1-5 with 4.x
>> and
>> dovecot1-4 - with 3.19.x
>> dovecot5 - with 4.x
>> and
>> dovecot1-5 - with 5.x
>> and
>> dovecot1-4 - with 4.x
>> dovecot5 - with 5.x
>>
>> not exists:
>> dovecot1-5 - with 3.19.x
>>
>> not exists:
>> dovecot1-5 - with 3.19.x+kernel-care
>>
>> I use NetAPP with mount options:
>> rw,sec=sys,noexec,noatime,tcp,soft,rsize=32768,wsize=32768,intr,nordirplus,nfsvers=3,actimeo=120
>> I try with nocto and without nocto
>>
>> big guys from NetApp says "nfs 4.x need auth via kerberos "
>>
>>
>>
>>

Re: dovecot and broken uidlist

2021-01-22 Thread Maciej Milaszewski
Hi
I use POP/IMAP and LMTP via the director, and each user always goes back to the same Dovecot node.

Current: 10.0.100.22 (expires 2021-01-22 17:42:44)
Hashed: 10.0.100.22
Initial config: 10.0.100.22

I have 6 Dovecot backends and keep the indexes on local SSD disks:
mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h

A user never logs in on two different nodes at the same time.

I upgraded Debian from 8 to 9 (and to 10) and tested with kernels 4.x and
5.x, and the problem exists.
If I change the kernel to 3.16.x the problem does not exist.
I tested like this:

the problem exists:
dovecot1-5 with 4.x
and
dovecot1-4 - with 3.19.x
dovecot5 - with 4.x
and
dovecot1-5 - with 5.x
and
dovecot1-4 - with 4.x
dovecot5 - with 5.x

the problem does not exist:
dovecot1-5 - with 3.19.x

the problem does not exist:
dovecot1-5 - with 3.19.x+kernel-care

I use a NetApp with these mount options:
rw,sec=sys,noexec,noatime,tcp,soft,rsize=32768,wsize=32768,intr,nordirplus,nfsvers=3,actimeo=120
I tried with and without nocto.

The big guys from NetApp say "NFS 4.x needs auth via Kerberos".



On 22.01.2021 16:08, Alessio Cecchi wrote:
>
> Hi Maciej,
>
> I'm using LDA for delivering email into the mailbox (Maildir) and I
> think (hope) that switching to LMTP via the director will fix my problem,
> but I don't know why it works with the old kernel and not with recent ones.
>
> Are you using POP/IMAP and LMTP via director so any update to dovecot
> indexes is done from the same server?
>
> Il 19/01/21 16:22, Maciej Milaszewski ha scritto:
>> Hi
>> I use lmtp and you ?
>>
>> On 19.01.2021 10:45, Alessio Cecchi wrote:
>>> Hi Maciej,
>>>
>>> I had the same issue when I switched dovecot backend from Cento 6 to
>>> Centos 7.
>>>
>>> Also my configuration is similar to you, Dovecot Direcot, Dovecot
>>> backend that share Maildir via NFS on NetApp.
>>>
>>> For local delivery of emails are you using LDA or LMTP? I'm using LDA.
>>>
>>> Let me know.
>>>
>>> Thanks
>>>
>>> Il 13/01/21 15:56, Maciej Milaszewski ha scritto:
>>>> Hi
>>>> I have been trying resolve my problem with dovecot for a few days and I
>>>> dont have idea
>>>>
>>>> My environment is: dovecot director+5 dovecot guest
>>>>
>>>> dovecot-2.2.36.4 from source
>>>> Linux 3.16.0-11-amd64
>>>> storage via nfs (NetApp)
>>>>
>>>> all works fine but when I update OS from debian 8 (kernel 3.16.x) to
>>>> debian 9 (kernel 4.9.x ) sometimes I get random in logs:
>>>> Broken dovecot-uidlist
>>>>
>>>> examle:
>>>> Error: Broken file
>>>> /vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist line 88:
>>>> Invalid data:
>>>>
>>>> (for random users - sometimes 10 error in day per node, some times more)
>>>>
>>>> File looks ok
>>>>
>>>> But if I change kernel to 3.16.x problem with "Broken file
>>>> dovecot-uidlist"  - not exists
>>>> if turn to 4.9 or 5.x - problem exists
>>>>
>>>> I have storage via nfs with opions:
>>>> rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
>>>> I tested with "nocto" or without "nocto" - nothing changes ..
>>>>
>>>> nfs options in node:
>>>> mmap_disable = yes
>>>> mail_fsync = always
>>>>
>>>> I bet the configuration is correct and I wonder why the problem occurs
>>>> with other kernels
>>>> 3.x.x - ok
>>>> 4.x - not ok
>>>>
>>>> I check and user who have problem did not connect to another node in
>>>> this time
>>>>
>>>> I dont have idea why problem exists on the kernel 4.x but not in 3.x
>>>>
>>>>
>>> -- 
>>> Alessio Cecchi
>>> Postmaster @ http://www.qboxmail.it
>>> https://www.linkedin.com/in/alessice
> -- 
> Alessio Cecchi
> Postmaster @ http://www.qboxmail.it
> https://www.linkedin.com/in/alessice




Re: dovecot and broken uidlist

2021-01-22 Thread Maciej Milaszewski
Hi
Try changing to an older kernel and test again.

On 22.01.2021 15:45, Alessio Cecchi wrote:
>
> Hi Claudio,
>
> I made a test with an NFS mount with nfsvers=4.1 and CentOS 7 as the NFS
> client (our NetApp already has NFS 4.1 enabled), but the problem is
> still present.
>
> Moreover, I don't like to switch to NFS 4 because it is stateful; NFS v3 is
> stateless, so for example during maintenance or an upgrade of the NFS server
> clients have no problems and a reboot of the NetApp is transparent.
>
> I don't think the problem is related to the NetApp; I see the same error
> in a setup of a customer based on Google Cloud (Ubuntu as Dovecot and
> NFS client and a Google Cloud NFS volume as storage).
>
> In my case I'm using LDA for local delivery of emails, so I hope that
> switching to LMTP will resolve the issue, but I'm not sure, since
> other users said that they are already using LMTP.
>
> I don't know why it works on old Linux distros and recent distros have the
> issue ...
>
> On 19/01/21 at 20:21, Claudio Cuqui wrote:
>> It's a long shot..but I would try to use nfsvers=4.1 in the nfs
>> mount option (instead of nfsvers=3)  - if your netapp supports it -
>> with a newer kernel - 4.14-stable or 4.19-stable (if possible). The
>> reason for that, is a nasty bug found in linux nfs client with older
>> kernels...
>>
>> https://about.gitlab.com/blog/2018/11/14/how-we-spent-two-weeks-hunting-an-nfs-bug/
>>
>> Hope this helps...
>>
>> Regards,
>>
>> Claudio
>>
>>
>> On Wed, 13 Jan 2021 at 12:18, Maciej Milaszewski
>> mailto:maciej.milaszew...@iq.pl>> wrote:
>>
>> Hi
>> I have been trying resolve my problem with dovecot for a few days
>> and I
>> dont have idea
>>
>> My environment is: dovecot director+5 dovecot guest
>>
>> dovecot-2.2.36.4 from source
>> Linux 3.16.0-11-amd64
>> storage via nfs (NetApp)
>>
>> all works fine but when I update OS from debian 8 (kernel 3.16.x) to
>> debian 9 (kernel 4.9.x ) sometimes I get random in logs:
>> Broken dovecot-uidlist
>>
>> examle:
>> Error: Broken file
>> /vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist
>> line 88:
>> Invalid data:
>>
>> (for random users - sometimes 10 error in day per node, some
>> times more)
>>
>> File looks ok
>>
>> But if I change kernel to 3.16.x problem with "Broken file
>> dovecot-uidlist"  - not exists
>> if turn to 4.9 or 5.x - problem exists
>>
>> I have storage via nfs with opions:
>> 
>> rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
>> I tested with "nocto" or without "nocto" - nothing changes ..
>>
>> nfs options in node:
>> mmap_disable = yes
>> mail_fsync = always
>>
>> I bet the configuration is correct and I wonder why the problem
>> occurs
>> with other kernels
>> 3.x.x - ok
>> 4.x - not ok
>>
>> I check and user who have problem did not connect to another node in
>> this time
>>
>> I dont have idea why problem exists on the kernel 4.x but not in 3.x
>>
>>
> -- 
> Alessio Cecchi
> Postmaster @ http://www.qboxmail.it
> https://www.linkedin.com/in/alessice





Re: dovecot and broken uidlist

2021-01-19 Thread Maciej Milaszewski
Hi
I use LMTP, and you?

On 19.01.2021 10:45, Alessio Cecchi wrote:
>
> Hi Maciej,
>
> I had the same issue when I switched the Dovecot backend from CentOS 6 to
> CentOS 7.
>
> Also my configuration is similar to yours: Dovecot Director, Dovecot
> backends that share Maildir via NFS on a NetApp.
>
> For local delivery of emails are you using LDA or LMTP? I'm using LDA.
>
> Let me know.
>
> Thanks
>
> On 13/01/21 at 15:56, Maciej Milaszewski wrote:
>> Hi
>> I have been trying resolve my problem with dovecot for a few days and I
>> dont have idea
>>
>> My environment is: dovecot director+5 dovecot guest
>>
>> dovecot-2.2.36.4 from source
>> Linux 3.16.0-11-amd64
>> storage via nfs (NetApp)
>>
>> all works fine but when I update OS from debian 8 (kernel 3.16.x) to
>> debian 9 (kernel 4.9.x ) sometimes I get random in logs:
>> Broken dovecot-uidlist
>>
>> examle:
>> Error: Broken file
>> /vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist line 88:
>> Invalid data:
>>
>> (for random users - sometimes 10 error in day per node, some times more)
>>
>> File looks ok
>>
>> But if I change kernel to 3.16.x problem with "Broken file
>> dovecot-uidlist"  - not exists
>> if turn to 4.9 or 5.x - problem exists
>>
>> I have storage via nfs with opions:
>> rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
>> I tested with "nocto" or without "nocto" - nothing changes ..
>>
>> nfs options in node:
>> mmap_disable = yes
>> mail_fsync = always
>>
>> I bet the configuration is correct and I wonder why the problem occurs
>> with other kernels
>> 3.x.x - ok
>> 4.x - not ok
>>
>> I check and user who have problem did not connect to another node in
>> this time
>>
>> I dont have idea why problem exists on the kernel 4.x but not in 3.x
>>
>>
> -- 
> Alessio Cecchi
> Postmaster @ http://www.qboxmail.it
> https://www.linkedin.com/in/alessice





Re: dovecot and broken uidlist

2021-01-16 Thread Maciej Milaszewski IQ PL
Hi
Any ideas, anyone?

On 13 January 2021 at 15:56:18 CET, Maciej Milaszewski
 wrote:
>Hi
>I have been trying resolve my problem with dovecot for a few days and I
>dont have idea
>
>My environment is: dovecot director+5 dovecot guest
>
>dovecot-2.2.36.4 from source
>Linux 3.16.0-11-amd64
>storage via nfs (NetApp)
>
>all works fine but when I update OS from debian 8 (kernel 3.16.x) to
>debian 9 (kernel 4.9.x ) sometimes I get random in logs:
>Broken dovecot-uidlist
>
>examle:
>Error: Broken file
>/vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist line 88:
>Invalid data:
>
>(for random users - sometimes 10 error in day per node, some times
>more)
>
>File looks ok
>
>But if I change kernel to 3.16.x problem with "Broken file
>dovecot-uidlist"  - not exists
>if turn to 4.9 or 5.x - problem exists
>
>I have storage via nfs with opions:
>rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
>I tested with "nocto" or without "nocto" - nothing changes ..
>
>nfs options in node:
>mmap_disable = yes
>mail_fsync = always
>
>I bet the configuration is correct and I wonder why the problem occurs
>with other kernels
>3.x.x - ok
>4.x - not ok
>
>I check and user who have problem did not connect to another node in
>this time
>
>I dont have idea why problem exists on the kernel 4.x but not in 3.x

--

dovecot and broken uidlist

2021-01-13 Thread Maciej Milaszewski
Hi
I have been trying to resolve my problem with Dovecot for a few days and I
am out of ideas.

My environment is: a Dovecot director + 5 Dovecot backend nodes

dovecot-2.2.36.4 from source
Linux 3.16.0-11-amd64
storage via nfs (NetApp)

Everything works fine, but when I updated the OS from Debian 8 (kernel 3.16.x) to
Debian 9 (kernel 4.9.x), I sometimes get random errors in the logs:
Broken dovecot-uidlist

Example:
Error: Broken file
/vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist line 88:
Invalid data:

(for random users - sometimes 10 errors a day per node, sometimes more)

The file looks OK.

But if I change the kernel to 3.16.x the "Broken file
dovecot-uidlist" problem does not occur;
if I switch to 4.9 or 5.x the problem occurs.

I have the storage mounted via NFS with these options:
rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
I tested with and without "nocto" - nothing changes.

NFS-related options on the nodes:
mmap_disable = yes
mail_fsync = always

I believe the configuration is correct and I wonder why the problem occurs
only with certain kernels:
3.x.x - OK
4.x - not OK

I checked, and the affected users were not connected to another node at
the same time.

I have no idea why the problem exists on kernel 4.x but not on 3.x.




Re: CVE-2020-24386: IMAP hibernation allows accessing other peoples mail

2021-01-07 Thread Maciej Milaszewski
On 04.01.2021 14:02, Dan Malm wrote:
> On 2021-01-04 13:03, Aki Tuomi wrote:
>> Vulnerable version: 2.2.26-2.3.11.3
>> Fixed version: 2.3.13
> No fix for 2.2.36?
>
Hi
Probably not fixed - my heart is broken too - but this workaround,
"imap_hibernate_timeout = 0", will probably save you...




dovecot+ldap with keepalive

2020-12-28 Thread Maciej Milaszewski
Hi
I have dovecot-2.2.36.4 (director + 5 nodes); the auth backend is
OpenLDAP with keepalived failover to a second LDAP server.

As a test I shut down my LDAP server - keepalived works perfectly, the VIP
switched over and ldapsearch works fine (everything connects to the second LDAP),
but I noticed strange Dovecot behaviour - some users get "no response" or
keep waiting.

in dovecot i use:
auth_cache_negative_ttl = 5 mins
auth_cache_size = 20 M
auth_cache_ttl = 5 mins

service lmtp {
  inet_listener lmtp {
    address = 127.0.0.1 10.0.100.4
    port = 24
  }
  process_min_avail = 5
}

protocol lmtp {
  auth_socket_path = director-userdb
  mail_plugins = quota expire notify mail_log
  passdb {
    args = proxy=y nopassword=y port=24
    driver = static
    name =
  }
  syslog_facility = local3
}


On the LDAP server I have:
idletimeout 256

Any ideas?



Re: dovecot-uidlist invalid data

2020-10-26 Thread Maciej Milaszewski
Hi
Any ideas or solutions?

> Hello
> I have a problem with Invalid data
> System debian10 dovecot-2.2.36.4
>
> # 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
> # Pigeonhole version 0.4.24.rc1 (debaa297)
> # OS: Linux 4.19.0-12-amd64 x86_64 Debian 10
>
>
> Oct 23 15:57:52 dovecot6 dovecot:
> lmtp(33973,media4_js,2KEXD2Dhkl+1hAAAe3x6RQ): Error: Broken file
> /vmail/me/media4_js/Maildir/dovecot-uidlist line 6875: Invalid data:
>
> In debian9 - kernel-4.9.0-13 - problem exists
> In debian10 - kernel-4.19.0-12 - problem exist
>
> In debian8 - kernel 3.16.0-11-amd64 - problem not exists
> In debian9 - kernel 3.16.0-11-amd64 - problem not exists
>
> storage mount from storage NetApp
>
> storage:/vmail on /vmail type nfs
> (rw,noexec,noatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=120,acregmax=120,acdirmin=120,acdirmax=120,hard,nocto,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.19.19.19,mountvers=3,mountport=635,mountproto=tcp,local_lock=none,addr=10.19.19.19)
>
> cat /etc/fstab
> storage:/vmail    /vmail    nfs   
> rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120,nocto
>    
> 0    0
>
> Probably somthing in kernel or mount options. Any idea ?
>



dovecot-uidlist invalid data

2020-10-23 Thread Maciej Milaszewski
Hello
I have a problem with "Invalid data" errors.
System: Debian 10, dovecot-2.2.36.4

# 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24.rc1 (debaa297)
# OS: Linux 4.19.0-12-amd64 x86_64 Debian 10


Oct 23 15:57:52 dovecot6 dovecot:
lmtp(33973,media4_js,2KEXD2Dhkl+1hAAAe3x6RQ): Error: Broken file
/vmail/me/media4_js/Maildir/dovecot-uidlist line 6875: Invalid data:

On Debian 9 - kernel 4.9.0-13 - the problem exists
On Debian 10 - kernel 4.19.0-12 - the problem exists

On Debian 8 - kernel 3.16.0-11-amd64 - the problem does not exist
On Debian 9 - kernel 3.16.0-11-amd64 - the problem does not exist

The storage is mounted from a NetApp:

storage:/vmail on /vmail type nfs
(rw,noexec,noatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=120,acregmax=120,acdirmin=120,acdirmax=120,hard,nocto,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.19.19.19,mountvers=3,mountport=635,mountproto=tcp,local_lock=none,addr=10.19.19.19)

cat /etc/fstab
storage:/vmail    /vmail    nfs    rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120,nocto    0    0

Probably something in the kernel or the mount options. Any ideas?



antispam plugin again

2020-10-23 Thread Maciej Milaszewski
Hello
I have a problem migrating Dovecot from 2.2.36 to 2.3.8 -
everything works fine except for migrating the anti-spam plugin.

New Dovecot 2.3.x implements anti-spam training via IMAPSieve, like this:

 new from dovecot 2.3.8 -
# From elsewhere to Spam folder
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before =
file:/usr/lib64/dovecot/sieve/report-spam.sieve

  # From Spam folder to elsewhere
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Spam
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/usr/lib64/dovecot/sieve/report-ham.sieve

  sieve_pipe_bin_dir = /usr/lib64/dovecot/sieve

  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment

  #setting_name = value
  sieve_global_dir = /etc/sieve_global
  sieve_max_redirects = 20
  sieve_vacation_use_original_recipient = yes

  expire = SPAM
  #expire_dict = proxy::expire
  expire_dict = redis:host=127.0.0.1:prefix=expire/
---


In the older Dovecot version I used the antispam plugin plus a simple script:

 old from 2.2.36.4 + antispam plugin --
 ...
 antispam_backend = MAILTRAIN
  antispam_mail_spam = --spam
  antispam_mail_notspam = --ham
  antispam_mail_sendmail = /usr/local/bin/spam-learn.sh
  antispam_pipe_tmpdir = /tmp

  antispam_spam_pattern_ignorecase = spam;inbox.spam;Unwanted
  antispam_trash_pattern_ignorecase = trash;Deleted *;Junk*;kosz

  antispam_debug_target = syslog
  antispam_verbose_debug = 1
-

How do I change the bash script so that it works like it did on the old system (I
use pyzor)? The script:

cat /usr/local/bin/spam-learn.sh

#!/bin/sh
date >> /tmp/spam.txt
echo $@ >> /tmp/spam.txt

if [ "x$1" = "x--spam" ]; then
    /usr/bin/pyzor report >> /tmp/spam.txt 2>&1
fi
if [ "x$1" = "x--ham" ]; then
    /usr/bin/pyzor whitelist >> /tmp/ham.txt 2>&1
fi
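For reference, what I am considering (an untested sketch based on the pigeonhole
vnd.dovecot.pipe examples, reusing the script above) is to call spam-learn.sh from the two
sieve scripts referenced in the 2.3 config, passing the same --spam/--ham arguments, and to
drop the script into sieve_pipe_bin_dir (/usr/lib64/dovecot/sieve):

report-spam.sieve:
require ["vnd.dovecot.pipe", "copy", "imapsieve"];
pipe :copy "spam-learn.sh" ["--spam"];

report-ham.sieve:
require ["vnd.dovecot.pipe", "copy", "imapsieve"];
pipe :copy "spam-learn.sh" ["--ham"];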




Re: Auro expunge

2020-10-14 Thread Maciej Milaszewski
Hi
But if you have more users (200K), that kind of script becomes a problem.
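That is why I would rather use the built-in autoexpunge setting, which is applied per user
when the mailbox is accessed instead of sweeping every account from cron; a minimal sketch
(assuming the folders really are called Trash and Spam):

namespace inbox {
  mailbox Trash {
    autoexpunge = 30d
  }
  mailbox Spam {
    autoexpunge = 30d
  }
}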

On 14.10.2020 16:28, Adrian Minta wrote:
> The cron option is the safest.
>
> Run each night something like this:
>
> #!/bin/bash
>
> DOVEADM="/usr/bin/doveadm";
>
> $DOVEADM expunge -A mailbox Trash savedbefore 30d
> $DOVEADM expunge -A mailbox Spam savedbefore 30d
>
>
>
> On 10/13/20 10:27 PM, @lbutlr wrote:
>> When using autoexpunge = 14 days in, for example. The trash of junk
>> folders, does that physically remove the messages from disk or simply
>> mark them to be removed and some other action needs to be taken?
>>
>> I ask because my mail server crapped out today and I discovered a
>> Junk folder with 490,000 messages in it, and the sa-spamd and mariadb
>> processes died a horrible death.
>>
>> Haven't had time to track down what was going on, but there were
>> definitely messages in that junk folder from 2019 (though, of course,
>> they may have been added within the last 14 days, I can't verify that
>> as my solution was to remove the folder and recreate a new empty
>> maildir).
>>



Antispam plugin

2020-09-22 Thread Maciej Milaszewski
Hi
System: CentOS 8 + dovecot-2.3.8 from the repo

# 2.3.8 (9df20d2db): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.8 (b7b03ba2)
# OS: Linux 4.18.0-193.19.1.el8_2.x86_64 x86_64 CentOS Linux release
8.2.2004 (Core)

 I need "Antispam plugin". What antispam-plugin I must use ?

With the older dovecot-2.2.36.4 I used "dovecot-antispam-plugin", but on
CentOS I had a problem with configure - probably a too-old version:

"./configure: line 3193: DC_DOVECOT: command not found"

The Dovecot wiki mentions the "antispam-plugin"; I tried
http://hg.dovecot.org/dovecot-antispam-plugin
but got a 404.

The backend is SpamAssassin.



migrations from 2.2.36

2020-09-21 Thread Maciej Milaszewski
Hi
I started migrating the Dovecot cluster from 2.2.36.4 (Debian) to 2.3.8
(CentOS 8). I read https://wiki2.dovecot.org/Upgrading/2.3

In 10-director.conf I have to disable director_doveadm_port, like this:
#director_doveadm_port = 2424

and add:
.
service doveadm {
  inet_listener {
    port = 2424
  }
}

Is this correct ?



The rest of the file's contents:


service director {
  unix_listener login/director {
    mode = 0666
  }
  fifo_listener login/proxy-notify {
    mode = 0666
  }
  unix_listener director-userdb {
    mode = 0600
  }
  inet_listener {
    port = 9090
  }
}

service imap-login {
  executable = imap-login director
}

service pop3-login {
  executable = pop3-login director
}
 
# Enable director for LMTP proxying:
protocol lmtp {
  auth_socket_path = director-userdb
}
 
service managesieve-login {
    executable = managesieve-login director
}
 
protocol doveadm {
    auth_socket_path=director-userdb
}



Re: another problem with 2.3.36.4 after update os

2020-09-17 Thread Maciej Milaszewski
Hi
I changed the kernel from 4.9.0-13-amd64 back to the same one used before the Debian update,
and the "dovecot-uidlist line 112: Invalid data" problem is gone.

The problem is probably in the NFS code of the 4.9.x kernel (or something like that).



On 16.09.2020 16:32, Maciej Milaszewski wrote:
> Hi
> A few days ago I upgraded debian8 to debian9
>
> dovecot is from source
> # 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
> # Pigeonhole version 0.4.24.2 (aaba65b7)
> # OS: Linux 4.9.0-13-amd64 x86_64 Debian 9.13
>
> Today I get some times in logs:
> "dovecot-uidlist line 112: Invalid data" and I dont know why
>
> This is claster dovecot:
> dovecot1 - debian8
> dovecot2 - debian8
> dovecot3 - debian8
> dovecot4 - debian9
> dovecot5 - debian8
> director - debian8
>
> storage is mont via nfs
> I upgrade os debian8->debian9 to one node like documentations
>





another problem with 2.3.36.4 after update os

2020-09-16 Thread Maciej Milaszewski
Hi
A few days ago I upgraded Debian 8 to Debian 9.

Dovecot is built from source:
# 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24.2 (aaba65b7)
# OS: Linux 4.9.0-13-amd64 x86_64 Debian 9.13

Today I sometimes get in the logs:
"dovecot-uidlist line 112: Invalid data" and I don't know why.

This is the Dovecot cluster:
dovecot1 - debian8
dovecot2 - debian8
dovecot3 - debian8
dovecot4 - debian9
dovecot5 - debian8
director - debian8

The storage is mounted via NFS.
I upgraded the OS from Debian 8 to Debian 9 on one node, following the documentation.



Re: dovecot 2.2.36.4 problem with ulimit

2020-09-16 Thread Maciej Milaszewski
Hi
Thanks for the reply:

cat /proc/`pidof dovecot`/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    0                    bytes
Max resident set          unlimited            unlimited            bytes
Max processes             357577               357577               processes
Max open files            65536                65536                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       357577               357577               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

Now I changed it via systemd:
systemctl edit dovecot.service

[Service]
TasksMax=4
LimitNOFILE=65536
LimitNPROC=357577
LimitNPROCSoft=357577
LimitSIGPENDING=357577
LimitSIGPENDINGSoft=357577
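and then reloaded and re-checked the limits (the same check as above) to confirm the
override is actually picked up:

systemctl daemon-reload
systemctl restart dovecot
cat /proc/`pidof dovecot`/limits | grep -E 'processes|open files'
systemctl show dovecot.service -p TasksMax -p LimitNOFILE
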
On 16.09.2020 at 14:17, Urban Loesch wrote:

> Hi,
>
> perhaps this?
>
> > with new debian9:
> > open files  (-n) 1024
>
> Regards
> Urban
>
>
> Am 16.09.20 um 12:57 schrieb Maciej Milaszewski:
>> Hi
>> Limits:
>>
>> Where all working fine:
>>
>> core file size  (blocks, -c) 0
>> data seg size   (kbytes, -d) unlimited
>> scheduling priority (-e) 0
>> file size   (blocks, -f) unlimited
>> pending signals (-i) 257970
>> max locked memory   (kbytes, -l) 64
>> max memory size (kbytes, -m) unlimited
>> open files  (-n) 65536
>> pipe size    (512 bytes, -p) 8
>> POSIX message queues (bytes, -q) 819200
>> real-time priority  (-r) 0
>> stack size  (kbytes, -s) 8192
>> cpu time   (seconds, -t) unlimited
>> max user processes  (-u) 257970
>> virtual memory  (kbytes, -v) unlimited
>> file locks  (-x) unlimited
>>
>>
>> with new debian9:
>>
>> core file size  (blocks, -c) 0
>> data seg size   (kbytes, -d) unlimited
>> scheduling priority (-e) 0
>> file size   (blocks, -f) unlimited
>> pending signals (-i) 257577
>> max locked memory   (kbytes, -l) 64
>> max memory size (kbytes, -m) unlimited
>> open files  (-n) 1024
>> pipe size    (512 bytes, -p) 8
>> POSIX message queues (bytes, -q) 819200
>> real-time priority  (-r) 0
>> stack size  (kbytes, -s) 8192
>> cpu time   (seconds, -t) unlimited
>> max user processes  (-u) 257577
>> virtual memory  (kbytes, -v) unlimited
>> file locks  (-x) unlimited
>>
>>
>> maby systemd "something has changed"
>>
>> and add:
>>
>> echo "kernel.pid_max = 5" >> /etc/sysctl.conf
>> sysctl -p
>> systemctl edit dovecot.service
>>
>> [Service]
>> TasksMax=4
>> systemctl daemon-reload
>> systemctl restart dovecot.service
>>
>> cat /sys/fs/cgroup/pids/system.slice/dovecot.service/pids.max
>>
>>
>> Any idea ?
>>
>> On 16.09.2020 09:45, Maciej Milaszewski wrote:
>>> Hi
>>> I update os from debian8 to debian9
>>>
>>> # 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
>>> # Pigeonhole version 0.4.24.2 (aaba65b7)
>>> # OS: Linux 4.9.0-13-amd64 x86_64 Debian 9.13
>>>
>>> All works fine but sometimes I get:
>>>
>>> Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(pop3): fork()
>>> failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
>>> Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(imap): fork()
>>> failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
>>> Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(doveadm):
>>> fork() failed: Resource temporarily unavailable (ulimit -u 257577
>>&

Re: dovecot 2.2.36.4 problem with ulimit

2020-09-16 Thread Maciej Milaszewski
Hi
Limits:

Where all working fine:

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 257970
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 65536
pipe size    (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 257970
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited


with new debian9:

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 257577
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size    (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 257577
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited


maybe systemd "something has changed"

and add:

echo "kernel.pid_max = 5" >> /etc/sysctl.conf
sysctl -p
systemctl edit dovecot.service

[Service]
TasksMax=4
systemctl daemon-reload
systemctl restart dovecot.service

cat /sys/fs/cgroup/pids/system.slice/dovecot.service/pids.max


Any idea ?

On 16.09.2020 09:45, Maciej Milaszewski wrote:
> Hi
> I update os from debian8 to debian9
>
> # 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
> # Pigeonhole version 0.4.24.2 (aaba65b7)
> # OS: Linux 4.9.0-13-amd64 x86_64 Debian 9.13
>
> All works fine but sometimes I get:
>
> Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(pop3): fork()
> failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
> Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(imap): fork()
> failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
> Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(doveadm):
> fork() failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
> Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(doveadm):
> fork() failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
> Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(pop3): fork()
> failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
> Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(imap): fork()
> failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
> Sep 16 09:17:04 dovecot4 dovecot: master: Error: service(imap): fork()
> failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
>
> Other dovecot is debian8 and problem not exists - any idea ?


dovecot 2.2.36.4 problem with ulimit

2020-09-16 Thread Maciej Milaszewski
Hi
I update os from debian8 to debian9

# 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24.2 (aaba65b7)
# OS: Linux 4.9.0-13-amd64 x86_64 Debian 9.13

All works fine but sometimes I get:

Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(pop3): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(imap): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:00 dovecot4 dovecot: master: Error: service(doveadm):
fork() failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(doveadm):
fork() failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(pop3): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:02 dovecot4 dovecot: master: Error: service(imap): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)
Sep 16 09:17:04 dovecot4 dovecot: master: Error: service(imap): fork()
failed: Resource temporarily unavailable (ulimit -u 257577 reached?)

Other dovecot is debian8 and problem not exists - any idea ?


Re: solr and dovecot 2.2.36

2020-08-18 Thread Maciej Milaszewski
Hi
I tested solr-8.6.0 but did not find a schema for 2.2.x; with version
6.6.x it works fine


On 18.08.2020 14:59, Alessio Cecchi wrote:
>
> Hi Maciej,
>
> version 6.6.x works fine, but probably also 7.7.x with schema from
> Dovecot 2.3.
>
> Ciao
>
> Il 18/08/20 14:00, Maciej Milaszewski ha scritto:
>> Hi
>> I have dovecot-2.2.36.4 (director) + 5 nodes dovecot (dovecot-2.2.36.4)
>>
>> What version of Solr do you recommend ?
>>
> -- 
> Alessio Cecchi
> Postmaster @ http://www.qboxmail.it
> https://www.linkedin.com/in/alessice



solr and dovecot 2.2.36

2020-08-18 Thread Maciej Milaszewski
Hi
I have dovecot-2.2.36.4 (director) + 5 nodes dovecot (dovecot-2.2.36.4)

What version of Solr do you recommend ?

-- 
Maciej Miłaszewski
Starszy Administrator Systemowy
IQ PL Sp. z o.o.

Biuro Obsługi Klienta:
e-mail: b...@iq.pl
tel.: +48 58 326 09 90 - 94
fax: +48 58 326 09 99

Dział pomocy: https://www.iq.pl/pomoc
Informacja dotycząca przetwarzania danych osobowych: https://www.iq.pl/kontakt

IQ PL Sp. z o.o. z siedzibą w Gdańsku (80-298), ul. Geodetów 16, KRS 
007725, Sąd rejestrowy: Sąd Rejonowy w Gdańsku VII Wydział KRS, kapitał 
zakładowy: 140.000 PLN, NIP 5832736211, REGON 192478853



solr and dovecot-2.2.36.4

2020-07-17 Thread Maciej Milaszewski
Hi
Today I tried to run solr-8.6.0 + dovecot-2.2.36.4 and it failed

I read
https://doc.dovecot.org/configuration_manual/fts/solr/#fts-backend-solr

Where do I get schema.xml and solrconfig.xml for this version of Solr and
Dovecot?

I tried the schema.xml and solrconfig.xml from a working solr-6.6.5 (dovecot) and got:
"dovecot:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Error initializing QueryElevationComponent"




problem with warnings

2020-06-23 Thread Maciej Milaszewski
Hi
I have a problem with warnings in the log

Yesterday the "big guys" changed the datastore

and since then I get warnings in dovecot like:

Warning: Created dotlock file's timestamp is different than current time
(1592878268 vs 1592871191): /vmail/us/username/Maildir/dovecot-uidlist

Before that there were no such warnings

I have a dovecot director with 5 dovecot nodes, and the storage has always been
mounted via NFS

/vmail on /vmail type nfs
(rw,noexec,noatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=120,acregmax=120,acdirmin=120,acdirmax=120,hard,nocto,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.100.41,mountvers=3,mountport=635,mountproto=tcp,local_lock=none,addr=xxx.xxx.xxx.xxx)
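
That warning usually means the NFS server's clock and the local clock disagree (the two timestamps above differ by roughly two hours). A quick check, assuming the export is mounted at /vmail:

date +%s                        # local clock, epoch seconds
touch /vmail/.clockcheck        # mtime normally comes from the NFS server
stat -c %Y /vmail/.clockcheck   # server-side timestamp, epoch seconds
rm /vmail/.clockcheck

If the two numbers differ by more than a few seconds, syncing both sides with NTP should make the dotlock warnings go away.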



# 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24.rc1 (debaa297)
# OS: Linux 3.16.0-10-amd64 x86_64 Debian 8.11





Re: alternatives for imapproxy

2020-04-08 Thread Maciej Milaszewski
Hi
I have a dovecot director, but I had to separate the rings - because a problem in
one dovecot-director generated a problem for the second dovecot-director.
When I separated the rings everything worked fine.

imapproxy is only on the machine that offers Roundcube


On 08.04.2020 16:14, Aki Tuomi wrote:
>> On 08/04/2020 16:11 Maciej Milaszewski  wrote:
>>
>>  
>> Hi
>> System debian 8.11 and dovecot-2.2.36.4 My webmail is roundcube with
>> imapproxy.
>>
>> I have one problem.
>>
>> My dovecot servers is are in a cluster with keepalived like:
>>
>> dovecot1VIP-IPdovecot2
>>
>> All works fine
>>
>> I have a problem with imapproxy when a server dovecot1 had a problem
>> (kernel panic sic!)
>> Keepalived works perfecty and moved VIP to dovecot2 - all works fine for
>> normal users
>> but imapproxy gave a timeout and webmail clinet cannot connect
>>
>> what do you recommend alternative to imapproxy ?
>>
>> I use imapproxy because is fast ...
> You could use dovecot as proxy. Or dovecot as director proxy. See 
> https://doc.dovecot.org/admin_manual/dovecot_proxy/
>
> Aki
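
For reference, a minimal sketch of the pass-through proxy passdb that page describes (host and port are illustrative, not taken from this setup):

passdb {
  driver = static
  args = proxy=y host=10.0.0.10 port=143 nopassword=y
}

With nopassword=y the proxy just forwards the login to the backend, which then does the real authentication.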


-- 
Maciej Miłaszewski
Starszy Administrator Systemowy
IQ PL Sp. z o.o.

Biuro Obsługi Klienta:
e-mail: b...@iq.pl
tel.: +48 58 326 09 90 - 94
fax: +48 58 326 09 99

Dział pomocy: https://www.iq.pl/pomoc
Informacja dotycząca przetwarzania danych osobowych: https://www.iq.pl/kontakt

IQ PL Sp. z o.o. z siedzibą w Gdańsku (80-298), ul. Geodetów 16, KRS 
007725, Sąd rejestrowy: Sąd Rejonowy w Gdańsku VII Wydział KRS, kapitał 
zakładowy: 140.000 PLN, NIP 5832736211, REGON 192478853




alternatives for imapproxy

2020-04-08 Thread Maciej Milaszewski
Hi
System: debian 8.11 and dovecot-2.2.36.4. My webmail is Roundcube with
imapproxy.

I have one problem.

My dovecot servers are in a cluster with keepalived like:

dovecot1 <-- VIP-IP --> dovecot2

All works fine

I had a problem with imapproxy when the server dovecot1 had a problem
(kernel panic, sic!).
Keepalived worked perfectly and moved the VIP to dovecot2 - everything kept working for
normal users,
but imapproxy gave a timeout and the webmail client could not connect.

What do you recommend as an alternative to imapproxy?

I use imapproxy because it is fast ...



Re: limit for user exceeded

2020-03-31 Thread Maciej Milaszewski
Hi
I don't understand, or maybe I'm thinking about this wrong:

process_limit = 25000


Older:

#fs.inotify.max_user_watches= 8192
#fs.inotify.max_user_instances = 16384

New:
fs.inotify.max_user_instances = 8192
 
fs.inotify.max_user_watches= process_limit x 2 + fs.inotify.max_user_instances
fs.inotify.max_user_watches= 58192
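
To make the values survive a reboot, something like this should do (a sketch using the numbers worked out above):

cat >> /etc/sysctl.d/90-dovecot-inotify.conf <<'EOF'
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 58192
EOF
sysctl --system    # reload all sysctl.d files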


On 31.03.2020 13:44, Aki Tuomi wrote:
> I would prefer replies on the list... =)
>
> no. the idea is to *increase* the *current* value of 
> fs.inotify.max_user_watches and fs.inotify.max_user_instances with 50000
>
> fs.inotify.max_user_watches = 8192 + 50000 = 58192
>
> Aki
>
>> On 31/03/2020 14:21 Maciej Milaszewski  wrote:
>>
>>  
>> Hi
>> How I understood it correctly
>>
>> service imap {
>>   process_limit = 25000
>> }
>>
>> fs.inotify.max_user_watches= 50000
>> fs.inotify.max_user_instances = 50000
>>
>> ?
>>
>>
>> On 31.03.2020 12:14, Aki Tuomi wrote:
>>> Sorry, ment that we *increase* the current value with twice the process 
>>> limit for IMAP.
>>>
>>> Aki
>>>
>>>> On 31/03/2020 13:12 Aki Tuomi  wrote:
>>>>
>>>>  
>>>> We usually set them to twice the number of process_limit for imap.
>>>>
>>>> Aki
>>>>
>>>>> On 31/03/2020 12:29 Maciej Milaszewski  wrote:
>>>>>
>>>>>  
>>>>> Hi
>>>>> System debian 8.11 dovecot-2.2.36.4 and I have some warnings in log likes:
>>>>>
>>>>> Warning: Inotify watch limit for user exceeded, disabling. Increase
>>>>> /proc/sys/fs/inotify/max_user_watches
>>>>>
>>>>>
>>>>> cat /proc/sys/fs/inotify/max_user_watches
>>>>> 8192
>>>>>
>>>>> in sysctl i change
>>>>>
>>>>> #fs.inotify.max_user_watches= 8192
>>>>> #fs.inotify.max_user_instances = 16384
>>>>>
>>>>> fs.inotify.max_user_watches= 16384
>>>>> fs.inotify.max_user_instances = 24576
>>>>>
>>>>> One questions - should these values be equal?




limit for user exceeded

2020-03-31 Thread Maciej Milaszewski
Hi
System: debian 8.11, dovecot-2.2.36.4, and I have some warnings in the log like:

Warning: Inotify watch limit for user exceeded, disabling. Increase
/proc/sys/fs/inotify/max_user_watches


cat /proc/sys/fs/inotify/max_user_watches
8192

in sysctl I changed

#fs.inotify.max_user_watches= 8192
#fs.inotify.max_user_instances = 16384

fs.inotify.max_user_watches= 16384
fs.inotify.max_user_instances = 24576

One question - should these values be equal?


dovecot-2.2.36-4 and antispam

2020-03-24 Thread Maciej Milaszewski
Hi
I use dovecot-2.2.36.4 on Debian 8.11 and the dovecot-antispam plugin from
http://johannes.sipsolutions.net/Projects/dovecot-antispam

All works fine, but in Roundcube I get two folders: Spam (really Junk) and SPAM


in sieve i have:

require ["fileinto","imap4flags"];
# rule:[SPAM Box]
if header :contains "X-Spam-Flag" "YES"
{
setflag "\\Seen";
fileinto "SPAM";    #   --->   because dovecot-antispam need folder SPAM
to teach spam or ham
stop;
}

The problem is probably fixed when I change
fileinto "SPAM";
to
fileinto "Junk";


But plugin "dovecot-antispam" probably not working ...
Maybe I think wrongly

Can you recommend any other plugin that will work properly and teach
spam with Junk for dovecot2.2.36-4





in dovecot:

15-mailboxes.conf:
...
 mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }

  mailbox SPAM {
    special_use = \Junk
  }

  mailbox Trash {
    special_use = \Trash
  }



90-plugin.conf:
.
 antispam_backend = MAILTRAIN
  #antispam_mail_sendmail_args  = --for;%u
  antispam_mail_spam = --spam
  antispam_mail_notspam = --ham
  antispam_mail_sendmail = /usr/local/bin/spam-learn.sh
  antispam_pipe_tmpdir = /tmp

  antispam_spam_pattern_ignorecase = spam;inbox.spam;Unwanted
  antispam_trash_pattern_ignorecase = trash;Deleted *;Junk*;wiadomokosz

  antispam_debug_target = syslog
  antispam_verbose_debug = 1


script to learn

cat /usr/local/bin/spam-learn.sh
#!/bin/sh

date >> /tmp/spam.txt
echo $@ >> /tmp/spam.txt

if [ "x$1" = "x--spam" ]; then
    /usr/bin/pyzor report >> /tmp/spam.txt 2>&1
fi
if [ "x$1" = "x--ham" ]; then
    /usr/bin/pyzor whitelist >> /tmp/ham.txt 2>&1
fi


-- 
Maciej Miłaszewski
Starszy Administrator Systemowy
IQ PL Sp. z o.o.

Biuro Obsługi Klienta:
e-mail: b...@iq.pl
tel.: +48 58 326 09 90 - 94
fax: +48 58 326 09 99

Dział pomocy: https://www.iq.pl/pomoc
Informacja dotycząca przetwarzania danych osobowych: https://www.iq.pl/kontakt

IQ PL Sp. z o.o. z siedzibą w Gdańsku (80-298), ul. Geodetów 16, KRS 
007725, Sąd rejestrowy: Sąd Rejonowy w Gdańsku VII Wydział KRS, kapitał 
zakładowy: 140.000 PLN, NIP 5832736211, REGON 192478853



Re: Error: Raw backtrace and index cache

2019-11-20 Thread Maciej Milaszewski IQ PL via dovecot
Hi
I have now changed to dovecot-2.2.36.4 and the problem seems not to occur

On 20.11.2019 13:48, Aki Tuomi via dovecot wrote:
> This is likely fixed in more recent version, can you try with 2.2.36?
>
> Aki
>
> On 20.11.2019 11.20, Maciej Milaszewski IQ PL wrote:
>> Hi
>> Thanx for replay.
>>
>> Log:
>>
>> http://paste.debian.net/1117077/
>>
>>
>> On 20.11.2019 10:07, Aki Tuomi wrote:
>>> Firstly, 2.2.13 is about 5 years old. So there's that. It would be
>>> helpful if you can reproduce this with 2.2.36.
>>>
>>> Also, you forgot to actually include in your log snippet the panic. So
>>> maybe few more lines before the Raw backtrace?
>>>
>>> Aki
>>>
>>> On 20.11.2019 10.54, Maciej Milaszewski IQ PL via dovecot wrote:
>>>> Hi
>>>> I have "problem" with dovect 2.2.13 from repo debian8 and I don't know
>>>> how to solve it ...
>>>>
>>>> Server is a virtual (kvm) with debian 8.11 (postfix + dovecot from repo)
>>>> and storage is mounting via nfs (I have use only one dovecot with
>>>> external storage)
>>>>
>>>> All works fine but sometime ( after a few hours ) I have got a problem
>>>> with dovecot cache (i use indexes)
>>>> logs -> http://paste.debian.net/1117072/
>>>>
>>>>
>>>> The store is mounted via nfs in /home/
>>>> index is local in  /var/dovecot_indexes%h
>>>>
>>>> All go back to normal when i remove indexes like:
>>>> find /var/dovecot_indexes/home/ -name 'dovecot*' -type f -delete
>>>> but this is not good solution
>>>>
>>>> What am I doing wrong ?
>>>>
>>>>
>>>> For tunning nfs in 10-mail.conf:
>>>>
>>>> -
>>>> "mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h
>>>>
>>>> namespace inbox {
>>>>   inbox = yes
>>>> }
>>>>
>>>> mmap_disable = yes
>>>> dotlock_use_excl = no
>>>> mail_fsync = always
>>>>
>>>> mail_nfs_storage = no
>>>> mail_nfs_index = no
>>>>
>>>> lock_method = fcntl
>>>> mail_temp_dir = /tmp
>>>> mail_plugins = quota expire notify mail_log
>>>>
>>>> mailbox_idle_check_interval = 30 secs
>>>> mail_temp_scan_interval = 1w
>>>> maildir_very_dirty_syncs = no
>>>>
>>>> -
>>>>
>>>> doveconf -n
>>>>
>>>> # 2.2.13: /etc/dovecot/dovecot.conf
>>>> # OS: Linux 3.16.0-9-amd64 x86_64 Debian 8.11
>>>> auth_mechanisms = plain login
>>>> disable_plaintext_auth = no
>>>> dotlock_use_excl = no
>>>> lda_original_recipient_header = X-Original-To
>>>> log_path = /var/log/dovecot/dovecot.mail133
>>>> mail_fsync = always
>>>> mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h
>>>> mail_plugins = quota expire notify mail_log
>>>> managesieve_notify_capability = mailto
>>>> managesieve_sieve_capability = fileinto reject envelope
>>>> encoded-character vacation subaddress comparator-i;ascii-numeric
>>>> relational regex imap4flags copy include variables body enotify
>>>> environment mailbox date index ihave duplicate mime foreverypart 
>>>> extracttext
>>>> mmap_disable = yes
>>>> namespace inbox {
>>>>   inbox = yes
>>>>   location =
>>>>   mailbox Drafts {
>>>>     special_use = \Drafts
>>>>   }
>>>>   mailbox Junk {
>>>>     special_use = \Junk
>>>>   }
>>>>   mailbox Sent {
>>>>     special_use = \Sent
>>>>   }
>>>>   mailbox "Sent Messages" {
>>>>     special_use = \Sent
>>>>   }
>>>>   mailbox Trash {
>>>>     special_use = \Trash
>>>>   }
>>>>   prefix =
>>>> }
>>>> passdb {
>>>>   driver = pam
>>>> }
>>>> passdb {
>>>>   args = /etc/dovecot/dovecot-sql.conf
>>>>   driver = sql
>>>> }
>>>> plugin {
>>>>   mail_log_events = delete undelete expunge copy mailbox_delete
>>>> mailbox_rename
>>>>   mail_log_fields = uid box msgid size
>>>>   sieve = ~/dovecot.sieve
>>>>   sieve_default = /var/lib/dove

Re: Error: Raw backtrace and index cache

2019-11-20 Thread Maciej Milaszewski IQ PL via dovecot
Hi
Thanx for replay.

Log:

http://paste.debian.net/1117077/


On 20.11.2019 10:07, Aki Tuomi wrote:
> Firstly, 2.2.13 is about 5 years old. So there's that. It would be
> helpful if you can reproduce this with 2.2.36.
>
> Also, you forgot to actually include in your log snippet the panic. So
> maybe few more lines before the Raw backtrace?
>
> Aki
>
> On 20.11.2019 10.54, Maciej Milaszewski IQ PL via dovecot wrote:
>> Hi
>> I have "problem" with dovect 2.2.13 from repo debian8 and I don't know
>> how to solve it ...
>>
>> Server is a virtual (kvm) with debian 8.11 (postfix + dovecot from repo)
>> and storage is mounting via nfs (I have use only one dovecot with
>> external storage)
>>
>> All works fine but sometime ( after a few hours ) I have got a problem
>> with dovecot cache (i use indexes)
>> logs -> http://paste.debian.net/1117072/
>>
>>
>> The store is mounted via nfs in /home/
>> index is local in  /var/dovecot_indexes%h
>>
>> All go back to normal when i remove indexes like:
>> find /var/dovecot_indexes/home/ -name 'dovecot*' -type f -delete
>> but this is not good solution
>>
>> What am I doing wrong ?
>>
>>
>> For tunning nfs in 10-mail.conf:
>>
>> -
>> "mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h
>>
>> namespace inbox {
>>   inbox = yes
>> }
>>
>> mmap_disable = yes
>> dotlock_use_excl = no
>> mail_fsync = always
>>
>> mail_nfs_storage = no
>> mail_nfs_index = no
>>
>> lock_method = fcntl
>> mail_temp_dir = /tmp
>> mail_plugins = quota expire notify mail_log
>>
>> mailbox_idle_check_interval = 30 secs
>> mail_temp_scan_interval = 1w
>> maildir_very_dirty_syncs = no
>>
>> -
>>
>> doveconf -n
>>
>> # 2.2.13: /etc/dovecot/dovecot.conf
>> # OS: Linux 3.16.0-9-amd64 x86_64 Debian 8.11
>> auth_mechanisms = plain login
>> disable_plaintext_auth = no
>> dotlock_use_excl = no
>> lda_original_recipient_header = X-Original-To
>> log_path = /var/log/dovecot/dovecot.mail133
>> mail_fsync = always
>> mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h
>> mail_plugins = quota expire notify mail_log
>> managesieve_notify_capability = mailto
>> managesieve_sieve_capability = fileinto reject envelope
>> encoded-character vacation subaddress comparator-i;ascii-numeric
>> relational regex imap4flags copy include variables body enotify
>> environment mailbox date index ihave duplicate mime foreverypart extracttext
>> mmap_disable = yes
>> namespace inbox {
>>   inbox = yes
>>   location =
>>   mailbox Drafts {
>>     special_use = \Drafts
>>   }
>>   mailbox Junk {
>>     special_use = \Junk
>>   }
>>   mailbox Sent {
>>     special_use = \Sent
>>   }
>>   mailbox "Sent Messages" {
>>     special_use = \Sent
>>   }
>>   mailbox Trash {
>>     special_use = \Trash
>>   }
>>   prefix =
>> }
>> passdb {
>>   driver = pam
>> }
>> passdb {
>>   args = /etc/dovecot/dovecot-sql.conf
>>   driver = sql
>> }
>> plugin {
>>   mail_log_events = delete undelete expunge copy mailbox_delete
>> mailbox_rename
>>   mail_log_fields = uid box msgid size
>>   sieve = ~/dovecot.sieve
>>   sieve_default = /var/lib/dovecot/sieve/default.sieve
>>   sieve_dir = %h/sieve
>>   sieve_global_dir = /var/lib/dovecot/sieve/
>> }
>> protocols = " imap lmtp sieve pop3"
>> service auth {
>>   unix_listener auth-master {
>>     group = users
>>     mode = 0666
>>     user = virtual
>>   }
>>   unix_listener auth-userdb {
>>     group = users
>>     user = virtual
>>   }
>> }
>> service lmtp {
>>   inet_listener lmtp {
>>     port = 24
>>   }
>>   unix_listener /var/spool/postfix/private/dovecot-lmtp {
>>     group = postfix
>>     mode = 0600
>>     user = postfix
>>   }
>>   user = virtual
>> }
>> service managesieve-login {
>>   inet_listener sieve {
>>     address = 127.0.0.1 94.124.15.58
>>     port = 4190
>>   }
>> }
>> ssl_ca = /etc/postfix/ssl/mail.maximail.pl.pem
>> ssl_cert = > ssl_key = > ssl_protocols = !SSLv2 !SSLv3
>> userdb {
>>   driver = passwd
>> }
>> userdb {
>>   args = /etc/dovecot/dovecot-sql.conf
>>   driver = sql
>> }
>> protocol lmtp {
>>   info_log_path = /var/log/dovecot/dovecot.mali133
>>   lmtp_save_to_detail_mailbox = yes
>>   mail_plugins = quota sieve notify push_notification
>> }
>> protocol lda {
>>   auth_socket_path = /var/run/dovecot/auth-master
>>   lda_mailbox_autocreate = yes
>>   log_path = /var/log/dovecot/dovecot-lda.mail133
>>   mail_plugins = sieve
>>   postmaster_address = root
>> }
>>


-- 
Maciej Miłaszewski
Starszy Administrator Systemowy
IQ PL Sp. z o.o.

Biuro Obsługi Klienta:
e-mail: b...@iq.pl
tel.: +48 58 326 09 90 - 94
fax: +48 58 326 09 99

Dział pomocy: https://www.iq.pl/pomoc
Informacja dotycząca przetwarzania danych osobowych: https://www.iq.pl/kontakt

IQ PL Sp. z o.o. z siedzibą w Gdańsku (80-298), ul. Geodetów 16, KRS 
007725, Sąd rejestrowy: Sąd Rejonowy w Gdańsku VII Wydział KRS, kapitał 
zakładowy: 140.000 PLN, NIP 5832736211, REGON 192478853



Error: Raw backtrace and index cache

2019-11-20 Thread Maciej Milaszewski IQ PL via dovecot
Hi
I have "problem" with dovect 2.2.13 from repo debian8 and I don't know
how to solve it ...

Server is a virtual (kvm) with debian 8.11 (postfix + dovecot from repo)
and storage is mounting via nfs (I have use only one dovecot with
external storage)

All works fine but sometime ( after a few hours ) I have got a problem
with dovecot cache (i use indexes)
logs -> http://paste.debian.net/1117072/


The store is mounted via NFS in /home/
the index is local in /var/dovecot_indexes%h

Everything goes back to normal when I remove the indexes like:
find /var/dovecot_indexes/home/ -name 'dovecot*' -type f -delete
but this is not a good solution
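
A gentler alternative would be letting Dovecot rebuild the files itself, per affected user (a sketch; the address is illustrative):

doveadm force-resync -u user@domain.ltd INBOX
# or rebuild the index/cache for all of that user's mailboxes:
doveadm index -u user@domain.ltd '*'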

What am I doing wrong ?


For tuning NFS in 10-mail.conf:

-
"mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h

namespace inbox {
  inbox = yes
}

mmap_disable = yes
dotlock_use_excl = no
mail_fsync = always

mail_nfs_storage = no
mail_nfs_index = no

lock_method = fcntl
mail_temp_dir = /tmp
mail_plugins = quota expire notify mail_log

mailbox_idle_check_interval = 30 secs
mail_temp_scan_interval = 1w
maildir_very_dirty_syncs = no

-

doveconf -n

# 2.2.13: /etc/dovecot/dovecot.conf
# OS: Linux 3.16.0-9-amd64 x86_64 Debian 8.11
auth_mechanisms = plain login
disable_plaintext_auth = no
dotlock_use_excl = no
lda_original_recipient_header = X-Original-To
log_path = /var/log/dovecot/dovecot.mail133
mail_fsync = always
mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h
mail_plugins = quota expire notify mail_log
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date index ihave duplicate mime foreverypart extracttext
mmap_disable = yes
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  driver = pam
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete
mailbox_rename
  mail_log_fields = uid box msgid size
  sieve = ~/dovecot.sieve
  sieve_default = /var/lib/dovecot/sieve/default.sieve
  sieve_dir = %h/sieve
  sieve_global_dir = /var/lib/dovecot/sieve/
}
protocols = " imap lmtp sieve pop3"
service auth {
  unix_listener auth-master {
    group = users
    mode = 0666
    user = virtual
  }
  unix_listener auth-userdb {
    group = users
    user = virtual
  }
}
service lmtp {
  inet_listener lmtp {
    port = 24
  }
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = postfix
    mode = 0600
    user = postfix
  }
  user = virtual
}
service managesieve-login {
  inet_listener sieve {
    address = 127.0.0.1 94.124.15.58
    port = 4190
  }
}
ssl_ca = /etc/postfix/ssl/mail.maximail.pl.pem
ssl_cert = 

dovecot and ldap

2019-10-31 Thread Maciej Milaszewski IQ PL via dovecot
Hi
Sorry for my question...

I use dovecot+ldap

How does the list of LDAP hosts to use really work (in dovecot-2.2.x)?

-- dovecot.conf 

hosts = ldap.domain.pl:389 ldap-slave.domain.pl:389
#uris =



Is this simple HA? I mean, if ldap.domain.pl has a problem, does the next request
go to ldap-slave.domain.pl?
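
For reference, a minimal sketch of the relevant part of dovecot-ldap.conf.ext; the OpenLDAP client library generally tries the listed hosts in order, so this gives simple failover rather than load balancing:

hosts = ldap.domain.pl:389 ldap-slave.domain.pl:389
# or, equivalently, with URIs instead of host:port pairs:
#uris = ldap://ldap.domain.pl ldap://ldap-slave.domain.pl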



Re: Recent Dovecot on old operating system

2019-09-19 Thread Maciej Milaszewski IQ PL via dovecot
Hi
For test in my smal testing lab i downlod latest dovecot-2.2.36.4 and
debian 7.x

./configure --prefix=/usr/local/dovecot-2.2.36.4 --sysconfdir=/etc
--with-mysql --with-ssl=openssl --with-solr --with-storages=maildir,imapc

and make

working ok

debian 7:
dovecot-director

debian 8:
dovecot node

and also tested it the other way round:

debian 8:
dovecot-director

debian 7:
dovecot node

All works fine

On 19.09.2019 12:28, Gerald via dovecot wrote:
> Hi,
>
> sorry for the dumb question and please ignore this post if you think it's far 
> beyond; i know it's not the way to go, but for reasons ...
>
> Has anyone running a self compiled recent dovecot (2.2.36.4) on Debian-7 and 
> does it work?
>  Or thinks it should work.
>
> Surprisingly it actually compiles flawlessly on Debian-7, but i wonder wether 
> it will become a complete mess replacingthe existing dovecot (2.1.7) with the 
> new one, serving a few dozens users and any kind of mail clients.
>
> Libraries of the compiled Dovecot:
> $ ldd ./src/master/.libs/dovecotlinux-vdso.so.1 =>  
> (0x7ffeacfb1000)
> libdovecot.so.0 => 
> /usr/local/00-DBAI/dovecot/lib/dovecot/libdovecot.so.0 (0x7f557b2a9000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f557af1c000)
> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f557ad18000)
> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f557ab1)
> /lib64/ld-linux-x86-64.so.2 (0x7f557b5dc000)
> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
> (0x7f557a8f4000)
>
> libc6 versions:Debian-7: 2.13-38Debian-9: 2.24-11
>
> thanks so far, gerald 
>
>
>
>
>



Multiple certificate option SNI

2019-09-13 Thread Maciej Milaszewski IQ PL via dovecot
Hi
I have a problem with SNI and dovecot 2.2.36.4

Server: debian 9.x and dovecot-2.2.36.4

The default server SSL cert is a wildcard like *.domain.com (DigiCert)

ssl_ca = /var/control/cert.pem
ssl_cert =

I added the next cert as in the documentation:
https://wiki.dovecot.org/SSL/DovecotConfiguration

like:

local_name imap.mail.test.domain.com {
  ssl_cert =
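
A minimal sketch of how the complete 10-ssl.conf section would look, with illustrative file paths (2.2.x syntax; clients must send SNI for the local_name block to match):

ssl_cert = </etc/dovecot/ssl/wildcard.domain.com.crt
ssl_key = </etc/dovecot/ssl/wildcard.domain.com.key

local_name imap.mail.test.domain.com {
  ssl_cert = </etc/dovecot/ssl/imap.mail.test.domain.com.crt
  ssl_key = </etc/dovecot/ssl/imap.mail.test.domain.com.key
}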

Re: Multiple certificate option

2019-09-10 Thread Maciej Milaszewski IQ PL via dovecot
Hi
Is this for all dovecot versions?

On 10.09.2019 08:05, Greg Wildman via dovecot wrote:
> On Fri, 2019-09-06 at 17:25 -0700, remo--- via dovecot wrote:
>> What is the best way to adopt multiple certs? 
> I have a setup that creates letsencrypt certs for each customer domain.
> To automate this I have the following at the end of conf.d/10-ssl.conf
>
>   !include ssl.d/*.conf
>
> This includes any .conf file under conf.d/ssl.d
>
> Now it is a simple matter to add and remove certificates for each
> domain as the letsencrypt job runs. Each config file looks like this
>
> $cat ssl.d/somedomain_co_za.conf
> local_name imap.somedomain.co.za {
>   ssl_cert =
>   ssl_key  =
> }
>
>
> YMMV.
>


-- 
Maciej Miłaszewski
Starszy Administrator Systemowy
IQ PL Sp. z o.o.

Biuro Obsługi Klienta:
e-mail: b...@iq.pl
tel.: +48 58 326 09 90 - 94
fax: +48 58 326 09 99

Dział pomocy: https://www.iq.pl/pomoc
Informacja dotycząca przetwarzania danych osobowych: https://www.iq.pl/kontakt

IQ PL Sp. z o.o. z siedzibą w Gdańsku (80-298), ul. Geodetów 16, KRS 
007725, Sąd rejestrowy: Sąd Rejonowy w Gdańsku VII Wydział KRS, kapitał 
zakładowy: 140.000 PLN, NIP 5832736211, REGON 192478853




signature.asc
Description: OpenPGP digital signature


solr

2019-07-10 Thread Maciej Milaszewski IQ PL via dovecot
Hi
I have set up SOLR in accordance with documentation and it runs well.
I use solr like:
...
fts = solr
fts_solr = debug url=http://IP:8983/solr/ (solr in external machine)
..

Is replication of this system really essential? In my tests,
a restore of the Solr server on the external machine takes less than a minute
and is nearly invisible from the client side.

On the other hand, Solr replication is quite a complicated process, and
a rollback or master-slave switch in this case is a non-trivial task that
may result in inconsistency of the whole dataset.

Do you have any experience with such cases? Maybe load balancing in HAProxy
could do the trick? Something like:

.
server search1 192.168.1.1:8983 check port 8983 inter 20s fastinter 2
server search2 192.168.1.2:8983 backup
.
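
A slightly fuller sketch of that idea, assuming Solr answers on port 8983 on both machines and only failover (not index replication) is wanted:

backend solr
    mode http
    option httpchk GET /solr/admin/info/system
    server search1 192.168.1.1:8983 check inter 2s fall 3 rise 2
    server search2 192.168.1.2:8983 check backup

Note that anything indexed while search2 is active will not exist on search1, so after failing back those mails would have to be reindexed (doveadm fts rescan / doveadm index).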

Best Regards



Re: solr vs fts

2019-07-04 Thread Maciej Milaszewski IQ PL via dovecot


>> A few clients have 25K and more e-mail
>>
>> I thinking about use solr like:
>>  fts = solr
>>  fts_solr = debug url=http://IP:8983/solr/ (solr in external machine)
>>
>> Does it make sense ? use dovecot_indexes and fts ?
>> What is the difference in performance?
>>
> Hi!
>
> Dovecot indexes are not actually related to FTS that much. Using FTS
> usually makes sense since it speeds up IMAP fulltext searches.
>
> Aki
>
Hi
So would you advise using Solr, or something else?



solr vs fts

2019-07-04 Thread Maciej Milaszewski IQ PL via dovecot
Hi
I have a question about tuning dovecot-2.2.36.x

Mail is stored on NFS storage in Maildir format under
/home/us/usern...@domain.ltd/MAILDIR/
Additionally I use local dovecot indexes on an SSD disk
(/var/dovecot_indexes%h)

A few clients have 25K or more e-mails

I thinking about use solr like:
 fts = solr
 fts_solr = debug url=http://IP:8983/solr/ (solr in external machine)

Does it make sense to use both dovecot_indexes and FTS?
What is the difference in performance?
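
For completeness, a sketch of the full 2.2.x wiring (the URL is the same illustrative one as above; fts_autoindex is optional):

mail_plugins = $mail_plugins fts fts_solr

plugin {
  fts = solr
  fts_autoindex = yes                      # index new mail at delivery time
  fts_solr = url=http://IP:8983/solr/
}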



Re: director in rings

2019-03-07 Thread Maciej Milaszewski IQ PL via dovecot
48:06 2019]  [] ? kmem_getpages+0x5b/0x110
[śro mar  6 08:48:06 2019]  [] ?
fallback_alloc+0x1cf/0x210
[śro mar  6 08:48:06 2019]  [] ?
tg3_alloc_rx_data+0x6d/0x260 [tg3]
[śro mar  6 08:48:06 2019]  [] ? __kmalloc+0x227/0x4e0
[śro mar  6 08:48:06 2019]  [] ?
tg3_alloc_rx_data+0x6d/0x260 [tg3]
[śro mar  6 08:48:06 2019]  [] ?
tg3_alloc_rx_data+0x6d/0x260 [tg3]
[śro mar  6 08:48:06 2019]  [] ?
tg3_poll_work+0x471/0xeb0 [tg3]
[śro mar  6 08:48:06 2019]  [] ?
tg3_poll_msix+0x35/0x140 [tg3]
[śro mar  6 08:48:06 2019]  [] ? net_rx_action+0x129/0x250
[śro mar  6 08:48:06 2019]  [] ? __do_softirq+0xf1/0x2d0
[śro mar  6 08:48:06 2019]  [] ? irq_exit+0x95/0xa0
[śro mar  6 08:48:06 2019]  [] ? do_IRQ+0x52/0xe0
[śro mar  6 08:48:06 2019]  [] ?
common_interrupt+0x81/0x81
[śro mar  6 08:48:06 2019]   Mem-Info:
[śro mar  6 08:48:06 2019] Node 0 DMA per-cpu:
[śro mar  6 08:48:06 2019] CPU    0: hi:    0, btch:   1 usd:   0
[śro mar  6 08:48:06 2019] CPU    1: hi:    0, btch:   1 usd:   0
[śro mar  6 08:48:06 2019] CPU    2: hi:    0, btch:   1 usd:   0
[śro mar  6 08:48:06 2019] CPU    3: hi:    0, btch:   1 usd:   0


śro mar  6 08:48:06 2019] imap-login: page allocation failure: order:2,
mode:0x204020
[śro mar  6 08:48:06 2019] CPU: 38 PID: 5656 Comm: imap-login Not
tainted 3.16.0-5-amd64 #1 Debian 3.16.51-3+deb8u1
[śro mar  6 08:48:06 2019] Hardware name: Dell Inc. PowerEdge
R620/01W23F, BIOS 2.6.1 02/12/2018
[śro mar  6 08:48:06 2019]   8151f937
00204020 88082fc63b98
[śro mar  6 08:48:06 2019]  81148d8f 
818ea8b0 88080002
[śro mar  6 08:48:06 2019]  00012fffbe00 88082fffcc00
0046 0001
[śro mar  6 08:48:06 2019] Call Trace:
[śro mar  6 08:48:06 2019]    [] ?
dump_stack+0x5d/0x78
[śro mar  6 08:48:06 2019]  [] ?
warn_alloc_failed+0xdf/0x130
[śro mar  6 08:48:06 2019]  [] ?
__alloc_pages_nodemask+0x8ef/0xb50
[śro mar  6 08:48:06 2019]  [] ? kmem_getpages+0x5b/0x110
[śro mar  6 08:48:06 2019]  [] ?
fallback_alloc+0x1cf/0x210
[śro mar  6 08:48:06 2019]  [] ?
tg3_alloc_rx_data+0x6d/0x260 [tg3]
[śro mar  6 08:48:06 2019]  [] ? __kmalloc+0x227/0x4e0
[śro mar  6 08:48:06 2019]  [] ?
tg3_alloc_rx_data+0x6d/0x260 [tg3]
[śro mar  6 08:48:06 2019]  [] ?
tg3_alloc_rx_data+0x6d/0x260 [tg3]
[śro mar  6 08:48:06 2019]  [] ?
tg3_poll_work+0x471/0xeb0 [tg3]
[śro mar  6 08:48:06 2019]  [] ?
tg3_poll_msix+0x35/0x140 [tg3]
[śro mar  6 08:48:06 2019]  [] ? net_rx_action+0x129/0x250
[śro mar  6 08:48:06 2019]  [] ?
run_rebalance_domains+0x3f/0x190
[śro mar  6 08:48:06 2019]  [] ? __do_softirq+0xf1/0x2d0
[śro mar  6 08:48:06 2019]  [] ?
do_softirq_own_stack+0x1c/0x30
[śro mar  6 08:48:06 2019]    [] ?
do_softirq+0x4d/0x60
[śro mar  6 08:48:06 2019]  [] ?
__local_bh_enable_ip+0x84/0x90
[śro mar  6 08:48:06 2019]  [] ? tcp_recvmsg+0x4c/0xc40
[śro mar  6 08:48:06 2019]  [] ? set_next_entity+0x56/0x70
[śro mar  6 08:48:06 2019]  [] ?
pick_next_task_fair+0x6e1/0x820
[śro mar  6 08:48:06 2019]  [] ? __switch_to+0x15c/0x5a0
[śro mar  6 08:48:06 2019]  [] ? inet_recvmsg+0x6a/0x80
[śro mar  6 08:48:06 2019]  [] ?
sock_aio_read.part.7+0xfe/0x120
[śro mar  6 08:48:06 2019]  [] ? do_sync_read+0x5c/0x90
[śro mar  6 08:48:06 2019]  [] ? vfs_read+0x135/0x170
[śro mar  6 08:48:06 2019]  [] ? SyS_read+0x42/0xa0
[śro mar  6 08:48:06 2019]  [] ?
system_call_fast_compare_end+0x10/0x15

Now I have changed the memory and tuned min_free_kbytes in the kernel.
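
The tuning mentioned above, as a sketch (the value is illustrative, not the one actually used):

echo "vm.min_free_kbytes = 262144" >> /etc/sysctl.d/90-netmem.conf
sysctl --system

Keeping more memory free makes order-2 (16 KiB contiguous) allocations like the tg3 RX buffers in the traces above less likely to fail under pressure.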

My question is about the keepalived+haproxy solutions

1) director1 <--ring--> director2
2) director3 - standalone as backup

Both options connect to the 5 backend dovecots



On 07.03.2019 08:43, Aki Tuomi via dovecot wrote:
> On 6.3.2019 14.26, Maciej Milaszewski IQ PL via dovecot wrote:
>> Hi
>> Maby stupid question :)
>>
>> It possible to have 3 directors (frontend)
>> but without rings ?
>>
>> All directors connect to this same dovecot (backend) - all backad have
>> this same login_trusted_networks
>>
>>
> Why would you even use a director then?
>
> Aki
>


-- 
Maciej Miłaszewski
IQ PL Sp. z o.o.
Starszy Administrator Systemowy

Biuro Obsługi Klienta:
e-mail: b...@iq.pl
tel.: +48 58 326 09 90 - 94
fax: +48 58 326 09 99

Dział pomocy: https://www.iq.pl/pomoc
Informacja dotycząca przetwarzania danych osobowych: https://www.iq.pl/kontakt
Jakość gwarantuje: ISO 9001:2000

IQ PL Sp. z o.o. z siedzibą w Gdańsku (80-298), ul. Geodetów 16, KRS 
007725, Sąd rejestrowy: Sąd Rejonowy w Gdańsku VII Wydział KRS, 
kapitał zakładowy: 140.000 PLN, NIP 5832736211, REGON 192478853



director in rings

2019-03-06 Thread Maciej Milaszewski IQ PL via dovecot
Hi
Maybe a stupid question :)

Is it possible to have 3 directors (frontend)
but without rings?

All directors connect to the same dovecot (backend) - all backends have
the same login_trusted_networks


-- 
Maciej Miłaszewski
IQ PL Sp. z o.o.
Starszy Administrator Systemowy




problem witht working director

2019-02-13 Thread Maciej Milaszewski IQ PL via dovecot
Hi
I have a dovecot director ring of 2-3 nodes and 4 dovecot backends

My ring is :
10.0.100.2  9090 right never   synced  8   37305225 39763861
0    723   2019-02-13 16:02:04 2019-02-13
16:02:04 
10.0.100.3  9090 left  never   synced  8   57409664 8707830 
0    737   2019-02-13 16:02:04 2019-02-13
16:02:04 
10.0.100.4  9090 self  never   ring synced 1   -    -   
-    - -   - 

10.0.100.4 - this is my default server where clients connect - default
director 2.2.36
10.0.100.3 - second director 2.2.36
10.0.100.2 - third director with an older version, 2.2.18


Today I found the error in my default mailserver
Feb 13 13:21:23 kernel: [24253349.641695] qmail-remote: page allocation
failure: order:2, mode:0x204020
and a problem with logging in via imap and pop3.
When I try "doveadm director status" I get a timeout.

I tried restarting dovecot and got:
Feb 13 13:21:03 thebe3 dovecot: master: Fatal: Dovecot is already
running? Socket already exists: /var/run/dovecot/login/director

Before the failure I found:
"imap-login: Error: write(proxy-notify) failed: Resource temporarily
unavailable"

but my settings seem correct: https://paste.debian.net/1067596/


I moved the IP (via keepalived) over to the other director (second server),
and when I tried "doveadm director status" there I also got a timeout.

https://paste.debian.net/1067601/

I do not understand why I had a problem with the second server as well,
and why a restart of dovecot on server2
solved the problem?


alterstorage for archiv

2019-02-07 Thread Maciej Milaszewski IQ PL via dovecot
Hi
I have a testing lab with:
debian10 + dovecot-2.2.33 (director) and one backend with 2.2.33.2

mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h

All works fine, but I am now thinking about a solution to move all messages
older than 14 days to an Archive folder on my alternate storage with slower disks,

like: /vmail/domain/user@domain/Maildir/
to:
/otherstorage/vmail/domain/user@domain/
and I am thinking about (hmm) mapping the Archive folder:

/vmail/domain/user@domain/Maildir/Archive/

mail_location =
maildir:~/Maildir:INDEX=/var/dovecot_indexes%h:ALT=/otherstorage%h

Is that "solution" a good idea?

Or maybe there is another working solution? Because doveadm altmove does not
work with Maildir.
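
One working alternative, as a sketch: expose the slow storage as a separate Archive namespace and move old mail into it with doveadm (paths and the user are illustrative, not from this setup):

namespace archive {
  prefix = Archive/
  separator = /
  location = maildir:/otherstorage/vmail/%d/%n/Maildir:INDEX=/var/dovecot_indexes/archive/%d/%n
}

# create the target mailbox once, then move everything saved more than 14 days ago:
doveadm mailbox create -u user@domain Archive/old-inbox
doveadm move -u user@domain Archive/old-inbox mailbox INBOX savedbefore 14d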




jmap

2019-01-25 Thread Maciej Milaszewski IQ PL
Hi
Is there any chance that Dovecot-2.2.36 supports "jmap" ?

-- 
Maciej Miłaszewski
IQ PL Sp. z o.o.
Starszy Administrator Systemowy

Biuro Obsługi Klienta:
e-mail: b...@iq.pl
tel.: +48 58 326 09 90 - 94
fax: +48 58 326 09 99

Dział pomocy: https://www.iq.pl/pomoc
Informacja dotycząca przetwarzania danych osobowych: https://www.iq.pl/kontakt
Jakość gwarantuje: ISO 9001:2000

IQ PL Sp. z o.o. z siedzibą w Gdańsku (80-298), ul. Geodetów 16, KRS 
007725, Sąd rejestrowy: Sąd Rejonowy w Gdańsku VII Wydział KRS, 
kapitał zakładowy: 140.000 PLN, NIP 5832736211, REGON 192478853



Re: debian10+dovecot-2.2.33.2

2019-01-24 Thread Maciej Milaszewski IQ PL
Hi
All works fine :) Thanks a lot Aki :)

On 24.01.2019 19:59, Aki Tuomi wrote:
> Another known issue, fixed with
>
> https://github.com/dovecot/core/commit/ca4c2579f0456072bdb505932a9cf7602e42afd2.patch
>
> Aki
>
>> On 24 January 2019 at 20:54 Maciej Milaszewski IQ PL 
>>  wrote:
>>
>>
>> Hi
>> Like bumerang.
>>
>> 2.2.36 works fine on debian10 but connect by telnet like:
>>
>> Connected to 46.xxx.xxx.xxx.
>> Escape character is '^]'.
>> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
>> IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Imap ready.
>> ^]
>> telnet> q
>> Connection closed.
>>
>> dovenull   664  0.0  0.0   7796  6052 ?    S    19:54   0:00  \_
>> dovecot/imap-login  
>> ??
>> dovenull   665  0.0  0.0   7672  5444 ?    S    19:54   0:00  \_
>> dovecot/imap-login
>>
>> strace:
>>
>> strace -p 664
>> strace: Process 664 attached
>> gettimeofday({tv_sec=1548355306, tv_usec=938191}, NULL) = 0
>>
>>
>> On 24.01.2019 19:34, Maciej Milaszewski IQ PL wrote:
>>> Hi
>>> Thenx for replay - problem solved :)
>>>
>>> I forget `autoreconf -vi`
>>>
>>> You are rox :)
>>>
>>>  On 24.01.2019 19:10, Aki Tuomi wrote:
>>>> You need to do `autoreconf -vi` before configure, won't work otherwise.
>>>>
>>>> Aki
>>>>
>>>>> autoreconf -viOn 24 January 2019 at 20:09 Maciej Milaszewski IQ PL 
>>>>>  wrote:
>>>>>
>>>>>
>>>>> Hi
>>>>> Thenx. I use your patch but problem not solved.
>>>>>
>>>>>
>>>>> Hunk #1 succeeded at 334 with fuzz 2 (offset 19 lines).
>>>>> patching file src/auth/mycrypt.c
>>>>>
>>>>> ./configure --prefix=/usr/local/dovecot-2.2.36 --sysconfdir=/etc
>>>>> --with-ldap=yes --with-mysql --with-ssl=openssl --with-solr
>>>>> --with-storages=maildir,imapc
>>>>>
>>>>> Install prefix . : /usr/local/dovecot-2.2.36
>>>>> File offsets ... : 64bit
>>>>> I/O polling  : epoll
>>>>> I/O notifys  : inotify
>>>>> SSL  : yes (OpenSSL)
>>>>> GSSAPI . : no
>>>>> passdbs  : static passwd passwd-file shadow checkpassword ldap sql
>>>>> dcrypt ..: yes
>>>>>  : -pam -bsdauth -sia -vpopmail
>>>>> userdbs  : static prefetch passwd passwd-file checkpassword ldap
>>>>> sql nss
>>>>>  : -vpopmail
>>>>> SQL drivers  : mysql
>>>>>  : -pgsql -sqlite -cassandra
>>>>> Full text search : squat solr
>>>>>  : -lucene
>>>>>
>>>>>
>>>>> Jan 24 19:06:34 postfix dovecot: ssl-params: Generating SSL parameters
>>>>> Jan 24 19:06:34 postfix dovecot: master: Error: service(auth): command
>>>>> startup failed, throttling for 2 secs
>>>>> Jan 24 19:06:34 postfix dovecot: auth: Fatal: master: service(auth):
>>>>> child 11238 killed with signal 11 (core dumped)
>>>>> Jan 24 19:06:34 postfix dovecot: director: Error: Auth server
>>>>> disconnected unexpectedly
>>>>> Jan 24 19:06:34 postfix dovecot: director: Error: Auth server
>>>>> disconnected unexpectedly
>>>>>
>>>>>
>>>>> New LWP 11244]
>>>>> [Thread debugging using libthread_db enabled]
>>>>> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>>>>> Core was generated by `dovecot/auth'.
>>>>> Program terminated with signal SIGSEGV, Segmentation fault.
>>>>> #0  0x7f44ee4e8c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>>>>> (gdb) bt full
>>>>> #0  0x7f44ee4e8c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>>>>> No symbol table info available.
>>>>> #1  0x559531747bc8 in password_scheme_register_crypt () at
>>>>> password-scheme-crypt.c:144
>>>>>     i = 0
>>>>>     crypted = 
>>>>> #2  0x5595317478dc in password_schemes_init () at 
>>>>> password-scheme.c:875
>>>>>     i = 
>>>>> #3  0x5595317227fb in main_preinit () at main.c:188
>>>>>     mod_set = {abi_version = 0xf9d2c8bc >>>> at address 0xf9d2c8bc>, binary_name = 0x7f44ee41d5e0 <_gnutls_log_level>
>>>>> "", setting_name = 0x0,
>>>>>   filter_callback = 0x7f44ee395cfb, filter_context =
>>>>> 0x756e65470004, require_init_funcs = 1, debug = 0,
>>>>> ignore_dlopen_errors = 1, ignore_missing = 0}
>>>>>     services = 
>>>>>     mod_set = 
>>>>>     services = 
>>>>> #4  main (argc=, argv=) at main.c:396
>>>>>     c = 
>>>>>
>>>>>
>>>>> On 24.01.2019 18:33, Aki Tuomi wrote:
>>>>>> This has been fixed with
>>>>>>
>>>>>> https://github.com/dovecot/core/commit/63a74b9e8e0604486a15a879e7f1a27257322400.patch
>>>>>>
>>>>>> Aki
>>>>>>
>>



Re: debian10+dovecot-2.2.33.2

2019-01-24 Thread Maciej Milaszewski IQ PL
Hi
Like a boomerang.

2.2.36 works fine on debian10, but when I connect by telnet I see:

Connected to 46.xxx.xxx.xxx.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Imap ready.
^]
telnet> q
Connection closed.

dovenull   664  0.0  0.0   7796  6052 ?    S    19:54   0:00  \_
dovecot/imap-login  
??
dovenull   665  0.0  0.0   7672  5444 ?    S    19:54   0:00  \_
dovecot/imap-login

strace:

strace -p 664
strace: Process 664 attached
gettimeofday({tv_sec=1548355306, tv_usec=938191}, NULL) = 0


On 24.01.2019 19:34, Maciej Milaszewski IQ PL wrote:
> Hi
> Thenx for replay - problem solved :)
>
> I forget `autoreconf -vi`
>
> You are rox :)
>
>  On 24.01.2019 19:10, Aki Tuomi wrote:
>> You need to do `autoreconf -vi` before configure, won't work otherwise.
>>
>> Aki
>>
>>> autoreconf -viOn 24 January 2019 at 20:09 Maciej Milaszewski IQ PL 
>>>  wrote:
>>>
>>>
>>> Hi
>>> Thenx. I use your patch but problem not solved.
>>>
>>>
>>> Hunk #1 succeeded at 334 with fuzz 2 (offset 19 lines).
>>> patching file src/auth/mycrypt.c
>>>
>>> ./configure --prefix=/usr/local/dovecot-2.2.36 --sysconfdir=/etc
>>> --with-ldap=yes --with-mysql --with-ssl=openssl --with-solr
>>> --with-storages=maildir,imapc
>>>
>>> Install prefix . : /usr/local/dovecot-2.2.36
>>> File offsets ... : 64bit
>>> I/O polling  : epoll
>>> I/O notifys  : inotify
>>> SSL  : yes (OpenSSL)
>>> GSSAPI . : no
>>> passdbs  : static passwd passwd-file shadow checkpassword ldap sql
>>> dcrypt ..: yes
>>>  : -pam -bsdauth -sia -vpopmail
>>> userdbs  : static prefetch passwd passwd-file checkpassword ldap
>>> sql nss
>>>  : -vpopmail
>>> SQL drivers  : mysql
>>>  : -pgsql -sqlite -cassandra
>>> Full text search : squat solr
>>>  : -lucene
>>>
>>>
>>> Jan 24 19:06:34 postfix dovecot: ssl-params: Generating SSL parameters
>>> Jan 24 19:06:34 postfix dovecot: master: Error: service(auth): command
>>> startup failed, throttling for 2 secs
>>> Jan 24 19:06:34 postfix dovecot: auth: Fatal: master: service(auth):
>>> child 11238 killed with signal 11 (core dumped)
>>> Jan 24 19:06:34 postfix dovecot: director: Error: Auth server
>>> disconnected unexpectedly
>>> Jan 24 19:06:34 postfix dovecot: director: Error: Auth server
>>> disconnected unexpectedly
>>>
>>>
>>> New LWP 11244]
>>> [Thread debugging using libthread_db enabled]
>>> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>>> Core was generated by `dovecot/auth'.
>>> Program terminated with signal SIGSEGV, Segmentation fault.
>>> #0  0x7f44ee4e8c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>>> (gdb) bt full
>>> #0  0x7f44ee4e8c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>>> No symbol table info available.
>>> #1  0x559531747bc8 in password_scheme_register_crypt () at
>>> password-scheme-crypt.c:144
>>>     i = 0
>>>     crypted = 
>>> #2  0x5595317478dc in password_schemes_init () at password-scheme.c:875
>>>     i = 
>>> #3  0x5595317227fb in main_preinit () at main.c:188
>>>     mod_set = {abi_version = 0xf9d2c8bc >> at address 0xf9d2c8bc>, binary_name = 0x7f44ee41d5e0 <_gnutls_log_level>
>>> "", setting_name = 0x0,
>>>   filter_callback = 0x7f44ee395cfb, filter_context =
>>> 0x756e65470004, require_init_funcs = 1, debug = 0,
>>> ignore_dlopen_errors = 1, ignore_missing = 0}
>>>     services = 
>>>     mod_set = 
>>>     services = 
>>> #4  main (argc=, argv=) at main.c:396
>>>     c = 
>>>
>>>
>>> On 24.01.2019 18:33, Aki Tuomi wrote:
>>>> This has been fixed with
>>>>
>>>> https://github.com/dovecot/core/commit/63a74b9e8e0604486a15a879e7f1a27257322400.patch
>>>>
>>>> Aki
>>>>


-- 
Maciej Miłaszewski
IQ PL Sp. z o.o.
Starszy Administrator Systemowy

Biuro Obsługi Klienta:
e-mail: b...@iq.pl
tel.: +48 58 326 09 90 - 94
fax: +48 58 326 09 99

Dział pomocy: https://www.iq.pl/pomoc
Informacja dotycząca przetwarzania danych osobowych: https://www.iq.pl/kontakt
Jakość gwarantuje: ISO 9001:2000

IQ PL Sp. z o.o. z siedzibą w Gdańsku (80-298), ul. Geodetów 16, KRS 
007725, Sąd rejestrowy: Sąd Rejonowy w Gdańsku VII Wydział KRS, 
kapitał zakładowy: 140.000 PLN, NIP 5832736211, REGON 192478853




Re: debian10+dovecot-2.2.33.2

2019-01-24 Thread Maciej Milaszewski IQ PL
Hi
Thanks for the reply - problem solved :)

I forgot `autoreconf -vi`

You rock :)

 On 24.01.2019 19:10, Aki Tuomi wrote:
> You need to do `autoreconf -vi` before configure, won't work otherwise.
>
> Aki
>
>> autoreconf -viOn 24 January 2019 at 20:09 Maciej Milaszewski IQ PL 
>>  wrote:
>>
>>
>> Hi
>> Thenx. I use your patch but problem not solved.
>>
>>
>> Hunk #1 succeeded at 334 with fuzz 2 (offset 19 lines).
>> patching file src/auth/mycrypt.c
>>
>> ./configure --prefix=/usr/local/dovecot-2.2.36 --sysconfdir=/etc
>> --with-ldap=yes --with-mysql --with-ssl=openssl --with-solr
>> --with-storages=maildir,imapc
>>
>> Install prefix . : /usr/local/dovecot-2.2.36
>> File offsets ... : 64bit
>> I/O polling  : epoll
>> I/O notifys  : inotify
>> SSL  : yes (OpenSSL)
>> GSSAPI . : no
>> passdbs  : static passwd passwd-file shadow checkpassword ldap sql
>> dcrypt ..: yes
>>  : -pam -bsdauth -sia -vpopmail
>> userdbs  : static prefetch passwd passwd-file checkpassword ldap
>> sql nss
>>  : -vpopmail
>> SQL drivers  : mysql
>>  : -pgsql -sqlite -cassandra
>> Full text search : squat solr
>>  : -lucene
>>
>>
>> Jan 24 19:06:34 postfix dovecot: ssl-params: Generating SSL parameters
>> Jan 24 19:06:34 postfix dovecot: master: Error: service(auth): command
>> startup failed, throttling for 2 secs
>> Jan 24 19:06:34 postfix dovecot: auth: Fatal: master: service(auth):
>> child 11238 killed with signal 11 (core dumped)
>> Jan 24 19:06:34 postfix dovecot: director: Error: Auth server
>> disconnected unexpectedly
>> Jan 24 19:06:34 postfix dovecot: director: Error: Auth server
>> disconnected unexpectedly
>>
>>
>> New LWP 11244]
>> [Thread debugging using libthread_db enabled]
>> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>> Core was generated by `dovecot/auth'.
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> #0  0x7f44ee4e8c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> (gdb) bt full
>> #0  0x7f44ee4e8c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> No symbol table info available.
>> #1  0x559531747bc8 in password_scheme_register_crypt () at
>> password-scheme-crypt.c:144
>>     i = 0
>>     crypted = 
>> #2  0x5595317478dc in password_schemes_init () at password-scheme.c:875
>>     i = 
>> #3  0x5595317227fb in main_preinit () at main.c:188
>>     mod_set = {abi_version = 0xf9d2c8bc > at address 0xf9d2c8bc>, binary_name = 0x7f44ee41d5e0 <_gnutls_log_level>
>> "", setting_name = 0x0,
>>   filter_callback = 0x7f44ee395cfb, filter_context =
>> 0x756e65470004, require_init_funcs = 1, debug = 0,
>> ignore_dlopen_errors = 1, ignore_missing = 0}
>>     services = 
>>     mod_set = 
>>     services = 
>> #4  main (argc=, argv=) at main.c:396
>>     c = 
>>
>>
>> On 24.01.2019 18:33, Aki Tuomi wrote:
>>> This has been fixed with
>>>
>>> https://github.com/dovecot/core/commit/63a74b9e8e0604486a15a879e7f1a27257322400.patch
>>>
>>> Aki
>>>
>>



Re: debian10+dovecot-2.2.33.2

2019-01-24 Thread Maciej Milaszewski IQ PL
Hi
Thanks. I used your patch but the problem is not solved.


Hunk #1 succeeded at 334 with fuzz 2 (offset 19 lines).
patching file src/auth/mycrypt.c

./configure --prefix=/usr/local/dovecot-2.2.36 --sysconfdir=/etc
--with-ldap=yes --with-mysql --with-ssl=openssl --with-solr
--with-storages=maildir,imapc

Install prefix . : /usr/local/dovecot-2.2.36
File offsets ... : 64bit
I/O polling  : epoll
I/O notifys  : inotify
SSL  : yes (OpenSSL)
GSSAPI . : no
passdbs  : static passwd passwd-file shadow checkpassword ldap sql
dcrypt ..: yes
 : -pam -bsdauth -sia -vpopmail
userdbs  : static prefetch passwd passwd-file checkpassword ldap
sql nss
 : -vpopmail
SQL drivers  : mysql
 : -pgsql -sqlite -cassandra
Full text search : squat solr
 : -lucene


Jan 24 19:06:34 postfix dovecot: ssl-params: Generating SSL parameters
Jan 24 19:06:34 postfix dovecot: master: Error: service(auth): command
startup failed, throttling for 2 secs
Jan 24 19:06:34 postfix dovecot: auth: Fatal: master: service(auth):
child 11238 killed with signal 11 (core dumped)
Jan 24 19:06:34 postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 19:06:34 postfix dovecot: director: Error: Auth server
disconnected unexpectedly


New LWP 11244]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `dovecot/auth'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x7f44ee4e8c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt full
#0  0x7f44ee4e8c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#1  0x559531747bc8 in password_scheme_register_crypt () at
password-scheme-crypt.c:144
    i = 0
    crypted = 
#2  0x5595317478dc in password_schemes_init () at password-scheme.c:875
    i = 
#3  0x5595317227fb in main_preinit () at main.c:188
    mod_set = {abi_version = 0xf9d2c8bc , binary_name = 0x7f44ee41d5e0 <_gnutls_log_level>
"", setting_name = 0x0,
  filter_callback = 0x7f44ee395cfb, filter_context =
0x756e65470004, require_init_funcs = 1, debug = 0,
ignore_dlopen_errors = 1, ignore_missing = 0}
    services = 
    mod_set = 
    services = 
#4  main (argc=, argv=) at main.c:396
    c = 


On 24.01.2019 18:33, Aki Tuomi wrote:
> This has been fixed with
>
> https://github.com/dovecot/core/commit/63a74b9e8e0604486a15a879e7f1a27257322400.patch
>
> Aki
>




Re: debian10+dovecot-2.2.33.2

2019-01-24 Thread Maciej Milaszewski IQ PL
Hi
Thanks. Core dump:

Core was generated by `dovecot/auth'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x7fa910394c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt full
#0  0x7fa910394c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#1  0x5604e6f98bc8 in password_scheme_register_crypt () at
password-scheme-crypt.c:144
    i = 0
    crypted = 
#2  0x5604e6f988dc in password_schemes_init () at password-scheme.c:875
    i = 
#3  0x5604e6f737fb in main_preinit () at main.c:188
    mod_set = {abi_version = 0x119a20dc , binary_name = 0x7fa9102c95e0 <_gnutls_log_level>
"", setting_name = 0x0,
  filter_callback = 0x7fa910241cfb, filter_context =
0x756e65470004, require_init_funcs = 1, debug = 0,
ignore_dlopen_errors = 1, ignore_missing = 0}
    services = 
    mod_set = 
    services = 
#4  main (argc=, argv=) at main.c:396
    c = 

On 24.01.2019 18:16, Aki Tuomi wrote:
> Try
>
> gdb /usr/local/dovecot/libexec/dovecot/auth /var/run/dovecot/core
> bt full
>
> Aki
>> On 24 January 2019 at 18:53 Maciej Milaszewski IQ PL <
>> maciej.milaszew...@iq.pl <mailto:maciej.milaszew...@iq.pl>> wrote:
>>
>>
>> Hi
>> Thenx but maby problem is in ssl or core file and gdb was incorrectly
>> used
>>
>>
>> # 2.2.36 (1f10bfa63): /etc/dovecot/dovecot.conf
>> # Pigeonhole version 0.4.24 (124e06aa)
>> # OS: Linux 4.19.0-1-amd64 x86_64 Debian buster/sid
>>
>> Jan 24 16:56:38 thebe-postfix dovecot: master: Dovecot v2.2.36
>> (1f10bfa63) starting up for imap, pop3, lmtp, sieve
>> Jan 24 16:56:39 thebe-postfix dovecot: master: Error: service(auth):
>> command startup failed, throttling for 2 secs
>> Jan 24 16:56:39 thebe-postfix dovecot: auth: Fatal: master:
>> service(auth): child 11078 killed with signal 11 (core dumped)
>> Jan 24 16:56:39 thebe-postfix dovecot: director: Error: Auth server
>> disconnected unexpectedly
>>
>> telnet IP 143
>> ..
>> Escape character is '^]'.
>> * OK Waiting for authentication process to respond..
>> * BYE Disconnected: Auth process broken
>> Connection closed by foreign host.
>>
>>
>> Jan 24 16:56:48 thebe-postfix dovecot: director: Error: Auth server
>> disconnected unexpectedly
>> Jan 24 16:56:56 thebe-postfix dovecot: master: Error: service(auth):
>> command startup failed, throttling for 16 secs
>> Jan 24 16:56:56 thebe-postfix dovecot: auth: Fatal: master:
>> service(auth): child 11082 killed with signal 11 (core dumped)
>> Jan 24 16:56:56 thebe-postfix dovecot: director: Error: Auth server
>> disconnected unexpectedly
>> Jan 24 16:56:56 thebe-postfix dovecot: director: Error: Auth server
>> disconnected unexpectedly
>> Jan 24 16:56:56 thebe-postfix dovecot: director: Error: Auth server
>> disconnected unexpectedly
>> Jan 24 16:56:56 thebe-postfix dovecot: director: Error: Auth server
>> disconnected unexpectedly
>> Jan 24 16:57:07 thebe-postfix dovecot: imap-login: Warning: Auth process
>> not responding, delayed sending initial response (greeting): user=<>,
>> rip=46.xxx.xxx.xxx, lip=46.xxx.xxx.xxx, secured,
>> session=
>> Jan 24 16:57:12 thebe-postfix dovecot: master: Error: service(auth):
>> command startup failed, throttling for 32 secs
>> Jan 24 16:57:12 thebe-postfix dovecot: auth: Fatal: master:
>> service(auth): child 11084 killed with signal 11 (core dumped)
>> Core was generated by `dovecot/auth'.
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> #0  0x7fa910394c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> (gdb) bt full
>> #0  0x7fa910394c7a in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> No symbol table info available.
>> #1  0x5604e6f98bc8 in password_scheme_register_crypt () at
>> password-scheme-crypt.c:144
>>     i = 0
>>     crypted = 
>> #2  0x5604e6f988dc in password_schemes_init () at
>> password-scheme.c:875
>>     i = 
>> #3  0x5604e6f737fb in main_preinit () at main.c:188
>>     mod_set = {abi_version = 0x119a20dc > memory at address 0x119a20dc>, binary_name = 0x7fa9102c95e0
>> <_gnutls_log_level> "", setting_name = 0x0,
>>   filter_callback = 0x7fa910241cfb, filter_context =
>> 0x756e65470004, require_init_funcs = 1, debug = 0,
>> ignore_dlopen_errors = 1, ignore_missing = 0}
>>     services = 
>>     mod_set = 
>>     services = 
>> #4  main (argc=, argv=) at main.c:396
>>     c = 
>> Jan 24 16:57:12 thebe-postfix dovecot: director:
>>

Re: debian10+dovecot-2.2.33.2

2019-01-24 Thread Maciej Milaszewski IQ PL
Hi
Thanks, but maybe the problem is in SSL, or the core file and gdb were used incorrectly.


# 2.2.36 (1f10bfa63): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24 (124e06aa)
# OS: Linux 4.19.0-1-amd64 x86_64 Debian buster/sid

Jan 24 16:56:38 thebe-postfix dovecot: master: Dovecot v2.2.36
(1f10bfa63) starting up for imap, pop3, lmtp, sieve
Jan 24 16:56:39 thebe-postfix dovecot: master: Error: service(auth):
command startup failed, throttling for 2 secs
Jan 24 16:56:39 thebe-postfix dovecot: auth: Fatal: master:
service(auth): child 11078 killed with signal 11 (core dumped)
Jan 24 16:56:39 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly

telnet IP 143
..
Escape character is '^]'.
* OK Waiting for authentication process to respond..
* BYE Disconnected: Auth process broken
Connection closed by foreign host.


Jan 24 16:56:48 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:56:56 thebe-postfix dovecot: master: Error: service(auth):
command startup failed, throttling for 16 secs
Jan 24 16:56:56 thebe-postfix dovecot: auth: Fatal: master:
service(auth): child 11082 killed with signal 11 (core dumped)
Jan 24 16:56:56 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:56:56 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:56:56 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:56:56 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:57:07 thebe-postfix dovecot: imap-login: Warning: Auth process
not responding, delayed sending initial response (greeting): user=<>,
rip=46.xxx.xxx.xxx, lip=46.xxx.xxx.xxx, secured, session=
Jan 24 16:57:12 thebe-postfix dovecot: master: Error: service(auth):
command startup failed, throttling for 32 secs
Jan 24 16:57:12 thebe-postfix dovecot: auth: Fatal: master:
service(auth): child 11084 killed with signal 11 (core dumped)
Jan 24 16:57:12 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:57:12 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:57:12 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:57:12 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:57:12 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:57:12 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:57:12 thebe-postfix dovecot: imap-login: Disconnected: Auth
process broken (disconnected before auth was ready, waited 15 secs):
user=<>, rip=46.xxx.xxx.xxx, lip=46.xxx.xxx.xxx, secured,
session=
Jan 24 16:57:12 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly
Jan 24 16:57:12 thebe-postfix dovecot: director: Error: Auth server
disconnected unexpectedly

core:

gdb --args /usr/local/dovecot/libexec/dovecot/imap-login
/var/run/dovecot/core
Starting program: /usr/local/dovecot-2.2.36/libexec/dovecot/imap-login
/var/run/dovecot/core
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
process 11878 is executing new program:
/usr/local/dovecot-2.2.36/bin/doveconf
process 11878 is executing new program:
/usr/local/dovecot-2.2.36/libexec/dovecot/imap-login
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Fatal: settings_check(ssl) failed: ssl_verify_client_cert set, but
ssl_ca not
[Inferior 1 (process 11878) exited with code 0131]


Fatal: settings_check(ssl) failed: ssl_verify_client_cert set, but
ssl_ca not


If this is true, why does it work fine on Debian 8.x?
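(For reference, not in the original mail: the error literally means ssl_verify_client_cert is enabled while ssl_ca is unset. A minimal sketch of the two ways to satisfy that check; the CA path is a placeholder:)

  # either disable client-certificate verification:
  ssl_verify_client_cert = no

  # or keep it enabled and point Dovecot at the CA that signed the client certs:
  ssl_verify_client_cert = yes
  ssl_ca = </etc/dovecot/client-ca.crt

Older builds may not have enforced this check at startup, which would explain why the same configuration ran on Debian 8.x.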



> Hi!
>
> Can you get backtrace from the core file?
>
> https://dovecot.org/bugreport.html
>
> Aki
>> On 24 January 2019 at 17:10 Maciej Milaszewski IQ PL
>> <maciej.milaszew...@iq.pl> wrote:
>>
>>
>> Hi
>> Thanks for the reply.
>>
>> Finally, for a test in my lab:
>>
>> I upgraded to the latest 2.2.36 and OS Debian 10.
>> This machine runs the dovecot director.
>>
>> But when I test via telnet:
>>
>> root@postfix:~# telnet 46.xxx.xxx.xxx 143
>> Trying 46.xxx.xxx.xxx...
>> Connected to 46.xxx.xxx.xxx.
>> Escape character is '^]'.
>> * BYE Disconnected: Auth process broken
>> Connection closed by foreign host.
>>
>> In log:
>> Jan 24 16:02:07 postfix dovecot: master: Dovecot v2.2.36 (1f10bfa63)
>> starting up for imap, pop3, lmtp, sieve
>> Jan 24 16:02:07 postfix dovecot: master: Error: service(auth): command
>> startup failed, throttling for 2 secs
>> Jan 24 16:02:07 postfix dovecot: auth: Fatal: master: service(auth):
>

Re: debian10+dovecot-2.2.33.2

2019-01-24 Thread Maciej Milaszewski IQ PL
Hi
Thanks for the reply.

Finally, for a test in my lab:

I upgraded to the latest 2.2.36 and OS Debian 10.
This machine runs the dovecot director.

But when I test via telnet:

root@postfix:~# telnet 46.xxx.xxx.xxx 143
Trying 46.xxx.xxx.xxx...
Connected to 46.xxx.xxx.xxx.
Escape character is '^]'.
* BYE Disconnected: Auth process broken
Connection closed by foreign host.

In log:
Jan 24 16:02:07 postfix dovecot: master: Dovecot v2.2.36 (1f10bfa63)
starting up for imap, pop3, lmtp, sieve
Jan 24 16:02:07 postfix dovecot: master: Error: service(auth): command
startup failed, throttling for 2 secs
Jan 24 16:02:07 postfix dovecot: auth: Fatal: master: service(auth):
child 10443 killed with signal 11 (core dumped)
Jan 24 16:02:07 postfix dovecot: director: Error: Auth server
disconnected unexpectedly


in strace:

ioctl(3, FIONBIO, [1])  = 0
setsockopt(3, SOL_SOCKET, SO_OOBINLINE, [1], 4) = 0
select(4, [0 3], [], [3], {tv_sec=0, tv_usec=0}) = 0 (Timeout)
select(4, [0 3], [], [3], NULL) = 1 (in [3])
recvfrom(3, "* BYE Disconnected: Auth process"..., 8191, 0, NULL, NULL) = 41
select(4, [0 3], [1], [3], {tv_sec=0, tv_usec=0}) = 2 (in [3], out [1],
left {tv_sec=0, tv_usec=0})
write(1, "* BYE Disconnected: Auth process"..., 40* BYE Disconnected:
Auth process broken
) = 40
recvfrom(3, "", 8151, 0, NULL, NULL)    = 0
..
ioctl(0, TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig icanon echo
...}) = 0
ioctl(0, TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(0, FIONBIO, [0])  = 0
ioctl(1, FIONBIO, [0])  = 0
write(2, "Connection closed by foreign hos"..., 35Connection closed by
foreign host.
) = 35
close(-1)   = -1 EBADF (Bad file descriptor)
exit_group(1)   = ?
+++ exited with 1 +++




The second machine (backend with the dovecot client) is old Debian (8.11)
with 2.2.32 (and I also tested 2.2.36).

telnet 10.0.0.24 143
Trying 10.0.0.24...
Connected to 10.0.0.24.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
IDLE AUTH=PLAIN AUTH=LOGIN] Imap ready.

and it works fine.

In production I have Debian 8.11 and 9.x, all working fine (same
configuration etc.)

Where can I find a solution?

On 23.01.2019 08:35, Aki Tuomi via dovecot wrote:
> Maybe you should try 2.2.36?
>
> Aki
>
> On 22.1.2019 16.47, Maciej Milaszewski IQ PL wrote:
>> Hi
>> I have little problem with debian10 and dovecot 2.2.33.2
>>
>> ps -ax
>> 21815 ?    S  0:00 dovecot/pop3-login director
>> 21816 ?    S  0:00 dovecot/pop3-login director
>> 21817 ?    S  0:00 dovecot/pop3-login director
>> 21818 ?    S  0:00 dovecot/pop3-login director
>> 21819 ?    S  0:00 dovecot/pop3-login director
>> 21821 ?    S  0:00 dovecot/pop3-login director
>> 21822 ?    S  0:00 dovecot/pop3-login director
>>
>> But when I test via telnet like this:
>>
>> telnet 46.xxx.xxx.113 143
>> Trying 46.xxx.xxx.113...
>> Connected to 46.xxx.xxx.113.
>> Escape character is '^]'.
>> * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
>> IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Imap ready.
>> ^]
>> telnet> q
>> Connection closed.
>>
>> and ps:
>> 21808 ?    S  0:00 dovecot/imap-login  
>> ???
>> 21826 ?    S  0:00 dovecot/imap-login  
>> ?
>>
>> strace:
>> strace -p 21808
>> strace: Process 21808 attached
>> gettimeofday({tv_sec=1548168193, tv_usec=419420}, NULL) = 0
>> epoll_wait(14,
>>
>> doveconf:
>> doveconf -n |head -n 4
>> # 2.2.33.2 (d6601f4ec): /etc/dovecot/dovecot.conf
>> # Pigeonhole version 0.4.8 (0c4ae064f307+)
>> # OS: Linux 4.19.0-1-amd64 x86_64 Debian buster/sid
>> auth_cache_negative_ttl = 5 mins
>>
>>
>>
>> Probably dovecot is not closing imap-login correctly, and the same for
>> pop3-login. Any idea?
>>
>>




debian10+dovecot-2.2.33.2

2019-01-22 Thread Maciej Milaszewski IQ PL
Hi
I have a little problem with Debian 10 and dovecot 2.2.33.2

ps -ax
21815 ?    S  0:00 dovecot/pop3-login director
21816 ?    S  0:00 dovecot/pop3-login director
21817 ?    S  0:00 dovecot/pop3-login director
21818 ?    S  0:00 dovecot/pop3-login director
21819 ?    S  0:00 dovecot/pop3-login director
21821 ?    S  0:00 dovecot/pop3-login director
21822 ?    S  0:00 dovecot/pop3-login director

But when I test via telnet like this:

telnet 46.xxx.xxx.113 143
Trying 46.xxx.xxx.113...
Connected to 46.xxx.xxx.113.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE
IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN] Imap ready.
^]
telnet> q
Connection closed.

and ps:
21808 ?    S  0:00 dovecot/imap-login  
???
21826 ?    S  0:00 dovecot/imap-login  
?

strace:
strace -p 21808
strace: Process 21808 attached
gettimeofday({tv_sec=1548168193, tv_usec=419420}, NULL) = 0
epoll_wait(14,

doveconf:
doveconf -n |head -n 4
# 2.2.33.2 (d6601f4ec): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.8 (0c4ae064f307+)
# OS: Linux 4.19.0-1-amd64 x86_64 Debian buster/sid
auth_cache_negative_ttl = 5 mins



Probably dovecot is not closing imap-login correctly, and the same for
pop3-login. Any idea?






doveadm + HA

2019-01-07 Thread Maciej Milaszewski IQ PL
Hi
I have two director servers in a ring and 5 dovecot servers (2.2.36).
The IP for IMAP and POP3 is a VIP (keepalived).


What is the best solution to get real HA for the 5 dovecot servers?
Maybe corosync+pacemaker? But that solution is too problematic and heavyweight.

Why do I need HA?
doveadm is too slow to help here: doveadm director does not know that one
machine broke down and still sends traffic to it.
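(A sketch only, not from this thread: one manual workaround is to pull a dead backend out of the director by hand. The IP below is the example backend, and the exact behaviour should be checked against the doveadm-director man page for your version.)

  doveadm director status               # list backends and their user counts
  doveadm director flush 10.0.100.24    # drop the user assignments of the dead backend
  doveadm director remove 10.0.100.24   # stop assigning new users to it
  doveadm director status               # verify it is gone

A keepalived notify script or an external health check could run the same commands automatically when a backend stops answering.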



Re: LDAP stored quota

2018-11-22 Thread Maciej Milaszewski IQ PL
Hi
Did you try a quota recalc and then a get?
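(Not part of the original reply — a minimal sketch of the commands meant here, with a placeholder username:)

  doveadm quota recalc -u test    # recompute usage from the mailbox contents
  doveadm quota get -u test       # show current usage and limits

If the limit still shows as '-', the quota_rule returned by the userdb is probably not being applied.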

On 22.11.2018 08:56, Vincent Seynhaeve wrote:
>
> Hello,
>
> I'm trying to set up LDAP stored quota on Dovecot but it doesn't work
> and doesn't get reported by the command doveadm quota get.
>
> I'm using the field departmentNumber in my LDAP server to store the quota.
>
>
> doveadm quota get -u test
>
> Quota name Type    Value Limit %
> User quota STORAGE 0     -     0
> User quota MESSAGE 0     -     0
>
>
> log file associated with doveadm quota get command:
>
> Nov 21 11:38:47 imap dovecot: auth: Debug: master in:
> USER#0111#011test#011service=doveadm
> Nov 21 11:38:47 imap dovecot: auth: Debug: ldap(test): user search:
> base=ou=People,dc=example,dc=com scope=subtree
> filter=(&(objectClass=posixAccount)(uid=test)) fields=departmentNumber
> Nov 21 11:38:47 imap dovecot: auth: Debug: ldap(test): result:
> departmentNumber=1M; departmentNumber unused
> Nov 21 11:38:47 imap dovecot: auth: Debug: ldap(test): result:
> departmentNumber=1M
> Nov 21 11:38:47 imap dovecot: auth: Debug: userdb out:
> USER#0111#011test#011mailRoutingAddress=user =uid=vmail =gid=mail
> =home=/var/mail//test =quota_rule=*:bytes=1M
>
>
> Below are my configuration files:
>
> conf.d/10-mail.conf
>
> mail_plugins =  $mail_plugins quota
>
>
> conf.d/20-imap.conf
>
>
> protocol imap {
>   mail_plugins = $mail_plugins imap_quota
> }
>
>
> conf.d/90-quota.conf
>
> plugin {
>
>   quota = maildir:User quota
>   quota_rule2 = Trash:storage=+100M
>   quota_grace = 10%%
>   quota_status_success = DUNNO
>   quota_status_nouser = DUNNO
>   quota_status_overquota = "552 5.2.2 Mailbox is full"
>
> }
>
>
> dovecot-ldap.conf.ext
>
> user_attrs= \
> =mailRoutingAddress=user \
> =uid=vmail \
> =gid=mail \
> =home=/var/mail/%d/%n \
> =quota_rule=*:bytes=%{ldap:departmentNumber}
>
>
> Can somebody help me with this or give me some advice for debugging?
>





Re: slow mailbox refreshes

2018-07-10 Thread Maciej Milaszewski IQ PL
Hello
Fabian, for tuning maybe this will help you:

- change the sort type in Roundcube

Problem: https://github.com/roundcube/roundcubemail/issues/3556
solved: https://github.com/roundcube/roundcubemail/issues/5072#ticket

- if you have many big accounts, use Solr for full-text search (a config
sketch follows below)

The problem is probably that dovecot itself gives you a timeout:
- on the first fetch you get a timeout
- on the second, everything is ok
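(Added for context, not part of the original mail — a minimal sketch of a typical fts_solr setup; the Solr URL is a placeholder:)

  mail_plugins = $mail_plugins fts fts_solr

  plugin {
    fts = solr
    fts_solr = url=http://solr.example.com:8983/solr/dovecot/
  }

Existing mailboxes can then be indexed with: doveadm index -u USER '*'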



This is a simple script to test it:

-- test.py ---

#!/usr/bin/env python3

import imaplib
import time
import logging
import re

class IMAPLoginAdapter(logging.LoggerAdapter):

    def process(self, msg, kwargs):
        return '[%s] %s' % (self.extra['imap_login'], msg), kwargs

def timeit(method):

    def timed(*args, **kw):
        ts = time.time()
        result = method(*args, **kw)
        te = time.time()
        imap_logger.info("%s %2.2f sec", method.__name__, te - ts)
        return result

    return timed

@timeit
def test_search_all(imap):
    type_, data = imap.uid("search", None, "ALL")
    uids = data[0].split()
    imap_logger.info("%s %d", type_, len(uids))
    return uids

@timeit
def test_peek_index(imap, uids):
    page_uids = uids[:50]
    page_uids = b",".join(page_uids)
    type_, data = imap.uid("fetch", page_uids, "(INTERNALDATE BODY.PEEK[HEADER.FIELDS (DATE)])")
    imap_logger.info("%s %d", type_, len(data))

@timeit
def test_size(imap):
    type_, data = imap.fetch("1:*", "(RFC822.SIZE)")
    size = 0
    for d in data:
        match = re.search(br"^[0-9]+ \(RFC822\.SIZE ([0-9]+)\)$", d)
        if not match:
            raise ValueError
        size += int(match.group(1))
    imap_logger.info("%s %d / %d", type_, size, len(data))

def imap_init(config):
    if config.has_option("IMAP", "port"):
        imap = imaplib.IMAP4_SSL(config.get("IMAP", "server"),
                                 config.get("IMAP", "port"))
    else:
        imap = imaplib.IMAP4_SSL(config.get("IMAP", "server"))
    imap.login(config.get("IMAP", "login"), config.get("IMAP", "password"))
    imap.select("INBOX")
    return imap

def imap_close(imap):
    imap.close()
    imap.logout()

def main():
    import argparse
    import configparser
    parser = argparse.ArgumentParser()
    parser.add_argument("config_file", help="IMAP access config file")
    args = parser.parse_args()
    config = configparser.ConfigParser()
    config.read(args.config_file)
    imap_logger.extra['imap_login'] = config.get("IMAP", "login")
    imap = imap_init(config)
    # uids = test_search_all(imap)
    # test_peek_index(imap, uids)
    test_size(imap)
    imap_close(imap)

logging.basicConfig(
    # filename="test.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s"
)

imap_logger = IMAPLoginAdapter(logging.getLogger(), {'imap_login': None})

if __name__ == "__main__":
    main()
-- end test.py -

example.ini:

-- start example.ini 

[IMAP]
server = imap.youserver.org
login = your_username
password = your_password

--- stop example.ini -


example:

./test.py example.ini
2018-07-09 16:58:48,324 [duzomai...@dasit1.foomydomain.org] OK 239440164 / 2
2018-07-09 16:58:48,325 [duzomai...@dasit1.foomydomain.org] test_size 0.29 sec




W dniu 09.07.2018 o 16:37, Fabian A. Santiago pisze:
> Hello,
>
> I am using dovecot 2.3.2 on my private email server in conjunction with:
>
> centos 7.5
> apache 2.4.6
> mariadb 10.2.16
> roundcube mail 1.3.6
> php 5.6.36
> postfix 2.10.1
>
>
> I have one mailbox with nearly 30k messages in it dispersed across
> several folders. it's often very slow in refreshing the message list,
> especially in the one largest 25k+ message folder. is this simply to
> be expected based on my message count or is there some kind of
> performance optimization and tuning i can do to improve the response
> times?
>
> The mail server itself is a linode hosted VPS:
>
> intel xeon e5-2680 v2 @ 2.80 GHz, 4 cores
> 8 GB real memory
> 2 GB virtual memory
> 200 GB local storage (ext4)
> KVM hypervisor
>
> Thanks everyone for any guidance you can offer.
>
>






dovecot-2.3.36 and flush

2018-06-27 Thread Maciej Milaszewski IQ PL
Hi
I have a problem with doveadm and flush

# 2.2.36 (1f10bfa63): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24.rc1 (debaa297)
# OS: Linux 3.16.0-5-amd64 x86_64 Debian 8.10

example:
#doveadm director status
mail server ip tag vhosts state state changed users
10.0.100.24    1  up    - 65
10.0.100.25    100    up    - 989

#doveadm director flush 10.0.100.24

#doveadm director status
mail server ip tag vhosts state state changed users
10.0.100.24    1  up    - 65
10.0.100.25    100    up    - 990

The dovecot node 10.0.100.24 is not flushed.
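(A sketch, not from the original mail: one way to see whether assignments actually move is to compare the per-user mapping before and after; the IP is the example backend.)

  doveadm director map 10.0.100.24     # users currently assigned to this backend
  doveadm director flush 10.0.100.24
  doveadm director map 10.0.100.24     # should be empty again if the flush worked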


dovecot was compiled from source:

./configure --prefix=/usr/local/dovecot-2.2.36 --sysconfdir=/etc
--with-ldap=yes --with-mysql --with-ssl=openssl --with-solr
--with-storages=maildir,imapc

pigeonhole:
./configure --prefix=/usr/local/pigeonhole-0.4.24/
--with-dovecot=/usr/local/dovecot-2.2.36/lib/dovecot/
--with-managesieve=yes --with-ldap=yes

strace command:
#strace doveadm director flush 10.0.100.24


----- full strace -----

execve("/usr/local/dovecot/bin/doveadm", ["doveadm", "director",
"flush", "10.0.100.24"], [/* 26 vars */]) = 0
brk(0)  = 0x126c000
access("/etc/ld.so.nohwcap", F_OK)  = -1 ENOENT (No such file or
directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= 0x7efe7ce0
access("/etc/ld.so.preload", R_OK)  = -1 ENOENT (No such file or
directory)
open("/usr/local/dovecot-2.2.36/lib/dovecot/tls/x86_64/libz.so.1",
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/local/dovecot-2.2.36/lib/dovecot/tls/x86_64", 0x7fff1ebbfed0)
= -1 ENOENT (No such file or directory)
open("/usr/local/dovecot-2.2.36/lib/dovecot/tls/libz.so.1",
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/local/dovecot-2.2.36/lib/dovecot/tls", 0x7fff1ebbfed0) = -1
ENOENT (No such file or directory)
open("/usr/local/dovecot-2.2.36/lib/dovecot/x86_64/libz.so.1",
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/local/dovecot-2.2.36/lib/dovecot/x86_64", 0x7fff1ebbfed0) =
-1 ENOENT (No such file or directory)
open("/usr/local/dovecot-2.2.36/lib/dovecot/libz.so.1",
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/usr/local/dovecot-2.2.36/lib/dovecot",
{st_mode=S_IFDIR|S_ISGID|0755, st_size=12288, ...}) = 0
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=25845, ...}) = 0
mmap(NULL, 25845, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7efe7cdf9000
close(3)    = 0
access("/etc/ld.so.nohwcap", F_OK)  = -1 ENOENT (No such file or
directory)
open("/lib/x86_64-linux-gnu/libz.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0
\"\0\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=109144, ...}) = 0
mmap(NULL, 2204200, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0x7efe7c9c7000
mprotect(0x7efe7c9e1000, 2093056, PROT_NONE) = 0
mmap(0x7efe7cbe, 8192, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19000) = 0x7efe7cbe
close(3)    = 0
open("/usr/local/dovecot-2.2.36/lib/dovecot/libcrypt.so.1",
O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
access("/etc/ld.so.nohwcap", F_OK)  = -1 ENOENT (No such file or
directory)
open("/lib/x86_64-linux-gnu/libcrypt.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3,
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\300\f\0\0\0\0\0\0"...,
832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=35176, ...}) = 0
mmap(NULL, 2318848, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0x7efe7c79
mprotect(0x7efe7c798000, 2093056, PROT_NONE) = 0
mmap(0x7efe7c997000, 8192, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7efe7c997000
mmap(0x7efe7c999000, 184832, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7efe7c999000
close(3)    = 0
open("/usr/local/dovecot-2.2.36/lib/dovecot/libdovecot-storage.so.0",
O_RDONLY|O_CLOEXEC) = 3
read(3,
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@\"\3\0\0\0\0\0"..., 832)
= 832
fstat(3, {st_mode=S_IFREG|0755, st_size=6473496, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
= 0x7efe7cdf8000
mmap(NULL, 3174264, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3,
0) = 0x7efe7c489000
mprotect(0x7efe7c587000, 2097152, PROT_NONE) = 0
mmap(0x7efe7c787000, 36864, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xfe000) = 0x7efe7c787000
close(3)    = 0
open("/usr/local/dovecot-2.2.36/lib/dovecot/libdovecot.so.0",
O_RDONLY|O_CLOEXEC) = 3
read(3,
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0`\t\3\0\0\0\0\0"..., 832)
= 832
fstat(3, {st_mode=S_IFREG|0755, st_size=5219824, ...}) = 0
mmap(NULL, 3345960,