Re: Enabling auth_cache_verify_password_with_worker disables proxy mode

2021-01-28 Thread Aki Tuomi


> On 28/01/2021 20:22 Malte Schmidt  wrote:
> 
> 
> Setting "auth_cache_verify_password_with_worker = yes" in order to leverage 
> multiple cores for the Dovecot authentication-process causes Dovecot not to 
> proxy anymore. With debug-logging I figured:
> 
> auth_cache_verify_password_with_worker = no
> 
> passdb out: OK 1 user=username host=bla port=10993 ssl=any-cert 
> mail_crypt_global_public_key=key mail_crypt_global_private_key=otherkey 
> hostip=123.123.123.123 proxy pass=pw
> 
> auth_cache_verify_password_with_worker = yes
> 
> passdb out: OK 1 user=username
> 
> The rest seems missing.
> 
> Dovecot version: v2.3.11.3
> 
> Searching for this issue, I found at least two posts mentioning the same 
> symptoms:
> 
> https://dovecot.org/pipermail/dovecot/2018-April/111583.html
> https://dovecot.org/pipermail/dovecot/2020-April/118564.html
> https://listen.jpberlin.de/pipermail/dovecot/2020-April/001915.html (same as 
> the English one from April 2020)

Hi!

Thanks for taking the time to report this. We are tracking this now as DOP-2235.

Aki


Re: Master user password mismatch

2021-01-28 Thread Aki Tuomi
Did you try with `doveadm pw -t 'hash-goes-here'`?

Sometimes you need to use

passdb {
  driver = passwd-file
  args = scheme=your-pw-scheme /path/to/file
}

Note that the path must be placed last.
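For background, `htpasswd -s` writes RFC 2307 style `{SHA}` entries (base64 of the raw SHA-1 digest), so that is the scheme a passwd-file would need to declare. A minimal illustration of that encoding and comparison (plain Python, not Dovecot code; the password below is just an example):

```python
import base64
import hashlib

def sha_scheme_hash(password: str) -> str:
    """RFC 2307 {SHA} encoding: base64 of the raw SHA-1 digest,
    the same format `htpasswd -s` writes."""
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return "{SHA}" + base64.b64encode(digest).decode("ascii")

def verify(password: str, stored: str) -> bool:
    """What a passwd-file lookup conceptually does for a {SHA} entry."""
    return sha_scheme_hash(password) == stored

# "secret" encodes to the well-known value below:
assert verify("secret", "{SHA}5en6G6MezRroT3XKqkdPOmY/BfQ=")
```

If `doveadm pw -t` rejects the stored hash, the scheme prefix is the first thing to check.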

Aki

> On 28/01/2021 20:53 Gregory Sloop  wrote:
> 
> 
> Anyone?
>  
>  
>  
> 
> 
>Trying to get master users working.
>  
>  I'm not sure what info would be best, but here's the detail I have now, in 
> trying to get it working.
>  Setup the master user file, and enabled master users in the conf files.
>  Created the master user file and user/password hash.
>  
>  Turned on authentication debug.
>  When I try something like:
>  telnet localhost 143
>  and then supply the master user login - kind of like this:
>  1 login joeb*jb-master somepassword
>  
>  I get this in the logs. (Some obfuscation done.)
>  ---
>  dovecot: auth: Debug: auth client connected (pid=24985)
>  dovecot: auth: Debug: client in: 
> AUTH#0111#011PLAIN#011service=imap#011secured#011session=MM6QC9a5SIYB#011lip=::1#011rip=::1#011lport=143#011rport=34376#011resp=
>  dovecot: auth: Debug: 
> passwd-file(jb-master,::1,master,): Master 
> user lookup for login: joeb
>  dovecot: auth: Debug: 
> passwd-file(jb-master,::1,master,): lookup: 
> user=jb-master file=/etc/dovecot/masterusers-test
>  dovecot: auth: 
> passwd-file(jb-master,::1,master,): 
> Password mismatch
>  dovecot: auth: Debug: client passdb out: FAIL#0111#011user=jb-master
>  ---
>  
>  Yet I can use
>  htpasswd -b -c -s /etc/dovecot/masterusers-test jb-master somepassword
>  And this succeeds. (I created the masterusers-test file with htpasswd)
>  
>  So, I must have the password right, but dovecot is still failing the auth, 
> claiming a bad password.
>  
>  How do I go about getting more detail so I can determine what's wrong?
>  
>  TIA
>  -Greg
>  
>


Re: Infinite loop when running "doveadm quota get -A" from Dovecot Director with 500 users

2021-01-28 Thread Aki Tuomi
Thanks for the patch, we'll take a look at it. 
Aki

> On 29/01/2021 04:49 Duc Anh Do  wrote:
> 
> 
> Hi all,
> 
> Because I think this is a race condition, instead of using only one 
> current_ioloop:
>   * I create an ioloop for each Backend in the Director's list
>   * I think connections from a Director to the same Backend are synchronous, so I 
> don't create an ioloop for each connection
>   * I think the ioloops must be destroyed in the correct order, so I use a linked list 
> to manage them: create then push, pop then destroy (sorry, I don't know 
> of any existing structure in the Dovecot source that I can re-use)
> I tested my patch with both "doveadm quota get -A" and "doveadm quota get -u 
> xxx" many times. No errors occur (timeout leaks, segmentation faults, etc.).
> If you are interested in my patch, any comment is highly appreciated. I 
> modified source files that might be shared with other doveadm commands, so 
> I'm not sure it's 100% safe.
> 
> Thanks,
> Anh Do
> 
> 
> On Wed, 27 Jan 2021 at 16:20, Duc Anh Do  wrote:
> > Hi all,
> > 
> > I have one Dovecot Director, two Dovecot Backends and one LDAP server with 
> > about 500 users. I would like to run doveadm quota get -A from the Director.
> > In each Backend, there is no problem when running the command:
> > # doveadm quota get -A
> > user1 User quota STORAGE 0 10485760 0
> > user1 User quota MESSAGE 0 - 0
> > …
> > user500 User quota STORAGE 0 10485760 0
> > user500 User quota MESSAGE 0 - 0
> > 
> > However, when I run from the Director, the command might get stuck in an 
> > infinite loop (I have to kill it to quit):
> > # doveadm quota get -A
> > user1 User quota STORAGE 0 10485760 0
> > user1 User quota MESSAGE 0 - 0
> > …
> > user49 User quota STORAGE 0 10485760 0
> > user49 User quota MESSAGE 0 - 0
> > user66 User quota STORAGE 0 10485760 0
> > user66 User quota MESSAGE 0 - 0
> > ^Cdoveadm(user86): Error: doveadm server failure
> > doveadm: Error: Failed to iterate through some users
> > doveadm: Error: backend2.local:24245: Command quota get failed for user53: 
> > EOF
> > doveadm: Error: backend1.local:24245: Command quota get failed for user66: 
> > EOF
> > doveadm: Error: Aborted
> > 
> > This problem occurs in both Dovecot 2.2.36 and Dovecot 2.3.11, 2.3.13 (I 
> > build Dovecot from source). It's ok for me to get quota of one user from 
> > the Director:
> > # doveadm quota get -u user1
> > Quota name Type Value Limit %
> > User quota STORAGE 0 10485760 0
> > User quota MESSAGE 0 - 0
> > And if there's only one Backend, doveadm quota get -A from the Director 
> > works well too.
> > 
> > After investigating, I found the infinite loop:
> > File src/doveadm/doveadm-mail-server.c:
> > static void doveadm_server_flush_one(struct doveadm_server *server)
> > {
> >  unsigned int count = array_count(&server->queue);
> > 
> >  do {
> >  io_loop_run(current_ioloop);
> >  } while (array_count(&server->queue) == count &&
> >  doveadm_server_have_used_connections(server) &&
> >  !DOVEADM_MAIL_SERVER_FAILED());
> > }
> > 
> > When there are many Backends, I see that only the global variable current_ioloop 
> > is used for notification in the callback function. Might this be a race condition?
> > I understand there's a workaround to do my work:
> > 
> >   * Run doveadm user '*' to get all users
> >   * Loop through all users and run doveadm quota get -u xxx
> > 
> > Thanks,
> > Anh Do
> 
> 
> -- 
> 
> Thanks,
> Duc Anh
> 
> Email: doducanh2...@gmail.com
> Skype: ducanh.do88
> Mobile: +84975730526


SMTP tool for Email validation

2021-01-28 Thread Amol Kale
Hi,

 

We are looking for a tool for bulk SMTP testing / email validation using
telnet or a similar protocol (one which doesn't actually send the mails).

 

It generally involves steps like an NS lookup to check the MX of the destination
server, opening port 25, using telnet to communicate with the destination server,
and finally checking the correctness of the user's email account.

 

Please get in touch if you have such a tool ready, or if you can develop one.

 

Process example-

https://blog.mailtrap.io/verify-email-address-without-sending/
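The steps described above can be sketched with Python's standard smtplib. Hostnames and addresses below are placeholders, and keep in mind that greylisting and catch-all servers make such probes only indicative:

```python
import smtplib

def classify_rcpt(code: int) -> str:
    """Interpret the SMTP reply code returned for RCPT TO."""
    if code in (250, 251):
        return "accepted"      # mailbox exists (or server is catch-all)
    if 500 <= code < 600:
        return "rejected"      # e.g. 550 user unknown
    return "inconclusive"      # 4xx temporary failure / greylisting

def probe_mailbox(mx_host: str, address: str,
                  helo: str = "verifier.example",
                  mail_from: str = "probe@verifier.example") -> str:
    """Open an SMTP session, stop after RCPT TO, and never send DATA,
    so no mail is actually delivered."""
    with smtplib.SMTP(mx_host, 25, timeout=15) as smtp:
        smtp.helo(helo)
        smtp.mail(mail_from)
        code, _reply = smtp.rcpt(address)
        smtp.rset()            # abort the transaction
        return classify_rcpt(code)
```

probe_mailbox("mx.example.com", "someone@example.com") would return "accepted", "rejected", or "inconclusive"; the MX host itself must first be resolved with a DNS MX lookup, which the standard library does not provide.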

 

Thanks & regards

Amol Kale

Founder Director

Talent Trackers HR

7350002596



Re: [EXT] Re: Reminder Re: Dovecot Gmail OAuth2.0 Setting Question

2021-01-28 Thread 福田泰葵
Google is responding with "Unauthorized".
So I need to send my credentials, such as the access token, in the request
for authentication in Google's userinfo API request.
But I don't know how to configure Dovecot to achieve that.
Could you please help me with this?
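A hedged sketch of what a dovecot-oauth2.conf.ext along those lines could look like (the endpoint URL and attribute name are assumptions based on the rawlog in this thread, not a verified working setup):

```
# dovecot-oauth2.conf.ext -- illustrative sketch only
introspection_mode = auth
introspection_url = https://www.googleapis.com/oauth2/v2/userinfo
username_attribute = email
rawlog_dir = /tmp/oauth2
debug = yes
```

With introspection_mode = auth, Dovecot sends the client's token as an Authorization: Bearer header, which matches the request shown in the rawlog; a 401 from Google then usually means the token itself is expired or lacks the needed scope.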

Best regards,
-
〒163-6017 Sumitomo Fudosan Shinjuku Oak Tower, 6-8-1 Nishi-Shinjuku, Shinjuku-ku, Tokyo
JustSystems Corporation, Technology Planning Office, Information Systems Group, Taiki Fukuda
e-mail: taiki.fuk...@justsystems.com
ext.: 5158
TEL: 03-5324-7900
mobile: 080-6198-7328
-


On Fri, 29 Jan 2021 at 3:30, Odhiambo Washington wrote:

> Your clue is in the log:
>
> 1611654464.207331 "message": "Request is missing required authentication
> credential. Expected OAuth 2 access token, login cookie or other valid
> authentication credential. See
> https://developers.google.com/identity/sign-in/web/devconsole-project.",
> 1611654464.207331 "status": "UNAUTHENTICATED" 1611654464.207331 }
>
>
>
> On Thu, 28 Jan 2021 at 09:25, 福田泰葵  wrote:
>
>> Dear Mr. Tuomi
>>
>> Do you have any idea how to solve this problem?
>>
>> Best regards,
>>
>>
>>
>> On Tue, 26 Jan 2021 at 18:51, 福田泰葵 wrote:
>>
>>> Dear Mr. Tuomi
>>>
>>> Thank you for the instruction.
>>> I was able to output rawlogs.
>>> The following is the result.
>>>
>>> 20210126-184744.1.1.in:
>>>
>>> 1611654464.207331 HTTP/1.1 401 Unauthorized
>>> 1611654464.207331 Cache-Control: no-cache, no-store, max-age=0, 
>>> must-revalidate
>>> 1611654464.207331 Pragma: no-cache
>>> 1611654464.207331 Expires: Mon, 01 Jan 1990 00:00:00 GMT
>>> 1611654464.207331 Date: Tue, 26 Jan 2021 09:47:44 GMT
>>> 1611654464.207331 Vary: X-Origin
>>> 1611654464.207331 Vary: Referer
>>> 1611654464.207331 Content-Type: application/json; charset=UTF-8
>>> 1611654464.207331 Server: ESF
>>> 1611654464.207331 X-XSS-Protection: 0
>>> 1611654464.207331 X-Frame-Options: SAMEORIGIN
>>> 1611654464.207331 X-Content-Type-Options: nosniff
>>> 1611654464.207331 Alt-Svc: h3-29=":443"; ma=2592000,h3-T051=":443"; 
>>> ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; 
>>> ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
>>> 1611654464.207331 Accept-Ranges: none
>>> 1611654464.207331 Vary: Origin,Accept-Encoding
>>> 1611654464.207331 Transfer-Encoding: chunked
>>> 1611654464.207331
>>> 1611654464.207331 130
>>> 1611654464.207331 {
>>> 1611654464.207331   "error": {
>>> 1611654464.207331 "code": 401,
>>> 1611654464.207331 "message": "Request is missing required 
>>> authentication credential. Expected OAuth 2 access token, login cookie or 
>>> other valid authentication credential. See 
>>> https://developers.google.com/identity/sign-in/web/devconsole-project.",
>>> 1611654464.207331 "status": "UNAUTHENTICATED"
>>> 1611654464.207331   }
>>> 1611654464.207331 }
>>> 1611654464.207331
>>> 1611654464.207737 0
>>> 1611654464.207737
>>>
>>> 20210126-184744.1.1.out:
>>>
>>> 1611654464.165704 GET /oauth2/v2/userinfo HTTP/1.1
>>> 1611654464.165704 Host: www.googleapis.com
>>> 1611654464.165704 Date: Tue, 26 Jan 2021 09:47:44 GMT
>>> 1611654464.165704 User-Agent: dovecot-oauth2-passdb/2.3.13
>>> 1611654464.165704 Connection: Keep-Alive
>>> 1611654464.165727 Authorization: Bearer ??
>>> 1611654464.165730
>>>
>>> Best regards,
>>>
>>> On Tue, 26 Jan 2021 at 18:35, Aki Tuomi aki.tu...@open-xchange.com wrote:
>>>
>>> No, the directory must exist. I'm sorry I wasn't clear enough when I
 replied last time, but dovecot will not create the directory. You need to
 create it and make it writable.

 Aki

 > On 26/01/2021 11:09 福田泰葵  wrote:
 >
 >
 > Dear Mr. Tuomi
 >
 > Sorry, I have added the setting PrivateTmp=no to
 /etc/systemd/system/dovecot.service.d/override.conf
 > However, /tmp/oauth2 was not created.
 >
 > Best regards,
 >
 >

Re: Infinite loop when running "doveadm quota get -A" from Dovecot Director with 500 users

2021-01-28 Thread Duc Anh Do
Hi all,

Because I think this is a race condition, instead of using only one
*current_ioloop*:

   - I create an ioloop for each Backend in the Director's list
   - I think connections from a Director to the same Backend are synchronous, so
   I don't create an ioloop for each connection
   - I think the ioloops must be destroyed in the correct order, so I use a linked
   list to manage them: create then push, pop then destroy (sorry, I don't
   know of any existing structure in the Dovecot source that I can re-use)

I tested my patch with both "doveadm quota get -A" and "doveadm quota get
-u xxx" many times. No errors occur (timeout leaks, segmentation faults, etc.).
If you are interested in my patch, any comment is highly appreciated. I
modified source files that might be shared with other doveadm commands, so
I'm not sure it's 100% safe.

Thanks,
Anh Do

On Wed, 27 Jan 2021 at 16:20, Duc Anh Do  wrote:

> Hi all,
>
> I have one Dovecot Director, two Dovecot Backends and one LDAP server with
> about 500 users. I would like to run *doveadm quota get -A* from the
> Director.
> In each Backend, there is no problem when running the command:
> # doveadm quota get -A
> user1   User quota STORAGE 0 10485760 0
> user1   User quota MESSAGE 0 -        0
> …
> user500 User quota STORAGE 0 10485760 0
> user500 User quota MESSAGE 0 -        0
>
> However, when I run from the Director, the command might get stuck in an
> infinite loop (I have to kill it to quit):
> # doveadm quota get -A
> user1  User quota STORAGE 0 10485760 0
> user1  User quota MESSAGE 0 -        0
> …
> user49 User quota STORAGE 0 10485760 0
> user49 User quota MESSAGE 0 -        0
> user66 User quota STORAGE 0 10485760 0
> user66 User quota MESSAGE 0 -        0
> ^Cdoveadm(user86): Error: doveadm server failure
> doveadm: Error: Failed to iterate through some users
> doveadm: Error: backend2.local:24245: Command quota get failed for user53:
> EOF
> doveadm: Error: backend1.local:24245: Command quota get failed for user66:
> EOF
> doveadm: Error: Aborted
>
> This problem occurs in both Dovecot 2.2.36 and Dovecot 2.3.11, 2.3.13 (I
> build Dovecot from source). It's ok for me to get quota of one user from
> the Director:
> # doveadm quota get -u user1
> Quota name Type    Value Limit    %
> User quota STORAGE 0     10485760 0
> User quota MESSAGE 0     -        0
> And if there's only one Backend, *doveadm quota get -A* from the Director
> works well too.
>
> After investigating, I found the infinite loop:
> File src/doveadm/doveadm-mail-server.c:
> static void doveadm_server_flush_one(struct doveadm_server *server)
> {
>unsigned int count = array_count(&server->queue);
>
>do {
>  io_loop_run(current_ioloop);
>} while (array_count(&server->queue) == count &&
>  doveadm_server_have_used_connections(server) &&
>  !DOVEADM_MAIL_SERVER_FAILED());
> }
>
> When there are many Backends, I see that only the global variable
> *current_ioloop* is used for notification in the callback function. Might this
> be a race condition?
> I understand there's a workaround to do my work:
>
>- Run *doveadm user '*'* to get all users
>- Loop through all users and run *doveadm quota get -u xxx*
>
>
> Thanks,
> Anh Do
>


-- 
Thanks,
Duc Anh

Email: doducanh2...@gmail.com
Skype: ducanh.do88
Mobile: +84975730526
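The per-user workaround mentioned in the quoted message can be scripted; this is a hedged sketch (it assumes `doveadm` is on PATH and that `doveadm user '*'` prints whitespace-separated usernames):

```python
import subprocess

def quota_commands(users):
    """Build one 'doveadm quota get -u' invocation per user."""
    return [["doveadm", "quota", "get", "-u", u] for u in users]

def run_per_user_quota():
    """List all users, then query quota one user at a time,
    avoiding the bulk -A iteration that hangs on the Director."""
    out = subprocess.run(["doveadm", "user", "*"],
                         capture_output=True, text=True, check=True).stdout
    for cmd in quota_commands(out.split()):
        subprocess.run(cmd, check=False)
```

Each per-user command opens its own server connection, so a stall on one backend only affects that user instead of wedging the whole iteration.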


fix-doveadm-quota-director-20210128-2055.patch
Description: Binary data


Re: Dovecot Mail Server - Cloud Compatibility

2021-01-28 Thread Bernardo Reino

On Thu, 28 Jan 2021, Michael Peddemors wrote:

Given the reputation of the Azure IP space, I'd be hesitant to start operating an 
email server there, UNLESS you can get MS to give you SWIP or 'rwhois' for 
your IP space.


As long as it's only dovecot (IMAP) and not anything doing SMTP, there should be 
no reputation issue...



[...]

On 2021-01-27 1:48 p.m., Kristie Buller wrote:

 I have a question regarding the Dovecot Server compatibility with Azure
 Cloud, which our application is migrating to. We are currently using
 Dovecot Mail Server v2.3.4.  I need to verify that the software is
 compatible with that type of environment or what we would need to continue
 using the software once we migrate.
 Thank you,

 *Kristie Buller*
 IBM System Engineer - eSign & SMART

 ABS Sales & Contracting Delivery, AT Account

 801 Chestnut - St. Louis, Mo

 Email: kristie.bul...@ibm.com
 phone: 618.660.6766

Re: Master user password mismatch

2021-01-28 Thread Gregory Sloop
Anyone?




Trying to get master users working.

I'm not sure what info would be best, but here's the detail I have now, in 
trying to get it working.
Setup the master user file, and enabled master users in the conf files.
Created the master user file and user/password hash.

Turned on authentication debug.
When I try something like:
telnet localhost 143
and then supply the master user login - kind of like this:
1 login joeb*jb-master somepassword

I get this in the logs. (Some obfuscation done.)
---
dovecot: auth: Debug: auth client connected (pid=24985)
dovecot: auth: Debug: client in: 
AUTH#0111#011PLAIN#011service=imap#011secured#011session=MM6QC9a5SIYB#011lip=::1#011rip=::1#011lport=143#011rport=34376#011resp=
dovecot: auth: Debug: 
passwd-file(jb-master,::1,master,): Master 
user lookup for login: joeb
dovecot: auth: Debug: 
passwd-file(jb-master,::1,master,): lookup: 
user=jb-master file=/etc/dovecot/masterusers-test
dovecot: auth: 
passwd-file(jb-master,::1,master,): Password 
mismatch
dovecot: auth: Debug: client passdb out: FAIL#0111#011user=jb-master
---

Yet I can use
htpasswd -b -c -s /etc/dovecot/masterusers-test jb-master somepassword
And this succeeds. (I created the masterusers-test file with htpasswd)

So, I must have the password right, but dovecot is still failing the auth, 
claiming a bad password.

How do I go about getting more detail so I can determine what's wrong?

TIA
-Greg
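For comparison, a minimal master passdb block matching the setup described above might look like this (the scheme=SHA value is an assumption based on htpasswd -s, which writes {SHA} hashes; the file path is taken from the log):

```
passdb {
  driver = passwd-file
  master = yes
  args = scheme=SHA /etc/dovecot/masterusers-test
}
```

If the scheme= argument does not match the hash format actually stored in the file, the result is exactly the "Password mismatch" seen in the debug log.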



Re: dovecot and broken uidlist

2021-01-28 Thread Tom Talpey

On 1/28/2021 11:14 AM, Maciej Milaszewski wrote:

Hi
For a test I created a new director with 2.3.13 and a node with 2.3.13. I mount 
storage via NFS with these same options:


rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120

I created a simple MTA and changed the MX to the same as director1

With kernel 5.8.0-0.bpo.2-amd64 the problem exists.
With kernel 3.x it does not.

When the problem exists, I check Maildir/dovecot-uidlist:

3 V1424432537 N16208 G92c4ee0d93aa1260c62909c4ba82
16144 :1611352119.M505834P25597.dovecot2,S=18282,W=18620
16145 :1611352123.M269121P19872.dovecot2,S=18266,W=18604
16146 :1611762747.M502108P9747.dovecot7,S=6595,W=6726
16150 :1611835594.M756718P9986.dovecot7,S=62439,W=63817
16163 :1611828091.M231204P5202.dovecot7,S=19348,W=19855
16208 :1611849420.M137743P24417.dovecot7,S=12064,W=12296
16209 :1611828091.M144806P5202.dovecot7,S=2806,W=2865
16210 :1611837438.M678475P12027.dovecot7,S=17713,W=18072
16211 :1611757939.M493064P7136.dovecot7,S=30783,W=31520
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@$


A block of zeros in a file opened for append is a classic NFSv3 race.
Your mount options allow 120 seconds of attribute caching (actimeo=120).
One of these attributes is the file size, which is also the end of file
marker for append. If the file is changed by another client, the append
mode writes will land on the wrong offset, possibly overwriting or
punching holes.

If you use the "noac" mount option, this will reduce the window of
vulnerability, but it will not eliminate it. It's also possible there
is some issue in attribute caching in the 5.8 kernel. Do you have
other options between 3.16 and 5.8?

The best fix is to use a more robust NFS dialect such as v4.2.

Tom.
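As an illustration, an fstab entry along those lines might look like this (the filer name and paths are placeholders; confirm the filer actually exports NFSv4.2 first):

```
# /etc/fstab -- placeholder export and mountpoint
filer.example.com:/vol/mail  /var/mail  nfs  rw,sec=sys,noexec,noatime,hard,vers=4.2,rsize=65536,wsize=65536,actimeo=1  0  0
```

If v4.2 is not available, shortening actimeo (or using noac) narrows the attribute-cache window but, as noted above, does not eliminate the race.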


If the problem does not exist:

16144 :1611352119.M505834P25597.dovecot2,S=18282,W=18620
16145 :1611352123.M269121P19872.dovecot2,S=18266,W=18604
16146 :1611762747.M502108P9747.dovecot7,S=6595,W=6726
16150 :1611835594.M756718P9986.dovecot7,S=62439,W=63817
16163 :1611828091.M231204P5202.dovecot7,S=19348,W=19855
16208 :1611849420.M137743P24417.dovecot7,S=12064,W=12296
16209 :1611828091.M144806P5202.dovecot7,S=2806,W=2865
16210 :1611837438.M678475P12027.dovecot7,S=17713,W=18072
16211 :1611757939.M493064P7136.dovecot7,S=30783,W=31520

On 23.01.2021 00:59, Alessio Cecchi wrote:


Hi,

after some tests I notice a difference in dovecot-uidlist line format 
when message is read from "old kernel" and "new kernel":


81184 G1611334252.M95445P32580.mail05.myserver.com 
:1611334252.M95445P32580.mail05.myserver.com,S=38689,W=39290
81185 G1611336004.M47750P3921.mail01.myserver.com 
:1611336004.M47750P3921.mail01.myserver.com,S=15917,W=16212
81186 G1611338535.M542784P10852.mail03.myserver.com 
:1611338535.M542784P10852.mail03.myserver.com,S=12651,W=12855
81187 G1611341375.M164702P13505.mail01.myserver.com 
:1611341375.M164702P13505.mail01.myserver.com,S=8795,W=8964
81189 G1611354389.M984432P14754.mail06.myserver.com 
:1611354389.M984432P14754.mail06.myserver.com,S=3038,W=3096

81191 :1611355746.M365669P10402.mail03.myserver.com,S=3049,W=3107
81193 :1611356442.M611719P20778.mail01.myserver.com,S=1203,W=1230
81194 G1611356752.M573233P27082.mail01.myserver.com 
:1611356752.M573233P27082.mail01.myserver.com,S=1210,W=1238
81195 G1611356991.M905681P30704.mail01.myserver.com 
:1611356991.M905681P30704.mail01.myserver.com,S=1220,W=1249

81197 :1611357210.M42178P1962.mail01.myserver.com,S=1220,W=1250
81199 :1611357560.M26894P7157.mail01.myserver.com,S=1233,W=1264

With the "old kernel" (where all works fine) the UID numbers are incremental 
and each line has one more field, starting with "G1611...".


With the "new kernel" (where the error comes) the UID numbers always skip a 
number and the "G1611..." field is missing.


Maciej, do you also have this behavior?

Why does Dovecot create a different uidlist line format with a different kernel?

On 22/01/21 17:50, Maciej Milaszewski wrote:

Hi
I am using POP/IMAP and LMTP via director, and the user goes back to the same dovecot node

Current: 10.0.100.22 (expires 2021-01-22 17:42:44)
Hashed: 10.0.100.22
Initial config: 10.0.100.22

I have 6 dovecot backends with indexes on local SSD disk
mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h

A user never logs in to two different nodes at the same time

I updated Debian from 8 to 9 (and then to 10) and tested with kernel 4.x and
5.x, and the problem exists.
If I change the kernel to 3.16.x the problem does not exist.
I tested like:

problem exists:
dovecot1-5 with 4.x
and
dovecot1-4 - with 3.19.x
dovecot5 - with 4.x
and
dovecot1-5 - with 5.x
and
dovecot1-4 - with 4.x
dovecot5 - with 5.x

not exists:
dovecot1-5 - with 3.19.x

not exists:
dovecot1-5 - with 3.19.x+kernel-care

I use NetAPP with mount options:

Re: [EXT] Re: Reminder Re: Dovecot Gmail OAuth2.0 Setting Question

2021-01-28 Thread Odhiambo Washington
Your clue is in the log:

1611654464.207331 "message": "Request is missing required authentication
credential. Expected OAuth 2 access token, login cookie or other valid
authentication credential. See
https://developers.google.com/identity/sign-in/web/devconsole-project.",
1611654464.207331 "status": "UNAUTHENTICATED" 1611654464.207331 }



On Thu, 28 Jan 2021 at 09:25, 福田泰葵  wrote:

> Dear Mr. Tuomi
>
> Do you have any idea how to solve this problem?
>
> Best regards,
>
>
>
> On Tue, 26 Jan 2021 at 18:51, 福田泰葵 wrote:
>
>> Dear Mr. Tuomi
>>
>> Thank you for the instruction.
>> I was able to output rawlogs.
>> The following is the result.
>>
>> 20210126-184744.1.1.in:
>>
>> 1611654464.207331 HTTP/1.1 401 Unauthorized
>> 1611654464.207331 Cache-Control: no-cache, no-store, max-age=0, 
>> must-revalidate
>> 1611654464.207331 Pragma: no-cache
>> 1611654464.207331 Expires: Mon, 01 Jan 1990 00:00:00 GMT
>> 1611654464.207331 Date: Tue, 26 Jan 2021 09:47:44 GMT
>> 1611654464.207331 Vary: X-Origin
>> 1611654464.207331 Vary: Referer
>> 1611654464.207331 Content-Type: application/json; charset=UTF-8
>> 1611654464.207331 Server: ESF
>> 1611654464.207331 X-XSS-Protection: 0
>> 1611654464.207331 X-Frame-Options: SAMEORIGIN
>> 1611654464.207331 X-Content-Type-Options: nosniff
>> 1611654464.207331 Alt-Svc: h3-29=":443"; ma=2592000,h3-T051=":443"; 
>> ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; 
>> ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
>> 1611654464.207331 Accept-Ranges: none
>> 1611654464.207331 Vary: Origin,Accept-Encoding
>> 1611654464.207331 Transfer-Encoding: chunked
>> 1611654464.207331
>> 1611654464.207331 130
>> 1611654464.207331 {
>> 1611654464.207331   "error": {
>> 1611654464.207331 "code": 401,
>> 1611654464.207331 "message": "Request is missing required authentication 
>> credential. Expected OAuth 2 access token, login cookie or other valid 
>> authentication credential. See 
>> https://developers.google.com/identity/sign-in/web/devconsole-project.",
>> 1611654464.207331 "status": "UNAUTHENTICATED"
>> 1611654464.207331   }
>> 1611654464.207331 }
>> 1611654464.207331
>> 1611654464.207737 0
>> 1611654464.207737
>>
>> 20210126-184744.1.1.out:
>>
>> 1611654464.165704 GET /oauth2/v2/userinfo HTTP/1.1
>> 1611654464.165704 Host: www.googleapis.com
>> 1611654464.165704 Date: Tue, 26 Jan 2021 09:47:44 GMT
>> 1611654464.165704 User-Agent: dovecot-oauth2-passdb/2.3.13
>> 1611654464.165704 Connection: Keep-Alive
>> 1611654464.165727 Authorization: Bearer ??
>> 1611654464.165730
>>
>> Best regards,
>>
>> On Tue, 26 Jan 2021 at 18:35, Aki Tuomi aki.tu...@open-xchange.com wrote:
>>
>> No, the directory must exist. I'm sorry I wasn't clear enough when I
>>> replied last time, but dovecot will not create the directory. You need to
>>> create it and make it writable.
>>>
>>> Aki
>>>
>>> > On 26/01/2021 11:09 福田泰葵  wrote:
>>> >
>>> >
>>> > Dear Mr. Tuomi
>>> >
>>> > Sorry, I have added the setting PrivateTmp=no to
>>> /etc/systemd/system/dovecot.service.d/override.conf
>>> > However, /tmp/oauth2 was not created.
>>> >
>>> > Best regards,
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > On Tue, 26 Jan 2021 at 18:01, Aki Tuomi wrote:
>>> > > That is because you are using systemd, where the unit file, by
>>> default, has PrivateTmp=yes.
>>> > >
>>> > >  You can look under /tmp for dovecot private tmp directory and
>>> create the directory there, or you can temporarily disable this security
>>> measure.
>>> > >
>>> > >  systemctl edit dovecot
>>> > >
>>> > >  [Service]
>>> > >  PrivateTmp=no
>>> > >
>>> > >  systemctl daemon-reload
>>> > >  systemctl restart dovecot
>>> > >
>>> > >  Aki
>>> > >
>>> > >  > On 26/01/2021 10:57 福田泰葵  wrote:
>>> > >  >
>>> > >  >
>>> > >  > Dear Mr. Tuomi
>>> > >  >
>>> > >  > I have added the setting 

Enabling auth_cache_verify_password_with_worker disables proxy mode

2021-01-28 Thread Malte Schmidt

Setting "auth_cache_verify_password_with_worker = yes" in order to leverage 
multiple cores for the Dovecot authentication-process causes Dovecot not to 
proxy anymore. With debug-logging I figured:

auth_cache_verify_password_with_worker = no

passdb out: OK 1 user=username host=bla port=10993 ssl=any-cert 
mail_crypt_global_public_key=key mail_crypt_global_private_key=otherkey  
hostip=123.123.123.123 proxy pass=pw

auth_cache_verify_password_with_worker = yes

passdb out: OK 1 user=username

The rest seems missing.

Dovecot version: v2.3.11.3

Searching for this issue, I found at least two posts mentioning the same 
symptoms:

https://dovecot.org/pipermail/dovecot/2018-April/111583.html
https://dovecot.org/pipermail/dovecot/2020-April/118564.html
https://listen.jpberlin.de/pipermail/dovecot/2020-April/001915.html (same as 
the English one from April 2020)


Re: dovecot and broken uidlist

2021-01-28 Thread Maciej Milaszewski
Hi
For a test I created a new director with 2.3.13 and a node with 2.3.13. I mount
storage via NFS with these same options:

rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120

I created a simple MTA and changed the MX to the same as director1

With kernel 5.8.0-0.bpo.2-amd64 the problem exists.
With kernel 3.x it does not.

When the problem exists, I check Maildir/dovecot-uidlist:

3 V1424432537 N16208 G92c4ee0d93aa1260c62909c4ba82
16144 :1611352119.M505834P25597.dovecot2,S=18282,W=18620
16145 :1611352123.M269121P19872.dovecot2,S=18266,W=18604
16146 :1611762747.M502108P9747.dovecot7,S=6595,W=6726
16150 :1611835594.M756718P9986.dovecot7,S=62439,W=63817
16163 :1611828091.M231204P5202.dovecot7,S=19348,W=19855
16208 :1611849420.M137743P24417.dovecot7,S=12064,W=12296
16209 :1611828091.M144806P5202.dovecot7,S=2806,W=2865
16210 :1611837438.M678475P12027.dovecot7,S=17713,W=18072
16211 :1611757939.M493064P7136.dovecot7,S=30783,W=31520
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@$

If the problem does not exist:

16144 :1611352119.M505834P25597.dovecot2,S=18282,W=18620
16145 :1611352123.M269121P19872.dovecot2,S=18266,W=18604
16146 :1611762747.M502108P9747.dovecot7,S=6595,W=6726
16150 :1611835594.M756718P9986.dovecot7,S=62439,W=63817
16163 :1611828091.M231204P5202.dovecot7,S=19348,W=19855
16208 :1611849420.M137743P24417.dovecot7,S=12064,W=12296
16209 :1611828091.M144806P5202.dovecot7,S=2806,W=2865
16210 :1611837438.M678475P12027.dovecot7,S=17713,W=18072
16211 :1611757939.M493064P7136.dovecot7,S=30783,W=31520

On 23.01.2021 00:59, Alessio Cecchi wrote:
>
> Hi,
>
> after some tests I notice a difference in dovecot-uidlist line format
> when message is read from "old kernel" and "new kernel":
>
> 81184 G1611334252.M95445P32580.mail05.myserver.com
> :1611334252.M95445P32580.mail05.myserver.com,S=38689,W=39290
> 81185 G1611336004.M47750P3921.mail01.myserver.com
> :1611336004.M47750P3921.mail01.myserver.com,S=15917,W=16212
> 81186 G1611338535.M542784P10852.mail03.myserver.com
> :1611338535.M542784P10852.mail03.myserver.com,S=12651,W=12855
> 81187 G1611341375.M164702P13505.mail01.myserver.com
> :1611341375.M164702P13505.mail01.myserver.com,S=8795,W=8964
> 81189 G1611354389.M984432P14754.mail06.myserver.com
> :1611354389.M984432P14754.mail06.myserver.com,S=3038,W=3096
> 81191 :1611355746.M365669P10402.mail03.myserver.com,S=3049,W=3107
> 81193 :1611356442.M611719P20778.mail01.myserver.com,S=1203,W=1230
> 81194 G1611356752.M573233P27082.mail01.myserver.com
> :1611356752.M573233P27082.mail01.myserver.com,S=1210,W=1238
> 81195 G1611356991.M905681P30704.mail01.myserver.com
> :1611356991.M905681P30704.mail01.myserver.com,S=1220,W=1249
> 81197 :1611357210.M42178P1962.mail01.myserver.com,S=1220,W=1250
> 81199 :1611357560.M26894P7157.mail01.myserver.com,S=1233,W=1264
>
> With the "old kernel" (where all works fine) the UID numbers are incremental
> and each line has one more field, starting with "G1611...".
>
> With the "new kernel" (where the error comes) the UID numbers always skip a
> number and the "G1611..." field is missing.
>
> Maciej, do you also have this behavior?
>
> Why does Dovecot create a different uidlist line format with a different kernel?
>
> On 22/01/21 17:50, Maciej Milaszewski wrote:
>> Hi
>> I am using POP/IMAP and LMTP via director, and the user goes back to the same dovecot node
>>
>> Current: 10.0.100.22 (expires 2021-01-22 17:42:44)
>> Hashed: 10.0.100.22
>> Initial config: 10.0.100.22
>>
>> I have 6 dovecot backends with indexes on local SSD disk
>> mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h
>>
>> A user never logs in to two different nodes at the same time
>>
>> I updated Debian from 8 to 9 (and then to 10) and tested with kernel 4.x and
>> 5.x, and the problem exists.
>> If I change the kernel to 3.16.x the problem does not exist.
>> I tested like:
>>
>> problem exists:
>> dovecot1-5 with 4.x
>> and
>> dovecot1-4 - with 3.19.x
>> dovecot5 - with 4.x
>> and
>> dovecot1-5 - with 5.x
>> and
>> dovecot1-4 - with 4.x
>> dovecot5 - with 5.x
>>
>> problem does not exist:
>> dovecot1-5 - with 3.19.x
>>
>> problem does not exist:
>> dovecot1-5 - with 3.19.x+kernel-care
>>
>> I use NetAPP with mount options:
>> rw,sec=sys,noexec,noatime,tcp,soft,rsize=32768,wsize=32768,intr,nordirplus,nfsvers=3,actimeo=120
>> I try with nocto and without nocto
>>
>> the big guys from NetApp say "NFS 4.x needs auth via Kerberos"
>>
>>
>>
>> On 22.01.2021 16:08, Alessio Cecchi wrote:
>>> Hi Maciej,
>>>
>>> I'm using LDA to deliver email into mailboxes (Maildir) and I
>>> think (hope) that switching to LMTP via director will fix my problem,
>>> but I don't know why it works with the old kernel and not with recent ones.
>>>
>>> Are you using POP/IMAP and LMTP via director, so that any update to the
>>> Dovecot indexes is done from the same server?
>>>
>>> On 19/01/21 16:22, Maciej Milaszewski wrote:

Re: Dovecot Mail Server - Cloud Compatibility

2021-01-28 Thread Michael Peddemors
Given the reputation of the Azure IP space, be hesitant to start 
operating an email server there, UNLESS you can get MS to give you SWIP 
or 'rwhois' for your IP space.


Just an FYI..

High levels of AUTH attacks originate from that space as well.  To
the point that we even have a reputation list just for that IP space, in case
you want to block AUTH attacks from it.


There was a quote from an MS engineer who even placed doubt on accepting 
an email from the Azure space.. of course, the world evolves, but 
without a 'rwhois' or SWIP that shows transparency, I would not 
recommend it..


On 2021-01-27 1:48 p.m., Kristie Buller wrote:
I have a question regarding the Dovecot Server compatibility with Azure 
Cloud, which our application is migrating to. We are currently using 
Dovecot Mail Server v2.3.4.  I need to verify that the software is 
compatible with that type of environment or what we would need to 
continue using the software once we migrate.

Thank you,

*Kristie Buller*
IBM System Engineer - eSign & SMART

ABS Sales & Contracting Delivery, AT Account

801 Chestnut - St. Louis, Mo

Email: kristie.bul...@ibm.com
phone: 618.660.6766






--
"Catch the Magic of Linux..."

Michael Peddemors, President/CEO LinuxMagic Inc.
Visit us at http://www.linuxmagic.com @linuxmagic
A Wizard IT Company - For More Info http://www.wizard.ca
"LinuxMagic" a Registered TradeMark of Wizard Tower TechnoServices Ltd.

604-682-0300 Beautiful British Columbia, Canada

This email and any electronic data contained are confidential and intended
solely for the use of the individual or entity to which they are addressed.
Please note that any views or opinions presented in this email are solely
those of the author and are not intended to represent those of the company.


Re: Shared mailboxes, users with dots and a bug in subscriptions

2021-01-28 Thread Aki Tuomi


> On 28/01/2021 16:55 Tobias Stein  wrote:
> 
>  
> Hi Aki,
> 
> Thanks for your prompt reply! :-)
> And because I classically forgot to attach
> the dovecot-sysreport, i'll deliver it now. :-)
> 
> 
> Yes, you're right. Setting :LAYOUT=fs would be a workaround.
> I'd also have to migrate every
> single mailbox to the new hierarchical layout.
> The hierarchical separator list->sep would
> indeed change to „/‟ and the subscriptions
> would be split differently.
> 
> Please correct me if I'm wrong, but
> the namespace/separator would have to be changed too,
> to prevent splitting at another "wrong" position.
> The current
> shared/root@example	com/test	subtest
> would become
> shared	r...@example.com	test	subtest.
> Which is also wrong because there is no user shared.
> So the namespace separator could again be set to something
> different (from „auth_username_chars‟ + "/+")
> like „^°!§%&=?;:#¹²³‟ which would all be ugly.
> And with namespace/sep set to „°‟ leading to the form
> shared°r...@example.com°test  subtest.
> 
> But this would not resolve the actual bug, that subscriptions
> are not split and persisted correctly.
> In the end I would just be forced to use :LAYOUT=fs
> to mitigate the bug, even if i like the flat layout. :-)
> 
> I think there should be a default, which is valid
> for a common deployment with all features working.
> Maildir++ for sure is a great choice for this,
> but the implementation has a flaw:
> a hard-coded „separator‟, which collides with
> the DNS label delimiter, when storing subscriptions.
> 
> 
> Best Regards
> Tobias

You can also just change the namespace hierarchy separator to fix this:

namespace {
   separator = /
}

This will cause clients to redownload mails but requires no other changes.
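Roughly, the separator only changes how the unchanged on-disk Maildir++ name is presented to IMAP clients, so clients see different folder names and re-sync everything. A minimal sketch of that mapping (hypothetical helper, not Dovecot code; it ignores namespace prefixes, listescape, and IMAP modified-UTF-7):

```python
# Hypothetical sketch (not Dovecot code): map an internal Maildir++
# directory name to the name an IMAP client sees, joined with the
# configured namespace separator.

def visible_name(maildir_name: str, separator: str) -> str:
    parts = maildir_name.lstrip(".").split(".")
    return separator.join(parts)

folder = ".archive.2021"
print(visible_name(folder, "."))  # archive.2021 (old separator)
print(visible_name(folder, "/"))  # archive/2021 (after separator = /)
```

The on-disk folder stays the same; only the advertised name changes, which is why clients redownload.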

Aki


Re: Shared mailboxes, users with dots and a bug in subscriptions

2021-01-28 Thread Tobias Stein
Hi Aki,

Thanks for your prompt reply! :-)
And because I classically forgot to attach
the dovecot-sysreport, i'll deliver it now. :-)


Yes, you're right. Setting :LAYOUT=fs would be a workaround.
I'd also have to migrate every
single mailbox to the new hierarchical layout.
The hierarchical separator list->sep would
indeed change to „/‟ and the subscriptions
would be split differently.

Please correct me if I'm wrong, but
the namespace/separator would have to be changed too,
to prevent splitting at another "wrong" position.
The current
shared/root@example	com/test	subtest
would become
shared	r...@example.com	test	subtest.
Which is also wrong because there is no user shared.
So the namespace separator could again be set to something
different (from „auth_username_chars‟ + "/+")
like „^°!§%&=?;:#¹²³‟ which would all be ugly.
And with namespace/sep set to „°‟ leading to the form
shared°r...@example.com°testsubtest.

But this would not resolve the actual bug, that subscriptions
are not split and persisted correctly.
In the end I would just be forced to use :LAYOUT=fs
to mitigate the bug, even if i like the flat layout. :-)

I think there should be a default, which is valid
for a common deployment with all features working.
Maildir++ for sure is a great choice for this,
but the implementation has a flaw:
a hard-coded „separator‟, which collides with
the DNS label delimiter, when storing subscriptions.


Best Regards
Tobias

dovecot-sysreport-mx1-1611828216.tar.gz
Description: application/compressed-tar


Re: 2.13 read, i_stream_read_memarea: assertion failed: (!stream->blocking)

2021-01-28 Thread Stuart Henderson
On 2021/01/28 14:51, Aki Tuomi wrote:
> Hi!
> 
> Would it be possible for you to provide the mail after running it through obfuscation?
> 
> You can use https://dovecot.org/tools/maildir-obfuscate.pl for this purpose.
> 
> If you are using maildir format, you can decompress the mail with the lz4 
> decompression tool. If you are using dbox format, it's slightly trickier, as 
> you need to remove the container around the mail first and then decompress it.

It's mdbox (sorry, I thought I had mentioned that earlier but it seems I
missed it).

So here's the tail of output from doveadm -u sthen fetch "uid text" mailbox 
ports

--snip--snip--snip--
uid: 34088
text:
Return-Path: 
[...]
> www/seamonkey   Could not find gconf-2.0
> -> seems repeatable but I don't understand why py3 would be involved
> so maybe it's fallout from something else
> 
> graphics/vulkan-tools   'time' has no attribute 'clock'
> -> port is outdated; this was fixed as part doveadm(sthen): Panic: file 
> istream.c: line 332 (i_stream_read_memarea): assertion failed: 
> (!stream->blocking)
--snip--snip--snip--

The actual message has a few more lines; it's a public list post, so I can
compare 'doveadm fetch -u sthen "uid text" mailbox ports uid 34088' against
https://marc.info/?l=openbsd-ports&m=159372656616988&q=raw and see that it
is fetched completely.

So, on to extracting the compressed mail, and this might give some clues:

$ doveadm -o mail_plugins=virtual fetch -u sthen text mailbox ports uid 34088 > 
34088.z
doveadm(sthen): Error: Mailbox ports: Deleting corrupted cache record 
uid=34088: UID 34088: Broken physical size in mailbox ports: 
read(/home/sthen/mdbox/storage/m.6167) failed: Cached message size larger than 
expected (8851 > 4542, box=ports, UID=34088)
doveadm(sthen): Error: read(/home/sthen/mdbox/storage/m.6167) failed: Cached 
message size larger than expected (8851 > 4542, box=ports, UID=34088)
doveadm(sthen): Error: fetch(text) failed for box=ports uid=34088: 
read(/home/sthen/mdbox/storage/m.6167) failed: Cached message size larger than 
expected (8851 > 4542, box=ports, UID=34088)

$ tail -n +2 34088.z > 34088.zz
$ doveadm fs get compress lz4:0:posix 34088.zz > plain
..and (not surprising since the message is displayed OK with "fetch ... uid")
the complete message is there..

If I try fetching that message again I no longer see the "Deleting
corrupted cache record" but I do still get a crash at the same point
if I fetch text for the whole mailbox.

Looking at the visible bits in that mdbox file I don't think I will be
able to identify the various correspondents enough to get their
permission to provide the whole file. (The list mail is public anyway,
but from what I can make out past the lz4 there's at least some private
mail in the file). But happy to dig around in there if there's
anything that might help, I can be /m'd on freenode (sthen) if real-
time is better for that.



> Aki
> 
> > On 28/01/2021 14:45 Stuart Henderson  wrote:
> > 
> >  
> > On 2021-01-24, Stuart Henderson  wrote:
> > > I'm seeing this on some mailboxes with 2.13 on OpenBSD amd64 (recent
> > > snapshot):
> > >
> > > dovecot: imap(sthen)<47220>: Panic: file istream.c: 
> > > line 332 (i_stream_read_memarea): assertion failed: (!stream->blocking)
> > >
> > > Using sieve, imapsieve, replicator, zlib (zlib_save = lz4 and has
> > > been for some time, so the relevant messages probably use this).
> > > Using mmap_disable because OpenBSD.
> > >
> > > Any suggestions how to handle it, preferably automatically?
> > > (even if a message is corrupt/lost it would be really nice if a
> > > standard client could still access the mailbox rather than kill
> > > the imap process while reading headers).
> > 
> > Thought I'd try doveadm force-resync ("For sdbox and mdbox mailboxes the
> > storage files will be also checked") but this doesn't help.
> > 
> > Getting some ideas for the seemingly related thread about zstd/xz
> > https://dovecot.org/pipermail/dovecot/2020-September/119890.html I've had
> > a play with doveadm fetch.
> > 
> > Doing 'doveadm fetch -u sthen "uid text" mailbox ports | grep ^uid | tail'
> > I find various messages from around June/July 2020 that trigger the crash.
> > Expunging the last displayed uid at that point I get further but I run into
> > more after a few messages. I don't mind doing that with this mailbox to
> > get things working but if it can be used to provide/test something more
> > robust that would be better.
> > 
> > Using "doveadm -o mail_plugins=virtual fetch" I see that they're definitely
> > lz4 compressed.
> > 
> > Seems odd but if I do text fetches by uid I don't run into any failure?
> > 
> > $ for i in `cat /tmp/uid.p|cut -d: -f2`;do doveadm fetch -u sthen text 
> > mailbox ports uid $i > /dev/null || echo $i;done
> > [no output]
> > 
> > Any ideas?
> > 
> > 
> > 
> > > bt first for ease of reading, followed by bt full in case it has any
> > > additional clues.
> > >
> > > (gdb) bt
> > > #0  

Re: Shared mailboxes, users with dots and a bug in subscriptions

2021-01-28 Thread Aki Tuomi


> On 28/01/2021 15:15 Tobias Stein  wrote:
> 
>  
> Hi,
> 
> I'm running Dovecot 2.3.14.alpha0 with shared namespaces
> and stumbled across some error messages logged
> when the list of subscribed mailboxes is queried by a client.
> For every distinct account in the list of subscriptions,
> two corresponding lines are logged:
> 
> Jan 28 11:42:34 mx1 dovecot: auth: missing passwd file: 
> /etc/dovecot/private/example/users
> Jan 28 11:42:34 mx1 dovecot: auth: missing passwd file: 
> /etc/dovecot/private/example/users
> Jan 28 11:42:34 mx1 dovecot: auth: missing passwd file: 
> /etc/dovecot/private/example/users
> Jan 28 11:42:36 mx1 dovecot: 
> imap(example_u...@example.com)<3638>: Error: 
> mkdir(/var/run/dovecot/user-not-found/noc@example) failed: Permission denied 
> (euid=109(vmail) egid=118(vmail) missing +w perm: /var/run/dovecot, dir owned 
> by 0:0 mode=0755)
> Jan 28 11:42:36 mx1 dovecot: 
> imap(example_u...@example.com)<3638>: Error: 
> mkdir(/var/run/dovecot/user-not-found/info@example) failed: Permission denied 
> (euid=109(vmail) egid=118(vmail) missing +w perm: /var/run/dovecot, dir owned 
> by 0:0 mode=0755)
> Jan 28 11:42:36 mx1 dovecot: 
> imap(example_u...@example.com)<3638>: Error: 
> mkdir(/var/run/dovecot/user-not-found/root@example) failed: Permission denied 
> (euid=109(vmail) egid=118(vmail) missing +w perm: /var/run/dovecot, dir owned 
> by 0:0 mode=0755)
> 
> Similar messages are logged,
> when invalid entries are listed in '/var/lib/dovecot/db/shared-mailboxes',
> which i already pruned and haven't received them anymore since.
> 
> I think these errors are caused by an unintended behaviour
> when writing "~/Maildir/subscriptions",
> which looks (shortened) like this.
> 
> V 2
> 
> INBOX/INBOX
> shared/noc@example	com/INBOX
> shared/info@example   com/INBOX
> shared/root@example   com/test
> shared/root@example	com/test	test_sub
> 
> The subscription-file.c
> explodes the name on every hierarchy separator ('.','\0') and
> inserts a TAB character. Unfortunately it also explodes on
> the DNS label delimiter „.‟. This should probably be fixed
> by passing a structure containing the required information
> to the formatter to distinguish mailboxes from domain-names.
> 
> Subscription in combination with multiple domains and
> shared mailboxes seems broken to me. Actually I can't even explain to myself
> why it is working in the face of the errors. :-)
> 
> 
> Unfortunately in Maildir++ the separator dot is hard-coded.
> 
> There is a very old thread on this mailing list,
> that suggests using „auth_username_translation‟
> to replace dots with a different character,
> but this idea is getting worse the longer i think about it.
> 
> I absolutely dislike the idea of setting LAYOUT=fs and
> namespace/separator = § to change the separators
> to split on, because this would mean restructuring the
> physical layout of all mailboxes (hierarchically) and
> messing around with lots of files.
> 
> 
> I attached a dovecot-sysreport to reproduce the behaviour.
> 
> /etc/dovecot/private/example.com/users looks like this:
> ###user:password:uid:gid:(gecos):home:(shell):extra_fields
> noc:{SSHA512}_hash_::
> info:{SSHA512}_hash_::
> root:{SSHA512}_hash_::
> 
> Please correct me if I'm wrong or
> point me to a workaround,
> but i think the layout code needs some love. :-)
> 
> 
> Best regards
> Tobias


You should probably add :LAYOUT=FS on your mail locations. This will change the 
folder naming into foo/bar/baz instead of .foo.bar.baz.

Aki


Shared mailboxes, users with dots and a bug in subscriptions

2021-01-28 Thread Tobias Stein
Hi,

I'm running Dovecot 2.3.14.alpha0 with shared namespaces
and stumbled across some error messages logged
when the list of subscribed mailboxes is queried by a client.
For every distinct account in the list of subscriptions,
two corresponding lines are logged:

Jan 28 11:42:34 mx1 dovecot: auth: missing passwd file: 
/etc/dovecot/private/example/users
Jan 28 11:42:34 mx1 dovecot: auth: missing passwd file: 
/etc/dovecot/private/example/users
Jan 28 11:42:34 mx1 dovecot: auth: missing passwd file: 
/etc/dovecot/private/example/users
Jan 28 11:42:36 mx1 dovecot: 
imap(example_u...@example.com)<3638>: Error: 
mkdir(/var/run/dovecot/user-not-found/noc@example) failed: Permission denied 
(euid=109(vmail) egid=118(vmail) missing +w perm: /var/run/dovecot, dir owned 
by 0:0 mode=0755)
Jan 28 11:42:36 mx1 dovecot: 
imap(example_u...@example.com)<3638>: Error: 
mkdir(/var/run/dovecot/user-not-found/info@example) failed: Permission denied 
(euid=109(vmail) egid=118(vmail) missing +w perm: /var/run/dovecot, dir owned 
by 0:0 mode=0755)
Jan 28 11:42:36 mx1 dovecot: 
imap(example_u...@example.com)<3638>: Error: 
mkdir(/var/run/dovecot/user-not-found/root@example) failed: Permission denied 
(euid=109(vmail) egid=118(vmail) missing +w perm: /var/run/dovecot, dir owned 
by 0:0 mode=0755)

Similar messages are logged,
when invalid entries are listed in '/var/lib/dovecot/db/shared-mailboxes',
which i already pruned and haven't received them anymore since.

I think these errors are caused by an unintended behaviour
when writing "~/Maildir/subscriptions",
which looks (shortened) like this.

V   2

INBOX/INBOX
shared/noc@example  com/INBOX
shared/info@example com/INBOX
shared/root@example com/test
shared/root@example	com/test	test_sub

The subscription-file.c
explodes the name on every hierarchy separator ('.','\0') and
inserts a TAB character. Unfortunately it also explodes on
the DNS label delimiter „.‟. This should probably be fixed
by passing a structure containing the required information
to the formatter to distinguish mailboxes from domain-names.
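The effect described here can be sketched in a few lines: splitting the full name on the hierarchy separator '.' and joining the pieces with TABs also splits the dot inside the domain part of the user name. This is only an illustration of the reported behaviour, not Dovecot's actual implementation:

```python
# Illustration of the reported bug (not Dovecot's code): the
# subscription writer splits the mailbox name on the hierarchy
# separator '.' and joins the parts with TABs, so the '.' inside a
# domain name is treated as a hierarchy boundary too.

def subscription_entry(name: str, sep: str = ".") -> str:
    return "\t".join(name.split(sep))

entry = subscription_entry("shared/noc@example.com/INBOX")
print(entry)  # shared/noc@example<TAB>com/INBOX -- the domain is split
```

This produces exactly the broken `shared/noc@example<TAB>com/INBOX` entries shown in the subscriptions file above.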

Subscription in combination with multiple domains and
shared mailboxes seems broken to me. Actually I can't even explain to myself
why it is working in the face of the errors. :-)


Unfortunately in Maildir++ the separator dot is hard-coded.

There is a very old thread on this mailing list,
that suggests using „auth_username_translation‟
to replace dots with a different character,
but this idea is getting worse the longer I think about it.

I absolutely dislike the idea of setting LAYOUT=fs and
namespace/separator = § to change the separators
to split on, because this would mean restructuring the
physical layout of all mailboxes (hierarchically) and
messing around with lots of files.


I attached a dovecot-sysreport to reproduce the behaviour.

/etc/dovecot/private/example.com/users looks like this:
###user:password:uid:gid:(gecos):home:(shell):extra_fields
noc:{SSHA512}_hash_::
info:{SSHA512}_hash_::
root:{SSHA512}_hash_::
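The field layout of such a passwd-file line can be made concrete with a small parser sketch (field names taken from the comment line above; trailing fields may be left empty, as in these entries; illustrative only, not Dovecot's parser):

```python
# Sketch: parse one passwd-file line using the field order shown in
# the comment above. Missing trailing fields are padded as empty
# strings. Illustrative only, not Dovecot's parser.
FIELDS = ["user", "password", "uid", "gid", "gecos", "home", "shell",
          "extra_fields"]

def parse_passwd_line(line: str) -> dict:
    values = line.rstrip("\n").split(":")
    values += [""] * (len(FIELDS) - len(values))  # pad missing fields
    return dict(zip(FIELDS, values))

entry = parse_passwd_line("noc:{SSHA512}_hash_::")
print(entry["user"], entry["password"])  # noc {SSHA512}_hash_
```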

Please correct me if I'm wrong or
point me to a workaround,
but i think the layout code needs some love. :-)


Best regards
Tobias




Re: 2.13 read, i_stream_read_memarea: assertion failed: (!stream->blocking)

2021-01-28 Thread Aki Tuomi
Hi!

Would it be possible for you to provide the mail after running it through obfuscation?

You can use https://dovecot.org/tools/maildir-obfuscate.pl for this purpose.

If you are using maildir format, you can decompress the mail with the lz4 
decompression tool. If you are using dbox format, it's slightly trickier, as you 
need to remove the container around the mail first and then decompress it.

Aki

> On 28/01/2021 14:45 Stuart Henderson  wrote:
> 
>  
> On 2021-01-24, Stuart Henderson  wrote:
> > I'm seeing this on some mailboxes with 2.13 on OpenBSD amd64 (recent
> > snapshot):
> >
> > dovecot: imap(sthen)<47220>: Panic: file istream.c: line 
> > 332 (i_stream_read_memarea): assertion failed: (!stream->blocking)
> >
> > Using sieve, imapsieve, replicator, zlib (zlib_save = lz4 and has
> > been for some time, so the relevant messages probably use this).
> > Using mmap_disable because OpenBSD.
> >
> > Any suggestions how to handle it, preferably automatically?
> > (even if a message is corrupt/lost it would be really nice if a
> > standard client could still access the mailbox rather than kill
> > the imap process while reading headers).
> 
> Thought I'd try doveadm force-resync ("For sdbox and mdbox mailboxes the
> storage files will be also checked") but this doesn't help.
> 
> Getting some ideas for the seemingly related thread about zstd/xz
> https://dovecot.org/pipermail/dovecot/2020-September/119890.html I've had
> a play with doveadm fetch.
> 
> Doing 'doveadm fetch -u sthen "uid text" mailbox ports | grep ^uid | tail'
> I find various messages from around June/July 2020 that trigger the crash.
> Expunging the last displayed uid at that point I get further but I run into
> more after a few messages. I don't mind doing that with this mailbox to
> get things working but if it can be used to provide/test something more
> robust that would be better.
> 
> Using "doveadm -o mail_plugins=virtual fetch" I see that they're definitely
> lz4 compressed.
> 
> Seems odd but if I do text fetches by uid I don't run into any failure?
> 
> $ for i in `cat /tmp/uid.p|cut -d: -f2`;do doveadm fetch -u sthen text 
> mailbox ports uid $i > /dev/null || echo $i;done
> [no output]
> 
> Any ideas?
> 
> 
> 
> > bt first for ease of reading, followed by bt full in case it has any
> > additional clues.
> >
> > (gdb) bt
> > #0  thrkill () at /tmp/-:3
> > #1  0x0e9158c7c3ae in _libc_abort () at 
> > /usr/src/lib/libc/stdlib/abort.c:51
> > #2  0x0e91adc00c26 in default_fatal_finish (type=LOG_TYPE_PANIC, 
> > status=0) at failures.c:459
> > #3  0x0e91adbff034 in fatal_handler_real (ctx=0x7f7cdd90, 
> > format=,
> > args=) at failures.c:471
> > #4  0x0e91adbfffb1 in i_internal_fatal_handler (ctx=0x0,
> > format=0x6 , args=0x0) at 
> > failures.c:866
> > #5  0x0e91adbff266 in i_panic (format=0x6  > at address 0x6>)
> > at failures.c:523
> > #6  0x0e91adc0f15c in i_stream_read_memarea (stream=0xe917ead3480) at 
> > istream.c:332
> > #7  0x0e91adc1925f in read_more (sstream=0xe91431c4800) at 
> > istream-seekable.c:149
> > #8  0x0e91adc19090 in read_from_buffer (sstream=0xe91431c4800, 
> > ret_r=0x7f7cded8)
> > at istream-seekable.c:204
> > #9  0x0e91adc1856d in i_stream_seekable_read (stream=0xe91431c4800) at 
> > istream-seekable.c:265
> > #10 0x0e91adc0f0a4 in i_stream_read_memarea (stream=0xe91431c4880) at 
> > istream.c:313
> > #11 0x0e91adc16bbc in i_stream_limit_read (stream=0xe91e9ccca00) at 
> > istream-limit.c:49
> > #12 0x0e91adc0f0a4 in i_stream_read_memarea (stream=0xe91e9ccca80) at 
> > istream.c:313
> > #13 0x0e91adc0f79c in i_stream_read_copy_from_parent 
> > (istream=) at istream.c:387
> > #14 0x0e918e9729fa in i_stream_mail_read (stream=0xe91e9ccc000) at 
> > istream-mail.c:115
> > #15 0x0e91adc0f0a4 in i_stream_read_memarea (stream=0xe91e9ccc080) at 
> > istream.c:313
> > #16 0x0e91adc104f5 in i_stream_read (stream=0xe91e9ccc080) at 
> > istream.c:271
> > #17 i_stream_read_data (stream=0xe91e9ccc080, data_r=0x7f7ce110, 
> > size_r=0x7f7ce100, threshold=1)
> > at istream.c:747
> > #18 0x0e91adbd6a18 in i_stream_read_bytes (stream=0x0, 
> > data_r=,
> > size_r=, wanted=) at 
> > ../../src/lib/istream.h:214
> > #19 message_parse_header_next (ctx=0xe9183e1b180, hdr_r=0x7f7ce1d0) at 
> > message-header-parser.c:85
> > #20 0x0e91adbceef2 in read_header (mstream=0xe91e9cc6000) at 
> > istream-header-filter.c:195
> > #21 i_stream_header_filter_read (stream=0xe91e9cc6000) at 
> > istream-header-filter.c:450
> > #22 0x0e91adc0f0a4 in i_stream_read_memarea (stream=0xe91e9cc6080) at 
> > istream.c:313
> > #23 0x0e91adc104f5 in i_stream_read (stream=0xe91e9cc6080) at 
> > istream.c:271
> > #24 i_stream_read_data (stream=0xe91e9cc6080, data_r=0x7f7ce2c0, 
> > size_r=0x7f7ce2c8, threshold=0)
> > at istream.c:747
> > #25 0x0e91adbdc07b in i_stream_read_bytes (stream=0xe91e9cc6080, 
> > data_r=,
> > 

Re: 2.13 read, i_stream_read_memarea: assertion failed: (!stream->blocking)

2021-01-28 Thread Stuart Henderson
On 2021-01-24, Stuart Henderson  wrote:
> I'm seeing this on some mailboxes with 2.13 on OpenBSD amd64 (recent
> snapshot):
>
> dovecot: imap(sthen)<47220>: Panic: file istream.c: line 
> 332 (i_stream_read_memarea): assertion failed: (!stream->blocking)
>
> Using sieve, imapsieve, replicator, zlib (zlib_save = lz4 and has
> been for some time, so the relevant messages probably use this).
> Using mmap_disable because OpenBSD.
>
> Any suggestions how to handle it, preferably automatically?
> (even if a message is corrupt/lost it would be really nice if a
> standard client could still access the mailbox rather than kill
> the imap process while reading headers).

Thought I'd try doveadm force-resync ("For sdbox and mdbox mailboxes the
storage files will be also checked") but this doesn't help.

Getting some ideas for the seemingly related thread about zstd/xz
https://dovecot.org/pipermail/dovecot/2020-September/119890.html I've had
a play with doveadm fetch.

Doing 'doveadm fetch -u sthen "uid text" mailbox ports | grep ^uid | tail'
I find various messages from around June/July 2020 that trigger the crash.
Expunging the last displayed uid at that point I get further but I run into
more after a few messages. I don't mind doing that with this mailbox to
get things working but if it can be used to provide/test something more
robust that would be better.

Using "doveadm -o mail_plugins=virtual fetch" I see that they're definitely
lz4 compressed.

Seems odd but if I do text fetches by uid I don't run into any failure?

$ for i in `cat /tmp/uid.p|cut -d: -f2`;do doveadm fetch -u sthen text mailbox 
ports uid $i > /dev/null || echo $i;done
[no output]

Any ideas?



> bt first for ease of reading, followed by bt full in case it has any
> additional clues.
>
> (gdb) bt
> #0  thrkill () at /tmp/-:3
> #1  0x0e9158c7c3ae in _libc_abort () at 
> /usr/src/lib/libc/stdlib/abort.c:51
> #2  0x0e91adc00c26 in default_fatal_finish (type=LOG_TYPE_PANIC, 
> status=0) at failures.c:459
> #3  0x0e91adbff034 in fatal_handler_real (ctx=0x7f7cdd90, 
> format=,
> args=) at failures.c:471
> #4  0x0e91adbfffb1 in i_internal_fatal_handler (ctx=0x0,
> format=0x6 , args=0x0) at 
> failures.c:866
> #5  0x0e91adbff266 in i_panic (format=0x6  address 0x6>)
> at failures.c:523
> #6  0x0e91adc0f15c in i_stream_read_memarea (stream=0xe917ead3480) at 
> istream.c:332
> #7  0x0e91adc1925f in read_more (sstream=0xe91431c4800) at 
> istream-seekable.c:149
> #8  0x0e91adc19090 in read_from_buffer (sstream=0xe91431c4800, 
> ret_r=0x7f7cded8)
> at istream-seekable.c:204
> #9  0x0e91adc1856d in i_stream_seekable_read (stream=0xe91431c4800) at 
> istream-seekable.c:265
> #10 0x0e91adc0f0a4 in i_stream_read_memarea (stream=0xe91431c4880) at 
> istream.c:313
> #11 0x0e91adc16bbc in i_stream_limit_read (stream=0xe91e9ccca00) at 
> istream-limit.c:49
> #12 0x0e91adc0f0a4 in i_stream_read_memarea (stream=0xe91e9ccca80) at 
> istream.c:313
> #13 0x0e91adc0f79c in i_stream_read_copy_from_parent (istream= out>) at istream.c:387
> #14 0x0e918e9729fa in i_stream_mail_read (stream=0xe91e9ccc000) at 
> istream-mail.c:115
> #15 0x0e91adc0f0a4 in i_stream_read_memarea (stream=0xe91e9ccc080) at 
> istream.c:313
> #16 0x0e91adc104f5 in i_stream_read (stream=0xe91e9ccc080) at 
> istream.c:271
> #17 i_stream_read_data (stream=0xe91e9ccc080, data_r=0x7f7ce110, 
> size_r=0x7f7ce100, threshold=1)
> at istream.c:747
> #18 0x0e91adbd6a18 in i_stream_read_bytes (stream=0x0, data_r= out>,
> size_r=, wanted=) at 
> ../../src/lib/istream.h:214
> #19 message_parse_header_next (ctx=0xe9183e1b180, hdr_r=0x7f7ce1d0) at 
> message-header-parser.c:85
> #20 0x0e91adbceef2 in read_header (mstream=0xe91e9cc6000) at 
> istream-header-filter.c:195
> #21 i_stream_header_filter_read (stream=0xe91e9cc6000) at 
> istream-header-filter.c:450
> #22 0x0e91adc0f0a4 in i_stream_read_memarea (stream=0xe91e9cc6080) at 
> istream.c:313
> #23 0x0e91adc104f5 in i_stream_read (stream=0xe91e9cc6080) at 
> istream.c:271
> #24 i_stream_read_data (stream=0xe91e9cc6080, data_r=0x7f7ce2c0, 
> size_r=0x7f7ce2c8, threshold=0)
> at istream.c:747
> #25 0x0e91adbdc07b in i_stream_read_bytes (stream=0xe91e9cc6080, 
> data_r=,
> size_r=, wanted= memory at address 0x1>)
> at ../../src/lib/istream.h:214
> #26 message_get_header_size (input=0xe91e9cc6080, hdr=0x7f7ce328, 
> has_nuls_r=0x7f7ce3b7)
> at message-size.c:19
> #27 0x0e8f25e767d9 in imap_msgpart_get_partial_header (mail=0xe91e9cc4848,
> mail_input=, msgpart=0xe91c92a2048, 
> virtual_size_r=,
> result_r=0x7f7ce408, have_crlfs_r=) at 
> imap-msgpart.c:401
> #28 imap_msgpart_open_normal (mail=, msgpart=0xe91c92a2048, 
> part=,
> virtual_size_r=, result_r=0x7f7ce408, 
> have_crlfs_r=)
> at imap-msgpart.c:637
> #29 imap_msgpart_open (mail=, msgpart=0xe91c92a2048, 
>