Re: doveadm import error: quota: Unknown namespace: INBOX/

2024-04-17 Thread Ralf Becker via dovecot

No one has an idea?

No longer being able to restore mailboxes seems a little scary ...

Ralf

On 12.04.24 at 14:07, Ralf Becker via dovecot wrote:

Dovecot version is 2.3.20, and I am trying to restore a folder hierarchy from an older
snapshot of the mailbox (the folders in question have been deleted):
sudo -u dovecot doveadm -Dv import -u p...@xyz.de -s mdbox:$(pwd)/pbs-2024-03-19/mdbox INBOX mailbox 'projekte/8-BZ/*'
I'm getting the following error:
Apr 12 10:52:18 doveadm(p...@xyz.de): Error: quota: Unknown namespace: INBOX/
I also tried restoring into a (non-existing) restore folder, Restore-2024-03-19,
and using the search query "mailbox 'projekte/8-BZ'"; all give the same result :(
Any ideas what might be wrong? I did this many times before and it worked, so
I'm puzzled ...
Here is the full output of doveadm import command above and doveconf -n:
...



--
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


doveadm import error: quota: Unknown namespace: INBOX/

2024-04-12 Thread Ralf Becker via dovecot
Dovecot version is 2.3.20, and I am trying to restore a folder hierarchy from an older
snapshot of the mailbox (the folders in question have been deleted):
sudo -u dovecot doveadm -Dv import -u p...@xyz.de -s mdbox:$(pwd)/pbs-2024-03-19/mdbox INBOX mailbox 'projekte/8-BZ/*'
I'm getting the following error:
Apr 12 10:52:18 doveadm(p...@xyz.de): Error: quota: Unknown namespace: INBOX/
I also tried restoring into a (non-existing) restore folder, Restore-2024-03-19,
and using the search query "mailbox 'projekte/8-BZ'"; all give the same result :(
Any ideas what might be wrong? I did this many times before and it worked, so
I'm puzzled ...
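For what it's worth, the quota error names "INBOX/" as an unknown namespace, which suggests the destination mailbox name is not being resolved against the namespace prefix. A minimal sketch of the prefix handling one might try (the helper logic is an assumption for illustration, not Dovecot's actual resolution code; folder names are taken from the command above):

```shell
# Hypothetical helper: qualify the destination folder with the namespace
# prefix (prefix = INBOX/ per the doveconf -n output further below), so the
# name passed to doveadm import matches an existing namespace.
prefix='INBOX/'
folder='projekte/8-BZ'
case "$folder" in
  "$prefix"*) mask="$folder" ;;            # already carries the prefix
  *)          mask="${prefix}${folder}" ;; # prepend it
esac
echo "$mask"   # INBOX/projekte/8-BZ
```

Trying the import with the fully-qualified name (and with the trailing wildcard) is a cheap experiment before digging deeper.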
Here is the full output of doveadm import command above and doveconf -n:
root@aab41229427e:/var/dovecot/imap/xyz.de# sudo -u dovecot doveadm -Dv import -u p...@xyz.de -s mdbox:$(pwd)/pbs-2024-03-19/mdbox INBOX mailbox 'projekte/8-BZ/*'
Debug: Loading modules from directory: /usr/lib/dovecot/modules/doveadm
Debug: Skipping module doveadm_acl_plugin, because dlopen() failed: /usr/lib/
dovecot/modules/doveadm/lib10_doveadm_acl_plugin.so: undefined symbol:
acl_user_module (this is usually intentional, so just ignore this message)
Debug: Skipping module doveadm_quota_plugin, because dlopen() failed: /usr/lib/
dovecot/modules/doveadm/lib10_doveadm_quota_plugin.so: undefined symbol:
quota_user_module (this is usually intentional, so just ignore this message)
Debug: Module loaded: /usr/lib/dovecot/modules/doveadm/
lib10_doveadm_sieve_plugin.so
Debug: Skipping module doveadm_fts_lucene_plugin, because dlopen() failed: /
usr/lib/dovecot/modules/doveadm/lib20_doveadm_fts_lucene_plugin.so: undefined
symbol: lucene_index_iter_deinit (this is usually intentional, so just ignore
this message)
Debug: Skipping module doveadm_fts_plugin, because dlopen() failed: /usr/lib/
dovecot/modules/doveadm/lib20_doveadm_fts_plugin.so: undefined symbol:
fts_user_get_language_list (this is usually intentional, so just ignore this
message)
Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen() failed: /
usr/lib/dovecot/modules/doveadm/libdoveadm_mail_crypt_plugin.so: undefined
symbol: mail_crypt_box_get_pvt_digests (this is usually intentional, so just
ignore this message)
Debug: Loading modules from directory: /usr/lib/dovecot/modules/doveadm
Debug: Skipping module doveadm_acl_plugin, because dlopen() failed: /usr/lib/
dovecot/modules/doveadm/lib10_doveadm_acl_plugin.so: undefined symbol:
acl_user_module (this is usually intentional, so just ignore this message)
Debug: Skipping module doveadm_quota_plugin, because dlopen() failed: /usr/lib/
dovecot/modules/doveadm/lib10_doveadm_quota_plugin.so: undefined symbol:
quota_user_module (this is usually intentional, so just ignore this message)
Debug: Module loaded: /usr/lib/dovecot/modules/doveadm/
lib10_doveadm_sieve_plugin.so
Debug: Skipping module doveadm_fts_lucene_plugin, because dlopen() failed: /
usr/lib/dovecot/modules/doveadm/lib20_doveadm_fts_lucene_plugin.so: undefined
symbol: lucene_index_iter_deinit (this is usually intentional, so just ignore
this message)
Debug: Skipping module doveadm_fts_plugin, because dlopen() failed: /usr/lib/
dovecot/modules/doveadm/lib20_doveadm_fts_plugin.so: undefined symbol:
fts_user_get_language_list (this is usually intentional, so just ignore this
message)
Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen() failed: /
usr/lib/dovecot/modules/doveadm/libdoveadm_mail_crypt_plugin.so: undefined
symbol: mail_crypt_box_get_pvt_digests (this is usually intentional, so just
ignore this message)
Apr 12 10:52:18 Debug: Loading modules from directory: /usr/lib/dovecot/modules
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/
lib01_acl_plugin.so
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/
lib01_mail_lua_plugin.so
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/
lib10_quota_plugin.so
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/
lib15_notify_plugin.so
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/
lib20_mail_log_plugin.so
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/
lib20_push_notification_plugin.so
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/
lib20_replication_plugin.so
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/
lib22_push_notification_lua_plugin.so
Apr 12 10:52:18 Debug: Loading modules from directory: /usr/lib/dovecot/
modules/doveadm
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/doveadm/
lib10_doveadm_acl_plugin.so
Apr 12 10:52:18 Debug: Module loaded: /usr/lib/dovecot/modules/doveadm/
lib10_doveadm_quota_plugin.so
Apr 12 10:52:18 Debug: Skipping module doveadm_fts_lucene_plugin, because
dlopen() failed: /usr/lib/dovecot/modules/doveadm/
lib20_doveadm_fts_lucene_plugin.so: undefined symbol: lucene_index_iter_deinit
(this is usually intentional, so just ignore this message)
Apr 12 10:52:18 Debug: Skipping module doveadm_fts_plugin, because dlopen()
failed: /

How to disable BINARY extension

2023-07-10 Thread Ralf Becker via dovecot
The reason for disabling it is the many broken mails containing unescaped equal
signs ("="); our webmailer uses BINARY when offered, and then fails to
display the mail, reporting Dovecot's error "Invalid
quoted-printable input trailer: '=' not followed by two hex digits".


I'm fully aware that the mails are broken, unfortunately all sorts of 
programs or senders generate them ...


I can use

    imap_capability = 

but this is not ideal, for multiple reasons:
- new capabilities would never show up
- pre-login they are also displayed

There seems to be no syntax like: imap_capability = -BINARY

Is there any other way to disable just BINARY?
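Since there is no subtractive syntax, one workaround sketch is to set imap_capability to the full list minus BINARY. The capability string below is an illustrative assumption; on a real server it should be taken from the actual "* CAPABILITY ..." response:

```shell
# Start from an assumed capability string (replace with your server's real
# pre-login/post-login capability list) and filter out the BINARY token.
CAPS='IMAP4rev1 SASL-IR LOGIN-REFERRALS ID ENABLE IDLE BINARY LITERAL+ NAMESPACE'
FILTERED=$(printf '%s\n' $CAPS | grep -vx 'BINARY' | tr '\n' ' ')
printf 'imap_capability = %s\n' "$FILTERED"
```

This inherits the drawback mentioned above: capabilities added by future Dovecot versions would not show up until the setting is regenerated.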

Ralf



Re: Outlook fails to connect to Dovecot submission server

2023-05-22 Thread Ralf Becker via dovecot

Hi Nikolaos,

On 22.05.23 at 15:25, Nikolaos Pyrgiotis wrote:


Have you tried adding the line below to your submission config?

submission_client_workarounds = whitespace-before-path

I tried it, also with all the other workarounds; unfortunately that (alone)
does not help :(


Ralf



Re: Outlook fails to connect to Dovecot submission server

2023-05-19 Thread Ralf Becker via dovecot

Hi Andrzej,

On 19.05.23 at 17:17, Andrzej Milewski wrote:

Hello,
I may be mistaken, but I don't see "auth_mechanisms = plain login" in
your configuration. It's possible that you are using something
different for authentication, but I don't see it in the configuration.


The config file was output with doveconf -n, which only shows
non-default values. The default is auth_mechanisms = plain, and that is
explicitly set in conf.d/10-auth.conf.


I'll try on Monday whether additionally enabling LOGIN makes any difference,
but I doubt it.
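For reference, a sketch of the change under discussion (file path as mentioned above; whether offering LOGIN helps Outlook here is exactly what is being tested, so treat this as an experiment, not a fix):

```
# conf.d/10-auth.conf -- additionally offer the legacy LOGIN mechanism
auth_mechanisms = plain login
```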


Ralf




On Wed, May 17, 2023 at 4:04 PM Ralf Becker via dovecot 
 wrote:


Dovecot 2.3.20, including its submission server, works well with all
sorts of clients except Outlook.

Outlook works / can connect to the Dovecot IMAP service with the same
certificate/TLS config, but it fails to connect using SMTPS on
port 465.
Other clients connect and send mails without problems; openssl
s_client can also connect and reports no problems.

I tried with Outlook Version 365 on Windows 11 (no cloud) and
"Microsoft® Outlook® 2021 MSO (Version 2304 Build
16.0.16327.20200) 64-bit".

I already enabled all submission_client_workarounds and lowered
min_ssl_version from TLSv1.2 to TLSv1, but that changed nothing.

I can see nothing failing in the logs, though the Outlook connection
wizard always checks IMAP and SMTP together, so it's hard to say what
the problem is.

The same two Outlook versions connect without a problem to Postfix,
authenticating via SASL against Dovecot, which also requires a minimum
TLS version of 1.2.
They just won't connect to the Dovecot submission server.

Any ideas what's wrong, or how to debug that further?

Ralf




--
Andrzej






Re: Outlook fails to connect to Dovecot submission server

2023-05-19 Thread Ralf Becker via dovecot

On 17.05.23 at 20:03, dovecot--- via dovecot wrote:
Dovecot ... submission server works well with all sorts of clients, 
but Outlook.


I thought that was M$'s intent. They purposefully design their ecosystem
to not play well with others, so the average person will think
something is wrong with the competitor's software, give up, and just
continue to use Outlook with M$ services.


Remember how IE didn't render pages properly but had a larger market
share, so most webmasters designed sites to look best in IE. That made
sites not work well in Netscape, causing the average person to think
Netscape sucked, so they continued to use IE, which in turn caused more
webmasters to only develop sites for IE.



As an open-source activist I can understand the sentiment, though for
regular users it always seems as if the server is broken :(


@Aki and @Timo: is that a known problem with the submission server that you
plan to address, or if not, how can I help debug it further?


Obviously I can go back to the traditional setup using Postfix and
SASL, if that's the only way to support Outlook ...


Ralf



Outlook fails to connect to Dovecot submission server

2023-05-17 Thread Ralf Becker via dovecot
Dovecot 2.3.20, including its submission server, works well with all
sorts of clients except Outlook.


Outlook works / can connect to the Dovecot IMAP service with the same
certificate/TLS config, but it fails to connect using SMTPS on port 465.
Other clients connect and send mails without problems; openssl
s_client can also connect and reports no problems.


I tried with Outlook Version 365 on Windows 11 (no cloud) and 
"Microsoft® Outlook® 2021 MSO (Version 2304 Build 16.0.16327.20200) 64-bit".


I already enabled all submission_client_workarounds and lowered 
min_ssl_version from TLSv1.2 to TLSv1, but that changed nothing.


I can see nothing failing in the logs, though the Outlook connection wizard
always checks IMAP and SMTP together, so it's hard to say what the
problem is.


The same two Outlook versions connect without a problem to Postfix,
authenticating via SASL against Dovecot, which also requires a minimum TLS
version of 1.2.

They just won't connect to the Dovecot submission server.

Any ideas what's wrong, or how to debug that further?
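One way to get more data out of the failing connection attempts (a sketch; these are standard Dovecot 2.3 logging settings, and the log path is an example) is to raise auth and SSL verbosity while reproducing the Outlook wizard run:

```
# Temporary debug settings while reproducing the Outlook failure
auth_verbose = yes
auth_debug = yes
verbose_ssl = yes
log_path = /var/log/dovecot-debug.log
```

Comparing that log against a session from a working client (or openssl s_client) should show where Outlook diverges.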

Ralf

# 2.3.20 (80a5ac675d): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.20 (149edcf2)
# OS: Linux 4.19.0-23-amd64 x86_64 Ubuntu 20.04.6 LTS 
# Hostname: testbox.egroupware.org
auth_master_user_separator = *
first_valid_uid = 90
log_path = /dev/stderr
mail_attribute_dict = file:%h/dovecot-metadata
mail_gid = dovecot
mail_location = mdbox:~/mdbox
mail_plugins = acl quota mail_lua notify push_notification push_notification_lua
mail_uid = dovecot
mail_vsize_bg_after_count = 100
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date index ihave duplicate 
mime foreverypart extracttext
namespace inbox {
  inbox = yes
  location = 
  mailbox Archive {
auto = subscribe
special_use = \Archive
  }
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Junk {
auto = subscribe
special_use = \Junk
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox "Sent Messages" {
auto = no
special_use = \Sent
  }
  mailbox Templates {
auto = subscribe
  }
  mailbox Trash {
auto = subscribe
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
}
namespace users {
  location = mdbox:%%h/mdbox
  prefix = user/%%n/
  separator = /
  type = shared
}
passdb {
  args = /etc/dovecot/dovecot-sql-master.conf.ext
  driver = sql
  master = yes
  result_success = continue
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  acl = vfile:/etc/dovecot/global-acls:cache_secs=300
  acl_shared_dict = file:/var/dovecot/shared-mailboxes.db
  push_lua_url = http://#hidden_use-P_to_show#@push:9501/
  push_notification_driver = lua:file=/etc/dovecot/dovecot-push.lua
  quota = count:User quota
  quota_rule = *:storage=10GB
  quota_vsizes = yes
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
protocols = " imap lmtp sieve pop3 submission"
service lmtp {
  inet_listener lmtp {
port = 24
  }
}
service submission-login {
  inet_listener smtps {
port = 465
ssl = yes
  }
}
ssl = required
ssl_cert = 


Re: Dovecot 2.3.19.1: BUG: Mailbox renaming algorithm got into a potentially infinite loop, aborting

2022-11-14 Thread Ralf Becker

On 14.11.22 at 10:06, Aki Tuomi wrote:

On 14/11/2022 10:56 EET Ralf Becker  wrote:

  
I found this from 2019 on the list:


https://www.mail-archive.com/dovecot@dovecot.org/msg78898.html

Timo mentioned it's tracked internally as DOP-1501, but I could find
nothing about it being fixed on the (public) list.

Any news on that, and is the workaround of deleting
dovecot.mailbox.log* and dovecot.list.index* still the recommendation
when running into this bug?

Ralf


Hi Ralf,

this is still not fixed, and the workaround is still valid.

Aki


Thx Aki :)

I applied the workaround now and the replication started again for that 
mailbox.


Ralf




Dovecot 2.3.19.1: BUG: Mailbox renaming algorithm got into a potentially infinite loop, aborting

2022-11-14 Thread Ralf Becker

I found this from 2019 on the list:

https://www.mail-archive.com/dovecot@dovecot.org/msg78898.html

Timo mentioned it's tracked internally as DOP-1501, but I could find
nothing about it being fixed on the (public) list.

Any news on that, and is the workaround of deleting
dovecot.mailbox.log* and dovecot.list.index* still the recommendation
when running into this bug?
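The workaround itself, sketched against a scratch directory so it can be tried safely; in production, MAILDIR would point at the affected user's index directory (an assumption based on the mail_location in use), and the user's sessions should be stopped first:

```shell
# Demonstrate the cleanup on a throwaway directory: create the files the
# workaround targets, then delete them; Dovecot rebuilds them on next access.
MAILDIR=$(mktemp -d)
touch "$MAILDIR/dovecot.mailbox.log" "$MAILDIR/dovecot.mailbox.log.2" \
      "$MAILDIR/dovecot.list.index" "$MAILDIR/dovecot.list.index.log"
rm -f "$MAILDIR"/dovecot.mailbox.log* "$MAILDIR"/dovecot.list.index*
ls -A "$MAILDIR"   # nothing left
```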

Ralf




Re: doveadm backup|sync works for every folder but INBOX

2022-11-08 Thread Ralf Becker

Hi Aki,

On 08.11.22 at 12:02, Ralf Becker wrote:

Hi Aki,

On 08.11.22 at 10:07, Aki Tuomi wrote:

I send the full log again to your private address.

Ralf

Seems I can reproduce this issue, we'll look into this.

Aki

Hi!

Can you try if

https://github.com/dovecot/core/compare/a485443e00a7e1b93ed9b7d065c89fcd2eb90865%5E..0f67c79e48fa783b658606a99cf18db8daf7884e.patch 



fixes your issue?


I will try and come back to you.

I currently have no build environment for Dovecot, and will change the
migration to use imapsync instead of doveadm sync|backup for now.


Ralf

Do you by chance build nightly containers, or have a Dockerfile building
Dovecot from the git sources?


That would make testing things like that a lot easier :)

Ralf




Re: doveadm backup|sync works for every folder but INBOX

2022-11-08 Thread Ralf Becker

Hi Aki,

On 08.11.22 at 10:07, Aki Tuomi wrote:

I send the full log again to your private address.

Ralf

Seems I can reproduce this issue, we'll look into this.

Aki

Hi!

Can you try if

https://github.com/dovecot/core/compare/a485443e00a7e1b93ed9b7d065c89fcd2eb90865%5E..0f67c79e48fa783b658606a99cf18db8daf7884e.patch

fixes your issue?


I will try and come back to you.

I currently have no build environment for Dovecot, and will change the
migration to use imapsync instead of doveadm sync|backup for now.


Ralf




Re: doveadm backup|sync works for every folder but INBOX

2022-11-03 Thread Ralf Becker

On 03.11.22 at 14:29, Aki Tuomi wrote:

On 03/11/2022 13:43 EET Ralf Becker  wrote:

  
Hi Aki,


On 03.11.22 at 12:27, Aki Tuomi wrote:

On 03/11/2022 13:23 EET Aki Tuomi  wrote:

On 03/11/2022 13:19 EET Ralf Becker  wrote:

   
Hi Aki,


On 03.11.22 at 10:54, Aki Tuomi wrote:

On 03/11/2022 11:46 EET Ralf Becker  wrote:


Hi Aki,


On 03.11.22 at 10:29, Aki Tuomi wrote:

On 03/11/2022 11:27 EET Ralf Becker  wrote:

 
Hi Aki,


On 03.11.22 at 09:12, Aki Tuomi wrote:

On 03/11/2022 10:09 EET Ralf Becker  wrote:

  
Hi Aki,


On 03.11.22 at 08:50, Aki Tuomi wrote:

On 03/11/2022 09:46 EET Ralf Becker  wrote:

   
I'm trying to migrate an old Cyrus 2.5 server to Dovecot 2.3.19 using

doveadm backup -R, which works for all folders but the INBOX itself,
which always stays empty.

The Cyrus side uses altnamespace:no and unixhierarchysep:no; it is used
as the imapc: remote in doveadm backup -R with imapc_list_prefix=INBOX.
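For context, a sketch of the imapc side of such a migration (the hostname and user template are placeholder assumptions; imapc_list_prefix=INBOX is taken from the message above):

```
# imapc settings used as the migration source (values are examples)
imapc_host = old-cyrus.example.org
imapc_port = 143
imapc_user = %u
imapc_list_prefix = INBOX
```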

Dovecot uses the following namespace to migrate into:

namespace inboxes {
     inbox = yes
     location =
     mailbox Sent {
       auto = subscribe
       special_use = \Sent
     }
     ### some more folders omitted ###
     prefix = INBOX/
     separator = /
     subscriptions = no
}

Hi!

When syncing mailboxes from another server, you should use a migration config
file which has **no** auto=subscribe or auto=create folders, as these can mess
up synchronization.

Please see https://doc.dovecot.org/admin_manual/migrating_mailboxes/ for more 
details.

Does a migration config file specified with doveadm -c  add to and
overwrite the existing Dovecot configuration for the time the command
runs, like the -o options, or do I need to start a separate server with
a full configuration to e.g. have my authentication and mailbox location
available?

Ralf


It does not add/replace/overwrite configuration; you provide a fresh config
file which is used *instead of* the default dovecot.conf.

You don't need to run a separate instance necessarily, although in some larger
migrations this has been done as well.

I created now a separate instance with a modified configuration file
with no auto=subscribe (or create), no replication and an empty storage.
doveadm config -n is attached.

Unfortunately the result is identical to my previous tries:

doveadm -o namespace/subs/location=mbox:/var/dovecot/subs -o
imapc_user='someuser' -o imapc_password='secret' -D backup -n INBOX/ -R
-u someuser@somedomain imapc: 2>&1 | tee /tmp/doveadm-backup.log

Nov 03 09:06:35 dsync(someuser@somedomain): Warning: Mailbox changes
caused a desync. You may want to run dsync again: Remote lost mailbox
GUID c92f64f79f0d1ed01e6d5b314f04886c (maybe it was just deleted?)

doveadm mailbox status -u someuser@somedomain all INBOX
INBOX messages=0 recent=0 uidnext=1 uidvalidity=1577952633 unseen=0
highestmodseq=1 vsize=0 guid=c92f64f79f0d1ed01e6d5b314f04886c
firstsaved=never

Any ideas what else to try or how to debug that further?

I can send you the full log to your personal address, if that helps ...

Ralf

You should rm -rf the target folder first. Can you attach `doveadm -D backup` 
logs? Check that it won't contain passwords.

The mailbox directory did NOT exist before, therefore no need to rm -rf it.

I'll send the logs to your private address only; I don't feel comfortable
posting them on the list.

Ralf


1. You did not delete the mailbox.

2. You are using **mbox** for subscription namespace, please don't.

Also

Nov 03 09:05:33 dsync(): Debug: brain M: Local mailbox tree: INBOX 
guid=c8adef115c8463633500effb6190 uid_validity=1667466332 uid_next=1 
subs=no last_change=0 last_subs=0
Nov 03 09:05:33 dsync(): Debug: brain S: Local mailbox tree: INBOX 
guid=c92f64f79f0d1ed01e6d5b314f04886c uid_validity=1577952633 uid_next=32746 
subs=no last_change=0 last_subs=0

This clearly shows that you did not, in fact, rm -rf the user's mailboxes prior
to running backup. Can you please try again and clean up the user from the
target mail system before running backup again?

I would also strongly recommend not using the ACL plugin while doing backup,
unless you are backing up ACLs from the source system.

I removed the ACL plugin and its config from my Dovecot configuration and
removed the subscription namespace from my doveadm backup command, but now it
fails with an error:

doveadm -o imapc_user='someuser' -o imapc_password='secret' -D backup -n
INBOX/ -R -u someuser@somedomain imapc:

Nov 03 11:12:15 doveadm(someuser@somedomain 156): Debug: Effective
uid=90, gid=101, home=/var/dovecot/imap/somedomain/someuser
Nov 03 11:12:15 doveadm(someuser@somedomain 156): Debug: Home dir not
found: /var/dovecot/imap/somedomain/someuser

Nov 03 11:12:15 doveadm(someuser@somedomain): Error: namespace
configuration error: subscriptions=yes namespace missing

doveadm config -n is attached.

Ralf

You can keep the subscription namespace in your

Re: doveadm backup|sync works for every folder but INBOX

2022-11-03 Thread Ralf Becker

Hi Aki,

On 03.11.22 at 12:27, Aki Tuomi wrote:

On 03/11/2022 13:23 EET Aki Tuomi  wrote:

On 03/11/2022 13:19 EET Ralf Becker  wrote:

  
Hi Aki,


On 03.11.22 at 10:54, Aki Tuomi wrote:

On 03/11/2022 11:46 EET Ralf Becker  wrote:

   
Hi Aki,


On 03.11.22 at 10:29, Aki Tuomi wrote:

On 03/11/2022 11:27 EET Ralf Becker  wrote:


Hi Aki,


On 03.11.22 at 09:12, Aki Tuomi wrote:

On 03/11/2022 10:09 EET Ralf Becker  wrote:

 
Hi Aki,


On 03.11.22 at 08:50, Aki Tuomi wrote:

On 03/11/2022 09:46 EET Ralf Becker  wrote:

  
I'm trying to migrate an old Cyrus 2.5 server to Dovecot 2.3.19 using

doveadm backup -R, which works for all folders but the INBOX itself,
which always stays empty.

The Cyrus side uses altnamespace:no and unixhierarchysep:no; it is used
as the imapc: remote in doveadm backup -R with imapc_list_prefix=INBOX.

Dovecot uses the following namespace to migrate into:

namespace inboxes {
    inbox = yes
    location =
    mailbox Sent {
      auto = subscribe
      special_use = \Sent
    }
    ### some more folders omitted ###
    prefix = INBOX/
    separator = /
    subscriptions = no
}

Hi!

When syncing mailboxes from another server, you should use a migration config
file which has **no** auto=subscribe or auto=create folders, as these can mess
up synchronization.

Please see https://doc.dovecot.org/admin_manual/migrating_mailboxes/ for more 
details.

Does a migration config file specified with doveadm -c  add to and
overwrite the existing Dovecot configuration for the time the command
runs, like the -o options, or do I need to start a separate server with
a full configuration to e.g. have my authentication and mailbox location
available?

Ralf


It does not add/replace/overwrite configuration; you provide a fresh config
file which is used *instead of* the default dovecot.conf.

You don't need to run a separate instance necessarily, although in some larger
migrations this has been done as well.

I created now a separate instance with a modified configuration file
with no auto=subscribe (or create), no replication and an empty storage.
doveadm config -n is attached.

Unfortunately the result is identical to my previous tries:

doveadm -o namespace/subs/location=mbox:/var/dovecot/subs -o
imapc_user='someuser' -o imapc_password='secret' -D backup -n INBOX/ -R
-u someuser@somedomain imapc: 2>&1 | tee /tmp/doveadm-backup.log

Nov 03 09:06:35 dsync(someuser@somedomain): Warning: Mailbox changes
caused a desync. You may want to run dsync again: Remote lost mailbox
GUID c92f64f79f0d1ed01e6d5b314f04886c (maybe it was just deleted?)

doveadm mailbox status -u someuser@somedomain all INBOX
INBOX messages=0 recent=0 uidnext=1 uidvalidity=1577952633 unseen=0
highestmodseq=1 vsize=0 guid=c92f64f79f0d1ed01e6d5b314f04886c
firstsaved=never

Any ideas what else to try or how to debug that further?

I can send you the full log to your personal address, if that helps ...

Ralf

You should rm -rf the target folder first. Can you attach `doveadm -D backup` 
logs? Check that it won't contain passwords.

The mailbox directory did NOT exist before, therefore no need to rm -rf it.

I'll send the logs to your private address only; I don't feel comfortable
posting them on the list.

Ralf


1. You did not delete the mailbox.

2. You are using **mbox** for subscription namespace, please don't.

Also

Nov 03 09:05:33 dsync(): Debug: brain M: Local mailbox tree: INBOX 
guid=c8adef115c8463633500effb6190 uid_validity=1667466332 uid_next=1 
subs=no last_change=0 last_subs=0
Nov 03 09:05:33 dsync(): Debug: brain S: Local mailbox tree: INBOX 
guid=c92f64f79f0d1ed01e6d5b314f04886c uid_validity=1577952633 uid_next=32746 
subs=no last_change=0 last_subs=0

This clearly shows that you did not, in fact, rm -rf the user's mailboxes prior
to running backup. Can you please try again and clean up the user from the
target mail system before running backup again?

I would also strongly recommend not using the ACL plugin while doing backup,
unless you are backing up ACLs from the source system.

I removed the ACL plugin and its config from my Dovecot configuration and
removed the subscription namespace from my doveadm backup command, but now it
fails with an error:

doveadm -o imapc_user='someuser' -o imapc_password='secret' -D backup -n
INBOX/ -R -u someuser@somedomain imapc:

Nov 03 11:12:15 doveadm(someuser@somedomain 156): Debug: Effective
uid=90, gid=101, home=/var/dovecot/imap/somedomain/someuser
Nov 03 11:12:15 doveadm(someuser@somedomain 156): Debug: Home dir not
found: /var/dovecot/imap/somedomain/someuser

Nov 03 11:12:15 doveadm(someuser@somedomain): Error: namespace
configuration error: subscriptions=yes namespace missing

doveadm config -n is attached.

Ralf

You can keep the subscription namespace in your config, otherwise you have no 
place to store subscriptions into. Just don't use mbox driver f

Re: doveadm backup|sync works for every folder but INBOX

2022-11-03 Thread Ralf Becker

Hi Aki,

On 03.11.22 at 10:54, Aki Tuomi wrote:

On 03/11/2022 11:46 EET Ralf Becker  wrote:

  
Hi Aki,


On 03.11.22 at 10:29, Aki Tuomi wrote:

On 03/11/2022 11:27 EET Ralf Becker  wrote:

   
Hi Aki,


On 03.11.22 at 09:12, Aki Tuomi wrote:

On 03/11/2022 10:09 EET Ralf Becker  wrote:


Hi Aki,


On 03.11.22 at 08:50, Aki Tuomi wrote:

On 03/11/2022 09:46 EET Ralf Becker  wrote:

 
I'm trying to migrate an old Cyrus 2.5 server to Dovecot 2.3.19 using

doveadm backup -R, which works for all folders but the INBOX itself,
which always stays empty.

The Cyrus side uses altnamespace:no and unixhierarchysep:no; it is used
as the imapc: remote in doveadm backup -R with imapc_list_prefix=INBOX.

Dovecot uses the following namespace to migrate into:

namespace inboxes {
   inbox = yes
   location =
   mailbox Sent {
     auto = subscribe
     special_use = \Sent
   }
   ### some more folders omitted ###
   prefix = INBOX/
   separator = /
   subscriptions = no
}

Hi!

When syncing mailboxes from another server, you should use a migration config
file which has **no** auto=subscribe or auto=create folders, as these can mess
up synchronization.

Please see https://doc.dovecot.org/admin_manual/migrating_mailboxes/ for more 
details.

Does a migration config file specified with doveadm -c  add to and
overwrite the existing Dovecot configuration for the time the command
runs, like the -o options, or do I need to start a separate server with
a full configuration to e.g. have my authentication and mailbox location
available?

Ralf


It does not add/replace/overwrite configuration; you provide a fresh config
file which is used *instead of* the default dovecot.conf.

You don't need to run a separate instance necessarily, although in some larger
migrations this has been done as well.

I created now a separate instance with a modified configuration file
with no auto=subscribe (or create), no replication and an empty storage.
doveadm config -n is attached.

Unfortunately the result is identical to my previous tries:

doveadm -o namespace/subs/location=mbox:/var/dovecot/subs -o
imapc_user='someuser' -o imapc_password='secret' -D backup -n INBOX/ -R
-u someuser@somedomain imapc: 2>&1 | tee /tmp/doveadm-backup.log

Nov 03 09:06:35 dsync(someuser@somedomain): Warning: Mailbox changes
caused a desync. You may want to run dsync again: Remote lost mailbox
GUID c92f64f79f0d1ed01e6d5b314f04886c (maybe it was just deleted?)

doveadm mailbox status -u someuser@somedomain all INBOX
INBOX messages=0 recent=0 uidnext=1 uidvalidity=1577952633 unseen=0
highestmodseq=1 vsize=0 guid=c92f64f79f0d1ed01e6d5b314f04886c
firstsaved=never

Any ideas what else to try or how to debug that further?

I can send the full log to your personal address, if that helps ...

Ralf

You should rm -rf the target folder first. Can you attach `doveadm -D backup` 
logs? Check that they don't contain passwords.

The mailbox directory did NOT exist before, therefore no need to rm -rf it.

I'll send the logs to your private address only; I don't feel comfortable 
posting them on the list.

Ralf


1. You did not delete the mailbox.

2. You are using **mbox** for subscription namespace, please don't.

Also

Nov 03 09:05:33 dsync(): Debug: brain M: Local mailbox tree: INBOX 
guid=c8adef115c8463633500effb6190 uid_validity=1667466332 uid_next=1 
subs=no last_change=0 last_subs=0
Nov 03 09:05:33 dsync(): Debug: brain S: Local mailbox tree: INBOX 
guid=c92f64f79f0d1ed01e6d5b314f04886c uid_validity=1577952633 uid_next=32746 
subs=no last_change=0 last_subs=0

This clearly shows that you did not, in fact, rm -rf the user's mailboxes prior 
to running backup. Can you please try again, and clean up the user from the 
target mail system before running backup again?

I would also strongly recommend not using ACL plugin while doing backup, unless 
you are backing up ACLs from source system.
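A minimal cleanup sketch along these lines — the path is an assumption based on the home directory seen in the logs and mail_location = mdbox:~/mdbox, so verify it with `doveadm user` before deleting anything:

```shell
# Hypothetical target-side cleanup before re-running doveadm backup.
# USER_HOME is an assumed path; confirm it first with:
#   doveadm user -u someuser@somedomain
USER_HOME=/var/dovecot/imap/somedomain/someuser
rm -rf "$USER_HOME/mdbox"   # wipe the user's mdbox storage on the target
```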


I removed the ACL plugin and its config from my Dovecot config and removed 
the subscription namespace from my doveadm backup command, but now it 
fails with an error:


doveadm -o imapc_user='someuser' -o imapc_password='secret' -D backup -n 
INBOX/ -R -u someuser@somedomain imapc:


Nov 03 11:12:15 doveadm(someuser@somedomain 156): Debug: Effective 
uid=90, gid=101, home=/var/dovecot/imap/somedomain/someuser
Nov 03 11:12:15 doveadm(someuser@somedomain 156): Debug: Home dir not 
found: /var/dovecot/imap/somedomain/someuser


Nov 03 11:12:15 doveadm(someuser@somedomain): Error: namespace 
configuration error: subscriptions=yes namespace missing
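This error appears when no remaining namespace accepts subscriptions, i.e. every configured namespace sets subscriptions=no. A sketch of one way to satisfy it, modeled on the hidden helper namespace from the attached config (the namespace `subscriptions` setting defaults to yes when left unset):

```
namespace subs {
  hidden = yes
  list = no
  location =    # empty: falls back to mail_location
  prefix =
  separator = /
  # no "subscriptions =" line here: it defaults to yes,
  # satisfying the "subscriptions=yes namespace missing" check
}
```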


doveadm config -n is attached.

Ralf

--
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0
# 2.3.19.1 (9b53102964): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.1


Re: doveadm backup|sync works for every folder but INBOX

2022-11-03 Thread Ralf Becker

# 2.3.19.1 (9b53102964): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.19 (4eae2f79)
# OS: Linux 4.15.0-140-generic x86_64  
# Hostname: cb63c26e434e
auth_cache_negative_ttl = 2 mins
auth_cache_size = 10 M
auth_cache_ttl = 5 mins
auth_master_user_separator = *
auth_mechanisms = plain login
auth_username_chars = 
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@#"
default_client_limit = 3500
default_process_limit = 512
disable_plaintext_auth = no
doveadm_port = 26
first_valid_uid = 90
imapc_features = rfc822.size fetch-headers
imapc_host = 10.44.88.3
imapc_list_prefix = INBOX
listen = *
log_path = /dev/stderr
login_greeting = Dovecot FRA.khs ready
mail_access_groups = dovecot
mail_attribute_dict = file:%h/dovecot-metadata
mail_fsync = never
mail_gid = dovecot
mail_location = mdbox:~/mdbox
mail_log_prefix = "%s(%u %p): "
mail_max_userip_connections = 200
mail_plugins = acl quota notify replication mail_log
mail_prefetch_count = 20
mail_uid = dovecot
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave vnd.dovecot.debug
mbox_min_index_size = 1000 B
mdbox_rotate_size = 50 M
namespace inboxes {
  inbox = yes
  location = 
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
}
namespace subs {
  hidden = yes
  list = no
  location = 
  prefix = 
  separator = /
}
namespace users {
  location = mdbox:%%h/mdbox
  prefix = user/%%n/
  separator = /
  subscriptions = no
  type = shared
}
passdb {
  args = /etc/dovecot/dovecot-dict-master-auth.conf
  driver = dict
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/dovecot/imap/%d/shared-mailboxes.db
  mail_log_events = delete undelet




doveadm backup|sync works for every folder but INBOX

2022-11-03 Thread Ralf Becker
 dict(file): dict 
destroyed
Nov 02 10:55:45 dsync(someuser@somedomain): Debug: dict(file): Waiting 
for dict to finish pending operations
Nov 02 10:55:45 dsync(someuser@somedomain): Debug: dict(file): dict 
destroyed
Nov 02 10:55:45 dsync(someuser@somedomain): Debug: 
imapc(10.44.88.3:143): Disconnected

Nov 02 10:55:45 dsync(someuser@somedomain): Debug: User session is finished
Nov 02 10:55:45 dsync(someuser@somedomain): Debug: dict(file): dict 
destroyed
Nov 02 10:55:45 dsync(someuser@somedomain): Debug: dict(file): Waiting 
for dict to finish pending operations
Nov 02 10:55:45 dsync(someuser@somedomain): Debug: dict(file): dict 
destroyed
Nov 02 10:55:45 doveadm(40708): Debug: auth-master: conn 
unix:/run/dovecot/auth-userdb (pid=1,uid=0): Disconnected: Connection 
closed (fd=9)


doveadm mailbox status -u someuser@somedomain all INBOX
INBOX messages=0 recent=0 uidnext=1 uidvalidity=1667223083 unseen=0 
highestmodseq=1 vsize=0 guid=c92f64f79f0d1ed01e6d5b314f04886c 
firstsaved=never


The interesting part is that INBOX somehow seems to have two different 
GUIDs, depending on how I sync it (compare the two doveadm mailbox 
status outputs).


In a tcpdump of the IMAP traffic I can see a STATUS "INBOX" (UIDNEXT 
UIDVALIDITY) but no EXAMINE "INBOX" or anything else I would expect 
from the sync.
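As an alternative to tcpdump, the imapc side can write raw per-session protocol logs via Dovecot's `imapc_rawlog_dir` setting; the directory must already exist and be writable by the mail user. A sketch with assumed paths:

```shell
# Assumed log directory; each imapc session writes its protocol dump there
RAWLOG_DIR=/tmp/imapc-rawlog
mkdir -p "$RAWLOG_DIR"
CMD="doveadm -o imapc_rawlog_dir=$RAWLOG_DIR -D backup -R -u someuser@somedomain imapc:"
echo "$CMD"
```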


I can provide the full output of doveadm backup/sync and a pcap of the 
IMAP traffic, but not to the public list.


Any idea what's wrong with the sync of the INBOX or other suggestions / 
imapc parameters to use?


Ralf



Re: v2.3.19.1 released

2022-06-20 Thread Ralf Becker

Hi Timo,

attached is the log with auth_debug=true from the starting process and 
running "doveadm auth test ralfimapt...@egroupware.org" and one other 
regular passdb lookup.


I replaced passwords and the customer email with XX.

I also ran "doveadm user '*'" to test the iteration, which worked.

Ralf


On 20.06.22 at 13:32, Ralf Becker wrote:

Hi Timo,

On 20.06.22 at 12:17, Timo Sirainen wrote:

On 20. Jun 2022, at 10.03, Ralf Becker  wrote:
   Fixes: Panic: file userdb-blocking.c: line 125 
(userdb_blocking_iter_next): assertion failed: (ctx->conn != NULL)
As the above Panic is fixed I tried again (see my attached mail to 
the 2.3.19 release) and I can confirm I no longer get the Panic, 
BUT authentication is still NOT working :(


Reverting to a container with Dovecot 2.3.16 gets everything 
working again.


We use an hourly updated local SQLite database and a dict for userdb 
and passdb.


Is the usage of multiple backends no longer supported, or did something 
in that regard change between 2.3.16 and 2.3.19.1?
We have lots of tests using multiple backends for authentication, and 
lots of people are using many passdbs/userdbs in production. I was 
only aware of iteration being broken with multiple userdbs, since 
that's not used so much. And we added a test to verify that multiple 
userdb iteration is actually returning results from both userdbs, so 
that shouldn't be completely broken either.


So I'd need more details of what exactly goes wrong and how. Is it 
the authentication or the iteration that is now broken?


I have only seen authentication errors in `doveadm log errors` and from our 
monitoring trying to access the backend with user credentials.



Logs with auth_debug=yes would likely help.


I will get you the logs tonight; I don't want to switch (one leg of) the 
production system during daytime.

I can then also try e.g. doveadm user -A to check the iteration.


Also:

Here's the relevant part of my config (full doveadm config -n is 
attached):


userdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
userdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
passdb {
  args = /etc/dovecot/dovecot-dict-master-auth.conf
  driver = dict
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}

What do these external conf files contain?


/etc/dovecot/dovecot-sql.conf:

driver = sqlite
connect = /etc/dovecot/users.sqlite

#password_query = SELECT userid AS username, domain, password \
#  FROM users WHERE userid = '%n' AND domain = '%d'
#user_query = SELECT home, uid, gid FROM users WHERE userid = '%n' AND 
domain = '%d'

# return no userdb, as db contains only user-names
#user_query = SELECT home,NULL AS uid,NULL AS gid FROM users WHERE 
userid = '%n' AND domain = '%d'

user_query = SELECT home,NULL AS uid,NULL AS gid, \
    '*:bytes='||(quota*1048576) AS quota_rule, \
    userid||'@'||domain AS master_user, \
    LOWER(REPLACE(groups||',', ',', '@'||domain||',')) AS 
acl_groups \

    FROM users WHERE userid = '%n' AND domain = '%d'

# For using doveadm -A:
iterate_query = SELECT userid AS username, domain FROM users
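As an aside, the `'*:bytes='||(quota*1048576)` expression in the user_query above turns a per-user quota in megabytes into a Dovecot quota rule in bytes. A quick shell illustration of the same arithmetic, using a hypothetical 1024 MB quota:

```shell
# Same arithmetic as the SQL expression, for an assumed 1024 MB quota
QUOTA_MB=1024
QUOTA_RULE="*:bytes=$((QUOTA_MB * 1048576))"
echo "$QUOTA_RULE"   # -> *:bytes=1073741824
```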

/etc/dovecot/dovecot-dict-auth.conf:

uri = proxy:/var/run/dovecot_auth_proxy/socket:somewhere
#uri = proxy:10.44.99.180:2001:somewhere

password_key = passdb/%u/%w
user_key = userdb/%u
iterate_disable = yes
#iterate_disable = no
#iterate_prefix = userdb/
default_pass_scheme = md5
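For reference, the dict passdb builds its lookup key by variable expansion; a plain-shell illustration (not Dovecot code) of how `password_key = passdb/%u/%w` expands for a given user and password:

```shell
# %u = full username, %w = plaintext password (illustration only;
# MUSER/MPASS are made-up example values)
MUSER="someuser@somedomain"
MPASS="secret"
KEY="passdb/${MUSER}/${MPASS}"
echo "$KEY"   # -> passdb/someuser@somedomain/secret
```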

/etc/dovecot/dovecot-dict-master-auth.conf:

uri = proxy:/var/run/dovecot_auth_proxy/socket:somewhere
#uri = proxy:10.44.99.180:2001:somewhere

#password_key = master/%{login_domain}/%u/%w
password_key = master/%{login_user}/%u/%w
iterate_disable = yes
default_pass_scheme = md5

Thanks :)

Ralf





dovecot-auth_debug.log.bz2
Description: BZip2 compressed data



Re: v2.3.19.1 released

2022-06-20 Thread Ralf Becker

Hi Aki,

On 14.06.22 at 12:24, Aki Tuomi wrote:

Hi everyone!

Due to a severe bug in doveadm deduplicate, we are releasing patch release 
2.3.19.1. Please find it at locations below:

https://dovecot.org/releases/2.3/dovecot-2.3.19.1.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.19.1.tar.gz.sig
Binary packages in https://repo.dovecot.org/
Docker images in https://hub.docker.com/r/dovecot/dovecot

Aki Tuomi
Open-Xchange oy

---

- doveadm deduplicate: Non-duplicate mails were deleted. v2.3.19 regression.
- auth: Crash would occur when iterating multiple backends.
   Fixes: Panic: file userdb-blocking.c: line 125 (userdb_blocking_iter_next): 
assertion failed: (ctx->conn != NULL)


As the above Panic is fixed I tried again (see my attached mail to the 
2.3.19 release) and I can confirm I no longer get the Panic, BUT 
authentication is still NOT working :(


Reverting to a container with Dovecot 2.3.16 gets everything 
working again.


We use an hourly updated local SQLite database and a dict for userdb and 
passdb.


Is the usage of multiple backends no longer supported, or did something 
in that regard change between 2.3.16 and 2.3.19.1?


Here's the relevant part of my config (full doveadm config -n is attached):

userdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}
userdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
passdb {
  args = /etc/dovecot/dovecot-dict-master-auth.conf
  driver = dict
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}

The SQLite DB is used for listing all users and for keeping replication 
running even if the dict is unavailable.

Any ideas what might be the cause or how to narrow the problem down?

Ralf

--- Begin Message ---

After updating to 2.3.19 (from 2.3.16) passdb and userdb lookups fail:

root@backup:~# doveadm user r...@egroupware.org; doveadm log errors

userdb lookup: user r...@egroupware.org doesn't exist
field    value

May 15 07:22:18 Panic: auth: file userdb-blocking.c: line 124 
(userdb_blocking_iter_next): assertion failed: (ctx->conn != NULL)
May 15 07:22:18 Error: auth: Raw backtrace: 
/usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x41) [0x7f019a651c91] 
-> /usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x22) [0x7f019a651db2] 
-> /usr/lib/dovecot/libdovecot.so.0(+0x10b0bb) [0x7f019a65f0bb] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x10b157) [0x7f019a65f157] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x5d375) [0x7f019a5b1375] -> 
dovecot/auth [0 wait, 0 passdb, 0 userdb](+0x157a7) [0x55e256d287a7] -> 
dovecot/auth [0 wait, 0 passdb, 0 userdb](+0x1954b) [0x55e256d2c54b] -> 
dovecot/auth [0 wait, 0 passdb, 0 userdb](+0x36ca7) [0x55e256d49ca7] -> 
dovecot/auth [0 wait, 0 passdb, 0 userdb](+0x2ab86) [0x55e256d3db86] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_handle_timeouts+0x15f) 
[0x7f019a67576f] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xcf) 
[0x7f019a67702f] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x54) 
[0x7f019a675a54] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x40) 
[0x7f019a675bc0] -> 
/usr/lib/dovecot/libdovecot.so.0(master_service_run+0x17) 
[0x7f019a5e7207] -> dovecot/auth [0 wait, 0 passdb, 0 
userdb](main+0x3c8) [0x55e256d29588] -> 
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f019a2de0b3] 
-> dovecot/auth [0 wait, 0 passdb, 0 userdb](_start+0x2e) [0x55e256d2976e]
May 15 07:22:19 Fatal: auth: master: service(auth): child 19 killed with 
signal 6 (core dumped)
May 15 07:22:19 Error: replicator: auth-master: userdb list: 
Disconnected unexpectedly
May 15 07:22:19 Error: replicator: listing users failed, can't replicate 
existing data
May 15 07:22:19 Error: doveadm(arash 2stud...@bb-trunk.egroupware.de): 
User doesn't exist
May 15 07:22:19 Error: doveadm(arash teac...@bb-trunk.egroupware.de): 
User doesn't exist
May 15 07:22:20 Error: doveadm(christoph 
thys...@bb-trunk.egroupware.de): User doesn't exist
May 15 07:23:21 Error: doveadm(arash stud...@bb-trunk.egroupware.de): 
User doesn't exist
May 15 07:24:02 Error: 
doveadm(schie...@uni-kl.de@bb-trunk.egroupware.de): User doesn't exist
May 15 07:24:07 Error: doveadm(sab...@uni-kl.de@bb-trunk.egroupware.de): 
User doesn't exist
May 15 07:24:24 Error: 
doveadm(ralf.imapt...@outdoor-training.de@bb-trunk.egroupware.de): User 
doesn't exist
May 15 07:24:31 Error: doveadm(arash to...@bb-trunk.egroupware.de): User 
doesn't exist
May 15 07:24:31 Error: 
doveadm(becke...@uni-kl.de@bb-trunk.egroupware.de): User doesn't exist
May 15 07:24:49 Error: 
doveadm(olat.vcrp.de:2723414...@bb-trunk.egroupware.de): User doesn't exist
May 15 07:24:56 Error: 

Re: Panic: file userdb-blocking with Dovecot 2.3.19

2022-05-25 Thread Ralf Becker

Hi Niklas,

I reported the same error in the "Dovecot v2.3.19 released" thread and 
Aki responded with:


> Thank you for reporting this issue. I can reproduce it locally, and 
we'll take a look at it.


So let's hope for the best that it gets fixed in the next release.

Ralf


On 24.05.22 at 16:00, Niklas Meyer wrote:


Hello all,

since we've been testing the new Dovecot release in the mailcow 
project, we've come across a curious new error with Dovecot:


auth: Panic: file userdb-blocking.c: line 124 
(userdb_blocking_iter_next): assertion failed: (ctx->conn != NULL)


System Information:

Dovecot Version: 2.3.19 (b3ad6004dc)
OS: Debian 11 (dovecot is running in a docker container)
CPU: x86
Filesystem: ext4

The error occurs only with the newest Dovecot release (2.3.19); the 
config hasn't been changed.


In the attachment you can find all the (hopefully) helpful information.

Maybe it is a simple fix.

Kind regards and thanks for your help.

--
The mailcow Team
- Niklas
---
The Infrastructure Company GmbH
Parkstr. 42
47877 Willich

Handelsregister: Amtsgericht Krefeld, HRB 15904
USt-IdNr.: DE308854956
Geschäftsführer: Martin Vogt

Note: The content of this e-mail is confidential and intended only for 
the recipient named in the message.
It is strictly forbidden to forward any part of this message to third 
parties without the sender's written consent.
If you have received this message in error, please reply to it and then 
delete it, so that we can make sure such a mistake does not happen again.





Re: Dovecot v2.3.19 released: User/PassDB lookups fail after update

2022-05-15 Thread Ralf Becker
cting to the LDAP server and
   aborting LDAP requests earlier.
- auth: Process crashed if userdb iteration was attempted while auth-workers
   were already full handling auth requests.
- auth: db-oauth2: Using %{oauth2:name} variables caused unnecessary
   introspection requests.
- dict: Timeouts may have been leaked at deinit.
- director: Ring may have become unstable if a backend's tag was changed.
   It could also have caused director process to crash.
- doveadm kick: Numeric parameter was treated as IP address.
- doveadm: Proxying can panic when flushing print output. Fixes
   Panic: file ioloop.c: line 865 (io_loop_destroy): assertion failed:
   (ioloop == current_ioloop).
- doveadm sync: BROKENCHAR was wrongly changed to '_' character when
   migrating mailboxes. This was set by default to %, so any mailbox
   names containing % characters were modified to "_25".
- imapc: Copying or moving mails with doveadm to an imapc mailbox could
   have produced "Error: Syncing mailbox '[...]' failed" Errors. The
   operation itself succeeded but attempting to sync the destination
   mailbox failed.
- imapc: Prevent index log synchronization errors when two or more imapc
   sessions are adding messages to the same mailbox index files, i.e.
   INDEX=MEMORY is not used.
- indexer: Process was slowly leaking memory for each indexing request.
- lib-fts: fts header filters caused binary content to be sent to the
   indexer with non-default configuration.
- doveadm-server: Process could hang in some situations when printing
   output to TCP client, e.g. when printing doveadm sync state.
- lib-index: dovecot.index.log files were often read and parsed entirely,
   rather than only the parts that were actually necessary. This mainly
   increased CPU usage.
- lmtp-proxy: Session ID forwarding would cause same session IDs being
   used when delivering same mail to multiple backends.
- log: Log prefix update may have been lost if log process was busy.
   This could have caused log prefixes to be empty or in some cases
   reused between sessions, i.e. log lines could have been logged for the
   wrong user/session.
- mail_crypt: Plugin crashes if it's loaded only for some users. Fixes
   Panic: Module context mail_crypt_user_module missing.
- mail_crypt: When LMTP was delivering mails to both recipients with mail
   encryption enabled and not enabled, the non-encrypted recipients may
   have gotten mails encrypted anyway. This happened when the first
   recipient was encrypted (mail_crypt_save_version=2) and the 2nd
   recipient was not encrypted (mail_crypt_save_version=0).
- pop3: Session would crash if empty line was sent.
- stats: HTTP server leaked memory.
- submission-login: Long credentials, such as OAUTH2 tokens, were refused
   during SASL interactive due to submission server applying line length
   limits.
- submission-login: When proxying to remote host, authentication was not
   using interactive SASL when logging in using long credentials such as
   OAUTH2 tokens. This caused authentication to fail due to line length
   constraints in SMTP protocol.
- submission: Terminating the client connection with QUIT command after
   mail transaction is started with MAIL command and before it is
   finished with DATA/BDAT can cause a segfault crash.
- virtual: doveadm search queries with mailbox-guid as the only parameter
   crashes: Panic: file virtual-search.c: line 77 (virtual_search_get_records):
   assertion failed: (result != 0)



--
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0
# 2.3.19 (b3ad6004dc): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.19 (4eae2f79)
# OS: Linux 4.15.0-176-generic x86_64 Ubuntu 20.04.4 LTS 
# Hostname: f7cd89ea62ff
auth_cache_negative_ttl = 2 mins
auth_cache_size = 10 M
auth_cache_ttl = 5 mins
auth_master_user_separator = *
auth_mechanisms = plain login
auth_username_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@#"
default_client_limit = 3500
default_process_limit = 512
disable_plaintext_auth = no
doveadm_password = # hidden, use -P to show it
doveadm_port = 12345
first_valid_uid = 90
listen = *
log_path = /dev/stderr
login_greeting = Dovecot KA.nfs ready
mail_access_groups = dovecot
mail_attribute_dict = file:%h/dovecot-metadata
mail_gid = dovecot
mail_location = mdbox:~/mdbox
mail_log_prefix = "%s(%u %p): "
mail_max_userip_connections = 200
mail_plugins = acl quota notify replication mail_log mail_lua notify push_notification push_notification_lua
mail_uid = dovecot
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave vnd.dovecot.debug
m

Re: Containerize dovecot?

2021-08-24 Thread Ralf Becker

Hi Rob,

On 24.08.21 at 09:13, MRob wrote:
Hello, has anyone here containerized Dovecot? May I ask for general advice
and experiences please? Are there any recommended articles/tutorials for
containerized deployment and auto-scaling? Thank you.


We (www.egroupware.org) run Dovecot only containerized:

* as mail-server addon to our groupware server:
    - https://github.com/EGroupware/egroupware/wiki/EGroupwareMail
    - 
https://github.com/EGroupware/build.opensuse.org/tree/master/server:eGroupWare/egroupware-mail


* for our own SAAS hosting with multiple directors and backend-pairs

We use 2 different containers:

a) based on Dovecot CE repo including Lua for our push notifications
b) based on Alpine for directors, SASL and backend-pairs without push

So far we're not autoscaling anything.

Ralf





Re: Replication stalled by failed attempts to read attachments (mail_attachment_dir)

2021-08-09 Thread Ralf Becker

Thanks for the explanation Timo :)

As the migrated server with dbox and mail_attachment_dir is only a
temporary step to replicate into our regular server with mdbox, I now
have a script listening for the errors and creating symlinks for the
changed mailbox GUIDs.


Sending a periodic "doveadm replicator replicate -f '*'" triggers the
errors and the script fixes them. It is only a matter of time until I
am done with it.


Ralf


On 09.08.21 at 11:08, Timo Sirainen wrote:

On 9. Aug 2021, at 10.41, Ralf Becker  wrote:

Made some progress: the attachments are not lost, the new Dovecot server (it
was a migration from 2.2.19 to 2.3.15 on a different host) searches for them
under a partially different filename:

Aug 09 08:26:19 doveadm: Error: dsync(61c8ab10dbe7): 
read(attachments-connector(/var/dovecot/imap/$domain/$user/mailboxes/INBOX/dbox-Mails/u.23306))
 failed: 
read(/var/dovecot/imap/attachments/99/93/999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-e8584c2cf5a10b6164007dc04144-23306[base64:19
 b/l]) failed: 
open(/var/dovecot/imap/attachments/99/93/999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-e8584c2cf5a10b6164007dc04144-23306)
 failed: No such file or directory (last sent=mail, last recv=mail_request 
(EOL))

..

So the questions are:
- what is that 3rd part

That's the mailbox GUID. dsync and replication are supposed to preserve them.
You could check them with:

doveadm mailbox status -u user guid '*'


- how could it have changed by the migration
- is there a way to force it back to the existing file-names

Looks like they have changed. You could change the GUIDs afterwards also:

doveadm mailbox update -u user --guid  mailboxname

BTW. The GUID can also be used to see its creation timestamp:

578fed299419c150550c838cbfe1 = Fri  7 Dec 00:17:56 EET 2012
e8584c2cf5a10b6164007dc04144 = Thu  5 Aug 11:31:49 EEST 2021

Using guid2date.sh:

#!/bin/bash
# guid2date.sh: decode the creation timestamp embedded in a Dovecot mailbox GUID

guid=$1
# characters 9-16 hold the Unix time as little-endian hex;
# swap the four byte pairs to get a normal big-endian hex number
hex=`printf $guid|cut -c 9-16|sed 's/\(..\)\(..\)\(..\)\(..\)/\4\3\2\1/'`
dec=`printf "%d" 0x$hex`
time=`date -d "1970-01-01 UTC $dec seconds"`

printf "$guid\nhex: $hex\ndec: $dec\ntime: $time\n"
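Timo's two example timestamps can be re-derived offline; a minimal function form of the same byte-swap logic, assuming a POSIX shell with coreutils `cut`/`sed`:

```shell
#!/bin/sh
# Decode the creation time from a Dovecot mailbox GUID: characters 9-16
# carry the Unix timestamp as little-endian hex, so the four byte pairs
# are reversed before converting to decimal.
guid_to_epoch() {
    hex=$(echo "$1" | cut -c 9-16 | sed 's/\(..\)\(..\)\(..\)\(..\)/\4\3\2\1/')
    printf '%d\n' "0x$hex"
}

guid_to_epoch 578fed299419c150550c838cbfe1   # 1354832276 = 2012-12-06 22:17:56 UTC
guid_to_epoch e8584c2cf5a10b6164007dc04144   # 1628152309 = 2021-08-05 08:31:49 UTC
```

Both values match the EET/EEST local times quoted above (UTC+2 and UTC+3 respectively).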







Re: Replication stalled by failed attempts to read attachments (mail_attachment_dir)

2021-08-09 Thread Ralf Becker
Made some progress: the attachments are not lost, the new Dovecot server 
(it was a migration from 2.2.19 to 2.3.15 on a different host) searches 
for them under a partially different filename:


Aug 09 08:26:19 doveadm: Error: dsync(61c8ab10dbe7): 
read(attachments-connector(/var/dovecot/imap/$domain/$user/mailboxes/INBOX/dbox-Mails/u.23306)) 
failed: 
read(/var/dovecot/imap/attachments/99/93/999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-e8584c2cf5a10b6164007dc04144-23306[base64:19 
b/l]) failed: 
open(/var/dovecot/imap/attachments/99/93/999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-e8584c2cf5a10b6164007dc04144-23306) 
failed: No such file or directory (last sent=mail, last 
recv=mail_request (EOL))


Looking at the filesystem:

root@ka-nfs-mail:~# ls -l 
/var/dovecot/imap/attachments/99/93/999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-e8584c2cf5a10b6164007dc04144-23306
ls: cannot access 
'/poolN/nordenham/attachments/99/93/999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-e8584c2cf5a10b6164007dc04144-23306': 
No such file or directory


root@ka-nfs-mail:~# ls -l 
/var/dovecot/imap/attachments/99/93/999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-*-23306
-rw--- 1 90 systemd-journal 605950 Jul 19 09:18 
/var/dovecot/imap/attachments/99/93/999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-578fed299419c150550c838cbfe1-23306


So the attachment still exists (verified by its content), but the 3rd 
dash-separated filename part has changed:

1. configured (mail_attachment_hash) sha256 hash of the attachment-content
2. guid_128 
(https://github.com/dovecot/core/blob/a5209c83c3a82386c94d466eec5fea394973e88f/src/lib-storage/index/index-attachment.c#L114)

3. no idea, but changed
4. uid of the mail
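The split itself can be done in plain POSIX shell; a sketch using the filename from the log lines above (per Timo's reply further up in this thread, the 3rd part turned out to be the mailbox GUID):

```shell
#!/bin/sh
# Split a mail_attachment_dir filename into its four dash-separated parts.
# The filename is taken verbatim from the error log above.
f=999382f10d91c26e28f964efd3039f42041b73c00ea71da86481476e7adb1ce5a50653b2814ba57b9e8a4f5284203f1b4d3ec0c617de04d750a42b9d0cfb7855-7ca8b3186e43f560fa19838cbfe1-578fed299419c150550c838cbfe1-23306

IFS=- read -r hash attach_guid mbox_guid uid <<EOF
$f
EOF

echo "hash:         $hash"
echo "guid_128:     $attach_guid"
echo "mailbox GUID: $mbox_guid"
echo "uid:          $uid"
```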

So the questions are:
- what is that 3rd part
- how could it have changed by the migration
- is there a way to force it back to the existing file-names

Unfortunately I found no documentation other than the source code on GitHub 
about the inner workings of that attachment store :(


Any help or ideas would be really appreciated :)

Ralf


On 07.08.21 at 19:47, Ralf Becker wrote:
A separate mail-attachment store and replication seems to be a very bad 
idea!


If anything goes wrong, e.g. a full filesystem or an accidentally deleted 
attachment file, you won't get your replication up again :(


The only way I found so far is grepping the log for the error and using 
that to expunge the whole mails, something like:


docker logs -f dovecot 2>&1 |
grep 'Error: dsync(61c8ab10dbe7): read(attachments-connector' |
cut -d/ -f5,6,8,10 | cut -d ')' -f1 | sort | uniq | sed -e 's#/\(u.\)\?# #g' |
while read domain user mailbox uid; do
    doveadm expunge -u $user@$domain mailbox $mailbox uid $uid
done

(The above deletes the mail from both nodes!)

While having the above running on the node with the problem, you need 
to trigger a full sync on the *other* node with:


    doveadm replicator replicate -f $user@$domain

until there are no more errors and

    watch doveadm replicator status $user@$domain

shows the mailbox in sync again.

I'm happy to hear better suggestions, especially something more automatic 
and, possibly, only removing the missing attachment and not the 
whole mail.


Ralf


On 05.08.21 at 18:03, Ralf Becker wrote:
I'm migrating an older Dovecot 2.2.19 installation with dbox and 
mail_attachment_dir to 2.3.15 with replication to a second server.


The storage from the old server was rsync'ed to a new server running 
Dovecot 2.3.15 in a container using a similar configuration with dbox 
and mail_attachments. That server alone seems to run fine.


As a next step I tried to establish replication to the second empty 
server, which is configured to use mdbox and no mail_attachment_dir, 
as the rest of my Dovecot servers. Looking at "doveadm replicator 
status '*'" some of the mailboxes replicate correctly, but roughly half 
of them fail :(


Looking at the Dovecot log with mail_debug=yes shows the replication 
fails on the source (server with dbox and mail_attachment_dir) 
typically with one of two errors:


a) attachment has different size:

Aug 05 15:41:58 doveadm: Debug: Mailbox INBOX: UID 30: Looked up 
field date.received from mail cache
Aug 05 15:41:58 doveadm: Err

Re: Replication stalled by failed attempts to read attachments (mail_attachment_dir)

2021-08-07 Thread Ralf Becker

A separate mail-attachment store and replication seems to be a very bad idea!

If anything goes wrong, e.g. a full filesystem or an accidentally deleted 
attachment file, you won't get your replication up again :(


The only way I found so far is grepping the log for the error and using that 
to expunge the whole mails, something like:


docker logs -f dovecot 2>&1 |
grep 'Error: dsync(61c8ab10dbe7): read(attachments-connector' |
cut -d/ -f5,6,8,10 | cut -d ')' -f1 | sort | uniq | sed -e 's#/\(u.\)\?# #g' |
while read domain user mailbox uid; do
    doveadm expunge -u $user@$domain mailbox $mailbox uid $uid
done

(The above deletes the mail from both nodes!)
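The cut/sed field extraction used above can be dry-run against a fabricated error line (the domain and user below are made up; the rest mirrors the log format quoted in this thread):

```shell
#!/bin/sh
# Dry-run of the field extraction on a fabricated dsync error line.
line='Aug 09 08:26:19 doveadm: Error: dsync(61c8ab10dbe7): read(attachments-connector(/var/dovecot/imap/example.org/alice/mailboxes/INBOX/dbox-Mails/u.23306)) failed: ...'

# '/'-separated fields 5,6,8,10 are domain, user, mailbox and u.<uid>;
# everything from the first ')' on is dropped, then '/' plus the
# optional 'u.' prefix become spaces
echo "$line" | cut -d/ -f5,6,8,10 | cut -d ')' -f1 | sed -e 's#/\(u.\)\?# #g'
# prints: example.org alice INBOX 23306
```

Note that field 8 only captures the first path component, so nested mailboxes such as projekte/8-BZ would likely shift the fields.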

While having the above running on the node with the problem, you need to 
trigger a full sync on the *other* node with:


    doveadm replicator replicate -f $user@$domain

until there are no more errors and

    watch doveadm replicator status $user@$domain

shows the mailbox in sync again.

I'm happy to hear better suggestions, especially something more automatic 
and, possibly, only removing the missing attachment and not the whole mail.


Ralf


On 05.08.21 at 18:03, Ralf Becker wrote:
I'm migrating an older Dovecot 2.2.19 installation with dbox and 
mail_attachment_dir to 2.3.15 with replication to a second server.


The storage from the old server was rsync'ed to a new server running 
Dovecot 2.3.15 in a container using a similar configuration with dbox 
and mail_attachments. That server alone seems to run fine.


As a next step I tried to establish replication to the second empty 
server, which is configured to use mdbox and no mail_attachment_dir, 
as the rest of my Dovecot servers. Looking at "doveadm replicator 
status '*'" some of the mailboxes replicate correctly, but roughly half 
of them fail :(


Looking at the Dovecot log with mail_debug=yes shows the replication 
fails on the source (server with dbox and mail_attachment_dir) 
typically with one of two errors:


a) attachment has different size:

Aug 05 15:41:58 doveadm: Debug: Mailbox INBOX: UID 30: Looked up field 
date.received from mail cache
Aug 05 15:41:58 doveadm: Error: dsync(cdaf50f7580e): 
read(attachments-connector(/var/dovecot/imap///mailboxes/INBOX/dbox-Mails/u.30)) 
failed: 
read(/var/dovecot/imap/attachments/64/4e/644ecfbc09223b8ccafc6f36aa86603cde91e76716b9b5ed0cf875bc0e183b6b4581d671f8321647be99f26d2d22088e999714529e189b25a6e7d5b428fbf5d5-1a6b701e68c9c050b10a838cbfe1-f86a701e68c9c050b10a838cbfe1-30[base64:18 
b/l]) failed: Stream is larger than expected (262658 > 262657, eof=1) 
(last sent=mail, last recv=mail_request (EOL))


showing on the other log as:

Aug 05 15:49:18 doveadm: Debug: Mailbox INBOX: UID 29: Looked up field 
guid from mail cache
Aug 05 15:49:19 doveadm: Error: dsync(142878d8d6bf): read() failed: 
read(10.44.99.1) failed: dot-input stream ends without '.' line (last 
sent=mail_request (EOL), last recv=mail)


b) attachment is not found at all:

Aug 05 15:43:58 doveadm(@ 1441): Debug: fs-sis: fs-sis: 
fs-posix: 
open(/var/dovecot/imap/attachments/e2/0a/e20ab165f3a2e7164520341cddb06cd053dc1a87fdd5c6360b45c20a2c9d33576c8ea2daa42062f9bb0fd94f9c5f9919ad86fb11e29effe3bdf4e853fa7c0e1d-20a0b63613e6c050620b838cbfe1-c8d5221c39a40b6125097dc04144-3) 
failed: No such file or directory
Aug 05 15:43:58 doveadm(@ 1441): Error: 
dsync(cdaf50f7580e): 
read(attachments-connector(/var/dovecot/imap///mailboxes/INBOX/dbox-Mails/u.3)) 
failed: 
read(/var/dovecot/imap/attachments/e2/0a/e20ab165f3a2e7164520341cddb06cd053dc1a87fdd5c6360b45c20a2c9d33576c8ea2daa42062f9bb0fd94f9c5f9919ad86fb11e29effe3bdf4e853fa7c0e1d-20a0b63613e6c050620b838cbfe1-c8d5221c39a40b6125097dc04144-3[base64:19 
b/l]) failed: 
open(/var/dovecot/imap/attachments/e2/0a/e20ab165f3a2e7164520341cddb06cd053dc1a87fdd5c6360b45c20a2c9d33576c8ea2daa42062f9bb0fd94f9c5f9919ad86fb11e29effe3bdf4e853fa7c0e1d-20a0b63613e6c050620b838cbfe1-c8d5221c39a40b6125097dc04144-3) 
failed: No such file or directory (last sent=mail, last 
recv=mail_request (EOL))


That caused the replication to fail and no newer mails to be 
replicated for the given user and mailbox, which can be verified e.g. 
with "doveadm mailbox status -u @ all inbox".


The concerned mails are pretty old / the attachment itself can not be 
recovered, and nothing can be done about them. But how can I repair 
the mailbox, so all unaffected mails / attachments get replicated?


I already tried multiple "doveadm force-resync -u @ 
INBOX" in case a) with no visible effect / still same error.


Both Dovecot configurations are attached, in case they matter ...

Ralf







Replication stalled by failed attempts to read attachments (mail_attachment_dir)

2021-08-05 Thread Ralf Becker
I'm migrating an older Dovecot 2.2.19 installation with dbox and 
mail_attachment_dir to 2.3.15 with replication to a second server.


The storage from the old server was rsync'ed to a new server running 
Dovecot 2.3.15 in a container using a similar configuration with dbox and 
mail_attachments. That server alone seems to run fine.


As a next step I tried to establish replication to the second empty 
server, which is configured to use mdbox and no mail_attachment_dir, as 
the rest of my Dovecot servers. Looking at "doveadm replicator status 
'*'" some of the mailboxes replicate correctly, but roughly half of them 
fail :(


Looking at the Dovecot log with mail_debug=yes shows the replication 
fails on the source (server with dbox and mail_attachment_dir) typically 
with one of two errors:


a) attachment has different size:

Aug 05 15:41:58 doveadm: Debug: Mailbox INBOX: UID 30: Looked up field 
date.received from mail cache
Aug 05 15:41:58 doveadm: Error: dsync(cdaf50f7580e): 
read(attachments-connector(/var/dovecot/imap///mailboxes/INBOX/dbox-Mails/u.30)) 
failed: 
read(/var/dovecot/imap/attachments/64/4e/644ecfbc09223b8ccafc6f36aa86603cde91e76716b9b5ed0cf875bc0e183b6b4581d671f8321647be99f26d2d22088e999714529e189b25a6e7d5b428fbf5d5-1a6b701e68c9c050b10a838cbfe1-f86a701e68c9c050b10a838cbfe1-30[base64:18 
b/l]) failed: Stream is larger than expected (262658 > 262657, eof=1) 
(last sent=mail, last recv=mail_request (EOL))


showing on the other log as:

Aug 05 15:49:18 doveadm: Debug: Mailbox INBOX: UID 29: Looked up field 
guid from mail cache
Aug 05 15:49:19 doveadm: Error: dsync(142878d8d6bf): read() failed: 
read(10.44.99.1) failed: dot-input stream ends without '.' line (last 
sent=mail_request (EOL), last recv=mail)


b) attachment is not found at all:

Aug 05 15:43:58 doveadm(@ 1441): Debug: fs-sis: fs-sis: 
fs-posix: 
open(/var/dovecot/imap/attachments/e2/0a/e20ab165f3a2e7164520341cddb06cd053dc1a87fdd5c6360b45c20a2c9d33576c8ea2daa42062f9bb0fd94f9c5f9919ad86fb11e29effe3bdf4e853fa7c0e1d-20a0b63613e6c050620b838cbfe1-c8d5221c39a40b6125097dc04144-3) 
failed: No such file or directory
Aug 05 15:43:58 doveadm(@ 1441): Error: 
dsync(cdaf50f7580e): 
read(attachments-connector(/var/dovecot/imap///mailboxes/INBOX/dbox-Mails/u.3)) 
failed: 
read(/var/dovecot/imap/attachments/e2/0a/e20ab165f3a2e7164520341cddb06cd053dc1a87fdd5c6360b45c20a2c9d33576c8ea2daa42062f9bb0fd94f9c5f9919ad86fb11e29effe3bdf4e853fa7c0e1d-20a0b63613e6c050620b838cbfe1-c8d5221c39a40b6125097dc04144-3[base64:19 
b/l]) failed: 
open(/var/dovecot/imap/attachments/e2/0a/e20ab165f3a2e7164520341cddb06cd053dc1a87fdd5c6360b45c20a2c9d33576c8ea2daa42062f9bb0fd94f9c5f9919ad86fb11e29effe3bdf4e853fa7c0e1d-20a0b63613e6c050620b838cbfe1-c8d5221c39a40b6125097dc04144-3) 
failed: No such file or directory (last sent=mail, last 
recv=mail_request (EOL))


That caused the replication to fail and no newer mails to be replicated 
for the given user and mailbox, which can be verified e.g. with "doveadm 
mailbox status -u @ all inbox".


The concerned mails are pretty old / the attachment itself can not be 
recovered, and nothing can be done about them. But how can I repair the 
mailbox, so all unaffected mails / attachments get replicated?


I already tried multiple "doveadm force-resync -u @ INBOX" 
in case a) with no visible effect / still same error.


Both Dovecot configurations are attached, in case they matter ...

Ralf


# 2.3.15 (0503334ab1): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.15 (e6a84e31)
# OS: Linux 4.15.0-140-generic x86_64  
# Hostname: cdaf50f7580e
auth_cache_negative_ttl = 2 mins
auth_cache_size = 10 M
auth_cache_ttl = 5 mins
auth_master_user_separator = *
auth_mechanisms = plain login
auth_username_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@#"
default_client_limit = 3500
default_process_limit = 512
disable_plaintext_auth = no
doveadm_password = # hidden, use -P to show it
doveadm_port = 12345
first_valid_uid = 90
listen = *
log_path = /dev/stderr
login_greeting = Dovecot FRA ready
mail_access_groups = dovecot
mail_attribute_dict = file:%h/dovecot-metadata
mail_debug = yes
mail_gid = dovecot
mail_location = mdbox:~/mdbox
mail_log_prefix = "%s(%u %p): "
mail_max_userip_connections = 200
mail_plugins = acl quota notify replication mail_log
mail_uid = dovecot
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave vnd.dovecot.debug
mbox_min_index_size = 1000 B
mdbox_rotate_size = 50 M
name

Re: Problem with Lua notification script since 2.3.15 update

2021-06-28 Thread Ralf Becker

Hi Aki,

On 28.06.21 at 12:19, Aki Tuomi wrote:

As workaround, you could try dropping script_init function. It might still require 
downgrading to 2.3.14 to avoid other similar asserts. script_init & 
script_deinit functions are not mandatory.



Commenting out my (anyway empty) script_init function seems to fix the 
problem with 2.3.15, as far as a quick check on my laptop goes.


Thanks :)

Ralf




Aki


On 28/06/2021 13:07 Aki Tuomi  wrote:

  
Hi!


This has been fixed in master with 
https://github.com/dovecot/core/commit/2b508d396cb1442f4da715b762ca544639bde456.patch

We'll see what to do about 2.3.15.

Aki


On 28/06/2021 12:47 Vytenis Adm  wrote:

  
Not a solution, but I'd like to second this issue.


We have a Lua push configured as well, and are currently running
dovecot-2.3.14-5 (CentOS 7.9)

After bootstrapping new instances with dovecot 2.3.15 the exact same
issue appeared:

Jun 27 18:28:37 imaphost.tld dovecot[1331]:
imap(u...@example.com)<23828>: Panic: file
dlua-script.c: line 224 (dlua_script_init): assertion failed:
(lua_gettop(script->L) == 0)
Jun 27 18:28:37 imaphost.tld dovecot[1331]:
imap(u...@example.com)<23828>: Error: Raw backtrace:
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42)
[0x7fb027d2e862] ->
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7fb027d2e96e]
-> /usr/lib64/dovecot/libdovecot.so.0(+0xf50fe) [0x7fb027d3c0fe] ->
/usr/lib64/dovecot/libdovecot.so.0(+0xf51a1) [0x7fb027d3c1a1] ->
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7fb027c8c60c] ->
/usr/lib64/dovecot/libdovecot-lua.so.0(+0x4214) [0x7fb027062214] ->
/usr/lib64/dovecot/lib22_push_notification_lua_plugin.so(+0x2b7b)
[0x7fb024f78b7b] ->
/usr/lib64/dovecot/lib20_push_notification_plugin.so(push_notification_driver_init+0x199)
[0x7fb0260ff819] ->
/usr/lib64/dovecot/lib20_push_notification_plugin.so(+0x76df)
[0x7fb0261016df] ->
/usr/lib64/dovecot/lib20_push_notification_plugin.so(+0x7e00)
[0x7fb026101e00] ->
/usr/lib64/dovecot/libdovecot-storage.so.0(hook_mail_user_created+0x209)
[0x7fb0280529b9] ->
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_user_init+0x220)
[0x7fb028059130] ->
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_storage_service_next_with_session_suffix+0x5ff)
[0x7fb0280565cf] ->
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_storage_service_lookup_next+0x4f)
[0x7fb028056cef] -> dovecot/imap(client_create_from_input+0x110)
[0x55702bdf2120] -> dovecot/imap(+0x2d417) [0x55702bdf2417] ->
/usr/lib64/dovecot/libdovecot.so.0(+0x73d96) [0x7fb027cbad96] ->
/usr/lib64/dovecot/libdovecot.so.0(+0x74153) [0x7fb027cbb153] ->
/usr/lib64/dovecot/libdovecot.so.0(+0x74f27) [0x7fb027cbbf27] ->
/usr/lib64/dovecot/libdovecot.so.0(connection_input_default+0x158)
[0x7fb027d33b48] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65)
[0x7fb027d54425] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b)
[0x7fb027d55dab] ->
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59)
[0x7fb027d54529] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38)
[0x7fb027d54768] ->
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13)
[0x7fb027cbe3c3] -> dovecot/imap(main+0x342) [0x55702bdd42f2] ->
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7fb02789b555] ->
dovecot/imap(+0xf4f5) [0x55702bdd44f5]

Jun 27 18:28:37 imaphost.tld dovecot[1331]:
imap(u...@example.com)<23828>: Fatal: master:
service(imap): child 23828 killed with signal 6 (core dumps disabled -
https://dovecot.org/bugreport.html#coredumps)


The workaround was to downgrade to dovecot-2.3.14-5, and the issues were
gone.




On 2021-06-28 12:31, Ralf Becker wrote:

If the Lua-notifications are enabled, Dovecot imap dies immediately at
authentication:

Jun 28 09:17:29 imap-login: Info: Login: user=, method=PLAIN,
rip=172.18.0.12, lip=172.18.0.16, mpid=16, TLS,
session=
Jun 28 09:17:29 imap(ralf)<16>: Panic: file
dlua-script.c: line 224 (dlua_script_init): assertion failed:
(lua_gettop(script->L) == 0)
Jun 28 09:17:29 imap(ralf)<16>: Error: Raw
backtrace: /usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x41)
[0x7f9f537ca5c1] ->
/usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x22) [0x7f9f537ca6e2]
-> /usr/lib/dovecot/libdovecot.so.0(+0x1070bb) [0x7f9f537d70bb] ->
/usr/lib/dovecot/libdovecot.so.0(+0x107157) [0x7f9f537d7157] ->
/usr/lib/dovecot/libdovecot.so.0(+0x5bb2b) [0x7f9f5372bb2b] ->
/usr/lib/dovecot/libdovecot-lua.so.0(+0x5354) [0x7f9f53311354] ->
/usr/lib/dovecot/modules/lib22_push_notification_lua_plugin.so(+0x3842)
[0x7f9f532c6842] ->
/usr/lib/dovecot/modules/lib20_push_notification_plugin.so(push_notification_driver_init+0x19c)
[0x7f9f532d062c] ->
/usr/lib/dovecot/modules/lib20_push_notification_plugin.so(+0x846f)
[0x7f9f532d246f] ->
/usr/lib/dovecot/modules/lib20_push_notification_plugin.so(+0x8b6a)
[0x7f9f532d2b6a] ->
/usr/lib/dovecot/libdovecot-s

Problem with Lua notification script since 2.3.15 update

2021-06-28 Thread Ralf Becker
If the Lua-notifications are enabled, Dovecot imap dies immediately at 
authentication:


Jun 28 09:17:29 imap-login: Info: Login: user=, method=PLAIN, 
rip=172.18.0.12, lip=172.18.0.16, mpid=16, TLS, session=
Jun 28 09:17:29 imap(ralf)<16>: Panic: file 
dlua-script.c: line 224 (dlua_script_init): assertion failed: 
(lua_gettop(script->L) == 0)
Jun 28 09:17:29 imap(ralf)<16>: Error: Raw backtrace: 
/usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x41) [0x7f9f537ca5c1] 
-> /usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x22) [0x7f9f537ca6e2] 
-> /usr/lib/dovecot/libdovecot.so.0(+0x1070bb) [0x7f9f537d70bb] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x107157) [0x7f9f537d7157] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x5bb2b) [0x7f9f5372bb2b] -> 
/usr/lib/dovecot/libdovecot-lua.so.0(+0x5354) [0x7f9f53311354] -> 
/usr/lib/dovecot/modules/lib22_push_notification_lua_plugin.so(+0x3842) 
[0x7f9f532c6842] -> 
/usr/lib/dovecot/modules/lib20_push_notification_plugin.so(push_notification_driver_init+0x19c) 
[0x7f9f532d062c] -> 
/usr/lib/dovecot/modules/lib20_push_notification_plugin.so(+0x846f) 
[0x7f9f532d246f] -> 
/usr/lib/dovecot/modules/lib20_push_notification_plugin.so(+0x8b6a) 
[0x7f9f532d2b6a] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(hook_mail_user_created+0x211) 
[0x7f9f539084e1] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(mail_user_init+0x20b) 
[0x7f9f5390e80b] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(mail_storage_service_next_with_session_suffix+0x587) 
[0x7f9f5390bd57] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(mail_storage_service_lookup_next+0x53) 
[0x7f9f5390c4c3] -> dovecot/imap(client_create_from_input+0x180) 
[0x5626eb18fe40] -> dovecot/imap(+0x3112a) [0x5626eb19012a] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x8d60e) [0x7f9f5375d60e] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x8d97b) [0x7f9f5375d97b] -> 
/usr/lib/dovecot/libdovecot.so.0(+0x8e8be) [0x7f9f5375e8be] -> 
/usr/lib/dovecot/libdovecot.so.0(connection_input_default+0x15e) 
[0x7f9f537cef5e] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x6d) [0x7f9f537ed3ed] 
-> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x145) 
[0x7f9f537eea15] -> 
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x54) 
[0x7f9f537ed494] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x40) 
[0x7f9f537ed600] -> 
/usr/lib/dovecot/libdovecot.so.0(master_service_run+0x17) 
[0x7f9f537609e7] -> dovecot/imap(main+0x469) [0x5626eb172c19] -> 
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x7f9f535020b3] 
-> dovecot/imap(_start+0x2e) [0x5626eb172cae]
Jun 28 09:17:29 imap(ralf)<16>: Fatal: master: 
service(imap): child 16 killed with signal 6 (core dumps disabled - 
https://dovecot.org/bugreport.html#coredumps)


I use the following Dovecot configuration:

# Store METADATA information in a file dovecot-metadata in user's home
mail_attribute_dict = file:%h/dovecot-metadata

# enable metadata
protocol imap {
  imap_metadata = yes
}

# add necessary plugins for Lua push notifications
mail_plugins = $mail_plugins mail_lua notify push_notification 
push_notification_lua


# Lua notification script and URL of EGroupware push server
plugin {
  push_notification_driver = lua:file=/etc/dovecot/dovecot-push.lua
  push_lua_url = 
https://Bearer:sec...@boulder.egroupware.org/egroupware/push

}

The Lua script is available under 
https://github.com/EGroupware/swoolepush/blob/master/doc/dovecot-push.lua


The whole Dovecot configurations is available under

https://github.com/EGroupware/build.opensuse.org/tree/master/server:eGroupWare/egroupware-mail/egroupware-mail/dovecot

Dovecot runs in a Ubuntu 20.04 based container and seems to use the 
correct liblua5.3:


root@750978e5c0ee:/# dpkg -l | grep -i lua
ii  dovecot-lua        2:2.3.15-1+ubuntu20.04  amd64  secure POP3/IMAP server - LUA support
ii  liblua5.3-0:amd64  5.3.3-1.1ubuntu2        amd64  Shared library for the Lua interpreter version 5.3
ii  lua-json           1.3.4-2                 all    JSON decoder/encoder for Lua
ii  lua-lpeg:amd64     1.0.2-1                 amd64  LPeg library for the Lua language
ii  lua-socket:amd64   3.0~rc1+git+ac3201d-4   amd64  TCP/UDP socket library for the Lua language


Everything was running fine with 2.3.13, have not tried with 2.3.14 yet.

Any ideas?

Ralf





Temporary disable push notifications while restoring a mailbox

2021-03-31 Thread Ralf Becker

We use "doveadm import" to restore mailboxes from a snapshot.

Since we use push with Dovecot 2.3.13:

mail_plugins = $mail_plugins mail_lua notify push_notification 
push_notification_lua


plugin {
  push_notification_driver = lua:file=/etc/dovecot/dovecot-push.lua
  push_lua_url = http://172.17.0.1/

the import creates a flood of push-notifications.

Therefore the question: is there a way to disable push for 
"doveadm import" on the command line (not in general)?
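One avenue that may be worth testing (an untested sketch, not a confirmed answer): doveadm accepts per-invocation setting overrides via its global -o option, which might allow running the import without the push plugins loaded. The user, snapshot path, and reduced plugin list below are placeholders:

```shell
# Hypothetical: override mail_plugins only for this invocation so the
# notify/push_notification plugins are not loaded during the import
# (user, plugin list and snapshot path are placeholders).
sudo -u dovecot doveadm -o 'mail_plugins=acl quota mail_log' \
    import -u user@example.org -s mdbox:/path/to/snapshot/mdbox INBOX all
```

Whether dropping notify also suppresses other notify-based plugins (e.g. replication triggers) for that run would need checking first.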


Kind regards

Ralf





How to reset / remove all replicator users

2020-12-04 Thread Ralf Becker
After creating a new replication pair, it was started with an old sqlite 
userdb, which we use for iteration / replication.


The situation was quickly resolved by generating a new sqlite userdb, as 
it is done hourly anyway.


Unfortunately I cannot get rid of all excess users from the replication:

/ # doveadm replicator status
Queued 'sync' requests    0
Queued 'high' requests    0
Queued 'low' requests 0
Queued 'failed' requests  0
Queued 'full resync' requests 14
Waiting 'failed' requests 1
Total number of known users   724

/ # doveadm user '*'|wc -l
507

The replication process creates again and again a lot of (empty) mailboxes 
on both sides of the replication.


I tried the obvious, to no avail:

/ # doveadm replicator remove '*'
Error: Replicator failed: User not found

I also tried restarting the pair one after the other, but it seems the 
users to replicate are persisted somewhere.


I'm also trying to delete the users from the newly created (empty) 
mailboxes, but the number of known users in "doveadm replicator status" 
is unchanged:


root:/var/dovecot/imap# for domain in $(ls -1 | grep -v bb-trunk.egroupware.de | grep -v outdoor-training.de | grep -v rbz-); do
    for user in $domain/*; do
        echo "$(basename $user)@$domain"
        doveadm replicator remove "$(basename $user)@$domain"
    done
    rm -rf $domain
done


Any idea what else I can do, or where the replication stores its users?

Ralf





Re: director_username_hash = %d and doveadm director map

2020-11-30 Thread Ralf Becker

Hi Timo,

it's been a long time :)

On 30.11.20 at 15:07, Timo Sirainen wrote:
On 29. Nov 2020, at 16.10, Ralf Becker wrote:


To answer my question: I was able to identify the director code on 
Github, and the hashes are the first 4 bytes of the binary md5 written 
as a 32-bit integer.


With that I was able to write a script that runs doveadm director 
map, queries all domains from our internal management, calculates the 
hashes and displays a joined list:


doveadm director map | grep rbz
rbz-x.de  3766880388  10.44.88.5  nfs     2020-11-29 15:06:53
rbz-.de   3088689059  10.44.88.1  extern  2020-11-29 15:07:11


Are you doing this to lots of domains or just a few? I think you could 
have also individually looked up the backends with "doveadm director 
status anyth...@domain.de tag". Of course, that requires already knowing 
the tag for the domain in this lookup.



I only moved single domains from one tag / backend-pair to another. 
doveadm director status x...@domain.com tag1 gives one of the IPs of 
the backends of tag1, same with tag2.





When I move a domain between backends / tags, I see for some time that 
the moved domain is listed for both tags, though doveadm who on the 
backends shows users are only connected to the new backend. No idea 
why that is. Trying doveadm director move does NOT change that 
situation.


There's not really a concept of "moving user between tags". The 
intention was that you could have "user1@domain" in tag1 and 
"user1@domain" in tag2 and they would be different users. So when 
you're returning a new tag from passdb then director just treats it as 
a different user. The old user is remembered 
until director_user_expire has passed without the user being accessed.



That explains it. I was able to verify that all connections go to the 
new tag by doing a doveadm who on all backends.



I currently disable the domain in our dict used for userdb and 
passdb, clear the auth cache of all directors and flush them, before 
(final) rsync of the mailboxes of the domain to the new backend. When 
our dicts answer again with the new director tag, connections are 
going to the correct backend-pair. But it takes some hours for the 
old mapping to disappear.


Is that the expected behavior?


Yes. And as long as the user isn't accessed via the old tag it doesn't 
matter.



That's what I observed too. Just looks a bit scary ...


Is doveadm director move supposed to work with 
director_username_hash = %d?


It should work if you do:

doveadm director move anyth...@domain.de backend2


It updates the director internal mapping, and it also sends a 
KICK-DIRECTOR-HASH to each login process in each director. This in 
turn should match every user with that same domain and kick them out.


But maybe the issue is that you're again trying to move the user 
between tags? That's not what is really happening. It's moving 
anyth...@domain.de in backend2's tag from its currently assigned 
backend to backend2. If domain.de's backend was already backend2 then 
nothing is done. You could maybe kludge this by moving it to backend1 
and then quickly to backend2, so at least one of them does the kicking.



Yes, I tried to move between tags, not backends of a tag.

Ok, then I know now the tags in director are really separate, and I don't 
need to worry about the domain being mapped twice for different tags.


The only thing which is a bit annoying with director_username_hash = %d is 
that doveadm director map leaves the user column empty instead of showing 
the domain:


kubectl exec -t dovecot-director-0 -c dovecot-director -- doveadm 
director map

user  hash        mail server ip  expire time
      518789780   10.44.88.1      2020-11-30 17:38:39
      2520389888  10.44.88.1      2020-11-30 17:39:18

Thanks a lot for your explanations :)

Ralf





Re: director_username_hash = %d and doveadm director map

2020-11-30 Thread Ralf Becker

On 29.11.20 at 22:09, Aki Tuomi wrote:

Did you try `doveadm director flush`?



Yes, though together with (different) tags it gives this weird behavior.

I use doveadm director flush all the time to get connections to the 
other backend of one pair.


Ralf


On 29/11/2020 17:10 Ralf Becker  wrote:

  
To answer my question: I was able to identify the director code on Github, 
and the hashes are the first 4 bytes of the binary md5 written as a 
32-bit integer.

With that I was able to write a script that runs doveadm director map,
queries all domains from our internal management, calculates the hashes
and displays a joined list:

doveadm director map | grep rbz
rbz-x.de  3766880388  10.44.88.5  nfs     2020-11-29 15:06:53
rbz-.de   3088689059  10.44.88.1  extern  2020-11-29 15:07:11

When I move a domain between backends / tags, I see for some time that
the moved domain is listed for both tags, though doveadm who on the
backends shows users are only connected to the new backend. No idea why
that is. Trying doveadm director move does NOT change that situation.

I currently disable the domain in our dict used for userdb and passdb,
clear the auth cache of all directors and flush them, before (final)
rsync of the mailboxes of the domain to the new backend. When our dicts
answer again with the new director tag, connections are going to the
correct backend-pair. But it takes some hours for the old mapping to
disappear.

Is that the expected behavior?
Is doveadm director move supposed to work with director_username_hash = %d?

Ralf


On 23.11.20 at 15:15, Ralf Becker wrote:

Our directors hash by domain (director_username_hash = %d), as some of
our users share folders with other users of the same domain.

We now started using director tags to map domains to their backends.

Unfortunately doveadm director map seems not to work with
director_username_hash = %d

user  hash        mail server ip  expire time
      432784257   10.44.88.1      2020-11-23 13:10:55
      4244233328  10.44.88.1      2020-11-23 13:13:55
      1913982503  10.44.88.1      2020-11-23 13:15:40

How can I check to which backend / IP a domain is mapped, aka how is
that hash calculated?

doveadm director move seems also not to do anything meaningful for %d,
or at least I have not found out how to use it to move a domain to a
different backend.

Hoping for some insight :)

Ralf






Re: director_username_hash = %d and doveadm director map

2020-11-29 Thread Ralf Becker
To answer my question: I was able to identify the director code on Github, 
and the hashes are the first 4 bytes of the binary md5 written as a 
32-bit integer.


With that I was able to write a script that runs doveadm director map, 
queries all domains from our internal management, calculates the hashes 
and displays a joined list:


doveadm director map | grep rbz
rbz-x.de  3766880388  10.44.88.5  nfs     2020-11-29 15:06:53
rbz-.de   3088689059  10.44.88.1  extern  2020-11-29 15:07:11
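The hash described above can be sketched in a few lines. This is a minimal illustration, not Dovecot's code: the helper name and the big-endian byte order are my assumptions, so verify the result against your own doveadm director map output before relying on it.

```python
import hashlib
import struct

def director_hash(username: str) -> int:
    """Sketch of the director hash as described above: the first 4 bytes
    of the binary MD5 of the username, read as a 32-bit unsigned integer.
    With director_username_hash = %d, only the domain part is hashed, so
    pass the domain here. Big-endian byte order is an assumption."""
    digest = hashlib.md5(username.encode("utf-8")).digest()
    return struct.unpack(">I", digest[:4])[0]

# e.g. to compare against the hash column of `doveadm director map`:
print(director_hash("example.com"))
```

With a list of domains from your own management database, joining these values against the hash column of `doveadm director map` reproduces the script described above.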


When I move a domain between backends / tags, I see for some time that 
the moved domain is listed for both tags, though doveadm who on the 
backends shows users are only connected to the new backend. No idea why 
that is. Trying doveadm director move does NOT change that situation.


I currently disable the domain in our dict used for userdb and passdb, 
clear the auth cache of all directors and flush them, before (final) 
rsync of the mailboxes of the domain to the new backend. When our dicts 
answer again with the new director tag, connections are going to the 
correct backend-pair. But it takes some hours for the old mapping to 
disappear.


Is that the expected behavior?
Is doveadm director move supposed to work with director_username_hash = %d?

Ralf


On 23.11.20 at 15:15, Ralf Becker wrote:
Our directors hash by domain (director_username_hash = %d), as some of 
our users share folders with other users of the same domain.


We now started using director tags to map domains to their backends.

Unfortunately doveadm director map seems not to work with 
director_username_hash = %d


user  hash        mail server ip  expire time
      432784257   10.44.88.1      2020-11-23 13:10:55
      4244233328  10.44.88.1      2020-11-23 13:13:55
      1913982503  10.44.88.1      2020-11-23 13:15:40

How can I check to which backend / IP a domain is mapped, aka how is 
that hash calculated?


doveadm director move seems also not to do anything meaningful for %d, 
or at least I have not found out how to use it to move a domain to a 
different backend.


Hoping for some insight :)

Ralf







director_username_hash = %d and doveadm director map

2020-11-23 Thread Ralf Becker
Our directors hash by domain (director_username_hash = %d), as some of 
our users share folders with other users of the same domain.


We now started using director tags to map domains to their backends.

Unfortunately doveadm director map seems not to work with 
director_username_hash = %d


user  hash        mail server ip  expire time
      432784257   10.44.88.1      2020-11-23 13:10:55
      4244233328  10.44.88.1      2020-11-23 13:13:55
      1913982503  10.44.88.1      2020-11-23 13:15:40

How can I check to which backend / IP a domain is mapped, aka how is that 
hash calculated?


doveadm director move seems also not to do anything meaningful for %d, 
or at least I have not found out how to use it to move a domain to a 
different backend.


Hoping for some insight :)

Ralf





Re: Support of INDEXPVT in Dovecot 2.3 with replication

2020-10-29 Thread Ralf Becker
Thanks Aki :)

On 29.10.20 at 13:47, Aki Tuomi wrote:
> At the moment the correct way to use shared folders in a replication pair is 
> to access them with imapc from the other pair.

Can you please elaborate a bit more?

In 2.2 I could understand how that works: INDEXPVT was not replicated
but worked, so using e.g. node A as primary and node B accessing it via
imapc would give the same result, as long as both nodes are up and
running.

For me (that was the referenced mail from August), INDEXPVT stopped
working in 2.3 with replication enabled.

Ralf


>> On 29/10/2020 14:43 Ralf Becker  wrote:
>>
>>  
>> In reference to an earlier mail from me, I'd like to ask:
>>
>> Have there been any changes in regard to INDEXPVT and replication or are
>> there any plans in that direction?
>>
>> Thanks :)
>>
>> Ralf
>>
>>
>> On 03.08.20 at 11:20 Ralf Becker wrote:
>>> So far the only thing we noticed: private seen flags on shared user
>>> folders (which were never supported for replication!) seem to be not
>>> functioning any more in 2.3. Not functioning means, if they are
>>> configured you can not set a mail to seen in a shared user folder. After
>>> removing this configuration:
>>>
>>> location = mdbox:%%h/mdbox:INDEXPVT=~/shared/%%u  --> mdbox:%%h/mdbox
>>>
>>> seen flags behave as expected / are identical now if you access the
>>> mailbox directly or via the shared user folder, and they are identical on
>>> both backends.
>>>
>>> Ralf





Support of INDEXPVT in Dovecot 2.3 with replication

2020-10-29 Thread Ralf Becker
In reference to an earlier mail from me, I'd like to ask:

Have there been any changes in regard to INDEXPVT and replication or are
there any plans in that direction?

Thanks :)

Ralf


On 03.08.20 at 11:20 Ralf Becker wrote:
> So far the only thing we noticed: private seen flags on shared user
> folders (which were never supported for replication!) seem to be not
> functioning any more in 2.3. Not functioning means, if they are
> configured you can not set a mail to seen in a shared user folder. After
> removing this configuration:
>
> location = mdbox:%%h/mdbox:INDEXPVT=~/shared/%%u  --> mdbox:%%h/mdbox
>
> seen flags behave as expected / are identical now if you access the
> mailbox directly or via the shared user folder, and they are identical on
> both backends.
>
> Ralf





Re: How to access mailbox metadata in Lua push driver

2020-08-04 Thread Ralf Becker
 status.unseen
end return;
end

table.insert(ctx.messages, {
    user = ctx.meta,
    ["imap-uidvalidity"] = event.uid_validity,
    ["imap-uid"] = event.uid,
    folder = event.mailbox,
    event = event.name,
    flags = event.flags,
    keywords = event.keywords,
    unseen = status.unseen
})
end

function arrayEqual(t1, t2)
    if (#t1 ~= #t2) then return false end
    if (#t1 == 1 and t1[1] == t2[1]) then return true end
    return json:encode(t1) == json:encode(t2)
end

function dovecot_lua_notify_event_flags_clear(ctx, event)
    -- check if there is a push token registered AND something to clear (TB sends it empty)
    if (ctx.meta == nil or (#event.flags == 0 and #event.keywords == 0)) then return end
    table.insert(ctx.messages, {
        user = ctx.meta,
        ["imap-uidvalidity"] = event.uid_validity,
        ["imap-uid"] = event.uid,
        folder = event.mailbox,
        event = event.name,
        flags = event.flags,
        ["flags-old"] = event.flags_old,
        keywords = event.keywords,
        ["keywords-old"] = event.keywords_old
    })
end

function dovecot_lua_notify_end_txn(ctx)
    -- report all states
    for i, msg in ipairs(ctx.messages) do
        local e = dovecot.event(ctx.event)
        e:set_name("lua_notify_mail_finished")
        reqbody = json:encode(msg)
        e:log_debug(ctx.ep .. " - sending " .. reqbody)
        res, code = http.request({
            method = "PUT",
            url = ctx.ep,
            source = ltn12.source.string(reqbody),
            headers = {
                ["content-type"] = "application/json; charset=utf-8",
                ["content-length"] = tostring(#reqbody)
            }
        })
        e:add_int("result_code", code)
        e:log_info("Mail notify status " .. tostring(code))
    end
end

It's also in our Github repo:
https://raw.githubusercontent.com/EGroupware/swoolepush/master/doc/dovecot-push.lua

Ralf






Errors in Lua push docu

2020-08-04 Thread Ralf Becker
https://doc.dovecot.org/configuration_manual/push_notification/#message-events


* dovecot_lua_notify_event_flags_set(context, {name, mailbox, uid,
  uid_validity, flags, keywords_set})

  --> dovecot_lua_notify_event_flags_set(context, {name, mailbox, uid,
  uid_validity, flags, keywords})

  Called when message flags or keywords are set. flags is a bitmask.
  keywords_set is a table of strings of the keywords set by the event.

* dovecot_lua_notify_event_flags_clear(context, {name, mailbox, uid,
  uid_validity, flags, keywords_clear, keywords_old})

  --> dovecot_lua_notify_event_flags_clear(context, {name, mailbox, uid,
  uid_validity, flags, flags_old, keywords, keywords_old})

Ralf






Re: How to access mailbox metadata in Lua push driver

2020-08-03 Thread Ralf Becker
Some answers to my questions, a first version of my script and more
questions ;)

Am 03.08.20 um 18:15 schrieb Ralf Becker:

> Currently looking into the following questions:
>
> - can I get the rfc 5423 type of event somehow (obviously I can set it
> on the event myself depending on the function called)
event.name
> - looking at the example code, it looks like it can be called for
> multiple messages; when does that happen (LMTP sending more than one)?

still no idea, maybe Aki?

I noticed that some events have the same uid-validity; are they from a
single transaction, e.g. when I delete my Trash?

> - why is the mailbox status put into another structure and sent with
> a different notification
> - does anyone have a code snippet to send a JSON encoded message
> (probably easy to figure out looking at the Lua docs)

these two I managed to solve in my current version of the script, which
also supports all message event types now:

-- To use:
--
-- plugin {
--   push_notification_driver = lua:file=/etc/dovecot/dovecot-push.lua
--   push_lua_url = http://push.notification.server/handler
-- }
--
-- server is sent a PUT message with JSON body like
-- push_notification_driver = ox:url= user_from_metadata

local http = require "socket.http"
local ltn12 = require "ltn12"
-- luarocks install json-lua
local json = require "JSON"

function table_get(t, k, d)
    return t[k] or d
end

function script_init()
    return 0
end

function dovecot_lua_notify_begin_txn(user)
    local meta = user:metadata_get("/private/vendor/vendor.dovecot/http-notify")
    return {user=user, event=dovecot.event(), ep=user:plugin_getenv("push_lua_url"), messages={}, meta=meta}
end

function dovecot_lua_notify_event_message_new(ctx, event)
    -- check if there is a push token registered
    if (ctx.meta == nil or ctx.meta == '') then return end
    -- get mailbox status
    local mbox = ctx.user:mailbox(event.mailbox)
    mbox:sync()
    local status = mbox:status(dovecot.storage.STATUS_RECENT, dovecot.storage.STATUS_UNSEEN, dovecot.storage.STATUS_MESSAGES)
    mbox:free()
    table.insert(ctx.messages, {
        user = ctx.meta,
        ["imap-uidvalidity"] = event.uid_validity,
        ["imap-uid"] = event.uid,
        folder = event.mailbox,
        event = event.name,
        from = event.from,
        subject = event.subject,
        snippet = event.snippet,
        unseen = status.unseen
    })
end

function dovecot_lua_notify_event_message_append(ctx, event)
    dovecot_lua_notify_event_message_new(ctx, event)
end

function dovecot_lua_notify_event_message_read(ctx, event)
    -- check if there is a push token registered
    if (ctx.meta == nil or ctx.meta == '') then return end
    -- get mailbox status
    local mbox = ctx.user:mailbox(event.mailbox)
    mbox:sync()
    local status = mbox:status(dovecot.storage.STATUS_RECENT, dovecot.storage.STATUS_UNSEEN, dovecot.storage.STATUS_MESSAGES)
    mbox:free()
    table.insert(ctx.messages, {
        user = ctx.meta,
        ["imap-uidvalidity"] = event.uid_validity,
        ["imap-uid"] = event.uid,
        folder = event.mailbox,
        event = event.name,
        unseen = status.unseen
    })
end

function dovecot_lua_notify_event_message_trash(ctx, event)
    dovecot_lua_notify_event_message_read(ctx, event)
end

function dovecot_lua_notify_event_message_expunge(ctx, event)
    dovecot_lua_notify_event_message_read(ctx, event)
end

function dovecot_lua_notify_event_flags_set(ctx, event)
    -- check if there is a push token registered
    if (ctx.meta == nil or ctx.meta == '') then return end
    table.insert(ctx.messages, {
        user = ctx.meta,
        ["imap-uidvalidity"] = event.uid_validity,
        ["imap-uid"] = event.uid,
        folder = event.mailbox,
        event = event.name,
        flags = event.flags,
        ["keywords-set"] = event.keywords_set
    })
end

function dovecot_lua_notify_event_flags_clear(ctx, event)
    -- check if there is a push token registered
    if (ctx.meta == nil or ctx.meta == '') then return end
    table.insert(ctx.messages, {
        user = ctx.meta,
        ["imap-uidvalidity"] = event.uid_validity,
        ["imap-uid"] = event.uid,
        folder = event.mailbox,
        event = event.name,
        flags = event.flags,
        ["keywords-clear"] = event.keywords_clear,
        ["keywords-old"] = event.keywords_old
    })
end

function dovecot_lua_notify_end_txn(ctx)
    -- report all states
    for i, msg in ipairs(ctx.messages) do
        local e = dovecot.event(ctx.event)
        e:set_name("lua_notify_mail_finished")
        reqbody = json:encode(msg)
        e:log_debug(ctx.ep .. " - sending " .. reqbody)
        res, code = http.request({
            method = "PUT",
            url = ctx.ep,
            source = ltn12.source.string(reqbody),
            headers = {
                ["content-type"] = "application/json; charset=utf-8",

Re: How to access mailbox metadata in Lua push driver

2020-08-03 Thread Ralf Becker
Making progress :)

I'll document some obstacles I found, to make it easier for the next 
person implementing push with Dovecot and Lua.

First I tried with my usual Alpine based container, but Alpine does not 
seem to build the Lua support for Dovecot :(

So I moved to an Ubuntu 18.04 based container and the official Dovecot
CE repo:

FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install -y apt-transport-https gpg curl && \
    curl https://repo.dovecot.org/DOVECOT-REPO-GPG | gpg --import && \
    gpg --export ED409DA1 > /etc/apt/trusted.gpg.d/dovecot.gpg && \
    echo "deb https://repo.dovecot.org/ce-2.3-latest/ubuntu/bionic bionic main" > /etc/apt/sources.list.d/dovecot.list && \
    apt-get update && \
    bash -c "apt-get install -y dovecot-{core,imapd,sqlite,managesieved,sieve,pop3d,lmtpd,submissiond,lua} lua-socket"
CMD [ "/usr/sbin/dovecot", "-F", "-c", "/etc/dovecot/dovecot.conf" ]

I had to install lua-socket, which is used by the example script but not 
required by dovecot-lua itself; that's ok, you just need to know it.

Using Aki's code snippet as user= led to another error:

Aug 03 14:54:15 Error: doveadm: lua: /usr/share/lua/5.2/socket/url.lua:31: bad 
argument #1 to 'gsub' (string expected, got nil)
Aug 03 14:54:15 Error: lmtp( 38): lmtp-server: conn 10.9.94.14:42092 
[1]: rcpt : lua: /usr/share/lua/5.2/socket/url.lua:31: bad argument 
#1 to 'gsub' (string expected, got nil)

I'm now skipping the notification if no metadata is set, like the OX
driver does:

function dovecot_lua_notify_event_message_new(ctx, event)
    -- check if there is a push token registered
    if (ctx.meta == nil or ctx.meta == '') then return end


Currently looking into the following questions:

- can I get the RFC 5423 type of event somehow (obviously I can set it
on the event myself depending on the function called)
- looking at the example code, it looks like it can be called for
multiple messages; when does that happen (LMTP sending more than one)?
- why is the mailbox status put into another structure and sent with a
different notification?
- does anyone have a code snippet to send a JSON encoded message
(probably easy to figure out looking at the Lua docs)?
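For the last question, a minimal sketch of sending a JSON-encoded message via HTTP PUT. This one is Python with only the standard library (the Lua scripts in this thread use luasocket and json-lua instead); the endpoint URL and payload are placeholders, not the real push server.

```python
import json
import urllib.request

def send_notification(endpoint: str, payload: dict) -> int:
    """PUT a JSON-encoded notification body to `endpoint` and return the
    HTTP status code, mirroring the request the Lua script builds with
    ltn12.source.string and the same content-type header."""
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=body,
        method="PUT",
        headers={
            "content-type": "application/json; charset=utf-8",
            "content-length": str(len(body)),
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Usage would be e.g. `send_notification("http://push.notification.server/handler", {"event": "MessageNew", "folder": "INBOX"})`.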

Ralf

On 03.08.20 at 11:56, Ralf Becker wrote:
> Thanks Aki, I'll check it out :)
>
>
> On 03.08.20 at 11:40, Aki Tuomi wrote:
>>> On 03/08/2020 12:31 Ralf Becker  wrote:
>>>
>>>  
>>> We're currently using the OX push driver, which is straight forward
>>> (simple web hook) and allows to store (multiple) push tokens of our
>>> webmailer direct in mailbox metadata.
>>>
>>> Only drawback is that it only supports new arriving mails in the INBOX,
>>> even mails moved via Sieve to other folders are NOT reported.
>>>
>>> Therefore we updated now to Dovecot 2.3(.10.1) to also get mails moved
>>> by user or Sieve scripts, deleted mails or flag changes.
>>>
>>> As far as I read the example Lua scripts and (a little) the Dovecot C
>>> code, the nice indirection of using mailbox metadata to a) enable push
>>> and b) store push tokens (modify reported user attribute with them) does
>>> NOT exist in the Lua driver by default.
>>>
>>> So my questions is: how can I access mailbox metadata from within the
>>> Lua script, to make eg. the example script behave like the OX driver
>>> with user_from_metadata set?
>>>
>>> I'm happy to update the Lua examples, if that's any help ...
>>>
>>> Ralf
>>>
>> Actually it does exist:
>>
>> https://doc.dovecot.org/admin_manual/lua/#mail_user.metadata_get
>>
>> or
>>
>> https://doc.dovecot.org/admin_manual/lua/#object-mailbox
>>
>> mailbox:metadata_get()
>>
>> You get both of these objects from the push notification data, you just have 
>> to keep them in the context state. (See the example scripts)
>>
>> function dovecot_lua_notify_begin_txn(user)
>>local meta = user:metadata_get("/private/key")
>>return {messages={}, ep=user:plugin_getenv("push_lua_url"), 
>> username=user.username, meta=meta}
>> end
>>
>> Aki







Re: How to access mailbox metadata in Lua push driver

2020-08-03 Thread Ralf Becker
Thanks Aki, I'll check it out :)


On 03.08.20 at 11:40, Aki Tuomi wrote:
>> On 03/08/2020 12:31 Ralf Becker  wrote:
>>
>>  
>> We're currently using the OX push driver, which is straight forward
>> (simple web hook) and allows to store (multiple) push tokens of our
>> webmailer direct in mailbox metadata.
>>
>> Only drawback is that it only supports new arriving mails in the INBOX,
>> even mails moved via Sieve to other folders are NOT reported.
>>
>> Therefore we updated now to Dovecot 2.3(.10.1) to also get mails moved
>> by user or Sieve scripts, deleted mails or flag changes.
>>
>> As far as I read the example Lua scripts and (a little) the Dovecot C
>> code, the nice indirection of using mailbox metadata to a) enable push
>> and b) store push tokens (modify reported user attribute with them) does
>> NOT exist in the Lua driver by default.
>>
>> So my questions is: how can I access mailbox metadata from within the
>> Lua script, to make eg. the example script behave like the OX driver
>> with user_from_metadata set?
>>
>> I'm happy to update the Lua examples, if that's any help ...
>>
>> Ralf
>>
> Actually it does exist:
>
> https://doc.dovecot.org/admin_manual/lua/#mail_user.metadata_get
>
> or
>
> https://doc.dovecot.org/admin_manual/lua/#object-mailbox
>
> mailbox:metadata_get()
>
> You get both of these objects from the push notification data, you just have 
> to keep them in the context state. (See the example scripts)
>
> function dovecot_lua_notify_begin_txn(user)
>local meta = user:metadata_get("/private/key")
>return {messages={}, ep=user:plugin_getenv("push_lua_url"), 
> username=user.username, meta=meta}
> end
>
> Aki







How to access mailbox metadata in Lua push driver

2020-08-03 Thread Ralf Becker
We're currently using the OX push driver, which is straightforward
(a simple web hook) and allows storing (multiple) push tokens of our
webmailer directly in mailbox metadata.

The only drawback is that it only supports newly arriving mails in the
INBOX; even mails moved via Sieve to other folders are NOT reported.

Therefore we have now updated to Dovecot 2.3(.10.1) to also get mails
moved by the user or Sieve scripts, deleted mails, and flag changes.

As far as I read the example Lua scripts and (a little of) the Dovecot C
code, the nice indirection of using mailbox metadata to a) enable push
and b) store push tokens (modifying the reported user attribute with
them) does NOT exist in the Lua driver by default.

So my question is: how can I access mailbox metadata from within the
Lua script, to make e.g. the example script behave like the OX driver
with user_from_metadata set?
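For reference, registering such a token in metadata can be done with doveadm. The user, the token value, and the need for a configured mail_attribute_dict are assumptions on my part; the attribute path is the one used with the OX driver:

```
# hypothetical user and token value; requires mail_attribute_dict to be configured
doveadm mailbox metadata set -u user@example.com INBOX \
    /private/vendor/vendor.dovecot/http-notify 'user=XYZ'
```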

I'm happy to update the Lua examples, if that's any help ...

Ralf







Re: Question about migration 2.2 - 2.3 with replication

2020-08-03 Thread Ralf Becker
On 30.07.20 at 23:02, Ralf Becker wrote:
> Do both replication nodes need to be updated at the same time?
>
> Or can a 2.2(.36.4) node replicate with a 2.3(.10.1)?
>
> Ralf


In case someone's looking here for an answer about the 2.2 to 2.3 update
with replication and directors:

1. I first updated the directors, where I ran into the documented
problem with consistent hashing:
- the docs say you can NOT run a director ring with different settings
for consistent hashing (director_consistent_hashing)
- I thought that because I had nothing configured explicitly I wouldn't
be affected, but that is wrong: the default changed, and therefore you are!
- to keep the director ring running, you have to explicitly configure
consistent hashing under 2.2 (I scaled our K8s director service down to
1 and reloaded that director; maybe you can also reload multiple
directors at the same time, I have not tried that)
- then you need to remove the director_consistent_hashing setting and
update your directors one by one
I also had to fix a couple of changed config settings, e.g.
ssl_protocols --> ssl_min_protocol
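As a config sketch, the interim step on the remaining 2.2 directors would look like this (assuming "yes" is the value that matches the new 2.3 behavior; verify with doveconf on your own setup before relying on it):

```
# interim step on the 2.2 directors only; remove again before updating them to 2.3
director_consistent_hashing = yes
```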

2. replicating backends: I updated one of the backends to 2.3 and it
started replicating with the old backend. I was only in that situation
for a couple of minutes (at night) before I updated the second backend
to 2.3.

So far the only thing we noticed: private seen flags on shared user
folders (which were never supported for replication!) seem to no longer
function in 2.3. Not functioning means: if they are configured, you
cannot set a mail to seen in a shared user folder. After removing this
configuration:

location = mdbox:%%h/mdbox:INDEXPVT=~/shared/%%u  --> mdbox:%%h/mdbox

seen flags behave as expected / are identical now whether you access the
mailbox directly or via the shared user folder, and they are identical on
both backends.

Ralf







Question about migration 2.2 - 2.3 with replication

2020-07-30 Thread Ralf Becker
Do both replication nodes need to be updated at the same time?

Or can a 2.2(.36.4) node replicate with a 2.3(.10.1)?

Ralf







Re: Sieve and OX push_notification_drive seem to not work together

2020-07-21 Thread Ralf Becker
Hi Aki,

On 21.07.20 at 15:07, Aki Tuomi wrote:
>> On 21/07/2020 13:35 Ralf Becker wrote:
>>  
>>  
>> While it's documented that the OX push_notification_driver only supports
>> MessageNew events, it does NOT generate any event if a Sieve script
>> moves the message on arrival to another folder, neither in the INBOX
>> where it originally arrives via LMTP, nor in the folder Sieve moves the
>> message to.
>>  
>> Is that a misconfiguration on my side, or a known / desired limitation or
>> just a bug?
>>  
>> I use LMTP to deliver mails from Postfix to Dovecot:
>>  
>> protocol lmtp {
>> mail_plugins = $mail_plugins notify push_notification
>> }
>>  
>> Does eg. the same need to be configured somehow for the Sieve plugin?
>>  
>> Ralf
>  
> I would recommend using the lua one. The OX one ignores non-INBOX
> folders.


Ok, so I won't get a notification for a mail which Sieve moved to a
different folder, because Sieve runs first and OX notifications
ignore everything but the INBOX.

The Lua one requires Dovecot 2.3; we are currently still on 2.2.36.4,
but planning to migrate in the near future.
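For anyone planning the same move: on 2.3 the Lua driver is enabled roughly like this. This is a sketch based on the Dovecot 2.3 documentation; the script path is a placeholder, and the Lua script itself has to implement the documented dovecot_lua_notify_* callbacks:

```conf
protocol lmtp {
  # mail_lua and push_notification_lua are 2.3-only plugins
  mail_plugins = $mail_plugins mail_lua notify push_notification push_notification_lua
}
plugin {
  push_notification_driver = lua:file=/etc/dovecot/push_notification.lua
}
```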

Ralf







Sieve and OX push_notification_drive seem to not work together

2020-07-21 Thread Ralf Becker
While it's documented that the OX push_notification_driver only supports
MessageNew events, it does NOT generate any event if a Sieve script
moves the message on arrival to another folder, neither in the INBOX
where it originally arrives via LMTP, nor in the folder Sieve moves the
message to.

Is that a misconfiguration on my side, or a known / desired limitation or
just a bug?

I use LMTP to deliver mails from Postfix to Dovecot:

protocol lmtp {
  mail_plugins = $mail_plugins notify push_notification
}

Does eg. the same need to be configured somehow for the Sieve plugin?

Ralf







Re: Questions about Dovecot push notifications

2020-07-21 Thread Ralf Becker
I managed to figure out some of my questions and answer them here in case
someone else runs into them:

On 16.07.20 at 09:39, Ralf Becker wrote:
> I read the docu available under:
> https://doc.dovecot.org/configuration_manual/push_notification/
>
> I'm using Dovecot 2.2.36.4 with directors and a replicating pair of
> backends using a custom dict with proxy protocol for user- and passdb
> plus a userdb using sqlite for backup.
>
> I understand 2.2 only allows notifying about newly arriving mails, not
> e.g. flag-changes, which would require 2.3 with Lua.
>
> I want to use the ox notification driver with a http url (https seems to
> be 2.3 only).
>
> I read to enable push notifications on a mailbox I need:
>
> a) IMAP metadata enabled incl. a backend/driver to store the metadata
> b) enable push for the individual mailbox using a doveadm command (maybe
> also via IMAP, which would be easier in my case)
>
> My questions are:
>
> 1. can I set some static metadata via userdb to enable push for all
> mailboxes?
>
> 2. if no, does setting the metadata on one replicating backend
> replicates it to the other one too?


Yes, metadata gets replicated; you even have to create the metadata
configuration on both replication nodes, otherwise the replication stops!
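A minimal sketch of such a metadata configuration, which (as noted above) must be present on both replication nodes; the dict file path is just an example:

```conf
# Enable the IMAP METADATA extension and give it a storage dict
mail_attribute_dict = file:%h/dovecot-attributes
protocol imap {
  imap_metadata = yes
}
```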


> 3. what purpose does the user= argument serve in the doveadm
> command to enable push:
>
>     doveadm mailbox metadata set -u u...@example.com -s ""
> /private/vendor/vendor.dovecot/http-notify user=11@3
>
> Does it replace "u...@example.com" in the push payload with
> ?


That allows storing e.g. your push token in the user attribute Dovecot
sends out.

You have to specify user_from_metadata in push_notification_driver.


> 4. can I set the required metadata via IMAP command, preferably with an
> (already configured) master user?


As get and set metadata are regular IMAP commands, that should work,
though I don't use it currently.
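Putting the two pieces together, a hedged sketch (the URL and the token value are placeholders):

```conf
plugin {
  # user_from_metadata makes the driver send the per-mailbox "user"
  # attribute instead of the login user
  push_notification_driver = ox:url=http://push.example.com/notify user_from_metadata
}
# Then set the attribute per user, e.g.:
#   doveadm mailbox metadata set -u user@example.com -s "" \
#     /private/vendor/vendor.dovecot/http-notify user=device-token-123
```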

Ralf







Questions about Dovecot push notifications

2020-07-16 Thread Ralf Becker
I read the docu available under:
https://doc.dovecot.org/configuration_manual/push_notification/

I'm using Dovecot 2.2.36.4 with directors and a replicating pair of
backends using a custom dict with proxy protocol for user- and passdb
plus a userdb using sqlite for backup.

I understand 2.2 only allows notifying about newly arriving mails, not
e.g. flag-changes, which would require 2.3 with Lua.

I want to use the ox notification driver with an http URL (https seems
to be 2.3 only).

I read to enable push notifications on a mailbox I need:

a) IMAP metadata enabled incl. a backend/driver to store the metadata
b) enable push for the individual mailbox using a doveadm command (maybe
also via IMAP, which would be easier in my case)

My questions are:

1. can I set some static metadata via userdb to enable push for all
mailboxes?

2. if no, does setting the metadata on one replicating backend
replicates it to the other one too?

3. what purpose does the user= argument serve in the doveadm command to
enable push:

    doveadm mailbox metadata set -u u...@example.com -s ""
/private/vendor/vendor.dovecot/http-notify user=11@3

Does it replace "u...@example.com" in the push payload with
?

4. can I set the required metadata via IMAP command, preferably with an
(already configured) master user?

Thanks in advance :)

Ralf







Solved: Sieve: reject certain mime-types and notify recipient

2019-01-15 Thread Ralf Becker
In case someone is interested too, here is why it was not working:

On 14.01.19 at 20:22, Ralf Becker wrote:
> I have to reject office files for a certain domain plus notifying the
> original recipient about the rejection too.
>
> require ["fileinto","reject","body","enotify","variables"];
>
> if allof (address :contains ["To","TO","Cc","CC"] "@example.org", body
> :content  "application/msword" :contains "") {
>     set "to" "${1}";


The set does not work inside allof with another condition, so I now use:

if address :contains ["To","TO","Cc","CC"] "@example.org" {
    set "to" "${1}";
}


>     # :matches is used to get the value of the Subject header
>     if header :matches "Subject" "*" {
>     set "subject" "${1}";
>     }
>     # :matches is used to get the value of the From header
>     if header :matches "From" "*" {
>     set "from" "${1}";
>     }
>     notify :message "Rejected Office Datei ${from}: ${subject}" "${to}";


The notify needs "mailto:${to}" as its argument.


>     reject text:
> Aus Sicherheitsgründen nehmen wir keine Office Dateien mehr an. Bitte
> senden Sie uns ein PDF.
> .
> ;
> }
>
> A manual sievec call gives no error, and if I remove everything but the
> reject line it works.
>
> Any ideas?


I used the sieve-test binary to figure out why it was not working.
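For reference, a typical sieve-test run looks something like this (file names are placeholders; check the Pigeonhole sieve-test man page for the exact options of your version):

```shell
# Dry-run the script against a saved raw message and print the actions
sieve-test dovecot.sieve message.eml

# With an execution trace written to stdout to see which tests matched
sieve-test -t - dovecot.sieve message.eml
```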

My full script for checking mime-types as well as extensions, and also
notifying the original recipient, is now the following:

require [ "foreverypart", "mime",
"fileinto","reject","body","enotify","variables"];

if address :contains ["To","TO","Cc","CC"] "@example.org" {
    # :matches is used to get value of to, subject and from
    if address :matches ["To","TO","Cc","CC"] "*" {
    set "to" "${1}";
    }
    if header :matches "Subject" "*" {
    set "subject" "${1}";
    }
    if header :matches "From" "*" {
    set "from" "${1}";
    }

    # reject based on mime-type
    if allof(body :content  "application/msword" :contains "",
 body :content  "application/msexcel" :contains "") {
    # send notification to original recipient
    notify :message "Rejected Office Datei von ${from}:
${subject}" "mailto:${to}";;
    # send rejection message to sender
    reject text:
Aus Sicherheitsgründen nehmen wir keine Office Dateien mehr an. Bitte
senden Sie uns ein PDF.
--
Ralf Becker
EGroupware GmbH
.
;
    stop;
    }
    # reject based on extension of attachment
    foreverypart
    {
    if header :mime :param "filename" :matches
["Content-Type", "Content-Disposition"]
    ["*.doc","*.xsl"]
    {
    # send notification to original recipient
    notify :message "Rejected
Dateierweiterung/Fileextension ${from}: ${subject}" "mailto:${to}";;
    # send rejection message to sender
    reject text:
Aus Sicherheitsgründen nehmen wir kein Office Dateien mehr an. Bitte
senden Sie uns ein PDF.
--
Ralf Becker
EGroupware GmbH
.
;
    stop;
    }
    }
}

Ralf






Sieve: reject certain mime-types and notify recipient

2019-01-14 Thread Ralf Becker
I have to reject office files for a certain domain plus notifying the
original recipient about the rejection too.

require ["fileinto","reject","body","enotify","variables"];

if allof (address :contains ["To","TO","Cc","CC"] "@example.org", body
:content  "application/msword" :contains "") {
    set "to" "${1}";
    # :matches is used to get the value of the Subject header
    if header :matches "Subject" "*" {
    set "subject" "${1}";
    }
    # :matches is used to get the value of the From header
    if header :matches "From" "*" {
    set "from" "${1}";
    }
    notify :message "Rejected Office Datei ${from}: ${subject}" "${to}";
    reject text:
Aus Sicherheitsgründen nehmen wir keine Office Dateien mehr an. Bitte
senden Sie uns ein PDF.
.
;
}

A manual sievec call gives no error, and if I remove everything but the
reject line it works.

Any ideas?






Re: mail-migration with doveadm sync/backup and mail_attachment_dir

2018-08-27 Thread Ralf Becker
On 27.08.18 at 10:20, Ralf Becker wrote:
> I need to run a mail-migration from a Dovecot with mail_attachment_dir /
> single instance storage enabled.
>
> As mailboxes and mail_attachment_dir are rsynced I would normally run a
>
> doveadm backup -u  -R -d dbox:/dbox

I tried it now with:

    doveadm -o mail_attachment_dir= backup -u  -R
-d dbox:

Though I'm unsure whether doveadm uses the option just for the source
(reverse destination) or also for the -u / destination side, which would
be wrong for us.
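As far as I can tell, `doveadm -o` overrides the setting for the whole doveadm process, i.e. for both sides of the sync. When the override must apply to the source only, migrating through imapc is the usual alternative; a sketch with hypothetical host and credentials:

```shell
# Pull the user's mail from the old server into the local store via IMAP
doveadm -o imapc_host=old-server.example.com \
        -o imapc_user=user@example.com \
        -o imapc_password=secret \
        backup -R -u user@example.com imapc:
```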

Ralf

> How can I tell it the mail_attachment_dir the rsynced source / (reverse)
> destination uses?
>
> Or do I have to start a Dovecot process and use imapc for the migration?
>
> Ralf







mail-migration with doveadm sync/backup and mail_attachment_dir

2018-08-27 Thread Ralf Becker
I need to run a mail-migration from a Dovecot with mail_attachment_dir /
single instance storage enabled.

As mailboxes and mail_attachment_dir are rsynced I would normally run a

doveadm backup -u  -R -d dbox:/dbox

How can I tell it the mail_attachment_dir the rsynced source / (reverse)
destination uses?

Or do I have to start a Dovecot process and use imapc for the migration?

Ralf







Re: doveadm mailbox delete not working

2018-08-15 Thread Ralf Becker
One more update: I tried renaming that "[Test " folder to just "Test",
which creates a likewise undeletable folder "Test".

Good news is, if I delete these folders on both replication nodes from
mdbox/mailboxes and mdbox/subscriptions, they seem to be really gone.
I can even create the folder "Test" again and it behaves normally, i.e.
I can delete it afterwards via IMAP.

Any objections against that workaround (deleting the folder in
mdbox/mailboxes and from mdbox/subscriptions)?
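For reference, the workaround amounts to something like this. The paths are hypothetical and depend on mail_location; do it on both nodes, ideally with Dovecot and replication stopped, and keep a backup, since this bypasses Dovecot entirely:

```shell
cd /var/dovecot/imap/example.org/user/mdbox
# Remove the stale folder directory (note the trailing space in the name)
rm -r 'mailboxes/[Test '
# Then delete the matching line from the plain-text subscriptions file
vi subscriptions
```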

Ralf

On 15.08.18 at 18:23, Ralf Becker wrote:
> I found a way to reproduce the problem :)
>
> Use an arbitrary mailbox (maybe my Dovecot config with mdbox etc. is
> required).
> Then use Thunderbird (52.9.1 on Mac) to create eg. the following folder
> (without the quotes!):
>
>     "[Test / Test]"
>
> This is a space, a slash and a space between the Test in square brackets.
>
> TB now creates the following two folders:
>
> 1. "INBOX/[Test " (not subscribed)
> 2. "INBOX/[Test / Test]" (subscribed)
>
> I can delete the 2nd one, but the 1st one is not deletable, neither via
> TB nor via a doveadm command:
>
> / # doveadm mailbox status -u b...@bb-trunk.egroupware.de all 'INBOX/[Test '
> doveadm(b...@bb-trunk.egroupware.de): Error: Mailbox INBOX/[Test : Failed
> to lookup mailbox status: Mailbox doesn't exist: INBOX/[Test
> / # doveadm mailbox delete -u b...@bb-trunk.egroupware.de 'INBOX/[Test '
> doveadm(b...@bb-trunk.egroupware.de): Info: Mailbox deleted: INBOX/[Test
> / # doveadm mailbox list -u b...@bb-trunk.egroupware.de 'INBOX/[Test '
> INBOX/[Test
>
> I can see the folder under mdbox/mailboxes and in mdbox/subscriptions.
>
> I hope that allows you to reproduce it.
>
> Ralf
>
On 14.08.18 at 15:13, Ralf Becker wrote:
>> I have a user who has several folders in his mailbox, which we can not
>> delete, neither via IMAP nor via doveadm:
>>
>> root@ka-nfs-mail:~# doveadm mailbox list -u  | grep hbereiche
>> | cat -v
>> INBOX/[Fachbereiche ^M
>> INBOX/Fachbereiche ^M
>> INBOX/hbereiche^M
>> INBOX/hbereiche/LAGen]^M
>> INBOX/hbereiche/LAG^M
>> INBOX/[Fachbereiche^M
>> INBOX/[Fachbereiche/LAGen]^M
>> INBOX/[Fachbereiche]^M
>> INBOX/[Fachbereiche]/LAGen]^M
>> INBOX/[Fachbereiche]/LAGe^M
>> root@ka-nfs-mail:~# doveadm mailbox delete  -u 
>> 'INBOX/Fachbereiche '
>> doveadm(): Info: Mailbox deleted: INBOX/Fachbereiche
>> root@ka-nfs-mail:~# doveadm mailbox list -u | grep hbereiche |
>> cat -v
>> INBOX/[Fachbereiche ^M
>> INBOX/Fachbereiche ^M
>> INBOX/hbereiche^M
>> INBOX/hbereiche/LAGen]^M
>> INBOX/hbereiche/LAG^M
>> INBOX/[Fachbereiche^M
>> INBOX/[Fachbereiche/LAGen]^M
>> INBOX/[Fachbereiche]^M
>> INBOX/[Fachbereiche]/LAGen]^M
>> INBOX/[Fachbereiche]/LAGe^M
>>
>> As far as I tried none of these folders can be deleted (I added single
>> quotes for trailing space and tried to delete subfolders first).
>>
>> Mailbox is in mdbox format on a replication pair under Dovecot 2.2.36
>> and I tried both nodes of the replication with same result.
>>
>> Any ideas?
>>
>> Ralf
>>







Re: doveadm mailbox delete not working

2018-08-15 Thread Ralf Becker
I found a way to reproduce the problem :)

Use an arbitrary mailbox (maybe my Dovecot config with mdbox etc. is
required).
Then use Thunderbird (52.9.1 on Mac) to create eg. the following folder
(without the quotes!):

    "[Test / Test]"

This is a space, a slash and a space between the Test in square brackets.

TB now creates the following two folders:

1. "INBOX/[Test " (not subscribed)
2. "INBOX/[Test / Test]" (subscribed)

I can delete the 2nd one, but the 1st one is not deletable, neither via
TB nor via a doveadm command:

/ # doveadm mailbox status -u b...@bb-trunk.egroupware.de all 'INBOX/[Test '
doveadm(b...@bb-trunk.egroupware.de): Error: Mailbox INBOX/[Test : Failed
to lookup mailbox status: Mailbox doesn't exist: INBOX/[Test
/ # doveadm mailbox delete -u b...@bb-trunk.egroupware.de 'INBOX/[Test '
doveadm(b...@bb-trunk.egroupware.de): Info: Mailbox deleted: INBOX/[Test
/ # doveadm mailbox list -u b...@bb-trunk.egroupware.de 'INBOX/[Test '
INBOX/[Test

I can see the folder under mdbox/mailboxes and in mdbox/subscriptions.

I hope that allows you to reproduce it.

Ralf

On 14.08.18 at 15:13, Ralf Becker wrote:
> I have a user who has several folders in his mailbox, which we can not
> delete, neither via IMAP nor via doveadm:
>
> root@ka-nfs-mail:~# doveadm mailbox list -u  | grep hbereiche
> | cat -v
> INBOX/[Fachbereiche ^M
> INBOX/Fachbereiche ^M
> INBOX/hbereiche^M
> INBOX/hbereiche/LAGen]^M
> INBOX/hbereiche/LAG^M
> INBOX/[Fachbereiche^M
> INBOX/[Fachbereiche/LAGen]^M
> INBOX/[Fachbereiche]^M
> INBOX/[Fachbereiche]/LAGen]^M
> INBOX/[Fachbereiche]/LAGe^M
> root@ka-nfs-mail:~# doveadm mailbox delete  -u 
> 'INBOX/Fachbereiche '
> doveadm(): Info: Mailbox deleted: INBOX/Fachbereiche
> root@ka-nfs-mail:~# doveadm mailbox list -u | grep hbereiche |
> cat -v
> INBOX/[Fachbereiche ^M
> INBOX/Fachbereiche ^M
> INBOX/hbereiche^M
> INBOX/hbereiche/LAGen]^M
> INBOX/hbereiche/LAG^M
> INBOX/[Fachbereiche^M
> INBOX/[Fachbereiche/LAGen]^M
> INBOX/[Fachbereiche]^M
> INBOX/[Fachbereiche]/LAGen]^M
> INBOX/[Fachbereiche]/LAGe^M
>
> As far as I tried none of these folders can be deleted (I added single
> quotes for trailing space and tried to delete subfolders first).
>
> Mailbox is in mdbox format on a replication pair under Dovecot 2.2.36
> and I tried both nodes of the replication with same result.
>
> Any ideas?
>
> Ralf
>







Re: doveadm mailbox delete not working

2018-08-15 Thread Ralf Becker
Hi Steffen,

On 15.08.18 at 15:58, Steffen Kaiser wrote:
> On Tue, 14 Aug 2018, Ralf Becker wrote:
>
> > Date: Tue, 14 Aug 2018 15:13:12 +0200
> > From: Ralf Becker 
> > To: dovecot@dovecot.org
> > Subject: doveadm mailbox delete not working
>
> > I have a user who has several folders in his mailbox, which we can not
> > delete, neither via IMAP nor via doveadm:
>
> > root@ka-nfs-mail:~# doveadm mailbox list -u  | grep hbereiche
> > | cat -v
> > INBOX/[Fachbereiche ^M
>
> > Any ideas?
>
> I haven't seen this idea, and you've written nothing about the ^M:

The ^M is the regular CR from the doveadm output, made visible by cat -v,
and I used it to show there is a trailing space.
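The same check can be done anywhere Python is available; a tiny sketch of what cat -v does for the carriage-return case (folder name taken from the output above):

```python
def cat_v(s: str) -> str:
    """Mimic `cat -v` for carriage returns: render CR as ^M."""
    return s.replace("\r", "^M")

# The folder name as doveadm printed it, with trailing space and CR:
print(cat_v("INBOX/[Fachbereiche \r"))  # prints INBOX/[Fachbereiche ^M
```

repr() works just as well for spotting such invisible characters.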

> The ^M means that there is a "\015" / \r at the end of the output.
> Where does this char come from? In "normal" output, this char is
> almost invisible, esp. at the end of a line. I don't know how Dovecot
> handles this char internally.
>
> The char should show up in the JSON formatted list, Aki suggested, too:
>
> doveadm -fjson mailbox list -u user INBOX/*
>
> But I haven't seen the output in your replies.
>
> Can you verify in the filesystem, if the char is there, too? E.g.
> ls -1 | cat -v

root@ka-nfs-mail:/poolN/dovecot/imap///mdbox/mailboxes# ls
-1|cat -v|grep hbereich
[Fachbereiche
[Fachbereiche
[Fachbereiche]
Fachbereiche
hbereiche

So there is no ^M/CR in the filename itself.

Ralf

> Maybe
>
> doveadm mailbox delete  -u  'INBOX/Fachbereiche '"\015"
> Would help?
>
> -- Steffen Kaiser







Re: doveadm mailbox delete not working

2018-08-15 Thread Ralf Becker
Hi Sami,

On 15.08.18 at 11:29, Sami Ketola wrote:
>
>
>> On 15 Aug 2018, at 9.29, Ralf Becker <r...@egroupware.org> wrote:
>>
>> On 14.08.18 at 18:51, Aki Tuomi wrote:
>>> Try 
>>>
>>> doveadm mailbox list -u user INBOX/*
>>
>> Hmm, posted that before, it lists all these undeletable mailboxes:
>
>
> Can you also post your doveconf -n to be sure that the folder is just
> not autocreated after delete.

We are autocreating (and subscribing) some folders, but not the ones
this thread is about, just the standard folders like Sent, Drafts,
Trash, ...

Output from doveadm config -n is attached. It's from one of the
replication nodes; the other one has an identical config, apart from the
things which need to differ for the replication. There is also a
director pair in front of them, just for your info.

Ralf


# 2.2.36 (1f10bfa63): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24 (124e06aa)
# OS: Linux 4.15.0-23-generic x86_64  
# Hostname: 8f18f1de92f7
auth_cache_negative_ttl = 2 mins
auth_cache_size = 10 M
auth_cache_ttl = 5 mins
auth_master_user_separator = *
auth_mechanisms = plain login
auth_username_chars = 
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@#"
default_client_limit = 3500
default_process_limit = 512
disable_plaintext_auth = no
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
first_valid_uid = 90
listen = *
log_path = /dev/stderr
mail_access_groups = dovecot
mail_gid = dovecot
mail_location = mdbox:~/mdbox
mail_log_prefix = "%s(%u %p): "
mail_max_userip_connections = 200
mail_plugins = acl quota notify replication mail_log
mail_uid = dovecot
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave vnd.dovecot.debug
mbox_min_index_size = 1000 B
mbox_write_locks = fcntl
mdbox_rotate_size = 50 M
namespace inboxes {
  inbox = yes
  location = 
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Junk {
auto = subscribe
special_use = \Junk
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Templates {
auto = subscribe
  }
  mailbox Trash {
auto = subscribe
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
}
namespace subs {
  hidden = yes
  list = no
  location = 
  prefix = 
  separator = /
}
namespace users {
  location = mdbox:%%h/mdbox:INDEXPVT=~/shared/%%u
  prefix = user/%%n/
  separator = /
  subscriptions = no
  type = shared
}
passdb {
  args = /etc/dovecot/dovecot-dict-master-auth.conf
  driver = dict
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/dovecot/imap/%d/shared-mailboxes.db
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  mail_replica = tcp:10.44.88.1
  quota = dict:User quota::ns=INBOX/:file:%h/dovecot-quota
  quota_rule = *:storage=100GB
  sieve = ~/sieve/dovecot.sieve
  sieve_after = /var/dovecot/sieve/after.d/
  sieve_before = /var/dovecot/sieve/before.d/
  sieve_dir = ~/sieve
  sieve_extensions = +editheader
  sieve_user_log = ~/.sieve.log
}
postmaster_address = adm...@egroupware.org
protocols = imap pop3 lmtp sieve
quota_full_tempfail = yes
replication_dsync_parameters = -d -n INBOX -l 30 -U
service aggregator {
  fifo_listener replication-notify-fifo {
user = dovecot
  }
  unix_listener replication-notify {
user = dovecot
  }
}
service auth-worker {
  user = $default_internal_user
}
service auth {
  drop_priv_before_exec = no
  inet_listener {
port = 113
  }
}
service doveadm {
  inet_listener {
port = 12345
  }
  inet_listener {
port = 26
  }
  vsz_limit = 640 M
}
service imap-login {
  inet_listener imap {
port = 143
  }
  inet_listener imaps {
port = 993
ssl = yes
  }
  process_min_avail = 5
  service_count = 1
  vsz_limit = 64 M
}
service imap {
  executable = imap
  process_limit = 2048
  vsz_limit = 640 M
}
service lmtp {
  inet_listener lmtp {
port = 24
  }
  unix_listener lmtp {
mode = 0666
  }
  vsz_limit = 512 M
}
service managesieve-login {
  inet_listener sieve {
port = 4190
  }
  inet_listener sieve_deprecated {
port = 2000
  }
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  inet_listener pop3s {
port = 995
ssl = yes
  }
}
service pop3 {
  executable = pop3
}
service postlogin {
  executable = script-login -d rawlog -b -t
}
service replicator {
  pr

Re: doveadm mailbox delete not working

2018-08-15 Thread Ralf Becker
Hi Aki,

On 15.08.18 at 11:31, Aki Tuomi wrote:
> Such fun folders. Can you try doveadm -Dv mailbox delete -u username
> folder and post it to the list?

root@ka-nfs-mail:~# doveadm -Dv mailbox delete -u 
'INBOX/Fachbereiche '
Debug: Loading modules from directory: /usr/lib/dovecot
Debug: Module loaded: /usr/lib/dovecot/lib01_acl_plugin.so
Debug: Module loaded: /usr/lib/dovecot/lib10_quota_plugin.so
Debug: Module loaded: /usr/lib/dovecot/lib15_notify_plugin.so
Debug: Module loaded: /usr/lib/dovecot/lib20_mail_log_plugin.so
Debug: Module loaded: /usr/lib/dovecot/lib20_replication_plugin.so
Debug: Loading modules from directory: /usr/lib/dovecot/doveadm
Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_acl_plugin.so
Debug: Skipping module doveadm_expire_plugin, because dlopen() failed:
Error relocating
/usr/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so:
expire_set_lookup: symbol not found (this is usually intentional, so
just ignore this message)
Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_quota_plugin.so
Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so
Debug: Skipping module doveadm_fts_plugin, because dlopen() failed:
Error relocating /usr/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so:
fts_backend_rescan: symbol not found (this is usually intentional, so
just ignore this message)
Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen()
failed: Error relocating
/usr/lib/dovecot/doveadm/libdoveadm_mail_crypt_plugin.so:
mail_crypt_box_get_public_key: symbol not found (this is usually
intentional, so just ignore this message)
doveadm( 46922): Debug: Added userdb setting:
plugin/master_user=
doveadm( 46922): Debug: Added userdb setting:
plugin/userdb_acl_groups=koakram@,wahlkampfnetzwerk@,wahlkalender
2017@,lgs@
doveadm( 46922): Debug: Added userdb setting:
plugin/userdb_quota_rule=*:bytes=1572864
doveadm(): Debug: Effective uid=90, gid=101,
home=/var/dovecot/imap//
doveadm(): Debug: Quota root: name=User quota backend=dict
args=:ns=INBOX/:file:/var/dovecot/imap///dovecot-quota
doveadm(): Debug: Quota rule: root=User quota mailbox=*
bytes=107374182400 messages=0
doveadm(): Debug: Quota grace: root=User quota
bytes=10737418240 (10%)
doveadm(): Debug: dict quota: user=,
uri=file:/var/dovecot/imap///dovecot-quota, noenforcing=0
doveadm(): Debug: Namespace inboxes: type=private,
prefix=INBOX/, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=no
location=mdbox:~/mdbox
doveadm(): Debug: fs:
root=/var/dovecot/imap///mdbox, index=, indexpvt=,
control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 1
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: Namespace users: type=shared,
prefix=user/%n/, sep=/, inbox=no, hidden=no, list=yes, subscriptions=no
location=mdbox:%h/mdbox:INDEXPVT=~/shared/%u
doveadm(): Debug: shared: root=/run/dovecot, index=,
indexpvt=, control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 0
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: Namespace subs: type=private, prefix=,
sep=/, inbox=no, hidden=yes, list=no, subscriptions=yes
location=mdbox:~/mdbox
doveadm(): Debug: fs:
root=/var/dovecot/imap///mdbox, index=, indexpvt=,
control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 1
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: quota: quota_over_flag check:
quota_over_script unset - skipping
doveadm(): Debug: INBOX/Fachbereiche : Mailbox opened because:
mailbox delete
doveadm(): Debug: acl vfile: file
/var/dovecot/imap///mdbox/mailboxes/Fachbereiche
/dbox-Mails/dovecot-acl not found
doveadm(): Debug: Namespace INBOX/: Using permissions from
/var/dovecot/imap///mdbox: mode=0700 gid=default
doveadm(): Debug: replication: Replication requested by
'mailbox delete', priority=1
doveadm(): Info: Mailbox deleted: INBOX/Fachbereiche

Even though doveadm reports the folder as deleted, it is not:

root@ka-nfs-mail:~# doveadm mailbox list  -u 
'INBOX/Fachbereiche '
INBOX/Fachbereiche

But doveadm cannot get a status for it:

root@ka-nfs-mail:~# doveadm mailbox status  -u  all
'INBOX/Fachbereiche '
doveadm(): Error: Mailbox INBOX/Fachbereiche : Failed to
lookup mailbox status: Mailbox doesn't exist: INBOX/Fachbereiche

Really weird.

Ralf

> Aki
>
> On 15.08.2018 11:03, Ralf Becker wrote:
>> Hi Aki,
>>
>> I respond to you only on purpose, as the a dont want to show all folders
>> public, please remove them when you reply to the list.
>>
>> This is the folder my previous examples where about:
>>
>>     "mailbox": "INBOX/[Fachbereiche "  <-- trailing space

Re: doveadm mailbox delete not working

2018-08-14 Thread Ralf Becker
On 14.08.18 at 18:51, Aki Tuomi wrote:
> Try 
>
> doveadm mailbox list -u user INBOX/*

Hmm, posted that before, it lists all these undeletable mailboxes:

root@ka-nfs-mail:~# doveadm mailbox list  -u  'INBOX/*' | grep
hbereich
INBOX/[Fachbereiche
INBOX/Fachbereiche
INBOX/hbereiche
INBOX/hbereiche/LAGen]
INBOX/hbereiche/LAG
INBOX/[Fachbereiche
INBOX/[Fachbereiche/LAGen]
INBOX/[Fachbereiche]
INBOX/[Fachbereiche]/LAGen]
INBOX/[Fachbereiche]/LAGe

When I try deleting it, everything looks ok:

root@ka-nfs-mail:~# doveadm mailbox delete -u 
'INBOX/Fachbereiche '
doveadm(): Info: Mailbox deleted: INBOX/Fachbereiche

But listing the mailboxes shows it's still there:

root@ka-nfs-mail:~# doveadm mailbox list  -u  'INBOX/*' 2>&1 |
grep hbereich
INBOX/[Fachbereiche
INBOX/Fachbereiche
INBOX/hbereiche
INBOX/hbereiche/LAGen]
INBOX/hbereiche/LAG
INBOX/[Fachbereiche
INBOX/[Fachbereiche/LAGen]
INBOX/[Fachbereiche]
INBOX/[Fachbereiche]/LAGen]
INBOX/[Fachbereiche]/LAGe
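A side note on quoting: the trailing space is easy to lose when typing such a command. Wrapping the name in a visible marker shows what the shell actually passes (plain POSIX shell, no Dovecot needed):

```shell
name='INBOX/Fachbereiche '    # trailing space preserved by the quotes
printf '[%s]\n' "$name"       # brackets make the trailing space visible
# prints [INBOX/Fachbereiche ]
```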

Ralf

> Aki
>
>> On 14 August 2018 at 19:20 Ralf Becker  wrote:
>>
>>
>> Hi Aki,
>>
>> On 14.08.18 at 16:42, Aki Tuomi wrote:
>>> Hi,
>>>
>>> the thing I'm actually looking for is whether the sync causes the 
>>> folder to be restored, so it might be a better idea for you to try and spot 
>>> this from the logs. I assume that as an SP that you are using mail_log 
>>> plugin, so that might be useful to spot if this happens. You can also try 
>>> looking at the UIDVALIDITY value of the folder, it usually corresponds to 
>>> the creation unixtime.
>> Hmm, I don't get a mailbox status for the folder 'INBOX/Fachbereiche '
>> (trailing space):
>>
>> root@ka-nfs-mail:~# doveadm -Dv mailbox status  -u  all
>> 'INBOX/Fachbereiche '
>> Debug: Loading modules from directory: /usr/lib/dovecot
>> Debug: Module loaded: /usr/lib/dovecot/lib01_acl_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/lib10_quota_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/lib15_notify_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/lib20_mail_log_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/lib20_replication_plugin.so
>> Debug: Loading modules from directory: /usr/lib/dovecot/doveadm
>> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_acl_plugin.so
>> Debug: Skipping module doveadm_expire_plugin, because dlopen() failed:
>> Error relocating
>> /usr/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so:
>> expire_set_lookup: symbol not found (this is usually intentional, so
>> just ignore this message)
>> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_quota_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so
>> Debug: Skipping module doveadm_fts_plugin, because dlopen() failed:
>> Error relocating /usr/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so:
>> fts_backend_rescan: symbol not found (this is usually intentional, so
>> just ignore this message)
>> Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen()
>> failed: Error relocating
>> /usr/lib/dovecot/doveadm/libdoveadm_mail_crypt_plugin.so:
>> mail_crypt_box_get_public_key: symbol not found (this is usually
>> intentional, so just ignore this message)
>> doveadm( 43723): Debug: Added userdb setting:
>> plugin/master_user=
>> doveadm( 43723): Debug: Added userdb setting:
>> plugin/userdb_acl_groups=koakram@,wahlkampfnetzwerk@,wahlkalender
>> 2017@,lgs@
>> doveadm( 43723): Debug: Added userdb setting:
>> plugin/userdb_quota_rule=*:bytes=1572864
>> doveadm(): Debug: Effective uid=90, gid=101,
>> home=/var/dovecot/imap//
>> doveadm(): Debug: Quota root: name=User quota backend=dict
>> args=:ns=INBOX/:file:/var/dovecot/imap///dovecot-quota
>> doveadm(): Debug: Quota rule: root=User quota mailbox=*
>> bytes=107374182400 messages=0
>> doveadm(): Debug: Quota grace: root=User quota
>> bytes=10737418240 (10%)
>> doveadm(): Debug: dict quota: user=,
>> uri=file:/var/dovecot/imap///dovecot-quota, noenforcing=0
>> doveadm(): Debug: Namespace inboxes: type=private,
>> prefix=INBOX/, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=no
>> location=mdbox:~/mdbox
>> doveadm(): Debug: fs:
>> root=/var/dovecot/imap///mdbox, index=, indexpvt=,
>> control=, inbox=, alt=
>> doveadm(): Debug: acl: initializing backend with data: vfile
>> doveadm(): Debug: acl: acl username = 
>> doveadm(): Debug: acl: owner = 1
>> doveadm(): Debug: acl vfile: Global ACLs disabled
>> doveadm(): Debug: Namespace users: type=shared,
>> prefix

Re: doveadm mailbox delete not working

2018-08-14 Thread Ralf Becker
ing module doveadm_expire_plugin, because dlopen() failed:
Error relocating
/usr/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so:
expire_set_lookup: symbol not found (this is usually intentional, so
just ignore this message)
Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_quota_plugin.so
Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so
Debug: Skipping module doveadm_fts_plugin, because dlopen() failed:
Error relocating /usr/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so:
fts_backend_rescan: symbol not found (this is usually intentional, so
just ignore this message)
Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen()
failed: Error relocating
/usr/lib/dovecot/doveadm/libdoveadm_mail_crypt_plugin.so:
mail_crypt_box_get_public_key: symbol not found (this is usually
intentional, so just ignore this message)
doveadm( 46127): Debug: Added userdb setting:
plugin/master_user=
doveadm( 46127): Debug: Added userdb setting:
plugin/userdb_acl_groups=koakram@,wahlkampfnetzwerk@,wahlkalender
2017@,lgs@
doveadm( 46127): Debug: Added userdb setting:
plugin/userdb_quota_rule=*:bytes=1572864
doveadm(): Debug: Effective uid=90, gid=101,
home=/var/dovecot/imap//
doveadm(): Debug: Quota root: name=User quota backend=dict
args=:ns=INBOX/:file:/var/dovecot/imap///dovecot-quota
doveadm(): Debug: Quota rule: root=User quota mailbox=*
bytes=107374182400 messages=0
doveadm(): Debug: Quota grace: root=User quota
bytes=10737418240 (10%)
doveadm(): Debug: dict quota: user=,
uri=file:/var/dovecot/imap///dovecot-quota, noenforcing=0
doveadm(): Debug: Namespace inboxes: type=private,
prefix=INBOX/, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=no
location=mdbox:~/mdbox
doveadm(): Debug: fs:
root=/var/dovecot/imap///mdbox, index=, indexpvt=,
control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 1
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: Namespace users: type=shared,
prefix=user/%n/, sep=/, inbox=no, hidden=no, list=yes, subscriptions=no
location=mdbox:%h/mdbox:INDEXPVT=~/shared/%u
doveadm(): Debug: shared: root=/run/dovecot, index=,
indexpvt=, control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 0
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: Namespace subs: type=private, prefix=,
sep=/, inbox=no, hidden=yes, list=no, subscriptions=yes
location=mdbox:~/mdbox
doveadm(): Debug: fs:
root=/var/dovecot/imap///mdbox, index=, indexpvt=,
control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 1
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: quota: quota_over_flag check:
quota_over_script unset - skipping
INBOX/Fachbereiche

Is there some kind of index for existing mailboxes which needs rebuilding?
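For reference, Dovecot does ship a command aimed at exactly this kind of index rebuild, `doveadm force-resync` (whether it clears these ghost entries was never confirmed in this thread). A sketch of the invocation, with a placeholder user; the command is only printed here, not executed:

```shell
# Hypothetical force-resync of all folders for one user. The user name
# is a placeholder; on a live server you would run the command itself.
user="someone@example.com"
cmd="doveadm force-resync -u $user 'INBOX/*'"
echo "$cmd"
```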

Ralf

>
> Aki
>
>> On 14 August 2018 at 17:18 Ralf Becker  wrote:
>>
>>
>> Hi Aki,
>>
>> thanks for looking into this :)
>>
>> Am 14.08.18 um 15:15 schrieb Aki Tuomi:
>>> can you turn on mail_debug=yes and run doveadm -Dv mailbox delete and
>>> provide output and logs from both servers?
>> root@ka-nfs-mail:~# doveadm -Dv mailbox delete  -u h 'INBOX/Fachbereiche '
>> Debug: Loading modules from directory: /usr/lib/dovecot
>> Debug: Module loaded: /usr/lib/dovecot/lib01_acl_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/lib10_quota_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/lib15_notify_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/lib20_mail_log_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/lib20_replication_plugin.so
>> Debug: Loading modules from directory: /usr/lib/dovecot/doveadm
>> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_acl_plugin.so
>> Debug: Skipping module doveadm_expire_plugin, because dlopen() failed:
>> Error relocating
>> /usr/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so:
>> expire_set_lookup: symbol not found (this is usually intentional, so
>> just ignore this message)
>> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_quota_plugin.so
>> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so
>> Debug: Skipping module doveadm_fts_plugin, because dlopen() failed:
>> Error relocating /usr/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so:
>> fts_backend_rescan: symbol not found (this is usually intentional, so
>> just ignore this message)
>> Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen()
>> failed: Error relocating
>> /usr/lib/dovecot/doveadm/libdoveadm_mail_crypt_plugin.so:
>> m

Re: doveadm mailbox delete not working

2018-08-14 Thread Ralf Becker
Hi Aki,

thanks for looking into this :)

Am 14.08.18 um 15:15 schrieb Aki Tuomi:
> can you turn on mail_debug=yes and run doveadm -Dv mailbox delete and
> provide output and logs from both servers?
root@ka-nfs-mail:~# doveadm -Dv mailbox delete  -u h 'INBOX/Fachbereiche '
Debug: Loading modules from directory: /usr/lib/dovecot
Debug: Module loaded: /usr/lib/dovecot/lib01_acl_plugin.so
Debug: Module loaded: /usr/lib/dovecot/lib10_quota_plugin.so
Debug: Module loaded: /usr/lib/dovecot/lib15_notify_plugin.so
Debug: Module loaded: /usr/lib/dovecot/lib20_mail_log_plugin.so
Debug: Module loaded: /usr/lib/dovecot/lib20_replication_plugin.so
Debug: Loading modules from directory: /usr/lib/dovecot/doveadm
Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_acl_plugin.so
Debug: Skipping module doveadm_expire_plugin, because dlopen() failed:
Error relocating
/usr/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so:
expire_set_lookup: symbol not found (this is usually intentional, so
just ignore this message)
Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_quota_plugin.so
Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so
Debug: Skipping module doveadm_fts_plugin, because dlopen() failed:
Error relocating /usr/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so:
fts_backend_rescan: symbol not found (this is usually intentional, so
just ignore this message)
Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen()
failed: Error relocating
/usr/lib/dovecot/doveadm/libdoveadm_mail_crypt_plugin.so:
mail_crypt_box_get_public_key: symbol not found (this is usually
intentional, so just ignore this message)
doveadm(): Debug: auth PASS input:
doveadm( 32679): Debug: auth USER input: 
userdb_quota_rule=*:bytes=1572864 master_user=
userdb_acl_groups=koakram@,wahlkampfnetzwerk@,wahlkalender
2017@,lgs@ home=/var/dovecot/imap//
doveadm( 32679): Debug: Added userdb setting:
plugin/master_user=
doveadm( 32679): Debug: Added userdb setting:
plugin/userdb_acl_groups=koakram@,wahlkampfnetzwerk@,wahlkalender
2017@,lgs@
doveadm( 32679): Debug: Added userdb setting:
plugin/userdb_quota_rule=*:bytes=1572864
doveadm(): Debug: Effective uid=90, gid=101,
home=/var/dovecot/imap//
doveadm(): Debug: Quota root: name=User quota backend=dict
args=:ns=INBOX/:file:/var/dovecot/imap///dovecot-quota
doveadm(): Debug: Quota rule: root=User quota mailbox=*
bytes=107374182400 messages=0
doveadm(): Debug: Quota grace: root=User quota
bytes=10737418240 (10%)
doveadm(): Debug: dict quota: user=,
uri=file:/var/dovecot/imap///dovecot-quota, noenforcing=0
doveadm(): Debug: Namespace inboxes: type=private,
prefix=INBOX/, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=no
location=mdbox:~/mdbox
doveadm(): Debug: fs:
root=/var/dovecot/imap///mdbox, index=, indexpvt=,
control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 1
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: Namespace users: type=shared,
prefix=user/%n/, sep=/, inbox=no, hidden=no, list=yes, subscriptions=no
location=mdbox:%h/mdbox:INDEXPVT=~/shared/%u
doveadm(): Debug: shared: root=/run/dovecot, index=,
indexpvt=, control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 0
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: Namespace subs: type=private, prefix=,
sep=/, inbox=no, hidden=yes, list=no, subscriptions=yes
location=mdbox:~/mdbox
doveadm(): Debug: fs:
root=/var/dovecot/imap///mdbox, index=, indexpvt=,
control=, inbox=, alt=
doveadm(): Debug: acl: initializing backend with data: vfile
doveadm(): Debug: acl: acl username = 
doveadm(): Debug: acl: owner = 1
doveadm(): Debug: acl vfile: Global ACLs disabled
doveadm(): Debug: quota: quota_over_flag check:
quota_over_script unset - skipping
doveadm(): Debug: INBOX/Fachbereiche : Mailbox opened because:
mailbox delete
doveadm(): Debug: acl vfile: file
/var/dovecot/imap///mdbox/mailboxes/Fachbereiche
/dbox-Mails/dovecot-acl not found
doveadm(): Debug: Namespace INBOX/: Using permissions from
/var/dovecot/imap///mdbox: mode=0700 gid=default
doveadm(): Debug: replication: Replication requested by
'mailbox delete', priority=1
doveadm(): Info: Mailbox deleted: INBOX/Fachbereiche

Output and logs are from the (less loaded) standby/backup node. I can
get the logs from the active node tonight.

I had to remove some folder names for privacy reasons, but they all have
the same output in the logs.

Ralf

>
>
>
> ---
> Aki Tuomi
> Dovecot oy
>
>  Original message 
> From: Ralf Becker 
> Date: 14/08/2018 16:13 (GMT+02:00)
> To: dovecot@dovecot.org
> Subject: doveadm mailbox delete not working
>
> I have a user who has several folders in his mailbox, which we can not
> delete, neither

doveadm mailbox delete not working

2018-08-14 Thread Ralf Becker
I have a user who has several folders in his mailbox, which we cannot
delete, either via IMAP or via doveadm:

root@ka-nfs-mail:~# doveadm mailbox list -u  | grep hbereiche
| cat -v
INBOX/[Fachbereiche ^M
INBOX/Fachbereiche ^M
INBOX/hbereiche^M
INBOX/hbereiche/LAGen]^M
INBOX/hbereiche/LAG^M
INBOX/[Fachbereiche^M
INBOX/[Fachbereiche/LAGen]^M
INBOX/[Fachbereiche]^M
INBOX/[Fachbereiche]/LAGen]^M
INBOX/[Fachbereiche]/LAGe^M
root@ka-nfs-mail:~# doveadm mailbox delete  -u 
'INBOX/Fachbereiche '
doveadm(): Info: Mailbox deleted: INBOX/Fachbereiche
root@ka-nfs-mail:~# doveadm mailbox list -u | grep hbereiche |
cat -v
INBOX/[Fachbereiche ^M
INBOX/Fachbereiche ^M
INBOX/hbereiche^M
INBOX/hbereiche/LAGen]^M
INBOX/hbereiche/LAG^M
INBOX/[Fachbereiche^M
INBOX/[Fachbereiche/LAGen]^M
INBOX/[Fachbereiche]^M
INBOX/[Fachbereiche]/LAGen]^M
INBOX/[Fachbereiche]/LAGe^M

None of these folders can be deleted, as far as I have tried (I added single
quotes for the trailing space and tried deleting subfolders first).
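Note that the `cat -v` output above shows a trailing `^M` (a literal carriage return, 0x0D) on every listed name, which a quoted trailing space alone does not match. A small sketch of how such a name can be reproduced byte-for-byte on the shell; the mailbox name is taken from the listing above, and the delete itself is only shown as a comment since it needs a live server:

```shell
# Build the name with the trailing space AND the carriage return that
# 'cat -v' rendered as ^M; plain quoting of a trailing space misses the CR.
# (Command substitution strips trailing newlines, but not a trailing CR.)
name="$(printf 'INBOX/Fachbereiche \r')"

# Round-trip check: render the name the same way the original listing did.
printf '%s' "$name" | cat -v

# The delete could then be attempted with the exact byte sequence
# (not executed here; doveadm and the user are environment-specific):
# doveadm mailbox delete -u "$user" "$name"
```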

Mailbox is in mdbox format on a replication pair under Dovecot 2.2.36
and I tried both nodes of the replication pair, with the same result.

Any ideas?

Ralf

-- 
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0




signature.asc
Description: OpenPGP digital signature


Dovecot replication does not replicate subscription in shared mailboxes

2018-03-23 Thread Ralf Becker
We use a pair of replicating Dovecot servers.

If a user subscribes to a folder shared by another user on one
replica, that subscription does not get replicated to the other node.
Subscriptions to regular folders below INBOX seem to replicate correctly.

Is that a general bug, a known problem, or do we need to enable it somehow?

Here's our configuration for the concerned namespace:

namespace users {
  location = mdbox:%%h/mdbox:INDEXPVT=~/shared/%%u
  prefix = user/%%n/
  separator = /
  subscriptions = no
  type = shared
}

I attached the full doveconf -n.

Thanks for any pointers :)

Ralf

-- 
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0

# 2.2.32 (dfbe293d4): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.20 (7cd71ba)
# OS: Linux 4.4.0-116-generic x86_64  
auth_cache_negative_ttl = 2 mins
auth_cache_size = 10 M
auth_cache_ttl = 5 mins
auth_master_user_separator = *
auth_mechanisms = plain login
auth_username_chars = 
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@#"
default_client_limit = 3500
default_process_limit = 512
disable_plaintext_auth = no
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
first_valid_uid = 90
listen = *
log_path = /dev/stderr
mail_access_groups = dovecot
mail_gid = dovecot
mail_location = mdbox:~/mdbox
mail_log_prefix = "%s(%u %p): "
mail_max_userip_connections = 200
mail_plugins = acl quota notify replication mail_log
mail_uid = dovecot
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave vnd.dovecot.debug
mbox_min_index_size = 1000 B
mdbox_rotate_size = 50 M
namespace inboxes {
  inbox = yes
  location = 
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Junk {
auto = subscribe
special_use = \Junk
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Templates {
auto = subscribe
  }
  mailbox Trash {
auto = subscribe
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
}
namespace subs {
  hidden = yes
  list = no
  location = 
  prefix = 
  separator = /
}
namespace users {
  location = mdbox:%%h/mdbox:INDEXPVT=~/shared/%%u
  prefix = user/%%n/
  separator = /
  subscriptions = no
  type = shared
}
passdb {
  args = /etc/dovecot/dovecot-dict-master-auth.conf
  driver = dict
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/dovecot/imap/%d/shared-mailboxes.db
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  mail_replica = tcp:10.44.99.1
  quota = dict:User quota::ns=INBOX/:file:%h/dovecot-quota
  quota_rule = *:storage=100GB
  sieve = ~/sieve/dovecot.sieve
  sieve_after = /var/dovecot/sieve/after.d/
  sieve_before = /var/dovecot/sieve/before.d/
  sieve_dir = ~/sieve
  sieve_extensions = +editheader
  sieve_user_log = ~/.sieve.log
}
postmaster_address = adm...@egroupware.org
protocols = imap pop3 lmtp sieve
quota_full_tempfail = yes
replication_dsync_parameters = -d -n INBOX -l 30 -U
service aggregator {
  fifo_listener replication-notify-fifo {
user = dovecot
  }
  unix_listener replication-notify {
user = dovecot
  }
}
service auth-worker {
  user = $default_internal_user
}
service auth {
  drop_priv_before_exec = no
  inet_listener {
port = 113
  }
}
service doveadm {
  inet_listener {
port = 12345
  }
  inet_listener {
port = 26
  }
  vsz_limit = 512 M
}
service imap-login {
  inet_listener imap {
port = 143
  }
  inet_listener imaps {
port = 993
ssl = yes
  }
  process_min_avail = 5
  service_count = 1
  vsz_limit = 64 M
}
service imap {
  executable = imap
  process_limit = 2048
  vsz_limit = 512 M
}
service lmtp {
  inet_listener lmtp {
port = 24
  }
  unix_listener lmtp {
mode = 0666
  }
  vsz_limit = 512 M
}
service managesieve-login {
  inet_listener sieve {
port = 4190
  }
  inet_listener sieve_deprecated {
port = 2000
  }
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  inet_listener pop3s {
port = 995
ssl = yes
  }
}
service pop3 {
  executable = pop3
}
service postlogin {
  executable = script-login -d rawlog -b -t
}
service replicator {
  process_min_avail = 1
  unix_listener replicator-doveadm {
group = dovecot
mode = 0660
user = dovecot
  }
}
ssl_cert = 

signature.asc
Description: OpenPGP digital signature


Re: Replication to wrong mailbox

2017-11-02 Thread Ralf Becker
Hi Timo,

Am 02.11.17 um 10:34 schrieb Timo Sirainen:
> On 30 Oct 2017, at 11.05, Ralf Becker  wrote:
>> It happened now twice that replication created folders and mails in the
>> wrong mailbox :(
>>
>> Here's the architecture we use:
>> - 2 Dovecot (2.2.32) backends in two different datacenters replicating
>> via a VPN connection
>> - Dovecot directors in both datacenters talk to both backends with
>> vhost_count of 100 vs 1 for local vs remote backend
>> - backends use proxy dict via a unix domain socket and socat to talk via
>> tcp to a dict on a different server (kubernetes cluster)
>> - backends have a local sqlite userdb for iteration (also containing
>> home directories, as just iteration is not possible)
>> - serving around 7000 mailboxes in roughly 200 different domains
>>
>> Everything works as expected, until the dict is not reachable, e.g. due to a
>> server failure or a planned reboot of a node of the Kubernetes cluster.
>> In that situation it can happen that some requests are not answered,
>> even with Kubernetes running multiple instances of the dict.
>> I can only speculate what happens then: it seems the connection failure
>> to the remote dict is not correctly handled and leads to a situation in
>> which the last mailbox/home directory is used for the replication :(
> It sounds to me like a userdb lookup changes the username during a dict 
> failure. Although I can't really think of how that could happen. 

Me neither.

Users are in multiple MariaDB databases on a Galera cluster. We have no
problems or unexpected changes there.

The dict is running multiple times, but that might not guarantee that no
single request fails.

> The only thing that comes to my mind is auth_cache, but in that case I'd 
> expect the same problem to happen even when there aren't dict errors.
>
> For testing you could see if it's reproducible with:
>
>  - get random username
>  - do doveadm user 
>  - verify that the result contains the same input user
>
> Then do that in a loop rapidly and restart your test kubernetes once in a 
> while.
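Timo's three steps can be scripted. A minimal sketch with the lookup command kept as a parameter so it can be pointed at the real `doveadm user`; the usernames and the loop in the comment are placeholders:

```shell
# check_user LOOKUP USER: run "LOOKUP USER" and confirm the queried
# username appears in the result, i.e. the lookup did not silently
# switch to a different user during a dict failure.
check_user() {
  out=$($1 "$2" 2>&1) || return 1
  case "$out" in
    *"$2"*) return 0 ;;
    *)      return 1 ;;
  esac
}

# Real use would look roughly like this (not run here):
#   while true; do
#     u=$(shuf -n1 userlist.txt)                 # random username
#     check_user 'doveadm user' "$u" || echo "MISMATCH: $u"
#     sleep 0.1
#   done
```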


OK, I'll give that a try. It would be a lot easier than the whole
replication setup.

Ralf

-- 
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0




signature.asc
Description: OpenPGP digital signature


Re: Replication to wrong mailbox

2017-11-02 Thread Ralf Becker
Hi Aki,

Am 02.11.17 um 09:57 schrieb Aki Tuomi:
> Can you somehow reproduce this issue with auth_debug=yes and
> mail_debug=yes and provide those logs?

I will try, now that I know someone will look at the logs.

It might take a couple of days. I'll come back once I have the logs.

Ralf

> Aki
>
>
> On 02.11.2017 10:55, Ralf Becker wrote:
>> No one any idea?
>>
>> Replication into wrong mailboxes caused by an unavailable proxy dict
>> backend is a serious privacy and/or security problem!
>>
>> Ralf
>>
>> Am 30.10.17 um 10:05 schrieb Ralf Becker:
>>> It happened now twice that replication created folders and mails in the
>>> wrong mailbox :(
>>>
>>> Here's the architecture we use:
>>> - 2 Dovecot (2.2.32) backends in two different datacenters replicating
>>> via a VPN connection
>>> - Dovecot directors in both datacenters talk to both backends with
>>> vhost_count of 100 vs 1 for local vs remote backend
>>> - backends use proxy dict via a unix domain socket and socat to talk via
>>> tcp to a dict on a different server (kubernetes cluster)
>>> - backends have a local sqlite userdb for iteration (also containing
>>> home directories, as just iteration is not possible)
>>> - serving around 7000 mailboxes in roughly 200 different domains
>>>
>>> Everything works as expected, until the dict is not reachable, e.g. due to a
>>> server failure or a planned reboot of a node of the Kubernetes cluster.
>>> In that situation it can happen that some requests are not answered,
>>> even with Kubernetes running multiple instances of the dict.
>>> I can only speculate what happens then: it seems the connection failure
>>> to the remote dict is not correctly handled and leads to a situation in
>>> which the last mailbox/home directory is used for the replication :(
>>>
>>> When it happened the first time we attributed it to the fact that the
>>> Sqlite database at that time contained no home directory information,
>>> which we fixed afterwards. This first time (a server failure) took a couple of
>>> minutes and led to many mailboxes containing mostly folders, but also
>>> some newly arrived mails belonging to other mailboxes/users. We could only
>>> resolve that situation by rolling back to a zfs snapshot before the
>>> downtime.
>>>
>>> The second time was last Friday night during a (much shorter) reboot of
>>> a Kubernetes node and led only to a single mailbox containing folders
>>> and mails of other mailboxes. That was verified by looking at timestamps
>>> of directories below $home/mdbox/mailboxes and files in $home/mdbox/storage.
>>> I cannot tell whether adding the home directory to the Sqlite database or
>>> the shorter duration of the failure limited the wrong replication to a
>>> single mailbox.
>>>
>>> Can someone with more knowledge of the Dovecot code please check/verify
>>> how replication deals with failures in the proxy dict? I'm of course happy to
>>> provide more information about our configuration if needed.
>>>
>>> Here is an excerpt of our configuration (full doveconf -n is attached):
>>>
>>> passdb {
>>>   args = /etc/dovecot/dovecot-dict-master-auth.conf
>>>   driver = dict
>>>   master = yes
>>> }
>>> passdb {
>>>   args = /etc/dovecot/dovecot-dict-auth.conf
>>>   driver = dict
>>> }
>>> userdb {
>>>   driver = prefetch
>>> }
>>> userdb {
>>>   args = /etc/dovecot/dovecot-dict-auth.conf
>>>   driver = dict
>>> }
>>> userdb {
>>>   args = /etc/dovecot/dovecot-sql.conf
>>>   driver = sql
>>> }
>>>
>>> dovecot-dict-auth.conf:
>>> uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
>>> password_key = passdb/%u/%w
>>> user_key = userdb/%u
>>> iterate_disable = yes
>>>
>>> dovecot-dict-master-auth.conf:
>>> uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
>>> password_key = master/%{login_user}/%u/%w
>>> iterate_disable = yes
>>>
>>> dovecot-sql.conf:
>>> driver = sqlite
>>> connect = /etc/dovecot/users.sqlite
>>> user_query = SELECT home,NULL AS uid,NULL AS gid FROM users WHERE userid
>>> = '%n' AND domain = '%d'
>>> iterate_query = SELECT userid AS username, domain FROM users


-- 
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0






signature.asc
Description: OpenPGP digital signature


Re: Replication to wrong mailbox

2017-11-02 Thread Ralf Becker
No one any idea?

Replication into wrong mailboxes caused by an unavailable proxy dict
backend is a serious privacy and/or security problem!

Ralf

Am 30.10.17 um 10:05 schrieb Ralf Becker:
> It happened now twice that replication created folders and mails in the
> wrong mailbox :(
>
> Here's the architecture we use:
> - 2 Dovecot (2.2.32) backends in two different datacenters replicating
> via a VPN connection
> - Dovecot directors in both datacenters talk to both backends with
> vhost_count of 100 vs 1 for local vs remote backend
> - backends use proxy dict via a unix domain socket and socat to talk via
> tcp to a dict on a different server (kubernetes cluster)
> - backends have a local sqlite userdb for iteration (also containing
> home directories, as just iteration is not possible)
> - serving around 7000 mailboxes in roughly 200 different domains
>
> Everything works as expected, until the dict is not reachable, e.g. due to a
> server failure or a planned reboot of a node of the Kubernetes cluster.
> In that situation it can happen that some requests are not answered,
> even with Kubernetes running multiple instances of the dict.
> I can only speculate what happens then: it seems the connection failure
> to the remote dict is not correctly handled and leads to a situation in
> which the last mailbox/home directory is used for the replication :(
>
> When it happened the first time we attributed it to the fact that the
> Sqlite database at that time contained no home directory information,
> which we fixed afterwards. This first time (a server failure) took a couple of
> minutes and led to many mailboxes containing mostly folders, but also
> some newly arrived mails belonging to other mailboxes/users. We could only
> resolve that situation by rolling back to a zfs snapshot before the
> downtime.
>
> The second time was last Friday night during a (much shorter) reboot of
> a Kubernetes node and led only to a single mailbox containing folders
> and mails of other mailboxes. That was verified by looking at timestamps
> of directories below $home/mdbox/mailboxes and files in $home/mdbox/storage.
> I cannot tell whether adding the home directory to the Sqlite database or
> the shorter duration of the failure limited the wrong replication to a
> single mailbox.
>
> Can someone with more knowledge of the Dovecot code please check/verify
> how replication deals with failures in the proxy dict? I'm of course happy to
> provide more information about our configuration if needed.
>
> Here is an excerpt of our configuration (full doveconf -n is attached):
>
> passdb {
>   args = /etc/dovecot/dovecot-dict-master-auth.conf
>   driver = dict
>   master = yes
> }
> passdb {
>   args = /etc/dovecot/dovecot-dict-auth.conf
>   driver = dict
> }
> userdb {
>   driver = prefetch
> }
> userdb {
>   args = /etc/dovecot/dovecot-dict-auth.conf
>   driver = dict
> }
> userdb {
>   args = /etc/dovecot/dovecot-sql.conf
>   driver = sql
> }
>
> dovecot-dict-auth.conf:
> uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
> password_key = passdb/%u/%w
> user_key = userdb/%u
> iterate_disable = yes
>
> dovecot-dict-master-auth.conf:
> uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
> password_key = master/%{login_user}/%u/%w
> iterate_disable = yes
>
> dovecot-sql.conf:
> driver = sqlite
> connect = /etc/dovecot/users.sqlite
> user_query = SELECT home,NULL AS uid,NULL AS gid FROM users WHERE userid
> = '%n' AND domain = '%d'
> iterate_query = SELECT userid AS username, domain FROM users

-- 
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0




signature.asc
Description: OpenPGP digital signature


Replication to wrong mailbox

2017-10-30 Thread Ralf Becker
It happened now twice that replication created folders and mails in the
wrong mailbox :(

Here's the architecture we use:
- 2 Dovecot (2.2.32) backends in two different datacenters replicating
via a VPN connection
- Dovecot directors in both datacenters talk to both backends with
vhost_count of 100 vs 1 for local vs remote backend
- backends use proxy dict via a unix domain socket and socat to talk via
tcp to a dict on a different server (kubernetes cluster)
- backends have a local sqlite userdb for iteration (also containing
home directories, as just iteration is not possible)
- serving around 7000 mailboxes in roughly 200 different domains

Everything works as expected, until the dict is not reachable, e.g. due to a
server failure or a planned reboot of a node of the Kubernetes cluster.
In that situation it can happen that some requests are not answered,
even with Kubernetes running multiple instances of the dict.
I can only speculate what happens then: it seems the connection failure
to the remote dict is not correctly handled and leads to a situation in
which the last mailbox/home directory is used for the replication :(

When it happened the first time we attributed it to the fact that the
Sqlite database at that time contained no home directory information,
which we fixed afterwards. This first time (a server failure) took a couple of
minutes and led to many mailboxes containing mostly folders, but also
some newly arrived mails belonging to other mailboxes/users. We could only
resolve that situation by rolling back to a zfs snapshot before the
downtime.

The second time was last Friday night during a (much shorter) reboot of
a Kubernetes node and led only to a single mailbox containing folders
and mails of other mailboxes. That was verified by looking at timestamps
of directories below $home/mdbox/mailboxes and files in $home/mdbox/storage.
I cannot tell whether adding the home directory to the Sqlite database or
the shorter duration of the failure limited the wrong replication to a
single mailbox.

Can someone with more knowledge of the Dovecot code please check/verify
how replication deals with failures in the proxy dict? I'm of course happy to
provide more information about our configuration if needed.

Here is an excerpt of our configuration (full doveconf -n is attached):

passdb {
  args = /etc/dovecot/dovecot-dict-master-auth.conf
  driver = dict
  master = yes
}
passdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
userdb {
  driver = prefetch
}
userdb {
  args = /etc/dovecot/dovecot-dict-auth.conf
  driver = dict
}
userdb {
  args = /etc/dovecot/dovecot-sql.conf
  driver = sql
}

dovecot-dict-auth.conf:
uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
password_key = passdb/%u/%w
user_key = userdb/%u
iterate_disable = yes

dovecot-dict-master-auth.conf:
uri = proxy:/var/run/dovecot_auth_proxy/socket:backend
password_key = master/%{login_user}/%u/%w
iterate_disable = yes

dovecot-sql.conf:
driver = sqlite
connect = /etc/dovecot/users.sqlite
user_query = SELECT home,NULL AS uid,NULL AS gid FROM users WHERE userid = '%n' AND domain = '%d'
iterate_query = SELECT userid AS username, domain FROM users
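The user_query above has %n (local part) and %d (domain) substituted before execution. A self-contained sketch of the same lookup against a throwaway SQLite database (table contents are hypothetical):

```python
import sqlite3

# Build a throwaway users table shaped like the one dovecot-sql.conf queries.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (userid TEXT, domain TEXT, home TEXT)")
con.execute("INSERT INTO users VALUES "
            "('someuser', 'example.org', '/var/dovecot/imap/example.org/someuser')")

# Dovecot expands %n to the local part and %d to the domain of the login user.
user = "someuser@example.org"
local, _, domain = user.partition("@")

# Same shape as user_query, with bound parameters in place of %n/%d.
row = con.execute(
    "SELECT home, NULL AS uid, NULL AS gid FROM users "
    "WHERE userid = ? AND domain = ?", (local, domain)).fetchone()
print(row)  # ('/var/dovecot/imap/example.org/someuser', None, None)
```

An empty result here is exactly the "no home directory information" situation described above, which left the replicator without a usable home path.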

-- 
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0

# 2.2.32 (dfbe293d4): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.20 (7cd71ba)
# OS: Linux 4.4.0-97-generic x86_64  
auth_cache_negative_ttl = 2 mins
auth_cache_size = 10 M
auth_cache_ttl = 5 mins
auth_master_user_separator = *
auth_mechanisms = plain login
auth_username_chars = 
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@#"
default_client_limit = 3500
default_process_limit = 512
disable_plaintext_auth = no
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
first_valid_uid = 90
listen = *
log_path = /dev/stderr
mail_access_groups = dovecot
mail_gid = dovecot
mail_location = mdbox:~/mdbox
mail_log_prefix = "%s(%u %p): "
mail_max_userip_connections = 200
mail_plugins = acl quota notify replication mail_log
mail_uid = dovecot
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave vnd.dovecot.debug
mbox_min_index_size = 1000 B
mdbox_rotate_size = 50 M
namespace inboxes {
  inbox = yes
  location = 
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Junk {
auto = subscribe
special_use = \Junk
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Templates {
auto = subscribe
  }
  mailbox Trash {
auto = subscribe
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
}
namespace subs {
  hidden = yes
  list = no
  locatio

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-19 Thread Ralf Becker
Hi Timo,

Am 20.09.17 um 02:06 schrieb Timo Sirainen:
> On 19 Sep 2017, at 1.03, Ralf Becker  <mailto:r...@egroupware.org>> wrote:
>>
>> doveadm(@): Info: Mailbox INBOX/AA is NOT visible in LIST
>>
>> How to fix that situation?
>>
>> Is there a way to reset acl of all folders of a user to all rights for
>> the owner?
>>
>> root@fra-nfs-mail:/var/dovecot/imap//# find -name
>> "dovecot-acl*"
>> ./mdbox/mailboxes/INBOX/dbox-Mails/dovecot-acl
>> ./mdbox/mailboxes/AA/dbox-Mails/dovecot-acl
>> ./mdbox/dovecot-acl-list
>
> Did you try deleting dovecot-acl-list to see if it makes a difference?
> What do these two dovecot-acl files contain?

root@fra-nfs-mail:/var/dovecot/imap//# find -name
dovecot-acl\*
./mdbox/mailboxes/INBOX/dbox-Mails/dovecot-acl
./mdbox/dovecot-acl-list
root@fra-nfs-mail:/var/dovecot/imap/gruene-berlin.de/Christopher.Poschmann#
cat mdbox/mailboxes/INBOX/dbox-Mails/dovecot-acl
owner akxeilprwts
user=@ akxeilprwts

These are the ACLs I set before.
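For readers puzzling over the letter string: each character in a dovecot-acl entry is one IMAP ACL right (RFC 4314 letters as used by Dovecot's ACL plugin), and "akxeilprwts" is all eleven of them, i.e. full access. A small decoder sketch:

```python
# Mapping of dovecot-acl right letters to their names.
RIGHTS = {
    "l": "lookup", "r": "read", "w": "write", "s": "write-seen",
    "t": "write-deleted", "i": "insert", "p": "post", "e": "expunge",
    "k": "create", "x": "delete", "a": "admin",
}

def decode_acl(letters: str) -> list:
    """Translate an ACL letter string, e.g. from 'owner akxeilprwts'."""
    return [RIGHTS[c] for c in letters]

print(decode_acl("akxeilprwts"))  # all eleven rights, i.e. full access
```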

> If you delete those, it should reset all the ACLs.

root@fra-nfs-mail:/var/dovecot/imap/gruene-berlin.de/Christopher.Poschmann#
find -name dovecot-acl\*
./mdbox/mailboxes/INBOX/dbox-Mails/dovecot-acl
./mdbox/dovecot-acl-list
root@fra-nfs-mail:/var/dovecot/imap/gruene-berlin.de/Christopher.Poschmann#
cat mdbox/mailboxes/INBOX/dbox-Mails/dovecot-acl
owner akxeilprwts
user=christopher.poschm...@gruene-berlin.de akxeilprwts
root@fra-nfs-mail:/var/dovecot/imap/gruene-berlin.de/Christopher.Poschmann#
rm mdbox/mailboxes/INBOX/dbox-Mails/dovecot-acl mdbox/dovecot-acl-list
root@fra-nfs-mail:/var/dovecot/imap/gruene-berlin.de/Christopher.Poschmann#
doveadm mailbox list -u christopher.poschm...@gruene-berlin.de
user
INBOX
root@fra-nfs-mail:/var/dovecot/imap//# doveadm acl debug
-u @ INBOX
doveadm(@): Info: Mailbox 'INBOX' is in namespace 'INBOX/'
doveadm(@): Info: Mailbox path:
/var/dovecot/imap///mdbox/mailboxes/INBOX/dbox-Mails
doveadm(@): Info: All message flags are shared across
users in mailbox
doveadm(@): Info: User @ has no rights for
mailbox
doveadm(@): Error: User @ is missing
'lookup' right
doveadm(@): Info: Mailbox INBOX is NOT visible in LIST

Problem still exists, after deleting the dovecot-acl* files :(

I believe it's some kind of corruption in the mdbox files.
I tried to move the mailbox away and import it again from the moved
location, which so far has fixed most of the problems we had in the past,
but in the case of this mailbox it failed with a fatal error (see my first
post in this thread).

Ralf

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-19 Thread Ralf Becker
Am 19.09.17 um 00:03 schrieb Ralf Becker:
> Hi Timo,
>
> update to 2.2.32 (suggested by Aki) did not change the situation ...
>
> Am 18.09.17 um 20:49 schrieb Timo Sirainen:
>> On 18 Sep 2017, at 20.12, Ralf Becker > <mailto:r...@egroupware.org>> wrote:
>>> Hi Timo,
>>>
>>> Am 18.09.17 um 12:03 schrieb Timo Sirainen:
>>>> On 18 Sep 2017, at 12.10, Ralf Becker >>> <mailto:r...@egroupware.org>
>>>> <mailto:r...@egroupware.org>> wrote:
>>>>> Am 14.09.17 um 01:07 schrieb Timo Sirainen:
>>>>>> On 7 Sep 2017, at 17.42, Ralf Becker >>>>> <mailto:r...@egroupware.org>
>>>>>> <mailto:r...@egroupware.org>> wrote:
>>>>>>> Dovecot 2.2.31 with mailboxes in mdbox format.
>>>>>>>
>>>>>>> Since a couple of days some mailboxes have the problem, that sieve
>>>>>>> rules
>>>>>>> moving mails to folders stop working and .sieve.log in mailbox shows:
>>>>>>>
>>>>>>> sieve: info: started log at Sep 07 13:57:17.
>>>>>>> error:
>>>>>>> msgid=<20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de
>>>>>>> <mailto:20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>
>>>>>>> <mailto:20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>>:
>>>>>>> failed to store into mailbox 'INBOX/Munser': Mailbox doesn't exist:
>>>>>>> INBOX/Munser.
>>>>>>>
>>>>>>> When I do a doveadm mailbox list -s -u @ I get all
>>>>>>> folders
>>>>>>> incl. the one mentioned above, while doveadm mailbox list without -s
>>>>>>> shows just
>>>>>>> user
>>>>>>> INBOX
>>>>>> Subscriptions are stored independently from the actual folders. So
>>>>>> it looks like the subscription file exists and is correct, but
>>>>>> somehow you've lost all the folders. Do you see the folders in the
>>>>>> filesystem under user/mailboxes/ directory? 
>>>>> Yes, the folders exist under
>>>>> /var/dovecot/imap///mdbox/mailboxes/.
>>>>> Just doveadm mailbox list -u @ (without -s) does only
>>>>> show
>>>>> INBOX and user.
>>>>> (I can send you the list of folders via private mail, but I can not
>>>>> post
>>>>> them on the list.)
>>>>>
>>>>> Anything I can do to get Dovecot to eg. rescan the folders from the
>>>>> filesystem or any other way to fix that problem?
>>>>> I have it with a couple of mailboxes, so I believe it's some kind of
>>>>> systematic problem, nothing the users did.
>>>> I can't really think of any reason why it wouldn't simply work.
>>>> Especially since you're not using v2.2.32, the folder listing is
>>>> always performed by listing the directories in filesystem, so there's
>>>> nothing really to resync. What's your doveconf -n? You could try with
>>>> mailbox_list_index=no if that happens to make any difference, but it
>>>> shouldn't.
>>>>
>>>> You could also try what "strace -o log -s 100 doveadm mailbox list -u
>>>> user@domain" shows. Is it opening the correct mailboxes/ directory?
>>>> Maybe the path is just wrong for some reason (some typo added
>>>> somewhere)?
>>>
>>> Nope it lstats the correct directories, but does not show them.
>>>
>>> I send you the strace / sysdig output per private mail, as it contains
>>> private information of that user.
>> Looks like you have some dovecot-acl and dovecot-acl-list files, so it
>> has to be because Dovecot thinks the ACLs are preventing access to the
>> user. Try deleting dovecot-acl-list to see if the problem is with
>> that. If not, look at the dovecot-acl files and/or "doveadm acl debug
>> -u user@domain " to figure out what's
>> wrong.
>
> root@fra-nfs-mail:~# doveadm acl debug -u @ INBOX/AA
> doveadm(@): Info: Mailbox 'AA' is in namespace 'INBOX/'
> doveadm(@): Info: Mailbox path:
> /var/dovecot/imap///mdbox/mailboxes/AA/dbox-Mails
> doveadm(@): Info: All message flags are shared across
> users in mailbox
> doveadm(@): Info: User @ has no righ

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-18 Thread Ralf Becker
Hi Timo,

update to 2.2.32 (suggested by Aki) did not change the situation ...

Am 18.09.17 um 20:49 schrieb Timo Sirainen:
> On 18 Sep 2017, at 20.12, Ralf Becker  <mailto:r...@egroupware.org>> wrote:
>>
>> Hi Timo,
>>
>> Am 18.09.17 um 12:03 schrieb Timo Sirainen:
>>> On 18 Sep 2017, at 12.10, Ralf Becker >> <mailto:r...@egroupware.org>
>>> <mailto:r...@egroupware.org>> wrote:
>>>>
>>>> Am 14.09.17 um 01:07 schrieb Timo Sirainen:
>>>>> On 7 Sep 2017, at 17.42, Ralf Becker >>>> <mailto:r...@egroupware.org>
>>>>> <mailto:r...@egroupware.org>> wrote:
>>>>>> Dovecot 2.2.31 with mailboxes in mdbox format.
>>>>>>
>>>>>> Since a couple of days some mailboxes have the problem, that sieve
>>>>>> rules
>>>>>> moving mails to folders stop working and .sieve.log in mailbox shows:
>>>>>>
>>>>>> sieve: info: started log at Sep 07 13:57:17.
>>>>>> error:
>>>>>> msgid=<20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de
>>>>>> <mailto:20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>
>>>>>> <mailto:20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>>:
>>>>>> failed to store into mailbox 'INBOX/Munser': Mailbox doesn't exist:
>>>>>> INBOX/Munser.
>>>>>>
>>>>>> When I do a doveadm mailbox list -s -u @ I get all
>>>>>> folders
>>>>>> incl. the one mentioned above, while doveadm mailbox list without -s
>>>>>> shows just
>>>>>> user
>>>>>> INBOX
>>>>> Subscriptions are stored independently from the actual folders. So
>>>>> it looks like the subscription file exists and is correct, but
>>>>> somehow you've lost all the folders. Do you see the folders in the
>>>>> filesystem under user/mailboxes/ directory? 
>>>>
>>>> Yes, the folders exist under
>>>> /var/dovecot/imap///mdbox/mailboxes/.
>>>> Just doveadm mailbox list -u @ (without -s) does only
>>>> show
>>>> INBOX and user.
>>>> (I can send you the list of folders via private mail, but I can not
>>>> post
>>>> them on the list.)
>>>>
>>>> Anything I can do to get Dovecot to eg. rescan the folders from the
>>>> filesystem or any other way to fix that problem?
>>>> I have it with a couple of mailboxes, so I believe it's some kind of
>>>> systematic problem, nothing the users did.
>>>
>>> I can't really think of any reason why it wouldn't simply work.
>>> Especially since you're not using v2.2.32, the folder listing is
>>> always performed by listing the directories in filesystem, so there's
>>> nothing really to resync. What's your doveconf -n? You could try with
>>> mailbox_list_index=no if that happens to make any difference, but it
>>> shouldn't.
>>>
>>> You could also try what "strace -o log -s 100 doveadm mailbox list -u
>>> user@domain" shows. Is it opening the correct mailboxes/ directory?
>>> Maybe the path is just wrong for some reason (some typo added
>>> somewhere)?
>>
>>
>> Nope it lstats the correct directories, but does not show them.
>>
>> I send you the strace / sysdig output per private mail, as it contains
>> private information of that user.
>
> Looks like you have some dovecot-acl and dovecot-acl-list files, so it
> has to be because Dovecot thinks the ACLs are preventing access to the
> user. Try deleting dovecot-acl-list to see if the problem is with
> that. If not, look at the dovecot-acl files and/or "doveadm acl debug
> -u user@domain " to figure out what's
> wrong.


root@fra-nfs-mail:~# doveadm acl debug -u @ INBOX/AA
doveadm(@): Info: Mailbox 'AA' is in namespace 'INBOX/'
doveadm(@): Info: Mailbox path:
/var/dovecot/imap///mdbox/mailboxes/AA/dbox-Mails
doveadm(@): Info: All message flags are shared across
users in mailbox
doveadm(@): Info: User @ has no rights for
mailbox
doveadm(@): Error: User @ is missing
'lookup' right
doveadm(@): Info: Mailbox INBOX/AA is NOT visible in LIST

Ok, but when I try to fix it:

root@fra-nfs-mail:~# doveadm acl add -u @ INBOX/AA
user=@ admin create delete expunge insert lookup post read
write write-deleted write-seen

root@fra

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-18 Thread Ralf Becker
Am 18.09.17 um 11:34 schrieb Ralf Becker:
> Hi Aki,
>
> Am 18.09.17 um 11:22 schrieb Aki Tuomi:
>> On 18.09.2017 12:20, Ralf Becker wrote:
>>> Hi Aki,
>>>
>>> Am 18.09.17 um 11:13 schrieb Aki Tuomi:
>>>> On 18.09.2017 12:10, Ralf Becker wrote:
>>>>> Am 14.09.17 um 01:07 schrieb Timo Sirainen:
>>>>>> On 7 Sep 2017, at 17.42, Ralf Becker  wrote:
>>>>>>> Dovecot 2.2.31 with mailboxes in mdbox format.
>>>>>>>
>>>>>>> Since a couple of days some mailboxes have the problem, that sieve rules
>>>>>>> moving mails to folders stop working and .sieve.log in mailbox shows:
>>>>>>>
>>>>>>> sieve: info: started log at Sep 07 13:57:17.
>>>>>>> error:
>>>>>>> msgid=<20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>:
>>>>>>> failed to store into mailbox 'INBOX/Munser': Mailbox doesn't exist:
>>>>>>> INBOX/Munser.
>>>>>>>
>>>>>>> When I do a doveadm mailbox list -s -u @ I get all folders
>>>>>>> incl. the one mentioned above, while doveadm mailbox list without -s
>>>>>>> shows just
>>>>>>> user
>>>>>>> INBOX
>>>>>> Subscriptions are stored independently from the actual folders. So it 
>>>>>> looks like the subscription file exists and is correct, but somehow 
>>>>>> you've lost all the folders. Do you see the folders in the filesystem 
>>>>>> under user/mailboxes/ directory? 
>>>>> Yes, the folders exist under
>>>>> /var/dovecot/imap///mdbox/mailboxes/.
>>>>> Just doveadm mailbox list -u @ (without -s) does only show
>>>>> INBOX and user.
>>>>> (I can send you the list of folders via private mail, but I can not post
>>>>> them on the list.)
>>>>>
>>>>> Anything I can do to get Dovecot to eg. rescan the folders from the
>>>>> filesystem or any other way to fix that problem?
>>>>> I have it with a couple of mailboxes, so I believe it's some kind of
>>>>> systematic problem, nothing the users did.
>>>>>
>>>>> Ralf
>>>>>
>>>>>> My guess is that it only has INBOX, which means the folders were deleted 
>>>>>> by something (Dovecot corruption can't lose entire folders - something 
>>>>>> must explicitly delete them).
>>>> You can always try doveadm force-resync -u victim "*"
>>>>
>>>> You should run it twice, I guess.
>>> Tried that before and just tried it again, no luck :(
>>>
>>> root@fra-nfs-mail:/var/dovecot/imap/# doveadm force-resync -u
>>> @ "*"
>>> doveadm(@): Warning: fscking index file
>>> /var/dovecot/imap///mdbox/storage/dovecot.map.index
>>> doveadm(@): Warning: mdbox
>>> /var/dovecot/imap///mdbox/storage: rebuilding indexes
>>> doveadm(@): Warning: Transaction log file
>>> /var/dovecot/imap///mdbox/storage/dovecot.map.index.log
>>> was locked for 72 seconds (mdbox storage rebuild)
>>> doveadm(@): Warning: fscking index file
>>> /var/dovecot/imap///mdbox/storage/dovecot.map.index
>>>
>>> root@fra-nfs-mail:/var/dovecot/imap/# doveadm force-resync -u
>>> @ "*"
>>> doveadm(@): Warning: fscking index file
>>> /var/dovecot/imap///mdbox/storage/dovecot.map.index
>>> doveadm(@): Warning: mdbox
>>> /var/dovecot/imap///mdbox/storage: rebuilding indexes
>>> doveadm(@): Warning: fscking index file
>>> /var/dovecot/imap///mdbox/storage/dovecot.map.index
>>>
>>> root@fra-nfs-mail:/var/dovecot/imap/# doveadm mailbox list -u
>>> @
>>> user
>>> INBOX
>>>
>>> What else can I do to analyse the problem?
>>>
>>> Ralf
>>>
>> It seems you are running into
>> https://github.com/dovecot/core/commit/c8be39472a93a5de2cc1051bdbd4468bea0ca7ba#diff-664ea8e9082f57f29f8a284ced77d165
> That commit is part of 2.2.32, as far as I can see on GitHub, so I
> *only* need to update?
>
> I'm a bit reluctant to update, after all the problems in the versions
> between 2.2.27 and 2.2.31 ...
>
> You reckon the update from 2.2.31 to .32 has no known problems so far?
>
> Ralf

I did the update to 2.2.32 now, but there is no change: after two
force-resync runs, doveadm mailbox list still only reports INBOX and user.

Trying Timo's ACL suggestions now ...

Ralf

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-18 Thread Ralf Becker
Hi Timo,

Am 18.09.17 um 12:03 schrieb Timo Sirainen:
> On 18 Sep 2017, at 12.10, Ralf Becker  <mailto:r...@egroupware.org>> wrote:
>>
>> Am 14.09.17 um 01:07 schrieb Timo Sirainen:
>>> On 7 Sep 2017, at 17.42, Ralf Becker >> <mailto:r...@egroupware.org>> wrote:
>>>> Dovecot 2.2.31 with mailboxes in mdbox format.
>>>>
>>>> Since a couple of days some mailboxes have the problem, that sieve
>>>> rules
>>>> moving mails to folders stop working and .sieve.log in mailbox shows:
>>>>
>>>> sieve: info: started log at Sep 07 13:57:17.
>>>> error:
>>>> msgid=<20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de
>>>> <mailto:20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>>:
>>>> failed to store into mailbox 'INBOX/Munser': Mailbox doesn't exist:
>>>> INBOX/Munser.
>>>>
>>>> When I do a doveadm mailbox list -s -u @ I get all
>>>> folders
>>>> incl. the one mentioned above, while doveadm mailbox list without -s
>>>> shows just
>>>> user
>>>> INBOX
>>> Subscriptions are stored independently from the actual folders. So
>>> it looks like the subscription file exists and is correct, but
>>> somehow you've lost all the folders. Do you see the folders in the
>>> filesystem under user/mailboxes/ directory? 
>>
>> Yes, the folders exist under
>> /var/dovecot/imap///mdbox/mailboxes/.
>> Just doveadm mailbox list -u @ (without -s) does only show
>> INBOX and user.
>> (I can send you the list of folders via private mail, but I can not post
>> them on the list.)
>>
>> Anything I can do to get Dovecot to eg. rescan the folders from the
>> filesystem or any other way to fix that problem?
>> I have it with a couple of mailboxes, so I believe it's some kind of
>> systematic problem, nothing the users did.
>
> I can't really think of any reason why it wouldn't simply work.
> Especially since you're not using v2.2.32, the folder listing is
> always performed by listing the directories in filesystem, so there's
> nothing really to resync. What's your doveconf -n? You could try with
> mailbox_list_index=no if that happens to make any difference, but it
> shouldn't.
>
> You could also try what "strace -o log -s 100 doveadm mailbox list -u
> user@domain" shows. Is it opening the correct mailboxes/ directory?
> Maybe the path is just wrong for some reason (some typo added somewhere)?


No, it lstats the correct directories but does not show them.

I'll send you the strace / sysdig output via private mail, as it contains
private information of that user.

Ralf

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-18 Thread Ralf Becker
Hi Aki,

Am 18.09.17 um 11:22 schrieb Aki Tuomi:
> On 18.09.2017 12:20, Ralf Becker wrote:
>> Hi Aki,
>>
>> Am 18.09.17 um 11:13 schrieb Aki Tuomi:
>>> On 18.09.2017 12:10, Ralf Becker wrote:
>>>> Am 14.09.17 um 01:07 schrieb Timo Sirainen:
>>>>> On 7 Sep 2017, at 17.42, Ralf Becker  wrote:
>>>>>> Dovecot 2.2.31 with mailboxes in mdbox format.
>>>>>>
>>>>>> Since a couple of days some mailboxes have the problem, that sieve rules
>>>>>> moving mails to folders stop working and .sieve.log in mailbox shows:
>>>>>>
>>>>>> sieve: info: started log at Sep 07 13:57:17.
>>>>>> error:
>>>>>> msgid=<20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>:
>>>>>> failed to store into mailbox 'INBOX/Munser': Mailbox doesn't exist:
>>>>>> INBOX/Munser.
>>>>>>
>>>>>> When I do a doveadm mailbox list -s -u @ I get all folders
>>>>>> incl. the one mentioned above, while doveadm mailbox list without -s
>>>>>> shows just
>>>>>> user
>>>>>> INBOX
>>>>> Subscriptions are stored independently from the actual folders. So it 
>>>>> looks like the subscription file exists and is correct, but somehow 
>>>>> you've lost all the folders. Do you see the folders in the filesystem 
>>>>> under user/mailboxes/ directory? 
>>>> Yes, the folders exist under
>>>> /var/dovecot/imap///mdbox/mailboxes/.
>>>> Just doveadm mailbox list -u @ (without -s) does only show
>>>> INBOX and user.
>>>> (I can send you the list of folders via private mail, but I can not post
>>>> them on the list.)
>>>>
>>>> Anything I can do to get Dovecot to eg. rescan the folders from the
>>>> filesystem or any other way to fix that problem?
>>>> I have it with a couple of mailboxes, so I believe it's some kind of
>>>> systematic problem, nothing the users did.
>>>>
>>>> Ralf
>>>>
>>>>> My guess is that it only has INBOX, which means the folders were deleted 
>>>>> by something (Dovecot corruption can't lose entire folders - something 
>>>>> must explicitly delete them).
>>> You can always try doveadm force-resync -u victim "*"
>>>
>>> You should run it twice, I guess.
>> Tried that before and just tried it again, no luck :(
>>
>> root@fra-nfs-mail:/var/dovecot/imap/# doveadm force-resync -u
>> @ "*"
>> doveadm(@): Warning: fscking index file
>> /var/dovecot/imap///mdbox/storage/dovecot.map.index
>> doveadm(@): Warning: mdbox
>> /var/dovecot/imap///mdbox/storage: rebuilding indexes
>> doveadm(@): Warning: Transaction log file
>> /var/dovecot/imap///mdbox/storage/dovecot.map.index.log
>> was locked for 72 seconds (mdbox storage rebuild)
>> doveadm(@): Warning: fscking index file
>> /var/dovecot/imap///mdbox/storage/dovecot.map.index
>>
>> root@fra-nfs-mail:/var/dovecot/imap/# doveadm force-resync -u
>> @ "*"
>> doveadm(@): Warning: fscking index file
>> /var/dovecot/imap///mdbox/storage/dovecot.map.index
>> doveadm(@): Warning: mdbox
>> /var/dovecot/imap///mdbox/storage: rebuilding indexes
>> doveadm(@): Warning: fscking index file
>> /var/dovecot/imap///mdbox/storage/dovecot.map.index
>>
>> root@fra-nfs-mail:/var/dovecot/imap/# doveadm mailbox list -u
>> @
>> user
>> INBOX
>>
>> What else can I do to analyse the problem?
>>
>> Ralf
>>
> It seems you are running into
> https://github.com/dovecot/core/commit/c8be39472a93a5de2cc1051bdbd4468bea0ca7ba#diff-664ea8e9082f57f29f8a284ced77d165

That commit is part of 2.2.32, as far as I can see on GitHub, so I
*only* need to update?

I'm a bit reluctant to update, after all the problems in the versions
between 2.2.27 and 2.2.31 ...

You reckon the update from 2.2.31 to .32 has no known problems so far?

Ralf

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-18 Thread Ralf Becker
Hi Aki,

Am 18.09.17 um 11:13 schrieb Aki Tuomi:
> On 18.09.2017 12:10, Ralf Becker wrote:
>> Am 14.09.17 um 01:07 schrieb Timo Sirainen:
>>> On 7 Sep 2017, at 17.42, Ralf Becker  wrote:
>>>> Dovecot 2.2.31 with mailboxes in mdbox format.
>>>>
>>>> Since a couple of days some mailboxes have the problem, that sieve rules
>>>> moving mails to folders stop working and .sieve.log in mailbox shows:
>>>>
>>>> sieve: info: started log at Sep 07 13:57:17.
>>>> error:
>>>> msgid=<20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>:
>>>> failed to store into mailbox 'INBOX/Munser': Mailbox doesn't exist:
>>>> INBOX/Munser.
>>>>
>>>> When I do a doveadm mailbox list -s -u @ I get all folders
>>>> incl. the one mentioned above, while doveadm mailbox list without -s
>>>> shows just
>>>> user
>>>> INBOX
>>> Subscriptions are stored independently from the actual folders. So it looks 
>>> like the subscription file exists and is correct, but somehow you've lost 
>>> all the folders. Do you see the folders in the filesystem under 
>>> user/mailboxes/ directory? 
>> Yes, the folders exist under
>> /var/dovecot/imap///mdbox/mailboxes/.
>> Just doveadm mailbox list -u @ (without -s) does only show
>> INBOX and user.
>> (I can send you the list of folders via private mail, but I can not post
>> them on the list.)
>>
>> Anything I can do to get Dovecot to eg. rescan the folders from the
>> filesystem or any other way to fix that problem?
>> I have it with a couple of mailboxes, so I believe it's some kind of
>> systematic problem, nothing the users did.
>>
>> Ralf
>>
>>> My guess is that it only has INBOX, which means the folders were deleted by 
>>> something (Dovecot corruption can't lose entire folders - something must 
>>> explicitly delete them).
> You can always try doveadm force-resync -u victim "*"
>
> You should run it twice, I guess.

Tried that before and just tried it again, no luck :(

root@fra-nfs-mail:/var/dovecot/imap/# doveadm force-resync -u
@ "*"
doveadm(@): Warning: fscking index file
/var/dovecot/imap///mdbox/storage/dovecot.map.index
doveadm(@): Warning: mdbox
/var/dovecot/imap///mdbox/storage: rebuilding indexes
doveadm(@): Warning: Transaction log file
/var/dovecot/imap///mdbox/storage/dovecot.map.index.log
was locked for 72 seconds (mdbox storage rebuild)
doveadm(@): Warning: fscking index file
/var/dovecot/imap///mdbox/storage/dovecot.map.index

root@fra-nfs-mail:/var/dovecot/imap/# doveadm force-resync -u
@ "*"
doveadm(@): Warning: fscking index file
/var/dovecot/imap///mdbox/storage/dovecot.map.index
doveadm(@): Warning: mdbox
/var/dovecot/imap///mdbox/storage: rebuilding indexes
doveadm(@): Warning: fscking index file
/var/dovecot/imap///mdbox/storage/dovecot.map.index

root@fra-nfs-mail:/var/dovecot/imap/# doveadm mailbox list -u
@
user
INBOX

What else can I do to analyse the problem?

Ralf

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-18 Thread Ralf Becker
Am 14.09.17 um 01:07 schrieb Timo Sirainen:
> On 7 Sep 2017, at 17.42, Ralf Becker  wrote:
>> Dovecot 2.2.31 with mailboxes in mdbox format.
>>
>> Since a couple of days some mailboxes have the problem, that sieve rules
>> moving mails to folders stop working and .sieve.log in mailbox shows:
>>
>> sieve: info: started log at Sep 07 13:57:17.
>> error:
>> msgid=<20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>:
>> failed to store into mailbox 'INBOX/Munser': Mailbox doesn't exist:
>> INBOX/Munser.
>>
>> When I do a doveadm mailbox list -s -u @ I get all folders
>> incl. the one mentioned above, while doveadm mailbox list without -s
>> shows just
>> user
>> INBOX
> Subscriptions are stored independently from the actual folders. So it looks 
> like the subscription file exists and is correct, but somehow you've lost all 
> the folders. Do you see the folders in the filesystem under user/mailboxes/ 
> directory? 

Yes, the folders exist under
/var/dovecot/imap///mdbox/mailboxes/.
Just doveadm mailbox list -u @ (without -s) shows only
INBOX and user.
(I can send you the list of folders via private mail, but I cannot post
them on the list.)

Is there anything I can do to get Dovecot to e.g. rescan the folders from
the filesystem, or any other way to fix that problem?
I see it with a couple of mailboxes, so I believe it's some kind of
systematic problem, nothing the users did.

Ralf

> My guess is that it only has INBOX, which means the folders were deleted by 
> something (Dovecot corruption can't lose entire folders - something must 
> explicitly delete them).

Re: sieve stopped working and doveadm mailbox list without -s shows less folders then with

2017-09-11 Thread Ralf Becker
No one has an idea how it can be that there are more subscribed folders
than listed folders, and how to repair that situation?

Ralf

Am 07.09.17 um 16:42 schrieb Ralf Becker:
> Dovecot 2.2.31 with mailboxes in mdbox format.
>
> Since a couple of days some mailboxes have the problem, that sieve rules
> moving mails to folders stop working and .sieve.log in mailbox shows:
>
> sieve: info: started log at Sep 07 13:57:17.
> error:
> msgid=<20170907155704.egroupware.s4ythvjrr12wsijlpkbk...@somedomain.egroupware.de>:
> failed to store into mailbox 'INBOX/Munser': Mailbox doesn't exist:
> INBOX/Munser.
>
> When I do a doveadm mailbox list -s -u @ I get all folders
> incl. the one mentioned above, while doveadm mailbox list without -s
> shows just
> user
> INBOX
>
> I already tried doveadm force-resync -u @ INBOX, but it
> did not change anything.
>
> I also moved the mailbox in the filesystem to another name and tried to
> restore it from there, which helped with most broken-mailbox problems in
> the pre-2.2.31 aftermath, but that failed completely:
>
> /var/dovecot/imap/ # mv  .broken
>
> /var/dovecot/imap/ # doveadm force-resync -u @ INBOX
>
> /var/dovecot/imap/ # sudo -u dovecot doveadm -Dv import -u
> @ -s mdbox:$(pwd)/.broken/mdbox
> INBOX all
> Debug: Loading modules from directory: /usr/lib/dovecot
> Debug: Module loaded: /usr/lib/dovecot/lib01_acl_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/lib10_quota_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/lib15_notify_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/lib20_mail_log_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/lib20_replication_plugin.so
> Debug: Loading modules from directory: /usr/lib/dovecot/doveadm
> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_acl_plugin.so
> Debug: Skipping module doveadm_expire_plugin, because dlopen() failed:
> Error relocating
> /usr/lib/dovecot/doveadm/lib10_doveadm_expire_plugin.so:
> expire_set_lookup: symbol not found (this is usually intentional, so
> just ignore this message)
> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_quota_plugin.so
> Debug: Module loaded: /usr/lib/dovecot/doveadm/lib10_doveadm_sieve_plugin.so
> Debug: Skipping module doveadm_fts_plugin, because dlopen() failed:
> Error relocating /usr/lib/dovecot/doveadm/lib20_doveadm_fts_plugin.so:
> fts_backend_rescan: symbol not found (this is usually intentional, so
> just ignore this message)
> Debug: Skipping module doveadm_mail_crypt_plugin, because dlopen()
> failed: Error relocating
> /usr/lib/dovecot/doveadm/libdoveadm_mail_crypt_plugin.so:
> mail_crypt_box_get_public_key: symbol not found (this is usually
> intentional, so just ignore this message)
> doveadm(@ 54303): Debug: Added userdb setting:
> plugin/master_user=@
> doveadm(@ 54303): Debug: Added userdb setting:
> plugin/userdb_acl_groups=admins@,hts büro@,hts@
> doveadm(@ 54303): Debug: Added userdb setting:
> plugin/userdb_quota_rule=*:bytes=1048576
> doveadm(@): Debug: Effective uid=90, gid=101,
> home=/var/dovecot/imap//
> doveadm(@): Debug: Quota root: name=User quota
> backend=dict
> args=:ns=INBOX/:file:/var/dovecot/imap///dovecot-quota
> doveadm(@): Debug: Quota rule: root=User quota mailbox=*
> bytes=107374182400 messages=0
> doveadm(@): Debug: Quota grace: root=User quota
> bytes=10737418240 (10%)
> doveadm(@): Debug: dict quota: user=@,
> uri=file:/var/dovecot/imap///dovecot-quota, noenforcing=0
> doveadm(@): Debug: Namespace inboxes: type=private,
> prefix=INBOX/, sep=/, inbox=yes, hidden=no, list=yes, subscriptions=no
> location=mdbox:~/mdbox
> doveadm(@): Debug: fs:
> root=/var/dovecot/imap///mdbox, index=, indexpvt=,
> control=, inbox=, alt=
> doveadm(@): Debug: acl: initializing backend with data: vfile
> doveadm(@): Debug: acl: acl username = @
> doveadm(@): Debug: acl: owner = 0
> doveadm(@): Debug: acl vfile: Global ACLs disabled
> doveadm(@): Debug: Namespace users: type=shared,
> prefix=user/%n/, sep=/, inbox=no, hidden=no, list=yes, subscriptions=no
> location=mdbox:%h/mdbox:INDEXPVT=~/shared/%u
> doveadm(@): Debug: shared: root=/run/dovecot, index=,
> indexpvt=, control=, inbox=, alt=
> doveadm(@): Debug: acl: initializing backend with data: vfile
> doveadm(@): Debug: acl: acl username = @
> doveadm(@): Debug: acl: owner = 0
> doveadm(@): Debug: acl vfile: Global ACLs disabled
> doveadm(@): Debug: Namespace subs: type=private, prefix=,
> sep=/, inbox=no, hidden=yes, list=no, subscriptions=yes
> location=mdbox:~/mdbox
> doveadm(@): Debug: fs:
> root=/var/dovecot/imap///mdbox, index=, indexpvt=,
> control=, inbox=, alt=
> doveadm(@): Debug: acl: initializing backend with data: vfile
> doveadm(@): Debug: acl: acl username 

sieve stopped working and doveadm mailbox list without -s shows fewer folders than with

2017-09-07 Thread Ralf Becker
@): Debug: Quota rule: root=User quota mailbox=*
bytes=107374182400 messages=0
doveadm(@): Debug: Quota grace: root=User quota
bytes=10737418240 (10%)
doveadm(@): Debug: dict quota: user=@,
uri=file:/var/dovecot/imap///dovecot-quota, noenforcing=0
doveadm(@): Debug: fs:
root=/var/dovecot/imap//.broken/mdbox, index=, indexpvt=,
control=, inbox=, alt=
doveadm(@): Debug: acl: initializing backend with data: vfile
doveadm(@): Debug: acl: acl username = @
doveadm(@): Debug: acl: owner = 0
doveadm(@): Debug: acl vfile: Global ACLs disabled
doveadm(@): Error: quota: Unknown namespace: INBOX/
doveadm(@): Debug: quota: quota_over_flag check:
quota_over_script unset - skipping
doveadm(@): Debug: acl vfile: file
/var/dovecot/imap//.broken/mdbox/mailboxes/INBOX/dbox-Mails/dovecot-acl
not found
doveadm(@): Debug: acl vfile: file
/var/dovecot/imap//.broken/mdbox/mailboxes/dovecot-acl not
found
doveadm(@): Debug: acl: Mailbox not in dovecot-acl-list:
MailboxA
doveadm(@): Debug: acl: Mailbox not in dovecot-acl-list:
MailboxB

doveadm(@): Debug: INBOX: Mailbox opened because: import
doveadm(@): Debug: Namespace : Using permissions from
/var/dovecot/imap///mdbox: mode=0700 gid=default
doveadm(@): Debug: replication: Replication requested by
'mailbox subscribe', priority=1
doveadm(@): Debug: INBOX/INBOX: Mailbox opened because: import
doveadm(@): Debug: acl vfile: file
/var/dovecot/imap///mdbox/mailboxes/INBOX/dbox-Mails/dovecot-acl
not found
doveadm(@): Debug: acl vfile: file
/var/dovecot/imap///mdbox/mailboxes/dovecot-acl not found
doveadm(@): Error: Opening INBOX failed: Mailbox doesn't
exist: INBOX/INBOX
doveadm(@): Error: Syncing mailbox INBOX/INBOX failed:
Opening INBOX failed: Mailbox doesn't exist: INBOX/INBOX

Any ideas what the problem could be and how to fix it?

Or what other information I can supply to help diagnose the problem.

Ralf

-- 
Ralf Becker
EGroupware GmbH [www.egroupware.org]
Handelsregister HRB Kaiserslautern 3587
Geschäftsführer Birgit und Ralf Becker
Leibnizstr. 17, 67663 Kaiserslautern, Germany
Telefon +49 631 31657-0






Re: proxy-dict with tcp connection

2017-08-04 Thread Ralf Becker
Hi Aki,

On 03.08.17 at 19:49, Aki Tuomi wrote:
>> On August 3, 2017 at 2:10 PM Ralf Becker  wrote:
>>
>>
>> I try to create a patch to allow (proxy-)dict to use tcp connections
>> instead of a unix domain socket.
>>
>> I'm replacing connection_init_client_unix with connection_init_client_ip:
>>
>> --- ./src/lib-dict/dict-client.c.orig
>> +++ ./src/lib-dict/dict-client.c
>> @@ -721,6 +721,10 @@ client_dict_init(struct dict *driver, const char *uri,
>>  struct ioloop *old_ioloop = current_ioloop;
>>  struct client_dict *dict;
>>  const char *p, *dest_uri, *path;
>> +const char *const *args;
>> +unsigned int argc;
>> +struct ip_addr ip;
>> +in_port_t port=0;
>>  unsigned int idle_msecs = DICT_CLIENT_DEFAULT_TIMEOUT_MSECS;
>>  unsigned int warn_slow_msecs = DICT_CLIENT_DEFAULT_WARN_SLOW_MSECS;
>>
>> @@ -772,7 +776,21 @@ client_dict_init(struct dict *driver, const char *uri,
>>  dict->warn_slow_msecs = warn_slow_msecs;
>>  i_array_init(&dict->cmds, 32);
>>
>> -if (uri[0] == ':') {
>> +args = t_strsplit(uri, ":");
>> +for(argc=0; args[argc] != NULL; argc++);
>> +
>> +if (argc == 3) {/* host:ip:somewhere --> argc == 3 */
>> +if (net_addr2ip(args[0], &ip) < 0) {
>> +*error_r = t_strdup_printf("Invalid IP: %s in URI: %s",
>> args[0], uri);
>> +return -1;
>> +}
>> +if (net_str2port(args[1], &port) < 0) {
>> +*error_r = t_strdup_printf("Invalid port: %s in URI: %s",
>> args[1], uri);
>> +return -1;
>> +}
>> +dest_uri = strrchr(uri, ':');
>> +} else if (uri[0] == ':') {
>>  /* default path */
>>  path = t_strconcat(set->base_dir,
>>  "/"DEFAULT_DICT_SERVER_SOCKET_FNAME, NULL);
>> @@ -784,7 +802,13 @@ client_dict_init(struct dict *driver, const char *uri,
>>  path = t_strconcat(set->base_dir, "/",
>>  t_strdup_until(uri, dest_uri), NULL);
>>  }
>> -connection_init_client_unix(dict_connections, &dict->conn.conn, path);
>> +if (port > 0) {
>> +connection_init_client_ip(dict_connections, &dict->conn.conn,
>> &ip, port);
>> +} else {
>> +connection_init_client_unix(dict_connections, &dict->conn.conn,
>> path);
>> +}
>>  dict->uri = i_strdup(dest_uri + 1);
>>
>>  dict->ioloop = io_loop_create();
>>
>> But unfortunately this crashes:
>>
>> Jul 28 13:20:04 auth: Error: auth worker: Aborted PASSL request for
>> i...@outdoor-training.de: Worker process died unexpectedly
>> Jul 28 13:20:04 auth-worker(705): Fatal: master: service(auth-worker):
>> child 705 killed with signal 11 (core dumped)
>> Jul 28 13:20:04 doveadm(10.44.88.1,i...@outdoor-training.de): Error:
>> user i...@outdoor-training.de: Auth PASS lookup failed
>>
>> It looks like the tcp connection gets opened non-blocking and the first
>> write / dict lookup happens too early:
>>
>> 4303041 13:44:25.120398220 0 auth (29884) < connect
>> res=-115(EINPROGRESS) tuple=172.18.0.2:47552->10.44.99.180:2001
>>
>> Looking at dict-memcached-ascii.c I probably need to do something like:
>>
>> i_array_init(&dict->input_states, 4);
>> i_array_init(&dict->replies, 4);
>>
>> dict->ioloop = io_loop_create();
>> io_loop_set_current(old_ioloop);
>> *dict_r = &dict->dict;
>>
>> to wait until the socket is ready ...
>>
>> Any idea / tips?
>>
>> Ralf
> It's probably cleaner to make a "proxy-tcp" driver so parsing all the funny
> things gets easier. Also it will require some restructuring in the
> client_dict_connect code.

Thanks for the advice. I will try to find a way to share protocol stuff,
but keep connection details separate.

Can you elaborate a bit on how e.g. dict-memcached-ascii waits
between opening the socket and writing to it:

   
https://github.com/dovecot/core/blob/master/src/lib-dict/dict-memcached-ascii.c#L434

By the way, do you know whether your company would be available to develop
such small enhancements, like dict using TCP instead of UNIX domain sockets?

Ralf







proxy-dict with tcp connection

2017-08-03 Thread Ralf Becker
I try to create a patch to allow (proxy-)dict to use tcp connections
instead of a unix domain socket.

I'm replacing connection_init_client_unix with connection_init_client_ip:

--- ./src/lib-dict/dict-client.c.orig
+++ ./src/lib-dict/dict-client.c
@@ -721,6 +721,10 @@ client_dict_init(struct dict *driver, const char *uri,
 struct ioloop *old_ioloop = current_ioloop;
 struct client_dict *dict;
 const char *p, *dest_uri, *path;
+const char *const *args;
+unsigned int argc;
+struct ip_addr ip;
+in_port_t port=0;
 unsigned int idle_msecs = DICT_CLIENT_DEFAULT_TIMEOUT_MSECS;
 unsigned int warn_slow_msecs = DICT_CLIENT_DEFAULT_WARN_SLOW_MSECS;

@@ -772,7 +776,21 @@ client_dict_init(struct dict *driver, const char *uri,
 dict->warn_slow_msecs = warn_slow_msecs;
 i_array_init(&dict->cmds, 32);

-if (uri[0] == ':') {
+args = t_strsplit(uri, ":");
+for(argc=0; args[argc] != NULL; argc++);
+
+if (argc == 3) {/* host:ip:somewhere --> argc == 3 */
+if (net_addr2ip(args[0], &ip) < 0) {
+*error_r = t_strdup_printf("Invalid IP: %s in URI: %s",
args[0], uri);
+return -1;
+}
+if (net_str2port(args[1], &port) < 0) {
+*error_r = t_strdup_printf("Invalid port: %s in URI: %s",
args[1], uri);
+return -1;
+}
+dest_uri = strrchr(uri, ':');
+} else if (uri[0] == ':') {
 /* default path */
 path = t_strconcat(set->base_dir,
 "/"DEFAULT_DICT_SERVER_SOCKET_FNAME, NULL);
@@ -784,7 +802,13 @@ client_dict_init(struct dict *driver, const char *uri,
 path = t_strconcat(set->base_dir, "/",
 t_strdup_until(uri, dest_uri), NULL);
 }
-connection_init_client_unix(dict_connections, &dict->conn.conn, path);
+if (port > 0) {
+connection_init_client_ip(dict_connections, &dict->conn.conn,
&ip, port);
+} else {
+connection_init_client_unix(dict_connections, &dict->conn.conn,
path);
+}
 dict->uri = i_strdup(dest_uri + 1);

 dict->ioloop = io_loop_create();

But unfortunately this crashes:

Jul 28 13:20:04 auth: Error: auth worker: Aborted PASSL request for
i...@outdoor-training.de: Worker process died unexpectedly
Jul 28 13:20:04 auth-worker(705): Fatal: master: service(auth-worker):
child 705 killed with signal 11 (core dumped)
Jul 28 13:20:04 doveadm(10.44.88.1,i...@outdoor-training.de): Error:
user i...@outdoor-training.de: Auth PASS lookup failed

It looks like the tcp connection gets opened non-blocking and the first
write / dict lookup happens too early:

4303041 13:44:25.120398220 0 auth (29884) < connect
res=-115(EINPROGRESS) tuple=172.18.0.2:47552->10.44.99.180:2001

Looking at dict-memcached-ascii.c I probably need to do something like:

i_array_init(&dict->input_states, 4);
i_array_init(&dict->replies, 4);

    dict->ioloop = io_loop_create();
io_loop_set_current(old_ioloop);
*dict_r = &dict->dict;

to wait until the socket is ready ...

Any idea / tips?

Ralf


--- ./src/lib-dict/dict-client.c.orig
+++ ./src/lib-dict/dict-client.c
@@ -721,6 +721,10 @@ client_dict_init(struct dict *driver, const char *uri,
struct ioloop *old_ioloop = current_ioloop;
struct client_dict *dict;
const char *p, *dest_uri, *path;
+   const char *const *args;
+   unsigned int argc;
+   struct ip_addr ip;
+   in_port_t port=0;
unsigned int idle_msecs = DICT_CLIENT_DEFAULT_TIMEOUT_MSECS;
unsigned int warn_slow_msecs = DICT_CLIENT_DEFAULT_WARN_SLOW_MSECS;

@@ -772,7 +776,21 @@ client_dict_init(struct dict *driver, const char *uri,
dict->warn_slow_msecs = warn_slow_msecs;
i_array_init(&dict->cmds, 32);

-   if (uri[0] == ':') {
+   args = t_strsplit(uri, ":");
+   for(argc=0; args[argc] != NULL; argc++);
+
+   if (argc == 3) {/* host:ip:somewhere --> argc == 3 */
+   if (net_addr2ip(args[0], &ip) < 0) {
+   *error_r = t_strdup_printf("Invalid IP: %s in URI: %s", 
args[0], uri);
+   return -1;
+   }
+   if (net_str2port(args[1], &port) < 0) {
+   *error_r = t_strdup_printf("Invalid port: %s in URI: 
%s", args[1], uri);
+   return -1;
+   }
+   dest_uri = strrchr(uri, ':');
+   i_warning("using TCP URI: %s with %d args", uri, argc);
+   } else if (uri[0] == ':') 

Replication and public folders with private (seen) flags

2017-08-03 Thread Ralf Becker
We started using Dovecot replication between two nodes and noticed that
our configured private flags (INDEXPVT) in public/shared mailboxes are
not replicated. We only replicate the INBOX namespace, as we don't want
to replicate the content of shared mailboxes again for every user.

Is there a way to replicate the INDEXPVT or is that not (yet) implemented?

Dovecot versions:

# 2.2.31 (65cde28): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.19 (e5c7051)
# OS: Linux 4.4.0-87-generic x86_64

Using following namespaces:

namespace inboxes {
  inbox = yes
  location =
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Junk {
auto = subscribe
special_use = \Junk
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Templates {
auto = subscribe
  }
  mailbox Trash {
auto = subscribe
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
}
namespace subs {
  hidden = yes
  list = no
  location =
  prefix =
  separator = /
}
namespace users {
  location = mdbox:%%h/mdbox:INDEXPVT=~/shared/%%u
  prefix = user/%%n/
  separator = /
  subscriptions = no
  type = shared
}

And the following replication config:

root@ka-nfs-mail:~# cat /etc/dovecot/replication.conf
service aggregator {
  fifo_listener replication-notify-fifo {
user = dovecot
  }
  unix_listener replication-notify {
user = dovecot
  }
}

service replicator {
  process_min_avail = 1
  unix_listener replicator-doveadm {
group = dovecot
user = dovecot
mode = 0660
  }
}

service doveadm {
  inet_listener {
port = 12345
#ssl = yes
  }
}

doveadm_port = 12345
doveadm_password = ***

plugin {
  #mail_replica = tcp:10.44.99.1 # use doveadm_port
  mail_replica = tcp:10.44.88.1 # use doveadm_port
}

replication_dsync_parameters = -d -n INBOX -l 30 -U







Re: [Dovecot] Different PROXY for IMAP and POP3 using LDAP-auth

2009-12-13 Thread Ralf Becker
--
For all non-German-speaking people:

Oliver asked:
In an old posting I've read something about different proxy hosts for
IMAP and POP3.
http://www.dovecot.org/list/dovecot/2008-July/031885.html

I've got the same problem and want to ask you, if there is a patch for
replacing [variable names in] pass_attrs.
---

Hello Oliver,

I've attached the patch I'm using. It works in all 1.1.x versions.

Regards, Ralf


Oliver Eales wrote on 13.12.2009 01:40:
> Hello,
> 
> I had found a somewhat older posting of yours on the Dovecot mailing
> list about different proxy hosts for IMAP and POP3.
> http://www.dovecot.org/list/dovecot/2008-July/031885.html
> 
> I have the same problem and wanted to ask whether you have perhaps
> implemented a patch for replacing the pass_attrs.
> 
> Thanks and best regards,
> Oliver Eales

-- 
______

 Dipl.-Inform. (FH) Ralf Becker Rechenzentrum (r/ft) der FH Trier
 (Network|Mail|Web|Firewall)   University of applied sciences
 Administrator   Schneidershof, D-54293 Trier

   Mail: beck...@fh-trier.deFon: +49 651 8103 499
Web: http://www.fh-trier.de/~beckerrFax: +49 651 8103 214
 PubKey: http://www.fh-trier.de/~beckerr Crypto: GnuPG, S/MIME
__

 Wenn Gott gewollt haette, dass E-Mail in HTML geschrieben wuerden,
 endeten Gebete traditionell mit . (Tom Listen)


dovecot-ldap-attribute-templates.patch.gz
Description: GNU Zip compressed data


Re: [Dovecot] testing needed (AIX) with "failed: 2. offset=2"

2009-10-20 Thread Ralf Becker
Testing all again multiple times

System      | Kernel | x out of 100 failed with
            | x bit  | "failed: 2. offset=2"
------------+--------+--------------------------
AIX 5300-08 |   64   |   0 (!)
AIX 5300-08 |   32   |  69
AIX 5300-03 |   64   |  81
AIX 5200-06 |   64   |   9
AIX 5100-00 |   32   |   1
AIX 4330-10 |   32   |  59
AIX 4330-11 |   32   |  61

This looks a little bit random, doesn't it?


Ralf

Timo Sirainen wrote on 20.10.2009 10:36:
> That's weird.. Did you run it a couple of times on the failed ones?
> 
> On Oct 20, 2009, at 4:31 AM, Ralf Becker wrote:
> 
>> Success on
>>
>> AIX 5300-08
>> AIX 5300-06
>> AIX 5200-06
>> AIX 5100-00
>> AIX 4330-11
>>
>> Failed on
>>
>> AIX 5300-03failed: 2. offset=2
>> AIX 4330-10failed: 2. offset=2
>>
>>
>> Timo Sirainen wrote on 19.10.2009 23:55:
>>> Can someone find an OS where the attached program doesn't work? It
>>> should print "success". So far tested for success: Linux 2.6, Solaris
>>> 10, FreeBSD 7.2, OpenBSD 4.2.
>>>
>>
>>
> 






Re: [Dovecot] testing needed (AIX) with "failed: 2. offset=2"

2009-10-20 Thread Ralf Becker
Success on

AIX 5300-08
AIX 5300-06
AIX 5200-06
AIX 5100-00
AIX 4330-11

Failed on

AIX 5300-03 failed: 2. offset=2
AIX 4330-10 failed: 2. offset=2


Timo Sirainen wrote on 19.10.2009 23:55:
> Can someone find an OS where the attached program doesn't work? It
> should print "success". So far tested for success: Linux 2.6, Solaris
> 10, FreeBSD 7.2, OpenBSD 4.2.
> 






Re: [Dovecot] testing needed (Cygwin)

2009-10-20 Thread Ralf Becker
Success on

CYGWIN_NT-5.1 XX 1.5.25(0.156/4/2) 2008-06-12 19:34 i686 Cygwin


Timo Sirainen wrote on 19.10.2009 23:55:
> Can someone find an OS where the attached program doesn't work? It
> should print "success". So far tested for success: Linux 2.6, Solaris
> 10, FreeBSD 7.2, OpenBSD 4.2.
> 






Re: [Dovecot] Segfault in quota-fs plugin

2009-09-23 Thread Ralf Becker
Hi Brandon,

Since our patches are very similar, they should be a good way to fix
this issue. Mine has been working for seven days now without any harm.

See: http://markmail.org/message/p736w5f4dmamfnom

Ralf

Brandon Davidson wrote on 23.09.2009 03:41:
> Hi all,
> 
> We recently attempted to update our Dovecot installation to version
> 1.2.5. After doing so, we noticed a constant stream of crash messages in
> our log file:
> 
> Sep 22 15:58:41 hostname dovecot: imap-login: Login: user=,
> method=PLAIN, rip=X.X.X.X, lip=X.X.X.X, TLS
> Sep 22 15:58:41 hostname dovecot: dovecot: child 6339 (imap) killed with
> signal 11 (core dumps disabled)
> 
> We rolled back to version 1.2.4, and installed 1.2.5 on a test system -
> something we'll have to make sure to do *before* rolling new versions
> into production.
> 
> Anyway, after examining a few core files from the test system, it looks
> like the recent changes to the quota plugin (specifically the maildir
> backend's late initialization fix) have broken the other backends. Stack
> trace and further debugging are available here:
> http://uoregon.edu/~brandond/dovecot-1.2.5/bt.txt
> 
> The relevant code seems to have been added in changeset 9380:
> http://hg.dovecot.org/dovecot-1.2/rev/fe063e0d7109
> 
> Specifically, quota.c line 447 does not check to see if the backend
> implements init_limits before calling it, resulting in a null function
> call for all backends that do not do so. Since this crash would appear
> to affect all quota backends other than maildir, it should be a pretty
> easy to reproduce.
> 
> I've attached a patch which seems to fix the obvious code issue. I can't
> guarantee it's the correct fix since this is my first poke at the
> Dovecot source, but it seems to have stopped the crashing on our test
> host.
> 
> Regards,
> 
> -Brandon





[Dovecot] 1.2.5: quota.c + quota_fs.c crashs dovecot

2009-09-15 Thread Ralf Becker
Hi Timo,

"quota.c" calls "root->backend.v.init_limits(root)", but "quota_fs.c"
initializes this with NULL. On AIX e.g. this results in core dumping
imap with SIGILL. I've attached a (quick and dirty) patch.

Ralf
--- dovecot-1.2.5/src/plugins/quota/quota.c.org 2009-09-15 20:12:12.0 
+0200
+++ dovecot-1.2.5/src/plugins/quota/quota.c 2009-09-15 20:12:27.0 
+0200
@@ -444,7 +444,7 @@
bool found;
 
if (!root->set->force_default_rule) {
-   if (root->backend.v.init_limits(root) < 0)
+   if (root->backend.v.init_limits && 
root->backend.v.init_limits(root) < 0)
return -1;
}
 




Re: [Dovecot] script to report quota upon user's request

2009-08-20 Thread Ralf Becker
Hi Dimitrios,

I use (and like) this:

https://addons.mozilla.org/de/thunderbird/addon/881


Ralf

Δημήτριος Καραπιπέρης wrote on 20.08.2009 08:45:
> Hi there
> is there any binary/script to report user's quota upon request?
> 
> 
> 
> Dimitrios Karapiperis
> 






[Dovecot] AIX and posix_fallocate

2009-07-30 Thread Ralf Becker
Hi,

AIX's implementation of posix_fallocate is a little bit, let me say,
peculiar. Attached is a patch to "fix" (=work around) this.

Without it, you'll see this in the logs:

Jul 28 01:17:41 trevi mail:err|error dovecot: IMAP(beckerr):
posix_fallocate() failed: File exists
Jul 28 01:17:41 trevi mail:err|error dovecot: IMAP(beckerr):
file_set_size() failed with mbox file
/u/f0/rzuser/beckerr/Mail/Ham: File exists

Funny, isn't it?

This is what it should be:

Jul 28 01:17:41 trevi mail:err|error dovecot: IMAP(beckerr):
posix_fallocate() failed: Operation not supported on socket
Jul 28 01:17:41 trevi mail:err|error dovecot: IMAP(beckerr):
file_set_size() failed with mbox file
/u/f0/rzuser/beckerr/Mail/Ham: Operation not supported on socket

The problem is that errno is not correctly set when posix_fallocate
returns EOPNOTSUPP (= "Operation not supported on socket"). In this
case the return code has to be checked rather than errno.

When patched, dovecot handles err==EOPNOTSUPP the same way as
errno==EINVAL on Solaris.


A note for all AIX Admins:
Without APAR
  IZ48778: POSIX_FALLOCATE() FAILS WITH ERROR-25(ENOTTY) resp.
  IZ46961: POSIX_FALLOCATE() FAILS WITH ERROR-25(ENOTTY)
   APPLIES TO AIX 5300-06
you don't even get EOPNOTSUPP: posix_fallocate fails with ENOTTY.
So you have to install one of these fixes to make the patch work.

Ralf

--- ./lib/file-set-size.c.org   2009-07-06 12:56:17.0 +0200
+++ ./lib/file-set-size.c   2009-07-06 12:54:40.0 +0200
@@ -43,9 +43,15 @@
 
 #ifdef HAVE_POSIX_FALLOCATE
if (posix_fallocate_supported) {
-   if (posix_fallocate(fd, st.st_size, size - st.st_size) == 0)
+   int err;
+   if ((err = posix_fallocate(fd, st.st_size, size - st.st_size)) 
== 0)
return 0;
 
+   if (err == EOPNOTSUPP /* AIX */ ) {
+   /* Ignore this error silently.
+  You have to test err, because errno is not
+  correcly set on some versions of AIX */
+   } else
if (errno != EINVAL /* Solaris */) {
if (!ENOSPACE(errno))
i_error("posix_fallocate() failed: %m");




Re: [Dovecot] Unable to (un)subscribe mbox with AIX, NFS and netapp filer

2009-07-29 Thread Ralf Becker
Hi Timo,

today I found this in the logs again:

Jul 29 10:38:27 trevi mail:err|error dovecot: IMAP(beckerr):
 fchown(/u/f0/rzuser/beckerr/Mail/.subscriptions.lock, -1, -1)
 failed: Invalid argument

Jul 29 10:38:27 trevi mail:err|error dovecot: IMAP(beckerr):
 file_dotlock_open() failed with subscription file
 /u/f0/rzuser/beckerr/Mail/.subscriptions: Invalid argument


I located the bug in src/lib/file-dotlock.c ... a patch is attached.


Ralf

Timo Sirainen wrote on 07.07.2009 18:40:
> On Tue, 2009-07-07 at 17:58 +0200, Axel Luttgens wrote:
>>> Is my understanding of these sentences correct?
>>> "If owner and group are -1, nothing is done?"
>>>
>>> In this case it should be save to skip the call, shouldn't it?
>> Yes, I guess so.
> 
> Yes. Committed: http://hg.dovecot.org/dovecot-1.2/rev/d6337be8ae30
> 
>> Unless the rationale for that call is to ensure a correct cache  
>> flushing for NFS clients, while being some kind of (costly) no-op  
>> otherwise?
> 
> In that case I would have used those nfs_flush_*() functions.
> 

--- dovecot-1.2.2/src/lib/file-dotlock.c.org2009-07-29 10:44:21.0 
+0200
+++ dovecot-1.2.2/src/lib/file-dotlock.c2009-07-29 10:44:42.0 
+0200
@@ -780,7 +780,7 @@
fd = file_dotlock_open(set, path, flags, &dotlock);
umask(old_mask);
 
-   if (fd != -1 && (uid != (uid)-1 || gid != (gid_t)-1)) {
+   if (fd != -1 && (uid != (uid_t)-1 || gid != (gid_t)-1)) {
if (fchown(fd, uid, gid) < 0) {
if (errno == EPERM && uid == (uid_t)-1) {
i_error("%s", eperm_error_get_chgrp("fchown",




Re: [Dovecot] v1.2.2 released

2009-07-27 Thread Ralf Becker

Timo Sirainen wrote on 27.07.2009 08:54:
> On Mon, 2009-07-27 at 08:49 +0200, Ralf Becker wrote:
>> "settings.c", line 132.31: 1506-045 (S) Undeclared identifier GLOB_BRACE.
>>
>> This is because AIX glob doesn't support GLOB_BRACE :-|
> 
> Should have known.. Solaris is also broken, but BSDs are fine, so
> hopefully not too many people will complain. :)
> http://hg.dovecot.org/dovecot-1.2/rev/47449880c0b4 helps.
> 

Thanks. This works. :-)

Ralf





Re: [Dovecot] v1.2.2 released

2009-07-26 Thread Ralf Becker
Hi Timo,

building dovecot 1.2.2 on AIX (5.3) fails with the new "!include" feature:

"settings.c", line 132.31: 1506-045 (S) Undeclared identifier GLOB_BRACE.

This is because AIX glob doesn't support GLOB_BRACE :-|

https://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.basetechref/doc/basetrf1/glob.htm


Ralf




Re: [Dovecot] expire plugin with 1.2 cronjob undefined symbol: capability_string

2009-07-08 Thread Ralf Becker
Thanks, that's it!
I've added it to the dovecot wiki:
   http://wiki.dovecot.org/Plugins/Expire


By the way:

You are using:

  #!/bin/bash
  MAIL_PLUGINS=${mail_plugins//imap_quota/}
  /usr/lib/dovecot/expire-tool $1

Since "mail_plugins" is not defined and therefore empty, it's the same
as:

  #!/bin/bash
  MAIL_PLUGINS=
  /usr/lib/dovecot/expire-tool $1

so you are not removing just imap_quota, you are removing _all_
plugins. Just for correctness :-)


Ralf


Robert Schetterer wrote on 08.07.2009 11:57:
> 
> Hm, I've read
> 
> http://dovecot.org/pipermail/dovecot/2009-June/040126.html
> 
> and used this script (the only difference from the original was the
> case-sensitive mail_plugins)
> 
> #!/bin/bash
> MAIL_PLUGINS=${mail_plugins//imap_quota/}
> /usr/lib/dovecot/expire-tool $1
> 
> now it runs
> dovecot -c /etc/dovecot/dovecot.conf --exec-mail ext
> /usr/sbin/expire-tool.sh --test
> Info: Loading modules from directory: /usr/lib/dovecot/modules/imap
> Info: rob...@schetterer.com/Trash: stop, expire time in future: Wed Jul
> 15 10:34:39 2009
> 
> but i still have to test if there is a real delete



Re: [Dovecot] rquota RPC with netapp filer

2009-07-07 Thread Ralf Becker
Hi Timo,

I've tested it and you're right. Disabling group quotas by adding
":user:" to all "quota" statements is quite enough to fix this. Thanks
for the hint.

Ralf

Timo Sirainen wrote on 07.07.2009 21:16:
> On Sat, 2009-07-04 at 09:14 +0200, Ralf Becker wrote:
>> Hello list,
>>
>> with dovecot 1.2 on AIX rquota RPC calls fail with
>>
>>   quota-fs: remote ext rquota call failed: RPC:
>>   1832-012 Program/version mismatch
> ..
>> Using the attached rquota.x fixes the problem. 
> 
> Are you sure that's needed?
> 
>> In addition you have to
>> disable group quota checking by adding "user" to your "quota"
>> definition. 
> 
> Isn't this enough to get rid of the first problem too?
> 
> Also this should remove the need for that too:
> http://hg.dovecot.org/dovecot-1.2/rev/534de78dbe84



Re: [Dovecot] Unable to (un)subscribe mbox with AIX, NFS and netapp filer

2009-07-06 Thread Ralf Becker
Hello Axel,

>> attached is a small tool to test fchown on a freshly created file:
>> <>
> 
> Damn... didn't go through... ;-)
> 
ok... maybe i missed it... let's do it inline:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int f = open(argv[1], O_CREAT|O_TRUNC, 0600); /* mode is required with O_CREAT */
	printf("fchown returns %i\n", fchown(f, -1L, -1L));
	if (errno) printf("errno=%i (%s)\n", errno, strerror(errno));
	close(f);
	unlink(argv[1]);
	return 0;
}


> As a result, I'm not sure whether entirely hiding the log message is a
> good idea; perhaps just change the logging level would be better, so
> that one keeps the ability to track possibly problematic file systems...

It's not just the log message. If you have a look at the entire
function, you'll see that it fails if fchown fails:

static int
file_dotlock_open_mode_full(<...>, uid_t uid, gid_t gid, <...>)
{
	<...>

	if (fd != -1) {
		if (fchown(fd, uid, gid) < 0) {
			if (errno == EPERM && uid == (uid_t)-1) {
				i_error("%s", eperm_error_get_chgrp("fchown",
					file_dotlock_get_lock_path(dotlock),
					gid, gid_origin));
			} else {
				i_error("fchown(%s, %ld, %ld) failed: %m",
					file_dotlock_get_lock_path(dotlock),
					(long)uid, (long)gid);
			}
			file_dotlock_delete(&dotlock);
			return -1;
		}
	}
	*dotlock_r = dotlock;
	return fd;
}

Since this function seems to create all dotlock files (not just the one
for the .subscriptions file), this means that on some NFS(4) file systems
dotlocking is actually not working.

The linux man page of chown(3) (in place of fchown(3)) says:

-8<
If owner or group is specified as (uid_t)-1 or (gid_t)-1,
respectively, the corresponding ID of the file shall not be changed.
If both owner and group are -1, the times need not be updated.
Upon successful completion, chown() shall mark for update the st_ctime
field of the file.
--->8--

Is my understanding of these sentences correct?
"If owner and group are -1, nothing is done?"

In this case it should be safe to skip the call, shouldn't it?

Ralf



Re: [Dovecot] Unable to (un)subscribe mbox with AIX, NFS and netapp filer

2009-07-06 Thread Ralf Becker


Frank Bonnet wrote:
> 
> Which Ontap version do you run on the filer ?
> 
NetApp Release 7.2.4: Fri Nov 16 00:34:57 PST 2007




Re: [Dovecot] Unable to (un)subscribe mbox with AIX, NFS and netapp filer

2009-07-06 Thread Ralf Becker
Hello Axel,

attached is a small tool to test fchown on a freshly created file:
<>

These are the tests I've made using this tool:

0) compile
   
   > xlc /tmp/t.c -o /tmp/t

   environment
   ---
   > mount | grep -E "(filer0|hd4)"
   /dev/hd4  /              jfs   Jun 11 23:25  rw,log=/dev/hd8
   filer0    /vol/mail  /net/var_mail  nfs3  Jun 11 23:26  rw,proto=tcp,port=2049,wsize=65534,rsize=65534
   filer0    /vol/home  /u/f0          nfs4  Jul 06 08:09  rw,proto=tcp,port=2049,vers=4,wsize=65534,rsize=65534

1) local filesystem => SUCCESS
   ---
   > /tmp/t /tmp/test.txt
   fchown returns 0

2) NFS3 filesystem => SUCCESS
   --
   > /tmp/t /net/var_mail/spool/test.txt
   fchown returns 0

3) NFS4 filesystem => ERROR
   
   > /tmp/t /u/f0/test.txt
   fchown returns -1
   errno=22 (Invalid argument)


So I should alter the subject to
 ... with AIX, NFS4 and netapp ...
:-)


Back to your question:

> Wouldn't it be worth to check what kind of entity gets created under
> your environment?

yes, and it's just a regular file

and, as suspected, the conclusion:

EINVAL is returned because the owner or group ID is not a value supported
by the implementation (of NFS4 on netapp filers?)

Ralf

Axel Luttgens wrote:
> According to the posix specification, fchown may return EINVAL when the
> owner or group ID is not a value supported by the implementation, or
> when the fildes argument refers to a pipe or socket or an fattach()-ed
> STREAM and the implementation disallows execution of fchown() on a pipe.
> 
> Wouldn't it be worth to check what kind of entity gets created under
> your environment?
> I ask because I wouldn't exclude without further investigations the
> possibility of encountering other side effects wrt files throughout the
> code.
> 



[Dovecot] Unable to (un)subscribe mbox with AIX, NFS and netapp filer

2009-07-06 Thread Ralf Becker
Hi Timo,

dovecot 1.2.0 is great! Faster and more stable on mboxes than 1.1.x by
far... good job :-)

Today I stumbled over a strange problem when I tried to subscribe an
existing Mailbox (mbox), which doesn't work.

Thunderbird IMAP log

9932[4124558]: 40f3498:imap.fh-trier.de:A:SendData:
11 subscribe "Mail/Archiv/1998"
9932[4124558]: ReadNextLine [stream=403f280 nb=108 needmore=0]
9932[4124558]: 40f3498:imap.fh-trier.de:A:CreateNewLineFromSocket:
11 NO [SERVERBUG] Internal error occurred.
Refer to server log for more information.
[2009-07-06 08:14:32]

Server log
--
Jul  6 08:14:32 trevi mail:info dovecot: IMAP(beckerr):
  Namespace Mail/: Using permissions from
  /u/f0/rzuser/beckerr/Mail: mode=0700 gid=-1

Jul  6 08:14:32 trevi mail:err|error dovecot: IMAP(beckerr):
  fchown(/u/f0/rzuser/beckerr/Mail/.subscriptions.lock, -1, -1)
  failed: Invalid argument

Jul  6 08:14:32 trevi mail:err|error dovecot: IMAP(beckerr):
  file_dotlock_open() failed with subscription file
  /u/f0/rzuser/beckerr/Mail/.subscriptions: Invalid argument




The error only appears on NFS mounted shares, and I'm not sure whether
AIX or netapp is the cause. So determining the real problem is
not easy, but fixing it is:

When uid and gid are both -1, the call can be suppressed, because
nothing is actually changed:

--- ./lib/file-dotlock.c.org2009-07-06 09:25:14.0 +0200
+++ ./lib/file-dotlock.c2009-07-06 09:24:48.0 +0200
@@ -780,7 +780,7 @@
fd = file_dotlock_open(set, path, flags, &dotlock);
umask(old_mask);

-   if (fd != -1) {
+   if (fd != -1 && (uid != -1 || gid != -1)) {
if (fchown(fd, uid, gid) < 0) {
if (errno == EPERM && uid == (uid_t)-1) {
i_error("%s", eperm_error_get_chgrp("fchown",




Ralf


[Dovecot] rquota RPC with netapp filer

2009-07-04 Thread Ralf Becker
Hello list,

with dovecot 1.2 on AIX rquota RPC calls fail with

  quota-fs: remote ext rquota call failed: RPC:
  1832-012 Program/version mismatch

when using a netapp filer.

This is what netapp says:

Bug ID        97288
Title         Support requested for rquota version 2
Bug Severity  5 - Suggestion
Bug Status    Closed
Product       Data ONTAP
Bug Type      NFS
Description   Linux has implemented its own extensions to the rquota
              protocol as a version 2 of that protocol.  ONTAP only
              supports v1 as that is the only version defined by
              Sun, the owner of that RPC program number.
Workaround    -
Fixed-In      This bug is not scheduled to be fixed; you may opt to
Version       open a technical support case if you would like to
              contact Network Appliance regarding the status of this
              bug. A complete list of releases where this bug is
              fixed is available here.


So this should be no AIX specific problem.

Using the attached rquota.x fixes the problem. In addition you have to
disable group quota checking by adding "user" to your "quota"
definition. Otherwise you'll see:

   quota-fs: rquota not compiled with group support


Regards, Ralf

/*
 *   COMPONENT_NAME: onchdrs
 *
 *   FUNCTIONS: none
 *
 *   ORIGINS: 24,27
 *
 *
 *   (C) COPYRIGHT International Business Machines Corp. 1988,1993
 *   All Rights Reserved
 *   Licensed Materials - Property of IBM
 *   US Government Users Restricted Rights - Use, duplication or
 *   disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
 */


/* 
 * Copyright (c) 1988 by Sun Microsystems, Inc.
 * (#) from SUN 1.4
 */

/*
 * Remote quota protocol
 * Requires unix authentication
 */

const RQ_PATHLEN = 1024;

struct getquota_args {
string gqa_pathp<RQ_PATHLEN>;   /* path to filesystem of interest */
int gqa_uid;/* inquire about quota for uid */
};

/*
 * remote quota structure
 */
struct rquota {
int rq_bsize;   /* block size for block counts */
bool rq_active; /* indicates whether quota is active */
unsigned int rq_bhardlimit; /* absolute limit on disk blks alloc */
unsigned int rq_bsoftlimit; /* preferred limit on disk blks */
unsigned int rq_curblocks;  /* current block count */
unsigned int rq_fhardlimit; /* absolute limit on allocated files */
unsigned int rq_fsoftlimit; /* preferred file limit */
unsigned int rq_curfiles;   /* current # allocated files */
unsigned int rq_btimeleft;  /* time left for excessive disk use */
unsigned int rq_ftimeleft;  /* time left for excessive files */
};  

enum gqr_status {
Q_OK = 1,   /* quota returned */
Q_NOQUOTA = 2,  /* noquota for uid */
Q_EPERM = 3 /* no permission to access quota */
};

union getquota_rslt switch (gqr_status status) {
case Q_OK:
rquota gqr_rquota;  /* valid if status == Q_OK */
case Q_NOQUOTA:
void;
case Q_EPERM:
void;
};

program RQUOTAPROG {
version RQUOTAVERS {
/*
 * Get all quotas
 */
getquota_rslt
RQUOTAPROC_GETQUOTA(getquota_args) = 1;

/*
 * Get active quotas only
 */
getquota_rslt
RQUOTAPROC_GETACTIVEQUOTA(getquota_args) = 2;
} = 1;
} = 100011;




Re: [Dovecot] Multiple quota roots with quota-fs backend

2008-07-21 Thread Ralf Becker


Timo Sirainen wrote on 21.07.2008 02:03:

But most people haven't needed it, so it would be unnecessary extra
checks for them. Wonder if it'd be better to just let them have the
extra check or add a configuration option..

I'm also a bit worried that if this check was always done it would break
some existing setup where the check doesn't work correctly for some
reason.



I see the point :-)



./src/plugins/quota/quota.c:

Looking over the new function 'quota_root_is_visible' lets me think
that Grandy Fu's original solution (*) should work then, because
'noenforcing' covers not only quota checking but also quota root
reporting. So 'noenforcing' is then just an alias for 'hidden'.

Right?

So the meaning will change from
  dovecot-1.1.1: 'noenforcing' => report, but don't enforce
to
  dovecot-1.1.2: 'noenforcing' => don't report and don't enforce

Or do I miss something?



-
(*)
 plugin {
   quota  = fs:INBOX:mount=/var/mail
   quota2  = fs:home:noenforcing:mount=/home/h1
 }

