Re: last_login plugin and $remote_ip

2023-04-19 Thread Alessio Cecchi via dovecot

Ciao Fabrizio,

On the backends, set login_trusted_networks to the proxies' addresses. This
way you'll get the clients' actual IP addresses logged instead of the
proxy's.


https://doc.dovecot.org/settings/core/#core_setting-login_trusted_networks
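For example, on each backend the setting is just the list of proxy addresses;
a sketch, assuming the proxies sit on 172.16.27.31-33 (replace with your real
ones):

login_trusted_networks = 172.16.27.31 172.16.27.32 172.16.27.33

After that, rip= in the backend logs is the real client IP, which is what
last_login will then store.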

On 19/04/23 09:18, Fabrizio Cuseo wrote:

Good morning.
I am planning a dovecot system with:
- 3 x glusterfs servers (with 2 volumes, 1 ssd for short term mail, and 1 with 
bigger hdd for long term archive mail)
- 1 x mysql server (another server with active replica will be added)
- 3 x mbox servers (with dovecot pop/imap/lmtp/sieve/postfix)
- 3 x dovecot proxy/directors for pop3/imap/smtp
- 4 x proxmox mail gateway for antispam/antivirus in front of smtp servers
- 1 x centralized syslog server

All have private IP addresses, and in front there is a firewall with HAProxy
providing high availability and load balancing.


My only problem now is with the last_login plugin; I have configured it on the
mailbox servers for POP3/IMAP, but the IP address that is written to MySQL is
the proxy/director address, not the real client IP address.
I got no results using real_remote_ip either.

Apr 19 09:14:31 mailbox-01 dovecot: pop3-login: Login: user=, 
method=PLAIN, rip=172.16.27.31, lip=172.16.27.21, mpid=19723, 
session=<42nHLKv5JsqsEBsf>
Apr 19 09:14:31 mailproxy-01 dovecot: pop3-login: 
proxy(usern...@domain.it,172.16.27.21:110): Started proxying to 172.16.27.21 (1.978 secs): 
user=, method=PLAIN, rip=212.66.96.188, lip=172.16.27.31, 
session=
Apr 19 09:14:34 mailbox-01 dovecot: 
pop3(usern...@domain.it)<19723><42nHLKv5JsqsEBsf>: Disconnected: Logged out 
top=0/0, retr=0/0, del=0/37, size=115779706
Apr 19 09:14:34 mailproxy-01 dovecot: pop3-login: 
proxy(usern...@domain.it,172.16.27.21:110): Disconnected by server (0s idle, in=45, 
out=82): user=, method=PLAIN, rip=212.66.96.188, 
lip=172.16.27.31, session=


In the DB I have last_ip = 172.16.27.31, not 212.66.96.188.

-


dovecot -n
# 2.3.16 (7e2e900c1a): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.16 (09c29328)
# OS: Linux 5.15.0-69-generic x86_64 Ubuntu 22.04.2 LTS
# Hostname: mailbox-01
auth_default_realm = .it
default_client_limit = 2500
dict {
   mysql = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
   sieve = mysql:/etc/dovecot/dict-sieve-sql.conf
   sql = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
}
disable_plaintext_auth = no
doveadm_api_key = # hidden, use -P to show it
first_valid_gid = 89
first_valid_uid = 89
imap_client_workarounds = tb-extra-mailbox-sep delay-newmail
login_greeting = Welcome to mail server
mail_fsync = always
mail_gid = 89
mail_location = mbox:~/mail:INBOX=/var/mail/%u
mail_plugins = quota
mail_privileged_group = mail
mail_uid = 89
mailbox_list_index_very_dirty_syncs = yes
mdbox_rotate_size = 128 M
mmap_disable = yes
namespace inbox {
   inbox = yes
   location =
   mailbox Drafts {
 special_use = \Drafts
   }
   mailbox Junk {
 special_use = \Junk
   }
   mailbox Sent {
 special_use = \Sent
   }
   mailbox "Sent Messages" {
 special_use = \Sent
   }
   mailbox Trash {
 special_use = \Trash
   }
   prefix =
   separator = .
}
passdb {
   driver = pam
}
passdb {
   args = /etc/dovecot/dovecot-sql.conf.ext
   driver = sql
}
plugin {
   last_login_dict = proxy::sql
   last_login_key = # hidden, use -P to show it
   last_login_precision = ms
   quota = count:User quota
   quota_clone_dict = proxy::mysql
   quota_grace = 50M
   quota_rule2 = Trash:storage=+100M
   quota_vsizes = yes
   quota_warning = storage=95%% quota-warning 95 %u
   quota_warning2 = storage=80%% quota-warning 80 %u
   sieve = dict:proxy::sieve;name=active
   sieve_extensions = +vacation-seconds
   sieve_vacation_default_period = 7d
   sieve_vacation_max_period = 30d
   sieve_vacation_min_period = 1h
}
pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
protocols = " imap lmtp pop3"
service dict {
   unix_listener dict {
 group = mail2023
 mode = 0660
 user = mail2023
   }
}
service doveadm {
   inet_listener {
 port = 2425
   }
   inet_listener http {
 port = 8080
   }
   unix_listener doveadm-server {
 user = mail2023
   }
}
service imap {
   process_limit = 1024
}
service lmtp {
   inet_listener lmtp {
 port = 24
   }
   unix_listener /var/spool/postfix/private/dovecot-lmtp {
 group = mail2023
 mode = 0666
 user = mail2023
   }
}
service pop3 {
   process_limit = 250
}
service quota-warning {
   executable = script /usr/local/bin/quota-warning.sh
   unix_listener quota-warning {
 mode = 0666
 user = mail2023
   }
   user = mail2023
}
service stats {
   unix_listener stats-reader {
 group = mail2023
 mode = 0660
 user = mail2023
   }
   unix_listener stats-writer {
 group = mail2023
 mode = 0660
 user = mail2023
   }
}
ssl_cert = 
--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice

Re: Maildir compression

2023-04-18 Thread Alessio Cecchi via dovecot

Hi,

The quota reported by Dovecot will be the same before/after compression,
because Dovecot calculates the quota based on the "original" size of the
message, which is (or should be) present in the name of the maildir file.


Also, the index files do not need to be updated. I did this several years
ago with a script I wrote and everything worked fine.
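To illustrate with a made-up filename (the S=/W= fields are the standard
maildir ones):

1681890000.M516419P3623.mailbox-01,S=2777,W=2832:2,S

S=2777 is the original message size in bytes, which is what the quota counts,
and W=2832 is the virtual (CRLF) size. Compression shrinks the file on disk
but leaves the filename, and therefore the reported quota, unchanged.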


Ciao

On 17/04/23 15:23, Jose David Bravo A wrote:

Hello,

I would like to compress the maildirs of some of our users; I'm
thinking of using this script:
https://github.com/George-NG/dovecot-maildir-compress/blob/master/dovecot-maildir-compress


My question is: how will Dovecot calculate the quota for that user
after the compression? Will it use the real size of the message file
after compression, or the plain size before compression?


Is it necessary to recreate the index of that maildir after the 
compression is made? If so, what command should I use?



Thank you!


Jose Bravo



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


iPhone Mail clients generate high Solr I/O

2023-03-20 Thread Alessio Cecchi

Hi,

I'm running a mail server with Dovecot 2.3.20 and Apache Solr as FTS 
backend.


It seems that iPhone/iPad Mail clients are generating IMAP searches,
especially on the Message-ID header, that increase our Solr I/O load,
especially during the night.


Are these kinds of queries normal? What are they?

You can find some logs below.

Dovecot logs:

Mar 13 21:06:14 dovecot: imap-login: Login: user=, 
method=PLAIN, rip=x.x.x.x, lip=z.z.z.z, mpid=11263, secured, 
session=
Mar 13 21:06:14 dovecot: imap(user@domain) session=: 
ID sent: name=iPhone Mail, version=20D67, os=iOS, os-version=16.3.1 (20D67)
Mar 13 21:06:14 dovecot: imap(user@domain) session=: 
Disconnected: Logged out in=207 out=1128 deleted=0 expunged=0 
autoexpunged=0 trashed=0 appended=0 hdr_count=0 hdr_bytes=0 body_count=0 
body_bytes=0


Solr logs:

2023-03-13 21:06:14.411 INFO  (qtp1671846437-1652) [   x:dovecot] 
o.a.s.c.S.Request [dovecot]  webapp=/solr path=/select 
params={q={!lucene+q.op%3DAND}hdr:+OR+hdr:<\!\%26\!AAAYAI\/Oo\%2bconzzmkcvmey7ywqjcgaaaep968yud4n1lmhwjxwgturqbaa%3d...@domain5.tld>+OR+hdr:+OR+hdr:+OR+hdr:<24e36a6e5bb2410c8090482b3e1eb...@domain8.tld>&fl=uid,score&sort=uid+asc&fq=%2Bbox:d0f433254ee1b161c82c2e1056c4+%2Buser:user@domain&rows=88&wt=xml} 
hits=0 status=0 QTime=275
2023-03-13 21:06:14.511 INFO  (qtp1671846437-1628) [   x:dovecot] 
o.a.s.c.S.Request [dovecot]  webapp=/solr path=/select 
params={q={!lucene+q.op%3DAND}hdr:+OR+hdr:+OR+hdr:<00eb01d8c8dc$caa8a580$5ff9f080$@domain1.tld>+OR+hdr:+OR+hdr:<00c701d8bb83$f307bc70$d9173550$@domain1.tld>&fl=uid,score&sort=uid+asc&fq=%2Bbox:d0f433254ee1b161c82c2e1056c4+%2Buser:user@domain&rows=88&wt=xml} 
hits=0 status=0 QTime=59
2023-03-13 21:06:14.714 INFO  (qtp1671846437-1652) [   x:dovecot] 
o.a.s.c.S.Request [dovecot]  webapp=/solr path=/select 
params={q={!lucene+q.op%3DAND}hdr:+OR+hdr:+OR+hdr:+OR+hdr:<\!\%26\!AAAYAI\/Oo\%2BConZZMkCvmEY7yWqjCgAAAEDl2MLfkWRxJjQW\%2b3dohzn0baa%3d...@domain5.tld>+OR+hdr:&fl=uid,score&sort=uid+asc&fq=%2Bbox:d0f433254ee1b161c82c2e1056c4+%2Buser:user@domain&rows=88&wt=xml} 
hits=0 status=0 QTime=178
2023-03-13 21:06:14.771 INFO  (qtp1671846437-1628) [   x:dovecot] 
o.a.s.c.S.Request [dovecot]  webapp=/solr path=/select 
params={q={!lucene+q.op%3DAND}hdr:+OR+hdr:+OR+hdr:&fl=uid,score&sort=uid+asc&fq=%2Bbox:d0f433254ee1b161c82c2e1056c4+%2Buser:user@domain&rows=88&wt=xml} 
hits=0 status=0 QTime=34
2023-03-13 21:06:14.899 INFO  (qtp1671846437-1652) [   x:dovecot] 
o.a.s.c.S.Request [dovecot]  webapp=/solr path=/select 
params={q={!lucene+q.op%3DAND}hdr:+OR+hdr:<02e5e16df27945b6a764f49a5d9e0...@domain8.tld>+OR+hdr:+OR+hdr:<4edb1720076ab71575067b0fc2b5d...@domain2.tld>+OR+hdr:<1657012847.68884...@domain6.tld>&fl=uid,score&sort=uid+asc&fq=%2Bbox:4830f5144ee1b161c82c2e1056c4+%2Buser:user@domain&rows=1&wt=xml} 
hits=0 status=0 QTime=46


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: how to setup timestamp

2023-03-08 Thread Alessio Cecchi

Hi,

I used delay_until in some migration procedures; here is a snippet from my
config:


# cat /etc/dovecot/extra-passdb
aless...@ciao.com:::delay_until=1490275466

passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
  result_success = continue-ok
}
passdb {
  args = /etc/dovecot/extra-passdb
  driver = passwd-file
  result_internalfail = return-fail
  skip = unauthenticated
}

The timestamp must be now + delay (max 5 minutes).
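For example, to generate a timestamp 3 minutes in the future with GNU date
(any epoch value inside the 5-minute window works):

date -d '+3 minutes' +%s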

Ciao

On 07/03/23 09:26, tomate aceite wrote:



Does anyone have an example of how to implement delay_until =
timestamp?

I am not sure where I can set it up.

Thanks!

Re: Error: Corrupted index cache file

2023-02-15 Thread Alessio Cecchi

Hi Sohin,

I don't remember how the problem with Ubuntu was solved, but in my
current setup (some hundreds of thousands of mailboxes), and in general
with Maildir and small files, I prefer to use NFSv3, which has fewer
locking "problems" since it is stateless. Also try removing all the
attribute-cache ("ac*") options from fstab; my fstab options for NFS
Maildir are very simple:


rw,nfsvers=3,noatime,nodiratime,_netdev,nordirplus
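A complete fstab entry with those options would look like this (a sketch; the
server name and paths are placeholders, not from my real setup):

nfs01:/export/mail  /home/vmail  nfs  rw,nfsvers=3,noatime,nodiratime,_netdev,nordirplus  0  0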

But I'm not using Linux as the NFS server. Let me know if you see any
improvements with my mount options.


Ciao

On 13/02/23 10:39, Sohin Vyacheslav wrote:



On 12.02.2023 21:36, Alessio Cecchi wrote:

I've run into this error in the past in some situations:

- one was during the migration from CentOS 6 to 7; probably the NFS
client in the kernel had some differences in cache management

- one was with Ubuntu and an NFS server in Google Cloud; I don't
remember exactly how I solved it in that case, but the problem was the
NFS server (maybe because it only supported version 4.1 and there were
locking issues)

- one was where I used Director but local delivery via LDA; I solved
it by switching to delivery via LMTP


What NFS server/storage are you using? And with what NFS version are
your Maildirs mounted?

We use NFS on Ubuntu
nfs-kernel-server 1:1.3.4-2.1ubuntu5.5
nfs-common 1:1.3.4-2.1ubuntu5.5

$ nfsstat | grep nfs
Server nfs v4:

Now Maildir mounted with these options:
type nfs4 
(rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,acregmin=1800,acregmax=1800,acdirmin=1800,acdirmax=1800,hard,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=IP-address,local_lock=none,addr=ip-address)



Are you using LDA or LMTP for delivery?

We use LMTP for delivery.


After what change did the problem start?

It seems the problem has existed for a long time.




Re: Error: Corrupted index cache file

2023-02-12 Thread Alessio Cecchi

Hi Sohin,

I've run into this error in the past in some situations:

- one was during the migration from CentOS 6 to 7; probably the NFS
client in the kernel had some differences in cache management

- one was with Ubuntu and an NFS server in Google Cloud; I don't
remember exactly how I solved it in that case, but the problem was the
NFS server (maybe because it only supported version 4.1 and there were
locking issues)

- one was where I used Director but local delivery via LDA; I solved
it by switching to delivery via LMTP


What NFS server/storage are you using? And with what NFS version are
your Maildirs mounted?


Are you using LDA or LMTP for delivery?

After what change did the problem start?

Ciao

On 10/02/23 16:15, Sohin Vyacheslav wrote:

Hi All,

In mail.log there are error messages "Error: Corrupted index cache
file
/data/mail/vhosts/domain.com/u...@domain.com/Maildir/dovecot.index.cache:
invalid record size" for some accounts.


I tried these steps to fix it:

1. Running commands
# doveadm -D -v index -u u...@domain.com INBOX
# doveadm -v force-resync -u u...@domain.com INBOX

After this I saw that the following files were updated in ../Maildir/:
dovecot.index.cache
dovecot.index.log
dovecot-uidlist

But after some time error messages occurred again.

2. Added the NFS mount option 'nordirplus' to /etc/fstab and remounted
the /data partition, and added 'mmap_disable = yes' and 'mail_fsync =
always' to 10-mail.conf as advised on
https://doc.dovecot.org/configuration_manual/nfs/ (because the /data
partition is mounted via NFS);


3. Renamed the 3 index files, both with and without stopping the Dovecot service first.

# mv dovecot.index old.dovecot.index
# mv dovecot.index.cache old.dovecot.index.cache
# mv dovecot.index.log old.dovecot.index.log

Unfortunately, none of these steps fixed the issue.
What is the proper way to fix corrupted index cache errors?


dovecot package: 1:2.2.33.2-1ubuntu4.8 (Ubuntu-18.04).


P.S. For some email accounts there are also error messages "Error:
Broken file ../Maildir/dovecot-uidlist line ###: Invalid data:"
besides the cache error mentioned above.




--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: NFS Config vs Ceph / GlusterFS

2022-04-06 Thread Alessio Cecchi

Hi,

I have about 100TB of mailboxes in Maildir format on NFS (NetApp FAS)
and it works very well, both for performance and stability.


The main problem with using Ceph or GlusterFS to store Maildir is the heavy
metadata load that Dovecot generates when checking for new messages and other
activity. On my storage/NFS the main part of the traffic and I/O is
metadata traffic on small files (a high-file-count workload).


And Ceph and GlusterFS are very inefficient with this kind of workload
(many metadata GETATTR/ACCESS/LOOKUP operations and a high number of small files).


Ciao

On 05/04/22 01:40, dove...@ptld.com wrote:

Do all of the configuration considerations pertaining to using NFS on

 https://doc.dovecot.org/configuration_manual/nfs/

equally apply to using something like Ceph / GlusterFS?


And, if people wouldn't mind chiming in: which of NFS, Ceph, and GlusterFS do
you feel is better for maildir mail storage on dedicated non-container servers?
Which is better for robustness / stability?
Which is better for speed / performance?


Thank you.


Is fts_autoindex_exclude options working fine?

2022-03-07 Thread Alessio Cecchi

Hi,

we have some trouble with Solr as FTS engine. Solr has spikes of
load and responds slowly when a user expunges or appends a message.
fts_autoindex is set to "yes".


So I added these options to 90-plugin.conf:

fts_autoindex_exclude = Drafts
fts_autoindex_exclude2 = Spam
fts_autoindex_exclude3 = Trash
fts_autoindex_exclude4 = Sent

or

fts_autoindex_exclude = \Drafts
fts_autoindex_exclude2 = \Junk
fts_autoindex_exclude3 = \Trash
fts_autoindex_exclude4 = \Sent

But the problem is still present.

Here are some example logs from when users hit timeouts in their email
client, especially when drafts are saved automatically:


Mar  7 12:21:37 pop01 dovecot: imap(us...@company1.com) 
session=: Warning: Maildir 
/home/vmail/domains/company1.com/user1/Maildir/.Drafts: Synchronization 
took 119 seconds (0 new msgs, 0 flag change attempts, 1 expunge attempts)
Mar  7 12:21:37 pop01 dovecot: imap(us...@company1.com) 
session=: Disconnected: Connection closed (UID EXPUNGE 
finished 118.715 secs ago) in=77617 out=679511 deleted=1 expunged=1 
autoexpunged=0 trashed=0 appended=2 hdr_count=0 hdr_bytes=0 body_count=0 
body_bytes=0


Mar  7 12:21:52 pop02 dovecot: imap(us...@company2.net) 
session=: Warning: Maildir 
/home/vmail/domains/company2.net/user2/Maildir/.Trash: Synchronization 
took 149 seconds (0 new msgs, 0 flag change attempts, 1 expunge attempts)
Mar  7 12:21:52 pop02 dovecot: imap(us...@company2.net) 
session=: Disconnected: Logged out in=2407 out=7025 
deleted=0 expunged=0 autoexpunged=1 trashed=0 appended=0 hdr_count=0 
hdr_bytes=0 body_count=0 body_bytes=0


Is my fts_autoindex_exclude wrong, or are there some limits with this option?

Here is the relevant part of 15-mailboxes.conf:

namespace inbox {
  inbox = yes
  location =
  mailbox Archive {
    auto = subscribe
    special_use = \Archive
  }
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Spam {
    auto = subscribe
    special_use = \Junk
  }
  mailbox Trash {
    auto = subscribe
    special_use = \Trash
  }
  mailbox virtual/All {
    comment = All my messages
    special_use = \All
  }
  prefix =
  separator = /
}

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Issue in 2.3.18 with fts_header_excludes/includes

2022-03-02 Thread Alessio Cecchi

Hi,

the problem was confirmed and fixed here:

https://github.com/dovecot/core/commit/b5653c9d095f302cf9674de263f873c8fc80446b

In 2.3.18 fts_header_excludes/includes is broken, so don't enable these options.

Thanks

On 15/02/22 12:40, Alessio Cecchi wrote:


Hi,

I'm still doing some tests with fts-flatcurve; after updating to 2.3.18
I enabled:

fts_header_excludes = *
fts_header_includes = From To Cc Bcc Subject Message-ID

but when I try to re-index all messages with FTS flatcurve, the
fts-flatcurve/ folder inside the Maildir/ becomes huge.

Taking as an example a Maildir with a cur folder of 1.1G, the related
fts-flatcurve folder went from 140M to 2.2G.

After removing the fts_header_excludes/includes config options and
re-indexing all messages, the fts-flatcurve/ size returns to normal.

Is this an issue?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Issue in 2.3.18 with fts_header_excludes/includes

2022-02-15 Thread Alessio Cecchi

Hi,

I'm still doing some tests with fts-flatcurve; after updating to 2.3.18
I enabled:

fts_header_excludes = *
fts_header_includes = From To Cc Bcc Subject Message-ID

but when I try to re-index all messages with FTS flatcurve, the
fts-flatcurve/ folder inside the Maildir/ becomes huge.

Taking as an example a Maildir with a cur folder of 1.1G, the related
fts-flatcurve folder went from 140M to 2.2G.

After removing the fts_header_excludes/includes config options and
re-indexing all messages, the fts-flatcurve/ size returns to normal.

Is this an issue?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Lucene support for FTS - EOL date.

2022-02-06 Thread Alessio Cecchi

On 06/02/22 18:16, Robert L Mathews wrote:

On 2/6/22 12:05 AM, Alessio Cecchi wrote:

I'm testing it and it is almost "ready for production".


Out of interest, why "almost"? Can you share what problems you've 
encountered with it?


Since it is not marked as "stable", you cannot know if something will be
changed with the next commit, and you must build it from git and rebuild
Dovecot from source in order to add some features (icu, stemmer, textcat)
that are not present in the official repo.


Moreover, in my tests it is not clear how big the "fts-flatcurve"
database directory will grow: with the latest Dovecot (2.3.18) and
fts-flatcurve the Xapian database is more than double the size of the
Maildir/, and I don't know why (I'm testing with an old Xapian 1.4; I
will try to upgrade it).


I found some issues during the switch from fts-solr to fts-flatcurve that
required deleting some dovecot.* files in order to restore correct
SEARCH results.


I have not yet worked out whether Virtual mailboxes work fine.

So it is ready for a small environment, where you can delete all the
Dovecot index files if something is unclear or not working, but not for a
medium/large environment where you need, for example, a well-defined
procedure for migrating from one FTS plugin to another.


But I'm still testing it and I hope to see a stable version as
soon as possible.


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Lucene support for FTS - EOL date.

2022-02-06 Thread Alessio Cecchi

On 05/02/22 22:33, Jacek Grabowski wrote:
We are talking about the large-scale commercial use of Dovecot +
Lucene as an FTS.

Migration to Solr is a challenge that will take some time.
I'm trying to find out how much time I have.
On the one hand I would like to use only the newest version of Dovecot,
but on the other hand I'm not able to switch FTS from Lucene to Solr
in the near future.


Solr, especially in a large environment, is a nightmare. If you use
Lucene, the "simplest" switch is to the new fts-flatcurve and Xapian:


https://github.com/slusarz/dovecot-fts-flatcurve

I'm testing it and it is almost "ready for production".

Ciao

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



FTS Tokenization filters normalizer-icu vs lowercase

2022-01-20 Thread Alessio Cecchi

Hi,

I'm trying to setup fts-flatcurve with tokenization.

What are the differences/benefits of "fts_filters = normalizer-icu" vs
"fts_filters = lowercase"?


Reading the docs, I found that normalizer-icu "is potentially very
resource intensive" and that lowercase "Supports UTF8, when compiled
with libicu".


So, is using lowercase almost the same as normalizer-icu, but faster?
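For reference, the two setups I'm comparing differ only in this plugin
setting; a minimal sketch (the snowball stemmer alongside is just an
illustration, not my full filter chain):

plugin {
  fts = flatcurve
  fts_filters = normalizer-icu snowball
  # vs.
  #fts_filters = lowercase snowball
}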

FYI

to use fts-flatcurve with the Dovecot RPM packages from repo.dovecot.org
you have to rebuild with --with-icu --with-stemmer --with-textcat and the
related libraries.


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: FTS flatcurve not index messages when SEARCH run on Virtual Mailboxes

2022-01-17 Thread Alessio Cecchi

On 11/01/22 15:27, Aki Tuomi wrote:

On 11/01/2022 15:21 Alessio Cecchi  wrote:


Hi,
I'm testing the FTS flatcurve plugin in order to understand if I can switch
from FTS Solr to flatcurve.

In my configuration I have enabled Virtual mailboxes, and to search in all
folders I just SEARCH the Virtual/All folder. If this (virtual) folder is not
indexed, with FTS Solr Dovecot starts to index it (or rather all the real
folders).

But with FTS flatcurve, when I SEARCH Virtual/All for the first time the
indexer process does not start and the search returns empty. Only if I
manually run "doveadm index -q -u ales...@email.net '*'" does flatcurve find
messages.

Can flatcurve have the same feature as Solr for Virtual mailboxes?

Here is a sample of my configuration:

namespace Virtual {
  hidden = yes
  list = no
  location = virtual:/etc/dovecot/virtual:INDEX=~/Maildir/virtual
  prefix = Virtual/
  separator = /
  subscriptions = no
}

namespace inbox {
[...]
  mailbox virtual/All {
    comment = All my messages
    special_use = \All
  }
}

# cat /etc/dovecot/virtual/All/dovecot-virtual
*
  all
  


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice

Hi!

plugin {
   fts_autoindex = yes
   fts_enforced = yes
}

probably fixes your issue.


Hi,

I have probably found where the issue is.

My Dovecot is configured with fts=solr and my mailbox is already indexed
with Solr. For testing flatcurve I just changed the Dovecot config from
fts=solr to fts=flatcurve and ran "doveadm fts rescan -u EMAIL".


Now, after "fts rescan" on my account if I SEARCH on a standard mailbox 
folder, flatcurve index is updated, if I SEARCH on Virtual/All flatcurve 
index is not updated.


If I test SEARCH with flatcurve on a newly created mailbox account that was
never indexed with Solr (or if I delete all the dovecot.* files on my mailbox
account previously indexed with Solr), then a SEARCH in Virtual/All also
updates the flatcurve index.


So my question is: is there a specific command to switch from one FTS
plugin to another, or is there a bug in this procedure?


Thanks

--

Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: FTS flatcurve not index messages when SEARCH run on Virtual Mailboxes

2022-01-11 Thread Alessio Cecchi

On 11/01/22 15:27, Aki Tuomi wrote:

On 11/01/2022 15:21 Alessio Cecchi  wrote:


Hi,
I'm testing the FTS flatcurve plugin in order to understand if I can switch
from FTS Solr to flatcurve.

In my configuration I have enabled Virtual mailboxes, and to search in all
folders I just SEARCH the Virtual/All folder. If this (virtual) folder is not
indexed, with FTS Solr Dovecot starts to index it (or rather all the real
folders).

But with FTS flatcurve, when I SEARCH Virtual/All for the first time the
indexer process does not start and the search returns empty. Only if I
manually run "doveadm index -q -u ales...@email.net '*'" does flatcurve find
messages.

Can flatcurve have the same feature as Solr for Virtual mailboxes?

Here is a sample of my configuration:

namespace Virtual {
  hidden = yes
  list = no
  location = virtual:/etc/dovecot/virtual:INDEX=~/Maildir/virtual
  prefix = Virtual/
  separator = /
  subscriptions = no
}

namespace inbox {
[...]
  mailbox virtual/All {
    comment = All my messages
    special_use = \All
  }
}

# cat /etc/dovecot/virtual/All/dovecot-virtual
*
  all
  


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice

Hi!

plugin {
   fts_autoindex = yes
   fts_enforced = yes
}

probably fixes your issue.

Aki


Hi Aki,

in 90-plugin.conf I have already:

    fts = flatcurve
    fts_autoindex = yes
    fts_enforced = yes (with fts_solr I'm using "body" instead, and it
works fine)

but only when I SEARCH in a specific mailbox does the indexer-worker start
to index messages.


When I SEARCH in Virtual/All, results are empty or contain only messages
from previously indexed mailboxes. I also notice that when I SEARCH in
Virtual/All, the "fts-flatcurve" directory is created but remains nearly empty:


# du -sh .Sent/fts-flatcurve/
20K    .Sent/fts-flatcurve/

And only when index/SEARCH runs directly on this folder is fts-flatcurve
populated:


# du -sh .Sent/fts-flatcurve/
27M    .Sent/fts-flatcurve/

I hope that fts-flatcurve will follow the same behavior as fts-solr
with respect to these Dovecot settings.


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



FTS flatcurve not index messages when SEARCH run on Virtual Mailboxes

2022-01-11 Thread Alessio Cecchi

Hi,

I'm testing the FTS flatcurve plugin in order to understand if I can switch
from FTS Solr to flatcurve.


In my configuration I have enabled Virtual mailboxes, and to search in
all folders I just SEARCH the Virtual/All folder. If this (virtual) folder
is not indexed, with FTS Solr Dovecot starts to index it (or rather all the
real folders).


But with FTS flatcurve, when I SEARCH Virtual/All for the first time
the indexer process does not start and the search returns empty. Only if
I manually run "doveadm index -q -u ales...@email.net '*'" does flatcurve
find messages.


Can flatcurve have the same feature as Solr for Virtual mailboxes?

Here is a sample of my configuration:

namespace Virtual {
  hidden = yes
  list = no
  location = virtual:/etc/dovecot/virtual:INDEX=~/Maildir/virtual
  prefix = Virtual/
  separator = /
  subscriptions = no
}

namespace inbox {
[...]
  mailbox virtual/All {
    comment = All my messages
    special_use = \All
  }
}

# cat /etc/dovecot/virtual/All/dovecot-virtual
*
  all

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Can dovecot be leveraged to exploit Solr/Log4shell?

2021-12-14 Thread Alessio Cecchi

Hi,

for Solr you can edit your solr.in.sh file to include:

SOLR_OPTS="$SOLR_OPTS -Dlog4j2.formatMsgNoLookups=true"

and this should be enough to prevent this vulnerability.
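With the standard service installation that means something like this (a
sketch; /etc/default/solr.in.sh is where the installer script keeps that
file, adjust if yours differs):

echo 'SOLR_OPTS="$SOLR_OPTS -Dlog4j2.formatMsgNoLookups=true"' >> /etc/default/solr.in.sh
systemctl restart solr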

Ciao

On 13/12/21 23:43, Joseph Tam wrote:


I'm surprised I haven't seen this mentioned yet.

An internet red alert went out Friday on a new zero-day exploit. It is an
input validation problem where Java's Log4j module can be instructed via
a specially crafted string to fetch and execute code from a remote LDAP
server.  It has been designated the Log4shell exploit (CVE-2021-44228).

Although I don't use it, I immediately thought of Solr, which provides
some dovecot installations with search indexing.  Can dovecot be made
to pass on arbitrary loggable strings to affected versions of Solr
(7.4.0-7.7.3, 8.0.0-8.11.0)?

Those running Solr to implement Dovecot FTS should look at

https://solr.apache.org/security.html#apache-solr-affected-by-apache-log4j-cve-2021-44228 



Joseph Tam 


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Doveadm fetch slow and 100%CPU with a specific message-id

2021-10-25 Thread Alessio Cecchi

On 25/10/21 21:33, Alessio Cecchi wrote:

On 25/10/21 19:05, Aki Tuomi wrote:

On 25/10/2021 19:40 Alessio Cecchi  wrote:


Hi,
I'm using doveadm fetch in order to find the mailbox where a
message is stored:
doveadm fetch -u ales...@domain.com "mailbox" HEADER Message-ID
'1...@domain.com'
If the message-id is longer than ... I don't know what ... the
lookup is very, very slow. Here is an example:
with message-id 9c102380c557e7e146a33cb4b49ab...@cbweb.cecchi.net
response time: 3 secs
with message-id
kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com response
time: 80 secs, and java/solr uses 100% CPU

Both messages are in the same folder (Trash)

If I add -D to doveadm, it gets stuck for some seconds every time it
connects to Solr:


Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: 
queue http://127.0.0.1:8983: Connection to peer 127.0.0.1:8983 
claimed request [Req1: GET 
http://127.0.0.1:8983/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:ales...@domain.com]
  Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: 
conn 127.0.0.1:8983 [1]: Claimed request [Req1: GET 
http://127.0.0.1:8983/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:ales...@domain.com]
  Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: 
request [Req1: GET 
http://127.0.0.1/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:alessio@domain...: 
Sent header
  Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: 
peer 127.0.0.1:8983: No more requests to service for this peer (1 
connections exist, 0 pending)

[...]
Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: 
queue http://127.0.0.1:8983

My dovecot version is 2.3.16 and Solr 7.7.

Why?
Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


I wasn't able to reproduce this issue locally. Maybe your solr config 
has issues?


Aki


Yes, it could be my Solr setup. I'll investigate.

Is there a way to disable fts/solr when using doveadm? Like -o 
"plugin/fts="


Yes, with:

doveadm -o "plugin/fts=" fetch -u alessio@...

the time needed to look up the message is now around 3 seconds for all queries.

But is it possible to use -o "plugin/fts=" with the doveadm HTTP API?

Because I run doveadm via the HTTP API with curl.
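For context, this is the kind of call I mean. A sketch only: the exact JSON
parameter names for fetch are my guess from how the CLI arguments map onto
the HTTP API, and the address, password and message-id are placeholders:

curl -s -u doveadm:SECRET http://127.0.0.1:8080/doveadm/v1 \
  -H 'Content-Type: application/json' \
  -d '[["fetch", {"user": "alessio@example.com", "field": ["mailbox"], "query": ["HEADER", "Message-ID", "1234@example.com"]}, "tag1"]]'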

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Doveadm fetch slow and 100%CPU with a specific message-id

2021-10-25 Thread Alessio Cecchi

On 25/10/21 19:05, Aki Tuomi wrote:

On 25/10/2021 19:40 Alessio Cecchi  wrote:


Hi,
I'm using doveadm fetch in order to find the mailbox where a message is stored:
doveadm fetch -u ales...@domain.com "mailbox" HEADER Message-ID
'1...@domain.com'
If the message-id is longer than ... I don't know what ... the lookup is very,
very slow. Here is an example:
with message-id 9c102380c557e7e146a33cb4b49ab...@cbweb.cecchi.net response time:
3 secs
with message-id kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com
response time: 80 secs, and java/solr uses 100% CPU
Both messages are in the same folder (Trash)

If I add -D to doveadm, it gets stuck for some seconds every time it connects to Solr:

Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: queue http://127.0.0.1:8983: 
Connection to peer 127.0.0.1:8983 claimed request [Req1: GET 
http://127.0.0.1:8983/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:ales...@domain.com]
  Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: conn 127.0.0.1:8983 [1]: Claimed 
request [Req1: GET 
http://127.0.0.1:8983/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:ales...@domain.com]
  Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: request [Req1: GET 
http://127.0.0.1/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:alessio@domain...:
 Sent header
  Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: peer 
127.0.0.1:8983: No more requests to service for this peer (1 connections exist, 
0 pending)
[...]
Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: queue 
http://127.0.0.1:8983
My dovecot version is 2.3.16 and Solr 7.7.

Why?
Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


I wasn't able to reproduce this issue locally. Maybe your solr config has 
issues?

Aki


Yes, it could be my Solr setup. I'll investigate.

Is there a way to disable fts/solr when using doveadm? Like -o "plugin/fts="

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Doveadm fetch slow and 100%CPU with a specific message-id

2021-10-25 Thread Alessio Cecchi

Hi,

I'm using doveadm fetch in order to find the mailbox where a message is
stored:


doveadm fetch -u ales...@domain.com "mailbox" HEADER Message-ID 
'1...@domain.com'


If the message-id is longer than ... I don't know what ... the lookup
is very, very slow. Here is an example:


with message-id 9c102380c557e7e146a33cb4b49ab...@cbweb.cecchi.net
response time: 3 secs


with message-id
kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com response
time: 80 secs, and java/solr uses 100% CPU


Both messages are in the same folder (Trash)

If I add -D to doveadm, it gets stuck for some seconds every time it connects to Solr:

Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: queue 
http://127.0.0.1:8983: Connection to peer 127.0.0.1:8983 claimed request 
[Req1: GET 
http://127.0.0.1:8983/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:ales...@domain.com] 

Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: conn 
127.0.0.1:8983 [1]: Claimed request [Req1: GET 
http://127.0.0.1:8983/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:ales...@domain.com]
Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: request 
[Req1: GET 
http://127.0.0.1/solr/dovecot/select?wt=xml&fl=uid,score&rows=26&sort=uid+asc&q=%7b!lucene+q.op%3dAND%7dhdr:kz1zoaa8qnsfz64hte9p3k0oojl24xtq7vumb3q...@www.myxmail.com&fq=%2Bbox:13c0e32ee6430860201fc5b62527+%2Buser:alessio@domain...: 
Sent header
Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: peer 
127.0.0.1:8983: No more requests to service for this peer (1 connections 
exist, 0 pending)


[...]

Oct 25 18:30:08 doveadm(ales...@domain.com): Debug: http-client: queue 
http://127.0.0.1:8983


My dovecot version is 2.3.16 and Solr 7.7.

Why?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



local_name in dovecot for SNI and wildcard

2021-10-20 Thread Alessio Cecchi

Hi,

in the Dovecot configuration, can the "local_name" option support wildcard
domain names (e.g. *.mailserver.com)?


This is because we have a wildcard SSL certificate and I prefer to
specify a wildcard name instead of individual names.


Is this fine?

local_name *.mailserver.com {
  ssl_cert = 

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Broken uidlist when using NFS on newer kernels

2021-10-12 Thread Alessio Cecchi

Hi Jeremy,

I had the same problem as you.

We run an email hosting service with Maildir on NetApp NFS, Dovecot
Director and backend servers for POP/IMAP, with messages delivered via
dovecot-lda by the MXs. After the upgrade from CentOS 6 to CentOS 7 I found
the same issue as you (on dovecot-uidlist).


After many tests we decided to switch from LDA to LMTP, which was already
on our roadmap, so that reading and delivery of messages always happen on the
same backend. And the problem was solved.


I haven't found any other workarounds.

Switching from LDA to LMTP was not so simple for us, since our MXs weren't
able to speak LMTP, but we wrote some custom C++ code and it was done. You
should also consider adding some directors, since incoming email will also
transit through them (see the sketch below).
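For MXs that already speak LMTP (Postfix, for instance) the switch is only a
transport change; a minimal sketch, assuming Postfix and a director listening
on the LMTP port 24 as in many Dovecot setups (the hostname is a placeholder):

# /etc/postfix/main.cf
virtual_transport = lmtp:inet:dovecot-director.example.com:24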


If you would like to talk about how we solved it on the MX side, I'll be
happy to talk with you.


Ciao

On 08/10/21 21:01, Jeremy Hanmer wrote:
I know this has been reported in the past, but I think I have some 
useful new information on the problem. After an OS upgrade from Ubuntu 
Xenial (4.4.0 kernel) to Ubuntu Focal (5.4.0 kernel) and corresponding 
upgrade from Dovecot 2.2.27 to 2.3.7.2, we've started seeing broken 
uidlist files to an extent that's making larger mail boxes nearly 
unusable because the file is constantly being regenerated. I've also 
used the 2.3.16-2+ubuntu20.04 version distributed from dovecot.org
and the behavior is unchanged. The environment
consists of NFS mounts from a NetApp device, with a couple dozen MX 
servers receiving mail and about a hundred IMAP/POP servers.


This is the exact error (note the blank after "invalid data"):
Error: Mailbox INBOX: Broken file 
/mnt/morty/morty2/gravest/x15775549/Maildir/dovecot-uidlist line 373: 
Invalid data:


I've been able to trigger the problem rather easily by piping an email 
to dovecot-lda in a loop and reading the resulting dovecot-uidlist 
file on a different server. What it shows is that occasionally we're 
seeing the last line of the file prepended with a number of null bytes 
equal to the line that's being written (for example, if the entry is 
"35322 
:1633719038.M516419P3623238.pdx1-sub0-mail-mx202,S=2777,W=2832", we'll 
have it prepended by 69 null bytes). This then breaks the IMAP 
process' ability to read the file. My first thought was to extend the 
retry functionality so the imap process makes more attempts to read the
file when it detects a problem like this, but I would love input from
someone more familiar with the codebase.


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Panic on service(imap) during FETCH/APPEND some messages

2021-09-26 Thread Alessio Cecchi

Hi,

just for info, this bug is still present in Dovecot 2.3.16 (7e2e900c1a)

Thanks

On 02/06/21 19:21, Alessio Cecchi wrote:

Hi,

I have captured a first core dump:


Jun 02 19:02:37 Panic: imap(us...@email.com) 
session=: file index-mail-headers.c: line 198 
(index_mail_parse_header_init): assertion failed: 
(!mail->data.header_parser_initialized) 


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Panic/Fatal error in lmtp when quota is full

2021-09-14 Thread Alessio Cecchi

Hi,

on Dovecot 2.3.16 with delivery via LMTP I found many of these errors when
a user has a full mailbox and personal sieve rules:


- lmtp.log:

Sep 14 09:58:27 pop05 dovecot: lmtp(us...@company.com) 
session=: sieve: deliverytime=122, 
msgid=<000601d7a8f8$fc350d50$f49f27f0$@company.com>, 
sender=us...@company.com, from=us...@company.com, subject="my subject 
1": fileinto action: failed to store into mailbox 'INBOX/Personale': 
Quota exceeded (mailbox for user is full)
Sep 14 09:58:27 pop05 dovecot: lmtp(us...@company.com) 
session=: sieve: deliverytime=122, 
msgid=<000601d7a8f8$fc350d50$f49f27f0$@company.com>, 
sender=us...@company.com, from=us...@company.com, subject="my subject 
1": failed to store into mailbox 'INBOX': Quota exceeded (mailbox for 
user is full)
Sep 14 09:58:27 pop05 dovecot: lmtp(us...@company.com) 
session=: sieve: Execution of script 
/home/vmail/domains/company.com/user1/.dovecot.sieve failed with 
unsuccessful implicit keep (user logfile 
/home/vmail/domains/company.com/user1/.dovecot.sieve.log may reveal 
additional details)


Sep 14 09:58:27 pop05 dovecot: lmtp(26946): Panic: file mail-user.c: 
line 229 (mail_user_deinit): assertion failed: ((*user)->refcount == 1)


Sep 14 09:58:27 pop05 dovecot: lmtp(26946): Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) 
[0x7f52dfb4e632] -> 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f52dfb4e73e] 
-> /usr/lib64/dovecot/libdovecot.so.0(+0xf66fe) [0x7f52dfb5c6fe] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xf67a1) [0x7f52dfb5c7a1] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [
0x7f52dfaaba18] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x57567) 
[0x7f52dfe79567] -> dovecot/lmtp [10.0.2.26 DATA](lmtp_local_data+0x4dc) 
[0x55e09f333e2c] -> dovecot/lmtp [10.0.2.26 
DATA](client_default_cmd_data+0x18b) [0x55e09f33273b] -> dovecot/lmtp 
[10.0.2.26 DATA](cmd_data_continue+0x204) [0x55e09f3324d4] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x5fe64) [0x7f52dfac5e64] -> /usr/li
b64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) [0x7f52dfb74ad5] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b) 
[0x7f52dfb7649b] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) 
[0x7f52dfb74bd9] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f52dfb74e18] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) 
[0x7f52dfadddf3] -> dovecot/lmtp [10.0.2.26 DATA](main+0x20b) 
[0x55e09f330f0b] -> /lib64/libc.so.6(__libc_start_main+0xf5) 
[0x7f52df6ba555] -> dovecot/lmtp [10.0.2.26 DATA](+0x6041) [0x55e09f331041]


Sep 14 09:58:27 pop05 dovecot: lmtp(26946): Fatal: master: 
service(lmtp): child 26946 killed with signal 6 (core dumps disabled - 
https://dovecot.org/bugreport.html#coredumps)


- sieve.log:

sieve: info: started log at 2021-09-14 10:25:54 +0200.
error: deliverytime=8, 
msgid=<49ef01d7a942$2168a200$6439e600$@company.com>, 
sender=send...@company.com, from=send...@company.com, subject="I: 
Undelivered Mail Returned to Sender": fileinto action: failed to store 
into mailbox 'INBOX/Comunicazioni': Quota exceeded (mailbox for user is 
full).
error: deliverytime=9, 
msgid=<49ef01d7a942$2168a200$6439e600$@company.com>, 
sender=send...@company.com, from=send...@company.com, subject="I: 
Undelivered Mail Returned to Sender": failed to store into mailbox 
'INBOX': Quota exceeded (mailbox for user is full).


Hope it can be fixed,

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Dovecot - FTS Solr: disk usage & position information?

2021-09-02 Thread Alessio Cecchi

Hi Vincent,

thanks for your investigations!

On 01/09/21 11:27, Vincent Brillault wrote:

Dear all,

Just a status update, in case this can help others.

We went ahead and disabled the position-information indexing and
re-indexed our mail data (over a couple of days to avoid
overloading the systems). Before the re-indexing we had 1.33 TiB in
our Solr indexes. After re-indexing, we had only 542 GiB; that's a
60% reduction in the storage requirements for our FTS indexes :)

does this optimization also reduce the RAM requirements on the Solr server?


So far, our users haven't reported any issues or measurable differences
concerning the quality of the FTS. From further
debugging, as discussed on the solr-user mailing list
(https://lists.apache.org/thread.html/rcdf8bb97be0839e57928ad5fa34501ec8a73392c11248db91206bc33%40%3Cusers.solr.apache.org%3E),
I've come to the conclusion that, with the current integration between
Dovecot and Solr (esp. the fact that `"` is escaped), it's impossible
to trigger phrase queries from user queries as long as
autoGeneratePhraseQueries is false.


I've attached the schema.xml and solrconfig.xml we are now using with 
Solr 8.6.0, in case there is any interest from others. Let me know if 
you prefer a MR to update the xmls present in 
https://github.com/dovecot/core/tree/master/doc.


Do the attached schema and config files also work with Solr 7.7.0? Since
Dovecot provides a schema and config for 7.7.0, an upgrade path based on
them would be useful for many of us.


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Panic indexer-worker on Dovecot 2.3.15

2021-06-29 Thread Alessio Cecchi
0007f7212b5c768 in io_loop_run (ioloop=0x9eb7b050) at 
ioloop.c:740

    __func__ = "io_loop_run"
#33 0x7f7212ac63c3 in master_service_run (service=0x9eb7aeb0, 
callback=callback@entry=0x9e5225f0 )

    at master-service.c:862
No locals.
#34 0x9e522437 in main (argc=1, argv=0x9eb7ab90) at 
indexer-worker.c:76
    storage_service_flags = 
(MAIL_STORAGE_SERVICE_FLAG_USERDB_LOOKUP | 
MAIL_STORAGE_SERVICE_FLAG_TEMP_PRIV_DROP | 
MAIL_STORAGE_SERVICE_FLAG_NO_IDLE_TIMEOUT)

    c = 
(gdb) quit

Hope it can be fixed.
Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Dovecot v2.3.15 released

2021-06-22 Thread Alessio Cecchi

On 21/06/21 13:18, Timo Sirainen wrote:

 + imap: Support official RFC8970 preview/snippet syntax. Old methods of
   retrieving preview information via IMAP commands ("SNIPPET and PREVIEW
   with explicit algorithm selection") have been deprecated.


Hi,

After upgrading Dovecot from 2.3.14 to 2.3.15 I noticed a problem
parsing the FETCH response for the PREVIEW attribute.

Basically there is no space between the preview content and the rest of
the string, which causes issues when parsing:


a UID FETCH 2539 (MODSEQ UID FLAGS INTERNALDATE PREVIEW 
BODY.PEEK[HEADER.FIELDS (FROM TO SUBJECT DATE)])
* 8 FETCH (UID 2539 MODSEQ (3) FLAGS (\Seen $HasNoAttachment) 
INTERNALDATE "04-Mar-2021 12:18:02 +0100" PREVIEW 
"test"BODY[HEADER.FIELDS (FROM TO SUBJECT DATE)] {151}


With dovecot 2.3.14 there was no problem:

a UID FETCH 2539 (MODSEQ UID FLAGS INTERNALDATE PREVIEW 
BODY.PEEK[HEADER.FIELDS (FROM TO SUBJECT DATE)])
* 8 FETCH (UID 2539 MODSEQ (3) FLAGS (\Seen $HasNoAttachment) 
INTERNALDATE "04-Mar-2021 12:18:02 +0100" PREVIEW (FUZZY "test") 
BODY[HEADER.FIELDS (FROM TO SUBJECT DATE)] {151}


Can you check it please?
Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Panic on service(imap) during FETCH/APPEND some messages

2021-06-03 Thread Alessio Cecchi
3e9b46e5 in imap_fetch_more (ctx=0x55993f11b538, 
cmd=cmd@entry=0x55993f11b358) at imap-fetch.c:617

    ret = 
    __func__ = "imap_fetch_more"
#26 0x55993e9a47ed in cmd_fetch (cmd=0x55993f11b358) at cmd-fetch.c:337
    client = 0x55993f1160e8
    ctx = 0x55993f11b538
    args = 0x55993f135a68
    next_arg = 
    list_arg = 0x7ffccbc9708f
    search_args = 0x0
    qresync_args = {qresync_sample_seqset = 0x55993f0e3308, 
qresync_sample_uidset = 0x7ff2c4ffdaff}

    messageset = 0x55993f135bd8 "7238"
    send_vanished = 
    ret = 
#27 0x55993e9b1614 in command_exec (cmd=0x55993f11b358) at 
imap-commands.c:201

    hook = 0x55993f0ee630
    finished = 
    __func__ = "command_exec"
#28 0x55993e9af502 in client_command_input (cmd=0x55993f11b358) at 
imap-client.c:1204

    client = 0x55993f1160e8
    command = 
    tag = 0x7ff2c4fe8115  "[]A\\\303f\017\037D"
    name = 0x55993f135e40 "\250_\023?\231U"
    ret = 
    __func__ = "client_command_input"
#29 0x55993e9af591 in client_command_input 
(cmd=cmd@entry=0x55993f11b358) at imap-client.c:1271

    client = 0x55993f1160e8
    command = 
    tag = 0x7ff2c4f7be42  
"H\205\333I\211E"

    name = 0x55993f135bd0 "fetch"
    ret = 
    __func__ = "client_command_input"
#30 0x55993e9af759 in client_command_input (cmd=0x55993f11b358) at 
imap-client.c:1238

    client = 0x55993f1160e8
    command = 
    tag = 0x55993f135bc0 "6"
    name = 0x55993f135bc8 "UID"
    ret = 
    __func__ = "client_command_input"
#31 0x55993e9afa15 in client_handle_next_command 
(remove_io_r=, client=0x55993f1160e8) at 
imap-client.c:1313

No locals.
#32 client_handle_input (client=client@entry=0x55993f1160e8) at 
imap-client.c:1327

    _data_stack_cur_id = 3
    remove_io = false
    handled_commands = false
    __func__ = "client_handle_input"
#33 0x55993e9afff9 in client_input (client=0x55993f1160e8) at 
imap-client.c:1371

    cmd = 0x55993f10e700
    output = 0x55993f11b1b0
    bytes = 48
    __func__ = "client_input"
#34 0x7ff2c4fc7f45 in io_loop_call_io (io=0x55993f135980) at 
ioloop.c:714

    ioloop = 0x55993f0ec030
    t_id = 2
    __func__ = "io_loop_call_io"
#35 0x7ff2c4fc98fb in io_loop_handler_run_internal 
(ioloop=ioloop@entry=0x55993f0ec030) at ioloop-epoll.c:222

    ctx = 0x55993f0eccb0
    events = 
    list = 0x55993f0f7290
    io = 
    tv = {tv_sec = 1799, tv_usec = 999034}
    events_count = 
    msecs = 
    ret = 1
    i = 0
    call = 
    __func__ = "io_loop_handler_run_internal"
#36 0x7ff2c4fc8049 in io_loop_handler_run 
(ioloop=ioloop@entry=0x55993f0ec030) at ioloop.c:766

    __func__ = "io_loop_handler_run"
#37 0x7ff2c4fc8288 in io_loop_run (ioloop=0x55993f0ec030) at 
ioloop.c:739

    __func__ = "io_loop_run"
#38 0x7ff2c4f32bb3 in master_service_run (service=0x55993f0ebe90, 
callback=callback@entry=0x55993e9be220 )

    at master-service.c:853
No locals.
#39 0x55993e9a1202 in main (argc=2, argv=0x55993f0ebb90) at main.c:546
    set_roots = {0x7ff2c52670c0 , 
0x55993ebd05e0 , 0x0}
    login_set = {auth_socket_path = 0x55993f0e34d8 "", 
postlogin_socket_path = 0x55993f0e3508 "", postlogin_timeout_secs = 60,
  callback = 0x55993e9bec20 , 
failure_callback = 0x55993e9be330 ,

  request_auth_token = true}
    service_flags = 
    storage_service_flags = 
    username = 0x0
    auth_socket_path = 
    c = 
    error = 0x3800380 
(gdb)

Let me know if you need more details.
Thanks

On 01/06/21 07:24, Aki Tuomi wrote:

Hi!

Any chance you could collect coredumps for these and post the output of

gdb /usr/lib/dovecot/imap /path/to/core
bt full

systemd-coredump can be used for this, and
https://www.dovecot.org/bugreport-mail contains hints on how to get core
dumps otherwise.


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Panic on indexer-worker and Dovecot stop to search on Solr

2021-06-03 Thread Alessio Cecchi

On 31/05/21 21:00, Aki Tuomi wrote:

On 31/05/2021 21:58 Alessio Cecchi  wrote:

  
On 31/05/21 18:17, Aki Tuomi wrote:



On 31/05/2021 19:09 Alessio Cecchi  wrote:


Hi,
I have set up FTS with Solr and a Virtual folder on a (somewhat busy)
Dovecot server to enable "search in all folders" for users.

All works fine until, for some users, the indexer-worker process crashes.

After this crash Dovecot stops querying Solr for new searches in BODY,
returning SERVERBUG for all users on the server, and only with a Dovecot
restart can users search in BODY again.
Before the Dovecot restart I can see that no queries from Dovecot arrive
at the Solr server anymore (for SEARCH BODY; SEARCH in FROM/TO/SUBJECT
still works fine)
   

Hi!

It's a known bug and will be fixed in the next release. It happens when Tika
and Solr are used together.

Aki

Good to know. I'm not using Tika but decode2text; is it the same?


Hm. Not necessarily. It could be something related to some other HTTP
connection, though. The fix should help in any case.

Aki


Hi,

I'm not sure if it can help, but I have collected the core dump for this
crash as well. I notice that it happens only when indexing the Virtual
mailbox folder, for example:


- doveadm index -u us...@email.com '*' works fine

- doveadm index -u us...@email.com 'Virtual/All' crashes

If I first run the index on '*' and afterwards on 'Virtual/All', both work
fine. But when users search all folders from the webmail via Virtual/All, it
always crashes, and afterwards no more indexer-workers run until I reload
Dovecot or kill the "indexer" process.


Here is the core dump:

Jun 03 05:05:34 Panic: indexer-worker(us...@email.com) 
session=: file 
http-client-request.c: line 1240 (http_client_request_send_more): 
assertion failed: (req->payload_input != NULL)
Jun 03 05:05:34 Error: indexer-worker(us...@email.com) 
session=: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) 
[0x7f5d132cbac2] -> 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f5d132cbbce] 
-> /usr/lib64/dovecot/libdovecot.so.0(+0xf3cde) [0x7f5d132d8cde] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xf3d81) [0x7f5d132d8d81] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f5d1322a25a] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_request_send_more+0x3dd) 
[0x7f5d132729ad] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_connection_output+0xf1) 
[0x7f5d13277101] -> /usr/lib64/dovecot/libdovecot.so.0(+0x11d2b0) 
[0x7f5d133022b0] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) 
[0x7f5d132f0f45] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b) 
[0x7f5d132f28fb] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) 
[0x7f5d132f1049] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f5d132f1288] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_wait+0xcd) 
[0x7f5d1328020d] -> 
/usr/lib64/dovecot/lib21_fts_solr_plugin.so(solr_connection_select+0xe4) 
[0x7f5d11440174] -> /usr/lib64/dovecot/lib21_fts_solr_plugin.so(+0x45d4) 
[0x7f5d1143c5d4] -> 
/usr/lib64/dovecot/lib20_fts_plugin.so(fts_backend_get_last_uid+0x6e) 
[0x7f5d125af3fe] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0xf952) 
[0x7f5d125b5952] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x10ef6) 
[0x7f5d125b6ef6] -> /usr/lib64/dovecot/lib20_virtual_plugin.so(+0x966a) 
[0x7f5d1239b66a] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x10ba6) 
[0x7f5d125b6ba6] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_precache+0x2e) 
[0x7f5d135dbb0e] -> dovecot/indexer-worker [us...@email.com 
Virtual/All](+0x2924) [0x561e504cb924] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) 
[0x7f5d132f0f45] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12b) 
[0x7f5d132f28fb] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) 
[0x7f5d132f1049] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f5d132f1288] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) 
[0x7f5d1325bbb3] -> dovecot/indexer-worker [us...@email.com 
Virtual/All](main+0xd7) [0x561e504cb1f7] -> 
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f5d12e39555] -> 
dovecot/indexer-worker [us...@email.com Virtual/All](+0x22ba) 
[0x561e504cb2ba]
Jun 03 05:05:34 Fatal: indexer-worker(us...@email.com) 
session=: master: 
service(indexer-worker): child 2019 killed with signal 6 (core dumped)



[root@popimap ~]# gdb /usr/libexec/dovecot/indexer-worker 
/var/core/core.indexer-worker.2019

GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-120.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "sho

Re: Panic on service(imap) during FETCH/APPEND some messages

2021-06-02 Thread Alessio Cecchi
12a030) at ioloop-epoll.c:195

    ctx = 0x56134512acb0
    events = 
    list = 
    io = 
    tv = {tv_sec = 1795, tv_usec = 60554}
    events_count = 7
    msecs = 1795061
    ret = 0
    i = 
    call = 
    __func__ = "io_loop_handler_run_internal"
#25 0x7f8a3625e049 in io_loop_handler_run 
(ioloop=ioloop@entry=0x56134512a030) at ioloop.c:766

    __func__ = "io_loop_handler_run"
#26 0x7f8a3625e288 in io_loop_run (ioloop=0x56134512a030) at 
ioloop.c:739

    __func__ = "io_loop_run"
#27 0x7f8a361c8bb3 in master_service_run (service=0x561345129e90, 
callback=callback@entry=0x561344cd2220 )

    at master-service.c:853
No locals.
#28 0x561344cb5202 in main (argc=2, argv=0x561345129b90) at main.c:546
    set_roots = {0x7f8a364fd0c0 , 
0x561344ee45e0 , 0x0}

    login_set = {
  auth_socket_path = 0x5613451214d8 "0x7f8a36245d81] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f8a3619725a] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_mail_parse_header_init+0x3b9) 
[0x7f8a365c6dc9] -> /usr/lib64/dovec"...,
  postlogin_socket_path = 0x561345121508 ".so.0(i_fatal+0) 
[0x7f8a3619725a] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_mail_parse_header_init+0x3b9) 
[0x7f8a365c6dc9] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_mail_parse_head"...,
  postlogin_timeout_secs = 60, callback = 0x561344cd2c20 
,
  failure_callback = 0x561344cd2330 , 
request_auth_token = true}

    service_flags = 
    storage_service_flags = 
    username = 0x0
    auth_socket_path = 
    c = 
    error = 0x3800380 
(gdb)

On 01/06/21 07:24, Aki Tuomi wrote:

Hi!

Any chance you could collect coredumps for these and post the output of

gdb /usr/lib/dovecot/imap /path/to/core
bt full

systemd-coredump can be used for this, and
https://www.dovecot.org/bugreport-mail contains hints on how to get core
dumps otherwise.
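
On hosts where systemd-coredump is active, a quick route to that output
is (a sketch; the list/match arguments are whatever coredumpctl shows
for the crashed process):

coredumpctl list
coredumpctl gdb <PID>    # then at the (gdb) prompt: bt full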

Aki


On 31/05/2021 23:32 Alessio Cecchi  wrote:

  
Hi,


when I check "doveadm log errors" I find some fatal errors repeated many
times by roughly the same users






Are these already known bugs?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice





Panic on service(imap) during FETCH/APPEND some messages

2021-05-31 Thread Alessio Cecchi
/usr/lib64/dovecot/libdovecot-storage.so.0(+0x77428) 
[0x7fe4e8348428] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_get_stream_because+0x64) 
[0x7fe4e83105a4] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_mail_prefetch+0x96) 
[0x7fe4e8393bc6] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_prefetch+0x2e) 
[0x7fe4e830fe4e] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0xc98cd) 
[0x7fe4e839a8cd] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_storage_search_next_nonblock+0x110) 
[0x7fe4e839ae30] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_search_next_nonblock+0x22) 
[0x7fe4e831eaa2] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_search_next+0x3d) 
[0x7fe4e831eb0d] -> dovecot/imap [us...@email.com 89.45.183.8 UID 
fetch](+0x211dc) [0x5649a2e121dc] -> dovecot/imap [us...@email.com 
89.45.183.8 UID fetch](imap_fetch_more+0x35) [0x5649a2e136e5] -> 
dovecot/imap [us...@email.com 89.45.183.8 UID fetch](cmd_fetch+0x34d) 
[0x5649a2e037ed] -> dovecot/imap [us...@email.com 89.45.183.8 UID 
fetch](command_exec+0x64) [0x5649a2e10614] -> dovecot/imap 
[us...@email.com 89.45.183.8 UID fetch](+0x1d502) [0x5649a2e0e502] -> 
dovecot/imap [us...@email.com 89.45.183.8 UID fetch](+0x1d591) 
[0x5649a2e0e591] -> dovecot/imap [us...@email.com 89.45.183.8 UID 
fetch](+0x1d759) [0x5649a2e0e759] -> dovecot/imap [us...@email.com 
89.45.183.8 UID fetch](client_handle_input+0x205) [0x5649a2e0ea15] -> 
dovecot/imap [us...@email.com 89.45.183.8 UID fetch](client_input+0x79) 
[0x5649a2e0eff9] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x65) [0x7fe4e8025f45]
May 31 12:06:05 pop01 dovecot: imap(us...@email.com) 
session=<3b6TX53DetK5HrcI>: Fatal: master: service(imap): child 29622 
killed with signal 6 (core dumps disabled - 
https://dovecot.org/bugreport.html#coredumps)


- Panic: file index-mail-headers.c

May 31 18:38:20 pop07 dovecot: imap(us...@email.com) 
session=: Panic: file index-mail-headers.c: line 198 
(index_mail_parse_header_init): assertion failed: 
(!mail->data.header_parser_initialized)
May 31 18:38:20 pop07 dovecot: imap(us...@email.com) 
session=: Error: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x42) 
[0x7f73d1683ac2] -> 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f73d1683bce] 
-> /usr/lib64/dovecot/libdovecot.so.0(+0xf3cde) [0x7f73d1690cde] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xf3d81) [0x7f73d1690d81] -> 
/usr/lib64/dovecot/libdovecot.so.0(i_fatal+0) [0x7f73d15e225a] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_mail_parse_header_init+0x3b9) 
[0x7f73d1a11dc9] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_mail_parse_headers_internal+0x2b) 
[0x7f73d1a127eb] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(index_mail_init_stream+0x19f) 
[0x7f73d1a15b9f] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x77428) 
[0x7f73d19cb428] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_get_stream_because+0x64) 
[0x7f73d19935a4] -> /usr/lib64/dovecot/libdovecot-storage.so.0(+0x77207) 
[0x7f73d19cb207] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mail_get_virtual_size+0x38) 
[0x7f73d1993158] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(maildir_save_finish+0x154) 
[0x7f73d19cc504] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_save_cancel+0x3e) 
[0x7f73d19a239e] -> dovecot/imap [us...@email.com 35.67.148.57 
APPEND](+0xf5d2) [0x558743e215d2] -> dovecot/imap [us...@email.com 
35.67.148.57 APPEND](+0x10918) [0x558743e22918] -> dovecot/imap 
[us...@email.com 35.67.148.57 APPEND](command_exec+0x64) 
[0x558743e31614] -> dovecot/imap [us...@email.com 35.67.148.57 
APPEND](client_command_cancel+0x49) [0x558743e2edc9] -> dovecot/imap 
[us...@email.com 35.67.148.57 APPEND](+0x1cef4) [0x558743e2eef4] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handle_timeouts+0x12b) 
[0x7f73d16a8d3b] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xcc) 
[0x7f73d16aa89c] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x59) 
[0x7f73d16a9049] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f73d16a9288] -> 
/usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) 
[0x7f73d1613bb3] -> dovecot/imap [us...@email.com 35.67.148.57 
APPEND](main+0x342) [0x558743e21202] -> 
/lib64/libc.so.6(__libc_start_main+0xf5) [0x7f73d11f1555] -> 
dovecot/imap [us...@email.com 35.67.148.57 APPEND](+0xf405) [0x558743e21405]
May 31 18:38:20 pop07 dovecot: imap(us...@email.com) 
session=: Fatal: master: service(imap): child 9786 
killed with signal 6 (core dumps disabled - 
https://dovecot.org/bugreport.html#coredumps)


Are already know bugs?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Panic on indexer-worker and Dovecot stop to search on Solr

2021-05-31 Thread Alessio Cecchi

On 31/05/21 18:17, Aki Tuomi wrote:


On 31/05/2021 19:09 Alessio Cecchi  wrote:


Hi,
I have set up FTS with Solr and a virtual folder on a (somewhat busy)
Dovecot server to enable "search in all folders" for users.

All works fine until, for some users, the indexer-worker process crashes.

After this crash Dovecot stops querying Solr for new BODY searches,
returning SERVERBUG for all users on the server, and only after a Dovecot
restart can users search in BODY again.
Before the Dovecot restart I can see that no queries from Dovecot arrive
at the Solr server anymore (only for SEARCH BODY; SEARCH in
FROM/TO/SUBJECT still works fine)
  

Hi!

It's a known bug and will be fixed in the next release. It happens when
Tika and Solr are used together.

Aki

Good to know. I'm not using Tika but decode2text; is it the same?

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Panic on indexer-worker and Dovecot stop to search on Solr

2021-05-31 Thread Alessio Cecchi
    at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:188)
    at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)


This is the relevant part of dovecot configuration:
plugin {
  fts = solr
  fts_autoindex = yes
  fts_decoder = decode2text
  fts_enforced = body
  fts_index_timeout = 5s
  fts_solr = url=http://10.0.1.3:8983/solr/dovecot/
  [...]
}

Is this an already known bug? Any workaround?
Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Adding virtual folders to an existing dovecot installation

2021-05-06 Thread Alessio Cecchi

On 06/05/21 15:11, Christian Wolf wrote:

Hello dear dovecot mailinglist,

First, one question: when I use the virtual folder and read a
message/mark it as read, will this be reflected on the underlying
folder, or will it cause trouble in Dovecot?

It will be reflected on the original folder without trouble.

Bonus question: Is it possible to restrict the effect of the virtual plugin to
certain (virtual) user accounts?
I tried including the virtual folder configuration under
"remote 1.2.3.4", but it doesn't work.
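
One alternative worth sketching, under the assumption (not tested in
this thread) that your userdb can return extra fields: have the userdb
override mail_plugins only for the accounts that should see the virtual
folders. The wants_virtual column is purely illustrative:

userdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}

# in dovecot-sql.conf.ext, something like:
user_query = SELECT home, uid, gid, \
  CASE WHEN wants_virtual = 1 THEN 'quota virtual' ELSE 'quota' END \
    AS mail_plugins \
  FROM users WHERE username = '%u'

This is an untested sketch; the point is only that userdb extra fields
can carry per-user setting overrides.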


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Help with imapc and Shared Folder in a Cluster

2021-04-28 Thread Alessio Cecchi

On 28/04/21 11:49, Markus Valentin wrote:

On 27/04/2021 22:04 Alessio Cecchi  wrote:
On 23/04/21 09:29, Markus Valentin wrote:

On 4/22/21 11:49 PM, Alessio Cecchi wrote: I'm trying to set up Shared
Mailboxes in a Dovecot (2.3.14) Cluster as

explained here:

https://doc.dovecot.org/configuration_manual/shared_mailboxes/cluster_setup/


but I'm not happy:

# doveadm acl debug -u te...@emailtest.net shared/test2/Sent

doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to
10.0.0.202:143 (local 10.0.0.203:58054)
doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to
10.0.0.202:143 (local 10.0.0.203:58056)
doveadm(te...@emailtest.net): Error: imapc(10.0.0.202:143):
Authentication failed: [AUTHENTICATIONFAILED] Authentication failed.
doveadm(te...@emailtest.net): Error: Can't open mailbox
shared/test2/Sent: Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.

ACL, master user and master password work fine: with the regular
configuration shared folders work, and with the master user or the
master password I can log in and see and access the shared/ namespace
and the shared folders.

But when I try to switch location from

location = maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u

to

location = imapc:~/Maildir/shared/%%u/
[...]
imapc_host = 10.0.0.202
imapc_master_user = %u
#imapc_user = %u
imapc_password = Password
imapc_features = search

it stops working.

The relevant error is this:

Apr 22 22:57:14 doveadm(te...@testemail.net): Info:
imapc(10.0.0.203:143): Connected to 10.0.0.202:143 (local 10.0.0.203:58070)
Apr 22 22:57:14 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Server capabilities: IMAP4rev1 SASL-IR
LOGIN-REFERRALS ID ENABLE IDLE XLIST LITERAL+ AUTH=PLAIN AUTH=LOGIN
Apr 22 22:57:14 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Authenticating as te...@testemail.net for user
te...@testemail.net
Apr 22 22:57:16 doveadm(te...@testemail.net): Error:
imapc(10.0.0.203:143): Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.
Apr 22 22:57:16 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Disconnected
Apr 22 22:57:16 doveadm(te...@testemail.net): Error: Can't open mailbox
shared/test2/Sent: Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.

Please note "Authenticating as te...@testemail.net for user
te...@testemail.net" failed.

So my question is, the documentation page is update and right or I
missing something?

Hi,

from my perspective it is likely that te...@testemail.net can't be
authenticated as a master user which is required for this setup to work.

  From the cluster setup page:

"You’ll need to setup master user logins to work for all the users. The
logged in user becomes the master user. The master user doesn’t actually
have any special privileges. "


Hi,

after some days of debugging I have found a solution to make shared
folders work via imapc, even if only partially.

First, there is an error in the documentation page; the right "location"
should be like this:

location = imapc:%%h/Maildir

with %%h/ instead of ~/

Then I set up two passdbs like these:

passdb {
     driver = static
     args = password=P4ssw0rd
     result_success = continue
}

passdb {
    driver = sql
    args = /etc/dovecot/dovecot-sql-master.conf.ext
    master = yes
    result_success = continue
}

where the first is required (only on the backend Dovecot) when the
sharing user (test2) needs to log in (with imapc_password), and the
second (on both the director and the backend Dovecot) when "test1" needs
to log in to the sharing (test2) account as master user.

So acl debug works fine:

# doveadm acl debug -u te...@emailtest.net shared/test2/Sent
doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to
10.0.0.202:143 (local 10.0.0.203:39698)
doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to
10.0.0.202:143 (local 10.0.0.203:39700)
doveadm(te...@emailtest.net): Info: Mailbox 'Sent' is in namespace
'shared/test2/'
doveadm(te...@emailtest.net): Info: Mailbox path:
/home/vmail/domains/emailtest.net/test2/Maildir/.Sent
doveadm(te...@emailtest.net): Info: All message flags are shared across
users in mailbox
doveadm(te...@emailtest.net): Info: User te...@emailtest.net has rights:
lookup read write write-seen write-deleted insert expunge
doveadm(te...@emailtest.net): Info: Mailbox found from dovecot-acl-list
doveadm(te...@emailtest.net): Info: User te...@emailtest.net found from
ACL shared dict
doveadm(te...@emailtest.net): Info: Mailbox shared/test2/Sent is visible
in LIST

But there are still some issues: if the sharing ring is like "test2
shares a folder with test1, which shares a folder with test3, which
shares a folder with test2", Dovecot loops until max_user_connections is
reached. Probably we cannot solve this until the "acl_ignore_namespace"
option becomes available.

Moreover, if both test1 and test2 mark as read/unread the same message
in a shared folder …

Re: Help with imapc and Shared Folder in a Cluster

2021-04-27 Thread Alessio Cecchi



On 23/04/21 09:29, Markus Valentin wrote:

On 4/22/21 11:49 PM, Alessio Cecchi wrote: I'm trying to set up Shared
Mailboxes in a Dovecot (2.3.14) Cluster as

explained here:

https://doc.dovecot.org/configuration_manual/shared_mailboxes/cluster_setup/


but I'm not happy:

# doveadm acl debug -u te...@emailtest.net shared/test2/Sent

doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to
10.0.0.202:143 (local 10.0.0.203:58054)
doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to
10.0.0.202:143 (local 10.0.0.203:58056)
doveadm(te...@emailtest.net): Error: imapc(10.0.0.202:143):
Authentication failed: [AUTHENTICATIONFAILED] Authentication failed.
doveadm(te...@emailtest.net): Error: Can't open mailbox
shared/test2/Sent: Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.

ACL, master user and master password work fine: with the regular
configuration shared folders work, and with the master user or the
master password I can log in and see and access the shared/ namespace
and the shared folders.

But when I try to switch location from

location = maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u

to

location = imapc:~/Maildir/shared/%%u/
[...]
imapc_host = 10.0.0.202
imapc_master_user = %u
#imapc_user = %u
imapc_password = Password
imapc_features = search

it stops working.

The relevant error is this:

Apr 22 22:57:14 doveadm(te...@testemail.net): Info:
imapc(10.0.0.203:143): Connected to 10.0.0.202:143 (local 10.0.0.203:58070)
Apr 22 22:57:14 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Server capabilities: IMAP4rev1 SASL-IR
LOGIN-REFERRALS ID ENABLE IDLE XLIST LITERAL+ AUTH=PLAIN AUTH=LOGIN
Apr 22 22:57:14 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Authenticating as te...@testemail.net for user
te...@testemail.net
Apr 22 22:57:16 doveadm(te...@testemail.net): Error:
imapc(10.0.0.203:143): Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.
Apr 22 22:57:16 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Disconnected
Apr 22 22:57:16 doveadm(te...@testemail.net): Error: Can't open mailbox
shared/test2/Sent: Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.

Please note "Authenticating as te...@testemail.net for user
te...@testemail.net" failed.

So my question is, the documentation page is update and right or I
missing something?

Hi,

from my perspective it is likely that te...@testemail.net can't be
authenticated as a master user which is required for this setup to work.

 From the cluster setup page:

"You’ll need to setup master user logins to work for all the users. The
logged in user becomes the master user. The master user doesn’t actually
have any special privileges. "


Hi,

after some days of debugging I have found a solution to make shared
folders work via imapc, even if only partially.


First, there is an error in the documentation page; the right "location"
should be like this:


location = imapc:%%h/Maildir

with %%h/ instead of ~/

Then I set up two passdbs like these:

passdb {
   driver = static
   args = password=P4ssw0rd
   result_success = continue
}

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql-master.conf.ext
  master = yes
  result_success = continue
}

where the first is required (only on the backend Dovecot) when the
sharing user (test2) needs to log in (with imapc_password), and the
second (on both the director and the backend Dovecot) when "test1" needs
to log in to the sharing (test2) account as master user.


So acl debug works fine:

# doveadm acl debug -u te...@emailtest.net shared/test2/Sent
doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to 
10.0.0.202:143 (local 10.0.0.203:39698)
doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to 
10.0.0.202:143 (local 10.0.0.203:39700)
doveadm(te...@emailtest.net): Info: Mailbox 'Sent' is in namespace 
'shared/test2/'
doveadm(te...@emailtest.net): Info: Mailbox path: 
/home/vmail/domains/emailtest.net/test2/Maildir/.Sent
doveadm(te...@emailtest.net): Info: All message flags are shared across 
users in mailbox
doveadm(te...@emailtest.net): Info: User te...@emailtest.net has rights: 
lookup read write write-seen write-deleted insert expunge

doveadm(te...@emailtest.net): Info: Mailbox found from dovecot-acl-list
doveadm(te...@emailtest.net): Info: User te...@emailtest.net found from 
ACL shared dict
doveadm(te...@emailtest.net): Info: Mailbox shared/test2/Sent is visible 
in LIST


But there are still some issues: if the sharing ring is like "test2
shares a folder with test1, which shares a folder with test3, which
shares a folder with test2", Dovecot loops until max_user_connections is
reached. Probably we cannot solve this until the "acl_ignore_namespace"
option becomes available.


Moreover, if both test1 and test2 mark as read/unread the same message
in a shared folder …

Re: Help with imapc and Shared Folder in a Cluster

2021-04-23 Thread Alessio Cecchi

On 23/04/21 09:29, Markus Valentin wrote:

On 4/22/21 11:49 PM, Alessio Cecchi wrote: I'm trying to set up Shared
Mailboxes in a Dovecot (2.3.14) Cluster as

explained here:

https://doc.dovecot.org/configuration_manual/shared_mailboxes/cluster_setup/


but I'm not happy:

# doveadm acl debug -u te...@emailtest.net shared/test2/Sent

doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to
10.0.0.202:143 (local 10.0.0.203:58054)
doveadm(te...@emailtest.net): Info: imapc(10.0.0.202:143): Connected to
10.0.0.202:143 (local 10.0.0.203:58056)
doveadm(te...@emailtest.net): Error: imapc(10.0.0.202:143):
Authentication failed: [AUTHENTICATIONFAILED] Authentication failed.
doveadm(te...@emailtest.net): Error: Can't open mailbox
shared/test2/Sent: Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.

ACL, master user and master password work fine: with the regular
configuration shared folders work, and with the master user or the
master password I can log in and see and access the shared/ namespace
and the shared folders.

But when I try to switch location from

location = maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u

to

location = imapc:~/Maildir/shared/%%u/
[...]
imapc_host = 10.0.0.202
imapc_master_user = %u
#imapc_user = %u
imapc_password = Password
imapc_features = search

it stops working.

The relevant error is this:

Apr 22 22:57:14 doveadm(te...@testemail.net): Info:
imapc(10.0.0.203:143): Connected to 10.0.0.202:143 (local 10.0.0.203:58070)
Apr 22 22:57:14 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Server capabilities: IMAP4rev1 SASL-IR
LOGIN-REFERRALS ID ENABLE IDLE XLIST LITERAL+ AUTH=PLAIN AUTH=LOGIN
Apr 22 22:57:14 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Authenticating as te...@testemail.net for user
te...@testemail.net
Apr 22 22:57:16 doveadm(te...@testemail.net): Error:
imapc(10.0.0.203:143): Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.
Apr 22 22:57:16 doveadm(te...@testemail.net): Debug:
imapc(10.0.0.203:143): Disconnected
Apr 22 22:57:16 doveadm(te...@testemail.net): Error: Can't open mailbox
shared/test2/Sent: Authentication failed: [AUTHENTICATIONFAILED]
Authentication failed.

Please note "Authenticating as te...@testemail.net for user
te...@testemail.net" failed.

So my question is, the documentation page is update and right or I
missing something?

Hi,

from my perspective it is likely that te...@testemail.net can't be
authenticated as a master user which is required for this setup to work.

 From the cluster setup page:

"You’ll need to setup master user logins to work for all the users. The
logged in user becomes the master user. The master user doesn’t actually
have any special privileges. "


Hi Markus,

thanks a lot for your support.

I understand your explanation, but I don't understand how to apply it on
the master user/password side.


I must put "imapc_password = master-secret" in the configuration file,
where "master-secret" is a fixed string, and "imapc_master_user = %u",
which is replaced with "te...@testemail.net" in my case.


So I have inserted this in auth-master.conf:

passdb {
   driver = static
   args = password=master-secret
   result_success = continue
}

but I don't think this is right/sufficient since, if I understand what
you said, the master user name will be "te...@testemail.net" (from %u),
so the login format at IMAP level will be
"te...@testemail.net*te...@testemail.net"


but this requires a passdb conf more similar to:

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql-master.conf.ext
  master = yes
  result_success = continue
}

so every %u can be a master user, but the password cannot be fixed in
this case, since it would be the password for every user.


Should I mix passdb driver = sql with args = password=master-secret?

Or what?
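
One possible mix, sketched from the cluster setup page's master-login
approach quoted above (an untested sketch, not something confirmed in
this thread): make the fixed-password static passdb itself the master
passdb,

passdb {
  driver = static
  args = password=master-secret
  master = yes
  result_success = continue
}

With master = yes the fixed secret should only authenticate the master
part of a user*master login, so it never acts as a regular user's
password.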

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Help with imapc and Shared Folder in a Cluster

2021-04-22 Thread Alessio Cecchi
  sieve_duplicate_default_period = 1h
  sieve_duplicate_max_period = 1d
  sieve_extensions = +vacation-seconds
  sieve_max_redirects = 25
  sieve_vacation_default_period = 1d
  sieve_vacation_min_period = 4h
  sieve_vacation_send_from_recipient = yes
  zlib_save = gz
  zlib_save_level = 6
}
pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
pop3_fast_size_lookups = yes
pop3_logout_format = top=%t/%p, retr=%r/%b, del=%d/%{deleted_bytes}/%m, 
size=%s, bytes=%i/%o

protocols = imap pop3 lmtp sieve
service auth {
  client_limit = 6524
  unix_listener auth-userdb {
    group = vchkpw
    mode = 0660
    user = vmail
  }
}
service dict {
  process_limit = 500
  unix_listener dict {
    group = vchkpw
    mode = 0660
    user = vmail
  }
}
service doveadm {
  inet_listener {
    port = 2425
  }
}
service imap-login {
  process_min_avail = 12
  service_count = 0
}
service imap-postlogin {
  executable = script-login /etc/dovecot/scripts/imap-postlogin.sh
  unix_listener imap-postlogin {
    group = vchkpw
    mode = 0660
    user = vmail
  }
  user = vmail
}
service imap {
  executable = imap imap-postlogin
  process_limit = 8000
  vsz_limit = 2 G
}
service lmtp {
  inet_listener lmtp {
    port = 24
  }
  process_min_avail = 12
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
}
service pop3-login {
  process_min_avail = 12
  service_count = 0
}
service pop3-postlogin {
  executable = script-login /etc/dovecot/scripts/pop3-postlogin.sh
  unix_listener pop3-postlogin {
    group = vchkpw
    mode = 0660
    user = vmail
  }
  user = vmail
}
service pop3 {
  executable = pop3 pop3-postlogin
}
service quota-warning {
  executable = script /etc/dovecot/scripts/quota-warning.sh
  unix_listener quota-warning {
    user = vmail
  }
  user = vmail
}
service stats {
  client_limit = 10240
  unix_listener stats-writer {
    group = vchkpw
    mode = 0660
    user = vmail
  }
}
ssl = no
submission_host = 127.0.0.1
userdb {
  driver = prefetch
}
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
verbose_proctitle = yes
protocol lmtp {
  mail_fsync = optimized
  mail_plugins = quota acl zlib fts fts_solr virtual sieve notify 
push_notification

  namespace inbox {
    location =
    mailbox Spam {
  autoexpunge = 31 days
    }
    mailbox Trash {
  autoexpunge = 31 days
    }
    prefix =
  }
}
protocol lda {
  mail_fsync = optimized
  mail_plugins = quota acl zlib fts fts_solr virtual sieve notify 
push_notification

}
protocol imap {
  mail_max_userip_connections = 10
  mail_plugins = quota acl zlib fts fts_solr virtual imap_quota imap_acl
  namespace inbox {
    location =
    mailbox Spam {
  autoexpunge = 31 days
    }
    mailbox Trash {
  autoexpunge = 31 days
    }
    prefix =
  }
}
protocol sieve {
  mail_max_userip_connections = 2
}
protocol pop3 {
  mail_max_userip_connections = 15
}
remote 10.0.1.0/24 {
  protocol imap {
    imap_metadata = yes
  }
}
local 10.0.0.0/24 {
  doveadm_password = # hidden, use -P to show it
}

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



fts_enforced=yes or body return NO [SERVERBUG] Internal error occurred when SEARCH

2021-04-20 Thread Alessio Cecchi

Hi,

we have some issues related to the fts_enforced option in dovecot config.

Following the table in the docs 
(https://doc.dovecot.org/settings/plugin/fts-plugin/#plugin-fts-setting-fts-enforced), 
we cannot figure out the current behavior of the flag.


In our case we have Dovecot up and running, with the Dovecot indexes
kept up to date using doveadm commands (fts rescan/index), but the Solr
instance is offline for testing.


We try to search using the Subject header. We tried both the "yes" and
"body" options of the fts_enforced flag, but in a telnet session we get:


C: a search header subject "test"
S: a NO [SERVERBUG] Internal error occurred. Refer to server log for 
more information. [2021-04-20 15:52:57] (0.005 + 0.000 + 0.004 secs).


But the Dovecot log doesn't include any specific error, except for a generic:

Apr 20 15:52:57 imap dovecot: imap(t...@emailtest.net) 
session=: Error: fts_solr: Lookup failed: 
connect(10.0.0.2:8983) failed: Connection refused


Is there any flag or method to check whether a Solr instance is up and
running and, if it isn't, to switch to the internal Dovecot indexes instead?
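
There seems to be no such switch inside Dovecot, but an external health
check is easy to sketch (assuming the stock Solr ping handler is
enabled; the URL mirrors the Solr address from the log above):

curl -fsS 'http://10.0.0.2:8983/solr/dovecot/admin/ping?wt=json'

A monitoring job could alert on its exit code, or rewrite fts_enforced
and reload Dovecot when Solr goes away.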


Using the "no" option, for enforced flag, for us is unavailable because 
the related search could be too long and we could have timeout waiting 
the results.


We aim to achieve the following result: if our Solr instance is
unavailable, IMAP searches should fall back from full-text to headers
only.


Could you suggest something in this direction? We are running Dovecot 2.3.14.

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



mail_max_userip_connections per remote IP not working

2021-04-13 Thread Alessio Cecchi

Hi,

I'm trying to set a specific mail_max_userip_connections for a remote IP
(a webmail IMAP client), but it doesn't seem to work:


remote 1.2.3.4 {
  protocol imap {
    mail_max_userip_connections = 100
  }
}

and this isn't working either:

remote 1.2.3.4 {
    mail_max_userip_connections = 100
}

I inserted it at the end of the 20-imap.conf file.

Is something wrong, or is this not supported?

I'm running Dovecot 2.3.14.
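
One way to see what the config parser actually resolves for that client
is doveconf's filter option (a sketch; the filter names, such as
protocol, local and remote, are the ones the doveconf man page lists):

doveconf -f protocol=imap -f remote=1.2.3.4 mail_max_userip_connections

If this prints 100, the remote {} block matches and the limit is being
lost elsewhere; if it prints the default, the block isn't matching.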

Thanks

--
Alessio Cecchi
Postmaster @http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: start dovecot multiple instances

2021-03-08 Thread Alessio Cecchi

Hi Gonzalo,

I had two instances of Dovecot running on the same server for a long
time, but with CentOS 6. I remember having two different configurations,
/etc/dovecot/ and /etc/director/, and starting them with:


# dovecot -c /etc/dovecot/dovecot.conf
# dovecot -c /etc/director/dovecot.conf

and confirming that both were running with:

# doveadm instance list
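
On RHEL7 the same can be wrapped in a second systemd unit; a minimal
sketch (the unit name and paths are assumptions, untested):

# /etc/systemd/system/dovecot-director.service
[Unit]
Description=Dovecot director instance
After=network.target

[Service]
ExecStart=/usr/sbin/dovecot -F -c /etc/director/dovecot.conf

[Install]
WantedBy=multi-user.target

Each instance also needs its own base_dir in its dovecot.conf, so the
two don't fight over /var/run/dovecot.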

On 05/03/21 13:15, Gonzalo Palacios Goicolea wrote:


Hi All,

I'm trying to run dovecot with dovecot-director on the same server,
using two different instances, but I'm not able to make the two
instances start with systemctl (we are using RHEL7).


Maybe I should create a
/etc/systemd/system/dovecot.service.d/service.conf file and add some
command there?


Thanks and best regards



Re: director implementation

2021-03-08 Thread Alessio Cecchi

On 08/03/21 17:55, Gonzalo Palacios Goicolea wrote:
My doubt is whether it's recommended to delete all dovecot.index* files
before passing the traffic through the director servers, or whether it's
not required. Any information on this will be appreciated.

It's not required. Dovecot will fix previous errors itself.


Re: Dovecot v2.3.14.rc1 released

2021-02-18 Thread Alessio Cecchi

Hi,

with this release I'm unable to build the RPM from the SRPMS; with
2.3.13 the build works fine. It's probably a simple error in the
dovecot.spec file, where
"%{_libdir}/dovecot/lib21_fts_lucene_plugin.so" should be removed:




+ cd /home/alessice/rpmbuild/BUILD
+ cd dovecot-2.3.14
+ unset DISPLAY
+ echo 'Skip make check'
Skip make check
+ exit 0
Processing files: dovecot-2.3.14-0.rc1.x86_64
error: File not found: 
/home/alessice/rpmbuild/BUILDROOT/dovecot-2.3.14-0.rc1.x86_64/usr/lib64/dovecot/lib21_fts_lucene_plugin.so

Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.PfW7IL
+ umask 022
+ cd /home/alessice/rpmbuild/BUILD
+ cd dovecot-2.3.14
+ 
DOCDIR=/home/alessice/rpmbuild/BUILDROOT/dovecot-2.3.14-0.rc1.x86_64/usr/share/doc/dovecot-2.3.14

+ export DOCDIR
+ rm -rf 
/home/alessice/rpmbuild/BUILDROOT/dovecot-2.3.14-0.rc1.x86_64/usr/share/doc/dovecot-2.3.14
+ /bin/mkdir -p 
/home/alessice/rpmbuild/BUILDROOT/dovecot-2.3.14-0.rc1.x86_64/usr/share/doc/dovecot-2.3.14
+ cp -pr docinstall/documentation.txt docinstall/dovecot-openssl.cnf 
docinstall/example-config docinstall/mkcert.sh 
docinstall/solr-config-7.7.0.xml docinstall/solr-schema-7.7.0.xml 
docinstall/solr-schema.xml docinstall/wiki AUTHORS ChangeLog COPYING 
COPYING.LGPL COPYING.MIT NEWS README 
/home/alessice/rpmbuild/BUILDROOT/dovecot-2.3.14-0.rc1.x86_64/usr/share/doc/dovecot-2.3.14

+ exit 0


RPM build errors:
    File not found: 
/home/alessice/rpmbuild/BUILDROOT/dovecot-2.3.14-0.rc1.x86_64/usr/lib64/dovecot/lib21_fts_lucene_plugin.so
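
The suggested fix amounts to dropping that one %files entry before
building; a sketch (untested):

sed -i '\|lib21_fts_lucene_plugin.so|d' ~/rpmbuild/SPECS/dovecot.spec
rpmbuild -ba ~/rpmbuild/SPECS/dovecot.spec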




If you are asking why I need to rebuild from source, it's because I'm
still migrating some servers from CentOS 6 to 7.


Thanks

On 17/02/21 15:38, Aki Tuomi wrote:

We are pleased to release the first release candidate for v2.3.14. We
have made changes to packaging, so please give us any feedback on how it
works.


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Warning sqlpool(mysql): Query failed

2021-02-17 Thread Alessio Cecchi

Hi,

with the latest Dovecot version (2.3.13) I see many of these warnings
via "doveadm log errors":


Feb 17 03:26:28 Warning: dict(31513): conn unix:dict (pid=31330,uid=95): 
dict(mysql): sqlpool(mysql): Query failed, 
retrying: MySQL server has gone away (idled for 679 secs)
Feb 17 03:48:53 Warning: dict(31013): conn unix:dict (pid=31805,uid=95): 
dict(mysql): sqlpool(mysql): Query failed, 
retrying: MySQL server has gone away (idled for 685 secs)
Feb 17 03:54:12 Warning: dict(32529): conn unix:dict (pid=3147,uid=95): 
dict(mysql): sqlpool(mysql): Query failed, 
retrying: MySQL server has gone away (idled for 670 secs)


What do they mean?

I think they are related to the inactivity timeout on the MySQL server
(where I have wait_timeout=600). Could these warnings be a problem, or
can they be avoided with some setting in Dovecot?
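
The message itself suggests the pooled connection is simply reconnected
and the query retried, so one hedged workaround is to raise the
server-side idle timeout above the dict connections' typical idle time,
e.g. in my.cnf (the value is illustrative):

[mysqld]
wait_timeout = 3600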


My dovecot configuration about dict:

dict {
  acl = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  expire = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  sieve = mysql:/etc/dovecot/dovecot-dict-sieve-sql.conf.ext
  sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
}
plugin {
  [...]
  acl_shared_dict = proxy::acl
  expire_dict = proxy::expire
  quota2 = dict:Quota Usage::noenforcing:proxy::sqlquota
  sieve = file:~/sieve;active=~/.dovecot.sieve
  sieve_before = dict:proxy::sieve;name=activesql
  [...]
}
service dict {
  process_limit = 500
  unix_listener dict {
    group = vmail
    mode = 0660
    user = vmail
  }
}

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: dovecot and broken uidlist

2021-01-22 Thread Alessio Cecchi

Hi,

after some tests I noticed a difference in the dovecot-uidlist line
format depending on whether the message is read from the "old kernel" or
the "new kernel":

81184 G1611334252.M95445P32580.mail05.myserver.com 
:1611334252.M95445P32580.mail05.myserver.com,S=38689,W=39290
81185 G1611336004.M47750P3921.mail01.myserver.com 
:1611336004.M47750P3921.mail01.myserver.com,S=15917,W=16212
81186 G1611338535.M542784P10852.mail03.myserver.com 
:1611338535.M542784P10852.mail03.myserver.com,S=12651,W=12855
81187 G1611341375.M164702P13505.mail01.myserver.com 
:1611341375.M164702P13505.mail01.myserver.com,S=8795,W=8964
81189 G1611354389.M984432P14754.mail06.myserver.com 
:1611354389.M984432P14754.mail06.myserver.com,S=3038,W=3096

81191 :1611355746.M365669P10402.mail03.myserver.com,S=3049,W=3107
81193 :1611356442.M611719P20778.mail01.myserver.com,S=1203,W=1230
81194 G1611356752.M573233P27082.mail01.myserver.com 
:1611356752.M573233P27082.mail01.myserver.com,S=1210,W=1238
81195 G1611356991.M905681P30704.mail01.myserver.com 
:1611356991.M905681P30704.mail01.myserver.com,S=1220,W=1249

81197 :1611357210.M42178P1962.mail01.myserver.com,S=1220,W=1250
81199 :1611357560.M26894P7157.mail01.myserver.com,S=1233,W=1264

With "old kernel" (where all works fine) UID number are incremental and 
in the line there is one more field that start with "G1611...".


With "new kernel" (where error comes) UID number skip always a number 
and the field "G1611..." is missing.


Maciej, do you also see this behavior?

Why does Dovecot create a different uidlist line format with a different
kernel?

On 22/01/21 17:50, Maciej Milaszewski wrote:

Hi
I'm using POP/IMAP and LMTP via director, and the user goes back to the
same Dovecot node:

Current: 10.0.100.22 (expires 2021-01-22 17:42:44)
Hashed: 10.0.100.22
Initial config: 10.0.100.22

I have 6 Dovecot backends and indexes on a local SSD disk:
mail_location = maildir:~/Maildir:INDEX=/var/dovecot_indexes%h

A user never logs in to two different nodes at the same time.

I updated Debian from 8 to 9 (and to 10) and tested with kernels 4.x and
5.x, and the problem exists.
If I change the kernel to 3.16.x the problem does not exist.
I tested like this:

problem exists:
dovecot1-5 with 4.x
and
dovecot1-4 - with 3.19.x
dovecot5 - with 4.x
and
dovecot1-5 - with 5.x
and
dovecot1-4 - with 4.x
dovecot5 - with 5.x

does not exist:
dovecot1-5 - with 3.19.x

does not exist:
dovecot1-5 - with 3.19.x+kernel-care

I use NetApp with mount options:
rw,sec=sys,noexec,noatime,tcp,soft,rsize=32768,wsize=32768,intr,nordirplus,nfsvers=3,actimeo=120
I tried with nocto and without nocto.

The big guys from NetApp say "NFS 4.x needs auth via Kerberos".



On 22.01.2021 16:08, Alessio Cecchi wrote:

Hi Maciej,

I'm using LDA to deliver email into the mailbox (Maildir) and I
think (hope) that switching to LMTP via director will fix my problem,
but I don't know why it works with the old kernel and not with a
recent one.

Are you using POP/IMAP and LMTP via director, so that any update to
the Dovecot indexes is done from the same server?

On 19/01/21 16:22, Maciej Milaszewski wrote:

Hi
I use LMTP, and you?

On 19.01.2021 10:45, Alessio Cecchi wrote:

Hi Maciej,

I had the same issue when I switched dovecot backend from Cento 6 to
Centos 7.

Also my configuration is similar to you, Dovecot Direcot, Dovecot
backend that share Maildir via NFS on NetApp.

For local delivery of emails are you using LDA or LMTP? I'm using LDA.

Let me know.

Thanks

On 13/01/21 15:56, Maciej Milaszewski wrote:

Hi
I have been trying to resolve my problem with Dovecot for a few days and
I have no idea what causes it.

My environment is: dovecot director + 5 dovecot nodes

dovecot-2.2.36.4 from source
Linux 3.16.0-11-amd64
storage via NFS (NetApp)

All works fine, but when I updated the OS from Debian 8 (kernel 3.16.x)
to Debian 9 (kernel 4.9.x), I sometimes get random errors in the logs:
Broken dovecot-uidlist

example:
Error: Broken file
/vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist line 88:
Invalid data:

(for random users - sometimes 10 errors a day per node, sometimes more)

The file looks OK.

But if I change the kernel to 3.16.x the problem with "Broken file
dovecot-uidlist" does not exist;
if I turn to 4.9 or 5.x the problem exists.

I have storage via NFS with options:
rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
I tested with "nocto" and without "nocto" - nothing changes.

NFS options on the node:
mmap_disable = yes
mail_fsync = always

I bet the configuration is correct, and I wonder why the problem occurs
with some kernels and not others:
3.x.x - ok
4.x - not ok

I checked, and the user who has the problem did not connect to another
node at the same time.

I have no idea why the problem exists on kernel 4.x but not on 3.x



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice





Re: dovecot and broken uidlist

2021-01-22 Thread Alessio Cecchi

Hi Maciej,

I'm using LDA for delivery email in mailbox (Maildir) and I think(hope) 
that switching to LMTP via director will fix my problem, but I d'ont 
know why wiht old kernel works and with recent no.


Are you using POP/IMAP and LMTP via director so any update to dovecot 
indexes is done from the same server?


On 19/01/21 16:22, Maciej Milaszewski wrote:

Hi
I use LMTP, and you?

On 19.01.2021 10:45, Alessio Cecchi wrote:

Hi Maciej,

I had the same issue when I switched the Dovecot backends from CentOS 6
to CentOS 7.

My configuration is also similar to yours: Dovecot director, Dovecot
backends that share Maildirs via NFS on NetApp.

For local delivery of emails are you using LDA or LMTP? I'm using LDA.

Let me know.

Thanks

On 13/01/21 15:56, Maciej Milaszewski wrote:

Hi
I have been trying to resolve my problem with Dovecot for a few days and
I have no idea what causes it.

My environment is: dovecot director + 5 dovecot nodes

dovecot-2.2.36.4 from source
Linux 3.16.0-11-amd64
storage via NFS (NetApp)

All works fine, but when I updated the OS from Debian 8 (kernel 3.16.x)
to Debian 9 (kernel 4.9.x), I sometimes get random errors in the logs:
Broken dovecot-uidlist

example:
Error: Broken file
/vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist line 88:
Invalid data:

(for random users - sometimes 10 errors a day per node, sometimes more)

The file looks OK.

But if I change the kernel to 3.16.x the problem with "Broken file
dovecot-uidlist" does not exist;
if I turn to 4.9 or 5.x the problem exists.

I have storage via NFS with options:
rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
I tested with "nocto" and without "nocto" - nothing changes.

NFS options on the node:
mmap_disable = yes
mail_fsync = always

I bet the configuration is correct, and I wonder why the problem occurs
with some kernels and not others:
3.x.x - ok
4.x - not ok

I checked, and the user who has the problem did not connect to another
node at the same time.

I have no idea why the problem exists on kernel 4.x but not on 3.x



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: dovecot and broken uidlist

2021-01-22 Thread Alessio Cecchi

Hi Claudio,

I made a test with an NFS mount with nfsvers=4.1 and CentOS 7 as the NFS
client (our NetApp already has NFS 4.1 enabled), but the problem is
still present.


Moreover, I don't like switching to NFS 4 because it is stateful; NFS v3
is stateless, so for example during maintenance or an upgrade of the NFS
server the clients have no problems, and a reboot of the NetApp is
transparent.


I don't think the problem is related to the NetApp; I see the same error
in a customer's setup on Google Cloud (Ubuntu as the Dovecot and NFS
client, and a Google Cloud NFS volume as storage).


In my case I'm using LDA for local delivery of emails, so I hope that
switching to LMTP will resolve the issue, but I'm not sure, since other
users said that they are already using LMTP.


I don't know why old Linux distros work and recent distros have the
issue ...
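
For completeness, the NFS-oriented knobs from the Dovecot NFS
documentation, sketched here (assumption: maildirs and indexes both live
on NFS; with a working director the last two should not be needed):

mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes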


On 19/01/21 20:21, Claudio Cuqui wrote:
It's a long shot, but I would try to use nfsvers=4.1 in the NFS mount
options (instead of nfsvers=3) - if your NetApp supports it - with a
newer kernel - 4.14-stable or 4.19-stable (if possible). The reason for
that is a nasty bug found in the Linux NFS client with older kernels...


https://about.gitlab.com/blog/2018/11/14/how-we-spent-two-weeks-hunting-an-nfs-bug/

Hope this helps...

Regards,

Claudio


On Wed, 13 Jan 2021 at 12:18, Maciej Milaszewski
<maciej.milaszew...@iq.pl> wrote:


Hi
I have been trying to resolve my problem with Dovecot for a few days
and I have no idea what causes it.

My environment is: dovecot director + 5 dovecot nodes

dovecot-2.2.36.4 from source
Linux 3.16.0-11-amd64
storage via NFS (NetApp)

All works fine, but when I updated the OS from Debian 8 (kernel 3.16.x)
to Debian 9 (kernel 4.9.x), I sometimes get random errors in the logs:
Broken dovecot-uidlist

example:
Error: Broken file
/vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist line 88:
Invalid data:

(for random users - sometimes 10 errors a day per node, sometimes more)

The file looks OK.

But if I change the kernel to 3.16.x the problem with "Broken file
dovecot-uidlist" does not exist;
if I turn to 4.9 or 5.x the problem exists.

I have storage via NFS with options:
rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
I tested with "nocto" and without "nocto" - nothing changes.

NFS options on the node:
mmap_disable = yes
mail_fsync = always

I bet the configuration is correct, and I wonder why the problem occurs
with some kernels and not others:
3.x.x - ok
4.x - not ok

I checked, and the user who has the problem did not connect to another
node at the same time.

I have no idea why the problem exists on kernel 4.x but not on 3.x



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: dovecot and broken uidlist

2021-01-19 Thread Alessio Cecchi

Hi Maciej,

I had the same issue when I switched the Dovecot backends from CentOS 6
to CentOS 7.


My configuration is also similar to yours: Dovecot director, Dovecot
backends that share Maildirs via NFS on NetApp.


For local delivery of emails are you using LDA or LMTP? I'm using LDA.

Let me know.

Thanks

On 13/01/21 15:56, Maciej Milaszewski wrote:

Hi
I have been trying to resolve my problem with Dovecot for a few days and
I have no idea what causes it.

My environment is: dovecot director + 5 dovecot nodes

dovecot-2.2.36.4 from source
Linux 3.16.0-11-amd64
storage via NFS (NetApp)

All works fine, but when I updated the OS from Debian 8 (kernel 3.16.x)
to Debian 9 (kernel 4.9.x), I sometimes get random errors in the logs:
Broken dovecot-uidlist

example:
Error: Broken file
/vmail2/po/pollygraf.xxx_pg_pollygraf/Maildir/dovecot-uidlist line 88:
Invalid data:

(for random users - sometimes 10 errors a day per node, sometimes more)

The file looks OK.

But if I change the kernel to 3.16.x the problem with "Broken file
dovecot-uidlist" does not exist;
if I turn to 4.9 or 5.x the problem exists.

I have storage via NFS with options:
rw,sec=sys,noexec,noatime,tcp,hard,rsize=65536,wsize=65536,intr,nordirplus,nfsvers=3,tcp,actimeo=120
I tested with "nocto" and without "nocto" - nothing changes.

NFS options on the node:
mmap_disable = yes
mail_fsync = always

I bet the configuration is correct, and I wonder why the problem occurs
with some kernels and not others:
3.x.x - ok
4.x - not ok

I checked, and the user who has the problem did not connect to another
node at the same time.

I have no idea why the problem exists on kernel 4.x but not on 3.x



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Dovecot 2.3.13 source rpm build fails on Centos 8

2021-01-08 Thread Alessio Cecchi

On 08/01/21 03:34, st...@keptprivate.com wrote:


I tried to post this in a more nuanced way, but the fact is the latest
source RPM does not build on the latest CentOS 8.


> + sed -i 's|/etc/ssl|/etc/pki/dovecot|' doc/mkcert.sh doc/example-config/conf.d/10-ssl.conf
> + '[' -e buildinfo.commit ']'
> ++ head -1 buildinfo.commit
> + COMMIT=89f716dc2ec7362864a368d32533184b55fb7831
> ++ /bin/sh /home/build/rpmbuild/SOURCES/lsb_release -is
> /bin/sh: /home/build/rpmbuild/SOURCES/lsb_release: No such file or directory
> + ID=


Hi,

I solved it with:

cp /usr/bin/lsb_release /home/build/rpmbuild/SOURCES/lsb_release

but probably the dovecot.spec file inside the src.rpm needs a fix.
Ciao

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Pigeonhole v0.5.13 released

2021-01-06 Thread Alessio Cecchi

On 04/01/21 13:02, Aki Tuomi wrote:


We are pleased to release pigeonhole 0.5.13. You can download it from
locations below:

https://pigeonhole.dovecot.org/releases/2.3/dovecot-2.3-pigeonhole-0.5.13.tar.gz
https://pigeonhole.dovecot.org/releases/2.3/dovecot-2.3-pigeonhole-0.5.13.tar.gz.sig
Binary packages in https://repo.dovecot.org/
Docker images in https://hub.docker.com/r/dovecot/dovecot


Hi,

why, after the 2.3.10 release, are the dovecot-pigeonhole (and also
dovecot-coi) SRC RPM packages no longer available in the SRPMS repo
directory?


I need to rebuild from source for some systems.

Thanks

--

Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Strange index error on NFS after upgrade to CentOS 7 from 6

2020-08-25 Thread Alessio Cecchi

On 25/08/20 16:27, Aki Tuomi wrote:

On 25/08/2020 16:43 Alessio Cecchi  wrote:


Hi,
  
  I'm running 3 Dovecot directors and 6 Dovecot backends on CentOS 6 and Dovecot 2.3.10, with Maildirs shared over NFSv3 (on NetApp).
  
  Since CentOS 6 will be EOL in November I started to upgrade the first Dovecot backend to CentOS 7.
  
  But after the director started to direct users to the new CentOS 7 server, some errors came up for various users, an example:
  
  Aug 21 14:56:41 Error: imap(ales...@cecchi.it) session=<11sBlmKtj1qelGW/>: Mailbox INBOX: Broken or unexpectedly changed file /home/vmail/domains/cecchi.it/alessio/Maildir/dovecot-uidlist line 18664: Invalid data: - re-reading from beginning

  Aug 21 14:56:41 Error: imap(ales...@cecchi.it) session=<11sBlmKtj1qelGW/>: 
Mailbox INBOX: Broken file 
/home/vmail/domains/cecchi.it/alessio/Maildir/dovecot-uidlist line 18664: Invalid 
data:
  
  As a consequence of the error, the user sees the synchronization of his mailbox happen again.
  
  The Dovecot version and configuration are the same as on CentOS 6, and the mount parameters are also the same, so I suspect there could be some difference in the NFS client implementation on CentOS 7.
  
  This the entry on fstab:
  
  192.168.1.2:/vmail0 /mnt/vmail0 nfs rw,nfsvers=3,noatime,nodiratime,_netdev,nordirplus 0 0

  # Bind for vmail0
  /mnt/vmail0/domains /home/vmail/domains none 
x-systemd.requires=/mnt/vmail0,x-systemd.automount,bind 0 0
  
  In the archive of this list I found a similar issue but without a solution:
  
  https://dovecot.org/pipermail/dovecot/2018-October/113207.html
  
  Do you have any suggestions?

  Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice

What versions are you running mixed?

Aki


Hi Aki,

the only mixed thing is the CentOS version on the Dovecot backends:

CentoS 6 backend:

# dovecot -n
# 2.3.10 (0da0eff44): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.10 (bf8ef1c2)
# OS: Linux 2.6.32-754.29.2.el6.x86_64 x86_64 CentOS release 6.10 (Final)

Centos 7 backend:

# dovecot -n
# 2.3.10 (0da0eff44): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.10 (bf8ef1c2)
# OS: Linux 3.10.0-1127.18.2.el7.x86_64 x86_64 CentOS Linux release 
7.8.2003 (Core)


and also the dovecot configuration is the same (was copied via rsync):

$ diff dovecot-a-cos6.txt dovecot-a-cos7.txt
3,4c3,4
< # OS: Linux 2.6.32-754.29.2.el6.x86_64 x86_64 CentOS release 6.10 (Final)
< # Hostname: pop01
---
> # OS: Linux 3.10.0-1127.18.2.el7.x86_64 x86_64 CentOS Linux release 
7.8.2003 (Core)

> # Hostname: pop02
126c126
< import_environment = TZ CORE_OUTOFMEM CORE_ERROR
---
> import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID LISTEN_FDS

I have noticed this: a user had the "Invalid data" error on CentOS 7; I
moved him to CentOS 6 with doveadm director move, and the "Invalid data"
error came up again on CentOS 6, but only once, as if the
dovecot-uidlist file was wrong and needed to be fixed; after the "fix"
executed by Dovecot on CentOS 6 the file became fine.


Instead, on CentOS 7 the "Invalid data" error comes up multiple times.

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Strange index error on NFS after upgrade to CentOS 7 from 6

2020-08-25 Thread Alessio Cecchi

Hi,

I'm running 3 Dovecot directors and 6 Dovecot backends on CentOS 6 and
Dovecot 2.3.10, with Maildirs shared over NFSv3 (on NetApp).


Since CentOS 6 will be EOL in November I started to upgrade the first
Dovecot backend to CentOS 7.


But after the director started to direct users to the new CentOS 7
server, some errors came up for various users, an example:


Aug 21 14:56:41 Error: imap(ales...@cecchi.it) 
session=<11sBlmKtj1qelGW/>: Mailbox INBOX: Broken or unexpectedly 
changed file 
/home/vmail/domains/cecchi.it/alessio/Maildir/dovecot-uidlist line 
18664: Invalid data:  - re-reading from beginning
Aug 21 14:56:41 Error: imap(ales...@cecchi.it) 
session=<11sBlmKtj1qelGW/>: Mailbox INBOX: Broken file 
/home/vmail/domains/cecchi.it/alessio/Maildir/dovecot-uidlist line 
18664: Invalid data:


As a consequence of the error, the user sees the synchronization of his
mailbox happen again.


The Dovecot version and configuration are the same as on CentOS 6, and
the mount parameters are also the same, so I suspect there could be some
difference in the NFS client implementation on CentOS 7.


This the entry on fstab:

192.168.1.2:/vmail0 /mnt/vmail0 nfs 
rw,nfsvers=3,noatime,nodiratime,_netdev,nordirplus    0    0

# Bind for vmail0
/mnt/vmail0/domains /home/vmail/domains none 
x-systemd.requires=/mnt/vmail0,x-systemd.automount,bind 0 0


In the archive of this list I found a similar issue but without a solution:

https://dovecot.org/pipermail/dovecot/2018-October/113207.html

Do you have any suggestions?
Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Indexer error after upgrade to 2.3.11.3

2020-08-19 Thread Alessio Cecchi

Hi,

after the upgrade to Dovecot 2.3.11.3, from 2.3.10.1, I frequently see
these errors for different users:


Aug 18 11:02:35 Panic: indexer-worker(i...@domain.com) 
session=: file 
http-client-request.c: line 1232 (http_client_request_send_more): 
assertion failed: (req->payload_input != NULL)
Aug 18 11:02:35 Error: indexer-worker(i...@domain.com) 
session=: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x2f) 
[0x7f0ee3c828bf] -> 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x26) [0x7f0ee3c829d6] 
-> /usr/lib64/dovecot/libdovecot.so.0(+0xeb7ba) [0x7f0ee3c8d7ba] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xeb801) [0x7f0ee3c8d801] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x42ff1) [0x7f0ee3be4ff1] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_request_send_more+0x415) 
[0x7f0ee3c2ba25] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_connection_output+0x114) 
[0x7f0ee3c30994] -> /usr/lib64/dovecot/libdovecot.so.0(+0x115470) 
[0x7f0ee3cb7470] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x55) 
[0x7f0ee3ca4eb5] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xdc) 
[0x7f0ee3ca6ebc] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x5c) 
[0x7f0ee3ca4fac] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f0ee3ca51f8] -> /usr/lib64/dovecot/libdovecot.so.0(+0x8a955) 
[0x7f0ee3c2c955] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_request_finish_payload+0x21) 
[0x7f0ee3c2cbd1] -> 
/usr/lib64/dovecot/lib21_fts_solr_plugin.so(solr_connection_post_end+0x45) 
[0x7f0ee1c85d15] -> /usr/lib64/dovecot/lib21_fts_solr_plugin.so(+0x3fa0) 
[0x7f0ee1c81fa0] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x86cc) 
[0x7f0ee297f6cc] -> 
/usr/lib64/dovecot/lib20_fts_plugin.so(fts_backend_update_deinit+0x2c) 
[0x7f0ee297f74c] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0xfd04) 
[0x7f0ee2986d04] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0xff3f) 
[0x7f0ee2986f3f] -> /usr/lib64/dovecot/lib10_quota_plugin.so(+0xf64b) 
[0x7f0ee2dc764b] -> /usr/lib64/dovecot/lib01_acl_plugin.so(+0xde43) 
[0x7f0ee2fdce43] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_transaction_commit_get_changes+0x54) 
[0x7f0ee3f91db4] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_transaction_commit+0x16) 
[0x7f0ee3f91e76] -> dovecot/indexer-worker [i...@domain.com 
INBOX](+0x291c) [0x557584acb91c] -> dovecot/indexer-worker 
[i...@domain.com INBOX](+0x2e54) [0x557584acbe54] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x55) 
[0x7f0ee3ca4eb5] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xdc) 
[0x7f0ee3ca6ebc] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x5c) 
[0x7f0ee3ca4fac] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f0ee3ca51f8]
Aug 18 11:02:35 Error: indexer: Indexer worker disconnected, discarding 
1 requests for i...@domain.com
Aug 18 11:02:35 Error: imap(i...@domain.com) session=: 
indexer failed to index mailbox INBOX
Aug 18 11:02:35 Fatal: indexer-worker(i...@domain.com) 
session=: master: 
service(indexer-worker): child 24604 killed with signal 6 (core dumps 
disabled - https://dovecot.org/bugreport.html#coredumps)


I'm using FTS with Solr 6.6.5. What is this?

Thanks

--
Alessio Cecchi
Postmaster @http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: solr and dovecot 2.2.36

2020-08-18 Thread Alessio Cecchi

Hi Maciej,

version 6.6.x works fine, and probably 7.7.x too, with the schema from
Dovecot 2.3.


Ciao

On 18/08/20 14:00, Maciej Milaszewski wrote:

Hi
I have dovecot-2.2.36.4 (director) + 5 nodes dovecot (dovecot-2.2.36.4)

What version of Solr do you recommend?


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Indexer error after upgrade to 2.3.11.3

2020-08-18 Thread Alessio Cecchi

Hi,

after the upgrade to Dovecot 2.3.11.3, from 2.3.10.1, I frequently see
these errors for different users:


Aug 18 11:02:35 Panic: indexer-worker(i...@domain.com) 
session=: file 
http-client-request.c: line 1232 (http_client_request_send_more): 
assertion failed: (req->payload_input != NULL)
Aug 18 11:02:35 Error: indexer-worker(i...@domain.com) 
session=: Raw backtrace: 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_append+0x2f) 
[0x7f0ee3c828bf] -> 
/usr/lib64/dovecot/libdovecot.so.0(backtrace_get+0x26) [0x7f0ee3c829d6] 
-> /usr/lib64/dovecot/libdovecot.so.0(+0xeb7ba) [0x7f0ee3c8d7ba] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0xeb801) [0x7f0ee3c8d801] -> 
/usr/lib64/dovecot/libdovecot.so.0(+0x42ff1) [0x7f0ee3be4ff1] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_request_send_more+0x415) 
[0x7f0ee3c2ba25] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_connection_output+0x114) 
[0x7f0ee3c30994] -> /usr/lib64/dovecot/libdovecot.so.0(+0x115470) 
[0x7f0ee3cb7470] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x55) 
[0x7f0ee3ca4eb5] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xdc) 
[0x7f0ee3ca6ebc] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x5c) 
[0x7f0ee3ca4fac] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f0ee3ca51f8] -> /usr/lib64/dovecot/libdovecot.so.0(+0x8a955) 
[0x7f0ee3c2c955] -> 
/usr/lib64/dovecot/libdovecot.so.0(http_client_request_finish_payload+0x21) 
[0x7f0ee3c2cbd1] -> 
/usr/lib64/dovecot/lib21_fts_solr_plugin.so(solr_connection_post_end+0x45) 
[0x7f0ee1c85d15] -> /usr/lib64/dovecot/lib21_fts_solr_plugin.so(+0x3fa0) 
[0x7f0ee1c81fa0] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0x86cc) 
[0x7f0ee297f6cc] -> 
/usr/lib64/dovecot/lib20_fts_plugin.so(fts_backend_update_deinit+0x2c) 
[0x7f0ee297f74c] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0xfd04) 
[0x7f0ee2986d04] -> /usr/lib64/dovecot/lib20_fts_plugin.so(+0xff3f) 
[0x7f0ee2986f3f] -> /usr/lib64/dovecot/lib10_quota_plugin.so(+0xf64b) 
[0x7f0ee2dc764b] -> /usr/lib64/dovecot/lib01_acl_plugin.so(+0xde43) 
[0x7f0ee2fdce43] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_transaction_commit_get_changes+0x54) 
[0x7f0ee3f91db4] -> 
/usr/lib64/dovecot/libdovecot-storage.so.0(mailbox_transaction_commit+0x16) 
[0x7f0ee3f91e76] -> dovecot/indexer-worker [i...@domain.com 
INBOX](+0x291c) [0x557584acb91c] -> dovecot/indexer-worker 
[i...@domain.com INBOX](+0x2e54) [0x557584acbe54] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_call_io+0x55) 
[0x7f0ee3ca4eb5] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0xdc) 
[0x7f0ee3ca6ebc] -> 
/usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x5c) 
[0x7f0ee3ca4fac] -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x38) 
[0x7f0ee3ca51f8]
Aug 18 11:02:35 Error: indexer: Indexer worker disconnected, discarding 
1 requests for i...@domain.com
Aug 18 11:02:35 Error: imap(i...@domain.com) session=: 
indexer failed to index mailbox INBOX
Aug 18 11:02:35 Fatal: indexer-worker(i...@domain.com) 
session=: master: 
service(indexer-worker): child 24604 killed with signal 6 (core dumps 
disabled - https://dovecot.org/bugreport.html#coredumps)


I'm using FTS with Solr 6.6.5. What is causing this?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



SPECIAL-USE and Outlook folders

2020-05-07 Thread Alessio Cecchi

Hi,

in recent versions of Outlook it seems impossible to remap the Sent (and 
also Drafts) folders so that they are shown with their localized names.


I have enabled "special_use" in 10-mailbox.conf but it is still not 
working:


namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Spam {
    auto = subscribe
    special_use = \Junk
  }
  mailbox Trash {
    auto = subscribe
    special_use = \Trash
  }
  prefix =
  separator = /
}

On the Internet someone suggested adding the "XLIST" imap_capability, but I 
haven't understood whether that is the solution.
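For what it's worth, a quick way to see what the server actually advertises 
is to log in and inspect the LIST reply. A minimal sketch (host and 
credentials are placeholders; with special_use configured, Dovecot normally 
includes the special-use attributes directly in a plain LIST reply, which is 
what RFC 6154-aware clients key on -- XLIST is only the older pre-standard 
variant of the same idea):

import imaplib

# Placeholders -- substitute a real host and account before running.
conn = imaplib.IMAP4_SSL("mail.example.com")
conn.login("user@example.com", "secret")

print("CAPABILITIES:", conn.capabilities)  # look for SPECIAL-USE here

# With special_use configured, Dovecot normally includes attributes
# such as \Sent and \Drafts directly in the LIST reply:
typ, listing = conn.list()
for line in listing:
    print(line.decode())

conn.logout()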


Does anyone have any suggestions?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: net_connect_unix(imap) failed

2020-04-04 Thread Alessio Cecchi

Hi Philips,

I have the same error after upgrade from dovecot 2.2 to dovecot 2.3, 
could be related to "service stats",


try grepping the dovecot.log for "reached" and look for an error like:

dovecot: master: Warning: service(stats): client_limit (1000) reached, 
client connections are being dropped


If you find it add to 10-master.conf:

service stats {
  client_limit = 10240
  unix_listener stats-writer {
    mode = 0660
    #user = vmail
    #group = vmail
  }
}

then restart dovecot.

Let me know if this solves it, or if you find other limits reached in the log.
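As an illustration, here is a small hedged helper that counts such warnings 
per service; the regex is modelled on the warning line quoted above, so 
adjust it to your own log format:

import re
import sys
from collections import Counter

# Count "limit reached" warnings per Dovecot service in a log file.
# Usage: python3 check_limits.py /var/log/dovecot.log
pattern = re.compile(
    r"service\(([\w-]+)\): (client_limit|process_limit) \((\d+)\) reached")

hits = Counter()
with open(sys.argv[1], errors="replace") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            hits[(match.group(1), match.group(2))] += 1

for (service, limit), count in hits.most_common():
    print(f"{service}: {limit} hit {count} times")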

Ciao

On 31/03/20 18:22, Philipp Ewald wrote:

Hello everyone,

we have a huge problem with dovecot and IMAP connections.

we got the following errors: 39665 today

dovecot: imap-login: Error: master(imap): net_connect_unix(imap) 
failed: Resource temporarily unavailable - 
http://wiki2.dovecot.org/SocketUnavailable (client-pid=29066, 
client-id=1755, rip=IP, created 562 msecs ago, received 0/4 bytes)


we think this may be a problem with authorization taking too long? 
Authorization is not local and uses SQL.


i found the following in source code of Dovecot:
#define SOCKET_CONNECT_RETRY_MSECS 500
#define SOCKET_CONNECT_RETRY_WARNING_INTERVAL_SECS 2
[...]
i_error("master(%s): %s (client-pid=%u, client-id=%u, rip=%s, created 
%u msecs ago, received %u/%zu bytes)",


This is no process limit problem:

ps auxf | grep -c "[d]ovecot/imap$"
688

ps auxf | grep -c "[d]ovecot/imap-login$"
100


cat /proc/`pidof dovecot`/limits
Limit Soft Limit   Hard Limit   Units
Max cpu time  unlimited unlimited    seconds
Max file size unlimited unlimited    bytes
Max data size unlimited unlimited    bytes
Max stack size    8388608 unlimited    bytes
Max core file size    0 unlimited    bytes
Max resident set  unlimited unlimited    bytes
Max processes 64053 64053    processes
Max open files    65535 65535    files
Max locked memory 65536 65536    bytes
Max address space unlimited unlimited    bytes
Max file locks    unlimited unlimited    locks
Max pending signals   64053 64053    signals
Max msgqueue size 819200 819200   bytes
Max nice priority 0    0
Max realtime priority 0    0
Max realtime timeout  unlimited unlimited    us

ulimit -n
1024


dovecot --version
2.3.4.1 (f79e8e7e4)


protocols = imap pop3
service imap-login {
  process_min_avail = 4
  service_count = 0
}
service imap {
  process_limit = 4096
}
service pop3-login {
  process_min_avail = 4
  service_count = 0
}
service pop3 {
  process_limit = 4096
}



Can someone explain why we get this error and how to fix it? If you need 
more information please tell me.




--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Deliver log with "msgid=?" when email come from Exchange/Office 365

2020-03-27 Thread Alessio Cecchi

Hi,

I noticed that all emails that come from Exchange/Office 365 have their 
message-id reported in the deliver log as


msgid=? 

instead of expected

msgid=

My deliver_log format is:

deliver_log_format = deliverytime=%{delivery_time}, msgid=%m, sender=%e, 
from=%f, subject="%s": %$


The behaviour is the same on Dovecot 2.2 and 2.3, and only Dovecot logs 
the "?"; we have a mail filter in front of Dovecot and there the 
message-id is registered fine, without the "?" in front.


Is this an issue with my setup or a bug in Dovecot?
Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Director with dovecot 2.3 and Panic/Fatal error

2020-03-19 Thread Alessio Cecchi

Hi,

after upgrading our director ring to Dovecot 2.3 we sometimes find 
errors like this in the log:


Mar 18 14:22:51 Panic: imap-login: file iostream-openssl.c: line 599 
(openssl_iostream_handle_error): assertion failed: (errno != 0)
Mar 18 14:22:51 Fatal: imap-login: master: service(imap-login): child 
1726 killed with signal 6 (core dumps disabled - 
https://dovecot.org/bugreport.html#coredumps)


Backend is still Dovecot 2.2.36 and Director is 2.3.10.
I hope it can be fixed.

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Headsup on feature removal

2020-03-19 Thread Alessio Cecchi


On 19/03/20 02:01, John Stoffel wrote:

Alessio> ### user_query for vpopmail
Alessio> user_query = SELECT pw_dir AS home, 89 AS uid, 89 AS gid,
Alessio> concat('*:backend=', pw_shell) AS quota_rule FROM vpopmail
Alessio> WHERE pw_name = '%n' AND pw_domain = '%d'

Careful!  You need to explain whether 89 is the UID and GID of the
vpopmail user account, or some other account. I don't use either of
these auth methods, but this just struck me as a little magical.


Hi John,

what you said is true, but historically in vpopmail environments the uid and 
gid are usually hardcoded to 89.


Anyone can check their uid and gid with the "id vpopmail" command from a shell 
and update as necessary.


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Quota plugin and director

2020-03-18 Thread Alessio Cecchi

Ciao Simone,

why do you want each backend to recalc quota only for its managed users, 
instead of running "doveadm quota recalc -A" just once from one backend, 
which recalculates quota for all users?


On 10/03/20 11:18, Simone Lazzaris wrote:


Hello dovecot!

I administer a dovecot installation with 30k users. I've got 4 dovecot 
directors as frontend and 10 backends.


The mailbox are now in maildir format, with maildir++ quota, on a 
shared netapp filer. Indexes are local on each backend.


I'm reconfiguring the quota plugin: as a first step, I want to use the 
clone plugin to keep a copy of the quota on a redis database. Next, 
I'm going to use the "count" quota backend.


I've configured without (many) issues the quota clone plugin, but now 
I want to force the recalculation on all the mailboxes, because I've 
got some (not many, but some) mailboxes that are mostly unused and are 
not refreshed.


At first, I was going to use "doveadm quota recalc -A", but I want 
each backend to perform the recalculation ONLY for the users it is 
managing.


I can't perform "doveadm quota recalc -A" on the directors, because 
the quota plugin is enabled only on the backends.


I can parse the user mapping on the directors and split the 
calculation, one user at a time, across the backends, but I feel I'm 
choosing an overly complicated path (a rough sketch of this approach 
follows below).


So which is the right way to do this?

Thanks.
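For illustration, a hedged sketch of the mapping-split approach described 
above. `doveadm director map` lists the current user-to-backend assignments; 
the exact output columns vary by version, so treat the parsing (and the ssh 
invocation) as placeholders to verify against your own setup:

import subprocess
from collections import defaultdict

# Ask a director for its current user -> backend assignments.
# Column layout is assumed ("user host ..."); verify it against the
# output of `doveadm director map` on your own version first.
mapping = subprocess.run(
    ["doveadm", "director", "map"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

users_per_backend = defaultdict(list)
for line in mapping[1:]:           # skip the header line
    fields = line.split()
    if len(fields) >= 2:
        users_per_backend[fields[1]].append(fields[0])

# Run the recalculation on the backend that currently owns each user;
# ssh is just one possible transport to reach the backends.
for host, users in sorted(users_per_backend.items()):
    for user in users:
        subprocess.run(
            ["ssh", host, "doveadm", "quota", "recalc", "-u", user],
            check=False)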

--

Simone Lazzaris
Staff R&D, Qcom S.p.A. a Socio Unico
(a company subject to the direction and coordination of Intred S.p.A.)
Via Roggia Vignola, 9 | 24047 Treviglio (BG)
T +39 0363 47905 | D +39 0363 1970352
simone.lazza...@qcom.it | www.qcom.it
Qcom Official Pages: LinkedIn <https://www.linkedin.com/company/qcom-spa> | Facebook <http://www.facebook.com/qcomspa>



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Headsup on feature removal

2020-03-18 Thread Alessio Cecchi

Hi Aki and Remo,

switching from the vpopmail driver to the SQL driver (if you are using 
vpopmail with MySQL as backend) is very simple.


First you need to set up the right queries for the vpopmail database:

# cat /etc/dovecot/dovecot-sql.conf.ext

### Vpopmail
driver = mysql
connect = host=192.168.1.2 dbname=vpopmail user=vpopmail password=Vp0pM4iL
default_pass_scheme = MD5-CRYPT

### Query to get a list of all usernames.
iterate_query = SELECT CONCAT(pw_name, '@', pw_domain) AS user FROM vpopmail

### user_query for vpopmail
user_query = SELECT pw_dir AS home, 89 AS uid, 89 AS gid, 
concat('*:backend=', pw_shell) AS quota_rule FROM vpopmail WHERE pw_name 
= '%n' AND pw_domain = '%d'


### password_query for vpopmail (not used)
#password_query = SELECT CONCAT(pw_name, '@', pw_domain) AS user, 
pw_passwd AS password FROM vpopmail WHERE pw_name = '%n' AND pw_domain = 
'%d'


### password_query for vpopmail with prefetch
password_query = SELECT CONCAT(pw_name, '@', pw_domain) AS user, 
pw_passwd AS password, concat('*:backend=', pw_shell) as 
userdb_quota_rule, 89 AS userdb_uid, 89 AS userdb_gid, pw_dir AS 
userdb_home FROM vpopmail WHERE pw_name = '%n' AND pw_domain = '%d'


then set up auth-sql like this:

# cat /etc/dovecot/conf.d/auth-sql.conf.ext

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}

userdb {
  driver = prefetch
}

userdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}

and finally switch from auth-vpopmail to auth-sql in 
/etc/dovecot/conf.d/10-auth.conf.


You can also set up Dovecot to apply the vpopmail 
POP/IMAP/SMTP/Webmail gid/domain limits, for example with a more 
complicated password query like this:


password_query = SELECT CONCAT(pw_name, '@', pw_domain) AS user, 
pw_passwd AS password, concat('*:backend=', pw_shell) as 
userdb_quota_rule, 89 AS userdb_uid, 89 AS userdb_gid, pw_dir AS 
userdb_home FROM vpopmail LEFT JOIN limits ON vpopmail.pw_domain = 
limits.domain WHERE pw_name = '%n' AND pw_domain='%d' AND (( '%s' = 
'smtp' AND (pw_gid & 2048)<>2048 AND COALESCE(disable_smtp,0)!=1) OR 
('%s' = 'pop3' AND (pw_gid & 2)<>2 AND COALESCE(disable_pop,0) != 1 ) OR 
('%s' = 'imap' AND ('%r'='192.168.100.1' OR '%r'='192.168.100.2') AND 
(pw_gid & 4)<>4 AND COALESCE(disable_webmail,0)!=1) OR ('%s' = 'imap' 
AND ('%r'!='192.168.100.1' AND '%r'!='192.168.100.2') AND (pw_gid & 
8)<>8 AND COALESCE(disable_imap,0)!=1));


where 192.168.100.1 and 192.168.100.2 are the IPs of your webmail servers.
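To make the bitmask conditions in that query easier to follow, here is a small 
illustrative sketch. The bit values are read directly from the query above 
(2 = POP disabled, 4 = webmail disabled, 8 = IMAP disabled, 2048 = SMTP 
disabled), and it deliberately ignores the per-domain limits table that the 
SQL also consults:

# Bit values as used in the query above; a set bit means "disabled".
SERVICE_BITS = {"pop3": 2, "webmail": 4, "imap": 8, "smtp": 2048}
WEBMAIL_IPS = {"192.168.100.1", "192.168.100.2"}  # same placeholders as above

def service_allowed(service: str, remote_ip: str, pw_gid: int) -> bool:
    """Mirror the SQL conditions: IMAP logins coming from the webmail
    IPs are checked against the webmail bit, everything else against
    its own service bit."""
    if service == "imap" and remote_ip in WEBMAIL_IPS:
        service = "webmail"
    bit = SERVICE_BITS[service]
    return (pw_gid & bit) != bit

# pw_gid 10 = 2 + 8 -> POP3 and IMAP disabled, SMTP still allowed:
print(service_allowed("pop3", "1.2.3.4", 10))   # False
print(service_allowed("smtp", "1.2.3.4", 10))   # True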

For a nicer setup, and to show "user disabled" in the Dovecot logs 
instead of "password error", you can put this password_query in the 
Dovecot auth-deny.conf.ext configuration.


If you need more help or info I can help you.

Ciao

On 18/03/20 18:26, Aki Tuomi wrote:

Hi!

I understand that it is not trivial to move away from vpopmail and that it does 
require changing a working system. But then again, one should be able to 
configure a MySQL passdb/userdb with the vpopmail schema.

I am not familiar with vpopmail but if someone comes with instructions we can 
polish them a bit (if necessary) and publish them as howto on doc.dovecot.org.

Aki


On 18/03/2020 17:52 Remo Mattei  wrote:


So I am one of the many users with qmail, using vpopmail auth. I guess, 
chatting with some other guys on the other mailing list, we will convert to 
the MySQL driver, but this is a lot of work for many people.

I do understand dropping things, but a valid solution needs to be proposed.

Remo


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Dovecot Director upgrade from 2.2 to 2.3

2019-09-10 Thread Alessio Cecchi via dovecot

On 22/07/19 18:49, Timo Sirainen wrote:

On 22 Jul 2019, at 17.45, Alessio Cecchi wrote:


one server of the ring has now been running Dovecot 2.3.7 for 3 days 
and works fine with the other Dovecot 2.2 servers.


I only notice that the load average of this CentOS 7 server is higher 
compared with CentOS 6 and Dovecot 2.2, but I don't know if it is 
related to the new operating system or to Dovecot (the hardware is the same).


How much higher? Can you check the individual dovecot processes' CPU 
usage? I guess mainly director, imap-login and pop3-login. The 
director code should be pretty much the same though.


The SSL code in login processes changed in v2.3, so I wonder if the 
new code has some performance issues.


Just for info,

I have upgraded the whole director ring to Dovecot 2.3 while keeping CentOS 
6, and the load average is the same, so the increase in load during the 
first upgrade was caused by CentOS 7, not by Dovecot 2.3.


Ciao

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Dovecot Director upgrade from 2.2 to 2.3

2019-07-31 Thread Alessio Cecchi via dovecot

Hi Timo,

here you can see two images with the load average and CPU usage with 
Dovecot 2.2 (Centos 6) and 2.3 (Centos 7) on the same hardware and same 
configuration:


https://imgur.com/a/1hsItlc

The load average increase is significant but CPU usage is similar.

On 22/07/19 18:49, Timo Sirainen wrote:

On 22 Jul 2019, at 17.45, Alessio Cecchi wrote:


one server of the ring has now been running Dovecot 2.3.7 for 3 days 
and works fine with the other Dovecot 2.2 servers.


I only notice that the load average of this CentOS 7 server is higher 
compared with CentOS 6 and Dovecot 2.2, but I don't know if it is 
related to the new operating system or to Dovecot (the hardware is the same).


How much higher? Can you check the individual dovecot processes' CPU 
usage? I guess mainly director, imap-login and pop3-login. The 
director code should be pretty much the same though.


The SSL code in login processes changed in v2.3, so I wonder if the 
new code has some performance issues.



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Dovecot Director upgrade from 2.2 to 2.3

2019-07-22 Thread Alessio Cecchi via dovecot

On 18/07/19 21:42, Timo Sirainen wrote:
On 18 Jul 2019, at 11.44, Alessio Cecchi via dovecot wrote:


Hi,

I have a setup with 3 Dovecot Director v2.2.36 and 
director_consistent_hashing = yes ;-)


Now I would like to upgrade to 2.3.7, first only the Directors and 
afterwards also the Backends.


Can a director ring work fine with mixed 2.2 and 2.3 versions?

My idea is to set up a new Director server with 2.3, stop one server 
with 2.2 and insert the new 2.3 server into the current ring to check 
whether it works fine.


If all works fine I will replace all 2.2 Directors with the 2.3 version.



There's no known reason why it wouldn't work. But be prepared in case 
there is an unknown reason.


Hi,

one server of the ring has now been running Dovecot 2.3.7 for 3 days and 
works fine with the other Dovecot 2.2 servers.


I only notice that the load average of this CentOS 7 server is higher 
compared with CentOS 6 and Dovecot 2.2, but I don't know if it is related 
to the new operating system or to Dovecot (the hardware is the same).


If I do not encounter problems in the next few days I will also update the 
other Directors.


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Dovecot Director upgrade from 2.2 to 2.3

2019-07-18 Thread Alessio Cecchi via dovecot

Hi,

I have a setup with 3 Dovecot Director v2.2.36 and 
director_consistent_hashing = yes ;-)


Now I would like to upgrade to 2.3.7, first only the Directors and afterwards 
also the Backends.


Can a director ring work fine with mixed 2.2 and 2.3 versions?

My idea is to set up a new Director server with 2.3, stop one server with 
2.2 and insert the new 2.3 server into the current ring to check whether it works fine.


If all works fine I will replace all 2.2 Directors with the 2.3 version.

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Error: o_stream_send_istream and Disconnected in APPEND

2019-07-15 Thread Alessio Cecchi via dovecot


On 11/07/19 23:31, Timo Sirainen wrote:
On 11 Jul 2019, at 10.13, Alessio Cecchi via dovecot wrote:


Hi,

I'm running some Dovecot servers configured with LVS + Director + 
Backend + NFS and version 2.2.36.3 (a7d78f5a2).


In the last days I see an increased number of these error:

Error: 
o_stream_send_istream(/nfs/mail/company.com/info/Maildir/.Sent/tmp/1562771349.M255624P9151.pop01) 
failed: Broken pipe


always with action "Disconnected in APPEND", when users try to upload 
a message in Sent or Drafts.




I think it simply means that Dovecot sees that the client disconnected 
while it was APPENDing the mail. Although I don't know why they would 
suddenly start now. And I especially don't understand why the error is 
"Broken pipe". Dovecot uses it internally when it closes input 
streams, so it's possibly that, but then why isn't that happening 
elsewhere.. Did you upgrade your kernel recently? I guess it's 
also possible that there is some bug in Dovecot, but I don't remember 
any changes related to this for a long time.


I guess it could actually be writing as well, because "Broken pipe" is 
set also for closed output streams, so maybe some failed NFS write 
could cause it (although it really should have logged a different 
error in that case, so if that was the reason this is a bug).


Dovecot v2.3.x would log a different error depending on if the problem 
was reading or writing, which would make this clearer.


Thanks Timo,

the operating system has been CentOS 6 for years, and we do regular updates 
whenever they are available. The NFS storage has also been the same for years.


The error did not just start showing up now; I have seen it occasionally 
since 2017, but after the last network upgrade (firewall and switch) it 
occurs more frequently. And only for Thunderbird and sometimes Apple iOS Mail.


The error never occurred on the old servers, when we didn't have a physical 
firewall but used iptables on the individual servers and the network 
interfaces had MTU 9000; however, many other components were updated in 
the meantime.


We will upgrade to Dovecot 2.3 in the next months.

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Sieve problem with duplicate and fileinto in the same set of rules

2019-07-15 Thread Alessio Cecchi via dovecot

On 15/07/19 14:34, Gianluca Scaglia via dovecot wrote:


Hi there,

on my mail server (postfix, dovecot 2.2.27 in Debian 9) I have an 
automatic forwarding (with sender_bcc_maps in Postfix) for all the 
emails sent in smtp from the same server, that are then put in the 
Sent folder with a sieve rule.


In this way, however, when a user sends an e-mail to himself, both 
copies end up in the Sent folder and it's not good.


To resolve, I tried using the Sieve “duplicate” extension along with 
“fileinto” but I can't get it to work.



I solved the problem of duplicated email with this Sieve rule:

require ["duplicate", "fileinto", "mailbox"];

if duplicate :seconds 60 {
    fileinto "Trash";
}

Hope this can help you.

Ciao

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Error: o_stream_send_istream and Disconnected in APPEND

2019-07-11 Thread Alessio Cecchi via dovecot

Hi,

with a Python script I have found that all users that have these errors 
are using Thunderbird:


# ./check.py /var/log/dovecot/dovecot.log
 ==> name=Thunderbird, version=60.7.2
 ==> name=Thunderbird, version=60.7.2
 ==> name=Thunderbird, version=60.7.2
 ==> name=Thunderbird, version=60.7.2
 ==> name=Thunderbird, version=60.7.2
 ==> name=Thunderbird, version=60.8.0
 ==> name=Thunderbird, version=60.7.2
 ==> name=Thunderbird, version=60.7.2
 ==> name=Thunderbird, version=60.7.2
<8WsIY1CN5uoCLHgk> ==> name=Thunderbird, version=60.7.2
 ==> name=Thunderbird, version=60.7.2
<4ezjaFCNfJVQEudp> ==> name=Thunderbird, version=60.7.2

So it seems to be a problem only with Thunderbird, but why? And can it be 
mitigated on our side?
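The actual check.py was never posted, so purely as an illustration, a 
hypothetical reconstruction of such a correlation script might look like 
this (the session= and "ID sent" field names are taken from the log 
excerpts quoted in this thread; everything else is an assumption):

import re
import sys

# Correlate "Broken pipe" APPEND errors with the IMAP ID (client name
# and version) that was logged for the same session.
id_re = re.compile(r"session=<([^>]+)>: ID sent: (name=[^,]+, version=\S+)")
err_re = re.compile(r"session=<([^>]+)>: Error: o_stream_send_istream")

clients = {}
failed_sessions = set()
with open(sys.argv[1], errors="replace") as log:
    for line in log:
        m = id_re.search(line)
        if m:
            clients[m.group(1)] = m.group(2)
            continue
        m = err_re.search(line)
        if m:
            failed_sessions.add(m.group(1))

for session in sorted(failed_sessions):
    print(f"<{session}> ==> {clients.get(session, 'unknown client')}")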


Thanks

On 11/07/19 09:13, Alessio Cecchi via dovecot wrote:


Hi,

I'm running some Dovecot servers configured with LVS + Director + 
Backend + NFS and version 2.2.36.3 (a7d78f5a2).


In the last days I see an increased number of these error:

Error: 
o_stream_send_istream(/nfs/mail/company.com/info/Maildir/.Sent/tmp/1562771349.M255624P9151.pop01) 
failed: Broken pipe


always with action "Disconnected in APPEND", when users try to upload 
a message in Sent or Drafts.


Here the logs:


Director:
Jul 10 14:53:06 imap-login: Info: proxy(i...@company.com): started 
proxying to 10.0.0.20:143: user=, method=PLAIN, 
rip=84.1.2.3, lip=195.1.2.3, lport=993, TLS, session=
Jul 10 15:09:56 imap-login: Info: proxy(i...@company.com): 
disconnecting 84.1.2.3 (Disconnected by client: EOF(0s idle, 
in=648413, out=6602)): user=, method=PLAIN, 
rip=84.1.2.3, lip=195.1.2.3, lport=993, TLS, session=


Backend:
Jul 10 14:53:06 pop01 dovecot: imap-login: ID sent: 
x-session-id=eA6XKFONwP1f5Sh4, x-originating-ip=84.1.2.3, 
x-originating-port=64960, x-connected-ip=195.1.2.3, 
x-connected-port=993, x-proxy-ttl=4: user=<>, rip=84.1.2.3, 
lip=195.1.2.3, secured, session=
Jul 10 14:53:06 pop01 dovecot: imap-login: Login: 
user=, method=PLAIN, rip=84.1.2.3, lip=195.1.2.3, 
mpid=9151, secured, session=
Jul 10 14:53:07 pop01 dovecot: imap(i...@company.com) 
session=: ID sent: name=Thunderbird, version=60.7.2
Jul 10 15:09:56 pop01 dovecot: imap(i...@company.com) 
session=: Error: 
o_stream_send_istream(/nfs/mail/company.com/info/Maildir/.Sent/tmp/1562771349.M255624P9151.pop01) 
failed: Broken pipe
Jul 10 15:09:57 pop01 dovecot: imap(i...@company.com) 
session=: Disconnected in APPEND (1 msgs, 48 secs, 
0/2657336 bytes) in=886661 out=26654 del=1 expu=0 trash=0


I can't understand if it is a network problem (Firewall? Load Balancer? 
Switch?) or a problem in the users' LAN. Or something else.


When this happens the user sees an error message like "Unable to save email 
in Sent folder".


Any suggestions?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Error: o_stream_send_istream and Disconnected in APPEND

2019-07-11 Thread Alessio Cecchi via dovecot

Hi,

I'm running some Dovecot servers configured with LVS + Director + 
Backend + NFS and version 2.2.36.3 (a7d78f5a2).


In the last days I see an increased number of these error:

Error: 
o_stream_send_istream(/nfs/mail/company.com/info/Maildir/.Sent/tmp/1562771349.M255624P9151.pop01) 
failed: Broken pipe


always with action "Disconnected in APPEND", when users try to upload a 
message in Sent or Drafts.


Here the logs:


Director:
Jul 10 14:53:06 imap-login: Info: proxy(i...@company.com): started 
proxying to 10.0.0.20:143: user=, method=PLAIN, 
rip=84.1.2.3, lip=195.1.2.3, lport=993, TLS, session=
Jul 10 15:09:56 imap-login: Info: proxy(i...@company.com): disconnecting 
84.1.2.3 (Disconnected by client: EOF(0s idle, in=648413, out=6602)): 
user=, method=PLAIN, rip=84.1.2.3, lip=195.1.2.3, 
lport=993, TLS, session=


Backend:
Jul 10 14:53:06 pop01 dovecot: imap-login: ID sent: 
x-session-id=eA6XKFONwP1f5Sh4, x-originating-ip=84.1.2.3, 
x-originating-port=64960, x-connected-ip=195.1.2.3, 
x-connected-port=993, x-proxy-ttl=4: user=<>, rip=84.1.2.3, 
lip=195.1.2.3, secured, session=
Jul 10 14:53:06 pop01 dovecot: imap-login: Login: 
user=, method=PLAIN, rip=84.1.2.3, lip=195.1.2.3, 
mpid=9151, secured, session=
Jul 10 14:53:07 pop01 dovecot: imap(i...@company.com) 
session=: ID sent: name=Thunderbird, version=60.7.2
Jul 10 15:09:56 pop01 dovecot: imap(i...@company.com) 
session=: Error: 
o_stream_send_istream(/nfs/mail/company.com/info/Maildir/.Sent/tmp/1562771349.M255624P9151.pop01) 
failed: Broken pipe
Jul 10 15:09:57 pop01 dovecot: imap(i...@company.com) 
session=: Disconnected in APPEND (1 msgs, 48 secs, 
0/2657336 bytes) in=886661 out=26654 del=1 expu=0 trash=0


I can't understand if it is a network problem (Firewall? Load Balancer? 
Switch?) or a problem in the users' LAN. Or something else.


When this happens the user sees an error message like "Unable to save email in 
Sent folder".


Any suggestions?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: masteruser can not select INBOX

2019-04-09 Thread Alessio Cecchi via dovecot

Hi,

you can do it via a post-login script, as explained at 
https://wiki.dovecot.org/Authentication/MasterUsers


I have a post login script similar to:

#!/bin/bash
# Local part: strip the @domain suffix (user@example.com -> user)
export USERNAME=${USER%@*}
# Domain part: strip the user@ prefix (user@example.com -> example.com)
export DOMAIN=${USER#*@}
# Hand over to the real post-login target passed by Dovecot
exec "$@"

and works fine.

Ciao

On 09/04/19 09:46, Ludwig Wieland via dovecot wrote:

Thank you,

How and where ?


I configured only this:
cat /Library/Server/Mail/Data/shared/shared-mailboxes
* user=masteruser lr


Is masteruser ok for all masters (mailmaster)?

Luda

On 09.04.2019 at 09:33, Aki Tuomi wrote:


Hi!

You need to grant the master user rights in your ACL file.

Aki



--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Test SASL authentication via telnet (or similar)

2019-03-28 Thread Alessio Cecchi via dovecot

Hi,

I'm looking for a way to authenticate my email users via Dovecot SASL TCP 
connections from an external Node.js or Python script.


The Dovecot configuration is fine: if I set smtpd_sasl_path = 
inet:127.0.0.1:12345 in Postfix, it works.


But if I try to chat with SASL via "telnet 127.0.0.1 12345", in the Dovecot 
log I find:


dovecot: auth: Error: Authentication client not compatible with this 
server (mixed old and new binaries?)


What is the right way to talk to SASL over the TCP port?
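The error above is typically what you get when the VERSION/CPID handshake of 
Dovecot's auth protocol is missing, which raw telnet cannot easily provide. 
A minimal hedged sketch of an auth client follows; the protocol lines follow 
the Dovecot auth-protocol documentation, and host, port and credentials are 
placeholders -- verify the details against your own Dovecot version:

import base64
import os
import socket

# Placeholders: the inet listener configured above and a test account.
HOST, PORT = "127.0.0.1", 12345
USER, PASSWORD = "user@example.com", "secret"

sock = socket.create_connection((HOST, PORT))
stream = sock.makefile("rwb")

# The handshake plain telnet never sends: protocol version + client pid.
stream.write(b"VERSION\t1\t1\nCPID\t%d\n" % os.getpid())
stream.flush()

# Read the server greeting (VERSION/MECH/... lines) up to DONE.
for line in stream:
    if line.startswith(b"DONE"):
        break

# PLAIN mechanism: base64("\0user\0password").
token = base64.b64encode(b"\0" + USER.encode() + b"\0" + PASSWORD.encode())
stream.write(b"AUTH\t1\tPLAIN\tservice=smtp\tresp=" + token + b"\n")
stream.flush()

# Expect "OK<TAB>1<TAB>user=..." on success, "FAIL<TAB>1..." otherwise.
print(stream.readline().decode().strip())
sock.close()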

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



FETCH BODY vs FETCH BODYSTRUCTURE + FETCH BODY.PEEK

2018-11-15 Thread Alessio Cecchi

Hello,

we are developing a web page, for a ticket system, to show the email 
messages in an IMAP folder ordered by largest attachment size, with a 
pagination of 100 messages per page. Each row represents a message with 
DATE, FROM, SUBJECT, the preview (the first 500 chars of the text/html 
of the message) and the list of the attachments (name + size) it contains.


The problem is that the page load is VERY slow, because we fetch the 
entire BODY of all 100 messages in order to get the preview and the 
attachments' names and sizes.


Now, considering that each message can contain up to 50 MB of 
attachments, we risk downloading 5000 MB from IMAP for every page.

An option might be to perform two FETCH for each message:

- one fetch with just the BODYSTRUCTURE (where we can get the 
attachments' names and sizes and the text/html parts).


- one fetch with the body parts (BODY.PEEK[]) we need to build the 
preview (text + html).


We are wondering if there is a single FETCH command to get the text and 
html parts (needed to build the preview) with only the name and the size 
of the attachments without downloading the entire message.
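For what it's worth, FETCH does let you combine several items in one command, 
and the partial-fetch syntax BODY.PEEK[part]<start.octets> caps how much body 
is downloaded. A rough imaplib sketch under the assumption that part 1 is the 
text part -- in reality the right part number has to be picked from the parsed 
BODYSTRUCTURE, so messages with unusual MIME layouts still need a second, 
targeted FETCH:

import imaplib

# Placeholders for host and account.
conn = imaplib.IMAP4_SSL("mail.example.com")
conn.login("user@example.com", "secret")
conn.select("INBOX", readonly=True)

# One FETCH per page: the BODYSTRUCTURE (attachment names and sizes are
# in its part parameters) plus only the first 500 octets of part 1 as a
# preview, thanks to the partial-fetch syntax <start.octets>.
typ, data = conn.fetch("1:100", "(BODYSTRUCTURE BODY.PEEK[1]<0.500>)")
for item in data:
    print(item)

conn.logout()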


Thanks.

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



autoexpunged in IMAP logout format

2018-11-14 Thread Alessio Cecchi

Hi,

in my imap_logout_format I register %{deleted} %{expunged} %{trashed}, 
but now I saw that %{autoexpunged} is also available.


Is it necessary to also add %{autoexpunged} to the logout format to get a 
count of all emails deleted via IMAP?


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Help for UID THREAD and SORT optimization

2018-10-02 Thread Alessio Cecchi

On 02/10/2018 12:08, Timo Sirainen wrote:

On 2 Oct 2018, at 12.22, Alessio Cecchi wrote:


Hello,

we are developing a library to show the last arrived messages of all 
threads in a folder with more than 300k messages.


As per the RFC 5256, the IMAP thread command has only the option to 
specify the grouping criteria (REFERENCES vs ORDEREDSUBJECT). So we 
implemented an algorithm which gets the full UID SORT ordered by date 
in reverse order then gets all thread relations and then post process 
all outputs to obtain the final result.


It sounds like what you want is REFS threading, which Dovecot 
supports, but which didn't make it into an official RFC: 
https://tools.ietf.org/html/draft-gulbrandsen-imap-inthread-05



Thanks Timo,

you are the best!

p.s. it was a pleasure to meet you and the OX team in Rome last week

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Help for UID THREAD and SORT optimization

2018-10-02 Thread Alessio Cecchi

Hello,

we are developing a library to show the last arrived messages of all threads 
in a folder with more than 300k messages.


As per the RFC 5256, the IMAP thread command has only the option to 
specify the grouping criteria (REFERENCES vs ORDEREDSUBJECT). So we 
implemented an algorithm which gets the full UID SORT ordered by date in 
reverse order then gets all thread relations and then post process all 
outputs to obtain the final result.
Once we have both outputs, we loop through the first uid set (which is 
ordered) and for every item of it we loop through the other uid set 
with all the thread relations. If the uid of the first set is contained 
in one of the elements of a thread, we pick it up and push it 
into the resulting array.

This is a pseudo-code that shows what we are doing:

array_of_latest_messages_in_thread = []
array_of_sorted_uids = [n,n,n,n,...]                      // UID SORT (REVERSE DATE) UTF-8 ALL
array_of_thread_uids = [n,[n,n],n,[n,n,n],[n,n],n,n,...]  // UID THREAD (REFERENCES) UTF-8 ALL

foreach(array_of_sorted_uids as s_uid){
    foreach(array_of_thread_uids as t_uids){
        // "contains" may be a function that loops recursively over t_uids
        // to check whether a leaf is equal to s_uid
        if(t_uids contains s_uid){
            array_of_latest_messages_in_thread.push(s_uid)
            break
        }
    }
}

We have also made some small optimizations in the above code, for 
example skipping in the outer loop the uids already processed in the 
inner loop that were not selected. Anyway, the loop is very 
expensive in terms of computation, and we are wondering if there is a 
better approach to this issue directly using IMAP, instead of 
post-processing the output of the two commands at application level.
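Purely as an illustration of a cheaper post-processing pass (not an IMAP-side 
solution), one hedged sketch is to flatten each thread once into a 
uid-to-thread map, so the main loop does O(1) lookups instead of a nested 
scan; input shapes follow the pseudo-code above:

def flatten(node):
    """Yield every uid in a (possibly nested) thread entry."""
    if isinstance(node, int):
        yield node
    else:
        for child in node:
            yield from flatten(child)

def latest_per_thread(sorted_uids, thread_uids):
    # Precompute uid -> thread index once, so the main pass is O(1) per uid.
    uid_to_thread = {}
    for idx, thread in enumerate(thread_uids):
        for uid in flatten(thread):
            uid_to_thread[uid] = idx

    seen, result = set(), []
    for uid in sorted_uids:            # already in REVERSE DATE order
        idx = uid_to_thread.get(uid)
        if idx is not None and idx not in seen:
            seen.add(idx)
            result.append(uid)
    return result

# Threads {1,5}, {2}, {3,4}; newest-first order 5,4,3,2,1:
print(latest_per_thread([5, 4, 3, 2, 1], [[1, 5], 2, [3, [4]]]))  # [5, 4, 2]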


Thanks.

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Dovecot 2.2.36-rc1 and $HasAttachment

2018-05-10 Thread Alessio Cecchi

Hi,

I installed dovecot 2.2.36-rc1 and enabled:

mail_attachment_detection_options = add-flags-on-save

but when messages are delivered via dovecot-lda into the mailbox (Maildir), 
keywords are not added. Only when the message is opened are the keywords 
($HasAttachment or $HasNoAttachment) added.


The wiki says "attachments are detected and marked during save"; does 
delivery via LDA count as a "save"?


For info, doveadm rebuild attachments works fine.

Since 2.2.36 will be the last v2.2.x release, I hope that this feature 
will be fixed, since it is very useful.


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



mail_attachment_detection_options not working

2018-03-09 Thread Alessio Cecchi

Hi,

I enabled mail_attachment_detection_options on dovecot 2.2.34, in 
10-mail.conf I have:


mail_attachment_detection_options = add-flags-on-save

but when an email is delivered into the mailbox (Maildir) via LMTP, no flag 
($HasNoAttachment or $HasAttachment) is added.


In dovecot-keywords file I have:

0 $HasNoAttachment
1 NonJunk
2 $HasAttachment

so I think that the feature is enabled but not working.

What am I doing wrong?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: v2.2.34 released

2018-03-01 Thread Alessio Cecchi

On 01/03/2018 07:11, A.L.E.C wrote:

On 02/28/2018 10:20 PM, Timo Sirainen wrote:

  + mail_attachment_detection_options setting controls when
$HasAttachment and $HasNoAttachment keywords are set for mails.

Is this a new feature? I can't find any documentation about these keywords and 
configuration.


Hi,

from

https://software.open-xchange.com/products/dovecot/doc/Release_Notes_for_Dovecot_Pro_2.2.34_2018-02-28.pdf

NEW FEATURE DOV-1221:
Attachment indicator
Mark email attachment presence using $HasAttachment / $HasNoAttachment 
keywords


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



doveadm log reopen not works with 2.2.33

2017-12-14 Thread Alessio Cecchi

Hi,

after the upgrade from Dovecot 2.2.32 to 2.2.33 we noticed that 
/var/log/director/director.log was empty and the logs were being written 
to the rotated file, e.g. /var/log/director/director.log-20171201.


The log path in Dovecot is:

log_path = /var/log/director/director.log

Logrotate configuration is:

/var/log/director/director.log {
  daily
  rotate 31
  missingok
  notifempty
  compress
  delaycompress
  sharedscripts
  postrotate
    doveadm -i director log reopen
  endscript
}

and this worked fine until the last Dovecot upgrade. Now the only way to 
"reopen" the log file is doveadm reload.


Is this a known bug?

Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Disabling index files?

2017-12-05 Thread Alessio Cecchi

On 05/12/2017 12:53, Stroller wrote:

The wiki says:


Each mailbox has its own separate index files. **If the index files are 
disabled**, the same structures are still kept in the memory, except cache file 
is disabled completely (because the client probably won't fetch the same data 
twice within a connection). [1]

I tend to grep my maildirs quite often, and use the output to cp emails to 
other folders, so it's annoying when the dovecot.index.cache files show up in 
the results.

How do I disable the index files, please?


Hi,

try with mail_location = maildir:~/Maildir:INDEX=MEMORY

Ciao

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice



Re: Dovecot 2.2.31: IMAP core dumped after upgrade

2017-07-06 Thread Alessio Cecchi

On 07/07/2017 08:32, Aki Tuomi wrote:


On 07.07.2017 09:28, Alessio Cecchi wrote:

On 03/07/2017 08:20, Aki Tuomi wrote:

On 02.07.2017 12:39, Alessio Cecchi wrote:

Hi,

after the upgrade to Dovecot 2.2.31 (ee) some users (very few) have
problems seeing their folders via IMAP after login. The error in the
log is simply: master: service(imap): child 15528 killed with signal 11
(core dumped). The user sees only the INBOX folder. We are using
Director and NFS.

Below my configuration and the backtrace.

Thanks




Hi!

This issue is most likely fixed with
https://github.com/dovecot/core/commit/de5d6bb50931ea243f582ace5a31abb11b619ffe.patch




Hi,

I solved it by downgrading to the previous version. Probably the fix is not
included in the dovecot-ee version.

Thanks


The fix is not included in any release yet. It will be on 2.2.32 release.

Aki
Ah ok, sorry for the mistake :-) But with the downgrade, in my case, the imap 
core dump error disappeared.


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Dovecot 2.2.31: IMAP core dumped after upgrade

2017-07-06 Thread Alessio Cecchi

On 03/07/2017 08:20, Aki Tuomi wrote:


On 02.07.2017 12:39, Alessio Cecchi wrote:

Hi,

after the upgrade to Dovecot 2.2.31 (ee) some users (very few) have
problems seeing their folders via IMAP after login. The error in the
log is simply: master: service(imap): child 15528 killed with signal 11
(core dumped). The user sees only the INBOX folder. We are using
Director and NFS.

Below my configuration and the backtrace.

Thanks




Hi!

This issue is most likely fixed with
https://github.com/dovecot/core/commit/de5d6bb50931ea243f582ace5a31abb11b619ffe.patch



Hi,

I solved it by downgrading to the previous version. Probably the fix is not 
included in the dovecot-ee version.


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Dovecot 2.2.31: IMAP core dumped after upgrade

2017-07-02 Thread Alessio Cecchi
9d in command_exec (cmd=0xbebc78) at imap-commands.c:200
hook = 0xbca370
finished = 
__FUNCTION__ = "command_exec"
#16 0x004188d0 in client_command_input (cmd=0xbebc78) at 
imap-client.c:1080

client = 0xbe9508
command = 
__FUNCTION__ = "client_command_input"
#17 0x00418966 in client_command_input (cmd=0xbebc78) at 
imap-client.c:1140

client = 0xbe9508
command = 
__FUNCTION__ = "client_command_input"
#18 0x00418ca5 in client_handle_next_command (client=0xbe9508) 
at imap-client.c:1182

No locals.
#19 client_handle_input (client=0xbe9508) at imap-client.c:1194
_data_stack_cur_id = 3
ret = 176
remove_io = false
handled_commands = false
__FUNCTION__ = "client_handle_input"
#20 0x0041914f in client_input (client=0xbe9508) at 
imap-client.c:1241

cmd = 
output = 0xbeba60
bytes = 15
__FUNCTION__ = "client_input"
#21 0x7f9caeb33f01 in io_loop_call_io (io=0xbebb50) at ioloop.c:599
ioloop = 0xbc99a0
t_id = 2
__FUNCTION__ = "io_loop_call_io"
#22 0x7f9caeb35b1f in io_loop_handler_run_internal (ioloop=<optimized out>) at ioloop-epoll.c:223

ctx = 0xbcb4c0
events = 
event = 0xbcc330
list = 0xbebbb0
io = 
tv = {tv_sec = 1799, tv_usec = 999143}
events_count = 
msecs = 
ret = 1
---Type <return> to continue, or q <return> to quit---
    i = 
    call = 
__FUNCTION__ = "io_loop_handler_run_internal"
#23 0x7f9caeb33fbc in io_loop_handler_run (ioloop=0xbc99a0) at 
ioloop.c:648

No locals.
#24 0x7f9caeb34178 in io_loop_run (ioloop=0xbc99a0) at ioloop.c:623
__FUNCTION__ = "io_loop_run"
#25 0x7f9caeabc6f3 in master_service_run (service=0xbc9840, 
callback=) at master-service.c:666

No locals.
#26 0x00426525 in main (argc=2, argv=0xbc95e0) at main.c:491
set_roots = {0x42f280, 0x637a40, 0x0}
login_set = {auth_socket_path = 0xbc1050 "\210\020\274", 
postlogin_socket_path = 0xbc1088 "[myu...@mydomain.eu 192.168.218.35 
LIST]", postlogin_timeout_secs = 60, callback = 0x426680 
,
  failure_callback = 0x425dd0 , 
request_auth_token = 1}

service_flags = 
storage_service_flags = 
username = 
auth_socket_path = 0x430284 "auth-master"
c = 

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Sieve dict and bindir question

2017-05-03 Thread Alessio Cecchi


On 30/04/2017 18:29, Stephan Bosch wrote:

On 4/28/2017 at 10:58 AM, Alessio Cecchi wrote:

Hi,

I have setup the latest Dovecot and Sieve with dict in order to read
rules from MySQL and works fine:

sieve_before = dict:proxy::sieve;name=activesql;bindir=~/.sieve-bin

dict {
   sieve = mysql:/etc/dovecot/dovecot-dict-sieve-sql.conf.ext
}

# cat /etc/dovecot/dovecot-dict-sieve-sql.conf.ext

connect = host=10.1.1.1 dbname=dovecot user=dovecot password=Ciao
map {
 pattern = priv/sieve/name/$script_name
 table = user_sieve_scripts
 username_field = username
 value_field = id
 fields {
 script_name = $script_name
 }
}
map {
 pattern = priv/sieve/data/$id
 table = user_sieve_scripts
 username_field = username
 value_field = script_data
 fields {
 id = $id
 }
}

But when I update the rules in MySQL, Sieve continues to apply only the
"old" rules stored in the binary. The only way to apply the new rules
is to delete the .sieve-bin/activesql.svbin. If I remove
";bindir=~/.sieve-bin" it works fine.

Is this a cache issue that can be fixed via a setting? Is it a bug or
a "feature" :-) ?

The wiki states the following:

The second query is only necessary when no compiled binary is available
or when the script has changed and needs to be recompiled. The data ID
is used to detect changes in the dict's underlying database.

Thanks, now it's all clear :-)

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Sieve dict and bindir question

2017-04-28 Thread Alessio Cecchi

Hi,

I have setup the latest Dovecot and Sieve with dict in order to read 
rules from MySQL and works fine:


sieve_before = dict:proxy::sieve;name=activesql;bindir=~/.sieve-bin

dict {
  sieve = mysql:/etc/dovecot/dovecot-dict-sieve-sql.conf.ext
}

# cat /etc/dovecot/dovecot-dict-sieve-sql.conf.ext

connect = host=10.1.1.1 dbname=dovecot user=dovecot password=Ciao
map {
pattern = priv/sieve/name/$script_name
table = user_sieve_scripts
username_field = username
value_field = id
fields {
script_name = $script_name
}
}
map {
pattern = priv/sieve/data/$id
table = user_sieve_scripts
username_field = username
value_field = script_data
fields {
id = $id
}
}

But when I update the rules in MySQL, Sieve continues to apply only the 
"old" rules stored in the binary. The only way to apply the new rules is 
to delete the .sieve-bin/activesql.svbin. If I remove 
";bindir=~/.sieve-bin" it works fine.


Is this a cache issue that can be fixed via a setting? Is it a bug or a 
"feature" :-) ?


Thanks

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Director+NFS Experiences

2017-03-03 Thread Alessio Cecchi

On 23/02/2017 23:08, Mark Moseley wrote:

As someone who is about to begin the process of moving from maildir to
mdbox on NFS (and therefore just about to start the 'director-ization' of
everything) for ~6.5m mailboxes, I'm curious if anyone can share any
experiences with it. The list is surprisingly quiet about this subject, and
articles on google are mainly just about setting director up. I've yet to
stumble across an article about someone's experiences with it.

Hi,

in the past I did some consulting for ISPs with 4-5 million mailboxes; they 
had "only" 6 Directors and about 30 or more Dovecot backends.


About NFS, I had some trouble with Maildir, Director and NFSv4; I don't 
know whether it was a problem of the client (Debian 6) or of the storage 
(NetApp Ontap 8.1), but with NFSv3 it works fine. Now we should try again 
with CentOS 6/7 and NFSv4.1.


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Mailbox size in log file

2017-03-02 Thread Alessio Cecchi

On 02/03/2017 17:21, Sergio Bastian Rodríguez wrote:

Hello Dovecot list.

I need Dovecot to log the mailbox size on all POP/IMAP connections, but I do 
not know if Dovecot can do that.
I have been searching for this without success.

For example, this is the log of our previous email platform, different from Dovecot:

06:48:14 025BEE83 POP3 LOGIN user 'x...@xxx.com' MailboxSize = 61708 Capacity = 
2%
..
06:49:19 025BEE83 POP3 LOGOUT user 'x...@xxx.com' MailboxSize = 14472 Capacity 
= 0%

In this example we can see the mailbox size before and after the connection, 
and it shows that the user has removed or downloaded all messages from the server.

Now in Dovecot we have no information about that, and I cannot find any plugin 
which gives us this functionality.

Hi,

you can add some variables to the logout log format:

/etc/dovecot/conf.d/20-pop3.conf

# POP3 logout format string:
[...]
#  %s - mailbox size in bytes (before deletion)

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: Replacement for antispam plugin

2017-02-10 Thread Alessio Cecchi

On 10/02/2017 09:06, Aki Tuomi wrote:

Hi!
Since antispam plugin is deprecated and we would really prefer people
not to use it, we wrote instructions on how to replace it with
IMAPSieve. Comments and suggestions are most welcome.

https://wiki.dovecot.org/HowTo/AntispamWithSieve

Hi,

Is the imap_stats plugin required?

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


FTS: how to remove from Solr index deleted mailbox?

2017-01-29 Thread Alessio Cecchi

Hi,

I'm running Dovecot with FTS and Apache Solr as backend.

What is the command or query to remove a deleted user/mailbox from Solr?
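For the record, one approach is a Solr delete-by-query against the "user" 
field that Dovecot's Solr schema indexes. A hedged sketch -- the core name 
"dovecot", the URL and the address are placeholders matching whatever your 
fts_solr setting points at:

import requests

# Placeholders: point this at your Solr update handler.
solr_update = "http://localhost:8983/solr/dovecot/update"

resp = requests.post(
    solr_update,
    params={"commit": "true"},
    json={"delete": {"query": 'user:"deleted-user@example.com"'}},
)
resp.raise_for_status()
print(resp.text)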


Thanks

--
Alessio Cecchi
Postmaster AT http://www.qboxmail.it
http://www.linkedin.com/in/alessice


Re: Moving to new password scheme

2017-01-25 Thread Alessio Cecchi

On 24/01/2017 23:29, @lbutlr wrote:

dovecot is setup on a system with MD5-CRYPT password scheme for all users, and 
I would like to update this to something that is secure, probably 
SSHA256-CRYPT, but I want to do this seamlessly without the users having to 
jump through any hoops.

The users are in mySQL (managed via postfixadmin) and the mailbox record simply 
stores the hash in the password field. Users access their accounts though IMAP 
MUAs or Roundcube.

How would I set up my system so that if a user logs in and still has a $1$ 
password (MD5-CRYPT), their password will be encoded with the new SCHEME and then 
the SQL row updated with the $5$ password instead? Something where they are 
redirected after authentication to a page that forces them to re-enter their 
password (or choose a new one) is acceptable.

And, while I am here, is it worthwhile to set the -r flag to a large number 
(like something over 100,000, which takes about 0.25 seconds on my machine)?


Hi,

you can convert the password scheme during login:

http://wiki2.dovecot.org/HowTo/ConvertPasswordSchemes
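On the rounds question: higher rounds make each hash slower for attackers and 
for your login path alike (doveadm pw also accepts -r for the CRYPT schemes). 
Purely as an illustration, a small timing sketch using Python's Unix-only 
crypt module (SHA512-CRYPT here; rounds support in crypt.mksalt needs Python 
3.7+, and the module is deprecated in recent Python versions):

import crypt
import time

# Time SHA512-CRYPT at different rounds values to pick a cost that is
# slow enough for attackers but still acceptable at login time.
for rounds in (5000, 50000, 100000, 500000):
    salt = crypt.mksalt(crypt.METHOD_SHA512, rounds=rounds)
    start = time.perf_counter()
    crypt.crypt("correct horse battery staple", salt)
    print(f"rounds={rounds}: {time.perf_counter() - start:.3f}s")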

Ciao

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: doveadm expunge -A Error: Dictionary commit failed

2016-12-29 Thread Alessio Cecchi


On 22/12/2016 00:35, Timo Sirainen wrote:

On 19 Dec 2016, at 8.25, Alessio Cecchi  wrote:


Hi,

with the latest dovecot-ee version (dovecot-ee-2.2.26.1-10), if I run "doveadm 
expunge -A mailbox Spam savedbefore 30d" Dovecot gives an error:

doveadm: Error: dict-client: Commit failed: Dict server timeout: No input for 
1916.209 secs (1 commands pending, oldest sent 0.000 secs ago: C1) (reply took 
0.000 secs)
doveadm: Error: expire: Dictionary commit failed

Probably also "doveadm quota recalc -A" fail.

Everything worked fine up to version 2.2.24


Try if 2.2.27.1 works better. It has fixes related to this.



Thanks Timo, with 2.2.27.1 it works fine.

--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


Re: doveadm expunge -A Error: Dictionary commit failed

2016-12-20 Thread Alessio Cecchi



On 20/12/2016 15:06, Aki Tuomi wrote:



On 20.12.2016 15:37, Alessio Cecchi wrote:

On 19/12/2016 14:28, Aki Tuomi wrote:



On 19.12.2016 15:25, Alessio Cecchi wrote:

Hi,

with the latest dovecot-ee version (dovecot-ee-2.2.26.1-10), if I run
"doveadm expunge -A mailbox Spam savedbefore 30d" Dovecot gives an error:

doveadm: Error: dict-client: Commit failed: Dict server timeout: No
input for 1916.209 secs (1 commands pending, oldest sent 0.000 secs
ago: C1) (reply took 0.000 secs)
doveadm: Error: expire: Dictionary commit failed

Probably also "doveadm quota recalc -A" fail.

Everything worked fine up to version 2.2.24

I hope can be fixed.
Thanks


Hi!

Can you check your server's logs?

Aki



Hi Aki,

no errors in the log, the only log in dovecot.log is:

Dec  8 09:56:54 mx01eeh dovecot: master: Dovecot v2.2.26.1 (8feb0e1)
starting up for sieve (core dumps disabled)

and also "doveadm error log" is empty; only in the shell where I run
"doveadm expunge -A" do I see the error.

Thanks


For some reason dict-client in quota recalc cannot reach dict-server.
Can you provide doveconf -n?

Aki


Yes! Note: on this server Dovecot acts only as LDA; before the upgrade, 
with version 2.2.24 and the same configuration, it worked fine.


Thanks

# 2.2.26.1 (8feb0e1): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.16 (1dc4c73)
# OS: Linux 2.6.32-642.11.1.el6.x86_64 x86_64 CentOS release 6.8 (Final)
auth_cache_negative_ttl = 5 mins
auth_cache_size = 10 M
auth_cache_ttl = 20 mins
auth_mechanisms = plain login
auth_worker_max_count = 50
deliver_log_format = msgid=%m, from=%f, subject="%s": %$
dict {
  acl = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  expire = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
  sieve = mysql:/etc/dovecot/dovecot-dict-sieve-sql.conf.ext
  sqlquota = mysql:/etc/dovecot/dovecot-dict-sql.conf.ext
}
disable_plaintext_auth = no
first_valid_gid = 89
first_valid_uid = 89
imap_client_workarounds = delay-newmail tb-extra-mailbox-sep tb-lsub-flags
imap_idle_notify_interval = 29 mins
last_valid_gid = 89
last_valid_uid = 89
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
mail_fsync = always
mail_location = maildir:~/Maildir
mail_plugins = quota acl expire zlib
maildir_very_dirty_syncs = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date index ihave duplicate mime foreverypart extracttext

mmap_disable = yes
namespace {
  list = children
  location = maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u
  prefix = shared/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Spam {
auto = subscribe
special_use = \Junk
  }
  mailbox Trash {
auto = subscribe
special_use = \Trash
  }
  prefix =
  separator = /
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  acl = vfile
  acl_shared_dict = proxy::acl
  antispam_backend = mailtrain
  antispam_mail_notspam = --ham
  antispam_mail_sendmail = /usr/bin/sa-learn
  antispam_mail_spam = --spam
  antispam_spam = Spam
  antispam_trash = Trash
  expire = Trash
  expire2 = Spam
  expire_dict = proxy::expire
  quota = maildir:UserQuota
  quota2 = dict:Quota Usage::noenforcing:proxy::sqlquota
  quota_grace = 10M
  quota_rule2 = Trash:storage=+100M
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = file:~/sieve;active=~/.dovecot.sieve
  sieve_before = dict:proxy::sieve;name=activesql
  sieve_before2 = /etc/dovecot/sieve/before.sieve
  sieve_duplicate_default_period = 1h
  sieve_duplicate_max_period = 1d
  zlib_save = gz
  zlib_save_level = 6
}
pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
pop3_fast_size_lookups = yes
protocols = sieve
sendmail_path = /var/qmail/bin/sendmail
service auth {
  client_limit = 6524
  unix_listener auth-userdb {
group = vchkpw
mode = 0660
user = vpopmail
  }
}
service dict {
  process_limit = 500
  unix_listener dict {
group = vchkpw
mode = 0660
user = vpopmail
  }
}
service imap-login {
  process_min_avail = 4
  service_count = 0
}
service imap-postlogin {
  executable = script-login /etc/dovecot/scripts/imap-postlogin.sh
  unix_listener imap-postlogin {
group = vchkpw
mode = 0660
user = vpopmail
  }
  user = vpopmail
}
service imap {
  executable = imap imap-postlogin
  process_limit = 5000
  service_count = 100
  vsz_limit = 384 M
}
service managesieve-login {
  inet_listener sieve {
port = 4190
  }
}
service pop3-login {
  process_min_avail = 4
  service_count = 0
}
se

Re: doveadm expunge -A Error: Dictionary commit failed

2016-12-20 Thread Alessio Cecchi

On 19/12/2016 14:28, Aki Tuomi wrote:



On 19.12.2016 15:25, Alessio Cecchi wrote:

Hi,

with the latest dovecot-ee version (dovecot-ee-2.2.26.1-10), if I run
"doveadm expunge -A mailbox Spam savedbefore 30d" Dovecot gives an error:

doveadm: Error: dict-client: Commit failed: Dict server timeout: No
input for 1916.209 secs (1 commands pending, oldest sent 0.000 secs
ago: C1) (reply took 0.000 secs)
doveadm: Error: expire: Dictionary commit failed

Probably also "doveadm quota recalc -A" fail.

Everything worked fine up to version 2.2.24

I hope can be fixed.
Thanks


Hi!

Can you check your server's logs?

Aki



Hi Aki,

no errors in the log, the only log in dovecot.log is:

Dec  8 09:56:54 mx01eeh dovecot: master: Dovecot v2.2.26.1 (8feb0e1) 
starting up for sieve (core dumps disabled)


and also "doveadm error log" is empty; only in the shell where I run 
"doveadm expunge -A" do I see the error.


Thanks
--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice


doveadm expunge -A Error: Dictionary commit failed

2016-12-19 Thread Alessio Cecchi

Hi,

with the latest dovecot-ee version (dovecot-ee-2.2.26.1-10), if I run 
"doveadm expunge -A mailbox Spam savedbefore 30d" Dovecot gives an error:


doveadm: Error: dict-client: Commit failed: Dict server timeout: No 
input for 1916.209 secs (1 commands pending, oldest sent 0.000 secs ago: 
C1) (reply took 0.000 secs)

doveadm: Error: expire: Dictionary commit failed

Probably also "doveadm quota recalc -A" fail.

Everything worked fine up to version 2.2.24

I hope can be fixed.
Thanks
--
Alessio Cecchi
Postmaster AT http://www.qboxmail.it
http://www.linkedin.com/in/alessice


Re: NFSv4 and Maildir

2016-09-30 Thread Alessio Cecchi


On 23/09/2016 14:31, Robert Blayzor wrote:

Recently moving to newer storage platforms for mailbox storage so looking at 
moving mounts from NFSv3 with lots of issues with locking and caching to NFSv4.

There seems to be a lot of benefits to v4 along with some other new features, 
namely “delegation”.

So the question boils down to, to delegate or not delegate on Maildir storage. 
There may be many reasons based on actual platform why to do (or not to do 
this), but I want to get the general opinion from others that may have more 
experience with this. Our setup is several FreeBSD 10.x clients running 
Dovecot/Exim, NetApp NFS mail storage (probably moving to TrueNAS) and using F5 
load balancers for client side connections/SSL offload.

From what I’ve found (and what I’ve read in the RFC), delegation seems 
to work best when there is NOT a lot of file contention from clients accessing 
the same files. I realize that in some situations many people are using 
director to try and keep users on the same client; in our case we’re doing it 
with F5 iRules. The F5 iRules work great for POP3 and IMAP session persistence, 
but unfortunately that doesn’t work for SMTP and Dovecot LDA, so we still have 
possible race conditions from the MTAs delivering into “INBOX” (mostly 
dovecot indexes updating at the same time).

So the big question is: who is using Dovecot with maildirs on NFSv4 mounts? 
What has your experience been? Are you using delegation? By choice, and why 
did you come to that decision?

I’m drawing the conclusion that if you can *mostly* confine access to specific 
files to one client (i.e. directing access to a mailbox to come from one 
client), then delegation might be OK. However, if you’re not using director 
and have several NFS mail clients racing to access mailboxes, then delegation 
might turn into chaos.


Your comments welcome and appreciated.


Hi Robert,

we have a setup with (CentOS 6) Director+Dovecot and Maildir as storage on 
NetApp NFSv3. Every time I try to switch to NFSv4 I run into locking issues 
(and others). So for me NFSv4 with Maildir is "unstable", or needs fine 
tuning that I don't know about.


--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice
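
For readers weighing the same move: independent of the v3-vs-v4 question, 
Dovecot's NFS documentation recommends disabling mmap and, when several 
clients access the same mailboxes, flushing NFS caches around index and 
storage access. A sketch of the relevant settings (values to be adapted):

mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes

With a director (or equivalent session persistence) keeping each user on a 
single backend, the documentation says the two mail_nfs_* settings can be 
left off; they exist precisely for multi-client access.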


Re: Dovecot Pro Community Edition

2016-09-27 Thread Alessio Cecchi


On 27/09/2016 21:32, Markus Petzsch wrote:

Hi,

I'm trying to setup push notifications for Open-Xchange (OX) and
struggle with finding the push_notification plugin. According to
https://oxpedia.org/wiki/index.php?title=AppSuite:OX_Mail#Requirements
it should be found in the Dovecot Pro Community Edition.

I'm currently running the dovecot that came with CentOS 7.2 but am interested
in OX's push feature. Can someone point me in the right direction where
to find the community edition repos?


Hi,

"and community edition" not "Dovecot Pro Community Edition". Push 
notification for OX is available in Dovecot standard repo here:


https://github.com/dovecot/core/tree/master/src/plugins/push-notification

Ciao
--
Alessio Cecchi
Postmaster @ http://www.qboxmail.it
https://www.linkedin.com/in/alessice
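
To actually enable the plugin once it is built, the documented pattern is to 
load notify together with push_notification and point the ox driver at the 
App Suite endpoint; a sketch (URL and credentials are placeholders):

mail_plugins = $mail_plugins notify push_notification
plugin {
  push_notification_driver = ox:url=http://login:pass@ox-node.example.com/preliminary/http-notify/v1/notify
}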


doveadm quota recalc returns a Segmentation fault

2016-09-03 Thread Alessio Cecchi
ol = 0x680858
optbuf = 0x680870
__FUNCTION__ = "doveadm_cmd_run_ver2"
#12 0x00430467 in doveadm_cmd_try_run_ver2 (cmd_name=<optimized out>, argc=3,
    argv=0x6883a0, cctx=0x7fffe400) at doveadm-cmd.c:446
        cmd = <optimized out>
#13 0x00432bdc in main (argc=4, argv=0x688398) at doveadm.c:379
        cctx = {cmd = 0x69abf8, argc = 4, argv = 0x680a60,
          username = 0x681288 "ales...@skye.it", cli = true, tcp_server = false,
          local_ip = {family = 0, u = {ip6 = {__in6_u = {__u6_addr8 = '\000'
            <repeats 15 times>, __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0},
            __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}},
          remote_ip = {family = 0, u = {ip6 = {__in6_u = {__u6_addr8 = '\000'
            <repeats 15 times>, __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 0},
            __u6_addr32 = {0, 0, 0, 0}}}, ip4 = {s_addr = 0}}},
          local_port = 0, remote_port = 0, conn = 0x0}
        cmd_name = <optimized out>
        quick_init = false
        c = <optimized out>
(gdb)

I hope it can be fixed.
Thanks
--
Alessio Cecchi
Postmaster AT http://www.qboxmail.it
http://www.linkedin.com/in/alessice


Re: Doveadm error

2016-07-25 Thread Alessio Cecchi

On 25.07.2016 00:03 Timo Sirainen wrote:

On 23 Jul 2016, at 04:05, Alessio Cecchi  wrote:


On 15.07.2016 16:03 aki.tu...@dovecot.fi wrote:
On July 12, 2016 at 4:30 PM László Károlyi  
wrote:

Hey everyone,
I've got a weird error since I upgraded to the latest dovecot on my 
FreeBSD box:

root@postfixjail /# doveadm quota recalc -u x...@xxx.com
doveadm(x...@xxx.com): Error: dict-client: Commit failed: Deinit
fish: 'doveadm quota recalc -u xxx@…' terminated by signal SIGSEGV 
(Address boundary error)

root@postfixjail /# dovecot --version
2.2.25 (7be1766)


[...]


Hi
This bug is being fixed.


Hi Aki,

in what version of dovecot is it fixed? I still have the error:

# dovecot --version
2.2.25.2 (624a8f8)

# doveadm quota recalc -u ales...@skye.it
doveadm(ales...@skye.it): Error: dict-client: Commit failed: Deinit

Up to version 2.2.24 it worked fine.


Could you get gdb backtrace? Probably just:

gdb --args doveadm quota recalc -u user@domain
run
bt full

with 2.2.25.2 you'd need the dovecot-ee-debuginfo package.


Hi,

# gdb --args doveadm quota recalc -u ales...@skye.it
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-90.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/bin/doveadm...Reading symbols from 
/usr/lib/debug/usr/bin/doveadm.debug...done.
done.
(gdb) run
Starting program: /usr/bin/doveadm quota recalc -u ales...@skye.it
[Thread debugging using libthread_db enabled]
doveadm(ales...@skye.it): Error: dict-client: Commit failed: Deinit

Program exited normally.
Missing separate debuginfos, use: debuginfo-install 
bzip2-libs-1.0.5-7.el6_0.x86_64 cyrus-sasl-lib-2.1.23-15.el6_6.2.x86_64 
dovecot-ee-pigeonhole-2.2.25.2-2.x86_64 glibc-2.12-1.192.el6.x86_64 
nspr-4.11.0-1.el6.x86_64 nss-3.21.0-8.el6.x86_64 
nss-softokn-freebl-3.14.3-23.el6_7.x86_64 nss-util-3.21.0-2.el6.x86_64 
openldap-2.4.40-12.el6.x86_64 zlib-1.2.3-29.el6.x86_64

(gdb) bt full
No stack.
(gdb)
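
(The "No stack." is expected here: the process exited normally under gdb 
instead of crashing, so there is no stopped frame left to print. "bt full" 
would only show a backtrace if the SIGSEGV had actually fired during the 
run.)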

And the quota is now correctly updated, so doveadm works fine but outputs 
the error "Error: dict-client: Commit failed: Deinit".


I hope it can be fixed.
Thanks
--
Alessio Cecchi
Postmaster AT http://www.qboxmail.it
http://www.linkedin.com/in/alessice


Re: Doveadm error

2016-07-25 Thread Alessio Cecchi

On 25.07.2016 00:03 Timo Sirainen wrote:

On 23 Jul 2016, at 04:05, Alessio Cecchi  wrote:


On 15.07.2016 16:03 aki.tu...@dovecot.fi wrote:
On July 12, 2016 at 4:30 PM László Károlyi  
wrote:

Hey everyone,
I've got a weird error since I upgraded to the latest dovecot on my 
FreeBSD box:

root@postfixjail /# doveadm quota recalc -u x...@xxx.com
doveadm(x...@xxx.com): Error: dict-client: Commit failed: Deinit
fish: 'doveadm quota recalc -u xxx@…' terminated by signal SIGSEGV 
(Address boundary error)

root@postfixjail /# dovecot --version
2.2.25 (7be1766)


[...]


Hi
This bug is being fixed.


Hi Aki,

in what version of dovecot is it fixed? I still have the error:

# dovecot --version
2.2.25.2 (624a8f8)

# doveadm quota recalc -u ales...@skye.it
doveadm(ales...@skye.it): Error: dict-client: Commit failed: Deinit

Up to version 2.2.24 it worked fine.


Could you get gdb backtrace? Probably just:

gdb --args doveadm quota recalc -u user@domain
run
bt full

with 2.2.25.2 you'd need the dovecot-ee-debuginfo package.


I found that the command works fine but outputs the error.

I will try to get gdb backtrace.
Thanks
--
Alessio Cecchi
Postmaster AT http://www.qboxmail.it
http://www.linkedin.com/in/alessice

