Re: CVE-2019-11500: Critical vulnerability in Dovecot and Pigeonhole

2019-08-30 Thread Christian Balzer via dovecot


Daniel,

thanks so much for the detailed pointers.

So it turns out to be both the evil that is systemd and an overzealous
upgrade script.

Apollon, should I raise a Debian bug for this?

As for reasons, how do 50k proxy sessions on the proxy servers and 25k imap
processes on the mailbox servers sound?

Even on a server with just 6k users and 7k imap processes that causes a
massive load spike and a far longer service interruption (about 50
seconds) than I'm happy with.

Finally, if people do set "shutdown_clients = no" they hopefully know
what they are doing and expect that setting to work.

Regards,

Christian

On Fri, 30 Aug 2019 17:44:23 +0200 Daniel Lange via dovecot wrote:

> Am 30.08.19 um 17:38 schrieb Daniel Lange via dovecot:
> > Am 30.08.19 um 10:00 schrieb Christian Balzer via dovecot:  
> >> When upgrading on Debian Stretch with the security fix packages all
> >> dovecot processes get killed and then restarted despite having
> >> "shutdown_clients = no" set.  
> > 
> > This is systemd doing its "magic" (kill all control group processes), 
> > see https://dovecot.org/pipermail/dovecot/2016-June/104546.html
> > for a potential fix.  
> 
> Actually that will not be enough in the upgrade case as the maintainer 
> script calls
>   deb-systemd-invoke stop dovecot.socket dovecot.service
> 
> I personally think re-connecting clients are normal operations so I 
> wouldn't bother. But you could override the stop action in the systemd 
> unit if you have local reasons that warrant such a hack.
> 
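> 
For reference, a minimal sketch of such an override as a systemd drop-in
(the directive is standard systemd; whether it is acceptable to leave client
processes running across a stop is exactly the local judgment call Daniel
mentions):

  # /etc/systemd/system/dovecot.service.d/override.conf
  [Service]
  # Kill only dovecot's main process on stop; imap/pop3 children survive,
  # so "shutdown_clients = no" behaves as expected across upgrades.
  KillMode=process

After adding it, run "systemctl daemon-reload". Note this does not prevent
the maintainer script from issuing the stop itself; it only narrows what
that stop kills.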


-- 
Christian Balzer                Network/Systems Engineer
ch...@gol.com                   Rakuten Mobile Inc.


Re: flags not synced correctly with dovecot sync (dsync)

2019-08-30 Thread Dan Christensen via dovecot
Over many months I've been getting frequent errors in synchronization
using dsync.  They are hard to reproduce, and aren't always of the same
nature, but there's a particular case that I can reproduce 100% of the
time.  It involves read status being synced in the wrong direction
when two machines are both synchronizing with a server.

I'm reposting below a message that gives the method for reproducing the
problem.  I can still reproduce it with Ubuntu's 2:2.3.7.2-1~bionic
packages.

I also have problems with only one machine synchronizing with a server,
but those I can't reproduce on demand.

I'm happy to try patches or test other things.

I'm using Maildir storage.

Thanks for any suggestions!

Dan

On Feb 16, 2019, Dan Christensen via dovecot  wrote:

> I'm running dovecot 2.3.4.1 from https://repo.dovecot.org/ on Ubuntu
> 18.04 on three machines that I'll call server, laptop1 and laptop2.
>
> Both laptop1 and laptop2 run dovecot sync against server to keep local
> copies of my imap folders.  Even when I initially had only two machines,
> laptop1 and server, I occasionally noticed that flags were lost, usually
> custom flags used by Gnus, but I couldn't reliably reproduce the
> problem.
>
> Now that I have two laptops syncing against the server, the problem has
> gotten worse and I figured out a way to reproduce it:
>
> - on server: create new IMAP folder test, and put two read messages in it
> - on laptop1:  doveadm sync -u user -l 10 -m test -f user@server
> - on laptop2:  doveadm sync -u user -l 10 -m test -f user@server
>
> At this point, all three machines show the two messages M1 and M2
> as being read.
>
> - on laptop1: mark message M1 unread
> - on laptop2: mark message M2 unread
> - on laptop1:  doveadm sync -u user -l 10 -m test -f user@server
>   Both laptop1 and server have M1 unread, M2 read, as expected.
> - on laptop2:  doveadm sync -u user -l 10 -m test -f user@server
>   Now laptop2 and server have M1 *read*, M2 unread.
> - on laptop1:  doveadm sync -u user -l 10 -m test -f user@server
>   Now laptop1 and the server have both M1 and M2 *read*.
> - on laptop2:  doveadm sync -u user -l 10 -m test -f user@server
>   Now laptop2 has both read as well.
>
> The two lines that say "*read*" are wrong in my opinion.  dsync
> propagated a read mark to an unread message, even though that message
> was marked unread more recently than it was marked read.
>
> I usually use stateful sync, and get many related problems.
> I just did a test in which M1 and M2 started out read, and I
> started with empty files named dstate.test on laptop1 and laptop2.
> Then I did the above procedure, using the command
>
> doveadm sync -u user -l 10 -m test -s "`cat dstate.test`" user@server > dstate.test
>
> At the end, laptop2 and server had both messages unread (which is good),
> but laptop1 had only M1 unread, and repeated runs of the sync command
> did not correct this.  So the stateful sync failed to detect a change.
>
> Are these bugs in dovecot?  Is there more information that I can
> provide?  The output of doveconf -n on one machine is below, and
> the others are almost identical.
>
> Thanks for any help!
>
> Dan
>
> # 2.3.4.1 (3c0b8769e): /etc/dovecot/dovecot.conf
> # OS: Linux 4.15.0-45-generic x86_64 Ubuntu 18.04.1 LTS 
> # Hostname: laptop2
> auth_mechanisms = plain login
> listen = 127.0.0.1
> mail_index_log2_max_age = 10 days
> mail_index_log_rotate_min_age = 1 days
> mail_index_log_rotate_min_size = 300 k
> mail_location = maildir:~/Maildir
> namespace inbox {
>   inbox = yes
>   location = 
>   mailbox Drafts {
> special_use = \Drafts
>   }
>   mailbox Junk {
> special_use = \Junk
>   }
>   mailbox Sent {
> special_use = \Sent
>   }
>   mailbox "Sent Messages" {
> special_use = \Sent
>   }
>   mailbox Trash {
> special_use = \Trash
>   }
>   prefix = 
> }
> passdb {
>   args = scheme=CRYPT username_format=%u /etc/dovecot/users
>   driver = passwd-file
> }
> protocols = imap
> service imap-login {
>   inet_listener imap {
> address = *
> port = 143
>   }
>   inet_listener imaps {
> address = *
> port = 943
> ssl = yes
>   }
> }
> service imap {
>   process_limit = 25
> }
> ssl_cert = 
> ssl_client_ca_dir = /etc/ssl/certs
> ssl_dh = # hidden, use -P to show it
> ssl_key = # hidden, use -P to show it
> userdb {
>   args = username_format=%u /etc/dovecot/users
>   driver = passwd-file
> }
> protocol lda {
>   postmaster_address = [elided]
> }
> protocol imap {
>   mail_max_userip_connections = 20
> }
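A side note on the stateful invocation quoted above: the backticks are
expanded before the shell truncates the redirect target, so reading and
rewriting dstate.test in one command does work, but a failed sync still
clobbers the old state. A more defensive sketch (names as in the quote,
otherwise illustrative):

  #!/bin/sh
  # Replace the dsync state only if the sync itself succeeds.
  STATE=dstate.test
  if doveadm sync -u user -l 10 -m test \
        -s "$(cat "$STATE")" user@server > "$STATE.new"; then
      mv "$STATE.new" "$STATE"    # atomic replace on the same filesystem
  else
      rm -f "$STATE.new"          # keep the old state for the next attempt
  fi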



Re: Sieve Header question.

2019-08-30 Thread Larry Rosenman via dovecot
Ok, I figured it out.  Needed to use a :regex match instead of :matches.
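
For the archives, a minimal sketch of the corrected test (assuming the
"regex" extension is enabled; with :regex, parenthesized groups set ${1},
${2}, ... via the variables extension):

  require ["regex", "variables"];

  if header :regex "List-ID" "(.*)/(.*) <" {
     # e.g. List-ID: apache/incubator-superset <...>
     # ${1} = "apache", ${2} = "incubator-superset"
     set "mailbox" "github-lists/${1}-${2}";
  }

With :matches, the only wildcards are * and ?, so the parentheses and ".*"
in the original pattern were matched literally, which is why it never fired.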


On Fri, Aug 30, 2019 at 5:25 PM Larry Rosenman  wrote:

> I'm trying to make my github processing better, but I'm missing something.
>
> I have the following:
> if address :all :contains "from" ["github.com"] {
>    addflag "github";
>    addflag "MyFlags" "github";
>    set "mailbox" "GitHub";
>    if address :matches :user "to" "*" {
>       set "GHUser" "${1}";
>       addflag "${GHUser}";
>       addflag "MyFlags" "${GHuser}";
>    }
>    if header :matches "List-ID" "(.*/.*) <(.*)>" {
>       set "mailbox" "github-lists/${1}";
>    }
>    fileinto :flags "${MyFlags}" :create "${mailbox}";
>    stop;
> }
>
> I'm trying to match the apache-incubator-superset part of:
> List-ID: apache/incubator-superset 
>
> What am I missing?
> --
> Larry Rosenman http://www.lerctr.org/~ler
> Phone: +1 214-642-9640 (c) E-Mail: larry...@gmail.com
> US Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106
>


-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: larry...@gmail.com
US Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106


Sieve Header question.

2019-08-30 Thread Larry Rosenman via dovecot
I'm trying to make my github processing better, but I'm missing something.

I have the following:
if address :all :contains "from" ["github.com"] {
   addflag "github";
   addflag "MyFlags" "github";
   set "mailbox" "GitHub";
   if address :matches :user "to" "*" {
      set "GHUser" "${1}";
      addflag "${GHUser}";
      addflag "MyFlags" "${GHuser}";
   }
   if header :matches "List-ID" "(.*/.*) <(.*)>" {
      set "mailbox" "github-lists/${1}";
   }
   fileinto :flags "${MyFlags}" :create "${mailbox}";
   stop;
}

I'm trying to match the apache-incubator-superset part of:
List-ID: apache/incubator-superset 

What am I missing?
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: larry...@gmail.com
US Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106


problems with quota working

2019-08-30 Thread Bartłomiej Solarz-Niesłuchowski via dovecot

Good evening!

Dear list, I have a problem with quota-fs (rquota) support in dovecot.

I use dovecot version:

dovecot --version
2.3.7.1 (0152c8b10)

build options:

dovecot --build-options
Build options: ioloop=epoll notify=inotify openssl io_block_size=8192
SQL driver plugins: mysql postgresql sqlite
Passdb: checkpassword ldap pam passwd passwd-file shadow sql
Userdb: checkpassword ldap(plugin) passwd prefetch passwd-file sql

PrivateDevices is set to no for the service:

systemctl show -p PrivateDevices  dovecot
PrivateDevices=no

but quota (rquota on NFSv4) does not work:

doveadm quota get -u solarz
doveadm(solarz): Error: Failed to get quota resource STORAGE: quota-fs: 
quotactl(Q_GETQUOTA, oceanic:/home) failed: No such file or directory

Quota name Type    Value Limit %
User quota STORAGE error error error

quota from the OS works:

quota solarz
Disk quotas for user solarz (uid 1761):
     Filesystem   blocks    quota    limit  grace   files  quota  limit  grace
/dev/mapper/vg01-lvol00
                 1588264     4048     4048           12383      0      0
   oceanic:/home 76318340 86284030 87416830         561911    119    165
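
Since the OS-level rquota path clearly works, two hedged diagnostics that
may narrow down where dovecot's quota-fs lookup diverges (standard rpcbind
tooling; "oceanic" as in the output above):

  # is rquotad registered on the NFS server at all?
  rpcinfo -p oceanic | grep rquota

  # is the mount visible with the exact device string dovecot will look up?
  grep 'oceanic:/home' /proc/mounts

The "No such file or directory" from quotactl() may point at the
device/mount lookup rather than the RPC itself, so comparing the
/proc/mounts entry with the path in the error message could be informative.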

configuration:

# OS: Linux 5.2.9-200.fc30.x86_64 x86_64 Fedora release 30 (Thirty)
# Hostname: dervish.wsisiz.edu.pl
auth_cache_size = 8 k
mail_access_groups = mail
mail_fsync = always
mail_location = maildir:~/Maildir:INBOX=/var/spool/mail/%u
mail_max_userip_connections = 500
mail_nfs_index = yes
mail_nfs_storage = yes
mail_plugins = acl quota zlib trash
mail_privileged_group = mail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date index ihave duplicate mime foreverypart extracttext

mbox_write_locks = fcntl
mmap_disable = yes
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = /etc/dovecot/deny-users
  deny = yes
  driver = passwd-file
}
passdb {
  args = cache_key=#hidden_use-P_to_show# max_requests=256
  driver = pam
}
passdb {
  args = cache_key=#hidden_use-P_to_show# max_requests=256
  driver = pam
}
plugin {
  quota = fs:User quota
  quota_grace = 10%%
  quota_status_nouser = DUNNO
  quota_status_overquota = 552 5.2.2 Mailbox is full
  quota_status_success = DUNNO
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
postmaster_address = postmas...@wit.edu.pl
protocols = imap pop3
service anvil {
  client_limit = 1155
}
service auth {
  client_limit = 5248
  user = root
}
service imap-login {
  client_limit = 1024
  inet_listener imap {
    port = 0
  }
  inet_listener imaps {
    address = [::], *
    port = 993
  }
  process_limit = 1024
  service_count = 0
}
service imap {
  process_limit = 2048
  vsz_limit = 512 M
}
service pop3-login {
  client_limit = 1024
  inet_listener pop3 {
    port = 0
  }
  inet_listener pop3s {
    address = [::], *
    port = 995
  }
  process_limit = 128
  service_count = 0
}
service pop3 {
  process_limit = 2048
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  user = dovenull
}
ssl = required
ssl_cert = 



Re: CVE-2019-11500: Critical vulnerability in Dovecot and Pigeonhole

2019-08-30 Thread Daniel Lange via dovecot

Am 30.08.19 um 17:38 schrieb Daniel Lange via dovecot:
> Am 30.08.19 um 10:00 schrieb Christian Balzer via dovecot:
>> When upgrading on Debian Stretch with the security fix packages all
>> dovecot processes get killed and then restarted despite having
>> "shutdown_clients = no" set.
>
> This is systemd doing its "magic" (kill all control group processes),
> see https://dovecot.org/pipermail/dovecot/2016-June/104546.html
> for a potential fix.


Actually that will not be enough in the upgrade case as the maintainer 
script calls

 deb-systemd-invoke stop dovecot.socket dovecot.service

I personally think re-connecting clients are normal operations so I 
wouldn't bother. But you could override the stop action in the systemd 
unit if you have local reasons that warrant such a hack.


Re: CVE-2019-11500: Critical vulnerability in Dovecot and Pigeonhole

2019-08-30 Thread Daniel Lange via dovecot

Am 30.08.19 um 10:00 schrieb Christian Balzer via dovecot:
> When upgrading on Debian Stretch with the security fix packages all
> dovecot processes get killed and then restarted despite having
> "shutdown_clients = no" set.

This is systemd doing its "magic" (kill all control group processes), 
see https://dovecot.org/pipermail/dovecot/2016-June/104546.html
for a potential fix.


Dovecot 2.3.7 - char "-" missing

2019-08-30 Thread Domenico Pastore via dovecot
Hello,

I have updated dovecot from version 2.2.15 to 2.3.7.2.
I have a problem with my Java software because there is a different
response when opening a connection to doveadm.

I need to open a socket to doveadm to get the IMAP quota of a mailbox.

With version 2.2.15:
# telnet 192.160.10.4 924
Trying 192.160.10.4...
Connected to 192.160.10.4.
Escape character is '^]'.
-


With version 2.3.7.2:
# telnet 192.160.10.3 924
Trying 192.160.10.3...
Connected to 192.160.10.3.
Escape character is '^]'.


The difference is the "-" character. Version 2.3.7 does not respond with 
the "-" character after opening the connection.

Is it possible to add the character again with a parameter?

Why did doveadm's answer change?
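
(For anyone debugging the same thing: a quick probe to see whether the newer
server volunteers any banner before the client speaks, assuming nc is
available; nc variants differ in when they exit, so the sleep just keeps the
connection open briefly:

  # print whatever the server sends within ~2 seconds of connecting
  sleep 2 | nc 192.160.10.3 924

If nothing arrives, the client likely has to send its protocol handshake
first instead of blocking on the "-" line.)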


Br
Domenico


Re: doveadm backup mdbox - initial copy slow

2019-08-30 Thread Gerald Galster via dovecot


> On 30.8.2019 12.33, Gerald Galster via dovecot wrote:
>> Hello,
>> 
>> when calling doveadm backup like in the following example, it seems to flush 
>> every email to disk, maxing out hard drives (iops) and resulting in very 
>> poor performance:
>> 
>> Calling doveadm -o plugin/quota= backup -u "statusma...@domain.com" 
>> "mdbox:/backup/vmail/domain.com/statusmails"
>> finished in 212 secs [changes:1800 copy:1800 delete:0 expunge:0]
>> 
>> The source mdbox holds 1800 small emails and uses about 20 MB disk space 
>> only.
>> 
>> I've tried -f (full sync) and -l 30 (locking the mailbox) which did not get 
>> faster, nor did doveadm sync -1.
>> 
>> When using doveadm import all mails are copied in less than 3 seconds:
>> time doveadm -o mail=mdbox:/backup/vmail/domain.com/statusmails import 
>> mdbox:/var/vmail/domain.com/statusmails "" all
>> real 0m2.605s
>> 
>> Are there any other options I could try to speed up doveadm backup, avoiding 
>> the flush after each email?
>> 
>> This is doveadm from dovecot 2.2 (>= 2.2.33). Does anyone get better results 
>> with the 2.3 tree?
>> 
>> Best regards
>> Gerald
> 
> Try setting mail_fsync=never

Thanks Aki, that did the trick!

Calling /usr/bin/doveadm -o plugin/quota= -o mail_fsync=never backup -u 
"statusma...@domain.com" "mdbox:/backup/vmail/domain.com/statusmails"
finished in 2 secs [changes:1925 copy:1925 delete:0 expunge:0]

Best regards
Gerald

Re: doveadm backup mdbox - initial copy slow

2019-08-30 Thread Aki Tuomi via dovecot


On 30.8.2019 12.33, Gerald Galster via dovecot wrote:
> Hello,
>
> when calling doveadm backup like in the following example, it seems to flush 
> every email to disk, maxing out hard drives (iops) and resulting in very 
> poor performance:
>
> Calling doveadm -o plugin/quota= backup -u "statusma...@domain.com" 
> "mdbox:/backup/vmail/domain.com/statusmails"
> finished in 212 secs [changes:1800 copy:1800 delete:0 expunge:0]
>
> The source mdbox holds 1800 small emails and uses about 20 MB disk space only.
>
> I've tried -f (full sync) and -l 30 (locking the mailbox) which did not get 
> faster, nor did doveadm sync -1.
>
> When using doveadm import all mails are copied in less than 3 seconds:
> time doveadm -o mail=mdbox:/backup/vmail/domain.com/statusmails import 
> mdbox:/var/vmail/domain.com/statusmails "" all
> real  0m2.605s
>
> Are there any other options I could try to speed up doveadm backup, avoiding 
> the flush after each email?
>
> This is doveadm from dovecot 2.2 (>= 2.2.33). Does anyone get better results 
> with the 2.3 tree?
>
> Best regards
> Gerald

Try setting mail_fsync=never
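
For context, mail_fsync takes three values, so the relaxation can be scoped
to the backup invocation alone with -o, as Gerald's follow-up shows, rather
than set globally. A sketch of the global setting, should it ever be wanted:

  # dovecot.conf -- use with care on storage without battery-backed cache
  mail_fsync = optimized   # the default: fsync only where needed for safety
  #mail_fsync = never      # fastest; risks mail loss on a crash
  #mail_fsync = always     # safest; commonly recommended for NFS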

Aki



doveadm backup mdbox - initial copy slow

2019-08-30 Thread Gerald Galster via dovecot
Hello,

when calling doveadm backup like in the following example, it seems to flush 
every email to disk, maxing out hard drives (iops) and resulting in very 
poor performance:

Calling doveadm -o plugin/quota= backup -u "statusma...@domain.com" 
"mdbox:/backup/vmail/domain.com/statusmails"
finished in 212 secs [changes:1800 copy:1800 delete:0 expunge:0]

The source mdbox holds 1800 small emails and uses about 20 MB disk space only.

I've tried -f (full sync) and -l 30 (locking the mailbox) which did not get 
faster, nor did doveadm sync -1.

When using doveadm import all mails are copied in less than 3 seconds:
time doveadm -o mail=mdbox:/backup/vmail/domain.com/statusmails import 
mdbox:/var/vmail/domain.com/statusmails "" all
real    0m2.605s

Are there any other options I could try to speed up doveadm backup, avoiding 
the flush after each email?

This is doveadm from dovecot 2.2 (>= 2.2.33). Does anyone get better results 
with the 2.3 tree?

Best regards
Gerald

Re: CVE-2019-11500: Critical vulnerability in Dovecot and Pigeonhole

2019-08-30 Thread Christian Balzer via dovecot


Hello,

Cc'ing Apollon in hopes he might have some insight here.

When upgrading on Debian Stretch with the security fix packages all
dovecot processes get killed and then restarted despite having 
"shutdown_clients = no" set. 

My guess would be a flaw in the upgrade procedure and/or unit files doing
a stop and start when the new imapd package is installed.

Can anybody think of a quick workaround or fix for this? It's clearly
not intended behavior (nor needed for this issue).


Thanks,

Christian
-- 
Christian Balzer                Network/Systems Engineer
ch...@gol.com                   Rakuten Mobile Inc.