Re: [EXT] Replication going away?

2023-07-19 Thread Gerald Galster
>>> A 50-100 mailbox user server will run Dovecot CE just fine.  Pro would
>>> be overkill.
>> 
>> What is overkill? I always thought it had a bit more features and support.
> 
> For Pro 2.3, you need (at minimum) 7 Dovecot nodes + HA authentication + HA 
> storage + (minimum) 3 Cassandra nodes if using object storage.  This is per 
> site; most of our customers require data center redundancy as well, so 
> multiply as needed.  And this is only email retrieval; this doesn't even 
> begin to touch upon email transfer.
> 
> Email high availability isn't cheap.  (I would argue that if you truly need 
> this sort of carrier-grade HA for 50 users, it makes much more sense to use 
> email as-a-service than trying to do it yourself these days.  Unless you have 
> very specific reasons and a ton of cash.)

High availability is currently cheap with a small setup: you need three servers
or virtual machines, with Dovecot (and maybe Postfix) running on two of them
and MySQL Galera on all three.
This provides very affordable active/active geo-redundancy.

No offence, it's just a pity to see that feature disappearing.

Best regards,
Gerald


Re: Replication going away?

2023-07-19 Thread Gerald Galster
>> While I understand it takes effort to maintain the replication plugin, this 
>> is especially problematic for small active/active high-availability 
>> deployments.
> 
> To clarify: replication absolutely does not provide "active/active".  
> Replication was meant to copy data to a standby server, but you can't have 
> concurrent mailbox access.  This is why directors existed.

I have to disagree: active/active with two distinct servers is a special case,
and two-way replication has been working reliably in production for years.
Here's why: those two IMAP servers are separate instances with local storage,
local quota and local pruning. There is no shared-medium access like NFS that
could lead to data corruption, and hence no directors are needed. Moreover, the
dsync manpage states "doveadm-sync - Dovecot's two-way mailbox synchronization
utility". It's more like IMAP clients copying mails from one server to another,
where the most current state wins.
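
For illustration, a manual two-way sync between two such standalone servers can
be triggered like this (a sketch with a hypothetical user and replica host; the
same dsync mechanism that ssh-based replication calls under the hood):

doveadm sync -u user@example.com remote:vmail@mail2.example.com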


>> I guess there are lots of servers that use replication for just 50 or 100 
>> mailboxes. Cloudstorage (like S3) would be overkill for these.
>> 
>> Do you provide dovecot pro subscriptions for such small deployments?
> 
> A 50-100 mailbox user server will run Dovecot CE just fine.  Pro would be 
> overkill.
> 
> All current Dovecot development assumes that storage is decoupled from the 
> system.  Shared (as in network available) storage is what you need if you 
> want high availability, whether in Pro or CE.


Thanks for the clarification. So even if mdbox is still available and replication
(backup) would work with dsync on the command line, there is no signaling layer
to auto-trigger replication, because storage is decoupled and this functionality
is not needed anymore.

Best regards,
Gerald


Re: Replication going away?

2023-07-19 Thread Gerald Galster
> On 19/07/2023 at 19:53, Michael Peddemors wrote:
>> Real world is a bit different.. DNS caching.. While DNS round robin is good
>> enough to distribute load, it isn't a very good method for failover, even
>> with a very short TTL.  Many home routers still insist on caching results
>> for a long time, no matter what the TTL says, and of course Windows internal
>> caching etc..
>>
>> Should not confuse the issue.. call it a 'poor man's load balancer' if you
>> will, but it's more of a last-line failover, and during the time it takes for
>> DNS to retry and find another active node, an AWFUL lot of disgruntled
>> customers will be calling ;)
>>
>> Also interesting to see some resolvers that don't think of using the
>> second record if the first one is down..
>> 
> You're mixing things: DNS and mail client behavior. It is nonsense.
> A resolver serves records; it does not use them and does not care what
> is behind the record.
> A good client uses the list of A (or AAAA) records to connect to the server
> and iterates over the list if the server behind a record is down.
> And DNS caching does its job, nothing less, nothing more, and is out of the
> picture.

Emmanuel is right. Here's an example to clarify:

$ dig imap.web.de

;; ANSWER SECTION:
imap.web.de.  226  IN  A  212.227.17.178
imap.web.de.  226  IN  A  212.227.17.162

A DNS query for imap.web.de address records (IN A) returns two IP addresses.
A local resolver receives those two IP addresses and usually passes them on
to clients, possibly rotating the order, so that some clients will see
212.227.17.178, 212.227.17.162 and others will see 212.227.17.162,
212.227.17.178. It is possible to get the same order for subsequent requests,
but on a *global* scale that roughly equals 50/50 load balancing.

Mail clients then connect to e.g. 212.227.17.178 and try 212.227.17.162 on
connection failure, without any further DNS involvement. DNS caching (TTL) is
irrelevant in that case.

Best regards,
Gerald



Re: Replication going away?

2023-07-18 Thread Gerald Galster
>> While I understand it takes effort to maintain the replication plugin, this 
>> is especially problematic for small active/active high-availability 
>> deployments.
>> I guess there are lots of servers that use replication for just 50 or 100 
>> mailboxes. Cloudstorage (like S3) would be overkill for these.
> 
> Even without active/active, it's super useful for the simple
> active/backup configuration which I use on my personal mail server

This depends heavily on individual usage. Coming from an active/active
deployment it's a major step backwards though: usually two servers
are running independently in geographically dispersed datacenters.
High availability is achieved by a simple DNS entry that returns two
IP addresses, one from each datacenter. Under normal circumstances
that gives you 50/50 load balancing without load balancers, without
additional components that can fail. In case one datacenter goes down,
and that happens to every datacenter at some time, the other datacenter
takes over - automatically, without any configuration changes.
Additionally, mail user agents (Outlook, Thunderbird, ...) don't need
special configuration. If one IP address is unreachable they connect
to the other one obtained via DNS, and users can quite seamlessly send
and receive email again. After the outage has ceased and the other
datacenter is back online, there is nothing to do.
No configuration changes, no error-prone manual synchronization or
promoting passive to active - it just works and heals itself.
Once you are used to a carefree setup like that, you don't want to go back.
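
For illustration, the DNS entry can be as simple as two A records (hypothetical
name and IPs):

mail.example.com.  300  IN  A  192.0.2.10
mail.example.com.  300  IN  A  198.51.100.20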

Of course there are other possibilities like NFS, GlusterFS, GFS2,
ZFS snapshots, Ceph, MinIO or dsync backup, but they all have their own
drawbacks. For small mail servers that want high availability, dsync
replication is quite the perfect solution.


> setup (one colo box, one home server) and a small company mail
> server; as such I'm pretty sad to see it go. Still, it is up
> to OX where they want to put their resources.

Well, it seems the dsync replication function is still there;
just the replication plugin that signals what to replicate
is deprecated. Of course it's OX's decision, I'm just hoping
they were not aware how useful replication is in the
aforementioned scenario.

Moreover I'm quite sure this kind of small-scale replication
does not have any impact on customers upgrading to the new
cloud architecture. Big customers will go for cloud because
it scales way better and does not have replication-induced
performance penalties, and small customers probably can't
afford to upgrade because it's too pricey.


> I guess losing repl probably doesn't affect larger ISP type setups
> so much; it seems a bit more common to use shared storage (e.g.
> maildirs on an nfs appliance or similar) in those cases if they're
> actually running their own storage.
> 
>> Do you provide dovecot pro subscriptions for such small deployments?
> 
> Unless I misunderstood the message (and I don't think I did), repl
> was removed in pro too. (I don't expect that pro is available on my
> usual choice of OS anyway..).

As I understood it, dsync is still working. Replication configured via
ssh calls dsync under the hood, so if local storage and index/log
formats don't change for single deployments, it seems to be more of
a political decision. I know maintenance is not free; that's why
I suggested thinking about a Dovecot small/medium business edition
with a more affordable price tag.

Best regards,
Gerald


Re: Replication going away?

2023-07-18 Thread Gerald Galster


>> Just to understand that correctly: I could setup a (cron) based process for 
>> doveadm sync, but no longer a setup like 
>> plugin { 
>>  mail_replica = tcp:$IMAP_REPLICA_SERVER:$IMAP_REPLICA_PORT 
>> } 
>> where the cron would lead to some delay and would have to check for 
>> concurrent jobs?
> 
> You can also have that too.
> 
> doveadm sync -d 
> 
> makes it use mail_replica setting.

@Aki:

Is it possible to monitor actions that would have triggered replication in
dovecot < 2.4, e.g. by parsing logs or with a Lua script, to imitate the
previous behaviour?
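
If so, something along the lines of the mail_log plugin could feed such a
script (a sketch; the event/field names are from the mail_log plugin, the
particular selection here is hypothetical):

mail_plugins = $mail_plugins notify mail_log

plugin {
  mail_log_events = save copy expunge flag_change mailbox_rename
  mail_log_fields = uid box msgid
}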


@Michael Slusarz:

While I understand it takes effort to maintain the replication plugin, this is 
especially problematic for small active/active high-availability deployments.
I guess there are lots of servers that use replication for just 50 or 100 
mailboxes. Cloudstorage (like S3) would be overkill for these.

Do you provide dovecot pro subscriptions for such small deployments?

The basic replication functionality with dsync seems to be available in future
versions. Would you consider releasing a "dovecot smb" version for small/medium
businesses that maintains the replication plugin (without director) for a
yearly subscription fee? Otherwise those affected would have to look for
alternatives. The recent release of the Rust-written Stalwart mail server comes
to mind, which bundles a lot of functionality (SMTP, POP, IMAP, JMAP, S3, Sieve,
DKIM/ARC/SPF/DMARC, DANE, MTA-STS, ...): https://stalw.art/blog/

Best regards,
Gerald


Re: dovecot crash with Panic: file istream-header-filter.c: line 663

2023-03-13 Thread Gerald Galster


>>> After the above, it's no longer crashing, and my email client's "pending 
>>> operations" have
>>> cleared.
>> 
>> Does your server use ECC memory and if so, are there any errors logged 
>> (bitflip, ...)?
>> 
>> Best regards,
>> Gerald
> 
> I don't have the logs from that time, nor do I see any hardware / memory
> errors.
> 
> I also haven't had any other odd failures.
> 
> But how can I tell if I have ECC memory or not?

You could install dmidecode and search for ECC (not L1/L2 CPU cache).
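
A minimal invocation (run as root, assuming the package is installed), which
prints sections like the one below:

# dmidecode -t memory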

Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: Multi-bit ECC
Maximum Capacity: 128 GB
Error Information Handle: 0x0008
Number Of Devices: 4

It happens rarely, but without ECC those errors often go unnoticed.

With ECC dmesg/kernel log might show warnings like

kernel: [Hardware Error]: Unified Memory Controller Ext. Error Code: 0, DRAM ECC error.
kernel: EDAC MC0: 1 CE Cannot decode normalized address on mc#0csrow#2channel#1
...
kernel: [Hardware Error]: cache level: L3/GEN, tx: GEN, mem-tx: RD
kernel: core: [Hardware Error]: Machine check events logged
kernel: [Hardware Error]: Corrected error, no action required.

Best regards,
Gerald



Re: dovecot crash with Panic: file istream-header-filter.c: line 663

2023-03-13 Thread Gerald Galster
> After the above, it's no longer crashing, and my email client's "pending 
> operations" have
> cleared.

Does your server use ECC memory and if so, are there any errors logged 
(bitflip, ...)?

Best regards,
Gerald

Re: Replication not working - GUIDs conflict - will be merged later

2022-08-02 Thread Gerald Galster


> (we're using maildir, so I can just rsync the individual mails/folders)

I'm curious if anybody has experienced this issue using mdbox.
As far as I remember it's better suited for replication, as filenames
and locations do not change on disk (index only).

Best regards
Gerald


Re: [External] reconsidering my (your?) current setup

2021-10-07 Thread Gerald Galster


>> The CentOS community manager is a friend and he understands that they really 
>> missed the mark on the messaging around CentOS8 Stream.
>> In short, I'm not sure it's going to be that bad of a solution.
> 
> 
> People think that because Redhat has repeatedly said stream is not Centos or 
> a replacement for Centos.
> 
> https://www.redhat.com/en/resources/centos-stream-checklist says:
> "CentOS Stream may seem like a natural choice to replace CentOS Linux, but it 
> is not designed for production use and can present many challenges in 
> enterprise environments."

Unfortunately that is true; they broke systemd-nspawn in CentOS Stream a while
ago. It's a bad surprise when you update, reboot, and your virtual servers
don't work anymore.
(systemd-nspawn is more like OpenVZ used to be, something between docker/podman
and qemu/kvm.)

Rocky Linux is an alternative for me, although some extra repos like GlusterFS
(Storage SIG) are still missing.

Best regards
Gerald



Re: Timo - is the v2.3.15 GCC limitation really necessarily or it's just a bug?

2021-07-29 Thread Gerald Galster
> You can use Developer Toolset.  It comes with newer GCC versions.  I
> don't know the precise CentOS repository layout, but there is a
> devtoolset-7-gcc package somewhere.


SCL is included with CentOS, see:

https://www.softwarecollections.org/en/scls/rhscl/devtoolset-7/
https://www.softwarecollections.org/en/scls/rhscl/devtoolset-8/

As CentOS 6 is end of life you could find packages here:
https://vault.centos.org/6.10/sclo/x86_64/rh/Packages/d/
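
Once installed, a shell with the newer compiler can be started via SCL, e.g.
(a sketch, assuming devtoolset-7):

$ scl enable devtoolset-7 bash
$ gcc --version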

Best regards
Gerald


Re: function for whitelisting IPs

2021-07-15 Thread Gerald Galster
> I run a personal email server. I can't emphasize enough how geofencing has 
> reduced the useless hacking on my email server. I only leave port 25 open to 
> the world. I use port 587.

Unfortunately that's not an option for commercial mail servers. You have to be
open to communicate with the world.
Geofencing might be inaccurate. Often this data is extracted from IP-net
registrations - the country where the company that registered the net resides
might not be where the servers are located.
There are services like MaxMind that are more accurate but not free.

> Firewalls use memory but tend to be very light on the CPU other than when you 
> first start up the firewall. I assume they take the deny list and create a 
> table in RAM to efficiently block IPs. I have found that

This depends on how your firewall works. A standard Linux firewall processes
iptables rules one after another. With a lot of rules and high traffic this
can cause very high CPU usage.
If you're using ipsets (like a hashmap) that is not the case. There's also
a difference between blocking single IPs and whole subnets.
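
As a sketch, an ipset-based block list (hypothetical set name and subnet) keeps
the iptables rule count constant no matter how many entries the set holds:

ipset create blocklist hash:net
ipset add blocklist 203.0.113.0/24
iptables -I INPUT -m set --match-set blocklist src -j DROP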

> dynamic IP blocking programs such as sshguard or fail2ban are a CPU burden 
> since that table needs to be refreshed as new IPs are added or removed so I 
> have stopped using them. Not that the programs themselves are CPU intensive, 
> but they cause the firewall to be CPU intensive. I am considering using 
> sshguard again but with a very high threshold to add an IP to the deny list.

It's not that CPU intensive when using ipsets. On the other hand, fail2ban
itself uses quite some CPU and memory (SQLite databases can get large).
I haven't been using fail2ban because of that, so I don't know if the situation
has improved.

> Regarding attempts to add 2FA by using RoundCube or similar web based email, 
> I think those programs just increase the attack surface. When I used a 
> hosting service I was hacked by an unpatched exploit in RoundCube.

Programs like fail2ban do not increase the attack surface under normal 
circumstances. They just scan logs and add firewall rules, which does not cost 
very much when using ipsets.

I'm very interested in which Roundcube bug that was, as I'm using Roundcube
myself. Can you have a look at the CVE list, please:

https://www.cvedetails.com/vulnerability-list/vendor_id-8905/Roundcube.html

Best regards
Gerald

Re: function for whitelisting IPs

2021-07-15 Thread Gerald Galster


> Do you have any examples of such a function and how/where it is used ?

> I have a better idea:
> Have a function for whitelisting IPs, possibly /24's or similar, where a
> login to roundcube or other webmail client (with 2FA) will add the IP onto a
> whitelist for that account.

For some it might be sufficient to just return the allow_nets field:

https://doc.dovecot.org/configuration_manual/authentication/allow_nets/
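
As a sketch, with a passwd-file passdb the field can be returned as an extra
field (hypothetical user and networks):

user@example.com:{PLAIN}secret::::::allow_nets=192.0.2.0/24,198.51.100.0/24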

Best regards
Gerald

Re: Feature request.

2020-10-09 Thread Gerald Galster


> I have to say I'm totally baffled since I do nothing when LetsEncrypt renews 
> the certificate. 
> 
> I know the cert has been updated because the mail clients asks me if I trust 
> the certificate. 
> 
> If it makes a difference I use the bash LetsEncrypt not the Python code.

I don't like all those dependencies certbot (Python) installs, but it works
flawlessly on CentOS.
On CentOS 8 you need to enable the EPEL *and* PowerTools repositories
(/etc/yum.repos.d/...).

I've attached a small perl script that I call via cron 30 minutes after certbot 
starts which reloads services if necessary.

Best regards
Gerald



#!/usr/bin/perl

my $reload;

# count certificates that were renewed within the last day
open(FF, "find /etc/letsencrypt/live -mtime -1 -name cert.pem |");
while(<FF>){
    chomp;
    next if !$_;
    $reload++;
}
close(FF);

# reload services only if at least one renewed certificate was found
if($reload){
    system("/usr/bin/systemctl reload httpd");
    system("/usr/bin/systemctl reload postfix");
    system("/usr/bin/systemctl reload dovecot");
}



Re: Version controlled (git) Maildir generated by Dovecot

2020-10-07 Thread Gerald Galster


>> -- Original e-mail --
>> From: Gerald Galster 
>> To: dovecot@dovecot.org
>> Date: 7. 10. 2020 14:36:43
>> Subject: Re: Version controlled (git) Maildir generated by Dovecot
>> 
>>> Could you please tell me / do you know if those dovecot* files have to be 
>>> also backed / archived?
>> 
>> IIRC the plain files from cur/ and new/ should suffice.
>> 
>> You can try it: create a new IMAP folder, then close your mail app.
>> Copy some mail files from another cur/ to this imap folder's cur/ directory.
>> Reopen your mail app and see if new mails are fetched.


> Or on a testing system I can just stop Dovecot, remove everything that 
> doesn't start with a number and restart Dovecot and connect Thunderbird and 
> see the results. Thank you.

This depends on what "everything" means.

I suggested creating a new IMAP folder because

- the Maildir is clean (necessary folders, correct permissions, no emails, ...)

- neither the server nor the client (Thunderbird) has cached anything yet
  (no cache invalidation, no re-download of old mails, ...)

- the client (Thunderbird) created and autosubscribed the folder
  (subscribed folders are also stored on the server, like ACLs, ...)

With your approach you may (or may not) test more than just restoring a few
mails inside an empty Maildir.

Best regards
Gerald



Re: Version controlled (git) Maildir generated by Dovecot

2020-10-07 Thread Gerald Galster


> Could you please tell me / do you know if those dovecot* files have to be 
> also backed / archived?

IIRC the plain files from cur/ and new/ should suffice.

You can try it: create a new IMAP folder, then close your mail app.
Copy some mail files from another cur/ to this imap folder's cur/ directory.
Reopen your mail app and see if new mails are fetched.

Best regards
Gerald

Re: Mail replication does not work

2020-09-02 Thread Gerald Galster
Hi,

> I have installed two Debian 10.5 (SSH server) systems
> and I would like replicate mails between them.
> But the replication is very unstable.

> mail_location = mbox:~/mail:INBOX=/var/mail/%u

I don't think mbox is suitable for replication. Try mdbox or sdbox.
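
For example, switching a user's storage to mdbox can be as simple as (a sketch;
the path is hypothetical, and existing mails would need to be converted, e.g.
with dsync):

mail_location = mdbox:~/mdbox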

Best regards
Gerald



Re: RHEL7/CentOS7 RPM of dovecot 2.3.11.3-3 seems to have dropped tcpwrap support

2020-08-21 Thread Gerald Galster
>> At a guess it was removed from the spec for el8 (which does not support
>> tcpwrap) and somehow got removed from el7 by accident.  The ghettoforge
>> dovecot23 packages have tcpwrap support for el7:
> 
> So is el8 truly incompatible with tcpwrap?  Or is it just too much 
> effort to continue suport for every feature that was ever in the system?

https://access.redhat.com/solutions/3906701

"The TCP Wrappers package has been deprecated in RHEL 7 and therefore it will 
not be available in RHEL 8 or later RHEL releases".

> If the former, might it be reasonable for a user to change the 8's in 
> the code below to 9's?

No, because there is no tcp_wrappers/tcp_wrappers-devel package in RHEL8 
anymore.

For reasons and alternatives, see: 
https://fedoraproject.org/wiki/Changes/Deprecate_TCP_wrappers

Best regards
Gerald


>>> We are looking into this, it was indeed removed from el7 by accident. RPM 
>>> macros can be quite tricky sometimes.
>> 
>> I have:
>> 
>> %if 0%{?rhel} < 8
>> BuildRequires: tcp_wrappers-devel
>> %endif
>> 
>> ... then later ...
>> 
>> %if 0%{?rhel} < 8
>>     --with-libwrap \
>> %endif
>> 
>> 
>> Peter



Re: Apple Mail Since upgrade to dovecot 2.3.x unable to connect

2020-08-17 Thread Gerald Galster
>> You need to set
>> 
>> ssl_min_protocol = TLSv1.2 # or TLSv1
> 
> Thanks, tried both, but unsuccessfully. Again, is there any debug
> setting that allows me to see what SSL version was requested? Without
> this, this is fumbling in the dark.

In the German version of Apple Mail go to menu "Fenster" / "Verbindung prüfen" (Window / Connection Doctor in English).

There you can check the connection and log all transactions.

I don't know how detailed this is in older Apple Mail versions, but you could 
try.

READ Aug 17 13:05:32.041 [kCFStreamSocketSecurityLevelTLSv1_2] -- host:mail.server.com -- port:587 -- socket:0x65ff1980 -- thread:0x6e5cb340
235 2.7.0 Authentication successful


Best regards
Gerald

Re: Stuck here - help please

2020-07-17 Thread Gerald Galster

> Thank you for the details. As per your suggestion, I have made the changes to 
> dovecot.conf file. Still I don't see any replication is happening. Please see 
> the dovecot.conf file.
> 
> I do not see "/etc/dovecot/conf.d/12-replication.conf" in my servers. So I 
> had put everything  in the dovecot.conf file only. Please see the complete 
> data in it below. The below data is in

There should be other config files in /etc/dovecot/conf.d/ - if
12-replication.conf is not there you can just create it, but putting the
settings in dovecot.conf will work too.
(It is easier to locate a specific configuration this way than by searching a
long dovecot.conf.)

> server A. In other server (server B) Also I have the same configuration, 
> except mail_replica line and it is pointing to the other server like, " 
> mail_replica = remote:vm...@bal3200dev001.testorg.com 
>  ". 
> 
> I have generated/configured the ssh keys also for vmail user in both servers. 
> Now When i manually ssh to the server, it is not asking for a password. 

That's good.

> userdb {
> args = uid=vmail gid=vmail home=/z1devenv/mail/virtual/%d/%n
> driver = static
> }

The replication wiki says:

Make sure that user listing is configured for your userdb, this is required by 
replication to find the list of users that are periodically replicated:
doveadm user '*'

Did you try that?

I think doveadm user '*' will not work with static userdb because no users are 
actually configured.

You could try 
https://serverfault.com/questions/939418/how-do-i-configure-doveadm-a-with-passdb

passdb {
args = scheme=sha512-crypt /etc/mail/passwd
driver = passwd-file
}

userdb {
default_fields = uid=vmail gid=vmail home=/var/vmail/%d/%n
args = /etc/mail/passwd
driver = passwd-file
}

I've never tested this as I have my users in a mysql database.

If it works you should see some output like the following from doveadm 
replicator:

# doveadm replicator status
Queued 'sync' requests        0
Queued 'high' requests        0
Queued 'low' requests         0
Queued 'failed' requests      0
Queued 'full resync' requests 0
Waiting 'failed' requests     0
Total number of known users   1234

# doveadm replicator status '*'
username        priority  fast sync  full sync  success sync  failed
l...@gcore.biz  none      00:00:28   05:52:55   00:00:28      -
 
Best regards
Gerald



Re: Stuck here - help please

2020-07-16 Thread Gerald Galster

> I have done the sync manually with "doveadm sync" command. But, I have not 
> configured the replication yet.

If you don't tell dovecot where to replicate, nothing gets replicated.

> I am looking at the below webpage for the replication. 
> 
> https://wiki.dovecot.org/Replication  
> 
> I am using the dovecot version  "2.2.36". I am confused with what needs to be 
> done after reading that page.
> 
> 1. They are talking about v2.3.1 and v2.2+. Which one do I need to follow? 
> Could you please give me more details on this? Providing some sample settings 
> will be more helpful for me, please. 

I don't understand your confusion. You are using 2.2.36, which is v2.2+ 
(meaning a version greater than 2.2).
The documentation states you need at least 2.3.1 if you want to use the 
noreplicate feature.
So you can't use that with 2.2.36, but as your goal is to replicate everything 
you don't need "noreplicate".
Besides that I can't see any difference in configuring replication for 2.2/2.3.

If you want to replicate emails with ssh you just have to follow the first 
section, the sample settings are
right on that page. It's basically copying everything from "mail_plugins = 
$mail_plugins notify replication"
to "replication_max_conns = 10" into a config file like 
/etc/dovecot/conf.d/12-replication.conf

You only have to change the following line to match your server/ssh setup:
mail_replica = remote:vm...@anotherhost.example.com

Then generate and configure ssh keys for user vmail (passwordless 
authentication) on both servers.
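
Condensed, the key lines from that wiki section look like this (hypothetical
replica host; see the wiki page for the full service/listener settings):

mail_plugins = $mail_plugins notify replication

plugin {
  mail_replica = remote:vmail@anotherhost.example.com
}

replication_max_conns = 10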

> 2. Also, do I need to set the replication on both of my servers the same and 
> as it is?

On server A) you should configure mail_replica = remote:vmail@server_B and
on server B) you should configure mail_replica = remote:vmail@server_A

If you skip B) and new mail arrives on B), it is not immediately synced to A).
In that case you would have to wait until a mail gets synced from A).
(Remember, sync is bidirectional.)

Best regards
Gerald

Re: Stuck here - help please

2020-07-16 Thread Gerald Galster
> I have 2 test servers with the below configuration.
>  
> ==
> Linux OS-  Red Hat Enterprise Linux Server release 7.7 (Maipo)
> Dovecot version -  2.2.36 (1f10bfa63)
> Postfix version -  2.10.1 
> == 
> 
> Trying to create High Availability. 
> 
> I have added both of the above servers behind a F5 load balancer. I have got 
> a Load Balancer FQDN "intl-dev-imaptest.testorg.com 
> ". I have enabled/opened the ports 
> (25/110/143/993/995) on the above  "intl-dev-imaptest.testorg.com 
> ".
> 
> When I send 10 emails to  "intl-dev-imaptest.testorg.com 
> ", then those 10 emails are getting 
> distributed between the above 2 backend servers (5 emails to each server). I 
> see those 5 emails each in both the servers.

You should see 10 emails on each server if replication is working: 5 emails 
that were directly delivered via loadbalancer and 5 emails from the other 
server via replication.

> From Outlook I have configured the email address using "POP and IMAP", when I 
> gave the IMAP server as  "intl-dev-imaptest.testorg.com 
> " ,then it shows only 5 emails from 
> server1 in outlook and after a few seconds/minutes, automatically it 
> shows/refreshes the other 5 emails from server2. But I am not seeing all the 
> 10 emails at the same time. why?

The load balancer does its job: sometimes the Outlook connection is forwarded
to server A, sometimes to server B, so you just see the mails on the respective
server. This is very bad. Your mail client is probably syncing and deleting
emails every time the connection is moved to the other server. As I suggested
in the other thread, you should at least configure some kind of IP stickiness
when using a load balancer, so that your mail client reaches the same backend.

The purpose of replication is that two servers, operating independently, have 
the same dataset. Your servers seem to have completely distinct datasets, which 
indicates replication is not working. Did you configure replication?
 
> So I tried the sync command. When I execute sync command like below from 
> server1, it reflects the same emails in other server2 also. Then I see the 
> same number of emails in both the servers. Is it not possible to access the 
> both servers emails at one time with the "sync" command? Do we need to run 
> this on all the email boxes on both servers? don't we miss/lose any emails 
> during this sync process multiple times?
> 
> "doveadm sync -f -u kish...@test.testorg.com 
>  remote:vm...@bal3200dev002.testorg.com 
> "
> 
> Is "replication" and "sync" are same?

Think of replication as a continuous sync. This has to be done every time an
email is delivered, which dovecot does automatically when replication is
configured.

You don't lose any emails because the replication/sync is bidirectional; it
copies from the respective other server what's missing. Of course this is not
instant but usually happens within seconds.


> Why are we not able to see all the emails at one time without the "sync" 
> command?

Probably because you did not configure replication?


> What is the best and easiest way to create High Availability with just 2 
> servers, like emails should travel to both servers equally and if one server 
> goes down also, another server should take care of the emails/functionality. 
> This is my requirement. 

It seems you just have to configure replication.

> My current real time environment: I have around 10 email domains and each 
> domain is having 10 imap emails. In total around 100 email boxes/addresses. 
> We receive around 50K emails in a day to those email addresses. We are using 
> the "Maildir" format in our environment. Want to move to the High 
> Availability option with 2 servers. 

See my other mail, it may be better to use mdbox instead of maildir.

Best regards
Gerald

Re: NFS vs Replication

2020-07-16 Thread Gerald Galster


> Some missing infos...
> 
> - As load balancer I'm using a pair of keepalived with simple setup and
> not the DNS
> - Load balancer algorithm is "Weighted Least-Connection"
> - About 20 domains and 3000 email
> - I'm monitoring my backend servers with poolmon
> - The backend servers are virtual machine (vmware) with datastore on
> "all flash" storage
> 
> based on yours notes, I think the better choice is Replication. Correct?

In my experience it's best to keep complexity low because the fewer
components you have, the fewer can fail. With replication you basically
have two independent servers that asynchronously sync emails.

While it would work with load balancers/keepalived/director, they are not
necessary. If this is the way you want to go, you should configure the
load balancer to always send the same source IP to the same backend
(IP stickiness). Mail clients open several connections in parallel,
and all of them should see the same data.

With DNS this happens automatically because IPs are rotated by resolvers
and the mail client gets the same IP for all its connections. Failover
is built in, as mail clients just connect to the second IP when the first
is not reachable.

Replication works reliably with mdbox/sdbox, but you should avoid maildir.

Best regards
Gerald


Re: NFS vs Replication

2020-07-15 Thread Gerald Galster


> I built an email system using a proxy / director pair (IMAP, POP3, LMTP)
> and a backend pair.
> 
> To have an HA system, I would like to understand if it is better to use
> an NFS export or replication to save emails and index files
> 
> NFS is provided by a NAS (in HA), while for replication I would use the
> local backend disks
> 
> Which of the two systems is more reliable? Are there any drawbacks for
> one or the other?

This decision is more about how many users you have in total and how you
can partition them.

A) 200 domains with 10 IMAP accounts each

For high availability two dovecot servers with replication are sufficient,
no director/nfs needed. Return both server ips via dns for imap.domain.com
and you get active/active load balancing for free.

There is no shared storage which means no locking problems.
Dovecot can use optimizations like mmap which is not possible with nfs.


B) 20 IMAP accounts, all within the same domain

You cannot partition by domain and a single server cannot handle the load.

Here imap.domain.com could return e.g. 5 ips via DNS that point to your 
directors.
The director's job is to send all connections of one particular user to the
same backend, i.e. Outlook at work, Thunderbird at home and K9 Mail on a
mobile phone could be active at the same time, but all are directed to the
same backend server. This way locking issues with nfs are avoided because
only one server is accessing the mailbox at a time.

IIRC you need to monitor your backend servers and add/remove them on failure.

If the NFS mount is not available on the backend, dovecot may create
a new (empty) mailbox, which could break things. You need to set permissions
in a way that this cannot happen.


C) like B) but with a static proxy mapping where users are assigned to a 
certain backend server by configuration, that could be replicated like A)
without nfs. 


While A) in principle has a higher performance due to local disks and
optimizations, B) can have a higher overall performance, as dedicated
storage appliances usually have a lot more disks (SSD caching, ...)
and 10G+ networking.

C) avoids nfs but may introduce more complexity when software like pacemaker
is used to provide failover.

See https://wiki2.dovecot.org/Director and https://wiki2.dovecot.org/NFS


Best regards
Gerald






Re: Dovecot and MySQL aborted connections.

2019-10-30 Thread Gerald Galster via dovecot


> We also spotted these sql connections getting aborted, upon upgrading MySQL 
> from 5.6 to 5.7. (Going back to 5.6 we don't see them!)

I have a mail server running MySQL 5.6.44, not very busy, that logs these
warnings. Another busy one running MySQL 5.6.45 does not log any warnings.

MySQL interactive-/wait_timeout is 28800 (8 hours), which is more than enough 
time to wait for a query. Connections are closed way earlier.

dovecot and postfix use the same database and mysql user, so I created a new 
user for dovecot.
syslog shows a few communication packet errors but these are all from postfix, 
none from dovecot so far.

It seems connections are closed after 60s of inactivity (vmail is postfix, 
vmail2 is dovecot):

mysql> select * from INFORMATION_SCHEMA.PROCESSLIST where USER like 'vmail%';
+-----+--------+-----------------+-------+---------+------+-------+------+---------+-----------+---------------+------+
| ID  | USER   | HOST            | DB    | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED | TID  |
+-----+--------+-----------------+-------+---------+------+-------+------+---------+-----------+---------------+------+
| 147 | vmail  | localhost       | vmail | Sleep   |    1 |       | NULL |     566 |         0 |             0 | 6664 |
| 148 | vmail  | localhost       | vmail | Sleep   |    1 |       | NULL |     564 |         0 |             0 | 6069 |
| 149 | vmail  | localhost       | vmail | Sleep   |    1 |       | NULL |     566 |         0 |             0 | 6037 |
| 151 | vmail2 | 127.0.0.2:35058 | vmail | Sleep   |   51 |       | NULL |   50872 |         1 |             1 | 6072 |
| 152 | vmail2 | 127.0.0.2:35060 | vmail | Sleep   |   60 |       | NULL |   59324 |         1 |             1 | 6025 |
| 150 | vmail  | localhost       | vmail | Sleep   |    1 |       | NULL |     565 |         0 |             0 | 6071 |
+-----+--------+-----------------+-------+---------+------+-------+------+---------+-----------+---------------+------+
6 rows in set (0.00 sec)

mysql> select * from INFORMATION_SCHEMA.PROCESSLIST where USER like 'vmail%';
+-----+--------+-----------------+-------+---------+------+-------+------+---------+-----------+---------------+------+
| ID  | USER   | HOST            | DB    | COMMAND | TIME | STATE | INFO | TIME_MS | ROWS_SENT | ROWS_EXAMINED | TID  |
+-----+--------+-----------------+-------+---------+------+-------+------+---------+-----------+---------------+------+
| 147 | vmail  | localhost       | vmail | Sleep   |    2 |       | NULL |    1775 |         0 |             0 | 6664 |
| 148 | vmail  | localhost       | vmail | Sleep   |    2 |       | NULL |    1772 |         0 |             0 | 6069 |
| 149 | vmail  | localhost       | vmail | Sleep   |    2 |       | NULL |    1774 |         0 |             0 | 6037 |
| 151 | vmail2 | 127.0.0.2:35058 | vmail | Sleep   |   52 |       | NULL |   52080 |         1 |             1 | 6072 |
| 150 | vmail  | localhost       | vmail | Sleep   |    2 |       | NULL |    1773 |         0 |             0 | 6071 |
+-----+--------+-----------------+-------+---------+------+-------+------+---------+-----------+---------------+------+
5 rows in set (0.00 sec)

Only vmail (postfix) has dropped connections (you need performance schema 
enabled):

mysql> SELECT ess.user, ess.host,
    ->   (a.total_connections - a.current_connections) - ess.count_star as not_closed,
    ->   ((a.total_connections - a.current_connections) - ess.count_star) * 100 /
    ->   (a.total_connections - a.current_connections) as pct_not_closed
    -> FROM performance_schema.events_statements_summary_by_account_by_event_name ess
    -> JOIN performance_schema.accounts a on (ess.user = a.user and ess.host = a.host)
    -> WHERE ess.event_name = 'statement/com/quit'
    ->   AND (a.total_connections - a.current_connections) > ess.count_star;
+-------+-----------+------------+----------------+
| user  | host      | not_closed | pct_not_closed |
+-------+-----------+------------+----------------+
| vmail | localhost |         11 |        15.4930 |
+-------+-----------+------------+----------------+
1 row in set (0.02 sec)


My setup is different concerning quotas; they are not stored in MySQL. So if
you don't use postfix with the same user/database, the source of your warnings
might be quota. For me it seems to be related to postfix.

Don't know if this helps, but there is a new option in 5.7:
https://dev.mysql.com/doc/refman/5.7/en/x-plugin-options-system-variables.html#sysvar_mysqlx_idle_worker_thread_timeout

Gerald


> Turning on mysql general query logging we can see it is Dovecot's mysql 
> connections that inquire about or update quota usage in particular:
> 
> 
> *** /logs//mysql.log ***
> 2019-10-30T10:52:22.624690-07:00  2 Connect   dovecot@localhost on 
> npomail using Socket
> 
> 2019-10-30T10:52:40.019780-07:00  2 Query SELECT bytes FROM 
> quota2 WHERE username = 'a@bla'
> 2019-10-30T10:52:40.020948-07:00  2 Query SELECT 

Re: Dovecot and MySQL aborted connections.

2019-10-28 Thread Gerald Galster via dovecot
Hi,

> Is anyone else using Dovecot (2.3.8) with MySQL (5.7) seeing a lot of these 
> in MySQL logs?
> 
> 2019-10-28T11:08:20.384428+02:00 58378 [Note] Aborted connection 58378 to db: 
> 'vmail' user: 'vmail' host: 'localhost' (Got an error reading communication 
> packets)
> 2019-10-28T11:10:09.821171+02:00 58420 [Note] Aborted connection 58420 to db: 
> 'vmail' user: 'vmail' host: 'localhost' (Got an error reading communication 
> packets)
> 2019-10-28T11:11:26.170015+02:00 58441 [Note] Aborted connection 58441 to db: 
> 'vmail' user: 'vmail' host: 'localhost' (Got an error reading communication 
> packets)
> 2019-10-28T11:13:14.091426+02:00 58459 [Note] Aborted connection 58459 to db: 
> 'vmail' user: 'vmail' host: 'localhost' (Got an error reading communication 
> packets)
> 
> They've plagued my logs for as long as I can remember. Is Dovecot not closing 
> connections to the database properly or something similar?

Is it possible that MySQL closed inactive connections?

SHOW VARIABLES LIKE '%timeout%';

mysqlx_wait_timeout = 3600
wait_timeout = 3600
mysqlx_interactive_timeout = 3600
interactive_timeout = 3600

Gerald

Re: dovecot disk space settings

2019-10-22 Thread Gerald Galster via dovecot
> 
> is there an option to leave some disk space free?
> Let's say, don't store new mails if the storage mount point has less
> than 1% free disk space.
> What's the way to go?
> 
> I don't want to restrict each mailbox size. It's just to prevent running
> out space completely.

Emails are often accepted by an MTA like Postfix before being handed over to
Dovecot. You could configure a Postfix policy service that checks your disk
usage and temporarily rejects new emails.

On the server you should check your filesystem: ext4 reserves 5% of storage
for root by default. If Dovecot does not need root to deliver mails, you might
already have some space left.

Or you might use Linux system quota. If all your mail users are in the same
group you can set a group quota.
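
A group quota sketch (hypothetical group and limit; block limits are in 1 KiB
units, and the filesystem must be mounted with quota support):

setquota -g vmail 0 950000000 0 0 /var/mail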

Another alternative would be to create a big dummy file that you can delete
if necessary. E.g. truncate -s 20g /var/mail/DELETEME   (man truncate)

Best regards
Gerald 

Re: Dovecot IMAPSieve user scripts

2019-09-27 Thread Gerald Galster via dovecot

> I wonder how to configure IMAPSieve with user scripts. I can't find much
> information on the internet.

try to enable managesieve:

service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
}


https://wiki2.dovecot.org/Pigeonhole/ManageSieve/Configuration 


Roundcube, or Thunderbird with a Sieve plugin, connects to your dovecot server
on port 4190. Users are authenticated with the same credentials used for
POP/IMAP and can upload their scripts.

Best regards
Gerald




Re: Random duplicated emails

2019-09-09 Thread Gerald Galster via dovecot

> I migrated our mail infrastructure to Dovecot on Ubuntu 18.04 some months 
> ago. It works fine, but recently some users told me that they sometime 
> receive duplicated emails. Same email content, same headers including 
> message-id.
> 
> I'm using two dovecot servers on two sites. Both server are in cluster. We 
> don't use shared folders. All users that reported this issue so far are using 
> the same server instance. The problematic  emails are coming from local users 
> on that instance too. The examples they given to me was emails with many 
> recipients (To/CC). A specific message can be received twice (or more) by 
> recipient A but only once by recipient B. I didn't see anything in the logs 
> about sieve rules that redirect emails to others recipients.
> 
> Where should I look to diagnostic this issue?
> 
> Thanks.
> 
> Server config:
> # 2.2.33.2 (d6601f4ec): /etc/dovecot/dovecot.conf
> # Pigeonhole version 0.4.21 (92477967)


Maybe you hit this problem:

https://dovecot.org/list/dovecot/2018-March/111422.html 


I don't know if it's fixed yet.

You could log mail events and check if it's related to dsync:

mail_plugins = ... mail_log

plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size subject
}

Best regards,
Gerald



Re: Dovecot and Apple's Mail.app not playing nicely?

2019-09-03 Thread Gerald Galster via dovecot


>>> On 3 Sep 2019, at 15.30, Coy Hile via dovecot  wrote:
>>> Hi all,
>>> Is there anything cute one has to take into account when using Dovecot with 
>>> users of Apple’s Mail.app?
>>> Behavior I’m seeing is that if I delete or move messages via Webmail 
>>> (Roundcube, Horde, or even ActiveSync
>>> via Mail.app on my phone), they do get moved or deleted.  However, if I 
>>> take the same actions in the desktop
>>> mail client, when logging in to the Webmail (or phone) app, I see the 
>>> messages still seeming to be in the Inbox.
>>> Is this known behavior? A peculiarity in Apple Mail?
>> I am using Apple Mail.App in Macbook, iPhone and iPad. And in fact
>> quite many of us internally are doing the same
>> and I can't see that behaviour. Mail.App correctly obeys \Deleted flag
>> and does not show the mails in folders.
>> Sami
> 
> That's exactly the converse of what I'm seeing. Mail.app sets the \Deleted 
> flag, or flags a message as Junk
> and moves it to the Junk folder. But when I login via, say, Roundcube, it 
> still shows in the inbox, though
> greyed out with a little (/) icon (which I assume is the deleted flag.)  If I 
> move or delete the message via
> the webmail client, it actually gets moved to Junk or Trash. (Or wherever I 
> moved it.)
> 
> FWIW, I think this applies only to deleted messages (where Mail.app may just 
> set a flag rather than actually moving
> the messages to Trash) and to Mail.app's own Junk processing. (Things flagged 
> as Spam and moved to Junk via Sieve do
> end up in the Junk folder.)

Apple Mail does not show messages anymore once the \Deleted flag is set. They
are moved to Trash only if a mailbox for deleted messages is set in the
preferences. Usually they are removed (expunged) from the server a month later.
Roundcube, on the other hand, displays \Deleted messages greyed out
(strikethrough in some versions) by default.

The ability to just mark messages as \Deleted is a nice feature. Imagine
deleting 10 small status mails without unnecessary I/O. It may stress your
disks (local and server) when that many mails are moved around before being
expunged.

Best regards
Gerald

Re: Dovecot and Apple's Mail.app not playing nicely?

2019-09-03 Thread Gerald Galster via dovecot
Hi Coy,

> Is there anything cute one has to take into account when using Dovecot with 
> users of Apple’s Mail.app? 
> Behavior I’m seeing is that if I delete or move messages via Webmail 
> (Roundcube, Horde, or even ActiveSync
> via Mail.app on my phone), they do get moved or deleted.  However, if I take 
> the same actions in the desktop
> mail client, when logging in to the Webmail (or phone) app, I see the 
> messages still seeming to be in the Inbox.
> 
> Is this known behavior? A peculiarity in Apple Mail? 

I don't see this behavior with Apple Mail 12.4 / MacOS 10.14.6 (using imap, as 
pop3 does not support folders).

You could configure mail_log_events and see what happens:

plugin {
  # Events to log. Also available: flag_change append
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  # Available fields: uid, box, msgid, from, subject, size, vsize, flags
  # size and vsize are available only for expunge and copy events.
  mail_log_fields = uid box msgid size subject
}

When I move an email into another folder with Apple Mail, it is moved on the
server immediately. Upon deletion it might get flagged as deleted and expunged
later (probably a month after). You can right-click on the folder and choose
"Erase Deleted Items" to remove them from the server, or configure the time
span in Mail's preferences.

Best regards
Gerald




Re: doveadm backup mdbox - initial copy slow

2019-08-30 Thread Gerald Galster via dovecot


> On 30.8.2019 12.33, Gerald Galster via dovecot wrote:
>> Hello,
>> 
>> when calling doveadm backup like in the following example, it seems to flush 
>> every email to disk
>> maxing out harddrives (iops) and resulting in very poor performance:
>> 
>> Calling doveadm -o plugin/quota= backup -u "statusma...@domain.com" 
>> "mdbox:/backup/vmail/domain.com/statusmails"
>> finished in 212 secs [changes:1800 copy:1800 delete:0 expunge:0]
>> 
>> The source mdbox holds 1800 small emails and uses about 20 MB disk space 
>> only.
>> 
>> I've tried -f (full sync) and -l 30 (locking the mailbox) which did not get 
>> faster, nor did doveadm sync -1.
>> 
>> When using doveadm import all mails are copied in less than 3 seconds:
>> time doveadm -o mail=mdbox:/backup/vmail/domain.com/statusmails import 
>> mdbox:/var/vmail/domain.com/statusmails "" all
>> real 0m2.605s
>> 
>> Are there any other options I could try to speed up doveadm backup, avoiding 
>> the flush after each email?
>> 
>> This is doveadm from dovecot 2.2 (>= 2.2.33). Does anyone get better results 
>> with 2.3 tree?
>> 
>> Best regards
>> Gerald
> 
> Try setting mail_fsync=never

Thanks Aki, that did the trick!

Calling /usr/bin/doveadm -o plugin/quota= -o mail_fsync=never backup -u 
"statusma...@domain.com" "mdbox:/backup/vmail/domain.com/statusmails"
finished in 2 secs [changes:1925 copy:1925 delete:0 expunge:0]

Best regards
Gerald

doveadm backup mdbox - initial copy slow

2019-08-30 Thread Gerald Galster via dovecot
Hello,

when calling doveadm backup like in the following example, it seems to flush
every email to disk, maxing out hard drives (IOPS) and resulting in very poor
performance:

Calling doveadm -o plugin/quota= backup -u "statusma...@domain.com" 
"mdbox:/backup/vmail/domain.com/statusmails"
finished in 212 secs [changes:1800 copy:1800 delete:0 expunge:0]

The source mdbox holds 1800 small emails and uses about 20 MB disk space only.

I've tried -f (full sync) and -l 30 (locking the mailbox) which did not get 
faster, nor did doveadm sync -1.

When using doveadm import all mails are copied in less than 3 seconds:
time doveadm -o mail=mdbox:/backup/vmail/domain.com/statusmails import 
mdbox:/var/vmail/domain.com/statusmails "" all
real    0m2.605s

Are there any other options I could try to speed up doveadm backup, avoiding 
the flush after each email?

This is doveadm from dovecot 2.2 (>= 2.2.33). Does anyone get better results 
with 2.3 tree?

Best regards
Gerald

Re: User / Pass SQL queries

2019-08-29 Thread Gerald Galster via dovecot
Hi Michael,

> Is there any reason Dovecot shows the 'user' variable (ie: u...@domain.tld) 
> being obtained in the password query and not the (more logical) user query?


maybe prefetching is configured:

https://wiki.dovecot.org/AuthDatabase/SQL 


Prefetching

If you want to avoid doing two SQL queries when logging in with IMAP/POP3, you 
can make the password_query return all the necessary userdb fields and use 
prefetch userdb to use those fields. If you're using Dovecot's deliver you'll 
still need to have the user_query working.

https://wiki.dovecot.org/UserDatabase/Prefetch 


Best regards
Gerald

Re: Replication issue 2.3.7

2019-07-13 Thread Gerald Galster via dovecot
Hello,

this sounds like the replication problem that occured after 2.2.33.2:

https://www.dovecot.nl/list/dovecot/2018-March/111422.html

As far as I remember there is no fix yet.

Best regards
Gerald



> On 13.07.2019 at 14:18, Günther J. Niederwimmer via dovecot wrote:
> 
> Hello,
> 
> Thanks for the info and workaround! 
> 
> I found I have the same problems after reading your mail :-(.
> 
> Am Samstag, 13. Juli 2019, 11:13:23 CEST schrieb Reio Remma via dovecot:
>> Hello!
>> 
>> I noticed these in the logs since upgrading from 2.3.6. to 2.3.7:
>> 
>> Jul 13 11:52:10 turin dovecot: doveadm: Error:
>> dsync-remote(r...@mrstuudio.ee): Error:
>> Exporting mailbox INBOX failed: Mailbox attribute
>> vendor/vendor.dovecot/pvt/server/sieve/files/MR lookup failed: Mailbox
>> attributes not enabled
>> Jul 13 11:52:11 turin dovecot: doveadm: Error:
>> dsync-remote(r...@mrstuudio.ee): Error:
>> Exporting mailbox INBOX failed: Mailbox attribute
>> vendor/vendor.dovecot/pvt/server/sieve/files/MR lookup failed: Mailbox
>> attributes not enabled
>> 
>> After turning on mailbox attributes these errors went away:
>> 
>> mail_attribute_dict = file:~/Maildir/dovecot-attributes
>> 
>> protocol imap {
>> imap_metadata = yes
>> }
>> 
>> But now the errors are replaced with (when deleting mail):
>> 
>> Jul 13 12:04:32 turin dovecot: imap(r...@mrstuudio.ee): Warning:
>> /home/vmail/mrstuudio.ee/reio/Maildir/dovecot-uidlist: Duplicate file
>> entry at line 2: 1563008644.M18534P25946.orc.mrstuudio.ee,S=4180,W=4262
>> (uid 23030 -> 23031) - retrying by re-reading from beginning
>> Jul 13 12:04:32 turin dovecot: imap(r...@mrstuudio.ee): Warning: Maildir
>> /home/vmail/mrstuudio.ee/reio/Maildir: Expunged message reappeared,
>> giving a new UID (old uid=23030,
>> file=1563008644.M18534P25946.orc.mrstuudio.ee,S=4180,W=4262:2,S)
>> 
>> The mail message reappears on the other side of dsync and eventually I
>> end up with 3 identical messages in trash after I've deleted them on
>> both sides.
>> 
>> Thanks for any advice,
>> Reio
> 
> 
> -- 
> mit freundliche Grüßen / best regards,
> 
>  Günther J. Niederwimmer
> 
> 



Re: High availability of Dovecot

2019-04-11 Thread Gerald Galster via dovecot



> On 11.04.2019 at 13:45, Patrick Westenberg via dovecot wrote:
> 
> Gerald Galster via dovecot schrieb:
> 
>> mail1.yourdomain.com IN A 192.168.10.1
>> mail2.yourdomain.com IN A 192.168.20.1
>>
>> mail.yourdomain.com  IN A 192.168.10.1
>> mail.yourdomain.com  IN A 192.168.20.1
>>
>>
>> mail1/mail2 is for direct connection (MTAs)
>>
>> Your users (outlook, thunderbird, ...) connect to mail.yourdomain.com
>> which returns the two ip addresses.
>>
>> In this scenario MUA just connects to mail.yourdomain.com
>> and randomly uses one of the two ips. You
>> can't control which one, but this gives you active/active loadbalancing.
>> In case one server is down the MUA just uses the other ip.
> 
> Are you sure that this is working?


yes, I'm running a two-node dsync cluster in production for a few years
without issues. The system was even working during a whole datacenter outage
because the nodes reside in different, distant locations. I wouldn't use a
filesystem like ceph across distant locations due to latency issues. dsync
replication is asynchronous, so there is no problem.

Most cluster systems that use drbd, ceph, keepalived, pacemaker, whatever are 
operated
within a single datacenter or datacenter park. If the datacenter goes down, your
cluster is not reachable anymore. This is a rare event but within 10-15 years 
it happens
to a lot of datacenters.

Best regards
Gerald




Re: Mail account brute force / harassment

2019-04-11 Thread Gerald Galster via dovecot


> On 11.04.2019 at 12:43, Marc Roos via dovecot wrote:
> 
> Please do not assume anything other than what is written, it is a 
> hypothetical situation
> 
> 
> A. With the fail2ban solution
>   - you 'solve' that the current ip is not able to access you
>   - it will continue bothering other servers and admins
>   - you get the next abuse host to give a try.
> 
> B. With 500GB dump
> - the owner of the attacking server (probably hacked) will notice and
> will be forced to take action.
> 
> 
> If abuse clouds are smart (most are) they would notice that attacking my
> servers will result in the loss of abuse nodes, hence they will not
> bother me anymore.
> 
> If every one would apply strategy B, the abuse problem would get less. 
> Don't you agree??

I disagree. If 100 servers "hack" your imap account and fetch 500GB, then
most likely your server is unreachable. If this is done over many servers,
then your rack switches become the bottleneck and uninvolved servers are
affected too.

Your solution may work where traffic is expensive and limited, but we're
heading in the other direction: you can rent a server for 50 bucks with
1 Gbit bandwidth and unmetered traffic, e.g. at hetzner.de.

Maybe you want to look into a solution like weakforced:

https://github.com/PowerDNS/weakforced 
Wforce is a project by Dovecot, PowerDNS and Open-Xchange

Best regards
Gerald



> 
> 
> 
> 
> 
> 
> -----Original Message-----
> From: Odhiambo Washington  
> Sent: Thursday, 11 April 2019 12:28
> To: Marc Roos
> Cc: dovecot
> Subject: Re: Mail account brute force / harassment
> 
> 
> 
> On Thu, 11 Apr 2019 at 13:24, Marc Roos via dovecot 
>  wrote:
> 
> 
> 
> 
  Say for instance you have someone trying to constantly access an
  account

  Has any of you made something creative like this:

  * configure that account to allow login with any password
  * link that account to something like /dev/zero that generates an
    infinite amount of messages (maybe send an archive of viruses?)
  * transferring TB's of data to this harassing client.

  I think it would be interesting to be able to do such a thing.
> 
> 
> Instead of being evil, just use fail2ban to address this problem :-)  
> 
> -- 
> 
> Best regards,
> Odhiambo WASHINGTON,
> Nairobi,KE
> +254 7 3200 0004/+254 7 2274 3223
> "Oh, the cruft.", grep ^[^#] :-)
> 
> 



Re: Mail account brute force / harassment

2019-04-11 Thread Gerald Galster via dovecot


> On 11.04.2019 at 12:28, Odhiambo Washington via dovecot wrote:
> 
> 
> 
> On Thu, 11 Apr 2019 at 13:24, Marc Roos via dovecot wrote:
> 
> 
> Say for instance you have someone trying to constantly access an
> account
>
> Has any of you made something creative like this:
>
> * configure that account to allow login with any password
> * link that account to something like /dev/zero that generates an infinite
>   amount of messages (maybe send an archive of viruses?)
> * transferring TB's of data to this harassing client.
>
> I think it would be interesting to be able to do such a thing.
> 
> 
> Instead of being evil, just use fail2ban to address this problem :-)  


fail2ban is a good solution. I don't see any benefit in granting access to
pop/imap as well. On the other hand, if you do this with smtp, your service is
probably abused for sending spam, which you could use to train your spam
filters :-)
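
For reference, a minimal fail2ban jail for dovecot could look like this (a
sketch only; jail/filter names, ports and the log path depend on your
distribution and fail2ban version):

# /etc/fail2ban/jail.local
[dovecot]
enabled  = yes
port     = pop3,pop3s,imap,imaps,submission,sieve
filter   = dovecot
logpath  = /var/log/maillog
maxretry = 5
bantime  = 3600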

Best regards
Gerald



Re: High availability of Dovecot

2019-04-11 Thread Gerald Galster via dovecot


> Am 11.04.2019 um 11:48 schrieb luckydog xf :
> 
> As your statement, nothing special is needed to do except setting up DNS MX
> records, right?

MX records are for incoming MAIL:

yourdomain.com IN MX 100 mail1.yourdomain.com
yourdomain.com IN MX 100 mail2.yourdomain.com

-> both priority 100 = 50/50 load balancing (globally, not when checked on a
single resolver!)

Then you need A records (plus AAAA records for IPv6):

mail1.yourdomain.com IN A 192.168.10.1
mail2.yourdomain.com IN A 192.168.20.1

mail.yourdomain.com  IN A 192.168.10.1
mail.yourdomain.com  IN A 192.168.20.1
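
You can verify the round-robin answer with dig; both addresses should be
returned, in varying order:

dig +short mail.yourdomain.com A
192.168.10.1
192.168.20.1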


mail1/mail2 are for direct connections (MTAs).

Your users (Outlook, Thunderbird, ...) connect to mail.yourdomain.com, which
resolves to the two ip addresses.

In this scenario the MUA just connects to mail.yourdomain.com and randomly uses
one of the two ips. You can't control which one, but this gives you
active/active loadbalancing. In case one server is down, the MUA just uses the
other ip. dsync replicates bi-directionally so that both servers are
up-to-date.

You don't need shared storage, every server is a copy of the other. If you want 
to use shared storage, then dsync is not for you because there is nothing to 
sync at that stage.

I would use shared storage only if you need to have more than two servers. The 
above setup has no locking problems and is performant due to local filesystems.
It depends on how many users you have and how much storage you need. You could
buy two 2U servers with 24 2.5" disks each (up to 96 disks with 4U), which may
be sufficient for your needs.

> User's mail store is running on shared storage, basically user's MUA connects 
> to primary MX , the backup one is used once Primary is down.

If you're not using Maildir, beware of locking issues with concurrent access:
they can corrupt the indices.

> It's a native HA of email system? I'll test those solution out.

Yes, it works well with small setups. For big setups you'd typically use 
dovecot director, shared storage, object storage ... but you need more servers 
and it is way more complex and expensive.

Best regards
Gerald

Re: High availability of Dovecot

2019-04-11 Thread Gerald Galster via dovecot


>  I'm going to deploy postfix + dovecot + CephFS( as Mail Storage). 
> Basically I want to use two servers for them, which  is kind of HA.

you may consider dovecot's built-in dsync replication, which works great with
two servers (though there is still one small bug that may duplicate mails upon
deletion via pop3, only under specific conditions)
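
A minimal sketch of such a two-node setup (host name, user and password are
placeholders; see https://wiki.dovecot.org/Replication for the full picture):

mail_plugins = $mail_plugins notify replication

service aggregator {
  fifo_listener replication-notify-fifo {
    user = vmail
  }
  unix_listener replication-notify {
    user = vmail
  }
}

service replicator {
  unix_listener replicator-doveadm {
    mode = 0600
    user = vmail
  }
}

service doveadm {
  inet_listener {
    port = 12345
  }
}

doveadm_port = 12345
doveadm_password = secret

plugin {
  mail_replica = tcp:anotherhost.example.com
}

The same configuration goes on both nodes, each pointing mail_replica at its
peer.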

> My idea is that using keepalived or Pacemaker to host a VIP, which could 
> fail over the other server once one is down. And I'll use Haproxy or Nginx to 
> schedule connections to one of those server based on source IP( Session 
> stickiness),  I'll use VIP as DNS record.etc, is my plan doable?
>I know MX could be server ones with different priority. But I think it 
> brings along shortage that DNS couldn't know Email server is up or down, it 
> just returns results to MUA, right?


DNS just returns your servers' ip addresses/mx records and does not know if
they are up or down. You could combine that with an external monitoring system
that modifies your dns entries, but this is overkill (keep the TTL in mind).
DNS resolvers return records in a round-robin fashion, so you get 50/50
active/active loadbalancing. SMTP copes with delivery errors very well (e.g.
greylisting is a temporary delivery error): MTAs just connect to the second MX
and try to deliver the mail there. Even MUAs like Outlook, Apple Mail or
Thunderbird are capable of using more than one ip - if the connection fails,
they connect to the second ip returned via DNS, without any user interaction.

Best regards
Gerald

Re: ssl_cert: Can't open file permission denied

2019-04-10 Thread Gerald Galster via dovecot


> Am 10.04.2019 um 11:59 schrieb Laura Smith via dovecot :
> 
> 
> On Wednesday, April 10, 2019 10:52 AM, Aki Tuomi via dovecot 
>  wrote:
> 
>> On 10.4.2019 12.36, Laura Smith via dovecot wrote:
>> 
>>> Dovecot 2.3.3 (dcead646b)
>>> openSUSE Leap 15.0
>>> I am getting a weird error message:
>>> Fatal: Error in configuration file /etc/dovecot/local.conf line 16: 
>>> ssl_cert: Can't open file /etc/foobar/ssl/certbot.pem: Permission denied
>>> I have tried the following:
>>> 
>>> -   chmod -R 655 /etc/foobar/ssl (/etc/foobar is 755)
>>> -   create "ssl_users" group add dovecot to it chown -R dovecot:ssl_users 
>>> /etc/foobar/ssl
>>> 
>>> How can I fix this ? There's no obvious solution ?
>> 
>> Are you by chance using selinux? If you are, you might need to relabel
>> the files.
>> 
>> Aki
> 
> This is openSUSE, not Centos, I don't think it even comes with selinux.

Maybe apparmor?

https://git.ispconfig.org/ispconfig/ispconfig3/issues/5071 


 > OpenSuSE and apparmor expect dovecot certs to be in /etc/ssl/private
 > ISPConfig setup script expects SSL certs to be in /etc/postfix but apparmor 
 > prevents dovecot from reading them in that directory

Otherwise you could log in as the dovecot user (temporarily change the shell to
bash if needed: usermod -s /bin/bash dovecot) and see if you can access the
certificate. Check all directory/file permissions, including acls (man
getfacl), along the path.
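
For example (namei from util-linux prints the permissions of every path
component in one go):

namei -l /etc/foobar/ssl/certbot.pem
getfacl /etc/foobar/ssl/certbot.pem
su -s /bin/bash dovecot -c 'head -1 /etc/foobar/ssl/certbot.pem'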

Best regards
Gerald

Re: protocols: Unknown protocol: sieve

2019-04-10 Thread Gerald Galster via dovecot



> Am 10.04.2019 um 11:24 schrieb luckydog xf via dovecot :
> 
> Hi, list,
> 
> I downloaded dovecot-2.3-pigeonhole-0.5.5.tar.gz and installed it, after 
> I enabled 
> 
> #/etc/dovecot/conf.d/20-managesieve.conf
> protocols = $protocols sieve
> 
> it said " protocols: Unknown protocol: sieve"
> 
> What's wrong?"

Do you have something like

protocol sieve {
 #managesieve_max_line_length = 65536 
 ...
}

in any of your config files?
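
You can also check whether pigeonhole actually installed its plugins where your
dovecot expects them (the module path below is an assumption, it depends on
your build prefix):

doveconf -n | grep -i sieve
ls /usr/lib/dovecot/modules | grep -i sieve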

Best regards
Gerald

Re: CentOS Repository broken ?

2019-03-30 Thread Gerald Galster via dovecot

> All is correct on my system also a yum clean is make.
> 
> But the dovecot = 2.3.5.1-1 is MISSING on the repositories !!

Sorry, this is wrong, it's there. Have a look at: 
http://repo.dovecot.org/ce-2.3-latest/centos/7/RPMS/x86_64/2.3.5.1-1_ce/ 


dovecot-2.3.5.1-1.x86_64.rpm   25-Mar-2019 09:29   4649368

If yum doesn't work for you, download the rpm and install it with yum locally.
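
For example:

curl -O http://repo.dovecot.org/ce-2.3-latest/centos/7/RPMS/x86_64/2.3.5.1-1_ce/dovecot-2.3.5.1-1.x86_64.rpm
yum localinstall dovecot-2.3.5.1-1.x86_64.rpm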

> Yum like to update to the new packages dovecot-lua and dovecot-imaptest but 
> can't found the latest passed dovecot Version  2.3.5.1-1
> 
> The packages dovecot-lua and dovecot-imaptes 2.3.5.1-1 are found but not the 
> dovecot version??

The repo works as expected, I've checked it on a clean system. See below.

Best Regards
Gerald


[root@localhost ~]# date
Sat Mar 30 12:38:06 CET 2019

[root@localhost ~]# yum install dovecot
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.fra10.de.leaseweb.net
 * epel: ftp.nluug.nl
 * extras: centos.mirror.iphh.net
 * nux-dextop: mirror.li.nux.ro
 * updates: mirror.checkdomain.de
Resolving Dependencies
--> Running transaction check
---> Package dovecot.x86_64 2:2.3.5.1-1 will be installed
--> Processing Dependency: libclucene-core.so.1()(64bit) for package: 
2:dovecot-2.3.5.1-1.x86_64
--> Processing Dependency: libclucene-shared.so.1()(64bit) for package: 
2:dovecot-2.3.5.1-1.x86_64
--> Running transaction check
---> Package clucene-core.x86_64 0:2.3.3.4-11.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved


 Package        Arch      Version          Repository            Size

Installing:
 dovecot        x86_64    2:2.3.5.1-1      dovecot-2.3-latest    4.4 M
Installing for dependencies:
 clucene-core   x86_64    2.3.3.4-11.el7   base                  528 k

Transaction Summary

Install  1 Package (+1 Dependent package)

Total download size: 4.9 M
Installed size: 16 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): clucene-core-2.3.3.4-11.el7.x86_64.rpm          | 528 kB  00:00:00
(2/2): dovecot-2.3.5.1-1.x86_64.rpm                    | 4.4 MB  00:00:00

Total                                       5.0 MB/s  | 4.9 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : clucene-core-2.3.3.4-11.el7.x86_64        1/2
  Installing : 2:dovecot-2.3.5.1-1.x86_64                2/2
  Verifying  : clucene-core-2.3.3.4-11.el7.x86_64        1/2
  Verifying  : 2:dovecot-2.3.5.1-1.x86_64                2/2

Installed:
  dovecot.x86_64 2:2.3.5.1-1


Dependency Installed:
  clucene-core.x86_64 0:2.3.3.4-11.el7  


Complete!
[root@localhost ~]# 

Check wich version is installed:

[root@localhost ~]# rpm -q dovecot
dovecot-2.3.5.1-1.x86_64
[root@localhost ~]# 




Re: CentOS Repository broken ?

2019-03-30 Thread Gerald Galster via dovecot



> Am 30.03.2019 um 00:06 schrieb Peter via dovecot :
> 
> On 30/03/19 1:31 AM, Günther J. Niederwimmer via dovecot wrote:
>> I have a CentOS 7 server with dovecot Repository enabled.
>> But it is not possible to Update with Yum Update
>> I have this Error?
>> The dovecot package is missing?
>> Fehler: Paket: 2:dovecot-imaptest-2.3.5.1-1.x86_64 (dovecot-2.3-latest)
>> Benötigt: dovecot = 2:2.3.5.1-1
>> Installiert: 2:dovecot-2.3.5-1.x86_64 (installed)
>> dovecot = 2:2.3.5-1
>> Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>> dovecot = 1:2.2.36-3.el7
>> Fehler: Paket: 2:dovecot-lua-2.3.5.1-1.x86_64 (dovecot-2.3-latest)
>> Benötigt: dovecot = 2:2.3.5.1-1
>> Installiert: 2:dovecot-2.3.5-1.x86_64 (installed)
>> dovecot = 2:2.3.5-1
>> Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>> dovecot = 1:2.2.36-3.el7
> 
> From what I can gather (since it's not in English) yum is complaining
> because you have dovecot.i686 installed (from CentOS base). The CE version
> of dovecot doesn't come in 32 bit, but you shouldn't need it, so just remove
> it:
> 
> yum remove dovecot.i686
> yum update

Maybe there is something wrong with his local yum/repo configuration. CentOS 7 
usually doesn't mix 32bit and 64bit packages anymore.

Installiert: 2:dovecot-2.3.5-1.x86_64 (Installiert = installed) -> 
dovecot-2.3.5-1 is already installed as a 64bit package (this is an epoch 2 
package that supersedes 2.2.36-3, which is epoch 1).

Benötigt: dovecot = 2:2.3.5.1-1  (Benötigt = this is the required version that 
should be installed)

Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base) (Verfügbar = this version is 
available) -> but this is epoch 1 and i686, although dovecot-2.3.5-1.x86_64 has 
already been installed.

Therefore my guess is that the dovecot-2.3-latest repo is not available/enabled 
on his side.

Best regards
Gerald

Re: CentOS Repository broken ?

2019-03-29 Thread Gerald Galster via dovecot
you could also try to download the rpms and update them with yum locally:

http://repo.dovecot.org/ce-2.3-latest/centos/7/RPMS/x86_64/2.3.5.1-1_ce/



> Am 29.03.2019 um 14:54 schrieb Gerald Galster via dovecot 
> :
> 
> or something has been cached, try: yum clean all and then yum update
> 
> 
> 
>> Am 29.03.2019 um 14:49 schrieb Gerald Galster via dovecot 
>> :
>> 
>> Hello,
>> 
>> can you please check if the dovecot repository is enabled?
>> Is it mentioned at the top of the output when you do "yum update":
>> 
>> (1/10): base/7/x86_64/group_gz
>> (2/10): updates/7/x86_64/primary_db
>> ...
>> 
>> 
>> Verfügbar / available:
>> 
>>> Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>> 
>> 
>> There should be something available from the dovecot-2.3-latest repo.
>> 
>> Or try: yum update --enablerepo=dovecot-2.3-latest
>> 
>> Best regards
>> Gerald
>> 
>> 
>>> Am 29.03.2019 um 13:31 schrieb Günther J. Niederwimmer via dovecot 
>>> :
>>> 
>>> Hello,
>>> 
>>> I have a CentOS 7 server with dovecot Repository enabled.
>>> 
>>> But it is not possible to Update with Yum Update
>>> I have this Error?
>>> The dovecot package is missing?
>>> 
>>> --> Transaktionsprüfung wird ausgeführt
>>> ---> Paket dovecot-imaptest.x86_64 2:2.3.5-1 markiert, um aktualisiert zu 
>>> werden
>>> ---> Paket dovecot-imaptest.x86_64 2:2.3.5.1-1 markiert, um eine 
>>> Aktualisierung zu werden
>>> --> Abhängigkeit dovecot = 2:2.3.5.1-1 wird für Paket 2:dovecot-
>>> imaptest-2.3.5.1-1.x86_64 verarbeitet
>>> ---> Paket dovecot-lua.x86_64 2:2.3.5-1 markiert, um aktualisiert zu werden
>>> ---> Paket dovecot-lua.x86_64 2:2.3.5.1-1 markiert, um eine Aktualisierung 
>>> zu 
>>> werden
>>> --> Abhängigkeit dovecot = 2:2.3.5.1-1 wird für Paket 2:dovecot-
>>> lua-2.3.5.1-1.x86_64 verarbeitet
>>> --> Abhängigkeitsauflösung beendet
>>> Fehler: Paket: 2:dovecot-imaptest-2.3.5.1-1.x86_64 (dovecot-2.3-latest)
>>>  Benötigt: dovecot = 2:2.3.5.1-1
>>>  Installiert: 2:dovecot-2.3.5-1.x86_64 (installed)
>>>  dovecot = 2:2.3.5-1
>>>  Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>>>  dovecot = 1:2.2.36-3.el7
>>> Fehler: Paket: 2:dovecot-lua-2.3.5.1-1.x86_64 (dovecot-2.3-latest)
>>>  Benötigt: dovecot = 2:2.3.5.1-1
>>>  Installiert: 2:dovecot-2.3.5-1.x86_64 (installed)
>>>  dovecot = 2:2.3.5-1
>>>  Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>>>  dovecot = 1:2.2.36-3.el7
>>> 
>>> Thanks for repair.
>>> 
>>> -- 
>>> mit freundliche Grüßen / best regards,
>>> 
>>> Günther J. Niederwimmer
>>> 
>>> 
>> 
> 



Re: CentOS Repository broken ?

2019-03-29 Thread Gerald Galster via dovecot
or something has been cached, try: yum clean all and then yum update



> Am 29.03.2019 um 14:49 schrieb Gerald Galster via dovecot 
> :
> 
> Hello,
> 
> can you please check if the dovecot repository is enabled?
> Is it mentioned at the top of the output when you do "yum update":
> 
> (1/10): base/7/x86_64/group_gz
> (2/10): updates/7/x86_64/primary_db
> ...
> 
> 
> Verfügbar / available:
> 
>> Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
> 
> 
> There should be something available from the dovecot-2.3-latest repo.
> 
> Or try: yum update --enablerepo=dovecot-2.3-latest
> 
> Best regards
> Gerald
> 
> 
>> Am 29.03.2019 um 13:31 schrieb Günther J. Niederwimmer via dovecot 
>> :
>> 
>> Hello,
>> 
>> I have a CentOS 7 server with dovecot Repository enabled.
>> 
>> But it is not possible to Update with Yum Update
>> I have this Error?
>> The dovecot package is missing?
>> 
>> --> Transaktionsprüfung wird ausgeführt
>> ---> Paket dovecot-imaptest.x86_64 2:2.3.5-1 markiert, um aktualisiert zu 
>> werden
>> ---> Paket dovecot-imaptest.x86_64 2:2.3.5.1-1 markiert, um eine 
>> Aktualisierung zu werden
>> --> Abhängigkeit dovecot = 2:2.3.5.1-1 wird für Paket 2:dovecot-
>> imaptest-2.3.5.1-1.x86_64 verarbeitet
>> ---> Paket dovecot-lua.x86_64 2:2.3.5-1 markiert, um aktualisiert zu werden
>> ---> Paket dovecot-lua.x86_64 2:2.3.5.1-1 markiert, um eine Aktualisierung 
>> zu 
>> werden
>> --> Abhängigkeit dovecot = 2:2.3.5.1-1 wird für Paket 2:dovecot-
>> lua-2.3.5.1-1.x86_64 verarbeitet
>> --> Abhängigkeitsauflösung beendet
>> Fehler: Paket: 2:dovecot-imaptest-2.3.5.1-1.x86_64 (dovecot-2.3-latest)
>>   Benötigt: dovecot = 2:2.3.5.1-1
>>   Installiert: 2:dovecot-2.3.5-1.x86_64 (installed)
>>   dovecot = 2:2.3.5-1
>>   Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>>   dovecot = 1:2.2.36-3.el7
>> Fehler: Paket: 2:dovecot-lua-2.3.5.1-1.x86_64 (dovecot-2.3-latest)
>>   Benötigt: dovecot = 2:2.3.5.1-1
>>   Installiert: 2:dovecot-2.3.5-1.x86_64 (installed)
>>   dovecot = 2:2.3.5-1
>>   Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>>   dovecot = 1:2.2.36-3.el7
>> 
>> Thanks for repair.
>> 
>> -- 
>> mit freundliche Grüßen / best regards,
>> 
>> Günther J. Niederwimmer
>> 
>> 
> 



Re: CentOS Repository broken ?

2019-03-29 Thread Gerald Galster via dovecot
Hello,

can you please check if the dovecot repository is enabled?
Is it mentioned at the top of the output when you do "yum update":

(1/10): base/7/x86_64/group_gz
(2/10): updates/7/x86_64/primary_db
...


Verfügbar / available:

> Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)


There should be something available from the dovecot-2.3-latest repo.

Or try: yum update --enablerepo=dovecot-2.3-latest

Best regards
Gerald


> Am 29.03.2019 um 13:31 schrieb Günther J. Niederwimmer via dovecot 
> :
> 
> Hello,
> 
> I have a CentOS 7 server with dovecot Repository enabled.
> 
> But it is not possible to Update with Yum Update
> I have this Error?
> The dovecot package is missing?
> 
> --> Transaktionsprüfung wird ausgeführt
> ---> Paket dovecot-imaptest.x86_64 2:2.3.5-1 markiert, um aktualisiert zu 
> werden
> ---> Paket dovecot-imaptest.x86_64 2:2.3.5.1-1 markiert, um eine 
> Aktualisierung zu werden
> --> Abhängigkeit dovecot = 2:2.3.5.1-1 wird für Paket 2:dovecot-
> imaptest-2.3.5.1-1.x86_64 verarbeitet
> ---> Paket dovecot-lua.x86_64 2:2.3.5-1 markiert, um aktualisiert zu werden
> ---> Paket dovecot-lua.x86_64 2:2.3.5.1-1 markiert, um eine Aktualisierung zu 
> werden
> --> Abhängigkeit dovecot = 2:2.3.5.1-1 wird für Paket 2:dovecot-
> lua-2.3.5.1-1.x86_64 verarbeitet
> --> Abhängigkeitsauflösung beendet
> Fehler: Paket: 2:dovecot-imaptest-2.3.5.1-1.x86_64 (dovecot-2.3-latest)
>Benötigt: dovecot = 2:2.3.5.1-1
>Installiert: 2:dovecot-2.3.5-1.x86_64 (installed)
>dovecot = 2:2.3.5-1
>Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>dovecot = 1:2.2.36-3.el7
> Fehler: Paket: 2:dovecot-lua-2.3.5.1-1.x86_64 (dovecot-2.3-latest)
>Benötigt: dovecot = 2:2.3.5.1-1
>Installiert: 2:dovecot-2.3.5-1.x86_64 (installed)
>dovecot = 2:2.3.5-1
>Verfügbar: 1:dovecot-2.2.36-3.el7.i686 (base)
>dovecot = 1:2.2.36-3.el7
> 
> Thanks for repair.
> 
> -- 
> mit freundliche Grüßen / best regards,
> 
>  Günther J. Niederwimmer
> 
> 



CVE-2019-7524 backport patch for 2.2.33.2

2019-03-28 Thread Gerald Galster via dovecot
Hello Aki,

I'm currently stuck with 2.2.33.2 as 2.2.36 still duplicates mails after pop3 
deletion on a two node dsync cluster.

Therefore I've created a small patch and it seems only these two files are 
affected:

dovecot-2.2.36.3/src/lib-storage/index/index-pop3-uidl.c
dovecot-2.2.36.3/src/plugins/fts/fts-api.c

Please correct me if I have missed something.

Best regards
Gerald



dovecot-CVE-2019-7524-2.2.36-1-3.patch
Description: Binary data


Re: Removing a mailbox from a dovecot cluster

2019-03-08 Thread Gerald Galster via dovecot


> Am 08.03.2019 um 14:46 schrieb Francis :
> 
> Le jeu. 7 mars 2019 à 17:03, Gerald Galster via dovecot  <mailto:dovecot@dovecot.org>> a écrit :
>   
> Why does "doveadm replicator status " not return?
> 
> I've tried the following commands on a replicated server (2.2.33.2), which 
> all returned immediately:
> 
> [root@server ~]# doveadm replicator status u...@domain.com
> username priority fast sync full sync success sync failed
> u...@domain.com  none 12:30:32  12:30:32  12:30:31 -
> 
> [root@server ~]# doveadm replicator remove u...@domain.com
> 
> [root@server ~]# doveadm replicator status u...@domain.com
> username priority fast sync full sync success sync failed
>  (no additional output as replication is stopped)
> 
> 
> Hi,
> 
> Sorry, I didn't express myself correctly. The output you see is the one I see 
> too when the replication is off. When I say it doesn't return anything, I 
> wanted to say it doesn't print anything else other than the header line. The 
> replication has been disabled for more than 12 hours and I just sent an email 
> to this account and the replication switched on by itself again. I see 
> nothing about the replication in the logs.


Maybe someone from the dovecot team can tell more about the internals on why 
and when replication is activated again.

I'm solving that problem at the mta level: the address is inserted into
postfix's access table where delivery is denied, so emails won't reach dovecot.
Maybe you can do something similar, or set a flag in ldap that your mta can use
to reject mails.
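
For example (table path and entry are placeholders; check_recipient_access must
reference the same table in main.cf):

# /etc/postfix/access
u...@domain.com   REJECT mailbox has been removed

# rebuild the hash table after editing
postmap /etc/postfix/access

# main.cf
smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/access, ...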

Best regards
Gerald

Re: Removing a mailbox from a dovecot cluster

2019-03-07 Thread Gerald Galster via dovecot


> Am 07.03.2019 um 20:43 schrieb Francis :
> 
> Le mar. 5 mars 2019 à 10:08, Gerald Galster via dovecot  <mailto:dovecot@dovecot.org>> a écrit :
> 
> you could try to stop replication for the user you are deleting:
> doveadm replicator remove [-a replicator_socket_path] username
> 
> 
> Good idea! But I have a problem. I tried to stop the replication (doveadm 
> replicator remove ) for an user on both server. Verified with 
> "doveadm replicator status ", it doesn't return, so I assume the 
> replicator is off for this account. Then I try to send an email to this 
> account to verify the replication is really stopped, but it activate again by 
> itself? I receive the mail on both server and If I type "doveadm replicator 
> status ", it seem like replication is back on.

Why does "doveadm replicator status " not return?

I've tried the following commands on a replicated server (2.2.33.2), which all 
returned immediately:

[root@server ~]# doveadm replicator status u...@domain.com
username priority fast sync full sync success sync failed
u...@domain.com  none 12:30:32  12:30:32  12:30:31 - 

[root@server ~]# doveadm replicator remove u...@domain.com

[root@server ~]# doveadm replicator status u...@domain.com
username priority fast sync full sync success sync failed
 (no additional output as replication is stopped)

[root@server ~]# doveadm replicator add u...@domain.com

[root@server ~]# doveadm replicator status u...@domain.com
username priority fast sync full sync success sync failed
u...@domain.com  none 00:00:02  00:00:02  00:00:01 - 
 (replication working again)


Maybe replication had not been stopped yet on your server.


Best regards
Gerald



Re: Removing a mailbox from a dovecot cluster

2019-03-05 Thread Gerald Galster via dovecot


> Am 04.03.2019 um 22:19 schrieb Francis via dovecot :
> 
> Le lun. 4 mars 2019 à 12:48, Gerald Galster via dovecot <dovecot@dovecot.org> a écrit :
> 
> Hallo Francis,
> 
> have you tried removing the account from your ldap? If dovecot has no 
> information about a particular user, it won't replicate.
> 
> Then you would have to delete the mailbox (on both cluster nodes) from the 
> filesystem (rm -rf /path/to/mailbox)
> During testing you could move the mailbox somewhere else instead of deleting 
> it, just in case something does not work as expected.
> 
> Deleting files on another server could be automated with ssh (ssh keys).
> 

> I'm also using single instance storage for attachments. Because of that, I 
> think I can't just remove the mdbox storage with rm because I'll be stuck 
> with attachments from removed mailboxes. Am I wrong?

you may be right. I don't know if any tools like doveadm deduplicate will check 
for orphans. 
On a new server you could disable sis and use a filesystem with deduplication 
instead (e.g. vdo)

> This is why I first use doveadm flags/expunge to mark as removed all messages 
> then I use doveadm purge to remove them from storage. I can't use theses 
> commands on deleted/disabled user, I get an error saying the user cannot be 
> found, so I can't remove them from LDAP first.


you could try to stop replication for the user you are deleting:
doveadm replicator remove [-a replicator_socket_path] username

Best regards
Gerald

Re: Removing a mailbox from a dovecot cluster

2019-03-04 Thread Gerald Galster via dovecot


> Am 04.03.2019 um 16:35 schrieb Francis via dovecot :
> 
> Le jeu. 28 févr. 2019 à 11:18, Francis a écrit :
> Le mar. 26 févr. 2019 à 22:58, David Myers a écrit :
> Hello Francis,
> 
> I wonder if this is due to how a cluster is configured to function internally.
> 
> Tell us more about the cluster, is it one of those ‘fancy pants’ high 
> availability, auto backup heart beat things, or is it more a traditional 
> multi server (master slave style) setup. 
> 
> Either way you may need to disconnect the servers from one another and delete 
> the offending files / directories either via dove or or via the os (although 
> reading your original email it sounds like you are already attempting this).
> 
> If you have a fancy cluster this may actually be more difficult than it 
> sounds and have interesting (unwanted) side effects, also the underlying 
> database (if you are storing emails that way) may have a method to remove data
> 
> I assume you are keeping back up copies of all those emails somewhere, just 
> in case you need them in the future. 
> 
> See this wiki article to better understand what I mean by the ‘fancy pants’ 
> clusters :
> https://en.m.wikipedia.org/wiki/High-availability_cluster
> They sound very cool, but I suspect are overkill for a mail server, unless
> your database is already inside one, then it would make sense I guess.
> 
> 
> Hello,
> 
> I just use the cluster/replication functionality integrated into dovecot, 
> nothing more. There is no database involved. I use LDAP for the 
> authentication. The mails are stored locally on each server and replicated 
> with the replication feature of dovecot.
> 
> I followed this wiki article: https://wiki.dovecot.org/Replication 
> 
> 
> 
> Hello,
> 
> So nobody use the replication feature? or nobody never ever remove a mailbox? 
> or maybe my question is so dumb and I should RTFM something? :) 
> 
> Do you need more information to help me to debug that issue?
> 
> Thanks.



Hallo Francis,

have you tried removing the account from your ldap? If dovecot has no 
information about a particular user, it won't replicate.

Then you would have to delete the mailbox (on both cluster nodes) from the 
filesystem (rm -rf /path/to/mailbox)
During testing you could move the mailbox somewhere else instead of deleting 
it, just in case something does not work as expected.

Deleting files on another server could be automated with ssh (ssh keys).
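
For example (mailbox path and peer host name are assumptions):

rm -rf /var/mail/users/username
ssh node2 'rm -rf /var/mail/users/username'

If you script this, use a dedicated ssh key with a forced command, so a
compromised node cannot run arbitrary commands on its peer.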

Best regards,
Gerald

Re: new Centos server install yum dependancy error

2019-02-23 Thread Gerald Galster via dovecot


> Am 23.02.2019 um 03:33 schrieb Voytek Eymont via dovecot 
> :
> 
> 
> 
> On Sat, February 23, 2019 10:41 am, Gerald Galster via dovecot wrote:
> 
>> 
>> 
>> you can't install it yet because dovecot-2.3.4-2.x86_64 is not shown,
>> probably due to the priority protection plugin:
>> 
>> -> 226 packages excluded due to repository priority protections
>> 
>> 
>> Try disabling it, see:
>> 
>> 
>> https://serverfault.com/questions/312472/what-does-that-mean-packages-excluded-due-to-repository-priority-protections
>> 
>> Afterwards you should see all dovecot packages and can install version
>> 2.3.4 again
> 
> 
> Gerald,
> 
> thank you again
> 
> in the meanwhile, I've tried specifying the repo as in 'yum --enablerepo..',
> perhaps I didn't do it correctly as I got errors, BUT, following your
> advice/link, disabling r-p-p, now I see:
> 
> # yum install  dovecot dovecot-devel dovecot-mysql dovecot-pigeonhole
> Loaded plugins: fastestmirror, langpacks
> Loading mirror speeds from cached hostfile
> * base: mirror.nsw.coloau.com.au
> * epel: mirror.nsw.coloau.com.au
> * extras: mirror.ventraip.net.au
> * remi-safe: remi.conetix.com.au
> * updates: mirror.nsw.coloau.com.au
> Resolving Dependencies
> --> Running transaction check
> ---> Package dovecot.x86_64 2:2.3.4.1-1 will be installed
> ---> Package dovecot-devel.x86_64 2:2.3.4.1-1 will be installed
> ---> Package dovecot-mysql.x86_64 2:2.3.4.1-1 will be installed
> ---> Package dovecot-pigeonhole.x86_64 2:2.3.4.1-1 will be installed
> --> Finished Dependency Resolution
> 
> Dependencies Resolved
> 
> 
> Package             Arch    Version      Repository           Size
> 
> Installing:
> dovecot             x86_64  2:2.3.4.1-1  dovecot-2.3-latest   4.4 M
> dovecot-devel       x86_64  2:2.3.4.1-1  dovecot-2.3-latest   475 k
> dovecot-mysql       x86_64  2:2.3.4.1-1  dovecot-2.3-latest    92 k
> dovecot-pigeonhole  x86_64  2:2.3.4.1-1  dovecot-2.3-latest   704 k
> 
> Transaction Summary
> 
> Install  4 Packages
> 
> Total download size: 5.6 M
> Installed size: 18 M
> Is this ok [y/d/N]:
> 
> 
> so I guess I'm good to hit 'y'
> 
> and:
> 
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>  Installing : 2:dovecot-2.3.4.1-1.x86_64             1/4
>  Installing : 2:dovecot-devel-2.3.4.1-1.x86_64       2/4
>  Installing : 2:dovecot-mysql-2.3.4.1-1.x86_64       3/4
>  Installing : 2:dovecot-pigeonhole-2.3.4.1-1.x86_64  4/4
>  Verifying  : 2:dovecot-devel-2.3.4.1-1.x86_64       1/4
>  Verifying  : 2:dovecot-mysql-2.3.4.1-1.x86_64       2/4
>  Verifying  : 2:dovecot-pigeonhole-2.3.4.1-1.x86_64  3/4
>  Verifying  : 2:dovecot-2.3.4.1-1.x86_64             4/4
> 
> Installed:
>  dovecot.x86_64 2:2.3.4.1-1   dovecot-devel.x86_64 2:2.3.4.1-1
>  dovecot-mysql.x86_64 2:2.3.4.1-1 dovecot-pigeonhole.x86_64 2:2.3.4.1-1
> 
> Complete!
> [root@c7 ~]# dovecot --version
> 2.3.4.1 (3c0b8769e)
> 
> 
> thanks again!
> (I should've read the install screen properly in the first place, and,
> should've noticed I was installing NOT from dovecot...)
> 
> 
> just thinking... now that I installed OK, should I revert the priority
> protection to '1', if I don't, will it bite me elsewhere, any thought ?

I've never used it. When you install/update, yum shows which repository the 
package belongs to.
This has been sufficient so far.

https://wiki.centos.org/PackageManagement/Yum/Priorities / 7. A Cautionary Note:

# Note: The upstream maintainer of yum, Seth Vidal, had the following to say 
about
# 'yum priorities' in September 2009

# Gosh, I hope people do not set up yum priorities. There are so many things 
about
# priorities that make me cringe all over. It could just be that it reminds me 
of
# apt 'pinning' and that makes me want to hurl.

# The primary concern is that priorities is heavy handed over removing packages
# from the transaction set. It makes it difficult to readily determine what 
packages
# are being ignored and why. Even so, it is very flexible and can be extremely 
useful
# to provide the largest available list of packages.

Best regards
Gerald



Re: new Centos server install yum dependancy error

2019-02-22 Thread Gerald Galster via dovecot


> Am 22.02.2019 um 22:10 schrieb Voytek Eymont via dovecot 
> :
> 
> 
> 
> On Sat, February 23, 2019 3:47 am, Gerald Galster via dovecot wrote:
>> 
> 
>> 
>> try: yum clean all
> 
> 
> Gerald , thank you
> 
> it shows in the output after 'second step' below:
> 
> # yum clean all
> Loaded plugins: fastestmirror, langpacks, priorities
> Cleaning repos: base centos-sclo-rh centos-sclo-sclo dovecot-2.3-latest epel
>  : extras gf remi-safe updates
> Cleaning up list of fastest mirrors
> Other repos take up 622 k of disk space (use --verbose for details)
> 
> 
>> this deletes all cached data, then search for dovecot:
>> 
>> [root@noc ~]# yum search dovecot --showduplicates
> 
> ...
> 
>> -> base, epel, extras, updates are the repositories that are queried -
>> this is the name in [] brackets from /etc/yum.repos.d/*.repo -> you should
>> see dovecot-2.3-latest there
> 
> 
> # yum search dovecot --showduplicates
> Loaded plugins: fastestmirror, langpacks, priorities
> Determining fastest mirrors
> epel/x86_64/metalink | 3.8 kB 00:00
> * base: mirror.nsw.coloau.com.au
> * epel: mirror.nsw.coloau.com.au
> * extras: mirror.ventraip.net.au
> * remi-safe: remi.conetix.com.au
> * updates: mirror.nsw.coloau.com.au
> base | 3.6 kB 00:00
> centos-sclo-rh   | 3.0 kB 00:00
> centos-sclo-sclo | 2.9 kB 00:00
> dovecot-2.3-latest   | 2.9 kB 00:00
> epel | 4.7 kB 00:00
> extras   | 3.4 kB 00:00
> gf   | 2.9 kB 00:00
> remi-safe| 3.0 kB 00:00
> updates  | 3.4 kB 00:00
> (1/12): base/7/x86_64/group_gz | 166 kB   00:00
> epel/x86_64/group_gz   FAILED
> http://fedora.mirror.serversaustralia.com.au/epel/7/x86_64/repodata/d97ad2922a45eb2a5fc007fdd84e7ae4981b257d3b94c3c9f5d7b0dda6baa098-comps-Everything.x86_64.xml.gz:
> [Errno 14] curl#6 - "Could not resolve host:
> fedora.mirror.serversaustralia.com.au; Unknown error"
> Trying other mirror.
> (2/12): dovecot-2.3-latest/7/x86_64/primary_db |  16 kB   00:01
> (3/12): epel/x86_64/updateinfo | 959 kB   00:02
> (4/12): extras/7/x86_64/primary_db | 180 kB   00:00
> (5/12): gf/x86_64/primary_db   |  44 kB   00:00
> (6/12): centos-sclo-sclo/x86_64/primary_db | 264 kB   00:05
> (7/12): updates/7/x86_64/primary_db| 2.4 MB   00:07
> (8/12): epel/x86_64/group_gz   |  88 kB   00:00
> (9/12): base/7/x86_64/primary_db   | 6.0 MB   00:16
> (10/12): remi-safe/primary_db  | 1.4 MB   00:18
> (11/12): epel/x86_64/primary_db| 6.6 MB   00:24
> (12/12): centos-sclo-rh/x86_64/primary_db  | 3.7 MB   00:25
> 226 packages excluded due to repository priority protections
> 
========================== N/S matched: dovecot ==========================
> 2:dovecot-debuginfo-2.3.4-2.x86_64 : Debug information for package dovecot
> 2:dovecot-debuginfo-2.3.4.1-1.x86_64 : Debug information for package dovecot
> 1:dovecot-devel-2.2.36-3.el7.x86_64 : Development files for dovecot
> 1:dovecot-devel-2.2.36-3.el7.x86_64 : Development files for dovecot
> 2:dovecot-imaptest-debuginfo-2.3.4-2.x86_64 : Debug information for package
>: dovecot-imaptest
> 2:dovecot-imaptest-debuginfo-2.3.4.1-1.x86_64 : Debug information for package
>  : dovecot-imaptest
> 2:dovecot-lua-2.3.4-2.x86_64 : LUA support for Dovecot Community Edition
> 2:dovecot-lua-2.3.4.1-1.x86_64 : LUA support for Dovecot Community Edition
> 1:dovecot-mysql-2.2.36-3.el7.x86_64 : MySQL back end for dovecot
> 1:dovecot-mysql-2.2.36-3.el7.x86_64 : MySQL back end for dovecot
> 1:dovecot-pgsql-2.2.36-3.el7.x86_64 : Postgres SQL back end for dovecot
> 1:dovecot-pigeonhole-2.2.36-3.el7.x86_64 : Sieve and managesieve plug-in for
> : dovecot
> 1:dovecot-pigeonhole-2.2.36-3.el7.x86_64 : Sieve and managesieve plug-in for
> : dovecot
> 2:dovecot-pigeonhole-debuginfo-2.3.4-2.x86_64 : 

Re: new Centos server install yum dependancy error

2019-02-22 Thread Gerald Galster via dovecot



> Am 22.02.2019 um 14:43 schrieb Voytek Eymont via dovecot 
> :
> 
> 
> 
> On Sat, February 23, 2019 12:31 am, Gerald Galster via dovecot wrote:
>> Hello Voytek,
>> 
>> 
>> the *-devel packages include header files that are only needed if you
>> want to compile something, they are not needed for running a dovecot
>> server. Likewise the *debuginfo packages, they contain information that is
>> helpful for debugging dovecot.
>> 
>> Your problem is here:
>> 
>> 
>> Error: Package: 2:dovecot-lua-2.3.4.1-1.x86_64 (dovecot-2.3-latest)
>> Requires: dovecot = 2:2.3.4.1-1
>> Available: 1:dovecot-2.2.36-3.el7.i686 (base)
>> dovecot = 1:2.2.36-3.el7
>> 
>> 
>> If you want to install dovecot-lua-2.3.4.1-1.x86_64 you need dovecot
>> 2.3.4.1-1
>> but the repository you configured has 2.2.36-3. Your output shows you're
>> using the base repository, which is from centos. You are not using the
>> dovecot repo. Maybe you need to enable it - look for enabled=0 in
>> /etc/yum.repos.d/* files and change
>> to enabled=1, or stay with 2.2.36 from centos.
> 
> Gerald, thanks.
> 
> 
> hmmm, I have enabled=1, I've copied .repo from dovecot page, what did I
> screw up ?
> 
> # dovecot --version
> 2.2.36 (1f10bfa63)
> 
> 
> [root@c7 yum.repos.d]# cat dovecot.repo
> [dovecot-2.3-latest]
> name=Dovecot 2.3 CentOS $releasever - $basearch
> baseurl=http://repo.dovecot.org/ce-2.3-latest/centos/$releasever/RPMS/$basearch
> gpgkey=https://repo.dovecot.org/DOVECOT-REPO-GPG
> gpgcheck=1
> enabled=1

try: yum clean all

this deletes all cached data, then search for dovecot:

[root@noc ~]# yum search dovecot --showduplicates
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: ftp.fau.de
 * epel: ftp.lysator.liu.se
 * extras: ftp.fau.de
 * updates: mirror1.hs-esslingen.de
=
1:dovecot-devel-2.2.36-3.el7.x86_64 : Development files for dovecot
1:dovecot-mysql-2.2.36-3.el7.x86_64 : MySQL back end for dovecot
1:dovecot-pgsql-2.2.36-3.el7.x86_64 : Postgres SQL back end for dovecot
1:dovecot-pigeonhole-2.2.36-3.el7.x86_64 : Sieve and managesieve plug-in for 
dovecot
1:dovecot-2.2.36-3.el7.i686 : Secure imap and pop3 server
1:dovecot-2.2.36-3.el7.x86_64 : Secure imap and pop3 server

-> base, epel, extras, updates are the repositories that are queried - this is 
the name in [] brackets from /etc/yum.repos.d/*.repo
-> you should see dovecot-2.3-latest there

if not you could try to add: yum --enablerepo=dovecot-2.3-latest search dovecot

use --showduplicates  -> this prints the version numbers of available packages

1:dovecot-2.2.36-3.el7.x86_64 - the leading "1:" is the epoch.
If your list includes 2:dovecot..., the higher epoch overrides packages from
epoch 1 (a higher epoch is always preferred, so a repository can supersede a
package even if its version number is lower).


I've checked the dovecot.repo you posted, it works:

[root@noc ~]# yum search dovecot --showduplicates
Loaded plugins: fastestmirror
Determining fastest mirrors
epel/x86_64/metalink
 * base: centosmirror.netcup.net
 * epel: epel.mirror.far.fi
 * extras: mirror.23media.de
 * updates: mirror.checkdomain.de
base
dovecot-2.3-latest
epel
extras

Re: new Centos server install yum dependancy error

2019-02-22 Thread Gerald Galster via dovecot
Hello Voytek,

the *-devel packages include header files that are only needed if you want to 
compile something,
they are not needed for running a dovecot server. Likewise the *debuginfo 
packages, they contain
information that is helpful for debugging dovecot.

Your problem is here:

Error: Package: 2:dovecot-lua-2.3.4.1-1.x86_64 (dovecot-2.3-latest)
  Requires: dovecot = 2:2.3.4.1-1
  Available: 1:dovecot-2.2.36-3.el7.i686 (base)
  dovecot = 1:2.2.36-3.el7


If you want to install dovecot-lua-2.3.4.1-1.x86_64 you need dovecot 2.3.4.1-1
but the repository you configured has 2.2.36-3. Your output shows you're using
the base repository, which is from centos. You are not using the dovecot repo.
Maybe you need to enable it - look for enabled=0 in /etc/yum.repos.d/* files
and change to enabled=1, or stay with 2.2.36 from centos.

Best regards
Gerald



> Am 22.02.2019 um 14:20 schrieb Voytek Eymont via dovecot 
> :
> 
> 
> 
> On Sat, February 23, 2019 12:15 am, Voytek Eymont via dovecot wrote:
> 
>> should I just go for "dovecot dovecot-mysql dovecot-pigeonhole" ??
> 
> tried :
> 
> yum install dovecot dovecot-devel dovecot-mysql dovecot-pigeonhole
> 
> and, it installed OK, I guess I don't really need the other ones ?
> 
> 
> 
> Dependencies Resolved
> 
> 
> Package              Arch     Version         Repository   Size
> 
> Installing:
> dovecot              x86_64   1:2.2.36-3.el7  base         4.4 M
> dovecot-devel        x86_64   1:2.2.36-3.el7  base         461 k
> dovecot-mysql        x86_64   1:2.2.36-3.el7  base          66 k
> dovecot-pigeonhole   x86_64   1:2.2.36-3.el7  base         392 k
> Installing for dependencies:
> clucene-core         x86_64   2.3.3.4-11.el7  base         528 k
> 
> Transaction Summary
> Installed:
>  dovecot.x86_64 1:2.2.36-3.el7dovecot-devel.x86_64 1:2.2.36-3.el7
>  dovecot-mysql.x86_64 1:2.2.36-3.el7  dovecot-pigeonhole.x86_64
> 1:2.2.36-3.el7
> 
> Dependency Installed:
>  clucene-core.x86_64 0:2.3.3.4-11.el7
> 
> Complete!
> 
> 
> -- 
> Voytek
> 



Re: index problems after update

2019-02-21 Thread Gerald Galster via dovecot
For replicated servers I'm stuck with 2.2.33.2 because of pop3/dsync problems,
but on single servers I have no index problems after upgrading to 
dovecot-2.2.35-1.el7.centos.0.x86_64
or dovecot-2.2.36-3.el7.x86_64.

All servers run CentOS 7 (RHEL 7) but use lmtp delivery with mdbox and sieve.
Maybe something in dovecot-lda has changed?

Best regards
Gerald

> Am 21.02.2019 um 14:12 schrieb Gonzalo Palacios Goicolea via dovecot 
> :
> 
> El 21/02/2019 a las 10:51, Aki Tuomi via dovecot escribió:
>> On 21.2.2019 10.53, Hajo Locke via dovecot wrote:
>>> Hello,
>>> 
>>> Am 20.02.2019 um 10:39 schrieb Aki Tuomi via dovecot:
> On 18 February 2019 09:28 Hajo Locke via dovecot
>   wrote:
> 
> 
> Hello,
> it seems we need a dovecot developers opinion. May be we hit a
> bug or cant help ourselves.
>
>>> Thanks for your answer.
 Core dump with backtrace would help, if possible to acquire. Please
 refer to https://dovecot.org/bugreport.html 
  for information how to
 get a core dump.
 
 Aki
 
>>> Unfortunately its hard to get a backtrace because dovecot is not
>>> crashing. so it seems to be more a kind of logic problem in code and
>>> no unexpected situation.
>>> yesterday evening i had next incident. I upgraded from 2.2.33.2 to
>>> 2.2.36.1, but same behaviour. Also 2.2.36.1 is tricked by the broken
>>> index and delivers no new mails. it starts delivering if i delete
>>> index files. At this point i cant tell if 2.2.36.1 also has same bug
>>> and writes a damaged index, but very likely.  We dont know this
>>> problems with 2.2.22, between 2.2.22 and 2.2.33.2 a change on
>>> mbox-index code must happend which leads to this big problem. So imapd
>>> cant do what he was created for.
>>> 
>>> For next incident i prepared a 2.3.2.1 on base of Ubuntu 18.10 and
>>> will try this. In my opinion this is a major problem and i expect a
>>> lot of affected people with version > 2.2.22 and classic mbox-storage.
>>> 
>>> Thanks,
>>> Hajo
>> We consider mbox + procmail setup somewhat edge case, and if the core
>> dump does not point into something more generic, it will probably not
>> get fixed. It is more likely to have this working if you use
>> dovecot-lda/lmtp with sieve instead of procmail.
>> 
>> Aki
>> 
> Hi Aki, 
> In support of Hajo I've to say that a few days ago I posted a similar issue, 
> and I use dovecot-lda+sieve. My environment has RHEL6 and 7 servers. When I 
> last updated the servers RHEL6 servers mantained 2.2.10-1_14.el6.x86_64 
> version, while RHEL7 updated dovecot from 2.2.10-8.el7.x86_64 to 
> 2.2.36-3.el7.x86_64. When the RHEL7 servers (used for sympa) processed a 
> message for a user, its indexes were corrupted, and the user couldn't access 
> his inbox through webmail, so I had to delete dovecot.* files from the user 
> mail path to get it working again. 
> My solution was to downgrade dovecot and dovecot-pigeonhole back to 
> 2.2.10-8.el7.x86_64
> Regards
> Gonzalo 
> 



Re: Dovecot v2.2.36.1 released

2019-02-05 Thread Gerald Galster
Hello Aki,

> https://dovecot.org/releases/2.2/dovecot-2.2.36.1.tar.gz
> https://dovecot.org/releases/2.2/dovecot-2.2.36.1.tar.gz.sig
> 
> - pop3_no_flag_updates=no: Don't expunge RETRed messages without QUIT

is this in any way related to the problem that has first been reported in march 
last year:

"Duplicate mails on pop3 expunge with dsync replication on 2.2.35 (2.2.33.2 
works)"

Thanks
Gerald

Re: managesieve configuration

2019-01-11 Thread Gerald Galster
Hi Dominik,

I have set ssl = required in 10-ssl.conf globally but no ssl here:

service managesieve-login {
  inet_listener sieve {
port = 4190
  }  
  ...
}


Nevertheless, STARTTLS is offered 

"IMPLEMENTATION" "Dovecot Pigeonhole"
"SIEVE" "fileinto reject envelope encoded-character vacation subaddress 
comparator-i;ascii-numeric relational regex imap4flags copy include variables 
body enotify environment mailbox date index ihave duplicate mime foreverypart 
extracttext"
"NOTIFY" "mailto"
"SASL" ""
"STARTTLS"
"VERSION" "1.0"
OK "service active"


and the connection will be encrypted (tested with Roundcube webmail)


> STARTTLS
< OK "Begin TLS negotiation now."

...


You can check if it works with tcpdump:

tcpdump -nn -l -A -i eth0 port 4190
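
Alternatively, recent OpenSSL (1.1.0 or newer) can speak the managesieve
STARTTLS handshake itself (host name is a placeholder):

openssl s_client -connect mail.example.com:4190 -starttls sieve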


Best regards
Gerald


> Am 11.01.2019 um 09:59 schrieb Dominik Menke :
> 
> Sure, here you go (I've masked a few unimportant fields, though):
> 
> 
># 2.2.33.2 (d6601f4ec): /etc/dovecot/dovecot.conf
># Pigeonhole version 0.4.21 (92477967)
># OS: Linux 4.15.0-42-generic x86_64 Ubuntu 18.04.1 LTS
>auth_default_realm = masked
>auth_master_user_separator = *
>auth_mechanisms = plain login scram-sha-1
>default_vsz_limit = 4 G
>doveadm_worker_count = 8
>log_path = /dev/stderr
>mail_attachment_dir = /var/mail/sis
>mail_attachment_hash = %{sha256}
>mail_location = mdbox:~/mdbox
>managesieve_notify_capability = mailto
>managesieve_sieve_capability = fileinto reject envelope encoded-character 
> vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
> copy include variables body enotify environment mailbox date index ihave 
> duplicate mime foreverypart extracttext vacation-seconds imapsieve 
> vnd.dovecot.imapsieve
>mdbox_rotate_size = 128 M
>namespace inbox {
>  inbox = yes
>  location =
>  mailbox Drafts {
>auto = subscribe
>special_use = \Drafts
>  }
>  mailbox Junk {
>auto = subscribe
>special_use = \Junk
>  }
>  mailbox Sent {
>auto = subscribe
>special_use = \Sent
>  }
>  mailbox Trash {
>auto = subscribe
>special_use = \Trash
>  }
>  prefix =
>}
>passdb {
>  args = username_format=%n /etc/dovecot/passwd.masterusers
>  driver = passwd-file
>  master = yes
>  pass = yes
>}
>passdb {
>  args = username_format=%n /etc/dovecot/passwd
>  driver = passwd-file
>}
>plugin {
>  imapsieve_mailbox1_before = file:/etc/dovecot/sieve/learn-spam.sieve
>  imapsieve_mailbox1_cause = COPY FLAG
>  imapsieve_mailbox1_name = Junk
>  imapsieve_mailbox2_before = file:/etc/dovecot/sieve/learn-ham.sieve
>  imapsieve_mailbox2_causes = COPY
>  imapsieve_mailbox2_from = Junk
>  imapsieve_mailbox2_name = *
>  sieve = ~/dovecot.sieve
>  sieve_after = /etc/dovecot/sieve/after
>  sieve_dir = ~/sieve
>  sieve_extensions = +vacation-seconds
>  sieve_global_extensions = +vnd.dovecot.pipe
>  sieve_pipe_bin_dir = /etc/dovecot/sieve
>  sieve_plugins = sieve_imapsieve sieve_extprograms
>  sieve_vacation_default_period = 1d
>  sieve_vacation_max_period = 30d
>  sieve_vacation_min_period = 1d
>}
>protocols = imap lmtp sieve
>service auth {
>  unix_listener /var/spool/postfix/private/dovecot-auth {
>group = postfix
>mode = 0600
>user = postfix
>  }
>}
>service imap-login {
>  inet_listener imap {
>port = 143
>  }
>  inet_listener imaps {
>port = 993
>ssl = yes
>  }
>  process_limit = 128
>}
>service lmtp {
>  unix_listener /var/spool/postfix/private/dovecot-lmtp {
>group = postfix
>mode = 0600
>user = postfix
>  }
>}
>service managesieve-login {
>  inet_listener sieve {
>port = 4190
>ssl = yes
>  }
>  service_count = 1
>}
>service managesieve {
>  process_limit = 256
>}
>ssl_cert = 
>ssl_key =  # hidden, use -P to show it
>userdb {
>  args = uid=vmail gid=vmail home=/var/mail/users/%n
>  driver = static
>}
>verbose_proctitle = yes
>protocol lmtp {
>  mail_plugins = " sieve notify push_notification"
>  ssl = no
>}
>protocol imap {
>  mail_plugins = " imap_sieve"
>}
>protocol sieve {
>  mail_debug = yes
>  managesieve_max_line_length = 65536
>}
> 
> 
> --Dominik
> 
> 
> On 1/11/19 9:44 AM, Aki Tuomi wrote:
>> On 10.1.2019 18.28, Dominik Menke wrote:
>>> I've missed a part at the end:
>>> 
 This leads me to my question: How do I force Dovecot to print at
 least a STARTTLS line after a client connects to port 4190? Looking
>>> 
>>> ... at the default configuration files in /etc/dovecot/conf.d/ I don't
>>> see an obvious difference.
>>> 
>>> 
>>> --Dominik
>> Can you provide output of `doveconf -n`
>> Aki
> 
> -- 
> Digineo GmbH
> 

Re: repo.dovecot.org expired certificate

2019-01-10 Thread Gerald Galster
Hi Aki,

it doesn't happen very often, but certificate renewal can fail, so it's best
to check daily. certbot will only try to renew those certificates that are
about to expire within a few weeks.

I'm using a little perl script via cron which may be more flexible:


#!/usr/bin/perl

my $reload_count;

open(FF, "find /etc/letsencrypt/live -mtime -1 -name cert.pem |");
while(<FF>){
    chomp;
    next if !$_;
    system("/usr/bin/logger \"sslreload: ssl certificate $_ needs reload after renew\"");
    $reload_count++;
}
close(FF);

if($reload_count){
    system("/usr/bin/logger \"sslreload: $reload_count certificates changed, reloading services\"");
    # list all your affected services or rsync/reload on other nodes
    # some services need restart, not reload
    system("/usr/bin/systemctl reload httpd");
    system("/usr/bin/systemctl reload postfix");
    system("/usr/bin/systemctl restart vsftpd");
} else {
    system("/usr/bin/logger \"sslreload: nothing to reload\"");
}


Save to /usr/bin/sslreload and chmod 700

crontab -e

0 18 * * * /usr/bin/certbot renew --quiet --no-self-upgrade --allow-subset-of-names; /usr/bin/sslreload


Best regards
Gerald




> Am 10.01.2019 um 09:14 schrieb Aki Tuomi :
> 
> Would be better if it would happen automatically though.
> 
> Aki
> 
> On 10.1.2019 10.04, Filipe Carvalho wrote:
>> Yup, that did the trick.
>> 
>> Thanks!
>> 
>> Filipe
>> 
>> 
>> On 1/10/19 7:47 AM, Aki Tuomi wrote:
>>> 
>>> 
>>> On 10.1.2019 9.42, Filipe Carvalho wrote:
 Hello,
 
 Not sure if this is the right place to post this, but the ssl certificate 
 of the repo.dovecot.org server expired on the 9th of January.
 
 It's giving an error via the browser and via the apt command in Debian:
 
 W: Failed to fetch 
 https://repo.dovecot.org/ce-2.3-latest/debian/jessie/dists/jessie/main/binary-amd64/Packages
   server certificate verification failed. CAfile: 
 /etc/ssl/certs/ca-certificates.crt CRLfile: none
 
 Cheers!
 
 Filipe Carvalho
 
 -- 
  
 Filipe Carvalho
 Infraestruturas Tecnológicas / IT infrastructures 
 
 fili...@uporto.pt 
>>> 
>>> 
>>> Amazing this certbot thing...
>>> 
>>> [Unit]
>>> Description=Certbot
>>> Documentation=file:///usr/share/doc/python-certbot-doc/html/index.html
>>> Documentation=https://letsencrypt.readthedocs.io/en/latest/
>>> [Service]
>>> Type=oneshot
>>> ExecStart=/usr/bin/certbot -q renew --post-hook 
>>> /etc/letsencrypt/post.hooks.d/reload
>>> PrivateTmp=true
>>> 
>>> one would think this would work and reload nginx after the cert has been 
>>> renewed... 
>>> 
>>> Aki
>>> 



Re: Dovecot Submission Proxy Auth

2019-01-09 Thread Gerald Galster
Hi Jacky,

if postfix did not log a specific error to your maillog you could change smtpd 
to smtpd -v in master.cf to get more debug output or use debug_peer_list to see 
what smtp commands are sent:

http://www.postfix.org/DEBUG_README.html

Typically smtp auth looks like this:

S: 220 smtp.example.com ESMTP server ready
C: EHLO jgm.example.com
S: 250-smtp.example.com
S: 250 AUTH CRAM-MD5 DIGEST-MD5
C: AUTH FOOBAR
S: 504 Unrecognized authentication type.

or

C: AUTH CRAM-MD5
S: 334
PENCeUxFREJoU0NnbmhNWitOMjNGNndAZWx3b29kLmlubm9zb2Z0LmNvbT4=
C: ZnJlZCA5ZTk1YWVlMDljNDBhZjJiODRhMGMyYjNiYmFlNzg2ZQ==
S: 235 Authentication successful.

C = client, S = server

Depending on your setup the password (maybe base64 encoded) or hash must also 
be sent for verification.

Or you could try to authenticate with a master user for all connections by 
setting

submission_relay_master_user =
submission_relay_password =

in dovecot, see https://wiki.dovecot.org/Submission

Best regards
Gerald



> Am 09.01.2019 um 11:08 schrieb Jacky :
> 
> Hi Gerald,
> 
> in my postfix/main.cf
> 
> smtpd_sasl_authenticated_header = yes
> smtpd_sasl_security_options = noanonymous
> smtpd_sasl_local_domain = $myhostname
> smtpd_sasl_type = dovecot
> smtpd_sasl_path = /var/run/dovecot/auth-client
> broken_sasl_auth_clients = yes
> 
> I am already using dovecot for SASL
> 
> The dovecot submission service authenticates users and already added the 
> AUTH= parameter in the MAIL FROM
> 
> MAIL FROM: AUTH=ja...@xxx.com SIZE=1430
> 
> But, it seems that postfix does not accept the AUTH= parameter and reject the 
> sender as no logged in.
> 
> 
> Best regards,
> 
> Jacky
> 
> 
> 
> On 9/1/2019 5:49 PM, Gerald Galster wrote:
>> Hi Jacky,
>> 
>> in postfix/main.cf you typically set something like
>> 
>> smtpd_sasl_auth_enable=yes
>> smtpd_sasl_type=cyrus
>> smtpd_sasl_exceptions_networks=$mynetworks
>> smtpd_sasl_security_options=noanonymous
>> smtpd_sasl_authenticated_header=yes
>> broken_sasl_auth_clients=yes
>> smtpd_recipient_restrictions=permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
>> 
>> smtpd_recipient_restrictions might already exist in main.cf and in that case 
>> has to be extended
>> 
>> postfix can verify login/passwords via sasl but it does not store these 
>> credentials, so you need to install saslauthd and add user/pass there or use 
>> a dovecot instance that already authenticates users for pop/imap.
>> 
>> http://www.postfix.org/SASL_README.html
>> https://wiki.dovecot.org/HowTo/PostfixAndDovecotSASL
>> 
>> Best regards
>> Gerald
>> 
>>> Am 09.01.2019 um 10:15 schrieb Jacky :
>>> 
>>> Hi,
>>> 
>>> Anyone know how to enable this SMTP AUTH feature with Postfix?
>>> 
>>> Regards,
>>> 
>>> Jacky
>>> 
>>> 
>>> On 7/4/2018 3:40 AM, Paul Hecker wrote:
>>>> Hi,
>>>> 
>>>>> On 6. Apr 2018, at 18:58, Odhiambo Washington  wrote:
>>>>> 
>>>>> Hi Paul,
>>>>> 
>>>>> Care to share your config (even OFFLIST) that has successfully integrated 
>>>>> Dovecot Submission service with Exim??
>>>> here the steps I have done to integrate Dovecot submission in Exim:
>>>> 
>>>> - Create and set the acl_smtp_mailauth ACL:
>>>> 
>>>> acl_smtp_mailauth = acl_check_mailauth
>>>> 
>>>> acl_check_mailauth:
>>>>   accept
>>>> hosts  = <; 127.0.0.1 ; ::1
>>>> condition  = ${if eq{$interface_port}{10025}}
>>>> log_message= Will accept MAIL AUTH parameter for 
>>>> $authenticated_sender
>>>>deny
>>>> 
>>>> 
>>>> - add a deny fo all connections to 10025 without MAIL AUTH parameter in 
>>>> acl_smtp_mail ACL:
>>>> 
>>>>   deny
>>>> condition  = ${if eq{$interface_port}{10025}}
>>>> condition  = ${if eq{$authenticated_sender}{}}
>>>> message= All connections on port $interface_port need MAIL 
>>>> AUTH sender
>>>> 
>>>> - in Dovecot, add the following submission parameters
>>>> 
>>>> submission_relay_port = 10025
>>>> submission_relay_ssl = starttls
>>>> submission_relay_ssl_verify = no
>>>> 
>>>> All the remaining parts of the Dovecot config is the default for 
>>>> submission protocol/service, copied either from the sources (default 
>>&g

Re: Dovecot Submission Proxy Auth

2019-01-09 Thread Gerald Galster
Hi Jacky,

in postfix/main.cf you typically set something like

smtpd_sasl_auth_enable=yes
smtpd_sasl_type=cyrus
smtpd_sasl_exceptions_networks=$mynetworks
smtpd_sasl_security_options=noanonymous
smtpd_sasl_authenticated_header=yes
broken_sasl_auth_clients=yes
smtpd_recipient_restrictions=permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination

smtpd_recipient_restrictions might already exist in main.cf and in that case 
has to be extended

postfix can verify login/passwords via sasl but it does not store these 
credentials, so you need to install saslauthd and add user/pass there or use a 
dovecot instance that already authenticates users for pop/imap.

http://www.postfix.org/SASL_README.html
https://wiki.dovecot.org/HowTo/PostfixAndDovecotSASL
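
For the dovecot variant, the dovecot side needs an auth socket that postfix can
reach, e.g. (a minimal sketch; the socket path must match postfix's
smtpd_sasl_path, here smtpd_sasl_type = dovecot and smtpd_sasl_path =
private/auth):

service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0660
    user = postfix
    group = postfix
  }
}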

Best regards
Gerald

> Am 09.01.2019 um 10:15 schrieb Jacky :
> 
> Hi,
> 
> Anyone know how to enable this SMTP AUTH feature with Postfix?
> 
> Regards,
> 
> Jacky
> 
> 
> On 7/4/2018 3:40 AM, Paul Hecker wrote:
>> Hi,
>> 
>>> On 6. Apr 2018, at 18:58, Odhiambo Washington  wrote:
>>> 
>>> Hi Paul,
>>> 
>>> Care to share your config (even OFFLIST) that has successfully integrated 
>>> Dovecot Submission service with Exim??
>> here the steps I have done to integrate Dovecot submission in Exim:
>> 
>> - Create and set the acl_smtp_mailauth ACL:
>> 
>> acl_smtp_mailauth = acl_check_mailauth
>> 
>> acl_check_mailauth:
>>   accept
>> hosts  = <; 127.0.0.1 ; ::1
>> condition  = ${if eq{$interface_port}{10025}}
>> log_message= Will accept MAIL AUTH parameter for 
>> $authenticated_sender
>>deny
>> 
>> 
>> - add a deny fo all connections to 10025 without MAIL AUTH parameter in 
>> acl_smtp_mail ACL:
>> 
>>   deny
>> condition  = ${if eq{$interface_port}{10025}}
>> condition  = ${if eq{$authenticated_sender}{}}
>> message= All connections on port $interface_port need MAIL AUTH 
>> sender
>> 
>> - in Dovecot, add the following submission parameters
>> 
>> submission_relay_port = 10025
>> submission_relay_ssl = starttls
>> submission_relay_ssl_verify = no
>> 
>> All the remaining parts of the Dovecot config is the default for submission 
>> protocol/service, copied either from the sources (default config) or from 
>> here:
>> 
>> https://wiki.dovecot.org/Submission
>> 
>> Feel free is you have any further questions.
>> 
>> Regards,
>> Paul
>> 
>> 
>>> I use Exim+Dovecot (Exim4U) and wouldn't mind exploring this.
>>> 
>>> Thanks in advance.
>>> 
>>> 
>>> On 6 April 2018 at 19:15, Paul Hecker  wrote:
>>> Hi,
>>> 
>>> Thanks you very much. This did the trick!
>>> 
 On 6. Apr 2018, at 15:56, Stephan Bosch  wrote:
 
 
 
 Op 6-4-2018 om 13:52 schreef Paul Hecker:
> Hi,
> 
> Dovecot 2.3.1 (8e2f634). Could not get Dovecot to forward the (plain) 
> authentication to the SMTP server using submission. Reason why I need it 
> is sender spoofing (I do not want my employees to send messages on behalf 
> of me).
> 
> In exim I can disable sender spoofing with the authenticated user. When 
> sending through dovecot, exim either does not accept the email (need 
> auth) or relay every sender address (because relaying from localhost).
> 
> Am I missing a setting or do I need any additional field in the (MySQL) 
> user_query/password_query to forward the password?
> 
> You can find my config here:
> 
> https://gist.github.com/lluuaapp/7daddf761131da47237b0f45e6bab5a8
 That would be possible using the following SMTP AUTH feature:
 
 https://tools.ietf.org/html/rfc4954#section-5
 
 Which is apparently supported by Exim: 
 https://www.exim.org/exim-html-current/doc/html/spec_html/ch-smtp_authentication.html#SECTauthparamail
 This requires explicit configuration, so it will not work out of the box.
>>> Here is what I did:
>>> 
>>> I had to add the acl_smtp_mailauth to only allow this on a certain port. 
>>> Then I had to duplicate my code for sender spoofing for authenticated users 
>>> and change the $authenticated_id -> $authenticated_sender.
>>> 
>>> Besides that, I must use TLS (in my case STARTTLS) so that Dovecot actually 
>>> sends the MAIL AUTH parameter.
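>>> (For reference, per RFC 4954 section 5 the relayed command then carries the
>>> authenticated identity as an ESMTP parameter, roughly like this with a
>>> placeholder address:
>>> 
>>>   MAIL FROM:<user@example.com> AUTH=user@example.com
>>> 
>>> which is the value the $authenticated_sender conditions above check.)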
>>> 
 The Dovecot Submission service should support this too. It sends an AUTH 
 parameter with the MAIL command (currently only when the username is a 
 valid SMTP address). However, I must say, I haven't tested this recently.
>>> I can confirm that it works (only with TLS with my current configuration, 
>>> see above).
>>> 
 I can try this in a few days. Feel free to experiment with this yourself.
 
 Regards,
 
 Stephan.
>>> Thanks again,
>>> Paul
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Best regards,
>>> Odhiambo WASHINGTON,
>>> Nairobi,KE
>>> +254 7 3200 0004/+254 7 2274 3223
>>> "Oh, the cruft."



Re: Replication fatal error

2018-12-10 Thread Gerald Galster



> Am 10.12.2018 um 19:39 schrieb Aki Tuomi :
> 
> This has been fixed in a later version of Dovecot, so your best course of 
> action is to upgrade to some more recent version. I'd recommend 2.2.36.

There is a replication bug that duplicates mails on deletion via pop3 under 
special circumstances. The last known good version is 2.2.33.2.

See:

https://dovecot.org/list/dovecot/2018-March/111422.html
https://dovecot.org/list/dovecot/2018-September/112945.html

Aki, can you tell if this bug is on your list and will be fixed eventually (as 
it seems to exist in the 2.3 tree as well)?

Best regards,
Gerald



>> On 10 December 2018 at 18:28 admin via dovecot  wrote:
>> 
>> 
>> Dear suscribers.
>> 
>> I just don't understand this error and don't know where to look first. If 
>> someone has already met this problem...
>> 
>> Dec 10 16:06:17 prudence dovecot: dsync-server(addr...@domain.fr): 
>> Panic: file dsync-brain-mailbox.c: line 370 
>> (dsync_brain_sync_mailbox_deinit): assertion failed: (brain->failed)
>> Dec 10 16:06:17 prudence dovecot: dsync-server(addr...@domain.fr): 
>> Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0xa0112) 
>> [0x7f2c66f62112] -> /usr/lib/dovecot/libdovecot.so.0(+0xa020a) 
>> [0x7f2c66f6220a] -> /usr/lib/dovecot/libdovecot.so.0(i_fatal+0) 
>> [0x7f2c66ef21d8] -> 
>> dovecot/doveadm-server(dsync_brain_sync_mailbox_deinit+0x1b0) 
>> [0x5587f89936b0] -> 
>> dovecot/doveadm-server(dsync_brain_slave_recv_mailbox+0x417) 
>> [0x5587f8994447] -> dovecot/doveadm-server(dsync_brain_run+0x286) 
>> [0x5587f8991a86] -> dovecot/doveadm-server(+0x430d9) [0x5587f89920d9] -> 
>> dovecot/doveadm-server(+0x595cf) [0x5587f89a85cf] -> 
>> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x52) [0x7f2c66f781a2] 
>> -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12e) 
>> [0x7f2c66f7984e] -> 
>> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x36) 
>> [0x7f2c66f78236] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) 
>> [0x7f2c66f783e8] -> dovecot/doveadm-server(+0x27eed) [0x5587f8976eed] -> 
>> dovecot/doveadm-server(+0x298cd) [0x5587f89788cd] -> 
>> dovecot/doveadm-server(+0x3f30b) [0x5587f898e30b] -> 
>> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x52) [0x7f2c66f781a2] 
>> -> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x12e) 
>> [0x7f2c66f7984e] -> 
>> /usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x36) 
>> [0x7f2c66f78236] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x38) 
>> [0x7f2c66f783e8] -> 
>> /usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) 
>> [0x7f2c66efc9f3] -> dovecot/doveadm-server(main+0x1b5) [0x5587f89691c5] 
>> -> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) 
>> [0x7f2c66af2b97] -> dovecot/doveadm-server(_start+0x2a) [0x5587f896928a]
>> Dec 10 16:06:17 prudence dovecot: dsync-server(addr...@domain.fr): 
>> Fatal: master: service(doveadm): child 22362 killed with signal 6 (core 
>> dumped)
>> 
>> 
>> server 1 dovecot configuration (version 2.2.9):
>> 
>> default_client_limit = 2048
>> default_process_limit = 2048
>> disable_plaintext_auth = no
>> doveadm_password = xxx
>> doveadm_port = 11225
>> hostname = 
>> listen = 127.0.0.1,XX.XX.XX.XX
>> mail_location = maildir:/srv/mail/%d/%n/Maildir
>> mail_plugins = " notify replication"
>> managesieve_notify_capability = mailto
>> managesieve_sieve_capability = fileinto reject envelope 
>> encoded-character vacation subaddress comparator-i;ascii-numeric 
>> relational regex imap4flags copy include variables body enotify 
>> environment mailbox date ihave
>> namespace inbox {
>>   inbox = yes
>>   location =
>>   mailbox Drafts {
>> auto = subscribe
>> special_use = \Drafts
>>   }
>>   mailbox Junk {
>> auto = subscribe
>> special_use = \Junk
>>   }
>>   mailbox Sent {
>> auto = subscribe
>> special_use = \Sent
>>   }
>>   mailbox "Sent Messages" {
>> special_use = \Sent
>>   }
>>   mailbox Trash {
>> auto = subscribe
>> special_use = \Trash
>>   }
>>   prefix =
>> }
>> passdb {
>>   args = /etc/dovecot/dovecot-sql.conf.ext
>>   driver = sql
>> }
>> plugin {
>>   mail_replica = tcp:x:11225
>>   sieve = ~/.dovecot.sieve
>>   sieve_default = /etc/dovecot/sieve/Ingenie.sieve
>>   sieve_default_name = Ingenie
>>   sieve_dir = ~/sieve
>>   sieve_max_redirects = 0
>>   sieve_vacation_dont_check_recipient = yes
>> }
>> postmaster_address = ad...@ingenie.fr
>> protocols = " imap lmtp sieve pop3"
>> service aggregator {
>>   fifo_listener replication-notify-fifo {
>> user = vmail
>>   }
>>   unix_listener replication-notify {
>> user = vmail
>>   }
>> }
>> service auth {
>>   unix_listener /var/spool/postfix-submission/private/dovecot-auth {
>> group = postfix
>> mode = 0600
>> user = postfix
>>   }
>> }
>> service doveadm {
>>   inet_listener {
>> address = XX.XX.XX.XX
>> port = 11225
>>   }
>>   vsz_limit = 100 M
>> }
>> service lmtp {
>>   inet_listener lmtp {
>> 

Re: Duplicate mails on pop3 expunge with dsync replication on 2.2.35 (2.2.33.2 works)

2018-09-21 Thread Gerald Galster
Hi Jan,

Unfortunately there is no news on this topic after 6 months.

I understand this bug might not be a top priority but perhaps someone from the 
dovecot team can clarify if they will investigate eventually or if we're on our 
own.

Thanks,
Gerald




> Am 18.09.2018 um 23:20 schrieb Jan Münnich :
> 
> Hi,
> 
> Has anyone any idea how to solve or further debug this issue? It seems indeed 
> that it was introduced in 2.2.34 and is still there in 2.3.2.1. I found a 
> couple of posts for this on the mailing list and elsewhere, but no solution:
> 
> When a message is retrieved and immediately expunged, it gets replicated back 
> from the other dsync node. This usually happens with POP3 but with IMAP as 
> well, when the MUA fetches the mail and the user opens and reads it 
> immediately within seconds. It does not seem to happen when the message is 
> retrieved and only expunged a while after, which is mostly the case with IMAP.
> 
> The bug occurs and is reproducible when the message is delivered to node A 
> and then fetched by the client from node B. If the message is delivered to 
> and fetched from the same node, the message does not get duplicated.
> 
> I'm attaching the debug logs from both nodes for a full example transaction. 
> The message is delivered via lmtp to node A with UID 175261, fetched and 
> deleted on node B and then appears again with the new UID 175262.
> 
> Thanks,
> Jan
> 
> 
> [...]



Re: How to send mail to mailbox with disabled domain?

2018-09-12 Thread Gerald Galster
If you want to accept delivery for one address only and reject all other
addresses in the domain, you can do this with postfix's access table:

http://www.postfix.org/access.5.html
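
A minimal sketch of that approach (hypothetical file name, hooked in via
check_recipient_access; user1@example1.com is a placeholder):

/etc/postfix/access:
user1@example1.com   OK
example1.com         REJECT mailbox domain has moved

main.cf:
smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/access,
    permit_mynetworks, reject_unauth_destination

Remember to run postmap /etc/postfix/access after editing the map.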

A hint regarding transport tables: 
us...@example1.com  lmtp:$HOW_TO_REACH_THE_MX
us...@example1.com  lmtp:[$HOW_TO_REACH_THE_MX]

If you enclose the MX in [], postfix will just take the value as-is
and not force any MX lookups.

Last but not least you could configure mail routing by sender if
necessary: 
http://www.postfix.org/postconf.5.html#sender_dependent_relayhost_maps

Best regards,
Gerald


> Am 12.09.2018 um 10:58 schrieb Jochen Bern :
> 
> On 09/11/2018 08:20 PM, Kai Schaetzl wrote:
>> I have to disable mail acceptance for example1.com. 
>> If not, mail sent *from* that server (e.g. from a web form) to that domain 
>> will not leave the server. 
>> However, if I disable example1.com for mail dovecot lmtp will not deliver 
>> mail to this mail box anymore, although the mailbox still exists.
> 
> First and foremost, you are describing a major routing problem *for the
> MTA*. You want it to do local delivery (via LMTP) for
> us...@example1.com, but forward mail addressed to foo...@example1.com to
> that domain's current MX. Since MTAs usually(!) decide that based on the
> *domain*, you have a need for some off-the-textbook tweaking right
> there. And the config to make *dovecot* work as needed would need to
> pick up from there.
> 
> If we're talking postfix, my first idea would be to make example1.com a
> virtual alias domain and set up a transport table with entries
>   us...@example1.com  local:
>   # ... etc. etc. ...
>   example1.comsmtp:$HOW_TO_REACH_THE_MX
> (with $HOW_TO_REACH_THE_MX being anything from "use the official MX from
> DNS" to "contact this internal IP on this port, *without* DNS lookups",
> whichever your (internal?) networking necessitates).
> 
> http://www.postfix.org/transport.5.html
> 
> With a bit of luck, that might already "contain" the weirdness to the
> point that neither the MX nor dovecot need config hacks.
> 
> Regards,
> -- 
> Jochen Bern
> Systemingenieur
> 
> www.binect.de
> www.facebook.de/binect
> 



Re: How to send mail to mailbox with disabled domain?

2018-09-11 Thread Gerald Galster
Is this a dovecot problem on your side? dovecot usually accepts mail
from an MTA like postfix, so it would be better to remove example1.com from
the postfix relay domains (mailbox domains, alias domains, ...). Then there
is no delivery to dovecot at all. Most MTAs ignore MX records - if a domain
is configured locally, it gets delivered locally.
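
In postfix terms that usually just means making sure example1.com no longer
appears in any local/virtual domain parameter, e.g. (sketch, values are
examples only):

# main.cf - example1.com removed, so postfix routes it via DNS/MX again
virtual_mailbox_domains = example2.com, example3.com
relay_domains =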

Best regards
Gerald

> Am 11.09.2018 um 20:20 schrieb Kai Schaetzl :
> 
> Given the following:
> 
> mailboxes:
> us...@example1.com
> us...@example1.com
> us...@example1.com
> etc.
> 
> aliases:
> whate...@example1.com -> us...@example1.com
> whate...@example2.com -> us...@example1.com
> whate...@example3.com -> us...@example1.com
> 
> Now the problem:
> example1.com MX goes elsewhere (doesn't point to this server anymore).
> Domains example2.com and example3.com still point to that server and 
> should be able to accept mail.
> I have to disable mail acceptance for example1.com. 
> If not, mail sent *from* that server (e.g. from a web form) to that domain 
> will not leave the server. 
> However, if I disable example1.com for mail dovecot lmtp will not deliver 
> mail to this mail box anymore, although the mailbox still exists.
> 
> How can I solve this? Is there a way of solving this, but keeping the 
> domain example1.com in the name for these mailboxes?
> Or is there a way to tell dovecot to ignore domains for mailbox names? 
> e.g. deliver to "user1"? (All user localparts are unique.)
> 
> Thanks for any hints.
> 
> Kai
> 
> 



Re: Duplicate mails on pop3 expunge with dsync replication on 2.2.35 (2.2.33.2 works)

2018-08-02 Thread Gerald Galster
Hi Tim,

> Do you have any new insights on the problem with reappearing mail using 
> dovecot replication + pop3?
> 
> It's driving me mad. I'm running dovecot 2.2.34 (874deae) on OpenBSD and it 
> looks like I have the same problem as you have.

unfortunately there has been no response; I'm stuck with 2.2.33.2 for the time 
being.

I can only suspect it has something to do with the mailbox locking that was 
introduced in 2.2.34 (dsync_mailbox_lock), or maybe with some cached values 
(drop_older_timestamp), as it only happens when the mail is deleted on the node 
it has been synced to, but not if it's deleted on the node where it was 
originally received.

Best regards,
Gerald



diff -Nru dovecot-2.2.33.2/src/doveadm/dsync/dsync-brain-mailbox.c 
dovecot-2.2.34/src/doveadm/dsync/dsync-brain-mailbox.c
--- dovecot-2.2.33.2/src/doveadm/dsync/dsync-brain-mailbox.c2017-10-05 
19:10:44.0 +0200
+++ dovecot-2.2.34/src/doveadm/dsync/dsync-brain-mailbox.c  2018-02-28 
15:45:34.0 +0100


@@ -522,25 +529,33 @@
}
 
 /* mailbox appears to have changed. do a full sync here and get the
-  state again */
+  state again. Lock before syncing. */
+   if (dsync_mailbox_lock(brain, box, &lock) < 0) {
+   brain->failed = TRUE;
+   mailbox_free(&box);
+   return -1;
+   }
if (mailbox_sync(box, MAILBOX_SYNC_FLAG_FULL_READ) < 0) {
i_error("Can't sync mailbox %s: %s",
mailbox_get_vname(box),
mailbox_get_last_internal_error(box, &brain->mail_error));

...

@@ -599,6 +615,7 @@
 static void
 dsync_cache_fields_update(const struct dsync_mailbox *local_box,
  const struct dsync_mailbox *remote_box,
+ struct mailbox *box,
  struct mailbox_update *update)
 {
ARRAY_TYPE(mailbox_cache_field) local_sorted, remote_sorted, changes;
@@ -630,7 +647,8 @@
local_fields = array_get(&local_sorted, &local_count);
remote_fields = array_get(&remote_sorted, &remote_count);
t_array_init(&changes, local_count + remote_count);
-   drop_older_timestamp = ioloop_time - MAIL_CACHE_FIELD_DROP_SECS;
+   drop_older_timestamp = ioloop_time -
+   box->index->optimization_set.cache.unaccessed_field_drop_secs;






diff -Nru dovecot-2.2.33.2/src/doveadm/dsync/dsync-mailbox.c 
dovecot-2.2.34/src/doveadm/dsync/dsync-mailbox.c
--- dovecot-2.2.33.2/src/doveadm/dsync/dsync-mailbox.c  2017-06-23 
13:18:28.0 +0200
+++ dovecot-2.2.34/src/doveadm/dsync/dsync-mailbox.c2018-02-28 
15:45:34.0 +0100
@@ -1,7 +1,9 @@
-/* Copyright (c) 2013-2017 Dovecot authors, see the included COPYING file */
+/* Copyright (c) 2013-2018 Dovecot authors, see the included COPYING file */
 
 #include "lib.h"
 #include "istream.h"
+#include "mail-storage-private.h"
+#include "dsync-brain-private.h"
 #include "dsync-mailbox.h"
 
 void dsync_mailbox_attribute_dup(pool_t pool,
@@ -20,3 +22,40 @@
dest_r->last_change = src->last_change;
dest_r->modseq = src->modseq;
 }
+
+int dsync_mailbox_lock(struct dsync_brain *brain, struct mailbox *box,
+  struct file_lock **lock_r)
+{
+   const char *path, *error;
+   int ret;
+
+   /* Make sure the mailbox is open - locking requires it */
+   if (mailbox_open(box) < 0) {
+   i_error("Can't open mailbox %s: %s", mailbox_get_vname(box),
+   mailbox_get_last_internal_error(box, &brain->mail_error));
+   return -1;
+   }
+
+   ret = mailbox_get_path_to(box, MAILBOX_LIST_PATH_TYPE_INDEX, &path);
+   if (ret < 0) {
+   i_error("Can't get mailbox %s path: %s", mailbox_get_vname(box),
+   mailbox_get_last_internal_error(box, &brain->mail_error));
+   return -1;
+   }
+   if (ret == 0) {
+   /* No index files - don't do any locking. In theory we still
+  could, but this lock is mainly meant to prevent replication
+  problems, and replication wouldn't work without indexes. */
+   *lock_r = NULL;
+   return 0;
+   }
+
+   if (mailbox_lock_file_create(box, DSYNC_MAILBOX_LOCK_FILENAME,
+    brain->mailbox_lock_timeout_secs,
+    lock_r, &error) <= 0) {
+   i_error("Failed to lock mailbox %s for dsyncing: %s",
+   box->vname, error);
+   return -1;
+   }
+   return 0;
+}





Re: Replication problems

2018-07-19 Thread Gerald Galster
Hello Thomas,

which version of dovecot do you use?

I'm running a dovecot cluster with 2 servers and dsync replication with ssh (no 
loadbalancer but active/active with same priority dns mx records).
dsync replicates some emails back after they have been deleted on one node. For 
me this started after 2.2.33.2. No solution yet.
If you're using a 2.2 version you could try 2.2.33.2.
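
For reference, my setup uses the stock dsync-over-ssh replication; a rough
sketch with a placeholder hostname:

mail_plugins = $mail_plugins notify replication
plugin {
  mail_replica = remote:vmail@mx2b.example.com
}

plus the replication-notify fifo/aggregator services from the standard docs.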

Best regards,
Gerald


> Am 19.07.2018 um 14:13 schrieb Thomas Kristensen :
> 
> Hey 
>  
> I am trying to setup a dovecot cluster with 2 servers using replication 
> /dsync.
>  
> In front of it I have a Fortinet ADC (load balancer) and I think it is messing 
> up the dsync.
> I see mails duplicated during the sync process.
>  
> If I disable one of the servers in the ADC, it seems to work and the sync is 
> working without a problem.
> But if I use both servers with round robin on the ADC, I see mails 
> duplicated.
> Ex. I sent 100 mails thru the SMTP (Postfix) and 107 mails is in both 
> servers, but as said before, if I disable one of the servers in the ADC, I 
> see the correct amount of mails in both dovecot servers.
>  
> In the header of the duplicated mails I see the exact same postfix id and 
> LMTP id from dovecot.
>  
> Also I can't seem to get any log from the sync process.
>  
> Best regards
> Thomas Kristensen
> 
> Storhaven 12 - 7100 Vejle
> Tlf: 75 72 54 99 - Fax: 75 72 65 33
> E-mail: t...@multimed.dk
>  
> This e-mail may contain confidential information. If you are not the intended 
> recipient of this e-mail, or if you received it by mistake, we kindly ask you 
> to inform the sender of the error by using the reply function. At the same 
> time, please delete the e-mail immediately without forwarding or copying it.
> 



Re: auth: Error - Request timed out

2018-05-29 Thread Gerald Galster



> Am 29.05.2018 um 11:00 schrieb Aki Tuomi :
> 
> 
> 
> On 29.05.2018 11:35, Hajo Locke wrote:
>> Hello,
>> 
>> 
>> Am 29.05.2018 um 09:22 schrieb Aki Tuomi:
>>> 
>>> On 29.05.2018 09:54, Hajo Locke wrote:
 Hello List,
 
 I use dovecot 2.2.22 and have the same problem described here:
 https://dovecot.org/pipermail/dovecot/2017-November/110020.html
 
 I can confirm that sometimes there is a problem with connection to
 mysql-db, but sometimes not.
 Reasons for failing are still under investigation by my mates.
 
 My current main problem is that this failure seems to be a one-way
 ticket for dovecot. Even if mysql is verifiably working again and
 accepting connections, dovecot remains stuck with errors like
 this:
 
 May 29 07:00:49 hostname dovecot: auth: Error:
 plain(m...@example.com,xxx.xxx.xx.xxx,): Request
 999.7 timed out after 150 secs, state=1
 
 When restarting dovecot all is immediately working again.
 Is there a way to tell dovecot to restart the auth services or
 reinitialize the mysql connection after these hard failures? I could insert
 "idle_kill = 1 mins" into service auth and service auth-worker, but I
 don't know if this would work. Unfortunately I am not able to reproduce
 this error, and there are always a couple of days between occurrences.
 
 Thanks,
 Hajo
 
 
>>> Hi!
>>> 
>>> I was not able to repeat this problem using 2.2.36. Can you provide
>>> steps to reproduce?
>>> 
>>> May 29 10:20:24 auth: Debug: client in: AUTH 1 PLAIN service=imap secured
>>> session=XtpgEVNtQeUB lip=::1 rip=::1 lport=143 rport=58689 resp=
>>> May 29 10:20:24 auth-worker(31098): Debug:
>>> sql(t...@domain.org,::1,): query:
>>> SELECT userid AS username, domain, password FROM users WHERE userid =
>>> 'test' AND domain = 'domain.org'
>>> May 29 10:20:54 auth-worker(31098): Warning: mysql: Query failed,
>>> retrying: Lost connection to MySQL server during query (idled for 28
>>> secs)
>>> May 29 10:20:59 auth-worker(31098): Error: mysql(127.0.0.1): Connect
>>> failed to database (dovecot): Can't connect to MySQL server on
>>> '127.0.0.1' (4) - waiting for 5 seconds before retry
>>> May 29 10:21:04 auth-worker(31098): Error: mysql(127.0.0.1): Connect
>>> failed to database (dovecot): Can't connect to MySQL server on
>>> '127.0.0.1' (4) - waiting for 5 seconds before retry
>>> May 29 10:21:14 auth: Debug: auth client connected (pid=31134)
>>> May 29 10:21:14 imap-login: Warning: Growing pool 'imap login commands'
>>> with: 1024
>>> May 29 10:21:14 auth-worker(31098): Error: mysql(127.0.0.1): Connect
>>> failed to database (dovecot): Can't connect to MySQL server on
>>> '127.0.0.1' (4) - waiting for 25 seconds before retry
>>> 
>>> This is what it looks like for me and after restoring connectivity, it
>>> started working normally.
>> Unfortunately I cannot reproduce it. The servers run well for days or
>> sometimes weeks and then it happens once. I can provide some more
>> logs.
>> 
>> This is an error with mysql involvement:
>> 
>> May 29 06:56:59 hostname dovecot: auth-worker(1099): Error:
>> mysql(xx.xx.xx.xx): Connect failed to database (mysql): Can't connect
>> to MySQL server on 'xx.xx.xx.xx' (111) - waiting for 1 seconds before
>> retry
>> .
>> . some more of above line
>> .
>> May 29 06:56:59 hostname dovecot: auth-worker(1110): Error:
>> sql(m...@example.com,xx.xx.xx.xx): Password query failed: Not
>> connected to database
>> May 29 06:56:59 hostname dovecot: auth: Error: auth worker: Aborted
>> PASSV request for m...@example.com: Internal auth worker failure
>> May 29 06:57:59 hostname dovecot: auth-worker(1099): Error: mysql:
>> Query timed out (no free connections for 60 secs): SELECT `inbox` as
>> `user`, `password` FROM `mail_users` WHERE `login` = 'username' AND
>> `active`='Y'
>> May 29 06:59:30 hostname dovecot: auth: Error:
>> plain(username,xx.xx.xx.xx,): Request 999.2 timed
>> out after 151 secs, state=1
>> .
>> . much more of these lines with Request timed out
>> .
>> 
>> At this point my mates restarted dovecot and all worked well
>> immediately. Mysql performed a short restart at 6:56 and dovecot was
>> not able to reconnect for about 10 minutes until my mates did the
>> restart. I could not reproduce the problem by manually restarting
>> mysql; that worked well.
>> 
>> This is an error without visible mysql involvement:
>> .
>> . lines of normal imap/pop activity
>> .
>> May 29 05:43:03 hostname dovecot: imap-login: Error: master(imap):
>> Auth request timed out (received 0/12 bytes)
>> May 29 05:43:03 hostname dovecot: imap-login: Internal login failure
>> (pid=1014 id=16814) (internal failure, 1 successful auths):
>> user=, method=PLAIN, rip=xx.xx.xx.xx, lip=xx.xx.xx.xx, TLS
>> May 29 05:43:03 hostname dovecot: imap: Error: Login client
>> disconnected too early
>> May 29 05:44:03 hostname dovecot: auth: Error:
>> 

Re: replicator: User listing returned failure

2018-05-09 Thread Gerald Galster


> Am 08.05.2018 um 23:37 schrieb Alexey :
> 
> Wow. Sorry - maybe it has worked since this morning or maybe something changed 
> in the evening, but I can say that replication works fine. So far I can confirm 
> it only with "less" on the mdbox file. But replication works!!!
> 
> Thanks Gerald.

That's great, you're welcome.

Maybe replication was stopped due to errors. doveadm replicator replicate '*' 
tells dovecot to start replication and with doveadm replicator status '*' you 
can immediately see if something has changed.
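
If a user still won't sync, you can also request a full resync for a single
user (the -f flag forces a full sync; the address is a placeholder):

doveadm replicator replicate -f user@example.com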

Best regards
Gerald


> 
> 
> On 2018-05-08 23:17, Alexey wrote:
>>> I don't know if it makes a difference, I don't have quotes on my 
>>> mail_plugins:
>> I don't have quotes either (it's a difference between the config file and
>> the dovecot -n output)
>>> Did you check permissons on the replication fifos?
>> For what? I think the problems are on the slave. How should it work in 
>> automatic mode?
>> I repeat that from slave manually all works fine:
   As I can see nothing happens automatically.
   But mx2:~# doveadm -D sync -u ab...@example.com -d -N -l 30 -U   
 successfully executed and getting mails from mx1.
>> prw-rw 1 dovecot mail   0 May  8 06:29 replication-notify-fifo
>>> Does it work if you issue:
>>> doveadm replicator replicate '*'
>> mx1:~# doveadm replicator replicate '*'
>> 2 users updated
>>> doveadm replicator status '*'
>> I already posted this output.
>>> Do you have different hostnames for your servers?
>> Sure. You may see in listing that short hostnames are mx1 and mx2.



Re: replicator: User listing returned failure

2018-05-08 Thread Gerald Galster

> What about automatic replication without manually running doveadm sync?
>>> As I can see nothing happens automatically.
>>> But mx2:~# doveadm -D sync -u ab...@example.com -d -N -l 30 -U   
>>> successfully executed and getting mails from mx1.

I don't know if it makes a difference, I don't have quotes on my mail_plugins:

mail_plugins = $mail_plugins notify replication

Did you check permissons on the replication fifos?

Does it work if you issue:

doveadm replicator replicate '*'
doveadm replicator status '*'

Do you have different hostnames for your servers?

Best regards
Gerald

Re: [sieve][regex] Matching multiple strings in the "Received" header

2018-05-08 Thread Gerald Galster
Hello Adi,

did you try:

" from.*(outbound.protection.outlook.com|.google.com|.yahoo.com|mx.aol.com) "

If you need to specify the posix character class:

[[:blank:]] means space and tab. With pcre it would be like [ \t]
[[:space:]] includes space, tab, newline, linefeed, formfeed, vertical tab (in 
pcre like [ \t\n\r\f\v])

"[[:blank:]]from.*(outbound.protection.outlook.com|.google.com|.yahoo.com|mx.aol.com)[[:blank:]]"

Best regards,
Gerald


> Am 08.05.2018 um 03:38 schrieb Adi Pircalabu :
> 
> On 08-05-2018 2:43, Benny Pedersen wrote:
>> Adi Pircalabu skrev den 2018-05-07 05:10:
>>> How should I write it to also match the space character at both the
>>> beginning and end of the expression?
>> use \ before space char
> 
> Tks. Just tried these two, unsuccessfully:
> "\.from.*(outbound.protection.outlook.com|.google.com|.yahoo.com|mx.aol.com)\."
> "\ from.*(outbound.protection.outlook.com|.google.com|.yahoo.com|mx.aol.com)\ 
> "
> 
> However, this expression always matches:
> "from.*(outbound.protection.outlook.com|.google.com|.yahoo.com|mx.aol.com)"
> 
> What am I missing?
> 
> ---
> Adi Pircalabu



Re: replicator: User listing returned failure

2018-05-08 Thread Gerald Galster
Hello Alexey,

> mx1:~# dovecot --version
> 2.2.27 (c0f36b0)
> 
> From dovecot.log:
> May 07 19:27:41 auth-worker(34348): Warning: mysql: Query failed, retrying: 
> Unknown column 'username' in 'field list'
> May 07 19:27:41 auth-worker(34348): Debug: sql(*): SELECT username, domain 
> FROM users
> May 07 19:27:41 auth-worker(34348): Error: sql: Iterate query failed: Unknown 
> column 'username' in 'field list' (using built-in default iterate_query: 
> SELECT username, domain FROM users)
> May 07 19:27:41 auth-worker(34348): Debug: sql(*): SELECT id AS username, 
> domain FROM users
> May 07 19:27:41 replicator: Error: User listing returned failure
> May 07 19:27:41 replicator: Error: listing users failed, can't replicate 
> existing data


> dovecot-sql.conf.ext:
> driver = mysql
> ...
> iterate_query = SELECT id AS username, domain FROM users

you defined a custom iterate_query but the debug message says it uses the 
built-in default:

> using built-in default iterate_query: SELECT username, domain FROM users


Please check if your config is included:

conf.d/10-auth.conf: !include auth-sql.conf.ext

conf.d/auth-sql.conf.ext:

userdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}

Maybe your dovecot-sql.conf.ext is somewhere dovecot does not look for it.
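
A quick way to check which iterate_query is actually in effect is to run the
user iteration by hand:

doveadm user '*'

This goes through the same userdb configuration, so it should either list all
users or fail with the same 'Unknown column' error as the replicator.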

Besides, you have configured the mysql userdb twice:

> userdb {
>  args = /etc/dovecot/dovecot-sql-master.conf.ext
>  driver = sql
> }
> userdb {
>  args = /etc/dovecot/dovecot-sql.conf.ext
>  driver = sql
> }



Best regards,
Gerald





> 
> mx1:~# doveadm replicator status
> Queued 'sync' requests        0
> Queued 'high' requests        0
> Queued 'low' requests         0
> Queued 'failed' requests      0
> Queued 'full resync' requests 0
> Waiting 'failed' requests     2
> Total number of known users   2
> 
> 
> mx1:~# doveadm replicator status '*'
> username           priority  fast sync  full sync  success sync  failed
> ab...@example.com  none      00:03:01   01:06:36   -             y
> ad...@exmaple.com  none      00:03:41   01:31:49   -             y
> 
> 
> 
> From slave:
> 
> mx2:~# cat /etc/dovecot/conf.d/90-replication.conf
> # use tcp:hostname as the dsync target
> plugin {
>  mail_replica = tcp:mx1 # use doveadm_port
> }
> 
> 
> As I can see nothing happens automatically.
> But mx2:~# doveadm -D sync -u ab...@example.com -d -N -l 30 -U   successfully 
> executed and getting mails from mx1.
> 
> 
> 
> 
> dovecot-sql.conf.ext:
> driver = mysql
> connect = host=localhost dbname=vmail user=sqlmail 
> password=sqL_hidden033|TGPAS
> default_pass_scheme = SHA512-CRYPT
> password_query = SELECT id AS username, password, domain FROM users WHERE id 
> = '%n' AND domain = '%d' AND active = 'Y'
> user_query = SELECT id AS username, uid, gid, home, concat('*:storage=', 
> quota, 'M' ) as quota_rule FROM users WHERE id = '%n' AND domain = '%d'
> iterate_query = SELECT id AS username, domain FROM users
> 
> Regards,
> Alexey
> 
> 
> mx1:~# dovecot -n
> # 2.2.27 (c0f36b0): /etc/dovecot/dovecot.conf
> # Pigeonhole version 0.4.16 (fed8554)
> # OS: Linux 4.9.0-6-amd64 x86_64 Debian 9.4
> auth_master_user_separator = *
> auth_verbose = yes
> auth_verbose_passwords = sha1
> default_vsz_limit = 512 M
> doveadm_password =  # hidden, use -P to show it
> doveadm_port = 994
> hostname = mx1.example.com
> imap_client_workarounds = delay-newmail tb-extra-mailbox-sep tb-lsub-flags
> imap_idle_notify_interval = 12 mins
> lda_mailbox_autocreate = yes
> lda_mailbox_autosubscribe = yes
> lmtp_save_to_detail_mailbox = yes
> log_path = /var/log/dovecot/dovecot.log
> mail_access_groups = mail
> mail_location = mdbox:~/mdbox:UTF-8
> mail_plugins = " notify replication"
> managesieve_notify_capability = mailto
> managesieve_sieve_capability = fileinto reject envelope encoded-character 
> vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
> copy include variables body enotify environment mailbox date index ihave 
> duplicate mime foreverypart extracttext
> namespace inbox {
>  inbox = yes
>  location =
>  mailbox Drafts {
>auto = subscribe
>special_use = \Drafts
>  }
>  mailbox Junk {
>auto = subscribe
>special_use = \Junk
>  }
>  mailbox Sent {
>auto = subscribe
>special_use = \Sent
>  }
>  mailbox Trash {
>auto = subscribe
>special_use = \Trash
>  }
>  prefix =
> }
> passdb {
>  args = /etc/dovecot/dovecot-sql-master.conf.ext
>  driver = sql
>  master = yes
>  pass = yes
> }
> passdb {
>  args = /etc/dovecot/dovecot-sql.conf.ext
>  driver = sql
> }
> plugin {
>  quota = dict:User quota::proxy::quota
>  quota_rule = *:storage=100G
>  quota_rule2 = Trash:storage=+10G
>  quota_warning = storage=95%% quota-warning 95 %u
>  quota_warning2 = storage=80%% quota-warning 80 %u
>  sieve_before = 

Re: v2.2.36 release candidate released

2018-05-04 Thread Gerald Galster
Hello Timo,

> Am 30.04.2018 um 16:11 schrieb Timo Sirainen :
> 
> https://dovecot.org/releases/2.2/rc/dovecot-2.2.36.rc1.tar.gz
> https://dovecot.org/releases/2.2/rc/dovecot-2.2.36.rc1.tar.gz.sig 
> 
> v2.2.36 is hopefully going to be the last v2.2.x release. Please test this RC 
> well, so we'll have a good final release! v2.3.2 is still going to take a 
> couple of months before it's ready.

I've temporarily upgraded two replicated servers to 2.2.36. The mail duplication 
on pop3 expunge is not fixed yet. The last known good version is 2.2.33.2 
(2.2.34 was not tested).

Summary: mx2a and mx2b are synced via dsync replication (ssh). An email is 
received via mx2a/lmtp and replicated to mx2b. Then it is fetched from mx2b 
with pop3 and expunged. On that expunge it gets duplicated (copied from INBOX) 
so that the mail reappears once again on the next pop3 fetch. Duplication does not 
occur when received and deleted on the same host. There are no sieve scripts 
involved.


May 04 18:02:35 mx2b.example.com dovecot[25546]: pop3(popt...@gcore.biz): 
expunge: box=INBOX, uid=29, 
msgid=<228c5d0d-1ad0-44fb-a83f-6a2aa65cb...@example.com>, size=2383, 
subject=test 1802
May 04 18:02:36 mx2b.example.com dovecot[25546]: doveadm: Error: 
dsync-remote(popt...@gcore.biz): Info: copy from INBOX: box=INBOX, uid=30, 
msgid=<228c5d0d-1ad0-44fb-a83f-6a2aa65cb...@example.com>, size=2383, 
subject=test 1802
May 04 18:02:36 mx2b.example.com dovecot[25546]: doveadm: Error: 
dsync-remote(popt...@gcore.biz): Info: expunge: box=INBOX, uid=29, 
msgid=<228c5d0d-1ad0-44fb-a83f-6a2aa65cb...@example.com>, size=2383, 
subject=test 1802

May 04 18:02:42 mx2b.example.com dovecot[25546]: pop3-login: Login: 
user=, method=PLAIN, rip=91.x.x.1, lip=188.x.x.1, 
mpid=25798, TLS, TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
May 04 18:02:42 mx2b.example.com dovecot[25546]: pop3(popt...@gcore.biz): 
expunge: box=INBOX, uid=30, 
msgid=<228c5d0d-1ad0-44fb-a83f-6a2aa65cb...@example.com>, size=2383, 
subject=test 1802
May 04 18:02:42 mx2b.example.com dovecot[25546]: pop3(popt...@gcore.biz): 
Disconnected: Logged out top=0/0, retr=1/2449, del=1/1, size=2432


For more details see my mail: 2018-03-27 16:37 / Duplicate mails on pop3 
expunge with dsync replication on 2.2.35 (2.2.33.2 works)

It would be nice if you could have a look at whether something changed in dsync 
replication after 2.2.33.2.

Thanks!
Gerald

Re: LDAP Homedir location: Needs dovecot restart after change it

2018-04-16 Thread Gerald Galster
Hello Andre,

try to flush the auth cache: doveadm auth cache flush u...@example.com
or: doveadm auth cache flush

Best regards,
Gerald

> Am 16.04.2018 um 20:39 schrieb Andre Luiz Paiz :
> 
> Dear group members.
> 
> I work with Dovecot and Openldap authentication. Sometimes users change 
> departments and we need to alter their homedir location. Every time this 
> process is needed, I perform this steps:
> 
> 1 - Change homedir location in openldap
> 2 - Move homedir folder to the new location
> 3 - Re-apply permissions
> 4 - Remove user index folder
> 
> After I do that, users cannot authenticate unless I restart dovecot, a process 
> that I would like to avoid. Can you give me a tip on what I need to change 
> to avoid this last problematic step? After the restart, everything works.
> 
> Does the auth_cache feature also store the homedir location?



Re: 2.3.1 Replication is throwing scary errors

2018-04-04 Thread Gerald Galster
Hi,

> There is also a second issue of a long standing race with replication 
> occurring somewhere whereby if a mail comes in, is written to disk, is 
> replicated and then deleted in short succession, it will reappear again to 
> the MUA.  I suspect the mail is being replicated back from the remote.  A few 
> people have reported it over the years but it's not reliable or consistent, 
> so it has never been fixed.

sounds like my replication issue which is reproducible on 2.2.35 and does not 
occur on 2.2.33.2, so I assume something in the replication code has changed 
between these two versions.

dsync is copying the mail before expunge in this situation (no sieve filters 
involved):

(mail received on mx2a.example.com and delivered via dsync/ssh to 
mx2b.example.com, then expunged via pop3 on mx2b.example.com -> copy/duplicate)
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
expunge: box=INBOX, uid=23, 
msgid=, size=1210, 
subject=test 1535
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: doveadm: Error: 
dsync-remote(popt...@example.com): Info: copy from INBOX: box=INBOX, uid=24, 
msgid=, size=1210, 
subject=test 1535
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: doveadm: Error: 
dsync-remote(popt...@example.com): Info: expunge: box=INBOX, uid=23, 
msgid=, size=1210, 
subject=test 1535

For more details see mail from 2018-03-27 / Duplicate mails on pop3 expunge 
with dsync replication on 2.2.35 (2.2.33.2 works)

Best regards,
Gerald

Duplicate mails on pop3 expunge with dsync replication on 2.2.35 (2.2.33.2 works)

2018-03-27 Thread Gerald Galster
Hello,

consider the following setup with dovecot 2.2.35:

smtp/587 (subject: test 1535)
        |
        v
mx2a.example.com  -->  dsync/ssh  -->  mx2b.example.com
                                             |
                                     pop3 fetch/expunge (uid 23)
                                             |
                         !! dsync (copy from INBOX -> uid 24)
                            dsync (expunge uid 23)

The pop3 client deletes mail from the server which triggers a copy from
INBOX before it is expunged. On the next pop3 fetch you get the copy of
the mail you thought had been expunged.

This occurs only if mail is received by smtp on mx2a, synced to mx2b
via dsync/ssh and then expunged via pop3 on mx2b. It does not occur
if mail is received and expunged on mx2b.

As a temporary workaround the system has been downgraded to 2.2.33.2.
There are no duplicate emails after expunge with this version.
2.2.34 has not been tested.

Does anyone know if there were changes in the dsync code from 2.2.33.2
to 2.2.35?
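
(A quick way to narrow this down yourself, assuming the plain version tags used
by the git mirror at https://github.com/dovecot/core, would be:

git log --oneline 2.2.33.2..2.2.35 -- src/doveadm/dsync/

which lists every dsync commit between the two releases.)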

Log:

(mail received on mx2a.example.com and delivered via dsync to mx2b.example.com, 
then expunged via pop3 on mx2b.example.com -> copy/duplicate)
Mar 26 15:35:57 mx2b.example.com dovecot[3825]: pop3-login: Login: 
user=, method=PLAIN, rip=91.0.0.1, lip=188.0.0.1, 
mpid=3922, TLS, TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
expunge: box=INBOX, uid=23, 
msgid=, size=1210, 
subject=test 1535
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
Disconnected: Logged out top=0/0, retr=1/1259, del=1/1, size=1242
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: doveadm: Error: 
dsync-remote(popt...@example.com): Info: copy from INBOX: box=INBOX, uid=24, 
msgid=, size=1210, 
subject=test 1535
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: doveadm: Error: 
dsync-remote(popt...@example.com): Info: expunge: box=INBOX, uid=23, 
msgid=, size=1210, 
subject=test 1535

(mail received on mx2b.example.com and expunged via pop3 on mx2b.example.com -> 
no copy/duplicate)
Mar 26 15:36:09 mx2b.example.com dovecot[3825]: pop3-login: Login: 
user=, method=PLAIN, rip=91.0.0.1, lip=188.0.0.1, 
mpid=3927, TLS, TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Mar 26 15:36:09 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
expunge: box=INBOX, uid=24, 
msgid=, size=1210, 
subject=test 1535
Mar 26 15:36:09 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
Disconnected: Logged out top=0/0, retr=1/1259, del=1/1, size=1242
Mar 26 15:36:10 mx2b.example.com dovecot[3825]: doveadm: Error: 
dsync-remote(popt...@example.com): Info: expunge: box=INBOX, uid=24, 
msgid=, size=1210, 
subject=test 1535

Thanks for looking into this
Gerald