Re: Different userdb per inet_listener

2021-08-24 Thread Miloslav Hůla

On 28.07.2021 at 11:24, James wrote:

On 28/07/2021 09:12, Miloslav Hůla wrote:

Now we would like to disable authentication for Postfix (SMTP), but
allow it for Dovecot (IMAP & POP3). Something like "receive-only".

Is there any way we can configure a different passdb for the mentioned
inet_listener?

Or is there any variable with the "auth requestor name" we can use in an SQL
query to differentiate the result?


%s for service

https://doc.dovecot.org/configuration_manual/config_file/config_variables/ 




Something like:

password_query = "SELECT password, allow_nets, '*:storage=' || quota 
|| 'M' AS userdb_quota_rule FROM mailbox WHERE username = '%n' AND 
domain = '%d' AND %Ls = true;"


Note the "AND %Ls = true".  The 'L' is for lower case.
Add boolean columns for the services to your database.
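For example (a rough sketch; it assumes the lower-cased service names are
imap, pop3 and smtp, which is what %Ls expands to when Postfix authenticates
through Dovecot):

ALTER TABLE mailbox ADD COLUMN imap boolean NOT NULL DEFAULT true;
ALTER TABLE mailbox ADD COLUMN pop3 boolean NOT NULL DEFAULT true;
ALTER TABLE mailbox ADD COLUMN smtp boolean NOT NULL DEFAULT true;

-- make an account "receive-only": IMAP/POP3 logins keep working, SMTP auth fails
UPDATE mailbox SET smtp = false
 WHERE username = 'someuser' AND domain = 'domain.tld';

The "AND %Ls = true" clause then picks the right column per service.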


Hi James,

I somehow missed your reply. That's exactly what I need.

Thank you!
Miloslav



Different userdb per inet_listener

2021-07-28 Thread Miloslav Hůla

Hello,

we are running Dovecot (2.3.4.1-5+deb10u6) with a PostgreSQL passdb and
userdb, and this listener for a remote Postfix:


auth_mechanisms = plain login
service auth {
  inet_listener {
    address = 127.0.0.1
    port = 12345
  }
}

It works perfectly.


Now we would like to disable authentication for Postfix (SMTP), but 
allow it for Dovecot (IMAP & POP3). Something like "receive-only".


Is there any way we can configure a different passdb for the mentioned
inet_listener?


Or is there any variable with the "auth requestor name" we can use in an SQL
query to differentiate the result?


Regards
Miloslav


Re: Sieve - disable redirect

2021-05-03 Thread Miloslav Hůla
Sorry, I just found an example configuration of "sieve_max_redirects",
which is probably the way to go.
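If I understand it correctly, setting it to zero in the plugin block should
forbid redirect entirely; something like:

plugin {
  sieve_max_redirects = 0
}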


Kind regards
Milo

On 03.05.2021 at 12:11, Miloslav Hůla wrote:

Hi,

I would like to disallow "redirect" in sieve scripts to prevent
automatic e-mail forwarding out of the organisation.


I didn't find a way in [1], only the "sieve_extensions" option, and when I
try "sieve_extensions = -redirect" I get:


# sievec test.sieve
sievec(root): Warning: sieve: ignored unknown extension 'redirect' while 
configuring available extensions


Is there any way?

Kind regards
Milo


[1] https://doc.dovecot.org/configuration_manual/sieve/configuration/


Sieve - disable redirect

2021-05-03 Thread Miloslav Hůla

Hi,

I would like to disallow "redirect" in sieve scripts to prevent
automatic e-mail forwarding out of the organisation.


I didn't find a way in [1], only the "sieve_extensions" option, and when I
try "sieve_extensions = -redirect" I get:


# sievec test.sieve
sievec(root): Warning: sieve: ignored unknown extension 'redirect' while 
configuring available extensions


Is there any way?

Kind regards
Milo


[1] https://doc.dovecot.org/configuration_manual/sieve/configuration/


Re: Remap login before authentication

2021-01-11 Thread Miloslav Hůla

I'm sorry, I explained it wrong.

It is not a login with & without domain scenario. I have internal company
usernames + passwords, and e-mail addresses.

I want to achieve:
- internal username + password login to work
- email + password login to work

Now works:
Username: milo
Password: 123456

Want to allow:
Username: miloslav.h...@domain.tld
Password: 123456

which somehow remaps to the 'milo' username, so the same Maildir is accessed.

Milo


On 11.01.2021 at 17:32, Aki Tuomi wrote:

Not sure if you read my mail wrong, but

if

user.name works

and

user.n...@domain.com does not work,

then why not just write

auth_bind_userdn = uid=%d,dc=domain,dc=tld

note the %d, which means, expand to local part (user.name) instead of 
user.n...@domain.com.

Aki



On 11/01/2021 18:28 Miloslav Hůla  wrote:

  
Would the following scenario be possible?


1. do the SQL passdb lookup, do the remap & return password = NULL
without nopassword
2. do the LDAP bind

I think it works, but I'm not sure if there are some security/other flaws.

Milo


On 11.01.2021 at 17:11, Miloslav Hůla wrote:

Probably not the way for me. I forgot to write that I cannot change the LDAP
schema, so the bindDN is fixed for me.

Milo

On 11.01.2021 at 17:00, Aki Tuomi wrote:

auth_bind_userdn = uid=%d,dc=domain,dc=tld, also see

%D - return “sub.domain.org” as “sub,dc=domain,dc=org” (for LDAP queries)

from
https://doc.dovecot.org/configuration_manual/config_file/config_variables/


Aki


On 11/01/2021 17:58 Miloslav Hůla  wrote:

Hi,

with Dovecot 2.3.4 I would like to allow users to log in with two
different usernames:

- USERNAME (no domain) - now works
- name.surn...@domain.tld - would like to add

The problem is that the only authentication method I have is LDAP bind by
USERNAME. Now I use:


passdb {
     driver = ldap
     args = /etc/dovecot/dovecot-ldap.conf.ext
}

# Args
uris = ldaps://ldap.domain.tld
auth_bind = yes
auth_bind_userdn = uid=%u,dc=domain,dc=tld
base =


I know a passdb can remap user & domain, but I have no password hash at all.
And, for example, '{SASL}' is not a supported password scheme to return,
e.g., from an SQL passdb.

Is there any way to achieve this? Maybe somehow remap the username in the
first passdb and then continue to the LDAP bind?

1. login as name.surn...@domain.tld
2. remap to USERNAME
3. do the LDAP bind


Milo


Re: Remap login before authentication

2021-01-11 Thread Miloslav Hůla

Would the following scenario be possible?

1. do the SQL passdb lookup, do the remap & return password = NULL 
without nopassword

2. do the LDAP bind
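As a rough sketch of what I mean (the aliases table and its columns are
hypothetical, and I have not tested whether the remapped username really
carries over to the second passdb):

passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
  result_failure = continue
}

passdb {
  driver = ldap
  args = /etc/dovecot/dovecot-ldap.conf.ext
}

# dovecot-sql.conf.ext: remap the e-mail address to the internal username,
# return no password so this passdb never authenticates by itself
password_query = SELECT username AS user, NULL AS password \
  FROM aliases WHERE email = '%u'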

I think it works, but I'm not sure if there are some security/other flaws.

Milo


On 11.01.2021 at 17:11, Miloslav Hůla wrote:
Probably not the way for me. I forgot to write that I cannot change the LDAP
schema, so the bindDN is fixed for me.


Milo

On 11.01.2021 at 17:00, Aki Tuomi wrote:

auth_bind_userdn = uid=%d,dc=domain,dc=tld, also see

%D - return “sub.domain.org” as “sub,dc=domain,dc=org” (for LDAP queries)

from 
https://doc.dovecot.org/configuration_manual/config_file/config_variables/ 



Aki


On 11/01/2021 17:58 Miloslav Hůla  wrote:

Hi,

with Dovecot 2.3.4 I would like to allow users to log in with two
different usernames:

- USERNAME (no domain) - now works
- name.surn...@domain.tld - would like to add

The problem is that the only authentication method I have is LDAP bind by
USERNAME. Now I use:


passdb {
    driver = ldap
    args = /etc/dovecot/dovecot-ldap.conf.ext
}

# Args
uris = ldaps://ldap.domain.tld
auth_bind = yes
auth_bind_userdn = uid=%u,dc=domain,dc=tld
base =


I know a passdb can remap user & domain, but I have no password hash at all.
And, for example, '{SASL}' is not a supported password scheme to return,
e.g., from an SQL passdb.

Is there any way to achieve this? Maybe somehow remap the username in the
first passdb and then continue to the LDAP bind?

1. login as name.surn...@domain.tld
2. remap to USERNAME
3. do the LDAP bind


Milo


Re: Remap login before authentication

2021-01-11 Thread Miloslav Hůla
Probably not the way for me. I forgot to write that I cannot change the LDAP
schema, so the bindDN is fixed for me.


Milo

On 11.01.2021 at 17:00, Aki Tuomi wrote:

auth_bind_userdn = uid=%d,dc=domain,dc=tld, also see

%D - return “sub.domain.org” as “sub,dc=domain,dc=org” (for LDAP queries)

from https://doc.dovecot.org/configuration_manual/config_file/config_variables/

Aki


On 11/01/2021 17:58 Miloslav Hůla  wrote:

  
Hi,


with Dovecot 2.3.4 I would like to allow users to log in with two
different usernames:

- USERNAME (no domain) - now works
- name.surn...@domain.tld - would like to add

The problem is that the only authentication method I have is LDAP bind by
USERNAME. Now I use:


passdb {
driver = ldap
args = /etc/dovecot/dovecot-ldap.conf.ext
}

# Args
uris = ldaps://ldap.domain.tld
auth_bind = yes
auth_bind_userdn = uid=%u,dc=domain,dc=tld
base =


I know a passdb can remap user & domain, but I have no password hash at all.
And, for example, '{SASL}' is not a supported password scheme to return,
e.g., from an SQL passdb.

Is there any way to achieve this? Maybe somehow remap the username in the
first passdb and then continue to the LDAP bind?

1. login as name.surn...@domain.tld
2. remap to USERNAME
3. do the LDAP bind


Milo


Remap login before authentication

2021-01-11 Thread Miloslav Hůla

Hi,

with Dovecot 2.3.4 I would like to allow users to log in with two
different usernames:

- USERNAME (no domain) - now works
- name.surn...@domain.tld - would like to add

The problem is that the only authentication method I have is LDAP bind by
USERNAME. Now I use:



passdb {
  driver = ldap
  args = /etc/dovecot/dovecot-ldap.conf.ext
}

# Args
uris = ldaps://ldap.domain.tld
auth_bind = yes
auth_bind_userdn = uid=%u,dc=domain,dc=tld
base =


I know a passdb can remap user & domain, but I have no password hash at all.
And, for example, '{SASL}' is not a supported password scheme to return,
e.g., from an SQL passdb.

Is there any way to achieve this? Maybe somehow remap the username in the
first passdb and then continue to the LDAP bind?


1. login as name.surn...@domain.tld
2. remap to USERNAME
3. do the LDAP bind


Milo


Re: Btrfs RAID-10 performance

2020-09-15 Thread Miloslav Hůla

On 15.09.2020 at 10:22, Linda A. Walsh wrote:

On 2020/09/10 07:40, Miloslav Hůla wrote:
I cannot verify it, but I think that even JBOD is propagated as a 
virtual device. If you create JBOD from 3 different disks, low level 
parameters may differ.


    JBOD allows each disk to be seen by the OS, as is.  You wouldn't
create JBOD disk from 3 different disks -- JBOD would give you 3 separate
JBOD disks for the 3 separate disks.


Yes. If I create 3 JBOD configurations from 3 100GB disks, I get 3 100GB 
devices in OS. If I create 1 JBOD configuration from 3 100GB disks, I 
get 1 300GB device in OS.



    So for your 16  disks, you are using 1 long RAID0?  You realize
1 disk goes out, the entire array needs to be reconstructed.  Also
all of your spindles can be tied up by long read/writes -- optimal speed
would come from a read 16 stripes wide spread over the 16 disks.


No. I have 16 RAID-0 configurations from 16 disks. As I wrote, there was
no other way to propagate 16 disks as 16 devices into the OS a few
years ago.



    What would be better, IMO, is going with a RAID-10 like your subject
says, using 8 pairs of mirrors and stripe those.  Set your stripe unit
for 64K to allow the disks to operate independently.  You don't want
a long 16-disk stripe, as that's far from optimal for your mailbox load.
What you want is the ability to have multiple I/O ops going at the same
time -- independently.  I think as it stands now, you are far more likely
to get contention as different mailboxes are accessed with contention
happening within the span, vs. letting each 2 disk mirror potentially doing
a different task -- which would likely have the effect of raising your
I/O ops/s.


The reason not to create RAID-10 on the controller was that btrfs scrubbing
detects a slowly degrading disk much sooner than the controller does (verified
many times). And if I create RAID-10 on the controller, btrfs scrub still
detects it soon, but I'm not able to recognize which disk is affected.



    Running raid10 on top of raid0 seems really wasteful


I'm not doing that.



Re: Btrfs RAID-10 performance

2020-09-15 Thread Miloslav Hůla

On 10.09.2020 at 17:40, John Stoffel wrote:

So why not run the backend storage on the Netapp, and just keep the
indexes and such local to the system?  I've run Netapps for many years
and they work really well.  And then you'd get automatic backups using
schedule snapshots.

Keep the index files local on disk/SSDs and put the maildirs out to
NFSv3 volume(s) on the Netapp(s).  Should do wonders.  And you'll stop
needing to do rsync at night.


Miloslav> It's an option we have in mind. As you wrote, NetApp is very solid.
Miloslav> The main reason for local storage is that the IMAP server is completely
Miloslav> isolated from the network. But maybe one day we will use it.

It's not completely isolated, it can rsync data to another host that
has access to the Netapp.  *grin*


:o)


Miloslav> Unfortunately, to quickly fix the problem and make server
Miloslav> usable again, we already added SSD and moved indexes on
Miloslav> it. So we have no measurements in old state.

That's ok, if it's better, then it's better.  How is the load now?
Looking at the output of 'iostat -x 30' might be a good thing.


Load is between 1 and 2. We can live with that for now.


Miloslav> The situation is better but, I guess, the problem still exists. It
Miloslav> takes some time for the load to grow. We will see.

Hmm... how did you setup the new indexes volume?  Did you just use
btrfs again?  Did you mirror your SSDs as well?


Yes. Just two SSDs in free slots, propagated as two RAID-0 devices into the
OS, with btrfs RAID-1 on top.


It is nasty, I know, but it required no outage. It is just a quick attempt to
improve the situation. Our next plan is to buy more controllers,
schedule an outage on a weekend and do it properly.



Do the indexes fill the SSD, or is there 20-30% free space?  When an
SSD gets fragmented, its performance can drop quite a bit.  Did you
put the SSDs onto a separate controller?  Probably not.  So now you've
just increased the load on the single controller, when you really
should be spreading it out more to improve things.


The SSDs are almost empty; 2.4GB of 93GB is used after 'doveadm index' on all
mailboxes.



Another possible hack would be to move some stuff to a RAM disk,
assuming your server is on a UPS/generator in case of power loss.  But
that's an unsafe hack.

Also, do you have quotas turned on?  That's a performance hit for
sure.


No, we are running without quotas.


Miloslav> Thank you for the fio tip. Definitely I'll try that.

It's a good way to test and measure how the system will react.
Unfortunately, you will need to do your testing outside of normal work
hours so as to not impact your users too much.

Good luck!   Please post some numbers if you get them.  If you see
only a few disks are 75% or more busy, then *maybe* you have a bad
disk in the system, and moving off that disk or replacing it might
help.  Again, hard to know.

Rebalancing btrfs might also help, especially now that you've moved
the indexes off that volume.

John


Thank you
Milo



Re: Btrfs RAID-10 performance

2020-09-10 Thread Miloslav Hůla
I cannot verify it, but I think that even JBOD is propagated as a 
virtual device. If you create JBOD from 3 different disks, low level 
parameters may differ.


And probably old firmware is the reason we used RAID-0 two or three
years ago.


Thank you for the ideas.

Kind regards
Milo

On 10.09.2020 at 16:15, Scott Q. wrote:
Actually there is, filesystems like ZFS/BTRFS prefer to see the drive 
directly, not a virtual drive.


I'm not sure you can change it now anymore but in the future, always use 
JBOD.


It's also possible that you don't have the latest firmware on the 
9361-8i. If I recall correctly, they only added in the JBOD option in 
the last firmware update


On Thursday, 10/09/2020 at 08:52 Miloslav Hůla wrote:

Some controllers have a direct "pass through to OS" option for a drive,
that's what I meant. I can't recall why we chose RAID-0
instead of
JBOD, there was some reason, but I hope there is no difference with a
single drive.

Thank you
Milo

On 09.09.2020 at 15:51, Scott Q. wrote:
 > The 9361-8i does support passthrough ( JBOD mode ). Make sure you
have
 > the latest firmware.



Re: Btrfs RAID-10 performance

2020-09-10 Thread Miloslav Hůla

On 09.09.2020 at 17:52, John Stoffel wrote:

Miloslav> There is a one PCIe RAID controller in a chasis. AVAGO
Miloslav> MegaRAID SAS 9361-8i. And 16x SAS 15k drives conneced to
Miloslav> it. Because the controller does not support pass-through for
Miloslav> the drives, we use 16x RAID-0 on controller. So, we get
Miloslav> /dev/sda ... /dev/sdp (roughly) in OS. And over that we have
Miloslav> single btrfs RAID-10, composed of 16 devices, mounted as
Miloslav> /data.

I will bet that this is one of your bottlenecks as well.  Get a second
or third controller and split your disks across them evenly.


That's the plan for a next step.


Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours to finish,
Miloslav> 12'265'387 files last night.


That's sucky.  So basically you're hitting the drives hard with
random IOPs and you're probably running out of performance.  How much
space are you using on the filesystem?


Miloslav> It's not as sucky as it seems. rsync runs during the
Miloslav> night. And even though reading is high, the server load stays low. We
Miloslav> have problems with writes.

Ok.  So putting in an SSD pair to cache things should help.


And why not use btrfs send to ship off snapshots instead of using
rsync?  I'm sure that would be an improvement...


Miloslav> We run backups to an external NAS (NetApp) for a disaster
Miloslav> recovery scenario.  Moreover, the NAS is spread across multiple
Miloslav> locations. Then we create NAS snapshots, kept tens of days
Miloslav> back. All snapshots are easily available via NFS mount. And
Miloslav> NAS capacity is cheaper.

So why not run the backend storage on the Netapp, and just keep the
indexes and such local to the system?  I've run Netapps for many years
and they work really well.  And then you'd get automatic backups using
schedule snapshots.

Keep the index files local on disk/SSDs and put the maildirs out to
NFSv3 volume(s) on the Netapp(s).  Should do wonders.  And you'll stop
needing to do rsync at night.


It's an option we have in mind. As you wrote, NetApp is very solid.
The main reason for local storage is that the IMAP server is completely
isolated from the network. But maybe one day we will use it.



Miloslav> In the last half year, we ran into performance
Miloslav> troubles. Server load grows up to 30 in rush hours, due to
Miloslav> IO waits. We tried to attach additional hard drives (the 838G ones
Miloslav> in the list below) and increase free space by rebalancing. I
Miloslav> think it helped a little bit, but not so dramatically.


If you're IOPs bound, but not space bound, then you *really* want to
get an SSD in there for the indexes and such.  Basically the stuff
that gets written/read from all the time no matter what, but which
isn't large in terms of space.


Miloslav> Yes. We are now on 66% capacity. Adding SSD for indexes is
Miloslav> our next step.

This *should* give you a boost in performance.  But finding a way to
take before and after latency/performance measurements is key.  I
would look into using 'fio' to test your latency numbers.  You might
also want to try using XFS or even ext4 as your filesystem.  I
understand not wanting to 'fsck', so that might be right out.
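Something along these lines (purely illustrative parameters; point it at a
scratch directory on the mail volume and adjust size/jobs to taste) would
give you random-read latency numbers:

fio --name=maildir-randread --directory=/data/fio-test --rw=randread \
    --bs=4k --size=1g --numjobs=4 --iodepth=16 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting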


Unfortunately, to quickly fix the problem and make the server usable again,
we already added an SSD and moved the indexes onto it. So we have no
measurements of the old state.


The situation is better but, I guess, the problem still exists. It takes some
time for the load to grow. We will see.


Thank you for the fio tip. Definitely I'll try that.

Kind regards
Milo


Re: Btrfs RAID-10 performance

2020-09-10 Thread Miloslav Hůla
Some controllers have a direct "pass through to OS" option for a drive;
that's what I meant. I can't recall why we chose RAID-0 instead of
JBOD, there was some reason, but I hope there is no difference with a
single drive.


Thank you
Milo

On 09.09.2020 at 15:51, Scott Q. wrote:
The 9361-8i does support passthrough ( JBOD mode ). Make sure you have 
the latest firmware.


Re: Btrfs RAID-10 performance

2020-09-09 Thread Miloslav Hůla

Hi, thank you for your reply. I'll continue inline...

On 09.09.2020 at 3:15, John Stoffel wrote:

Miloslav> Hello,
Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got reply:
Miloslav> "RAID-1 would be preferable"
Miloslav> 
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
Miloslav> May I ask you for the comments as from people around the Dovecot?


Miloslav> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
Miloslav> server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of 
RAM.
Miloslav> We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. It
Miloslav> takes about 50 minutes to finish.

Miloslav> # uname -a
Miloslav> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64
Miloslav> GNU/Linux

Miloslav> RAID is a composition of 16 harddrives. Harddrives are connected via
Miloslav> AVAGO MegaRAID SAS 9361-8i as a RAID-0 devices. All harddrives are SAS
Miloslav> 2.5" 15k drives.

Can you post the output of "cat /proc/mdstat" or since you say you're
using btrfs, are you using their own RAID0 setup?  If so, please post
the output of 'btrfs stats' or whatever the command is you use to view
layout info?


There is one PCIe RAID controller in the chassis, an AVAGO MegaRAID SAS
9361-8i, with 16x SAS 15k drives connected to it. Because the controller
does not support pass-through for the drives, we use 16x RAID-0 on the
controller. So we get /dev/sda ... /dev/sdp (roughly) in the OS. And over
that we have a single btrfs RAID-10, composed of 16 devices, mounted as /data.


We have chosen this wiring for several reasons:
- easy to increase capacity
- easy to replace drives with larger ones
- due to checksumming, btrfs does not need fsck in case of power failure
- btrfs scrub discovers a failing drive sooner than S.M.A.R.T. or the RAID
controller




Miloslav> Server serves as a IMAP with Dovecot 2.2.27-3+deb9u6, 4104 accounts,
Miloslav> Mailbox format, LMTP delivery.

How often are these accounts hitting the server?


The IMAP server serves a university. So there are typical rush hours from 7AM
to 3PM. Usage lowers during the evening; it is almost unused during the night.




Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours to finish,
Miloslav> 12'265'387 files last night.

That's sucky.  So basically you're hitting the drives hard with
random IOPs and you're probably running out of performance.  How much
space are you using on the filesystem?


It's not as sucky as it seems. rsync runs during the night. And even though
reading is high, the server load stays low. We have problems with writes.




And why not use btrfs send to ship off snapshots instead of using
rsync?  I'm sure that would be an improvement...
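Roughly like this (dates, paths and the backup host are only illustrative; it
assumes /data is a btrfs subvolume and the remote side can run btrfs receive):

# take a read-only snapshot, then send only the difference against yesterday's
btrfs subvolume snapshot -r /data /data/.snap/2020-09-10
btrfs send -p /data/.snap/2020-09-09 /data/.snap/2020-09-10 | \
    ssh backup-host btrfs receive /backup/imap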


We run backups to an external NAS (NetApp) for a disaster recovery scenario.
Moreover, the NAS is spread across multiple locations. Then we create NAS
snapshots, kept tens of days back. All snapshots are easily available via NFS
mount. And NAS capacity is cheaper.




Miloslav> In the last half year, we ran into performance
Miloslav> troubles. Server load grows up to 30 in rush hours, due to
Miloslav> IO waits. We tried to attach additional hard drives (the 838G ones
Miloslav> in the list below) and increase free space by rebalancing. I
Miloslav> think it helped a little bit, but not so dramatically.

If you're IOPs bound, but not space bound, then you *really* want to
get an SSD in there for the indexes and such.  Basically the stuff
that gets written/read from all the time no matter what, but which
isn't large in terms of space.


Yes. We are now on 66% capacity. Adding SSD for indexes is our next step.



Also, adding in another controller card or two would also probably
help spread the load across more PCI channels, and reduce contention
on the SATA/SAS bus as well.


Probably we will wait to see how the SSD helps first but, as you wrote, it is
a possible next step.



Miloslav> Is this a reasonable setup and use case for btrfs RAID-10?
Miloslav> If so, are there some recommendations to achieve better
Miloslav> performance?

1. move HOT data to SSD based volume RAID 1 pair.  On a separate
controller.


OK


2. add more controllers, which also means you're more redundant in
case one controller fails.


OK


3. Clone the system and put Dovecot IMAP director in front of the
setup.


I still hope that one server can handle 4105 accounts.


4. Stop using rsync for copying to your DR site, use the btrfs snap
send, or whatever the commands are.


I hope it is not needed in our scenario.


5. check which dovecot backend you're using and think about moving to
one which doesn't involve nearly as many files.


Maildir is comfortable for us. From time to time, users call us with "I
accidentally deleted the folder", and it is super easy to copy it back
from backup.



6. Find out who your biggest users are, in terms of emails and move
them to SSDs if step 1 is too hard to do at first.


OK


Can you also grab some 'iostat -dhm 3

Re: Btrfs RAID-10 performance

2020-09-08 Thread Miloslav Hůla

Thanks for the tips!

On 07.09.2020 at 15:24, Scott Q. wrote:
1. I assume that's a 2U format -24 bays. You only have 1 raid card for 
all 24 disks ? Granted you only have 16, but usually you should assign 1 
card per 8 drives. In our standard 2U chassis we have 3 hba's per 8 
drives. Your backplane should support that.


Exactly. And what's the reason/bottleneck? PCIe or card throughput?


2. Add more drives


We can add 2 more drives, and we actually did yesterday, but we keep
free slots to be able to replace drives with double-capacity ones.



3. Get a pci nvme ssd card and move the indexes/control/sieve files there.


It complicates current backup and restore a little bit, but I'll 
probably try that.


Thank you,
Milo



On Monday, 07/09/2020 at 08:16 Miloslav Hůla wrote:

On 07.09.2020 at 12:43, Sami Ketola wrote:
 >> On 7. Sep 2020, at 12.38, Miloslav Hůla <miloslav.h...@gmail.com> wrote:
 >>
 >> Hello,
 >>
 >> I sent this into the Linux Kernel Btrfs mailing list and I got
reply: "RAID-1 would be preferable"

(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
May I ask you for the comments as from people around the Dovecot?
 >>
 >>
 >> We are using btrfs RAID-10 (/data, 4.7TB) on a physical
Supermicro server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and
125GB of RAM. We run 'btrfs scrub start -B -d /data' every Sunday as
a cron task. It takes about 50 minutes to finish.
 >>
 >> # uname -a
 >> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20)
x86_64 GNU/Linux
 >>
 >> RAID is a composition of 16 harddrives. Harddrives are connected
via AVAGO MegaRAID SAS 9361-8i as a RAID-0 devices. All harddrives
are SAS 2.5" 15k drives.
 >>
 >> Server serves as a IMAP with Dovecot 2.2.27-3+deb9u6, 4104
accounts, Mailbox format, LMTP delivery.
 >
 > does "Mailbox format" mean mbox?
 >
 > If so, then there is your bottleneck. mbox is the slowest
possible mailbox format there is.
 >
 > Sami

Sorry, no, it is a typo. We are using "Maildir".

"doveconf -a" attached

Milo


Re: Btrfs RAID-10 performance

2020-09-07 Thread Miloslav Hůla

On 07.09.2020 at 12:43, Sami Ketola wrote:

On 7. Sep 2020, at 12.38, Miloslav Hůla  wrote:

Hello,

I sent this into the Linux Kernel Btrfs mailing list and I got reply: "RAID-1 would 
be preferable" 
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
 May I ask you for the comments as from people around the Dovecot?


We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro server with 
Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM. We run 'btrfs scrub 
start -B -d /data' every Sunday as a cron task. It takes about 50 minutes to 
finish.

# uname -a
Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 GNU/Linux

RAID is a composition of 16 harddrives. Harddrives are connected via AVAGO MegaRAID 
SAS 9361-8i as a RAID-0 devices. All harddrives are SAS 2.5" 15k drives.

Server serves as a IMAP with Dovecot 2.2.27-3+deb9u6, 4104 accounts, Mailbox 
format, LMTP delivery.


does "Mailbox format" mean mbox?

If so, then there is your bottleneck. mbox is the slowest possible mailbox 
format there is.

Sami


Sorry, no, it is a typo. We are using "Maildir".

"doveconf -a" attached

Milo


# 2.2.27 (c0f36b0): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.16 (fed8554)
# OS: Linux 4.9.0-12-amd64 x86_64 Debian 9.13
# NOTE: Send doveconf -n output instead when asking for help.
auth_anonymous_username = anonymous
auth_cache_negative_ttl = 30 secs
auth_cache_size = 100 M
auth_cache_ttl = 30 secs
auth_debug = no
auth_debug_passwords = no
auth_default_realm =
auth_failure_delay = 2 secs
auth_gssapi_hostname =
auth_krb5_keytab =
auth_master_user_separator =
auth_mechanisms = plain
auth_policy_hash_mech = sha256
auth_policy_hash_nonce =
auth_policy_hash_truncate = 12
auth_policy_reject_on_fail = no
auth_policy_request_attributes = login=%{orig_username} pwhash=%{hashed_password} remote=%{real_rip}

auth_policy_server_api_header =
auth_policy_server_timeout_msecs = 2000
auth_policy_server_url =
auth_proxy_self =
auth_realms =
auth_socket_path = auth-userdb
auth_ssl_require_client_cert = no
auth_ssl_username_from_cert = no
auth_stats = no
auth_use_winbind = no
auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@

auth_username_format = %Lu
auth_username_translation =
auth_verbose = no
auth_verbose_passwords = no
auth_winbind_helper_path = /usr/bin/ntlm_auth
auth_worker_max_count = 30
base_dir = /var/run/dovecot
config_cache_size = 1 M
debug_log_path =
default_client_limit = 1000
default_idle_kill = 1 mins
default_internal_user = dovecot
default_login_user = dovenull
default_process_limit = 100
default_vsz_limit = 256 M
deliver_log_format = msgid=%m: %$
dict_db_config =
director_consistent_hashing = no
director_doveadm_port = 0
director_flush_socket =
director_mail_servers =
director_servers =
director_user_expire = 15 mins
director_user_kick_delay = 2 secs
director_username_hash = %u
disable_plaintext_auth = yes
dotlock_use_excl = yes
doveadm_allowed_commands =
doveadm_api_key =
doveadm_password =
doveadm_port = 0
doveadm_socket_path = doveadm-server
doveadm_username = doveadm
doveadm_worker_count = 0
dsync_alt_char = _
dsync_features =
dsync_remote_cmd = ssh -l%{login} %{host} doveadm dsync-server -u%u -U
first_valid_gid = 1
first_valid_uid = 109
haproxy_timeout = 3 secs
haproxy_trusted_networks =
hostname =
imap_capability =
imap_client_workarounds =
imap_hibernate_timeout = 0
imap_id_log = *
imap_id_send = name *
imap_idle_notify_interval = 2 mins
imap_logout_format = in=%i out=%o
imap_max_line_length = 64 k
imap_metadata = no
imap_urlauth_host =
imap_urlauth_logout_format = in=%i out=%o
imap_urlauth_port = 143
imapc_cmd_timeout = 5 mins
imapc_features =
imapc_host =
imapc_list_prefix =
imapc_master_user =
imapc_max_idle_time = 29 mins
imapc_max_line_length = 0
imapc_password =
imapc_port = 143
imapc_rawlog_dir =
imapc_sasl_mechanisms =
imapc_ssl = no
imapc_ssl_verify = yes
imapc_user =
import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID LISTEN_FDS
info_log_path =
instance_name = dovecot
last_valid_gid = 0
last_valid_uid = 0
lda_mailbox_autocreate = no
lda_mailbox_autosubscribe = no
lda_original_recipient_header =
libexec_dir = /usr/lib/dovecot
listen = *, ::
lmtp_address_translate =
lmtp_hdr_delivery_address = final
lmtp_proxy = no
lmtp_rcpt_check_quota = no
lmtp_save_to_detail_mailbox = no
lmtp_user_concurrency_limit = 0
lock_method = fcntl
log_path = syslog
log_timestamp = "%b %d %H:%M:%S "
login_access_sockets =
login_greeting = Dovecot ready.
login_log_format = %$: %s
login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c session=<%{session}>

login_plugin_dir = /usr/lib/dovecot/modules/login
login_plugins =
login_proxy_max_disconnect_delay = 0
login_source_ips =
login_trusted_networks =
mail_access_groups =
mail_always_cache_fields =
mail_attachment_dir =
mail_attachment_f

Btrfs RAID-10 performance

2020-09-07 Thread Miloslav Hůla

Hello,

I sent this to the Linux Kernel Btrfs mailing list and I got the reply
"RAID-1 would be preferable"
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
May I ask you for comments from people around Dovecot?



We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro 
server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM. 
We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. It 
takes about 50 minutes to finish.


# uname -a
Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 
GNU/Linux


The RAID is a composition of 16 hard drives. The hard drives are connected via
an AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All hard drives are SAS
2.5" 15k drives.


Server serves as a IMAP with Dovecot 2.2.27-3+deb9u6, 4104 accounts, 
Mailbox format, LMTP delivery.


We run 'rsync' to remote NAS daily. It takes about 6.5 hours to finish, 
12'265'387 files last night.



In the last half year, we ran into performance troubles. Server load
grows up to 30 in rush hours, due to IO waits. We tried to attach additional
hard drives (the 838G ones in the list below) and increase free space by
rebalancing. I think it helped a little bit, but not so dramatically.


Is this a reasonable setup and use case for btrfs RAID-10? If so, are 
there some recommendations to achieve better performance?


Thank you. With kind regards
Milo



# megaclisas-status
-- Controller information --
-- ID | H/W Model                  | RAM    | Temp | BBU  | Firmware
c0    | AVAGO MegaRAID SAS 9361-8i | 1024MB | 72C  | Good | FW: 24.16.0-0082


-- Array information --
-- ID | Type   | Size | Strpsz | Flags | DskCache | Status  | OS Path  | CacheCade | InProgress
c0u0  | RAID-0 | 838G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdq | None      | None
c0u1  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sda | None      | None
c0u2  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdb | None      | None
c0u3  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdc | None      | None
c0u4  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdd | None      | None
c0u5  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sde | None      | None
c0u6  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdf | None      | None
c0u7  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdg | None      | None
c0u8  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdh | None      | None
c0u9  | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdi | None      | None
c0u10 | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdj | None      | None
c0u11 | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdk | None      | None
c0u12 | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdl | None      | None
c0u13 | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdm | None      | None
c0u14 | RAID-0 | 558G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdn | None      | None
c0u15 | RAID-0 | 838G | 256 KB | RA,WB | Enabled  | Optimal | /dev/sdr | None      | None


-- Disk information --
-- ID   | Type | Drive Model                       | Size     | Status          | Speed    | Temp | Slot ID | LSI ID
c0u0p0  | HDD  | SEAGATE ST900MP0006 N003WAG0Q3S3  | 837.8 Gb | Online, Spun Up | 12.0Gb/s | 53C  | [8:14]  | 32
c0u1p0  | HDD  | HGST HUC156060CSS200 A3800XV250TJ | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 38C  | [8:0]   | 12
c0u2p0  | HDD  | HGST HUC156060CSS200 A3800XV3XT4J | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 43C  | [8:1]   | 11
c0u3p0  | HDD  | HGST HUC156060CSS200 ADB05ZG4XLZU | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 46C  | [8:2]   | 25
c0u4p0  | HDD  | HGST HUC156060CSS200 A3800XV3DWRL | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 48C  | [8:3]   | 14
c0u5p0  | HDD  | HGST HUC156060CSS200 A3800XV3XZTL | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 52C  | [8:4]   | 18
c0u6p0  | HDD  | HGST HUC156060CSS200 A3800XV3VSKJ | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 55C  | [8:5]   | 15
c0u7p0  | HDD  | SEAGATE ST600MP0006 N003WAF1LWKE  | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 56C  | [8:6]   | 28
c0u8p0  | HDD  | HGST HUC156060CSS200 A3800XV3XTDJ | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 55C  | [8:7]   | 20
c0u9p0  | HDD  | HGST HUC156060CSS200 A3800XV3T8XL | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 57C  | [8:8]   | 19
c0u10p0 | HDD  | HGST HUC156060CSS200 A7030XHL0ZYP | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 61C  | [8:9]   | 23
c0u11p0 | HDD  | HGST HUC156060CSS200 ADB05ZG4VR3P | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 60C  | [8:10]  | 24
c0u12p0 | HDD  | SEAGATE ST600MP0006 N003WAF195KA  | 558.4 Gb | Online, Spun Up | 12.0Gb/s | 60C  | [8:11]  | 29
c0u13p0 | HDD  | SEAGATE ST600MP0006 N003WAF1LTZW  | 558.4 Gb | Online, Spun Up | 1

Re: Renewal of Let's Encrypt Certificates in Dovecot

2018-10-11 Thread Miloslav Hůla

From my experience, restart is required.

On Debian Stretch, I edited the cron job to:

certbot -q renew --renew-hook 'service dovecot restart' --renew-hook 'service postfix reload'


Milo

On 2018-10-11 at 10:55, Ignacio Garcia wrote:
Hi there. I've been using Dovecot for quite some time now but I just 
started using Let's Encrypt certs. Since LE certs are renewed 
automatically without user intervention I'm wondering if I will need to 
restart dovecot after that renewal...


Has anybody had any experience with that?

Thanks so much for your help!

Ignacio


Re: Userdb by directory lookup

2018-08-30 Thread Miloslav Hůla
Well, I solved it with a passwd-file userdb, maintaining the user list in the
/data/vmail/global/users file.
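For the record, roughly (the passwd-file line format is
user:password:uid:gid:gecos:home; I believe uid/gid may also be given as names):

userdb {
  driver = passwd-file
  args = /data/vmail/global/users
}

# /data/vmail/global/users -- one line per existing account, e.g.:
milo::vmail:vmail::/data/vmail/user/milo

With this, LMTP only finds the listed users, so delivery to unknown addresses
is rejected instead of a Maildir being auto-created.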


Kind regards, Milo


On 2018-08-30 at 10:15, Miloslav Hůla wrote:
One day, I'll use LDAP. But the infrastructure I got is quite neglected
and some older admins are hard to convince to innovate. Just, ah :)

I read the whole documentation related to userdb and passdb. It is easy to
understand in relation to IMAP or POP3 access. But what I didn't
understand is the relation to LMTP: which one is used, and when, for a
successful delivery.

I'll try to move those settings to the global level.

Thank you, Milo


On 2018-08-30 at 09:32, Aki Tuomi wrote:

Is there some reason why you cannot use LDAP as userdb? Those uid / gid
/ home parameters can be also provided as global settings like

mail_uid=vmail

mail_gid=vmail

mail_home=/data/vmail/user/%n

Aki


On 29.08.2018 23:12, Miloslav Hůla wrote:

Hi,

I have the Dovecot (2.2.27-3+deb9u2) with LMTP and Postfix. Static
userdb:

userdb {
   driver = static
   args = uid=vmail gid=vmail home=/data/vmail/user/%n allow_all_users=yes

}

and passdb by LDAP, only to verify IMAP user password by bind.


Problem is, when someone sends email to non-exist...@mydomain.tld,
Dovecot automatically creates its home directory and Maildir.

Is there any way how deliver only when /data/vmail/user/%n directory
already exists, and reject otherwise?

When I remove allow_all_users=yes, LMTP stops to deliver at all with
550 code. I probably understand why, but I cannot figure how to solve
it. I cannot use LDAP to user lookup.

Milo




Re: Userdb by directory lookup

2018-08-30 Thread Miloslav Hůla
One day, I'll use LDAP. But the infrastructure I got is quite neglected
and some older admins are hard to convince to innovate. Just, ah :)

I read the whole documentation related to userdb and passdb. It is easy to
understand in relation to IMAP or POP3 access. But what I didn't
understand is the relation to LMTP: which one is used, and when, for a
successful delivery.

I'll try to move those settings to the global level.

Thank you, Milo


On 2018-08-30 at 09:32, Aki Tuomi wrote:

Is there some reason why you cannot use LDAP as userdb? Those uid / gid
/ home parameters can be also provided as global settings like

mail_uid=vmail

mail_gid=vmail

mail_home=/data/vmail/user/%n

Aki


On 29.08.2018 23:12, Miloslav Hůla wrote:

Hi,

I have the Dovecot (2.2.27-3+deb9u2) with LMTP and Postfix. Static
userdb:

userdb {
   driver = static
   args = uid=vmail gid=vmail home=/data/vmail/user/%n allow_all_users=yes
}

and passdb by LDAP, only to verify IMAP user password by bind.


Problem is, when someone sends email to non-exist...@mydomain.tld,
Dovecot automatically creates its home directory and Maildir.

Is there any way how deliver only when /data/vmail/user/%n directory
already exists, and reject otherwise?

When I remove allow_all_users=yes, LMTP stops to deliver at all with
550 code. I probably understand why, but I cannot figure how to solve
it. I cannot use LDAP to user lookup.

Milo




Userdb by directory lookup

2018-08-29 Thread Miloslav Hůla

Hi,

I have the Dovecot (2.2.27-3+deb9u2) with LMTP and Postfix. Static userdb:

userdb {
  driver = static
  args = uid=vmail gid=vmail home=/data/vmail/user/%n allow_all_users=yes
}

and passdb by LDAP, only to verify IMAP user password by bind.


The problem is, when someone sends email to non-exist...@mydomain.tld,
Dovecot automatically creates its home directory and Maildir.

Is there any way to deliver only when the /data/vmail/user/%n directory
already exists, and reject otherwise?


When I remove allow_all_users=yes, LMTP stops delivering at all, with a 550
code. I probably understand why, but I cannot figure out how to solve it. I
cannot use LDAP for the user lookup.


Milo


Re: Connection closed reason

2017-10-12 Thread Miloslav Hůla

On 12.10.2017 at 8:49, Steffen Kaiser wrote:
we have one user using the old Alpine client with IMAP. Time to time 
(3 times per day or 3 times per week) he get error: "MAIL FOLDER INBOX 
CLOSED DUE TO ACCESS ERROR" and he complains, that inbox stops to 
refresh with new emails.


when I get this error, it's always a network issue, or I restarted Dovecot.


Thanks. We didn't restart Dovecot; I was afraid of a network issue. It's
hard to find.


Milo


Re: Connection closed reason

2017-10-12 Thread Miloslav Hůla

On 11.10.2017 at 21:02, Joseph Tam wrote:

I don't know what would cause this -- maybe some firewall state session
timeout?


Thanks for the tip. Currently, the intranet is not behind any kind of firewall.

Milo


Connection closed reason

2017-10-11 Thread Miloslav Hůla

Hi,

we have one user using the old Alpine client with IMAP. From time to time (3
times per day or 3 times per week) he gets the error "MAIL FOLDER INBOX
CLOSED DUE TO ACCESS ERROR" and complains that the inbox stops refreshing
with new emails.


I don't know Alpine, but I can imagine that Alpine creates a TCP
connection to IMAPS and uses IDLE. I read the wiki page about Timeouts [1]
and it seems OK. Alpine is connected for 5 or more hours without interruption.
But I found 'Connection closed' in mail.log at the approximate times of the
user's reports.


Do I understand it correctly that 'Connection closed' is some kind of
incorrect connection end? I can find 'Logged out' messages, which seem to me
to be the correct ones.


I can imagine that such situations may happen for moving clients, such as
a smartphone losing signal. But the mentioned user is on the local network,
connected by cable.


I'm interested in what exactly 'Connection closed' in the mail log means. We
have the following lines in the log:



dovecot: imap(username): Connection closed in=63 out=288442

dovecot: imap(username): Connection closed (IDLE running for 0.001 + 
waiting input for 0.001 secs, 2 B in + 10+0 B out, state=wait-input) 
in=9668 out=17375


imap(username): Connection closed: read(size=3751) failed: Connection 
reset by peer in=4441 out=51431


dovecot: imap(username): Connection closed (UID fetch running for 0.008 
+ waiting input for 0.001 secs, 0.001 in locks, 101 B in + 202+202 B 
out, state=wait-input) in=3330 out=436043



Could it be a problem in our network, or a Dovecot configuration issue? Can I
somehow debug all connections for a single client?


Thank you, Milo


[1] https://wiki.dovecot.org/Timeouts


Re: CPU for Dovecot

2016-11-28 Thread Miloslav Hůla

Hi,

thanks to all for the advice. We will choose the 8-core variant.

About the IO notes... there will be local 10k SAS drives in RAID 10, a similar
configuration to what we have now, and it works fine.


Kind regards, Miloslav


On 25.11.2016 at 14:29, Miloslav Hůla wrote:

we are planning to change hardware for our standalone Dovecot instance
handling ~5800 IMAP users with 1TB mailboxes on local RAID. Is there
some recommendation about CPU?

We can choose from:
 - Intel Xeon E5-2620v4 - 2,1GHz@8,0GT 20MB cache, 8core, HT, 85W, LGA2011
 - Intel Xeon E5-2623v4 - 2,6GHz@8,0GT 10MB cache, 4core, HT, 85W, LGA2011

The difference is more cores vs. higher frequency.


CPU for Dovecot

2016-11-25 Thread Miloslav Hůla

Hi,

we are planning to change hardware for our standalone Dovecot instance 
handling ~5800 IMAP users with 1TB mailboxes on local RAID. Is there 
some recommendation about CPU?


We can choose from:
 - Intel Xeon E5-2620v4 - 2,1GHz@8,0GT 20MB cache, 8core, HT, 85W, LGA2011
 - Intel Xeon E5-2623v4 - 2,6GHz@8,0GT 10MB cache, 4core, HT, 85W, LGA2011

The difference is more cores vs. higher frequency.

Thank you, Miloslav


Absolute path in SUBSCRIPTIONS

2016-07-20 Thread Miloslav Hůla

Hello,

I'm using the following two namespaces with Dovecot 2.2.13-12~deb8u1:

namespace inbox {
  inbox = yes
  list = yes
  location =
  prefix = INBOX.  # the previous Cyrus compatibility
  separator = .
  subscriptions = yes
  type = private
  ...
}

namespace {
  inbox = no
  list = children
  location = maildir:/vmail/user/%%n/Maildir:INDEXPVT=/vmail/user/%n/Maildir/Shared/%%n

  prefix = user.%%n.
  separator = .
  subscriptions = yes
  type = shared
}

For the shared namespace, the subscriptions are stored globally in
'/vmail/user/%%n/Maildir/subscriptions', which is bad for me. I would
like to keep subscriptions per user.


I cannot set 'subscriptions = no' because I have no parent namespace. 
And when I set


:SUBSCRIPTIONS=/vmail/user/%n/Maildir/subscriptions-shared

it does not work (absolute path does not work) and it creates file in:

/vmail/user/%%n/Maildir/vmail/user/%n/Maildir/subscriptions-shared
instead of
   /vmail/user/%n/Maildir/subscriptions-shared

I found that a relative path hack works:

:SUBSCRIPTIONS=../../%n/Maildir/subscriptions-shared

but I'm not sure it is a legitimate solution.

Are there other options?

Kind regards, Milo


Where Dovecot stores subscribtions for shared folder

2016-06-27 Thread Miloslav Hůla

Hi,

could someone please hint me where Dovecot stores subscriptions for a
shared folder?


Our configuration:

namespace {
  disabled = no
  hidden = no
  ignore_on_failure = no
  inbox = no
  list = children
  location = maildir:/vmail/user/%%n/Maildir:INDEXPVT=/vmail/user/%n/Maildir/Shared/%%n

  prefix = user.%%n.
  separator = .
  subscriptions = yes
  type = shared
}

When I subscribe to 'user.test', I get
~Maildir/Shared/test/.INBOX/dovecot.index.pvt.log in it.


When I unsubscribe from 'user.test', the file stays there and its hash is
the same.


Kind regards, Milo


Re: Mailboxes on NFS or iSCSI

2016-06-27 Thread Miloslav Hůla

Hi,

thank you both for the hints. I'm still not sure what to choose, so I'll
probably test it on some dev installation.


Kind regards, Milo


On 23.06.2016 at 8:05, Götz Reinicke - IT Koordinator wrote:

Hi,

Am 22.06.16 um 16:40 schrieb Miloslav Hůla:

Hello,

we are running Dovecot (2.2.13-12~deb8u1) on Debian stable. Configured
with Mailbox++, IMAP, POP3, LMTPD, Managesieved, ACL. Mailboxes are on
local 1.2TB RAID, it's about 5310 accounts.

We are slowly getting out of space and we are considering to move
Mailboxes onto Netapp disk array with two independent network
connections.

Are there some pitfalls? Not sure we should use NFS or iSCSI mounts
(both open implementations are not so shiny).

Thanks for sharing any experiences.


have a look at my question and the answers from the yesterday posting
"Storage upgrade maildir suggestions". May be they help you too.

Regards . Götz





Mailboxes on NFS or iSCSI

2016-06-22 Thread Miloslav Hůla

Hello,

we are running Dovecot (2.2.13-12~deb8u1) on Debian stable. Configured
with Maildir++, IMAP, POP3, LMTPD, Managesieved, ACL. Mailboxes are on a
local 1.2TB RAID; it's about 5310 accounts.


We are slowly running out of space and we are considering moving the
mailboxes onto a NetApp disk array with two independent network connections.


Are there some pitfalls? Not sure we should use NFS or iSCSI mounts
(both open implementations are not so shiny).


Thanks for sharing any experiences.

Kind regards, Milo



Re: Cyrus mailbox (plain files) to Dovecot

2015-09-25 Thread Miloslav Hůla

Hi,

the simplest way is to create cur/new/tmp folders for every mailbox and
copy all mail files into the new/ folder. Dovecot will create all other files,
like 'dovecot-uidlist', automatically. You may get some warnings.
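Roughly like this (paths and the vmail owner are only an example; adjust to
your Cyrus spool layout):

for u in $(ls /var/spool/imap/user); do
  # Cyrus stores one message per file named like '123.'; drop them into new/
  mkdir -p "/vmail/user/$u/Maildir"/{cur,new,tmp}
  cp "/var/spool/imap/user/$u"/*. "/vmail/user/$u/Maildir/new/"
  chown -R vmail:vmail "/vmail/user/$u"
done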


All emails will be marked as new ones and all will be redownloaded.

Milo

On 25.9.2015 at 13:03, Wolfgang Rosenauer wrote:

I'm migrating from a Cyrus to a Dovecot installation right now. As part of
it I've got plain Cyrus mailboxes (w/o real metadata; so to say I've got
the /var/spool/imap/user part but not the /var/lib/imap/user one)).
Those former mailboxes I want to provide under a public namespace via ACLs.
The question I cannot answer right now is:
How can I convert these plain mailboxes on a FS level to maildir++ so I can
provide them as public mailboxes on the new system? The tools I have found
require a valid/complete cyrus mailbox. (I don't really care about message
flags etc).


Unsubscribe from shared mailbox

2015-09-03 Thread Miloslav Hůla

Hi,

I'm using Dovecot 2.2.13-11 (Debian Jessie) and a shared namespace (a setting
compatible with the old Cyrus):


namespace {
  list = children
  location = maildir:/vmail/user/%%n/Maildir:INDEXPVT=/vmail/user/%n/Maildir/Shared/%%n

  prefix = user.%%n.
  separator = .
  subscriptions = yes
  type = shared
}

My account: milo
Other account: peter

After subscription, I have:
/vmail/user/milo/Maildir/Shared/peter/.INBOX/dovecot.index.pvt.log

After unsubscribing, the file and folder still exist. Where does Dovecot
store the information that I have unsubscribed from this folder? I need this
to analyze subscriptions over all mail accounts.


Thank you, Miloslav


Re: Allow delivery to existing accounts only with LDAP and static

2015-08-28 Thread Miloslav Hůla

On 28.8.2015 at 11:07, Steffen Kaiser wrote:

On Fri, 28 Aug 2015, Miloslav Hůla wrote:

On 28.8.2015 at 9:56, Steffen Kaiser wrote:

we are using LDAP binding as a passdb, and static with
allow_all_users=yes as an userdb.

Works fine, but problem is, Maildirs are created for non-existent
accounts too. We would like to prevent it.

The LDAP binding does not support user lookups. Is the correct way to
use checkpassword as a passdb before LDAP, check for account existence
here and:


"the correct way" is to reject messages to non-existant users by the
MTA.

Which one do you use?


We are using Postfix.


Then this link is probably helpful:

http://www.postfix.org/LDAP_README.html
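The gist is an LDAP lookup table of valid recipients, roughly like this
(which map it belongs in depends on whether the domain is virtual, relayed or
local; the attribute names are only an example):

# main.cf
virtual_mailbox_maps = ldap:/etc/postfix/ldap-users.cf

# /etc/postfix/ldap-users.cf
server_host = ldap.domain.tld
search_base = dc=domain,dc=tld
query_filter = (uid=%u)
result_attribute = uid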


Thank you Steffen; at first, I didn't realize that the MTA should reject it.

We can use LDAP only for auth binds for now, but thanks for pointing it out.

Best regards, Miloslav


Re: Allow delivery to existing accounts only with LDAP and static

2015-08-28 Thread Miloslav Hůla

On 28.8.2015 at 9:56, Steffen Kaiser wrote:

we are using LDAP binding as a passdb, and static with
allow_all_users=yes as an userdb.

Works fine, but problem is, Maildirs are created for non-existent
accounts too. We would like to prevent it.

The LDAP binding does not support user lookups. Is the correct way to
use checkpassword as a passdb before LDAP, check for account existence
here and:


"the correct way" is to reject messages to non-existant users by the MTA.

Which one do you use?


We are using Postfix.

Thanks in advance.

-- Miloslav


Allow delivery to existing accounts only with LDAP and static

2015-08-27 Thread Miloslav Hůla

Hi,

we are using LDAP binding as a passdb, and static with
allow_all_users=yes as a userdb.


It works fine, but the problem is that Maildirs are created for non-existent
accounts too. We would like to prevent it.


The LDAP binding does not support user lookups. Is the correct way to
use checkpassword as a passdb before LDAP, check for account existence
here and:


result_success=continue
result_failure=return-fail

?

Thank you, regards, Miloslav


Structure of dovecot.index.pvt.log

2015-08-03 Thread Miloslav Hůla

Hi,

we are migrating from Cyrus to Dovecot and I would like to migrate the Seen
flags for shared folders too.


We have Dovecot 2.2.13 prepared as:
location = maildir:/vmail/user/%%n/Maildir:INDEXPVT=/vmail/user/%n/Maildir/Shared/%%n


Now I'm looking for the 'dovecot.index.pvt.log' syntax to be able to migrate
Seen flags. All I know is that the index contains message UIDs and the Seen
flag.


May I ask you for a link to the docs (if they exist) or into the source code?
Should I care about 'dovecot.index.pvt.log' timestamps?


Thank you, Milo


Re: Postpone email delivery with LMTP and Postfix

2015-04-30 Thread Miloslav Hůla

On 30.4.2015 at 18:51, Thomas Leuxner wrote:

* Miloslav Hůla  2015.04.29 22:47:


is there any way, based on userdb/passwdb attribute, how to postpone an
email delivery? The purpose is, I need to freeze an account (Maildir++) for
a few minutes and new email must not be delivered. But emails must be
delivered when account is unfrozen.


You can put the messages on hold and then release them again:

http://wiki2.dovecot.org/Migration/Online
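In Postfix terms that is roughly (the table and column names are only an
example):

# main.cf
smtpd_recipient_restrictions =
    check_recipient_access pgsql:/etc/postfix/frozen.cf,
    ...

# /etc/postfix/frozen.cf -- returns HOLD while the account is frozen
hosts = localhost
user = postfix
password = secret
dbname = mail
query = SELECT 'HOLD' FROM mailbox WHERE username = '%u' AND frozen = true

# once the account is unfrozen, release the held mail:
postsuper -H ALL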


Thomas, in combination with SQL, that's exactly what I'm looking for. 
Thank you!


Best regards, Milo


Postpone email delivery with LMTP and Postfix

2015-04-29 Thread Miloslav Hůla

Hi,

is there any way, based on a userdb/passdb attribute, to postpone an
email delivery? The purpose is that I need to freeze an account (Maildir++)
for a few minutes and new email must not be delivered. But the emails must
be delivered when the account is unfrozen.


I found a few things about Postfix filters, but I'm not sure it's a good way.

Thank you, Milo


Re: Migrating from Cyrus to Dovecot

2015-03-31 Thread Miloslav Hůla

Hi Timo,

thank you for the valuable answers!

Milo


On 27.3.2015 at 22:12, Timo Sirainen wrote:

On 27 Mar 2015, at 10:19, Miloslav Hůla  wrote:

Hi,

we are migrating from Cyrus 2.3.7 to Dovecot 2.2.13. We have ~7000 maildirs 
with ~500GB. Our goal is to do the migration without users have notice and with 
the shortest service downtime. The users use IMAP (with shared folders and 
ACL), POP3 and sieve filters.

As a first choice, we tried the Dovecot's dsync tool. First tests were great, 
but we are not able to change the Cyrus auth backend for migration. Moreover, 
this migration seems too slow for us.

As a second try, we tried the cyrus2dovecot migrating Perl scripts (and their 
derivates) from Wiki2. More or less they works but we found we need more 
control during the migration.

So, as a third try, we wrote own migrating scripts. And thanks to the 
cyrus2dovecot it wasn't too much complicated. And there are my questions:

A) Files and dirs timestamps
The mtime of email file is important as an internal date as I found on Wiki2. 
But what about timestamps of cur/new/tmp directories or Dovecot's internal 
files like dovecot-uidlist? Do they play some role here?


No.


B) The 128 bit mailbox UID
The Wiki2 speaks about 128 bit mailbox UID at first line of dovecot-uidlist. 
Cyrus preserves only 64 bit UID. Is this mailbox UID required by Dovecot? If 
so, can we use 50118c4a11c1 (Cyrus UID padded by zeros)?


The mailbox GUID is internal to Dovecot. There's no standard IMAP way to see 
it, so there's no need to migrate it. Better not to set it and let Dovecot 
generate it automatically.


C) Format of dovecot-uidlist records
Wiki2 shows two examples:
25006 :1276528487.M364837P9451.kurkku,S=1355,W=1394:2,
25017 W2481 :1276533073.M242911P3632.kurkku:2,F

Which format is preferred? Or what the benefits are?


If W=size is in the filename, it never needs to be recalculated if 
dovecot-uidlist is lost. Of course, dovecot-uidlist should never be lost. So I 
don't think it makes a huge difference. If you care about performance, 
sdbox/mdbox mailbox format would behave much better. sdbox is a close match to 
Cyrus - so with Maildir you're actually likely making the disk I/O performance 
somewhat slower in Dovecot than in Cyrus, although that also depends on other 
things.


D) Converting between CRLF and LF
If I understand correctly, Dovecot stores emails with LF only. We have all 
emails with CRLF now on Cyrus and converting them to LF only is a little more 
time consuming. Is there any benefit to do that? Or can we live with 
'mail_save_crlf' without problems?


Dovecot can automatically handle both mixed CRLF and LF mails, you can keep old 
mails as CRLF and new mails as LF. mail_save_crlf setting only controls what is 
used for new emails. If you want to save more disk space you can enable 
compression.
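For example with the zlib plugin (a minimal sketch; newly saved mails are
gzipped, existing ones stay as they are):

mail_plugins = $mail_plugins zlib

plugin {
  zlib_save = gz
  zlib_save_level = 6
}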


E) POP3 backend
I found a lot of information about IMAP internals but little on POP3 internals.
What do I need to make the POP3 migration transparent for users?


Just preserve the UIDL. See the pop3_uidl_format setting in 
http://wiki2.dovecot.org/Migration/Cyrus


Migrating from Cyrus to Dovecot

2015-03-27 Thread Miloslav Hůla

Hi,

we are migrating from Cyrus 2.3.7 to Dovecot 2.2.13. We have ~7000
maildirs with ~500GB. Our goal is to do the migration without users noticing
and with the shortest service downtime. The users use IMAP (with
shared folders and ACL), POP3 and sieve filters.


As a first choice, we tried Dovecot's dsync tool. The first tests were
great, but we are not able to change the Cyrus auth backend for the
migration. Moreover, this migration seems too slow for us.


As a second try, we tried the cyrus2dovecot migration Perl scripts (and
their derivatives) from Wiki2. More or less they work, but we found we
need more control during the migration.


So, as a third try, we wrote our own migration scripts. And thanks to
cyrus2dovecot it wasn't too complicated. Here are my questions:


A) Files and dirs timestamps
The mtime of an email file is important as the internal date, as I found on
Wiki2. But what about the timestamps of the cur/new/tmp directories or
Dovecot's internal files like dovecot-uidlist? Do they play some role here?


B) The 128 bit mailbox UID
The Wiki2 speaks about a 128-bit mailbox UID at the first line of
dovecot-uidlist. Cyrus preserves only a 64-bit UID. Is this mailbox UID
required by Dovecot? If so, can we use 50118c4a11c1
(the Cyrus UID padded by zeros)?


C) Format of dovecot-uidlist records
Wiki2 shows two examples:
25006 :1276528487.M364837P9451.kurkku,S=1355,W=1394:2,
25017 W2481 :1276533073.M242911P3632.kurkku:2,F

Which format is preferred? Or what the benefits are?

D) Converting between CRLF and LF
If I understand correctly, Dovecot stores emails with LF only. We have
all emails with CRLF now on Cyrus, and converting them to LF only is a
little more time consuming. Is there any benefit to doing that? Or can we
live with 'mail_save_crlf' without problems?


E) POP3 backend
I found a lot of information about IMAP internals but little on POP3
internals. What do I need to make the POP3 migration transparent for users?


Many thanks for any answers.

Regards, Milo