Seeking Suggestions for Optimizing Dovecot Performance

2024-08-21 Thread leoniemeeyr--- via dovecot
Hi everyone,

I hope you’re all doing well! I’ve been working with Dovecot for a while now 
and I’m looking to optimize its performance on my server. I’d love to hear any 
tips or suggestions you might have to help me get the best performance out of 
it.

Specifically, I’m interested in:

1. Configuration Tweaks: Any key settings that you’ve found to be particularly 
beneficial for performance? (See the sketch after this list.)
2. Best Practices: Are there any common practices or tools that help in 
monitoring and improving performance?
3. Resource Management: How do you handle resource allocation to ensure smooth 
operation under heavy loads?
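
On point 1, a few knobs that come up again and again on this list, collected
here purely as an illustrative sketch (values are placeholders, not
recommendations; check doc.dovecot.org before copying anything):

auth_cache_size = 10M
auth_cache_ttl = 1 hour
mail_fsync = optimized
service imap-login {
   service_count = 0       # "high-performance mode": login processes are reused
   process_min_avail = 4   # roughly one per CPU core
}

On point 2, doveconf -n prints the effective non-default configuration and is
the usual starting point for any tuning discussion here.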

Any advice or experiences you can share would be greatly appreciated! Thanks in 
advance for your help.

Best Regards



Re: Seeking Guidance on Dovecot Configuration, Security, and Performance Optimization

2024-07-23 Thread Aki Tuomi via dovecot


> On 23/07/2024 15:04 EEST Selena Thomas via dovecot  
> wrote:
> 
>  
> Hi everyone,
> 
> I am setting up Dovecot for the first time and could use some guidance. I
> have a couple of questions:
> 
> - Configuration Basics: What are the essential configuration files I need to
> focus on for a basic setup?
> - Security Best Practices: What steps should I take to ensure my Dovecot
> server is secure, especially regarding authentication and SSL/TLS?
> - Performance Tuning: Are there any tips for optimizing performance for a
> small to medium-sized deployment?
> 
> I appreciate any advice or resources you can point me towards. 
> 
> Thanks in advance for your help!
> 
> With Regards,
> 
> Selena
> 


You can start by reading:

- https://doc.dovecot.org/configuration_manual/quick_configuration/
- https://doc.dovecot.org/configuration_manual/dovecot_ssl_configuration/
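
For the TLS part, the core of a working setup from the second link looks
roughly like this (certificate paths are placeholders for your own files):

ssl = required
ssl_cert = </etc/ssl/certs/mail.example.com.pem
ssl_key = </etc/ssl/private/mail.example.com.key
ssl_min_protocol = TLSv1.2
disable_plaintext_auth = yes

Note the leading '<' in ssl_cert/ssl_key: Dovecot reads the file contents
rather than treating the value as a literal string.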

Aki


Seeking Guidance on Dovecot Configuration, Security, and Performance Optimization

2024-07-23 Thread Selena Thomas via dovecot
Hi everyone,

I am setting up Dovecot for the first time and could use some guidance. I have
a couple of questions:

- Configuration Basics: What are the essential configuration files I need to
focus on for a basic setup?
- Security Best Practices: What steps should I take to ensure my Dovecot server
is secure, especially regarding authentication and SSL/TLS?
- Performance Tuning: Are there any tips for optimizing performance for a small
to medium-sized deployment?

I appreciate any advice or resources you can point me towards. 

Thanks in advance for your help!

With Regards,
Selena





Re: IMAP login-logout cycle performance with performance mode seems slow/limited and cause cannot be found

2024-07-03 Thread m--- via dovecot
That one truly fixed it: it yields 4500-5000 cycles per second now and the
latency is superb.

If I understand the docs correctly, this means that before there was only a
single imap process, serving a single connection and then exiting, which
throttled logins, whereas now at least 10 processes are kept available and
each serves up to 1024 connections before exiting. Is that right?

I will go ahead from here with my tests, thank you!


Re: IMAP login-logout cycle performance with performance mode seems slow/limited and cause cannot be found

2024-07-03 Thread Aki Tuomi via dovecot
You could also try this:

service imap {
   process_min_avail = 10
   service_count = 1024
}

Aki

> On 03/07/2024 10:22 EEST m--- via dovecot  wrote:
> 
>  
> Thank you for the swift answer. That's what I tried, without success.
> 
> service imap-login {
>   process_limit = 15000
>   process_min_avail = 48
>   service_count = 0
>   vsz_limit = 2 G
> }
> 
> But I also now tried setting both userdb and passdb to static, to rule out
> any caching internals. Performance stayed in the same 230-270 range. No
> success.
> 
> The funny thing is that in this case the busiest process is the config
> process, using about 25% of a single CPU core. If I understood the
> documentation correctly, its job is to supply configuration to the other
> processes?


Re: IMAP login-logout cycle performance with performance mode seems slow/limited and cause cannot be found

2024-07-03 Thread m--- via dovecot
Thank you for the swift answer. That's what I tried, without success.

service imap-login {
  process_limit = 15000
  process_min_avail = 48
  service_count = 0
  vsz_limit = 2 G
}

But I also now tried setting both userdb and passdb to static, to rule out
any caching internals. Performance stayed in the same 230-270 range. No
success.

The funny thing is that in this case the busiest process is the config
process, using about 25% of a single CPU core. If I understood the
documentation correctly, its job is to supply configuration to the other
processes?


Re: IMAP login-logout cycle performance with performance mode seems slow/limited and cause cannot be found

2024-07-03 Thread Aki Tuomi via dovecot
You could try switching the login processes to the high-performance configuration,
https://doc.dovecot.org/admin_manual/login_processes/#high-performance-mode
and see if it makes any difference.
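
For reference, that page boils down to something like the following sketch
(numbers are illustrative; process_min_avail is usually sized to the CPU
count):

service imap-login {
   service_count = 0      # processes are reused instead of exiting per login
   process_min_avail = 48
   client_limit = 4096
}

Elsewhere in this thread, the same service_count/process_min_avail pattern
applied to service imap is what ultimately removed the bottleneck.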

Aki

> On 03/07/2024 09:40 EEST m--- via dovecot  wrote:
> 
>  
> For further testing, and because I could not figure out the limitation, I just
> duplicated the dovecot nodes multiple times and load-balanced over them with
> primitive round-robin TCP.
> 
> With every increase in instances I could load the system more and got almost
> linear scaling, until about 16 instances were reached and the system was 85-95%
> loaded. It yields 3100-3500 cycles per second.
> 
> Which still poses the question: What could be the reason?


Re: IMAP login-logout cycle performance with performance mode seems slow/limited and cause cannot be found

2024-07-02 Thread m--- via dovecot
For further testing, and because I could not figure out the limitation, I just
duplicated the dovecot nodes multiple times and load-balanced over them with
primitive round-robin TCP.

With every increase in instances I could load the system more and got almost
linear scaling, until about 16 instances were reached and the system was 85-95%
loaded. It yields 3100-3500 cycles per second.

Which still poses the question: What could be the reason?


IMAP login-logout cycle performance with performance mode seems slow/limited and cause cannot be found

2024-07-02 Thread m--- via dovecot
Hello,

I am running a test setup in a Docker stack with Dovecot (2.3.21), basically
to see what's possible.

The whole thing works okay, but I noticed that with imaptest (latest version)
it tops out somewhere between 230 and 270 requests per second on a login+logout
cycle. However, this is a 384G-memory, 48-core, dual-CPU setup that is pretty
much idling (utilization somewhere around 6-12%). The SSDs are not loaded at
all. Memory usage is low.

For testing purposes I set nopassword=y and allowed all logins; the users just
require a small MySQL db lookup, which is done once before the cache is filled.

If I copy that exact same stack to my local machine, I get around 450 cycles
per second. The only difference I can see is that my local machine has a
faster CPU-memory connection and a single CPU with fewer cores.

Is it possible that memory speed is a limiting factor, because of the
CPU-to-memory mapping (NUMA) on the server and the slower memory?
What performance should I expect?

Asking because I am planning to run a stateless client at a later point, and
the limited login performance really seems to make that difficult at scale.
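
One cheap way to test the NUMA hypothesis would be to pin the whole stack to a
single socket and re-run imaptest; a sketch, assuming node 0 and a foreground
test instance:

numactl --cpunodebind=0 --membind=0 dovecot -F

If the single-socket run is markedly faster per core, memory locality is at
least part of the story.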

doveconf -n:

# 2.3.21 (47349e2482): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.21 (f6cd4b8e)
# OS: Linux 6.8.0-36-generic x86_64 Debian 11.7 ext4
# Hostname: 4e83f0e9d630
auth_cache_negative_ttl = 0
auth_cache_size = 50 M
auth_cache_ttl = 5 hours
auth_cache_verify_password_with_worker = yes
auth_debug = yes
auth_debug_passwords = yes
auth_failure_delay = 0
auth_mechanisms = plain login
auth_verbose = yes
auth_verbose_passwords = yes
auth_worker_max_count = 500
default_vsz_limit = 2 G
disable_plaintext_auth = no
doveadm_api_key = # hidden, use -P to show it
doveadm_password = # hidden, use -P to show it
doveadm_port = 2425
log_debug = event=*
log_path = /var/log/dovecot-debug.log
login_trusted_networks = 10.0.0.0/8 127.0.0.0/8
mail_debug = yes
mail_fsync = never
mail_gid = 1000
mail_location = maildir:/data/vmail/%d/%1n/%n
mail_uid = 1000
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date index ihave duplicate 
mime foreverypart extracttext
namespace inbox {
  inbox = yes
  location = 
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix = 
}
passdb {
  args = nopassword=y
  driver = static
}
protocols = " imap lmtp sieve pop3 submission"
service anvil {
  chroot = empty
  client_limit = 75100
  idle_kill = 4294967295 secs
  process_limit = 1
  unix_listener anvil-auth-penalty {
mode = 00
  }
}
service auth-worker {
  client_limit = 1
  process_limit = 6000
  user = $default_internal_user
}
service auth {
  client_limit = 91000
}
service doveadm {
  inet_listener {
port = 2425
  }
  inet_listener http {
port = 8080
  }
}
service imap-login {
  process_limit = 15000
  process_min_avail = 48
  service_count = 0
  vsz_limit = 2 G
}
service imap {
  client_limit = 1
  process_limit = 15000
}
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
protocol doveadm {
  passdb {
args = /etc/dovecot/dovecot-sql.conf.ext
driver = sql
name = 
override_fields = port=2425 ssl=no starttls=no
  }
}
protocol imap {
  mail_max_userip_connections = 250
}

Best regards


Options to track performance?

2023-07-18 Thread Christian
Hi there,
after upgrading Dovecot in a bookworm container, I now see a weird
delay when IMAP clients like Evolution connect for the first time.

Is there any performance logging configuration I could enable, to see
what Dovecot is doing and how long each step takes? I suspect a timeout
or delay somewhere, but have been unable to find it so far.
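
Dovecot 2.3's event framework can show this; a minimal sketch (the metric name
is arbitrary, the event name is from the documented event list):

metric imap_command {
   event_name = imap_command_finished
}

doveadm stats dump then prints counts and duration percentiles per metric, and
log_debug = event=* logs every event with timings, though very verbosely.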

Kind regards
  Chris




RE: How do you scale dovecot for good performance with Roundcube webmailer in front? (hitting limits without exhausting resources)

2023-01-26 Thread Malte Schmidt
On 26.01.2023 21:25, Marc wrote:

> > Generally speaking the setup performs OK, I wrote a JMeter Roundcube script to
> > test the Roundcube. This includes actions like listing mails and fetching
> > them. I am hitting a somewhat dubious limit of 600 req/s though. (This includes
> > all kinds of HTTP calls to Roundcube, not only the ones triggering IMAP). When
> > the dovecot becomes unavailable, the performance without mails jumps to 7700
> > req/s. Testing is done with 4 JMeter servers and one client.
>
> Should you not be testing also dovecot performance directly? So you can see
> what its maximum is. I can't imagine the http interfaces are the bottleneck.

Right, I also set up imaptest and tested with that. I seem to hit a limit
around 600 auth/s (64 auth workers; the hashing algorithm does not seem to
matter, I tried all three) and get about 1100 append/s and 1100 fetch/s. But I
must say that imaptest ran in parallel (10 instances) for the append and fetch
tests on a single machine. Now that I think about it, the auth test may have
been limited by the single imaptest instance, as I did not run that in
parallel.
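
If so, running several imaptest instances in parallel against the same server
would confirm it; a sketch with host, credentials and counts as placeholders
(user=test%d expands to numbered test users):

for i in 1 2 3 4; do
  imaptest host=imap.example.net user=test%d pass=secret clients=25 secs=60 &
done
wait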

RE: How do you scale dovecot for good performance with Roundcube webmailer in front? (hitting limits without exhausting resources)

2023-01-26 Thread Marc
> 
> Generally speaking the setup performs OK, I wrote a JMeter Roundcube script to
> test the Roundcube. This includes actions like listing mails and fetching
> them. I am hitting a somehow dubious limit of 600 req/s though. (This includes
> all kinds of HTTP calls to Roundcube, not only the ones triggering IMAP). When
> the dovecot becomes unavailable, the performance without mails jumps to 7700
> req/s. Testing is done with 4 JMeter-servers and one client.
> 

Should you not be testing also dovecot performance directly? So you can see
what its maximum is. I can't imagine the http interfaces are the bottleneck.





Re: How do you scale dovecot for good performance with Roundcube webmailer in front? (hitting limits without exhausting resources)

2023-01-26 Thread Brendan Braybrook
Are you running (SquirrelMail's) imapproxy on the Roundcube machine? It keeps
user IMAP connections to Dovecot alive, meaning that Roundcube doesn't have to
log back in via IMAP for each operation.


that might help somewhat.

(we just use the debian package version of imapproxy - it seems the 
www.imapproxy.org website is down right now)
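
For reference, the relevant part of imapproxy's configuration is tiny; a
sketch assuming Dovecot on the same host (port and cache size illustrative):

## /etc/imapproxy.conf
server_hostname 127.0.0.1
server_port 143
listen_port 1143
cache_size 3072

Roundcube would then be pointed at localhost:1143 instead of at Dovecot
directly.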


On 2023-01-26 10:18, Malte Schmidt wrote:

Good day,

I am currently setting up/debugging a webmailer-only setup using 
Roundcube (latest version) with Dovecot (2.3.20, latest as of now).


Generally speaking the setup performs OK, I wrote a JMeter Roundcube 
script to test the Roundcube. This includes actions like listing mails 
and fetching them. I am hitting a somehow dubious limit of 600 req/s 
though. (This includes all kinds of HTTP calls to Roundcube, not only 
the ones triggering IMAP). When the dovecot becomes unavailable, the 
performance without mails jumps to 7700 req/s. Testing is done with 4 
JMeter-servers and one client.


My setup is a Dovecot with MySQL (Percona XtraDB) backend. Mails are 
encrypted with mail_crypt and EC keys. Authentication is done in 
parallel (auth_cache_verify_with_workers=yes). Where possible 
min_available_processes have been set equal to the threads available on 
the Dovecot machine (64).


Hardware is a 64 thread Xeon CPU at 2.10 GHz, 96 GB RAM, SSDs as backing 
storage IOPS read 4/write 13000.


What settings do you recommend and how was your experience with 
Roundcube and its performance in general (what should be possible with 
that kind of HW?)?


Best regards and thanks in advance!






How do you scale dovecot for good performance with Roundcube webmailer in front? (hitting limits without exhausting resources)

2023-01-26 Thread Malte Schmidt

Good day,

I am currently setting up/debugging a webmailer-only setup using Roundcube 
(latest version) with Dovecot (2.3.20, latest as of now).

Generally speaking the setup performs OK. I wrote a JMeter Roundcube script to
test Roundcube, with actions like listing mails and fetching them. I am hitting
a somewhat dubious limit of 600 req/s though (this includes all kinds of HTTP
calls to Roundcube, not only the ones triggering IMAP). When Dovecot becomes
unavailable, the performance without mails jumps to 7700 req/s. Testing is done
with 4 JMeter servers and one client.

My setup is Dovecot with a MySQL (Percona XtraDB) backend. Mails are encrypted
with mail_crypt and EC keys. Authentication is verified in parallel
(auth_cache_verify_password_with_worker = yes). Where possible,
process_min_avail has been set equal to the number of threads available on the
Dovecot machine (64).

Hardware is a 64-thread Xeon CPU at 2.10 GHz, 96 GB RAM, and SSDs as backing
storage (IOPS read 4/write 13000).

What settings do you recommend and how was your experience with Roundcube and 
its performance in general (what should be possible with that kind of HW?)?

Best regards and thanks in advance!




mdbox_rotate_size recommendation / performance

2022-03-14 Thread Lucas Rolff
Hi guys,

When using mdbox under Dovecot, the mdbox_rotate_size setting defaults to 10
megabytes. From a performance point of view, on spinning drives, does it matter
whether one goes for e.g. 2 MB versus 10 MB, especially in combination with
compression?

Does anyone have any performance data regarding its size, and what a decent
trade-off is between limiting the IOPS required from the drive and the
sequential read/write speeds?

Any pointers would be awesome!

Thanks in advance
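
For anyone experimenting with this, the knobs in question look like the
following sketch (values illustrative; zlib plugin settings per Dovecot 2.3):

mail_location = mdbox:~/mdbox
mdbox_rotate_size = 2M
mail_plugins = $mail_plugins zlib
plugin {
  zlib_save = gz        # compress newly saved mails with gzip
  zlib_save_level = 6
}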

Re: sudden performance drop - i/o related

2021-05-11 Thread Aki Tuomi


> On 11/05/2021 10:34 Marcin Gryszkalis  wrote:
> 
>  
> On 11.05.2021 07:30, Aki Tuomi wrote:
> >> On 11/05/2021 01:07 Marcin Gryszkalis  wrote:
> >> It looks like I/O rose from 150 writes/s to 500 writes/s (in peak hours) -
> 
> > One thing that does come to mind is that you are delivering outside 
> > dovecot. Without knowing your system better, I would suggest that one thing 
> > to try would be to use dovecot-lda to deliver mail.
> exim delivers locally via /usr/local/libexec/dovecot/dovecot-lda and 
> it's the only way used for delivery (not counting occasional restoring 
> mail from backups)
> 
> > Are your users directly accessing the maildir?
> Not sure what you mean, they use imap (plus few dovecot/pop3 boxes for 
> automated processing).
> 
> best regards
> -- 
> Marcin Gryszkalis, PGP 0xA5DBEEC7 http://fork.pl/gpg.txt

Your logs indicate though that dovecot is finding new mails that were not 
indexed before. So something external must be placing them there.

Aki


Re: sudden performance drop - i/o related

2021-05-11 Thread Marcin Gryszkalis

On 11.05.2021 07:30, Aki Tuomi wrote:

> > On 11/05/2021 01:07 Marcin Gryszkalis wrote:
> > It looks like I/O rose from 150 writes/s to 500 writes/s (in peak hours) -
>
> One thing that does come to mind is that you are delivering outside dovecot.
> Without knowing your system better, I would suggest that one thing to try
> would be to use dovecot-lda to deliver mail.

exim delivers locally via /usr/local/libexec/dovecot/dovecot-lda and it's the
only way used for delivery (not counting occasional restoring of mail from
backups).

> Are your users directly accessing the maildir?

Not sure what you mean; they use IMAP (plus a few Dovecot/POP3 boxes for
automated processing).


best regards
--
Marcin Gryszkalis, PGP 0xA5DBEEC7 http://fork.pl/gpg.txt


Re: sudden performance drop - i/o related

2021-05-10 Thread Aki Tuomi


> On 11/05/2021 01:07 Marcin Gryszkalis  wrote:
> 
>  
> Hi
> I have exim/dovecot server that worked great for last few years and two 
> weeks ago it got ill ;)
> First were users reporting errors on saving mails to Sent (timeouts).
> Now the logs are infested with warnings about long waits:
> 
> May 10 10:18: Maildir /mail/xxx Synchronization took 193 seconds (1 new 
> msgs, 0 flag change attempts, 0 expunge attempts)
> May 10 10:18: Maildir /mail/xxx Synchronization took 125 seconds (1 new 
> msgs, 0 flag change attempts, 0 expunge attempts)
> May 10 10:18: Maildir /mail/xxx Synchronization took 211 seconds (1 new 
> msgs, 0 flag change attempts, 0 expunge attempts)
> May 10 10:18: Maildir /mail/xxx Synchronization took 107 seconds (8 new 
> msgs, 0 flag change attempts, 0 expunge attempts)
> May 10 10:18: Transaction log file /mail/xxx was locked for 36 seconds 
> (Mailbox was synchronized)
> May 10 10:18: Transaction log file /mail/xxx was locked for 160 seconds 
> (Mailbox was synchronized)
> May 10 10:18: Transaction log file /mail/xxx was locked for 72 seconds 
> (Mailbox was synchronized)
> May 10 10:18: Locking transaction log file /mail/xxx took 60 seconds 
> (syncing)
> May 10 10:18: Locking transaction log file /mail/xxx took 38 seconds 
> (syncing)
> May 10 10:18: Locking transaction log file /mail/xxx took 35 seconds 
> (syncing)
> 
> It looks like I/O rose from 150 writes/s to 500 writes/s (in peak hours) -
> but there's no real change in the number of emails or the volume. The number
> of users is steady (~100 active users, ~250 imap sessions); the number of
> emails (by count or by volume) rises and falls within a 15% margin.
> 
> The box is FreeBSD 11.4, dovecot is 2.3.13.
> Filesystem is ZFS, disks are fine, free space is around 20% (~200GB)
> Layout is Maildir. CPU is not overloaded (2x6core), same with memory (48GB).
> 
> I didn't change anything in configuration.
> 
> Tonight I did some finetuning like maildir_copy_with_hardlinks=yes or 
> mail_fsync=never/optimized (I'm not happy with that but I'm afraid it 
> won't really help and I'll be able to revert that). I'm also thinking 
> about switching from Maildir to sdbox (I know it won't hurt).
> 
> I don't know where to look to find where the i/o goes. I don't have any 
> metrics/stats enabled (I looked at the docs but it looks it's not really 
> simple and needs some digging to get valuable config). Maybe somebody 
> has suggestions what to look for?
> 
> For detailed per-process stats I need to rebuild kernel with dtrace 
> (other night I guess)... Simple top (in i/o mode - similar to linux's 
> iotop) doesn't catch short living processes (like LDA deliveries).
> 
> best regards
> -- 
> Marcin Gryszkalis, PGP 0xA5DBEEC7 http://fork.pl/gpg.txt

One thing that does come to mind is that you are delivering outside dovecot. 
Without knowing your system better, I would suggest that one thing to try would 
be to use dovecot-lda to deliver mail.
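
For reference, the usual exim transport for dovecot-lda looks roughly like
this; a sketch following the standard recipe, with user/group as placeholders:

dovecot_lda:
  driver = pipe
  command = /usr/local/libexec/dovecot/dovecot-lda -f $sender_address -d $local_part@$domain
  message_prefix =
  message_suffix =
  delivery_date_add
  envelope_to_add
  return_path_add
  log_output
  user = vmail
  group = vmail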

Are your users directly accessing the maildir?

Aki


sudden performance drop - i/o related

2021-05-10 Thread Marcin Gryszkalis

Hi
I have an exim/Dovecot server that worked great for the last few years, and
two weeks ago it got ill ;)

First came users reporting errors on saving mails to Sent (timeouts).
Now the logs are infested with warnings about long waits:

May 10 10:18: Maildir /mail/xxx Synchronization took 193 seconds (1 new 
msgs, 0 flag change attempts, 0 expunge attempts)
May 10 10:18: Maildir /mail/xxx Synchronization took 125 seconds (1 new 
msgs, 0 flag change attempts, 0 expunge attempts)
May 10 10:18: Maildir /mail/xxx Synchronization took 211 seconds (1 new 
msgs, 0 flag change attempts, 0 expunge attempts)
May 10 10:18: Maildir /mail/xxx Synchronization took 107 seconds (8 new 
msgs, 0 flag change attempts, 0 expunge attempts)
May 10 10:18: Transaction log file /mail/xxx was locked for 36 seconds 
(Mailbox was synchronized)
May 10 10:18: Transaction log file /mail/xxx was locked for 160 seconds 
(Mailbox was synchronized)
May 10 10:18: Transaction log file /mail/xxx was locked for 72 seconds 
(Mailbox was synchronized)
May 10 10:18: Locking transaction log file /mail/xxx took 60 seconds 
(syncing)
May 10 10:18: Locking transaction log file /mail/xxx took 38 seconds 
(syncing)
May 10 10:18: Locking transaction log file /mail/xxx took 35 seconds 
(syncing)


It looks like I/O rose from 150 writes/s to 500 writes/s (in peak hours),
but there's no real change in the number of emails or the volume. The number
of users is steady (~100 active users, ~250 imap sessions); the number of
emails (by count or by volume) rises and falls within a 15% margin.


The box is FreeBSD 11.4, dovecot is 2.3.13.
Filesystem is ZFS, disks are fine, free space is around 20% (~200GB)
Layout is Maildir. CPU is not overloaded (2x6core), same with memory (48GB).

I didn't change anything in configuration.

Tonight I did some fine-tuning, like maildir_copy_with_hardlinks=yes or
mail_fsync=never/optimized (I'm not happy with that, but I'm afraid it won't
really help and I'll be able to revert it). I'm also thinking about switching
from Maildir to sdbox (I know it won't hurt).


I don't know where to look to find where the I/O goes. I don't have any
metrics/stats enabled (I looked at the docs, but it doesn't look simple and
needs some digging to get a useful config). Maybe somebody has suggestions
what to look for?


For detailed per-process stats I'd need to rebuild the kernel with dtrace
(another night, I guess)... Plain top (in I/O mode, similar to Linux's iotop)
doesn't catch short-lived processes (like LDA deliveries).
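
A minimal event-based starting point, for what it's worth (Dovecot 2.3 metric
syntax; mail_delivery_finished times exactly the short-lived deliveries that
top misses):

metric lda_delivery {
  event_name = mail_delivery_finished
}

doveadm stats dump then shows counts and duration percentiles per metric.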


best regards
--
Marcin Gryszkalis, PGP 0xA5DBEEC7 http://fork.pl/gpg.txt


Re: Dovecot Director scaling / performance

2020-12-15 Thread Paterakis E. Ioannis



From my short experience,

we run 2 directors and 3 dovecot servers behind them, with approx. 800
concurrent users per dovecot server without any problems. During peak times
they go as high as 2000-2200 users per dovecot server, again without any
problems. These 2 directors are behind a haproxy, so I'd say each director
handles approx. 1200-3500 concurrent connections without problems.

You may face problems with OS-level limits if you plan to serve that many
users, though. It depends on the OS you use.
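
The usual suspects on Linux are file-descriptor and connection-backlog limits;
a sketch with illustrative values only:

# per-process fd limit for the dovecot service
ulimit -n 65536
# kernel-wide limits
sysctl -w fs.file-max=2000000
sysctl -w net.core.somaxconn=4096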


i hope this helps,

John


On 15/12/2020 13:25, t...@linux-daus.de wrote:

Hi,

currently I'm evaluating the Dovecot director. Is there anyone with experience
of how the Dovecot director scales?

The most recent information I could find was a post on this mailing list from
2012: 4 directors are well known, >75 are too many.

Is there any more up-to-date experience of how many users (concurrent and
new/s) a single Dovecot director/proxy node can handle, and how many director
nodes in a ring are known to work (well)?

Best regards,
Tim


Dovecot Director scaling / performance

2020-12-15 Thread tim
Hi,

currently I'm evaluating the Dovecot director. Is there anyone with experience
of how the Dovecot director scales?

The most recent information I could find was a post on this mailing list from
2012: 4 directors are well known, >75 are too many.

Is there any more up-to-date experience of how many users (concurrent and
new/s) a single Dovecot director/proxy node can handle, and how many director
nodes in a ring are known to work (well)?

Best regards,
Tim


doveadm http api hign performance?

2020-10-20 Thread h...@cndns.com
Does the doveadm HTTP API have high execution efficiency? I have a program
that calls the doveadm HTTP API directly. Will there be an efficiency problem
if a large number of users call it?
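
For context, each call is one HTTP request carrying a doveadm command; a
sketch (command, user and password are placeholders; Basic auth uses the
doveadm_password setting):

curl -s -u doveadm:secret http://localhost:8080/doveadm/v1 \
  -H 'Content-Type: application/json' \
  -d '[["mailboxList", {"user": "jane@example.com"}, "tag1"]]'

Each request runs the corresponding doveadm work server-side, so the cost per
call is roughly that of running the doveadm command itself.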



h...@cndns.com


Re: Btrfs RAID-10 performance

2020-09-15 Thread John Stoffel
>>>>> "Miloslav" == Miloslav Hůla  writes:

Miloslav> Dne 10.09.2020 v 17:40 John Stoffel napsal(a):
>>>> So why not run the backend storage on the Netapp, and just keep the
>>>> indexes and such local to the system?  I've run Netapps for many years
>>>> and they work really well.  And then you'd get automatic backups using
>>>> schedule snapshots.
>>>> 
>>>> Keep the index files local on disk/SSDs and put the maildirs out to
>>>> NFSv3 volume(s) on the Netapp(s).  Should do wonders.  And you'll stop
>>>> needing to do rsync at night.
>> 
Miloslav> It's the option we have in minds. As you wrote, NetApp is very solid.
Miloslav> The main reason for local storage is, that IMAP server is completely
Miloslav> isolated from network. But maybe one day will use it.
>> 
>> It's not completely isolated, it can rsync data to another host that
>> has access to the Netapp.  *grin*

Miloslav> :o)

Miloslav> Unfortunately, to quickly fix the problem and make server
Miloslav> usable again, we already added SSD and moved indexes on
Miloslav> it. So we have no measurements in old state.
>> 
>> That's ok, if it's better, then its better.  How is the load now?
>> Looking at the output of 'iostat -x 30' might be a good thing.

Miloslav> Load is between 1 and 2. We can live with that for now.

Has IMAP access gotten faster or more consistent under load?  That's
the key takeaway, not system load, since the LoadAvg isn't really a
good measure on Linux.

Basically, has your IO pattern or IO wait times improved?  

Miloslav> Situation is better, but I guess the problem still exists. It
Miloslav> takes some time for the load to grow. We will see.
>> 
>> Hmm... how did you setup the new indexes volume?  Did you just use
>> btrfs again?  Did you mirror your SSDs as well?

Miloslav> Yes. Just two SSDs into free slots, propagated as two RAID-0 into
Miloslav> the OS and btrfs RAID-1.

Miloslav> It is nasty, I know, but without an outage. It is just a quick
Miloslav> attempt to improve the situation. Our next plan is to buy more
Miloslav> controllers, schedule an outage on a weekend and do it properly.

That is a good plan in any case.  

>> Do the indexes fill the SSD, or is there 20-30% free space?  When an
>> SSD gets fragmented, it's performance can drop quite a bit.  Did you
>> put the SSDs onto a seperate controller?  Probably not.  So now you've
>> just increased the load on the single controller, when you really
>> should be spreading it out more to improve things.

Miloslav> SSD are almost empty, 2.4GB of 93GB is used after 'doveadm
Miloslav> index' on all mailboxes.

Interesting.  I wonder if there are other Dovecot files that could be
moved over to increase speed, because they're still IOPS or IO bound?

>> Another possible hack would be to move some stuff to a RAM disk,
>> assuming your server is on a UPS/Generator incase of power loss.  But
>> that's an unsafe hack.
>> 
>> Also, do you have quotas turned on?  That's a performance hit for
>> sure.

Miloslav> No, we are running without quotas.

By quotas, I mean btrfs quotas, just to be clear. 

Miloslav> Thank you for the fio tip. Definitely I'll try that.

Please do!  Getting some numbers from there will let you at least
document your changes in performance. 

But overall, if sounds like you've made some progress and gotten
better performance.  


Re: Btrfs RAID-10 performance

2020-09-15 Thread Miloslav Hůla

On 15.09.2020 at 10:22, Linda A. Walsh wrote:

On 2020/09/10 07:40, Miloslav Hůla wrote:
I cannot verify it, but I think that even JBOD is propagated as a 
virtual device. If you create JBOD from 3 different disks, low level 
parameters may differ.


    JBOD allows each disk to be seen by the OS, as is.  You wouldn't
create JBOD disk from 3 different disks -- JBOD would give you 3 separate
JBOD disks for the 3 separate disks.


Yes. If I create 3 JBOD configurations from 3 100GB disks, I get 3 100GB 
devices in OS. If I create 1 JBOD configuration from 3 100GB disks, I 
get 1 300GB device in OS.



    So for your 16  disks, you are using 1 long RAID0?  You realize
1 disk goes out, the entire array needs to be reconstructed.  Also
all of your spindles can be tied up by long read/writes -- optimal speed
would come from a read 16 stripes wide spread over the 16 disks.


No. I have 16 RAID-0 configurations from 16 disks. As I wrote, there was no
other option for propagating 16 disks as 16 devices into the OS a few years
ago.



    What would be better, IMO, is going with a RAID-10 like your subject
says, using 8-pairs of mirrors and strip those.  Set your stripe unit
for 64K to allow the disks to operate independently.  You don't want
a long 16-disk stripe, as that's far from optimal for your mailbox load.
What you want is the ability to have multiple I/O ops going at the same
time -- independently.  I think as it stands now, you are far more likely
to get contention as different mailboxes are accessed with contention
happening within the span, vs. letting each 2 disk mirror potentially doing
a different task -- which would likely have the effect of raising your
I/O ops/s.


The reason not to create RAID-10 on the controller was that btrfs scrubbing
detects a slowly degrading disk much sooner than the controller does (verified
many times). And if I create RAID-10 on the controller, btrfs scrub still
detects it early, but I'm not able to tell on which disk.



    Running raid10 on top of raid0 seems really wasteful


I'm not doing that.



Re: Btrfs RAID-10 performance

2020-09-15 Thread KSB

On 2020.09.15. 11:22, Linda A. Walsh wrote:

On 2020/09/10 07:40, Miloslav Hůla wrote:
I cannot verify it, but I think that even JBOD is propagated as a 
virtual device. If you create JBOD from 3 different disks, low level 
parameters may differ.


    JBOD allows each disk to be seen by the OS, as is.  You wouldn't
create JBOD disk from 3 different disks -- JBOD would give you 3 separate
JBOD disks for the 3 separate disks.

    So for your 16  disks, you are using 1 long RAID0?  You realize
1 disk goes out, the entire array needs to be reconstructed.  Also
all of your spindles can be tied up by long read/writes -- optimal speed
would come from a read 16 stripes wide spread over the 16 disks.

    What would be better, IMO, is going with a RAID-10 like your subject
says, using 8-pairs of mirrors and strip those.  Set your stripe unit
for 64K to allow the disks to operate independently.  You don't want
a long 16-disk stripe, as that's far from optimal for your mailbox load.
What you want is the ability to have multiple I/O ops going at the same
time -- independently.  I think as it stands now, you are far more likely
to get contention as different mailboxes are accessed with contention
happening within the span, vs. letting each 2 disk mirror potentially doing
a different task -- which would likely have the effect of raising your
I/O ops/s.
    Running raid10 on top of raid0 seems really wasteful




You create an individual RAID-0 from each individual disk, write buffers off,
of course. That is how it goes on sh***y controllers. For some controllers a
firmware upgrade will add JBOD, for some you need to flash IT firmware, and
for some you can switch to HBA mode.

But anyway - use HBA or GOOD RAID controller.

--
KSB


Re: Btrfs RAID-10 performance

2020-09-15 Thread Miloslav Hůla

On 10.09.2020 at 17:40, John Stoffel wrote:

So why not run the backend storage on the Netapp, and just keep the
indexes and such local to the system?  I've run Netapps for many years
and they work really well.  And then you'd get automatic backups using
schedule snapshots.

Keep the index files local on disk/SSDs and put the maildirs out to
NFSv3 volume(s) on the Netapp(s).  Should do wonders.  And you'll stop
needing to do rsync at night.


Miloslav> It's the option we have in mind. As you wrote, NetApp is very solid.
Miloslav> The main reason for local storage is that the IMAP server is completely
Miloslav> isolated from the network. But maybe one day we will use it.

It's not completely isolated, it can rsync data to another host that
has access to the Netapp.  *grin*


:o)


Miloslav> Unfortunately, to quickly fix the problem and make server
Miloslav> usable again, we already added SSD and moved indexes on
Miloslav> it. So we have no measurements in old state.

That's ok, if it's better, then its better.  How is the load now?
Looking at the output of 'iostat -x 30' might be a good thing.


Load is between 1 and 2. We can live with that for now.


Miloslav> Situation is better, but I guess the problem still exists. It
Miloslav> takes some time for the load to grow. We will see.

Hmm... how did you setup the new indexes volume?  Did you just use
btrfs again?  Did you mirror your SSDs as well?


Yes. Just two SSDs into free slots, propagated as two RAID-0 into the OS
and btrfs RAID-1.


It is nasty, I know, but without an outage. It is just a quick attempt to
improve the situation. Our next plan is to buy more controllers, schedule an
outage on a weekend and do it properly.



Do the indexes fill the SSD, or is there 20-30% free space?  When an
SSD gets fragmented, it's performance can drop quite a bit.  Did you
put the SSDs onto a seperate controller?  Probably not.  So now you've
just increased the load on the single controller, when you really
should be spreading it out more to improve things.


SSDs are almost empty: 2.4 GB of 93 GB is used after 'doveadm index' on all
mailboxes.



Another possible hack would be to move some stuff to a RAM disk,
assuming your server is on a UPS/Generator incase of power loss.  But
that's an unsafe hack.

Also, do you have quotas turned on?  That's a performance hit for
sure.


No, we are running without quotas.


Miloslav> Thank you for the fio tip. Definitely I'll try that.

It's a good way to test and measure how the system will react.
Unfortunately, you will need to do your testing outside of normal work
hours so as to not impact your users too much.

Good luck!   Please post some numbers if you get them.  If you see
only a few disks are 75% or more busy, then *maybe* you have a bad
disk in the system, and moving off that disk or replacing it might
help.  Again, hard to know.

Rebalancing btrfs might also help, especially now that you've moved
the indexes off that volume.

John


Thank you
Milo



Re: Btrfs RAID-10 performance

2020-09-15 Thread Linda A. Walsh

On 2020/09/10 07:40, Miloslav Hůla wrote:
I cannot verify it, but I think that even JBOD is propagated as a 
virtual device. If you create JBOD from 3 different disks, low level 
parameters may differ.
  


   JBOD allows each disk to be seen by the OS, as is.  You wouldn't
create JBOD disk from 3 different disks -- JBOD would give you 3 separate
JBOD disks for the 3 separate disks.

    So for your 16 disks, you are using 1 long RAID-0?  You realize if
1 disk goes out, the entire array needs to be reconstructed.  Also
all of your spindles can be tied up by long reads/writes -- optimal speed
would come from a read 16 stripes wide spread over the 16 disks.

    What would be better, IMO, is going with a RAID-10 like your subject
says, using 8 pairs of mirrors and striping those.  Set your stripe unit
to 64K to allow the disks to operate independently.  You don't want
a long 16-disk stripe, as that's far from optimal for your mailbox load.
What you want is the ability to have multiple I/O ops going at the same
time -- independently.  I think as it stands now, you are far more likely
to get contention as different mailboxes are accessed, with contention
happening within the span, vs. letting each 2-disk mirror potentially do
a different task -- which would likely have the effect of raising your
I/O ops/s.


   Running raid10 on top of raid0 seems really wasteful




Re: Btrfs RAID-10 performance

2020-09-10 Thread Robert Nowotny
"Miloslav" == Miloslav Hůla 
 writes:

Miloslav> On 09.09.2020 at 17:52, John Stoffel wrote:
Miloslav> There is a one PCIe RAID controller in a chasis. AVAGO
Miloslav> MegaRAID SAS 9361-8i. And 16x SAS 15k drives conneced to
Miloslav> it. Because the controller does not support pass-through for
Miloslav> the drives, we use 16x RAID-0 on controller. So, we get
Miloslav> /dev/sda ... /dev/sdp (roughly) in OS. And over that we have
Miloslav> single btrfs RAID-10, composed of 16 devices, mounted as
Miloslav> /data.

I will bet that this is one of your bottlenecks as well.  Get a secord
or third controller and split your disks across them evenly.

Miloslav> That's plan for a next step.

Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours
to finish,
Miloslav> 12'265'387 files last night.

That's sucky.  So basically you're hitting the drives hard with
random IOPs and you're probably running out of performance.  How much
space are you using on the filesystem?

Miloslav> It's not as sucky as it seems. rsync runs during the
Miloslav> night. And even though reading is high, server load stays low. We
Miloslav> have problems with writes.

Ok.  So putting in an SSD pair to cache things should help.


And why not use brtfs send to ship off snapshots instead of using
rsync?  I'm sure that would be an improvement...

Miloslav> We run backup to external NAS (NetApp) for a disaster
Miloslav> recovery scenario.  Moreover NAS is spreaded across multiple
Miloslav> locations. Then we create NAS snapshot, tens days
Miloslav> backward. All snapshots easily available via NFS mount. And
Miloslav> NAS capacity is cheaper.

So why not run the backend storage on the Netapp, and just keep the
indexes and such local to the system?  I've run Netapps for many years
and they work really well.  And then you'd get automatic backups using
schedule snapshots.

Keep the index files local on disk/SSDs and put the maildirs out to
NFSv3 volume(s) on the Netapp(s).  Should do wonders.  And you'll stop
needing to do rsync at night.

Miloslav> It's the option we have in mind. As you wrote, NetApp is very solid.
Miloslav> The main reason for local storage is that the IMAP server is completely
Miloslav> isolated from the network. But maybe one day we will use it.

It's not completely isolated, it can rsync data to another host that
has access to the Netapp.  **grin**

Miloslav> Unfortunately, to quickly fix the problem and make server
Miloslav> usable again, we already added SSD and moved indexes on
Miloslav> it. So we have no measurements in old state.

That's ok, if it's better, then its better.  How is the load now?
Looking at the output of 'iostat -x 30' might be a good thing.

Miloslav> Situation is better, but I guess the problem still exists. It
Miloslav> takes some time for the load to grow. We will see.

Hmm... how did you setup the new indexes volume?  Did you just use
btrfs again?  Did you mirror your SSDs as well?

Do the indexes fill the SSD, or is there 20-30% free space?  When an
SSD gets fragmented, it's performance can drop quite a bit.  Did you
put the SSDs onto a seperate controller?  Probably not.  So now you've
just increased the load on the single controller, when you really
should be spreading it out more to improve things.

Another possible hack would be to move some stuff to a RAM disk,
assuming your server is on a UPS/Generator incase of power loss.  But
that's an unsafe hack.

Also, do you have quotas turned on?  That's a performance hit for
sure.

Miloslav> Thank you for the fio tip. Definitely I'll try that.

It's a good way to test and measure how the system will react.
Unfortunately, you will need to do your testing outside of normal work
hours so as to not impact your users too much.

Good luck!   Please post some numbers if you get them.  If you see
only a few disks are 75% or more busy, then **maybe** you have a bad
disk in the system, and moving off that disk or replacing it might
help.  Again, hard to know.

Rebalancing btrfs might also help, especially now that you've moved
the indexes off that volume.


Robert:

Fio is an acronym for Flexible IO Tester and describes a tool for measuring
IO performance. With fio, devices such as hard drives or SSDs can be tested
for their speed by executing a user-defined workload and collecting
performance data. Therefore it might be difficult to really simulate the load
with that, because you need to define the workload yourself. But, at least,
you might use it to get an idea of maximum transfer rates, random I/O etc.

iotop shows the current I/O transfer rates for the currently running
processes/threads. It uses the I/O usage information of the Linux kernel, so
it might be a good tool for you.

htop might also be your friend.
and of course
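
A concrete invocation in that spirit, purely as a sketch (job parameters are
illustrative, meant to mimic many small random mail I/Os on a scratch
directory):

fio --name=mailsim --directory=/data/fio-test --rw=randrw --rwmixread=70 \
    --bs=4k --size=1g --numjobs=8 --iodepth=16 --ioengine=libaio \
    --runtime=120 --time_based --group_reporting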

Re: Btrfs RAID-10 performance

2020-09-10 Thread John Stoffel
>>>>> "Miloslav" == Miloslav Hůla  writes:

Miloslav> On 09.09.2020 at 17:52, John Stoffel wrote:
Miloslav> There is a one PCIe RAID controller in a chasis. AVAGO
Miloslav> MegaRAID SAS 9361-8i. And 16x SAS 15k drives conneced to
Miloslav> it. Because the controller does not support pass-through for
Miloslav> the drives, we use 16x RAID-0 on controller. So, we get
Miloslav> /dev/sda ... /dev/sdp (roughly) in OS. And over that we have
Miloslav> single btrfs RAID-10, composed of 16 devices, mounted as
Miloslav> /data.
>> 
>> I will bet that this is one of your bottlenecks as well.  Get a secord
>> or third controller and split your disks across them evenly.

Miloslav> That's plan for a next step.

Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours to
Miloslav> finish, 12'265'387 files last night.
>>>> 
>>>> That's sucky.  So basically you're hitting the drives hard with
>>>> random IOPs and you're probably running out of performance.  How much
>>>> space are you using on the filesystem?
>> 
Miloslav> It's not as sucky as it seems. rsync runs during the
Miloslav> night. And even though reading is high, server load stays low. We
Miloslav> have problems with writes.
>> 
>> Ok.  So putting in an SSD pair to cache things should help.
>> 
>>>> And why not use brtfs send to ship off snapshots instead of using
>>>> rsync?  I'm sure that would be an improvement...
>> 
Miloslav> We run backup to external NAS (NetApp) for a disaster
Miloslav> recovery scenario.  Moreover NAS is spreaded across multiple
Miloslav> locations. Then we create NAS snapshot, tens days
Miloslav> backward. All snapshots easily available via NFS mount. And
Miloslav> NAS capacity is cheaper.
>> 
>> So why not run the backend storage on the Netapp, and just keep the
>> indexes and such local to the system?  I've run Netapps for many years
>> and they work really well.  And then you'd get automatic backups using
>> schedule snapshots.
>> 
>> Keep the index files local on disk/SSDs and put the maildirs out to
>> NFSv3 volume(s) on the Netapp(s).  Should do wonders.  And you'll stop
>> needing to do rsync at night.

Miloslav> It's the option we have in mind. As you wrote, NetApp is very solid.
Miloslav> The main reason for local storage is that the IMAP server is
Miloslav> completely isolated from the network. But maybe one day we will use it.

It's not completely isolated, it can rsync data to another host that
has access to the Netapp.  *grin*

Miloslav> Unfortunately, to quickly fix the problem and make server
Miloslav> usable again, we already added SSD and moved indexes on
Miloslav> it. So we have no measurements in old state.

That's ok, if it's better, then its better.  How is the load now?
Looking at the output of 'iostat -x 30' might be a good thing.

Miloslav> Situation is better, but I guess the problem still exists. It
Miloslav> takes some time for the load to grow. We will see.

Hmm... how did you setup the new indexes volume?  Did you just use
btrfs again?  Did you mirror your SSDs as well?

Do the indexes fill the SSD, or is there 20-30% free space?  When an
SSD gets fragmented, it's performance can drop quite a bit.  Did you
put the SSDs onto a seperate controller?  Probably not.  So now you've
just increased the load on the single controller, when you really
should be spreading it out more to improve things.

Another possible hack would be to move some stuff to a RAM disk,
assuming your server is on a UPS/Generator incase of power loss.  But
that's an unsafe hack. 

Also, do you have quotas turned on?  That's a performance hit for
sure. 

Miloslav> Thank you for the fio tip. Definitely I'll try that.

It's a good way to test and measure how the system will react.
Unfortunately, you will need to do your testing outside of normal work
hours so as to not impact your users too much.

Good luck!   Please post some numbers if you get them.  If you see
only a few disks are 75% or more busy, then *maybe* you have a bad
disk in the system, and moving off that disk or replacing it might
help.  Again, hard to know.

Rebalancing btrfs might also help, especially now that you've moved
the indexes off that volume.
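
If anyone tries that, a rebalance can be done piecemeal; the usage filters
keep the run short (values illustrative):

btrfs balance start -dusage=50 -musage=50 /data

-dusage=50 only rewrites data chunks that are less than 50% full, so it
touches far less than a full rebalance would.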

John


Re: Btrfs RAID-10 performance

2020-09-10 Thread Miloslav Hůla
I cannot verify it, but I think that even JBOD is propagated as a virtual
device. If you create JBOD from 3 different disks, low-level parameters may
differ.

And probably old firmware is the reason we used RAID-0 two or three years
ago.


Thank you for the ideas.

Kind regards
Milo

On 10.09.2020 at 16:15, Scott Q. wrote:
Actually there is, filesystems like ZFS/BTRFS prefer to see the drive 
directly, not a virtual drive.


I'm not sure you can change it now anymore but in the future, always use 
JBOD.


It's also possible that you don't have the latest firmware on the 
9361-8i. If I recall correctly, they only added in the JBOD option in 
the last firmware update


On Thursday, 10/09/2020 at 08:52 Miloslav Hůla wrote:

Some controllers have a direct "pass through to OS" option for a drive,
that's what I meant. I can't recall why we chose RAID-0 instead of
JBOD, there was some reason, but I hope there is no difference with
a single drive.

Thank you
Milo

On 09.09.2020 at 15:51, Scott Q. wrote:
 > The 9361-8i does support passthrough ( JBOD mode ). Make sure you
have
 > the latest firmware.



Re: Btrfs RAID-10 performance

2020-09-10 Thread Scott Q.
Actually there is, filesystems like ZFS/BTRFS prefer to see the
drive directly, not a virtual drive.

I'm not sure you can change it now anymore but in the future, always
use JBOD.


It's also possible that you don't have the latest firmware on the
9361-8i. If I recall correctly, they only added the JBOD option in
the last firmware update.

On Thursday, 10/09/2020 at 08:52 Miloslav Hůla wrote:


Some controllers have a direct "pass through to OS" option for a drive,
that's what I meant. I can't recall why we chose RAID-0 instead of
JBOD, there was some reason, but I hope there is no difference with
a single drive.

Thank you
Milo

On 09.09.2020 at 15:51, Scott Q. wrote:
> The 9361-8i does support passthrough ( JBOD mode ). Make sure you
have 
> the latest firmware.


Re: Btrfs RAID-10 performance

2020-09-10 Thread Miloslav Hůla

On 09.09.2020 at 17:52, John Stoffel wrote:

Miloslav> There is a one PCIe RAID controller in a chasis. AVAGO
Miloslav> MegaRAID SAS 9361-8i. And 16x SAS 15k drives conneced to
Miloslav> it. Because the controller does not support pass-through for
Miloslav> the drives, we use 16x RAID-0 on controller. So, we get
Miloslav> /dev/sda ... /dev/sdp (roughly) in OS. And over that we have
Miloslav> single btrfs RAID-10, composed of 16 devices, mounted as
Miloslav> /data.

I will bet that this is one of your bottlenecks as well.  Get a secord
or third controller and split your disks across them evenly.


That's the plan for a next step.


Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours to
Miloslav> finish, 12'265'387 files last night.


That's sucky.  So basically you're hitting the drives hard with
random IOPs and you're probably running out of performance.  How much
space are you using on the filesystem?


Miloslav> It's not as sucky as it seems. rsync runs during the
Miloslav> night. And even though reading is high, server load stays low. We
Miloslav> have problems with writes.

Ok.  So putting in an SSD pair to cache things should help.


And why not use brtfs send to ship off snapshots instead of using
rsync?  I'm sure that would be an improvement...


Miloslav> We run backup to external NAS (NetApp) for a disaster
Miloslav> recovery scenario.  Moreover NAS is spreaded across multiple
Miloslav> locations. Then we create NAS snapshot, tens days
Miloslav> backward. All snapshots easily available via NFS mount. And
Miloslav> NAS capacity is cheaper.

So why not run the backend storage on the Netapp, and just keep the
indexes and such local to the system?  I've run Netapps for many years
and they work really well.  And then you'd get automatic backups using
schedule snapshots.

Keep the index files local on disk/SSDs and put the maildirs out to
NFSv3 volume(s) on the Netapp(s).  Should do wonders.  And you'll stop
needing to do rsync at night.


It's the option we have in mind. As you wrote, NetApp is very solid.
The main reason for local storage is that the IMAP server is completely
isolated from the network. But maybe one day we will use it.



Miloslav> In the last half year, we ran into performance trouble. Server
Miloslav> load grows up to 30 in rush hours, due to IO waits. We tried to
Miloslav> attach more hard drives (the 838G ones in the list below) and
Miloslav> increase free space by rebalancing. I think it helped a little
Miloslav> bit, but not dramatically.


If you're IOPs bound, but not space bound, then you *really* want to
get an SSD in there for the indexes and such.  Basically the stuff
that gets written/read from all the time no matter what, but which
isn't large in terms of space.


Miloslav> Yes. We are now on 66% capacity. Adding SSD for indexes is
Miloslav> our next step.

This *should* give you a boost in performance.  But finding a way to
take before and after latency/performance measurements is key.  I
would look into using 'fio' to test your latency numbers.  You might
also want to try using XFS or even ext4 as your filesystem.  I
understand not wanting to 'fsck', so that might be right out.


Unfortunately, to quickly fix the problem and make the server usable again,
we already added an SSD and moved the indexes onto it. So we have no
measurements of the old state.


The situation is better, but I guess the problem still exists. It takes some
time for the load to grow. We will see.


Thank you for the fio tip. Definitely I'll try that.

Kind regards
Milo


Re: Btrfs RAID-10 performance

2020-09-10 Thread Miloslav Hůla
Some controllers have a direct "pass through to OS" option for a drive;
that's what I meant. I can't recall why we chose RAID-0 instead of JBOD,
there was some reason, but I hope there is no difference with a single
drive.


Thank you
Milo

On 09.09.2020 at 15:51, Scott Q. wrote:
The 9361-8i does support passthrough ( JBOD mode ). Make sure you have 
the latest firmware.


Re: Btrfs RAID-10 performance

2020-09-09 Thread John Stoffel
>>>>> "Miloslav" == Miloslav Hůla  writes:

Miloslav> Hi, thank you for your reply. I'll continue inline...

Me too... please look for further comments.  Esp about 'fio' and
Netapp useage.


Miloslav> On 09.09.2020 at 3:15, John Stoffel wrote:
Miloslav> Hello,
Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got the reply:
Miloslav> "RAID-1 would be preferable"
Miloslav> 
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
Miloslav> May I ask for comments from the people around Dovecot?
>> 
>> 
Miloslav> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
Miloslav> server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of 
RAM.
Miloslav> We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. It
Miloslav> takes about 50 minutes to finish.
>> 
Miloslav> # uname -a
Miloslav> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64
Miloslav> GNU/Linux
>> 
Miloslav> RAID is a composition of 16 harddrives. Harddrives are connected via
Miloslav> AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All harddrives are SAS
Miloslav> 2.5" 15k drives.
>> 
>> Can you post the output of "cat /proc/mdstat" or since you say you're
>> using btrfs, are you using their own RAID0 setup?  If so, please post
>> the output of 'btrfs stats' or whatever the command is you use to view
>> layout info?

Miloslav> There is one PCIe RAID controller in the chassis, an AVAGO
Miloslav> MegaRAID SAS 9361-8i, with 16x SAS 15k drives connected to
Miloslav> it. Because the controller does not support pass-through for
Miloslav> the drives, we use 16x RAID-0 on the controller. So we get
Miloslav> /dev/sda ... /dev/sdp (roughly) in the OS. And on top of that we have a
Miloslav> single btrfs RAID-10, composed of 16 devices, mounted as
Miloslav> /data.

I will bet that this is one of your bottlenecks as well.  Get a second
or third controller and split your disks across them evenly.

Miloslav> We have chosen this wiring for several reasons:
Miloslav> - easy to increase capacity
Miloslav> - easy to replace drives with larger ones
Miloslav> - due to checksumming, btrfs does not need fsck in case of power failure
Miloslav> - btrfs scrub discovers a failing drive sooner than S.M.A.R.T. or the RAID
Miloslav> controller


Miloslav> The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104 accounts,
Miloslav> Mailbox format, LMTP delivery.

>> How often are these accounts hitting the server?

Miloslav> The IMAP server serves a university. So there are typical rush hours from 7AM to
Miloslav> 3PM. Load lowers during the evening; it is almost unused during the night.

I can understand this, I used to work at a Uni so I can understand the
population's needs.

Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours to 
finish,
Miloslav> 12'265'387 files last night.
>> 
>> That's sucky.  So basically you're hitting the drives hard with
>> random IOPs and you're probably running out of performance.  How much
>> space are you using on the filesystem?

Miloslav> It's not as sucky as it seems. rsync runs during the
Miloslav> night. And even though reading is high, server load stays low. We
Miloslav> have problems with writes.

Ok.  So putting in an SSD pair to cache things should help.  

>> And why not use btrfs send to ship off snapshots instead of using
>> rsync?  I'm sure that would be an improvement...

Miloslav> We back up to an external NAS (NetApp) for a disaster
Miloslav> recovery scenario.  Moreover, the NAS is spread across multiple
Miloslav> locations. Then we create NAS snapshots, going ten days
Miloslav> back. All snapshots are easily available via NFS mount. And
Miloslav> NAS capacity is cheaper.

So why not run the backend storage on the Netapp, and just keep the
indexes and such local to the system?  I've run Netapps for many years
and they work really well.  And then you'd get automatic backups using
scheduled snapshots.

Keep the index files local on disk/SSDs and put the maildirs out to
NFSv3 volume(s) on the Netapp(s).  Should do wonders.  And you'll stop
needing to do rsync at night. 

Miloslav> In the last half year, we have encountered performance
Miloslav> troubles. Server load grows up to 30 in rush hours, due to
Miloslav> IO waits. We tried to attach additional hard drives (the 838G ones
Miloslav> in the list below) and increase free space by rebalancing. I
Miloslav> think it helped a little bit, but not dramatically.

>> If you're IOPs bound, but not space bound, then you *really* want to
>> get an SSD in there for the indexes and such.  Basically the stuff
>> that gets written/read from all the time no matter what, but which
>> isn't large in terms of space.

Re: Btrfs RAID-10 performance

2020-09-09 Thread Scott Q.
The 9361-8i does support passthrough ( JBOD mode ). Make sure you
have the latest firmware.

On Wednesday, 09/09/2020 at 03:55 Miloslav Hůla wrote:


Hi, thank you for your reply. I'll continue inline...

On 09.09.2020 at 3:15, John Stoffel wrote:
> Miloslav> Hello,
> Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got the reply:
> Miloslav> "RAID-1 would be preferable"
> Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
> Miloslav> May I ask for comments from the people around Dovecot?
> 
> 
> Miloslav> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
> Miloslav> server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM.
> Miloslav> We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. It
> Miloslav> takes about 50 minutes to finish.
> 
> Miloslav> # uname -a
> Miloslav> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64
> Miloslav> GNU/Linux
> 
> Miloslav> RAID is a composition of 16 harddrives. Harddrives are connected via
> Miloslav> AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All harddrives are SAS
> Miloslav> 2.5" 15k drives.
> 
> Can you post the output of "cat /proc/mdstat" or since you say you're
> using btrfs, are you using their own RAID0 setup?  If so, please post
> the output of 'btrfs stats' or whatever the command is you use to view
> layout info?

There is one PCIe RAID controller in the chassis, an AVAGO MegaRAID SAS
9361-8i, with 16x SAS 15k drives connected to it. Because the controller
does not support pass-through for the drives, we use 16x RAID-0 on the
controller. So we get /dev/sda ... /dev/sdp (roughly) in the OS. And on top
of that we have a single btrfs RAID-10, composed of 16 devices, mounted as
/data.

We have chosen this wiring for several reasons:
- easy to increase capacity
- easy to replace drives with larger ones
- due to checksumming, btrfs does not need fsck in case of power failure
- btrfs scrub discovers a failing drive sooner than S.M.A.R.T. or the RAID
controller


> Miloslav> The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6,
> Miloslav> 4104 accounts, Mailbox format, LMTP delivery.
> 
> How often are these accounts hitting the server?

The IMAP server serves a university. So there are typical rush hours from 7AM to
3PM. Load lowers during the evening; it is almost unused during the night.


> Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours to finish,
> Miloslav> 12'265'387 files last night.
> 
> That's sucky.  So basically you're hitting the drives hard with
> random IOPs and you're probably running out of performance.  How much
> space are you using on the filesystem?

It's not as sucky as it seems. rsync runs during the night. And even though
reading is high, server load stays low. We have problems with writes.


> And why not use btrfs send to ship off snapshots instead of using
> rsync?  I'm sure that would be an improvement...

We back up to an external NAS (NetApp) for a disaster recovery scenario.
Moreover, the NAS is spread across multiple locations. Then we create NAS
snapshots, going ten days back. All snapshots are easily available via NFS
mount. And NAS capacity is cheaper.


> Miloslav> In the last half year, we have encountered performance
> Miloslav> troubles. Server load grows up to 30 in rush hours, due to
> Miloslav> IO waits. We tried to attach additional hard drives (the 838G ones
> Miloslav> in the list below) and increase free space by rebalancing. I
> Miloslav> think it helped a little bit, but not dramatically.
> 
> If you're IOPs bound, but not space bound, then you *really* want to
> get an SSD in there for the indexes and such.  Basically the stuff
> that gets written/read from all the time no matter what, but which
> isn't large in terms of space.

Yes. We are now at 66% capacity. Adding an SSD for indexes is our next
step.


> Also, adding in another controller card or two would also probably
> help spread the load across more PCI channels, and reduce contention
> on the SATA/SAS bus as well.

We will probably wait to see how the SSD helps first, but as you wrote, it is
a possible next step.

> Miloslav> Is this a reasonable setup and use case for btrfs RAID-10?
> Miloslav> If so, are there some recommendations to achieve better
> Miloslav> performance?
> 
> 1. move HOT data to an SSD-based RAID-1 volume pair.  On a separate
> controller.

OK

> 2. add more controllers, which also means you're more redundant in
> case one controller fails.

OK

> 3. Clone the system and put Dovecot IMAP director in front of the
> setup.

I still hope that one server can handle 4105 accounts.

Re: Btrfs RAID-10 performance

2020-09-09 Thread Miloslav Hůla

Hi, thank you for your reply. I'll continue inline...

On 09.09.2020 at 3:15, John Stoffel wrote:

Miloslav> Hello,
Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got the reply:
Miloslav> "RAID-1 would be preferable"
Miloslav> 
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
Miloslav> May I ask for comments from the people around Dovecot?


Miloslav> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
Miloslav> server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of 
RAM.
Miloslav> We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. It
Miloslav> takes about 50 minutes to finish.

Miloslav> # uname -a
Miloslav> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64
Miloslav> GNU/Linux

Miloslav> RAID is a composition of 16 harddrives. Harddrives are connected via
Miloslav> AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All harddrives are SAS
Miloslav> 2.5" 15k drives.

Can you post the output of "cat /proc/mdstat" or since you say you're
using btrfs, are you using their own RAID0 setup?  If so, please post
the output of 'btrfs stats' or whatever the command is you use to view
layout info?


There is one PCIe RAID controller in the chassis, an AVAGO MegaRAID SAS 
9361-8i, with 16x SAS 15k drives connected to it. Because the controller 
does not support pass-through for the drives, we use 16x RAID-0 on the 
controller. So we get /dev/sda ... /dev/sdp (roughly) in the OS. And on top of 
that we have a single btrfs RAID-10, composed of 16 devices, mounted as /data.


We have chosen this wiring for several reasons:
- easy to increase capacity
- easy to replace drives with larger ones
- due to checksumming, btrfs does not need fsck in case of power failure
- btrfs scrub discovers a failing drive sooner than S.M.A.R.T. or the RAID 
controller




Miloslav> The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104 accounts,
Miloslav> Mailbox format, LMTP delivery.

How often are these accounts hitting the server?


The IMAP server serves a university. So there are typical rush hours from 7AM to 
3PM. Load lowers during the evening; it is almost unused during the night.




Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours to 
finish,
Miloslav> 12'265'387 files last night.

That's sucky.  So basically you're hitting the drives hard with
random IOPs and you're probably running out of performance.  How much
space are you using on the filesystem?


It's not as sucky as it seems. rsync runs during the night. And even though 
reading is high, server load stays low. We have problems with writes.




And why not use btrfs send to ship off snapshots instead of using
rsync?  I'm sure that would be an improvement...


We back up to an external NAS (NetApp) for a disaster recovery scenario. 
Moreover, the NAS is spread across multiple locations. Then we create NAS 
snapshots, going ten days back. All snapshots are easily available via NFS 
mount. And NAS capacity is cheaper.




Miloslav> In the last half year, we have encountered performance
Miloslav> troubles. Server load grows up to 30 in rush hours, due to
Miloslav> IO waits. We tried to attach additional hard drives (the 838G ones
Miloslav> in the list below) and increase free space by rebalancing. I
Miloslav> think it helped a little bit, but not dramatically.

If you're IOPs bound, but not space bound, then you *really* want to
get an SSD in there for the indexes and such.  Basically the stuff
that gets written/read from all the time no matter what, but which
isn't large in terms of space.


Yes. We are now at 66% capacity. Adding an SSD for indexes is our next step.



Also, adding in another controller card or two would also probably
help spread the load across more PCI channels, and reduce contention
on the SATA/SAS bus as well.


We will probably wait to see how the SSD helps first, but as you wrote, it is 
a possible next step.



Miloslav> Is this a reasonable setup and use case for btrfs RAID-10?
Miloslav> If so, are there some recommendations to achieve better
Miloslav> performance?

1. move HOT data to an SSD-based RAID-1 volume pair.  On a separate
controller.


OK


2. add more controllers, which also means you're more redundant in
case one controller fails.


OK


3. Clone the system and put Dovecot IMAP director in front of the
setup.


I still hope that one server can handle 4105 accounts.


4. Stop using rsync for copying to your DR site, use the btrfs snap
send, or whatever the commands are.


I hope it is not needed in our scenario.


5. check which dovecot backend you're using and think about moving to
one which doesn't involve nearly as many files.


Maildir is comfortable for us. From time to time, users call us with: "I 
accidentally deleted the folder" and

Re: Btrfs RAID-10 performance

2020-09-08 Thread John Stoffel
>>>>> "Miloslav" == Miloslav Hůla  writes:

Miloslav> Hello,
Miloslav> I sent this into the Linux Kernel Btrfs mailing list and I got the reply: 
Miloslav> "RAID-1 would be preferable" 
Miloslav> 
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
 
Miloslav> May I ask for comments from the people around Dovecot?


Miloslav> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro 
Miloslav> server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of 
RAM. 
Miloslav> We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. 
It 
Miloslav> takes about 50 minutes to finish.

Miloslav> # uname -a
Miloslav> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 
Miloslav> GNU/Linux

Miloslav> RAID is a composition of 16 harddrives. Harddrives are connected via 
Miloslav> AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All harddrives are 
SAS 
Miloslav> 2.5" 15k drives.

Can you post the output of "cat /proc/mdstat" or since you say you're
using btrfs, are you using their own RAID0 setup?  If so, please post
the output of 'btrfs stats' or whatever the command is you use to view
layout info? 

Miloslav> The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104 accounts, 
Miloslav> Mailbox format, LMTP delivery.

How often are these accounts hitting the server?  

Miloslav> We run 'rsync' to remote NAS daily. It takes about 6.5 hours to 
finish, 
Miloslav> 12'265'387 files last night.

That's sucky.  So basically you're hitting the drives hard with
random IOPs and you're probably running out of performance.  How much
space are you using on the filesystem?

And why not use btrfs send to ship off snapshots instead of using
rsync?  I'm sure that would be an improvement...

Miloslav> In the last half year, we have encountered performance
Miloslav> troubles. Server load grows up to 30 in rush hours, due to
Miloslav> IO waits. We tried to attach additional hard drives (the 838G ones
Miloslav> in the list below) and increase free space by rebalancing. I
Miloslav> think it helped a little bit, but not dramatically.

If you're IOPs bound, but not space bound, then you *really* want to
get an SSD in there for the indexes and such.  Basically the stuff
that gets written/read from all the time no matter what, but which
isn't large in terms of space.

Also, adding in another controller card or two would also probably
help spread the load across more PCI channels, and reduce contention
on the SATA/SAS bus as well.

Miloslav> Is this a reasonable setup and use case for btrfs RAID-10?
Miloslav> If so, are there some recommendations to achieve better
Miloslav> performance?

1. move HOT data to an SSD-based RAID-1 volume pair.  On a separate
   controller. 
2. add more controllers, which also means you're more redundant in
   case one controller fails.
3. Clone the system and put Dovecot IMAP director in front of the
   setup.
4. Stop using rsync for copying to your DR site, use the btrfs snap
   send, or whatever the commands are.
5. check which dovecot backend you're using and think about moving to
   one which doesn't involve nearly as many files.
6. Find out who your biggest users are, in terms of emails and move
   them to SSDs if step 1 is too hard to do at first. 


Can you also grab some 'iostat -dhm 30 60'  output, which is 30
minutes of data over 30 second intervals?  That should help you narrow
down which (if any) disk is your hotspot.

It's not clear to me if you have one big btrfs filesystem, or a bunch
of smaller ones stitched together.  In any case, it should be very easy
to get better performance here.

I think someone else mentioned that you should look at your dovecot
backend, and you should move to the fastest one you can find.

Good luck!
John


Miloslav> # megaclisas-status
Miloslav> -- Controller information --
Miloslav> -- ID | H/W Model  | RAM| Temp | BBU| Firmware
Miloslav> c0| AVAGO MegaRAID SAS 9361-8i | 1024MB | 72C  | Good   | FW: 
Miloslav> 24.16.0-0082

Miloslav> -- Array information --
Miloslav> -- ID | Type   |Size |  Strpsz | Flags | DskCache |   Status |  
OS 
Miloslav> Path | CacheCade |InProgress
Miloslav> c0u0  | RAID-0 |838G |  256 KB | RA,WB |  Enabled |  Optimal | 
Miloslav> /dev/sdq | None  |None
Miloslav> c0u1  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
Miloslav> /dev/sda | None  |None
Miloslav> c0u2  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
Miloslav> /dev/sdb | None  |None
Miloslav> c0u3  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
Miloslav> /dev/sdc | None  |None
Miloslav> c0u4  | RAID-0 |558G |  256 KB | RA,WB |  E

Re: Btrfs RAID-10 performance

2020-09-08 Thread Miloslav Hůla

Thanks for the tips!

On 07.09.2020 at 15:24, Scott Q. wrote:
1. I assume that's a 2U format - 24 bays. You only have 1 RAID card for 
all 24 disks? Granted you only have 16, but usually you should assign 1 
card per 8 drives. In our standard 2U chassis we have 3 HBAs, one per 8 
drives. Your backplane should support that.


Exactly. And what's the reason/bottleneck? PCIe or card throughput?


2. Add more drives


We can add 2 more drives, and we actually did yesterday, but we keep 
free slots to be able to replace drives with double-capacity ones.



3. Get a pci nvme ssd card and move the indexes/control/sieve files there.


It complicates current backup and restore a little bit, but I'll 
probably try that.
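
A sketch of what moving those files could look like in dovecot.conf,
assuming the NVMe card is mounted at /nvme (all paths are placeholders):

  mail_location = maildir:~/Maildir:INDEX=/nvme/dovecot/index/%u:CONTROL=/nvme/dovecot/control/%u

Sieve script locations are configured separately via the Pigeonhole sieve
settings.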


Thank you,
Milo



On Monday, 07/09/2020 at 08:16 Miloslav Hůla wrote:

On 07.09.2020 at 12:43, Sami Ketola wrote:
 >> On 7. Sep 2020, at 12.38, Miloslav Hůla <miloslav.h...@gmail.com> wrote:
 >>
 >> Hello,
 >>
 >> I sent this into the Linux Kernel Btrfs mailing list and I got the reply:
 >> "RAID-1 would be preferable"
 >> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
 >> May I ask for comments from the people around Dovecot?
 >>
 >>
 >> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
 >> server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM.
 >> We run 'btrfs scrub start -B -d /data' every Sunday as a cron task.
 >> It takes about 50 minutes to finish.
 >>
 >> # uname -a
 >> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20)
 >> x86_64 GNU/Linux
 >>
 >> RAID is a composition of 16 harddrives. Harddrives are connected
 >> via AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All harddrives
 >> are SAS 2.5" 15k drives.
 >>
 >> The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104
 >> accounts, Mailbox format, LMTP delivery.
 >
 > does "Mailbox format" mean mbox?
 >
 > If so, then there is your bottleneck. mbox is the slowest possible
 > mailbox format there is.
 >
 > Sami

Sorry, no, it is a typo. We are using "Maildir".

"doveconf -a" attached

Milo


Re: Btrfs RAID-10 performance

2020-09-07 Thread Scott Q.
Here's a few tips:

1. I assume that's a 2U format - 24 bays. You only have 1 RAID card
for all 24 disks? Granted you only have 16, but usually you should
assign 1 card per 8 drives. In our standard 2U chassis we have 3 HBAs,
one per 8 drives. Your backplane should support that.
2. Add more drives
3. Get a pci nvme ssd card and move the indexes/control/sieve files
there. 


On Monday, 07/09/2020 at 08:16 Miloslav Hůla wrote:


On 07.09.2020 at 12:43, Sami Ketola wrote:
>> On 7. Sep 2020, at 12.38, Miloslav Hůla  wrote:
>>
>> Hello,
>>
>> I sent this into the Linux Kernel Btrfs mailing list and I got the reply:
>> "RAID-1 would be preferable"
>> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
>> May I ask for comments from the people around Dovecot?
>>
>>
>> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro
>> server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of
>> RAM. We run 'btrfs scrub start -B -d /data' every Sunday as a cron
>> task. It takes about 50 minutes to finish.
>>
>> # uname -a
>> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20)
>> x86_64 GNU/Linux
>>
>> RAID is a composition of 16 harddrives. Harddrives are connected
>> via AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All harddrives are
>> SAS 2.5" 15k drives.
>>
>> The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104
>> accounts, Mailbox format, LMTP delivery.
> 
> does "Mailbox format" mean mbox?
> 
> If so, then there is your bottleneck. mbox is the slowest possible
> mailbox format there is.
> 
> Sami

Sorry, no, it is a typo. We are using "Maildir".

"doveconf -a" attached

Milo


# 2.2.27 (c0f36b0): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.16 (fed8554)
# OS: Linux 4.9.0-12-amd64 x86_64 Debian 9.13
# NOTE: Send doveconf -n output instead when asking for help.
auth_anonymous_username = anonymous
auth_cache_negative_ttl = 30 secs
auth_cache_size = 100 M
auth_cache_ttl = 30 secs
auth_debug = no
auth_debug_passwords = no
auth_default_realm =
auth_failure_delay = 2 secs
auth_gssapi_hostname =
auth_krb5_keytab =
auth_master_user_separator =
auth_mechanisms = plain
auth_policy_hash_mech = sha256
auth_policy_hash_nonce =
auth_policy_hash_truncate = 12
auth_policy_reject_on_fail = no
auth_policy_request_attributes = login=%{orig_username} 
pwhash=%{hashed_password} remote=%{real_rip}
auth_policy_server_api_header =
auth_policy_server_timeout_msecs = 2000
auth_policy_server_url =
auth_proxy_self =
auth_realms =
auth_socket_path = auth-userdb
auth_ssl_require_client_cert = no
auth_ssl_username_from_cert = no
auth_stats = no
auth_use_winbind = no
auth_username_chars = 
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@
auth_username_format = %Lu
auth_username_translation =
auth_verbose = no
auth_verbose_passwords = no
auth_winbind_helper_path = /usr/bin/ntlm_auth
auth_worker_max_count = 30
base_dir = /var/run/dovecot
config_cache_size = 1 M
debug_log_path =
default_client_limit = 1000
default_idle_kill = 1 mins
default_internal_user = dovecot
default_login_user = dovenull
default_process_limit = 100
default_vsz_limit = 256 M
deliver_log_format = msgid=%m: %$
dict_db_config =
director_consistent_hashing = no
director_doveadm_port = 0
director_flush_socket =
director_mail_servers =
director_servers =
director_user_expire = 15 mins
director_user_kick_delay = 2 secs
director_username_hash = %u
disable_plaintext_auth = yes
dotlock_use_excl = yes
doveadm_allowed_commands =
doveadm_api_key =
doveadm_password =
doveadm_port = 0
doveadm_socket_path = doveadm-server
doveadm_username = doveadm
doveadm_worker_count = 0
dsync_alt_char = _
dsync_features =
dsync_remote_cmd = ssh -l%{login} %{host} doveadm dsync-server -u%u -U
first_valid_gid = 1
first_valid_uid = 109
haproxy_timeout = 3 secs
haproxy_trusted_networks =
hostname =
imap_capability =
imap_client_workarounds =
imap_hibernate_timeout = 0
imap_id_log = *
imap_id_send = name *
imap_idle_notify_interval = 2 mins
imap_logout_format = in=%i out=%o
imap_max_line_length = 64 k
imap_metadata = no
imap_urlauth_host =
imap_urlauth_logout_format = in=%i out=%o
imap_urlauth_port = 143
imapc_cmd_timeout = 5 mins
imapc_features =
imapc_host =
imapc_list_prefix =
imapc_master_user =
imapc_max_idle_time = 29 mins
imapc_max_line_length = 0
imapc_password =
imapc_port = 143
imapc_rawlog_dir =
imapc_sasl_mechanisms =
imapc_ssl = no
imapc_ssl_verify = yes
imapc_user =
import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID LISTEN_FDS
info_log_path =
instance_name = dovecot
last_valid_gid = 0
last_valid_uid = 0
lda_mailbox_autocreate = no
lda_mailbox_autosubscribe = no
lda_original_recipient_header =
libexec_dir = /usr/lib/dovecot
listen = *, ::
lmtp_address_translate =
lmtp_hdr_delivery_address = final
lmtp_proxy = no
lmtp_rcpt_check_quota = no
lmtp_save_to_detail_mailbox = no
lmtp_user_concurrency_limit = 0
lock_method = fcntl
log_path = syslog
log_timestamp = "%b %d %H:%M:%S "
login_access_sockets =
login_greeting =

Re: Btrfs RAID-10 performance

2020-09-07 Thread Miloslav Hůla

On 07.09.2020 at 12:43, Sami Ketola wrote:

On 7. Sep 2020, at 12.38, Miloslav Hůla  wrote:

Hello,

I sent this into the Linux Kernel Btrfs mailing list and I got the reply: "RAID-1 
would be preferable" 
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
May I ask for comments from the people around Dovecot?


We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro server with 
Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM. We run 'btrfs scrub 
start -B -d /data' every Sunday as a cron task. It takes about 50 minutes to 
finish.

# uname -a
Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 GNU/Linux

RAID is a composition of 16 harddrives. Harddrives are connected via AVAGO MegaRAID 
SAS 9361-8i as RAID-0 devices. All harddrives are SAS 2.5" 15k drives.

The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104 accounts, Mailbox 
format, LMTP delivery.


does "Mailbox format" mean mbox?

If so, then there is your bottleneck. mbox is the slowest possible mailbox 
format there is.

Sami


Sorry, no, it is a typo. We are using "Maildir".

"doveconf -a" attached

Milo


# 2.2.27 (c0f36b0): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.16 (fed8554)
# OS: Linux 4.9.0-12-amd64 x86_64 Debian 9.13
# NOTE: Send doveconf -n output instead when asking for help.
auth_anonymous_username = anonymous
auth_cache_negative_ttl = 30 secs
auth_cache_size = 100 M
auth_cache_ttl = 30 secs
auth_debug = no
auth_debug_passwords = no
auth_default_realm =
auth_failure_delay = 2 secs
auth_gssapi_hostname =
auth_krb5_keytab =
auth_master_user_separator =
auth_mechanisms = plain
auth_policy_hash_mech = sha256
auth_policy_hash_nonce =
auth_policy_hash_truncate = 12
auth_policy_reject_on_fail = no
auth_policy_request_attributes = login=%{orig_username} 
pwhash=%{hashed_password} remote=%{real_rip}

auth_policy_server_api_header =
auth_policy_server_timeout_msecs = 2000
auth_policy_server_url =
auth_proxy_self =
auth_realms =
auth_socket_path = auth-userdb
auth_ssl_require_client_cert = no
auth_ssl_username_from_cert = no
auth_stats = no
auth_use_winbind = no
auth_username_chars = 
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@

auth_username_format = %Lu
auth_username_translation =
auth_verbose = no
auth_verbose_passwords = no
auth_winbind_helper_path = /usr/bin/ntlm_auth
auth_worker_max_count = 30
base_dir = /var/run/dovecot
config_cache_size = 1 M
debug_log_path =
default_client_limit = 1000
default_idle_kill = 1 mins
default_internal_user = dovecot
default_login_user = dovenull
default_process_limit = 100
default_vsz_limit = 256 M
deliver_log_format = msgid=%m: %$
dict_db_config =
director_consistent_hashing = no
director_doveadm_port = 0
director_flush_socket =
director_mail_servers =
director_servers =
director_user_expire = 15 mins
director_user_kick_delay = 2 secs
director_username_hash = %u
disable_plaintext_auth = yes
dotlock_use_excl = yes
doveadm_allowed_commands =
doveadm_api_key =
doveadm_password =
doveadm_port = 0
doveadm_socket_path = doveadm-server
doveadm_username = doveadm
doveadm_worker_count = 0
dsync_alt_char = _
dsync_features =
dsync_remote_cmd = ssh -l%{login} %{host} doveadm dsync-server -u%u -U
first_valid_gid = 1
first_valid_uid = 109
haproxy_timeout = 3 secs
haproxy_trusted_networks =
hostname =
imap_capability =
imap_client_workarounds =
imap_hibernate_timeout = 0
imap_id_log = *
imap_id_send = name *
imap_idle_notify_interval = 2 mins
imap_logout_format = in=%i out=%o
imap_max_line_length = 64 k
imap_metadata = no
imap_urlauth_host =
imap_urlauth_logout_format = in=%i out=%o
imap_urlauth_port = 143
imapc_cmd_timeout = 5 mins
imapc_features =
imapc_host =
imapc_list_prefix =
imapc_master_user =
imapc_max_idle_time = 29 mins
imapc_max_line_length = 0
imapc_password =
imapc_port = 143
imapc_rawlog_dir =
imapc_sasl_mechanisms =
imapc_ssl = no
imapc_ssl_verify = yes
imapc_user =
import_environment = TZ CORE_OUTOFMEM CORE_ERROR LISTEN_PID LISTEN_FDS
info_log_path =
instance_name = dovecot
last_valid_gid = 0
last_valid_uid = 0
lda_mailbox_autocreate = no
lda_mailbox_autosubscribe = no
lda_original_recipient_header =
libexec_dir = /usr/lib/dovecot
listen = *, ::
lmtp_address_translate =
lmtp_hdr_delivery_address = final
lmtp_proxy = no
lmtp_rcpt_check_quota = no
lmtp_save_to_detail_mailbox = no
lmtp_user_concurrency_limit = 0
lock_method = fcntl
log_path = syslog
log_timestamp = "%b %d %H:%M:%S "
login_access_sockets =
login_greeting = Dovecot ready.
login_log_format = %$: %s
login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c 
session=<%{session}>

login_plugin_dir = /usr/lib/dovecot/modules/login
login_plugins =
login_proxy_max_disconnect_delay = 0
login_source_ips =
login_trusted_networks =
mail_access_groups =
mail_always_cache_fields =
mail_attachment_dir =
mail_attachment_fs = sis posix
mail_attachment_hash = %{sha1}
mail_attachment_min_size = 128 k
mail_attribu

Re: Btrfs RAID-10 performance

2020-09-07 Thread Sami Ketola



> On 7. Sep 2020, at 12.38, Miloslav Hůla  wrote:
> 
> Hello,
> 
> I sent this into the Linux Kernel Btrfs mailing list and I got the reply: "RAID-1 
> would be preferable" 
> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/).
> May I ask for comments from the people around Dovecot?
> 
> 
> We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro server 
> with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM. We run 
> 'btrfs scrub start -B -d /data' every Sunday as a cron task. It takes about 
> 50 minutes to finish.
> 
> # uname -a
> Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 
> GNU/Linux
> 
> RAID is a composition of 16 harddrives. Harddrives are connected via AVAGO 
> MegaRAID SAS 9361-8i as RAID-0 devices. All harddrives are SAS 2.5" 15k 
> drives.
> 
> The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104 accounts, Mailbox 
> format, LMTP delivery.

does "Mailbox format" mean mbox?

If so, then there is your bottleneck. mbox is the slowest possible mailbox 
format there is.

Sami




Btrfs RAID-10 performance

2020-09-07 Thread Miloslav Hůla

Hello,

I sent this into the Linux Kernel Btrfs mailing list and I got the reply: 
"RAID-1 would be preferable" 
(https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2...@lechevalier.se/T/). 
May I ask for comments from the people around Dovecot?



We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro 
server with Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz and 125GB of RAM. 
We run 'btrfs scrub start -B -d /data' every Sunday as a cron task. It 
takes about 50 minutes to finish.
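
(For instance, a root crontab entry along those lines, with the day and
hour as examples and assuming btrfs is on cron's PATH:

  0 3 * * 0  btrfs scrub start -B -d /data
)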


# uname -a
Linux imap 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1 (2020-01-20) x86_64 
GNU/Linux


RAID is a composition of 16 harddrives. Harddrives are connected via 
AVAGO MegaRAID SAS 9361-8i as RAID-0 devices. All harddrives are SAS 
2.5" 15k drives.


The server serves as an IMAP server with Dovecot 2.2.27-3+deb9u6, 4104 accounts, 
Mailbox format, LMTP delivery.


We run 'rsync' to remote NAS daily. It takes about 6.5 hours to finish, 
12'265'387 files last night.



In the last half year, we have encountered performance troubles. Server load 
grows up to 30 in rush hours, due to IO waits. We tried to attach additional 
hard drives (the 838G ones in the list below) and increase free space by 
rebalancing. I think it helped a little bit, but not dramatically.


Is this a reasonable setup and use case for btrfs RAID-10? If so, are 
there some recommendations to achieve better performance?


Thank you. With kind regards
Milo



# megaclisas-status
-- Controller information --
-- ID | H/W Model  | RAM| Temp | BBU| Firmware
c0| AVAGO MegaRAID SAS 9361-8i | 1024MB | 72C  | Good   | FW: 
24.16.0-0082


-- Array information --
-- ID | Type   |Size |  Strpsz | Flags | DskCache |   Status |  OS 
Path | CacheCade |InProgress
c0u0  | RAID-0 |838G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdq | None  |None
c0u1  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sda | None  |None
c0u2  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdb | None  |None
c0u3  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdc | None  |None
c0u4  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdd | None  |None
c0u5  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sde | None  |None
c0u6  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdf | None  |None
c0u7  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdg | None  |None
c0u8  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdh | None  |None
c0u9  | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdi | None  |None
c0u10 | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdj | None  |None
c0u11 | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdk | None  |None
c0u12 | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdl | None  |None
c0u13 | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdm | None  |None
c0u14 | RAID-0 |558G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdn | None  |None
c0u15 | RAID-0 |838G |  256 KB | RA,WB |  Enabled |  Optimal | 
/dev/sdr | None  |None


-- Disk information --
-- ID   | Type | Drive Model   | Size | Status 
 | Speed| Temp | Slot ID  | LSI ID
c0u0p0  | HDD  | SEAGATE ST900MP0006 N003WAG0Q3S3  | 837.8 Gb | Online, 
Spun Up | 12.0Gb/s | 53C  | [8:14]   | 32
c0u1p0  | HDD  | HGST HUC156060CSS200 A3800XV250TJ | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 38C  | [8:0]| 12
c0u2p0  | HDD  | HGST HUC156060CSS200 A3800XV3XT4J | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 43C  | [8:1]| 11
c0u3p0  | HDD  | HGST HUC156060CSS200 ADB05ZG4XLZU | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 46C  | [8:2]| 25
c0u4p0  | HDD  | HGST HUC156060CSS200 A3800XV3DWRL | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 48C  | [8:3]| 14
c0u5p0  | HDD  | HGST HUC156060CSS200 A3800XV3XZTL | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 52C  | [8:4]| 18
c0u6p0  | HDD  | HGST HUC156060CSS200 A3800XV3VSKJ | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 55C  | [8:5]| 15
c0u7p0  | HDD  | SEAGATE ST600MP0006 N003WAF1LWKE  | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 56C  | [8:6]| 28
c0u8p0  | HDD  | HGST HUC156060CSS200 A3800XV3XTDJ | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 55C  | [8:7]| 20
c0u9p0  | HDD  | HGST HUC156060CSS200 A3800XV3T8XL | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 57C  | [8:8]| 19
c0u10p0 | HDD  | HGST HUC156060CSS200 A7030XHL0ZYP | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 61C  | [8:9]| 23
c0u11p0 | HDD  | HGST HUC156060CSS200 ADB05ZG4VR3P | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 60C  | [8:10]   | 24
c0u12p0 | HDD  | SEAGATE ST600MP0006 N003WAF195KA  | 558.4 Gb | Online, 
Spun Up | 12.0Gb/s | 60C  | [8:11]   | 29
c0u13p0 | HDD  | SEAGATE ST600MP000

Re: Performance mdbox vs mbox

2019-11-26 Thread @lbutlr via dovecot
On 26 Nov 2019, at 04:15, Marc Roos  wrote:
> If I do the same test[1] with mbox I can store around 31k messages, and with 
> mdbox 16k messages. I also noticed that CPU and disk utilization with 
> mdbox were not very high, while disk utilization with mbox was much higher. 
> That makes me wonder if I can tune mdbox to have better performance?

No one should use mbox for anything. It was designed for mail stores of a few 
megabytes.

*Every* other choice is better.



-- 
My little brother got his arm stuck in the microwave. So my mom had
to take him to the hospital. My grandma dropped acid this
morning, and she freaked out. She hijacked a busload of penguins.
So it's sort of a family crisis. Bye!



Performance mdbox vs mbox

2019-11-26 Thread Marc Roos via dovecot


If I do the same test[1] with mbox I can store around 31k messages, and with 
mdbox 16k messages. I also noticed that CPU and disk utilization with 
mdbox were not very high, while disk utilization with mbox was much higher. 
That makes me wonder if I can tune mdbox to have better performance?


[1]
imaptest - append=100,0 logout=0 host=svr port=143 user=test pass=xxx 
seed=100 secs=240 clients=1 mbox=64kb.mbox box=inbox/test

[2]
mail_location = mbox:~/mail:INBOX=/var/spool/mail/%u:CONTROL=~/mail/control:INDEX=/var/dovecot/%u/index:LAYOUT=maildir++
mail_location = mdbox:~/mdbox:INDEX=/home/popindex/%u/index


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -. 
F1 Outsourcing Development Sp. z o.o.
Poland 

t:  +48 (0)124466845
f:  +48 (0)124466843
e:  m...@f1-outsourcing.eu




Re: [ext] dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-16 Thread Ralf Hildebrandt via dovecot
* Aki Tuomi via dovecot :

> 2.3.7 does not generate DH keys. It's been removed since 2.3.0

Yes, it was the only periodic process I could think of.

> Is it possible for you to track and find out which process is causing the 
> peak?

Will try. Next hour :)

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | https://www.charite.de



Re: [ext] dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-16 Thread Aki Tuomi via dovecot


> On 16/10/2019 13:31 Ralf Hildebrandt via dovecot  wrote:
> 
>  
> * Ralf Hildebrandt via dovecot :
> > * Timo Sirainen :
> > 
> > > > BTW: This post is a followup to my "2.3.7 slower than 2.3.6?" post from 
> > > > back in July.
> > > 
> > > Fixed by 
> > > https://github.com/dovecot/core/commit/5e9e09a041b318025fd52db2df25052b60d0fc98
> > >  and will be in the soon-to-be-released v2.3.8.
> > 
> > I stopped 2.3.7, copied over the index files from the ramdisk into
> > the physical "realm" and restarted with a fresh 2.3.8. It probably
> > takes a few days to be absolutely sure.
> 
> So, in general the performance issues are gone.
> 
> But... 
> 
> I'm seeing odd hourly spikes almost every hour, on the hour.
> You might say: Well yes, that's a cronjob sending lots of mails. But
> it isn't. There's not more or less mail coming in at that very moment.
> 
> I suspect something in dovecot running every hour (DH key regeneration?)
> 
> -- 
> Ralf Hildebrandt

2.3.7 does not generate DH keys. It's been removed since 2.3.0

Is it possible for you to track and find out which process is causing the peak?

Aki


Re: [ext] dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-16 Thread Ralf Hildebrandt via dovecot
* Ralf Hildebrandt via dovecot :
> * Timo Sirainen :
> 
> > > BTW: This post is a followup to my "2.3.7 slower than 2.3.6?" post from 
> > > back in July.
> > 
> > Fixed by 
> > https://github.com/dovecot/core/commit/5e9e09a041b318025fd52db2df25052b60d0fc98
> >  and will be in the soon-to-be-released v2.3.8.
> 
> I stopped 2.3.7, copied over the index files from the ramdisk into
> the physical "realm" and restarted with a fresh 2.3.8. It probably
> takes a few days to be absolutely sure.

So, in general the performance issues are gone.

But... 

I'm seeing odd hourly spikes almost every hour, on the hour.
You might say: Well yes, that's a cronjob sending lots of mails. But
it isn't. There's not more or less mail coming in at that very moment.

I suspect something in dovecot running every hour (DH key regeneration?)

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | https://www.charite.de



Re: [ext] dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-08 Thread Ralf Hildebrandt via dovecot
* Timo Sirainen :

> > BTW: This post is a followup to my "2.3.7 slower than 2.3.6?" post from 
> > back in July.
> 
> Fixed by 
> https://github.com/dovecot/core/commit/5e9e09a041b318025fd52db2df25052b60d0fc98
>  and will be in the soon-to-be-released v2.3.8.

I stopped 2.3.7, copied over the index files from the ramdisk into
the physical "realm" and restarted with a fresh 2.3.8. It probably
takes a few days to be absolutely sure.

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | https://www.charite.de



Re: [ext] dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-07 Thread Ralf Hildebrandt via dovecot
* Timo Sirainen :

> >> But why is that? Why would the index file be updated so often?
> > 
> > BTW: This post is a followup to my "2.3.7 slower than 2.3.6?" post from 
> > back in July.
> 
> Fixed by 
> https://github.com/dovecot/core/commit/5e9e09a041b318025fd52db2df25052b60d0fc98
>  
> 
>  and will be in the soon-to-be-released v2.3.8.

Thanks. I will test this (once it's been released) by moving the index files 
back to
conventional storage. 

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | https://www.charite.de



Re: [ext] dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-07 Thread Timo Sirainen via dovecot
On 1 Oct 2019, at 16.45, Ralf Hildebrandt via dovecot  
wrote:
> 
> * Ralf Hildebrandt via dovecot :
> 
>> But why is that? Why would the index file be updated so often?
> 
> BTW: This post is a followup to my "2.3.7 slower than 2.3.6?" post from back 
> in July.

Fixed by 
https://github.com/dovecot/core/commit/5e9e09a041b318025fd52db2df25052b60d0fc98 
and will be in the soon-to-be-released v2.3.8.



Re: [ext] Re: dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-01 Thread Ralf Hildebrandt via dovecot
> > This command quickly pointed to 
> > ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp
> > That file was written excessively. 
> 

> Was it one user's dovecot.index.tmp or for a lot of users?

There's just one user. All mail goes to one mailbox.

> This
> means that dovecot.index is being rewritten, which should happen only
> once in a while, but now it sounds like it's happening maybe for every
> mail delivery.

Yes, it seems to be that way.

> If it's still happening, could you send me one folder's
> dovecot.index and dovecot.index.log files? (They don't contain
> anything sensitive other than maybe message flags.)

I can send the files.

> > This is dovecot 2.3.7.2-1~bionic
> 
> So you had been running this version already for a while, and then it just 
> suddenly started getting slow?

Yes. Initially I threw away the whole mailbox after it got slow, but
after a few days the performance started to degrade. Admittedly, it
contains a lot of mails :)

> I tried to reproduce this with imaptest and Dovecot that is patched
> to log when dovecot.index is being rewritten, but there doesn't seem
> to be any difference with v2.2.36, v2.3.7 or git master.
 
-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | https://www.charite.de



Re: dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-01 Thread Timo Sirainen via dovecot
On 1 Oct 2019, at 16.31, Ralf Hildebrandt via dovecot  
wrote:
> 
> I set up a system copying all mails to a backup system.
> 
> This used to work without a hitch - now in the last few days mails
> would pile up in the Postfix Queue, waiting to be delivered using the
> lmtp transport into dovecot.
> 
> So dovecot was being slow, but why? After all, nothing changed.
> 
> After reading some articles on stackoverflow I found a way of finding
> out which file gets the most IO:
> 
> % sysdig -c topfiles_bytes;
> 
> This command quickly pointed to 
> ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp
> That file was written excessively. 

Was it one user's dovecot.index.tmp or for a lot of users? This means that 
dovecot.index is being rewritten, which should happen only once in a while, but 
now it sounds like it's happening maybe for every mail delivery. If it's still 
happening, could you send me one folder's dovecot.index and dovecot.index.log 
files? (They don't contain anything sensitive other than maybe message flags.)

> I then put ~/mdbox/mailboxes/INBOX/dbox-Mails/ into tmpfs and alas, the queue 
> would drain quickly.
> 
> But why is that? Why would the index file be updated so often?
> 
> This is dovecot 2.3.7.2-1~bionic

So you had been running this version already for a while, and then it just 
suddenly started getting slow?

I tried to reproduce this with imaptest and Dovecot that is patched to log when 
dovecot.index is being rewritten, but there doesn't seem to be any difference 
with v2.2.36, v2.3.7 or git master.



Re: [ext] dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-01 Thread Ralf Hildebrandt via dovecot
* Ralf Hildebrandt via dovecot :

> But why is that? Why would the index file be updated so often?

BTW: This post is a followup to my "2.3.7 slower than 2.3.6?" post from back in 
July.


dovecot 2.3.7.2-1~bionic: Performance issues caused by excessive IO to ~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp

2019-10-01 Thread Ralf Hildebrandt via dovecot
I set up a system copying all mails to a backup system.

This used to work without a hitch - now in the last few days mails
would pile up in the Postfix Queue, waiting to be delivered using the
lmtp transport into dovecot.

So dovecot was being slow, but why? After all, nothing changed.

After reading some articles on stackoverflow I found a way of finding
out which file gets the most IO:

% sysdig -c topfiles_bytes;

This command quickly pointed to 
~/mdbox/mailboxes/INBOX/dbox-Mails/dovecot.index.tmp
That file was written excessively. 

I then put ~/mdbox/mailboxes/INBOX/dbox-Mails/ into tmpfs and alas, the queue 
would drain quickly.
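
(For reference, a tmpfs workaround along those lines, with size and path
as placeholders, could look like:

  mount -t tmpfs -o size=256m tmpfs /home/copymail/mdbox/mailboxes/INBOX/dbox-Mails

with the caveat that the index files there are lost on reboot and have to
be rebuilt or restored.)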

But why is that? Why would the index file be updated so often?

This is dovecot 2.3.7.2-1~bionic

# 2.3.7.2 (3c910f64b): /etc/dovecot/dovecot.conf
# OS: Linux 5.0.0-29-generic x86_64 Ubuntu 18.04.3 LTS 
default_vsz_limit = 2 G
lmtp_user_concurrency_limit = 1
mail_attachment_dir = /home/copymail/attachments
mail_attachment_hash = %{sha256}
mail_fsync = never
mail_location = mdbox:~/mdbox
mail_plugins = zlib fts fts_lucene
mdbox_rotate_size = 128 M
namespace inbox {
  inbox = yes
  location = 
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix = 
}
passdb {
  args = username_format=%u /etc/dovecot/passwd
  driver = passwd-file
}
plugin {
  fts = lucene
  fts_languages = de,en
  fts_lucene = whitespace_chars=@.
}
protocols = " imap lmtp"
service imap-login {
  inet_listener imap {
address = 127.0.0.1
port = 143
  }
  inet_listener imaps {
port = 993
ssl = yes
  }
}
service lmtp {
  inet_listener lmtp {
  address = 10.0.5.208
port = 1025
  }
  process_min_avail = 5
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0660
user = postfix
  }
}
ssl_ca = /etc/ssl/certs/ca-certificates.crt
ssl_cert = <



Re: performance issue with UID SEARCH

2019-02-27 Thread Aki Tuomi via dovecot
Without FTS, dovecot needs to open *each* and *every* email when doing
text searches, which is understandably a rather slow process.

Aki

On 27.2.2019 10.43, Marc Roos wrote:
>  
>
> I am not sure if this is any help. From what I understand of maildir it has 
> lots of separate files, thus uid/gid lookups. Try running something like 
> nscd, that will cache these lookups?
>
>
>
>
> -Original Message-
> From: Aki Tuomi via dovecot [mailto:dovecot@dovecot.org] 
> Sent: 27 February 2019 06:24
> To: Ben Burke; Dovecot Mailing List
> Subject: Re: performance issue with UID SEARCH
>
>
>   On 27 February 2019 03:27 Ben Burke via dovecot <dovecot@dovecot.org> wrote: 
>
>
>   Hi, 
>
>   I'm running dovecot 2.2.x and I'm having an issue where I see many 
>   dovecot processes use all the available IO on a server. According to 
>   iotop the worst offenders seem to be in this state (NOTE: I swapped in 
>   phony username & IP info): 
>
>   dovecot/imap [someusername 123.456.789.012 UID SEARCH] 
>
>   The server in question is running with Maildirs on top of an XFS 
>   filesystem. Is there anything I can do to optimize "UID SEARCH" or find 
>   out why it's being a problem? I've read 
>   https://wiki2.dovecot.org/PerformanceTuning and the linked pages. 
>
>   By "being a problem" I mean iostat -xmt 1 /dev/diskdevice shows 100% 
>   utilization for long periods and in some cases io service times are 
>   taking many seconds... which causes thunderbird to timeout when doing 
>   things like appending messages to user "Sent" mailboxes. 
>
>   Any ideas? 
>
>   Thanks, 
>   Ben Burke 
>
>
> Are you using FTS? If not, you should. See 
> https://wiki.dovecot.org/Plugins/FTS
> ---
> Aki Tuomi
>
>
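
A minimal sketch of wiring Dovecot to a Solr FTS backend (2.2-era syntax;
the URL and core name are placeholders):

  mail_plugins = $mail_plugins fts fts_solr
  plugin {
    fts = solr
    fts_solr = url=http://localhost:8983/solr/dovecot/
  }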


Re: performance issue with UID SEARCH

2019-02-26 Thread Aki Tuomi via dovecot
> On 27 February 2019 03:27 Ben Burke via dovecot <dovecot@dovecot.org> wrote:
>
> Hi,
>
> I'm running dovecot 2.2.x and I'm having an issue where I see many
> dovecot processes use all the available IO on a server. According to
> iotop the worst offenders seem to be in this state (NOTE: I swapped in
> phony username & IP info):
>
> dovecot/imap [someusername 123.456.789.012 UID SEARCH]
>
> The server in question is running with Maildirs on top of an XFS
> filesystem. Is there anything I can do to optimize "UID SEARCH" or find
> out why it's being a problem? I've read
> https://wiki2.dovecot.org/PerformanceTuning and the linked pages.
>
> By "being a problem" I mean iostat -xmt 1 /dev/diskdevice shows 100%
> utilization for long periods and in some cases io service times are
> taking many seconds... which causes thunderbird to timeout when doing
> things like appending messages to user "Sent" mailboxes.
>
> Any ideas?
>
> Thanks,
> Ben Burke

Are you using FTS? If not, you should. See https://wiki.dovecot.org/Plugins/FTS
---
Aki Tuomi



performance issue with UID SEARCH

2019-02-26 Thread Ben Burke via dovecot
Hi,

I'm running dovecot 2.2.x and I'm having an issue where I see many
dovecot processes use all the available IO on a server. According to
iotop the worst offenders seem to be in this state (NOTE: I swapped in
phony username & IP info):

dovecot/imap [someusername 123.456.789.012 UID SEARCH]

The server in question is running with Maildirs on top of an XFS
filesystem. Is there anything I can do to optimize "UID SEARCH" or find
out why it's being a problem? I've read
https://wiki2.dovecot.org/PerformanceTuning and the linked pages.

By "being a problem" I mean iostat -xmt 1 /dev/diskdevice shows 100%
utilization for long periods and in some cases io service times are
taking many seconds... which causes thunderbird to timeout when doing
things like appending messages to user "Sent" mailboxes.

Any ideas?

Thanks,
Ben Burke



Re: [enhancement] fts-solr low performance

2018-03-05 Thread Aki Tuomi


On 05.03.2018 11:07, azurIt wrote:
>> Hi,
>>
>> we activated fts-solr about a week ago and immediately started to  
>> experience really *low* performance with MOVE and EXPUNGE commands.  
>> After several days of googling, tcpdumping and stracing, I was able to  
>> find and resolve the problem.
>>
>> We are using Dovecot 2.2.27 from Debian Jessie (jessie-backports),  
>> which does a soft commit in Solr after every MOVE or EXPUNGE  
>> command - this behavior cannot currently be changed. The problem is  
>> that this was causing every MOVE/EXPUNGE to take about 6 seconds to  
>> complete. The problem appears to be the very old version of Solr -  
>> 3.6.2 (!!). This is the only version shipped with the current  
>> (Jessie) and also the next (Stretch) version of Debian; don't ask me why,  
>> I don't understand it either. Solr versions below 4.0 do NOT  
>> support soft commits, so all commits are hard, and this was the  
>> problem. Finally, I decided to patch our Dovecot to not send a commit  
>> at all and everything started to be super fast. I'm doing hard commits  
>> every minute via cron, so the only consequence of this is that you  
>> cannot search for messages delivered less than a minute ago (which  
>> you usually don't need to do anyway).
>>
>> While googling I also found out that Solr supports an autoCommit function  
>> (and from version 4.0 also autoSoftCommit), so there's no reason for  
>> Dovecot to handle this on its own (potentially doing hundreds or  
>> thousands of soft commits every second) - you can just set Solr to,  
>> for example, do an autoSoftCommit every second and an autoCommit every minute:
>> https://cwiki.apache.org/confluence/display/solr/UpdateHandlers+in+SolrConfig#UpdateHandlersinSolrConfig-autoCommit
>>
>> Also, this wiki page should be updated with a warning about old versions  
>> of Solr not supporting soft commits (you could also mention the  
>> auto[Soft]Commit function):
>> http://wiki2.dovecot.org/Plugins/FTS/Solr
>>
>> I suggest allowing Solr commits to be completely disabled in Dovecot by  
>> configuration, so people like me can handle this easily. What do you  
>> think?
>>
>> azur
>
>
>
> Hi,
>
> any news on this? Even the Solr documentation suggests NOT doing commits from 
> applications:
> https://lucene.apache.org/solr/guide/6_6/shards-and-indexing-data-in-solrcloud.html#ShardsandIndexingDatainSolrCloud-IgnoringCommitsfromClientApplicationsinSolrCloud
>
> Thanks for not ignoring me.
>
> azur

You are not being ignored. We'll attend to this eventually.

Aki


Re: [enhancement] fts-solr low performance

2018-03-05 Thread azurIt
>Hi,
>
>we activated fts-solr about a week ago and immediately started to  
>experience really *low* performance with MOVE and EXPUNGE commands.  
>After several days of googling, tcpdumping and stracing, I was able to  
>find and resolve the problem.
>
>We are using Dovecot 2.2.27 from Debian Jessie (jessie-backports),  
>which is doing a soft commit in solr after every MOVE or EXPUNGE  
>command - this behavior cannot be, currently, changed. The problem is  
>that this was causing every MOVE/EXPUNGE to take about 6 seconds to  
>complete. The problem appears to be the very old version of Solr -  
>3.6.2 (!!). This is the only version which is shipped with the current  
>(Jessie) and also the next (Stretch) version of Debian, don't ask me why,  
>i don't understand it either. Solr versions below 4.0 do NOT  
>support soft commits, so all commits are hard and this was the  
>problem. Finally, i decided to patch our Dovecot to not send a commit  
>at all and everything started to be super fast. I'm doing hard commits  
>every minute via cron so the only consequence of this is that you  
>cannot search for messages delivered less than a minute ago (which  
>you, usually, don't need to do anyway).
>
>While googling i also found out that Solr supports an autoCommit function  
>(and from version 4.0 also autoSoftCommit), so there's no reason for  
>Dovecot to handle this on its own (and potentially doing hundreds or  
>thousands of soft commits every second) - you can just set Solr to,  
>for example, do autoSoftCommit every second and autoCommit every minute:
>https://cwiki.apache.org/confluence/display/solr/UpdateHandlers+in+SolrConfig#UpdateHandlersinSolrConfig-autoCommit
>
>Also this wiki page should be updated with a warning about old versions  
>of Solr not supporting soft commits (you could also mention the  
>auto[Soft]Commit function):
>http://wiki2.dovecot.org/Plugins/FTS/Solr
>
>I suggest allowing Solr commits to be completely disabled in Dovecot by  
>configuration, so people like me can handle this easily. What do you  
>think?
>
>azur




Hi,

any news on this? Even the Solr documentation suggests NOT doing commits from 
applications:
https://lucene.apache.org/solr/guide/6_6/shards-and-indexing-data-in-solrcloud.html#ShardsandIndexingDatainSolrCloud-IgnoringCommitsfromClientApplicationsinSolrCloud

Thanks for not ignoring me.

azur


Re: Really slow IMAP performance

2018-02-26 Thread Tanstaafl
On Sat Feb 24 2018 17:01:01 GMT-0500 (Eastern Standard Time), @lbutlr
 wrote:
> On 2018-02-24 (07:14 MST), Aki Tuomi  wrote:
>>
>> https://wiki2.dovecot.org/Migration/MailFormat
> 
> That didn't show up when searching wiki2 for "Migration" :/

Don't search 'wiki2', search just wiki.dovecot.org

I never liked the way Timo rolled out the wiki for the new version 2
when he did, I knew it would do nothing but create confusion...

Really, he should just redirect all references to wiki2 to wiki and kill
the old content...


Re: Really slow IMAP performance

2018-02-24 Thread Joseph Tam

On Sat, 24 Feb 2018, Neil Jerram wrote:


My INBOX file has 22990 messages.  Is the slowness that I am seeing
definitely expected for an mbox of that size?  (It may also be relevant
that the HDD it's stored on is pretty old now, and has been known to
report SMART errors...)


Yeah, a copy of that mailbox will be that slow, esp. if the messages
have large attachments.  Even a simple operation like deleting/expunging
the first message will cause data shuffling of the entire mailbox.

Joseph Tam 


Re: Really slow IMAP performance

2018-02-24 Thread @lbutlr
On 2018-02-24 (07:14 MST), Aki Tuomi  wrote:
> 
> https://wiki2.dovecot.org/Migration/MailFormat

That didn't show up when searching wiki2 for "Migration" :/


-- 
"...Life is not a journey to the grave with the intention of arriving
safely in one pretty and well-preserved piece, but to slide across the
finish line broadside, thoroughly used up, worn out, leaking oil, and
shouting GERONIMO!!!" -- Bill McKenna



Re: Really slow IMAP performance

2018-02-24 Thread @lbutlr
On 2018-02-24 (07:04 MST), Neil Jerram  wrote:
> 
> My INBOX file has 22990 messages.  Is the slowness that I am seeing
> definitely expected for an mbox of that size?  (It may also be relevant
> that the HDD it's stored on is pretty old now, and has been known to
> report SMART errors...)


back in the dark ages I would send an alert to users if their inbox went over 
1,000 messages, because an mbox that large drove the server to its knees and 
made merely logging in to mail take an excruciating amount of time. If they 
didn't fix it I'd archive their inbox and start over.

I am astonished your machine can process an mbox with over 22 times that many 
messages.

https://wiki1.dovecot.org/Migration/MailFormat

That page doesn't exist on the wiki for dovecot 2, but that script to convert 
mbox to maildir should still work. Obviously, keep backups and such.
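
If you'd rather use a ready-made tool, the widely used mb2md script does the 
same job (a hedged sketch, assuming mb2md is installed; check its man page 
first):

mb2md -s /var/mail/username -d ~/Maildir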

-- 
Did they get you to trade your heroes for ghosts? Hot ashes for trees?
Hot air for a cool breeze? Cold comfort for change?



Re: Really slow IMAP performance

2018-02-24 Thread Neil Jerram
Aki Tuomi  writes:

> Yes. You deffo are looking at several reasons for slowness.
>
> I can only recommend moving to maildir or sdbox format, and probably a new
> HDD too.
>
> https://wiki2.dovecot.org/Tools/Doveadm/Sync here is an example of 'converting' 
> between mailbox formats using dsync. You should also read 
> https://wiki2.dovecot.org/Migration/MailFormat
>
> mbox format has been known to act up with dsync occasionally, so I recommend 
> using 
>
> doveadm backup maildir:~/Maildir

Thanks, I've done that now, and things are looking much better.

I rediscovered that I've configured postfix to deliver locally using
dovecot-lmtp - which meant that I then only needed to change dovecot's
mail_location setting, and nothing at all in the postfix config.
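
For the record, the change was essentially one line in dovecot.conf (a sketch, 
assuming the same maildir:~/Maildir location as in Aki's command):

mail_location = maildir:~/Maildir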

Many thanks! - Neil


Re: Really slow IMAP performance

2018-02-24 Thread Håkon Alstadheim


Den 24. feb. 2018 15:04, skrev Neil Jerram:
> Aki Tuomi  writes:
> 
>>> On 24 February 2018 at 15:47 Neil Jerram  wrote:
> 
> [...]
  Feb 24 10:24:24 arudy dovecot[1712]: imap(neil): Warning:
 Transaction log file
 /home/neil/dovecot-mail/.imap/INBOX/dovecot.index.log was locked
 for 98 seconds (Mailbox was synchronized)
> [...]
> 
>> You are using mbox format. This is ... bit slow. =)
>>
>> When you move mails between mbox files, it has to rewrite the entire mbox 
>> file every time. You should probably start using maildir or sdbox instead.
> 
> Ah, right, thanks.
> 
> My INBOX file has 22990 messages.  Is the slowness that I am seeing
> definitely expected for an mbox of that size?  (It may also be relevant
> that the HDD it's stored on is pretty old now, and has been known to
> report SMART errors...)
> 
> If so, I'll start looking at how to migrate, given that my system is
> Postfix + Dovecot.  If you have any particular recommendations or
> migration pointers for a system like that, I'd appreciate them.
> 

I'd go with whatever tools you are familiar with. If you don't know
where to start, formail(1) can read an mbox and do whatever for each
mail contained therein. These, together with procmail, were the
go-to tools in the days before IMAP.

These days you'd probably want to involve your local delivery agent on
the output from formail. The lda would invoke sieve instead of procmail
if that is your thing.
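
A sketch of that approach (the dovecot-lda path varies by distro; -d and -m 
pick the destination user and mailbox, both placeholders here):

formail -s /usr/lib/dovecot/dovecot-lda -d neil -m Archive < /var/mail/neil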

Whatever you do, try to set up so you can do some tests before you blast
22000 mails to somewhere you do not want them :-)



Re: Really slow IMAP performance

2018-02-24 Thread Aki Tuomi

> On 24 February 2018 at 16:04 Neil Jerram  wrote:
> 
> 
> Aki Tuomi  writes:
> 
> >> On 24 February 2018 at 15:47 Neil Jerram  wrote:
> 
> [...]
> >> >  Feb 24 10:24:24 arudy dovecot[1712]: imap(neil): Warning:
> >> > Transaction log file
> >> > /home/neil/dovecot-mail/.imap/INBOX/dovecot.index.log was locked
> >> > for 98 seconds (Mailbox was synchronized)
> [...]
> 
> > You are using mbox format. This is ... bit slow. =)
> >
> > When you move mails between mbox files, it has to rewrite the entire mbox 
> > file every time. You should probably start using maildir or sdbox instead.
> 
> Ah, right, thanks.
> 
> My INBOX file has 22990 messages.  Is the slowness that I am seeing
> definitely expected for an mbox of that size?  (It may also be relevant
> that the HDD it's stored on is pretty old now, and has been known to
> report SMART errors...)
> 
> If so, I'll start looking at how to migrate, given that my system is
> Postfix + Dovecot.  If you have any particular recommendations or
> migration pointers for a system like that, I'd appreciate them.
> 
> Best wishes - Neil

Yes. You deffo are looking at several reasons for slowness.

I can only recommend moving to maildir or sdbox format, and probably a new HDD 
too.

https://wiki2.dovecot.org/Tools/Doveadm/Sync here is an example of 'converting' 
between mailbox formats using dsync. You should also read 
https://wiki2.dovecot.org/Migration/MailFormat

mbox format has been known to act up with dsync occasionally, so I recommend 
using 

doveadm backup maildir:~/Maildir

if you want to give it a try, instead of doveadm sync. Backup does dsync too, 
but it only works one way.
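
For example (a sketch; -u picks the user, and the target location here matches 
the command above):

doveadm backup -u neil maildir:~/Maildir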

Aki


Re: Really slow IMAP performance

2018-02-24 Thread Neil Jerram
Aki Tuomi  writes:

>> On 24 February 2018 at 15:47 Neil Jerram  wrote:

[...]
>> >  Feb 24 10:24:24 arudy dovecot[1712]: imap(neil): Warning:
>> > Transaction log file
>> > /home/neil/dovecot-mail/.imap/INBOX/dovecot.index.log was locked
>> > for 98 seconds (Mailbox was synchronized)
[...]

> You are using mbox format. This is ... bit slow. =)
>
> When you move mails between mbox files, it has to rewrite the entire mbox 
> file every time. You should probably start using maildir or sdbox instead.

Ah, right, thanks.

My INBOX file has 22990 messages.  Is the slowness that I am seeing
definitely expected for an mbox of that size?  (It may also be relevant
that the HDD it's stored on is pretty old now, and has been known to
report SMART errors...)

If so, I'll start looking at how to migrate, given that my system is
Postfix + Dovecot.  If you have any particular recommendations or
migration pointers for a system like that, I'd appreciate them.

Best wishes - Neil


Re: Really slow IMAP performance

2018-02-24 Thread Aki Tuomi

> On 24 February 2018 at 15:47 Neil Jerram  wrote:
> 
> 
> Aki Tuomi  writes:
> 
> >  On 24 February 2018 at 12:45 Neil Jerram < n...@ossau.homelinux.net> 
> > wrote: 
> >
> >  Please could you help me to understand and fix why my dovecot IMAP 
> >  performance is so bad? I've read through a lot of the 
> >  performance-related material on the website, but I don't think that any 
> >  of it could account for slowness at the level that I am seeing. 
> >
> >  The simplest scenario is moving a message from my Inbox to another IMAP 
> >  folder. Using Gnus as the client, the whole UI freezes for about 2 
> >  minutes (which I assume is until the move is complete), and journalctl 
> >  on the dovecot server says: 
> >
> >  Feb 24 10:24:24 arudy dovecot[1712]: imap(neil): Warning: Transaction log 
> > file /home/neil/dovecot-mail/.imap/INBOX/dovecot.index.log was locked for 
> > 98 seconds (Mailbox was synchronized) 
> >
> >  or the same message with (rotating while syncing). 
> >
> >  There must be something badly wrong in my setup, or perhaps in the spec 
> >  of the server that dovecot is running on. What should I look at to 
> >  start understanding this better? 
> >
> >  Many thanks - Neil 
> >
> > Can you tell a bit more about your environment? Sounds like io issue 
> 
> Thanks for your reply.  I'm not sure exactly what you have in mind, but
> here are some starting points:
> 
> arudy:~# uname -a
> Linux arudy 4.13.0-1-686-pae #1 SMP Debian 4.13.4-2 (2017-10-15) i686 
> GNU/Linux
> 
> arudy:~# dovecot -n
> # 2.2.32 (dfbe293d4): /etc/dovecot/dovecot.conf
> # Pigeonhole version 0.4.20 (7cd71ba)
> # OS: Linux 4.13.0-1-686-pae i686 Debian buster/sid 
> auth_mechanisms = plain login
> auth_username_format = %Ln
> auth_verbose = yes
> login_trusted_networks = 192.168.11.8
> mail_access_groups = mail
> mail_fsync = never
> mail_location = mbox:~/dovecot-mail:INBOX=/var/mail/%u

You are using mbox format. This is ... bit slow. =)

When you move mails between mbox files, it has to rewrite the entire mbox file 
every time. You should probably start using maildir or sdbox instead.

Aki


Re: Really slow IMAP performance

2018-02-24 Thread Neil Jerram
Aki Tuomi  writes:

>  On 24 February 2018 at 12:45 Neil Jerram < n...@ossau.homelinux.net> wrote: 
>
>  Please could you help me to understand and fix why my dovecot IMAP 
>  performance is so bad? I've read through a lot of the 
>  performance-related material on the website, but I don't think that any 
>  of it could account for slowness at the level that I am seeing. 
>
>  The simplest scenario is moving a message from my Inbox to another IMAP 
>  folder. Using Gnus as the client, the whole UI freezes for about 2 
>  minutes (which I assume is until the move is complete), and journalctl 
>  on the dovecot server says: 
>
>  Feb 24 10:24:24 arudy dovecot[1712]: imap(neil): Warning: Transaction log 
> file /home/neil/dovecot-mail/.imap/INBOX/dovecot.index.log was locked for 98 
> seconds (Mailbox was synchronized) 
>
>  or the same message with (rotating while syncing). 
>
>  There must be something badly wrong in my setup, or perhaps in the spec 
>  of the server that dovecot is running on. What should I look at to 
>  start understanding this better? 
>
>  Many thanks - Neil 
>
> Can you tell a bit more about your environment? Sounds like io issue 

Thanks for your reply.  I'm not sure exactly what you have in mind, but
here are some starting points:

arudy:~# uname -a
Linux arudy 4.13.0-1-686-pae #1 SMP Debian 4.13.4-2 (2017-10-15) i686 GNU/Linux

arudy:~# dovecot -n
# 2.2.32 (dfbe293d4): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.20 (7cd71ba)
# OS: Linux 4.13.0-1-686-pae i686 Debian buster/sid 
auth_mechanisms = plain login
auth_username_format = %Ln
auth_verbose = yes
login_trusted_networks = 192.168.11.8
mail_access_groups = mail
mail_fsync = never
mail_location = mbox:~/dovecot-mail:INBOX=/var/mail/%u
namespace inbox {
  inbox = yes
  location = 
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix = 
}
passdb {
  driver = pam
}
plugin {
  antispam_backend = dspam
  antispam_dspam_args = --deliver;--user;%u
  antispam_dspam_binary = /usr/bin/dspam
  antispam_signature = X-DSPAM-Signature
  antispam_signature_missing = error
  antispam_spam = Spam
  antispam_trash = trash;Trash;Deleted Items; Deleted Messages
  fts = solr
  fts_solr = url=http://localhost:8080/solr/
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
postmaster_address = postmas...@ossau.homelinux.net
protocols = " imap lmtp"
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0666
user = postfix
  }
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0600
user = postfix
  }
}
ssl_cert = 

Re: Really slow IMAP performance

2018-02-24 Thread Aki Tuomi


 
 
  
   
  
  
   
> On 24 February 2018 at 12:45 Neil Jerram <n...@ossau.homelinux.net> wrote:
>
> Please could you help me to understand and fix why my dovecot IMAP
> performance is so bad? I've read through a lot of the
> performance-related material on the website, but I don't think that any
> of it could account for slowness at the level that I am seeing.
>
> The simplest scenario is moving a message from my Inbox to another IMAP
> folder. Using Gnus as the client, the whole UI freezes for about 2
> minutes (which I assume is until the move is complete), and journalctl
> on the dovecot server says:
>
> Feb 24 10:24:24 arudy dovecot[1712]: imap(neil): Warning: Transaction log file /home/neil/dovecot-mail/.imap/INBOX/dovecot.index.log was locked for 98 seconds (Mailbox was synchronized)
>
> or the same message with (rotating while syncing).
>
> There must be something badly wrong in my setup, or perhaps in the spec
> of the server that dovecot is running on. What should I look at to
> start understanding this better?
>
> Many thanks - Neil

Can you tell a bit more about your environment? Sounds like io issue

---
Aki Tuomi
   
 



Really slow IMAP performance

2018-02-24 Thread Neil Jerram
Please could you help me to understand and fix why my dovecot IMAP
performance is so bad?  I've read through a lot of the
performance-related material on the website, but I don't think that any
of it could account for slowness at the level that I am seeing.

The simplest scenario is moving a message from my Inbox to another IMAP
folder.  Using Gnus as the client, the whole UI freezes for about 2
minutes (which I assume is until the move is complete), and journalctl
on the dovecot server says:

Feb 24 10:24:24 arudy dovecot[1712]: imap(neil): Warning: Transaction log file 
/home/neil/dovecot-mail/.imap/INBOX/dovecot.index.log was locked for 98 seconds 
(Mailbox was synchronized)

or the same message with (rotating while syncing).

There must be something badly wrong in my setup, or perhaps in the spec
of the server that dovecot is running on.  What should I look at to
start understanding this better?

Many thanks - Neil


Re: Optimizing search performance for mobile devices / web mailer / general - solr plugin config

2018-02-23 Thread Tanstaafl
On Fri Feb 23 2018 03:51:37 GMT-0500 (Eastern Standard Time), Peter
Chiochetti  wrote:
> There is a trick to have messages indexed on arrival, instead of at 
> mailbox access, "fts_autoindex = yes" in the plugin section. This is not 
> mentioned in the dovecot wiki page but might be useful.

Not sure why you would say that...

That setting is mentioned specifically twice...

https://wiki.dovecot.org/Plugins/FTS


Re: Optimizing search performance for mobile devices / web mailer / general - solr plugin config

2018-02-23 Thread Peter Chiochetti

Hello Götz,

As an intermediate skill level admin I found it not too hard. The 
dovecot part was easier than the solr one, mostly as I am stuck on an 
old system.


Solr indeed is blazingly fast. Searching several imap (sub)folders slows 
it down a bit, because imap always searches a single one and then 
another one and so on. A proposed RFC to tackle that has not moved forward.


There is a trick to have messages indexed on arrival, instead of at 
mailbox access, "fts_autoindex = yes" in the plugin section. This is not 
mentioned in the dovecot wiki page but might be useful.
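
In config terms that is (a minimal sketch, with fts = solr as used in the 
setups discussed here):

plugin {
  fts = solr
  fts_autoindex = yes
}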


In solr schema.xml I put some extra "copyField" stanzas, so from, to, 
and subject get indexed with body. My users are mostly on thunderbird, 
which only does body searches on the server.
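
A sketch of those stanzas (field names as in the stock dovecot fts-solr 
schema; adjust them to your schema.xml):

<copyField source="from" dest="body"/>
<copyField source="to" dest="body"/>
<copyField source="subject" dest="body"/>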


Happy Hacking

Peter


Am 23.02.18 um 08:55 schrieb Götz Reinicke:

Hi all,

we have run dovecot for a long time now with no complaints from the users … 
until this week. Some users say the search in mail folders from iPhone 
(which only stores a few mails and mostly searches on the server, as far 
as I know) or our web mailer (SOGo, which currently also searches on the 
imap server) is „slow“.


As this is sort of individual experience, I was thinking of ways to 
speed up the search and came across the fts_solr plugin.


My question is, can I „just“ configure the fts_solr plugin as described 
at the dovecot wiki? https://wiki.dovecot.org/Plugins/FTS/Solr


How difficult is it to set up a solr server for that purpose? As our 
current mail hardware is not busy at all, is it ok to install Solr on 
the same hardware/server?


Maybe someone using that setup can give me some hints?

And: How hard is it to switch back to the „built-in“ default search if 
we don’t see any benefit from hosting a solr server too.



Thanks for feedback and suggestions . Regards . Götz




Optimizing search performance for mobile devices / web mailer / general - solr plugin config

2018-02-23 Thread Götz Reinicke
Hi all,

we have run dovecot for a long time now with no complaints from the users … until 
this week. Some users say the search in mail folders from iPhone (which only 
stores a few mails and mostly searches on the server, as far as I know) or our web 
mailer (SOGo, which currently also searches on the imap server) is „slow“.

As this is sort of individual experience, I was thinking of ways to speed up 
the search and came across the fts_solr plugin.

My question is, can I „just“ configure the fts_solr plugin as described at the 
dovecot wiki? https://wiki.dovecot.org/Plugins/FTS/Solr 


How difficult is it to set up a solr server for that purpose? As our current 
mail hardware is not busy at all, is it ok to install Solr on the same 
hardware/server?

Maybe someone using that setup can give me some hints?

And: How hard is it to switch back to the „built-in“ default search if we don’t 
see any benefit from hosting a solr server too.


Thanks for feedback and suggestions . Regards . Götz






Re: Dovecot Performance Issues on VMWARE Esxi

2017-10-06 Thread Aki Tuomi


On 04.10.2017 16:39, Holger wrote:
> Hello,
>
> I'm new to dovecot, but have serious issues with dovecot and IMAP. 
>
> I'm struggling with low IMAP performance but have no clue where to start my
> search.
>
> It is a small office setup. 10 mail accounts. Mine is the biggest with 4GB in
> size. But searching for mail takes several seconds... To switch to a specific
> folder also takes some seconds. Too long in my opinion. 

Do you have fts turned on? Even fts with lucene would help.

Aki


Re: Dovecot Performance Issues on VMWARE Esxi

2017-10-06 Thread Jonas Björklund


On Wed, 4 Oct 2017, Holger wrote:


I use esxi 6.0 with local datastorage lenovo megaraid raid5 3x2TB SAS 7.2K.


This is probably your problem. Migrate to raid6 or raid10 on SSD. Or at least 
raid10 on fast 15K rpm SAS disks.

/Jonas


Dovecot Performance Issues on VMWARE Esxi

2017-10-06 Thread Holger
Hello,

I'm new to dovecot, but have serious issues with dovecot and IMAP. 

I'm struggling with low IMAP performance but have no clue where to start my
search.

It is a small office setup. 10 mail accounts. Mine is the biggest with 4GB in
size. But searching for mail takes several seconds... To switch to a specific
folder also takes some seconds. Too long in my opinion. 



I use esxi 6.0 with local datastorage lenovo megaraid raid5 3x2TB SAS 7.2K.

On the dovecot appliance i see high iowait. Is this an issue with dovecot or
the VM config?


I would appreciate any help...



--
Sent from: http://dovecot.2317879.n4.nabble.com/


Re: Slow performance with large folders over the Internet

2017-03-31 Thread Daniel Tröder
On 03/31/2017 12:03 AM, Shawn Heisey wrote:
> Dovecot package version is 1:1.2.15-7+deb6u1.  It is in Debian 6.0.10,
> using the Debian package.
> 
> The server is in my basement at home, and is exposed to the Internet so
> I can fully access my mail from anywhere.  I use IMAP for reading mail.
> 
> I have a number of folders in my mailbox that have thousands of messages
> in them, from mailing lists.
> 
> When I'm at home, I have a LAN connection to the server.  It goes
> through a Cisco firewall that limits the connection speed to 100Mb/s.
> In this situation, I can open a folder with 25000 messages in it, click
> on the next unread message that Thunderbird did not know about before,
> and within a second or two, the message will download, allowing me to
> view it and reply.
> 
> When I'm at work, with highly variable network latency between
> Thunderbird and the server, doing exactly the same thing takes a LOT
> longer.  I have seen it take as long as 15 minutes for a single message.
>  If I open a folder with only a few messages in it, it is fast.
> 
> The server is not overloaded -- I can log into it with ssh and use "mutt
> -f" to open a folder directly.  Loading thousands of messages into mutt
> takes a while, but I have no difficulty using the ssh connection and
> running commandline programs.
> 
> This suggests that the IMAP communication between the server and the
> client involves a large amount of back and forth communication when the
> message count in the folder is high, possibly something for every
> message in the folder.  It happens quickly on a LAN but crawls on a
> connection with high latency.  I can understand it taking a few seconds
> longer on a high-latency link, but it takes minutes.
> 
> I do plan on building a new server and migrating to Dovecot 2.x, but I
> haven't had the time to work on that.
> 
> Is this a known problem? If so, is it fixed in 2.x?
> 
> Thanks,
> Shawn
This sounds like your company's firewall attempting a MITM attack or
something similar. Just a wild guess.

If the SSH-connection is good (probably ignored by the firewall or maybe
even prioritized), then forward your IMAP-traffic through it and see if
the problem persists. This is not meant as a solution, but to help
analyze the problem.

# ssh -L 10993:127.0.0.1:993 you@your.server
Then connect with Thunderbird to 127.0.0.1:10993.
You could also use :143, the SSH-tunnel is already encrypted.

Greetings
Daniel





Re: Slow performance with large folders over the Internet

2017-03-31 Thread Gerard Ranke
On 03/31/2017 12:03 AM, Shawn Heisey wrote:
> Dovecot package version is 1:1.2.15-7+deb6u1.  It is in Debian 6.0.10,
> using the Debian package.
> 
> The server is in my basement at home, and is exposed to the Internet so
> I can fully access my mail from anywhere.  I use IMAP for reading mail.
> 
> I have a number of folders in my mailbox that have thousands of messages
> in them, from mailing lists.
> 
> When I'm at home, I have a LAN connection to the server.  It goes
> through a Cisco firewall that limits the connection speed to 100Mb/s.
> In this situation, I can open a folder with 25000 messages in it, click
> on the next unread message that Thunderbird did not know about before,
> and within a second or two, the message will download, allowing me to
> view it and reply.
> 
> When I'm at work, with highly variable network latency between
> Thunderbird and the server, doing exactly the same thing takes a LOT
> longer.  I have seen it take as long as 15 minutes for a single message.
>  If I open a folder with only a few messages in it, it is fast.
> 
> The server is not overloaded -- I can log into it with ssh and use "mutt
> -f" to open a folder directly.  Loading thousands of messages into mutt
> takes a while, but I have no difficulty using the ssh connection and
> running commandline programs.
> 
> This suggests that the IMAP communication between the server and the
> client involves a large amount of back and forth communication when the
> message count in the folder is high, possibly something for every
> message in the folder.  It happens quickly on a LAN but crawls on a
> connection with high latency.  I can understand it taking a few seconds
> longer on a high-latency link, but it takes minutes.
> 
> I do plan on building a new server and migrating to Dovecot 2.x, but I
> haven't had the time to work on that.
> 
> Is this a known problem? If so, is it fixed in 2.x?
> 
> Thanks,
> Shawn
> 

Hi Shawn,

If you think that imap is the problem, you can do an imap session by
hand and see where the problems are:

openssl s_client -CApath /path/to/your/certs -connect your.server:143
-starttls imap

See e.g. http://wiki.linuxquestions.org/wiki/Testing_IMAP_via_telnet
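
Once connected, a minimal session typed by hand could look like this (tags, 
credentials and mailbox are placeholders); watching which step stalls narrows 
the problem down:

a1 LOGIN username password
a2 SELECT INBOX
a3 FETCH 1 BODY[HEADER]
a4 LOGOUT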

But from your mail I would say that you might have networking or
firewall issues. So I would be looking for interface errors, missing
ping packets, traceroute output and so on.
Best,

gerard


Slow performance with large folders over the Internet

2017-03-30 Thread Shawn Heisey
Dovecot package version is 1:1.2.15-7+deb6u1.  It is in Debian 6.0.10,
using the Debian package.

The server is in my basement at home, and is exposed to the Internet so
I can fully access my mail from anywhere.  I use IMAP for reading mail.

I have a number of folders in my mailbox that have thousands of messages
in them, from mailing lists.

When I'm at home, I have a LAN connection to the server.  It goes
through a Cisco firewall that limits the connection speed to 100Mb/s.
In this situation, I can open a folder with 25000 messages in it, click
on the next unread message that Thunderbird did not know about before,
and within a second or two, the message will download, allowing me to
view it and reply.

When I'm at work, with highly variable network latency between
Thunderbird and the server, doing exactly the same thing takes a LOT
longer.  I have seen it take as long as 15 minutes for a single message.
 If I open a folder with only a few messages in it, it is fast.

The server is not overloaded -- I can log into it with ssh and use "mutt
-f" to open a folder directly.  Loading thousands of messages into mutt
takes a while, but I have no difficulty using the ssh connection and
running commandline programs.

This suggests that the IMAP communication between the server and the
client involves a large amount of back and forth communication when the
message count in the folder is high, possibly something for every
message in the folder.  It happens quickly on a LAN but crawls on a
connection with high latency.  I can understand it taking a few seconds
longer on a high-latency link, but it takes minutes.

I do plan on building a new server and migrating to Dovecot 2.x, but I
haven't had the time to work on that.

Is this a known problem? If so, is it fixed in 2.x?

Thanks,
Shawn


[enhancement] fts-solr low performance

2017-03-04 Thread azurit

Hi,

we have activated fts-solr about a week ago and immediately started to  
experience really *low* performance with MOVE and EXPUNGE commands.  
After several days of googling, tcpdumping and straceing i was able to  
find and resolve the problem.


We are using Dovecot 2.2.27 from Debian Jessie (jessie-backports),  
which is doing a soft commit in solr after every MOVE or EXPUNGE  
command - this behavior cannot be, currently, changed. The problem is  
that this was causing every MOVE/EXPUNGE to take about 6 seconds to  
complete. The problem appears to be the very old version of Solr -  
3.6.2 (!!). This is the only version which is shipped with the current  
(Jessie) and also the next (Stretch) version of Debian, don't ask me why,  
i don't understand it either. Solr versions below 4.0 do NOT  
support soft commits, so all commits are hard and this was the  
problem. Finally, i decided to patch our Dovecot to not send a commit  
at all and everything started to be super fast. I'm doing hard commits  
every minute via cron so the only consequence of this is that you  
cannot search for messages delivered less than a minute ago (which  
you, usually, don't need to do anyway).
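
The cron entry is just something like this (a sketch; host, port and core 
path depend on your Solr install):

* * * * * curl -s 'http://localhost:8080/solr/update?commit=true' >/dev/null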


While googling i also found out that Solr supports an autoCommit function  
(and from version 4.0 also autoSoftCommit), so there's no reason for  
Dovecot to handle this on its own (and potentially doing hundreds or  
thousands of soft commits every second) - you can just set Solr to,  
for example, do autoSoftCommit every second and autoCommit every minute:

https://cwiki.apache.org/confluence/display/solr/UpdateHandlers+in+SolrConfig#UpdateHandlersinSolrConfig-autoCommit
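
For Solr >= 4.0 this is a solrconfig.xml fragment along these lines (a sketch 
mirroring the every-second soft / every-minute hard scheme above):

<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>          <!-- hard commit every minute -->
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxTime>1000</maxTime>           <!-- soft commit every second -->
  </autoSoftCommit>
</updateHandler>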

Also this wiki page should be updated with a warning about old versions  
of Solr not supporting soft commits (you could also mention the  
auto[Soft]Commit function):

http://wiki2.dovecot.org/Plugins/FTS/Solr

I suggest allowing Solr commits to be completely disabled in Dovecot by  
configuration, so people like me can handle this easily. What do you  
think?


azur


Re: Dovecot performance and proxy loops with IPv6

2017-02-03 Thread Daniel Betz
Ok, got it.

I changed imap-login and pop3-login to the settings below, as shown in the 
Dovecot wiki for high-performance login mode.

service imap-login {
chroot = login
service_count = 0
client_limit = 600
process_limit = 100
process_min_avail = 16
}
service pop3-login {
chroot = login
service_count = 0
client_limit = 600
process_limit = 100
process_min_avail = 16
}


Dovecot performance and proxy loops with IPv6

2017-02-02 Thread Daniel Betz
Hello list,

i run a large mail setup here with some million mailboxes and have strange 
performance problems; i think i have overlooked or forgotten a simple 
setting.

Here are some details:

21 CentOS 7 Servers with dovecot 2.2.25 and ldap userdb/passdb via socket 
behind an hardware loadbalancer.
The storage behind it is an iSCSI storage with 4 10Gbit/s multipath paths, 
split into 10 TB volumes for each server with LVM and an xfs filesystem. No 
cluster FS.
Each server has about 60.000 to 75.000 mailboxes on it. Mailboxes can have up 
to 10 GByte of space.

The log says this sometimes, completely at random:
Feb  1 10:42:49 server1 dovecot: pop3-login: Error: net_connect_unix(pop3) 
failed: Resource temporarily unavailable - 
http://wiki2.dovecot.org/SocketUnavailable
Feb  1 10:42:50 server1 dovecot: pop3-login: Error: net_connect_unix(pop3) 
failed: Resource temporarily unavailable - 
http://wiki2.dovecot.org/SocketUnavailable
Feb  1 10:42:50 server1 dovecot: pop3-login: Error: net_connect_unix(pop3) 
failed: Resource temporarily unavailable - 
http://wiki2.dovecot.org/SocketUnavailable
Feb  1 10:42:50 server1  dovecot: pop3-login: Error: net_connect_unix(pop3) 
failed: Resource temporarily unavailable - 
http://wiki2.dovecot.org/SocketUnavailable
Feb  1 10:42:50 server1 dovecot: imap-login: Error: net_connect_unix(imap) 
failed: Resource temporarily unavailable - 
http://wiki2.dovecot.org/SocketUnavailable
Feb  1 10:42:50 server1 dovecot: pop3-login: Error: net_connect_unix(pop3) 
failed: Resource temporarily unavailable - 
http://wiki2.dovecot.org/SocketUnavailable

Sure, i have read the SocketUnavailable wiki page and changed some settings, but 
the errors are not gone.
Could you please look over my dovecot config and give me some tips or hints 
on what to change.

The next thing is: after adding IPv6 via DNS for the hosts and logging in with 
IPv6, i get a proxy loop.

Settings in nameserver:
server1.domain.com IN A 123.123.123.123
server1.domain.com IN AAAA 2001:123::1

The host entry comes from the ldap and says: mailHost: server1.domain.com

Imap login with IPv6 to server1.domain.com tries to proxy from 
server1.domain.com (IPv6) to server1.domain.com (IPv6) and then loops.
I have removed the IPv6 AAAA entries in the dns to stop these loops.
Sorry, but i have no logs for this anymore.

Thanks in advance,
Daniel


And here system configs and dovecot configs:

sysctl:

fs.inotify.max_user_instances = 65535
fs.inotify.max_user_watches = 16384

systemd startup with ulimit settings:

[Unit]
Description=Dovecot Mailservice IMAP/POP

[Service]
Type=simple
LimitCORE=0
LimitNPROC=500
LimitNOFILE=65535
LimitSTACK=81920
LimitDATA=infinity
LimitMEMLOCK=infinity
LimitRSS=infinity
LimitAS=infinity

ExecStart=/usr/local/dovecot2/sbin/dovecot -F -c 
/usr/local/dovecot2/etc/dovecot/dovecot.conf

[Install]
WantedBy=multi-user.target



dovecot-ldap.conf:

uris = ldapi://%2Fvar%2Frun%2Fldapi
dn = cn=xxx,o=domain,c=com
dnpass = x
auth_bind = no
ldap_version = 3
base = o=domain,c=com 
user_attrs = mail=user,mailMessageStore=home,\
mailQuota=quota_rule=*:storage=%$
iterate_filter= (|(mailHost=server1.domain.com)(mailHost=popserver1.domain.com))
user_filter = (&(accountstatus=active)(|(uid=%u)(mail=%u)))
pass_attrs = 
mail=user,userPassword=password,=proxy_maybe=y,mailHost=host,=destuser=%u[%r]
pass_filter = (&(accountstatus=active)(|(uid=%u)(mail=%u)))

dovecot.conf:

# 2.2.25 (7be1766): /usr/local/dovecot2/etc/dovecot/dovecot.conf
# OS: Linux 3.10.0-327.36.3.el7.x86_64 x86_64 CentOS Linux release 7.2.1511 
(Core)
auth_cache_negative_ttl = 1 mins
auth_cache_size = 64 M
auth_cache_ttl = 2 hours
auth_mechanisms = plain login
auth_username_chars =
auth_verbose = yes
base_dir = /var/run/dovecot/
debug_log_path = /dev/null
default_login_user = dovecot
disable_plaintext_auth = no
doveadm_password =  # hidden, use -P to show it
doveadm_port = 12345
first_valid_gid = 1001
first_valid_uid = 1001
info_log_path = /dev/stderr
lda_mailbox_autocreate = yes
lda_original_recipient_header = X-Envelope-To
log_path = /dev/stderr
log_timestamp =
login_log_format_elements = user=[%u] method=%m rip=%r lip=%l %c
mail_gid = 1001
mail_location = mdbox:~:INDEX=%h/INDEX
mail_plugins = "notify replication stats"
mail_uid = 1001
mbox_write_locks = fcntl
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  separator = .
  type = private
}
passdb {
  args = /usr/local/dovecot2/etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
plugin {
  quota = dict:User quota::file:%h/mdbox/dovecot-quota
  quota_warning = storage=85%% quota-warning 85 %u
  stats_refresh = 30 secs
  stats_track_cmds = yes
}
replication_max_conns = 30
sendmail_path = /usr/local/exim/bin/exim
service aggregator {
  fifo_listener replication-notify-fifo {
mode = 0666
user = popuser
  }
  unix_listener replication-notify {
mode = 0666
user = popuser
  }
}
service anvil {
  client_limit = 6
}
service auth {
  client_limit = 600

Re: Performance

2015-04-24 Thread Leonardo Rodrigues

On 24/04/15 08:26, absolutely_f...@libero.it wrote:

My question is: is better to use SQLite instead of MySQL?
Should I prefer dbox format?

Thank you in advance for your opinion!


While 10k accounts is not a few accounts, i wouldn't call that a 
LOT of accounts either. Assuming that the query cache is active on 
MySQL, probably almost all your queries are being answered directly from 
the cache and, if not that, your tables shouldn't be that big and after a 
few queries should be entirely in the cache memory of the Linux system. Your I/O 
costs on the MySQL side should be very very very low, so i really doubt that 
MySQL is part of your problem here.


Unless, of course, you have other heavy databases running on 
the MySQL instance your mail system is using...




--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it


Re: Performance

2015-04-24 Thread Roland van Laar

On 24-04-15 13:26, absolutely_f...@libero.it wrote:

Hi,

at moment I have this environment:

CentOS
nginx + phpfpm
Dovecot, with Maildir format
Postfix
Roundcube
MySQL backend
about 10000 mailusers
dual core Intel(R) Pentium(R) D CPU 3.00GHz
8 GB RAM
network storage device (Coraid), ext4 file system

I have no performance issue now, but I need to move to a different  server:

FreeBSD 10.1-RELEASE
nginx + phpfpm
Dovecot
Postfix
Roundcube
dual core Intel(R) Xeon(R) CPU5120  @ 1.86GHz
16 GB RAM
local storage with zfs file system

My question is: is better to use SQLite instead of MySQL?


Do you have a lot of writes?

With SQLite you can run into locking issues:
https://www.sqlite.org/lockingv3.html
Or use the Write-Ahead Logging:
https://www.sqlite.org/wal.html

Regards,

Roland


Should I prefer dbox format?

Thank you in advance for your opinion!


Performance

2015-04-24 Thread absolutely_f...@libero.it
Hi,

at moment I have this environment:

CentOS 
nginx + phpfpm
Dovecot, with Maildir format
Postfix
Roundcube 
MySQL backend
about 10000 mailusers
dual core Intel(R) Pentium(R) D CPU 3.00GHz
8 GB RAM
network storage device (Coraid), ext4 file system 

I have no performance issue now, but I need to move to a different  server:

FreeBSD 10.1-RELEASE
nginx + phpfpm
Dovecot
Postfix
Roundcube 
dual core Intel(R) Xeon(R) CPU5120  @ 1.86GHz
16 GB RAM
local storage with zfs file system

My question is: is better to use SQLite instead of MySQL?
Should I prefer dbox format?

Thank you in advance for your opinion!


Re: Performance impace of spawning shell processes from Dovecot [was: quota_over_flag examples?]

2015-04-16 Thread Gedalya

On 04/16/2015 09:09 PM, E.B. wrote:

Don't use bash, of course!

Hmm, well, I did not know about this. On CentOS--

lrwxrwxrwx. 1 root root 4 Apr  5 10:31 /bin/sh -> bash*

Can you state the reasons you say not to use bash, so I can google
them?


Some random links..

https://wiki.ubuntu.com/DashAsBinSh
https://lwn.net/Articles/343924/
http://www.cyberciti.biz/faq/debian-ubuntu-linux-binbash-vs-bindash-vs-binshshell/

My summary:

I use Debian.
dash is actually a Debian-specific creation. The problem with bash is 
that it's feature-rich and therefore slow to start and slow to execute. 
For non-interactive scripts, things like tab-completion or command 
history are not needed of course. Less-than-bash shells however also do 
not support more advanced bash syntax.


http://mywiki.wooledge.org/Bashism

/usr/bin/mysql is of course 3 times bigger than /bin/bash and for that 
matter is also guilty of being unnecessarily friendly to interactive 
users (via libreadline) ;-)


So I did a very crude test, and putting an 'echo select 1 | mysql' in a 
#!/bin/bash script is only ~20% slower than using #!/bin/sh (which is 
dash in my case).
Oh, and it looks like mysql -e blah is a bit faster under bash, but not 
under dash.
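
For the curious, a crude test along those lines (assuming credentials in 
~/.my.cnf; swap sh for bash to compare) is just:

time sh -c 'for i in $(seq 1 100); do echo "select 1" | mysql >/dev/null; done'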


I guess the differences are more meaningful when we're talking about 
more than (hardly even) one line of shell code.


Performance impace of spawning shell processes from Dovecot [was: quota_over_flag examples?]

2015-04-16 Thread E.B.
> > I can't find any posts on this list for peoples using quota_over_flag
> >
> > http://wiki2.dovecot.org/Quota/Configuration#Overquota-flag_.28v2.2.16.2B-.29
> >
> > If my userdb is sql what would be best script to use in terms of 
> > performance?
> > (I mean if over-quota-flag triggers script every time it changes and the 
> > script
> > calls CLI mysql client isn't all this so expensive to spawn a new shell 
> > session
> > which spawns a mysql client?)
>
> I have a post-login script updating a "lastlogin" timestamp every time a
> user logs in. This can happen many times per second in busy hours. The
> only noticeable load is on the mysql _server_ (namely, some I/O). The
> shell + mysql client load is not noticeable at all.

Thank you. Is this common for most people, i.e. that repeatedly spawning shell
scripts from Dovecot processes does not impact performance? I thought that's
why apps are written as daemons, especially for things run many times a second
as you say!

> Don't use bash, of course!

Hmm, well, I did not know about this. On CentOS--

lrwxrwxrwx. 1 root root 4 Apr  5 10:31 /bin/sh -> bash*

Can you state the reasons you say not to use bash, so I can google
them?
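
Coming back to my quota question, the kind of hook I mean is only a few lines, 
e.g. this hypothetical script (the table, the column and the username being 
passed as $1 are all made up):

#!/bin/sh
# hypothetical over-quota hook; schema and argument passing are invented
user="$1"
echo "UPDATE users SET over_quota='Y' WHERE userid='$user';" | mysql mailusers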


Re: Performance issue

2014-11-10 Thread Alessio Cecchi

Hi,

try starting dovecot with "ulimit -u 10240 -n 4" and increase the 
value in /proc/sys/fs/inotify/max_user_instances to around 1024.
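
A sketch of the inotify change (at runtime, plus making it persistent):

sysctl -w fs.inotify.max_user_instances=1024
echo 'fs.inotify.max_user_instances = 1024' >> /etc/sysctl.conf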


I also suggest using XFS instead of ext4; for example, after a crash XFS 
is immediately available again.


Ciao

Il 05/11/2014 16:07, absolutely_f...@libero.it ha scritto:

Hi,
Since few days I noticed very high load on my mailserver (Centos 6.6 64bit, 8 
GB RAM, 2 x  CPU 3.00GHz
I am using Dovecot + Postfix + Roundcube + Nginx.

I have about 1 users.
Spool is on network attached storage (Coraid).

File system is ext4 (mounted with noatime).

Problem appears almost every morning (while load is normal during afternoon).

I suspect that this could be related to some user who has so many messages in 
his mailbox.
How can I troubleshoot this?

Here some messages that I got in maillog:

Warning: Maildir: Scanning /var/spool/pop/domains/.it/Y/Maildir/new 
took 71 seconds (1 readdir()s, 1 rename()s to cur/)

Warning: Maildir /var/spool/pop/domains///Maildir/.Trash: 
Synchronization took 74 seconds (5 new msgs, 0 flag change attempts, 0 expunge 
attempts)

dovecot: imap(x...@.it): Warning: Inotify instance limit for user 89 (UID 
postfix) exceeded, disabling. Increase /proc/sys/fs/inotify/max_user_instances


tail: inotify cannot be used, reverting to polling: Too many open files

My relevant dovecot conf:

mail_location = maildir:/coraid-s2l2/domains
namespace {
   type = private
   separator = .
   prefix = INBOX.
   inbox = yes
}
mail_uid = 89
mail_gid = 89
mail_fsync = never
first_valid_uid = 89
first_valid_gid = 89
maildir_very_dirty_syncs = yes
mbox_write_locks = fcntl



thank you very much!


Performance issue

2014-11-05 Thread absolutely_f...@libero.it
Hi,
For the past few days I have noticed very high load on my mailserver (Centos 6.6 
64bit, 8 GB RAM, 2 x CPU 3.00GHz).
I am using Dovecot + Postfix + Roundcube + Nginx.

I have about 1 users.
Spool is on network attached storage (Coraid).

File system is ext4 (mounted with noatime).

Problem appears almost every morning (while load is normal during afternoon).

I suspect that this could be related to some user who has so many messages in 
his mailbox.
How can I troubleshoot this?

Here some messages that I got in maillog:

Warning: Maildir: Scanning /var/spool/pop/domains/.it/Y/Maildir/new 
took 71 seconds (1 readdir()s, 1 rename()s to cur/)

Warning: Maildir /var/spool/pop/domains///Maildir/.Trash: 
Synchronization took 74 seconds (5 new msgs, 0 flag change attempts, 0 expunge 
attempts)

dovecot: imap(x...@.it): Warning: Inotify instance limit for user 89 (UID 
postfix) exceeded, disabling. Increase /proc/sys/fs/inotify/max_user_instances


tail: inotify cannot be used, reverting to polling: Too many open files

My relevant dovecot conf:

mail_location = maildir:/coraid-s2l2/domains
namespace {
  type = private
  separator = .
  prefix = INBOX.
  inbox = yes
}
mail_uid = 89
mail_gid = 89
mail_fsync = never
first_valid_uid = 89
first_valid_gid = 89
maildir_very_dirty_syncs = yes
mbox_write_locks = fcntl



thank you very much!


Re: Dovecot mailstore performance tuning

2014-07-29 Thread Timo Sirainen
On 22 Jul 2014, at 04:57, Murray Trainer  wrote:

> We have a couple of dovecot director proxies and six backend mailstores
> each accessing mailboxes stored on five NFSv4 filesystems with about
> 1TB of mail on each in maildir format.  We have about 800 max users
> on each mailstore at peak times and performance appears to be starting to
> degrade at these times.  The mailstores are pretty recent hardware
> with 64GB of RAM and 24 cores.   The NFS storage is EMC VNX and we
> are doing about 250 I/O per sec upto max of 500 on each
> filesystem.   I need to squeeze more performance out of these
> servers whether that is in the NFS client, Dovecot or Linux OS/kernel
> areas.   We use LDAP for auth and I am doing some tuning in that
> area.   The NFS filesystems are mounted with the options below:
> 
> 10.11.0.238:/mailbox_store_01 on /home1 type nfs4
> (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,nordirplus,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.11.0.96,local_lock=none,addr=10.11.0.238)

Does relatime work with NFS? If yes, changing it to noatime would save some I/O.

maildir_very_dirty_syncs=yes should be helpful.

> # 2.2.9: /etc/dovecot/dovecot.conf

mailbox_list_index=yes might be useful, although it has had some further 
performance improvements since v2.2.13. I should try to make v2.2.14 soon..

>   quota = maildir

Dict file quota would be a bit faster than maildir++ quota.
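
Put together, the suggestions amount to roughly this (a sketch; the dict quota 
path is illustrative):

maildir_very_dirty_syncs = yes
mailbox_list_index = yes
plugin {
  quota = dict:User quota::file:%h/dovecot-quota
}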


Dovecot mailstore performance tuning

2014-07-21 Thread Murray Trainer
HI All,

We have a couple of dovecot director proxies and six backend mailstores
each accessing mailboxes stored on five NFSv4 filesystems with about
1TB of mail on each in maildir format.  We have about 800 max users
on each mailstore at peak times and performance appears to be starting to
degrade at these times.  The mailstores are pretty recent hardware
with 64GB of RAM and 24 cores.   The NFS storage is EMC VNX and we
are doing about 250 I/O per sec upto max of 500 on each
filesystem.   I need to squeeze more performance out of these
servers whether that is in the NFS client, Dovecot or Linux OS/kernel
areas.   We use LDAP for auth and I am doing some tuning in that
area.   The NFS filesystems are mounted with the options below:

10.11.0.238:/mailbox_store_01 on /home1 type nfs4
(rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,nordirplus,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.11.0.96,local_lock=none,addr=10.11.0.238)

My dovecot config is below.  I am not seeing any obvious issues in
the server logs.    Any suggestions on how to improve performance
would be appreciated.

Thanks

Murray

# doveconf -n
# 2.2.9: /etc/dovecot/dovecot.conf
doveconf: Warning: service auth { client_limit=40960 } is lower than
required under max. load (41280)
doveconf: Warning: service anvil { client_limit=40970 } is lower than
required under max. load (41183)
# OS: Linux 3.13-0.bpo.1-amd64 x86_64 Debian 7.5
auth_cache_size = 64 M
auth_cache_ttl = 2 hours
auth_failure_delay = 0
auth_mechanisms = plain login
auth_username_chars =
"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!#$-=?^_{}~./@+%"
auth_username_translation = +@
auth_worker_max_count = 256
base_dir = /var/run/dovecot/
disable_plaintext_auth = no
first_valid_gid = 1001
first_valid_uid = 1001
mail_fsync = always
mail_location = maildir:~/
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation subaddress comparator-i;ascii-numeric
relational regex imap4flags copy include variables body enotify
environment mailbox date ihave
mmap_disable = yes
namespace {
  inbox = yes
  location =
  prefix = INBOX.
  type = private
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  acl = vfile
  quota = maildir
  quota_rule = INBOX.Trash:ignore
}
protocols = " imap lmtp sieve pop3"
service anvil {
  client_limit = 40970
}
service auth-worker {
  user = dovecot
}
service auth {
  client_limit = 40960
  unix_listener auth-userdb {
    group = mail
    mode = 0666
    user = dovecot
  }
}
service imap-login {
  chroot = login
  inet_listener imap {
    address = *, [::]
    port = 143
  }
  inet_listener imaps {
    address = *
    port = 993
    ssl = yes
  }
  process_limit = 20480
  process_min_avail = 32
  service_count = 1
  user = dovecot
  vsz_limit = 256 M
}
service imap {
  process_limit = 40960
  vsz_limit = 256 M
}
service lmtp {
  inet_listener lmtp {
    address = 27.54.95.41
    port = 24
  }
  process_min_avail = 32
}
service managesieve-login {
  client_limit = 40960
  process_limit = 120
  process_min_avail = 5
  service_count = 0
  vsz_limit = 64 M
}
service managesieve {
  process_limit = 4096
  vsz_limit = 256 M
}
service pop3-login {
  chroot = login
  inet_listener pop3 {
    address = *, [::]
    port = 110
  }
  inet_listener pop3s {
    address = *
    port = 995
    ssl = yes
  }
  process_limit = 20480
  process_min_avail = 32
  service_count = 1
  user = dovecot
  vsz_limit = 256 M
}
service pop3 {
  process_limit = 40960
  vsz_limit = 256 M
}
ssl_cert =


Re: [Dovecot] Shared mailboxes / IMAP folder performance

2014-01-21 Thread Robert Schetterer
Am 21.01.2014 18:09, schrieb Sebastian Schlatow:
> Am 21.01.2014 17:51, schrieb Robert Schetterer:
>> Am 21.01.2014 17:31, schrieb Sebastian Schlatow:
>>> Hello,
>>>
>>> how performant is an IMAP shared folder / mailbox if it contains 2
>>> million mails? Is it possible two have such a quantity of mails in a
>>> shared folder? Is it possible to search that shared folder for mails in
>>> a fast way?
>>>
>>> Regards
>>> Sebastian
>>>
>> there might be no ultimate answer for this, because it might not depend on
>> the number of mails only; there might be other complex setup stuff
>> involved, and in the end it depends on which client you use to search. Why not
>> simply test it with a test server? It shouldn't take much time.
>>
>>
>> Best Regards
>> MfG Robert Schetterer
>>
> Thanks for your quick reply. As a client Thunderbird, Evolution and
> Outlook should be used. In rare cases maybe mobile clients on iOS and
> Android. So is it in principle possible to have it performant? I asked
> because I wanted to know if it makes sense to set up a test system for that.
> 

Speculating: in an "ideal" dovecot server setup, the clients will become your
bottleneck.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Franziskanerstraße 15, 81669 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


Re: [Dovecot] Shared mailboxes / IMAP folder performance

2014-01-21 Thread Sebastian Schlatow
Am 21.01.2014 17:51, schrieb Robert Schetterer:
> Am 21.01.2014 17:31, schrieb Sebastian Schlatow:
>> Hello,
>>
>> how performant is an IMAP shared folder / mailbox if it contains 2
>> million mails? Is it possible to have such a quantity of mails in a
>> shared folder? Is it possible to search that shared folder for mails in
>> a fast way?
>>
>> Regards
>> Sebastian
>>
> there might be no ultimate answer for this, because it might not depend on
> the number of mails only; there might be other complex setup stuff
> involved, and in the end it depends on which client you use to search. Why not
> simply test it with a test server? It shouldn't take much time.
>
>
> Best Regards
> MfG Robert Schetterer
>
Thanks for your quick reply. As a client Thunderbird, Evolution and
Outlook should be used. In rare cases maybe mobile clients on iOS and
Android. So is it in principle possible to have it performant? I asked
because I wanted to know if it makes sense to set up a test system for that.


Re: [Dovecot] Shared mailboxes / IMAP folder performance

2014-01-21 Thread Robert Schetterer
Am 21.01.2014 17:31, schrieb Sebastian Schlatow:
> Hello,
> 
> how performant is an IMAP shared folder / mailbox if it contains 2
> million mails? Is it possible to have such a quantity of mails in a
> shared folder? Is it possible to search that shared folder for mails in
> a fast way?
> 
> Regards
> Sebastian
> 

there might be no ultimate answer for this, because it might not depend on
the number of mails only; there might be other complex setup stuff
involved, and in the end it depends on which client you use to search. Why not
simply test it with a test server? It shouldn't take much time.


Best Regards
MfG Robert Schetterer

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64
Franziskanerstraße 15, 81669 München

Sitz der Gesellschaft: München, Amtsgericht München: HRB 199263
Vorstand: Patrick Ben Koetter, Marc Schiffbauer
Aufsichtsratsvorsitzender: Florian Kirstein


[Dovecot] Shared mailboxes / IMAP folder performance

2014-01-21 Thread Sebastian Schlatow
Hello,

how performant is an IMAP shared folder / mailbox if it contains 2
million mails? Is it possible to have such a quantity of mails in a
shared folder? Is it possible to search that shared folder for mails in
a fast way?

Regards
Sebastian


[Dovecot] Slow authentication performance when switching folder

2014-01-13 Thread ra
Hello,

we have a problem with Dovecot 2.2.9 running on AIX 7.1 and compiled
with xlc. At first we configured passdb to use our ldap directory via
pam and experienced an Internal login failure like the following one

Jan 13 16:20:02 imap-login: Info: Internal login failure (pid=29818948
id=1) (internal failure, 1 successful auths): user=, method=PLAIN,
rip=xxx.xxx.xxx.xxx, lip=yyy.yyy.yyy.yyy, TLS, session=

I read that this error occurs if the last passdb returns a continue and
there is no other passdb to ask. We added two more passdb to rule out
that pam is the problem. We added ldap directly and, as a third, a fallback
passwd file, but we still get the Internal login failure. As far as I
can see this only occurs if I switch to another folder and I'm being
reauthenticated. Are there any suggestions on what is going wrong? Any
push in the right direction would be appreciated.

kind regards

Manuel

PS: This is the dump of our dovecot configuration file:

doveconf: Warning: service auth { client_limit=1000 } is lower than
required under max. load (32768)
doveconf: Warning: service anvil { client_limit=1000 } is lower than
required under max. load (24579)
# OS: AIX 1 00F7B83D4C00
auth_debug = yes
auth_mechanisms = plain login
auth_username_chars =
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890-
auth_username_format = %n
auth_username_translation =
AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz
auth_verbose = yes
base_dir = /var/run/dovecot-imap
default_process_limit = 8192
disable_plaintext_auth = no
first_valid_uid = 100
info_log_path = /mailbase/dovecot/2.2.9/log/dovecot_imap_debug.log
log_path = /mailbase/dovecot/2.2.9/log/dovecot_imap.log
login_greeting = University-Frankfurt-IMAP-Horde ready.
mail_access_groups = mhs
mail_debug = yes
mail_fsync = never
mail_location = mbox:~/:INBOX=/var/spool/mail/%u:INDEX=/var/mail-indexes/%u
mailbox_idle_check_interval = 90 secs
mbox_write_locks = fcntl
namespace {
  inbox = yes
  location =
  prefix =
  separator = /
  type = private
  name =
}
passdb {
  args = username_format=%u /mailbase/etc/passwd
  driver = passwd-file
}
passdb {
  args = %s
  driver = pam
}
plugin {
  stats_refresh = 30 secs
  stats_track_cmds = yes
}
service replication-notify-fifo {
  name = aggregator
}
service anvil-auth-penalty {
  name = anvil
}
service auth-worker {
  name = auth-worker
}
service auth-client {
  name = auth
}
service config {
  name = config
}
service dict {
  name = dict
}
service login/proxy-notify {
  name = director
}
service dns-client {
  name = dns_client
}
service doveadm-server {
  name = doveadm
}
service {
  inet_listener {
address = *
port = 0
name = imap
  }
  inet_listener {
address = *
port = 993
name = imaps
  }
  name = imap-login
}
service imap-urlauth {
  name = imap-urlauth-login
}
service imap-urlauth-worker {
  name = imap-urlauth-worker
}
service token-login/imap-urlauth {
  name = imap-urlauth
}
service login/imap {
  name = imap
}
service indexer-worker {
  name = indexer-worker
}
service indexer {
  name = indexer
}
service ipc {
  name = ipc
}
service lmtp {
  name = lmtp
}
service log-errors {
  name = log
}
service {
  inet_listener {
address = 10.1.1.40
port = 0
name = pop3
  }
  inet_listener {
address = *
port = 0
name = pop3s
  }
  name = pop3-login
}
service login/pop3 {
  name = pop3
}
service replicator-doveadm {
  name = replicator
}
service login/ssl-params {
  name = ssl-params
}
service stats-mail {
  name = stats
}
ssl_cert = 
