Re: [Dovecot] Best Cluster Storage

2011-01-26 Thread Luben Karavelov
On Thu, 13 Jan 2011 10:33:34 -0200, Henrique Fernandes sf.ri...@gmail.com wrote:

I use ocfs2 with 3 dovecots, one only for mailman.

We have problems with IO. Have about 4k active users.

We are now testing more ocfs2 clusters, because one of our theories is
that if all mail resides in only one ocfs2 cluster, it takes too long to
find the file. ocfs2, I guess, does not support indexes. Using ocfs2 1.4.



My last production environment using OCFS2 was with a quite recent
ocfs2/dovecot combination - Linux 2.6.35 and dovecot 1.2.15 with dbox mail
storage. We got a lot of problems - high IO, fragmentation, exponential
growth of access times, etc. We also tested with directory indexes, but
that didn't help much.

Finally we scrapped the ocfs2 setup and moved to a less advanced one: we
created a distinct volume on the SAN for every worker and formatted it
with XFS. The volumes got mounted on different mountpoints on the workers.
We set up Pacemaker as the cluster manager on the workers, so if a worker
dies, its volume gets mounted on another worker and its service IP is
brought up there.

As a result we are using a fraction of the IO compared with OCFS2, the
wait time on the workers dropped significantly, and the service got
better.
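
For illustration, a minimal sketch of what such a Pacemaker resource group
could look like via the crm shell - the device path, mountpoint, IP and
resource names here are hypothetical placeholders, not our actual
configuration:

  # one resource group per worker: the XFS volume plus its service IP
  crm configure primitive fs_worker1 ocf:heartbeat:Filesystem \
      params device="/dev/mapper/san-worker1" directory="/srv/mail/worker1" fstype="xfs"
  crm configure primitive ip_worker1 ocf:heartbeat:IPaddr2 \
      params ip="192.0.2.11" cidr_netmask="24"
  # group them so they fail over together to a surviving worker
  crm configure group mail_worker1 fs_worker1 ip_worker1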

You have different options for distributing mailboxes across the workers.
In our setup the load is distributed by domain, because we are servicing
hundreds of domains, so every domain's MX/POP3/IMAP records were changed
to point at the service IP of its worker. If there are a lot of mailboxes
in one domain, you should put a balancer in front that knows which server
each mailbox is located on and forwards the requests there.


So now we are getting smaller LUNs from our storage and mounting 3 ocfs2
clusters; that way we think the DLM will work better.



Sorry if I did not answer your question.


Anyway, we had some tests with NFS and it wasn't good either. We prefer to
stick with ocfs2.


My tests with NFSv3/NFSv4 were not good either, so NFS was not considered
an option.




We are balancing with IPVS, not using dovecot director.



With IPVS you cannot stick the same mailbox to the same server - this is
important in an ocfs2 setup because of the filesystem caches and locks.
We were using nginx as a proxy/balancer that could pin the same mailbox to
the same backend - we did this before there was a director service in
dovecot, but now you could use the director.
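
For anyone curious how such stickiness works: nginx's mail proxy asks an
auth_http endpoint where to send each connection. Below is a minimal
Python sketch of such an endpoint - my own illustration, not our
production code; the backend map and addresses are placeholders, and a
real handler would also verify credentials:

  from http.server import BaseHTTPRequestHandler, HTTPServer

  BACKENDS = {"alice@example.com": ("10.0.0.1", 143)}  # placeholder map
  DEFAULT = ("10.0.0.2", 143)

  class AuthHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          # nginx passes the login in the Auth-User header
          user = self.headers.get("Auth-User", "")
          host, port = BACKENDS.get(user, DEFAULT)
          self.send_response(200)
          self.send_header("Auth-Status", "OK")
          self.send_header("Auth-Server", host)  # backend chosen per mailbox
          self.send_header("Auth-Port", str(port))
          self.end_headers()

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 9000), AuthHandler).serve_forever()

The mail{} block in nginx.conf would then point auth_http at this service.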

Best regards

--
Luben Karavelov


Re: [Dovecot] maildir-dbox hybrid

2010-10-20 Thread Luben Karavelov
On Tue, 19 Oct 2010 19:11:20 +0100, Timo Sirainen t...@iki.fi wrote:
 On Mon, 2010-10-18 at 21:56 +0300, Luben Karavelov wrote:
 On Sun, 17 Oct 2010 22:23:29 +0100, Timo Sirainen t...@iki.fi wrote:
  On 17.10.2010, at 17.52, Roland Stuehmer wrote:
 
  How can I convert old mailboxes in maildir-dbox hybrid to a current dbox?
 
  Does this help? http://dovecot.org/list/dovecot/2010-September/053012.html

 If I understand it correctly, a dbox/maildir hybrid is when some messages
 are stored in maildir format and some in dbox format, isn't it?
 
 Right.
 
 The conversion, as described in the suggested post, seems like a lot of
 trouble.
 
 Yeah. 
 
 So, I have some questions:

 In the first scenario, if I redeliver the maildir-format messages, will
 users that use POP3 and store their mail on the server re-download all
 the redelivered messages? (If it matters, my pop3_uidl_format =
 %08Xu%08Xv.)
 
 Yes, pop3 users will redownload mails.
 
 The other option - converting the dboxes back to maildir, moving the old
 maildir messages into the new maildir, manually adding them to the new
 uid-list and finally converting the new maildir to sdbox - seems quite
 fragile.
 
 Yeah. Not very nice either. Here's a 3rd option: Apply the attached
 patch to v1.2 and then open every mailbox of every user (write a script
 or something). It should convert all maildir files to dbox files. Try
 with one account first to make sure it works. :)

Thanks Timo, I will try the patch and report back.
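
In case it's useful to others, here is the rough shape of the script I
have in mind - only a sketch; the user list and mail location are
hypothetical, it reuses the pre-login imap binary the way the pop3 binary
is invoked elsewhere in this thread, and mailbox names with spaces would
need more careful parsing:

  # SELECT every mailbox of every user once, so the patch converts the files
  while read user; do
    for box in $(printf '1 LIST "" *\n2 LOGOUT\n' | \
        MAIL=maildir:/var/mail/$user /usr/lib/dovecot/imap | \
        awk '/^\* LIST/ {print $NF}'); do
      printf '1 SELECT %s\n2 LOGOUT\n' "$box" | \
        MAIL=maildir:/var/mail/$user /usr/lib/dovecot/imap
    done
  done < userlist.txt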

-- 
Luben Karavelov
Research and development
Spectrum Net JSC

36, D-r G. M. Dimitrov Blvd.
1797 Sofia
Mobile: +359 884332840
url: www.spnet.net


Re: [Dovecot] maildir-dbox hybrid

2010-10-20 Thread Luben Karavelov
I have tried the patch with a single dbox and it works. But when I try to
access it via POP3 I get this error:

MAIL=dbox:/var/www/210870/mail/156897-dbox /usr/lib/dovecot/pop3
pop3(areonfresh): Error: Getting size of message UID=2428 failed
-ERR [IN-USE] Couldn't sync mailbox.
pop3(areonfresh): Error: Couldn't init INBOX: Can't sync mailbox:
Messages keep getting expunged
pop3(areonfresh): Info: Mailbox init failed top=0/0, retr=0/0,
del=0/2055, size=286255398

With IMAP there is no such problem. This is an offline machine, just for
testing, so nobody is accessing the mail. I have seen such messages on
other boxes too. How can I clean up these errors?

Best regards and thanks for the great work.

-- 
Luben Karavelov
Research and development
Spectrum Net JSC

36, D-r G. M. Dimitrov Blvd.
1797 Sofia
Mobile: +359 884332840
url: www.spnet.net


Re: [Dovecot] maildir-dbox hybrid

2010-10-18 Thread Luben Karavelov
On Sun, 17 Oct 2010 22:23:29 +0100, Timo Sirainen t...@iki.fi wrote:
 On 17.10.2010, at 17.52, Roland Stuehmer wrote:
 
 How can I convert old mailboxes in maildir-dbox hybrid to a current dbox?
 
 Does this help? http://dovecot.org/list/dovecot/2010-September/053012.html

If I understand it correctly, a dbox/maildir hybrid is when some messages
are stored in maildir format and some in dbox format, isn't it?

The conversion, as described in the suggested post, seems like a lot of
trouble. So, I have some questions:

In the first scenario, if I redeliver the maildir-format messages, will
users that use POP3 and store their mail on the server re-download all the
redelivered messages? (If it matters, my pop3_uidl_format = %08Xu%08Xv.)

The other option - converting the dboxes back to maildir, moving the old
maildir messages into the new maildir, manually adding them to the new
uid-list and finally converting the new maildir to sdbox - seems quite
fragile.

Best regards

-- 
Luben Karavelov
Research and development
Spectrum Net JSC

36, D-r G. M. Dimitrov Blvd.
1797 Sofia
Mobile: +359 884332840
url: www.spnet.net


Re: [Dovecot] Significant performance problems

2010-10-07 Thread Luben Karavelov
On Wed, 06 Oct 2010 21:42:57 -0700, Chris Hobbs
cho...@nhusd.k12.ca.us wrote:
 For documentation's sake, here's what I've done so far:
 
 I do have one more idea I'll throw out there. Everything I've got
 here is virtual. I only have the one Dovecot/Postfix server running
 now, and the impression I get from you all is that that should be
 adequate for my load. What would the collective opinion be of simply
 removing the NFS server altogether and mounting the virtual disk
 holding my messages directly to the dovecot server? I give up the
 ability to have a failover dovecot/postfix server, which was my
 motivation for using NFS in the first place, but a usable system
 probably trumps a redundant one.
 
 Chris
 

I have done some tests here showing that NFS is a major overhead
compared to a local filesystem on an iSCSI volume. I have tested only
NFS4 with Linux clients and server. Finally we went with a couple
of mail servers that mount an OCFS2 shared volume - this setup also
has some drawbacks in terms of complexity.

You could also achieve a redundant mail system with a local fs (XFS for
example) over an iSCSI volume - one server stays standby and will
mount the volume and bring up a floating IP if the primary goes down.
You could automate such a setup with heartbeat/pacemaker or another
cluster manager. Though, in such a setup you cannot load-balance
if you are serving only one mail domain.

Best regards

-- 
Luben Karavelov
Research and development
Spectrum Net JSC

36, D-r G. M. Dimitrov Blvd.
1797 Sofia
Mobile: +359 884332840
url: www.spnet.net


Re: [Dovecot] A new director service in v2.0 for NFS installations

2010-05-19 Thread luben karavelov
On Wed, 19 May 2010 10:51:06 +0200, Timo Sirainen t...@iki.fi wrote:
 The company here in Italy didn't really like such an idea, so I thought
 about making it more transparent and simpler to manage. The result is a
 new director service, which does basically the same thing, except without
 an SQL database. The idea is that your load balancer can redirect
 connections to one or more Dovecot proxies, which internally then figure
 out where the user should go. So the proxies act kind of like a secondary
 load balancer layer.

As I understand it, the first load balancer is just an IP balancer, not a
POP3/IMAP-aware balancer, isn't it?
 
 When a connection from a newly seen user arrives, it gets assigned to a
 mail server according to a function:
 
   host = vhosts[ md5(username) mod vhosts_count ]
 
 This way all of the proxies assign the same user to the same host without
 having to talk to each other. The vhosts[] is basically an array of
 hosts, except each host is initially listed there 100 times (vhost
 count=100). This vhost count can then be increased or decreased as
 necessary to change the host's load, probably automatically in future.
 
 The problem is then of course that if (v)hosts are added or removed, the
 above function will return a different host than was previously used for
 the same user. That's why there is also an in-memory database that keeps
 track of username -> (hostname, timestamp) mappings. Every new connection
 from a user refreshes the timestamp. Also, existing connections refresh
 the timestamp every n minutes. Once all connections are gone, the
 timestamp expires and the user is removed from the database.
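
(To make the mapping concrete, the assignment function above could be
sketched in Python roughly like this - the names are mine, not Dovecot's:)

  import hashlib

  def pick_host(username, vhosts):
      # vhosts: list of backend hosts, each listed vhost-count times
      digest = hashlib.md5(username.encode()).digest()
      return vhosts[int.from_bytes(digest, "big") % len(vhosts)]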
 

I have implemented a similar scheme here with an imap/pop3 proxy (nginx)
in front of the dovecot servers. What I have found to work best (for my
conditions) as a hashing scheme is a sort of weighted consistent hash.
Here is the algorithm I use:

On init, server add or server remove, you initialize a ring:

1. For every server:
   - seed the random number generator with crc32(IP of the server)
   - get N random numbers (where N = server weight) and put them in an
     array. Put random_number -> IP in another map/hash structure.
2. Sort the array. This is the ring.

For every redirect request:

1. Get the crc32 number of the mailbox.
2. Traverse the ring until you find a number that is bigger than the crc32
   number and was not yet visited.
3. Mark that number as visited.
4. Look up whether the server is already marked dead. If it is, go to 2.
5. Look up the number in the map/hash to find the IP of the server.
6. Redirect the client to that server.
7. If that server is not responding, mark it as dead and go to 2.

This way you do not need to synchronize state between balancers and
proxies. If you add or remove servers, very few clients get reallocated -
roughly the number of active clients divided by the number of servers. If
one server is not responding, the clients that should be directed to it
are all redirected to one and the same other server, without any need to
sync state between servers. (A Python sketch of the ring follows below.)
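
For concreteness, here is a minimal Python sketch of that ring - my own
illustration of the algorithm above, not the actual balancer code; the
IPs and weights in the usage lines are placeholders:

  import bisect, random, zlib

  def build_ring(servers):
      # servers: {ip: weight}; every server gets 'weight' points on the ring
      points, point_to_ip = [], {}
      for ip, weight in servers.items():
          rng = random.Random(zlib.crc32(ip.encode()))  # seed with crc32(IP)
          for _ in range(weight):
              p = rng.getrandbits(32)
              points.append(p)
              point_to_ip[p] = ip
      points.sort()  # the sorted array is the ring
      return points, point_to_ip

  def pick_server(mailbox, points, point_to_ip, dead):
      # walk the ring clockwise from crc32(mailbox), skipping dead servers
      h = zlib.crc32(mailbox.encode())
      start = bisect.bisect_right(points, h)
      for i in range(len(points)):
          ip = point_to_ip[points[(start + i) % len(points)]]
          if ip not in dead:
              return ip
      return None  # every server is marked dead

  # usage: points, ips = build_ring({"10.0.0.1": 100, "10.0.0.2": 50})
  #        server = pick_server("u...@example.com", points, ips, dead=set())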

This scheme also has some disadvantages - under certain circumstances,
different sessions to one mailbox can be handled by different servers in
parallel. My tests showed that this causes some performance degradation,
but no index corruption here (using OCFS2, not NFS).

So my choice was to trade correctness (no parallel sessions to different
servers) for simplicity (no state synchronization between servers).

 
 Finally, there are the doveadm commands that can be used to:
 
 1) List the director status:
 # doveadm director status
 mail server ip  vhosts  users
 11.22.3.44      100     1312
 12.33.4.55      50      1424
 
 2) Add a new mail server (defaults are in dovecot.conf):
 # doveadm director add 1.2.3.4
 
 3) Change a mail server's vhost count to alter its connection count (also
 works during adding):
 # doveadm director add 1.2.3.4 50
 
 4) Remove a mail server completely (because it's down):
 # doveadm director remove 1.2.3.4
 
 If you want to slowly move users away from a specific server, you can set
 its vhost count to 0 and wait for its user count to drop to zero. If the
 server is still working while doveadm director remove is called, new
 connections from the users on that server go to other servers while the
 old ones are still being handled.

That is a nice admin interface. Also, I have a question: what kind of
sessions does your implementation balance? I suppose imap/pop3. Is there a
plan for similar redirecting of LMTP connections based on the delivery
address?

Best regards and thanks for the great work
luben




Re: [Dovecot] looking for feedbacks on courier to dovecot

2010-05-08 Thread luben karavelov

On  8.05.2010 20:57, Mihamina Rakotomandimby wrote:

Arne K. Haaje a...@drlinux.no wrote:
This worked great, and as it preserves flags, users did not have to
re-download mail. You might want to tune some of the parameters, like
whether to subscribe to folders or not.
 

That's about the IMAP switch. Thank you.
What about any POP experience?
I have used courier-dovecot-migrate.pl. You have to use a recent version
of dovecot (>= 1.1) and set the uidl format in the pop3 configuration,
i.e. in the protocol pop3 section, add the following:


pop3_uidl_format = %08Xu%08Xv
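
The invocation I used was along these lines - from memory, so check the
script's --help for the exact flags; without --convert the script only
reports what it would do:

  courier-dovecot-migrate.pl --to-dovecot --recursive
  courier-dovecot-migrate.pl --to-dovecot --recursive --convert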

luben


Re: [Dovecot] performance of maildir on ocfs2

2010-04-30 Thread luben karavelov

On 29.04.2010 21:02, Timo Sirainen wrote:

On Mon, 2010-04-26 at 15:51 +0300, karavelov wrote:

   

3. My understanding is that OCFS2 uses a global lock for move/rename.
As you know, the Maildir format uses a lot of such operations. I think
that the dbox format (dovecot's native format) will be a better choice,
because there are no file moves/renames. I am planning a migration to
dbox now. If I had to start the service now, I would choose dbox for
mail storage.
 

Wonder what the performance difference is then between v2.0's
single-dbox and multi-dbox? I'd guess mdbox is faster.

   


Here are some benchmarks that were done with imaptest. The commands used
are:

imaptest host=rhp2 mbox=dovecot.mbox user=t...@example.com pass=test seed=123 secs=10
imaptest host=rhp2 mbox=dovecot.mbox user=t...@example.com pass=test seed=123 secs=10 logout=0


The volume is an iSCSI export (4 SATA disks in a stripe) mounted on an
imap test server (no other processes are running). In the OCFS2 setup, the
filesystem is also mounted on another node (2-node test cluster). The
other node was also idle.

Here are my results:

          Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe  Logo
          100%  50%  50% 100% 100% 100%  50% 100% 100% 100%  100%
                                    30%   5%
nologout   10  139  130   10  248  350   87  196  248  248         XFS    maildir
logout    227  121  127  227  216  323   60  170  216  221   454   XFS    maildir
nologout   10  733  713   10 1438 2094  467 1161 1438 1438         OCFS2  maildir
logout    584  300  282  584  547  780  170  428  547  580  1168   OCFS2  maildir

nologout   10  930  892   10 1825 2614  527 1489 1825 1825         OCFS2  dbox
logout    570  290  298  569  564  838  226  452  564  568  1140   OCFS2  dbox


DISCLAIMER: The Dovecot server is tuned for best performance with OCFS2 as
far as I can manage, because my current production setup is OCFS2-based.
XFS is included for comparison without much tuning. Mount options are:

XFS:   noatime,nodiratime,logbufs=8,logbsize=131072
OCFS2: noatime,data=writeback,commit=30

I also tested NFS4, but the results were disappointing, so I abandoned
further tests - no amount of tuning could make up a 10x difference.

My expectation is that pushing dbox into production will bring even bigger
gains than my tests show, because it will lower the internal OCFS2 locking
on move and rename.

My tests and benchmarks were done using v1.2.11. Maybe I should run some
benchmarks for mdbox too, using dovecot v2. My understanding is that dbox
is forward compatible with mdbox, so there will be no need to convert
mailboxes from dbox to mdbox. Is that so, or will there be another painful
migration of mailboxes from one format to another?

Best regards
luben



Re: [Dovecot] performance of maildir on ocfs2

2010-04-30 Thread luben karavelov

On  1.05.2010 00:32, Timo Sirainen wrote:


v1.2 dbox is similar to v2.0's dbox, but not identical. v2.0 dbox is simpler 
and faster. Also dbox and mdbox are different, although they share some code. 
http://wiki.dovecot.org/MailboxFormat/dbox

Anyway, v2.0 is supposed to be able to read v1.2's dbox, but 1) I haven't 
tested it recently and 2) that's only if the dbox doesn't contain any 
maildir-migration files (so all mail files are u.* files, no maildir files). 
I'm kind of hoping the dbox/maildir hybrids aren't all that popular and maybe I 
don't need to worry about them.. :)


I have done some tests here.

1st: There is a problem with imap + the quota plugin. The corresponding logs:

May  1 03:46:33 rho2 dovecot: imap(lu...@test.dpv.bg): Panic: file 
index-transaction.c: line 145 (index_transaction_rollback): assertion 
failed: (box->transaction_count > 0 || box->view->transactions == 0)
May  1 03:46:33 rho2 dovecot: imap(lu...@test.dpv.bg): Raw backtrace: 
/usr/lib/dovecot/libdovecot.so.0 [0x7f6e230114c2] -> 
/usr/lib/dovecot/libdovecot.so.0 [0x7f6e2301152a] -> 
/usr/lib/dovecot/libdovecot.so.0(i_error+0) [0x7f6e230118d3] -> 
/usr/lib/dovecot/libdovecot-storage.so.0 [0x7f6e232c0d24] -> 
/usr/lib/dovecot/modules/lib10_quota_plugin.so [0x7f6e21294021] -> 
/usr/lib/dovecot/modules/lib10_quota_plugin.so [0x7f6e21293a90] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(sdbox_sync_begin+0x45e) 
[0x7f6e232c2bee] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(sdbox_transaction_save_commit_pre+0x70) 
[0x7f6e232c33f0] -> /usr/lib/dovecot/libdovecot-storage.so.0 
[0x7f6e232c1138] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(mail_index_transaction_commit_full+0x97) 
[0x7f6e23290957] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(index_transaction_commit+0x8b) 
[0x7f6e232c0dbb] -> /usr/lib/dovecot/modules/lib10_quota_plugin.so 
[0x7f6e212940a4] -> 
/usr/lib/dovecot/libdovecot-storage.so.0(mailbox_transaction_commit_get_changes+
May  1 03:46:33 rho2 dovecot: master: service(imap): child 5621 killed 
with signal 6 (core dumps disabled)
May  1 03:46:33 rho2 dovecot: imap(lu...@test.dpv.bg): dbox: File 
unexpectedly lost: 
/var/www/149444/mail/122-dbox/mailboxes/INBOX/dbox-Mails/u.3226

...

So I have disabled all imap plugins except autocreate and have run the
dbox/mdbox tests with dovecot v2.0b4 and with v1.2.11. Here are the results:

nologout   10 1036        10 2111 3071  755 1680 2111 2111        1.2.11  dbox
logout    544  272  263  542  538  753  191  426  538  540  1088  1.2.11  dbox
nologout   10 1182 1182   10 2367 3389  808 1919 2367 2367        2.0b4   dbox
logout    531  266  265  529  517  720   76  414  517  526  1072  2.0b4   dbox
nologout   10 1074 1012   10 2087 3045  725 1622 2087 2087        2.0b4   mdbox
logout    504  265  242  503  491  660  135  397  491  502  1012  2.0b4   mdbox



So, in this test setup, there is not much difference between dbox and
mdbox. Maybe other setups will show different results. I have seen
different comparison proportions using different servers (pre-Core 64-bit
Xeons vs Core 2 Quads), even when using the same storage and filesystem.

When v2 stabilizes, I will consider migrating for the greater flexibility
(altstorage, LMTP, etc.).

Best regards and thanks for the great work
luben


Re: [Dovecot] performance of maildir on ocfs2

2010-04-26 Thread luben karavelov

On 26.04.2010 21:42, Philipp Snizek wrote:



So the bottom line is to use a filesystem such as XFS, distribute and
dedicate mailboxes to a number of backend imap servers, optimally with
direct access to the storage, and do imap proxying and load balancing in
front of those servers.


Then you should also balance incoming mail and local deliveries, and this
part is tricky. Also, there should be a heartbeat/pacemaker for
filesystem/IP/service failover.

Every choice is a compromise, so you should balance a lot of factors:
complexity, administrative overhead, FS limits, performance, stability, etc.


Best regards
luben


[Dovecot] Quota maildirsize and dbox

2010-04-25 Thread luben karavelov

Hello,

Is there any reason the maildirsize quota store should not work in dbox
folders, or in any other folder-based mailbox store?

I have patched a test version here and it seems to work (I added a dbox
check in src/plugins/quota/quota-maildir.c, line 785).

Could there be any complications or unexpected consequences of this setup?


Thanks in advance
luben