Re: [Dovecot] ldap idle connection timeout in DoveCot 1.0.13?

2012-04-11 Thread Aliet Santiesteban Sifontes
I had this problem running Dovecot 2.x where the LDAP servers are located in
another firewall zone (we use a Juniper SSG550). The problem was that the
firewall was dropping the idle LDAP connections, so client authentication in
Dovecot failed for a while until it eventually reconnected. Dovecot and the
OpenLDAP server never know that the firewall has dropped the connection
because, by default, the firewall does not send a TCP RST to the client or
the server. On Juniper/NetScreen you can work around this and speed up
detection by configuring the zone to send a reset back to the client and the
server. Check that you have the following on the firewall:

set flow tcp-mss
unset flow no-tcp-seq-check
set flow tcp-syn-check
unset flow tcp-syn-bit-check
set flow reverse-route clear-text prefer
set flow reverse-route tunnel always

Then edit your zone and enable the "If TCP non SYN, send RESET back" checkbox.
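On the ScreenOS CLI the same option can usually be enabled per zone; a sketch, assuming the LDAP servers sit in a zone named Untrust (the zone name is an example, not from the original post):

```
set zone Untrust tcp-rst
```

With this set, the firewall answers non-SYN packets on dead sessions with a RST, so both ends notice the dropped connection immediately.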

This fixed the delay for us; it would be a nice feature to have on the Dovecot side as well...
best regards



On 11 April 2012 at 11:36, Timo Sirainen t...@iki.fi wrote:

 On 11.4.2012, at 17.49, Zhou, Yan wrote:

  We are using Dovecot 1.0.13; it connects to an LDAP server for
 authentication. It seems that Dovecot keeps the idle LDAP connection open.

 Yes.

  Our firewall is terminating these connections after some period of
 inactivity (2 hours), and then we run into authentication problems. If we
 restart either LDAP or Dovecot, it is fine afterwards.
 
  Can we set some kind of LDAP idle connection timeout in Dovecot, e.g. in
  /etc/dovecot-ldap.conf? I do not see any configuration available for
 1.0.13.

 No. But if you upgrade to a newer Dovecot (v2.x probably) this is solved
 by automatic transparent reconnection.




Re: [Dovecot] How to define ldap connection idle

2011-11-07 Thread Aliet Santiesteban Sifontes
We checked with the firewall admins and they cannot change the drop action;
this model doesn't support reject, only drop. For testing, though, they
changed the LDAP protocol idle timeout from 30 minutes to never, so the
firewall never drops idle LDAP connections. We also verified the
clientidletimeout option in OpenLDAP, but it is set to 0, which means idle
connections are never closed. After testing again we still see the connection
hanging after user inactivity, so we will keep looking for other issues and
maybe do some packet captures to see what is really happening.
best regards; btw, an ldap_idle_disconnect = 30s setting would be great

2011/11/4 Timo Sirainen t...@iki.fi

 On Thu, 2011-11-03 at 11:52 -0400, Aliet Santiesteban Sifontes wrote:
  I'm having a problem with the Dovecot LDAP connection when the LDAP server
  is in another firewall zone: the firewall kills the LDAP connection after a
  certain period of inactivity. This is good from the firewall's point of
  view, but bad for Dovecot, because it never knows the connection has been
  dropped; this creates long timeouts in Dovecot until it finally reconnects,
  and in the meantime many users fail to authenticate. I have seen this kind
  of post on the list for a while but can't find a solution, so my question
  is: how can I define an LDAP idle time in Dovecot so that it reconnects
  before the firewall drops the connection, or simply closes the connection
  when idle, so that user authentication doesn't fail until Dovecot detects
  that the connection has hung? Is this a feature request, or is there
  already a configuration for this?

 Can't the firewall be changed to reject the LDAP packets instead of
 dropping them? Then Dovecot would immediately notice that the connection
 has died, and with a recent enough version it wouldn't even log an error
 about it.

 I guess some kind of an ldap_idle_disconnect = 30s setting could be
 added, but it's not a very high priority for me.





Re: [Dovecot] How to define ldap connection idle

2011-11-07 Thread Aliet Santiesteban Sifontes
We will try this as the next step to find a workaround. The problem with a
clientidletimeout of 5 minutes on the OpenLDAP server is that it is a global
server setting and has the net effect of changing replication from
refreshAndPersist to refreshOnly, which is not a welcome side effect. We will
look at other options; the best candidate is still an ldap_idle_disconnect on
the Dovecot side, or some other kind of logic able to detect this kind of
problem.
best regards

2011/11/7 Timo Sirainen t...@iki.fi

 If you set the OpenLDAP server to close idle clients sooner than the
 connection itself is dropped by the firewall (or whatever), then Dovecot
 sees the disconnection and won't hang. So you could try something like
 clientidletimeout = 5 mins.
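For reference, in a stock OpenLDAP slapd the directive that closes idle client connections is idletimeout, given in seconds; a minimal slapd.conf sketch, assuming a 5-minute limit (the value is an example):

```
# slapd.conf: drop client connections that stay idle longer than 300 seconds
idletimeout 300
```

A value of 0 (the OpenLDAP default) disables the timeout, matching the "never close" behavior described above.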






[Dovecot] Dovecot error on rhel 6.1 using GFS2(bug)

2011-07-03 Thread Aliet Santiesteban Sifontes
Hi, just to let all the people testing Dovecot on a RHEL 6.1 setup, where
nodes are Active/Active and share a GFS2 filesystem, know that there is a bug
in the latest RHEL 6.1 GFS2 kernel modules (and in the latest 6.0 updates)
which makes Dovecot crash a GFS2 filesystem, with corruption and other
related errors. The Red Hat people have posted a fix for the kernel, which is
in QA:

https://bugzilla.redhat.com/show_bug.cgi?id=712139

I just hope this helps somebody avoid wasting many nights looking for a
problem in their Dovecot config :) as I did...

best regards, Aliet


Re: [Dovecot] mmap in GFS2 on rhel 6.1

2011-06-12 Thread Aliet Santiesteban Sifontes
/0x40
 [81088660] ? worker_thread+0x0/0x2a0
 [8108dd96] ? kthread+0x96/0xa0
 [8100c1ca] ? child_rip+0xa/0x20
 [8108dd00] ? kthread+0x0/0xa0
 [8100c1c0] ? child_rip+0x0/0x20
  no_formal_ino = 468
  no_addr = 525144
  i_disksize = 65536
  blocks = 0
  i_goal = 525170
  i_diskflags = 0x
  i_height = 1
  i_depth = 0
  i_entries = 0
  i_eattr = 0
GFS2: fsid=MailCluster:indexes.0: gfs2_delete_inode: -5

If I change to different mailbox formats, they also hang; only the kernel
messages are a little different from the first post.
Any ideas?
Best regards



2011/6/11 Stan Hoeppner s...@hardwarefreak.com

 On 6/10/2011 11:24 PM, Aliet Santiesteban Sifontes wrote:
  Hello list, we continue our tests using Dovecot on a RHEL 6.1 cluster
  backend with GFS2; we are also using Dovecot as a director for user-node
  persistence. Everything was OK until we started stress testing the
  solution with imaptest: we had many deadlocks, cluster filesystem
  corruptions and hangs, especially in the index filesystem. We have
  configured the backend as if it were an NFS-like setup, but this seems
  not to work, at least on GFS2 on rhel 6.1.

 Actual _filesystem_ corruption is typically unrelated to user space
 applications.  You should be looking at a lower level for the cause,
 i.e. kernel, device driver, hardware, etc.  Please post details of your
 shared storage hardware environment, including HBAs, SAN array
 brand/type, if you're using GFS2 over DRBD, etc.

  We have a two node cluster sharing two GFS2 filesystem
  - Index GFS2 filesystem to store users indexes
  - Mailbox data on a GFS2 filesystem

 Experience of many users has shown that neither popular cluster
 filesystems such as GFS2/OCFS, nor NFS, handle high metadata/IOPS
 workloads very well, especially those that make heavy use of locking.

  The specific configs for NFS or cluster filesystem we used:
 
  mmap_disable = yes
  mail_fsync = always
  mail_nfs_storage = yes
  mail_nfs_index = yes
  fsync_disable=no
  lock_method = fcntl
 
  mail location :
 
  mail_location =
  mdbox:/var/vmail/%d/%3n/%n/mdbox:INDEX=/var/indexes/%d/%3n/%n

 For a Dovecot cluster using shared storage, you are probably better off
 using a mailbox format for which indexes are independent of mailbox
 files and are automatically [re]generated if absent.

 Try using mbox or maildir and store indexes on local node disk/SSD
 instead of on the cluster filesystem.  Only store the mailboxes on the
 cluster filesystem.  If for any reason a user login gets bumped to a
 node lacking the index files they're automatically rebuilt.

 Since dbox indexes aren't automatically generated if missing you can't
 do what I describe above with dbox storage.  Given the limitations of
 cluster filesystem (and NFS) metadata IOPS and locking, you'll likely
 achieve best performance and stability using local disk index files and
 mbox format mailboxes on GFS2.  Maildir format works in this setup as
 well, but the metadata load on the cluster filesystem is much higher,
 and thus peak performance will typically be lower.

 --
 Stan
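The index-on-local-disk layout Stan describes can be expressed as a mail_location along these lines; the paths are hypothetical examples, not taken from the thread:

```
# maildir data on the shared GFS2 mount, indexes on a node-local disk
mail_location = maildir:/var/vmail/%d/%n:INDEX=/var/local/indexes/%d/%n
```

If a login then lands on a node whose local index directory is empty, Dovecot rebuilds the maildir indexes there automatically, as described above.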



[Dovecot] mmap in GFS2 on rhel 6.1

2011-06-10 Thread Aliet Santiesteban Sifontes
Hello list, we continue our tests using Dovecot on a RHEL 6.1 cluster backend
with GFS2; we are also using Dovecot as a director for user-node persistence.
Everything was OK until we started stress testing the solution with imaptest:
we had many deadlocks, cluster filesystem corruptions and hangs, especially
in the index filesystem. We have configured the backend as if it were an
NFS-like setup, but this seems not to work, at least on GFS2 on rhel 6.1.
We have a two node cluster sharing two GFS2 filesystem
- Index GFS2 filesystem to store users indexes
- Mailbox data on a GFS2 filesystem

The specific configs for NFS or cluster filesystem we used:

mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
fsync_disable=no
lock_method = fcntl

mail location :

mail_location =
mdbox:/var/vmail/%d/%3n/%n/mdbox:INDEX=/var/indexes/%d/%3n/%n

But this seems not to work for GFS2 even with user-node persistence: the
maillog is plagued with errors and GFS2 hangs under stress testing with
imaptest; many corrupted indexes, transaction logs, etc. At this point we
have many questions; first, mmap...
In the Red Hat GFS2 docs we read:
Golden rules for performance:
An inode is used in a read-only fashion across all nodes.
An inode is written or modified from a single node only.

We have successfully achieved this using the dovecot director.

Now, for mmap, Red Hat says:

... If you mmap() a file on GFS2 with a read/write mapping, but only read
from it, this only counts as a read. On GFS though, it counts as a write, so
GFS2 is much more scalable with mmap() I/O...

But in our config we are using mmap_disable = yes; do we have to use
mmap_disable = no with GFS2?

Also, how does Dovecot manage cache flushing on a GFS2 filesystem?

Why, if we are doing user-node persistence, do the Dovecot indexes get
corrupted?

What lock method do we have to use?

How should fsync be used?

We know we have many questions, but this is really very complex stuff and we
will appreciate any help you can give us.

Thank you all for the great work, especially Timo...
best regards


Re: [Dovecot] Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results

2011-05-26 Thread Aliet Santiesteban Sifontes
Thanks Ed, right now we are finishing the setup; next week we will continue
the tests and let you know the results...
best regards

2011/5/23 Ed W li...@wildgooses.com

 On 11/05/2011 00:00, Aliet Santiesteban Sifontes wrote:
  Using local storage(local hard driver ext4 filesystems)
 
 
  Totals:
  Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
  100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
30%  5%
  7798 3868 3889 7706 7566 10713 1080 6089 7559 7688 15562
  7806 3879 3874 7716 7585 10873 1114 6018 7578 7696 15572
  7866 3910 3855 7773 7748 11053 1076 6253 7747 7761 15710
  7893 3978 3931 7802 7772 10988 1117 6197 7767 7789 15760
  7775 3853 3809 7683 7654 10897 1081 6142 7651 7675 15534
  7877 3919 3872 7789 7758 10986 1085 6218 7755 7773 15720
 
  GFS2-mdbox, (no plugins)
 
  Totals:
  Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
  100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
30%  5%
  7547 3739 3749 7455 7421 10605 1053 5931 7417 7443 15074
  7480 3702 3724 7387 7367 10558 1064 5874 7366 7378 14946
  7523 3759 3711 7428 7394 10560 1126 5898 7390 7412 15014
  7455 3736 3621 7364 7326 10561 1088 5854 7324 7349 14880
  7431 3712 3686 7337 7312 10406 1017 5882 7311 7328 14844
  7426 3704 3671 7334 7296 10364 1076 5791 7296 7325 14834
  7517 3673 3782 7425 7406 10554 1103 5913 7404 7414 15008

 Hi, this performance seems excellent!

 There is no reason at all why you might try this, but as someone on
 lower-end hardware I would be fascinated to learn how the performance
 changes if you:

 - Switch FC to gigabit ethernet? (expecting a substantial performance hit?)
 - Revert to maildir (suspecting much less of a hit, based on your
 numbers above?)
 - OCFS vs GFS (although probably not sensible in your architecture, since
 you have a support contract for GFS; some have suggested OCFS can be
 faster?)

 Please do post any other performance results - seems like you have found
 an excellent cluster setup?

 Ed W



Re: [Dovecot] Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results

2011-05-22 Thread Aliet Santiesteban Sifontes
Timo,
Can you recommend some benchmarking tools for testing the Dovecot cluster
setup?
Best regards

2011/5/10 Timo Sirainen t...@iki.fi

 I don't think those results look too bad, even the original ones.
 imaptest doesn't measure real world performance anyway. Some ideas:

  - Try mdbox instead of sdbox. Cluster filesystems apparently like a few
 bigger files better than many small ones.

  - Try imaptest with logout=0 (or =1 or something). Now you're measuring
 way too much the login performance.

  - autocreate plugin sucks, especially with logout=100 because it has to
 check that all of the mailboxes exist. In v2.1 autocreate plugin is
 redesigned to not do any disk I/O.

 On Fri, 2011-05-06 at 23:01 -0400, Aliet Santiesteban Sifontes wrote:
  New results, now with all plugins disabled:
 
  os rhel6 x86_64, GFS2 Lun
 
  Totals:
 Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
 100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
   30%  5%
  1- 4514 2189 2250 4514 4264 6163  709 3403 4260 4292 8726
  2- 2827 1409 1389 2827 2765 3951  495 2168 2765 2777 5644
  3- 2711 1409 1368 2711 2649 3833  512 2145 2647 2662 5396
  4- 1799  912  890 1799 1720 2492  360 1370 1719 1735 3592
  5- 3817 1869 1896 3760 3717 5313  575 3026 3715 3737 7616
  6- 3296 1583 1628 3296 3215 4585  523 2600 3215 3238 6584
 





Re: [Dovecot] Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results

2011-05-10 Thread Aliet Santiesteban Sifontes
Timo, thanks for your answer. We finally found the problem: it was cluster
related. We have a rhel6-x86_64 cluster using Red Hat Cluster Suite and GFS2;
the third node was located at an external site for disaster recovery, and the
ethernet and fibre channel links to that facility were experiencing high
latency. This was affecting cluster intercommunication, and many packets were
being retransmitted; after we removed the third node from that facility the
results improved a lot.
Right now we have all the nodes in the same place, with two shared FC LUNs
using GFS2, one for indexes and the other for mailbox data. Here are the new
results:

Using local storage(local hard driver ext4 filesystems)


Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
7798 3868 3889 7706 7566 10713 1080 6089 7559 7688 15562
7806 3879 3874 7716 7585 10873 1114 6018 7578 7696 15572
7866 3910 3855 7773 7748 11053 1076 6253 7747 7761 15710
7893 3978 3931 7802 7772 10988 1117 6197 7767 7789 15760
7775 3853 3809 7683 7654 10897 1081 6142 7651 7675 15534
7877 3919 3872 7789 7758 10986 1085 6218 7755 7773 15720

GFS2-mdbox, (no plugins)

Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
7547 3739 3749 7455 7421 10605 1053 5931 7417 7443 15074
7480 3702 3724 7387 7367 10558 1064 5874 7366 7378 14946
7523 3759 3711 7428 7394 10560 1126 5898 7390 7412 15014
7455 3736 3621 7364 7326 10561 1088 5854 7324 7349 14880
7431 3712 3686 7337 7312 10406 1017 5882 7311 7328 14844
7426 3704 3671 7334 7296 10364 1076 5791 7296 7325 14834
7517 3673 3782 7425 7406 10554 1103 5913 7404 7414 15008

GFS2-mdbox( using plugins)

Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
5464 2713 2697 5371 5201 7503  733 4152 5201 5361 10910
5649 2757 2781  5500 7814  810 4397 5500 5549 11286
5303 2589 2583 5211 5147 7398  783 4067 5147 5201 10590
5446 2633 2721 5353 5280 7465  799 4272 5278 5336 10860
5628 2781 2865 5536 5467 7867  792 4317 5466 5520 11224
5699 2837 2797 5605 5543 7771  809 4416 5542 5599 11382

GFS2-sdbox(using plugins)

Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
6102 3008 3018 6008 5885 8395  851 4705 5882 5999 12186
6140 2963 3085 6045 6013 8534  845 4798 6011 6035 12260
6063 2997 3021 5970 5929 8568  894 4719 5926 5955 12100
5747 2805 2890 5651 5599 7956  799 4434 5598 5638 11470
6025 3000 3014 5931 5901 8476  869 4697 5898 5917 12022
5899 2863 2890 5807 5762 8249  839 4610 5761 5802 11792

We will continue the tests with your suggestions.
Best regards and thank you all for the great work!!
Aliet

2011/5/10 Timo Sirainen t...@iki.fi

 I don't think those results look too bad, even the original ones.
 imaptest doesn't measure real world performance anyway. Some ideas:

  - Try mdbox instead of sdbox. Cluster filesystems apparently like a few
 bigger files better than many small ones.

  - Try imaptest with logout=0 (or =1 or something). Now you're measuring
 way too much the login performance.

  - autocreate plugin sucks, especially with logout=100 because it has to
 check that all of the mailboxes exist. In v2.1 autocreate plugin is
 redesigned to not do any disk I/O.


Re: [Dovecot] Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results

2011-05-06 Thread Aliet Santiesteban Sifontes
the configs:

[root@n02 ~]# dovecot -n
# 2.0.12: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-71.24.1.el6.x86_64 x86_64 Red Hat Enterprise Linux Server
release 6.0 (Santiago)
auth_cache_size = 15 M
auth_default_realm = test.com
auth_mechanisms = plain login
auth_worker_max_count = 60
disable_plaintext_auth = no
login_greeting = Server ready.
mail_fsync = never
mail_location = sdbox:~/sdbox:INDEX=/vmail/index/%n
mail_plugins = quota zlib
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags
copy include variables body enotify environment mailbox date
mbox_write_locks = fcntl
mmap_disable = yes
namespace {
  inbox = yes
  location =
  prefix =
  separator = /
}
passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  autocreate = Sent
  autocreate2 = Trash
  autocreate3 = Drafts
  autocreate4 = Junk
  autocreate5 = Archives
  autocreate6 = Templates
  autosubscribe = Sent
  autosubscribe2 = Trash
  autosubscribe3 = Drafts
  autosubscribe4 = Junk
  autosubscribe5 = Archives
  autosubscribe6 = Templates
  quota = dict:User quota::file:%h/sdbox/dovecot-quota
  quota_rule = *:storage=250M
  quota_rule2 = Trash:storage=+50M
  quota_rule3 = Spam:storage=+25M
  quota_rule4 = Sent:ignore
  sieve = ~/.dovecot.sieve
  sieve_before = /var/vmail/sievescripts/before.d
  sieve_dir = ~/sieve
  zlib_save = gz
  zlib_save_level = 6
}
postmaster_address = postmas...@test.com
protocols = imap pop3 lmtp sieve
service auth {
  unix_listener auth-userdb {
group = vmail
mode = 0660
user = root
  }
}
service imap-login {
  service_count = 0
}

best regards

2011/5/6 Charles Marcus cmar...@media-brokers.com

 On 2011-05-05 7:56 PM, Aliet Santiesteban Sifontes wrote:
  We have used sdbox as mailbox format, and all the user data is configured
 in
  LDAP Servers

 It might help Timo to provide some suggestions if you also provide
 dovecot -n output... ;)

 --

 Best regards,

 Charles



Re: [Dovecot] Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results

2011-05-06 Thread Aliet Santiesteban Sifontes
New results, now with all plugins disabled:

os rhel6 x86_64, GFS2 Lun

Totals:
   Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
   100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
 30%  5%
1- 4514 2189 2250 4514 4264 6163  709 3403 4260 4292 8726
2- 2827 1409 1389 2827 2765 3951  495 2168 2765 2777 5644
3- 2711 1409 1368 2711 2649 3833  512 2145 2647 2662 5396
4- 1799  912  890 1799 1720 2492  360 1370 1719 1735 3592
5- 3817 1869 1896 3760 3717 5313  575 3026 3715 3737 7616
6- 3296 1583 1628 3296 3215 4585  523 2600 3215 3238 6584






Re: [Dovecot] Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results

2011-05-05 Thread Aliet Santiesteban Sifontes
We have used sdbox as mailbox format, and all the user data is configured in
LDAP Servers

2011/5/5 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 We have done some benchmarking tests using Dovecot 2.0.12 to find the best
 shared filesystem for hosting many users. Here I share the results with
 you; notice the bad performance of all the shared filesystems compared with
 local storage.
 Is there any specific optimization/tuning in Dovecot for using GFS2 on
 rhel6? We have configured the director to make each user's mailbox
 persistent on a node; we will be thankful for any help from you.
 We are interested in using GFS2 or NFS. We believe the problem is the
 locks; how can we improve this?

 best regards, Aliet

 The results

  rhel 4.8 x86_64/GFS1 two nodes, shared FC lun on a SAN

 Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
 1- 2608 1321 1311 2608 2508 3545  547 2001 2493 2702 5282
 2- 2810 1440 1430 2810 2688 3835  403 2154 2679 2925 5706
 3- 2913 1457 1441 2908 2778 3913  417 2253 2773 3034 5924
 4- 2814 1448 1412 2812 2695 3910  401 2186 2686 2929 5712
 5- 2789 1464 1432 2787 2652 3774  427 2112 2649 2879 5676
 6- 2843 1460 1444 2839 2722 3948  422 2164 2713 2957 5778

 rhel6 x86_64/GFS2 two nodes, shared FC lun on a SAN (used RDM in VMware
 vSphere for the GFS2 lun)
 Tuned Cluster Suite cluster.conf:
   <dlm plock_ownership="1" plock_rate_limit="0"/>
   <gfs_controld plock_rate_limit="0"/>

 Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
 1- 2730 1340 1356 2704 2644 3748  522 2125 2643 2662 5422
 2- 3309 1618 1659 3294 3223 4658  531 2563 3221 3239 6596
 3- 2131 1046 1017 2055 2025 2911  381 1608 2024 2052 4256
 4- 2176 1055 1039 2082 2058 2947  377 1671 2058 2078 4344
 5- 1859  928  931 1859 1800 2626  304 1454 1799 1801 3706
 6- 2672 1322 1329 2672 2607 3758  464 2097 2606 2615 5326


 rhel6 x86_64/GFS2 two nodes, shared FC lun on a SAN (used RDM in VMware
 vSphere for the GFS2 lun)
 Cluster Suite default configs for plocks

 Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
 1- 1417  644  676 1325 1305 1872  308 1048 1302 1318 2824
 2-  837  378  392  742  726 1050  117  588  722  734 1658
 3-  803  363  347  752  745 1069  153  597  744  750 1658
 4- 1682  802  811 1587 1569 2261  291 1299 1569 1585 3360
 5- 1146  583  564 1146 1037 1500  213  811 1037 1049 2290
 6-  838  403  366  744  734 1057  152  561  731  736 1664

 rhel6 x86_64 two nodes used NFS(NAS Freenas 0.8, nfsvers 3)

 Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
 1- 1382  699  691 1357 1063 1500  224  861 1053 1313 2694
 2- 1634  785  799 1610 1459 2120  311 1192 1451 1570 3204
 2- 1635  826  806 1611 1463 2088  345 1159 1459 1568 3190
 3- 1574  758  781 1537 1403 2060  324 1135 1396 1504 3090
 4- 1685  842  807 1653 1506 2135  349 1215 1504 1634 3344
 5- 1766  850  893 1737 1582 2289  335 1288 1579 1705 3480
 6- 1597  797  769 1572 1423 2007  313 1133 1420 1536 3142

 rhel6 x86_64 local storage

 Totals:
Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo
100%  50%  50% 100% 100% 100%  50% 100% 100% 100% 100%
  30%  5%
 1- 7798 3868 3889 7706 7566 10713 1080 6089 7559 7688 15562
 2- 7806 3879 3874 7716 7585 10873 1114 6018 7578 7696 15572
 3- 7866 3910 3855 7773 7748 11053 1076 6253 7747 7761 15710
 4- 7893 3978 3931 7802 7772 10988 1117 6197 7767 7789 15760
 5- 7775 3853 3809 7683 7654 10897 1081 6142 7651 7675 15534
 6- 7877 3919 3872 7789 7758 10986 1085 6218 7755 7773 15720



[Dovecot] Email backend monitor script for Director

2010-11-16 Thread Aliet Santiesteban Sifontes
Hi people, I know I saw this at some point on the list but can't find it. I
need a script which monitors the health of the email backends and, if a node
fails, removes it from the director server, then adds it back once it is up
again. I plan to run the script at the load balancer; if you have one, let me
know.
Thanks in advance


Re: [Dovecot] Email backend monitor script for Director

2010-11-16 Thread Aliet Santiesteban Sifontes
Found it:
http://www.dovecot.org/list/dovecot/2010-August/051946.html

It would be great if director included this feature itself...
best regards

2010/11/16 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 Hi people, I know I saw this at some point on the list but can't find it. I
 need a script which monitors the health of the email backends and, if a node
 fails, removes it from the director server, then adds it back once it is up
 again. I plan to run the script at the load balancer; if you have one, let
 me know.
 Thanks in advance
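[Editor's sketch] For anyone looking for a starting point, a minimal monitor could look like the following. Assumptions not from the thread: it runs on a director node, backends answer IMAP on port 143, the pool IPs are the hypothetical ones used elsewhere in this archive, and the Dovecot 2.x `doveadm director add` / `doveadm director remove` commands are used to change the pool.

```python
"""Sketch of a backend health monitor for a Dovecot director pool."""
import socket
import subprocess

BACKENDS = ["172.29.9.10", "172.29.9.11"]  # hypothetical backend pool

def backend_is_healthy(ip, port=143, timeout=5):
    """True if a plain TCP connect to the backend's IMAP port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def decide_action(healthy, enabled):
    """Pure decision logic: 'add' a recovered node, 'remove' a dead one."""
    if healthy and not enabled:
        return "add"
    if not healthy and enabled:
        return "remove"
    return None

def monitor_once(enabled, probe=backend_is_healthy, run=subprocess.run):
    """One monitoring pass; `enabled` maps ip -> currently-in-pool flag."""
    for ip in BACKENDS:
        action = decide_action(probe(ip), enabled[ip])
        if action:
            # doveadm director add/remove changes the pool on this director
            run(["doveadm", "director", action, ip], check=True)
            enabled[ip] = (action == "add")
    return enabled
```

Run a pass from cron every minute or so; the director keeps making the routing decisions, the script only grows or shrinks the pool.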



[Dovecot] Dovecot ldap connection reconnecting after inactivity

2010-11-16 Thread Aliet Santiesteban Sifontes
Hi people, I have a setup configured using ldap. I have noticed that after a
period of user inactivity, when a client opens connections to dovecot the
first attempts fail with this:

Nov 16 19:34:43 cl05-02 dovecot: auth: Error:
ldap(u...@xxx.xx.xx,172.29.13.26):
Connection appears to be hanging, reconnecting

After the connection to ldap has been re-established everything starts
working ok. Is this expected behavior, or am I missing something??

Best regards


Re: [Dovecot] Local node indexes in a cluster backend with GFS2

2010-11-15 Thread Aliet Santiesteban Sifontes
Should I set mmap_disable = yes when storing indexes on a GFS2 shared
filesystem??

2010/11/15 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 Ok, I will also create a LUN as shared clustered storage for the indexes.
 Any considerations to take into account when the indexes are shared by many
 nodes...
 thank you all...

 2010/11/15 Timo Sirainen t...@iki.fi

 On 15.11.2010, at 6.44, Aliet Santiesteban Sifontes wrote:

  mail_location =
  sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n
 
  /var/vmail is shared clustered filesystem with GFS2 shared by node1 and
  node2
 
  /var/indexes is a local filesystem on each node, so each node has its own
  /var/indexes on ext3 and raid1 for improved performance; I mean each node
  has a different /var/indexes of its own.

 This is a bad idea. With dbox the message flags are only stored in index
 files, so if you lose indexes you lose all message flags. Users won't be
 happy.





Re: [Dovecot] Local node indexes in a cluster backend with GFS2

2010-11-15 Thread Aliet Santiesteban Sifontes
Read this in the GFS2 docs:
mmap/splice support for journaled files (enabled by using the same on disk
format as for regular files)
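[Editor's sketch] For reference, the usual Dovecot guidance for indexes on shared or cluster filesystems is along these lines. This is not from the thread; verify each setting against your Dovecot version and whether your GFS2 mount really provides coherent mmap as the quoted docs suggest:

```
# conf.d/10-mail.conf -- sketch for indexes on a shared cluster filesystem
mmap_disable = yes     # safest default when another node may write the index
mail_fsync = always    # flush index writes so other nodes see them promptly
lock_method = fcntl    # fcntl locks are propagated through the cluster's DLM
```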

...
2010/11/15 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 Should I set mmap_disable = yes when storing indexes in a GFS2 shared
 filesystem??

 2010/11/15 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 Ok, I will create a LUN also as a shared clustered storage for indexes, any
 consideration to have into account when the indexes are shared by many
 nodes...
 thank you all...

 2010/11/15 Timo Sirainen t...@iki.fi

 On 15.11.2010, at 6.44, Aliet Santiesteban Sifontes wrote:

  mail_location =
  sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n
 
  /var/vmail is shared clustered filesystem with GFS2 shared by node1 and
  node2
 
  /var/indexes is a local filesystem on each node, so each node has its own
  /var/indexes on ext3 and raid1 for improved performance; I mean each node
  has a different /var/indexes of its own.

 This is a bad idea. With dbox the message flags are only stored in index
 files, so if you lose indexes you lose all message flags. Users won't be
 happy.






[Dovecot] Recommended quota backend for a virtual users setup using ldap and sdbox

2010-11-14 Thread Aliet Santiesteban Sifontes
Just that: which is the recommended quota backend for sdbox in terms of
performance and flexibility when running a setup of virtual users with
ldap??...
thanks in advance...


[Dovecot] Local node indexes in a cluster backend with GFS2

2010-11-14 Thread Aliet Santiesteban Sifontes
Hi, all
these days I'm testing a dovecot setup using lvs, director and a clustered
email backend with two nodes using rhel5 and gfs2. On the two nodes of the
email backend I configured the mail location this way:

mail_location =
sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n

/var/vmail is a shared clustered filesystem with GFS2, shared by node1 and
node2

/var/indexes is a local filesystem on each node, so each node has its own
/var/indexes on ext3 and raid1 for improved performance; I mean each node has
a different /var/indexes of its own.

Now, if the director was using node2 for user1's email and I removed node2,
all connections for that user are redirected to node1. That node has its own
local /var/indexes, and when this occurs I cannot see the emails that are in
fact in the mailboxes; I guess the indexes are not current with the user's
mails.
Are the indexes supposed to be rebuilt automatically when the node switches?
Or do I have to configure the indexes on shared storage so the two nodes
update the information concurrently?
I will appreciate your help...
best regards


[Dovecot] Is Dovecot Director stable for production??

2010-11-13 Thread Aliet Santiesteban Sifontes
Today I will try Dovecot Director for a setup using a cluster backend with
GFS2 on rhel5. My question is whether Director is stable for use in
production at large sites. I know it is mainly designed for NFS, but I
believe it will also do the job for a cluster filesystem like GFS2, and it
should solve the mail persistence problem with a node as well as the locking
issues.
I plan to add the layer behind a load balancer. I would appreciate it if you
guys can point me to best practices for configuring the director, or explain
how the thing works, since it is new to me; I have seen the example conf but
I am not clear on some of the concepts and the wiki is empty.
I use 2.0.6. Thanks for a great job with dovecot.
best regards, Aliet


Re: [Dovecot] Is Dovecot Director stable for production??

2010-11-13 Thread Aliet Santiesteban Sifontes
Ok, I'll answer my own question; here's what I did:

In /etc/dovecot/dovecot.conf

protocols = imap pop3 lmtp sieve

In /etc/dovecot/conf.d/10-auth.conf

disable_plaintext_auth = no

Uncommented
!include auth-static.conf.ext

In /etc/dovecot/conf.d/auth-static.conf.ext

passdb {
  driver = static
  args = proxy=y nopassword=y
}

In /etc/dovecot/conf.d/10-master.conf to fix perms

service auth {
...
mode = 0600
user = dovecot

...

In /etc/dovecot/conf.d/20-lmtp.conf

lmtp_proxy = yes

In /etc/dovecot/conf.d/10-director.conf

director_servers = 172.29.9.25 172.29.9.26

director_mail_servers = 172.29.9.10 172.29.9.11

director_user_expire = 15 min

service director {
  unix_listener login/director {
mode = 0666
  }
  fifo_listener login/proxy-notify {
mode = 0666
  }
  unix_listener director-userdb {
mode = 0600
  }
  inet_listener {
port = 4000
  }
}

service imap-login {
  executable = imap-login director
}
service pop3-login {
  executable = pop3-login director
}

# Enable director for LMTP proxying:
protocol lmtp {
  auth_socket_path = director-userdb
  passdb {
driver = static
args = proxy=y nopassword=y port=24
  }
}


Both backends are configured and working using ldap. With this director
config it is working, but I still have a problem...
if I run the command

doveadm director map I get

user      mail server ip  expire time
unknown   172.29.9.11     2010-11-13 17:43:31

lmtp, imap and pop director are working, but in the list the user appears as
unknown. How to fix this??
any ideas??

2010/11/13 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 Today I will try Dovecot Director for a setup using a Cluster Backend with
 GFS2 on rhel5, my question is if is Director stable for use in production
 for large sites, I know is mainly designed for NFS but I believe it will do
 the job also for a cluster filesystem like GFS2 and should solve the mail
 persistence problem with a node and locking issues.
 I plan to add a layer behind a load balancer to do the stuff, I will
 appreciate if you guys can point me to best practices in configuring the
 director, or how the thing works since is new for me, I have seen the
 example conf but I don't have clarity in some of the concepts and the wiki
 is empty.
 I use 2.0.6, thank's for a great job with dovecot.
 best regards, Aliet



Re: [Dovecot] Is Dovecot Director stable for production??

2010-11-13 Thread Aliet Santiesteban Sifontes
Aliet, have you thought about whether it is better to use ipvs or dovecot
director to do the job? I mean, does anyone know? If so, let me know so I can
also configure it here.

The problem with clustered filesystems is locking and cache. I have not
worked with OCFS2; I believe it uses a DLM like GFS, but in the case of GFS1
and GFS2 you must try to keep user persistence with one node, and that's why
you need dovecot director. Linux lvs can do persistence, but based on
connections and client source ip, not on user mailboxes; that's the job of
dovecot director, I guess. So you must have:

Client -- Linux IPVS -- Dovecot director servers -- Backend Email
Cluster

IPVS for ip persistence and load balancing, and Dovecot Director for mailbox
node persistence and also load balancing. This should in theory improve your
cluster filesystem performance, since one node will always access the same
area of your cluster filesystem. If you have many different nodes accessing
the same directory in a cluster filesystem, I mean doing read, write, delete
etc. operations on many small files, this will terribly decrease your io
performance because of the dlm. Imagine a mail server where this happens
thousands, maybe millions, of times with many small files; your cluster
filesystem will eventually halt...

best regards

2010/11/13 Henrique Fernandes sf.ri...@gmail.com

 Robert, don't you have problems with performance? What mail storage do you
 use, maildir, mbox or another one? Do you use any particular config to set
 up your OCFS2 cluster?

 I am using in production ldirectord and heartbeat balancing 2 servers with
 dovecot 2.07, and have another server in the cluster for mailing lists.

 But I am having lots of problems with IO wait. I am really in trouble
 trying to figure out how to manage that. I am not sure if it is ocfs2 or my
 switch or the storage itself. We are still trying to figure this out. Do
 you have any problem with IO in your cluster?? thanks


 Aliet, have you thought about whether it is better to use ipvs or dovecot
 director to do the job? I mean, does anyone know? If so, let me know so I
 can also configure it here.

 Thanks

 And i am sorry not help!


 []'sf.rique


 On Sat, Nov 13, 2010 at 8:50 PM, Aliet Santiesteban Sifontes 
 alietsantieste...@gmail.com wrote:

 Ok, I'll answer my own question; here's what I did:

 In /etc/dovecot/dovecot.conf

 protocols = imap pop3 lmtp sieve

 In /etc/dovecot/conf.d/10-auth.conf

 disable_plaintext_auth = no

 Uncommented
 !include auth-static.conf.ext

 In /etc/dovecot/conf.d/auth-static.conf.ext

 passdb {
  driver = static
  args = proxy=y nopassword=y
 }

 In /etc/dovecot/conf.d/10-master.conf to fix perms

 service auth {
 ...
mode = 0600
user = dovecot

 ...

 In /etc/dovecot/conf.d/20-lmtp.conf

 lmtp_proxy = yes

 In /etc/dovecot/conf.d/10-director.conf

 director_servers = 172.29.9.25 172.29.9.26

 director_mail_servers = 172.29.9.10 172.29.9.11

 director_user_expire = 15 min

 service director {
  unix_listener login/director {
mode = 0666
  }
  fifo_listener login/proxy-notify {
mode = 0666
  }
  unix_listener director-userdb {
mode = 0600
  }
  inet_listener {
port = 4000
  }
 }

 service imap-login {
  executable = imap-login director
 }
 service pop3-login {
  executable = pop3-login director
 }

 # Enable director for LMTP proxying:
 protocol lmtp {
  auth_socket_path = director-userdb
  passdb {
driver = static
args = proxy=y nopassword=y port=24
  }
 }


 Both backends are configured and working using ldap. With this director
 config it is working, but I still have a problem...
 if I run command

 doveadm director map I get

 user      mail server ip  expire time
 unknown   172.29.9.11     2010-11-13 17:43:31

 lmtp, imap and pop director are working, but in the list the user appears
 as unknown. How to fix this??
 any ideas??

 2010/11/13 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

  Today I will try Dovecot Director for a setup using a Cluster Backend
 with
  GFS2 on rhel5, my question is if is Director stable for use in
 production
  for large sites, I know is mainly designed for NFS but I believe it will
 do
  the job also for a cluster filesystem like GFS2 and should solve the
 mail
  persistence problem with a node and locking issues.
  I plan to add a layer behind a load balancer to do the stuff, I will
  appreciate if you guys can point me to best practices in configuring the
  director, or how the thing works since is new for me, I have seen the
  example conf but I don't have clarity in some of the concepts and the
 wiki
  is empty.
  I use 2.0.6, thank's for a great job with dovecot.
  best regards, Aliet
 





Re: [Dovecot] Is Dovecot Director stable for production??

2010-11-13 Thread Aliet Santiesteban Sifontes
We thought the point of having a clustered filesystem would be to be able to
write from many servers at the same time, increasing availability, because
any node could crash or anything like that.

Yes, and also high performance, since you can have many nodes doing parallel
tasks, in this case email processing.

Might think about letting one node just write emails as they come in and
another just serve imap and pop.. might help some.

Not exactly; this will not fix your issue with the cluster filesystem, since
in this case the writer node and the pop/imap nodes will share the same
cluster filesystem. It might occur that the smtp node writes to the same
directory/file (or the same file at your filesystem level) that the pop/imap
servers are accessing, and this is what you must avoid in clustered
filesystems.

To explain better: you can keep your cluster deployment, you just must add
the Dovecot Director after the LVS:

Client -- Linux IPVS -- Dovecot Director servers -- Dovecot writing to
OCFS2

This does not mean that you will lose high availability, since you can use a
script to check the health of your ocfs2 backends, and if one fails you just
remove it from the director pool. In my case I'm testing this:

Piranha LVS for load balancing connections in rhel

Directors(two nodes)
Running Dovecot lmtp, pop,imap, managesieve

GFS2 Cluster Backend- Running Dovecot( three nodes)
 - tcp lmtp for mail delivery
 - imap,pop, sieve etc

All our incoming emails, after leaving our filter gateways, go to our postfix
delivery servers, which talk to an lmtp service configured in the lvs, which
selects the best director to serve the lmtp request. The director pins the
user's mailbox to one node, or uses an existing mapping, and performs the
lmtp delivery on the backend server. If one backend fails, a script removes
it from the director pools and the director uses another node as the final
destination; similar occurs with imap, pop etc.
You can also do some tuning of your filesystem: use noatime, nodiratime etc.
at mount time; check first for supported options in your filesystem.

best regards

2010/11/13 Henrique Fernandes sf.ri...@gmail.com

 We are using

 Client -- Linux IPVS -- Dovecot writing in OCFS2

 When we were testing, OCFS2 did a much better job than it is doing right
 now. We are not sure yet if it is OCFS2 or something else; we are still
 trying to figure it out, that is why I asked if Robert is having any issues.

 We thought the point of having a clustered filesystem would be to be able
 to write from many servers at the same time, increasing availability,
 because any node could crash or anything like that.

 Might think about letting one node just write emails as they come in and
 another just serve imap and pop.. might help some.

 Thanks!

 Any problem configuring dovecot besides director, I would be glad to help



 []'sf.rique


 On Sat, Nov 13, 2010 at 9:50 PM, Aliet Santiesteban Sifontes 
 alietsantieste...@gmail.com wrote:

 Aliet, have you thought about whether it is better to use ipvs or dovecot
 director to do the job? I mean, does anyone know? If so, let me know so I
 can also configure it here.

 The problem with clustered filesystems is locking and cache. I have not
 worked with OCFS2; I believe it uses a DLM like GFS, but in the case of
 GFS1 and GFS2 you must try to keep user persistence with one node, and
 that's why you need dovecot director. Linux lvs can do persistence, but
 based on connections and client source ip, not on user mailboxes; that's
 the job of dovecot director, I guess. So you must have:

 Client -- Linux IPVS -- Dovecot director servers -- Backend Email
 Cluster

 IPVS for ip persistence and load balancing, and Dovecot Director for
 mailbox node persistence and also load balancing. This should in theory
 improve your cluster filesystem performance, since one node will always
 access the same area of your cluster filesystem. If you have many different
 nodes accessing the same directory in a cluster filesystem, I mean doing
 read, write, delete etc. operations on many small files, this will terribly
 decrease your io performance because of the dlm. Imagine a mail server
 where this happens thousands, maybe millions, of times with many small
 files; your cluster filesystem will eventually halt...

 best regards

 2010/11/13 Henrique Fernandes sf.ri...@gmail.com

  Robert, does't you have problens with performance  ? What mail storage
 do
  you use, maildir mbox or other one? do you use any particular config to
  seeting up your OCFS2 cluster?
 
  I am using in prodution  ldiretord and heartbeat with balancing 2
 servers
  with dovecot 2.07 and have another server in the cluster for maillist.
 
  But i am having lots of problens with IO wait.  I am in really troubel
  trying to figure out how to manage that. I am not sure if is ocfs2 or my
  swicth or the storage it self. We still trying to figure out this. Are
 you
  with any problem with IO in your cluster ?? thanks
 
 
  Aliet, have you thought about is is better

Re: [Dovecot] Is Dovecot Director stable for production??

2010-11-13 Thread Aliet Santiesteban Sifontes
Timo, first of all, thanks for your directions and work...
About doveadm, I will try enabling ldap at the directors. I'm just having a
problem with this: using the recommended static config everything works ok,
but if I enable ldap exactly as on the backends, I mean the ldap stuff and
auth, dovecot attempts to handle all the imap, pop, lmtp locally. Where
exactly must I configure dovecot so it uses ldap for the user list while
keeping the static configs for the director/proxy stuff?
best regards

2010/11/13 Timo Sirainen t...@iki.fi

 On 13.11.2010, at 22.50, Aliet Santiesteban Sifontes wrote:
  doveadm director map I get
 
  user      mail server ip  expire time
  unknown   172.29.9.11     2010-11-13 17:43:31
 
  lmtp, imap and pop director are working, but in the list the user appears
  as unknown. How to fix this??
  any ideas??

 Director doesn't keep track of username -> IP mappings. It keeps track of
 CRC32(username) -> IP mappings. So if you want to list user -> IP, doveadm
 needs to get a list of all usernames so it can map them to the CRC32 (which
 of course isn't necessarily 1:1). So either you need to make doveadm user
 '*' work (in sql/ldap you have the iterate_* settings) or you need to put
 all the usernames into a file and give the -f file parameter to doveadm
 director map.
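[Editor's sketch] The one-way nature of that mapping can be illustrated in a few lines. This is an illustration only: the real director hashes into a ring of vhosts rather than doing a plain modulo over hosts, but the consequence for `doveadm director map` is the same. The IPs are the mail servers used elsewhere in this thread.

```python
"""Illustration of a director-style CRC32(user) -> backend mapping."""
import zlib

BACKENDS = ["172.29.9.10", "172.29.9.11"]  # backend mail servers

def backend_for(username):
    """Deterministically map a username to a backend via CRC32."""
    crc = zlib.crc32(username.encode("utf-8"))
    return BACKENDS[crc % len(BACKENDS)]

# The same username always hashes to the same backend, but the hash is
# one-way: given only the stored CRC32 values, usernames cannot be
# recovered -- hence doveadm needs a username list (userdb iteration or
# `-f file`) to print user -> IP pairs.
```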

  Today I will try Dovecot Director for a setup using a Cluster Backend
 with
  GFS2 on rhel5, my question is if is Director stable for use in
 production
  for large sites,

 I know there are at least two production installations and I haven't heard
 complaints for months, so I guess they're working.


Re: [Dovecot] Is Dovecot Director stable for production??

2010-11-13 Thread Aliet Santiesteban Sifontes
protocol doveadm {
  userdb {
    driver = ldap
    args = whatever
  }
}

Configuring this in 10-director.conf did not work, but the following worked
for my setup:

Configured the iterate_* stuff in dovecot-ldap.conf.ext

Uncommented !include auth-ldap.conf.ext in conf.d/10-auth.conf

In 20-imap.conf

protocol imap {
...
  passdb {
driver = static
args = proxy=y nopassword=y
  }
}

in 20-pop3.conf

protocol pop3 {
...
  passdb {
driver = static
args = proxy=y nopassword=y
  }
}
 in 20-managesieve.conf

protocol sieve {
...
  passdb {
driver = static
args = proxy=y nopassword=y
  }
}

Now everything is working ok: director, doveadm etc. Well... I guess. I will
continue testing and let you know the results, guys.

best regards

2010/11/13 Timo Sirainen t...@iki.fi

 Try:

 protocol doveadm {
  userdb {
driver = ldap
args = whatever
  }
 }
 userdb {
  driver = static
  args = etc
 }

 On 14.11.2010, at 1.51, Aliet Santiesteban Sifontes wrote:

  Timo, first of all, thanks for your directions and work...
  About doveadm, I will try enabling ldap at the directors. I'm just having
  a problem with this: using the recommended static config everything works
  ok, but if I enable ldap exactly as on the backends, I mean the ldap stuff
  and auth, dovecot attempts to handle all the imap, pop, lmtp locally.
  Where exactly must I configure dovecot so it uses ldap for the user list
  while keeping the static configs for the director/proxy stuff?
  best regards
 
  2010/11/13 Timo Sirainen t...@iki.fi
 
  On 13.11.2010, at 22.50, Aliet Santiesteban Sifontes wrote:
  doveadm director map I get
 
   user      mail server ip  expire time
   unknown   172.29.9.11     2010-11-13 17:43:31
 
  lmtp, imap and pop director working but in the list the user appears as
   unknown. How to fix this??
   any ideas??
 
  Director doesn't keep track of username - IP mappings. It keeps track
 of
  CRC32(username) - IP mappings. So if you want to list user - IP,
 doveadm
  needs to get a list of all usernames so it can map them to the CRC32
 (which
  of course isn't necessarily 1:1). So either you need to make doveadm
 user
  '*' working (in sql/ldap you have iterate_* settings) or you need to
 put
  all the usernames into a file and give -f file parameter to doveadm
 director
  map.
 
  Today I will try Dovecot Director for a setup using a Cluster Backend
  with
  GFS2 on rhel5, my question is if is Director stable for use in
  production
  for large sites,
 
  I know there are at least two production installations and I haven't
 heard
  complaints for months, so I guess they're working.




Re: [Dovecot] Per User Quotas with LDAP on Dovecot 1.x

2010-10-08 Thread Aliet Santiesteban Sifontes
Camron, if you look at the downloads link on the dovecot site, you can
check:

http://wiki2.dovecot.org/PrebuiltBinaries#RPMs_of_newer_Dovecot_and_Sieve_packages

There you will find references to third-party repositories which build the
latest dovecot rpm versions for rhel5.5. If you use atrpms, follow the
install instructions:

http://atrpms.net/documentation/install/

For dovecot 1.2
http://packages.atrpms.net/dist/el5/dovecot-1.2.x/
For dovecot 2.x
http://packages.atrpms.net/dist/el5/dovecot/

Just import the atrpms rpm key, configure the repo for rhel5 and use yum to
install the desired packages...
good luck...


2010/10/8 Camron W. Fox cw...@us.fujitsu.com

 On 10/10/08 08:59, Charles Marcus wrote:
  On 2010-10-08 2:10 PM, Camron W. Fox wrote:
  I started poking @ 1.2 as you suggested, but I run into libcurl-devel
  dependency issues. Does anyone know where to get a libcurl-devel RPM for
  RHEL5?
 
  I'd think you could get everything you needed from the extra
  repositories (I think RHEL uses the CentOS repos)...
 
 You would think so, but no. I checked all the CentOS additional
 repositories on my mrepo server here with no luck. That's why I asked. I
 really want to stay with package installations and away from source if I
 can.

 Best Regards,
 Camron

 --
 Camron W. Fox
 Hilo Office
 High Performance Computing Group
 Fujitsu Management Services of America, Inc.
 E-mail: cw...@us.fujitsu.com




[Dovecot] Proxy IMAP/POP/ManageSieve/SMTP in a large cluster enviroment

2010-07-18 Thread Aliet Santiesteban Sifontes
Hi to all on the list, we are trying to build a test lab for a large-scale
mail system with these requirements:
- Scale to maybe 1 million users (only for testing).
- Server side filters.
- User quotas.
- High concurrency.
- High performance and High Availability.

We plan to test this using RHEL5 and maybe RHEL6.

As a storage we are going to use an HP EVA 8400 FC(8 GB/s)

We defined this functional roles.

1- Outgoing SMTP gateway servers.

 - Load balanced with Piranha LVS using Direct Routing.
  - (n servers) RHEL5/6 using Postfix latest version, amavisd-new, ClamAV,
SpamAssassin, maybe some other filters.

2- Incomming SMTP gateway servers.

 - Load balanced with Piranha LVS using Direct Routing.
  - (n servers) RHEL5/6 using Postfix latest version, amavisd-new, ClamAV,
SpamAssassin, maybe some other filters.

3- Webmail Cluster farm.

 - Load balanced with Piranha LVS using Direct Routing.
 - ( n servers)RHEL5/6 using RHCS Active/Active. Apache2 PHP5 Horde.

4- Openldap load balanced openldap cluster.
 - Load balanced with Piranha LVS using Direct Routing.
 - ( n servers)RHEL5/6 using Openldap.

5- IMAP/POP/ManageSieve/SMTP Proxy or Director.
 - Load balanced with Piranha LVS using Direct Routing.
 - ( n servers)RHEL5/6 using Dovecot - Postfix(Director or Proxy).

6- Mail backend.
 - ( n servers)RHEL5/6 using Dovecot.

Now for functional role 6, the Mail Backend, we have some dilemmas.
   - Recommended scalable filesystem to use for such a scenario (clustered
or not).
 - GFS2?? We have very bad experiences with GFS1 and maildir, and GFS2
doesn't seem to improve on this either. Using some techniques for session
affinity with a backend server seems to help with GFS's locking problems and
the cache. With GFS, many IMAP/SMTP servers can write to or read from user
mailboxes in parallel; if GFS can perform well we prefer this, but we will
have to proxy or use the director because of the cache problem.
 - Ext4, Ext3: well tested in many setups, but only one backend server
can access a given ext4 LUN, so we always have to proxy a user to one
backend ip address or VIP. In case of failure the cluster software has to
move the service to another node; the cluster service is formed by an IP
address and one or many LUNs which move from node to node on failure.

   - Recommended scalable method for partition division.
 - Directory hashing (how does it actually work, what is the theory behind
it?). We saw the wiki, but we need to understand the theory to balance the
directory structure.
 - Using some criteria, e.g. Lun1 (a-h users), Lun2 (i-m users)
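[Editor's sketch] To make the partitioning options concrete, here is a comparison of the schemes being weighed. Assumptions: %d/%3n/%n expand as in the mail_location examples earlier in this archive (domain, first 3 characters of the user part, user part); the function names are made up for illustration.

```python
"""Three ways to spread mailboxes across directories/LUNs, sketched."""
import zlib

def directory_hashing_path(user):
    """sdbox:/var/vmail/%d/%3n/%n style: the %3n level fans users out so
    no single directory holds every mailbox of a domain."""
    local, domain = user.split("@", 1)
    return f"/var/vmail/{domain}/{local[:3]}/{local}"

def lun_by_letter(user):
    """Static split (Lun1 a-h, Lun2 i-m, rest elsewhere): simple and easy
    to locate a user, but unbalanced if usernames cluster alphabetically."""
    first = user[0].lower()
    if "a" <= first <= "h":
        return "lun1"
    if "i" <= first <= "m":
        return "lun2"
    return "lun3"

def lun_by_hash(user, n_luns=2):
    """Hash split: statistically even across LUNs, but a user's LUN can
    only be found by recomputing the hash."""
    return f"lun{zlib.crc32(user.encode('utf-8')) % n_luns + 1}"
```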

 - A Proxy or Director is a must, either for a clustered filesystem or for
the approach where a user belongs to one server.
 - In case we use a proxy with dovecot, we know we can use it with
IMAP/POP, but we are not sure about ManageSieve and, more importantly, SMTP.
The question is: how can we proxy IMAP/POP/ManageSieve and SMTP delivery to
the mailbox for the same user at the same time? Can the incoming mail
gateways send the email to the proxy servers, and can dovecot proxy the smtp
or lmtp request, applying the same proxy criteria it uses for IMAP and POP,
sending the email to the correct backend server so that server does the
final delivery? I mean, proxy all the protocols for a user:
IMAP/POP/SMTP-LMTP?? Any example...

Right now we have doubts about this stuff; we would really appreciate any
help or advice on this...
best regards