Re: [Dovecot] Difference between LOGIN and PLAIN

2011-11-03 Thread Simon Brereton
On 3 November 2011 17:01, Stephan Bosch  wrote:
> On 11/3/2011 9:42 PM, Simon Brereton wrote:
>>
>> Hi
>>
>> Could someone explain to me the difference between LOGIN and PLAIN?
>> I've been googling for a while, but haven't found anything.
>
> The LOGIN SASL mechanism is an obsolete plain text mechanism. It is
> documented here:
>
> http://tools.ietf.org/html/draft-murchison-sasl-login-00
>
> Some clients still support it, but I would not recommend using it when PLAIN
> or a better SASL mechanism is also available at both ends. The PLAIN
> mechanism is documented here:
>
> http://tools.ietf.org/html/rfc4616
>
> The main technical difference between the two is that the PLAIN mechanism
> transfers both username and password in a single SASL interaction, where
> LOGIN needs two. The PLAIN mechanism also provides support for having an
> authorization id different from the authentication id, allowing for master
> user login for example.

Thanks to both of you.  Can I bet that Outlook doesn't support
anything but PLAIN?

I'm not sure I've ever heard of a client other than Evolution
supporting MD5 passwords.

Simon


Re: [Dovecot] Difference between LOGIN and PLAIN

2011-11-03 Thread Jerry
On Thu, 3 Nov 2011 16:42:40 -0400
Simon Brereton articulated:

> Hi
> 
> Could someone explain to me the difference between LOGIN and PLAIN?
> I've been googling for a while, but haven't found anything.

You could start here for some basic information:

http://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer

http://wiki.dovecot.org/Authentication/Mechanisms

-- 
Jerry ✌
dovecot.u...@seibercom.net

Disclaimer: off-list followups get on-list replies or get ignored.
Please do not ignore the Reply-To header.



Re: [Dovecot] Difference between LOGIN and PLAIN

2011-11-03 Thread Stephan Bosch

On 11/3/2011 9:42 PM, Simon Brereton wrote:

Hi

Could someone explain to me the difference between LOGIN and PLAIN?
I've been googling for a while, but haven't found anything.


The LOGIN SASL mechanism is an obsolete plain text mechanism. It is 
documented here:


http://tools.ietf.org/html/draft-murchison-sasl-login-00

Some clients still support it, but I would not recommend using it when 
PLAIN or a better SASL mechanism is also available at both ends. The 
PLAIN mechanism is documented here:


http://tools.ietf.org/html/rfc4616

The main technical difference between the two is that the PLAIN 
mechanism transfers both username and password in a single SASL 
interaction, where LOGIN needs two. The PLAIN mechanism also provides 
support for having an authorization id different from the authentication 
id, allowing for master user login for example.
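To make the PLAIN message format concrete, here is a short sketch (the usernames and passwords below are invented for illustration):

```python
import base64

def sasl_plain(authzid: str, authcid: str, password: str) -> bytes:
    """Build an RFC 4616 PLAIN message: authzid NUL authcid NUL password,
    base64-encoded as it would appear on the wire as an initial response."""
    raw = b"\x00".join(p.encode("utf-8") for p in (authzid, authcid, password))
    return base64.b64encode(raw)

# Ordinary login: empty authorization id, the server derives it from
# the authentication id.
print(sasl_plain("", "simon", "hunter2"))

# Master user login: authorize as "simon" while authenticating as the
# hypothetical master user "admin".
print(sasl_plain("simon", "admin", "masterpass"))
```

Decoding either string gives the three NUL-separated fields back, which is all a PLAIN-capable server needs in a single interaction.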


Regards,

Stephan.


Re: [Dovecot] Difference between LOGIN and PLAIN

2011-11-03 Thread Patrick Ben Koetter
* Simon Brereton :
> Could someone explain to me the difference between LOGIN and PLAIN?

In SMTP these are:

Both
- are plaintext mechanisms
- base64-encode identification data before sending it over the wire
- do not encrypt the identification data and should therefore only be offered
  over an encrypted transport layer

PLAIN
- is an open standard supported by most clients
- sends identification data as one string
- sends an authentication ID, an authorization ID and the password

LOGIN
- is a proprietary standard supported by Microsoft's clients
- sends LOGIN, login name, password and optionally the domain name one after
  another

I guess they are basically the same in IMAP, but others will know better.
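As a concrete sketch of the one-interaction vs. two-interaction difference (credentials invented; in real use this should only ever cross the wire inside TLS):

```python
import base64

user, password = "user@example.com", "secret"  # hypothetical credentials

# PLAIN: a single base64 blob carrying authzid NUL authcid NUL password.
plain = base64.b64encode(b"\x00" + user.encode() + b"\x00" + password.encode())

# LOGIN: the server prompts twice ("Username:", then "Password:"), and the
# client answers each prompt with a separate base64 string.
login = [base64.b64encode(user.encode()), base64.b64encode(password.encode())]

print(plain)   # one round trip
print(login)   # two round trips
```

Note that neither variant hides anything: base64 is an encoding, not encryption.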

p@rick


> I've been googling for a while, but haven't found anything.
> 
> Thanks.
> 
> Simon

-- 
state of mind ()

http://www.state-of-mind.de

Franziskanerstraße 15  Telefon +49 89 3090 4664
81669 München  Telefax +49 89 3090 4666

Amtsgericht MünchenPartnerschaftsregister PR 563



[Dovecot] Difference between LOGIN and PLAIN

2011-11-03 Thread Simon Brereton
Hi

Could someone explain to me the difference between LOGIN and PLAIN?
I've been googling for a while, but haven't found anything.

Thanks.

Simon


Re: [Dovecot] Restricting IMAP access

2011-11-03 Thread Robert Schetterer
On 03.11.2011 19:13, Thierry de Montaudry wrote:
> Hi list,
> 
> I have a setup with postfix+dovecot+mysql under CentOS 5, running 50-odd 
> domains with virtual users. Access is allowed for public POP3, and a webmail 
> solution on apache+PHP through local IMAP.
> I'm not gonna give you the long story about the why, but I'm looking for a 
> way to give public IMAP access only to one domain, knowing that users log in 
> with full email (u...@domain.tld).
> Does anybody have a trick for that? Running dovecot 2.0.13.
> 
> I know there should be a way to do it through the database, but that would be 
> quite a heavy change on our side for a million-odd users.
> 
> Regards,
> 
>   Thierry
I am short on time, but with a database this should be doable.
I have it as a flag for all users, so I can forbid IMAP to specific ones.
As far as I remember there should be examples on the Dovecot site, and it
was written about here on the list before.
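For reference, such a per-user flag in an SQL passdb might look roughly like this. This is only a sketch: the table and column names are invented, while %u and %s are standard Dovecot variables for the full username and the service (imap, pop3, ...):

```
# dovecot-sql.conf.ext (hypothetical schema)
password_query = SELECT userid AS user, password FROM users \
  WHERE userid = '%u' AND ('%s' != 'imap' OR allow_imap = 'Y')
```

For the single-domain case, the flag test could instead compare the %d domain variable, e.g. ('%s' != 'imap' OR domain = 'example.com').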

-- 
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria


Re: [Dovecot] Indexes to MLC-SSD

2011-11-03 Thread Felipe Scarel
Reasons to choose ZFS were snapshots, and mainly dedup and compression
capabilities. I know, it's ironic since I'm not able to use them now due to
severe performance issues with them (mostly dedup) turned on.

I do like the emphasis on data integrity and fast on-the-fly
configurability of ZFS to an extent, but I wouldn't recommend it highly for
new users, especially for production. It works (in fact it's working right
now), but has its fair share of troubles.

We've started implementations to move our mail system to a more modular
environment and we'll probably move away from ZFS. It was a nice experiment
nonetheless; I learned quite a bit from it.

On Thu, Nov 3, 2011 at 12:27, Ed W  wrote:

> On 03/11/2011 11:32, Felipe Scarel wrote:
> > I'm using native ZFS (http://zfsonlinux.org) on production here (15k+
> > users, over 2TB of mail data) with little issues. Dedup and compression
> > disabled, mind that.
> >
>
> OT: but what were the rough criteria that led you to using ZFS over say
> LVM with EXT4/XFS/btrfs?  I can think of plenty for/against reasons for
> each, just wondering what criteria affected *your* situation?  I'm
> guessing some kind of manageability reason is at the core, but perhaps
> you can expand on how it's all worked out for you?
>
> I have a fairly static server setup here so I have been "satisfied" with
> LVM, software raid and mainly ext4.  The main thing I miss is simple to
> use snapshots
>
> Cheers
>
> Ed W
>


[Dovecot] Restricting IMAP access

2011-11-03 Thread Thierry de Montaudry
Hi list,

I have a setup with postfix+dovecot+mysql under CentOS 5, running 50-odd 
domains with virtual users. Access is allowed for public POP3, and a webmail 
solution on apache+PHP through local IMAP.
I'm not gonna give you the long story about the why, but I'm looking for a way 
to give public IMAP access only to one domain, knowing that users log in with 
full email (u...@domain.tld).
Does anybody have a trick for that? Running dovecot 2.0.13.

I know there should be a way to do it through the database, but that would be 
quite a heavy change on our side for a million-odd users.

Regards,

Thierry

Re: [Dovecot] Indexes to MLC-SSD

2011-11-03 Thread Dan Swartzendruber

Patrick Westenberg wrote:

Ed W wrote:


I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
thinking about a SSD based LUN for the indexes. As I'm using multiple
servers this LUN will use OCFS2.


Given that the SAN always has the network latency behind it, might you
be better to look at putting the SSDs in the frontend machines?
Obviously this then needs some way to make users "sticky" to one machine
(or some few machines) where the indexes are stored?


Storing the indexes on several machines?
In this case I have to synchronize them.

Maybe I am missing something.  If a client has to fetch the index, the 
server has to read the index from disk and pass it back.  The network 
latency is unavoidable, but I don't see why putting the fastest possible 
SSD on the server isn't a win.  Possibly I am misunderstanding something?


Re: [Dovecot] Indexes to MLC-SSD

2011-11-03 Thread Patrick Westenberg

Ed W wrote:


I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
thinking about a SSD based LUN for the indexes. As I'm using multiple
servers this LUN will use OCFS2.


Given that the SAN always has the network latency behind it, might you
be better to look at putting the SSDs in the frontend machines?
Obviously this then needs some way to make users "sticky" to one machine
(or some few machines) where the indexes are stored?


Storing the indexes on several machines?
In this case I have to synchronize them.



Re: [Dovecot] looking for Dovecot-code + SQL consultants

2011-11-03 Thread Rich
Hi,

I've already received a number of replies from providers offering to help out.

I'll be in touch with each, and am certain we'll be able to find the
right solution from among them.

Thanks for the responses,

Rich

On Tue, Nov 1, 2011 at 1:53 PM, Rich  wrote:
> Hi,
>
> We're using Dovecot2.  Trying, given our own spread-too-thin
> bandwidth, to make it work within our evolving SQL application
> environment.
>
> When there's a problem, we post to this list (e.g.,
> http://www.dovecot.org/list/dovecot/2011-October/061609.html), but
> aren't getting any/timely responses.
>
> We've decided to look for a consultant (hourly or retainer) that can
> be available for working with our in-house staff to straighten these
> issues out -- by helping us identify & fix our own mess, and by
> working to get fixes pushed to Dovecot project code, where
> appropriate.
>
> If you provide these services, rather than simply deployment or
> hosting, and are available, please drop me a line *offlist*.  We're in
> the San Francisco area, and local is best, but remote work is
> certainly an option.
>
> Thanks,
>
> Rich
>


[Dovecot] How to define ldap connection idle

2011-11-03 Thread Aliet Santiesteban Sifontes
I'm having a problem with the dovecot LDAP connection when the LDAP server is
in another firewall zone: the firewall kills the LDAP connection after a
determined period of inactivity. This is good from the firewall's point of
view, but bad for dovecot, because it never knows the connection has been
dropped. This creates long timeouts in dovecot before it finally reconnects,
and meanwhile many users fail to authenticate. I have seen this kind of post
on the list for a while, but can't find a solution for it. So my question is:
how do I define an LDAP idle time in dovecot so that it can reconnect before
the firewall drops the connection, or simply close the connection when idle,
so that user authentication doesn't keep failing until dovecot detects that
the connection has hung? Is this a feature request, or is there already a
configuration for this?
Thanks in advance, and congrats to Timo for this great app.
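As far as I know there is no Dovecot setting (as of 2.0) for an LDAP idle timeout. One workaround to experiment with is kernel-level TCP keepalive, so the connection is probed before the firewall's idle timer expires. Note this only helps if the LDAP client library enables SO_KEEPALIVE on its sockets, so treat the values below as a sketch to test rather than a guaranteed fix:

```
# /etc/sysctl.conf (sketch): start probing after 10 idle minutes,
# i.e. well below a typical 30-60 minute firewall idle timeout
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 5
```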


Re: [Dovecot] Indexes to MLC-SSD

2011-11-03 Thread Ed W

> I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
> thinking about a SSD based LUN for the indexes. As I'm using multiple
> servers this LUN will use OCFS2.

Given that the SAN always has the network latency behind it, might you
be better to look at putting the SSDs in the frontend machines?
Obviously this then needs some way to make users "sticky" to one machine
(or some few machines) where the indexes are stored?

This seems theoretically likely to give you higher IOPs to the index
than having them on the OCFS2 storage? (At a trade off with more
complexity for the load balancer front end...)
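Concretely, splitting mail data and indexes like this is a one-line mail_location change. The paths here are invented; the :INDEX= syntax is standard Dovecot:

```
# sketch: mailboxes on the shared OCFS2 LUN, indexes on a local SSD
mail_location = mdbox:/ocfs2/mail/%d/%n:INDEX=/ssd/indexes/%d/%n
```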

Ed W



Re: [Dovecot] Indexes to MLC-SSD

2011-11-03 Thread Ed W
On 03/11/2011 11:32, Felipe Scarel wrote:
> I'm using native ZFS (http://zfsonlinux.org) on production here (15k+
> users, over 2TB of mail data) with little issues. Dedup and compression
> disabled, mind that.
>

OT: but what were the rough criteria that led you to using ZFS over say
LVM with EXT4/XFS/btrfs?  I can think of plenty for/against reasons for
each, just wondering what criteria affected *your* situation?  I'm
guessing some kind of manageability reason is at the core, but perhaps
you can expand on how it's all worked out for you?

I have a fairly static server setup here so I have been "satisfied" with
LVM, software raid and mainly ext4.  The main thing I miss is simple to
use snapshots

Cheers

Ed W


Re: [Dovecot] Indexes to MLC-SSD

2011-11-03 Thread Felipe Scarel
I'm using native ZFS (http://zfsonlinux.org) on production here (15k+
users, over 2TB of mail data) with little issues. Dedup and compression
disabled, mind that.

Dedup especially is a major source of trouble, I wouldn't recommend it for
production just yet.

Cheers,
fbscarel

On Tue, Nov 1, 2011 at 19:40, Dan Swartzendruber  wrote:

>
> I can't imagine running any kind of performance critical app on linux using
> fuse!  There is a native ZFS port going on, but I don't know how stable it
> is yet.
>
> -Original Message-
> From: dovecot-boun...@dovecot.org [mailto:dovecot-boun...@dovecot.org] On
> Behalf Of Patrick Westenberg
> Sent: Tuesday, November 01, 2011 5:19 PM
> To: dovecot@dovecot.org
> Subject: Re: [Dovecot] Indexes to MLC-SSD
>
> Dovecot-GDH schrieb:
> > If I/O performance is a concern, you may be interested in ZFS and
> Flashcache.
> >
> > Specifically, ZFS' ZIL (ZFS Intent Log) and its L2ARC (Layer 2 Adaptive
> Read Cache)
> > ZFS does run on Linux http://zfs-fuse.net
>
> I'm using NexentaStor (Solaris, ZFS) to export iSCSI-LUNs and I was
> thinking about a SSD based LUN for the indexes. As I'm using multiple
> servers this LUN will use OCFS2.
>
>


Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

2011-11-03 Thread Maria Arrea
We follow the guidelines about timekeeping for RHEL on vmware vsphere located here:

 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427

 These problems happen in peak hours. Is there any dovecot config parameter I 
could set to mitigate this problem?

 Regards

 Maria

- Original Message -
From: Ed W
Sent: 11/03/11 11:57 AM
To: Maria Arrea, Dovecot Mailing List
Subject: Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

 On 03/11/2011 10:49, Maria Arrea wrote:
> All the ESX hosts and all the VMs use the same NTP server.
>
> Any other idea?

Doesn't ESX have issues with the time drifting when certain kernel options are 
set? Something to do with it rescheduling machines and them not counting idle 
ticks or something..? Does this problem happen during idle hours or peak hours? 
I should home in on clock problems... Probably vmware related issues with the 
kernel you are using? Good luck, Ed W


Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

2011-11-03 Thread Ed W
On 03/11/2011 10:49, Maria Arrea wrote:
> All the ESX hosts and all the VMs use the same NTP server.
>
>  Any other idea?
>

Doesn't ESX have issues with the time drifting when certain kernel
options are set?  Something to do with it rescheduling machines and them
not counting idle ticks or something..?

Does this problem happen during idle hours or peak hours?

I should home in on clock problems... Probably vmware related issues to
the kernel you are using?

Good luck

Ed W


Re: [Dovecot] patching dovecot for sieve/managesieve support, centos 5.6?

2011-11-03 Thread Stephan Bosch

On 3-11-2011 6:31, Scott Lewis wrote:

Hi all,

I am having real trouble when attempting to patch dovecot 1.2 to include the 
Pigeonhole sieve support on my CentOS 5.6 x64 mail server. I am relatively new 
to the programming side of Linux, and I am not having a lot of luck when trying 
to get this thing to compile.

Here's what happens:

[root@mail ~]# whereis dovecot
dovecot: /usr/sbin/dovecot /etc/dovecot.conf /usr/lib/dovecot 
/usr/libexec/dovecot /usr/share/man/man8/dovecot.8.gz

[root@mail dovecot-1.2-sieve-0.1.19]# ./configure 
--with-dovecot=/usr/lib/dovecot

...

checking whether to build static libraries... yes
dovecot-config not found from /usr/lib/dovecot, use --with-dovecot=PATH
to give path to compiled Dovecot sources or to a directory with the
installed dovecot-config file. configure: error: dovecot-config not found

--

I get this message regardless of whether I set --with-dovecot as 
/usr/sbin/dovecot, or /etc, or /usr/libexec/dovecot.


I'm not familiar with CentOS, but there is usually a separate package 
containing the Dovecot development headers and the dovecot-config file 
you need. By the looks of things, that is not installed at your end. 
Point --with-dovecot at wherever the dovecot-config file is installed.


Regards,

Stephan.


Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

2011-11-03 Thread Maria Arrea
All the ESX hosts and all the VMs use the same NTP server.

 Any other idea?

 Regards

 Maria

- Original Message -
From: Giulio Casella
Sent: 11/03/11 11:38 AM
To: dovecot@dovecot.org
Subject: Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

 This could be the problem. Double check the time also on your host system(s), 
not only on the guest.

Bye,
gc

On 03/11/2011 11.30, Maria Arrea wrote:
> We use ntpd daemon, all our systems are configured identically. Another 
> thing, this is a VM on vmware vsphere 4.1
>
> Regards
>
> Maria
>
> - Original Message -
> From: Ed W
> Sent: 11/03/11 11:25 AM
> To: dovecot@dovecot.org
> Subject: Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube
>
> Hi
> > We are running dovecot 2.0.13 with mdbox+zlib on RHEL 5.7 x64, ext4. We 
> > use NTP.
> Quick check, but by "NTP" you mean the background daemon and you don't have 
> some cron job running ntpdate or similar every so often? No idea, but since 
> it looks like a clock related curiosity, then knowing if the clock is spot 
> on accurate or drifting would be interesting to know? Simple comparison 
> against other machines over a similar period to you having problems might be 
> accurate enough? Good luck Ed W

-- 
Giulio Casella giulio at dsi.unimi.it
System and network manager
Computer Science Dept. - University of Milano


Re: [Dovecot] How can we horizontally scale Dovecot across multiple servers?

2011-11-03 Thread Ed W
On 31/10/2011 11:28, Felipe Scarel wrote:
> Quick question about the usage of DRBD: I'm thinking of a setup on my
> organization here (15k+ users, 4TB of email data), but I'm holding back on
> the clusterization due to the high volume of data.
>
> Using DRBD would implicate mirroring those 4TB of data across all cluster
> nodes? If yes, I might go with a SAN-based solution, though I haven't

I don't know the details, but the technique with DRBD is something like
having pairs of machines, each of which is a backup for the other.  There
were some old notes on the Dovecot website about such a setup.

Roughly I seem to recall that each pair of machines ran two virtual
machines, each of which ran active on one of the nodes each, but could
migrate to the other if needed.  Add a bunch of such paired nodes to get
to the performance you require and put a dovecot proxy instance in front
of the whole lot

In contrast the SAN solution uses a clustered filesystem (opinion varies
on which performs best) and then in theory every machine has access to
every mailbox.  In practice access to the SAN is relatively slow
compared with local storage, so the technique seems to be to store
indexes on the local machine and then using the front end proxy to be
somewhat "sticky" in returning users to the same backend node so that
the indexes can be re-used and not rebuilt

The DRBD solution offers local disk access speed to the node and would
on the surface give far faster performance (if disk were the limiting
issue).  However, it's likely to be more complex to maintain and manage
and without buying licences you get only failover between pairs of
machines.  The SAN solution in theory looks like perfect scale up, big
backend and just add more backend IMAP nodes as you need them, and all
the clever stuff moves to the frontend load balancer to be "sticky" and
obviously that's your main maintenance problem.

However, based on evidence from users of big systems, IO is likely to be
your main bottleneck and so just theoretically, the SAN will only scale
as far as it doesn't run out of IOs... Using local disk for indexes
would tend to reduce the amount of IOs needed (from the SAN) very
dramatically, but you still have some limit out there and it's a
question of whether you will reach it?  DRBD has theoretical infinite
scale out because each time you add another pair you get more IO as well
as more CPU

I don't have the fortune to have anything like the volume of users you
have so I have no opinion to offer... However, I think the above
accurately summarises your options.  Others might help clarify the
likely bounds on performance of each solution and maintenance headaches
(eg some have had problems with maildir mounted on OCFS/GFS2 and fixed
that by moving to dbox, etc)
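Worth noting: Dovecot 2.0's director service was designed for exactly this kind of backend stickiness. A minimal sketch of its two core settings (addresses invented):

```
# sketch: on the proxy/director tier
director_servers = 10.0.0.1 10.0.0.2
director_mail_servers = 10.0.1.1 10.0.1.2 10.0.1.3
```

The director hashes each user to a consistent backend in director_mail_servers, so that user's indexes stay warm on one node.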

Please report on your results!  Good luck

Ed W



Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

2011-11-03 Thread Giulio Casella

This could be the problem.
Double check the time also on your host system(s), not only on guest.

Bye,
gc


On 03/11/2011 11.30, Maria Arrea wrote:

We use ntpd daemon, all our systems are configured identically. Another thing, 
this is a VM on vmware vsphere 4.1

  Regards

  Maria

- Original Message -
From: Ed W
Sent: 11/03/11 11:25 AM
To: dovecot@dovecot.org
Subject: Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

  Hi
> We are running dovecot 2.0.13 with mdbox+zlib on RHEL 5.7 x64, ext4. We use NTP.

Quick check, but by "NTP" you mean the background daemon and you don't have some 
cron job running ntpdate or similar every so often? No idea, but since it looks 
like a clock related curiousity, then knowing if the clock is spot on accurate 
or drifting would be interesting to know? Simple comparison against other 
machines over a similar period to you having problems might be accurate enough? 
Good luck Ed W



--
Giulio Casella giulio at dsi.unimi.it
System and network manager
Computer Science Dept. - University of Milano


Re: [Dovecot] Imap/pop gateway

2011-11-03 Thread Maria Arrea
If you are going to use an IMAP proxy for security reasons, consider using 
software DIFFERENT from the one serving your real mailboxes. If you use dovecot 
in your backend, you could use perdition in the frontend.

 Regards

 Maria

- Original Message -
From: Ed W
Sent: 11/03/11 11:31 AM
To: Dovecot Mailing List
Subject: Re: [Dovecot] Imap/pop gateway

 On 31/10/2011 22:20, nuno marques wrote:
> Hello,
> How can i make a imap/pop gateway? that is, putting the mailboxes on a server 
> on the internal network and put the gateway in the dmz.

The question isn't entirely clear, but I *think* you just want to use the 
normal "proxy" feature of dovecot. This accepts connections on one machine, 
examines them until the end of the auth stage and passes them onto some other 
machine based on the results of the auth process. Also there are other imap/pop 
proxies such as nginx. That said I'm not sure how much security this really 
buys you versus port forwarding POP/IMAP ports to your real server? If the 
proxy machine were to get hacked (over imap?) then the same hack can jump from 
the proxy to the real server. Also your only exposure in each case is via 
POP/IMAP, which means you would be mainly chasing buffer overflow 
vulnerabilities and the like. These can also be mitigated by chrooting the 
server machine (please consider virtualisation options, it's usually 
simpler/faster/saner, eg see my favourite: linux-vservers), MAC controls on the 
dovecot process (grsec/selinux, etc), and compiler extensions (gcc hardened). 
Good luck Ed W


Re: [Dovecot] Imap/pop gateway

2011-11-03 Thread Ed W
On 31/10/2011 22:20, nuno marques wrote:
>
>
>
> Hello,
> How can i make a imap/pop gateway? that is, putting the mailboxes on a server 
> on the internal network and put the gateway in the dmz.
>

The question isn't entirely clear, but I *think* you just want to use
the normal "proxy" feature of dovecot. This accepts connections on one
machine, examines them until the end of the auth stage and passes them
onto some other machine based on the results of the auth process

Also there are other imap/pop proxies such as nginx

That said I'm not sure how much security this really buys you versus
port forwarding POP/IMAP ports to your real server?  If the proxy
machine were to get hacked (over imap?) then the same hack can jump from
the proxy to the real server.  Also your only exposure in each case is
via POP/IMAP, which means you would be mainly chasing buffer overflow
vulnerabilities and the like.  These can also be mitigated by chrooting
the server machine (please consider virtualisation options, it's usually
simpler/faster/saner, eg see my favourite: linux-vservers), MAC controls
on the dovecot process (grsec/selinux, etc), and compiler extensions
(gcc hardened)

Good luck

Ed W


Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

2011-11-03 Thread Maria Arrea
We use ntpd daemon, all our systems are configured identically. Another thing, 
this is a VM on vmware vsphere 4.1

 Regards

 Maria

- Original Message -
From: Ed W
Sent: 11/03/11 11:25 AM
To: dovecot@dovecot.org
Subject: Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

 Hi
> We are running dovecot 2.0.13 with mdbox+zlib on RHEL 5.7 x64, ext4. We use NTP.

Quick check, but by "NTP" you mean the background daemon and you don't have 
some cron job running ntpdate or similar every so often? No idea, but since it 
looks like a clock related curiousity, then knowing if the clock is spot on 
accurate or drifting would be interesting to know? Simple comparison against 
other machines over a similar period to you having problems might be accurate 
enough? Good luck Ed W


Re: [Dovecot] Dot Lock timestmap, users disconnections from roundcube

2011-11-03 Thread Ed W
Hi

>  We are running dovecot 2.0.13 with mdbox+zlib on RHEL 5.7 x64, ext4. We use 
> NTP.

Quick check, but by "NTP" you mean the background daemon and you don't
have some cron job running ntpdate or similar every so often?

No idea, but since it looks like a clock related curiousity, then
knowing if the clock is spot on accurate or drifting would be
interesting to know?  Simple comparison against other machines over a
similar period to you having problems might be accurate enough?

Good luck

Ed W


[Dovecot] Dot Lock timestmap, users disconnections from roundcube

2011-11-03 Thread Maria Arrea
Hello.

 We are running dovecot 2.0.13 with mdbox+zlib on RHEL 5.7 x64, ext4. We use 
NTP. Indexes are in an iSCSI raid 10, mailboxes in raid 5. No NFS. We have 
detected that sometimes all users get disconnected from roundcube at the same 
time. In the dovecot logs we see hundreds of lines like this:

 Nov 3 09:23:07 buzon dovecot: imap(mcrivero@mydomain): Warning: Created 
dotlock file's timestamp is different than current time (1320308587 vs 
1320308542): /buzones/mydomain/03/67/mcrivero/subscriptions
 Nov 3 09:23:07 buzon dovecot: imap(mcrivero@mydomain): Connection closed 
bytes=0/295
 Nov 3 09:23:07 buzon dovecot: imap(delolmo@mydomain): Warning: Created dotlock 
file's timestamp is different than current time (1320308587 vs 1320308542): 
/buzones/mydomain/15/77/delolmo/subscriptions
 Nov 3 09:23:07 buzon dovecot: imap(delolmo@mydomain): Connection closed 
bytes=0/295



 I have been googling but I only see problems with remote NFS, and our setup 
does not use NFS. I give you doveconf -n output & mount options; if more info 
is needed, please ask.

 doveconf -n output


 # 2.0.13: /etc/dovecot/dovecot.conf
 # OS: Linux 2.6.18-274.el5 x86_64 Red Hat Enterprise Linux Server release 5.7 
(Tikanga) ext4
 auth_cache_negative_ttl = 10 secs
 auth_cache_size = 10 M
 auth_cache_ttl = 2 mins
 auth_master_user_separator = *
 auth_mechanisms = plain login
 auth_worker_max_count = 3500
 base_dir = /var/run/dovecot/
 default_client_limit = 5000
 default_process_limit = 6500
 disable_plaintext_auth = no
 imap_client_workarounds = tb-extra-mailbox-sep delay-newmail tb-lsub-flags
 lda_mailbox_autocreate = yes
 lda_mailbox_autosubscribe = yes
 mail_fsync = never
 mail_gid = entrega
 mail_home = /buzones/mydomain/%2.26Hn/%2.200Hn/%n/home_usuario/
 mail_location = 
mdbox:/buzones/mydomain/%2.26Hn/%2.200Hn/%n:INDEX=/indices_dovecot/indices/%2.26Hn/%2.200Hn/%n
 mail_max_userip_connections = 15000
 mail_plugins = " zlib acl quota autocreate"
 mail_uid = entrega
 managesieve_notify_capability = mailto
 managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date
 mdbox_rotate_interval = 1 days
 mdbox_rotate_size = 60 M
 passdb {
 args = /etc/dovecot/dovecot-ldap.conf
 driver = ldap
 }
 passdb {
 args = /etc/usuario_maestro.txt
 driver = passwd-file
 master = yes
 }
 passdb {
 args = /etc/dovecot/dovecot-ldap.conf
 driver = ldap
 }
 plugin {
 acl = vfile
 autocreate = SPAM
 autocreate2 = Sent
 autocreate3 = Drafts
 autocreate4 = Trash
 autosubscribe = SPAM
 autosubscribe2 = Sent
 autosubscribe3 = Drafts
 autosubscribe4 = Trash
 lda_mailbox_autosubscribe = yes
 quota = dict:Cuota de usuario::file:/buzones/cuotas/%n
 quota_rule2 = Trash:storage=+10%%
 quota_warning = storage=90%% aviso_cuota 90 %u
 sieve = /buzones/mydomain/%2.26Hn/%2.200Hn/%n/home_usuario/dovecot.sieve
 sieve_dir = /buzones/mydomain/%2.26Hn/%2.200Hn/%n/home_usuario/sieve/
 zlib_save = gz
 zlib_save_level = 9
 }
 pop3_no_flag_updates = yes
 protocols = pop3 imap sieve
 service anvil {
 client_limit = 25000
 }
 service auth {
 client_limit = 28000
 unix_listener auth-master {
 user = entrega
 }
 unix_listener auth-userdb {
 user = entrega
 }
 user = root
 }
 service aviso_cuota {
 executable = script /usr/local/bin/quota-warning.sh
 unix_listener aviso_cuota {
 mode = 0666
 }
 user = entrega
 }
 service imap-login {
 executable = /usr/libexec/dovecot/imap-login
 group = dovenull
 service_count = 0
 }
 service imap {
 executable = /usr/libexec/dovecot/imap
 process_limit = 6000
 }
 service managesieve-login {
 executable = /usr/libexec/dovecot/managesieve-login
 inet_listener sieve {
 port = 2000
 }
 process_limit = 2000
 }
 service managesieve {
 executable = /usr/libexec/dovecot/managesieve
 process_limit = 5000
 }
 service pop3-login {
 executable = /usr/libexec/dovecot/pop3-login
 process_limit = 4000
 service_count = 0
 }
 service pop3 {
 executable = /usr/libexec/dovecot/pop3
 process_limit = 4000
 }
 ssl_ca = 

Re: [Dovecot] Thunderbird slow in talking with dovecot IMAP AND to sendmail

2011-11-03 Thread Ed W
On 25/10/2011 11:14, Linda Walsh wrote:
>
>
> I'm trying to find out what's causing this slowdown -- it's
> INTOLERABLE
>
> over 1 minute and less than 1% done. (400MB file)...
>
> After trying 3 times, I gave up and logged in using X to the server
> and ran Tbird from there
>
> Mail sent out in < 1 minute, though the copy to dovecot took about 50%
> longer.
>
> So...
>
> I looked at the network trace.
>
> and every frackin' body was using 4K packet sizes (at the application
> level!; the window size on TCP was over 64K... but no one was using
> it), which is especially galling with my network's MTU at 9K, since
> small packets are really bad on a 1Gb network.
>

Although larger packets might be helpful, I don't see why you shouldn't
be getting much faster speeds even without them.  Even the 64K window,
whilst it looks too small, might be OK if your ping times are very low?

Something else is limiting your performance I think?

Ed W