Re: [Dovecot] Using LDAP for Dovecot extra/regular fields

2009-03-13 Thread Jack Stewart


Just to answer some of my own questions and close the loop in case 
anyone sees this message down the road. All of it makes sense when you 
consider SQL and the implications.


With a multivalue attribute, the last value is returned.

There is no host failover, which is likely a good thing when you consider 
all of the possible proxy configurations - you can get yourself into a 
lot of trouble with this.


If the host attribute exists in LDAP, then it is flagged as true even if 
the attribute is boolean and set to FALSE. This is not a big deal and 
makes sense when you consider SQL support, where a NULL covers this case.


A standard LDAP objectclass does not exist - since there are so many 
possible configurations, it is unlikely that you could make everyone happy.
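
For anyone wiring this up later, here is a minimal sketch of the passdb 
LDAP attribute mapping being discussed (the mailHost/mailProxy attribute 
names and the base/filter are hypothetical; substitute your own schema). 
Note the caveats above: any value present in mailProxy flags proxying on, 
and only the last value of a multivalued mailHost is used.

# /etc/dovecot.conf-ldap (sketch)
hosts = ldap.example.com
base = ou=people,dc=example,dc=com
scope = subtree
pass_filter = (&(objectClass=posixAccount)(uid=%u))
# ldapAttribute=passdbField; host and proxy are the extra fields above
pass_attrs = uid=user,userPassword=password,mailHost=host,mailProxy=proxy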


---Jack

Jack Stewart wrote:


Hi,

We're moving to a dovecot proxy / server configuration in order to make 
sure that users go to a specific server.


If someone has used LDAP for this, there are a few things that I 
wish to verify.


  Dovecot does not verify the type of the LDAP attribute, only that the 
returned value works.


  If a boolean is used for a yes/no field, then FALSE sets the field to 
no and TRUE sets the field to yes.


  If a multivalue attribute is used for a single value field, the last 
returned value for the LDAP lookup is used in that field (i.e. host will 
use the last value).


  If a string attribute is returned for a yes/no field and has any 
value, then the associated field is set to true.


  There is no automatic failover with the host field so if the remote 
host is down, the IMAP connection no longer works.


I'm fairly sure of all of these except for the boolean.

Now this is just due diligence. I don't know that turning on/off or 
switching LDAP attributes is the right way to handle failover, but 
it might work for a phased rollout.


My feeling is the best configuration will be using a secondary IP 
address that has to be manually turned on for a host after a reboot or 
shutdown. This creates a poor man's "fencing". The secondary can either 
be brought up on another host or handled via a load balancer with a 
DSR/backup server.


As a practical matter, it is probably worth setting sensible attributes 
for each field (e.g. numeric for host, boolean for proxy/proxy_maybe, 
etc.). Any interest in registering an LDAP object class for dovecot?


---Jack



[Dovecot] Using LDAP for Dovecot extra/regular fields

2009-03-08 Thread Jack Stewart


Hi,

We're moving to a dovecot proxy / server configuration in order to make 
sure that users go to a specific server.


If someone has used LDAP for this, there are a few things that I 
wish to verify.


  Dovecot does not verify the type of the LDAP attribute, only that 
the returned value works.


  If a boolean is used for a yes/no field, then FALSE sets the field to 
no and TRUE sets the field to yes.


  If a multivalue attribute is used for a single value field, the last 
returned value for the LDAP lookup is used in that field (i.e. host will 
use the last value).


  If a string attribute is returned for a yes/no field and has any 
value, then the associated field is set to true.


  There is no automatic failover with the host field so if the remote 
host is down, the IMAP connection no longer works.


I'm fairly sure of all of these except for the boolean.

Now this is just due diligence. I don't know that turning on/off or 
switching LDAP attributes is the right way to handle failover, but 
it might work for a phased rollout.


My feeling is the best configuration will be using a secondary IP 
address that has to be manually turned on for a host after a reboot or 
shutdown. This creates a poor man's "fencing". The secondary can either 
be brought up on another host or handled via a load balancer with a 
DSR/backup server.


As a practical matter, it is probably worth setting sensible attributes 
for each field (e.g. numeric for host, boolean for proxy/proxy_maybe, 
etc.). Any interest in registering an LDAP object class for dovecot?


---Jack



Re: [Dovecot] great disappearing email mystery

2009-02-11 Thread Jack Stewart



dhottin...@harrisonburg.k12.va.us wrote:

Quoting Jack Stewart :




dhottin...@harrisonburg.k12.va.us wrote:



On Wed, 2009-02-11 at 17:27 -0500, dhottin...@harrisonburg.k12.va.us
wrote:

Have there been any issues with dovecot and using outlook express
(imap) as an email client?  I have had a couple of users come up with
random missing emails.  I'm trying to figure out if it is user error,
or something wacky in my mailserver.  I can't find anything telling in
the maillog files or my messages.  Dovecot version is 1.0.3.  It's ok to
reply to me, I get list messages in digest form.




Is it missing in the mail server or just on the client? What does your
server layout look like?

We had a similar issue with Outlook and AppleMail where the uidlist
would change just enough to wipe out their local index. People would
tell us that they could see the E-mail in webmail but not in their
client.

We haven't had the issue in a long while, but it was painful while it
lasted. The key to resolving the issue had to do with upgrades to the
dovecot version - we currently just made the jump to 1.1.11 and it
seems to be working well. She's been up almost a year now without a 
reboot, with close to 750 accounts.


---Jack


Jack,
My server is a Linux box running sendmail, procmail, and dovecot; I use 
LDAP on the backend.  Most of my clients use Horde for webmail, but I 
have some that use Outlook.  The emails are missing, missing - neither 
on the client nor the server.  Thing is, I looked on my backups (7 days 
worth) and the supposedly missing emails weren't there either, so it's 
hard to tell how long they have been missing.  Also, this server was put 
online 2 years ago, so all mailboxes were migrated from the old server 
to the new one and renamed oldmail.  Nothing in there either.  Were 
there any gotchas on the upgrade?  I'm not one to upgrade unless there 
are security issues or problems, and my mailserver has been extremely 
stable.




That is a mystery - nothing to do with my environment.

If I understand you correctly, nothing has changed in two years and some 
users are now missing messages. Assuming that this is the case, 
my money is on the E-mail clients.




Re: [Dovecot] great disappearing email mystery

2009-02-11 Thread Jack Stewart



dhottin...@harrisonburg.k12.va.us wrote:



On Wed, 2009-02-11 at 17:27 -0500, dhottin...@harrisonburg.k12.va.us
wrote:

Have there been any issues with dovecot and using outlook express
(imap) as an email client?  I have had a couple of users come up with
random missing emails.  I'm trying to figure out if it is user error,
or something wacky in my mailserver.  I can't find anything telling in
the maillog files or my messages.  Dovecot version is 1.0.3.  It's ok to
reply to me, I get list messages in digest form.




Is it missing in the mail server or just on the client? What does your 
server layout look like?


We had a similar issue with Outlook and AppleMail where the uidlist 
would change just enough to wipe out their local index. People would 
tell us that they could see the E-mail in webmail but not in their client.


We haven't had the issue in a long while, but it was painful while it 
lasted. The key to resolving the issue had to do with upgrades to the 
dovecot version - we currently just made the jump to 1.1.11 and it seems 
to be working well.


---Jack


Re: [Dovecot] NFS - inotify vs kqueue

2009-02-05 Thread Jack Stewart



Timo Sirainen wrote:

On Thu, 2009-02-05 at 13:08 -0800, Jack Stewart wrote:
Anyone have pro/con experience with dovecot on the inotify/kqueue 
question when using NFS storage?


Inotify is for Linux, kqueue is for BSDs. Right? So I'd think there are
a lot of other issues if you're switching between Linux/BSDs..



That would be a problem :-) Oops.

Looks like I need to increase the priority of NFSv4 on the test queue 
for these machines.


---Jack





[Dovecot] NFS - inotify vs kqueue

2009-02-05 Thread Jack Stewart


Hi,

I've seen some chatter on NFS boards about kqueue being more reliable 
than inotify when used in NFSv3 and NFSv2. The chatter is a bit old so I 
don't know if it is true anymore.


Anyone have pro/con experience with dovecot on the inotify/kqueue 
question when using NFS storage?


I realize that kqueue is probably a bit slower and causes some delay 
with IDLE. Also, it may not really make any difference, which is why I 
ask the question.


---Jack


Re: [Dovecot] Error: Maximum number of mail processes exceeded (see max_mail_processes setting)

2009-02-05 Thread Jack Stewart

Timo Sirainen wrote:

On Thu, 2009-02-05 at 11:36 -0500, Stewart Dean wrote:
Question: Do you have to have a radically greater setting for maildir 
than for mbox?  I would think...
What sort of values are people using with both formats?  Sounds like a 
nasty thing that could bite one in the $%# come migration from mbox to 
maildir


No, there's pretty much no difference in fd usage between mbox and
maildir. The main problem is the Dovecot master process, since it uses
1-2 fds per child process.


It's more of a client/server activity and usage issue than anything 
else. Based on lsof on individual processes, there doesn't seem to be 
anything unique to maildir but I don't have any mbox or dbox experience.


O/S tweaks are not limited to dovecot; you get into these issues with 
databases and webservers. To be honest, I only knew to look into these 
issues because of settings needed for Oracle/MySQL/Apache/etc. servers. 
Tuning isn't limited to just ulimit. I know of some useful RHE settings, 
but not all.


---Jack


Re: [Dovecot] Error: Maximum number of mail processes exceeded (see max_mail_processes setting)

2009-02-05 Thread Jack Stewart

Frank Bonnet wrote:

Hello

I have this message repeated several times each *second* in 
/var/log/dovecot/dovecot.log


the max_mail_processes is set to 8192 and I can see an average of 500 
imap processes on the machine; I think there is a problem somewhere ...

Debian 64-bit, IBM X3650 biproc, 7 GB RAM, RAID5 disks, 2 gigabit 
ethernet ports bonded.


Dovecot 1.1.11 has been compiled from scratch on the machine

Thanks for any info.



Hi Frank,

Your system is plenty powerful - no issues there.

What are your settings in the init script? I found that putting in a 
ulimit -n 8192 and ulimit -f 16384 prior to invoking dovecot was 
worthwhile on my system.


I would first try tweaking these settings in a root shell and then 
invoke dovecot with a -c conf to make sure it is picking up the right 
conf file.
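
For reference, a sketch of what that init-script fragment might look 
like, using the values mentioned above (SysV-style script and the 
/opt/dovecot path from configs posted elsewhere on this list assumed):

# init script fragment (sketch)
ulimit -n 8192     # file descriptors
ulimit -f 16384    # max file size, in blocks
/opt/dovecot/sbin/dovecot -c /etc/dovecot.conf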


I'm sure the list will ask for the dovecot -n -c ... output as well.

---Jack



[Dovecot] Testing Suite (was [Dovecot-news] v1.1.10 released)

2009-01-26 Thread Jack Stewart


What ideas do people have on a good going forward framework/matrix?

Caltech, which largely translates to myself, will be going through a 
validation process before we upgrade. After reading the dovecot and some 
other wikis, I realize I can do better. It would be nice to be more general.


We are upgrading precisely because of race conditions. My latest joy: 
two stratum one clocks that are 1s apart.


Our server environment is pretty specific but there is a lot of 
diversity in our user base. The outliers are good test cases.


---Jack




Re: [Dovecot] DC testing observation and a question

2009-01-09 Thread Jack Stewart

Scott Silva wrote:

on 1-9-2009 10:16 AM Stewart Dean spake the following:


1) Watching the syslog maillog has been intriguing... different IMAP
clients show widely different use patterns.
a) Users running TBird and Seamonkey have 2-5 imap sessions (ps -aef |
grep ) *but* very little syslog activity... sparse occasional
logins and disconnects
b) Users running Exchange have only 1 imap session *but* every 5
minutes will generate login and disconnect messages (in and out in the
space of a second) for each folder.  So for a user with 22 folders,
there will be 44 syslog messages in the maillog every 5 minutes.
Just curious... any thoughts as to which is more efficient and by how much?

2) When I try to switch a MacMail client over, it sees the new mail, but
not the old mail in the INBOX.  How do I force re-indexing on the test
server?

By Exchange do you mean Outlook?

Outlook (and Outlook Express) have poorly written IMAP implementations, IMHO.
Outlook is first and foremost a client for an Exchange server, with somewhat
decent POP3 support. OE is just POP3 and buggy IMAP.
Later versions added HTML support mainly to access hotmail.

They both seem to poll each folder for info instead of using IMAP calls.



First with question #2, it isn't clear to me if you need to have the 
INDEX cache or dovecot-uidlist file rebuilt. Try removing one or the 
other with a test account and see what happens (start with INDEX first). 
If it is one or the other, the simplest thing might be to change the 
location of these files in your configuration file for the new server. 
Of course this will impact all of the clients (forcing them to rebuild 
their indexes). For your sake, I hope it is just the INDEX files.
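
A sketch of that test, assuming a maildir layout like the ones posted on 
this list (INDEX=/var/spool/dovecot/indexes/%1Ln/%Ln; 'testuser' is a 
placeholder):

# remove only the index cache for one test account first
rm -f /var/spool/dovecot/indexes/t/testuser/.INBOX/dovecot.index*
# if that isn't it, try the uidlist (clients will do a full resync)
# rm -f /var/spool/maildir/t/testuser/dovecot-uidlist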


Here is what I know (or at least think I know) about various E-mail 
clients; it should help to explain what you are seeing in the log 
files. It can be useful in getting your customers to have the 
appropriate configuration for their client.


Thunderbird/Seamonkey use IMAP IDLE when possible and often use one 
connection per monitored folder.


Depending on the version and configuration of Outlook/Outlook 
Express/Entourage, you often get a periodic full synchronization poll 
where the client checks the header of each message in the folder against 
its internal index. This happens even if it supports IDLE and is using 
IDLE on the INBOX. I have seen older versions of Outlook Express open an 
IMAP connection for every folder (subscribed or not) simultaneously when 
syncing (i.e. 10 folders, 10 connections).


More recent versions of MacMail like to keep a full copy of every 
message, including attachments, in all subscribed folders. This is so it 
can use Spotlight.


A Blackberry uses IDLE but, if you use more than one client at the same 
time, RIM suggests that you may wish to have the administrator turn off 
IDLE or have your Blackberry use POP. I am not kidding on this one. 
We have a Blackberry user where we found this change necessary.


Mutt and pine mostly behave, but it is worth changing their default 
settings in order to tune down the poll interval. rmail over stunnel has 
more problems with postfix than it does with dovecot.



So I recommend to our users that they check for new mail no more 
frequently than once every five minutes (once every ten minutes for 
MacMail). My excuse is that they don't want to stomp on their 
synchronization(s). They seem to buy this explanation. I check the log 
files every so often and give users who check for new mail every minute 
or less a stern finger wagging.


Hope this helps.

---Jack







Re: [Dovecot] Sudden, large numbers of "Timeout while waiting for lock for transaction log ..."

2009-01-09 Thread Jack Stewart

Jack Stewart wrote:





Yes, the indexes are also on NFS.

The locking is fcntl() - the default.


I'm guessing that's the problem. NFS locking seems to break/hang
randomly sometimes. Can you somehow restart the NFS server locking
daemon?





I changed /etc/hosts.allow so that any connection from the server is 
allowed (i.e. ALL: server_ip). This may be specific to Red Hat 
Enterprise 5. Since making this change, the error message in the log 
files has gone away - not only for our IMAP servers but also for some 
Xen servers that were logging the same errors.


The core problem appears to be that portmapper/nlockmgr uses tcpwrappers 
or /etc/hosts.allow to determine if connections are accepted.


On occasion, the NFS server initiates a connection to nlockmgr on the 
client. It appears this communication is used to make sure locking 
information is accurate.


I have not found a bulletproof method of restricting the ports for 
nlockmgr, so 'ALL: server_ip' ensures that a communication to nlockmgr 
is possible. iptables rules still apply, so the risk to the system is 
minimal.
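
For anyone hitting the same thing, the change itself is a one-liner 
(server_ip is your NFS server's address):

# /etc/hosts.allow - let the NFS server call back to the client's nlockmgr
ALL: server_ip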


A strange problem, but I figure that people should know. Not as annoying 
as the out of the box telnet vulnerability in Solaris 10, but close.


---Jack


Re: [Dovecot] Sudden, large numbers of "Timeout while waiting for lock for transaction log ..."

2009-01-07 Thread Jack Stewart



Timo Sirainen wrote:

On Wed, 2009-01-07 at 08:32 -0800, Jack Stewart wrote:


Timo Sirainen wrote:

On Tue, 2009-01-06 at 23:11 -0800, Jack Stewart wrote:
Jan  6 01:21:02 earth-griffen dovecot: IMAP(zabala): Timeout while 
waiting for lock for transaction log file 
/var/spool/dovecot/indexes/z/zabala/.INBOX/dovecot.index.log

This is the main problem. So indexes are also on NFS? What locking are
they using (dovecot -n output)?


Yes, the indexes are also on NFS.

The locking is fcntl() - the default.


I'm guessing that's the problem. NFS locking seems to break/hang
randomly sometimes. Can you somehow restart the NFS server locking
daemon?



Restarting the server would be tough, but I'll see what can be done. 
It's an EMC Celerra with head failover, so in theory it should be 
straightforward.


There is another red flag. The Celerra is configured for snapshots and 
the problem started around the time one of the snapshots occurred. I'm 
fairly sure that the snapshots use file or volume locking.


Re: [Dovecot] Sudden, large numbers of "Timeout while waiting for lock for transaction log ..."

2009-01-07 Thread Jack Stewart



Timo Sirainen wrote:

On Tue, 2009-01-06 at 23:11 -0800, Jack Stewart wrote:
Jan  6 01:21:02 earth-griffen dovecot: IMAP(zabala): Timeout while 
waiting for lock for transaction log file 
/var/spool/dovecot/indexes/z/zabala/.INBOX/dovecot.index.log


This is the main problem. So indexes are also on NFS? What locking are
they using (dovecot -n output)?



Yes, the indexes are also on NFS.

The locking is fcntl() - the default.

Below is the dovecot -n output.

One of the guys that I work with appears to have cleared up the problem 
for now - I think it was via manually deleting the indexes and killing 
the processes that were logging errors, but I won't know for sure until 
he gets in.




# 1.1.3: /etc/dovecot.d/imap-server-its-caltech-edu.cfg
base_dir: /var/run/dovecot/imap-server-its/
syslog_facility: local4
protocols: imap imaps
listen: *:10143
ssl_listen: *:10993
ssl_ca_file: /etc/pki/dovecot/certs/caltech-ca.pem
ssl_cert_file: /etc/pki/dovecot/certs/imap-server.its.caltech.edu.pem
ssl_key_file: /etc/pki/dovecot/private/imap-server.its.caltech.edu.key
disable_plaintext_auth: yes
login_dir: /var/run/dovecot/imap-server-its//login
login_executable: /opt/dovecot/libexec/dovecot/imap-login
login_processes_count: 16
login_max_processes_count: 2048
max_mail_processes: 4096
mail_max_userip_connections: 1024
verbose_proctitle: yes
mail_location: 
maildir:/var/spool/maildir/%1Ln/%Ln:INDEX=/var/spool/dovecot/indexes/%1Ln/%Ln

mail_debug: yes
mail_full_filesystem_access: yes
mmap_disable: yes
mail_nfs_storage: yes
mail_nfs_index: yes
mail_plugins: fts fts_squat
imap_capability: IMAP4rev1 SASL-IR SORT THREAD=REFERENCES MULTIAPPEND 
UNSELECT LITERAL+ CHILDREN NAMESPACE LOGIN-REFERRALS UIDPLUS 
LIST-EXTENDED I18NLEVEL=1

imap_client_workarounds: delay-newmail netscape-eoh
pop3_reuse_xuidl: yes
pop3_uidl_format: UID%v-%u
namespace:
  type: private
  separator: .
  prefix: Mail.
  inbox: yes
  list: yes
  subscriptions: yes
auth default:
  mechanisms: plain login
  verbose: yes
  passdb:
driver: ldap
args: /etc/dovecot.conf-ldap
  userdb:
driver: static
args: uid=vmail gid=mail home=/var/spool/maildir/%1Ln/%Ln
  socket:
type: listen
master:
  path: /var/run/dovecot/imap-server-its/auth-master
  mode: 384
  user: vmail
  group: mail


[Dovecot] Sudden, large numbers of "Timeout while waiting for lock for transaction log ..."

2009-01-06 Thread Jack Stewart


Hi,

Up until yesterday, our environment, which consists of an NFS maildir 
file store with multiple front end servers, was working fine. We've 
verified that the server clocks and machine clocks are in sync.



Starting yesterday afternoon, we are getting ~850 log entries of the 
form 'Timeout while waiting for lock for transaction log file' or 'Our 
dotlock file was modified, assuming it wasn't overridden (kept it 180 sec)'.


Based on a packet capture, just one of these index files shows 28553 
GETATTR queries in one minute.
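
If you don't have a packet capture handy, a rough way to see the same 
thing from the client side (a sketch; nfsstat is standard on RHE):

# sample the client-side NFS op counters a minute apart and compare
nfsstat -c > /tmp/nfs.1 ; sleep 60 ; nfsstat -c > /tmp/nfs.2
diff /tmp/nfs.1 /tmp/nfs.2    # look at the getattr delta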



As this happened at exactly the same time on all of our servers, it is 
pretty clear that the back-end system (or network) is a major factor, 
although nobody will confess that they made any changes.


It would be helpful, and very appreciated, to get more information about 
what this might be so that we can nudge the correct people to undo 
whatever it is that they didn't do.


We are currently running dovecot 1.1.3



Jan  6 01:21:02 earth-griffen dovecot: IMAP(zabala): Timeout while 
waiting for lock for transaction log file 
/var/spool/dovecot/indexes/z/zabala/.INBOX/dovecot.index.log
Jan  6 01:21:02 earth-griffen dovecot: IMAP(zabala): Our dotlock file 
/var/spool/maildir/z/zabala/dovecot-uidlist.lock was deleted (kept it 
180 secs)
Jan  6 01:21:02 earth-griffen dovecot: IMAP(zabala): Timeout while 
waiting for lock for transaction log file 
/var/spool/dovecot/indexes/z/zabala/.INBOX/dovecot.index.log
Jan  6 01:21:02 earth-griffen dovecot: IMAP(zabala): Our dotlock file 
/var/spool/maildir/z/zabala/dovecot-uidlist.lock was modified 
(1231233482 vs 1231233662), assuming it wasn't overridden (kept it 180 secs)
Jan  6 01:21:02 earth-griffen dovecot: IMAP(zabala): Timeout while 
waiting for lock for transaction log file 
/var/spool/dovecot/indexes/z/zabala/.INBOX/dovecot.index.log
Jan  6 01:21:02 earth-griffen dovecot: IMAP(zabala): Our dotlock file 
/var/spool/maildir/z/zabala/dovecot-uidlist.lock was modified 
(1231233482 vs 1231233662), assuming it wasn't overridden (kept it 180 secs)
Jan  6 01:21:04 fire-griffen dovecot: IMAP(sukwon): Timeout while 
waiting for lock for transaction log file 
/var/spool/dovecot/indexes/s/sukwon/.Trash/dovecot.index.log






--
Jack Stewart
Academic Computing Services, IMSS,
California Institute of Technology
jstew...@caltech.edu
626-395-4690 office


Re: [Dovecot] cannot delete emails in inbox

2009-01-04 Thread Jack Stewart

JANE CUA wrote:
I am working on upgrading our current squirrelmail and replacing uw-imap with dovecot.  I have what looks like a successful installation.  My current setup is:
Centos 5.2

sendmail 8.13.8
dovecot 1.1.17 (imap)
squirrelmail 1.4.17

I can send and receive email fine.  However, when I try to delete an email in my 
Inbox, it doesn't get deleted.  It only sends a copy to the Trash folder, but 
the email is still in the Inbox.  I can purge and delete the contents in my 
Trash and Sent folders, no problem.  I am not sure if this is a Squirrelmail 
issue or a Dovecot issue.  Please advise.  Thank you -- Jane




What is the mtime of your dovecot.index.cache file? Is it significantly 
older than the last new message?


---Jack


Re: [Dovecot] NFS, IDLE, and index locking

2008-12-31 Thread Jack Stewart
..
On Wed, 31 Dec 2008 10:11:08 +0200 "Timo Sirainen"  wrote:
>On Dec 31, 2008, at 4:24 AM, Jack Stewart wrote:
>
>> The attribute cache is the default (60 seconds) which works well  
>> enough in a polling setting but may be a problem with IDLE (30  
>> second check).
>..
>> mail_nfs_index = yes
>> mail_nfs_storage = yes
>
>What OS are you using? With mail_nfs_*=yes Dovecot is supposed to  
>flush attribute cache whenever necessary so it shouldn't make much of  
>a difference what value you set attribute cache to.

We're using Redhat Enterprise 5.

The maildir and dovecot uidlist files work great - the problem only 
impacts the index cache for a handful of users.

There are enough fixes from 1.1.4 to 1.1.7+ that this problem may already 
be addressed. I noticed a flush patch targeting Solaris/Linux and also a 
lock/race condition patch.

It sounds like the attribute cache is unlikely to be an issue, so I'll 
spend some more debug time on this. It would be good, if possible, to 
create a script that can reliably and quickly reproduce the problem. If 
so, then it is strace, kernel debug, and incremental patch time.

---Jack





[Dovecot] NFS, IDLE, and index locking

2008-12-30 Thread Jack Stewart


Hi,

I'll try to keep this short. My primary question is: what, in people's 
experience, are the best configuration settings to avoid potential 
NFS cache locking issues in an interesting heterogeneous environment? We 
appear to have a workaround for some locking issues we have seen, which 
is turning off IDLE. Upgrading our version of dovecot from 1.1.4 is 
likely to address some core issues, but I want to get the system setup 
in tune first. There are some things in our setup that seem dumb, and I 
would like to make sure they are dumb.


As far as our environment goes, it isn't unusual for our users to 
connect with multiple different clients from multiple locations. Some of 
these clients behave well, and some not so well. In addition we have 
multiple distributed servers with multiple instances. Generally the 
connection from one of their clients/desktops goes to the same server 
but their different desktops may connect to different servers.


If anyone has a similar enough environment, do you find it better to 
have the hard link setting off or is it better for you to turn it on?


The attribute cache is the default (60 seconds) which works well enough 
in a polling setting but may be a problem with IDLE (30 second check). 
Do you find it better to bump up the IDLE check interval or drop the 
attribute cache? Our cache and maildir are on separate NFS volumes so we 
can change one without changing the other.


Our tcp timeout setting is too long (it is the default). Do you find 
180-300 seconds more reasonable?
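
For concreteness, a sketch of the knobs in question as Linux NFS mount 
options (the filer path is a placeholder; actimeo covers the 
acregmin/acdirmin family, and the values here are illustrative, not 
recommendations):

# /etc/fstab (sketch)
filer:/vol/indexes  /var/spool/dovecot/indexes  nfs  rw,hard,intr,noatime,actimeo=15  0 0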



---Jack

To give some context, we have a handful of users with intermittent 
cache locking where the dovecot.index.cache file would simply not 
update. One of our internal people managed to get this problem when he 
was using pine (non-IDLE) with gnubiff (IDLE) from the same desktop. We 
haven't had any alarms since taking the brute force approach of turning 
off IDLE.


Unfortunately other projects are inhibiting my capacity for additional 
debugging at this time. Also, it isn't clear to me that we will learn 
anything useful since the servers do not appear to be tuned and they are 
only running dovecot 1.1.4.


Just for reference, below is our dovecot configuration.

disable_plaintext_auth = yes
base_dir= /var/run/dovecot/imap-server-its/
protocols   = imap imaps
syslog_facility = local4
shutdown_clients = yes

ssl_cert_file   = xxx.pem
ssl_key_file= yyy.key
ssl_ca_file = zzz.pem

mail_location = 
maildir:/var/spool/maildir/%1Ln/%Ln:INDEX=/var/spool/dovecot/indexes/%1Ln/%Ln


login_greeting_capability = no
verbose_proctitle = yes
mail_debug = yes
imap_capability = "IMAP4rev1 SASL-IR SORT THREAD=REFERENCES MULTIAPPEND 
UNSELECT LITERAL+ CHILDREN NAMESPACE LOGIN-REFERRALS UIDPLUS 
LIST-EXTENDED I18NLEVEL=1"


# MAILDIR Performance settings
maildir_copy_with_hardlinks = yes
maildir_copy_preserve_filename = no

# Handling web logins & imap traffic
login_processes_count = 16
login_max_processes_count = 2048
max_mail_processes = 4096
mail_max_userip_connections = 2000

# NFS settings
mmap_disable = yes
fsync_disable = no
mail_nfs_index = yes
mail_nfs_storage = yes
dotlock_use_excl = yes

lock_method = fcntl


protocol imap {
  listen = *:10143
  ssl_listen = *:10993
  imap_client_workarounds = delay-newmail outlook-idle netscape-eoh
}

## Authentication processes

namespace private {
  separator = .
  prefix = Mail.
  inbox = yes
}

auth_verbose = yes

auth default {
  mechanisms = plain login

  passdb ldap {
args = /etc/dovecot.conf-ldap
  }
  userdb static {
args = uid=vmail gid=mail home=/var/spool/maildir/%1Ln/%Ln
  }

  socket listen {
master {
  path = /var/run/dovecot/imap-server-its/auth-master
  user = vmail
  group = mail
}
  user = root
}




Re: [Dovecot] MySQL as a storage only.?

2008-12-23 Thread Jack Stewart


Neil wrote:

On Tue, Dec 23, 2008 at 2:20 AM, Timo Sirainen  wrote:

On Dec 23, 2008, at 4:51 AM, R A wrote:

Especially if you try
to implement cloud-like services, where you have the possibility of
links temporarily going down between servers, and mail can come in to
any point, and be retrieved or moved at any point.

You really need transactions then, to track every mail's change in time,
and to replicate those when you get connectivity back. You "can"
possibly do it by tracking dovecot logs and do the replication yourself
with scripts, but using a database would probably be easier here.

I've also planned easy replication support for Dovecot. Also I don't think
doing the SQL replication correctly and without losing any data on error
conditions is as easy as you think.


+1
Needless to say, replication would be _very_ useful...


At least in MySQL 5, replication is very difficult. Based on my 
experience with amavisd, master <-> master replication does not work if 
you have foreign key constraints. Master <-> slaves can have issues with 
high activity, as a key on the master might get updated while a search 
is happening on the slave (again, this is with foreign key constraints). 
Then you'll probably need InnoDB for performance, so backups become more 
challenging. Lastly, disk usage triples.


Hate to be a wet blanket, but this is what I've seen. If you don't need 
the constraints, the problem becomes more manageable. Still, your safest 
bet for replication at this time is to use the slave as a backup with 
some sort of auto-promotion mechanism. The network master daemon (nmdb?) 
looks promising for straightforward databases.


Don't forget blob sizing.

I'm not sure how Oracle is for replication. Setup/configuration does 
not sound simple from the DBAs I've talked to, although RAC looks decent.


MySQL does work well for index problems (i.e. searches) where the index 
can be reconstructed if there is a failure and the searching process 
doesn't seize up on a failure.


Just my opinion and warning.

---Jack



Re: [Dovecot] Is there a straight forward way to associate login with imap process

2008-12-22 Thread Jack Stewart



Jack Stewart wrote:



Timo Sirainen wrote:


On Dec 22, 2008, at 9:01 PM, Jack Stewart wrote:



Hi,




What kind of a locking issue? Hangs?




The clients are hanging. There are at least a couple of different types 
of locking issues. In both cases the dovecot.index.cache file does not 
update. Based on lslk, in one case the lock on the file appears 
persistent, but not in the others. Removing the dovecot.index.cache file 
and sending a TERM signal to the locking process (or all processes) 
removes the issue for the user.





As a quick follow-up to the locking issue, we found someone who has a 
reasonably repeatable configuration. His main client is pine, and 
turning off IDLE in gnubiff appears to have solved his locking problem.


I'm not convinced this is the only (or core) issue but we've turned off 
IDLE (advertisement) for now. It seems to be working.


This just might be a helpful stopgap for someone else, so I'm posting, 
although I doubt that anyone else is in a similar setup.





Re: [Dovecot] Is there a straight forward way to associate login with imap process

2008-12-22 Thread Jack Stewart



Timo Sirainen wrote:


On Dec 22, 2008, at 9:01 PM, Jack Stewart wrote:



Hi,

Is there a way to associate a user's login (imap-login) process 
with the user's 'imap [' process? We are trying to lock down an issue 
to make sure we fully understand it. /proc and shared memory tools 
weren't particularly useful.


Not really with v1.1, but with v1.2 you can use %e variable in 
login_log_format_elements which expands to mail process PID. I guess you 
could also try if the change happens to apply cleanly to v1.1:


http://hg.dovecot.org/dovecot-1.2/rev/29b623366e1e

Just for context, we are having an intermittent index cache locking 
issue that appears to impact only a subset of our users.


What kind of a locking issue? Hangs?



Thank you - I'll try applying the patch to our 1.1.7 tests but it may be 
a few weeks before I can get back to everyone on it.
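
For reference, a sketch of what using %e might look like once the patch 
is in (assuming the v1.2 syntax Timo describes; the surrounding elements 
here are the stock ones, quoted from memory):

login_log_format_elements = user=<%u> method=%m rip=%r lip=%l mpid=%e %c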


The clients are hanging. There are at least a couple of different types 
of locking issues. In both cases the dovecot.index.cache file does not 
update. Based on lslk, in one case the lock on the file appears 
persistent, but not in the others. Removing the dovecot.index.cache file 
and sending a TERM signal to the locking process (or all processes) 
removes the issue for the user.


I don't have enough information yet to lock this down. We haven't 
reproduced it reliably yet, but we're getting closer.


---Jack






[Dovecot] Is there a straight forward way to associate login with imap process

2008-12-22 Thread Jack Stewart


Hi,

Is there a way to associate a user's login (imap-login) process with 
the user's 'imap [' process? We are trying to lock down an issue to 
make sure we fully understand it. /proc and shared memory tools weren't 
particularly useful.


Just for context, we are having an intermittent index cache locking 
issue that appears to impact only a subset of our users. The 
index files are on NFS. It may be fixed by a dovecot patch (we are at 
1.1.3), adjusting our tcp_timeout on the servers, upgrading our 
circa-2000 load balancer, the type and configuration of the multiple 
simultaneous persistent clients that the user is using, etc.


lslk, alert logs, strace, testing the most recent 1.1.X version, and 
file mtimes are giving us some information, but we want to make sure 
everything is covered.


Many thanks in advance.

---Jack

--
Jack Stewart
California Institute of Technology
jstew...@caltech.edu / www.imss.caltech.edu




Re: [Dovecot] Errors on high load

2008-09-16 Thread Jack Stewart


Although my high load errors were different, the following might help. 
Once I did the following system tuning, the errors went away (including 
the infamous backtrace/maildir_transaction...). The following parameters 
were cranked up (for dovecot, system, & mail):


  open files => 32768
  locks => 32768
  maxsyslogins => 16384
  maxlogins => 4096
  max_user_instances => 2048

In RHE, the first four are in limits.conf and the last is in 
/proc/sys/fs/inotify/max_user_instances. Probably not all are needed (in 
particular maxlogins), but open files definitely made a difference - 
lsof was showing 24K files for dovecot & mail. My bottleneck was a piece 
of old network gear, and once the traffic settled down, open files 
dropped to 4K (and load from 200 to 0.5).
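
In case it saves someone a lookup, a sketch of where those numbers land 
on RHE (limits.conf(5) syntax; the inotify bump shown here does not 
survive a reboot unless you persist it via sysctl):

# /etc/security/limits.conf
*    -    nofile        32768
*    -    locks         32768
*    -    maxsyslogins  16384
*    -    maxlogins     4096

# runtime inotify bump
echo 2048 > /proc/sys/fs/inotify/max_user_instances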


These might not address your particular problem but system tuning could 
address the memory corruption error. These settings also got rid of my 
error messages under low load.


It is also probably worthwhile to shut the system down and run a memory 
test (e.g. memtest86+ - or, if it is under maintenance, just get it replaced).


---Jack


Allan Cassaro wrote:

On Tue, Sep 16, 2008 at 10:37 AM, Charles Marcus
<[EMAIL PROTECTED]> wrote:

On 9/16/2008, Allan Cassaro ([EMAIL PROTECTED]) wrote:

Hi, always when my server is on high load I saw this errors on logs:

dovecot -n output / version info?


Ouch!
Sorry! I forgot this...



Re: [Dovecot] NFS performance and 1.1.3 - what can you (unofficially) get away with?

2008-09-10 Thread Jack Stewart


Thanks. I would like to give the patches a try.

I've removed all of the redhat patches except for the one that tells 
dovecot where to find the certs (why does redhat use pki? why?), but I 
haven't installed this version yet. Are there any build/configure 
twiddles that might help?


Just as an update, I tried 1.1.3 with the nfs settings below. At first, 
it was horrid. Then I deleted the old index cache files. Now it seems to 
be working well (with webmail - major service is not yet moved). The 
load is under 1.


I've managed to descend into the twilight zone yet again. Just rolling 
one server and just rolling webmail for now - we'll see where it goes.


---Jack

Timo Sirainen wrote:

On Wed, 2008-09-10 at 12:20 -0700, Jack Stewart wrote:
The performance hit was bad. When I tried "mail_nfs_index = yes" the 
load went from 0.5 to 120+ (on each of three servers).


It shouldn't have been anything that bad. I could send you some patches
that reduce what mail_nfs_index=yes does. It would be interesting to
know if the problem is just one or few specific calls or all of them
cumulatively.


dotlock_use_excl: no


"yes" should be safe (and faster) here.



--
Jack Stewart
[EMAIL PROTECTED] / 626-395-4690
http://www.imss.caltech.edu


Re: [Dovecot] NFS performance and 1.1.3 - what can you (unofficially) get away with?

2008-09-10 Thread Jack Stewart


The performance hit was bad. When I tried "mail_nfs_index = yes" the 
load went from 0.5 to 120+ (on each of three servers).


My RPM of 1.1.3 includes the Redhat patches from their source RPM for 
1.0.7. I'm checking the patches now and most seem benign, but there are 
some mbox locking changes. I'm using maildir so it shouldn't make a 
difference, yet I shouldn't need the patches either (using the source 
RPM was a management recommendation).


"mail_nfs_storage" is set to yes. I forgot to put in dovecot -n so it is 
below.


Right now I'm running with local index.cache files for each server 
(using sticky connections) and that seems to work fine.


---Jack

Timo Sirainen wrote:

On Wed, 2008-09-10 at 10:58 -0700, Jack Stewart wrote:
The question is has anyone, with Maildir and the INDEX= on NFS (i.e. 
dovecot.index and dovecot.index.cache), set mail_nfs_index to no. 


How much worse is the mail_nfs_index=yes? Last I heard it made hardly a
difference.

If so, 
was it better to turn maildir_copy_preserve_filename on or did it 
help to turn it off.


That shouldn't make a noticeable performance difference. If it's "yes"
it does one more uidlist lookup, but pretty much always the uidlist is
already in memory in any case so it doesn't matter.

The other question is did you play with turning off 
atime updates or changing the acmin* values?


Dovecot doesn't care about atimes, so feel free to disable them.

I figure that the worst that can happen is that the dovecot.index.cache 
file will become corrupt, and dovecot will then rebuild it.


It's not the worst that can happen, but index file errors are probably
more likely than other errors..



/opt/dovecot/sbin/dovecot -n -c /etc/dovecot.conf
# 1.1.3: /etc/dovecot.conf
Warning: fd limit 1024 is lower than what Dovecot can use under full 
load (more than 8192). Either grow the limit or change 
login_max_processes_count and max_mail_processes settings

base_dir: /var/run/dovecot/default/
syslog_facility: local4
listen(default): *:143
listen(imap): *:143
listen(pop3): *:110
ssl_listen(default): *:993
ssl_listen(imap): *:993
ssl_listen(pop3): *:995
ssl_ca_file: /etc/pki/dovecot/certs/caltech-ca.pem
ssl_cert_file: /etc/pki/dovecot/certs/imap.caltech.edu.pem
ssl_key_file: /etc/pki/dovecot/private/imap.caltech.edu.key
login_dir: /var/run/dovecot/default//login
login_executable(default): /opt/dovecot/libexec/dovecot/imap-login
login_executable(imap): /opt/dovecot/libexec/dovecot/imap-login
login_executable(pop3): /opt/dovecot/libexec/dovecot/pop3-login
login_greeting_capability: yes
login_processes_count: 16
login_max_processes_count: 2048
max_mail_processes: 4096
mail_max_userip_connections: 2000
verbose_proctitle: yes
mail_location: 
maildir:/var/spool/maildir/%1Ln/%Ln:INDEX=/var/spool/indexes/%1Ln/%Ln:CONTROL=/var/spool/dovecot/uidl/imap/%1Ln/%Ln

mail_debug: yes
mmap_disable: yes
dotlock_use_excl: no
mail_nfs_storage: yes
mail_executable(default): /opt/dovecot/libexec/dovecot/imap
mail_executable(imap): /opt/dovecot/libexec/dovecot/imap
mail_executable(pop3): /opt/dovecot/libexec/dovecot/pop3
mail_plugins(default): fts fts_squat
mail_plugins(imap): fts fts_squat
mail_plugins(pop3):
mail_plugin_dir(default): /opt/dovecot/lib/dovecot/imap
mail_plugin_dir(imap): /opt/dovecot/lib/dovecot/imap
mail_plugin_dir(pop3): /opt/dovecot/lib/dovecot/pop3
imap_client_workarounds(default): delay-newmail outlook-idle netscape-eoh
imap_client_workarounds(imap): delay-newmail outlook-idle netscape-eoh
imap_client_workarounds(pop3):
pop3_uidl_format(default): %v-%u
pop3_uidl_format(imap): %v-%u
pop3_uidl_format(pop3): %08Xu%08Xv
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
namespace:
  type: private
  separator: .
  prefix: Mail.
  inbox: yes
  list: yes
  subscriptions: yes
auth default:
  mechanisms: plain login
  passdb:
driver: ldap
args: /etc/dovecot.conf-ldap
  userdb:
driver: static
args: uid=vmail gid=mail home=/var/spool/maildir/%1Ln/%Ln
  socket:
type: listen
master:
  path: /var/run/dovecot/default/auth-master
  mode: 384
  user: vmail
  group: mail
plugin:
  fts: squat
  fts_squat: partial=4 full=8




[Dovecot] NFS performance and 1.1.3 - what can you (unofficially) get away with?

2008-09-10 Thread Jack Stewart

Hi,

Does anyone have experience with bending the NFS recommendations to get 
better performance?


The question is has anyone, with Maildir and the INDEX= on NFS (i.e. 
dovecot.index and dovecot.index.cache), set mail_nfs_index to no. If so, 
was it better to turn maildir_copy_preserve_filename on or did it 
help to turn it off. The other question is did you play with turning off 
atime updates or changing the acmin* values?


I know that the recommendation is do not do this. Fortunately Timo's 
screams won't reach my ears for another ten hours. I'm just interested 
if anyone has done this. You are most welcome to reply offline.


I figure that the worst that can happen is that the dovecot.index.cache 
file will become corrupt, and dovecot will then rebuild it. As long as 
this doesn't happen all the time, it should not be a big deal. I'm 
guessing that this is less likely with maildir_copy_preserve_filename 
turned on. On my old (courier) servers I was able to get away with 
disabling atime updates and bumping up acmin*, but then there was no 
central index to check.


I admit that this is a nutty way to go since part of the whole point of 
1.1.X is its NFS settings. Yet this is a place with some pretty nutty 
users. (i.e. 2+GB inboxes, 100K messages, one/minute connections, ...)


---Jack

P.S. As a total aside, people migrating from 1.0.X to 1.1.3 may wish to 
use a different INDEX directory in 1.1.3 than the one used in 1.0.7. 
Doing this got rid of my index.cache and transaction log error 
messages. This may be why testing isn't showing anything.
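
Concretely, that just means pointing INDEX= somewhere new in 
mail_location, e.g. (a sketch based on the mail_location used here; the 
new directory name is arbitrary):

mail_location = maildir:/var/spool/maildir/%1Ln/%Ln:INDEX=/var/spool/dovecot/indexes-1.1/%1Ln/%Ln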


P.P.S. I've tried imapproxy for my webmail but it doesn't seem to have 
made much of a difference on the INDEX I/O.




Re: [Dovecot] Thunderbird, subscriptions, and upgrade to 1.1.3

2008-09-08 Thread Jack Stewart


I copied a (tested) development config, made the appropriate changes, 
and it's now working again. I think there was a typo or stray character 
in the specification of the CONTROL directive as a result of a cut/paste.


Thanks for the tip on rawlog - it is very nice and will help in the 
future with confused clients/users.


---Jack

Timo Sirainen wrote:

On Mon, 2008-09-08 at 11:49 -0700, Jack Stewart wrote:
I've upgraded from 1.0.7 to 1.1.3. When I did this, my Thunderbird lost 
its brain and did not show some subscribed folders. squirrelmail also 
seemed to have issues. Applemail and mutt were fine. The configuration 
output is below.


Is the problem client-specific or user-specific? Rawlog might show
something about what responses are wrong:
http://wiki.dovecot.org/Debugging/Rawlog


namespace:
   type: private
   separator: .
   prefix: Mail.
   inbox: yes
   list: yes
   subscriptions: yes


v1.1 has this new subscriptions-setting, but it shouldn't have changed
how the subscriptions file was read, especially if you have only one
namespace..


[Dovecot] Thunderbird, subscriptions, and upgrade to 1.1.3

2008-09-08 Thread Jack Stewart


Hi,

I've upgraded from 1.0.7 to 1.1.3. When I did this, my Thunderbird lost 
its brain and did not show some subscribed folders. squirrelmail also 
seemed to have issues. Applemail and mutt were fine. The configuration 
output is below.


Hopefully this is just me and just random bizarreness. There wasn't a 
problem in 1.0.7 but I could have been caught in mid-upgrade.


Thunderbird did not seem to update the subscription file (or at least I 
could not find the one it thought it was using).


Any pointers are appreciated. Thanks in advance.

---Jack

/opt/dovecot/sbin/dovecot -c /etc/dovecot.conf -n

# 1.1.3: /etc/dovecot.conf
Warning: fd limit 1024 is lower than what Dovecot can use under full 
load (more than 6144). Either grow the limit or change 
login_max_processes_count and max_mail_processes settings

base_dir: /var/run/dovecot/default-imap
syslog_facility: local4
protocols: imap imaps
listen: *:143
ssl_listen: *:993
ssl_ca_file: /etc/pki/dovecot/certs/caltech-ca.pem
ssl_cert_file: /etc/pki/dovecot/certs/imap.caltech.edu.pem
ssl_key_file: /etc/pki/dovecot/private/imap.caltech.edu.pem
disable_plaintext_auth: yes
login_dir: /var/run/dovecot/default-imap/login
login_executable: /opt/dovecot/libexec/dovecot/imap-login
login_greeting_capability: yes
login_processes_count: 16
login_max_processes_count: 2048
max_mail_processes: 4096
verbose_proctitle: yes
mail_location: 
maildir:/var/spool/maildir/%1Ln/%Ln:INDEX=/var/spool/dovecot/indexes/%1Ln/%Ln:CONTROL=/var/spool/dovecot/uidl/imap/%1Ln/%L

mail_debug: yes
mmap_disable: yes
mail_nfs_storage: yes
mail_nfs_index: yes
maildir_copy_preserve_filename: yes
mail_plugins: fts fts_squat
imap_client_workarounds: delay-newmail outlook-idle netscape-eoh
pop3_reuse_xuidl: yes
pop3_uidl_format: UID%u-%v
namespace:
  type: private
  separator: .
  prefix: Mail.
  inbox: yes
  list: yes
  subscriptions: yes
auth default:
  mechanisms: plain login
  verbose: yes
  passdb:
driver: ldap
args: /etc/dovecot.conf-ldap
  userdb:
driver: static
args: uid=vmail gid=mail home=/var/spool/maildir/%1Ln/%Ln
  socket:
type: listen
master:
  path: /var/run/dovecot/default-imap/auth-master
  mode: 384
  user: vmail
  group: mail
plugin:
  fts: squat
  fts_squat: partial=4 full=8



Re: [Dovecot] Distributed indexes, maildir_copy_preserve_filename, and Microsoft

2008-08-30 Thread Jack Stewart


.. Original Message ...
On Sat, 30 Aug 2008 13:47:50 +0300 "Timo Sirainen" <[EMAIL PROTECTED]> wrote:
>On Fri, 2008-08-22 at 10:50 -0700, Jack Stewart wrote:
>> Hi,
>> 
>> I've just completed a migration from Courier to a Dovecot 1.0.7 (patched 
>> RHE 5) which is working great, except for this weirdness of an issue 
>> that is impacting a handful of users. My question is that will turning 
>> off maildir_copy_preserve_filename help or hurt in a situation where 
>> there are multiple servers with their own local INDEX files (uidlist is 
>> shared)?
>
>It should make no difference.
>
>> The problem is that a few users on a few versions of Microsoft Mail 
>> clients are not showing a few messages. They can see the messages just 
>> fine in webmail and other, more typical, E-mail clients.
>
>If the webmail sees it, then Dovecot sees it and I can't really think of
>why MS Mail wouldn't see it.. Do you mean even a client restart won't
>help? Or are they using different Dovecot servers?
>
>You could enable rawlog (http://wiki.dovecot.org/Debugging/Rawlog) for
>the users having the problem and see if it shows the mails being sent to
>MS Mail.
>
>> Another option might be easier is to delete the users INDEX and let the 
>> clients rebuild them.
>
>If webmail sees the mails, I don't see it making any difference.
>
>[signature.asc]

Thanks for your reply and the rawlog tip. You've answered my question 
and concern.

Just for reference, the most reliable solution for these few clients 
appears to be to disable the original configuration and then create a 
new one.

Only certain versions of Outlook and Entourage have this index issue where 
only some of the messages are listed. All of the other clients are just 
fine. The problem might be related to how I migrated the uidlist files.

Since we are migrating to 1.1.1 in ~1-2 weeks, I'm not worried.

Thanks again for your reply!

---Jack



Re: [Dovecot] namespaces...

2008-08-26 Thread Jack Stewart



John Doe wrote:
The SEPARATOR should just refer to the storage structure and the PREFIX 
should refer to the hierarchy. So if you use "/" as your SEPARATOR, then 
your storage structure will be user-directory/folder/sub-folder and if 
you use "." as your storage structure (the default), then your storage 
structure will be user-directory/.folder.sub-folder . A PREFIX of INBOX. 
should cause all of the folders & sub-folders to appear under the inbox; 
a PREFIX of "." should cause all of the folders to appear at the top 
level; and a PREFIX of Mail. should cause all of the folders to appear 
under Mail with the exception of the INBOX.


Thx for your long post!  ^_^
I understood that it was something like 
.folder1.folder2
But, from my tests, it seems not to work as expected...  At least not with 
thunderbird.
By default, Tbird has "allow server to override namespaces"
prefix=INBOX. and separator=.  => no prefix and Trash system not working.
prefix= and separator=.  => INBOX. prefix (from tbird) and Trash system working.
'/' in prefix and separator seems to be ignored (converted to '.').
I guess I will leave it to prefix= and separator=. but one user who uses apple 
mail complained about some prefix problems...

Thx,
JD




You're welcome.

I think I can help on the Trash, etc. issue. New client configurations 
(i.e. delete the old first) should be fine, but you can have interesting 
issues if you change the server name in an existing setup (or change the 
PREFIX on the server).


With Thunderbird, and a number of other clients, the location of the 
Trash/Sent/Drafts folders appear to be 'fixed' and can only be manually 
re-configured. So even if all of the other folders automatically remap, 
you need to go to "Account Settings\Copies & Folders" and change the 
"Trash\Sent\Drafts" folder location by using the "Other" checkbox.


Otherwise what happens is that Thunderbird is looking for the old Trash 
folder (i.e. Foo.Trash) which no longer exists.


---Jack

P.S. Not all mail clients are sane, they don't all play by the rules, 
and their behaviors can be quite different. You should test each (major) 
one against your configuration. My favorite case is AppleMail. It 
handles Trash/etc. mapping great when using an INBOX. or . prefix, but 
only if you have one account. Sigh.




Re: [Dovecot] namespaces...

2008-08-22 Thread Jack Stewart


John Doe wrote:

Hi,

I am new to dovecot and I am a bit confused with how 
namespaces/prefixes/separators are handled by the clients and dovecot...
I tried to understand the desciption from the conf file but without success.

With each conf I create the following path /f1/f2 on the client (thunderbird) 
and get the following on the dovecot server:

  PREFIX= and separator=/  => .INBOX.f1.f2 + .INBOXTrash
  PREFIX=INBOX/ and separator=/  => .f1.f2 and no Trash
 
PREFIX=INBOX. and separator=.  => .f1.f2 and no Trash


Could someone point me to some IMAP for dummy web page...?  ;D
Or, what is the best conf able to handle most clients?

Thx,
JD



I researched a similar issue for a user and found that the / character 
is not allowed (or at least didn't used to be) in folder names. Even if 
you can get it to work on the server, a lot of clients will break. In 
fact, various clients seem to be sensitive to what characters you use 
outside of a limited range.


The SEPARATOR should just refer to the storage structure and the PREFIX 
should refer to the hierarchy. So if you use "/" as your SEPARATOR, then 
your storage structure will be user-directory/folder/sub-folder and if 
you use "." as your storage structure (the default), then your storage 
structure will be user-directory/.folder.sub-folder . A PREFIX of INBOX. 
should cause all of the folders & sub-folders to appear under the inbox; 
a PREFIX of "." should cause all of the folders to appear at the top 
level; and a PREFIX of Mail. should cause all of the folders to appear 
under Mail with the exception of the INBOX.
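
As a concrete example, the INBOX.-prefix layout described above would 
look like this in dovecot.conf (a sketch in v1.1 syntax, matching the 
namespace blocks posted elsewhere on this list):

namespace private {
  separator = .
  prefix = INBOX.
  inbox = yes
}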


The reason that I use the word 'should' is that not all clients care 
what you declare the namespace to be. They "know", or think they know, 
better. Webmail is one of these clients, as you declare your own prefix 
in the configuration file.


Once you decide on what structure you want, make sure that your 
configuration will make that structure the default, as you can run with 
multiple namespaces in the same storage locations (if you are a 
masochist). The sample configuration and wiki are pretty good on this 
part, but let me know if you want help. I was a masochist for a while in 
my development environment until I became smarter.


The RFC isn't a great read so you might want to look at the following 
discussions about folder characters:


http://support.microsoft.com/kb/281211
http://lists.apple.com/archives/macos-x-server/2005/dec/msg00445.html
http://osdir.com/ml/mail.squirrelmail.internationalization/2005-10/msg00041

The ideal hierarchy seems client dependent for the clients that care.

My preference is the INBOX. prefix because that's what my ISP and 
(yuck) Exchange use. So we can keep the same look and feel to give our 
users the warm fuzzy. I'm also lazy and our support staff want the 
minimum of pain. Applemail seems happy with this setup, unless you have 
more than one account, in which case it gets grumpy no matter what. 
Thunderbird, pine, and mutt all seem good, but they are pretty flexible. 
I make other people test Microsoft.


We are currently using Mail. in production for legacy reasons but that 
will sort of change someday (we give both depending on server address).


I think that Timo likes the (default) top level structure but that is 
very scary in my environment.


I wish I had a better answer but at the end of the day your 
configuration depends on your client base and what works best for your 
server.


Hope this helps - sorry about the length.

---Jack

--
Jack Stewart
[EMAIL PROTECTED]
http://www.imss.caltech.edu



[Dovecot] Distributed indexes, maildir_copy_preserve_filename, and Microsoft

2008-08-22 Thread Jack Stewart


Hi,

I've just completed a migration from Courier to Dovecot 1.0.7 (patched 
RHE 5) which is working great, except for a weird issue 
that is impacting a handful of users. My question is: will turning 
off maildir_copy_preserve_filename help or hurt in a situation where 
there are multiple servers with their own local INDEX files (uidlist is 
shared)?


The problem is that a few users on a few versions of Microsoft Mail 
clients are not showing a few messages. They can see the messages just 
fine in webmail and other, more typical, E-mail clients.


I know that the right solution is to upgrade to 1.1.2 and use shared 
indexes. The plan is to upgrade in a month or so. In my defense, this 
setup was not of my making.


The two things that might have an impact are that we have shared 
(NFS) maildirs on multiple servers with their own local indexes and 
maildir_copy_preserve_filename turned on. The uidlist (CONTROL) is 
shared and we have sticky connections turned on in our load balancer. 
The vast majority of clients do not have this issue. Quite frankly, I 
have no idea why these clients are ignoring the dovecot-uidlist.


Would turning off maildir_copy_preserve_filename help with this setup?

Another option that might be easier is to delete the users' INDEX files 
and let the clients rebuild them.


Since this is only a handful of users and we are migrating to 1.1.2 
anyway, my preference is to make the bare minimum of changes to help 
address this problem. The changes need not be perfect nor anytime more 
than a suggestion or preference. I just want to make things better for a 
little while.


Many thanks in advance.

---Jack

P.S. The performance improvement is amazing - anywhere from 2X to 12X 
depending on what metrics you are looking at. It isn't an apples to 
apples comparison since we upgraded the servers as well but the "old" 
servers are pretty powerful beasts.


--
Jack Stewart
[EMAIL PROTECTED] / 626-395-4690
http://www.imss.caltech.edu




[Dovecot] IMAP maildir performance difference between dovecot-uidlist versions?

2008-08-17 Thread Jack Stewart

Hi,

Is there an IMAP performance difference, using Maildir, between version 3 
and version 1 of the dovecot-uidlist (on either Dovecot 1.0.7 or 1.1.2)? 
It isn't entirely clear to me that there is any difference, since the 
filename may already contain the size/vsize information. The servers will 
eventually be running 1.1.2 with maildir_copy_preserve_filename turned on.
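
For reference, this is what I mean by the filename carrying the sizes - 
the S= and W= keys in a maildir filename (this example is made up):

  1218931234.M20P4321.imap1.example.com,S=4523,W=4689:2,S

S= is the file size, W= is the virtual (CRLF) size, and :2,S is the 
usual maildir flags suffix.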


I'm doing a batch migration from Courier to Dovecot 1.0.7 and eventually 
Dovecot 1.1.2. These servers are, and will continue to be, IMAP only 
(i.e. dedicated control/dovecot-uidlist files). The migration script is 
run once - nothing is built on the fly. It does a readdir() to get the 
full filenames and makes sure that list entries exist only for existing 
files. As a result, there is almost complete freedom in the output format 
for the dovecot-uidlist.
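
If my memory of the version 1 format is right, the script emits something 
like this (UIDs and filenames invented; the header is version, 
uidvalidity, and next UID, and each record is a UID plus the base 
filename without the :2, flags suffix):

  1 1218811200 4
  1 1218800001.M1P100.imap1.example.com,S=1001,W=1035
  2 1218800002.M2P100.imap1.example.com,S=2002,W=2050
  3 1218800003.M3P100.imap1.example.com,S=512,W=530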


Many thanks in advance!

---Jack

P.S. For better or worse, it does seem that you are stuck with whatever 
initial version you choose.




--
Jack Stewart
IMSS, California Institute of Technology,
M/C 1-10, 1200 E. California Blvd, Pasadena, CA 91125
[EMAIL PROTECTED] / 626-309-4690
http://www.imss.caltech.edu


Re: [Dovecot] FTS/squat search indexes built when?

2008-08-01 Thread Jack Stewart



Patrick Nagel wrote:


Hi,

Timo Sirainen wrote:
| printf "1 select $mailbox\n2 search  text x93hgdgd\n3 logout\n" |
| dovecot --exec-mail imap
|
| For getting the list of mailboxes:
|
| mailboxes=`printf "1 list "" *\n" | dovecot --exec-mail imap | perl magic`


Ok, looks easy. But I can't find information anywhere on how to specify 
the user. I tried with 'USER=username' in front of the dovecot call, and 
dovecot then said 'Logged in as username' instead of 'Logged in as root', 
but a list command doesn't show the mailboxes, so I think it's not 
accessing the maildir of that user.

I guess it's because the users are not system users, and dovecot needs to 
retrieve info from the LDAP directory first. But how do I tell it to do 
that?


Logging in via netcat / telnet works, but then I don't know how to 
proceed...


Thanks for your help!
Patrick.



Exporting the $USER variable seems to work - i.e. 'export USER=joetest'. 
The index is updated and you get all the lovely information for reporting.
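
Putting the pieces together, a per-user run looks something like this for 
me (the user and mailbox names are examples):

  export USER=joetest
  printf "1 select INBOX\n2 search text x93hgdgd\n3 logout\n" | dovecot --exec-mail imap

The nonsense search term just forces the whole mailbox to be indexed.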


This is great. Unfortunately, when I run it, existing messages show up in 
mutt marked as O (old). Thunderbird works just fine. I'm beginning to 
believe these clients hate me. My guess is that selecting the mailbox 
moves messages from new/ to cur/, which mutt then counts as old rather 
than new. Please let me know of a way to get around this. Thanks.


---Jack





Re: [Dovecot] making IMAP quicker on LAN

2008-07-24 Thread Jack Stewart



Ed W wrote:

Andrew Von Cid wrote:

Hi all,

I keep on hitting this problem when migrating new clients from POP or 
local IMAP servers (hosted on their LAN's) to my Dovecot setup, which 
is hosted properly in a data center.  People usually complain that 
it's slower and although they're getting a kick ass mail setup it 
doesn't look good from their point of view.


I'm wondering if there is anything I could do to speed it up on their 
LAN's.  What I mean is probably a caching IMAP proxy or some sort of 
replication to a local Dovecot server.  Is this something Dovecot can 
do?  I'd be really grateful for any opinions on how to tackle this 
problem.


My experience is that most mail clients drag down a LOT of data when you 
open a folder, hence the bandwidth required is surprisingly large.  I 
also noticed that this data compresses EXTREMELY well.  So my company 
just happens to make a compression proxy for use on seriously slow 
dialup links (2.4Kbit), but my own experience is that this speeds things 
up by around a factor of 2 on a typical fast broadband link (compared 
using Thunderbird)


There are various simple ways to test this thesis on your own setup, 
including a simple straight-through proxy in about 20 lines of perl. 
However, I'm not sure what the best fix for this problem is.


There was some discussion a few weeks back that SSL can have a 
compression layer turned on - Timo pointed out that this was disabled in 
both Dovecot and also TB.  It might be possible to send Timo some money 
and have it enabled in Dovecot (looked like a very trivial one line 
fix?) - you could then (fix and) use ssltunnel to get the benefit whilst 
waiting for your patch to TB to be accepted into mainstream (or if it 
suits your userbase you could fix the code and distribute a changed 
version locally? If using Outlook then obviously this isn't possible, 
but no idea if Outlook already supports compressed SSL?)


You could also pay Timo to add support for the compressed IMAP protocol 
extension, but again you run into the problem that few/no clients 
support it (at least you have half the problem licked though)


Timo is also working on a very clever multi-master imap server 
replication engine - again probably tipping a few euros his way might 
speed up that process.  This would give you a local cache server


Hope those ideas get you started?

Good luck

Ed W


A few IMAP client-based things that seem to help: disable all of the 
languages you don't need in Thunderbird; configure Apple Mail to download 
only the messages you've read; configure Outlook/Outlook Express to sync 
at a more reasonable level so it doesn't download everything every time; 
or make everyone use mutt/pine. The last isn't realistic, but if mutt or 
pine works fine, then you know some client optimizations will help.


Webmail, as long as it isn't loaded with too many graphics, might work 
better over slow connections. For people reading email from a rain 
forest, POP seems to be the best option.


It also seems to me that the fts plugin (full text search indexing) 
improves performance. This might just be wishful thinking on my part.
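
If anyone wants to try it, here is a minimal sketch of roughly what we 
enabled (1.1-era syntax from memory; check the wiki for your version):

  protocol imap {
    mail_plugins = fts fts_squat
  }
  plugin {
    fts = squat
  }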


Hope this helps a little.

---Jack


Re: [Dovecot] login processes from attacks staying for hours

2008-07-23 Thread Jack Stewart


Hi,

When you run 'netstat -tan' (or equivalent), what state are the 
connections in? If it is just a bunch of processes with no active 
connections, then it should not be a big deal.


We've seen something on our SMTP servers that sounds similar (our IMAP 
servers haven't been hit yet). The problem is that there is badly written 
spammer/hacker software that does not close connections correctly. We 
wind up with a number of useless connections, many of them in CLOSE_WAIT 
or FIN_WAIT* states.
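
A quick way to count the states:

  netstat -tan | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn

Whatever state dominates tells you which timer to go after.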


TCP/IP kernel tuning is our solution for tearing down stale connection 
states quicker. I don't know CentOS. On Solaris the ndd parameters, which 
have stupid defaults, are tcp_time_wait_interval, 
tcp_fin_wait_2_flush_interval, tcp_ip_abort_interval, and 
tcp_keepalive_interval. Redhat seems a bit better, but the 
tcp_keepalive_time of 2 hours is a little high for my liking. I'd pay 
attention to the connection state that seems to be the problem.
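
On Linux, the rough equivalents are sysctls - something like the 
following (the values are only examples, not recommendations; test before 
production):

  sysctl -w net.ipv4.tcp_fin_timeout=30
  sysctl -w net.ipv4.tcp_keepalive_time=600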


A strict iptables approach doesn't address the TCP teardown issue. Even 
with drops via iptables, there will still be connections waiting to close.


Our servers are front-ended by load balancers and there is also a router 
at the border. This is where we block ip addresses, if we need to.


Hope this helps.

---Jack


Kai Schaetzl wrote:
I'm in the process of rolling out new setups with dovecot on CentOS 5.2, 
and I notice that dovecot doesn't handle brute-force attacks too nicely.

I reduced the limit a bit to some reasonable looking value:
login_max_processes_count = 32
to stop them earlier and the number of processes stops at that figure when 
an attack happens.
However, it stays at this count for hours although the attack has long 
been over. For instance, my monitoring alerts me when the process count 
for pop3-login goes over 20 processes. This happened on three machines at 
2 am with a brute-force attack from the same source that didn't last 
longer than a minute or so. However, the process count only dropped below 
20 at 7 am on two machines, and on the third machine it was still over 20 
when I was in the office at 9 am and finally killed the processes.
As these machines are all not in production yet, there weren't any other 
logins, and the single brute-force attack ended within one minute 
according to the logs (obviously when pop3-login hit the limit).
Shouldn't these processes go down to login_processes_count (3) within a 
few minutes? An strace shows that they are mostly doing gettimeofday 
lookups (=sleeping).

This is the default dovecot (1.07) coming with CentOS 5.2.
I've been running only one other instance of dovecot in production 
(0.99.11, on CentOS 4.6) so far, and I don't know what behavior it 
displayed in the past, since I just realized that I accidentally omitted 
it from monitoring. :-(


I searched this mailing list for "brute-force" to see how others handle 
this and what dovecot provides to stop these attacks. I have not found 
many threads about it. There is one with a bit more information: 
"Delay on failed pw attempts" from January 1. Unfortunately, this 
functionality is only in a later version of dovecot, and it's not clear 
whether it was ever implemented or whether it would help. Was it 
implemented?


This thread also mentions fail2ban, which may be one way to go, although 
I don't like the log-parsing approach too much. Does anyone use iptables 
for per-IP rate-limiting on the pop/imap ports to prevent brute-force 
attacks?




Kai



Re: [Dovecot] The case of the disappearing INBOX's - upgrade to 1.1.1 from 1.0.7 and based on Email client type

2008-07-21 Thread Jack Stewart


Solution tested, problem solved - thanks!

This will greatly ease our transition to a more sensible namespace layout.

---Jack

Timo Sirainen wrote:

On Wed, 2008-07-16 at 09:59 -0700, Jack Stewart wrote:
The interesting part is that after upgrading to 1.1.1, the INBOX would 
disappear from Thunderbird after clicking on a different folder. 


Thanks. This should fix it:
http://hg.dovecot.org/dovecot-1.1/rev/c76aaa79156c


namespace private {
   separator = .
   prefix = INBOX.
   inbox = yes
   hidden = no
   list = yes
}
namespace private {
   separator = .
   prefix = Mail.
   hidden = yes
   list = no
}


I'd rather change to a configuration where there is no namespace
prefix.


[Dovecot] The case of the disappearing INBOX's - upgrade to 1.1.1 from 1.0.7 and based on Email client type

2008-07-16 Thread Jack Stewart


Hi,

I apologize if this has already been answered; I looked for an identical 
message but had no luck. I'm also not convinced this is a bug, or, if it 
is a bug, that I want it fixed. The problem gives me a great reason to 
change our configuration to something more modern.


Here is the situation. Our 1.0.7 configuration has the following (and 
only) namespace definition.


namespace private {
  separator = .
  prefix = Mail.
  inbox = yes
}

This configuration was implemented years ago in order to simplify the 
upgrade from mail spool to IMAP for people who use mutt and pine.


The interesting part is that after upgrading to 1.1.1, the INBOX would 
disappear from Thunderbird after clicking on a different folder. mutt, 
meanwhile, worked just fine, which was even worse. Changing the 
configuration so that the INBOX appeared in Thunderbird caused the INBOX 
to disappear (or be unreachable) in mutt. After trying every permutation 
of multiple configurations, and making several wall dents with my 
forehead, I finally found this configuration.


namespace private {
  separator = .
  prefix = INBOX.
  inbox = yes
  hidden = no
  list = yes
}
namespace private {
  separator = .
  prefix = Mail.
  hidden = yes
  list = no
}

For the most part, this works just fine. INBOX appears in every client 
I've tested and the migration is largely transparent. The only time I 
found issues was when the Trash, Sent, etc. folders were manually 
specified in a GUI client such as Thunderbird.


Still, the real question remains: is this supposed to happen? If so, what 
are some of the reasons?


Naturally, my management would prefer that the original configuration 
keep working the same so that things are transparent. Personally, the 
second configuration feels better to me, especially going forward.


Thanks in advance for any answers or hints.

---Jack

--
Jack Stewart
[EMAIL PROTECTED]
http://www.imss.caltech.edu



Re: [Dovecot] fd limit 1024 is lower in dovecot-1.1.1

2008-07-16 Thread Jack Stewart



Zhang Huangbin wrote:

Daniel L. Miller wrote:

Zhang Huangbin wrote:

Hi, all.

I just upgraded from 1.0.15 to 1.1.1 on a test box (RHEL 5.2, x86_64).

After the upgrade, I got this warning msg:

8< 
# /etc/init.d/dovecot restart
Stopping Dovecot Imap: [  OK  ]
Starting Dovecot Imap: Warning: fd limit 1024 is lower than what 
Dovecot can use under full load (more than 1280). Either grow the 
limit or change login_max_processes_count and max_mail_processes 
settings


I'm just guessing - but reading that warning it appears to me that 
Dovecot is saying that as it is configured, it can consume more O/S 
resources (I assume fd is "file descriptors") than the O/S is 
currently configured for.  So you need to DECREASE your dovecot max 
processes to decrease the (potential) system demands - or increase 
your O/S settings.


Daniel



I think so.

I just decreased the processes, and it works now.



Hi,

Just as an FYI, on some versions of Linux (e.g. RHEL) you need to raise 
the file descriptor limit explicitly - 'ulimit unlimited' is not enough. 
1024 is the default number of open file descriptors.


One of the ways to solve this is to run 'ulimit -n X' in your init script 
before starting the program. On RHEL, another way is to configure 
/etc/security/limits.conf by putting something like this into the file:


*    hard    locks     8192
*    hard    nofile    8192
*    soft    locks     4096
*    soft    nofile    4096

This allows processes to run with 4096 file descriptors by default, and a 
user process can increase the limit to 8192 via ulimit. 
/etc/security/limits.conf can also be configured on a per-user basis, 
which is also nice.
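
For the init script route, a minimal sketch (the path and value are 
examples; adjust to your install):

  # raise the fd limit, then start dovecot in the same shell
  ulimit -n 8192
  /usr/sbin/dovecot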