Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Timo Sirainen
On 9.12.2010, at 7.57, Ralf Hildebrandt wrote:

>> The v1.2 values look pretty good. v2.0's involuntary context switches
>> isn't too bad either. So where do all the 3700 new voluntary context
>> switches come from? The new method of initializing user logins can't
>> add more than a few more of those. Running imaptest locally I get
>> similar volcs values for v1.2 and v2.0. Wonder if you get different
>> values?
> 
> How exactly would I run imaptest...?

Create a test account for it (or you'll more or less accidentally destroy your 
INBOX) and then run e.g.:

imaptest user=testuser pass=testpass logout=0 secs=10 clients=1

Then compare the voluntary context switch counts logged by the imap process.

http://imapwiki.org/ImapTest has more info about installing and running. 
Especially note http://imapwiki.org/ImapTest/Running#Append_mbox
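The counts being compared can be read on Linux from /proc; a minimal sketch 
(it samples the current process here — in practice substitute the PID of the 
imap process, once before and once after the imaptest run):

```python
import os

def voluntary_ctxt_switches(pid):
    """Return a process's voluntary context switch count (Linux /proc)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("voluntary_ctxt_switches"):
                return int(line.split()[1])

# Sample once before and once after the imaptest run; the difference is
# the cost of serving the test connections.
print(voluntary_ctxt_switches(os.getpid()))
```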



Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Timo Sirainen
On 9.12.2010, at 6.50, Cor Bosman wrote:

>> lcs values for v1.2 and v2.0. Wonder if you get different values?
>> 
>> If you don't mind some huge logs, you could also try the attached patch that 
>> logs the voluntary context switch count for every executed IMAP command. 
>> Maybe some command shows up that generates them much more than others.
>> 
> 
> I have applied this patch to 1 server, and it's logging away. Let me know 
> what kind of output you'd like to see. Or i can send you the raw logfile.  

Are the process pids also logged in the messages, so it's clear which messages 
belong to which imap process? If not, add %p to mail_log_prefix.
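For reference, a sketch of such a setting in dovecot.conf (the exact prefix 
string is site-specific; the point is only the %p expansion, which adds the 
process ID to each log line):

```
mail_log_prefix = "%s(%u): pid=%p, "
```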

> As I said previously, im not longer running the imap server=0 patch because 
> it caused these errors:
> 
> Dec  9 06:54:32 userimap30 dovecot: imap(cor): Error: user cor: 
> Initialization failed: Namespace '': 
> mkdir(/var/spool/mail/.4a/L/0/14/corbosch) failed: Permission denied 
> (euid=1000(cor) egid=50(staff) missing +w perm: /var/spool/mail/.4a/L/0/14, 
> euid is not dir owner).  
> 
> After I removed the patch these errors disappeared. 

The patch was supposed to fix this. Is uid 1000 wrong for this user? Didn't the 
user already exist? Why is it complaining about mkdir()?

> Just for thoroughness ive started 2 servers with the logging patch. One with 
> service_count=0 and one with service_count=1

What, are you saying that without the patch service imap { service_count=0 } 
works? Or are you talking about imap-login?

Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Cor Bosman
> 
> Are the process pids also logged in the messages, so it's clear which 
> messages belong to which imap process? If not, add %p to mail_log_prefix.

Done. It wasn't logging this, now it is. 

> 
>> As I said previously, im not longer running the imap server=0 patch because 
>> it caused these errors:
>> 
>> Dec  9 06:54:32 userimap30 dovecot: imap(cor): Error: user cor: 
>> Initialization failed: Namespace '': 
>> mkdir(/var/spool/mail/.4a/L/0/14/corbosch) failed: Permission denied 
>> (euid=1000(cor) egid=50(staff) missing +w perm: /var/spool/mail/.4a/L/0/14, 
>> euid is not dir owner).  
>> 
>> After I removed the patch these errors disappeared. 
> 
> The patch was supposed to fix this. Is uid 1000 wrong for this user? Didn't 
> the user already exist? Why is it complaining about mkdir()?

It's very strange. I'm user 'cor'. Maybe it's logging my name because I started 
the process (su'd to root, obviously)?  My uid is 1000.  But I'm not the one 
doing this login; these are customer logins. The mkdir fails because the target 
dir is only writable by root. Normally dovecot creates a directory as root and 
chowns it to the owner it belongs to. At least, that's what I assume happens, 
as normally this all works fine. 

It's like dovecot was unable to change the process to euid root but instead is 
doing everything as euid cor.

>> Just for thoroughness ive started 2 servers with the logging patch. One with 
>> service_count=0 and one with service_count=1
> 
> What, are you saying that without the patch service imap { service_count=0 } 
> works? Or are you talking about imap-login?

I'm talking about imap-login. Currently I'm not running any servers with imap { 
service_count=0 } due to the permission errors. I'm more than happy to give it 
another go if you feel I did something wrong.


Cor




Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Timo Sirainen
On 9.12.2010, at 8.24, Cor Bosman wrote:

>> 
>> Are the process pids also logged in the messages, so it's clear which 
>> messages belong to which imap process? If not, add %p to mail_log_prefix.
> 
> Done. It wasnt logging this, now it is. 

Great. If the logs aren't too huge I could look at the raw ones, or you could 
try to write a script yourself to parse them. I'm basically interested in 
things like:

1. How large are the first volcs entries for processes? (= Is the initial 
process setup cost high?)

2. Find the last volcs entry for each process. Sum them up for some time 
period. Is the total about the same as the system's entire volcs during that 
time?

3. What kind of a volcs distribution do the processes have? A graph might be 
nice :)

4. Most importantly: Are there some commands that trigger a high increase in 
volcs numbers?
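A sketch of a parser for items 1, 2, and 4 above, assuming a hypothetical log 
line shape like "... pid=<pid> ... command=<name> volcs=<count>" (the real 
format depends on the patch and on mail_log_prefix, so the regex is an 
assumption to adjust):

```python
import re
from collections import defaultdict

# Hypothetical log format; adjust the regex to the actual patch output.
LINE_RE = re.compile(r"pid=(\d+).*?command=(\S+).*?volcs=(\d+)")

def summarize(lines):
    first = {}                       # first volcs entry per pid (startup cost)
    last = {}                        # last volcs entry per pid (process total)
    per_command = defaultdict(list)  # volcs samples per IMAP command
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        pid, command, volcs = m.group(1), m.group(2), int(m.group(3))
        first.setdefault(pid, volcs)
        last[pid] = volcs
        per_command[command].append(volcs)
    return first, last, per_command

sample = [
    "imap pid=100 command=LOGIN volcs=12",
    "imap pid=100 command=SELECT volcs=450",
    "imap pid=101 command=LOGIN volcs=9",
]
first, last, per_command = summarize(sample)
print(first)               # first entries per pid: {'100': 12, '101': 9}
print(sum(last.values()))  # total volcs over all processes: 459
```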

>>> As I said previously, im not longer running the imap server=0 patch because 
>>> it caused these errors:
>>> 
>>> Dec  9 06:54:32 userimap30 dovecot: imap(cor): Error: user cor: 
>>> Initialization failed: Namespace '': 
>>> mkdir(/var/spool/mail/.4a/L/0/14/corbosch) failed: Permission denied 
>>> (euid=1000(cor) egid=50(staff) missing +w perm: /var/spool/mail/.4a/L/0/14, 
>>> euid is not dir owner).  
>>> 
>>> After I removed the patch these errors disappeared. 
>> 
>> The patch was supposed to fix this.. Is uid 1000 wrong for this user? Didn't 
>> you already exist, why is it complaining about mkdir()?
> 
> It's very strange. Im user 'cor'. Maybe it's logging my name because I 
> started the process (su-d to root obviously)?  My uid is 1000.  But im not 
> the one doing this login, these are customer logins.

Well, that clearly says it's trying to initialize user "cor". Is the 
"corbosch" directory yours or someone else's?

> The mkdir fails because the target dir is only writable by root.

I'd have guessed that the above corbosch already existed.

> Normally dovecot creates a directory as root, and chowns it to the owner it 
> belongs to. At least, thats what I assume happens as normally this all works 
> fine. 

Uh, no. Privileges are always dropped before Dovecot creates any directories. 
I don't know what creates the directories for you. I'd guess the authentication 
code?

>>> Just for thoroughness ive started 2 servers with the logging patch. One 
>>> with service_count=0 and one with service_count=1
>> 
>> What, are you saying that without the patch service imap { service_count=0 } 
>> works? Or are you talking about imap-login?
> 
> Im talking about imap-login. Currently im not running any servers with imap { 
> service_count=0 } due to the permission errors. Im more than happy to give it 
> another go if you feel i did something wrong.

OK. I hope the volcs logging gets us somewhere.

Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Cor Bosman
> Great. If the logs aren't too huge I could look at the raw ones, or you could 
> try to write a script yourself to parse them. I'm basically interested in 
> things like:
> 
> 1. How large are the first volcs entries for processes? (= Is the initial 
> process setup cost high?)
> 
> 2. Find the last volcs entry for each process. Sum them up for some time 
> period. Is the total about the same as the system's entire volcs during that 
> time?
> 
> 3. What kind of a volcs distribution do the processes have? A graph might be 
> nice :)
> 
> 4. Most importantly: Are there some commands that trigger a high increase in 
> volcs numbers?

If you want to have a quick look already, I'm mailing you the locations of 2 
files, one with service_count=0 and one with service_count=1.  

> 
> Well, that clearly says it's trying to initialize user "cor".. Is the 
> "corbosch" directory you or someone else?

corbosch is actually a non-existing user, and I'm suspecting a local bug. I 
mailed you about that in private. 


> I'd have guessed that the above corbosch already existed.

Yeah, it probably should be targeting 'cor', but for some reason chars get 
appended to the dir location.  Not sure if it's our local modification or 
something in dovecot.

> 
>> Normally dovecot creates a directory as root, and chowns it to the owner it 
>> belongs to. At least, thats what I assume happens as normally this all works 
>> fine. 
> 
> Uh. No.. Privileges are always dropped before Dovecot creates any 
> directories. I don't know what creates the directories for you. I'd guess the 
> authentication code?

Something in dovecot does :)  If a user without a maildir logs in for the first 
time, the dir is created.


Cor



Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Timo Sirainen
On 9.12.2010, at 9.13, Cor Bosman wrote:

> If you want to have a quick look already, im mailing you the locations of 2 
> files, 1 with service_count=0 and one with service_count=1.  

I see that about half the commands that do hundreds or thousands of volcses are 
IDLE. Wonder if that is the problem. But there are others. I see, for example, 
that one "LOGOUT" command takes 2000 volcses, which I doubt is true. More 
likely it's counting something that happened before the LOGOUT, probably a 
mailbox synchronization. Attached is a new version of the patch that also logs 
volcs for syncs.

Also STATUS and SELECT, which do a mailbox sync, seem to have high volcses. 
Hmm. So if it is about mailbox sync, where in the sync is it slow? I added 
more debugging for this in the attached patch.

>>> Normally dovecot creates a directory as root, and chowns it to the owner it 
>>> belongs to. At least, thats what I assume happens as normally this all 
>>> works fine. 
>> 
>> Uh. No.. Privileges are always dropped before Dovecot creates any 
>> directories. I don't know what creates the directories for you. I'd guess 
>> the authentication code?
> 
> Something in dovecot does :)  If a user without a maildir logs in for the 
> first time, the dir is created.

Probably would be good to find out what exactly, in case that's caused by some 
security hole :)





Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Cor Bosman

On Dec 9, 2010, at 10:41 AM, Timo Sirainen wrote:

> On 9.12.2010, at 9.13, Cor Bosman wrote:
> 
>> If you want to have a quick look already, im mailing you the locations of 2 
>> files, 1 with service_count=0 and one with service_count=1.  
> 
> I see that about half the commands that do hundreds or thousands of volcses 
> are IDLE. Wonder if that is the problem. But there are others .. I see for 
> example that one "LOGOUT" command takes 2000 volcses, which I doubt is true. 
> More likely it's counting something that happened before the LOGOUT, like 
> probably doing a mailbox synchronization. Attached a new version of the patch 
> that also logs volcs for syncs.
> 
> Also STATUS and SELECT seem to have high volcses, which do a mailbox sync.. 
> Hmm. So if it is about mailbox sync, where in the sync then is it slow? Added 
> more debugging for this in the attached patch.

OK, I'll apply this patch to both running servers doing the logging.

>> Something in dovecot does :)  If a user without a maildir logs in for the 
>> first time, the dir is created.
> 
> Probably would be good to find out what exactly, in case that's caused by 
> some security hole :)
> 

Sorry, my mistake. It's not dovecot; it's our own LDA that's making the dir 
(which means we now get an error if a user logs in and has never had email, 
but that never happens since we mail them on signup; interesting little 
problem :)

Cor




Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Cor Bosman
Some preliminary findings..

Changing the kernel seems to have a positive effect on the load. I changed from 
2.6.27.46 to 2.6.27.54 (sorry, I'm bound by locally available kernels due to a 
kernel patch we created to fix some NFS problems in the Linux kernel; the patch 
should be available in the stock Linux kernel as of 2.6.36, but we don't have 
that kernel locally available yet). At similar user levels the load is now back 
to what it was. It doesn't, however, influence the context switches; still very 
high. Here's a graph of a server where I changed the kernel and nothing else:

load:  http://grab.by/7Odt (I rebooted the server with a new kernel at the 
point where the high load drops to 0)  
cs:  http://grab.by/7Odw

Here's a server with the old kernel, but imap-login { service_count=0 }. It 
seems this doesn't have much impact. cs are lower (but higher than on 1.2), 
probably because user levels haven't gone up to normal levels yet due to 
load balancer distribution issues after a reboot.

load:  http://grab.by/7OdL
cs:  http://grab.by/7OdN

I've now also set up a server with the new kernel and service_count = 0, just 
to cover all the bases. But not enough data on that server yet. It takes a 
while for user levels to creep up once a server gets rebooted, as users are 
sticky to a server once they're linked to it.

Cor



Re: [Dovecot] Dovecot 1.2/2.0 coexistence guide?

2010-12-09 Thread Ron Leach

Tom Talpey wrote:

On 12/5/2010 2:25 PM, Timo Sirainen wrote:


It's also safe to run v1.2 and v2.0 in parallel, even accessing the 
same index files.




So, I just installed 2.0.8 on a separate server and deployed a test user
or two.


Out of interest, Tom, (thinking about migrating from v1 to v2
ourselves, as well as migrating the maildirs to new hardware) what did
you do next?

Did you leave both servers up (that v2 can co-exist and use the same
indexes is quite a promising route for live-migration), and let v2
'look' at the v1 mails?  I think this would mean that only Dovecot 2
would need to be reconfigured.

Or did you move the v1 mails across to the v2 server (and if you did
that, did you have any problems)?  Moving the mails also implies
reconfiguring the MTA as well, I think, so this step isn't only a
Dovecot reconfiguration issue.

Be interested to hear what you did.

regards, Ron

(copy; apologies, hadn't sent to list)


[Dovecot] mutt freezes

2010-12-09 Thread Manoel Prazeres

Hi

Sometimes mutt freezes saying "Closing connection to imap.impa.br...".

Any clue?

Thanks
Manoel

===
My configuration:

User-Agent: Mutt/1.5.21 (2010-09-15)

# dovecot -n
# 2.0.8: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-194.26.1.el5xen x86_64 CentOS release 5.5 (Final)
default_client_limit = 1027
default_process_limit = 512
first_valid_gid = 1000
first_valid_uid = 1001
log_path = /var/log/imapd/imapd.log
log_timestamp = "%b %e %H:%M:%S markov "
login_log_format_elements = user=<%u> method=%m pid=%p rip=%r lip=%l %c
mail_location = mbox:%h/mail:INBOX=%h/mbox:INDEX=/var/local/imapd.indexes/%u
mail_log_prefix = "%Us(%u): pid=%p, rip=%r, lip=%l, "
mbox_write_locks = fcntl
passdb {
  args = /etc/dovecot/deny-users
  deny = yes
  driver = passwd-file
}
passdb {
  driver = pam
}
protocols = imap
service imap-login {
  process_min_avail = 32
}
service imap {
  process_limit = 512
}
ssl_cert = 

[Dovecot] sieve plugin does not autocreate folder

2010-12-09 Thread JM
The Sieve plugin does not autocreate the folder defined in the sieve filter:

/etc/dovecot/sieve/default.sieve
require ["fileinto"];
# rule:[off]
if anyof (header :contains "To" "o...@***.com", header :contains "Cc"
"o...@.com")
{
   fileinto "INBOX.off";
   stop;
}

>sievec /etc/dovecot/sieve/default.sieve
sievec(root): Debug: Loading modules from directory: /usr/lib64/dovecot
sievec(root): Debug: Module loaded: /usr/lib64/dovecot/lib15_notify_plugin.so
sievec(root): Debug: Module loaded: /usr/lib64/dovecot/lib20_expire_plugin.so
sievec(root): Debug: Effective uid=1030, gid=1030, home=/root
sievec(root): Debug: maildir++: root=/root, index=, control=, inbox=/root

> mail -f t...@test.org to o...@.com
> cat /var/log/dovecot.log

: script binary /etc/dovecot/sieve/default.svbin successfully loaded
: binary save: not saving binary /etc/dovecot/sieve/default.svbin,
because it is already stored
: executing script from /etc/dovecot/sieve/default.svbin
Namespace : Permission lookup failed from
/var/spool/mail/virtual/.com/*...@*.com/.INBOX.off
Namespace : Using permissions from
/var/spool/mail/virtual/.com/**...@*: mode=0700 gid=-1
Namespace : Permission lookup failed from
/var/spool/mail/virtual/*.com/**...@.com/.INBOX.off
Namespace : Using permissions from
/var/spool/mail/virtual/.com/*...@.com: mode=0700 gid=-1
: msgid=<56918ca75c35458412116ec36d8e7...@*.com>: failed to store
into mailbox 'INBOX.off': Mailbox doesn't exist: INBOX.off
Error: sieve: execution of script /etc/dovecot/sieve/default.sieve
failed, but implicit keep was successful
Info: sieve: msgid=<56918ca75c35458412116ec36d8e7...@*.com>:
stored mail into mailbox 'INBOX'


OS - Linux gentoo 2.6.36 x64
dovecot version 2.0.8
dovecot.conf
base_dir = /var/run/dovecot/
default_vsz_limit = 1 G
mail_debug = yes
mail_gid = vmail
mail_location = maildir:%h
mail_privileged_group = vmail
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope
encoded-character vacation

plugin {
  sieve = ~/.dovecot.sieve
  sieve_before = /etc/dovecot/sieve/default.sieve
  sieve_dir = ~/sieve
}
protocols = imap pop3 sieve
service auth {
  unix_listener auth-master {
mode = 0600
user = vmail
  }
}
service managesieve-login {
  vsz_limit = 1 M
}
protocol lda {
  auth_socket_path = /var/run/dovecot/auth-master
  log_path = /var/log/dovecot.log
  mail_plugins = sieve quota
}


Re: [Dovecot] sieve plugin does not autocreate folder

2010-12-09 Thread Marcus Rueckert
hi,

look at the following settings:
lda_mailbox_autocreate
lda_mailbox_autosubscribe

and  sieve has fileinto :create for that purpose
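A sketch of the sieve route (note that :create requires the "mailbox" 
extension to be required as well; the address and folder here are made-up 
examples):

```
require ["fileinto", "mailbox"];
# file matching mail into INBOX.off, creating the folder if missing
if header :contains "To" "offers@example.com" {
    fileinto :create "INBOX.off";
    stop;
}
```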

hth

darix

-- 
   openSUSE - SUSE Linux is my linux
   openSUSE is good for you
   www.opensuse.org


Re: [Dovecot] sieve plugin does not autocreate folder

2010-12-09 Thread JM
Thank you
Works like a charm

Regards
Juri

On Thu, Dec 9, 2010 at 6:11 PM, Marcus Rueckert  wrote:
> hi,
>
> look at the following settings:
> lda_mailbox_autocreate
> lda_mailbox_autosubscribe
>
> and  sieve has fileinto :create for that purpose
>
> hth
>
>    darix
>
> --
>           openSUSE - SUSE Linux is my linux
>               openSUSE is good for you
>                   www.opensuse.org
>


Re: [Dovecot] mutt freezes

2010-12-09 Thread Paul Tansom
** Manoel Prazeres  [2010-12-09 15:47]:
> Hi
> 
> Sometimes mutt freezes saying "Closing connection to imap.impa.br...".
> 
> Any clue?
** end quote [Manoel Prazeres]

I get that sometimes and in my case it is always due to a network problem, one
of:

1. I've just rebooted the server without disconnecting my client [1]
2. My netbook has just come out of hibernation and the wifi hasn't connected
yet

It takes a good while to recover after this, which I assume is a Mutt issue,
but since it is always self inflicted I've not investigated further.

Could be worth checking your network connectivity at the time of the problem
starting (it could have recovered quite quickly and no longer be an issue
though). I've not checked, but there may be something in the Mutt configuration
to improve its handling of a lost connection perhaps.

-- 
Paul Tansom | Aptanet Ltd. | http://www.aptanet.com/ | 023 9238 0001
==
Registered in England  |  Company No: 4905028  |  Registered Office:
Crawford House, Hambledon Road, Denmead, Waterlooville, Hants, PO7 6NU


Re: [Dovecot] mutt freezes

2010-12-09 Thread Manoel Prazeres


Last week, I upgraded dovecot. The problem did not occur with the earlier 
version:

# dovecot -n
# 1.2.13: /etc/dovecot.conf
# OS: Linux 2.6.18-8.1.15.el5 x86_64 CentOS release 5 (Final)
syslog_facility: local5
protocols: imap imaps
listen: *:143
ssl_listen: *:993
disable_plaintext_auth: yes
login_dir: /var/run/dovecot/login
login_executable: /usr/libexec/dovecot/imap-login
login_log_format_elements: user=<%u> method=%m pid=%p rip=%r lip=%l %c
login_processes_count: 32
login_max_processes_count: 512
mail_max_userip_connections: 5
first_valid_uid: 8
first_valid_gid: 12
mail_location: mbox:%h/mail:INBOX=%h/mbox:INDEX=/var/log/imapd.indexes/%u
mbox_read_locks: dotlock
mbox_write_locks: dotlock fcntl
mail_log_prefix: %Us(%u): pid=%p, rip=%r, lip=%l,
auth default:
  passdb:
driver: passwd-file
args: /etc/dovecot.deny
deny: yes
  passdb:
driver: pam
  userdb:
driver: passwd

Manoel

On 12/09/2010 02:22 PM, Paul Tansom wrote:

** Manoel Prazeres  [2010-12-09 15:47]:

Hi

Sometimes mutt freezes saying "Closing connection to imap.impa.br...".

Any clue?

** end quote [Manoel Prazeres]

I get that sometimes and in my case it is always due to a network problem, one
of:

1. I've just rebooted the server without disconnecting my client [1]
2. My netbook has just come out of hibernation and the wifi hasn't connected
yet

It takes a good while to recover after this, which I assume is a Mutt issue,
but since it is always self inflicted I've not investigated further.

Could be worth checking your network connectivity at the time of the problem
starting (it could have recovered quite quickly and no longer be an issue
though). I've not checked, but there may be something in the Mutt configuration
to improve its handling of a lost connection perhaps.



Re: [Dovecot] Dovecot 1.2/2.0 coexistence guide?

2010-12-09 Thread Tom Talpey

On 12/9/2010 10:31 AM, Ron Leach wrote:

Tom Talpey wrote:

On 12/5/2010 2:25 PM, Timo Sirainen wrote:



It's also safe to run v1.2 and v2.0 in parallel, even accessing the
same index files.



So, I just installed 2.0.8 on a separate server and deployed a test user
or two.


Did you leave both servers up (that v2 can co-exist and use the same
indexes is quite a promising route for live-migration), and let v2
'look' at the v1 mails? I think this would mean that only Dovecot 2
would need to be reconfigured.

Or did you move the v1 mails across to the v2 server (and if you did
that, did you have any problems)? Moving the mails also implies
reconfiguring the MTA as well, I think, so this step isn't only a
Dovecot reconfiguration issue.


I ended up not trying to deploy both 1.2 and 2.0 dovecot servers on the
same machine. Even after mangling the various configure options to let
the bin, sbin, lib and libexec directories coexist, the /var and /etc
dirs were still an issue, and in the end I didn't want to have all my
path settings tweaked, then have to un-tweak them to actually migrate.
Also, there's the issue of multiple network listeners so I'd have to
mangle ports, too.

I don't have much of an issue with MTA integration because I'm just
using fetchmail to perform that. My MTA is just dovecot deliver, and
it's easy to redirect it with fetchmailrc and dovecot settings. So I
just cloned the victim maildir tree, set "keep" in fetchmail, and
tested.

In the end, the testing was so successful that I just cut the server
over after a couple of days. Mostly I waited just to be confident that
I had all the dovecot.conf settings finalized. The doveconf tool did
a pretty good job of it, but there were a few new settings to try,
and I had an explicit auth_executable line that didn't carry forward
to the new binaries. All were quite straightforward.


Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Mark Moseley
On Wed, Dec 8, 2010 at 11:54 PM, Ralf Hildebrandt
 wrote:
> * Mark Moseley :
>> On Wed, Dec 8, 2010 at 3:03 PM, Timo Sirainen  wrote:
>> > On 8.12.2010, at 22.52, Cor Bosman wrote:
>> >
>> >> 1 server with service_count = 0, and src/imap/main.c patch
>> >
>> > By this you mean service_count=0 for both service imap-login and service 
>> > imap blocks, right?
>> >
>> >
>>
>> Speaking from my own experience, the system loads on our dovecot boxes
>> went up *substantially* when we upgraded kernels from 2.6.32.x and
>> 2.6.33.x to newer ones (late 2.6.35.x and 2.6.36 -- haven't tried
>> 2.6.36.1 yet). But I also saw loads on all sort of other types of
>> boxes grow when moved to 2.6.35.x and 2.6.36, so it's not necessarily
>> dovecot-related. Though you've got plenty to choose from between
>> 2.6.27.x and up.
>
> We're on 2.6.32 and the load only goes up when I change dovecot (not
> when I change the kernel, which I didn't do so far)

If you at some point upgrade to >2.6.35, I'd be interested to hear if
the load skyrockets on you. I also get the impression that the load
average calculation in these recent kernels is 'touchier' than in
pre-2.6.35. Even with similar CPU and I/O utilization, the load
average on a >2.6.35 box is much higher than pre-, and it also seems
to react more quickly; more jitter, I guess. That's based on nothing
scientific, though.


>> Getting 'imap-login' and 'pop3-login' set to service_count=0 and
>> 'pop3' and 'imap' set to service_count=1000 (as per Timo's suggestion)
>> helped keep the boxes from spinning into oblivion. To reduce the
>> enormous amount of context switches, I've got 'pop3's client_limit set
>> to 4. I played around with 'imap's client_limit between 1 and 5 but
>> haven't quite found the sweet spot yet. pop3 with client_limit 4 seems
>> to work pretty good. That brought context switches down from
>> 10,000-30,000 to sub-10,000.
>
> Interesting. Would that spawn a lot of pop3 processes? On the other
> hand, almost nobody is using pop3 here

Upping the client_limit actually results in fewer processes, since a
single process can service up to #client_limit connections. When I
bumped up the client_limit for imap, my context switches plummeted.
Though as Timo pointed out on another thread the other day when I was
asking about this, when that proc blocks on I/O, it's blocking all the
connections that the process is servicing. Timo, correct me if I'm
wildly off here -- I didn't even know this existed before a week or
two ago. So you can then end up creating a bottleneck, which is why I've
been playing with finding a sweet spot for imap. I figure that enough
of a process's imap connections must be sitting in IDLE at any given
moment, so setting client_limit to like 4 or 5 isn't too bad. Though
it's not impossible that by putting multiple connections on a single
process, I'm actually throttling the system, resulting in fewer
context switches (though I'd imagine bottlenecked procs would be
blocked on I/O and do a lot of volcs's).
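For reference, the combination being discussed in this thread would look 
roughly like this in dovecot.conf (a sketch; the numbers are the ones 
mentioned above, not tuned recommendations):

```
service imap-login {
  service_count = 0      # reuse login processes instead of one per connection
}
service imap {
  service_count = 1000   # recycle an imap process after 1000 connections
  client_limit = 5       # let one process serve several connections
}
```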


Re: [Dovecot] sieve plugin does not autocreate folder

2010-12-09 Thread Jerry
On Thu, 9 Dec 2010 17:11:20 +0100
Marcus Rueckert  articulated:

> hi,
> 
> look at the following settings:
> lda_mailbox_autocreate
> lda_mailbox_autosubscribe
> 
> and  sieve has fileinto :create for that purpose

I have never had to use ":create" to create the location. This would be
on a FreeBSD-8.1 system. Perhaps it is system dependent.

-- 
Jerry ✌
dovecot.u...@seibercom.net

Disclaimer: off-list followups get on-list replies or get ignored.
Please do not ignore the Reply-To header.
__
After any salary raise, you will have less money at the end of the
month than you did before.


Re: [Dovecot] sieve plugin does not autocreate folder

2010-12-09 Thread Marcus Rueckert
On 2010-12-09 13:20:21 -0500, Jerry wrote:
> I have never had to use ":create" to create the location. This would be
> on a FreeBSD-8.1 system. Perhaps it is system dependent.

Because you didn't use Dovecot 2 so far? The default changed between 1.2 and 
2.0.
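In 2.0 the old behavior can be brought back with the settings named above, 
e.g. (a sketch; placing them in the lda protocol block is one common spot):

```
protocol lda {
  lda_mailbox_autocreate = yes
  lda_mailbox_autosubscribe = yes
}
```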

darix

-- 
   openSUSE - SUSE Linux is my linux
   openSUSE is good for you
   www.opensuse.org


Re: [Dovecot] Dovecot 1.2/2.0 coexistence guide?

2010-12-09 Thread Ralf Hildebrandt
* Tom Talpey :

> I ended up not trying to deploy both 1.2 and 2.0 dovecot servers on the
> same machine. Even after mangling the various configure options to let
> the bin, sbin, lib and libexec directories coexist, the /var and /etc
> dirs were still an issue, and in the end I didn't want to have all my
> path settings tweaked, then have to un-tweak them to actually migrate.

Very odd. I simply used

./configure --prefix=/usr/dovecot-2

and that was it.

Of course I had to change some paths in the config, but that was all.
-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Ralf Hildebrandt
* Mark Moseley :

> > We're on 2.6.32 and the load only goes up when I change dovecot (not
> > when I change the kernel, which I didn't do so far)
> 
> If you at some point upgrade to >2.6.35, I'd be interested to hear if
> the load skyrockets on you.

You mean even more? I'm still hoping it would decrease at some point :)
I updated to 2.6.32-27-generic-pae today. I wonder what happens.

> I also get the impression that the load average calculation in these
> recent kernels is 'touchier' than in pre-2.6.35. Even with similar CPU
> and I/O utilization, the load average on a >2.6.35 box is much higher
> than pre- and it also seems to react more quickly; more jitter I guess.
> That's based on nothing scientific though.

Interesting.

> Upping the client_limit actually results in less processes, since a
> single process can service up to #client_limit connections. When I
> bumped up the client_limit for imap, my context switches plummeted.

Which setting are you using now?

> Though as Timo pointed out on another thread the other day when I was
> asking about this, when that proc blocks on I/O, it's blocking all the
> connections that the process is servicing. Timo, correct me if I'm
> wildly off here -- I didn't even know this existed before a week or
> two ago. So you can then end up creating a bottleneck, thus why I've
> been playing with finding a sweet spot for imap.

Blocking on /proc? Never heard that before.

> I figure that enough of a process's imap connections must be sitting in
> IDLE at any given moment, so setting client_limit to like 4 or 5 isn't
> too bad. Though it's not impossible that by putting multiple
> connections on a single process, I'm actually throttling the system,
> resulting in fewer context switches (though I'd imagine bottlenecked
> procs would be blocked on I/O and do a lot of volcs's).

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



[Dovecot] Delay email!

2010-12-09 Thread Henrique Fernandes
I am using postfix + dovecot 2.0.6 + mailman, and two other servers with only
postfix and dovecot, writing to the same storage with OCFS2.

We are having problems with iowait and are still trying to figure out why it
happens.

But the problem becomes much worse when some emails go through mailman; when
that happens all 3 servers become really slow. iowait goes to 80.

Does anyone have an idea how to slow down the dovecot LDA?

I want it not to write so much email at once; it could at least take a small
break between each mail!

Any ideas?

Thanks!



[]'s
f.rique


Re: [Dovecot] Dovecot 1.2/2.0 coexistence guide?

2010-12-09 Thread Tom Talpey

On 12/9/2010 2:10 PM, Ralf Hildebrandt wrote:

* Tom Talpey:


I ended up not trying to deploy both 1.2 and 2.0 dovecot servers on the
same machine. Even after mangling the various configure options to let
the bin, sbin, lib and libexec directories coexist, the /var and /etc
dirs were still an issue, and in the end I didn't want to have all my
path settings tweaked, then have to un-tweak them to actually migrate.


Very odd. I simply used

./configure --prefix=/usr/dovecot-2

and that was it.

Of course I had to change some paths in the config, but that was all.


My target system is a NAS appliance and many of its filesystems are not
persistent across reboot. Changing only the prefix is not sufficient to
make a successful configuration. Unfortunately.



Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Mark Moseley
On Thu, Dec 9, 2010 at 11:13 AM, Ralf Hildebrandt
 wrote:
> * Mark Moseley :
>
>> > We're on 2.6.32 and the load only goes up when I change dovecot (not
>> > when I change the kernel, which I didn't do so far)
>>
>> If you at some point upgrade to >2.6.35, I'd be interested to hear if
>> the load skyrockets on you.
>
> You mean even more? I'm still hoping it would decrease at some point :)
> I updated to 2.6.32-27-generic-pae today. I wonder what happens.
>
>> I also get the impression that the load average calculation in these
>> recent kernels is 'touchier' than in pre-2.6.35. Even with similar CPU
>> and I/O utilization, the load average on a >2.6.35 box is much higher
>> than pre- and it also seems to react more quickly; more jitter I guess.
>> That's based on nothing scientific though.
>
> Interesting.

Interesting, and a major PITA. If I had more time, I'd go back and
iterate through each kernel leading up to 2.6.35 to see where things
start to go downhill. You'd like to think each kernel version would
make things just a little bit faster (or at least, nothing worse than
the same). Though if it really is just more jittery and is showing
higher loads but not actually working any harder than an older kernel,
then it's just my perception only.

>> Upping the client_limit actually results in fewer processes, since a
>> single process can service up to #client_limit connections. When I
>> bumped up the client_limit for imap, my context switches plummeted.
>
> Which setting are you using now?

At the moment, I'm using client_limit=5 for imap, but I keep playing
with it. I have a feeling that's too high though. If I had faster cpus
and more memory on these boxes, it wouldn't be so painful to put it
back to 1.

>> Though as Timo pointed out on another thread the other day when I was
>> asking about this, when that proc blocks on I/O, it's blocking all the
>> connections that the process is servicing. Timo, correct me if I'm
>> wildly off here -- I didn't even know this existed before a week or
>> two ago. So you can then end up creating a bottleneck, thus why I've
>> been playing with finding a sweet spot for imap.
>
> Blocking on /proc? Never heard that before.

I was just being lazy. I meant 'process' :)   So, if I'm understanding
it correctly, assume you've got client_limit=2 and you've got
connection A and connection B serviced by a single process. If A does
a file operation that blocks, then B is effectively blocked too. So I
imagine if you get enough I/O backlog, you can create a downward
spiral where you can't service the connections faster than they're
coming in and you top out at the #process_limit. Which, btw, I set to
300 for imap with client_limit=5.
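As a sketch, the settings being discussed would look like this in Dovecot 2.0's 10-master.conf (the numbers are simply the values mentioned in this thread, not a recommendation):

```
service imap {
  # Each imap process may service up to 5 connections; if one of them
  # blocks on I/O, the other connections in that process wait behind it.
  client_limit = 5

  # At most 300 imap processes, so roughly 5 * 300 = 1500 concurrent
  # IMAP connections before new logins start failing.
  process_limit = 300
}
```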

>> I figure that enough of a process's imap connections must be sitting in
>> IDLE at any given moment, so setting client_limit to like 4 or 5 isn't
>> too bad. Though it's not impossible that by putting multiple
>> connections on a single process, I'm actually throttling the system,
>> resulting in fewer context switches (though I'd imagine bottlenecked
>> procs would be blocked on I/O and do a lot of volcs's).


Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Timo Sirainen
On Thu, 2010-12-09 at 10:18 -0800, Mark Moseley wrote:

> Upping the client_limit actually results in fewer processes, since a
> single process can service up to #client_limit connections. When I
> bumped up the client_limit for imap, my context switches plummeted.
> Though as Timo pointed out on another thread the other day when I was
> asking about this, when that proc blocks on I/O, it's blocking all the
> connections that the process is servicing.

Yeah. And it's even worse if it's blocking on waiting for a lock.

BTW. Do you have these kind of error messages in your log:

net_connect_unix(pop3) failed: Resource temporarily unavailable
net_connect_unix(imap) failed: Resource temporarily unavailable

I think those are sometimes happening when not using client_limit=1,
because all the processes are busy at that time and can't accept a new
connection (while with client_limit=1 a new process would be created to
handle it).




[Dovecot] Execute Script on LMTP Deliver?

2010-12-09 Thread Edward Carraro
Is it possible to have dovecot 2.0.8 using LMTP run a shell script each time
it delivers a message to a user's mailbox?

I see there's an "executable = script /path/to/script", but when I added it to
the lmtp service in 10-master.conf, it didn't do anything and stopped
delivering mail altogether.

service lmtp {
  executable = script /usr/local/bin/test.sh u%

  unix_listener /var/spool/postfix/private/dovecot-lmtp {
   group = postfix
   user = postfix
  }

  inet_listener lmtp {
port = 24
  }
}

$ ls -lh | grep test
-rwxrwxrwx 1 root  staff  270 2010-12-09 18:05 test.sh

$ cat test.sh
#!/bin/sh
USER=$1
echo $USER > /tmp/newfile


Basically, when a message arrives, it should execute a shell script which will
notify another service that mail has arrived in the user's inbox.


Re: [Dovecot] Dovecot as IMAP proxy to Exchange

2010-12-09 Thread Willie Gillespie

Hugo Monteiro wrote:

Hello list,

I'm looking into the possibility of setting up dovecot to act as an IMAP 
proxy to an Exchange server.

Things I know beforehand:
- I will not be able to use the ldap (Active Directory) user DN for auth 
binds (but I discovered that I could use the user 
userPrincipalName attribute as bind DN. I tested it using ldapsearch and 
it worked fine.)

- I will not be able to perform any unbound (unauthenticated) searches.
- The Exchange server is unique, so I can set up a static proxy route to 
the server.


Given the above, I'd like to post some questions:

1 - Will I be able to use auth_bind = yes given the restrictions? My 
first guess is that this might work if I use something like 
"auth_bind_userdn = %...@example.org"


Yes, you can do things like "auth_bind_userdn = %...@example.org". As long 
as it works to bind that way with ldapsearch, you should be fine.
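A minimal sketch of what that could look like in dovecot-ldap.conf, assuming users log in with the bare username and the AD domain is example.org (the hostname and variable choice are placeholders; whether %n, the user part, or %u, the full login name, is right depends on what users type at login):

```
# dovecot-ldap.conf fragment (sketch, untested)
hosts = ad.example.org
auth_bind = yes
# Bind directly as the userPrincipalName; no LDAP search is performed,
# so no separate search bind is needed.
auth_bind_userdn = %n@example.org
```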


2 - Will I be able to specify a static route to the exchange server, not 
having to rely on that information from the AD itself?




Don't know the answer here.

Another thing I'd like to know is whether NTLM auth can be used while dovecot 
acts only as a proxy.


Hmm, I don't think so with auth_bind = yes.  I could be wrong though.


Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Mark Moseley
On Thu, Dec 9, 2010 at 12:58 PM, Timo Sirainen  wrote:
> On Thu, 2010-12-09 at 10:18 -0800, Mark Moseley wrote:
>
>> Upping the client_limit actually results in fewer processes, since a
>> single process can service up to #client_limit connections. When I
>> bumped up the client_limit for imap, my context switches plummeted.
>> Though as Timo pointed out on another thread the other day when I was
>> asking about this, when that proc blocks on I/O, it's blocking all the
>> connections that the process is servicing.
>
> Yeah. And it's even worse if it's blocking on waiting for a lock.
>
> BTW. Do you have these kind of error messages in your log:
>
> net_connect_unix(pop3) failed: Resource temporarily unavailable
> net_connect_unix(imap) failed: Resource temporarily unavailable
>
> I think those are sometimes happening when not using client_limit=1,
> because all the processes are busy at that time and can't accept a new
> connection (while with client_limit=1 a new process would be created to
> handle it).

Yeah, I do get small bursts of those, but not enough to get too
worried about. I was assuming it was hitting process_limit for imap.
On one box, for all of today I see two clusters of those errors, both
lasting about 15-20 seconds apiece.

The problem is that the smaller I set client_limit (including =1) on
imap, the more likely the mass of imap processes will push these boxes
into swap (and these are old 2 GB boxes; hoping to replace them soon
with newer Dells, like PE 450s).


Re: [Dovecot] Execute Script on LMTP Deliver?

2010-12-09 Thread Timo Sirainen
On Thu, 2010-12-09 at 15:59 -0500, Edward Carraro wrote:
> Is it possible to have dovecot 2.0.8 using LMTP run a shell script each time
> it delivers a message to a user's mailbox?

Nope.

> basically when a message arrives, it will execute a shellscript which will
> notify another service that mail has arrived in their inbox

Maybe with Sieve enotify extension? I think you can cause it to send
another mail :) Then point it to some address where MTA is configured to
execute your script.
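A hedged sketch of such a Sieve script using the enotify extension; the notification address is hypothetical, and the MTA would be configured to pipe mail for that address into the script:

```
require ["enotify"];

# Send a mailto notification for every delivered message.
# "mail-event@localhost" is a placeholder address whose delivery
# the MTA routes into the shell script.
notify :message "new mail delivered" "mailto:mail-event@localhost";
```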




Re: [Dovecot] Delay email!

2010-12-09 Thread Timo Sirainen
On Thu, 2010-12-09 at 17:15 -0200, Henrique Fernandes wrote:
> I am using postfix + dovecot 2.0.6  + mailman
> 
> And two other servers with only postfix and dovecot writing to the same storage
> with OCFS2.
> 
> We are having problems with iowait, and we are still trying to figure out why it
> is happening!

sdbox/mdbox format should reduce disk IO compared to maildir.

> But the problem becomes much worse when some emails go through mailman; when
> that happens, all 3 servers become really slow and iowait goes to 80.
> 
> Does anyone have an idea how to slow down the dovecot lda ?

By telling Postfix to limit the number of simultaneous dovecot-lda
processes. In master.cf you have:

# service type  private unpriv  chroot  wakeup  maxproc command + args

So change the dovecot-lda entry's maxproc from "-" to however many
simultaneous dovecot-ldas you want.
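As a sketch, the edited master.cf entry could look like the following; the service name, flags, user and paths are examples from typical setups, and only the maxproc column (here 3) is the point:

```
# service type  private unpriv  chroot  wakeup  maxproc command + args
dovecot   unix  -       n       n       -       3       pipe
  flags=DRhu user=vmail:vmail
  argv=/usr/local/libexec/dovecot/dovecot-lda -f ${sender} -d ${recipient}
```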



Re: [Dovecot] Delay email!

2010-12-09 Thread Henrique Fernandes
Yep, already done this! Hahaha.

Now maxproc is just 3 processes.


We are thinking of using sdbox, but we have to be very sure it will help
a LOT; otherwise we are going to be "stuck" with dovecot for no big reason!


We just did a huge migration from mbox to maildir. There are still some
problems in some accounts, but it is OK!


Thanks!


Some time ago I saw someone using indexes on local disk. Would this help? I
mean, we would have two indexes, one on each server that runs imap and pop, so
each index would have to be rewritten each time the user logs in on each
server, right? That would not gain a lot of performance, right?


[]'s f.rique


On Thu, Dec 9, 2010 at 9:20 PM, Timo Sirainen  wrote:

> On Thu, 2010-12-09 at 17:15 -0200, Henrique Fernandes wrote:
> > I am using postfix + dovecot 2.0.6  + mailman
> >
> > And others 2 serves with only postfix and dovecot writing at same storage
> > with OCFS2.
> >
> > We are having problens with IOwait, and still trying to figure out why
> that
> > is happen!
>
> sdbox/mdbox format should reduce disk IO compared to maildir.
>
> > But, problem becomes much worse when some emails are used in mailman,
> when
> > it happens all 3 serves became really slow. iowait goes to 80.
> >
> > Does anyone have an idea how to slow down the dovecot lda ?
>
> By telling Postfix to limit the number of simultaneous dovecot-lda
> processes. In master.cf you have:
>
> # service type  private unpriv  chroot  wakeup  maxproc command + args
>
> So change the dovecot-lda entry's maxproc from "-" to however many
> simultaneous dovecot-ldas you want.
>
>


Re: [Dovecot] IMAP aggregation and MUPDATE protocol

2010-12-09 Thread Ernesto Revilla Derksen
Hi again.

Please see below.


2010/11/26 Timo Sirainen :
> On Thu, 2010-11-25 at 00:43 +0100, Ernesto Revilla Derksen wrote:
>
>> Is IMAP aggregation proxying really so difficult? I know about the
>> problems of COPY (which in some cases in Cyrus-Murder is handled by
>> the proxy itself), but don't know if there are any other gotchas.
>
> Are you thinking about simple proxying, so that if you switch to
> mailbox1 it would do a connection to server1 which would completely
> handle it, so that until next SELECT/EXAMINE is done the proxy would
> just do dummy proxying?
Well, this may not be enough.

> Maybe something like that would work, but I'm not very interested in
> doing it. Also it would limit some features that could be made available
> (e.g. virtual mailboxes wouldn't work with it). There are many more
> interesting things that can be done with a smarter proxy.

One thing we would like to have is a powerful search engine, so that a
user could locate any message, document, etc. that he/she has access
to. The searches would be split between all backends, and the front-end
would aggregate the search results.

Yes, we would be very interested in that libstorage thing. Our initial
backends are Dovecot, Alfresco and an issue tracker, like Redmine. The
issue tracker has actually NO IMAP interface. But perhaps we could
offer a libstorage provider or a feed like Activity Feed, etc.

Could you give us a rough estimate of how many working hours or days
this could take? We'd need at least a pure IMAP aggregator, and then
we could talk about other converters.

Besides this, I'm still in doubt about whether we're on the right track for
unifying information from different sources.

Best regards.
Erny


Re: [Dovecot] Released Pigeonhole v0.2.2 for Dovecot v2.0.8

2010-12-09 Thread Stephan Bosch

On 12/8/2010 7:59 PM, Tom Talpey wrote:

On 12/6/2010 6:18 PM, Stephan Bosch wrote:

The new Dovecot v2.0.8 release has a few changes that prompted changes
in Pigeonhole as well. This means that a new release of Pigeonhole is
also necessary, because otherwise things will not compile anymore.


Thanks! It builds and runs well for me, but you may want to consider
updating the information in the wiki, which indicates there are no
formal releases, and that Mercurial should be used to obtain the code.



Thanks! Fixed now.

Regards,

Stephan.


[Dovecot] Phantom INBOX in 1.2. Is there a stable .deb of 2.0?

2010-12-09 Thread ian+dove...@comtek.co.uk

Hi,

We've been running 1.2 Dovecot for a while and are happy with it. In 
trying to migrate from symlinks to shared folders I see a similar 
problem to that shown at 
http://www.mail-archive.com/dovecot@dovecot.org/msg32083.html . For example:


   . lsub "" "*"
   * LSUB () "." "Users..INBOX"

The advice given in the thread is to upgrade to 2.0. We currently use 
Debian's 1.2 package and I'm reluctant to move since I can't find a 
stable 2.0 .deb (http://wiki2.dovecot.org/PrebuiltBinaries only has 
repository heads of 2.0). Building from source is an option but I'd 
really prefer to be using a package.


Can anybody advise on an appropriate way to go?

Like the original poster I am seeing INBOX under the shared namespace 
when I try to share. I'm also getting an empty 
/var/mail/virtual/users/Maildir created 
(maildir:/var/mail/virtual/users/%n/Maildir/). We formerly had an INBOX 
prefix namespace from a Courier migration too, but it has been deleted 
(to no avail).


Shared folders do work, though. It just requires 'anyone l' in 
/etc/dovecot/dovecot-acls/.DEFAULT (though I'd really rather not do this!).


Thanks for any help,

Ian





mail_location = maildir:/var/mail/virtual/users/%n/Maildir/
namespace private {
   separator = .
   prefix =
   inbox = yes
}
namespace shared {
   separator = .
   prefix = Users.%%n.
   location = 
maildir:/var/mail/virtual/users/%%n/Maildir/:INDEX=~/shared/%%u

   subscriptions = no
   list = children
}
namespace public {
   separator = .
   prefix = Shared.
   location = maildir:/var/mail/virtual/public:INDEX=~/public
   subscriptions = no
}
auth default {
  mechanisms = plain
  passdb ldap {
args = /etc/dovecot/dovecot-ldap.conf
  }
  passdb ldap {
args = /etc/dovecot/dovecot-ldap-master.conf
master=yes
  }
  userdb passwd {
  }
  userdb ldap {
args = /etc/dovecot/dovecot-ldap-userdb.conf
  }
  userdb ldap {
args = /etc/dovecot/dovecot-shared-ldap.conf
  }
}
plugin {
  acl = vfile:/etc/dovecot/dovecot-acls:cache_secs=3
  acl_shared_dict = file:/var/mail/virtual/shared-mailboxes.db
}



Re: [Dovecot] Execute Script on LMTP Deliver?

2010-12-09 Thread Edward Carraro
Ah ok. Actually enotify xmpp is what I would like, but I couldn't get it
compiled on a debian system with dovecot 2.0.8 and pigeonhole 0.2.2.

*Compile against installed dovecot* (attempt 1)
$ ./configure --with-dovecot=/usr/local/lib/dovecot/
...
Pigeonhole Sieve headers not found from
/home/edward/pigeonhole-enotify-xmpp-56ad0f423143 and they are not installed
in the Dovecot include path, use --with-pigeonhole=PATH to give path to
Pigeonhole sources or installed headers.
configure: error: pigeonhole not found

*Compile against installed dovecot* (attempt 2)
$ ./configure --with-dovecot=/usr/local/lib/dovecot/
--with-pigeonhole=/usr/local/include/dovecot/sieve/
$ make
...
make[2]: Entering directory
`/home/edward/pigeonhole-enotify-xmpp-56ad0f423143/src'
/bin/sh ../libtool --tag=CC   --mode=compile gcc -DHAVE_CONFIG_H -I. -I..
-I.. -I -I/usr/local/include/dovecot/sieve-g -O2 -MT ntfy-xmpp.lo -MD
-MP -MF .deps/ntfy-xmpp.Tpo -c -o ntfy-xmpp.lo ntfy-xmpp.c
 gcc -DHAVE_CONFIG_H -I. -I.. -I.. -I -I/usr/local/include/dovecot/sieve -g
-O2 -MT ntfy-xmpp.lo -MD -MP -MF .deps/ntfy-xmpp.Tpo -c ntfy-xmpp.c  -fPIC
-DPIC -o .libs/ntfy-xmpp.o
ntfy-xmpp.c:14:17: error: lib.h: No such file or directory
ntfy-xmpp.c:15:19: error: array.h: No such file or directory
ntfy-xmpp.c:16:17: error: str.h: No such file or directory
ntfy-xmpp.c:17:20: error: ioloop.h: No such file or directory
ntfy-xmpp.c:18:26: error: str-sanitize.h: No such file or directory
ntfy-xmpp.c:20:26: error: sieve-common.h: No such file or directory
ntfy-xmpp.c:21:28: error: sieve-settings.h: No such file or directory
ntfy-xmpp.c:22:27: error: sieve-address.h: No such file or directory
ntfy-xmpp.c:23:31: error: sieve-ext-enotify.h: No such file or directory
ntfy-xmpp.c:25:21: error: rfc2822.h: No such file or directory
...
Continues on but more errors

*Compiled against source*
$ ./configure --with-dovecot=/home/edward/dovecot-2.0.8/
--with-pigeonhole=/home/edward/dovecot-2.0-pigeonhole-0.2.2/
$ make
ntfy-xmpp.c: In function 'ntfy_xmpp_load':
ntfy-xmpp.c:105: warning: passing argument 1 of 'sieve_sys_warning' from
incompatible pointer type
ntfy-xmpp.c:105: error: too few arguments to function 'sieve_sys_warning'

After that, I tried to go the script way :P


On Thu, Dec 9, 2010 at 6:16 PM, Timo Sirainen  wrote:

> On Thu, 2010-12-09 at 15:59 -0500, Edward Carraro wrote:
> > Is it possible to have dovecot 2.0.8 using LMTP run a shell script each
> time
> > it delivers a message to a users mailbox?
>
> Nope.
>
> > basically when a message arrives, it will execute a shellscript which
> will
> > notify another service that mail has arrived in their inbox
>
> Maybe with Sieve enotify extension? I think you can cause it to send
> another mail :) Then point it to some address where MTA is configured to
> execute your script.
>
>
>


Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Stan Hoeppner
Mark Moseley put forth on 12/9/2010 12:18 PM:

> If you at some point upgrade to >2.6.35, I'd be interested to hear if
> the load skyrockets on you. I also get the impression that the load
> average calculation in these recent kernels is 'touchier' than in
> pre-2.6.35.

This thread may be of value in relation to this issue:

http://groups.google.com/group/linux.kernel/browse_thread/thread/eb5cb488b7404dd2/0c954e88d2f20e56

It seems there are some load issues regarding recent Linux kernels, from
2.6.34 (maybe earlier?) on up.  The commit of the patch to fix it was
Dec 8th--yesterday.  So it'll be a while before distros get this patch out.

However, this still doesn't seem to explain Ralf's issue, where the
kernel stays the same, but the Dovecot version changes, with 2.0.x
causing the high load and 1.2.x being normal.  Maybe 2.0.x simply causes
this bug to manifest itself more loudly?

This Linux kernel bug doesn't explain the high load reported with 2.0.x
on FreeBSD either.  But it is obviously somewhat at play for people
running these kernel versions and 2.0.x.  To what degree it adds to the
load I cannot say.

-- 
Stan


Re: [Dovecot] Delay email!

2010-12-09 Thread Timo Sirainen
On 9.12.2010, at 23.31, Henrique Fernandes wrote:

> We are thinking of using sdbox, but we have to be very sure it will help
> a LOT; otherwise we are going to be "stuck" with dovecot for no big reason!

You can always use dsync to migrate between different mailbox formats. You 
could also for example switch only some users to sdbox and keep others in 
Maildir to see if the performance improves.
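For example, with v2.0's dsync the per-user conversion could look roughly like this (the sdbox location is a placeholder):

```
# Convert one user's mail to sdbox; repeat per user, then change
# that user's mail_location to point at the new format.
dsync -u testuser mirror sdbox:~/sdbox
```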

> Some time ago I saw someone using indexes on local disk. Would this help? I
> mean, we would have two indexes, one on each server that runs imap and pop, so
> each index would have to be rewritten each time the user logs in on each
> server, right? That would not gain a lot of performance, right?

Difficult to say.

Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Timo Sirainen
On 10.12.2010, at 0.54, Stan Hoeppner wrote:

> However, this still doesn't seem to explain Ralf's issue, where the
> kernel stays the same, but the Dovecot version changes, with 2.0.x
> causing the high load and 1.2.x being normal.  Maybe 2.0.x simply causes
> this bug to manifest itself more loudly?

Cor's debugging has so far shown that a single epoll_wait() call can sometimes 
generate a few thousand voluntary context switches. I can't really understand 
how that's possible. Those epoll_wait() calls account for about half of the total voluntary 
context switches generated by imap processes. We'll see tomorrow if poll() 
works better or if a small patch I made makes it better.

> This Linux kernel bug doesn't explain the high load reported with 2.0.x
> on FreeBSD either.

Who has reported high load on FreeBSD? So far I know of only Ralf and Cor and 
both are using Linux.



Re: [Dovecot] IMAP aggregation and MUPDATE protocol

2010-12-09 Thread Timo Sirainen
On 9.12.2010, at 23.41, Ernesto Revilla Derksen wrote:

> Yes, we would be very interested in that libstorage thing. Our initial
> backends are Dovecot, Alfresco and an issue tracker, like Redmine. The
> issue tracker has actually NO IMAP interface. But perhaps we could
> offer a libstorage provider or a feed like Activity Feed, etc.
..
> Besides this, I'm still in doubt about whether we're on the right track for
> unifying information from different sources.

I wonder if there would be a way for the services without IMAP interface to 
provide something really simple that Dovecot could use. If there are not a huge 
number of items ("mails") then it would work to simply have a POP3 UIDL style 
list and an ability to download an item by the UIDL. If there is a huge number 
of items, then maybe something extra where you can ask it to send only new 
items since last check (maybe "sent anything new since UIDL asdf"). I haven't 
really looked at RSS/Atom, maybe those would be enough for this?

Then this interface could be implemented for issue trackers etc. and Dovecot 
could have an optimized lib-storage interface for it.



Re: [Dovecot] Delay email!

2010-12-09 Thread Henrique Fernandes
We might try indexes on local disk to see what happens.

We use dsync to migrate between boxes, but as I said, sometimes dsync
crashed, and once we are using sdbox, I don't know any other tool that
can migrate mails from that format to another! Maybe imapsync or
something...

Still studying!

Gonna upgrade from ocfs2 1.4 to ocfs2 1.6 to see if it makes any improvement.

[]'s f.rique


On Thu, Dec 9, 2010 at 10:42 PM, Timo Sirainen  wrote:

> On 9.12.2010, at 23.31, Henrique Fernandes wrote:
>
> > We are thinking is use sdbox but we have to be a lot of sure that will
> help
> > a LOT  other wise we gona be "stick"  to dovecot for no big reason!
>
> You can always use dsync to migrate between different mailbox formats. You
> could also for example switch only some users to sdbox and keep others in
> Maildir to see if the performance improves.
>
> > Other time i saw some one using indexes in local disk. This should help ?
> i
> > mean, we would have 2 index one in each server that have imap and pop, so
> > each tim eindex would have to be written each time it logins on each
> server
> > right ? this would not gain lot os performance right ?
>
> Difficult to say.


[Dovecot] imap_logout_format doesn't understand %r variable

2010-12-09 Thread Nikita Koshikov
Hello list, 

Dovecot version 1.2.16. In protocol imap section I have

imap_logout_format = ip=%r bytes=%i/%o

But when a user disconnects, dovecot writes this to the log file:
IMAP(u...@domain): Info: Disconnected: Logged out ip= bytes=954/8644

Is this supposed to work?


Re: [Dovecot] imap_logout_format doesn't understand %r variable

2010-12-09 Thread Timo Sirainen
On 10.12.2010, at 7.17, Nikita Koshikov wrote:

> imap_logout_format = ip=%r bytes=%i/%o
> 
> But when a user disconnects, dovecot writes this to the log file:
> IMAP(u...@domain): Info: Disconnected: Logged out ip= bytes=954/8644
> 
> Is this supposed to work?

No. And I'm not sure if it should. Maybe. Anyway an alternative that already 
works is:

mail_log_prefix = "%s(%u %r): "




Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Ralf Hildebrandt
* Stan Hoeppner :
> Mark Moseley put forth on 12/9/2010 12:18 PM:
> 
> > If you at some point upgrade to >2.6.35, I'd be interested to hear if
> > the load skyrockets on you. I also get the impression that the load
> > average calculation in these recent kernels is 'touchier' than in
> > pre-2.6.35.
> 
> This thread may be of value in relation to this issue:
> 
> http://groups.google.com/group/linux.kernel/browse_thread/thread/eb5cb488b7404dd2/0c954e88d2f20e56
> 
> It seems there are some load issues regarding recent Linux kernels, from
> 2.6.34 (maybe earlier?) on up.  The commit of the patch to fix it was
> Dec 8th--yesterday.  So it'll be a while before distros get this patch out.

I'm using 2.6.32
 
> However, this still doesn't seem to explain Ralf's issue, where the
> kernel stays the same, but the Dovecot version changes, with 2.0.x
> causing the high load and 1.2.x being normal.  Maybe 2.0.x simply causes
> this bug to manifest itself more loudly?

That could be!
 



Re: [Dovecot] load increase after upgrade to 2.0.8

2010-12-09 Thread Ralf Hildebrandt
* Timo Sirainen :

> Cor's debugging has so far shown that a single epoll_wait() call can
> sometimes generate a few thousand voluntary context switches. I can't
> really understand how that's possible. Those epoll_wait() calls account for about
> half of the total voluntary context switches generated by imap
> processes. We'll see tomorrow if poll() works better or if a small
> patch I made makes it better.

That sounds promising :)




Re: [Dovecot] imap_logout_format doesn't understand %r variable

2010-12-09 Thread Nikita Koshikov
On Fri, 10 Dec 2010 07:26:39 +
Timo Sirainen wrote:

> On 10.12.2010, at 7.17, Nikita Koshikov wrote:
> 
> > imap_logout_format = ip=%r bytes=%i/%o
> > 
> > But when a user disconnects, dovecot writes this to the log file:
> > IMAP(u...@domain): Info: Disconnected: Logged out ip= bytes=954/8644
> > 
> > Is this supposed to work?
> 
> No. And I'm not sure if it should. Maybe. Anyway an alternative that already 
> works is:
> 
> mail_log_prefix = "%s(%u %r): "
> 
That works. Thank you.