Re: [Dovecot] Poll: Quota near full behavior? [Was: Feature request? Make deliver quota inclusive!]

2010-02-19 Thread Steffen Kaiser

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Thu, 18 Feb 2010, Charles Marcus wrote:


On 2010-02-18 11:09 AM, Steffen Kaiser wrote:

Actually, I once had a system where the requirement was that we do not send
over-quota notices - all mails have to arrive. Hence, deliver should have no
quota - well, a very high quota actually - but a quite strict IMAP quota.


So simply leaving everything in the INBOX defeats the quota?


Not directly.

Incoming mails were spooled to /var/mail/user.
Upon login via IMAP or POP, and whenever /var/mail/user changed, those mails
were slurped into ~user/don't remember by the imap/pop server process.


So the users saw at most as many mails as would fit into the filesystem
quota.


This setup sometimes choked on large mails, because the slurp worked
strictly sequentially.

Regards,

- -- 
Steffen Kaiser

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iQEVAwUBS35I37+Vh58GPL/cAQIzjQf/boOMd/7YN4YO4FoRw4XrN/UU7fS9/fvZ
HvSShMhPygavUdN/appmd7Ee/4drO6Ck93UG6FOt8kGHk9XDkwGOf8rHLZZ9uNsZ
hpTCvZjVO77h4s9jxEDchlJVKKJJvDL5g1rQtt8SQtO4MVqdzxwvC97W4txB3VnT
bQqDKa9PPRwQNjJ/7YpkIcx5gYTyWarC4AiLPDxzbyaEt8iyukY7TPf8p4TCLHnr
WgaPho6hSm3LtbHUjpf0mAo9/pFVXeDiNeX6UYYrx5iiHCi7Jhg4CMHx/LUEK2DH
PEokT7MhyIoTLRiYJnZ/TgojGWDpvxoXtECzluWpkWmAt2aK+o3UxQ==
=3Pud
-END PGP SIGNATURE-


Re: [Dovecot] GlusterFs - Any new progress reports?

2010-02-19 Thread John Lyons
  Sure .. but you can break the index files in exactly the same way as
  with NFS. :)
  
 That is right :)

For us, all the front end exim servers pass their mail to a single final
delivery server. It was done so that we didn't have all the front end
servers needing to mount the storage. It also means that if we need to
stop local delivery for any reason we're only stopping one exim server.

The NFS issue is resolved (I think/hope) by having the front end load
balancer use persistent connections to the dovecot servers.

All I can say is we've used dovecot since it was a little nipper and
have never had any issues with indexes.

Regards

John
www.netserve.co.uk




Re: [Dovecot] quota problem

2010-02-19 Thread Andre Hübner

Hello,

thanks for help.


On Wed, 2010-02-17 at 15:26 +0100, Andre Hübner wrote:

my user_query:
user_query = SELECT home, uid, gid, CONCAT('*:storage=', quota_bytes, 'M') AS quota_rule FROM mail_users WHERE login = '%u'



Do you really want quota_bytes number of megabytes? If not, change
the ,'M' part to ,'B'.

This was just a test; the value in the DB is 10, and the mailbox content is much bigger.


quota = dirsize:user



I hope you're not using Maildir?

yes, still using mbox ;)


I have no idea why it's not working.



Set auth_debug=yes and mail_debug=yes and show logs. Full dovecot -n
output might also be helpful.


It is working now; I had a problem with my virtual users.
This tutorial helped me to set up my Postfix:
http://heinous.org/wiki/Virtual_Domains,_Postfix,_Dovecot_LDA,_and_LDAP

One problem is left ;)

We use a lot of procmail rules.
The best way would be if we could pipe mails from procmail to deliver as
described here:

http://wiki.dovecot.org/procmail

But in this case the LDA ignores my quota and puts mails into an INBOX
which is actually over quota.

Is there a way to combine procmail rules with delivery via the Dovecot LDA?

Thanks for help,
Andre 



Re: [Dovecot] quota problem

2010-02-19 Thread Timo Sirainen
On Fri, 2010-02-19 at 11:32 +0100, Andre Hübner wrote:
 best way would be if we could pipe mails from procmail to deliver like 
 described here:
 http://wiki.dovecot.org/procmail
 
 but in this case lda ignores my quota and is putting mails in inbox which is 
 actual over quota.
 is there a way to combine procmail-rules and delivering via dovecot lda?

Call deliver with -d $USER parameter.
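
For reference, a minimal ~/.procmailrc along those lines might look like this (the path to deliver varies by install and is an assumption here):

```
# ~/.procmailrc - sketch; adjust DELIVER to your actual deliver binary path
DELIVER=/usr/libexec/dovecot/deliver

# ... your procmail filtering recipes go here ...

# Final recipe: hand everything else to the Dovecot LDA ("w" waits for the
# exit code so procmail can see delivery failures, e.g. over-quota rejects)
:0 w
| $DELIVER -d $USER
```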





Re: [Dovecot] Poll: Quota near full behavior? [Was: Feature request? Make deliver quota inclusive!]

2010-02-19 Thread Charles Marcus
On 2010-02-18 4:53 PM, Noel Butler wrote:
 Personally I think the best way would be, if the user isn't over
 quota at the time of a message delivery, deliver that message,
 *regardless* of whether or not it puts the user over quota.

 Wonder if there's anyone who wouldn't want this behavior? One
 exception could be that if mail is larger than the user's entire
 quota limit, it wouldn't be accepted. And this would happen only
 for deliver/lmtp, not imap append (because it would give user an
 error message directly).

 I certainly wouldn't want to accept a message in this case. A user
 might be 1K under quota but then get a 20 MB file - one such case might
 be a whoopie doo :) but what if 130K users did the same?

Well, I'd argue that if you're allowing messages that big already for
130K users, then you should have enough spare storage to handle such a
situation - although you and I both know the likelihood of even 10% of
those 130K users encountering such a situation is next to null, so I
don't think it's a valid argument.

That said - in an enterprise environment like that, you'd be assigning
group and domain level quotas too to keep any one group/customer from
using up all of the storage on the server, right?

-- 

Best regards,

Charles


Re: [Dovecot] Poll: Quota near full behavior? [Was: Feature request? Make deliver quota inclusive!]

2010-02-19 Thread Charles Marcus
On 2010-02-19 3:16 AM, Steffen Kaiser wrote:
 On Thu, 18 Feb 2010, Charles Marcus wrote:
 On 2010-02-18 11:09 AM, Steffen Kaiser wrote:
 Actually, I once had a system where the request was we do not
 send over quota notices, all mails have to arrive. Hence,
 deliver should have no quota - well, a very high quota actually
 -, but a quite strick IMAP quota.

 So simply leaving everything in the INBOX defeats the quota?

 Not directly.
 
 Incoming mails were spooled to /var/mail/user. Upon login via IMAP or
 POP and when /var/mail/user changes those mails were slurped into
 ~user/don't remember by the imap/pop server process.

Ahh... so, this would only be a [potential] problem in the case of [a]
user[s] that didn't login for a long time... and I guess you could even
deal with that by some kind of nightly cron job...

-- 

Best regards,

Charles


Re: [Dovecot] Poll: Quota near full behavior? [Was: Feature request? Make deliver quota inclusive!]

2010-02-19 Thread Steffen Kaiser

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 19 Feb 2010, Charles Marcus wrote:


Ahh... so, this would only be a [potential] problem in the case of [a]
user[s] that didn't login for a long time... and I guess you could even
deal with that by some kind of nightly cron job...


A cron job mailed me if a spool file exceeded some limit. Then I needed 
to determine why the user was not reading their mails. It was a closed user 
group, not a public service or anything like that.


Regards,

- -- 
Steffen Kaiser

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iQEVAwUBS352SL+Vh58GPL/cAQLJyggAhcQqx+VOhY60eC8l9g+xEG8sEIztiM7H
BbLYycBqd5h8nG+A0zPhq2YL7N1cCYQ4328pizaPx6uDlK0pulYY9hcf65BIRhBn
hH3fRRxPyYq1AvI25PVob/xVHoo/u8l58VtiVE2L2Kd+fjNRHvbfIeIrcnUs6Ab8
nHKMPWrDlhNJZ6JPaYNcT4hxLJR/k3WGsamMx+ILsTkijEJzAqudPliKttBNcuK9
8Ticeb/gyoOIlpitXekajmg0iFBSm2xPGX/4CxL73aSygzy+S8yMjpCoydOzWROh
/hS9RO/qPeUD6ySTZIslpJNj9HpZKpOh/q3drRom8EXx3vDp0daWKw==
=F2Mk
-END PGP SIGNATURE-


Re: [Dovecot] quota problem

2010-02-19 Thread Andre Hübner

Hello,


Call deliver with -d $USER parameter.


Phew, thanks, I got it now.
The LDA works together with all procmail rules.
One thing is left, but I think this is not possible.

Our users have full access to their procmail setup. If they use procmail
rules to file mails into folders directly, it is possible to bypass the
quota rules, because deliver is called at the end of the procmail
processing.
If I could call deliver at the beginning of procmail, and deliver would
only bounce mails if the mailbox is over quota and do nothing otherwise,
the bypass would be eliminated.
But I don't see a way to get the mail back into procmail processing after
piping it to a binary.


ok, thanks a lot for helping, problem itself is solved,
Andre



[Dovecot] quota plugin + setting in protocol or lda

2010-02-19 Thread maximatt
hi...

What is the difference between configuring the quota plugin, when the LDA
is Dovecot, with these settings:

protocol imap {
:
mail_plugins = quota imap_quota
:
}
protocol pop3 {
:
mail_plugins = quota
:
}

vs. these settings:

protocol lda {
  :
  mail_plugins = quota
  :
}

Note: the Dovecot version is 1.2.0.

thanks in advance!!

-- 
Salu2 ;)


[Dovecot] Client behaviour with sieve

2010-02-19 Thread Koenraad Lelong

Hi,

I have a working dovecot imap-server, with sieve.
I find it odd that my mail-clients (Thunderbird 2 and 3) don't report 
anything that's new in the folders.

What I mean is this:
Postfix gets a mail and hands it over to Dovecot's LDA, and sieve moves 
it to a folder.
When I log in with Thunderbird, I see new messages in my Inbox. But the 
message that was moved to a folder is invisible until I click on the 
folder. Then Thunderbird sees there are new messages in that folder, 
reports the number of new messages, and makes the folder name bold.


Is this the expected behaviour? Or did I configure something wrong?
Thanks for any clarification.

Regards,

Koenraad Lelong.


[Dovecot] Dovecot blog

2010-02-19 Thread Timo Sirainen
http://blog.dovecot.org/

I was thinking that I could blog about:

 - ideas for new Dovecot feature designs
 - when I actually manage to implement some new great feature
 - maybe some stuff about IMAP/email in general
 - and maybe whenever I happen to be moving to a different country

I wasn't really planning on announcing new releases there. Dovecot feature 
designs are also sent to Dovecot mailing list, as before. But there have been 
other things I've thought about mentioning somewhere, but Dovecot ML didn't 
really seem like the right place.

I doubt I'll update the blog very often. But that probably makes it even more 
useful, since casual readers find the useful stuff quickly. :)



Re: [Dovecot] Client behaviour with sieve

2010-02-19 Thread Nikita Koshikov
On Fri, 19 Feb 2010 15:17:17 +0100
Koenraad Lelong dove...@ace-electronics.be wrote:

 Hi,
 
 I have a working dovecot imap-server, with sieve.
 I find it odd that my mail-clients (Thunderbird 2 and 3) don't report 
 anything that's new in the folders.
 What I mean is this :
 Postfix get's a mail and hands it over to dovecot's LDA and sieve moves 
 it to a folder.
 When I log in with Thunderbird, I see new messages in my Inbox. But that 
 message that was moved to a folder is invisible until I click on the 
 folder. Then Thunderbird sees there are new messages in that folder and 
 reports the number of new messages and makes the foldername bold.
 
 Is this the expected behaviour ? Or did I configure something wrong ?
 Thanks for any clarification.
 
 Regards,
 
 Koenraad Lelong.

Take a look http://www.mozilla.org/support/thunderbird/tips#beh_downloadstartup


Re: [Dovecot] Client behaviour with sieve

2010-02-19 Thread Steffen Kaiser

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 19 Feb 2010, Koenraad Lelong wrote:


Is this the expected behaviour ?


Yes.


Thanks for any clarification.


The MUA must actively request the status of the mail folders. The INBOX and 
the currently selected folder are automatically monitored by all MUAs, I 
guess.


I don't know if TB has a "monitor all" feature; otherwise you have to select 
an option in each folder's properties.


Regards,

- -- 
Steffen Kaiser

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iQEVAwUBS36f+L+Vh58GPL/cAQI/1wf8CKWgqcizazSoyh7DtQpRvP59JxGU2lEt
Mr9Yxalxl0BeQPAvV2/ndP/R4Gkg1VReNWRdgQh3CgLoZykVr0Mh44bidhDNtqGL
9x4QF+3otgml1iR458wjSYSdZfnMzXaQK/E03qRwX0/WaR3dVGbBod5J2y3C/n5H
Re29IvhwjGcVq93zBKORARrLSVsPv2MpflW0w0nLxC/Fdmc03xvDgdX4zRMbmbXZ
+nA/EhCWPVI2dOQ0lv+Z23GTTb+L0Q9TwUXBhrQn8tjju4PTtIdS8c2pKdqMJyUx
LdhpCq9josq3Qsa1VI41h7vpUJ7L72mFXoQRyQase7uKQBNuf0xJMA==
=Aamd
-END PGP SIGNATURE-


Re: [Dovecot] quota plugin + setting in protocol or lda

2010-02-19 Thread Timo Sirainen
imap is the only special case, because there you probably want to use 
imap_quota. Everything else is just happy with plain quota. Actually you 
could even put the regular mail_plugins outside protocol {} and only override 
it for imap.
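
In 1.2.x that could look roughly like this (sketch):

```
# dovecot.conf (1.2.x) sketch: one global default, overridden only for imap
mail_plugins = quota

protocol imap {
  mail_plugins = quota imap_quota
}
# pop3 and lda then inherit the global "quota" setting
```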

On 19.2.2010, at 16.15, maximatt wrote:

 hi...
 
 which are the diference to configure quota plugin when LDA is dovecot:
 
 these setting:
 
 protocol imap {
:
mail_plugins = quota imap_quota
:
 }
 protocol pop3 {
:
mail_plugins = quota
:
 }
 
 .vs.
 
 these setting:
 
 protocol lda {
   :
   mail_plugins = quota
   :
 }
 
 Note: - dovecot version is 1.2.0
 
 thanks in advance!!
 
 -- 
 Salu2 ;)



Re: [Dovecot] Client behaviour with sieve

2010-02-19 Thread Roderick A. Anderson

Steffen Kaiser wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 19 Feb 2010, Koenraad Lelong wrote:


Is this the expected behaviour ?


Yes.


Thanks for any clarification.


The MUA must actively request the status of the mailfolders. The INBOX 
and the currently selected folder are automatically monitored by all 
MUAs, I guess.


Dunno, if TB has a monitor all feature, otherwise you have to select 
an option in each folder's properties.


I haven't found one so far.  The next best thing would be to set it on by 
default and let the user turn it off, but that is a Mozilla issue, not 
Dovecot's.



\\||/
Rod
--


Regards,

- -- Steffen Kaiser




Re: [Dovecot] Client behaviour with sieve

2010-02-19 Thread Koenraad Lelong

Nikita Koshikov wrote:

On Fri, 19 Feb 2010 15:17:17 +0100
Koenraad Lelong xxx...@ace-electronics.be wrote:


...


Take a look http://www.mozilla.org/support/thunderbird/tips#beh_downloadstartup

Hi Nikita,
Thanks for the link.

What I don't like is that you posted my e-mail address in your message. 
Now it will be available to harvesters, and I will get spam via that 
address very soon. Please don't be offended, but remove it from your 
replies.


Regards,

Koenraad Lelong.




Re: [Dovecot] Client behaviour with sieve

2010-02-19 Thread Matthijs Kooijman
Hi Koenraad,

 What I don't like is that you posted my e-mail-adress in your
 message. Now it will be available to harvesters, and I will get spam
 via that address very soon. Please don't be offended, but remove
 that from your replies.
Not to be a wise-ass, but by posting to a list you get your email address
published anyway. There are public archives, of which the official one uses
obfuscation, but I think spam crawlers can replace " at " with an "@" just
fine, and dovecot.org publishes an mbox file of the entire archive. I agree
that having the address in plain text in a reply is more obvious, but you'll
have to accept that it will be harvested anyway, unless you're not using the
address at all...

Gr.

Matthijs




Re: [Dovecot] wish now I'd not upgraded...

2010-02-19 Thread Werner
Am 15.02.10 15:18, schrieb Timo Sirainen:
 On 15.2.2010, at 16.14, Stan Hoeppner wrote:
 
 Upgraded from Debian Dovecot 1.0.15 to Debian Dovecot 1.2.10-1~bpo50+1.

 Problem:  Instantly noticed in TB 3.0.1 Win32 that all emails in all folders
 were marked as unread.
 
 This is a Thunderbird bug and there have been several threads about this 
 here. Basically the fix is to disable CONDSTORE support in Thunderbird until 
 3.0.2 is released.
 

But this should not happen with POP3? I'm asking because I want to migrate
from Courier to Dovecot. In the lab we've converted the mailboxes to Dovecot
format, BUT when using POP3, TB wants to download all mails again.

So is this also related to the TB bug?

Thanks,
Werner


Re: [Dovecot] wish now I'd not upgraded...

2010-02-19 Thread Timo Sirainen
On 19.2.2010, at 17.28, Werner wrote:

 Am 15.02.10 15:18, schrieb Timo Sirainen:
 On 15.2.2010, at 16.14, Stan Hoeppner wrote:
 
 Upgraded from Debian Dovecot 1.0.15 to Debian Dovecot 1.2.10-1~bpo50+1.
 
 Problem:  Instantly noticed in TB 3.0.1 Win32 that all emails in all folders
 were marked as unread.
 
 This is a Thunderbird bug and there have been several threads about this 
 here. Basically the fix is to disable CONDSTORE support in Thunderbird until 
 3.0.2 is released.
 
 
 But this should not happen with POP3?

No.

 I'm asking because I want to migrate from
 courier to dovecot. In the Lab we've converted the mailbox to dovecot-format
 BUT when using POP3, TB wants to download all Mails again.

Have you read http://wiki.dovecot.org/Migration and 
http://wiki.dovecot.org/Migration/Courier? If you have, especially note this 
part: Some clients re-download all mails if you change the hostname in the 
client configuration. Be aware of this when testing.

Re: [Dovecot] Client behaviour with sieve

2010-02-19 Thread Gregory Finch
On 2010-02-19 6:33 AM, Roderick A. Anderson wrote:
 Steffen Kaiser wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On Fri, 19 Feb 2010, Koenraad Lelong wrote:

 Is this the expected behaviour ?

 Yes.

 Thanks for any clarification.

 The MUA must actively request the status of the mailfolders. The INBOX
 and the currently selected folder are automatically monitored by all
 MUAs, I guess.

 Dunno, if TB has a monitor all feature, otherwise you have to select
 an option in each folder's properties.
 
 I haven't found one so far.  Next best thing would be to set it on by
 default and let the user turn it off but that is a Mozilla issue not
 Dovecot's.
 
 
 \\||/
 Rod

The setting you're looking for is mail.check_all_imap_folders_for_new.
Set it to true and it will do what you want. I've used it once, but it
seemed to cause issues with IDLE: after checking all folders, it would IDLE
on whichever folder it selected last during the check, not on the folder
you were currently viewing. However, that could just be me ;)
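
If you'd rather set it outside the Config Editor, a user.js file in the Thunderbird profile directory can carry the pref (sketch; the profile location varies by OS):

```
// user.js in the Thunderbird profile directory
user_pref("mail.check_all_imap_folders_for_new", true);
```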

-Greg





Re: [Dovecot] Dovecot blog

2010-02-19 Thread Charles Marcus
On 2010-02-19 9:20 AM, Timo Sirainen wrote:
 http://blog.dovecot.org/
 
 I was thinking that I could blog about:

I for one will most likely enjoy reading whatever you deem worthy of
blogging about. Thanks Timo!

-- 

Best regards,

Charles


[Dovecot] quota and lazy_expunge plugins both used: quotas go wrong with lazy_expunge'd mails

2010-02-19 Thread Baptiste Malguy
Hello,

I may have missed a point, but I think I have found an issue between the
quota and lazy_expunge plugins.

I have noticed that lazy_expunged mails are counted as part of the quota,
while the documentation says they should not be. I first tried to find out
whether this was a configuration mistake somewhere. Notably, you can see
that the expunged mails are in the directory ~/expunged while the usual mail
is in ~/Maildir: expunged mails are not in a subdirectory of ~/Maildir
(this was just to make sure there were no side effects here).

I deleted some expunged mails to be sure: at its next cycle, the file
~/Maildir/maildirsize contained the expected value. So yes, I do know that
mails in the expunged directory are counted (size and number of mails).

So I took a look at the source code, adding some i_info(...) calls at
different places in the quota and lazy_expunge plugins.

From the source code reading session, I understand that :
1. In src/lazy-expunge/lazy-expunge-plugin.c
 1.1. lazy_expunge_mail_storage_init() sets the expunged mails namespace
flag with NAMESPACE_FLAG_NOQUOTA
 1.2. lazy_expunge_mail_storage_init() is called by
lazy_expunge_mail_storage_created()
 1.3. lazy_expunge_mail_storage_created() is part of the callback list
hook_mail_storage_create
2. In src/plugins/quota/quota-storage.c:
 2.1. The only place in the whole dovecot source code where
NAMESPACE_FLAG_NOQUOTA is checked is in function
quota_mailbox_list_created()
3. In src/plugins/quota/quota-plugin.c:
  3.1. quota_plugin_init() adds quota_mailbox_list_created() to the callback
list hook_mailbox_list_created.

From my observations:
- Callback functions in hook_mailbox_list_created are called _before_
callback functions in hook_mail_storage_create.
- This observation led me to move the piece of code that sets
NAMESPACE_FLAG_NOQUOTA from lazy_expunge_mail_storage_init() to
lazy_expunge_mail_storage_created().
- But there, lazy_expunge_mail_storage_created() is called after
quota_mail_storage_created(), because the names of the library files are
lib10_quota_plugin.so and lib02_lazy_expunge_plugin.so.
- There I also learnt that callback functions of the same list are called
in the reverse order of the library filenames.

To see if my observations were right, I renamed lib10_quota_plugin.so to
lib01_quota_plugin.so, and then it worked.

But I doubt this is the right way to fix it. It also reverses the order in
which the other callbacks of the two plugins are called, which might not be
expected at all. I'm not even sure that moving some code from
lazy_expunge_mail_storage_init() to lazy_expunge_mail_storage_created() was
right: I improvised while reading the source code and do not understand the
whole logic, so I suppose this could break something else.

Timo, what do you propose?

Thanks for your great software.

-- 
Baptiste MALGUY
PGP fingerprint: 49B0 4F6E 4AA8 B149 B2DF  9267 0F65 6C1C C473 6EC2
# 1.2.10: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.30-bpo.1-amd64 x86_64 Debian 5.0.4 
log_timestamp: %Y-%m-%d %H:%M:%S 
protocols: imap imaps managesieve
ssl: required
ssl_ca_file: /etc/ssl/certs/
ssl_cert_file: /etc/ssl/certs/x
ssl_key_file: /etc/ssl/private/x
ssl_parameters_regenerate: 24
verbose_ssl: yes
login_dir: /var/run/dovecot/login
login_executable(default): /usr/lib/dovecot/imap-login
login_executable(imap): /usr/lib/dovecot/imap-login
login_executable(managesieve): /usr/lib/dovecot/managesieve-login
login_greeting: X
login_max_processes_count: 256
max_mail_processes: 1024
first_valid_uid: 500
last_valid_uid: 500
first_valid_gid: 501
last_valid_gid: 501
mail_privileged_group: mail
mail_uid: vmail
mail_gid: vmail
mail_location: maildir:~/Maildir
mbox_write_locks: fcntl dotlock
mail_executable(default): /usr/lib/dovecot/imap
mail_executable(imap): /usr/lib/dovecot/imap
mail_executable(managesieve): /usr/lib/dovecot/managesieve
mail_plugins(default): autocreate expire fts fts_squat lazy_expunge quota 
imap_quota antispam acl imap_acl
mail_plugins(imap): autocreate expire fts fts_squat lazy_expunge quota 
imap_quota antispam acl imap_acl
mail_plugins(managesieve): 
mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
mail_plugin_dir(managesieve): /usr/lib/dovecot/modules/managesieve
namespace:
  type: private
  separator: /
  inbox: yes
  list: yes
  subscriptions: yes
namespace:
  type: shared
  separator: /
  prefix: shared/%%u/
  location: maildir:%%h/Maildir:INDEX=~/Maildir/shared/%%u
  list: yes
namespace:
  type: private
  separator: /
  prefix: .ARCHIVES/
  location: maildir:~/Maildir/archives
  list: yes
  subscriptions: yes
namespace:
  type: private
  separator: /
  prefix: .EXPUNGED/
  location: maildir:~/expunged
  list: yes
  subscriptions: yes
lda:
  postmaster_address: postmas...@xx
  mail_plugins: sieve expire acl
  quota_full_tempfail: yes
auth default:
  debug: yes
  passdb:
driver: pam
  

[Dovecot] namespaces/virtual folder archiving

2010-02-19 Thread fernando
Hi,

I was following the earlier namespaces discussion and I would like to
repost a doubt. I need some kind of archiving, i.e. storing old messages on
cheap storage. But I couldn't think of any solution other than symlinks.

Then I thought about storing 'Sent Items' (as it holds old and less
accessed messages). But I would also need to do that with NFS and symlinks.
So along comes the namespaces discussion and a little brainstorm:

1) Could I have one namespace with only the INBOX, and the personal folders
(drafts, sent items, etc.) in another one - the latter stored on huge SATA
disks while the INBOX lives on 300G SAS disks?

2) Could I keep the prior configuration, but make the second namespace
hidden, and have the INBOX subfolders (or any specific one, such as Sent
Items) as virtual folders? I don't know if this makes sense...

3) Could I have my normal INBOX, plus a folder (on another disk, e.g. a
huge SATA one) storing files older than 30 days, and through virtual
folders join these messages into a common virtual INBOX when accessing via
IMAP or POP3?

What do you think - would you have another, better approach?
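
Option 1 might be sketched in Dovecot 1.x configuration roughly like this; the mount points and paths below are pure assumptions for illustration:

```
# dovecot.conf (1.x) sketch - INBOX on fast disks, other folders on cheap disks
namespace private {
  separator = /
  inbox = yes
  location = maildir:/srv/sas/%u/Maildir      # assumed fast SAS mount
  list = yes
}
namespace private {
  separator = /
  prefix = Archive/
  location = maildir:/srv/sata/%u/Maildir     # assumed cheap SATA mount
  list = yes
  subscriptions = yes
}
```

Whether virtual folders can then merge the two transparently (options 2 and 3) is a separate question.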

Best Regards,
Fernando




Re: [Dovecot] Dovecot blog

2010-02-19 Thread Brandon Lamb
On Fri, Feb 19, 2010 at 8:14 AM, Charles Marcus
cmar...@media-brokers.com wrote:
 On 2010-02-19 9:20 AM, Timo Sirainen wrote:
 http://blog.dovecot.org/

 I was thinking that I could blog about:

 I for one will most likely enjoy reading whatever you deem worthy of
 blogging about. Thanks Timo!

 --

 Best regards,

 Charles

Now where is the Like button for this thread?


Re: [Dovecot] Dovecot blog

2010-02-19 Thread Timo Sirainen
On 19.2.2010, at 19.56, Brandon Lamb wrote:

 I was thinking that I could blog about:
 
 I for one will most likely enjoy reading whatever you deem worthy of
 blogging about. Thanks Timo!
 
 Now where is the Like button to this thread

In the blog itself? ;)

(Apparently I should replace those texts with thumbs up/down images, but I 
haven't figured out how to do that.)

[Dovecot] segfault - (imap|pop3)-login during nessus scan

2010-02-19 Thread Todd Rinaldo
We've been struggling with a problem for the past couple of days, which to
this point I've only been able to boil down to this:

1. Install Nessus home edition (fewer plugins, I assume)
2. Run all scans (sequentially or in parallel, doesn't seem to matter)
3. About 3 minutes in, /var/log/messages will show segfaults on imap and/or pop3

imap-login[22185]: segfault at 000c rip 003c7de610a2 rsp 
7fffa2342068 error 4
or sometimes...
pop3-login[24451]: segfault at 000c rip 003c7de610a2 rsp 
7fff07116968 error 4

I'm having a really hard time getting a core dump, and an equally hard time
narrowing down the list of Nessus tests which cause this. So far I have
reproduced this failure on 1.1.19 and 1.1.20.

Additionally, we saw something similar on 1.2 and reverted back to 1.1 a
year ago. At the time we could not reproduce a test case and finally gave up.

Has anyone seen something along these lines? 

Can anyone recommend how I could narrow this down further so we can find the 
problem?

Thanks,
Todd

Re: [Dovecot] Highly Performance and Availability

2010-02-19 Thread Wayne Thursby
Thank you to everyone who has contributed to this thread, it has been
very educational.

Since my last post, I have had several meetings, including a conference
with Dell storage specialists. I have also gathered some metrics to beat
around.

The EqualLogic units we are looking at are the baseline models, the
PS4000E. We would get two of these with 16x1TB 7200RPM SATA drives and
dual controllers, for a total of 4xGbE ports dedicated to iSCSI traffic.

I have sent the following information and questions to our Dell reps,
but I figured I'd solicit opinions from the group.

The two servers I'm worried about are our mail server (Postfix/Dovecot)
and our database server (PostgreSQL). Our mail server regularly (several
times an hour) hits 1-second spikes of 1400 IOPS in its current
configuration. Our database server runs around 100-200 IOPS during quiet
periods, and spikes up to 1200 IOPS randomly, on average every 15 minutes.

With 4xGbE ports on each EQL device, and also keeping in mind we'll have
two of them, is it reasonable to expect 1400 IOPS bursts? What if both of
these servers were on the same storage and required closer to 3000 IOPS?
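
On the bandwidth side alone, those bursts are easy to sanity-check; the 8 KiB average I/O size below is an assumption, not a measured value:

```python
# Back-of-envelope: do the IOPS bursts above saturate a single GbE link?
KIB = 1024
GBE_MB_PER_SEC = 125  # ~1 Gbit/s expressed in MB/s, ignoring protocol overhead

def burst_throughput_mb(iops, io_size_kib=8):
    """Sequential-equivalent throughput of an IOPS burst, in MB/s."""
    return iops * io_size_kib * KIB / 1e6

print(round(burst_throughput_mb(1400), 1))  # mail server burst
print(round(burst_throughput_mb(3000), 1))  # combined worst case
```

Even the combined 3000 IOPS case works out to roughly 25 MB/s, far below one GbE port, so with small I/Os the constraint is almost certainly spindle count and latency rather than the 4xGbE links.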

--
Wayne Thursby
System Administrator
Physicians Group, LLC


Re: [Dovecot] Poll: Quota near full behavior? [Was: Feature request? Make deliver quota inclusive!]

2010-02-19 Thread Noel Butler
On Fri, 2010-02-19 at 06:10 -0500, Charles Marcus wrote:


 
  I certainly wouldn't want to accept a message in this case, user 
  might be 1K under quota, but get 20m file now that might be a
  whoopie doo :) but what if 130K users did same.
 
 Well, I'd argue that if you're allowing messages that big already for
 130K users, then you should have enough spare storage to handle such a
 situation - although you and I both know the likelihood of even 10% of
 those 130K users encountering such a situation is next to null, so I
 don't think it's a valid argument.



Storage is designed based on guaranteed quota for each user, plus
anticipated growth. Why should we suffer huge expense just so every user
who maxes out their quota can exceed it?

Your idea might be fine for a small home office, but when you deal with
thousands of users it is an insane configuration.


 That said - in an enterprise environment like that, you'd be assigning
 group and domain level quotas too to keep any one group/customer from
 using up all of the storage on the server, right?


No - think of an ISP or a university student mail system.



Re: [Dovecot] Highly Performance and Availability

2010-02-19 Thread Stan Hoeppner
Wayne Thursby put forth on 2/19/2010 3:40 PM:
 Thank you to everyone who has contributed to this thread, it has been
 very educational.
 
 Since my last post, I have had several meetings, including a conference
 with Dell storage specialists. I have also gathered some metrics to beat
 around.
 
 The EqualLogic units we are looking at are the baseline models, the
 PS4000E. We would get two of these with 16x1TB 7200RPM SATA drives and
 dual controllers for a total for 4xGbE ports dedicated to iSCSI traffic.
 
 I have sent the following information and questions to our Dell reps,
 but I figured I'd solicit opinions from the group.
 
 The two servers I'm worried about are our mail server (Postfix/Dovecot)
 and our database server (PostgreSQL). Our mail server regularly (several
 times an hour) hits 1 second spikes of 1400 IOPS in its current
 configuration. Our database server runs aroun 100-200 IOPS during quiet
 periods, and spikes up to 1200 IOPS randomly, but on average every 15
 minutes.
 
 With 4xGbE ports on the each EQL device, and also keeping in mind we'll
 have two of those, is it reasonable to expect 1400 IOPS bursts? What if
 both of these servers were on the same storage and required closer to
 3000 IOPS?

The first thing you need to do, Wayne, is talk to your VMware rep and set
up a 15-30 minute teleconference with a VMware engineer.  Or, if you have a
local VMware consultant/engineer, set up a meeting with him.  You need to
get their thoughts and recommendations on your goals and on the hardware
you're currently looking at to implement them.

It sounds like you're set on using the ESX software iSCSI initiator, and using 2
to 4 standard GigE ports on each of your ESXi servers in some kind of ethernet
channel bonding and/or active/active multipathing setup.  I cannot say for
certain because I don't know the current certified configurations.  BUT, my
instinct based on prior experience and previous knowledge say this isn't
possible, and if possible, not desirable from a performance standpoint.  To do
this in a certified configuration, I'm guessing you at the very least will need
two single port iSCSI HBAs in each server, or one dual port iSCSI HBA in each
server.

Please get the right technical answers to these questions from VMware before
shooting yourself in the foot, for your sake.  ;)

If it turns out you can't bond 2-4 GbE iSCSI ports in an active/active setup,
you're probably going to need to go 10 GbE iSCSI, stepping up a few models in
the Equallogic lineup and stepping up to 10 GbE HBAs.  The other option (a
better, and cheaper, one) is going 4 Gb Fibre Channel, as I previously mentioned.
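For perspective on the quoted numbers: the raw bandwidth behind a 1400 IOPS burst is tiny, so with GigE iSCSI the limiter is latency and queue depth, not wire speed. A back-of-the-envelope sketch (the 8 KB average I/O size is an assumption, not a measured figure):

```shell
# IOPS-to-bandwidth sanity check; the 8 KB average I/O size is an
# assumption, not a measured number.
iops=1400
io_size_kb=8
kb_per_s=$((iops * io_size_kb))        # 11200 KB/s
mb_per_s=$((kb_per_s / 1024))          # roughly 10 MB/s
echo "${iops} IOPS @ ${io_size_kb}KB = ${mb_per_s} MB/s"
```

Even at 3000 IOPS that's only ~23 MB/s, well under a single GigE link; the real question is per-request round-trip latency under load.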

-- 
Stan


Re: [Dovecot] Best inode_ratio for maildir++ on ext4

2010-02-19 Thread Stan Hoeppner
Rodolfo Gonzalez put forth on 2/19/2010 5:18 PM:
 Hi,
 
 This might be a silly question: which would be
 the best inode ratio for a 5 TB filesystem dedicated to Maildir++
 storage? I use Ubuntu server, which has a preconfigured setting for
 mkfs.ext4 called "news" with inode_ratio = 4096, and after formatting the
 fs with that setting and then with the default setting I see this
 difference of space (wasted space, but more inodes):
 
 4328633696 free 1K-blocks with mkfs's -T news switch = 1219493877 free
 inodes
 4557288800 free 1K-blocks with default mkfs settings = 304873461 free
 inodes
 
 I'll be storing e-mail messages for around 20,000 accounts on that
 partition (average 512 MB per account). Would you consider it worth the
 waste of about 200 GB of filesystem space in exchange for more inodes?

If your version of Ubuntu server has XFS support built in, forget ext4 and go
XFS.  It's more reliable, faster in every single benchmark I've seen, especially
for large numbers of files, both large and small, has a ton of management tools
and instrumentation interfaces, and has a proven enterprise track record.

-- 
Stan


Re: [Dovecot] Best inode_ratio for maildir++ on ext4

2010-02-19 Thread Bernd Petrovitsch
Hi!

On Fre, 2010-02-19 at 17:18 -0600, Rodolfo Gonzalez wrote:
[...]
 This might be a silly question: which would be
Not at all IMHO.

 the best inode ratio for a 5 TB filesystem dedicated to Maildir++
 storage? I use Ubuntu server, which has a preconfigured setting for
 mkfs.ext4 called "news" with inode_ratio = 4096, and after formatting the
 fs with that setting and then with the default setting I see this
 difference of space (wasted space, but more inodes):
 
 4328633696 free 1K-blocks with mkfs's -T news switch = 1219493877 free
 inodes
 4557288800 free 1K-blocks with default mkfs settings = 304873461 free inodes
 
 I'll be storing e-mail messages for around 20,000 accounts on that
 partition (average 512 MB per account). Would you consider it worth the
 waste of about 200 GB of filesystem space in exchange for more inodes?
That depends entirely on whether the 512 MB of mail per account means a few
large messages or a lot of small ones (assuming that future behaviour is
similar to the past).
So perhaps it helps to count the files (and directories) on that file
system, as each of them actually uses an inode.
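Counting is a one-liner, since every file and directory consumes an inode. A sketch against a throwaway directory (the paths are purely illustrative):

```shell
# Every file and directory uses one inode, so counting entries in an
# existing mail store estimates its inode demand.
dir=$(mktemp -d)
touch "$dir/msg1" "$dir/msg2"
mkdir "$dir/cur"
count=$(find "$dir" | wc -l)   # the directory itself + 2 files + 1 subdir
rm -rf "$dir"
echo "inodes needed: $count"
```

Run the same `find | wc -l` over the real maildir tree to see how close the current store already is to either inode budget.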

BTW, you can set values other than the "default" and "news" presets, namely by
passing the bytes-per-inode number directly.
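The tradeoff is easy to estimate up front: the bytes-per-inode ratio divides straight into the partition size, and mkfs.ext4 accepts the value directly with -i. A sketch of the arithmetic (5 TB is taken as 5 * 2^40 bytes, and the device path is a placeholder):

```shell
# inode count = filesystem size / bytes-per-inode (the inode_ratio)
fs_bytes=$((5 * 1024 * 1024 * 1024 * 1024))   # 5 TB, taken as 5 * 2^40 bytes
inodes_news=$((fs_bytes / 4096))       # -T news preset: inode_ratio = 4096
inodes_default=$((fs_bytes / 16384))   # stock ext4 default: inode_ratio = 16384
echo "news: $inodes_news  default: $inodes_default"
# A value in between can be requested directly (device is a placeholder):
#   mkfs.ext4 -i 8192 /dev/sdX1
```

These figures line up with the roughly 1.2 billion vs. 0.3 billion free inodes reported earlier in the thread, minus what the filesystem reserves for itself.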

Bernd
-- 
Bernd Petrovitsch  Email : be...@petrovitsch.priv.at
 LUGA : http://www.luga.at



Re: [Dovecot] Best inode_ratio for maildir++ on ext4

2010-02-19 Thread Noel Butler
On Fri, 2010-02-19 at 17:51 -0600, Stan Hoeppner wrote:


 If your version of Ubuntu server has XFS support built in, forget ext4 and go
 XFS.  It's more reliable, faster in every single benchmark I've seen, especially
 for large numbers of files, both large and small, has a ton of management tools
 and instrumentation interfaces, and has a proven enterprise track record.


Agree with XFS, provided, and it's a big proviso, you have reliable and
guaranteed power; hard power-outs on XFS are not known for their niceness
and protection of data.
 

--
Kind Regards,
SSA Noel Butler
L.C.P No. 251002 

This Email, including any attachments, may contain legally privileged
information, therefore remains confidential and subject to copyright
protected under international law. You may not disseminate or reveal any
part to anyone without the authors express written authority to do so.
If you are not the intended recipient, please notify the sender and
delete all relevance of this message including any attachments,
immediately. Confidentiality, copyright, and legal privilege are not
waived or lost by reason of the mistaken delivery of this message. Only
PDF and ODF documents are accepted, do not send Microsoft proprietary
formatted documents.




Re: [Dovecot] Best inode_ratio for maildir++ on ext4

2010-02-19 Thread Noel Butler
Bugger, hit enter too soon. Was going to say, it is probably better than
using ext4, though; why on earth anyone would use that on a serious
production server I'll never know.

On Sat, 2010-02-20 at 11:51 +1000, Noel Butler wrote:

 On Fri, 2010-02-19 at 17:51 -0600, Stan Hoeppner wrote:
 
 
  If your version of Ubuntu server has XFS support built in, forget ext4 and go
  XFS.  It's more reliable, faster in every single benchmark I've seen, especially
  for large numbers of files, both large and small, has a ton of management tools
  and instrumentation interfaces, and has a proven enterprise track record.
 
 
 Agree with XFS, provided, and it's a big proviso, you have reliable and
 guaranteed power; hard power-outs on XFS are not known for their niceness
 and protection of data.
  
 


Kind Regards,
SSA Noel Butler
L.C.P No. 251002 





Re: [Dovecot] segfault - (imap|pop3)-login during nessus scan

2010-02-19 Thread Timo Sirainen
On Fri, 2010-02-19 at 15:28 -0600, Todd Rinaldo wrote:
 pop3-login[24451]: segfault at 000c rip 003c7de610a2 rsp 
 7fff07116968 error 4
 
 I'm having a really hard time getting a core dump

Yeah, it's difficult to get login processes to core dump. In v1.2 it's
easier though. But there's an alternative way to get the backtrace:

First set login_process_per_connection=no. Then:

gdb -p `pidof imap-login`
(gdb) cont
...wait for the crash...
(gdb) bt full





Re: [Dovecot] segfault - (imap|pop3)-login during nessus scan

2010-02-19 Thread Timo Sirainen
On Sat, 2010-02-20 at 05:23 +0200, Timo Sirainen wrote:
 On Fri, 2010-02-19 at 15:28 -0600, Todd Rinaldo wrote:
  pop3-login[24451]: segfault at 000c rip 003c7de610a2 rsp 
  7fff07116968 error 4

BTW. I just tried with Nessus, but couldn't reproduce this.





Re: [Dovecot] Best inode_ratio for maildir++ on ext4

2010-02-19 Thread Rodolfo Gonzalez Gonzalez

Noel Butler wrote:

Agree with XFS, provided, and it's a big proviso, you have reliable
and guaranteed power; hard power-outs on XFS are not known for their
niceness and protection of data.


Bugger, hit enter too soon. Was going to say, it is probably better
than using ext4, though; why on earth anyone would use that on a
serious production server I'll never know.



I used to have the maildirs on ReiserFS and never had a problem with it,
but given the current state of that FS, and since I wasn't really
comfortable with it, I'll give XFS a try for the maildir array and the
postfix queue partition. After formatting, I got 4.6 TB of usable space,
which makes me happy, and also the dynamic inode allocation.


Regards,
Rodolfo.

P.S. I have UPS and generator.


Re: [Dovecot] Best inode_ratio for maildir++ on ext4

2010-02-19 Thread Stan Hoeppner
Rodolfo Gonzalez Gonzalez put forth on 2/20/2010 12:18 AM:

 I used to have the maildirs on ReiserFS and never had a problem with it,
 but given the current state of that FS, and since I wasn't really
 comfortable with it, I'll give XFS a try for the maildir array and the
 postfix queue partition. After formatting, I got 4.6 TB of usable space,
 which makes me happy, and also the dynamic inode allocation.

http://en.wikipedia.org/wiki/XFS

Like I said, it's a very mature high performance journaled FS with many
enterprise level features, dynamic inode allocation being one of many.  It was
introduced by SGI in 1994 and has been in constant development since then.  It
was ported to Linux around 2000 and introduced into the mainline kernel in 2.4.

It is the only filesystem ever used on SMP servers from 128+ CPUs up to 1024
CPUs.  This is because SGI is the only company to ever offer SMP systems beyond
128 CPUs.  They are actually ccNUMA, not SMP, but the programming model is SMP,
because every CPU in the machine can directly address memory in any NUMA node in
the system.  The only practical difference between ccNUMA and a true SMP is the
memory latency.

Obviously, scalability and the ability to manipulate very large filesystems with
large numbers of files is required for such massive machines.  The Columbia
supercomputer at the NASA Ames facility consists of 20 such machines, each with
512 CPUs.  The system has a 1 petabyte (raw) RAID subsystem formatted with
CXFS, the clustered version of XFS.

XFS scales very well. ;)

I've been a fan of SGI for a long time.  I could never afford/justify one of their
machines.  I'm so glad they open sourced XFS and are sharing this fantastic
filesystem with the rest of us who could never afford their gear.  Many would
agree with me if I said it is hands down the overall best *nix filesystem
available for most workloads.  It's not suitable on Linux for /boot or /, but
for just about everything else it is king of the hill.

-- 
Stan