[Dovecot] auth service: out of memory

2012-06-28 Thread Mailing List SVR

Hi,

I have some out of memory errors in my logs (file errors.txt attached)

I'm using Dovecot 2.0.19. I can see some memory leak fixes in hg after 
the 2.0.19 release, but they seem related to the imap-login service.


I attached my config too; is something wrong there? Should I really 
increase the limit based on my settings?


Can these commits fix the reported leak?

http://hg.dovecot.org/dovecot-2.0/rev/6299dfb73732
http://hg.dovecot.org/dovecot-2.0/rev/67f1cef07427

Please note that the auth service is restarted when it reaches the limit, 
so there is no real issue.


Please advise.

thanks
Nicola


cat /var/log/mail.log | grep "Out of memory"
Jun 28 11:48:24 server1 dovecot: master: Error: service(auth): child 31301 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:50:18 server1 dovecot: auth: Fatal: pool_system_realloc(8192): Out of 
memory
Jun 28 11:50:18 server1 dovecot: master: Error: service(auth): child 10782 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:52:43 server1 dovecot: master: Error: service(auth): child 16854 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:54:01 server1 dovecot: auth: Fatal: block_alloc(4096): Out of memory
Jun 28 11:54:01 server1 dovecot: master: Error: service(auth): child 23378 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:55:09 server1 dovecot: auth: Fatal: pool_system_realloc(8192): Out of 
memory
Jun 28 11:55:09 server1 dovecot: master: Error: service(auth): child 28203 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:56:07 server1 dovecot: master: Error: service(auth): child 32570 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:57:01 server1 dovecot: auth: Fatal: block_alloc(4096): Out of memory
Jun 28 11:57:01 server1 dovecot: master: Error: service(auth): child 5136 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:57:57 server1 dovecot: master: Error: service(auth): child 9245 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:58:52 server1 dovecot: master: Error: service(auth): child 13779 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 11:59:49 server1 dovecot: master: Error: service(auth): child 18260 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 12:01:03 server1 dovecot: auth: Fatal: pool_system_realloc(8192): Out of 
memory
Jun 28 12:01:03 server1 dovecot: master: Error: service(auth): child 22181 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
Jun 28 12:03:24 server1 dovecot: auth: Fatal: pool_system_malloc(3144): Out of 
memory
Jun 28 12:03:24 server1 dovecot: master: Error: service(auth): child 27253 
returned error 83 (Out of memory (service auth { vsz_limit=128 MB }, you may 
need to increase it))
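
For reference, raising that limit only for the auth service is a small per-service 
override; the 256 M value below is just an illustration, not a recommendation:

service auth {
  vsz_limit = 256 M
}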

# 2.0.19: /etc/dovecot/dovecot.conf
# OS: Linux 3.2.0-25-generic x86_64 Ubuntu 12.04 LTS ext4
auth_cache_size = 10 M
auth_mechanisms = plain login
auth_socket_path = /var/run/dovecot/auth-userdb
auth_worker_max_count = 128
base_dir = /var/run/dovecot/
default_process_limit = 200
default_vsz_limit = 128 M
disable_plaintext_auth = no
first_valid_gid = 2000
first_valid_uid = 2000
hostname = mail.example.com
last_valid_gid = 2000
last_valid_uid = 2000
listen = *
login_greeting = SVR ready.
mail_location = maildir:/srv/panel/mail/%d/%t/Maildir
mail_plugins = " quota trash autocreate"
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date ihave
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  autocreate = Trash
  autocreate2 = Junk
  autocreate3 = Drafts
  autocreate4 = Sent
  autosubscribe = Trash
  autosubscribe2 = Junk
  autosubscribe3 = Drafts
  autosubscribe4 = Sent
  quota = maildir:User quota
  quota_rule = *:storage=300MB
  quota_rule2 = Trash:ignore
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=80%% quota-warning 80 %u
  sieve = ~/.dovecot.sieve
  sieve_before = /etc/dovecot/sieve/move-spam.sieve
  sieve_dir = ~/sieve
  sieve_max_actions = 32
  sieve_max_redirects = 4
  sieve_max_script_size = 1M
  sieve_quota_max_scripts = 10
  sieve_quota_max_storage = 2M
  trash = /etc/dovecot/dovecot-trash.conf.ext
}
postmaster_address = postmas...@example.com
protocols = imap pop3 sieve
service auth-worker {
  user = $default_internal_user
}
service auth {
  unix_

Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Wojciech Puchar

Has anyone tried or benchmarked ZFS, perhaps ZFS+NFS as backing store for


Yes, a long time ago. ZFS isn't useful for anything more than a toy; I/O 
performance is just bad.


Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Wojciech Puchar
The executive summary is something like: when raid5 fails, because at that 
point you effectively do a raid "scrub" you tend to suddenly notice a bunch 
of other hidden problems which were lurking and your rebuild fails (this


and no raid will protect you from every failure. You have to do backups.
EOT


Re: [Dovecot] Removing specific entry in user/auth cache

2012-06-28 Thread Timo Sirainen
On 29.6.2012, at 5.18, Daniel Parthey wrote:

> wouldn't it be better to use a syntax similar to other doveadm commands,
> with labels for all arguments?
> 
> doveadm auth test -u <user> -p <password> [<args>]
> doveadm auth cache flush -u <user> [<user> ...]
> doveadm auth cache stats
> 
> This will allow you to syntactically distinguish "commands" from "arguments".
> Otherwise you might run into the same "kludgy" syntax problem again, as soon
> as the number of subcommands changes.

The problem was with the "auth" toplevel command not having subcommands. I 
don't think there are going to be any problems with subcommands. Also there are 
many commands already that take <username> without the -u parameter. Actually it's 
only the "mail commands" that take the -u parameter at all.

Another potential problem is the "doveadm user" command. I'm wondering if it might 
be a good idea to move it to a "doveadm auth user" or "doveadm auth userdb" 
command. There should also be a similar "doveadm auth passdb" command that does 
a passdb lookup without authentication.
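
For context, the existing userdb lookup being referred to is invoked like this 
(the address is just an example); it prints the userdb fields for that user:

doveadm user bob@example.com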



Re: [Dovecot] Removing specific entry in user/auth cache

2012-06-28 Thread Daniel Parthey
Timo Sirainen wrote:
> On 28.6.2012, at 9.43, Timo Sirainen wrote:
> Perhaps for v2.2:
> 
> doveadm auth test <user> [<password>]
> doveadm auth cache flush [<user>]
> doveadm auth cache stats
>
> and for v2.1 a bit kludgy way:
> 
> doveadm auth <user> [<password>]
> doveadm auth cache flush [<user>]
> 
> so you couldn't test authentication against "cache" user, but that's probably 
> not a problem.

Hi there,

wouldn't it be better to use a syntax similar to other doveadm commands,
with labels for all arguments?

doveadm auth test -u <user> -p <password> [<args>]
doveadm auth cache flush -u <user> [<user> ...]
doveadm auth cache stats

This will allow you to syntactically distinguish "commands" from "arguments".
Otherwise you might run into the same "kludgy" syntax problem again, as soon
as the number of subcommands changes.

Regards
Daniel
-- 
https://plus.google.com/103021802792276734820


Re: [Dovecot] Removing specific entry in user/auth cache

2012-06-28 Thread Timo Sirainen
On 28.6.2012, at 9.43, Timo Sirainen wrote:

> It would be possible to add a doveadm command for this.. I think the
> main reason why I already didn't do it last time I was asked this was
> because I wanted to use "doveadm auth cache flush" or something similar
> as the command, but there already exists "doveadm auth" command and
> "cache flush" would be treated as username=cache password=flush :(
> 
> Anyone have thoughts on a better doveadm command name? Or should I just
> break it and have v2.2 use "doveadm auth check" or something for the old
> "doveadm auth" command?

Perhaps for v2.2:

doveadm auth test <user> [<password>]
doveadm auth cache flush [<user>]
doveadm auth cache stats

and for v2.1 a bit kludgy way:

doveadm auth <user> [<password>]
doveadm auth cache flush [<user>]

so you couldn't test authentication against "cache" user, but that's probably 
not a problem.
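
Applied to a concrete account, the proposed v2.2 syntax would look like this 
(username and password are placeholders, and the commands did not exist yet at 
the time of this thread):

doveadm auth test bob@example.com secretpw
doveadm auth cache flush bob@example.com
doveadm auth cache stats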

Re: [Dovecot] Removing specific entry in user/auth cache

2012-06-28 Thread Daniel Parthey
Angel L. Mateo wrote:
> On 27/06/12 14:24, Timo Sirainen wrote:
> >On 27.6.2012, at 14.10, Angel L. Mateo wrote:
> >>We have dovecot configured with auth cache.
> >> Is there any way to remove a specific entry (not all) from this cache?
> > Nope. What do you need it for?
> Because information for users sometimes changes.

We, for example, define the per-user quota via a MySQL userdb, and it needs
to be updated in a timely manner after it has been changed in the database
via a web interface.

Since we are using a prefetch userdb from MySQL (which uses the same MySQL
database as the passdb), we were required to reduce the auth cache TTL to
one minute in order to ensure timely quota updates.

It would be good if there were some mechanism to detect or force such changes
without having to reduce the caching time to one minute.
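
For reference, the cache behaviour being worked around here is controlled by
settings like these (the values are only illustrative):

auth_cache_size = 10M
auth_cache_ttl = 1 min
auth_cache_negative_ttl = 1 min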

Regards
Daniel
-- 
https://plus.google.com/103021802792276734820


Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Kelsey Cummings

On 06/28/12 05:56, Ed W wrote:

So given the statistics show us that 2 disk failures are much more
common than we expect, and that "silent corruption" is likely occurring
within (larger) real world file stores,  there really aren't many battle
tested options that can protect against this - really only RAID6 right
now and that has significant limitations...


Has anyone tried or benchmarked ZFS, perhaps ZFS+NFS as backing store 
for spools?  Sorry if I've missed it and this has already come up. 
We're using Netapp/NFS, and are likely to continue to do so but still 
curious.


-K



Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Charles Marcus

On 2012-06-28 4:22 PM, Alex Crow  wrote:

On 28/06/12 20:28, Charles Marcus wrote:

On 2012-06-28 2:04 PM, Gary Mort  wrote:

That's probably due to the different structures they use.   sdbox
can safely use either because each email message has a unique
filename, and if it exists in both places it doesn't matter.



Eh?? Sdbox is like mbox - one file per mailbox/folder... it is NOT
like maildir (one email = one file).



Not according to the wiki:

http://wiki2.dovecot.org/MailboxFormat/dbox

dbox can be used in two ways:

 single-dbox (sdbox in mail location): One message per file,
similar to Maildir. For backwards compatibility, dbox is an alias to
sdbox in mail_location.


Now how  the heck did I remember that so wrong??

Oh well, thanks for the correction...

Sorry, OP...

--

Best regards,

Charles


Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Ed W

On 28/06/2012 17:54, Charles Marcus wrote:

On 2012-06-28 12:20 PM, Ed W  wrote:

Bad things are going to happen if you lose a complete chunk of your
filesystem.  I think the current state of the world is that you should
assume that realistically you will be looking to your backups if you
lose the wrong 2 disks in a raid1 or raid10 array.


Which is a very good reason to have at least one hot spare in any RAID 
setup, if not 2.


RAID10 also statistically has a much better chance of surviving a 
multi drive failure than RAID5 or 6, because it will only die if two 
drives in the same pair fail, and only then if the second one fails 
before the hot spare is rebuilt.




Actually this turns out to be incorrect... Curious, but there you go!

Search Google for a recent very helpful exposé on this.  Basically 
RAID10 can sometimes tolerate multi-drive failure, but on average raid6 
appears less likely to trash your data, plus under some circumstances it 
better survives recovering from a single failed disk in practice.


The executive summary is something like: when raid5 fails, because at 
that point you effectively do a raid "scrub", you tend to suddenly notice 
a bunch of other hidden problems which were lurking, and your rebuild 
fails (this happened to me...).  RAID1 has no better bad-block detection 
than assuming the non-bad disk is perfect (so it won't spot latent 
unscrubbed errors), and again if you hit a bad block during the rebuild 
you lose the whole of your mirrored pair.


So the vulnerability is not the first failed disk, but discovering 
subsequent problems during the rebuild.  This certainly correlates with 
my (admittedly limited) experience.  Disk array scrubbing on a regular 
basis seems like a mandatory requirement (but how many people do it?) to 
have any chance of actually repairing a failing raid1/5 array.
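
For Linux md arrays, a scrub can be kicked off by hand like this (md0 is just an 
example device; Debian/Ubuntu also ship a checkarray cron job that does the same):

echo check > /sys/block/md0/md/sync_action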


Digressing, but it occurs to me that there would be a potentially large 
performance improvement if spinning disks could do a read/rewrite cycle 
with the disk only moving a minimal distance (my understanding is this 
can't happen at present without a full revolution of the disk). Then you 
could rewrite parity blocks extremely quickly without re-reading a full 
stripe...


Anyway, challenging problem and basically the observation is that large 
disk arrays are going to have a moderate tail risk of failure whether 
you use raid10 or raid5 (raid6 giving a decent practical improvement in 
real reliability, but at a cost in write performance).


Cheers

Ed W


Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Alex Crow

On 28/06/12 20:28, Charles Marcus wrote:

On 2012-06-28 2:04 PM, Gary Mort  wrote:

That's probably due to the different structures they use.   sdbox
can safely use either because each email message has a unique
filename, and if it exists in both places it doesn't matter.


Eh?? Sdbox is like mbox - one file per mailbox/folder... it is NOT 
like maildir (one email = one file).




Not according to the wiki:

http://wiki2.dovecot.org/MailboxFormat/dbox

   dbox can be used in two ways:

single-dbox (sdbox in mail location): One message per file,
   similar to Maildir. For backwards compatibility, dbox is an alias to
   sdbox in mail_location.

multi-dbox (mdbox in mail location): Multiple messages per
   file, but unlike mbox multiple files per mailbox.


So the parent appears to be right.

Alex




Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Charles Marcus

On 2012-06-28 2:04 PM, Gary Mort  wrote:

That's probably due to the different structures they use.   sdbox
can safely use either because each email message has a unique
filename, and if it exists in both places it doesn't matter.


Eh?? Sdbox is like mbox - one file per mailbox/folder... it is NOT like 
maildir (one email = one file).



mdbox though is different, multiple messages are stored in a single
file.


The difference between mdbox and sdbox is that sdbox puts all messages for 
any given mailbox/folder in one sdbox file (just like mbox). Sdbox has a 
setting for the max file size of the dbox file, and once an mdbox file 
exceeds that size, it creates a new mdbox file to start adding messages to.


--

Best regards,

Charles


Re: [Dovecot] Setting up mixed mbox and maildir

2012-06-28 Thread Jeff Lacki
Jonathan Ryshpan  wrote:

> Quite right; this comes from a reading of pages in both wiki1 and wiki2.
> I now surmise that this isn't a good idea since wiki1 describes v1.x
> and wiki2 describes v2.x, which have different syntaxes (syntaces?).  Is
> all this correct?

I too had a very hard time figuring out what was what in the new wiki
for 2.1.7, and I still haven't figured it out; I gave up since I've had no
time to get back into it.  I had already spent 2-3 full days (in my spare
time) trying to figure out the permissions nightmare in the logs.

I was only able to get mbox working, so I gave up and went on to my
next issue: getting it to work with my iPhone.  My iPhone 4 is not 
even connecting to Dovecot imap/imaps on 993 when I tried to set that up.
Nothing in the logs, such frustration across the board.
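
For what it's worth, a minimal v2.x sketch of mixing the two formats uses
per-namespace locations; the paths and prefix below are placeholders, not a
tested config:

namespace inbox {
  inbox = yes
  location = mbox:~/mail:INBOX=/var/mail/%u
  prefix =
}
namespace archive {
  location = maildir:~/Maildir
  prefix = Archive/
  separator = /
}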

Jeff



Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Timo Sirainen
On 28.6.2012, at 21.04, Gary Mort wrote:

> mdbox though is different, multiple messages are stored in a single file.
> The index indicates in which file each message is located.  When the data
> is moved to alt storage, the filename can change in which case the index is
> updated.
> IE:
> Primary/Msg06282012 -- contains Msg007, Msg008, Msg009
> Primary/Msg06272012 -- contains Msg004, Msg005, Msg006
> Primary/Msg06262012 -- contains Msg001, Msg002, Msg003
> 
> along comes archiving and the new format is:
> Primary/Msg06292012 -- contains Msg010, Msg011, Msg012
> Primary/Msg06282012 -- contains Msg007,  Msg009
> Primary/Msg06272012 -- contains Msg004,  Msg006
> Primary/Msg06262012 -- contains Msg003
> Alt/Msg06292012 -- contains Msg001, Msg002, Msg005, Msg008

Yes, doveadm altmove works like this now.
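
An illustrative invocation (the user and search query are just examples):

doveadm altmove -u bob@example.com mailbox Archive savedbefore 30d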

> Since the archive rules can be based on a lot of different scenarios[and a
> message can even be archived from the command line], the filenames between
> Primary and Alternate are not the same - and in fact the same filename in
> each place could have different messages.  For example: if messages are
> archived when a user sets an imap flag on them.

There shouldn't normally ever be a situation where the same filename is used in 
both storages, because every time a new file is created in either of the 
storages a new unique number is used.

> So with the way it's written now, it's not possible to have a simple
> fallback by filename.
> 
> It would be possible if the naming convention was strictly enforced, ie
> after archiving you have:
> Primary/Msg06292012 -- contains Msg010, Msg011, Msg012
> Primary/Msg06282012 -- contains Msg007,  Msg009
> Primary/Msg06272012 -- contains Msg004,  Msg006
> Primary/Msg06262012 -- contains Msg003
> Alt/Msg06282012 -- contains Msg008
> Alt/Msg06272012 -- contains Msg005
> Alt/Msg06262012 -- contains Msg001, Msg002
> 
> Now the index can simply say what file a message is in and doesn't have to
> specify primary or secondary, and the primary file with that name can be
> checked first, and then if it is not there check the alternate.

This already works like that on the reading side. If you did the altmoving with "mv 
m.123 /altstorage/..." instead of doveadm, it would work.
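
A concrete sketch of that manual move, assuming a location like
mail_location = mdbox:~/mdbox:ALT=/altstorage/%u/mdbox (the paths and the file
number are placeholders):

mv ~bob/mdbox/storage/m.123 /altstorage/bob/mdbox/storage/m.123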

Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Gary Mort
On Thu, Jun 28, 2012 at 1:21 PM, Timo Sirainen  wrote:

> On 28.6.2012, at 20.14, Timo Sirainen wrote:
>
> >> "An upshot of the way alternate storage works is that any given storage
> >> file (mailboxes//dbox-Mails/u.* (sdbox) or storage/m.* (mdbox))
> can
> >> only appear *either* in the primary storage area *or* the alternate
> storage
> >> area but not both — if the corresponding file appears in both areas then
> >> there is an inconsistency."
> >
> > Whoever wrote that wasn't exactly correct (or clear). There's no problem
> having the same file in both primary and alt storage. Only if the files are
> different there's a problem, but that shouldn't happen..
>
> Hmm. Although looking at the mdbox index rebuilding code:
>
>/* duplicate file. either readdir() returned it twice
>   (unlikely) or it exists in both alt and primary storage.
>   to make sure we don't lose any mails from either of the
>   files, give this file a new ID and rename it. */
>
> It probably shouldn't be doing that. sdbox isn't doing that:
>
>/* we were supposed to open the file in alt storage, but it
>   exists in primary storage as well. skip it to avoid
> adding
>   it twice. */
>
>
That's probably due to the different structures they use.   sdbox can
safely use either because each email message has a unique filename, and if
it exists in both places it doesn't matter.

mdbox though is different, multiple messages are stored in a single file.
 The index indicates in which file each message is located.  When the data
is moved to alt storage, the filename can change in which case the index is
updated.
IE:
Primary/Msg06282012 -- contains Msg007, Msg008, Msg009
Primary/Msg06272012 -- contains Msg004, Msg005, Msg006
Primary/Msg06262012 -- contains Msg001, Msg002, Msg003

along comes archiving and the new format is:
Primary/Msg06292012 -- contains Msg010, Msg011, Msg012
Primary/Msg06282012 -- contains Msg007,  Msg009
Primary/Msg06272012 -- contains Msg004,  Msg006
Primary/Msg06262012 -- contains Msg003
Alt/Msg06292012 -- contains Msg001, Msg002, Msg005, Msg008

Since the archive rules can be based on a lot of different scenarios[and a
message can even be archived from the command line], the filenames between
Primary and Alternate are not the same - and in fact the same filename in
each place could have different messages.  For example: if messages are
archived when a user sets an imap flag on them.

So with the way it's written now, it's not possible to have a simple
fallback by filename.

It would be possible if the naming convention was strictly enforced, ie
after archiving you have:
Primary/Msg06292012 -- contains Msg010, Msg011, Msg012
Primary/Msg06282012 -- contains Msg007,  Msg009
Primary/Msg06272012 -- contains Msg004,  Msg006
Primary/Msg06262012 -- contains Msg003
Alt/Msg06282012 -- contains Msg008
Alt/Msg06272012 -- contains Msg005
Alt/Msg06262012 -- contains Msg001, Msg002

Now the index can simply say what file a message is in and doesn't have to
specify primary or secondary, and the primary file with that name can be
checked first, and then if it is not there check the alternate.


Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Timo Sirainen
On 28.6.2012, at 20.55, Gary Mort wrote:

>> The indexes have to be in primary storage.
>> 
> True, but the data they are based on I'm assuming does not include the full
> email message, just a few key pieces:
> uniqueid, subject, from, to, etc.
> 
> For an always running server, the indexes are always up to date in primary.
> 
> For a server starting up with no index data, it will need to rebuild the
> index information[or for a second server running when new email has been
> delivered].
> As such, rather than download every single email message just for a few
> bits of key info, I can run a re-index process to pull just the meta
> information and grab the data from there.

With sdbox you can't lose index files without also losing all message flags. 
And in general sdbox assumes that indexes are always up to date.

>>> When a client attempts to retrieve an email message, Dovecot would check
>>> primary storage as it does now; if the message is not found, then it will
>>> retrieve it from the alternate storage system AND store a copy in the
>>> primary storage.
>> 
>> I think the storing wouldn't be very useful. Most clients download the
>> message once. There's no reason to cache it if it doesn't get downloaded
>> again. The way it should work that new mails are immediately delivered to
>> both primary and alt storage.
>> 
>> 
> I've got tons of space - so I don't mind having 750MB or so for primary
> email message storage.   If I can track how many times a message was
> actually read, over time I can get an idea of how I use it and setup the
> primary storage purge rules accordingly.

I'd be interested in knowing what those statistics will end up looking like. My 
guess is that it's not worth coding such a feature, but of course some real-world 
data would be better than my guesses :)

>>> Secondly, I'd like to replace the Mysql database usage with a simpleDB
>>> database.  While simpleDB lacks much of MySQL's sophistication, it
>> doesn't
>>> seem that Dovecot is really using any of that, so simpleDB can be
>>> functionally equivalent.
>> 
>> Dovecot will probably get Redis and/or memcache backend for passdb+userdb.
>> If simpledb is similar key-value database I guess the same code could be
>> used partially.
>> 
>> 
> SimpleDB is more like SQLite:
..
> You query the data like an SQL table:
> http://docs.amazonwebservices.com/AmazonSimpleDB/latest/DeveloperGuide/UsingSelect.html

OK, so that would mean implementing a lib-sql driver for SimpleDB and using the 
sql passdb/userdb.
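
A sketch of what that route might look like in dovecot-sql.conf.ext, assuming such
a "simpledb" lib-sql driver existed (the driver name, connect string and domain
are hypothetical; the query syntax follows SimpleDB Select):

driver = simpledb
connect = domain=mailusers
password_query = SELECT password FROM mailusers WHERE itemName() = '%u'
user_query = SELECT home, uid, gid FROM mailusers WHERE itemName() = '%u'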

Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Gary Mort
On Thu, Jun 28, 2012 at 1:14 PM, Timo Sirainen  wrote:

> On 28.6.2012, at 17.43, Gary Mort wrote:
> > First I want to add AWS S3 as a storage option for alternate storage.
> >
> > Then instead of the above model, the new model would be that email is
> > always stored in alternate storage, and may be in primary storage.  So,
> > when mail comes in, I'd have Dovecot save the email to the alternate
> > storage S3 bucket and update the indexs and other information[ideally,
> for
> > convenience purposes, a few bits of relevant indexing information can be
> > stored as metadata in the S3 object  - sufficient so that instead of
> > retrieving the entire S3 object, just the meta data can be pulled to
> build
> > indexes.
>
> The indexes have to be in primary storage.
>
>
True, but the data they are based on I'm assuming does not include the full
email message, just a few key pieces:
uniqueid, subject, from, to, etc.

For an always running server, the indexes are always up to date in primary.

For a server starting up with no index data, it will need to rebuild the
index information [or for a second server running when new email has been
delivered].
As such, rather than download every single email message just for a few
bits of key info, I can run a re-index process to pull just the meta
information and grab the data from there.


>  > When a client attempts to retrieve an email message, Dovecot would check
> > primary storage as it does now; if the message is not found, then it will
> > retrieve it from the alternate storage system AND store a copy in the
> > primary storage.
>
> I think the storing wouldn't be very useful. Most clients download the
> message once. There's no reason to cache it if it doesn't get downloaded
> again. The way it should work is that new mails are immediately delivered to
> both primary and alt storage.
>
>
I've got tons of space - so I don't mind having 750MB or so for primary
email message storage.   If I can track how many times a message was
actually read, over time I can get an idea of how I use it and set up the
primary storage purge rules accordingly.


> > Secondly, I'd like to replace the Mysql database usage with a simpleDB
> > database.  While simpleDB lacks much of MySQL's sophistication, it
> doesn't
> > seem that Dovecot is really using any of that, so simpleDB can be
> > functionally equivalent.
>
> Dovecot will probably get Redis and/or memcache backend for passdb+userdb.
> If simpledb is similar key-value database I guess the same code could be
> used partially.
>
>
SimpleDB is more like SQLite:
"Amazon SimpleDB is a highly available and flexible non-relational data
store that offloads the work of database administration. Developers simply
store and query data items via web services requests and Amazon SimpleDB
does the rest."
http://aws.amazon.com/simpledb/

Data model:
http://docs.amazonwebservices.com/AmazonSimpleDB/latest/DeveloperGuide/DataModel.html

Domain == Table
Item == row
ItemName == primary key
Attributes == column
Value == data in column[multi value, so there can be multiple values for an
attribute of an item]

There is no built-in key relationship between data; it's just one big flat
table. Columns/attributes only have two types, string or integer.

You query the data like an SQL table:
http://docs.amazonwebservices.com/AmazonSimpleDB/latest/DeveloperGuide/UsingSelect.html
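
A minimal Select along those lines (the domain and attribute names are made up):

select * from mailusers where `user` = 'bob@example.com'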


Because there is no date type, it's best to store dates as UTC timestamps,
which are integers and can then be compared numerically.

The datastore is spread over multiple Amazon data servers and can take up
to a second to sync, so there are two methods of querying the data.
Default: eventually consistent read: get the data quickly
Optional: consistent read: check /all/ datastores and get the latest data

Since the data in SimpleDB may not be updated frequently, a simple hack
using the notification system could be:
Before updating SimpleDB, send an SNS notice that the data is being updated and
where [domain, user, config]
Update the data
After updating SimpleDB, send an SNS notice that the update is complete

Other servers running can record data-update notices in memory and expire
them in about 15 seconds.  For any queries they want to make for that type
of data in the next 15 seconds, they will use a consistent read.
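
As an illustration of the publish step (shown with today's AWS CLI; the topic ARN
and message format are invented):

aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:dovecot-config-updates --message "updating domain=mailusers user=bob@example.com"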


The nice thing about using S3 and SimpleDB is that I can completely skip a
lot of steps in replication/distributed services, as it is all handled
already.  And one can always take one set of API calls and substitute
another for a different notification system, distributed database, and
cloud file storage.


Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Timo Sirainen
On 28.6.2012, at 20.21, Timo Sirainen wrote:

> On 28.6.2012, at 20.14, Timo Sirainen wrote:
> 
>>> "An upshot of the way alternate storage works is that any given storage
>>> file (mailboxes//dbox-Mails/u.* (sdbox) or storage/m.* (mdbox)) can
>>> only appear *either* in the primary storage area *or* the alternate storage
>>> area but not both — if the corresponding file appears in both areas then
>>> there is an inconsistency."
>> 
>> Whoever wrote that wasn't exactly correct (or clear). There's no problem 
>> having the same file in both primary and alt storage. Only if the files are 
>> different there's a problem, but that shouldn't happen..
> 
> Hmm. Although looking at the mdbox index rebuilding code:
> 
>   /* duplicate file. either readdir() returned it twice
>  (unlikely) or it exists in both alt and primary storage.
>  to make sure we don't lose any mails from either of the
>  files, give this file a new ID and rename it. */
> 
> It probably shouldn't be doing that.

Hmm. I already implemented this by having it ignore the problem if the files 
have the same sizes, but then started wondering if there's really any point in 
doing that. m.* files can be appended to later, and altmoving always creates 
files with new numbers, and even if it does renaming there's duplicate 
suppression, so .. I guess there wasn't any point in doing that after all.

Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Timo Sirainen
On 28.6.2012, at 20.14, Timo Sirainen wrote:

>> "An upshot of the way alternate storage works is that any given storage
>> file (mailboxes//dbox-Mails/u.* (sdbox) or storage/m.* (mdbox)) can
>> only appear *either* in the primary storage area *or* the alternate storage
>> area but not both — if the corresponding file appears in both areas then
>> there is an inconsistency."
> 
> Whoever wrote that wasn't exactly correct (or clear). There's no problem 
> having the same file in both primary and alt storage. Only if the files are 
> different there's a problem, but that shouldn't happen..

Hmm. Although looking at the mdbox index rebuilding code:

/* duplicate file. either readdir() returned it twice
   (unlikely) or it exists in both alt and primary storage.
   to make sure we don't lose any mails from either of the
   files, give this file a new ID and rename it. */

It probably shouldn't be doing that. sdbox isn't doing that:

/* we were supposed to open the file in alt storage, but it
   exists in primary storage as well. skip it to avoid adding
   it twice. */



Re: [Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Timo Sirainen
On 28.6.2012, at 17.43, Gary Mort wrote:

> http://wiki2.dovecot.org/MailboxFormat/dbox
> 
> To make life easy, I'll stick with just single-dbox as a start, however
> multi-dbox would be doable.
> 
> With dbox, the only thing that I need to change is the alternate storage
> model:
> "An upshot of the way alternate storage works is that any given storage
> file (mailboxes//dbox-Mails/u.* (sdbox) or storage/m.* (mdbox)) can
> only appear *either* in the primary storage area *or* the alternate storage
> area but not both — if the corresponding file appears in both areas then
> there is an inconsistency."

Whoever wrote that wasn't exactly correct (or clear). There's no problem having 
the same file in both primary and alt storage. Only if the files are different 
there's a problem, but that shouldn't happen..

> First I want to add AWS S3 as a storage option for alternate storage.
> 
> Then instead of the above model, the new model would be that email is
> always stored in alternate storage, and may be in primary storage.  So,
> when mail comes in, I'd have Dovecot save the email to the alternate
> storage S3 bucket and update the indexes and other information[ideally, for
> convenience purposes, a few bits of relevant indexing information can be
> stored as metadata in the S3 object  - sufficient so that instead of
> retrieving the entire S3 object, just the meta data can be pulled to build
> indexes.

The indexes have to be in primary storage.

> When a client attempts to retrieve an email message, Dovecot would check
> primary storage as it does now; if the message is not found, then it will
> retrieve it from the alternate storage system AND store a copy in the
> primary storage.

I think the storing wouldn't be very useful. Most clients download the message 
once. There's no reason to cache it if it doesn't get downloaded again. The way 
it should work is that new mails are immediately delivered to both primary and alt 
storage.

> Secondly, I'd like to replace the Mysql database usage with a simpleDB
> database.  While simpleDB lacks much of MySQL's sophistication, it doesn't
> seem that Dovecot is really using any of that, so simpleDB can be
> functionally equivalent.

Dovecot will probably get Redis and/or memcache backend for passdb+userdb. If 
simpledb is similar key-value database I guess the same code could be used 
partially.



Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Charles Marcus

On 2012-06-28 12:20 PM, Ed W  wrote:

Bad things are going to happen if you lose a complete chunk of your
filesystem.  I think the current state of the world is that you should
assume that realistically you will be looking to your backups if you
lose the wrong 2 disks in a raid1 or raid10 array.


Which is a very good reason to have at least one hot spare in any RAID 
setup, if not 2.


RAID10 also statistically has a much better chance of surviving a multi 
drive failure than RAID5 or 6, because it will only die if two drives in 
the same pair fail, and only then if the second one fails before the hot 
spare is rebuilt.


--

Best regards,

Charles


Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Ed W

On 28/06/2012 14:06, Костырев Александр Алексеевич wrote:

- RAID1 pairs, plus some kind of intelligent overlay filesystem, eg
md-linear+XFS / BTRFS. With the filesystem aware of the underlying
arrangement it can theoretically optimise file placement and
dramatically increase write speeds for small files in the same manner
that RAID-0 theoretically achieves. (However, still no protection
against "silent" single drive corruption unless btrfs perhaps adds this
in the future?)

not only "silent" single drive corruption problem but as I stated in start of 
topic - crash of first pair.



Bad things are going to happen if you lose a complete chunk of your 
filesystem.  I think the current state of the world is that you should 
assume that realistically you will be looking to your backups if you 
lose the wrong 2 disks in a raid1 or raid10 array.


However, the thing which worries me more with multi-disk arrays is 
accidental disconnection of multiple disks, e.g. a backplane fails, or a 
multi-lane connector is accidentally unplugged.  Linux MD raid often seems 
to have the ability to reconstruct arrays after such accidents.  I don't 
have more recent experience with hardware controller arrays, but I have 
(sadly) found that such a situation is terminal on some older hardware 
controllers...


Interested to hear other failure modes (and successful rescues) from 
RAID1+linear+XFS setups?


Cheers

Ed W



[Dovecot] Integrating Dovecot with Amazon Web Services

2012-06-28 Thread Gary Mort
I did some searching in the mail archives and didn't see any discussion of
integration with AWS, so I wanted to throw out my thoughts/plans and see
if it has been done before.

I am setting up my own personal website on EC2 along with an email server,
and I really don't like the idea of using the disk drive as permanent mail
storage.  EBS is too small and instance storage is ephemeral.

Looking over the docs, the dbox format seems most easily copied for my
needs.
http://wiki2.dovecot.org/MailboxFormat/dbox

To make life easy, I'll stick with just single-dbox as a start, however
multi-dbox would be doable.

With dbox, the only thing that I need to change is the alternate storage
model:
"An upshot of the way alternate storage works is that any given storage
file (mailboxes//dbox-Mails/u.* (sdbox) or storage/m.* (mdbox)) can
only appear *either* in the primary storage area *or* the alternate storage
area but not both — if the corresponding file appears in both areas then
there is an inconsistency."

First I want to add AWS S3 as a storage option for alternate storage.

Then, instead of the above model, the new model would be that email is
always stored in alternate storage, and may be in primary storage.  So,
when mail comes in, I'd have Dovecot save the email to the alternate
storage S3 bucket and update the indexes and other information [ideally, for
convenience purposes, a few bits of relevant indexing information can be
stored as metadata in the S3 object - sufficient so that instead of
retrieving the entire S3 object, just the metadata can be pulled to build
indexes].

When a client attempts to retrieve an email message, Dovecot would check
primary storage as it does now; if the message is not found, then it will
retrieve it from the alternate storage system AND store a copy in the
primary storage.

Primary storage can be periodically purged, have quotas to keep it from
growing too large, etc.

In this way, primary storage can be viewed as a message cache, just keeping
the messages that are currently of interest, while S3 is the real data.
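
For comparison, today's on-disk alternate storage is pointed at with an ALT
parameter in mail_location; the paths below are placeholders:

mail_location = sdbox:~/sdbox:ALT=/altstorage/%u/sdbox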

[Ideally, this can be expanded so that when a message comes in, in addition
to storing a copy in S3, an AWS SNS notification can be issued so if
multiple IMAP servers are running, they can all subscribe to the same SNS
channel and update themselves as needed].

This gives me unlimited disk storage at S3 prices. I would even like to be
able to set a few options based on the folder, so I can enable versioning
on important message folders, use the even cheaper reduced redundancy
storage for archives, and set expiration dates on email in the trash and
spam folders so S3 will automatically purge the messages after a month.
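
The S3 side of that last idea is a bucket lifecycle rule, roughly like this (the
prefix reflects a hypothetical per-folder key layout):

<LifecycleConfiguration>
  <Rule>
    <ID>expire-trash</ID>
    <Prefix>bob/Trash/</Prefix>
    <Status>Enabled</Status>
    <Expiration><Days>30</Days></Expiration>
  </Rule>
</LifecycleConfiguration>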


Secondly, I'd like to replace the Mysql database usage with a simpleDB
database.  While simpleDB lacks much of MySQL's sophistication, it doesn't
seem that Dovecot is really using any of that, so simpleDB can be
functionally equivalent.

The primary purpose of using SimpleDB is that this way the entire Dovecot
system can be ephemeral.  When a properly configured Dovecot AMI is
launched, it will start up, pull its config data from an S3 bucket,
subscribe to the SNS channel for new updates, and then start the Dovecot
server.  It won't care if it is the only Dovecot server, or if there are
500 other servers running.  They all share the same SimpleDB database.
Whenever any change is made that is relevant to server configuration, a
notice is generated to SNS, and all the email is stored in S3.


As a starting point, I'm thinking the best place for me to start coding is
the single-s3-dbox message store, as it has the least moving parts [mainly
just fix up the save function to run the way I need it to, and the retrieve
function to make a local copy of any incoming email... additional metadata
functionality can be added later].

Has anyone else been working on something similar?

-Gary


Re: [Dovecot] started with dovecot sieve

2012-06-28 Thread mailinglist

On 2012-06-27 20:47, Daniel Parthey wrote:

Rolf wrote:

LMTP would be new to me and I fear just more hard-to-understand
configuration topics.


LMTP (Local Mail Transfer Protocol) is really simple,
similar to SMTP, but immediately returns a status code which
tells whether the delivery has been successful or not.

I encourage you to read this HOWTO:
http://wiki2.dovecot.org/HowTo/PostfixDovecotLMTP

Dovecot listens and accepts mails on the LMTP service port,
postfix delivers mails directly into this LMTP service port.

Since it is an additional service, you should be able to try
it first, without interfering with your deliver functionality.

Here you can read what the LMTP communication looks like:
http://de.wikipedia.org/wiki/LMTP
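
A minimal sketch of that wiring, following the HOWTO linked above (the socket
path, user and transport name are the usual Postfix defaults; adjust as needed,
and add lmtp to the protocols setting):

# Dovecot, e.g. conf.d/10-master.conf
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    user = postfix
    group = postfix
    mode = 0600
  }
}

# Postfix main.cf
mailbox_transport = lmtp:unix:private/dovecot-lmtp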

Regards
Daniel
Yes, Daniel, thank you. I had found these pieces in your previous 
mail.
I understand that LMTP is an alternative to SMTP when it comes to mail 
communication inside a server or a local network.
I understand that LMTP is newer. But if you look at incoming mail via 
SMTP on port 25 and then look at the mail via Roundcube 
(communicating with Dovecot), what is the difference and why should I 
care?


That is - if I introduce LMTP - Postfix will talk to Dovecot via a 
different protocol. Correct? Will Dovecot change its behavior?
As I am not an SMTP insider (I never did SMTP using telnet) I hardly 
understand what this change could do for my problem.
Wouldn't the Dovecot LDA "deliver" still try to change the INBOX and 
still have the access problems that I do not understand?


Do you have a link for me explaining what "deliver" does with a mail 
that does not match any of the "fileinto" rules of a Sieve filter? What 
user accounts are involved in that function? Why does it not work with 
the Debian default that a user is not a member of the group "mail" that 
is assigned to their INBOX? (If this is part of the problem, which I do 
not know for sure yet.)


Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Костырев Александр Алексеевич
>- RAID1 pairs, plus some kind of intelligent overlay filesystem, eg 
>md-linear+XFS / BTRFS. With the filesystem aware of the underlying 
>arrangement it can theoretically optimise file placement and 
>dramatically increase write speeds for small files in the same manner 
>that RAID-0 theoretically achieves. (However, still no protection 
>against "silent" single drive corruption unless btrfs perhaps adds this 
>in the future?)

not only "silent" single drive corruption problem but as I stated in start of 
topic - crash of first pair.



Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Ed W

On 28/06/2012 13:46, Wojciech Puchar wrote:

(unless we are talking temporary removal and re-insertion?)

Nope, I'm talking about a complete pair's crash when two disks die.
I do understand that the possibility of such an outcome (when two 
disks in the same pair crash) is not high, but

when we have 12 or 24 disks in storage...


then make 6-12 filesystems. The overall probability of a double disk failure 
is the same, but you will lose only 1/6-1/12 of the data.


But the compromise is that you gain the complexity of maintaining more 
filesystems and needing to figure out how to split your data across 
multiple filesystems


The options today however seem to be only:

- RAID6 (suffers slow write speeds, especially for smaller files)
- RAID1 pairs with striping (raid0) over the top. (doesn't achieve max 
speeds for small files. 2 disk failures a problem. No protection against 
"silent corruption" of 1 disk)
- RAID1 pairs, plus some kind of intelligent overlay filesystem, eg 
md-linear+XFS / BTRFS. With the filesystem aware of the underlying 
arrangement it can theoretically optimise file placement and 
dramatically increase write speeds for small files in the same manner 
that RAID-0 theoretically achieves. (However, still no protection 
against "silent" single drive corruption unless btrfs perhaps adds this 
in the future?)


So given the statistics show us that 2 disk failures are much more 
common than we expect, and that "silent corruption" is likely occurring 
within (larger) real world file stores,  there really aren't many battle 
tested options that can protect against this - really only RAID6 right 
now and that has significant limitations...


RAID1+XFS sounds very interesting.  Curious to hear some failure testing 
on this now.  Also I'm watching btrfs with a 12 month+ view


Cheers

Ed W


Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Wojciech Puchar

(unless we are talking temporary removal and re-insertion?)

Nope, I'm talking about a complete pair's crash when two disks die.
I do understand that the possibility of such an outcome (when two disks in the 
same pair crash) is not high, but
when we have 12 or 24 disks in storage...


then make 6-12 filesystems. The overall probability of a double disk failure is 
the same, but you will lose only 1/6-1/12 of the data.






Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Костырев Александр Алексеевич
>Note that you wouldn't get anything back from a similar fail of a RAID10 array 
>either
I wasn't aware of it, that's interesting.

>(unless we are talking temporary removal and re-insertion?)
Nope, I'm talking about a complete pair's crash when two disks die.
I do understand that the possibility of such an outcome (when two disks in the 
same pair crash) is not high, but
when we have 12 or 24 disks in storage...





-Original Message-
From: dovecot-boun...@dovecot.org [mailto:dovecot-boun...@dovecot.org] On 
Behalf Of Ed W
Sent: Thursday, June 28, 2012 11:15 PM
To: dovecot@dovecot.org
Subject: Re: [Dovecot] RAID1+md concat+XFS as mailstorage

On 28/06/2012 13:01, Костырев Александр Алексеевич wrote:
> Hello!
>
> somewhere in maillist I've seen RAID1+md concat+XFS being promoted as 
> mailstorage.
> Does anybody in here actually use this setup?
>
> I've decided to give it a try,
> but ended up with not being able to recover any data off the surviving pairs of the 
> linear array when _the_ first of the raid1 pairs went down.
>

This is the configuration endorsed by Stan Hoeppner.  His description of 
the benefits is quite compelling, but real-world feedback is interesting 
to hear.

Note that you wouldn't get anything back from a similar fail of a RAID10 
array either (unless we are talking temporary removal and re-insertion?)

Ed W




Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Wojciech Puchar
Note that you wouldn't get anything back from a similar fail of a RAID10 
array either (unless we are talking temporary removal and re-insertion?)


use multiple RAID1 arrays, 2 drives each, one filesystem each.




Re: [Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Ed W

On 28/06/2012 13:01, Костырев Александр Алексеевич wrote:

Hello!

somewhere in maillist I've seen RAID1+md concat+XFS being promoted as 
mailstorage.
Does anybody in here actually use this setup?

I've decided to give it a try,
but ended up with not being able to recover any data off the surviving pairs of the 
linear array when _the_ first of the raid1 pairs went down.



This is the configuration endorsed by Stan Hoeppner.  His description of 
the benefits is quite compelling, but real-world feedback is interesting 
to hear.


Note that you wouldn't get anything back from a similar fail of a RAID10 
array either (unless we are talking temporary removal and re-insertion?)


Ed W



[Dovecot] RAID1+md concat+XFS as mailstorage

2012-06-28 Thread Костырев Александр Алексеевич
Hello!

somewhere in maillist I've seen RAID1+md concat+XFS being promoted as 
mailstorage.
Does anybody in here actually use this setup?

I've decided to give it a try, 
but ended up with not being able to recover any data off the surviving pairs of the 
linear array when _the_ first of the raid1 pairs went down.

thanks!


Re: [Dovecot] Mail migration to dovecot with doveadm backup

2012-06-28 Thread Reinhard Vicinus

On 28/06/12 09:03, Reinhard Vicinus wrote:

and afterwards:

/usr/bin/doveadm -o imapc_user=u...@example.org -o 
imapc_password=imappw

-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o
imapc_port=18143 -D -v backup -R -f -u u...@example.org imapc:

dsync(u...@example.org): Error: Mailbox INBOX changed its GUID
(c92f64f79f0d1ed01e6d5b314f04886c ->  54c23c119d04eb4f00514f99b03d)
dsync(u...@example.org): Error: msg iteration failed: Couldn't open
mailbox c92f64f79f0d1ed01e6d5b314f04886c

Bug/"feature" .. you could try if running with
"imapc:/tmp/imapc-username" instead of "imapc:" helps.
This also works without problems. So thanks for your help, because this 
solves my problem. Let me know if I should test something more.


Sorry, I either made a mistake in my test setup or I can't reproduce it, 
but using imapc:/tmp/imapc-username instead of imapc: doesn't help. I 
have worked around my problem by changing the quota values directly in 
the database in my migration process.


But there is the following difference between using 
imapc:/tmp/imapc-username and plain imapc:: if I back up a single mailbox 
that is empty on both servers but has different GUIDs from the non-Dovecot 
IMAP server to the Dovecot IMAP server, then plain imapc: throws some errors 
but works, while imapc:/tmp/imapc-username throws more errors and only deletes 
the mailbox on the destination.


Test setup is as follows:

Both accounts don't contain a mailbox Test1:
/usr/bin/doveadm -o imapc_user=u...@example.org -o imapc_password=imappw 
-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o 
imapc_port=18143 -o mail=imapc: mailbox status -u u...@example.org all Test1


/usr/bin/doveadm mailbox status -u u...@example.org all Test1


Create Mailbox Test1 on the imapc server:
/usr/bin/doveadm -o imapc_user=u...@example.org -o imapc_password=imappw 
-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o 
imapc_port=18143 -o mail=imapc: mailbox create -u u...@example.org Test1



Create Mailbox Test1 on the dovecot server:
doveadm mailbox create -u u...@example.org Test1


List the status of mailbox Test1 on the imapc server:
/usr/bin/doveadm -o imapc_user=u...@example.org -o imapc_password=imappw 
-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o 
imapc_port=18143 -o mail=imapc: mailbox status -u u...@example.org all Test1
Test1 messages=0 recent=0 uidnext=0 uidvalidity=87991 unseen=0 
highestmodseq=0 vsize=0 guid=0f6e69ad71659995677b43f8a8312025


List the status of mailbox Test1 on the dovecot server:
/usr/bin/doveadm mailbox status -u u...@example.org Test1
Test1 messages=0 recent=0 uidnext=1 uidvalidity=1340879819 unseen=0 
highestmodseq=1 vsize=0 guid=a8076214cb33ec4f39674f99b03d


Start Backup with imapc:/tmp/user:
/usr/bin/doveadm -o imapc_user=u...@example.org -o imapc_password=imappw 
-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o 
imapc_port=18143 backup -R -f -u u...@example.org -m Test1 imapc:/tmp/user
dsync(u...@example.org): Error: Failed to sync mailbox Test1: Mailbox 
doesn't exist: Test1
dsync(u...@example.org): Error: msg iteration failed: Couldn't open 
mailbox 0f6e69ad71659995677b43f8a8312025
dsync(u...@example.org): Error: Failed to sync mailbox Test1: Mailbox 
doesn't exist: Test1
dsync(u...@example.org): Error: Trying to open a non-listed mailbox with 
guid=a8076214cb33ec4f39674f99b03d
dsync(u...@example.org): Error: msg iteration failed: Couldn't open 
mailbox a8076214cb33ec4f39674f99b03d
dsync(u...@example.org): Error: Trying to open a non-listed mailbox with 
guid=a8076214cb33ec4f39674f99b03d


List the status of mailbox Test1 on the imapc server:
/usr/bin/doveadm -o imapc_user=u...@example.org -o imapc_password=imappw 
-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o 
imapc_port=18143 -o mail=imapc: mailbox status -u u...@example.org all Test1
Test1 messages=0 recent=0 uidnext=0 uidvalidity=87991 unseen=0 
highestmodseq=0 vsize=0 guid=0f6e69ad71659995677b43f8a8312025


List the status of mailbox Test1 on the dovecot server:
/usr/bin/doveadm mailbox status -u u...@example.org all Test1


result: the mailbox Test1 on the dovecot server got deleted.


with plain imapc: copying works but there are also still error messages:

Create Mailbox Test2 on the imapc server:
/usr/bin/doveadm -o imapc_user=u...@example.org -o imapc_password=imappw 
-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o 
imapc_port=18143 -o mail=imapc: mailbox create -u u...@example.org Test2



Create Mailbox Test2 on the dovecot server:
doveadm mailbox create -u u...@example.org Test2


List the status of mailbox Test2 on the imapc server:
/usr/bin/doveadm -o imapc_user=u...@example.org -o imapc_password=imappw 
-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o 
imapc_port=18143 -o mail=imapc: mailbox status -u u...@example.org all Test2
Test2 messages=0 recent=0 uidnext=0 uidvalidity=87993 unseen=0 
highestmodseq=0 vsize=0 guid=c0fd4ba8bd514c5c43ab9a897c8c014e


List

Re: [Dovecot] indexer-worker

2012-06-28 Thread Wojciech Puchar

29413 root 1  760 22820K  9204K kqread  1   0:17  5.86% 
indexer-worker


It runs as root while not really doing anything, but when it starts
accessing users' files it temporarily drops privileges. This is
necessary if users have multiple different UIDs.


top showed it with root privileges and 60% CPU load + disk I/O when doing a text 
search over a not-yet-indexed folder.


If you have only one UID e.g. vmail, you could set:


I'm not sure what exactly you mean.

I have the simplest possible config: mail accounts are Unix accounts and mail 
is in Maildir.


my config below


# 2.1.7: /usr/local/etc/dovecot/dovecot.conf
# OS: FreeBSD 8.3-STABLE amd64 
disable_plaintext_auth = no

listen = *
mail_location = maildir:~/Maildir
mail_plugins = fts fts_squat
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix = 
}

passdb {
  args = /usr/local/etc/dovecot/deny-users
  deny = yes
  driver = passwd-file
}
passdb {
  driver = pam
}
plugin {
  fts = squat
  fts_squat = partial=4 full=10
}
protocols = imap
ssl_cert = 

Re: [Dovecot] Default for non-present LDAP attributes?

2012-06-28 Thread Timo Sirainen
On 28.6.2012, at 12.19, Edgar Fuß wrote:

>> The "mail" field defaults to mail_location setting.
> Ah, yes, thanks. So simple I didn't think of it.
> Will it default when the LDAP attribute is not present or will I have to 
> check the attribute's presence in the LDAP filter?

The default settings are in dovecot.conf. LDAP attributes that are returned by 
the LDAP server override those settings.
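
A small sketch of how that plays out (the attribute names are just examples):
dovecot.conf carries the default, and dovecot-ldap.conf.ext only maps attributes
the directory actually returns:

# dovecot.conf
mail_location = maildir:~/Maildir

# dovecot-ldap.conf.ext
user_attrs = homeDirectory=home,mailMessageStore=mail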



Re: [Dovecot] Default for non-present LDAP attributes?

2012-06-28 Thread Edgar Fuß
> The "mail" field defaults to mail_location setting.
Ah, yes, thanks. So simple I didn't think of it.
Will it default when the LDAP attribute is not present or will I have to check 
the attribute's presence in the LDAP filter?


Re: [Dovecot] Maildir Seen Flags not heeded when dovecot-shared present

2012-06-28 Thread J E Lyon
Timo & List,

Just by way of a follow-up, running tests on a 1.0 installation of Dovecot 
confirms it.

Sure enough, I was still configuring my mail stores based on my outdated 
understanding and hadn't fully appreciated changes to what dovecot-shared files 
affect in recent versions.

Thanks all,
J.

On 27 Jun 2012, at 11:01, J E Lyon wrote:

> On 26 Jun 2012, at 21:49, Timo Sirainen wrote:
> 
>> So you don't want shared seen flags? You can simply not create 
>> dovecot-shared file nowadays. It's not necessary. The only other purpose for 
>> it was as the template for file permissions, but those are nowadays taken 
>> from the maildir itself: http://wiki2.dovecot.org/SharedMailboxes/Permissions
> 
> 
> Timo,
> 
> Thanks for pointing me in the right direction . .
> 
> I started with Dovecot back in the pre-v1 days and used dovecot-shared from 
> when it first helped with permissions and things -- never actually minded 
> about seen flags back then.
> 
> So, I've always thought of dovecot-shared as being primarily about making the 
> permissions work, and hadn't realised things have been steadily changing in 
> that regard.
> 
> So, I now have Dovecot on both CentOS 5.5 & CentOS 6, which means v1 & v2 . . 
> unfortunately though, the CentOS 5.5 default package is 1.0.x and that means 
> I miss out on 1.1+ features there, as well as the improved handling of file 
> permissions in 1.2 that I now see after scrutinising the differences . .
> 
> At least I know exactly where the problems are now, thanks!
> 
> ~ James.



Re: [Dovecot] last hope... public namespace and directory structure

2012-06-28 Thread Daniel Fischer
Hello Timo,

Thanks for your reply. I misunderstood the Dovecot wiki a little bit.

"Public mailboxes are typically mailboxes that are visible to all users or to 
large user groups. They are created by defining a public namespace, under which 
all the shared mailboxes are"

Daniel

-Original Message-
From: dovecot-boun...@dovecot.org [mailto:dovecot-boun...@dovecot.org] On 
Behalf Of Timo Sirainen
Sent: Thursday, 28 June 2012 08:58
To: Daniel Fischer
Cc: dovecot@dovecot.org
Subject: Re: [Dovecot] last hope... public namespace and directory structure

On Wed, 2012-06-27 at 09:53 +0200, Daniel Fischer wrote:
> The file passwd for those 3 samples looks like this:
> 
> sales@$DOMAIN::5000:5000::/var/mail/vhosts/$DOMAIN/public/.sales
> 
> service@$DOMAIN::5000:5000::/var/mail/vhosts/$DOMAIN/public/.service
> 
> purchase@$DOMAIN::5000:5000::/var/mail/vhosts/$DOMAIN/public/.purchase
> 
> Note: All other users have mail_location /var/mail/vhosts/%d/%n
> 
> Now I have the following problem: if I log in as user sales and 
> create a folder foo and in there a folder bar.

It can't work like that. You need to have all of these homes be 
/var/mail/vhosts/$DOMAIN/public if you want them to be able to create any new 
folders. Then, if needed, add ACLs for the users. For delivering mails to these 
users you could set up a Sieve script to do it, or maybe something else..
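
Applied to the passwd file quoted above, that would mean giving all three accounts
the same home directory (a sketch of the suggestion, not a tested config):

sales@$DOMAIN::5000:5000::/var/mail/vhosts/$DOMAIN/public
service@$DOMAIN::5000:5000::/var/mail/vhosts/$DOMAIN/public
purchase@$DOMAIN::5000:5000::/var/mail/vhosts/$DOMAIN/public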





Re: [Dovecot] Removing specific entry in user/auth cache

2012-06-28 Thread Angel L. Mateo

On 27/06/12 14:24, Timo Sirainen wrote:

On 27.6.2012, at 14.10, Angel L. Mateo wrote:


We have dovecot configured with auth cache. Is there any way to remove 
a specific entry (not all) from this cache?


Nope. What do you need it for?

	Because information for users sometimes changes. For example, when I 
asked the question, the home directory of one user had changed, and all mails to 
him were being discarded because of this, and I had to flush the whole cache to 
solve it.


--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información
y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868887590
Fax: 86337




Re: [Dovecot] Mail migration to dovecot with doveadm backup

2012-06-28 Thread Reinhard Vicinus

On 28/06/12 08:53, Timo Sirainen wrote:

On Wed, 2012-06-27 at 15:10 +0200, Reinhard Vicinus wrote:

Hi,

if i delete the home directory and all content below an existing account
u...@example.org. Then run:

/usr/bin/doveadm quota recalc -u u...@example.org

Are you sure quota recalc makes a difference here? What if you simply
run doveadm twice?

Running doveadm twice without a prior quota recalc works without problems.

and afterwards:

/usr/bin/doveadm -o imapc_user=u...@example.org -o imapc_password=imappw
-o imapc_host=local-mailbox -o imapc_features=rfc822.size -o
imapc_port=18143 -D -v backup -R -f -u u...@example.org imapc:

dsync(u...@example.org): Error: Mailbox INBOX changed its GUID
(c92f64f79f0d1ed01e6d5b314f04886c ->  54c23c119d04eb4f00514f99b03d)
dsync(u...@example.org): Error: msg iteration failed: Couldn't open
mailbox c92f64f79f0d1ed01e6d5b314f04886c

Bug/"feature" .. you could try if running with
"imapc:/tmp/imapc-username" instead of "imapc:" helps.
This also works without problems. So thanks for your help, because this 
solves my problem. Let me know if I should test something more.