Re: [Dovecot] dotlock timestamp trouble

2009-01-27 Thread Giorgenes Gelatti
Looks like bad news. :(

I've run nfstest to look for problems. The output is attached.
I'm not sure, but it looks like a bad NFS environment.

2009/1/25 Timo Sirainen 

> 2.6.9 is old and there are several NFS bugs in older 2.6 kernels. So I'd
> try upgrading. I've really no idea what else could be the problem.
>
> On Fri, 2009-01-23 at 11:29 -0200, Giorgenes Gelatti wrote:
> > Is there any known issue about it on kernel 2.6.9 (CentOS)?
> > Any other mount options I could try?
> >
> > Thank you.
> >
> > 2009/1/21 Giorgenes Gelatti 
> >
> > > Dovecot is running on a linux machine (2.6 kernel).
> > > The nfs was mounted as:
> > > nfs rw,vers=3,proto=tcp,intr,nolock,noexec,rsize=8192,wsize=8192 0 0
> > >
> > > After your hint we added the "noac" flag:
> > > nfs rw,vers=3,proto=tcp,intr,nolock,noexec,noac,rsize=8192,wsize=8192 0 0
> > >
> > > But the error continues with differences of 120 and 60 seconds.
> > >
> > > Thanks for the reply,
> > > gpg
> > >
> > > 2009/1/20 Timo Sirainen 
> > >
> > > On Tue, 2009-01-20 at 14:36 -0200, Giorgenes Gelatti wrote:
> > >> > Created dotlock file's timestamp is different than current time
> > >> > (1232468644 vs 1232468524): /path/to/dovecot.index.log
> > >> >
> > >> > The IT guy swears the clocks are synchronized.
> > >>
> > >> The difference in the above message is exactly 120 seconds. Are they
> > >> all 120 seconds?
> > >>
> > >> > I'm using dovecot 1.1.6 over NFS.
> > >> > Any thoughts?
> > >>
> > >> What OS are you using on the NFS clients? Perhaps this is a caching
> > >> issue, have you tried changing/disabling attribute cache timeouts?
> > >>
> > >
> > >
>
# ./nfstest 7070 /nfs/mail01ns03/arquivo
Listening for client on port 7070..
Connected: Acting as test server
Listening for client on port 7070..

# ./nfstest 10.235.200.126 7070 /nfs/mail01ns03/arquivo
Connected: Acting as test client
EIO errors happen on read()
 - fchown() returned ESTALE
O_EXCL appears to be working, but this could be just faked by NFS client
timestamps resolution: seconds

Testing file attribute cache..
Attr cache flush open+close: OK
Attr cache flush close+open: OK
Attr cache flush fchown(-1, -1): failed
Attr cache flush fchown(uid, -1): OK
Attr cache flush fchmod(mode): OK
Attr cache flush chown(-1, -1): failed
Attr cache flush chown(uid, -1): OK
Attr cache flush chmod(mode): OK
Attr cache flush rmdir(): failed
Attr cache flush rmdir(parent dir): failed
Attr cache flush dup+close: OK
Attr cache flush fcntl(shared): OK
Attr cache flush fcntl(exclusive): OK
Attr cache flush flock(shared): failed
Attr cache flush flock(exclusive): failed
Attr cache flush fsync(): failed
Attr cache flush fcntl(O_SYNC): failed
Attr cache flush O_DIRECT: failed

Testing data cache..
Data cache flush no caching: failed
Data cache flush open+close: failed
Data cache flush close+open: failed
Data cache flush fchown(-1, -1): failed
Data cache flush fchown(uid, -1): failed
Data cache flush fchmod(mode): failed
Data cache flush chown(-1, -1): failed
Data cache flush chown(uid, -1): failed
Data cache flush chmod(mode): failed
Data cache flush rmdir(): failed
Data cache flush rmdir(parent dir): failed
Data cache flush dup+close: failed
Data cache flush fcntl(shared): OK
Data cache flush fcntl(exclusive): OK
Data cache flush flock(shared): failed
Data cache flush flock(exclusive): failed
Data cache flush fsync(): failed
Data cache flush fcntl(O_SYNC): failed
Data cache flush O_DIRECT: OK

Testing write flushing..
Write flush no caching: failed
Write flush open+close: OK
Write flush close+open: OK
Write flush fchown(-1, -1): failed
Write flush fchown(uid, -1): OK
Write flush fchmod(mode): OK
Write flush chown(-1, -1): failed
Write flush chown(uid, -1): OK
Write flush chmod(mode): OK
Write flush rmdir(): failed
Write flush rmdir(parent dir): failed
Write flush dup+close: OK
Write flush fcntl(shared): OK
Write flush fcntl(exclusive): OK
Write flush flock(shared): failed
Write flush flock(exclusive): failed
Write flush fsync(): OK
Write flush fcntl(O_SYNC): failed
Write flush O_DIRECT: OK

Testing partial writing..
Failed at [0]

Testing file handle cache..
File handle cache flush no caching: failed
File handle cache flush open+close: failed
File handle cache flush close+open: failed
File handle cache flush fchown(-1, -1): failed
File handle cache flush fchown(uid, -1): OK
File handle cache flush fchmod(mode): OK
File handle cache flush chown(-1, -1): failed
File handle cache flush chown(uid, -1): OK
File handle cache flush chmod(mode): OK
File handle cache flush
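
The attribute-cache results are the interesting part for the dotlock problem:
Dovecot needs some way of forcing the NFS client to refetch a file's
attributes. As a minimal sketch of one flush method that tested OK above,
chown() to the file's current owner, which changes nothing on the server but
invalidates the client's cached attributes (function name hypothetical,
plain POSIX assumed):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Flush the NFS attribute cache for a path by chown()ing the file
 * to its current owner; per the output above, "chown(uid, -1)"
 * flushes the attr cache on this setup. */
static int nfs_attr_cache_flush(const char *path)
{
	struct stat st;

	if (stat(path, &st) < 0) {
		perror("stat");
		return -1;
	}
	/* same uid, gid left alone: a server-side no-op that still
	   drops the client's cached attributes for this file */
	if (chown(path, st.st_uid, (gid_t)-1) < 0) {
		perror("chown");
		return -1;
	}
	return 0;
}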

Re: [Dovecot] dotlock timestamp trouble

2009-01-23 Thread Giorgenes Gelatti
Is there any known issue about it on kernel 2.6.9 (CentOS)?
Any other mount options I could try?

Thank you.

2009/1/21 Giorgenes Gelatti 

> Dovecot is running on a linux machine (2.6 kernel).
> The nfs was mounted as:
> nfs rw,vers=3,proto=tcp,intr,nolock,noexec,rsize=8192,wsize=8192 0 0
>
> After your hint we added the "noac" flag:
> nfs rw,vers=3,proto=tcp,intr,nolock,noexec,noac,rsize=8192,wsize=8192 0 0
>
> But the error continues with differences of 120 and 60 seconds.
>
> Thanks for the reply,
> gpg
>
> 2009/1/20 Timo Sirainen 
>
> On Tue, 2009-01-20 at 14:36 -0200, Giorgenes Gelatti wrote:
>> > Created dotlock file's timestamp is different than current time
>> > (1232468644 vs 1232468524): /path/to/dovecot.index.log
>> >
>> > The IT guy swears the clocks are synchronized.
>>
>> The difference in the above message is exactly 120 seconds. Are they all
>> 120 seconds?
>>
>> > I'm using dovecot 1.1.6 over NFS.
>> > Any thoughts?
>>
>> What OS are you using on the NFS clients? Perhaps this is a caching
>> issue, have you tried changing/disabling attribute cache timeouts?
>>
>
>


Re: [Dovecot] dotlock timestamp trouble

2009-01-21 Thread Giorgenes Gelatti
Dovecot is running on a Linux machine (2.6 kernel).
The NFS share was mounted as:
nfs rw,vers=3,proto=tcp,intr,nolock,noexec,rsize=8192,wsize=8192 0 0

After your hint we added the "noac" flag:
nfs rw,vers=3,proto=tcp,intr,nolock,noexec,noac,rsize=8192,wsize=8192 0 0

But the error continues with differences of 120 and 60 seconds.

Thanks for the reply,
gpg

2009/1/20 Timo Sirainen 

> On Tue, 2009-01-20 at 14:36 -0200, Giorgenes Gelatti wrote:
> > Created dotlock file's timestamp is different than current time
> > (1232468644 vs 1232468524): /path/to/dovecot.index.log
> >
> > The IT guy swears the clocks are synchronized.
>
> The difference in the above message is exactly 120 seconds. Are they all
> 120 seconds?
>
> > I'm using dovecot 1.1.6 over NFS.
> > Any thoughts?
>
> What OS are you using on the NFS clients? Perhaps this is a caching
> issue, have you tried changing/disabling attribute cache timeouts?
>


[Dovecot] dotlock timestamp trouble

2009-01-20 Thread Giorgenes Gelatti
Hi there,

I'm getting a lot of these messages in the production log:

Created dotlock file's timestamp is different than current time (1232468644
vs 1232468524): /path/to/dovecot.index.log

The IT guy swears the clocks are synchronized.
We even ran a test on the machine running Dovecot, inside the user's
mailbox:
# > foo; ls -l --time-style=full-iso foo; date
-rw-r--r-- 1 root root 0 2009-01-19 17:40:55.00085 +0000 foo
Mon Jan 19 17:40:55 UTC 2009

The timestamps seem to match.
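
For a programmatic version of the same check, a minimal C sketch (not
Dovecot's actual code) that creates a file the way a dotlock is created and
prints the difference between its mtime and the local clock; on NFS the
mtime comes from the server:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "dotlock.test";
	struct stat st;
	time_t now;
	int fd;

	fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (fstat(fd, &st) < 0) {
		perror("fstat");
		return 1;
	}
	now = time(NULL);
	/* a nonzero diff suggests server/client clock skew
	   or attribute caching */
	printf("mtime=%ld now=%ld diff=%ld seconds\n",
	       (long)st.st_mtime, (long)now, (long)(now - st.st_mtime));
	close(fd);
	unlink(path);
	return 0;
}
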
I'm using dovecot 1.1.6 over NFS.
Any thoughts?

Thanks in advance,
gpg


[Dovecot] index storage usage

2008-11-06 Thread Giorgenes Gelatti
Hello there,

I couldn't find the answer in the wiki, so I'm asking here.

On average, how much do Dovecot's indexes increase storage usage
compared to a standard maildir mailbox?

I believe it would be a function of the number of messages rather than of
message size. Is there a known approximate function for that?
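
Purely as an illustration of the shape such a function would have (the
per-message byte counts below are hypothetical, not measured Dovecot
figures):

  index_overhead ≈ n_messages x (record_bytes + cached_fields_bytes)
  e.g. 100,000 messages x (50 + 100) bytes ≈ 15 MB

i.e. roughly linear in the number of messages, with the constant depending
mostly on how many header fields end up stored in dovecot.index.cache.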

Thank you.
gpg


Re: [Dovecot] problem with i_stream_next_line()

2008-11-05 Thread Giorgenes Gelatti
OK. I've made a new patch to subscription-file.c to fix my problem.

Attached...

thank you

2008/11/4 Timo Sirainen <[EMAIL PROTECTED]>

> It's not a good idea to change the code that way. For example
> dovecot-uidlist reading relies on i_stream_next_line() not returning a
> partially written line. That's why the API description also says:
>
> /* Gets the next line from stream and returns it, or NULL if more data is
>   needed to make a full line. Note that if the stream ends with LF not
>   being the last character, this function doesn't return the last line. */
> char *i_stream_next_line(struct istream *stream);
>
> I'd think the easiest way would be for you to just add the missing LFs to
> the subscription files. Or alternatively change the subscription file
> reading code to also include the last line (with i_stream_get_data() after
> i_stream_next_line() has returned NULL).
>
>
> On Nov 4, 2008, at 10:36 PM, Giorgenes Gelatti wrote:
>
>  I did the patch below and it worked for me.
>>
>> diff --git a/dovecot/src/lib/istream.c b/dovecot/src/lib/istream.c
>> index 4b218b9..b195b4f 100644
>> --- a/dovecot/src/lib/istream.c
>> +++ b/dovecot/src/lib/istream.c
>> @@ -245,6 +245,10 @@ char *i_stream_next_line(struct istream *stream)
>> 		}
>> 	}
>>
>> +	if (ret_buf == NULL && i == _stream->pos) {
>> +		ret_buf = i_stream_next_line_finish(_stream, i);
>> +	}
>> +
>> 	return ret_buf;
>>
>>
>> 2008/11/4 Giorgenes Gelatti <[EMAIL PROTECTED]>
>>
>>  Hello there,
>>>
>>> I have a "subscriptions" file that does *not* end with a line break
>>> (created by another system).
>>> When I do a "lsub "" "*"" the last mailbox name is not listed.
>>> Debugging a little showed that it looks like i_stream_next_line() is not
>>> returning the last line if it doesn't end with a line break.
>>>
>>> Is this a bug?
>>>
>>> BTW, I'm using version 1.1.6.
>>>
>>> []'s
>>>
>>>
>>>
>
diff --git a/dovecot/src/lib-storage/list/subscription-file.c b/dovecot/src/lib-storage/list/subscription-file.c
index dc03a29..912bd23 100644
--- a/dovecot/src/lib-storage/list/subscription-file.c
+++ b/dovecot/src/lib-storage/list/subscription-file.c
@@ -57,14 +57,22 @@ static const char *next_line(struct mailbox_list *list, const char *path,
 			 bool ignore_estale)
 {
 	const char *line;
+	ssize_t ret;
 
 	*failed_r = FALSE;
 	if (input == NULL)
 		return NULL;
 
 	while ((line = i_stream_next_line(input)) == NULL) {
-		switch (i_stream_read(input)) {
+		ret = i_stream_read(input);
+		switch (ret) {
 		case -1:
+			if (i_stream_have_bytes_left(input)) {
+				size_t size;
+				line = i_stream_get_data(input, &size);
+				i_stream_skip(input, size);
+				if (line && size > 0) break;
+			}
 			if (input->stream_errno != 0 &&
 			    (input->stream_errno != ESTALE || !ignore_estale)) {
 				subswrite_set_syscall_error(list,
@@ -81,6 +89,7 @@ static const char *next_line(struct mailbox_list *list, const char *path,
 			*failed_r = TRUE;
 			return NULL;
 		}
+		if (line) break;
 	}
 
 	return line;
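
For comparison, the other approach Timo suggested, picking up the
unterminated last line with i_stream_get_data() once i_stream_next_line()
has returned NULL at EOF, would look roughly like this (a sketch against
the istream API quoted above; t_strndup() is Dovecot's temporary-pool
string duplicator, and the exact i_stream_get_data() return type may
differ between versions):

/* Requires Dovecot's lib.h and istream.h. At EOF, any bytes still
 * buffered in the stream form the unterminated last line. */
static const char *read_unterminated_last_line(struct istream *input)
{
	const unsigned char *data;
	const char *line;
	size_t size;

	data = i_stream_get_data(input, &size);
	if (size == 0)
		return NULL;
	line = t_strndup(data, size);	/* NUL-terminated copy */
	i_stream_skip(input, size);
	return line;
}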


Re: [Dovecot] problem with i_stream_next_line()

2008-11-04 Thread Giorgenes Gelatti
With the attachment it may be more useful.

Thanks.

2008/11/4 Giorgenes Gelatti <[EMAIL PROTECTED]>

> I did the patch below and it worked for me.
>
> diff --git a/dovecot/src/lib/istream.c b/dovecot/src/lib/istream.c
> index 4b218b9..b195b4f 100644
> --- a/dovecot/src/lib/istream.c
> +++ b/dovecot/src/lib/istream.c
> @@ -245,6 +245,10 @@ char *i_stream_next_line(struct istream *stream)
> 		}
> 	}
>
> +	if (ret_buf == NULL && i == _stream->pos) {
> +		ret_buf = i_stream_next_line_finish(_stream, i);
> +	}
> +
> 	return ret_buf;
>
>
> 2008/11/4 Giorgenes Gelatti <[EMAIL PROTECTED]>
>
> Hello there,
>>
>> I have a "subscriptions" file that does *not* end with a line break
>> (created by another system).
>> When I do a "lsub "" "*"" the last mailbox name is not listed.
>> Debugging a little showed that it looks like i_stream_next_line() is not
>> returning the last line if it doesn't end with a line break.
>>
>> Is this a bug?
>>
>> BTW, I'm using version 1.1.6.
>>
>> []'s
>>
>>
>
diff --git a/dovecot/src/lib/istream.c b/dovecot/src/lib/istream.c
index 4b218b9..b195b4f 100644
--- a/dovecot/src/lib/istream.c
+++ b/dovecot/src/lib/istream.c
@@ -245,6 +245,10 @@ char *i_stream_next_line(struct istream *stream)
 		}
 	}
 
+	if (ret_buf == NULL && i == _stream->pos) {
+		ret_buf = i_stream_next_line_finish(_stream, i);
+	}
+
 	return ret_buf;
 }
 


Re: [Dovecot] problem with i_stream_next_line()

2008-11-04 Thread Giorgenes Gelatti
I did the patch below and it worked for me.

diff --git a/dovecot/src/lib/istream.c b/dovecot/src/lib/istream.c
index 4b218b9..b195b4f 100644
--- a/dovecot/src/lib/istream.c
+++ b/dovecot/src/lib/istream.c
@@ -245,6 +245,10 @@ char *i_stream_next_line(struct istream *stream)
 		}
 	}
 
+	if (ret_buf == NULL && i == _stream->pos) {
+		ret_buf = i_stream_next_line_finish(_stream, i);
+	}
+
 	return ret_buf;


2008/11/4 Giorgenes Gelatti <[EMAIL PROTECTED]>

> Hello there,
>
> I have a "subscriptions" file that does *not* end with a line break
> (created by another system).
> When I do a "lsub "" "*"" the last mailbox name is not listed.
> Debugging a little showed that it looks like i_stream_next_line() is not
> returning the last line if it doesn't end with a line break.
>
> Is this a bug?
>
> BTW, I'm using version 1.1.6.
>
> []'s
>
>


[Dovecot] problem with i_stream_next_line()

2008-11-04 Thread Giorgenes Gelatti
Hello there,

I have a "subscriptions" file that does *not* end with a line break (created
by another system).
When I do a "lsub "" "*"" the last mailbox name is not listed.
Debugging a little showed that it looks like i_stream_next_line() is not
returning the last line if it doesn't end with a line break.

Is this a bug?

BTW, I'm using version 1.1.6.

[]'s


[Dovecot] problems with auth protocol

2008-09-26 Thread Giorgenes Gelatti
Hello there,

I have a client connecting to dovecot and communicating like this:

* OK Dovecot ready.

* AUTHENTICATE MYAUTH
+ base64 challenge
base64_response {NNN}

* NO Invalid base64 data in continued response


The problem is that dovecot seems to reject the response because of
the {NNN} at the end of the string.
If I remove the {NNN} it authenticates just fine.
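
For reference, the shape of the exchange RFC 3501 defines: AUTHENTICATE is
a tagged command, and every continuation response the client sends is a
bare base64 line; IMAP literal syntax such as {NNN} is not part of the
continuation grammar (tag and server texts below are illustrative):

  S: * OK Dovecot ready.
  C: a1 AUTHENTICATE MYAUTH
  S: + base64_challenge
  C: base64_response
  S: a1 OK Logged in.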

Any idea whether this is standard or not?

Thank you.


[Dovecot] plugin for writing compressed mails

2008-09-10 Thread Giorgenes Gelatti
Hello there,

I've been reading the zlib plugin and the Dovecot code, and I can't
see a way of extending the plugin to write compressed mails instead
of just reading them.
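
The raw compression side is simple enough; for illustration, a minimal
sketch of writing a gzip-compressed mail file with plain zlib's gzio
interface (generic zlib, not a Dovecot plugin hook; a real plugin would
have to do this through Dovecot's ostream layer, and the filename here is
illustrative). Build with: cc gzmail.c -lz

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
	const char *mail = "From: [EMAIL PROTECTED]\r\n\r\ntest body\r\n";
	gzFile gz = gzopen("mail.gz", "wb");

	if (gz == NULL) {
		fprintf(stderr, "gzopen failed\n");
		return 1;
	}
	/* gzwrite() returns the number of uncompressed bytes
	   written, 0 on error */
	if (gzwrite(gz, mail, strlen(mail)) == 0) {
		fprintf(stderr, "gzwrite failed\n");
		gzclose(gz);
		return 1;
	}
	return gzclose(gz) == Z_OK ? 0 : 1;
}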

Any clue?
Thanks in advance,
gpg


Re: [Dovecot] filename format question

2008-09-04 Thread Giorgenes Gelatti
> There's no config option for it, but it's theoretically possible to do just 
> about anything. Generally, though, the format of those filenames is defined 
> by the Maildir standard. Changing the filename as you suggest would likely 
> break other mail programs that might access those Maildir folders.
>
> I suppose the question is: why would you want to change that?

My company's current system (unfortunately) uses that non-standard format.
The MTA writes in that format.

I guess it would require a patch then?

Thanks.
gpg


[Dovecot] filename format question

2008-09-04 Thread Giorgenes Gelatti
Hello,

Is it possible to configure the filename format in dovecot?
For example, to change from "unique,W=size:2,FLAGS" to
"unique,size.hostname:2,FLAGS_unique2"?

Thanks in advance.


Re: [Dovecot] mailbox lock

2008-08-28 Thread Giorgenes Gelatti
Maybe what you have to do is remove Dovecot's lock file after the
migration is complete, i.e. after the file system exchange is done.
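
If the external lock is a dotlock-style file like the one Dovecot creates,
acquiring and releasing it is tiny; a sketch (function names hypothetical):

#include <fcntl.h>
#include <unistd.h>

/* Acquiring a dotlock is an O_CREAT|O_EXCL create; releasing it,
 * i.e. "removing the lock file", is a plain unlink(). */
static int dotlock_acquire(const char *path)
{
	int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0600);

	if (fd < 0)
		return -1;	/* EEXIST: somebody else holds the lock */
	close(fd);
	return 0;
}

static void dotlock_release(const char *path)
{
	unlink(path);
}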

[]'s
gpg


2008/8/28 Thiago Monaco Papageorgiou <[EMAIL PROTECTED]>:
> OK, but we must lock the mailbox because we often move groups of mailboxes
> from one file system to another. While the mailboxes are being moved, our LDA
> shouldn't deliver a new message to the old mailbox, and isn't able to
> deliver to the new one. The mailbox is unlocked after the file system
> exchange. If Dovecot locks the index files and then the mailbox moves to
> another file system, what will unlock Dovecot's index files? The Dovecot
> process that performed the lock will unlock the old index files. The old
> index files will be erased along with the whole old mailbox, while the new
> mailbox, which was moved to another file system, will remain locked.
>
> We have almost 10M mailboxes; if the chance of this happening is something
> like 1 in 1M, it will happen 10 times per day!
>
>
> mouss wrote:
>>
>> Thiago Monaco Papageorgiou wrote:
>>>
>>> Hello!
>>>
>>> I need to use my own lock method in Dovecot to block a mailbox. I am
>>> using maildir format; is there an API that I can implement? I need it
>>> because there are other systems which already use my lock method.
>>
>> Why would you lock in a maildir? Maildir was designed to avoid locks. Do
>> you have an external app that "plays" with mail files? In a maildir, it may
>> be as easy as:
>>
>> # mkdir mydir
>> # mv cur/$filename mydir/
>> # do what you want in mydir/
>>
>>
>>
>>
>
>
> --
> Thiago Monaco Papageorgiou <[EMAIL PROTECTED]>
> 
> Terra Networks Brasil S/A Tel: +55 (51) 3284-4274
>
>


Re: [Dovecot] dovecot performance

2008-08-15 Thread Giorgenes Gelatti
The master process exec()s the mail process (imap or pop3) after forking.
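
That is, the per-connection path is the classic fork+exec pattern; a sketch
(the binary path is illustrative, not Dovecot's actual layout):

#include <sys/types.h>
#include <unistd.h>

/* Master forks per connection; the child execs the mail binary. */
static void spawn_mail_process(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		execl("/usr/libexec/dovecot/imap", "imap", (char *)NULL);
		_exit(1);	/* reached only if exec fails */
	}
	/* parent: pid < 0 would mean fork failed; otherwise carry on */
}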

gpg

2008/8/15 Sebastien Tandel <[EMAIL PROTECTED]>:
> Hi,
>
>
>>> It is well known that preforking is a good practice if you want to
>>> achieve higher performance.
>>> When I was asked about it I readily answered: "of course it does". To
>>> my surprise, it doesn't.
>>
>> With fork latencies in the range of 500 to 1500 microseconds (on Pentium
>> 900 MHz-class hardware!) on most modern kernels[1] I wonder whether this
>> "good practice" isn't on the verge of voodoo ;-)
>
> OK, it measures the fork call itself. But fork uses a copy-on-write
> mechanism ... It means that *none* of the parent's memory pages are copied.
> Each page is simply *shared* by *all* the children /until/ a modification is
> made to it. Therefore this test obviously does not take into account the time
> taken when modifying data. And I strongly suspect that dovecot is not only
> doing read-only access to memory when running. :-/
>
> P.S.: I'm not saying, though, that it is mandatory to have such a mechanism in
> dovecot ;)
>
>
> Regards,
> Sebastien
>
>> (Of course, in an HTTP server, where you might expect thousands of
>> connects per second, this is another story -- which is mitigated by HTTP
>> 1.1, when properly streaming several requests per connection).
>>
>> [1] , search "The fork benchmark"
>>
>> Regards
>> -- tomás
>>
>
>


Re: [Dovecot] dovecot performance

2008-08-14 Thread Giorgenes Gelatti
2008/8/14 Timo Sirainen <[EMAIL PROTECTED]>:
> But there are even some theoretical problems with preforking. For example
> the most secure way to set up your users is to use a different UNIX UID for
> each user. So for preforking that means your preforked processes must run as
> root until they receive the information about which UID they need to run as.
> And the code running as root should be minimized.

True, but it's a common scenario to have thousands of users, in which
case they usually all have the same UID.

[]'s


Re: [Dovecot] dovecot performance

2008-08-14 Thread Giorgenes Gelatti
Whoa!!

Do you have statistics on accesses/min for POP3?

Indeed, it could be premature, since I didn't measure any real
bottleneck. It's just something that caught my attention.

[]'s
giorgenes

2008/8/14 Jose Celestino <[EMAIL PROTECTED]>:
> Words by Giorgenes Gelatti [Thu, Aug 14, 2008 at 03:38:50PM -0300]:
>> Hello All,
>>
>> I've been studying Dovecot as a replacement for my company's current system
>> and I got a little worried about an aspect of Dovecot's design.
>> I was surprised that Dovecot doesn't use preforking for its mail
>> processes, forking a new process for each new client connection.
>>
>> Talking in the #dovecot channel I was given a scenario of a system
>> supporting ~40k users with 4 servers just fine.
>> I wonder how well dovecot would scale if we increase this number of
>> users by some order of magnitude like, say, 4M users.
>>
>
> Well, we have 8 servers for that amount of users.
>
>> It is well known that preforking is a good practice if you want to
>> achieve higher performance.
>
> Some say it's premature optimization.
>
>> When I was asked about it I readily answered: "of course it does". To
>> my surprise, it doesn't.
>>
>> Do you have any plans to support preforking in the near future?
>>
>
> --
> Jose Celestino | http://japc.uncovering.org/files/japc-pgpkey.asc
> 
> "One man's theology is another man's belly laugh." -- Robert A. Heinlein
>


[Dovecot] dovecot performance

2008-08-14 Thread Giorgenes Gelatti
Hello All,

I've been studying Dovecot as a replacement for my company's current system
and I got a little worried about an aspect of Dovecot's design.
I was surprised that Dovecot doesn't use preforking for its mail
processes, forking a new process for each new client connection.
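
For concreteness, the preforking pattern in question looks roughly like
this generic sketch (not Dovecot code; port number arbitrary): workers are
forked up front and all block in accept() on the same listening socket, so
no fork() happens on the connection path.

#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4

int main(void)
{
	struct sockaddr_in sa;
	int i, lfd;

	lfd = socket(AF_INET, SOCK_STREAM, 0);
	if (lfd < 0) {
		perror("socket");
		return 1;
	}
	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(1100);	/* arbitrary test port */
	sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	if (bind(lfd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
	    listen(lfd, 64) < 0) {
		perror("bind/listen");
		return 1;
	}
	for (i = 0; i < NWORKERS; i++) {
		if (fork() == 0) {
			/* worker: every worker blocks in accept() on the
			   shared socket; no fork() per connection */
			for (;;) {
				int cfd = accept(lfd, NULL, NULL);

				if (cfd < 0)
					continue;
				(void)write(cfd, "+OK hello\r\n", 11);
				close(cfd);
			}
		}
	}
	while (wait(NULL) > 0)	/* master just reaps */
		;
	return 0;
}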

Talking in the #dovecot channel I was given a scenario of a system
supporting ~40k users with 4 servers just fine.
I wonder how well dovecot would scale if we increase this number of
users by some order of magnitude like, say, 4M users.

It is well known that preforking is a good practice if you want to
achieve higher performance.
When I was asked about it I readily answered: "of course it does". To
my surprise, it doesn't.

Do you have any plans to support preforking in the near future?

Best regards,
giorgenes