Re: improving concurrency/performance

2005-11-06 Thread Sergio Devojno Bruder

Jure Pečar wrote:

On Sun, 06 Nov 2005 03:58:15 -0200
Sergio Devojno Bruder [EMAIL PROTECTED] wrote:

In our experience FS-wise, ReiserFS is the worst performer among ext3, 
XFS, and ReiserFS (with the tail option turned on or off) for a Cyrus 
backend (1M mailboxes in 3 partitions per backend, 0.5TB each partition).


Interesting ... can you provide some numbers, even from memory?

I always thought that reiserfs is best suited for jobs like this. Also, I'm
quite happy with it, but I haven't done any hard-core scientific
measurements.


From memory: 2 backends, same hardware (Xeons), same storage, same 
number of mailboxes (approx.). One with ext3 spools, the other with ReiserFS 
spools. The ReiserFS one was handling half the simultaneous load of the 
ext3 one.


--
Sergio Bruder


Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: improving concurrency/performance

2005-11-06 Thread Simon Matter
 On Sun, 06 Nov 2005 03:58:15 -0200
 Sergio Devojno Bruder [EMAIL PROTECTED] wrote:

 In our experience FS-wise, ReiserFS is the worst performer among ext3,
 XFS, and ReiserFS (with the tail option turned on or off) for a Cyrus
 backend (1M mailboxes in 3 partitions per backend, 0.5TB each partition).

 Interesting ... can you provide some numbers, even from memory?

 I always thought that reiserfs is best suited for jobs like this. Also,
 I'm quite happy with it, but I haven't done any hard-core scientific
 measurements.

One thing to keep in mind is that while ReiserFS is usually good at
handling a large number of small files, it eats up many more CPU cycles
than other filesystems like ext3 or XFS. So if you're only running a
benchmark, it may not show up the same way as in a mixed-load test, where
CPU is also used by other components of the system. At least that's
what showed up in my tests years ago.

Simon



Re: improving concurrency/performance

2005-11-06 Thread Michael Loftis



--On November 6, 2005 12:51:33 PM +0100 Jure Pečar [EMAIL PROTECTED] 
wrote:



On Sun, 06 Nov 2005 03:58:15 -0200
Sergio Devojno Bruder [EMAIL PROTECTED] wrote:


In our experience FS-wise, ReiserFS is the worst performer among ext3,
XFS, and ReiserFS (with the tail option turned on or off) for a Cyrus
backend (1M mailboxes in 3 partitions per backend, 0.5TB each partition).


Interesting ... can you provide some numbers, even from memory?



I'd also be VERY interested, since our experience was quite the opposite. 
ReiserFS was faster than all three, with XFS trailing a dismal third (it also 
had corruption issues) and ext3 second, or an even more dismal third depending 
on whether you ignored its wretched large-directory performance.  ReiserFS 
performed solidly and predictably in all tests.  The same could not be said 
for XFS and ext3.  This was about 2 yrs ago though.




I always thought that reiserfs is best suited for jobs like this. Also,
I'm quite happy with it, but I haven't done any hard-core scientific
measurements.





Re: improving concurrency/performance

2005-11-06 Thread Andrew Morgan


On Sun, 6 Nov 2005, Michael Loftis wrote:

I'd also be VERY interested, since our experience was quite the opposite. 
ReiserFS was faster than all three, with XFS trailing a dismal third (it also 
had corruption issues) and ext3 second, or an even more dismal third depending 
on whether you ignored its wretched large-directory performance.  ReiserFS 
performed solidly and predictably in all tests.  The same could not be said 
for XFS and ext3.  This was about 2 yrs ago though.


Make sure that you format ext3 partitions with dir_index, which improves 
large-directory performance.  You'll probably also want to increase the 
number of inodes.  Here is what I used:


mkfs -t ext3 -j -m 1 -O dir_index /dev/sdb1
tune2fs -c 0 -i 0 /dev/sdb1

This was on an 800GB Dell/EMC Cx500 array.
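The same options can be exercised without dedicated hardware: mke2fs will format a plain file, so dir_index can be verified on a throwaway image. A minimal sketch, assuming e2fsprogs is installed (the /tmp path is arbitrary):

```shell
# Build a small file-backed ext3 image (no root or spare disk needed)
# with the same options as above; /tmp/ext3.img is a throwaway path.
dd if=/dev/zero of=/tmp/ext3.img bs=1M count=16 2>/dev/null
mke2fs -F -q -j -m 1 -O dir_index /tmp/ext3.img
tune2fs -c 0 -i 0 /tmp/ext3.img >/dev/null
# Confirm dir_index (and the journal) actually took effect:
tune2fs -l /tmp/ext3.img | grep 'features'
```

On a real partition the commands are identical, just pointed at the device node instead of a file.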

Andy



Re: improving concurrency/performance

2005-11-06 Thread Jure Pečar
On Sun, 6 Nov 2005 14:20:03 -0800 (PST)
Andrew Morgan [EMAIL PROTECTED] wrote:

 mkfs -t ext3 -j -m 1 -O dir_index /dev/sdb1
 tune2fs -c 0 -i 0 /dev/sdb1

What about 1k blocks? I think they'd be more useful than 4k on mail
spools ...


-- 

Jure Pečar
http://jure.pecar.org/



Re: improving concurrency/performance

2005-11-06 Thread Andrew Morgan


On Mon, 7 Nov 2005, Jure Pečar wrote:


On Sun, 6 Nov 2005 14:20:03 -0800 (PST)
Andrew Morgan [EMAIL PROTECTED] wrote:


mkfs -t ext3 -j -m 1 -O dir_index /dev/sdb1
tune2fs -c 0 -i 0 /dev/sdb1


What about 1k blocks? I think they'd be more useful than 4k on mail
spools ...


Maybe, though it could be a tradeoff of average message size versus number 
of messages for your users.
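The overhead is easy to see on a pair of throwaway file-backed images: a file smaller than the block size still consumes a whole block, so 4k blocks cost roughly 3k of slack per sub-1k message. A sketch, with arbitrary paths:

```shell
# Format two identical 16MB images with 1k and 4k blocks and compare.
dd if=/dev/zero of=/tmp/bs1k.img bs=1M count=16 2>/dev/null
dd if=/dev/zero of=/tmp/bs4k.img bs=1M count=16 2>/dev/null
mke2fs -F -q -b 1024 /tmp/bs1k.img
mke2fs -F -q -b 4096 /tmp/bs4k.img
# Each filesystem reports its block size; any file smaller than this
# still occupies one whole block on disk.
tune2fs -l /tmp/bs1k.img | grep 'Block size'
tune2fs -l /tmp/bs4k.img | grep 'Block size'
```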


Andy



Re: improving concurrency/performance

2005-11-06 Thread David Lang

On Mon, 7 Nov 2005, Jure Pečar wrote:


On Sun, 6 Nov 2005 14:20:03 -0800 (PST)
Andrew Morgan [EMAIL PROTECTED] wrote:


mkfs -t ext3 -j -m 1 -O dir_index /dev/sdb1
tune2fs -c 0 -i 0 /dev/sdb1


What about 1k blocks? I think they'd be more useful than 4k on mail
spools ...


I was recently doing some testing of lots of small files on the various 
filesystems, and I ran into a huge difference (8x) depending on which 
allocator was used for ext*. The default allocator changed between ext2 
and ext3 (you can override it with a mount option), and when reading 1M files 
(10 dirs of 10 dirs of 10 dirs of 1000 1K files) the time to read them 
went from ~5 min with the old allocator used in ext2 to 40 min with the 
one that's the default for ext3.
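The tree described above can be reproduced in miniature to repeat the experiment. The paths and scale below are arbitrary; on a real spool you would time this once with the filesystem mounted normally and once with -o oldalloc (the ext2-era allocator):

```shell
# Miniature version of the test layout: 10 dirs of 10 files of 1KB each
# (scaled down from 10x10x10x1000), then tar them and list them back.
mkdir -p /tmp/alloctest && cd /tmp/alloctest
for d in 0 1 2 3 4 5 6 7 8 9; do
  mkdir -p d$d
  for f in 0 1 2 3 4 5 6 7 8 9; do
    dd if=/dev/zero of=d$d/f$f bs=1K count=1 2>/dev/null
  done
done
tar cf /tmp/alloctest.tar .
tar tf /tmp/alloctest.tar | grep -c '/f'    # 100 file entries
```

Wrapping the tar/untar steps in `time` gives the same read-pattern comparison as the original 1M-file test, just faster.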


David Lang

--
There are two ways of constructing a software design. One way is to make it so 
simple that there are obviously no deficiencies. And the other way is to make 
it so complicated that there are no obvious deficiencies.
 -- C.A.R. Hoare



Authenticating (with cyradm) using an alternate Kerberos instance?

2005-11-06 Thread Lars Kellogg-Stedman
I'm running Cyrus imapd in a Kerberos environment.

When using cyradm, I would like to authenticate with a /admin
instance, rather than giving my primary instance admin privileges or
always connecting as the 'cyrus' user.  I haven't had much luck so
far, and I think it's because I'm not clear on how Cyrus/SASL
interacts with Kerberos and LDAP.

I've authenticated to Kerberos as lars/[EMAIL PROTECTED]:

  Credentials cache: FILE:/tmp/krb5cc_2_u20528
Principal: lars/[EMAIL PROTECTED]

  Issued   Expires  Principal
  Nov  6 22:50:33  Nov  7 08:50:33  krbtgt/[EMAIL PROTECTED]

I've added lars/admin as an admin user in /etc/imapd.conf (and set
defaultdomain to example.com), like this:

  admins: cyrus lars/admin
  defaultdomain: example.com

We're running 'saslauthd -a ldap'.  There is a matching record in LDAP
(uid: lars/admin) that will be matched by the filter in
saslauthd.conf:

  ldap_filter: (|([EMAIL PROTECTED])((!(mailLocalAddress=*))(uid=%u)))

If I try to connect with cyradm, I get an error:

  $ cyradm mail.example.com
  cyradm: cannot authenticate to server with  as lars

And the IMAP server says:

  badlogin: mail.example.com [192.168.1.20] GSSAPI [SASL(-13):
  authentication failure: bad userid authenticated]

I get the same behavior if I try:

  $ cyradm --user=lars/admin mail.example.com

I should probably mention that:

(a) authenticating as my primary instance ([EMAIL PROTECTED]) works
just fine (and if I set myself up as an admin user I get admin
privileges),

(b) if I obtain the '[EMAIL PROTECTED]' principal, everything works as
expected, and

(c) authenticating to, say, our LDAP server as lars/admin does the
right thing, although that's largely due to the magic of OpenLDAP's
sasl-regexp commands.

What am I missing?  Thanks!

-- Lars



How to subscribe IMAP folders using cyradm?

2005-11-06 Thread Patrick T. Tsang



Hello,

I can login as the cyrus admin, create user accounts 
and folders.
But how do I subscribe to the folders? 
It seems cyradm and its provided 
Cyrus/IMAP/Admin.pm module don't have this command.

Thanks
PT
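As the post notes, cyradm exposes no subscribe command; subscription state belongs to each user, so it is normally set over the IMAP protocol itself with the SUBSCRIBE command (RFC 3501), logged in as that user, e.g. through imtest or telnet. A sketch of the exchange, with placeholder credentials and folder name:

```
a1 LOGIN pt secret
a2 SUBSCRIBE INBOX.Sent
a3 LSUB "" "*"
```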



Re: improving concurrency/performance

2005-11-06 Thread Robert Mueller
  In our experience FS-wise, ReiserFS is the worst performer among ext3,
  XFS, and ReiserFS (with the tail option turned on or off) for a Cyrus
  backend (1M mailboxes in 3 partitions per backend, 0.5TB each partition).
 
  Interesting ... can you provide some numbers, even from memory?
 
 
 I'd also be VERY interested, since our experience was quite the opposite.
 ReiserFS was faster than all three, with XFS trailing a dismal third (it
 also had corruption issues) and ext3 second, or an even more dismal third
 depending on whether you ignored its wretched large-directory performance.
 ReiserFS performed solidly and predictably in all tests.  The same could
 not be said for XFS and ext3.  This was about 2 yrs ago though.

This was also our experience. ReiserFS was the fastest, most stable, and
the most predictable of the 3.

The concept of predictability is an interesting one. Basically we were
doing lots of tests, including a bunch of simultaneous load tests (do
some Cyrus tests, and at the same time do a bunch of other things that
cause lots of IO on the system), and what we found was that while ext3
in particular seemed to jump around a lot performance-wise (it seemed to
strangely allocate a lot of IO to Cyrus for a while, then slow down to a
crawl, then speed up again, etc.), reiserfs performed very consistently
during the entire test. No idea what caused this, but it was an interesting
observation.

My previous post on the filesystem topic as well...

http://permalink.gmane.org/gmane.mail.imap.cyrus/15683

Rob

--
[EMAIL PROTECTED]
Sign up at http://fastmail.fm for fast, ad free, IMAP accessible email




Re: improving concurrency/performance

2005-11-06 Thread Sergio Devojno Bruder

Michael Loftis wrote:


Interesting ... can you provide some numbers, even from memory?


I'd also be VERY interested, since our experience was quite the opposite. 
ReiserFS was faster than all three, with XFS trailing a dismal third (it 
also had corruption issues) and ext3 second, or an even more dismal third 
depending on whether you ignored its wretched large-directory performance. 
ReiserFS performed solidly and predictably in all tests.  The same could 
not be said for XFS and ext3.  This was about 2 yrs ago though.


Our production Cyrus has one difference from stock Cyrus, I almost forgot: 
we tweaked the directory hash functions to use a 2-level-deep hash, and 
that can make a lot of difference, especially when comparing filesystems.

We tweaked our hash function specifically to guarantee that, in the vast 
majority of cases, our users' directories will occupy only one block 
with ext3 (4k).
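As an illustration only (this is not Cyrus's actual hash function, and "hashpath" is a made-up name), a 2-level scheme of this sort can be as simple as keying on the first two characters of the mailbox name, which keeps each leaf directory small enough to fit in one 4k block:

```shell
# Hypothetical 2-level directory hash: user "bruder" lands in b/r/bruder.
hashpath() {
  first=$(printf '%s' "$1" | cut -c1)
  second=$(printf '%s' "$1" | cut -c2)
  printf '%s/%s/%s\n' "$first" "$second" "$1"
}
hashpath bruder    # prints b/r/bruder
```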


--
Sergio Devojno Bruder



Re: improving concurrency/performance

2005-11-06 Thread David Lang

On Mon, 7 Nov 2005, Sergio Devojno Bruder wrote:


David Lang wrote:

(..)
I was recently doing some testing of lots of small files on the various 
filesystems, and I ran into a huge difference (8x) depending on which 
allocator was used for ext*. The default allocator changed between ext2 and 
ext3 (you can override it with a mount option), and when reading 1M files (10 
dirs of 10 dirs of 10 dirs of 1000 1K files) the time to read them went 
from ~5 min with the old allocator used in ext2 to 40 min with the one 
that's the default for ext3.


David Lang

(!!) Interesting. You said mount options? The mount(8) man page only shows me 
data=journal, data=ordered, data=writeback, etcetera.


How can I change that?


I found more things listed under /usr/src/linux/Documentation/filesystems

there are ext2.txt and ext3.txt files that list all the options available.

Note that with my test all the files were created in order; it may be that 
if the files are created in a random order things would be different, so 
further testing is warranted.


I was doing tests to find out how long it took to tar/untar these files (with 
the tarball on a different drive).


Here are the notes I made at the time. This was either 2.6.8.1 or 2.6.13.4 
(I upgraded about that time, but I'm not sure what the exact timing was).

Note that on my Cyrus server I actually use XFS with very large folders 
(20,000 mails in one folder) and it seems lightning fast. I haven't 
reconciled that observed behavior with the tests listed below.


The fact that on ext* filesystems I had tests range from 5 min to 80 min 
is somewhat scary. I did make sure to clear memory (by reading a file 
larger than available RAM and doing a sync) between tests.


David Lang



On ext2, reading the tarball takes 53 seconds, creating the tar takes 10m, 
untarring it takes 4 min, and copying it between drives on different 
controllers takes 62 seconds.


XFS looks bad for small files (13 min to untar, 9:41 to tar), but good for 
large files (47 sec to read)


reiserfs: reading the tar 43 sec 4:50 to tar 2:06 to untar (it was 
designed for tiny files and it appears to do that well)


A couple of tests I ran on reiserfs that I hadn't thought to run on the 
others: untarring on top of an existing directory took 7m, ls -lR took 
2:40, ls -flR (unsorted) took 2:40, find . -print took 21 sec, rm -r took 
3m.


jfs: 57 sec to read, untar 15:30, no other tests run

ext3: untar 3:30, read 64sec, tar 5:46, untarring on top of an existing 
directory 5:20, ls -lR 53 sec, ls -flR 47 sec, find . -print 7 sec


Enabling dir_index changed the read (36 sec), ls -flR (57 sec), ls -lR (61 
sec), find (25 sec), tar 81m!!!


Turning off dir_index and removing the journal (effectively turning it into 
ext2 again) and mounting noatime, the tar goes to 34 min.

Mounting with oldalloc,noatime: untar is 4:45, tar is 5:51.
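The journal-removal step can be reproduced safely on a throwaway file-backed image (path arbitrary for this sketch): tune2fs toggles the has_journal feature in place, which is what "effectively turning it into ext2" amounts to.

```shell
# Make a small ext3 image, then strip its journal with tune2fs.
dd if=/dev/zero of=/tmp/conv.img bs=1M count=16 2>/dev/null
mke2fs -F -q -j /tmp/conv.img
tune2fs -O '^has_journal' /tmp/conv.img >/dev/null
# has_journal should now be gone from the feature list; noatime and
# oldalloc remain mount-time choices (mount -o noatime,oldalloc ...).
tune2fs -l /tmp/conv.img | grep 'features'
```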


--
There are two ways of constructing a software design. One way is to make it so 
simple that there are obviously no deficiencies. And the other way is to make 
it so complicated that there are no obvious deficiencies.
 -- C.A.R. Hoare



Re: improving concurrency/performance

2005-11-06 Thread Sergio Devojno Bruder

David Lang wrote:
(..)
I was recently doing some testing of lots of small files on the various 
filesystems, and I ran into a huge difference (8x) depending on which 
allocator was used for ext*. The default allocator changed between ext2 
and ext3 (you can override it with a mount option), and when reading 1M 
files (10 dirs of 10 dirs of 10 dirs of 1000 1K files) the time to read 
them went from ~5 min with the old allocator used in ext2 to 40 min with 
the one that's the default for ext3.


David Lang

(!!) Interesting. You said mount options? The mount(8) man page only shows 
me data=journal, data=ordered, data=writeback, etcetera.


How can I change that?

--
Sergio Bruder
