Re: improving concurrency/performance (fwd)

2005-11-09 Thread Sergio Devojno Bruder

John Madden wrote:

This guy is having a problem with cyrus-imap and ext3 - when multiple
processes are attempting to write to the same filesystem (but not the same
file), performance drops to next to nothing with as few as five processes
writing. An strace shows most of the time is being spent in fdatasync
and fsync.



Actually, the thread just got off topic quickly -- I'm running this on reiserfs,
not ext3.  ...And I've got it mounted with data=writeback, too.  But thanks for
the info, Andrew.

John


I'll bet that the fakesync preload library will make a difference for you.
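For context, a "fakesync" preload library works by interposing on fsync(2)/fdatasync(2) via LD_PRELOAD and turning them into no-ops. I don't have the library Sergio mentions at hand, so the following is only a minimal sketch of the technique (the file name and build line are assumptions):

```c
/* fakesync.c -- minimal sketch of an fsync-suppressing LD_PRELOAD shim,
 * in the spirit of the "fakesync" library mentioned above (the real
 * library's contents are an assumption here).
 *
 * Build: gcc -shared -fPIC -o fakesync.so fakesync.c
 * Use:   LD_PRELOAD=/path/to/fakesync.so imapd ...
 *
 * WARNING: this trades crash durability for speed -- data the application
 * believes is on disk may still be sitting in the page cache. */
#include <unistd.h>

int fsync(int fd)     { (void)fd; return 0; }  /* report success, sync nothing */
int fdatasync(int fd) { (void)fd; return 0; }
```

Since the shim never touches the descriptor, it "succeeds" for any fd; that is exactly why it should only be used where losing the last few seconds of mail on a crash is acceptable.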

--
Sergio Bruder

Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: improving concurrency/performance

2005-11-06 Thread Sergio Devojno Bruder

Jure Pečar wrote:

On Sun, 06 Nov 2005 03:58:15 -0200
Sergio Devojno Bruder [EMAIL PROTECTED] wrote:

In our experience FS-wise, ReiserFS is the worst performer among ext3, 
XFS and ReiserFS (with tail packing turned on or off) for a Cyrus backend (1M 
mailboxes in 3 partitions per backend, 0.5TB per partition).


Interesting ... can you provide some numbers, even from memory?

I always thought that reiserfs is best suited for jobs like this. Also, I'm
quite happy with it, but I havent done any hard-core scientific
measurements.


From memory: 2 backends, same hardware (Xeons), same storage, same 
number of mailboxes (approx.). One with ext3 spools, the other with ReiserFS 
spools. The ReiserFS one was handling half the simultaneous load of the 
ext3 one.


--
Sergio Bruder




Re: improving concurrency/performance

2005-11-06 Thread Sergio Devojno Bruder

Michael Loftis wrote:


Interesting ... can you provide some numbers, even from memory?


I'd also be VERY interested, since our experience was quite the opposite. 
ReiserFS was faster than all three, XFS trailing a dismal third (it also 
had corruption issues) and ext3 second, or an even more dismal third, 
depending on whether you ignore its wretched large-directory performance. 
ReiserFS performed solidly and predictably in all tests. The same could 
not be said for XFS and ext3. This was about 2 yrs ago, though.


Our cyrus in production has one difference from stock cyrus, I almost forgot: 
we tweaked the directory hash functions. We use a 2-level-deep hash, and 
that can make a lot of difference, especially when comparing FS's.

We tweaked our hash function specifically to guarantee that our users' 
directories will, in the vast majority of cases, occupy only one 
ext3 block (4k).
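A sketch of what such a two-level hash can look like (stock Cyrus hashes a single level via dir_hash_c(); the exact in-house function and path layout below are assumptions for illustration):

```c
/* Sketch of a two-level directory hash for the mail spool, analogous to the
 * tweak described above. The bucket function and path layout are illustrative
 * assumptions, not the actual in-house code. */
#include <ctype.h>
#include <stdio.h>

/* Bucket a character into 'a'..'z'; non-letters share one bucket. */
static char bucket(char c) {
    c = (char)tolower((unsigned char)c);
    return (c >= 'a' && c <= 'z') ? c : 'q';
}

/* Build <spool>/<h1>/<h2>/<user>, e.g. /var/spool/imap/s/b/sbruder.
 * With up to 26*26 leaf directories, each hash directory holds few enough
 * entries that its directory file fits in a single 4k ext3 block in the
 * common case. */
static void spool_path(char *out, size_t n, const char *spool, const char *user) {
    char h1 = bucket(user[0]);
    char h2 = bucket(user[1] ? user[1] : user[0]);
    snprintf(out, n, "%s/%c/%c/%s", spool, h1, h2, user);
}
```

For example, spool_path(path, sizeof path, "/var/spool/imap", "sbruder") yields "/var/spool/imap/s/b/sbruder".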


--
Sergio Devojno Bruder



Re: improving concurrency/performance

2005-11-06 Thread Sergio Devojno Bruder

David Lang wrote:
(..)
I was recently doing some testing of lots of small files on the various 
filesystems, and I ran into a huge difference (8x) depending on which 
allocator was used for ext*. The default allocator changed between ext2 
and ext3 (you can override it with a mount option), and when reading 1M 
files (10 dirs of 10 dirs of 10 dirs of 1000 1K files) the time to read 
them went from ~5 min with the old allocator used in ext2 to 40 min with 
the one that's the default for ext3.


David Lang

(!!) Interesting. You said mount options? The mount man page only shows 
me data=journal, data=ordered, data=writeback, etcetera.


How can I change that?
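For what it's worth (hedged; verify against your kernel's Documentation/filesystems/ext3.txt): the allocator choice David describes is exposed as the ext2/ext3 mount options `orlov` (the newer default) and `oldalloc` (the old ext2-era behaviour). Device and mount point below are placeholders:

```
# try the old ext2-style allocator on the spool filesystem
mount -o remount,oldalloc /dev/sda5 /var/spool/imap

# or persistently, in /etc/fstab:
#   /dev/sda5  /var/spool/imap  ext3  defaults,oldalloc  0 2
```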

--
Sergio Bruder



Re: improving concurrency/performance

2005-11-05 Thread Sergio Devojno Bruder

John Madden wrote:

I've had great experience with the performance of Cyrus thus far, but I'm 
testing
a migration at the moment (via imapcopy) and I'm having some pretty stinky
results.  There's no iowait (4 stripes on a 2Gbps SAN), no cpu usage, nothing
waiting on the network, and still I'm seeing terrible performance.  I assume 
this
points to something internal, such as concurrency on the db files.

I've converted everything to skiplist already, I've tweaked reiserfs's mount
options, what little Berkeley still used appears to be ok (no waiting on locks 
and
such), so I'm at a loss.  Is there a general checklist of things to have a look
at?  Are there tools to look at the metrics of the skiplist db's (such as
Berkeley's db_stat)?  Am I doomed to suffer sub-par performance as long as IMAP
writes are happening?

Migration's coming on the 24th.  I'm now officially sweating. :)

Thanks,
  John


In our experience FS-wise, ReiserFS is the worst performer among ext3, 
XFS and ReiserFS (with tail packing turned on or off) for a Cyrus backend (1M 
mailboxes in 3 partitions per backend, 0.5TB per partition).


--
Sergio Bruder



Re: frequent mupdate master mailboxes.db corruption, anyone else?

2005-09-21 Thread Sergio Devojno Bruder

Henrique de Moraes Holschuh wrote:

On Thu, 15 Sep 2005, Sergio Devojno Bruder wrote:


I'm fighting with frequent corruptions of my mupdate master server
(high volume, currently 3.9M mailboxes) with Cyrus 2.2.10.



First things first: Triple check your system RAM.


AHA:

Sep 21 09:08:49 mupdate mupdate[17026]: IOERROR: mapping 
/var/lib/imap/mailboxes.db file: Cannot allocate memory
Sep 21 09:08:49 mupdate mupdate[17026]: failed to mmap 
/var/lib/imap/mailboxes.db file

Sep 21 09:08:49 mupdate master[5866]: process 17026 exited, status 75
Sep 21 09:08:49 mupdate master[5866]: service mupdate pid 17026 in READY 
state: terminated abnormally


I remember Joao Assad had the same problem, no?
--
Sergio Devojno Bruder



Re: frequent mupdate master mailboxes.db corruption, anyone else?

2005-09-16 Thread Sergio Devojno Bruder

Henrique de Moraes Holschuh wrote:

On Thu, 15 Sep 2005, Sergio Devojno Bruder wrote:


I'm fighting with frequent corruptions of my mupdate master server
(high volume, currently 3.9M mailboxes) with Cyrus 2.2.10.


First things first: Triple check your system RAM.



4G of ECC with no errors logged; I don't think the RAM is the culprit
in this case.

Another factor: these corruptions appeared as the volume of use grew;
this smells badly of a race.

Example of the symptom:
Sep 15 21:17:14 mupdate mupdate[1608]: DBERROR: skiplist recovery 
/var/lib/imap/mailboxes.db: 14D67FD8 should be ADD or DELETE

When this happens, mupdate keeps creating threads to the point of
'can't create another thread'.

We are using CentOS 3.0 (in case kernel/glibc/library versions
mean something).

(Our backends are CentOS 4.1 64-bit; our mupdate is
CentOS 3.0 32-bit.)

Any other relevant info?

--
Sergio Devojno Bruder




frequent mupdate master mailboxes.db corruption, anyone else?

2005-09-15 Thread Sergio Devojno Bruder


I'm fighting with frequent corruptions of my mupdate master server
(high volume, currently 3.9M mailboxes) with Cyrus 2.2.10.

We've tried switching away from skiplist to Berkeley DB, with the same result.
Has this problem been resolved somewhere? I.e., is there hope in trying
the 2.3 branch or something like that?

--
Sergio Devojno Bruder



Re: Is there a limit to number of mailboxes in cyrus

2005-09-09 Thread Sergio Devojno Bruder

Henrique de Moraes Holschuh wrote:
FWIW, I've experimented with 750k mailboxes on a single system with 8GB
RAM and we plan to put that number in production in a couple of months here.


Ouch, 750k?  How many concurrent accesses?


We currently have 1.6M, 1.2M and 940k mailboxes in 3 boxes with fiber to 
a single emc storage, all boxes dual Xeon 3.4Ghz EMT64T with 4G.


I'd better rephrase the question... how many concurrent *USERS* are
accessing the system over imap? pop? lmtp?


Our imap usage pattern is webmail only, users only 'see' pop3, so our 
concurrent use of imap is a lot lower than normal email clients.


--
Sergio Devojno Bruder



Re: Is there a limit to number of mailboxes in cyrus

2005-09-08 Thread Sergio Devojno Bruder

Henrique de Moraes Holschuh wrote:

On Thu, 08 Sep 2005, John Madden wrote:


We stored ~50'000 boxes on a single FreeBSD 4.x machine for a couple
of years. It was a dual Pentium 3, Tyan server motherboard, Intel
server network cards with 2gb of ram, adaptec 2100 raid controller
and fast scsi disks. The system and mailbox disks where separated.


FWIW, I've experimented with 750k mailboxes on a single system with 8GB RAM
and we plan to put that number in production in a couple of months here.


Ouch, 750k?  How many concurrent accesses?



We currently have 1.6M, 1.2M and 940k mailboxes in 3 boxes with fiber to 
a single emc storage, all boxes dual Xeon 3.4Ghz EMT64T with 4G.


--
Sergio Devojno Bruder   [EMAIL PROTECTED]
http://haxent.com.br/  41 3363-4263, 8402-2125



mupdate master worker threads

2005-04-04 Thread Sergio Devojno Bruder
Today our mupdate master started to behave strangely.
Here the mupdate master decided to start a new group of threads after some 
failed new-thread creations (?!):
Apr  4 10:59:23 mupdate mupdate[10678]: accepted connection
Apr  4 10:59:23 mupdate mupdate[10678]: could not start a new worker thread 
(not fatal)
Apr  4 10:59:23 mupdate last message repeated 349 times
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 51
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 52
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 53
(...)
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 99
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 100
Directly after that mupdate continued to start new threads and *at the same 
time* destroy 'old' ones:
Apr  4 10:59:23 mupdate mupdate[10678]: Worker thread finished, for a total of 
99 (50 spare)
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 100
Apr  4 10:59:23 mupdate mupdate[10678]: Worker thread finished, for a total of 
99 (50 spare)
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 100
Apr  4 10:59:23 mupdate mupdate[10678]: Worker thread finished, for a total of 
99 (50 spare)
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 100
Apr  4 10:59:23 mupdate mupdate[10678]: Worker thread finished, for a total of 
99 (50 spare)
Apr  4 10:59:23 mupdate mupdate[10678]: New worker thread started, for a total 
of 100
(...)
and so on.
This behaviour is cyclical: seconds apart, it creates a new group of threads, 
then destroys it.
My configuration for mupdate follows:
###
# mupdate configuration
###
mupdate_workers_start: 5
mupdate_workers_minspare: 2
mupdate_workers_maxspare: 50
mupdate_connections_max: 800
mupdate_workers_max: 800
--
Sergio Devojno Bruder[EMAIL PROTECTED]
http://haxent.com.br  41 362-5930, 41 8402-2125


Re: cyrus-murder problems with database corruption in the frontend/master

2005-03-31 Thread Sergio Devojno Bruder
Derrick J Brashear wrote:
Well, you have skiplist corruption, but there's not really anything 
in your report which is helpful at suggesting why you do, or helping 
to reproduce it so (if it is a bug) it can be tracked and killed.

Even posting your corrupted skiplist would be more useful.

 Yeah, I know the info I gave isn't very helpful.. the problem is that
 the corruption is random.. sometimes it takes 2 days to happen,
 sometimes a week. Any suggestions on how to debug and try to catch the
 problem?
To add more information:
We've done 2 murder installations, cyrus 2.2.3 and cyrus 2.2.10, and have already 
seen the same type of corruption ('should be ADD or DELETE'), but not with the same 
frequency.

--
Sergio Devojno Bruder[EMAIL PROTECTED]
http://haxent.com.br  41 362-5930, 41 8402-2125


Re: Create mailbox issue 2.2.10

2005-02-16 Thread Sergio Devojno Bruder
Ben Ricketts wrote:
Hi List,
I am trying to use cyradm-php.lib to create a user on a cyrus server. The
server is set up to use virtual domains and is running version 2.2.10. It is
also set to use unix hierarchy separators. This seems to work fine on another
machine running 2.1x and not set to use virtual domains.

when i create a user using code such as:
$imap->createmb("user/[EMAIL PROTECTED]/test");
try $imap->createmb("user/testacct/[EMAIL PROTECTED]") instead.
the mailbox for the user is created in the right place, i.e.
/mail/domain/m/myserver.com/t/user/testacct
but the subfolder is created as:
/mail/domain/m/myserver.com/test
Can anyone shed some light on this problem ?
Cheers,
Ben


Re: Cyrus in ISP environment?

2005-02-16 Thread Sergio Devojno Bruder
Adam Tauno Williams wrote:
Has anyone successfully used Cyrus in an ISP/webhosting environment?
This means many different domains with a small number of mailboxes per
domain.
Number of mailboxes: 500K - 1M
Disk space used: 1TB and more
Number of messages (daily number): 2-3M and more
Is it recommended to use Cyrus in such environment?

FastMail does.
http://www.fastmail.fm
We use it too (imap for webmail only; users only see pop3 and webmail):
1M users, but only one domain (we don't do webhosting).
Several terabytes; I don't remember now how many.
--
Sergio Bruder


Re: Murder problems, tracked down do updatelist, followup.

2005-02-16 Thread Sergio Devojno Bruder
Sergio Devojno Bruder wrote:
There are some problems with one cyrus murder deployment,
where our frontends aren't getting updates; we tracked it
down to updatelist.
More info: we are in initial test phase, linux environment, CentOS 3.3 (a 
recompiled free version of RHEL 3):
2 backends (2xP4 Xeon, 2GB RAM, local SCSI now);
mupdate master (P4 Xeon with HT, 2GB RAM);
2 frontends (2xP4 Xeon, 2GB RAM, local SCSI);
This time mupdate master was using a -UP kernel. We tested:
- all boxes already started (backends, master and frontends);
- stop mupdate and frontends;
- mupdate master start, frontend2 start, frontend1 start.
In the case of the second frontend starting, updatelist was
NULL, with no reference to the first frontend. How can that be?
Aren't [12921] and [12931] threads of the same process?
Why wasn't the updatelist modification in 12921 visible in 12931?
(12921 and 12931 are the pids of the mupdate master processes that handled
the connections of the 2 frontends.)
Patch and syslog follow:
patch for cyrus 2.2.10:
--- trunk/imap/mupdate.c
+++ trunk/imap/mupdate.c
@@ -54,6 +54,7 @@
 #include <assert.h>
 #include <syslog.h>
 #include <errno.h>
+#include <stdarg.h>
 #include <netdb.h>
 #include <sys/socket.h>
@@ -347,6 +348,27 @@
 return C;
 }
+
+static void syslog_updatelist(const char *msg, ...)
+{
+va_list argp;
+struct conn *upc;
+int i;
+char buffer[1024];
+
+va_start(argp, msg);
+vsprintf(buffer, msg, argp);
+va_end(argp);
+
+syslog(LOG_DEBUG, "updatelist printout msg=\"%s\"", buffer);
+i = 0;
+for (upc = updatelist; upc != NULL; upc = upc->updatelist_next) {
+   syslog(LOG_DEBUG, "\tupdatelist element #%d\t0x%x\tfd:%d\t%s;", i++, (int) upc, upc->fd, upc->clienthost);
+}
+syslog(LOG_DEBUG, "/updatelist printout");
+}
+
+
 static void conn_free(struct conn *C)
 {
 assert(!C->idle); /* Not allowed to free idle connections */
@@ -1734,8 +1758,10 @@
 /* indicate interest in updates */
 pthread_mutex_lock(&mailboxes_mutex); /* LOCK */
+syslog_updatelist("before cmd_startupdate(%s)", C->clienthost);
 C->updatelist_next = updatelist;
 updatelist = C;
+syslog_updatelist("after cmd_startupdate(C: 0x%x, tag:%s, partial:0x%x)", (int)C, tag, (int) partial);
 C->streaming = xstrdup(tag);
 C->streaming_hosts = partial;
syslog (edited for readability):
19:15:43 mupdate[12921]: login: frontend2 [192.168.115.19] cyrus PLAIN User logged in
19:15:43 mupdate[12921]: updatelist printout msg="before cmd_startupdate(frontend2 [192.168.115.19])"
19:15:43 mupdate[12921]: /updatelist printout
19:15:43 mupdate[12921]: updatelist printout msg="after cmd_startupdate(C: 0x8793940, tag:U01, partial:0x0)"
19:15:43 mupdate[12921]: updatelist element #0   0x8793940   fd:10   frontend2 [192.168.115.19];
19:15:43 mupdate[12921]: /updatelist printout
19:16:02 mupdate[12931]: accepted connection
19:16:02 mupdate[12931]: login: frontend1 [192.168.115.18] cyrus PLAIN User logged in
19:16:02 mupdate[12931]: updatelist printout msg="before cmd_startupdate(frontend1 [192.168.115.18])"
19:16:02 mupdate[12931]: /updatelist printout
19:16:02 mupdate[12931]: updatelist printout msg="after cmd_startupdate(C: 0x844f940, tag:U01, partial:0x0)"
19:16:02 mupdate[12931]: updatelist element #0   0x844f940   fd:10   frontend1 [192.168.115.18];
19:16:02 mupdate[12931]: /updatelist printout
pstree of relevant processes:
  |-cyrus-master,12915 -d
  |   |-mupdate,12921 -m
  |   |-mupdate,12929 -m
  |   |-mupdate,12930 -m
  |   |-mupdate,12931 -m
  |   |-mupdate,12935 -m
  |   |-mupdate,12938 -m
--
Sergio Devojno Bruder


Re: Murder problems, tracked down do updatelist, followup.

2005-02-16 Thread Sergio Devojno Bruder
Sergio Devojno Bruder wrote:
 (...)
This time mupdate master was using a -UP kernel, we tested :
- all boxes already started (backends, master and frontends).
- stop mupdate and frontends.
- mupdate master start, frontend2 start, frontend1 start;
 (...)
AARGH.
My fault: a brown-paper-bag configuration bug.
In /etc/cyrus.conf:
(...)
SERVICES {
   mupdate  cmd="mupdate -m" listen=3905 prefork=6
(...)
cyrus-master started 6 different processes (!!!).
And there I was, wondering why the threads didn't see each other.
I will retire to my insignificance now.
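For readers who hit the same trap: the updatelist lives in a single mupdate process's memory, so the murder master must run exactly one (multi-threaded) mupdate process. A plausible corrected entry is sketched below; the exact recommended prefork value should be checked against the murder installation docs for your version:

```
SERVICES {
   # run a single mupdate master process; concurrency comes from its threads
   mupdate  cmd="mupdate -m" listen=3905 prefork=1
}
```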
--
Sergio Devojno Bruder


ext3 with dir_index

2005-01-04 Thread Sergio Devojno Bruder
While planning a new set of cyrus servers I've come across dir_index, an
ext3 feature that is standard in 2.6 kernels but disabled by default (please
correct me if I'm wrong).
I'm planning to use it. Is anyone using it in production? Any issues?
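One note for anyone else evaluating it (hedged; verify with your distribution's tune2fs/e2fsck man pages): dir_index is a superblock feature flag rather than a mount option, so it has to be switched on per filesystem. The device below is a placeholder:

```
tune2fs -O dir_index /dev/sda5   # set the feature flag on the existing ext3 fs
e2fsck -fD /dev/sda5             # rebuild/optimize existing directories (run unmounted)
```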
--
Sergio Devojno Bruder[EMAIL PROTECTED]
http://haxent.com.br  41 362-5930, 41 8402-2125


Re: Cyrus 2.3 on shared filesystems

2004-11-05 Thread Sergio Devojno Bruder
Ken Murchison wrote:
Sergio Devojno Bruder wrote:
Attila Nagy wrote:
Hello,
(...) The unified approach seems to be simple. The client no longer 
has to be redirected to the given backend using the proxyd, or 
lmtpproxyd (previously called frontend), instead it can turn to any 
of the backends and the backend will know how to deal with that 
connection (serve as local, or proxy to another backend).
What is the content of mailboxes.db in these frontend-enabled backends?
All mailboxes?
Yes.  Local mailboxes specify the partition on which they reside (just 
like a standard backend or single server) and remote mailboxes specify 
the server on which they reside (just like a standard frontend).
I asked because in the past that in particular gave us many problems
(we are using cyrus murder), with lots of 'frontends are not in sync with
master'.

We worked around it by mupdate-enabling our webmail (i.e., our webmail asks
the mupdate master 'where is this account?' and then logs directly into the
backend), so our frontends are actually doing only pop3 and lmtp.

The multi-threaded asynchronous code of the mupdate master handles the load
without missing a beat.

--
Sergio Devojno Bruder[EMAIL PROTECTED]
http://haxent.com.br  41 362-5930, 41 9127-6620


Re: murder problems (mupdate getting lost between frontends and mupdate master)

2004-04-22 Thread Sergio Devojno Bruder
Rob Siemborski wrote:

On Tue, 20 Apr 2004, Sergio Devojno Bruder wrote:
 

output trace from what program in particular? mupdate? on FEs, on master
or on backends?
   

Specifically a protocol trace between the mupdate master and the
frontends.
-Rob
 

Sorry, but we made a workaround for this problem:

Our webmail (the only way to use imap in our configuration) was patched
to consult the mupdate master and connect directly to the correct backend.
If the lookup in mupdate returns no usable response, it connects to the FE's.

The FE's are now only used for pop3 and lmtp; imap and sieve use
the backends directly.

This alone resolved most of our problems.
(Our backends still disconnect from the mupdate master sometimes.)
--
Sergio Devojno Bruder[EMAIL PROTECTED]
http://haxent.com.br  41 362-5930, 41 9127-6620


Re: murder problems (mupdate getting lost between frontends and mupdate master)

2004-04-20 Thread Sergio Devojno Bruder
Rob Siemborski wrote:

On Thu, 15 Apr 2004, Sergio Devojno Bruder wrote:
 

Our murder started to act a little strange some days ago:
(...)
and so on. the mupdate slave on the frontends is looping: connecting, syncing,
losing the connection, connect, sync, lose, and so on.

Has someone already seen this pattern? We are using cyrus 2.2.3 with a skiplist
mailboxes.db.
   

This generally indicates there is some authentication problem or similar
going on (or some other problem on the mupdate master).
Do you have an output trace?

-Rob
 

output trace from what program in particular? mupdate? on FEs, on master 
or on backends?

--
Sergio Devojno Bruder[EMAIL PROTECTED]
http://haxent.com.br  41 362-5930, 41 9127-6620


murder problems (mupdate getting lost between frontends and mupdate master)

2004-04-15 Thread Sergio Devojno Bruder
Our murder started to act a little strange some days ago:

Apr 15 13:17:43 frontend1 mupdate[21845]: successful mupdate connection to master
Apr 15 13:17:43 frontend1 mupdate[21845]: unready for connections
Apr 15 13:17:43 frontend1 mupdate[21845]: synchronizing mailbox list with master 
mupdate server
Apr 15 13:17:58 frontend1 mupdate[21845]: retrying connection to mupdate server in 22 
seconds
Apr 15 13:18:20 frontend1 mupdate[21845]: successful mupdate connection to master
Apr 15 13:18:20 frontend1 mupdate[21845]: unready for connections
Apr 15 13:18:20 frontend1 mupdate[21845]: synchronizing mailbox list with master 
mupdate server
Apr 15 13:18:43 frontend1 mupdate[21845]: retrying connection to mupdate server in 27 
seconds
Apr 15 13:19:10 frontend1 mupdate[21845]: successful mupdate connection to master
Apr 15 13:19:10 frontend1 mupdate[21845]: unready for connections
Apr 15 13:19:10 frontend1 mupdate[21845]: synchronizing mailbox list with master 
mupdate server

and so on. The mupdate slave on the frontends is looping: connecting, syncing,
losing the connection, connect, sync, lose, and so on.

Has someone already seen this pattern? We are using cyrus 2.2.3 with a skiplist
mailboxes.db.

--
Sergio Devojno Bruder[EMAIL PROTECTED]
http://haxent.com.br  41 362-5930, 41 9127-6620


Re: global sieve script?

2004-03-04 Thread Sergio Devojno Bruder
Joe Hrbek wrote:
This was posted in reference to a global sieve script:

http://www.irbs.net/internet/info-cyrus/0112/0133.html

It dates back to 2001.  Is this capability now present in the latest cyrus
package?  I use simon matter's RPM.
If so, this would be very cool.
-j
Sieve scripts are indeed stored in compiled bytecode form in Cyrus 2.2.3 
(2.2.2 too? I don't recall), but the capability of a site-wide Sieve script.. I 
would be interested in such a beast :)

--
Sergio Devojno Bruder[EMAIL PROTECTED]
http://haxent.com.br  41 362-5930, 41 9127-6620


A Cyrus/murder of that size is possible?

2003-09-05 Thread Sergio Devojno Bruder
I'm planning a large mail system with the postfix / lmtp / cyrus in
murder / mysql (users table) configuration.

The system will be used in ISP-pattern: huge number of users
(500k), high traffic of emails arriving by lmtp (15 emails per
second), but low imap/pop3 demand (peaks of 2000 simultaneous
users, more pop3 than imap).

All the configurations that I saw with cyrus talked about
imap/pop3 use; I never saw any published benchmarks/comparisons of
cyrus with MTAs doing local delivery over lmtp|smtp.

I have some questions:

- Using fibre-attached storage on my ia32 backend servers, how many
  users can I squeeze into one box? (i.e., how many emails/s over
  lmtp can I get AND how many concurrent pop3|imap users can one
  box cope with?) (P4's, 2G RAM, with fibre to an EMC);

- I'm toying with the idea of creating a mutant, mupdate-enabled
  postfix that will deliver email directly to the right
  backend, using the murder servers only for pop3 and imap.
  Feasible, or am I getting nuts?

--
Sergio Devojno Bruder   [EMAIL PROTECTED]
Haxent Consultoriahttp://www.haxent.com.br
55 41 9127-6620