Re: De-duping attachments

2010-09-15 Thread Nik Conwell
  Great thread.  Here are some real-world numbers based on our spools 
here at BU.

One of our masters has 4,800 users, 22,000 mailboxes, and is using about 
374G of disk.

Based on the md5 files for these users there are 6,046,363 messages.  If 
I take the first md5 value (the md5 of the message file, if I understand 
this correctly) and sort and uniq, I get 5,891,974 unique messages, so 
deduping all of those would shrink us to 97.4% of the original message 
count.  Assuming an even distribution of message sizes, 374G would drop 
to roughly 364.5G.  Unfortunately not an obvious huge win.
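
(For anyone who wants to repeat the measurement, a rough sketch - the 
md5 list location and which column holds the per-message digest are 
site-specific assumptions here, so adjust for your layout:)

cat /var/imap/md5/* | awk '{print $1}' | sort | uniq | wc -l    # unique
cat /var/imap/md5/* | wc -l                                     # total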

But I think the md5 of the message file includes the headers, which are 
more likely to be unique than the body content.  (Due to legacy support 
for UW IMAP, we often end up routing things differently for users on the 
same master, so the headers of the same message sent to 2 people could 
be different.)

Isn't the easy hack for dedup just looking at the above md5 files and 
then making the appropriate hard links?  This could be done by a nightly 
trawl of the spool space - something like the sketch below.  A bigger 
win would be to separate the headers from the messages, but that's a lot 
more work.
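
(A hedged sketch of that nightly trawl - it assumes message files are 
immutable and that matching md5s mean identical content; paths and 
naming are illustrative:)

#!/bin/sh
# Hash every message file in the spool, then hard-link each duplicate
# to the first copy seen.  cyrus.* metadata files are skipped by the
# [0-9]*. name pattern.
find /var/spool/imap -type f -name '[0-9]*.' -exec md5sum {} + |
sort |
awk 'prev == $1 { print first, $2 }
     prev != $1 { prev = $1; first = $2 }' |
while read keep dup; do
    ln -f "$keep" "$dup"    # replace the duplicate with a hard link
done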

-nik


Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/


Re: choosing a file system

2008-12-31 Thread Nik Conwell

On Dec 30, 2008, at 4:43 PM, Shawn Nock wrote:

[...]

 a scripted rename of mailboxes to balance partition utilization when
 we add another partition.

Just curious - how do you stop people from accessing their mailboxes 
while they are being renamed and moved to another partition?
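
(For reference, the kind of rename in question - cyradm moves a mailbox 
by renaming it to itself with a new partition argument; names here are 
placeholders:)

cyradm --user admin mailhost
rename user/jsmith user/jsmith newpartition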

-nik

Information Technology
Systems Programming
Boston University


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Archiving emails with Cyrus

2008-11-24 Thread Nik Conwell

On Nov 24, 2008, at 10:36 AM, John Madden wrote:

 On Monday 24 November 2008 08:56:35 am Alexandros Gougousoudis wrote:
 There must be a process in cyrus which copies these emails into a
 (zip)-file and/or into a database, to have them somehow accessible.
 Cyrus must do this with the administrator account, because the imap
 credentials of all the users are of course not known to us.  Or we
 install an archive user account which has access to all mailboxes.

 Here's an idea I've been toying with for an upcoming implementation...

 Let's say you create everyone's Inbox/Drafts/etc. mailboxes on your
 reasonably fast (expensive/small?) storage with a relatively low
 mailbox quota.  You then create user.username.archive on a separate
 Cyrus partition, perhaps residing on SATA with a relatively high
 mailbox quota.  Inform your users that to store mail and keep their
 Inbox available they should move it there.  You can then use Cyrus'
 built-in search mechanisms (squat) and have to change very little.

I've been toying with the idea of replacing the cyrus routine(s?) that 
open/read the MESSAGE#. spool files with something that also 
understands gzipped data.  That way I could selectively gzip messages 
based on some algorithm involving message size and lack of activity.  
I haven't yet had the chance to do the analysis on the spool to see 
what sort of space gains I could expect.
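
(A first cut at that analysis could be something like this; the size 
and age thresholds are made up for illustration:)

# total bytes in messages over 64k untouched for 90+ days
find /var/spool/imap -type f -name '[0-9]*.' -size +64k -mtime +90 -ls |
awk '{sum += $7} END {print sum}'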

Nik Conwell
Systems Programming
Boston University


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: breaking into the system through cyrus account ?

2008-06-03 Thread Nik Conwell


On Jun 3, 2008, at 3:10 AM, Rudi Bruchez wrote:


Hello,

I'm using Cyrus on a Debian box, with pop3s.  I found some time ago 
that someone was able to place a spamming tool in the 
/var/spool/cyrus/ directory.  I cleaned it and changed all my 
passwords.  All seemed OK.

Hopefully you are keeping up to date with the Debian OpenSSL and 
OpenSSH security issues:


http://www.debian.org/security/2008/dsa-1571
http://www.debian.org/security/2008/dsa-1576


I figured out this week that an IRC bot was in the same place.  I 
changed my passwords again, and upgraded to the latest Cyrus Debian 
package.  It looks like the cracker gained root access.  I don't have 
the time and window to reinstall my system.  My question would be: 
have you already heard of such break-ins?

The Cyrus account has shell access in passwd.  Is it necessary?  Could 
I set it to /bin/false, and change it back when I want to su to it to 
change something?

Thanks !

Rudi
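
(A hedged sketch of the /bin/false approach Rudi describes - usermod 
and su -s are standard on Linux, but check your distribution:)

usermod -s /bin/false cyrus     # no interactive logins as cyrus
su -s /bin/bash - cyrus         # still get a shell when you need one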


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html

quota bug involving nested quota roots?

2008-05-01 Thread Nik Conwell

I'm running 2.3.8 (Invoca) and see strange quota behavior.  I checked the 
changelog for 2.3.12p1 and found no mention of quota fixes (32-bit).  
Unfortunately I don't have a 2.3.12p1 system to check this out.  Do people 
see similar things on the current version?  Am I doing something wrong 
having nested quotas this way?


My mailbox has quota and usage:

quota -f | grep -E 'user/nik|Quota'

   Quota   % Used     Used  Root
10485760       15  1640288  user/nik


If I set a quota on user/nik/restore (an empty mailbox) and do quota -f, 
my recorded usage changes (sq is our local setquota wrapper):

sq user/nik/restore 1

quota -f | grep -E 'user/nik|Quota'

   Quota   % Used     Used  Root
10485760       13  1442491  user/nik
     100                    user/nik/restore


I did finds on the filesystem and added up the file sizes (add is a 
local helper that sums numbers on stdin):

full=`find /cyrus/master07/spool/n/user/nik -type f -ls | grep -v cyrus\. | awk '{print $7}' | add`; echo full=$full

restore=`find /cyrus/master07/spool/n/user/nik/restore -type f -ls | grep -v cyrus\. | awk '{print $7}' | add`; echo restore=$restore

full=1679655197
restore=0

full/1024 matches the original 1640288 used, so the problem seems to be 
quota -f not traversing correctly when there is another, lower quota 
root.

If I remove the user/nik/restore quota and do quota -f, the value 
matches the original again:

sq user/nik/restore remove 1

quota -f | grep -E 'user/nik|Quota'

   Quota   % Used     Used  Root
10485760       15  1640288  user/nik


-nik
Nik Conwell
Office of Information Technology
[EMAIL PROTECTED]


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: quota bug involving nested quota roots?

2008-05-01 Thread Nik Conwell

On Thu, 1 May 2008, Alain Spineux wrote:

  The full/1024 matches the original 1640288 used, so the problem seems to be
  quota -f not correctly traversing when there is another lower quota root.

 traversing?  Hmm.  Is it possible that cyrus stops counting as soon as
 it finds another quota root?
 What happens if you call your restore folder, or z?


Looks like it quits when it hits the nested quota root.

Unfortunately I don't have a 2.3.12p1 system to check this out.  Do 
people see similar things on the current version?  Am I doing something 
wrong having nested quotas this way?





quota -f | grep -E 'user/nik|Quota'

   Quota   % Used     Used  Root
10485760       15  1640491  user/nik

setquota user/nik  1
quota -f | grep -E 'user/nik|Quota'

   Quota   % Used     Used  Root
10485760        0      241  user/nik
     100                    user/nik/

setquota user/nik/ 1
quota -f | grep -E 'user/nik|Quota'

   Quota   % Used     Used  Root
10485760       15  1640491  user/nik
     100                    user/nik/


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Miserable performance of cyrus-imapd 2.3.9 -- seems to be locking issues

2008-02-29 Thread Nik Conwell

On Feb 28, 2008, at 4:38 PM, Jeff Fookson wrote:

 is about 200GB.  There are typically about 200 'imapd' processes at a
 given time and a hugely varying number of 'lmtpds' (from about 6 to
 many hundreds during times of greatest pathology).  System load is
 correspondingly in the 2-15 range, but can spike to 50-70!

Typically when deadlocks free up you get load spikes as work can now 
progress.  It implies one thing was holding the lock for a long time, 
that thing itself probably being impeded by something else.  If there 
were high activity from many things hitting the lock, you wouldn't 
expect to see spikes; the system might even look idle as everything 
just waits for the lock.

 waits of upwards of 1-2 minutes to get a write lock as shown by the
 example below (this is from a trace of an 'lmtpd')

 [strace -f -p 9817 -T]
 9817  fcntl(10, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0 <84.998159>
[...]
 Can anyone suggest what we might do next to debug the problem further?

Good job with the strace.  Now figure out what fd 10 is, either via 
lsof or earlier in the strace output (look for '= 10', which should 
show the open() that created it).

Then install lslk and figure out who is holding the lock on that file 
and for how long.  Then look at what that process is doing for so long 
(strace again).
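
(Roughly, with the PID from the strace above; the rest is illustrative:)

lsof -p 9817        # the NAME column for FD 10u shows which file it is
lslk                # lists lock holders; match the file from lsof
cat /proc/locks     # alternative on Linux if lslk isn't packaged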

-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: tls self-signed certificates

2007-10-18 Thread Nik Conwell

On Oct 17, 2007, at 9:55 PM, Craig White wrote:

 OK - what I discovered was that TLS works with this setup (telnet
 localhost 143).

 IMAP/SSL doesn't seem to work when you 'telnet localhost 993', but on
 a client that is forgiving of self-signed certificates it does
 actually work.  So much for my testing methodology.

Try this to access an IMAP/SSL server via the command line:

openssl s_client -connect hostname:port

-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: UC Davis Cyrus Incident September 2007

2007-10-17 Thread Nik Conwell
On Oct 17, 2007, at 1:36 PM, Andrew Morgan wrote:

 On Tue, 16 Oct 2007, Vincent Fox wrote:

 So here's the story of the UC Davis (no, not Berkeley) Cyrus
 conversion.

 [snip]

 This is a fascinating story, so please keep us all posted with your
 findings!

I second this.  Thanks for sharing, Vincent.  We are currently planning 
to convert ~40K UW accounts to Cyrus.

We have been a little more successful distributing the UW load, so we 
are not quite as desperate as you were.  :)

-nik

Nik Conwell
Boston University


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: GSSAPI Murder authentication and The context has expired on long proxyd sessions

2007-09-12 Thread Nik Conwell

On Sep 11, 2007, at 3:00 PM, Paul M Fleming wrote:

 I had the same problems.  If you google for this you'll find a
 discussion regarding how SASL context expirations should be handled.
 Heimdal allows expired contexts to be used after expiration.  MIT
 does not.

Thanks.  I had seen your posting 
http://cyrusimap.web.cmu.edu/archive/message.php?mailbox=archive.info-cyrus&msg=38716 
but saw no responses, so I wanted to bring it up again.

I just did some more googling on sasl gssapi context expire and that  
turned up some more good stuff.  Thanks.

 My opinion is this behavior is broken in SASL; unfortunately I'm not
 sure if it can be fixed without major changes to the SASL library.  I
 know the openldap list discussed workarounds to deal with an expired
 context.  Lowering the client timeout levels in imap can also help,
 but you still get deadlocks between front and back ends, which users
 notice as a client connection lockup.  I did not attempt to change
 the code for SASL or IMAP, but handling a context expired event as a
 fatal error makes sense when running MIT Kerberos.  My guess is CMU
 doesn't have this issue because they use Heimdal.

 My solution was to change the keys involved in murder to have a
 25-hour max life and change the KDC to allow 25h tickets.  Then,
 instead of a periodic event in cyrus.conf, use an at event to renew
 the ticket at 2:00 AM when users are less likely to notice.  The
 Cyrus timeouts kick in before start of business and most clients
 (Netscape, Thunderbird, etc.) reconnect automatically and the user
 doesn't notice a thing, but you still have to deal with the log
 messages.  This solution solved the deadlock issues for my clients.
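
(As a sketch, the nightly renewal could look like this in cyrus.conf; 
the event name, keytab path, and principal are hypothetical:)

EVENTS {
  # renew the murder service ticket at 02:00, within the 25h ticket life
  renewtgt cmd="kinit -k -t /etc/cyrus.keytab cyrus/mailhost" at=0200
}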

Interesting about the cyrus timeouts.  The clients I'm seeing this 
problem with (Pine and Outlook) are typically checking for mail every 
couple of minutes, so the session never times out.

Just curious - why didn't you decide to go with some other auth scheme 
instead?  (Having passwords embedded in config files doesn't appeal to 
me, though.)

For the list in general - what are you all using for Murder 
authentication?  Heimdal?  Certs?  Passwords in configs?

-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: tuning cyrus?

2007-08-27 Thread Nik Conwell
This is typically an Outlook problem.  The client runs various 
filters, and possibly has performance issues on local disk (look for 
disk-light activity on the PC) as it updates its caches.

Turn on Cyrus telemetry logging for the particular user (in the log 
subdirectory under the server's configdirectory, mkdir username and 
make sure the cyrus account has write access).  Check the timestamps 
at the beginning of each request.  Typically you'll see that Outlook 
requests headers (FETCH) and then idles for 5, 10, perhaps 30 seconds 
before requesting more.
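
(Concretely, assuming the common /var/lib/imap configdirectory:)

mkdir /var/lib/imap/log/username
chown cyrus /var/lib/imap/log/username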

The solution to all of this is to leave Outlook running all the time.

-nik




On Aug 27, 2007, at 10:44 AM, Mike Eggleston wrote:

 One user mentions that it still takes several minutes in the morning
 to 'fetch headers'.  This user has Outlook 2003 on MS XP as the mail
 client.  I have fewer than 20 users using cyrus-imapd 2.3.1 on a
 Fedora Core 5 box with current rpms.

 Does anyone have any tuning hints?  Some way to speed things up for
 this user?  No one else has mentioned any problems.

 Mike
 
Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: [INFO-CYRUS] Re: tuning cyrus?

2007-08-27 Thread Nik Conwell

On Aug 27, 2007, at 11:14 AM, Mike Eggleston wrote:

[...]

 Where is the server logging sub-directory?  I do not see one in
 /var/lib/cyrus-imapd (those are all binaries), nor do I see one in
 /var/log.

It should be a subdirectory called log in whatever directory 
configdirectory is set to in your imapd.conf.  You may need to create 
it depending on how your server was installed.  We're using the Simon 
Matter Invoca rpm at 2.3.8 and IIRC the log subdirectory is created by 
default.  (We've modified our configs so that we no longer use 
/var/lib/cyrus-imapd but instead have a directory for each instance of 
the server.)

The logging process creates a log file log/[username]/[pid] for each 
imap process for that [username], provided the [username] directory 
exists and is writable.

-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Cyrus with a NFS storage. random DBERROR

2007-06-08 Thread Nik Conwell

On Jun 8, 2007, at 11:36 AM, Paul Dekkers wrote:

 Dmitriy Kirhlarov wrote:

 Which reminds me... isn't it strange that an unfinished logfile is
 removed when the cyrus master (or was it the sync_client -r) is
 restarted?  It would make sense to me if the file were renamed /
 stored for later running through sync_client -f.  (Or that
 sync_client -r read this file too before it starts rolling.)

I agree.  I think sync_client should process any pending log-* in the 
sync directory when it's later restarted.  (Or at least have an option 
to do that.)

Do people run sync_client in the SERVICES section rather than START?  
The install-replication docs say to put it in START.  If my replica 
goes away for a little while, sync_client exits, and then I have to 
restart it manually and process any pending logs.  It would be nice if 
it just started automatically and picked up where it left off.
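
(For reference, the START entry is something like this - check 
install-replication for the exact path on your build:)

START {
  # rolling replication
  syncclient  cmd="sync_client -r"
}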

-nik


Nik Conwell
Boston University


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: backup imapd with TSM

2007-05-24 Thread Nik Conwell


On Thu, 24 May 2007, Hans Moser wrote:


Hi!

Does anyone actually back up and restore Cyrus IMAPd with Tivoli Storage 
Manager (TSM)?

As far as my backup admin told me, the restore tool only shows file 
names; I cannot see file content.  So if a user comes and tells me "I 
lost my email x from user y, which arrived 2 weeks ago", I (obviously) 
cannot ask him for the filename, and cannot search the backup store for 
user y's email address. :(

A workaround would be to restore the complete folder to /tmp/ and grep 
there for the right mail file.  Hm...


I have a prototype system I'm working on that does backups with TSM (via 
LVM snapshot).

For restores, I have a prototype script that restores the entire mailbox 
in question to a subdirectory of the user's main mailbox, names it 
RESTORE.MAILBOX.MMDD, and then does a cyrus reconstruct -r -f to make 
the mailbox visible to cyrus.

The idea being that if somebody loses something, we restore the entire 
mailbox and the user can figure out which e-mails they want and move 
them back to the right place themselves.  Something will automatically 
delete the RESTORE.MAILBOX.MMDD after a couple of days.

Eventually I plan to put a web GUI in front of the restore request so 
everything can happen automatically.
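
(In outline, the restore side is something like this - the mailbox name 
and spool path are placeholders, and the TSM restore step itself is 
elided:)

# put the restored copy under the user's mailbox, then tell Cyrus
mv /restore/user/nik /var/spool/imap/n/user/nik/RESTORE.INBOX.MMDD
su cyrus -c 'reconstruct -r -f user.nik.RESTORE.INBOX.MMDD'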


-nik

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Cyrus 'cluster' general upgrade strategy?

2007-05-15 Thread Nik Conwell
For those with medium- to large-scale murders or other glued-together 
clusters of cyrus servers, what's the general strategy for upgrading?

Do you take downtime for the entire cluster and upgrade, or do you roll 
through upgrades of the parts?

The latter would be prudent, except that I discovered you can't 
replicate from 2.3.7 to 2.3.8 (CREATE has an extra argument at 2.3.8), 
so it wouldn't surprise me if mixing versions like this in a murder is 
generally considered a bad idea.


-nik
Nik Conwell    Information Technology    Boston University

[EMAIL PROTECTED]

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Replication speeds?

2007-05-10 Thread Nik Conwell


On May 9, 2007, at 2:01 PM, Wesley Craig wrote:

 Obviously looking at more iostat information would give a better
 idea, but I'd estimate that you are NOT I/O bound.  Sorry I can't
 give you absolute numbers from UM, but I can share a patch that we
 wrote that we believe has increased sync throughput substantially,
 as evidenced by the lack of a sync backlog, which we were getting
 before we added the patch.


Thanks for the info & patch.  I applied it to the 2.3.7 test system but 
saw no appreciable speed increase.

Did it help you with both large replications (I'm testing with a single 
1.1G user) and rolling replication?

ttcp shows the nets can do about 8.8MB/sec.

-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Recomendations for a 15000 Cyrus Mailboxes

2007-05-10 Thread Nik Conwell


On Apr 11, 2007, at 8:37 PM, Bron Gondwana wrote:


 As for complexity?  It's on the cusp.  We've certainly had many more
 users on a single instance before, but we prefer to keep under 10k
 users per Cyrus instance these days for quicker recoverability.  It
 really


Hi - just a clarification question.  When you say 10k users per Cyrus 
instance, and you mentioned in an earlier message that each machine 
hosts multiple (in the teens) stores of this size, does that include 
the replicas?  So for example, one of your xSeries boxes might host 16 
instances, 8 master and 8 replica, so the box would master about 80k 
users and provide replica backups for another 80k users?

Thanks for the info.  I'm looking for sizing hints as we plan to move 
our 40,000+ UW IMAP users (spread over 7 xSeries 346/3650 and 6 
RS6000) to Cyrus.


-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: load balancing at fastmail.fm

2007-05-10 Thread Nik Conwell


On Jan 12, 2007, at 10:43 PM, Rob Mueller wrote:

 Yep, this means we need quite a bit more software to manage the
 setup, but now that it's done, it's quite nice and works well.  For
 maintenance, we can safely fail all masters off a server in a few
 minutes, about 10-30 seconds a store.  Then we can take the machine
 down, do whatever we want, bring it back up, wait for replication to
 catch up again, then fail any masters we want back on to the server.


Just curious how you do this - do you just stop the masters and then 
change the proxy to point to the replica?  Webmail users shouldn't 
notice this, but don't the desktop IMAP clients notice?



Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Replication and failover

2007-05-10 Thread Nik Conwell


On Jan 18, 2007, at 5:35 PM, Rob Mueller wrote:


 Attached are our operations group's notes on the subject.  They make
 reference to the tool we use to manage the OS of the machines
 (radmind), but it should be pretty clear what they are talking about
 without any radmind knowledge.


 As an FYI, we have a similar procedure to this; the main differences
 are:

 1. We don't change the DNS.  Instead we give each machine a primary
 IP address, but we also create IP addresses for cyrusXmaster and
 cyrusXreplica names (where X is a number for each machine).  When we
 swap roles, we rebind the different IPs to the particular machines
 and send ARPs to clear the router table, rather than changing the
 DNS.  This means you can always access the master as cyrusXmaster
 from every machine without having to worry about DNS getting out of
 sync.

 2. Every machine has cyrus-master.conf, cyrus-replica.conf,
 imapd-master.conf and imapd-replica.conf.  We just symlink cyrus.conf
 and imapd.conf to the appropriate file depending on what mode the
 machine is currently in.


Do you have separate IP addresses for each instance of cyrus on the 
machine as well, or just for the machine itself?  If just the machine, 
what 'names' does the front end know the back-end instances by?

FWIW we use IP names for our 17 back-end UW mailstores...

Thanks.
-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Replication speeds?

2007-05-09 Thread Nik Conwell

What sort of rates are you all getting for replication?

At 2.3.7, for a manual sync_client run for one user, I'm seeing 
~35MB/minute across a 100Mb net to a Linux SW RAID 1 pair of U320 disks.


Is this speed typical or abysmal?

The disks appear to be holding me back:

Device:  rrqm/s  wrqm/s   r/s     w/s  rsec/s   wsec/s  rkB/s    wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
dm-0       0.00    0.00  0.00  429.96    0.00  3439.65   0.00  1719.83      8.00      7.24  16.75   1.86  80.01


Production (40,000+ users) would probably be FC and gigE...


Nik Conwell
Boston University
[EMAIL PROTECTED]

Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Bug? xfermailbox to murder front-end is broken but using rename to xfer a mailbox works just fine.

2007-05-07 Thread Nik Conwell
I can (via cyradm) issue renames to a frontend and move people from 
backend to backend just fine, but if I issue xfermailbox to the 
frontend to move people, it hangs and I end up with a frontend 
mboxlist that's messed up (it points to the frontend).

Is this a bug, or am I just being stupid issuing commands to the 
frontend?  Issuing rename or xfermailbox to the appropriate backends 
works just fine.



Example:

Mailbox renames that move a mailbox to a different back end work fine 
when issued to the front-end server:


cyradm frontend
rename user/nik user/nik backend02
rename user/nik user/nik backend01
[all successful]

But, if I do xfermailbox to a front end, it hangs:

cyradm frontend01
xfermailbox user/nik backend02
[hangs]

The front end logs show:

could not dump mailbox in backend01 (unknown error)
could not move mailbox: user.nik, dump_mailbox() failed


and then a ctl_mboxlist -d shows:


frontend:
user.nik  1   frontend!backend01  nik   lrswipkxtecda
[normally it should be backendNN!default]

backend01:
user.nik  0   default nik   lrswipkxtecda
backend02:
user.nik  0   default nik   lrswipkxtecda


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


BUG? File descriptor use in cmd_append (for MULTIAPPEND) results in many open files

2006-12-11 Thread Nik Conwell


I'm using the UW mailutil to transfer mailboxes from UW to Cyrus 
(2.3.7).  It uses APPEND, specifically multiappend (a single APPEND 
with multiple messages being appended).  Cyrus-imapd handles this 
multiappend by creating a stage file for each appended message and 
leaving the file descriptor open.  The problem is that after 240 
messages we run out of file descriptors, and the open() of the next 
stage file fails with EMFILE.  I updated /etc/cyrus.conf to make the 
max fds 1024 (AFAICT the kernel max), which helped somewhat but not 
for larger mailboxes with > 1008 messages.


Shouldn't the multiappend/append be closing the FD for each stage file 
and then reopening it later as needed?

Do people just tweak their kernels to have some insane number of FDs 
available in order to compensate for this?

Or do people not use mailutil, and instead use something that issues 
multiple APPEND commands rather than a single APPEND with multiple 
e-mails?


-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: BUG? File descriptor use in cmd_append (for MULTIAPPEND) results in many open files

2006-12-11 Thread Nik Conwell


On Dec 11, 2006, at 12:29 PM, Andrew Morgan wrote:


On Mon, 11 Dec 2006, Nik Conwell wrote:

I'm using the UW mailutil to transfer mailboxes from UW to Cyrus  
(2.3.7).  It uses APPEND, specifically multiappend (single APPEND  
with multiple messages being appended).  Cyrus-imapd handles this  
multiappend by creating stage files for each appended message and  
leaving the file descriptor open.  The problem is that after 240  
messages, we run out of file descriptors and so an open() of the  
next stage file fails with EMFILE.  I updated /etc/cyrus.conf to  
make the max fds be 1024 (AFAICT kernel MAX) which helped somewhat  
but not for larger mailboxes with > 1008 messages.


Shouldn't the multiappend/append be closing the FD for each stage  
file and then reopening it later as it needs it?


Do people just tweak their kernels to have some insane number of  
FDs available in order to compensate for this?


Or, do people not use mailutil and instead use something that  
issues multiple append commands rather than a single append with  
multiple e-mails?


 We run with a much, much larger number of file descriptors here.
 I've increased the system limit to around 200k (/proc/sys/fs/file-max
 on linux).  This is for the day-to-day running of Cyrus, so I don't
 know if you would need a higher limit for running mailutil (but I
 doubt it).

 In practice, each of my backends has only used a maximum of around
 12k file descriptors, but I'd hate to run out!  :)


That's for the entire system though, right?  I'm running into a 1024 
limit per process; namely, the cyrus imap server process has all the 
appended stage files open.  Am I missing something fundamental here?  
(I probably am, because I would have figured people would have run 
into this issue already...)


BTW - my /proc/sys/fs/file-max has 406572.


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: BUG? File descriptor use in cmd_append (for MULTIAPPEND) results in many open files

2006-12-11 Thread Nik Conwell


On Dec 11, 2006, at 1:02 PM, Rich Graves wrote:


Nik Conwell wrote:
multiple messages being appended).  Cyrus-imapd handles this  
multiappend by creating stage files for each appended message and  
leaving the file descriptor open.  The problem is that after 240  
messages, we run out of file descriptors and so an open() of the  
next stage file fails with EMFILE.  I updated /etc/cyrus.conf to  
make the max fds be 1024 (AFAICT kernel MAX) which helped somewhat  
but not for larger mailboxes with > 1008 messages.


Excellent troubleshooting. I'm getting worried that I have a  
problem and don't know it. Did you get useful error messages (that  
we can search for, too)?


The server logs:

  IOERROR: creating message file /var/spool/imap/stage./828-1165849099-1008: File exists

The file is named pid-timestamp-stage_sequence_number.

The "File exists" error is bogus.  With strace you see the real error:

open("/var/spool/imap/stage./828-1165849099-1008", O_RDWR|O_CREAT|O_TRUNC, 0666) = -1 EMFILE (Too many open files)

and then the code goes on to try a mkdir, so it loses the errno:

mkdir("/var/spool/imap/stage./", 0755)  = -1 EEXIST (File exists)

Following through the rest of the trace, you see everything being 
unwound:

close(1023)                             = 0
munmap(0xb652b000, 4096)                = 0
unlink("/var/spool/imap/stage./828-1165849099-1007") = 0
close(1022)                             = 0
munmap(0xb652c000, 4096)                = 0
unlink("/var/spool/imap/stage./828-1165849099-1006") = 0
close(1021)                             = 0

[...]




 The man page for cyrus.conf suggests that the default is 256, but
 that the integer value is optional.  So if maxfds does not appear in
 cyrus.conf at all, is the default 256, or is it unlimited (up to
 ulimit)?  Looks like the former to me.  Ick.


Looks like the former to me as well, since it was crapping out at 240 
stage files (a bunch of files were already open when it started).



Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: BUG? File descriptor use in cmd_append (for MULTIAPPEND) results in many open files

2006-12-11 Thread Nik Conwell


On Dec 11, 2006, at 1:27 PM, Andrew Morgan wrote:


On Mon, 11 Dec 2006, Nik Conwell wrote:


On Dec 11, 2006, at 12:29 PM, Andrew Morgan wrote:


On Mon, 11 Dec 2006, Nik Conwell wrote:
 I'm using the UW mailutil to transfer mailboxes from UW to Cyrus
 (2.3.7).  It uses APPEND, specifically multiappend (single APPEND
 with multiple messages being appended).  Cyrus-imapd handles this
 multiappend by creating stage files for each appended message and
 leaving the file descriptor open.  The problem is that after 240
 messages, we run out of file descriptors and so an open() of the
 next stage file fails with EMFILE.  I updated /etc/cyrus.conf to
 make the max fds be 1024 (AFAICT kernel MAX) which helped somewhat
 but not for larger mailboxes with > 1008 messages.

 Shouldn't the multiappend/append be closing the FD for each stage
 file and then reopening it later as it needs it?

 Do people just tweak their kernels to have some insane number of
 FDs available in order to compensate for this?

 Or, do people not use mailutil and instead use something that
 issues multiple append commands rather than a single append with
 multiple e-mails?

 We run with a much, much larger number of file descriptors here.
 I've increased the system limit to around 200k (/proc/sys/fs/file-max
 on linux).  This is for the day-to-day running of Cyrus, so I don't
 know if you would need a higher limit for running mailutil (but I
 doubt it).

 In practice, each of my backends has only used a maximum of around
 12k file descriptors, but I'd hate to run out!  :)


That's for the entire system though, right?  I'm running into a 1024 
limit per process, namely the cyrus imap server process has all the 
appended stage files open.  Am I missing something fundamental here?  
(I probably am because I would have figured people would have run into 
this issue already...)


BTW - my /proc/sys/fs/file-max has 406572.


Ah, then you just need to up the ulimits in your cyrus init script.  
Something like:


# Crank up the limits
ulimit -n 209702
ulimit -u 2048
ulimit -c 102400


That did the trick.  Thanks a lot.
-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: how to backup a cyrus server?

2006-12-05 Thread Nik Conwell


On Dec 4, 2006, at 6:42 PM, Andrew Morgan wrote:


On Mon, 4 Dec 2006, Rafael Mahecha wrote:


[...]
I used to use Tivoli to back up the old server (which was OK since no 
databases were involved)... but since cyrus has databases and such, I 
am concerned about file locking and database corruption.

What is the best way to back up the server?  Shut down cyrus for a 
while, then snapshot it, and then back up to Tivoli?  Or should I just 
be able to back up the running server directly to Tivoli?

What other software can I use for backups?


Check out the Cyrus Wiki page at:

  http://cyrusimap.web.cmu.edu/twiki/bin/view/Cyrus/Backup

Most people just make a regular backup of the filesystem using 
whatever tools they normally use.  The only trick is to export your 
mailboxes.db to a flat text file in order to back it up (which you 
should be doing periodically anyway).

[...]

We're using TSM for backups.  The Wiki notes using LVM snapshots, so we 
ended up doing that.  We have a pre-backup script that does ctl_mboxlist 
-d for a text mailboxes file, ctl_cyrusdb -c to checkpoint, a sync, and 
then an lvcreate --snapshot --size 10G --name lv_cyrus_snapshot 
/dev/vg_cyrus/lv_cyrus.  We then mount the snapshot and back it up 
normally with TSM.  We haven't had a lot of restore experience, but 
testing worked out OK.
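
(The pre-backup script boils down to something like this; the dump 
target and mount point are illustrative:)

ctl_mboxlist -d > /var/lib/imap/mailboxes.flat   # flat-text mailboxes.db
ctl_cyrusdb -c                                   # checkpoint the cyrus DBs
sync
lvcreate --snapshot --size 10G --name lv_cyrus_snapshot /dev/vg_cyrus/lv_cyrus
mount /dev/vg_cyrus/lv_cyrus_snapshot /mnt/backup   # TSM then backs this up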



Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: how to backup a cyrus server?

2006-12-05 Thread Nik Conwell
I chose that number based on a scientific total lack of clue.  Also, it 
was the nearest round number given the space left over in the cyrus VG 
(~68G).  I take it I overestimated a little.  :)



On Dec 5, 2006, at 9:02 AM, Guus Leeuw jr. wrote:

I am just wondering why in the world you are using 10G for your 
snapshot?

This is 10G worth of snapshot bitmap + changes to the source while the 
snapshot is active.  Hence you either have a *large* source, or a very 
*active* mail system...?

Just my $0.02,
Guus


-Original Message-
From: [EMAIL PROTECTED] [mailto:info-cyrus-
[EMAIL PROTECTED] On Behalf Of Nik Conwell
Sent: 05 December 2006 12:05
To: Cyrus User's Mailing List
Subject: Re: how to backup a cyrus server?

ctl_mboxlist -d for a text mailboxes file, ctl_cyrusdb -c to
checkpoint, a sync, and then a lvcreate --snapshot --size 10G --name
lv_cyrus_snapshot /dev/vg_cyrus/lv_cyrus.  We then mount the snapshot



Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: timeouts when connecting to imap server

2006-12-01 Thread Nik Conwell


On Dec 1, 2006, at 12:29 PM, Timo Veith wrote:

[...]

Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.

And I can wait and wait...

This is the point where I start wondering what the hell cyrus is doing 
that it takes so long to answer.

I started the master daemon with -D and exported CYRUS_VERBOSE=1, but I 
saw no log messages that helped me.  At least they don't sound critical 
to me.

Is there anything I should be looking for?


Have you tried strace -tt -p <pid> on the master to see if it gets hung 
up somewhere?  Does netstat --listening or netstat --tcp show a large 
listen queue for it?



Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Captive mailbox in Cyrus IMAP?

2006-11-30 Thread Nik Conwell


On Nov 29, 2006, at 4:17 PM, Greg A. Woods wrote:

The only thing I don't like is the use of a file in the filesystem as a 
flag instead of a proper configuration file whose contents can be 
version controlled and more easily backed up, restored, documented, 
centralized, shared, and understood.  I.e. tables in files are much 
better suited to this use.  (unless perhaps you happen to have a brain


I go back and forth on this.  I used to be a file guy, but recently 
I've been automating some Linux stuff, and I've been swayed by the ease 
of placing a file in a directory without the passes through a file, the 
locking, the update-only-that-entry logic, and the file-management 
utilities that a monolithic file brings along with it.  (That said, I 
have a perl program I use for automated additions to monolithic files 
(fstab, exports, whatever), allowing for example fstab.[whatever] to be 
easily appended to and removed from fstab.)  For version control we 
have a convention where old versions of files are named filename.MMDD, 
which usually provides enough breadcrumbs.  Let's hope we don't adopt 
usernames of that form...

In this case I'm just being lazy - it was easier to throw something 
together that checks for the presence of a file than to write code to 
parse a config file (I'd want to allow comments and have decent error 
messages for parse errors).  I've been spoiled by Perl, such that any 
ad hoc parsing and string handling in C is painful.


Did you do anything about the seen state of the unread message?  Maybe 
I'm just ignorant of how a read-only inbox will behave, though -- 
perhaps the message will always appear new (and a POP client will 
always download it anew too) and so nothing special need be done?


Nothing special on the seen state, so it's just how it behaves for 
anything with an ACL of lr.  My experience with Apple Mail is that the 
e-mail shows up as new and unread each time.



Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


Re: Captive mailbox in Cyrus IMAP?

2006-11-29 Thread Nik Conwell
Thanks to all for the suggestions.  They were very good, so I ignored 
them.  :)

I've patched imapd.c (cmd_login and cmd_authenticate) so that the 
presence and contents of {config_dir}/captive/{username} indicate the 
actual user that should be logged in (provided it begins with 
"disabled").  So for example if /var/lib/imap/captive/smith contains 
"disabled-archiving", then when smith logs in, it is really taken as 
disabled-archiving logging in.  disabled-archiving has previously been 
primed with a message and has an ACL of lr to prevent updates.
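
(Setting up a captive user then looks something like this - the admin 
name is a placeholder, and which identity gets the lr ACL depends on 
your setup:)

echo disabled-archiving > /var/lib/imap/captive/smith
cyradm --user admin mailhost
setaclmailbox user/disabled-archiving disabled-archiving lr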


I've tried swapping things back and forth with Apple Mail and Mulberry, 
and things seem to work OK.  I don't know how this will work with a 
murder.

If anybody wants the patch (pretty small) I can send it somewhere 
appropriate.


Thanks.
-nik


Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html