Re: [Dovecot] Dsync and compressed mailboxes

2012-01-13 Thread Dovecot-GDH
The dsync process will use whatever configuration file it is pointed at.

The best thing to do is to set up a separate instance of Dovecot with
compression enabled (really not that hard to do) and point dsync at that
separate instance's configuration.

Mailboxes written by dsync will be compressed.
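
For example, the separate configuration might enable the zlib plugin like this (the config path, destination path and compression level are illustrative, not from the original post):

  # /etc/dovecot/dovecot-compress.conf -- hypothetical config for the compressing instance
  mail_plugins = $mail_plugins zlib
  plugin {
    zlib_save = gz        # compress newly saved mails with gzip
    zlib_save_level = 6   # 1..9, trade-off between speed and size
  }

  # Point dsync at that configuration so the destination mailbox is written compressed
  dsync -c /etc/dovecot/dovecot-compress.conf -u $USER mirror mdbox:/path/to/compressed_mdbox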

On Jan 13, 2012, at 5:59 AM, Joseba Torre wrote:

> Hi,
> 
> I will begin two migrations next week, and in both cases I plan to use 
> compressed mailboxes with mdbox format. But at the last minute one doubt has 
> appeared: is dsync aware of compressed mailboxes? I'm not sure if
> 
> dsync -u $USER mirror mdbox:compressed_mdbox_path
> 
> works, or if I have to use something else (I guess that with a running 
> dovecot dsync backup should work).
> 
> Thanks.



Re: [Dovecot] Compressing existing maildirs

2011-12-28 Thread Dovecot-GDH
The cleanest (though not necessarily simplest) way to go about this would be to 
use dsync to create a new maildir and incrementally direct traffic to a 
separate Dovecot instance.

Unless you have a legacy application that relies on maildir, switching to mdbox 
would be a good idea too.

I expect that Dovecot compression is something that can "just be turned
on", but to guard against any possible issue I chose to migrate mailboxes in
batches using the approach mentioned above.
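
A minimal sketch of that batched approach (the user-list file, config path and destination location are hypothetical):

  # Convert one batch of users from maildir to compressed mdbox with dsync,
  # using a configuration that has the zlib plugin enabled.
  while read user; do
    dsync -c /etc/dovecot/dovecot-compress.conf -u "$user" \
      mirror mdbox:/var/vmail-mdbox/"$user"
  done < batch-01.txt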

On Dec 24, 2011, at 7:20 AM, Jan-Frode Myklebust wrote:

> I've just enabled zlib for our users, and am looking at how to compress
> the existing files. The routine for doing this at
> http://wiki2.dovecot.org/Plugins/Zlib seems a bit complicated. What do
> you think about simply doing:
> 
>   find /var/vmail -type f -name "*,S=*" -mtime +1 -exec gzip -S Z -6 '{}' +
> 
> 
> I.e. find all maildir-files:
> 
>   - with size in the name ("*,S=*")
>   - modified before I enabled zlib plugin
>   - compress them 
>   - add the Z suffix
>   - keep timestamps (gzip does that by default)
>   
> 
> It's of course racy without the maildirlock, but are there any other
> problems with this approach?
> 
> 
> -jf



Re: [Dovecot] lmtp panic in proxy lmtp director

2011-12-05 Thread Dovecot-GDH
This happens when the LMTP proxy doesn't receive a response from the back-end
LMTP process within a certain amount of time.

That is typically caused either by very low I/O performance or by NFS locks.
Do an strace on your back-end LMTP processes. If you see that these processes
are waiting on NFS locks and you are using NFSv3, you should move over to
NFSv4.
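
For example (the process-name pattern is illustrative), attach strace to one of the back-end LMTP processes and watch for blocking lock calls:

  # List the back-end LMTP processes, then trace one of them
  pgrep -lf 'dovecot/lmtp'
  strace -f -tt -p <PID> 2>&1 | grep -E 'fcntl|flock'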


On Dec 5, 2011, at 6:26 AM, Xavier Pons wrote:

> Hi, we are getting some core dumps with signal 6 in lmtp on a Dovecot
> director proxy server, like this:
> 
> Dec  5 14:31:51 sproxy1 dovecot: lmtp(2): Panic: file lmtp-proxy.c: line 
> 376 (lmtp_proxy_output_timeout): assertion failed: (proxy->data_input->eof)
> Dec  5 14:31:51 sproxy1 dovecot: lmtp(2): Error: Raw backtrace: 
> /usr/lib64/dovecot/libdovecot.so.0() [0x363323d99a] -> 
> /usr/lib64/dovecot/libdovecot.so.0() [0x363323d9e6] -> 
> /usr/lib64/dovecot/libdovecot.so.0(i_error+0) [0x3633216f8f] -> 
> dovecot/lmtp() [0x406e57] -> 
> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handle_timeouts+0xd4) 
> [0x3633248ff4] -> 
> /usr/lib64/dovecot/libdovecot.so.0(io_loop_handler_run+0x5b) [0x3633249bdb] 
> -> /usr/lib64/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x3633248c58] -> 
> /usr/lib64/dovecot/libdovecot.so.0(master_service_run+0x13) [0x3633236fc3] -> 
> dovecot/lmtp(main+0x154) [0x403f84] -> 
> /lib64/libc.so.6(__libc_start_main+0xfd) [0x373cc1ec5d] -> dovecot/lmtp() 
> [0x403d69]
> Dec  5 14:31:51 sproxy1 abrt[30067]: saved core dump of pid 2 
> (/usr/libexec/dovecot/lmtp) to 
> /var/spool/abrt/ccpp-1323091911-2.new/coredump (1368064 bytes)
> Dec  5 14:31:51 sproxy1 dovecot: master: Error: service(lmtp): child 2 
> killed with signal 6 (core dumped)
> 
> Do we have something misconfigured, or is it a bug in this Dovecot version?
> 
> our doveconf -n is:
> # 2.0.15: /etc/dovecot/dovecot.conf
> # OS: Linux 2.6.32-71.29.1.el6.x86_64 x86_64 CentOS Linux release 6.0 (Final)
> auth_cache_size = 3 k
> auth_cache_ttl = 15 mins
> auth_verbose = yes
> base_dir = /var/run/dovecot/
> default_client_limit = 3
> default_process_limit = 5000
> director_doveadm_port = 990
> director_mail_servers = 10.80.82.21 10.80.82.22
> director_servers = 10.80.82.11 10.80.82.12
> doveadm_proxy_port = 24245
> lmtp_proxy = yes
> managesieve_notify_capability = mailto
> managesieve_sieve_capability = fileinto reject envelope encoded-character 
> vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
> copy include variables body enotify environment mailbox date ihave
> passdb {
>  args = proxy=y nopassword=y starttls=any-cert
>  driver = static
> }
> postmaster_address = xavier.p...@uib.es
> protocols = imap pop3 lmtp sieve
> service auth {
>  client_limit = 27048
>  unix_listener /var/spool/postfix/private/auth {
>mode = 0666
>  }
>  unix_listener auth-userdb {
>group = dovecot
>mode = 0660
>  }
> }
> service director {
>  fifo_listener login/proxy-notify {
>mode = 0666
>  }
>  inet_listener {
>port = 991
>  }
>  inet_listener director-doveadm {
>port = 990
>  }
>  unix_listener director-userdb {
>mode = 0660
>  }
>  unix_listener login/director {
>mode = 0666
>  }
> }
> service doveadm {
>  inet_listener {
>port = 24245
>  }
> }
> service imap-login {
>  executable = imap-login director
>  inet_listener imap {
>port = 143
>  }
>  inet_listener imaps {
>port = 993
>ssl = yes
>  }
> }
> service lmtp {
>  inet_listener lmtp {
>port = 30025
>  }
> }
> service managesieve-login {
>  executable = managesieve-login director
> }
> service pop3-login {
>  executable = pop3-login director
>  inet_listener pop3 {
>port = 110
>  }
>  inet_listener pop3s {
>port = 995
>ssl = yes
>  }
> }
> ssl = required
> ssl_cert = <
> ssl_key = <
> syslog_facility = local1
> verbose_proctitle = yes
> protocol lmtp {
>  auth_socket_path = director-userdb
>  passdb {
>args = /etc/dovecot/dovecot-ldap-pass.conf.lmtp
>driver = ldap
>  }
> }
> protocol doveadm {
>  auth_socket_path = director-userdb
> }
> protocol imap {
>  mail_max_userip_connections = 20
> }
> protocol pop3 {
>  pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
> }
> 
> Xavier
> 
> -- 
> xavier.p...@uib.es
> Centre de Tecnologies de la Informació
> Universitat Illes Balears
> 
> 



Re: [Dovecot] MS Exchange IMAP Proxy

2011-11-30 Thread Dovecot-GDH
>> An Exchange 2000 server is ancient. I wouldn't waste time with it
>> unless there was no possible way to get an updated version; i.e., Exchange
>> Server 2010.
> 
> 
> The client won't pay for an Exchange update just to support a handful of 
> external IMAP users.
> 
> It works perfectly well internally, using a Postfix relayhost.
> 
> Terry

If the client is inept enough to run Exchange 2000 for only a handful of users, 
you're probably wasting your time attempting to sanitize IMAP commands.

If your contract with them mandates that you secure their server, you'll most 
likely have to replace their broken software.

Re: [Dovecot] Indexes to MLC-SSD

2011-11-29 Thread Dovecot-GDH
https://github.com/facebook/flashcache/blob/master/doc/flashcache-doc.txt

On Nov 28, 2011, at 4:04 PM, Micah Anderson wrote:

> Dovecot-GDH  writes:
> 
>> If I/O performance is a concern, you may be interested in ZFS and Flashcache.
>> 
>> Specifically, ZFS' ZIL (ZFS Intent Log) and its L2ARC (Level 2 Adaptive
>> Replacement Cache).
>> ZFS does run on Linux: http://zfs-fuse.net
>> 
>> Flashcache: https://github.com/facebook/flashcache/
> 
> That site has no information about what flashcache is.
> 
> 



Re: [Dovecot] Multiple Partitions with mdbox

2011-11-09 Thread Dovecot-GDH
> How do you handle >> 10 TB mailstore?

ZFS: no need to fsck.
GlusterFS: "always-online".
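
For the alternative-storage feature mentioned below, a minimal mail_location sketch (the alternative path is illustrative; the primary path is taken from the quoted mail):

  # Primary mdbox storage plus one alternative storage on a second filesystem
  mail_location = mdbox:/mail/%d/%n:ALT=/altmail/%d/%n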

On Nov 8, 2011, at 6:19 AM, Peer Heinlein wrote:

> 
> With a > 10 TByte mailstore, filesystem checks take too much time.
> 
> At the moment we have four different partitions, but I don't like setting 
> symlinks or LDAP flags to sort customers and their domains onto their 
> individual mount points. I'd like to work with mdbox:/mail/%d/%n to calculate 
> the path automatically.
> 
> How do you handle >> 10 TB mailstore?
> 
> I'm very interested in the feature "alternative mailstore" with mdbox, 
> because that makes it very easy to use at least TWO filesystems without any 
> tricky configuration.
> 
> I think I'd love to have alternative mailstores. Why doesn't dbox 
> look for its m.* files in more than two directories? Sure, looking in 4 
> directories would mean four disk operations, but maybe it could be very 
> helpful.
> 
> Peer
> -- 
> 
> Heinlein Professional Linux Support GmbH
> Linux: Akademie - Support - Hosting
> http://www.heinlein-support.de
> 
> Tel: 030/405051-42
> Fax: 030/405051-19
> 
> Zwangsangaben lt. §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg, 
> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin



Re: [Dovecot] Indexes to MLC-SSD

2011-11-01 Thread Dovecot-GDH
If I/O performance is a concern, you may be interested in ZFS and Flashcache.

Specifically, ZFS' ZIL (ZFS Intent Log) and its L2ARC (Level 2 Adaptive
Replacement Cache).
ZFS does run on Linux: http://zfs-fuse.net

Flashcache: https://github.com/facebook/flashcache/

Both of these techniques can use a pair of SSDs in RAID1 rather than a single 
SSD.
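
A rough sketch of the ZFS side (pool and device names are illustrative): a mirrored SSD pair as the dedicated intent log and a further SSD as L2ARC cache:

  zpool add tank log mirror /dev/sdb /dev/sdc   # mirrored ZIL (intent log)
  zpool add tank cache /dev/sdd                 # L2ARC read cache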

On Oct 27, 2011, at 1:31 AM, Ed W wrote:

> On 27/10/2011 03:36, Stan Hoeppner wrote:
>> On 10/26/2011 4:13 PM, Patrick Westenberg wrote:
>>> Hi all,
>>> 
>>> is anyone on this list who dares/dared to store his index files on a
>>> MLC-SSD?
>> I have not.  But I can tell you that a 32GB Corsair MLC SSD in my
>> workstation died after 4 months of laughably light duty.  It had nothing
>> to do with cell life but low product quality.  This was my first foray
>> into SSD.  The RMA replacement is still kickin after 2 months,
>> thankfully.  I'm holding my breath...
>> 
>> Scanning the reviews on Newegg shows early MLC SSD failures across most
>> brands, early being a year or less.  Some models/sizes are worse than
>> others.  OCZ has a good reputation overall, but reviews show some of
>> their models to be grenades.
>> 
>> Thus, if you were to put indexes on SSD, you should strongly consider
>> using a mirrored pair.
>> 
> 
> I don't think you are saying that the advice varies here compared with
> HDDs?  I do agree that some SSDs are showing very early failures, but
> it's only a tweak to the probability parameter compared with any other
> storage medium.  They ALL fail at some point, and generally well within
> the life of the rest of the server.  Some kind of failure planning is
> necessary.
> 
> Caveat the potentially higher failure rate vs HDDs, I don't see any reason
> why an SSD shouldn't work well (even more so if you are using maildir,
> where indexes can be regenerated).
> 
> More interestingly: for small sizes like 32GB, has anyone played with
> the "compressed ram with backing store" thing in newer kernels (that I
> forget the name of now). I think it's been marketed for swap files, but
> assuming I got the theory it could be used as a ram drive with slow
> writeback to permanent storage?
> 
> Good luck
> 
> Ed W



Re: [Dovecot] dsync should sync sieve-dirs too!

2011-10-30 Thread Dovecot-GDH
>> Why use dsync at all?

dsync is a tool used for synchronizing mailboxes.

>> It should be possible to make a *complete* backup/mirror of a user's 
>> mailbox with dsync

The Sieve directory is not part of the mailbox, so dsync does not copy it.

On Oct 30, 2011, at 5:24 AM, Robert Schetterer wrote:

> Am 30.10.2011 13:16, schrieb Peer Heinlein:
>> Am Samstag, 29. Oktober 2011, 09:15:31 schrieb Robert Schetterer:
>> 
>>> Hi Peer, meanwhile you may use rsync additionally as a workaround
>> 
>> Yes, I've been using rsync for this for 15 years.
>> 
>> I'd like to STOP using rsync.
>> 
>> It should be possible to make a *complete* backup/mirror of a user's 
>> mailbox with dsync. And a backup/mirror without sieve is incomplete.
>> 
>> Peer
>> 
>> 
>> 
> 
> yes, you're right
> 
> -- 
> Best Regards
> 
> MfG Robert Schetterer
> 
> Germany/Munich/Bavaria



Re: [Dovecot] Seen flag getting lost

2011-10-28 Thread Dovecot-GDH
If more than one Dovecot instance is accessing the same set of mailboxes over
NFS or another network filesystem, you will need to use the director. You may as
well upgrade to 2.0.
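
A minimal sketch of the relevant 2.0 director settings (addresses are illustrative; compare the doveconf -n output in the LMTP director thread above):

  # Director ring members and the back-end mail servers they balance users across
  director_servers = 10.0.0.1 10.0.0.2
  director_mail_servers = 10.0.1.1 10.0.1.2

  service imap-login {
    executable = imap-login director
  }
  service pop3-login {
    executable = pop3-login director
  }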

On Oct 25, 2011, at 4:02 AM, Edgar Fuß wrote:

> We have two dovecot 1.2 instances sharing Maildirs on NFS. Indexes are local 
> to the individual servers.
> Occasionally (no idea how to trigger this), the Seen flag gets lost on some 
> messages. I've verified that actually the ``S'' is missing from the filename.
> I suspect something like server A caching the flags, server B setting Seen, 
> and then server A flushing its cache because of another change, thereby overwriting 
> what B changed.
> Any ideas short of switching to 2.0?



Re: [Dovecot] dsync should sync sieve-dirs too!

2011-10-28 Thread Dovecot-GDH
Why not just add a line for your local sieve folder to the same shell/cgi 
script that executes dsync?
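
For instance (the paths, user variable and backup host are hypothetical), something like:

  # Back up the mailbox with dsync, then copy the user's sieve directory alongside it
  dsync -u "$user" backup ssh vmail@backuphost dsync -u "$user"
  rsync -a "/var/vmail/$user/sieve/" "vmail@backuphost:/var/vmail/$user/sieve/"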

On Oct 28, 2011, at 4:41 PM, Peer Heinlein wrote:

> 
> Having dsync make backups of existing mail spaces, it would be nice 
> if dsync synced the sieve dirs too. Otherwise the backups aren't 
> complete...
> 
> Peer
> 
> 
> -- 
> Heinlein Professional Linux Support GmbH
> Linux: Akademie - Support - Hosting
> 
> http://www.heinlein-support.de
> Tel: 030 / 40 50 51 - 0
> Fax: 030 / 40 50 51 - 19
> 
> Zwangsangaben lt. §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin