Re: [Dovecot] Best Cluster Storage

2011-01-20 Thread Henrique Fernandes
[]'sf.rique


On Fri, Jan 21, 2011 at 2:14 AM, Stan Hoeppner wrote:

> Henrique Fernandes put forth on 1/20/2011 11:55 AM:
>
> > Even the storage system is not SUN those ocfs2 servers are connect via iSCSI
> > from the storage with ocfs2 in virtual machine

The storage is an EMC CX4 (I don't have all the info about it right now).


> Please provide a web link to the iSCSI storage array product you are using,
> and tell us how many 1GbE ports you are link aggregating to the switch.
>

Just one.  The EMC has one interface and we are exporting iSCSI over it.
That is all we can do right now; we are waiting until we can buy a SUN to
make everything work.

>
> Also, are you using Oracle OCFS2 or IBM GPFS?  You mentioned both.  Considering
> you are experiencing severe performance issues with metadata operations due to
> the distributed lock manager...
>
OCFS2 1.4, because it is free.  We have no money for anything else; we have
considered switching to NFS.


> Have you considered SGI CXFS?  It's the fastest cluster FS on the planet by an
> order of magnitude.  It uses dedicated metadata servers instead of a DLM, which
> is why it's so fast.  Directory traversal operations would be orders of
> magnitude faster than what you have now.
>
> http://en.wikipedia.org/wiki/CXFS
> http://www.sgi.com/products/storage/software/cxfs.html
>

We haven't considered buying a clustered filesystem.

We are out of ideas for making it faster.  The only thing we came up with is
making more ocfs2 clusters with smaller disks, and with this we are getting
better performance.  We now have 2 clusters, one with 4 TB and the other with
1 TB, are migrating some of the emails from the 4 TB one to the 1 TB one, and
already have another 1 TB cluster ready.  So we have 3 machines, each mounting
3 disks from the storage as 3 ocfs2 clusters.  That way we think each DLM gets
less work.  Are we right?

Thanks!



> --
> Stan
>


Re: [Dovecot] Fwd: Re: Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread Stan Hoeppner
Brandon Davidson put forth on 1/20/2011 11:22 PM:
> Stan,
> 
> On 1/20/11 7:45 PM, "Stan Hoeppner"  wrote:
>>
>> What you're supposed to do, and what VMWare recommends, is to run ntpd _only
>> in the ESX host_ and _not_ in each guest.  According to:
>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427
> 
> Did you read the document you linked? As was mentioned on this list fairly
> recently, that's not been the recommendation for quite some time. To the
> contrary:

I didn't read the bottom, no.  They've changed the rec.  Contact VMWare and ask
them why they reversed themselves on their previous recommendation.

> ===
> NTP Recommendations
> Note: In all cases use NTP instead of VMware Tools periodic time
> synchronization. (...) When using NTP in the guest, disable VMware Tools
> periodic time synchronization.
> ===

Simply put, they made this change to lower support calls for time sync issues.
Running ntpd inside each guest is unnecessary bloat.

> We run the guests with divider=10, periodic timesync disabled, and NTP on
> both the host and the guest. We have not had any time problems in several
> years of operation.

I'm glad it works for you.  You can achieve the same results without running
ntpd inside the guests and without running the vmtools time sync in the guests,
doing exactly what I mentioned.  Again, I helped them write the early book on
this back in '06 before they had a decent time keeping strategy.  If you recall,
back then, ntpd wasn't even installed in the ESX console by default.  You had to
manually install and configure it.

As with many things in this tech world of ours, there are many ways to skin the
same cat and achieve the same result.  I have fairly intimate knowledge of both
the Linux kernel timer and the ntp protocol due to the serious amount of
research I had to do 4+ years ago.  If you had the same knowledge, you too would
realize it's just plain silly to run ntpd redundantly inside the host and guest
operating systems atop the same physical machine.

The single biggest reason is that the ntp drift file in the guest instantly
becomes dirty after a vmotion because the drift file tracks the physical
hardware clock, which is virtualized to the guest by the ESX kernel.  Once you
vmotion, the drift characteristics have changed, as they're slightly different on
each physical host, and thus each ESX kernel.

Thus, even though the guest clock is still relatively accurate after a vmotion,
why use ntpd with drift inside the guest if the drift isn't being used properly?
 Ergo, why not eliminate ntpd, which is unnecessary, and simply run an ntpdate
periodically, based on SA documented drift over 30 days, as I do?

Again, you get the same, or sufficiently similar result, but without an extra
unnecessary daemon running in each Linux guest.  Try it yourself.  Disable ntpd
on one of your guests and cron ntpdate each midnight against your local ntpd
server.  After a few days report back with your results.
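
For reference, the /etc/crontab entry I have in mind looks roughly like this (a
sketch only; the ntpdate path and the ntp1.example.local hostname are
placeholders for whatever your local time server is called):

# step the guest clock once a night against the local ntpd server
0 0 * * * root /usr/sbin/ntpdate -u ntp1.example.local >/dev/null 2>&1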

-- 
Stan


Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread Stan Hoeppner
Gregory Finch put forth on 1/20/2011 3:27 PM:

> You're much better off running one ntp server than two. With just two
> servers providing time, if they drift from one another, for whatever
> reason, there is no way to tell which one has the correct time. If you
> need to ensure the time is correct, peer at least 3 machines together,
> then they can take care of themselves if one drifts.

The reason for two local ntpd servers instead of one is simply redundancy in
case one dies.  In your scenario, you first have to realize you have a drifting
server.  Having 3 servers won't help in this case.  Once you identify the drift,
you simply run a manual ntpdate against external servers and you know instantly
which of your local servers is drifting.
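
A query-only check is enough for that comparison, e.g. (the pool hostname is
just a placeholder for whichever external servers you use):

ntpdate -q 0.pool.ntp.org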

-- 
Stan


Re: [Dovecot] SSD drives are really fast running Dovecot

2011-01-20 Thread Stan Hoeppner
Frank Cusack put forth on 1/20/2011 2:30 PM:
> On 1/20/11 12:06 AM -0600 Stan Hoeppner wrote:
>>  This is amusing considering XFS is hands down
>> the best filesystem available on any platform, including ZFS.  Others are
>> simply ignorant and repeat what they've heard without looking for current
>> information.

> Your pronouncement that others are simply ignorant is telling.

So is your intentionally quoting me out of context.  In context:

Me:
"Prior to 2007 there was a bug in XFS that caused filesystem corruption upon
power loss under some circumstances--actual FS corruption, not simply zeroing of
files that hadn't been fully committed to disk.  Many (uneducated) folk in the
Linux world still to this day tell others to NOT use XFS because "Power loss
will always corrupt your file system."  Some probably know better but are EXT or
JFS (or god forbid, BTRFS) fans and spread fud regarding XFS.  This is amusing
considering XFS is hands down the best filesystem available on any platform,
including ZFS.  Others are simply ignorant and repeat what they've heard without
looking for current information."

The "ignorant" are those who blindly accept the false words of others regarding
4+ year old "XFS corruption on power fail" as being true today.  They accept but
without verification.  Hence the "rumor" persists in many places.

>> In my desire to be brief I didn't fully/correctly explain how delayed
>> logging works.  I attempted a simplified explanation that I thought most
>> would understand.  Here is the design document:
>> http://oss.sgi.com/archives/xfs/2010-05/msg00329.html

> I guess I understand your championing of it if you consider that a
> design document.  That brief piece of email hardly describes it at
> all, and the performance numbers are pretty worthless (due to the
> caveat that barriers are disabled).

You quoted me out of context again, intentionally leaving out the double paste
error I made of the same URL.

Me:
"In my desire to be brief I didn't fully/correctly explain how delayed logging
works.  I attempted a simplified explanation that I thought most would
understand.  Here is the design document:
http://oss.sgi.com/archives/xfs/2010-05/msg00329.html

Early performance numbers:
http://oss.sgi.com/archives/xfs/2010-05/msg00329.html"

Note the double URL paste error?  Frank?  Why did you twist an honest mistake
into something it's not?  Here's the correct link:

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs-delayed-logging-design.txt

> Given the paragraph in the "design document":

Stop being an ass.  Or get off yours and Google instead of requiring me to spoon
feed you.

>> The best IO behaviour comes from the delayed logging version of XFS,
>> with the lowest bandwidth and iops to sustain the highest
>> performance. All the IO is to the log - no metadata is written to
>> disk at all, which is the way this test should execute.  As a result,
>> the delayed logging code was the only configuration not limited by
>> the IO subsystem - instead it was completely CPU bound (8 CPUs
>> worth)...
> 
> it is indeed a "ram spooler", for metadata, which is a standard (and
> good) approach.  That's not a side effect, that's the design.  AFAICT
> from the brief description anyway.

As you'll see in the design doc, that's not the intention of the patch.  XFS
already had a delayed metadata update design, but it was terribly inefficient in
implementation.  Dave increased the efficiency several fold.  The reason I
mentioned it on Dovecot is that it directly applies to large/busy maildir style
mail stores.

XFS just clobbers all other filesystems in parallel workload performance, but
historically its metadata performance was pretty anemic, about half that of
other FSes.  Thus, parallel creates and deletes of large numbers of small files
were horrible.  This patch fixes that issue, and brings the metadata performance
of XFS up to the level of EXT3/4, Reiser, and others, for single process/thread
workloads, and far surpasses their performance with large parallel
process/thread workloads, as is shown in the email I linked.

This now makes XFS the perfect Linux FS for maildir and [s/m]dbox on moderate to
heavy load IMAP servers.  Actually it's now the perfect filesystem for all Linux
server workloads.  Previously it was for all workloads but metadata heavy ones.
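
If you want to try it, delayed logging is an opt-in mount option on kernels
that carry the patch (2.6.35 and later, if I recall correctly); a sketch, with
the device and mount point as placeholders:

mount -t xfs -o delaylog /dev/sdb1 /srv/mail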

> This is guaranteed to lose data on power loss or drive failure.

On power loss, on a busy system, yes.  Due to a single drive failure?  That's
totally incorrect.  How are you coming to that conclusion?

As with every modern Linux filesystem that uses the kernel buffer cache,
which is all of them, you will lose in-flight data that's in the buffer cache
when power drops.

Performance always has a trade off.  The key here is that the filesystem isn't
corrupted due to this metadata loss.  Solaris with ZFS has the same issues.  One
can't pipeline anything in a block device queue and not hav

Re: [Dovecot] Fwd: Re: Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread Brandon Davidson
Stan,

On 1/20/11 7:45 PM, "Stan Hoeppner"  wrote:
> 
> What you're supposed to do, and what VMWare recommends, is to run ntpd _only
> in the ESX host_ and _not_ in each guest.  According to:
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427

Did you read the document you linked? As was mentioned on this list fairly
recently, that's not been the recommendation for quite some time. To the
contrary:

===
NTP Recommendations
Note: In all cases use NTP instead of VMware Tools periodic time
synchronization. (...) When using NTP in the guest, disable VMware Tools
periodic time synchronization.
===

We run the guests with divider=10, periodic timesync disabled, and NTP on
both the host and the guest. We have not had any time problems in several
years of operation.

-Brad



Re: [Dovecot] Best Cluster Storage

2011-01-20 Thread Stan Hoeppner
Henrique Fernandes put forth on 1/20/2011 11:55 AM:

> Even the storage system is not SUN those ocfs2 servers are connect via iSCSI
> from the storage with ocfs2 in virtual machine

Please provide a web link to the iSCSI storage array product you are using, and
tell us how many 1GbE ports you are link aggregating to the switch.

Also, are you using Oracle OCFS2 or IBM GPFS?  You mentioned both.  Considering
you are experiencing severe performance issues with metadata operations due to
the distributed lock manager...

Have you considered SGI CXFS?  It's the fastest cluster FS on the planet by an
order of magnitude.  It uses dedicated metadata servers instead of a DLM, which
is why it's so fast.  Directory traversal operations would be orders of
magnitude faster than what you have now.

http://en.wikipedia.org/wiki/CXFS
http://www.sgi.com/products/storage/software/cxfs.html

-- 
Stan


Re: [Dovecot] Fwd: Re: Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread Stan Hoeppner
l...@airstreamcomm.net put forth on 1/20/2011 11:09 AM:
> Stan,
> 
> Thanks for the reply.  In our case we have actually already done most of
> the work you suggested to no avail.  We had rebuilt two new ntp servers
> that sync against two stratum 1 sources, and all our nfs clients,
> regardless of using dovecot, sync to those two machines.  You bring up the
> difference between bare metal and hypervisor, and we are running these
> machines on vmware 4.0.  All the vmware knowledge base articles tend to
> push us towards ntp, and since we are using centos 5.5 there are no kernel
> modifications that need to be made regarding timing from what we can find. 
> I will give the ntpdate option a try and see what happens.

What you're supposed to do, and what VMWare recommends, is to run ntpd _only in
the ESX host_ and _not_ in each guest.  Each guest kernel needs to be running as
few ticks as possible, and preferably the tickless kernel.  As with many/most
distribution Linux kernels, you're going to need to use boot parameters to get
accurate guest clock time keeping.  According to:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006427

You will need the following _kernel boot parameters_ for CentOS 5.5 guests:

For 32bit kernels:  divider=10, clocksource=acpi_pm
For 64bit kernels:  notsc, divider=10
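
As an illustration, on a 64bit CentOS 5.5 guest the grub.conf kernel line would
end up looking roughly like this (the kernel version and root device below are
placeholders, not values from the KB article):

kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 notsc divider=10

and on a 32bit guest you would append divider=10 clocksource=acpi_pm instead.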

Also, note at the top of the article that you must run a uniprocessor kernel on
a uniprocessor VM, and an SMP kernel on a "virtual SMP" VM.  Mismatches here
will cause clock drift.

Once you have all of this setup, your guest kernel timekeeping should be fairly
accurate, and you can cron ntpdate once a week or month as necessary in each
guest, depending on your drift.

I discovered all of this in 2006 when attempting to get accurate clocks on SLES9
and Debian 3 guests for kerberos to work properly, before VMWare had thorough
documentation for ESX2/3 timekeeping with Linux guests.  I spent about two weeks
doing the kernel research, experimenting, and figuring all this out on my own.
I posted my results on the VMWare forums, and my work was used in creating later
VMWare timekeeping documentation.

Monkeying with LILO/Grub boot parameters is often beyond the comfort level of
some SAs.  This is why previously I recommended the "short cut" of simply
cron'ing ntpdate in each guest.  It used to get one "close enough" without the
other headaches.  It may not still work today.  I've not tried that method in a
long time.

I cannot stress enough that you _MUST_ disable the ntpd daemon in each Linux
guest.  ntpd is installed by default with every Linux distro.

So, to recap:

1.  Install, configure and enable ntpd in the ESX 4 shell on each physical host
2.  Disable ntpd in each Linux guest
3.  Modify your LILO/Grub command line in each guest as described above
4.  Document drift in each guest for a month and cron ntpdate to compensate

You need to do _all_ of these things in combination.  Doing some and not all
will leave you with unacceptable clock drift.

-- 
Stan


> I was also hoping to understand why the uidlist file is the only file that
> uses dotlock, or if there was plans to give it the option to use other
> locking mechanisms in the future.
> 
> Thanks again!
> 
> Michael
> 
>  Original Message 
> Subject: Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load
> Date: Thu, 20 Jan 2011 10:57:24 -0600
> From: Stan Hoeppner 
> To: dovecot@dovecot.org
> 
> l...@airstreamcomm.net put forth on 1/20/2011 8:32 AM:
> 
>> Secondly we thought the issues were due to NTP as the time stamps vary so
>> widely, so we rebuilt our NTP servers and found closer stratum 1 source
>> clocks to synchronize to hoping it would alleviate the problem but the
>> dotlock errors returned after about 12 hours.  We have fcntl locking set in
>> our configuration file, but it is our understanding from look at the source
>> code that this file is locked with dotlock.
>>
>> Any help troubleshooting is appreciated.
> 
> From your description it sounds as if you're ntpd syncing each of the 4 servers
> against an external time source, first stratum 2/3 sources, then stratum 1
> sources in an attempt to cure this problem.
> 
> In a clustered server environment, _always_ run a local physical box/router ntpd
> server (preferably two) that queries a set of external sources, and services
> your internal machine queries.  With RTTs all on your LAN, and using the same
> internal time sources for every query, this clock drift issue should be
> eliminated.  Obviously, when you first set this up, stop ntpd and run ntpdate to
> get an initial time sync for each cluster host.
> 
> If after setting this up, and we're dealing with bare metal cluster member
> servers, then I'd guess you've got a failed/defective clock chip on one host.
> If this is Linux, you can work around that by changing the local time source.
> There are something like 5 options.  Google for "Linux time" o

Re: [Dovecot] How to enable COPY and APPEND commands separately

2011-01-20 Thread Timo Sirainen
On 21.1.2011, at 1.51, Alex Cherniak wrote:

> I'd like to allow a user to move messages between his folders on Dovecot
> IMAP account, but prevent move/copy from different accounts (Exchange in
> particular).
> Outlook uses "xx UID COPY 1 folder" and then "xx UID STORE 1 +FLAGS
> (\Deleted \Seen)" for internal moves and "xx APPEND folder" for external
> ones.
> I tried to achieve this with ACL, but i (insert) seems to control both.
> Do I miss something? Should I look somewhere else?

That would also prevent users from saving messages to Drafts or Sent Messages. 
Unless of course this was a per-folder ACL.

Anyway .. nope, there's no way to do that. Why would you want it? You could 
create a plugin for that though.



[Dovecot] How to enable COPY and APPEND commands separately

2011-01-20 Thread Alex Cherniak
I'd like to allow a user to move messages between his folders on Dovecot
IMAP account, but prevent move/copy from different accounts (Exchange in
particular).
Outlook uses "xx UID COPY 1 folder" and then "xx UID STORE 1 +FLAGS
(\Deleted \Seen)" for internal moves and "xx APPEND folder" for external
ones.
I tried to achieve this with ACL, but i (insert) seems to control both.
Do I miss something? Should I look somewhere else?
Please help.


Re: [Dovecot] utility to update indexes ?

2011-01-20 Thread Timo Sirainen
On Thu, 2011-01-20 at 23:50 +0100, Jan-Frode Myklebust wrote:

> But this woun´t work if the maildir has been modified outside of
> dovecot (i.e. webmail usage). Are there any simple interface I can use
> in this short snippet for noticing that the index is out of sync, and
> update it ?

With v2.0 you could use doveadm easily (you can also just use doveadm
binary from v2.0 and keep using v1.2 elsewhere):

doveadm mailbox status unseen inbox

If you're actually running v2.0 you can also ask this information via
UNIX/TCP socket.
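
From the command line you can also point it at a specific user, e.g. (the
address is just a placeholder):

doveadm mailbox status -u user@example.com unseen INBOX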





[Dovecot] utility to update indexes ?

2011-01-20 Thread Jan-Frode Myklebust
We have both dovecot and a webmail application modifying our users' maildirs,
so dovecot's indexes can be out of sync when the webmail has been messing with
the maildirs. We also have a webservice that reports how many unread messages
a user has in his inbox, which simply counts files in the maildirs (plus
caching the result, and presenting old values if the directory timestamps
haven't changed).

I think this unread-count webservice would be much nicer on the filesystem
if it instead used the dovecot indexes, so I created a small utility based on
utils/idxview.c that does that:
http://blag.tanso.net/code/unread.c

But this won't work if the maildir has been modified outside of dovecot (i.e.
webmail usage). Is there any simple interface I can use in this short snippet
to notice that the index is out of sync, and update it?


  -jf


Re: [Dovecot] Problems with Upgrade from Courier

2011-01-20 Thread Aaron Pettitt
That did it, Timo.  Thank you so much!  I guess coming from the Windows
world, some habits are still hard to break...  Again, I can't thank you
enough!

-Original Message-
From: Timo Sirainen [mailto:t...@iki.fi] 
Sent: Thursday, January 20, 2011 4:49 PM
To: Aaron Pettitt
Cc: dovecot@dovecot.org
Subject: Re: [Dovecot] Problems with Upgrade from Courier

On 20.1.2011, at 23.37, Aaron Pettitt wrote:

Note the difference of upper/lowercasing:

> dovecot: 01/20/2011 10:27:25 Info: IMAP(samantha.fre...@mybridemail.com):
> maildir++: root=/home/vmail/mybridemail.com/Samantha.Freeze, index=,
> control=, inbox=/home/vmail/mybridemail.com/Samantha.Freeze

vs.

> deliver(samantha.fre...@mybridemail.com): 01/20/2011 10:44:27 Info:
> maildir++: root=/home/vmail/mybridemail.com/samantha.freeze, index=,
> control=, inbox=/home/vmail/mybridemail.com/samantha.freeze

A simple solution would be:

auth_username_format = %Lu



Re: [Dovecot] Problems with Upgrade from Courier

2011-01-20 Thread Timo Sirainen
On 20.1.2011, at 23.37, Aaron Pettitt wrote:

Note the difference of upper/lowercasing:

> dovecot: 01/20/2011 10:27:25 Info: IMAP(samantha.fre...@mybridemail.com):
> maildir++: root=/home/vmail/mybridemail.com/Samantha.Freeze, index=,
> control=, inbox=/home/vmail/mybridemail.com/Samantha.Freeze

vs.

> deliver(samantha.fre...@mybridemail.com): 01/20/2011 10:44:27 Info:
> maildir++: root=/home/vmail/mybridemail.com/samantha.freeze, index=,
> control=, inbox=/home/vmail/mybridemail.com/samantha.freeze

A simple solution would be:

auth_username_format = %Lu



Re: [Dovecot] Problems with Upgrade from Courier

2011-01-20 Thread Aaron Pettitt
Thanks for the reply Timo.  Here are parts of the debug log and it looks
just like a user that works.

dovecot: 01/20/2011 10:27:25 Info: imap-login: Login:
user=, method=PLAIN, rip=127.0.0.1,
lip=127.0.0.1, secured
dovecot: 01/20/2011 10:27:25 Info: IMAP(samantha.fre...@mybridemail.com):
Effective uid=5000, gid=5000,
home=/home/vmail/mybridemail.com/Samantha.Freeze
dovecot: 01/20/2011 10:27:25 Info: IMAP(samantha.fre...@mybridemail.com):
maildir: data=~/
dovecot: 01/20/2011 10:27:25 Info: IMAP(samantha.fre...@mybridemail.com):
maildir++: root=/home/vmail/mybridemail.com/Samantha.Freeze, index=,
control=, inbox=/home/vmail/mybridemail.com/Samantha.Freeze
dovecot: 01/20/2011 10:27:25 Info: IMAP(samantha.fre...@mybridemail.com):
Disconnected: Logged out bytes=50/115

Here is my login which is one that works:

dovecot: 01/19/2011 20:13:24 Info: IMAP(aa...@mybridemail.com): Effective
uid=5000, gid=5000, home=/home/vmail/mybridemail.com/aaron
dovecot: 01/19/2011 20:13:24 Info: IMAP(aa...@mybridemail.com): maildir:
data=/home/vmail/mybridemail.com/aaron/
dovecot: 01/19/2011 20:13:24 Info: IMAP(aa...@mybridemail.com): maildir++:
root=/home/vmail/mybridemail.com/aaron, index=, control=,
inbox=/home/vmail/mybridemail.com/aaron
dovecot: 01/19/2011 20:13:24 Info: imap-login: Login:
user=, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1,
secured
dovecot: 01/19/2011 20:13:24 Info: IMAP(aa...@mybridemail.com):
Disconnected: Logged out bytes=91/474

-Original Message-
From: Timo Sirainen [mailto:t...@iki.fi] 
Sent: Thursday, January 20, 2011 4:21 PM
To: Aaron Pettitt
Cc: dovecot@dovecot.org
Subject: Re: [Dovecot] Problems with Upgrade from Courier

On Thu, 2011-01-20 at 11:02 -0500, Aaron Pettitt wrote:

> It's really strange why dovecot can deliver the mail to the inbox but 
> cannot see the inbox when trying to retrieve the mail

Set mail_debug=yes. See what it logs when logging in as the user. It should
log where it's looking for the mails.




Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread Gregory Finch
On 2011-01-20 8:57 AM, Stan Hoeppner wrote:
> l...@airstreamcomm.net put forth on 1/20/2011 8:32 AM:
>
>> Secondly we thought the issues were due to NTP as the time stamps vary so
>> widely, so we rebuilt our NTP servers and found closer stratum 1 source
>> clocks to synchronize to hoping it would alleviate the problem but the
>> dotlock errors returned after about 12 hours.  We have fcntl locking set in
>> our configuration file, but it is our understanding from look at the source
>> code that this file is locked with dotlock.  
>>
>> Any help troubleshooting is appreciated.
> From your description it sounds as if you're ntpd syncing each of the 4 servers
> against an external time source, first stratum 2/3 sources, then stratum 1
> sources in an attempt to cure this problem.
>
> In a clustered server environment, _always_ run a local physical box/router 
> ntpd
> server (preferably two) that queries a set of external sources, and services
> your internal machine queries.  With RTTs all on your LAN, and using the same
> internal time sources for every query, this clock drift issue should be
> eliminated.  Obviously, when you first set this up, stop ntpd and run ntpdate 
> to
> get an initial time sync for each cluster host.
You're much better off running one ntp server than two. With just two
servers providing time, if they drift from one another, for whatever
reason, there is no way to tell which one has the correct time. If you
need to ensure the time is correct, peer at least 3 machines together,
then they can take care of themselves if one drifts.
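
In ntp.conf terms that's just a peer line on each box pointing at the others,
something like (hostnames are placeholders):

peer ntp2.example.local
peer ntp3.example.local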

-Greg





[Dovecot] Fwd: Re: Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread list
Stan,

Thanks for the reply.  In our case we have actually already done most of
the work you suggested to no avail.  We had rebuilt two new ntp servers
that sync against two stratum 1 sources, and all our nfs clients,
regardless of using dovecot, sync to those two machines.  You bring up the
difference between bare metal and hypervisor, and we are running these
machines on vmware 4.0.  All the vmware knowledge base articles tend to
push us towards ntp, and since we are using centos 5.5 there are no kernel
modifications that need to be made regarding timing from what we can find. 
I will give the ntpdate option a try and see what happens.

I was also hoping to understand why the uidlist file is the only file that
uses dotlock, or if there was plans to give it the option to use other
locking mechanisms in the future.

Thanks again!

Michael

 Original Message 
Subject: Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load
Date: Thu, 20 Jan 2011 10:57:24 -0600
From: Stan Hoeppner 
To: dovecot@dovecot.org

l...@airstreamcomm.net put forth on 1/20/2011 8:32 AM:

> Secondly we thought the issues were due to NTP as the time stamps vary so
> widely, so we rebuilt our NTP servers and found closer stratum 1 source
> clocks to synchronize to hoping it would alleviate the problem but the
> dotlock errors returned after about 12 hours.  We have fcntl locking set in
> our configuration file, but it is our understanding from look at the source
> code that this file is locked with dotlock.
> 
> Any help troubleshooting is appreciated.

From your description it sounds as if you're ntpd syncing each of the 4 servers
against an external time source, first stratum 2/3 sources, then stratum 1
sources in an attempt to cure this problem.

In a clustered server environment, _always_ run a local physical box/router ntpd
server (preferably two) that queries a set of external sources, and services
your internal machine queries.  With RTTs all on your LAN, and using the same
internal time sources for every query, this clock drift issue should be
eliminated.  Obviously, when you first set this up, stop ntpd and run ntpdate to
get an initial time sync for each cluster host.

If after setting this up, and we're dealing with bare metal cluster member
servers, then I'd guess you've got a failed/defective clock chip on one host.
If this is Linux, you can work around that by changing the local time source.
There are something like 5 options.  Google for "Linux time" or similar.  Or,
simply replace the hardware--RTC chip, mobo, etc.

If any of these cluster members are virtual machines, regardless of hypervisor,
I'd recommend disabling ntpd, and cron'ing ntpdate to run once every 5
minutes, or once a minute, whatever it takes to get the times to remain
synced, against your local ntpd server mentioned above.  I got to the point with
VMWare ESX that I could make any Linux distro VM of 2.4 or 2.6 stay within one
minute a month before needing a manual ntpdate against our local time source.
The time required to get to that point is a total waste.  Cron'ing ntpdate as I
mentioned is the quick, reliable way to solve this issue, if you're using VMs.

-- 
Stan





Re: [Dovecot] Populating mailbox dir

2011-01-20 Thread Timo Sirainen
On Wed, 2011-01-19 at 19:36 -0600, Matt Rude wrote:
> 
> On Wed, 19 Jan 2011 17:02:48 -0500, Mauricio Tavares wrote: 
> 
> > Who populates/creates the initial files and folders in the user mailbox?
> 
> Dovecot

No, IMAP clients do that typically.

> http://wiki2.dovecot.org/Plugins/Autocreate [1] will create new
> folders if the folder doesn't exist when a user logs in.

That's of course possible too. Nothing requires clients to actually use
those folders though.





Re: [Dovecot] Imap Error

2011-01-20 Thread Timo Sirainen
On Thu, 2011-01-20 at 11:22 -0500, Jason Liedtke wrote:
> This morning I have a Outlook 2007 user who getting the error and I am
> unsure how to fix it.
> 
> Cannot open this item. The server responded: "Error in IMAP command UID
> FETCH: Invalid uidset'

Looks like Outlook is sending some garbage to Dovecot. You could verify
this by getting the raw IMAP traffic logs, e.g. using
http://wiki.dovecot.org/Debugging/Rawlog or wireshark or ngrep.
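
For example, something along these lines shows the raw commands of a non-SSL
IMAP session (run on the server; adjust the interface and port as needed):

ngrep -d any -W byline '' port 143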

The solution is probably to recreate the account in Outlook, or
something like that.





Re: [Dovecot] Problems with Upgrade from Courier

2011-01-20 Thread Timo Sirainen
On Thu, 2011-01-20 at 11:02 -0500, Aaron Pettitt wrote:

> It's really strange why dovecot can deliver the mail to the inbox but cannot
> see the inbox when trying to retrieve the mail 

Set mail_debug=yes. See what it logs when logging in as the user. It
should log where it's looking for the mails.





Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread Timo Sirainen
On Thu, 2011-01-20 at 08:32 -0600, l...@airstreamcomm.net wrote:

> Created dotlock file's timestamp is different than current time
> (1295480202 vs 1295479784): /mail/user/Maildir/dovecot-uidlist

Hmm. This may be a bug that happens when dotlocking has to wait for a
long time for dotlock. See if
http://hg.dovecot.org/dovecot-1.2/raw-rev/9a50a9dc905f fixes this.





Re: [Dovecot] LMTP & home, chroot, mail userdb fields.

2011-01-20 Thread Timo Sirainen
On Thu, 2011-01-20 at 15:21 +0300, Lev Serebryakov wrote:
> Jan 20 12:19:25 lmtp(38939, l...@domain.com): Error: mkdir(./cur) in 
> directory /var/run/dovecot failed: Permission denied (euid=3(v-mail) 
> egid=3(v-mail) missing +w perm: ., euid is not dir owner)

Fixed: http://hg.dovecot.org/dovecot-2.0/rev/0fc2d00f83df





Re: [Dovecot] SSD drives are really fast running Dovecot

2011-01-20 Thread Frank Cusack

On 1/20/11 12:06 AM -0600 Stan Hoeppner wrote:

 This is amusing considering XFS is hands down
the best filesystem available on any platform, including ZFS.  Others are
simply ignorant and repeat what they've heard without looking for current
information.


Not to be overly brusque, but that's a laugh.  The two "best" filesystems
out there today are vxfs and zfs, for almost any enterprise workload that
exists.  I won't argue that xfs won't stand out for specific workloads
such as sequential write, it might and I don't know quite enough about
it to be sure, but for general workloads including a mail store zfs is
leaps ahead.  I'd include WAFL in the top 3 but it's only accessible
via NFS.  Well there is a SAN version but it doesn't really give you
access to the best of the filesystem feature set (tradeoff for other
features of the hardware).

Your pronouncement that others are simply ignorant is telling.


Your data isn't safe until it hits the disk.  There are plenty of ways
to spool data to ram rather than committing it, but they are all
vulnerable to data loss until the data is written to disk.


The delayed logging code isn't a "ram spooler", although that is a mild
side effect.  Apparently I didn't explain it fully, or precisely.  And
keep in mind, I'm not the dev who wrote the code.  So I'm merely
repeating my recollection of the description from the architectural
document and what was stated on the XFS list by the author, Dave Chinner
of Red Hat.

...

In my desire to be brief I didn't fully/correctly explain how delayed
logging works.  I attempted a simplified explanation that I thought most
would understand.  Here is the design document:
http://oss.sgi.com/archives/xfs/2010-05/msg00329.html


I guess I understand your championing of it if you consider that a
design document.  That brief piece of email hardly describes it at
all, and the performance numbers are pretty worthless (due to the
caveat that barriers are disabled).

Given the paragraph in the "design document":


The best IO behaviour comes from the delayed logging version of XFS,
with the lowest bandwidth and iops to sustain the highest
performance. All the IO is to the log - no metadata is written to
disk at all, which is the way this test should execute.  As a result,
the delayed logging code was the only configuration not limited by
the IO subsystem - instead it was completely CPU bound (8 CPUs
worth)...


it is indeed a "ram spooler", for metadata, which is a standard (and
good) approach.  That's not a side effect, that's the design.  AFAICT
from the brief description anyway.

This is guaranteed to lose data on power loss or drive failure.


Re: [Dovecot] ldap auth error

2011-01-20 Thread pch0317

On 20/01/11 13:31, Charles Marcus wrote:
> On 2011-01-20 3:31 AM, Jan-Frode Myklebust wrote:
>> On Wed, Jan 19, 2011 at 05:27:52PM -0500, Charles Marcus wrote:
>>> On 2011-01-19 5:04 PM, pch0317 wrote:
>>>> I have dovecot 2.0.beta6 and I'm newbie with dovecot.
>>>
>>> First assignment: upgrade to 2.0.9... why waste time fighting with bugs
>>> that are already long fixed?
>>
>> RHEL6 ships dovecot 2.0-beta6 (2.0-0.10.beta6.20100630.el6), and many
>> sysadmins like to stick with the distro provided packages, so I think
>> we'll see quite a few of these until RHEL6.1 or something hopefully
>> upgrades the package to something newer..
>
> There are other repos for getting working stable builds... refusing to
> do so and sticking with a known buggy pre-release version of critical
> software is just not a good idea. If my chosen distro put me in that
> position, then I'd find another distro.

OK, I fixed this error.

Instead of
dn = cn=administrator,ou=users,dc=my,dc=domain
it should be
dn = cn=administrator,cn=users,dc=my,dc=domain

thx


Re: [Dovecot] LMTP & home, chroot, mail userdb fields.

2011-01-20 Thread Per Jessen
Lev Serebryakov wrote:

> Hello, Per.
> You wrote on 20 January 2011, 21:28:11:
> 
> chroot: "/usr/home/hosted/v-mail/%d/%n"
> home: "/"
> mail: "maildir:."
>>> Then IMAP4/POP3 processes will do chroot to
>>> "/usr/home/hosted/v-mail/domain/user" and will try to find
>>> "maildir:/usr/home/hosted/v-mail/domain/user" RELATIVE to chroot.
>>> Mail will be delivered, but can not be acessed.
>> Okay, I see how you've set it up now.  Any chance that lmtp is having
>> problems with chroot()ing ?
>
>   I don't think, that lmtp needs "real" chroot at all (it can degrade
> performance and spoil whole idea of long-living delivery process),
> but, IMHO, lmtp should calculate full path from all three components
> -- chroot + home + maildir. And it seems, that lmtp doesn't use chroot
> variable at all.

Yes, that is what it looks like. 


/Per Jessen, Zürich



Re: [Dovecot] LMTP & home, chroot, mail userdb fields.

2011-01-20 Thread Lev Serebryakov
Hello, Per.
You wrote on 20 January 2011, 21:28:11:

 chroot: "/usr/home/hosted/v-mail/%d/%n"
 home: "/"
 mail: "maildir:."
>> Then IMAP4/POP3 processes will do chroot to
>> "/usr/home/hosted/v-mail/domain/user" and will try to find
>> "maildir:/usr/home/hosted/v-mail/domain/user" RELATIVE to chroot. Mail
>> will be delivered, but can not be acessed.
> Okay, I see how you've set it up now.  Any chance that lmtp is having
> problems with chroot()ing ?
  I don't think that lmtp needs a "real" chroot at all (it can degrade
performance and spoil the whole idea of a long-living delivery process),
but, IMHO, lmtp should calculate the full path from all three components
-- chroot + home + maildir.  And it seems that lmtp doesn't use the chroot
variable at all.

-- 
// Black Lion AKA Lev Serebryakov 



Re: [Dovecot] LMTP & home, chroot, mail userdb fields.

2011-01-20 Thread Per Jessen
Lev Serebryakov wrote:

> Hello, Per.
You wrote on 20 January 2011, 18:30:44:
> 
>>> chroot: "/usr/home/hosted/v-mail/%d/%n"
>>> home: "/"
>>> mail: "maildir:."
> 
>> For starters, I think you need to return a field "mail" containing
>> perhaps:
>> maildir:/usr/home/hosted/v-mail/domain/user
>
> Then IMAP4/POP3 processes will do chroot to
> "/usr/home/hosted/v-mail/domain/user" and will try to find
> "maildir:/usr/home/hosted/v-mail/domain/user" RELATIVE to chroot. Mail
> will be delivered, but can not be acessed.
> 

Okay, I see how you've set it up now.  Any chance that lmtp is having
problems with chroot()ing ?


/Per Jessen, Zürich



Re: [Dovecot] Best Cluster Storage

2011-01-20 Thread Henrique Fernandes
Stan!

Sorry, I did not explain it well!

FULL

Spool to disk: ~24h   TransferRate: 6MB/s

Despool to tape: ~7h TransferRate: 16MB/s

INCREMENTAL

Spool to disk: ~11hTransferRate: 300KB/s

Despool to tape: ~12m   TransferRate: 16MB/s

When doing a backup, we turn on another machine in the ocfs2 cluster and from
there spool to disk; after that it goes from the disk to the tape.

Nothing is on a SAN; everything goes through D-Link switches at 1Gbit.

Even though the storage system is not SUN, those ocfs2 servers are connected
via iSCSI from the storage, with ocfs2 in virtual machines.

Sorry, my English is not great and that makes it harder to explain!

[]'sf.rique


On Thu, Jan 20, 2011 at 3:17 PM, Jan-Frode Myklebust wrote:

> On Thu, Jan 20, 2011 at 5:20 PM, Henrique Fernandes 
> wrote:
>
> >> > Not all, if this counts as large:
> >> >
> >> >    Filesystem            Size  Used Avail Use% Mounted on
> >> >    /dev/gpfsmail         9.9T  8.7T  1.2T  88% /maildirs
> >> >
> >> >    Filesystem            Inodes    IUsed    IFree IUse% Mounted on
> >> >    /dev/gpfsmail      105279488 90286634 14992854   86% /maildirs
> >> >
> >>
> >> how do you backup that data? :)
> >>
> > Same question!
> >
> > I have about 1TB used and it takes 22 hrs to backup maildirs!
>
> Our maildirs are spread in subfolders under /maildirs/[a-z0-9], where
> mail addresses starting with a is stored under /maildirs/a/, b in
> /maildirs/b, etc.. and then we have distributed these top-level
> directories about evenly for backup by each host. So the 7 servers all
> run backups of different parts of the filesystem. The backups go to
> Tivoli Storage Manager, with it´s default incremental forever policy,
> so there´s not much data to back up. The problem is that it´s very
> slow to traverse all the directories and compare against what was
> already backed up. I believe we´re also using around 20-24 hours for
> the daily incremental backups... so we soon will have to start looking
> at alternative ways of doing it (or get rid of the non-dovecot
> accesses to maildirs, which are probably stealing quite a bit
> performance from the file scans).
>
> One alternative is the "mmbackup"-utility, which is supposed to use a
> much faster inode scan interface in GPFS:
>
>
> http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs31.basicadm.doc%2Fbl1adm_mmback.html
>
> but last time we tested it it was a too fragile...
>
>
>  -jf
>


Re: [Dovecot] Best Cluster Storage

2011-01-20 Thread Spyros Tsiolis

--- On Thu, 20/1/11, Henrique Fernandes  wrote:

> From: Henrique Fernandes 
> Subject: Re: [Dovecot] Best Cluster Storage
> To: "alex handle" 
> Cc: dovecot@dovecot.org
> Date: Thursday, 20 January, 2011, 18:20
> []'sf.rique
> 
> 
> On Thu, Jan 20, 2011 at 12:10 PM, alex handle 
> wrote:
> 
> > On Mon, Jan 17, 2011 at 7:32 AM, Jan-Frode Myklebust
> 
> > wrote:
> > > On Fri, Jan 14, 2011 at 05:16:50PM -0800, Brad
> Davidson wrote:
> > >>
> > >> Don't give up on the simplest solution too
> easily - lots of us run NFS
> > >> with quite large installs. As a matter of
> fact, I think all of the large
> > >> installs run NFS; hence the need for the
> Director in 2.0.
> > >
> > > Not all, if this counts as large:
> > >
> > >        Filesystem            Size  Used Avail Use% Mounted on
> > >        /dev/gpfsmail         9.9T  8.7T  1.2T  88% /maildirs
> > >
> > >        Filesystem            Inodes    IUsed    IFree IUse% Mounted on
> > >        /dev/gpfsmail      105279488 90286634 14992854   86% /maildirs
> > >
> >
> > how do you backup that data? :)
> >
> Same question!
> 
> I have about 1TB used and it takes 22 hrs to backup
> maildirs!
> 
> I have problens with ocfs2 in fouding the file!
> 
> >
> > -ah
> >
> 

Yeah !
Same here. How do you backup all this ?

s.




"I merely function as a channel that filters 
music through the chaos of noise"
 - Vangelis







Re: [Dovecot] LMTP & home, chroot, mail userdb fields.

2011-01-20 Thread Lev Serebryakov
Hello, Per.
You wrote on 20 January 2011, 18:30:44:

>> chroot: "/usr/home/hosted/v-mail/%d/%n"
>> home: "/"
>> mail: "maildir:."

> For starters, I think you need to return a field "mail" containing
> perhaps:
> maildir:/usr/home/hosted/v-mail/domain/user
  Then IMAP4/POP3 processes will do chroot to
"/usr/home/hosted/v-mail/domain/user" and will try to find
"maildir:/usr/home/hosted/v-mail/domain/user" RELATIVE to chroot.  Mail
will be delivered, but cannot be accessed.


-- 
// Black Lion AKA Lev Serebryakov 



Re: [Dovecot] Best Cluster Storage

2011-01-20 Thread Jan-Frode Myklebust
On Thu, Jan 20, 2011 at 5:20 PM, Henrique Fernandes  wrote:

>> > Not all, if this counts as large:
>> >
>> >        Filesystem            Size  Used Avail Use% Mounted on
>> >        /dev/gpfsmail      9.9T  8.7T  1.2T  88% /maildirs
>> >
>> >        Filesystem            Inodes   IUsed   IFree IUse% Mounted on
>> >        /dev/gpfsmail     105279488 90286634 14992854   86% /maildirs
>> >
>>
>> how do you backup that data? :)
>>
> Same question!
>
> I have about 1TB used and it takes 22 hrs to backup maildirs!

Our maildirs are spread in subfolders under /maildirs/[a-z0-9], where
mail addresses starting with a are stored under /maildirs/a/, b in
/maildirs/b, etc., and then we have distributed these top-level
directories about evenly for backup by each host. So the 7 servers all
run backups of different parts of the filesystem. The backups go to
Tivoli Storage Manager, with its default incremental-forever policy,
so there´s not much data to back up. The problem is that it´s very
slow to traverse all the directories and compare against what was
already backed up. I believe we´re also using around 20-24 hours for
the daily incremental backups... so we soon will have to start looking
at alternative ways of doing it (or get rid of the non-dovecot
accesses to maildirs, which are probably stealing quite a bit
performance from the file scans).

One alternative is the "mmbackup"-utility, which is supposed to use a
much faster inode scan interface in GPFS:

http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs31.basicadm.doc%2Fbl1adm_mmback.html

but the last time we tested it, it was too fragile...


  -jf


Re: [Dovecot] Best Cluster Storage

2011-01-20 Thread Stan Hoeppner
Henrique Fernandes put forth on 1/20/2011 10:20 AM:

> I have about 1TB used and it takes 22 hrs to backup maildirs!

To tape library or D2D?  Are you doing differential backup or full backup each 
time?

4/8Gb fiber channel or 1 GbE iSCSI based SAN array?

-- 
Stan


Re: [Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread Stan Hoeppner
l...@airstreamcomm.net put forth on 1/20/2011 8:32 AM:

> Secondly we thought the issues were due to NTP as the time stamps vary so
> widely, so we rebuilt our NTP servers and found closer stratum 1 source
> clocks to synchronize to hoping it would alleviate the problem but the
> dotlock errors returned after about 12 hours.  We have fcntl locking set in
> our configuration file, but it is our understanding from look at the source
> code that this file is locked with dotlock.  
> 
> Any help troubleshooting is appreciated.

From your description it sounds as if you're ntpd syncing each of the 4 servers
against an external time source, first stratum 2/3 sources, then stratum 1
sources in an attempt to cure this problem.

In a clustered server environment, _always_ run a local physical box/router ntpd
server (preferably two) that queries a set of external sources, and services
your internal machine queries.  With RTTs all on your LAN, and using the same
internal time sources for every query, this clock drift issue should be
eliminated.  Obviously, when you first set this up, stop ntpd and run ntpdate to
get an initial time sync for each cluster host.
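
A minimal ntp.conf for such a local server might look roughly like this (the
upstream pool hosts and the LAN range are placeholders; adjust the restrict
lines to your own network):

server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict 192.168.0.0 mask 255.255.0.0 nomodify notrap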

If after setting this up, and we're dealing with bare metal cluster member
servers, then I'd guess you've got a failed/defective clock chip on one host.
If this is Linux, you can work around that by changing the local time source.
There are something like 5 options.  Google for "Linux time" or similar.  Or,
simply replace the hardware--RTC chip, mobo, etc.

If any of these cluster members are virtual machines, regardless of hypervisor,
I'd recommend disabling ntpd, and cron'ing ntpdate to run once every 5
minutes, or once a minute, whatever it takes to get the times to remain
synced, against your local ntpd server mentioned above.  I got to the point with
VMWare ESX that I could make any Linux distro VM of 2.4 or 2.6 stay within one
minute a month before needing a manual ntpdate against our local time source.
The time required to get to that point is a total waste.  Cron'ing ntpdate as I
mentioned is the quick, reliable way to solve this issue, if you're using VMs.

-- 
Stan


Re: [Dovecot] SSD drives are really fast running Dovecot

2011-01-20 Thread Stan Hoeppner
Ed W put forth on 1/20/2011 6:54 AM:

> Oh well, pleading over. Good luck and genuinely thanks to Stan for spending 
> his
> valuable time here. Here's hoping you will continue to do so, but also being
> nice to the dummies?

"Dummies" isn't what this was about.  Again, I misread the intent of your
question as being troll bait against XFS.  That's why I responded with a blunt,
short reply.  I misread you, you misread me, now we're all one big happy family.
 Right?  :)

-- 
Stan


[Dovecot] Imap Error

2011-01-20 Thread Jason Liedtke
This morning I have an Outlook 2007 user who is getting the error below and I
am unsure how to fix it.

Cannot open this item. The server responded: "Error in IMAP command UID
FETCH: Invalid uidset'

Dovecot v1.2.9


# 1.2.9: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-27-generic-pae i686 Ubuntu 10.04.1 LTS
log_timestamp: %Y-%m-%d %H:%M:%S
protocols: imap imaps pop3 pop3s managesieve
listen(default): *
listen(imap): *
listen(pop3): *
listen(managesieve): *:2000
disable_plaintext_auth: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/lib/dovecot/imap-login
login_executable(imap): /usr/lib/dovecot/imap-login
login_executable(pop3): /usr/lib/dovecot/pop3-login
login_executable(managesieve): /usr/lib/dovecot/managesieve-login
mail_privileged_group: mail
mail_location: maildir:~/
mmap_disable: yes
mail_nfs_storage: yes
mail_nfs_index: yes
mbox_write_locks: fcntl dotlock
mail_executable(default): /usr/lib/dovecot/imap
mail_executable(imap): /usr/lib/dovecot/imap
mail_executable(pop3): /usr/lib/dovecot/pop3
mail_executable(managesieve): /usr/lib/dovecot/managesieve
mail_plugins(default): quota imap_quota
mail_plugins(imap): quota imap_quota
mail_plugins(pop3): quota
mail_plugins(managesieve):
mail_plugin_dir(default): /usr/lib/dovecot/modules/imap
mail_plugin_dir(imap): /usr/lib/dovecot/modules/imap
mail_plugin_dir(pop3): /usr/lib/dovecot/modules/pop3
mail_plugin_dir(managesieve): /usr/lib/dovecot/modules/managesieve
imap_client_workarounds(default): tb-extra-mailbox-sep
imap_client_workarounds(imap): tb-extra-mailbox-sep
imap_client_workarounds(pop3):
imap_client_workarounds(managesieve):
pop3_client_workarounds(default):
pop3_client_workarounds(imap):
pop3_client_workarounds(pop3): outlook-no-nuls oe-ns-eoh
pop3_client_workarounds(managesieve):
managesieve_logout_format(default): bytes=%i/%o
managesieve_logout_format(imap): bytes=%i/%o
managesieve_logout_format(pop3): bytes=%i/%o
managesieve_logout_format(managesieve): bytes ( in=%i : out=%o )
namespace:
  type: private
  separator: /
  inbox: yes
  list: yes
  subscriptions: yes
lda:
  postmaster_address:
  hostname:
  auth_socket_path: /var/run/dovecot/auth-master
  mail_plugins: quota sieve
  log_path:
  info_log_path:
  syslog_facility: mail
auth default:
  username_format: %Lu
  passdb:
driver: ldap
args: /etc/dovecot/dovecot-ldap.conf
  passdb:
driver: sql
args: /etc/dovecot/dovecot-sql.conf
  passdb:
driver: ldap
args: /etc/dovecot/dovecot-ldap.conf
  passdb:
driver: ldap
args: /etc/dovecot/dovecot-ldap.conf
  userdb:
driver: ldap
args: /etc/dovecot/dovecot-ldap.conf
  userdb:
driver: sql
args: /etc/dovecot/dovecot-sql.conf
  userdb:
driver: ldap
args: /etc/dovecot/dovecot-ldap.conf
  userdb:
driver: ldap
args: /etc/dovecot/dovecot-ldap.conf
  socket:
type: listen
master:
  path: /var/run/dovecot/auth-master
  mode: 432
  user: vmail
  group: vmail
plugin:
  quota: maildir
  quota_rule: *:bytes=20M
  sieve: ~/sieve/.dovecot.sieve
  sieve_dir: ~/sieve
  sieve_global_path: /var/mail/default.sieve
  sieve_before: /var/mail/sieve/global
  sieve_extensions: +imapflags


Re: [Dovecot] Best Cluster Storage

2011-01-20 Thread Henrique Fernandes
[]'sf.rique


On Thu, Jan 20, 2011 at 12:10 PM, alex handle  wrote:

> On Mon, Jan 17, 2011 at 7:32 AM, Jan-Frode Myklebust 
> wrote:
> > On Fri, Jan 14, 2011 at 05:16:50PM -0800, Brad Davidson wrote:
> >>
> >> Don't give up on the simplest solution too easily - lots of us run NFS
> >> with quite large installs. As a matter of fact, I think all of the large
> >> installs run NFS; hence the need for the Director in 2.0.
> >
> > Not all, if this counts as large:
> >
> >Filesystem            Size  Used Avail Use% Mounted on
> >/dev/gpfsmail         9.9T  8.7T  1.2T  88% /maildirs
> >
> >Filesystem            Inodes    IUsed    IFree IUse% Mounted on
> >/dev/gpfsmail      105279488 90286634 14992854   86% /maildirs
> >
>
> how do you backup that data? :)
>
Same question!

I have about 1TB used and it takes 22 hrs to backup maildirs!

I have problems with ocfs2 finding the files!

>
> -ah
>


Re: [Dovecot] Problems with Upgrade from Courier

2011-01-20 Thread Aaron Pettitt
I was looking at my dovecot.deliver log and it's showing that it's
delivering it to the Inbox:

 

deliver(samantha.fre...@mybridemail.com): 01/20/2011 10:44:27 Info: auth
input: home=/home/vmail/mybridemail.com/samantha.freeze

deliver(samantha.fre...@mybridemail.com): 01/20/2011 10:44:27 Info: maildir:
data=/home/vmail/mybridemail.com/samantha.freeze/

deliver(samantha.fre...@mybridemail.com): 01/20/2011 10:44:27 Info:
maildir++: root=/home/vmail/mybridemail.com/samantha.freeze, index=,
control=, inbox=/home/vmail/mybridemail.com/samantha.freeze

deliver(samantha.fre...@mybridemail.com): 01/20/2011 10:44:27 Info:
msgid=<001a01cbb8b8$eabe89c0$c03b9d40$@net>: saved mail to INBOX

 

However, if I login as her through Telnet, it shows that she has no mail:

 

* FLAGS (\Answered \Flagged \Deleted \Seen \Draft)

* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] Flags
permitted.

* 0 EXISTS

* 0 RECENT

* OK [UIDVALIDITY 1295474980] UIDs valid

* OK [UIDNEXT 1] Predicted next UID

b OK [READ-WRITE] Select completed.

 

If I look in the new folder under her folder, it shows the last emails I
sent this morning:

 

-rw--- 1 vmail vmail   3564 Jan 20 10:21
1295536875.M679042P20187.mybridemail.com,W=3672

-rw--- 1 vmail vmail   3540 Jan 20 10:27
1295537272.M522196P26548.mybridemail.com,W=3649

-rw--- 1 vmail vmail   3554 Jan 20 10:39
1295537952.M462095P9353.mybridemail.com,W=3662

-rw--- 1 vmail vmail   3540 Jan 20 10:44
1295538267.M893549P15392.mybridemail.com,W=3649

 

It's really strange why dovecot can deliver the mail to the inbox but cannot
see the inbox when trying to retrieve the mail 

 

From: Aaron Pettitt [mailto:apett...@comcast.net] 
Sent: Thursday, January 20, 2011 10:02 AM
To: 'dovecot@dovecot.org'
Subject: Problems with Upgrade from Courier

 

I inherited a server from a previous employee.  The server crashed so it was
time to move everything over to another server.  We have a web mail site and
I installed everything running dovecot, postfix and roundcube.  After I
installed it, everything worked great when I created a new user.  The new
user could send and receive emails with no issues.  I then copied the home
directory over from the other server and ran the courier-dovecot migration
script.  It created the subscription files and the dovecot-uidlist files in
each user (about 1000 total users).  When I login as one of the existing
users, it says that there is no mail in the mailbox.  However, if I look at
the user's cur and new folders, there is mail in those folders.  If I send a
new mail to the user, it does not show up in their inbox.  If I look in
their new folder, the new mail that I sent was delivered to that folder but
it does not show up in their inbox.  I've tried going to dovecot directly
through telnet with the same results.  I've been stuck for 2 days now so any
help is greatly appreciated.  Below is my dovecot.conf with all the comments
removed.

 

Thanks all!

 

protocols = imap imaps

disable_plaintext_auth = no

log_path = '/var/log/dovecot/error.log'
info_log_path = '/var/log/dovecot/info.log'
log_timestamp = "%m/%d/%Y %H:%M:%S "

#mail_location = maildir:~/
mail_location = maildir:/home/vmail/%d/%n/

mail_privileged_group = mail
mail_debug = yes

protocol imap {
}

protocol pop3 {
}

protocol managesieve {
  sieve_storage=~/sieve
}

protocol lda {
log_path = /home/vmail/dovecot-deliver.log
auth_socket_path = /var/run/dovecot/auth-master
postmaster_address = postmas...@mybridemal.com
mail_plugins = cmusieve
global_script_path = /home/vmail/globalsieverc
}

auth_verbose = yes
auth_debug = no
auth_debug_passwords = no

auth default {
  passdb sql {
args = /etc/dovecot/dovecot-sql.conf
  }
args = uid=5000 gid=5000 home=/home/vmail/%d/%n allow_all_users=yes
  }
  user = root
  path = /var/run/dovecot/auth-master
  mode = 0600
  user = vmail
  #group = 
}
client {
  path = /var/spool/postfix/private/auth
  mode = 0660
  user = postfix
  group = postfix
}
  }
}
}

 

 

 



Re: [Dovecot] LMTP & home, chroot, mail userdb fields.

2011-01-20 Thread Per Jessen
Lev Serebryakov wrote:

> Hello, Dovecot.
> 
> 
>   I'm using postfix + dovecot with pure virtual users. postfix uses
> standard virtual transport, and dovecot fetches such fields from
> userdb:
> 
> chroot: "/usr/home/hosted/v-mail/%d/%n"
> home: "/"
> mail: "maildir:."
> 
>   Everything works Ok -- dovecot founds users' mail.
> 
>   Now, after upgrade to dovecot2, I want to use it LMTP server as
> virtual_transport in postifx. I've changed virtual_transport setting
> to "lmtp:unix:/var/run/dovecot/lmtp".
> 
>  dovecot's LMTP can not deliver messages, because it seems that it
> uses userdb fields in some OTHER way. Errors look like this:
> 
> Jan 20 12:19:25 lmtp(38939): Info: Connect from local
> Jan 20 12:19:25 auth: Info: mysql: Connected to /tmp/mysql.sock
> (mailhost) Jan 20 12:19:25 lmtp(38939, l...@domain.com): Error:
> mkdir(./cur) in directory /var/run/dovecot failed: Permission denied
> (euid=3(v-mail) egid=3(v-mail) missing +w perm: ., euid is not
> dir owner) 

That looks like dovecot is trying to create a mailbox (./cur) in the
base directory (/var/run/dovecot).

> How should I change my userdb output to make both POP/IMAP and LMTP
> processes happy?

For starters, I think you need to return a field "mail" containing
perhaps:

maildir:/usr/home/hosted/v-mail/domain/user
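A minimal sketch of such a user_query for dovecot-sql.conf.ext (the table and
column names below are assumptions, not taken from the original setup),
returning absolute home and mail paths instead of chroot-relative ones:

user_query = SELECT \
  '/usr/home/hosted/v-mail/%d/%n' AS home, \
  'maildir:/usr/home/hosted/v-mail/%d/%n' AS mail, \
  uid, gid \
  FROM users WHERE username = '%n' AND domain = '%d'

With home and mail both absolute, the same userdb result should satisfy the
POP/IMAP processes and the LMTP delivery process alike.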


/Per Jessen, Zürich



Re: [Dovecot] domain stripping -SOLVED

2011-01-20 Thread PA
Basically, after thinking about it, I added another SQL user/password DB
lookup that has a default domain name in the SQL query.


passdb {
  args = /usr/local/etc/dovecot/sql.conf.ext
  driver = sql
}


passdb {
  
  args = /usr/local/etc/dovecot/sql.conf2.ext
  driver = sql
}

passdb {
  driver = pam
}

userdb {
  driver = prefetch
}

userdb {
  args = /usr/local/etc/dovecot/sql.conf.ext
  driver = sql
}


userdb {
  
  args = /usr/local/etc/dovecot/sql.conf2.ext
  driver = sql
}



password_query = SELECT username as user, password,
concat('/var/vmail/test2000.com/', maildir) as userdb_home,
concat('maildir:/var/vmail/test2000.com/', maildir) as userdb_mail, 101 as
userdb_uid, 502 as userdb_gid, concat('user quota:messages=+:storage=+',
quota) AS userdb_quota_rule FROM mailbox WHERE username = '%n...@test2000.com'

user_query = SELECT maildir, 101 AS uid, 502 AS gid, concat('user
quota:messages=+:storage=+', quota) as quota_rule FROM mailbox WHERE
username = '%n...@test2000.com' AND active = '1'
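For completeness, a rough sketch of what the rest of sql.conf2.ext could look
like around those two queries (the connect string and password scheme here
are assumptions, not the poster's actual values):

driver = mysql
connect = host=localhost dbname=mail user=dovecot password=secret
default_pass_scheme = MD5-CRYPT
# ...followed by the password_query and user_query shown above, which
# append the default domain to the bare '%n' login name.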



-Original Message-
From: dovecot-bounces+razor=meganet@dovecot.org
[mailto:dovecot-bounces+razor=meganet@dovecot.org] On Behalf Of PA
Sent: Wednesday, January 19, 2011 12:36 PM
To: 'Dovecot Mailing List'
Subject: [Dovecot] domain stripping

Hi, I'm using dovecot 2.0 with a couple of user DBs: sql/prefetch and
pam. Currently, if the user logs in with username@domain it authenticates
against the SQL DB and works fine. If the user logs in with a username and
no @domain, it fails on the SQL lookup and succeeds on the PAM user DB.

However, I was wondering if I can have another SQL DB lookup, so that when
the user logs in and fails against the first two user DBs (sql/pam), this
last SQL user DB is tried with a default domain appended, because the SQL DB
lists usernames with the domain. Currently all users log in with no realm on
the older mail server, and I wanted to migrate these users to dovecot 2.x
with minimal impact, giving these virtual users the ability to log in both
with and without a realm.

 

 Thanks paul.




[Dovecot] Problems with Upgrade from Courier

2011-01-20 Thread Aaron Pettitt
I inherited a server from a previous employee.  The server crashed so it was
time to move everything over to another server.  We have a web mail site and
I installed everything running dovecot, postfix and roundcube.  After I
installed it, everything worked great when I created a new user.  The new
user could send and receive emails with no issues.  I then copied the home
directory over from the other server and ran the courier-dovecot migration
script.  It created the subscription files and the dovecot-uidlist files in
each user (about 1000 total users).  When I login as one of the existing
users, it says that there is no mail in the mailbox.  However, if I look at
the user's cur and new folders, there is mail in those folders.  If I send a
new mail to the user, it does not show up in their inbox.  If I look in
their new folder, the new mail that I sent was delivered to that folder but
it does not show up in their inbox.  I've tried going to dovecot directly
through telnet with the same results.  I've been stuck for 2 days now so any
help is greatly appreciated.  Below is my dovecot.conf with all the comments
removed.

 

Thanks all!

 

protocols = imap imaps

disable_plaintext_auth = no

log_path = '/var/log/dovecot/error.log'
info_log_path = '/var/log/dovecot/info.log'
log_timestamp = "%m/%d/%Y %H:%M:%S "

#mail_location = maildir:~/
mail_location = maildir:/home/vmail/%d/%n/

mail_privileged_group = mail
mail_debug = yes

protocol imap {
}

protocol pop3 {
}

protocol managesieve {
  sieve_storage=~/sieve
}

protocol lda {
log_path = /home/vmail/dovecot-deliver.log
auth_socket_path = /var/run/dovecot/auth-master
postmaster_address = postmas...@mybridemal.com
mail_plugins = cmusieve
global_script_path = /home/vmail/globalsieverc
}

auth_verbose = yes
auth_debug = no
auth_debug_passwords = no

auth default {
  passdb sql {
args = /etc/dovecot/dovecot-sql.conf
  }

args = uid=5000 gid=5000 home=/home/vmail/%d/%n allow_all_users=yes
  }

  user = root

  path = /var/run/dovecot/auth-master
  mode = 0600

  user = vmail
  #group =
}
client {
  path = /var/spool/postfix/private/auth
  mode = 0660
  user = postfix
  group = postfix
}
  }
}

}

 

 

 



[Dovecot] Dotlock dovecot-uidlist errors / NFS / High Load

2011-01-20 Thread list
As of late our four node dovecot 1.2.13 cluster has been experiencing a
massive number of these dotlock errors:

Created dotlock file's timestamp is different than current time
(1295480202 vs 1295479784): /mail/user/Maildir/dovecot-uidlist

These dotlock errors correspond with very high load averages, and
eventually we have to turn off all but one server to stop them from
occurring.  We first assumed this trend was related to the NFS storage, but
we could not find a networking issue or NFS-related problem to speak of.
We run the mail storage on NFS, hosted on a CentOS 5.5 host and mounted
with the following options:

udp,nodev,noexec,nosuid
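For reference, that mount roughly corresponds to an fstab entry like the
following (the server name and export path are placeholders, not our real
ones):

nfsserver:/export/mail  /mail  nfs  udp,nodev,noexec,nosuid  0 0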

Secondly, we thought the issues were due to NTP since the timestamps vary so
widely, so we rebuilt our NTP servers and found closer stratum 1 source
clocks to synchronize to, hoping it would alleviate the problem, but the
dotlock errors returned after about 12 hours.  We have fcntl locking set in
our configuration file, but it is our understanding from looking at the
source code that this file is locked with a dotlock.
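For what it's worth, the setting usually meant by "fcntl locking" in dovecot
1.x is lock_method. A sketch of the NFS-related lines, assuming defaults
where they do not appear in the doveconf output below (fcntl is already the
default lock_method, which is why it is not listed):

lock_method = fcntl
mmap_disable = yes
dotlock_use_excl = no
mail_nfs_storage = yes
mail_nfs_index = yes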

Any help troubleshooting is appreciated.

Thanks,

Michael


# 1.2.13: /etc/dovecot.conf
# OS: Linux 2.6.18-194.8.1.el5 x86_64 CentOS release 5.5 (Final) 
protocols: imap pop3
listen(default): *:143
listen(imap): *:143
listen(pop3): *:110
shutdown_clients: no
login_dir: /var/run/dovecot/login
login_executable(default): /usr/libexec/dovecot/imap-login
login_executable(imap): /usr/libexec/dovecot/imap-login
login_executable(pop3): /usr/libexec/dovecot/pop3-login
login_process_per_connection: no
login_process_size: 128
login_processes_count: 4
login_max_processes_count: 256
login_max_connections: 386
first_valid_uid: 300
mail_location: maildir:~/Maildir
mmap_disable: yes
dotlock_use_excl: no
mail_nfs_storage: yes
mail_nfs_index: yes
mail_executable(default): /usr/libexec/dovecot/imap
mail_executable(imap): /usr/libexec/dovecot/imap
mail_executable(pop3): /usr/libexec/dovecot/pop3
mail_plugin_dir(default): /usr/lib64/dovecot/imap
mail_plugin_dir(imap): /usr/lib64/dovecot/imap
mail_plugin_dir(pop3): /usr/lib64/dovecot/pop3
auth default:
  username_format: %Ln
  worker_max_count: 50
  passdb:
    driver: pam
  userdb:
    driver: passwd



Re: [Dovecot] Best Cluster Storage

2011-01-20 Thread alex handle
On Mon, Jan 17, 2011 at 7:32 AM, Jan-Frode Myklebust  wrote:
> On Fri, Jan 14, 2011 at 05:16:50PM -0800, Brad Davidson wrote:
>>
>> Don't give up on the simplest solution too easily - lots of us run NFS
>> with quite large installs. As a matter of fact, I think all of the large
>> installs run NFS; hence the need for the Director in 2.0.
>
> Not all, if this counts as large:
>
>        Filesystem            Size  Used Avail Use% Mounted on
>        /dev/gpfsmail      9.9T  8.7T  1.2T  88% /maildirs
>
>        Filesystem            Inodes   IUsed   IFree IUse% Mounted on
>        /dev/gpfsmail     105279488 90286634 14992854   86% /maildirs
>

how do you backup that data? :)

-ah


Re: [Dovecot] ldap auth error

2011-01-20 Thread Charles Marcus
On 2011-01-20 3:31 AM, Jan-Frode Myklebust wrote:
> On Wed, Jan 19, 2011 at 05:27:52PM -0500, Charles Marcus wrote:
>> On 2011-01-19 5:04 PM, pch0317 wrote:
>>> I have dovecot 2.0.beta6 and I'm newbie with dovecot.

>> First assignment: upgrade to 2.0.9... why waste time fighting with bugs
>> that are already long fixed?

> RHEL6 ships dovecot 2.0-beta6 (2.0-0.10.beta6.20100630.el6), and many
> sysadmins like to stick with the distro provided packages, so I think
> we'll see quite a few of these until RHEL6.1 or something hopefully
> upgrades the package to something newer..

There are other repos for getting working stable builds... refusing to
do so and sticking with a known buggy pre-release version of critical
software is just not a good idea. If my chosen distro put me in that
position, then I'd find another distro.

-- 

Best regards,

Charles


Re: [Dovecot] Delivered-To header without +extension ?

2011-01-20 Thread Charles Marcus
On 2011-01-20 4:06 AM, Per Jessen wrote:
> I've been reading
> a bit, and I think the issue is that postfix adds X-Original-To when
> delivering to a mailbox - which delivery via smtp/lmtp isn't. 
> 
> I'm not sure if postfix should be adding it - postfix applies
> virtual_aliases_maps, then delivers to dovecot via lmtp (set up via
> virtual_transport) - without X-Original-To, the information
> of "original recipient" is lost at this point.

Yikes... I've been planning on switching to LMTP for delivery, but this
would be a show-stopper...

Please keep us updated on what you find out...

-- 

Best regards,

Charles


Re: [Dovecot] SSD drives are really fast running Dovecot

2011-01-20 Thread Ed W

On 20/01/2011 06:06, Stan Hoeppner wrote:

If you think the above is "hostile" you have lived a privileged and sheltered
life, and I envy you. :)  That isn't "hostile" but a combination of losing
patience and being blunt.  "Hostile" is "f--k you!".  Obviously I wasn't being
"hostile".


I'm living in the "Dovecot mailing list" which has historically been a 
very tolerant and forgiving place to learn?  Do you mind if I continue 
to remain "sheltered"?




You're overreacting.  Saying "I'm not your personal XFS tutor" is not being
hostile.  Heh, if you think that was hostile, go live on NANAE for a few days or
a week and report back on what real hostility is. ;)


I for one don't want the tone of this list to deteriorate to "NANAE" 
levels.


There are plenty of lists and forums where you can get sarcastic answers 
from folks with more experience than oneself. Please let's try and keep 
the tone of this list as the friendly, helpful place it has been?


To offer just an *opinion*, being sarcastic (or just less than fully 
helpful) to "idiots" who "can't be bothered" to learn the basics before 
posting is rarely beneficial. Many simply leave and go elsewhere. Some 
do the spadework and become "experienced", but in turn they usually 
respond in the same sharp way to new "inexperienced" questions... The 
circle continues...


I find it helpful to always presume there is a reason I should respect 
the poster, despite what might look like a lazy question to me.  Does 
someone with 10 years of experience in their own field deserve me to be 
sharp with them because they tried to skip a step and ask a "lazy 
question" without doing their own leg work?  Only yesterday I was that 
dimwit having spent 5 hours applying the wrong patch to a kernel and 
wondering why it failed to build, until I finally asked their list and 
got a polite reply pointing out my very trivial mistake...


Let's assume everyone deserves some respect and take the time to answer 
the dim questions politely?


Oh well, pleading over. Good luck, and genuine thanks to Stan for 
spending his valuable time here. Here's hoping you will continue to do 
so, while also being nice to the dummies?


Regards

Ed W



[Dovecot] LMTP & home, chroot, mail userdb fields.

2011-01-20 Thread Lev Serebryakov
Hello, Dovecot.


  I'm using postfix + dovecot with pure virtual users. postfix uses the
standard virtual transport, and dovecot fetches these fields from
userdb:

chroot: "/usr/home/hosted/v-mail/%d/%n"
home: "/"
mail: "maildir:."

  Everything works OK -- dovecot finds the users' mail.

  Now, after upgrading to dovecot2, I want to use its LMTP server as
virtual_transport in postfix. I've changed the virtual_transport setting
to "lmtp:unix:/var/run/dovecot/lmtp".

 dovecot's LMTP can not deliver messages, because it seems that it
uses userdb fields in some OTHER way. Errors look like this:

Jan 20 12:19:25 lmtp(38939): Info: Connect from local
Jan 20 12:19:25 auth: Info: mysql: Connected to /tmp/mysql.sock (mailhost)
Jan 20 12:19:25 lmtp(38939, l...@domain.com): Error: mkdir(./cur) in directory 
/var/run/dovecot failed: Permission denied (euid=3(v-mail) 
egid=3(v-mail) missing +w perm: ., euid is not dir owner)
Jan 20 12:19:25 lmtp(38939, l...@domain.com): Error: Opening INBOX failed: 
Mailbox doesn't exist: INBOX
Jan 20 12:19:25 lmtp(38939, l...@domain.com): Error: mkdir(./cur) in directory 
/var/run/dovecot failed: Permission denied (euid=3(v-mail) 
egid=3(v-mail) missing +w perm: ., euid is not dir owner)
Jan 20 12:19:25 lmtp(38939, l...@domain.com): Info: XXVtE00oOE0bmAAAWL5c8Q: 
msgid=unspecified: save failed to INBOX: Internal error occurred. Refer to 
server log for more information. [2011-01-20 12:19:25]
Jan 20 12:19:25 lmtp(38939, l...@domain.com): Error: BUG: Saving failed to 
unknown storage
Jan 20 12:19:25 lmtp(38939): Info: Disconnect from local: Client quit

  How should I change my userdb output to make both POP/IMAP and LMTP
 processes happy?

-- 
// Black Lion AKA Lev Serebryakov 



Re: [Dovecot] Delivered-To header without +extension ?

2011-01-20 Thread Robert Schetterer
Am 20.01.2011 10:06, schrieb Per Jessen:
> Robert Schetterer wrote:
> 
>> Am 20.01.2011 09:41, schrieb Per Jessen:
>>> Tom Hendrikx wrote:
>>>
 On 20/01/11 08:50, Per Jessen wrote:
> Per Jessen wrote:
>
>> Pascal Volk wrote:
>>
>>> Hi Per,
>>>
>>> now the +ext is included in the Delivered-To header again:
>>> http://hg.dovecot.org/dovecot-2.0/rev/a3a7cc0172fd
>>>
>>
>> Thanks Pascal, that was fast!
>>
>> Last night, I reverse applied the patch you mentioned earlier to
>> 2.0.9, which worked just fine, I'm building it just now.
>
> Probably superfluous, but nevertheless - it works fine, I'm getting
> the
> right Deliver-To header including the +extension.  Interestingly,
> I'm not seeing X-Original-To - isn't that normally added by
> postfix?
>
>
> /Per Jessen, Zürich
>

 X-Original-To: header is added by postfix' pipe(8) command,
>>>
>>> Hmm, it's can't be only pipe() - if I revert to regular virtual
>>> delivery to maildir (instead of lmtp to dovecot), I get the
>>> X-Original-To header, and that involves no pipe().
>>>
 and is only available when delivering to a single recipient
 (_destination_recipient_limit = 1).
>>>
>>> I'll try that.
>>>
>>>
>>> /Per Jessen, Zürich
>>>
>>
>> if have no idea if this help , but its easy to try, after all you
>> loose performance with lmtp if you set 1 here
>>
>> lmtp_destination_recipient_limit (default:
>> $default_destination_recipient_limit)
> 
> Hi Robert
> 
> yes, I've just tried that, but it made no difference.  I've been reading
> a bit, and I think the issue is that postfix adds X-Original-To when
> delivering to a mailbox - which delivery via smtp/lmtp isn't. 
> 
> I'm not sure if postfix should be adding it - postfix applies
> virtual_aliases_maps, then delivers to dovecot via lmtp (set up via
> virtual_transport) - without X-Original-To, the information
> of "original recipient" is lost at this point.
> 

Sounds plausible.

To make sure, you may ask Wietse; perhaps a feature request or some
magical setup tip may occur. *g
Greetz from Munich to Zuerich

> 
> /Per Jessen, Zürich
> 


-- 
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria


Re: [Dovecot] Delivered-To header without +extension ?

2011-01-20 Thread Per Jessen
Robert Schetterer wrote:

> Am 20.01.2011 09:41, schrieb Per Jessen:
>> Tom Hendrikx wrote:
>> 
>>> On 20/01/11 08:50, Per Jessen wrote:
 Per Jessen wrote:

> Pascal Volk wrote:
>
>> Hi Per,
>>
>> now the +ext is included in the Delivered-To header again:
>> http://hg.dovecot.org/dovecot-2.0/rev/a3a7cc0172fd
>>
>
> Thanks Pascal, that was fast!
>
> Last night, I reverse applied the patch you mentioned earlier to
> 2.0.9, which worked just fine, I'm building it just now.

 Probably superfluous, but nevertheless - it works fine, I'm getting
 the
 right Deliver-To header including the +extension.  Interestingly,
 I'm not seeing X-Original-To - isn't that normally added by
 postfix?


 /Per Jessen, Zürich

>>>
>>> X-Original-To: header is added by postfix' pipe(8) command,
>> 
>> Hmm, it's can't be only pipe() - if I revert to regular virtual
>> delivery to maildir (instead of lmtp to dovecot), I get the
>> X-Original-To header, and that involves no pipe().
>> 
>>> and is only available when delivering to a single recipient
>>> (_destination_recipient_limit = 1).
>> 
>> I'll try that.
>> 
>> 
>> /Per Jessen, Zürich
>> 
> 
> if have no idea if this help , but its easy to try, after all you
> loose performance with lmtp if you set 1 here
> 
> lmtp_destination_recipient_limit (default:
> $default_destination_recipient_limit)

Hi Robert

Yes, I've just tried that, but it made no difference.  I've been reading
a bit, and I think the issue is that postfix adds X-Original-To when
delivering to a mailbox - and delivery via smtp/lmtp isn't a mailbox delivery.

I'm not sure if postfix should be adding it - postfix applies
virtual_alias_maps, then delivers to dovecot via lmtp (set up via
virtual_transport) - without X-Original-To, the information about the
"original recipient" is lost at this point.


/Per Jessen, Zürich



[Dovecot] Bug found, assertion failed

2011-01-20 Thread Tobias Daucher

Hi there,
We're running dovecot 2.0.6, with mdbox.
Following message was in our syslog:
Jan 20 09:26:48 servername dovecot: [ID 583609 mail.crit] imap(user): Panic: file istream-limit.c: 
line 79: assertion failed: (v_offset <= lstream->v_size)


The problem could be solved on the client side by just deleting the 
ImapMail folder in Thunderbird.
Why? Thunderbird tried to move a message that obviously wasn't there. Dovecot got killed and the 
message above was in the syslog. The connection was closed and Thunderbird told me the server was dead. 
Thunderbird tried this every few seconds, and there was no way to tell Thunderbird "stop trying to move". 
So the only way was to delete Thunderbird's mail cache.


I think it would be very nice if dovecot didn't die just because the client tries to move a 
message which isn't there.


Thanks,
Tobias Daucher

--


Dr. Nagler & Company GmbH
Hauptstraße 9
92253 Schnaittenbach

Tel : 09622-7197-38
Fax : 09622-7197-50
Web : http://www.nagler-company.com
E-Mail : tobias.dauc...@nagler-company.com

Hauptsitz:  Schnaittenbach
Handelregister: Amberg HRB 4653
Gerichtsstand:  Amberg
Steuernummer:   201/118/51825
USt.-ID-Nummer: DE 273143997
Geschäftsführer:Dr. Martin Nagler, Dr. Dr. Karl-Kuno Kunze


Re: [Dovecot] Delivered-To header without +extension ?

2011-01-20 Thread Robert Schetterer
Am 20.01.2011 09:41, schrieb Per Jessen:
> Tom Hendrikx wrote:
> 
>> On 20/01/11 08:50, Per Jessen wrote:
>>> Per Jessen wrote:
>>>
 Pascal Volk wrote:

> Hi Per,
>
> now the +ext is included in the Delivered-To header again:
> http://hg.dovecot.org/dovecot-2.0/rev/a3a7cc0172fd
>

 Thanks Pascal, that was fast!

 Last night, I reverse applied the patch you mentioned earlier to
 2.0.9, which worked just fine, I'm building it just now.
>>>
>>> Probably superfluous, but nevertheless - it works fine, I'm getting
>>> the
>>> right Deliver-To header including the +extension.  Interestingly, I'm
>>> not seeing X-Original-To - isn't that normally added by postfix?
>>>
>>>
>>> /Per Jessen, Zürich
>>>
>>
>> X-Original-To: header is added by postfix' pipe(8) command, 
> 
> Hmm, it's can't be only pipe() - if I revert to regular virtual delivery
> to maildir (instead of lmtp to dovecot), I get the X-Original-To
> header, and that involves no pipe(). 
> 
>> and is only available when delivering to a single recipient
>> (_destination_recipient_limit = 1).
> 
> I'll try that.
> 
> 
> /Per Jessen, Zürich
> 

I have no idea if this helps, but it's easy to try; after all, you lose
performance with lmtp if you set 1 here:

lmtp_destination_recipient_limit (default:
$default_destination_recipient_limit)

The maximal number of recipients per message for the lmtp message
delivery transport. This limit is enforced by the queue manager. The
message delivery transport name is the first field in the entry in the
master.cf file.

Setting this parameter to a value of 1 changes the meaning of
lmtp_destination_concurrency_limit from concurrency per domain into
concurrency per recipient.
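Put together with the virtual_transport value mentioned in the LMTP thread
above, the relevant main.cf lines would look roughly like this (shown only
as an illustration of where the limit applies, not as a recommendation):

virtual_transport = lmtp:unix:/var/run/dovecot/lmtp
lmtp_destination_recipient_limit = 1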


-- 
Best Regards

MfG Robert Schetterer

Germany/Munich/Bavaria


Re: [Dovecot] Delivered-To header without +extension ?

2011-01-20 Thread Per Jessen
Tom Hendrikx wrote:

> On 20/01/11 08:50, Per Jessen wrote:
>> Per Jessen wrote:
>> 
>>> Pascal Volk wrote:
>>>
 Hi Per,

 now the +ext is included in the Delivered-To header again:
 http://hg.dovecot.org/dovecot-2.0/rev/a3a7cc0172fd

>>>
>>> Thanks Pascal, that was fast!
>>>
>>> Last night, I reverse applied the patch you mentioned earlier to
>>> 2.0.9, which worked just fine, I'm building it just now.
>> 
>> Probably superfluous, but nevertheless - it works fine, I'm getting
>> the
>> right Deliver-To header including the +extension.  Interestingly, I'm
>> not seeing X-Original-To - isn't that normally added by postfix?
>> 
>> 
>> /Per Jessen, Zürich
>> 
> 
> X-Original-To: header is added by postfix' pipe(8) command, 

Hmm, it can't be only pipe() - if I revert to regular virtual delivery
to maildir (instead of lmtp to dovecot), I get the X-Original-To
header, and that involves no pipe(). 

> and is only available when delivering to a single recipient
> (_destination_recipient_limit = 1).

I'll try that.


/Per Jessen, Zürich



Re: [Dovecot] ldap auth error

2011-01-20 Thread Jan-Frode Myklebust
On Wed, Jan 19, 2011 at 05:27:52PM -0500, Charles Marcus wrote:
> On 2011-01-19 5:04 PM, pch0317 wrote:
> > I have dovecot 2.0.beta6 and I'm newbie with dovecot.
> 
> First assignment: upgrade to 2.0.9... why waste time fighting with bugs
> that are already long fixed?

RHEL6 ships dovecot 2.0-beta6 (2.0-0.10.beta6.20100630.el6), and many
sysadmins like to stick with the distro provided packages, so I think
we'll see quite a few of these until RHEL6.1 or something hopefully
upgrades the package to something newer..

We sysadmins should be proactive about known bugs -- so maybe we should
set up test cases for all known bugs fixed since 2.0-beta6, and report
these to Red Hat support ?  ;-)


  -jf


Re: [Dovecot] Delivered-To header without +extension ?

2011-01-20 Thread Tom Hendrikx
On 20/01/11 08:50, Per Jessen wrote:
> Per Jessen wrote:
> 
>> Pascal Volk wrote:
>>
>>> Hi Per,
>>>
>>> now the +ext is included in the Delivered-To header again:
>>> http://hg.dovecot.org/dovecot-2.0/rev/a3a7cc0172fd
>>>
>>
>> Thanks Pascal, that was fast!
>>
>> Last night, I reverse applied the patch you mentioned earlier to
>> 2.0.9, which worked just fine, I'm building it just now.
> 
> Probably superfluous, but nevertheless - it works fine, I'm getting the
> right Deliver-To header including the +extension.  Interestingly, I'm
> not seeing X-Original-To - isn't that normally added by postfix?
> 
> 
> /Per Jessen, Zürich
> 

X-Original-To: header is added by postfix' pipe(8) command, and is only
available when delivering to a single recipient
(_destination_recipient_limit = 1).

I don't see any options in the smtp/lmtp manpage that enable adding a
header like this, which is logical since both support multiple
recipients by protocol spec, iirc.
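By contrast, a pipe(8)-based delivery to Dovecot's deliver binary can get
per-recipient headers. A typical master.cf sketch (the transport name, user
and paths are the usual wiki-style example, not taken from anyone's posted
config; the O flag, available since Postfix 2.5, is what asks pipe(8) to
prepend X-Original-To):

dovecot   unix  -       n       n       -       -       pipe
  flags=DROhu user=vmail:vmail argv=/usr/libexec/dovecot/deliver -f ${sender} -d ${recipient}

together with, in main.cf:

dovecot_destination_recipient_limit = 1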

--
Tom


