Re: [Dovecot] Filter from Outlook fails after migration

2010-11-15 Thread Antonio Perez-Aranda
After many checks, Outlook 2003 filters are working with this added to
the advertised capabilities:

  imap_capability = +I18NLEVEL=1 NAMESPACE
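For anyone hitting the same thing: the + prefix appends to Dovecot's
default capability list rather than replacing it. A quick way to see what
the server actually advertises before login (illustrative session; the
host and full capability list will differ on your setup):

  $ telnet imap.example.com 143
  * OK [CAPABILITY IMAP4rev1 ... I18NLEVEL=1 NAMESPACE] Dovecot ready.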

2010/11/10 Timo Sirainen t...@iki.fi:
 On Wed, 2010-11-10 at 17:32 +0100, Antonio Perez-Aranda wrote:
 Well, if I put all capabilities in the pre-login message, filters are running
 correctly

 I hate Outlook ...

 Can you check which capability specifically it wants, or is it all of
 them? I can't really think of how any capabilities would affect filters.





Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Jose Celestino
On Dom, 2010-11-14 at 22:34 -0800, Daniel L. Miller wrote:
 
 You're quite the enabler, Timo!  Now any mail admin who tries to lecture 
 users on only sending attachments to those in-house personnel who really 
 need it, because otherwise it's wasteful of precious server storage 
 space, is going to hear the response: "Dude - don't stress out so much - 
 just set up Dovecot with SIS!"
 
 Thanks again!

Hi,

I have to agree that Dovecot promotes bad habits.

For instance, we used to have to check CPU usage graphs for our IMAP
servers several times a day with the previous server software; now
sometimes a day passes and we don't check them. At first the operations
dudes would notice our not logging in and call to check if we were sick,
if our fifth grandfather had passed away or if we had hurt both our
hands playing badminton; now they just give us a weird
you-lazy-bastard kind of look.

Also, we used to be called often because of IMAP problems and we were
always on full alert; nowadays we sometimes don't even answer the phone if
we're in the middle of an Urban Terror game, because we're almost sure
that the call is not to report a problem, and those nice red team folks
really depend on our headshot skills and the blue team ones on our
grenade-throwing disability.

I think Timo should quit the crap and revert some patches to restore
the old and healthy practices!!

-- 
Jose Celestino | http://japc.uncovering.org/files/japc-pgpkey.asc

Assumption is the Mother of Screw-Up -- Mr. John Elwood Hale



Re: [Dovecot] v1.2.16 released

2010-11-15 Thread Bruce Bodger

On Nov 15, 2010, at 1:41 AM, Stephan Bosch wrote:

 On 11/15/2010 2:17 AM, Bruce Bodger wrote:
 On Nov 11, 2010, at 2:11 PM, Stephan Bosch wrote:
 Refreshed the ManageSieve patch:
 
 http://www.rename-it.nl/dovecot/1.2/dovecot-1.2.16-managesieve-0.11.12.diff.gz
 http://www.rename-it.nl/dovecot/1.2/dovecot-1.2.16-managesieve-0.11.12.diff.gz.sig
 Hello, Stephan,
 
 Perhaps some of us need a new sieve version for compatibility w/ Dovecot 
 1.2.16 ?  :-)
 
 What do you mean?
 
I get this when I compile and run Dovecot 1.2.16 while still using 
dovecot-1.2-sieve-0.1.18:

Nov 14 15:44:42 server dovecot[37024]: deliver(jjohnson): Module is for different version 1.2.15: /usr/local/lib/dovecot/lda/lib90_sieve_plugin.so
Nov 14 15:44:42 server dovecot[37024]: deliver(jjohnson): Fatal: Couldn't load required plugins

I presume I would need a version of sieve specifically for Dovecot 1.2.16.  
Didn't you add some version checking a few months ago?

Thank you,
B. Bodger




Re: [Dovecot] v1.2.16 released

2010-11-15 Thread Timo Sirainen
On 15.11.2010, at 12.34, Bruce Bodger wrote:

 Nov 14 15:44:42 server dovecot[37024]: deliver(jjohnson): Module is for different version 1.2.15: /usr/local/lib/dovecot/lda/lib90_sieve_plugin.so
 Nov 14 15:44:42 server dovecot[37024]: deliver(jjohnson): Fatal: Couldn't load required plugins
 
 I presume I would need a version of sieve specifically for Dovecot 1.2.16.
 Didn't you add some version checking a few months ago?

You just need to recompile.
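I.e., roughly (a sketch; paths are placeholders, and the configure flag
assumes the usual out-of-tree sieve build against the compiled Dovecot
sources):

  cd dovecot-1.2-sieve-0.1.18
  make clean
  ./configure --with-dovecot=/path/to/dovecot-1.2.16
  make && make install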



Re: [Dovecot] Local node indexes in a cluster backend with GFS2

2010-11-15 Thread Timo Sirainen
On 15.11.2010, at 6.44, Aliet Santiesteban Sifontes wrote:

 mail_location = sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n
 
 /var/vmail is a shared clustered filesystem (GFS2) shared by node1 and
 node2.
 
 /var/indexes is a local filesystem at each node, so each node has its own
 /var/indexes on ext3 and RAID1 for better performance; I mean each
 node has a separate /var/indexes of its own.

This is a bad idea. With dbox the message flags are only stored in index files, 
so if you lose indexes you lose all message flags. Users won't be happy.
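If the local disks are only there for speed, keep the authoritative
indexes on shared storage instead. A sketch of the safer layouts, reusing
the quoted paths (the first form simply stores indexes next to the mail;
/var/vmail/indexes in the second is an assumed path on the same GFS2
volume):

  mail_location = sdbox:/var/vmail/%d/%3n/%n/sdbox
  mail_location = sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/vmail/indexes/%d/%3n/%n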


Re: [Dovecot] v1.2.16 released

2010-11-15 Thread Bruce Bodger


On Nov 15, 2010, at 6:53 AM, Timo Sirainen wrote:

Nov 14 15:44:42 server dovecot[37024]: deliver(jjohnson): Module is for different version 1.2.15: /usr/local/lib/dovecot/lda/lib90_sieve_plugin.so
Nov 14 15:44:42 server dovecot[37024]: deliver(jjohnson): Fatal: Couldn't load required plugins


I presume I would need a version of sieve specifically for  
Dovecot 1.2.16.  Didn't you add some version checking a few months  
ago?


You just need to recompile.


I did that, however I did not do a 'make clean' first.  Once I did, and  
then recompiled, all is well.


Thanks to you and Stephan.

B. Bodger




Re: [Dovecot] proctitle woes with 2.0.7 vs. 2.0.6

2010-11-15 Thread Timo Sirainen
On Sat, 2010-11-13 at 02:25 +0100, Clemens Schrimpe wrote:
  It seems, that while anvil has gained some verbosity,
  
  Yes.
  
  auth and imap-login (and maybe pop-login) seem to have lost their voice.
  
  Is there any intention behind this or did I run into a bug?

The bug was in a somewhat unexpected place. Fixed:
http://hg.dovecot.org/dovecot-2.0/rev/656da7e0d6b9




Re: [Dovecot] Dovecot will not start, error in stat(/var/run/dovecot)

2010-11-15 Thread Timo Sirainen
On Sun, 2010-11-14 at 10:31 -0600, Frank Collette IV wrote:
 Error: stat(/var/run/dovecot) failed: Invalid argument
 Fatal: Invalid configuration in /etc/dovecot.conf

That's strange. It just shouldn't ever happen. Try reinstalling Dovecot
or something.




[Dovecot] Trying to building a customized auth plugin

2010-11-15 Thread Antonio Perez-Aranda
I'm on Dovecot 2.0.7 and I'm trying to build a customized auth plugin.

I took passdb-passwd-file.c and userdb-passwd-file.c and tried to build
them with a simple gcc command outside the Dovecot build environment, for example:

export DOVECOT=/path/to/untar/dovecot-2.0.7
gcc -fPIC -shared -g -Wall -I$DOVECOT \
 -I$DOVECOT/src/lib \
 -I$DOVECOT/src/lib-auth  \
 -I$DOVECOT/src/lib-sql \
 -I$DOVECOT/src/lib-settings \
 -I$DOVECOT/src/lib-ntlm \
 -I$DOVECOT/src/lib-master \
 -I$DOVECOT/src/auth \
 passdb-passwd-file.c -o passdb-passwd-file.o

With this, I get errors related to uoff_t.

Is it possible to build this plugin this way?

Another way is patching one of the plugins in the Dovecot RPM, but I
would prefer to ship this as a separate RPM.


Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Stan Hoeppner
Daniel L. Miller put forth on 11/15/2010 12:34 AM:
 Now any mail admin who tries to lecture
 users on only sending attachments to those in-house personnel who really
 need it, because otherwise it's wasteful of precious server storage
 space, is going to hear the response, Dude - don't stress out so much -
 just setup Dovecot with SIS!.

No, the OP won't hear this from end users.  They don't even know what
Dovecot is, will have never heard of it nor SIS.

The only folks he might take grief from is fellow sysadmins.  But why
would they care?

Also, what if at some point management forces you to dump Dovecot and
move to a platform that doesn't offer SIS?  If you change your tune with
said users now, in the future, you may have screwed yourself after
re-educating your users to the SIS way.

-- 
Stan


Re: [Dovecot] Trying to building a customized auth plugin

2010-11-15 Thread Timo Sirainen
On 15.11.2010, at 18.03, Antonio Perez-Aranda wrote:

 gcc -fPIC -shared -g -Wall -I$DOVECOT \
 -I$DOVECOT/src/lib \
 -I$DOVECOT/src/lib-auth  \
 -I$DOVECOT/src/lib-sql \
 -I$DOVECOT/src/lib-settings \
 -I$DOVECOT/src/lib-ntlm \
 -I$DOVECOT/src/lib-master \
 -I$DOVECOT/src/auth \
 passdb-passwd-file.c -o passdb-passwd-file.o
 
 With this, I get errors related to uoff_t

You need to add -DHAVE_CONFIG_H
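I.e. something like this should get past the uoff_t errors (untested
sketch; it assumes you already ran ./configure in $DOVECOT so config.h
exists, and the output filename is just an example):

  export DOVECOT=/path/to/untar/dovecot-2.0.7
  gcc -fPIC -shared -g -Wall -DHAVE_CONFIG_H -I$DOVECOT \
   -I$DOVECOT/src/lib \
   -I$DOVECOT/src/lib-auth \
   -I$DOVECOT/src/lib-sql \
   -I$DOVECOT/src/lib-settings \
   -I$DOVECOT/src/lib-ntlm \
   -I$DOVECOT/src/lib-master \
   -I$DOVECOT/src/auth \
   passdb-passwd-file.c -o passdb-passwd-file.so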



[Dovecot] dsync mbox-mdbox: Unexpectedly lost From-line and other issues from a big conversion.

2010-11-15 Thread Axel Thimm
Hi,

I'm trying to convert a 33GB mail store from mbox to compressed mdbox
(largest mbox is 2.7GB). The source store is live, i.e. mails are being
delivered into it and mails are being read. Actually it is my own
mail. :)
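(For context, the conversion runs per user with the standard 2.0
invocation, roughly:

  dsync -u user mirror mdbox:~/mdbox

with compression handled by the zlib plugin; paths are simplified here.)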

Although my test runs were very successful, I ran into trouble with
the first run on the whole store. After fighting a bit with errors like

Error: Trying to open a non-listed mailbox with guid=f59c5b31b8f3df4c7018e7dd553b

I did several reruns in case these errors would be fixed on the next
iteration, but my mdbox storage wasn't growing anymore.

I decided to restart after removing all old index files from the source
store, in case I had some corruption somewhere. Since mbox/maildir
index files are completely recreatable, this would at worst just slow
things down.

Indeed many of the errors went away, and I managed to convert about
10-20%, but then I hit another array of errors that would persist even
after restarting:

 $ grep -v Info: dsync2.log.old*
 dsync2.log.old1:dsync(user): Error: Next message unexpectedly lost from mbox file /home/user/mail/lists/mplayerhq.hu/ffmpeg-devel at 58706201 (cached)
 dsync2.log.old1:dsync(user): Error: read(msg input) failed: Invalid argument
 dsync2.log.old1:dsync(user): Error: Next message unexpectedly lost from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6507197 (cached)
 dsync2.log.old1:dsync(user): Error: Unexpectedly lost From-line from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6486645
 dsync2.log.old2:dsync(user): Error: mdbox /home/user/mdbox/mailboxes/lists/mplayerhq.hu/ffmpeg-devel/dbox-Mails: map uid lost for uid 26483
 dsync2.log.old2:dsync(user): Error: msg guid lookup failed: Internal error occurred. Refer to server log for more information. [2010-11-15 02:36:21]
 dsync2.log.old2:dsync(user): Warning: mdbox /home/user/mdbox/storage: rebuilding indexes
 dsync2.log.old2:dsync(user): Error: Corrupted dbox file /home/user/mdbox/storage/m.1725 (around offset=697710): msg header has bad magic value
 dsync2.log.old2:dsync(user): Warning: dbox: Copy of the broken file saved to /home/user/mdbox/storage/m.1725.broken
 dsync2.log.old2:dsync(user): Warning: Transaction log file /home/user/mdbox/storage/dovecot.map.index.log was locked for 295 seconds
 dsync2.log.old2:dsync(user): Error: Next message unexpectedly lost from mbox file /home/user/mail/lists/mplayerhq.hu/ffmpeg-devel at 58706201 (cached)
 dsync2.log.old2:dsync(user): Error: read(msg input) failed: Invalid argument
 dsync2.log.old2:dsync(user): Error: Next message unexpectedly lost from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6507197 (cached)
 dsync2.log.old2:dsync(user): Error: Unexpectedly lost From-line from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6486645
 dsync2.log.old2:dsync(user): Error: Unexpectedly lost From-line from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6507197
 dsync2.log.old2:dsync(user): Warning: Mailbox changes caused a desync. You may want to run dsync again.
 dsync2.log.old3:dsync(user): Error: mdbox /home/user/mdbox/mailboxes/lists/mplayerhq.hu/ffmpeg-devel/dbox-Mails: map uid lost for uid 26484
 dsync2.log.old3:dsync(user): Error: msg guid lookup failed: Internal error occurred. Refer to server log for more information. [2010-11-15 09:49:26]
 dsync2.log.old3:dsync(user): Warning: mdbox /home/user/mdbox/storage: rebuilding indexes
 dsync2.log.old3:dsync(user): Error: Corrupted dbox file /home/user/mdbox/storage/m.1725 (around offset=758233): msg header has bad magic value
 dsync2.log.old3:dsync(user): Error: link(/home/user/mdbox/storage/m.1725, /home/user/mdbox/storage/m.1725.broken) failed: File exists
 dsync2.log.old3:dsync(user): Warning: Transaction log file /home/user/mdbox/storage/dovecot.map.index.log was locked for 271 seconds
 dsync2.log.old3:dsync(user): Warning: Transaction log file /var/cache/dovecot/indexes/user/lists/lists.fedoraproject.org/.imap/users/dovecot.index.log was locked for 180 seconds
 dsync2.log.old3:dsync(user): Error: Next message unexpectedly lost from mbox file /home/user/mail/lists/mplayerhq.hu/ffmpeg-devel at 58706201 (cached)
 dsync2.log.old3:dsync(user): Error: read(msg input) failed: Invalid argument
 dsync2.log.old3:dsync(user): Error: Next message unexpectedly lost from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6507197 (cached)
 dsync2.log.old3:dsync(user): Error: Unexpectedly lost From-line from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6486645
 dsync2.log.old3:dsync(user): Error: Unexpectedly lost From-line from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6507197
 dsync2.log.old3:dsync(user): Warning: Mailbox changes caused a desync. You may want to run dsync again.
 dsync2.log.old4:dsync(user): Warning: Transaction log file 
 

Re: [Dovecot] dsync mbox-mdbox: Unexpectedly lost From-line and other issues from a big conversion.

2010-11-15 Thread Timo Sirainen
On 15.11.2010, at 18.15, Axel Thimm wrote:

 dsync2.log.old1:dsync(user): Error: Next message unexpectedly lost from mbox file /home/user/mail/lists/mplayerhq.hu/ffmpeg-devel at 58706201 (cached)
 dsync2.log.old1:dsync(user): Error: Next message unexpectedly lost from mbox file /home/user/mail/lists/gnupg.org/gnupg-users at 6507197 (cached)

Looks like public mailing list archives. Could you put the mbox file(s) 
somewhere I can download them? Maybe I can easily reproduce the problem.



Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Charles Marcus
On 2010-11-15 1:06 PM, Stan Hoeppner wrote:
 No, the OP won't hear this from end users.  They don't even know what
 Dovecot is, will have never heard of it nor SIS.
 
 The only folks he might take grief from is fellow sysadmins.  But why
 would they care?

Methinks it was a joke, Stan... ;)

 Also, what if at some point management forces you to dump Dovecot and
 move to a platform that doesn't offer SIS?  If you change your tune with
 said users now, in the future, you may have screwed yourself after
 re-educating your users to the SIS way.

The migration should take care of it - each email that has an attachment
that has been SiS'd would then convert to non-SiS'd with no data loss.
Of course you should test it, because you might have some initial
problems if you dramatically underestimate the storage necessary
*because* of SiS...

-- 

Best regards,

Charles


Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Jerry
On Mon, 15 Nov 2010 12:06:40 -0600
Stan Hoeppner s...@hardwarefreak.com articulated:

 Daniel L. Miller put forth on 11/15/2010 12:34 AM:
  Now any mail admin who tries to lecture
  users on only sending attachments to those in-house personnel who
  really need it, because otherwise it's wasteful of precious server
  storage space, is going to hear the response: "Dude - don't stress
  out so much - just set up Dovecot with SIS!"
 
 No, the OP won't hear this from end users.  They don't even know what
 Dovecot is, will have never heard of it nor SIS.
 
 The only folks he might take grief from is fellow sysadmins.  But why
 would they care?
 
 Also, what if at some point management forces you to dump Dovecot and
 move to a platform that doesn't offer SIS?  If you change your tune
 with said users now, in the future, you may have screwed yourself
 after re-educating your users to the SIS way.

Now that appears to be a rather cavalier attitude to take towards the
OP's end users.

True, at some point management might move to a new platform.
Conversely, they might just shut down the enterprise entirely. Worrying
about what might happen is a fruitless and self defeating venture. The
OP should carefully analyze his present situation. Then weighing the
pros and cons of the various avenues available to him, choose the one
that offers the greatest rewards. Rewards being an all inclusive term.

It is not reality that matters, but how one perceives things.

-- 
Jerry ✌
dovecot.u...@seibercom.net

Disclaimer: off-list followups get on-list replies or get ignored.
Please do not ignore the Reply-To header.
__





Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Stan Hoeppner
Jerry put forth on 11/15/2010 12:47 PM:

 Also, what if at some point management forces you to dump Dovecot and
 move to a platform that doesn't offer SIS?  If you change your tune
 with said users now, in the future, you may have screwed yourself
 after re-educating your users to the SIS way.
 
 Now that appears to be a rather cavalier attitude to take towards the
 OP's end users.

End users hate self-contradictory statements made by anyone, especially
the IT department, and especially when they see no tangible benefit from
said change.  They see the change as a pain in their ass, and simply a
benefit to the IT department.  In these situations, the IT dept had better
have a manager on staff with some darn good PR skills. :)

-- 
Stan


Re: [Dovecot] (no subject)

2010-11-15 Thread David Ford
Thanks for playing, paley wiener spammer.  GTFO

On 11/15/2010 02:24 PM, Radio Tron wrote:
 http://aigipe.it/here.php


   


Re: [Dovecot] Local node indexes in a cluster backend with GFS2

2010-11-15 Thread Aliet Santiesteban Sifontes
Should I set mmap_disable = yes when storing indexes in a GFS2 shared
filesystem?

2010/11/15 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 Ok, I will also create a LUN as shared clustered storage for the indexes. Any
 considerations to take into account when the indexes are shared by many
 nodes?
 Thank you all...

 2010/11/15 Timo Sirainen t...@iki.fi

 On 15.11.2010, at 6.44, Aliet Santiesteban Sifontes wrote:

  mail_location =
  sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n
 
  /var/vmail is shared clustered filesystem with GFS2 shared by node1 and
  node2
 
  /var/indexes is a local filesystem at the node, so each node has his own
  /var/indexes stuff on ext3 and raid1 for improving performance, I mean
 each
  node a different /var/indexes of its own.

 This is a bad idea. With dbox the message flags are only stored in index
 files, so if you lose indexes you lose all message flags. Users won't be
 happy.





Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Jerry
On Mon, 15 Nov 2010 13:28:03 -0600
Stan Hoeppner s...@hardwarefreak.com articulated:

 Jerry put forth on 11/15/2010 12:47 PM:
 
  Also, what if at some point management forces you to dump Dovecot
  and move to a platform that doesn't offer SIS?  If you change your
  tune with said users now, in the future, you may have screwed
  yourself after re-educating your users to the SIS way.
  
  Now that appears to be a rather cavalier attitude to take towards
  the OP's end users.
 
 End users hate self contradictory statements made by anyone,
 especially the IT department, and especially when they see no
 tangible benefit of said change.  They see the change as a pain the
 their ass, and simply a benefit to the IT department.  In these
 situations, the IT dept better have a manager on staff with some darn
 good PR skills. :)

I am not seeing where you got this "self contradictory statements"
from. Furthermore, a reasonably educated individual does not see
change in terms of anal discomfort, unless they are conservatives.
A conservative being someone who never envisioned a new concept that
they did not dislike. In any case, this is not a job for the PR
department. Rather, it just requires someone with a set. More
apropos, I believe you are simply blowing this whole thing out of
proportion.

-- 
Jerry ✌
dovecot.u...@seibercom.net

Disclaimer: off-list followups get on-list replies or get ignored.
Please do not ignore the Reply-To header.
__



Re: [Dovecot] Local node indexes in a cluster backend with GFS2

2010-11-15 Thread Aliet Santiesteban Sifontes
Read this in the GFS2 docs: "mmap/splice support for journaled files
(enabled by using the same on-disk format as for regular files)"

...
2010/11/15 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 Should I set mmap_disable = yes when storing indexes in a GFS2 shared
 filesystem??

 2010/11/15 Aliet Santiesteban Sifontes alietsantieste...@gmail.com

 Ok, I will create a LUN also as a shared clustered storage for indexes, any
 consideration to have into account when the indexes are shared by many
 nodes...
 thank you all...

 2010/11/15 Timo Sirainen t...@iki.fi

 On 15.11.2010, at 6.44, Aliet Santiesteban Sifontes wrote:

  mail_location =
  sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n
 
  /var/vmail is shared clustered filesystem with GFS2 shared by node1 and
  node2
 
  /var/indexes is a local filesystem at the node, so each node has his
 own
  /var/indexes stuff on ext3 and raid1 for improving performance, I mean
 each
  node a different /var/indexes of its own.

 This is a bad idea. With dbox the message flags are only stored in index
 files, so if you lose indexes you lose all message flags. Users won't be
 happy.






Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Stan Hoeppner
Jerry put forth on 11/15/2010 2:26 PM:

 I am not seeing where you got this "self contradictory statements"
 from. 

Then apparently you didn't read the thread.  The OP told his users long
ago "do this".  Then he made a change and told them "oh, you don't
really need to do that anymore".  Then, possibly, down the road, he
tells them, "oh, now you need to do it the old way again".

Users see that as self-contradiction.  Or: "why can't he make up his mind?"

 Furthermore, a reasonably educate individual does not see
 change in the terms of anal discomfort, unless they as conservatives.
 A conservative being someone who never envisioned a new concept that
 they did not dislike. In any case, this is not a job for the PR
 department. Rather, it just requires someone with a set. More
 apropos, I believe you are simply blowing this whole thing out of
 proportion.

I'm blowing nothing out of proportion.  I simply made an observation.  I
don't know about Europe, or anywhere else, but in American business
culture, in general, most users either dislike or outright hate the IT
staff, regardless of the level of education.  In fact, it's the ones who
think they are highly educated who usually hate IT the most, because
they think that given the chance they could do the job better.  This is
because they don't have a clue as to the work that goes on behind the
scenes, and the fact that their requests may never get implemented, or
may take 6 months to go through the IT vetting process before being
approved.  The requirement of calling IT just to get a piece of software
installed on the user's desktop because the user doesn't have rights is
the first thing to turn users against IT.

Now, throw in the original scenario up top, and it's easy to understand
how and why users would resent the best practices policy espoused by the OP.

My original reply to the OP was geared toward saving him grief.  My
recommendation to the OP was to simply say nothing to the users to save
himself potential grief.  That simple.

I believe you are making this into more than it needs to be.  Have you
ever worked in an IT environment with over 1000 users?  And those users
all love the IT staff, baking cookies for them etc?  Send me contact
info PLEASE so I can forward my resume.

-- 
Stan


Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Karsten Bräckelmann
On Mon, 2010-11-15 at 16:27 -0600, Stan Hoeppner wrote:
 Then apparently you didn't read the thread.  The OP told his users long
 ago do this.  Then he made a change and told them [...]

No, he did not.

This thread lost its funny long ago. It started off quite amusing, the
first two posts have been humorous (in case you didn't notice), then
Stan entered the thread. Too bad to see a nice thread die that quickly.


-- 
char *t=\10pse\0r\0dtu...@ghno\x4e\xc8\x79\xf4\xab\x51\x8a\x10\xf4\xf4\xc4;
main(){ char h,m=h=*t++,*x=t+2*h,c,i,l=*x,s=0; for (i=0;il;i++){ i%8? c=1:
(c=*++x); c128  (s+=h); if (!(h=1)||!t[s+h]){ putchar(t[s]);h=m;s=0; }}}



Re: [Dovecot] Local node indexes in a cluster backend with GFS2

2010-11-15 Thread Timo Sirainen
You could try, but mmap_disable=yes might be faster (or might not).
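I.e. the config to benchmark against the default would be (a sketch;
lock_method = fcntl is the default anyway, shown only for completeness):

  mmap_disable = yes
  lock_method = fcntl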

On 15.11.2010, at 20.30, Aliet Santiesteban Sifontes wrote:

 Read this in GFS2 docs:
 mmap/splice support for journaled files (enabled by using the same on disk
 format as for regular files)
 
 ...
 2010/11/15 Aliet Santiesteban Sifontes alietsantieste...@gmail.com
 
 Should I set mmap_disable = yes when storing indexes in a GFS2 shared
 filesystem??
 
 2010/11/15 Aliet Santiesteban Sifontes alietsantieste...@gmail.com
 
 Ok, I will create a LUN also as a shared clustered storage for indexes, any
 consideration to have into account when the indexes are shared by many
 nodes...
 thank you all...
 
 2010/11/15 Timo Sirainen t...@iki.fi
 
 On 15.11.2010, at 6.44, Aliet Santiesteban Sifontes wrote:
 
 mail_location =
 sdbox:/var/vmail/%d/%3n/%n/sdbox:INDEX=/var/indexes/%d/%3n/%n
 
 /var/vmail is shared clustered filesystem with GFS2 shared by node1 and
 node2
 
 /var/indexes is a local filesystem at the node, so each node has his
 own
 /var/indexes stuff on ext3 and raid1 for improving performance, I mean
 each
 node a different /var/indexes of its own.
 
 This is a bad idea. With dbox the message flags are only stored in index
 files, so if you lose indexes you lose all message flags. Users won't be
 happy.
 
 
 
 



Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Charles Marcus
On 2010-11-15 5:27 PM, Stan Hoeppner wrote:
 The OP told his users long ago do this. Then he made a change and
 told them oh, you don't really need to do that anymore. Then,
 possibly, down the road, he tells them, oh, now you need to do it
 the old way again.

Ummm... no, he didn't. Go back and read the first post.

This is silly... Stan, he (the OP) was *joking*...

 Have you ever worked in an IT environment with over 1000 users?

Nope... just little old me and our 50-75 users (high turnover, so it
fluctuates all the time)...

 And those users all love the IT staff, baking cookies for them etc?

Yep - I get that all the time, and offers to be taken to lunch, and even
money (for the things I do for personal/home computers)...

 Send me contact info PLEASE so I can forward my resume.

You'd be bored to death here and your skills would go to waste... ;)

-- 

Best regards,

Charles


Re: [Dovecot] service_count=0 for imap and pop3

2010-11-15 Thread Mark Moseley
 Timo,
 Any hints on how many POP3 and IMAP connections I'd be able to get
 away with in a single threads with the above setup, assuming they're
 relative busy? I.e. if my boxes typically have, say, 200 concurrent
 POP3 connections and 600 IMAP connections, if I used
 process_min_avail=50 for POP3 and process_min_avail=100 for IMAP, I'd
 assume that'd be in the area of 4 POP3 concurrent connections handled
 by that one POP3 thread and 6 concurrent connections handled by that
 one IMAP thead.

 I was thinking something like 3-4 per process (calculated against max
 connections) for POP3 and maybe 5-7 for IMAP (as I'm assuming that at
 least a few will just be in IDLE at any given point).

 I guess it comes down to: are you aware of any internal limits for a
 POP3 or IMAP process where it's going to start to bog down and
 possibly death-spiral? I'm going to experiment with different
 settings, but I'd love to hear any opinions you have on it. Sorry for
 the vagueness.

 BTW, I realize process_min_avail is just a minimum, but without having
 service_count=1, what would cause dovecot to fork off more processes
 than #process_min_avail?


Actually, it looks like the spreading of connections was in my
imagination only :)  It looks like it can get quite 'clumpy'. Here's a
copy/paste from a small segment of ps:

emailuser  2625  0.0  0.1   4748  3132 ?S17:15   0:00  \_
dovecot/imap [2 connections]
emailuser  2626  0.0  0.2   7484  6124 ?S17:15   0:00  \_
dovecot/imap [16 connections]
emailuser  2627  0.0  0.1   4572  2960 ?S17:15   0:00  \_
dovecot/imap [5 connections]
emailuser  2628  0.0  0.5  12968 10332 ?S17:15   0:01  \_
dovecot/imap [30 connections]
emailuser  2630  0.2  1.0  23316 20836 ?S17:15   0:04  \_
dovecot/imap [90 connections]
emailuser  2632  0.7  2.6  59440 54492 ?S17:15   0:14  \_
dovecot/imap [209 connections]
emailuser  4099  0.7  0.3   8636  7252 ?S17:34   0:06  \_
dovecot/pop3 [2 connections]
emailuser  4418  0.8  0.3   9576  8196 ?S17:38   0:05  \_
dovecot/pop3 [6 connections]
emailuser  4478  0.5  0.7  18664 14868 ?S17:39   0:03  \_
dovecot/imap [62 connections]
emailuser  4622  1.1  0.4  10404  8780 ?S17:40   0:05  \_
dovecot/pop3 [8 connections]
emailuser  4733  1.1  0.3   8576  6972 ?S17:42   0:04  \_
dovecot/pop3 [5 connections]

The interesting thing is that there's quite a number of other
imap/pop3 procs with just one connection and a ton of imap procs that
haven't been touched (i.e. are still running as 'root', waiting to
eventually setuid to 'emailuser').

Is there a way to spread those out more or is there probably no need
to? I imagine in the case of PID 2632, a large number of those
connections are just sitting in IDLE and doing nothing beyond
stat64()'s. Or maybe a better question would be, is there a setting
I'm not finding that puts an upper limit on connections for a single
process, so a single process will stop servicing new connections after
it hits a certain # and lets the other less-loaded processes handle
the new connection. All the various *_limit settings appear to be
across all processes, not per-process like this. This btw is 2.0.7
(but with the proctitle patch).


Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Jerry
On Mon, 15 Nov 2010 16:27:03 -0600
Stan Hoeppner s...@hardwarefreak.com articulated:

 Jerry put forth on 11/15/2010 2:26 PM:
 
  I am not seeing where you got this "self contradictory statements"
  from. 
 
 Then apparently you didn't read the thread.  The OP told his users
 long ago do this.  Then he made a change and told them oh, you
 don't really need to do that anymore.  Then, possibly, down the
 road, he tells them, oh, now you need to do it the old way again.
 
 Users see that as self contradiction.  Or why can't he make up his
 mind?
 
  Furthermore, a reasonably educate individual does not see
  change in the terms of anal discomfort, unless they as
  conservatives. A conservative being someone who never envisioned a
  new concept that they did not dislike. In any case, this is not a
  job for the PR department. Rather, it just requires someone with a
  set. More apropos, I believe you are simply blowing this whole
  thing out of proportion.
 
 I'm blowing nothing out of proportion.  I simply made an
 observation.  I don't know about Europe, or anywhere else, but in
 American business culture, in general, most users either dislike or
 outright hate the IT staff, regardless of the level of education.  In
 fact, it's the ones who think they are highly educated who usually
 hate IT the most, because they think that given the chance they could
 do the job better.  This is because they don't have a clue as to the
 work that goes on behind the scenes, and the fact that their
 requests may never get implemented, or may take 6 months to go
 through the IT vetting process before being approved.  The
 requirement of calling IT just to get a piece of software installed
 on the user's desktop because the user doesn't have rights is the
 first thing to turn users against IT.
 
 Now, throw in the original scenario up top, and it's easy to
 understand how and why users would resent the best practices policy
 espoused by the OP.
 
 My original reply to the OP was geared toward saving him grief.  My
 recommendation to the OP was to simply say nothing to the users to
 save himself potential grief.  That simple.
 
 I believe you are making this into more than it needs to be.  Have you
 ever worked in an IT environment with over 1000 users?  And those
 users all love the IT staff, baking cookies for them etc?  Send me
 contact info PLEASE so I can forward my resume.

I have no idea how they do things in Europe either since I live in the
USA. One of my first jobs was maintaining a network for approximately
500 users for a municipality. My first project was installing Postfix
and Courier (since changed to Dovecot) to replace a pre-2000 Exchange
box. I would have gladly stayed with Exchange if they would have
allocated the funds. That would have taken too long to get approved so
I just ditched everything and installed the new system over a weekend.
The point being, you do what you have to do.

I spent 25 years as a high school and semi-pro football official. On any
close play, 50% of the players and fans are going to think you are
nuts. You just learn to live with it, aka "grow a set". You cannot
please everyone, so please yourself. If you are going to worry and cry
over whether or not Joe Schmoe in accounting is mad at you, then perhaps
it is you who should find a new line of work. I am not here to placate
the company's employees.

-- 
Jerry ✌
dovecot.u...@seibercom.net

Disclaimer: off-list followups get on-list replies or get ignored.
Please do not ignore the Reply-To header.
__
The Law of the Perversity of Nature:
You cannot determine beforehand which side of the bread to
butter.


Re: [Dovecot] service_count=0 for imap and pop3

2010-11-15 Thread Timo Sirainen
On 15.11.2010, at 22.58, Mark Moseley wrote:

 Timo,
 Any hints on how many POP3 and IMAP connections I'd be able to get
 away with in a single process with the above setup, assuming they're
 relatively busy?

The problem is that if there is any waiting for locks, all the other 
connections hang there as well waiting for it. Same for any type of disk I/O 
waits. I don't really know how many you could get away with... My official 
recommendation would be to keep it 1 connection = 1 process, since that's 
guaranteed to work.

 BTW, I realize process_min_avail is just a minimum, but without having
 service_count=1, what would cause dovecot to fork off more processes
 than #process_min_avail?

When all the existing processes have reached client_limit, a new process is 
created.

 emailuser  2625  0.0  0.1   4748  3132 ?S17:15   0:00  \_
 dovecot/imap [2 connections]
..
 emailuser  2632  0.7  2.6  59440 54492 ?S17:15   0:14  \_
 dovecot/imap [209 connections]
 
 Is there a way to spread those out more or is there probably no need
 to?

Whichever process is fastest to grab a new connection gets it.

 I imagine in the case of PID 2632, a large number of those
 connections are just sitting in IDLE and doing nothing beyond
 stat64()'s. Or maybe a better question would be, is there a setting
 I'm not finding that puts an upper limit on connections for a single
 process, so a single process will stop servicing new connections after
 it hits a certain # and lets the other less-loaded processes handle
 the new connection. All the various *_limit settings appear to be
 across all processes, not per-processs like this. This btw is 2.0.7
 (but with the proctitle patch).

service imap {
  client_limit = 5
}
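I.e., combined with the earlier settings, the whole thing looks roughly
like this (the numbers are placeholders, not recommendations):

service imap {
  service_count = 0        # reuse processes instead of one per login
  client_limit = 5         # each process serves at most 5 connections
  process_min_avail = 100  # keep at least 100 processes ready
}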



Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Dennis Guhl

Stan Hoeppner schrieb:

[..]

 I'm blowing nothing out of proportion.  I simply made an observation.

I think you might have, by now, noticed the OP's humour, so I won't
stress it.

 I don't know about Europe, or anywhere else, but in American business
 culture, in general, most users either dislike or outright hate the IT
 staff.

I can't speak for Europe at all, but here in Germany most people like
their IT at large.

[..]

 I believe you are making this into more than it needs to be.  Have you
 ever worked in an IT environment with over 1000 users?

Yep.

 And those users all love the IT staff, baking cookies for them etc?

Yep.

 Send me contact info PLEASE so I can forward my resume.

Nope, the job is taken.

. o O (and Charles might be right, he might be bored to death)

Dennis


Re: [Dovecot] service_count=0 for imap and pop3

2010-11-15 Thread Mark Moseley
 Timo,
 Any hints on how many POP3 and IMAP connections I'd be able to get
  away with in a single process with the above setup, assuming they're
  relatively busy?

 The problem is that if there is any waiting for locks, all the other 
 connections hang there as well waiting for it. Same for any type of disk I/O 
 waits. I don't really know how many you could get away with.. My official 
 recommendation would be to keep it 1 connection = 1 process, since that's 
 guaranteed to work.

 BTW, I realize process_min_avail is just a minimum, but without having
 service_count=1, what would cause dovecot to fork off more processes
 than #process_min_avail?

 When all the existing processes have reached client_limit, a new process is 
 created.

 emailuser  2625  0.0  0.1   4748  3132 ?        S    17:15   0:00  \_
 dovecot/imap [2 connections]
 ..
 emailuser  2632  0.7  2.6  59440 54492 ?        S    17:15   0:14  \_
 dovecot/imap [209 connections]

 Is there a way to spread those out more or is there probably no need
 to?

 Whichever process is fastest to grab a new connection gets it.

 I imagine in the case of PID 2632, a large number of those
 connections are just sitting in IDLE and doing nothing beyond
 stat64()'s. Or maybe a better question would be, is there a setting
 I'm not finding that puts an upper limit on connections for a single
 process, so a single process will stop servicing new connections after
 it hits a certain # and lets the other less-loaded processes handle
 the new connection. All the various *_limit settings appear to be
 across all processes, not per-processs like this. This btw is 2.0.7
 (but with the proctitle patch).

 service imap {
  client_limit = 5
 }



Yeah, client_limit looks perfect. I'd googled client_limit a few
times over the past week and for some reason, I thought client_limit
was also cumulative across all procs, instead of per-proc. But it's
working perfectly now.

I see your point about any blocking operations. I'll start low and
work my way up, and abandon service_count=0 completely if I start to
get user complaints about it. Right now the possibility of cutting
context switches by 80% is just too promising to pass up. With
typically 600-700 imap procs and 200-300 pop3 procs, the time spent in
kernel scheduling was getting very high, and these are only dual-core
uniprocessor boxes; it'd be more of a moot point if I had 8+
cores to work with.

Thanks Timo!


Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Stan Hoeppner
Jerry put forth on 11/15/2010 5:00 PM:

 I am not here to placate
 the company's employees.

Whether you think so or not, you indeed are there to meet the needs of
the user base.  A lot of SAs often learn this lesson too late.  Without
users there is no need for your position.  And that fact usually
_doesn't_ cut both ways.  Think about that for a second.

As I said, at many places this isn't an issue.  At others, usually the
large enterprises, IT staff are perennial fodder for the meat grinder,
mainly because personal relationships with users can't be established
simply because of scale.

You never see most of the people who rely on you, yet, they know exactly
who you are when things you are responsible for break.  They _will_ find
you.  If the wrong things break, or break too often, whether it's your
action or lack thereof that causes the breakage, or equipment failure,
etc, it doesn't often matter.  Someone important who relies on that
system can snap a finger and you're gone, often without ever meeting the
person who snapped the fingers, even after 5, 10, 15 years with the
place. :(  It simply matters who in a position of power is having a
really bad day/week/personal life/whatever, and decides to exact some
form of revenge on the world that day, when a failure of _your_ system
causes him or her to go over the edge.  This phenomenon isn't isolated to IT.
 But, for the past 40 years, IT systems have replaced the functions of
huge portions of the office staff, so IT is far more exposed
to wrath. :(

-- 
Stan


Re: [Dovecot] Single-instance storage is bad for you!

2010-11-15 Thread Stan Hoeppner
Dennis Guhl put forth on 11/15/2010 5:23 PM:

 I think you might have, by now, noticed the OP's humour, so I won't
 stress it.

I did.  It was weak, but it was there.  If you noticed, I responded to a
specific point within his post, not to the entire post.  I was making a
serious point, regardless of his original intent.  This happens quite
frequently with mailing list topics. :)

-- 
Stan