Re: IMAPSieve Changed flags cause?

2017-04-07 Thread MRob

On 2017-04-05 18:50, MRob wrote:

RFC 6785 says I should be able to run scripts when a command changes
flags on a message but I can't understand what to put in
imapsieve_mailboxXXX_causes. Dovecot logs something like STORE as an
invalid cause.

How do I trigger an administrator Sieve script from a change in message flags?


Does the lack of replies mean the feature is not currently supported, or is 
everyone just busy?


If not supported, are there plans to let admin scripts be triggered by 
message flag changes?
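For context, this is roughly what an administrator IMAPSieve configuration looks like per the Dovecot/Pigeonhole documentation; the mailbox name and script path below are placeholder examples, not from this thread. Pigeonhole releases of this era accepted only APPEND and COPY as causes, which matches STORE being rejected as invalid; RFC 6785 also defines a FLAG cause, but whether a given Pigeonhole release accepts it has to be checked against its release notes:

```
protocol imap {
  mail_plugins = $mail_plugins imap_sieve
}

plugin {
  sieve_plugins = sieve_imapsieve

  # Example: run an admin script when messages are copied into Spam
  # (mailbox name and script path are illustrative placeholders)
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve
}
```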


Re: Host ... is being updated before previous update had finished

2017-04-07 Thread Mark Moseley
On Mon, Apr 3, 2017 at 6:04 PM, Mark Moseley  wrote:

> We just had a bunch of backend boxes go down due to a DDoS in our director
> cluster. When the DDoS died down, our director ring was a mess.
>
> Each box had thousands (and hundreds per second, which is a bit much) of
> log lines like the following:
>
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.10:9090/left): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (up ->
> down) - setting to state=down vhosts=100
> Apr 03 19:59:29 director: Warning: director(10.1.20.2:9090/right): Host
> 10.1.17.15 is being updated before previous update had finished (down ->
> up) - setting to state=up vhosts=100
>
> This was on every director box and the status of all of the directors in
> 'doveadm director ring status' was 'handshaking'.
>
> Here's a sample packet between directors:
>
> 19:51:23.552280 IP 10.1.20.10.56670 > 10.1.20.1.9090: Flags [P.], seq
> 4147:5128, ack 0, win 0, options [nop,nop,TS val 1373505883 ecr
> 1721203906], length 981
>
> Q.  [f.|.HOST    10.1.20.10  9090  1006732  10.1.17.15  100  D1491260800
> HOST    10.1.20.10  9090  1006733  10.1.17.15  100  U1491260800
> HOST    10.1.20.10  9090  1006734  10.1.17.15  100  D1491260800
> HOST    10.1.20.10  9090  1006735  10.1.17.15  100  U1491260800
> HOST    10.1.20.10  9090  1006736  10.1.17.15  100  D1491260800
> HOST    10.1.20.10  9090  1006737  10.1.17.15  100  U1491260800
> HOST    10.1.20.10  9090  1006738  10.1.17.15  100  D1491260800
> HOST    10.1.20.10  9090  1006739  10.1.17.15  100  U1491260800
> HOST    10.1.20.10  9090  1006740  10.1.17.15  100  D1491260800
> HOST    10.1.20.10  9090  1006741  10.1.17.15  100  U1491260800
> HOST    10.1.20.10  9090  1006742  10.1.17.15  100  D1491260800
> HOST    10.1.20.10  9090  1006743  10.1.17.15  100  U1491260800
> HOST    10.1.20.10  9090  1006744  10.1.17.15  100  D1491260800
> HOST    10.1.20.10  9090  1006745  10.1.17.15  100  U1491260800
> HOST    10.1.20.10  9090  1006746  10.1.17.15  100  D1491260800
> HOST    10.1.20.10  9090  1006747  10.1.17.15  100  U1491260800
> SYNC    10.1.20.10  9090  1011840  7  1491263483  3377546382
>
> I'm guessing that D1491260800 is the user hash (with D for down), and the
> U version is for 'up'.
>
> I'm happy to provide the full tcpdump (and/or doveconf -a), though the
> tcpdump is basically all identical to the one I pasted (same hash, same host).
>
> This seems pretty fragile. There should be some sort of tie break for
> that, instead of bringing the entire cluster to its knees. Or just drop the
> backend host completely. Or something, anything besides hosing things
> pretty badly.
>
> This is 2.2.27, on both the directors and backend. If the answer is
> upgrade to 2.2.28, then I'll upgrade immediately. I see commit
> a9ade104616bbb81c34cc6f8bfde5dab0571afac mentions the same error but the
> commit predates 2.2.27 by a month and a half.
>
> In the meantime, is there any doveadm command I could've run to fix this?
> I tried removing the host (doveadm director remove 10.1.17.15) but that
> didn't do anything. I didn't think to try to flush the mapping for that
> user till just now. I suspect that with the ring unsync'd, flushing the
> user wouldn't have helped.
>
> The only remedy was to kill dovecot on every box in
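For anyone hitting the same state, these are the standard doveadm director subcommands for inspecting and repairing a ring (the host address below is from the thread; whether they help while the ring is stuck handshaking is exactly the open question above):

```
# Show ring members and their sync status
doveadm director ring status

# Show backend hosts, vhost counts, and user counts
doveadm director status

# Drop the flapping backend from the ring
doveadm director remove 10.1.17.15

# Drop cached user -> backend mappings (a single user, or all of them)
doveadm director flush all
```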

Re: [Dovecot-news] v2.2.29.rc1 released

2017-04-07 Thread Aki Tuomi

> On April 7, 2017 at 6:48 PM "Daniel J. Luke"  wrote:
> 
> 
> On Apr 7, 2017, at 11:17 AM, Aki Tuomi  wrote:
> >> On April 7, 2017 at 6:00 PM "Daniel J. Luke"  wrote:
> >> On Apr 7, 2017, at 3:01 AM, Aki Tuomi  wrote:
>  On April 7, 2017 at 9:38 AM Timo Sirainen  wrote:
>  On 7 Apr 2017, at 2.25, Daniel J. Luke  wrote:
> > 
> > On Apr 6, 2017, at 1:33 PM, Timo Sirainen  wrote:
> >> Planning to release v2.2.29 on Monday. Please find and report any bugs 
> >> before that.
> > 
> > I'm still seeing the assert that started showing up for me with 
> > 2.2.28 
> > (https://www.dovecot.org/pipermail/dovecot/2017-February/107250.html)
> > 
> > Below I generate it using doveadm with dovecot 2.2.29rc1 (output 
> > slightly cleaned up so the backtrace is easier to read)
> > 
> > % sudo doveadm index -A \*
> > doveadm(dluke): Panic: file mailbox-list.c: line 1159 
> > (mailbox_list_try_mkdir_root): assertion failed: (strncmp(root_dir, 
> > path, strlen(root_dir)) == 0)
>  
>  This is with mbox? I thought this had been happening already a long 
>  time.. Or if not mbox, what's your doveconf -n?
> >> 
> >> this is mbox & lucene
> >> 
> >>> for mbox and lucene there is a workaround, which is to create a directory 
> >>> called lucene-indexes under the INDEX directory.
> >>> 
> >>> That way the directory creation is not attempted. This is a known bug, 
> >>> but it just hasn't been fixed yet.
> >> 
> >> That directory already exists.
> >> 
> >> mail_location = mbox:~/Mail/:INBOX=~/.mbox
> >> 
> >> % ls -l ~/Mail/.imap/lucene-indexes/
> >> total 429344
> >> -rw-------  1 dluke  dluke   209M Apr  6 11:12 _30ms.cfs
> >> -rw-------  1 dluke  dluke   1.0M Apr  7 08:10 _30rt.cfs
> >> -rw-------  1 dluke  dluke   7.2K Apr  7 08:10 _30ru.cfs
> >> -rw-------  1 dluke  dluke   6.3K Apr  7 08:11 _30rv.cfs
> >> -rw-------  1 dluke  dluke   7.6K Apr  7 08:11 _30rw.cfs
> >> -rw-------  1 dluke  dluke   3.8K Apr  7 10:57 _30rx.cfs
> >> -rw-------  1 dluke  dluke   3.8K Apr  7 10:58 _30ry.cfs
> >> -rw-------  1 dluke  dluke   3.8K Apr  7 10:58 _30rz.cfs
> >> -rw-------  1 dluke  dluke   2.3K Apr  7 10:58 dovecot-expunges.log
> >> -rw-------  1 dluke  dluke    20B Apr  7 10:58 segments.gen
> >> -rw-------  1 dluke  dluke   244B Apr  7 10:58 segments_6214
> >> 
> >> When running 2.2.27 this doesn't happen.
> > 
> > Can you try moving it one folder up, so that it's not under .imap?
> 
> Sure, that fixes that, but I end up with lots of log messages like:
> 
> 2017-04-07 11:43:53.958590-0400  localhost log[5770]: (libdovecot.0.dylib) 
> indexer-worker(dluke): Error: Syncing mailbox lucene-indexes/_0.cfs failed: 
> Mailbox isn't a valid mbox file
> 
> (and the lucene-indexes "folder" appears in Mail.app, which is going to 
> confuse some of my users)
> 
> -- 
> Daniel J. Luke

Just wanted to factor in the bug.

So, as a workaround, you *could* move the indexes to a separate directory using 
the INDEX= parameter, and move lucene-indexes into that directory. But thank you, 
this helps a bit.
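As a sketch of that workaround (the ~/MailIndex path is a placeholder, not something from this thread), the INDEX= parameter in mail_location moves the index tree, and lucene-indexes would then live under it:

```
mail_location = mbox:~/Mail/:INBOX=~/.mbox:INDEX=~/MailIndex
```

The existing ~/Mail/.imap/lucene-indexes directory would then be moved to ~/MailIndex/lucene-indexes.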

Aki


Re: [Dovecot-news] v2.2.29.rc1 released

2017-04-07 Thread Daniel J. Luke
On Apr 7, 2017, at 11:17 AM, Aki Tuomi  wrote:
>> On April 7, 2017 at 6:00 PM "Daniel J. Luke"  wrote:
>> On Apr 7, 2017, at 3:01 AM, Aki Tuomi  wrote:
 On April 7, 2017 at 9:38 AM Timo Sirainen  wrote:
 On 7 Apr 2017, at 2.25, Daniel J. Luke  wrote:
> 
> On Apr 6, 2017, at 1:33 PM, Timo Sirainen  wrote:
>> Planning to release v2.2.29 on Monday. Please find and report any bugs 
>> before that.
> 
> I'm still seeing the assert that started showing up for me with 
> 2.2.28 
> (https://www.dovecot.org/pipermail/dovecot/2017-February/107250.html)
> 
> Below I generate it using doveadm with dovecot 2.2.29rc1 (output slightly 
> cleaned up so the backtrace is easier to read)
> 
> % sudo doveadm index -A \*
> doveadm(dluke): Panic: file mailbox-list.c: line 1159 
> (mailbox_list_try_mkdir_root): assertion failed: (strncmp(root_dir, path, 
> strlen(root_dir)) == 0)
 
 This is with mbox? I thought this had been happening already a long time.. 
 Or if not mbox, what's your doveconf -n?
>> 
>> this is mbox & lucene
>> 
>>> for mbox and lucene there is a workaround, which is to create a directory 
>>> called lucene-indexes under the INDEX directory.
>>> 
>>> That way the directory creation is not attempted. This is a known bug, but 
>>> it just hasn't been fixed yet.
>> 
>> That directory already exists.
>> 
>> mail_location = mbox:~/Mail/:INBOX=~/.mbox
>> 
>> % ls -l ~/Mail/.imap/lucene-indexes/
>> total 429344
>> -rw-------  1 dluke  dluke   209M Apr  6 11:12 _30ms.cfs
>> -rw-------  1 dluke  dluke   1.0M Apr  7 08:10 _30rt.cfs
>> -rw-------  1 dluke  dluke   7.2K Apr  7 08:10 _30ru.cfs
>> -rw-------  1 dluke  dluke   6.3K Apr  7 08:11 _30rv.cfs
>> -rw-------  1 dluke  dluke   7.6K Apr  7 08:11 _30rw.cfs
>> -rw-------  1 dluke  dluke   3.8K Apr  7 10:57 _30rx.cfs
>> -rw-------  1 dluke  dluke   3.8K Apr  7 10:58 _30ry.cfs
>> -rw-------  1 dluke  dluke   3.8K Apr  7 10:58 _30rz.cfs
>> -rw-------  1 dluke  dluke   2.3K Apr  7 10:58 dovecot-expunges.log
>> -rw-------  1 dluke  dluke    20B Apr  7 10:58 segments.gen
>> -rw-------  1 dluke  dluke   244B Apr  7 10:58 segments_6214
>> 
>> When running 2.2.27 this doesn't happen.
> 
> Can you try moving it one folder up, so that it's not under .imap?

Sure, that fixes that, but I end up with lots of log messages like:

2017-04-07 11:43:53.958590-0400  localhost log[5770]: (libdovecot.0.dylib) 
indexer-worker(dluke): Error: Syncing mailbox lucene-indexes/_0.cfs failed: 
Mailbox isn't a valid mbox file

(and the lucene-indexes "folder" appears in Mail.app, which is going to confuse 
some of my users)

-- 
Daniel J. Luke


Re: [Dovecot-news] v2.2.29.rc1 released

2017-04-07 Thread Aki Tuomi

> On April 7, 2017 at 6:00 PM "Daniel J. Luke"  wrote:
> 
> 
> On Apr 7, 2017, at 3:01 AM, Aki Tuomi  wrote:
> >> On April 7, 2017 at 9:38 AM Timo Sirainen  wrote:
> >> On 7 Apr 2017, at 2.25, Daniel J. Luke  wrote:
> >>> 
> >>> On Apr 6, 2017, at 1:33 PM, Timo Sirainen  wrote:
>  Planning to release v2.2.29 on Monday. Please find and report any bugs 
>  before that.
> >>> 
> >>> I'm still seeing the assert that started showing up for me with 
> >>> 2.2.28 
> >>> (https://www.dovecot.org/pipermail/dovecot/2017-February/107250.html)
> >>> 
> >>> Below I generate it using doveadm with dovecot 2.2.29rc1 (output slightly 
> >>> cleaned up so the backtrace is easier to read)
> >>> 
> >>> % sudo doveadm index -A \*
> >>> doveadm(dluke): Panic: file mailbox-list.c: line 1159 
> >>> (mailbox_list_try_mkdir_root): assertion failed: (strncmp(root_dir, path, 
> >>> strlen(root_dir)) == 0)
> >> 
> >> This is with mbox? I thought this had been happening already a long time.. 
> >> Or if not mbox, what's your doveconf -n?
> 
> this is mbox & lucene
> 
> > for mbox and lucene there is a workaround, which is to create a directory 
> > called lucene-indexes under the INDEX directory.
> > 
> > That way the directory creation is not attempted. This is a known bug, but 
> > it just hasn't been fixed yet.
> 
> That directory already exists.
> 
> mail_location = mbox:~/Mail/:INBOX=~/.mbox
> 
> % ls -l ~/Mail/.imap/lucene-indexes/
> total 429344
> -rw-------  1 dluke  dluke   209M Apr  6 11:12 _30ms.cfs
> -rw-------  1 dluke  dluke   1.0M Apr  7 08:10 _30rt.cfs
> -rw-------  1 dluke  dluke   7.2K Apr  7 08:10 _30ru.cfs
> -rw-------  1 dluke  dluke   6.3K Apr  7 08:11 _30rv.cfs
> -rw-------  1 dluke  dluke   7.6K Apr  7 08:11 _30rw.cfs
> -rw-------  1 dluke  dluke   3.8K Apr  7 10:57 _30rx.cfs
> -rw-------  1 dluke  dluke   3.8K Apr  7 10:58 _30ry.cfs
> -rw-------  1 dluke  dluke   3.8K Apr  7 10:58 _30rz.cfs
> -rw-------  1 dluke  dluke   2.3K Apr  7 10:58 dovecot-expunges.log
> -rw-------  1 dluke  dluke    20B Apr  7 10:58 segments.gen
> -rw-------  1 dluke  dluke   244B Apr  7 10:58 segments_6214
> 
> When running 2.2.27 this doesn't happen.
> -- 
> Daniel J. Luke

Can you try moving it one folder up, so that it's not under .imap?

Aki


Solved - Re: SELinux policy to allow Dovecot to connect to Mysql

2017-04-07 Thread Robert Moskowitz
I reread my sql.conf.ext files and realized they were actually 
connecting to localhost.  So I did some googling, and found how to 
connect to the socket:


connect = host=/var/lib/mysql/mysql.sock dbname=postfix user=postfix 
password=Postfix_Database_Password


And all fixed.  No more failures.  Plus it's probably more secure.

On 04/07/2017 10:57 AM, Robert Moskowitz wrote:
The strange thing is that dovecot auth has no problem connecting to 
mysql, but the quota query is what is failing.


On 04/07/2017 10:43 AM, Robert Moskowitz wrote:
As I have noted in previous messages, I've been getting the following on 
my new mailserver:


Apr  7 10:17:27 z9m9z dovecot: dict: Error: mysql(localhost): Connect 
failed to database (postfix): Can't connect to local MySQL server 
through socket '/var/lib/mysql/mysql.sock' (13) - waiting for 25 
seconds before retry


They go away when I setenforce 0.  It is not a timing issue as I 
earlier thought.


So I googled dovecot mysql selinux and the only worthwhile hit was:

http://zszsit.blogspot.com/2012/12/dovecot-mysql-selinux-issue-on-centos6.html 



that provides a /etc/selinux/dovecot2mysql.te and other selinux stuff.

Is there a simpler way, like a setsebool option?

With all the howtos on dovecot with mysql, it is interesting that 
none of them seem to have this problem.  Maybe because they connect 
to mysql through TCP port 3306 which has ITS set of problems (like 
MariaDB defaults to not listening on TCP).


thanks!





Re: [Dovecot-news] v2.2.29.rc1 released

2017-04-07 Thread Daniel J. Luke
On Apr 7, 2017, at 3:01 AM, Aki Tuomi  wrote:
>> On April 7, 2017 at 9:38 AM Timo Sirainen  wrote:
>> On 7 Apr 2017, at 2.25, Daniel J. Luke  wrote:
>>> 
>>> On Apr 6, 2017, at 1:33 PM, Timo Sirainen  wrote:
 Planning to release v2.2.29 on Monday. Please find and report any bugs 
 before that.
>>> 
>>> I'm still seeing the assert that started showing up for me with 
>>> 2.2.28 (https://www.dovecot.org/pipermail/dovecot/2017-February/107250.html)
>>> 
>>> Below I generate it using doveadm with dovecot 2.2.29rc1 (output slightly 
>>> cleaned up so the backtrace is easier to read)
>>> 
>>> % sudo doveadm index -A \*
>>> doveadm(dluke): Panic: file mailbox-list.c: line 1159 
>>> (mailbox_list_try_mkdir_root): assertion failed: (strncmp(root_dir, path, 
>>> strlen(root_dir)) == 0)
>> 
>> This is with mbox? I thought this had been happening already a long time.. 
>> Or if not mbox, what's your doveconf -n?

this is mbox & lucene

> for mbox and lucene there is a workaround, which is to create a directory 
> called lucene-indexes under the INDEX directory.
> 
> That way the directory creation is not attempted. This is a known bug, but 
> it just hasn't been fixed yet.

That directory already exists.

mail_location = mbox:~/Mail/:INBOX=~/.mbox

% ls -l ~/Mail/.imap/lucene-indexes/
total 429344
-rw-------  1 dluke  dluke   209M Apr  6 11:12 _30ms.cfs
-rw-------  1 dluke  dluke   1.0M Apr  7 08:10 _30rt.cfs
-rw-------  1 dluke  dluke   7.2K Apr  7 08:10 _30ru.cfs
-rw-------  1 dluke  dluke   6.3K Apr  7 08:11 _30rv.cfs
-rw-------  1 dluke  dluke   7.6K Apr  7 08:11 _30rw.cfs
-rw-------  1 dluke  dluke   3.8K Apr  7 10:57 _30rx.cfs
-rw-------  1 dluke  dluke   3.8K Apr  7 10:58 _30ry.cfs
-rw-------  1 dluke  dluke   3.8K Apr  7 10:58 _30rz.cfs
-rw-------  1 dluke  dluke   2.3K Apr  7 10:58 dovecot-expunges.log
-rw-------  1 dluke  dluke    20B Apr  7 10:58 segments.gen
-rw-------  1 dluke  dluke   244B Apr  7 10:58 segments_6214

When running 2.2.27 this doesn't happen.
-- 
Daniel J. Luke


Re: SELinux policy to allow Dovecot to connect to Mysql

2017-04-07 Thread Robert Moskowitz
The strange thing is that dovecot auth has no problem connecting to 
mysql, but the quota query is what is failing.


On 04/07/2017 10:43 AM, Robert Moskowitz wrote:
As I have noted in previous messages, I've been getting the following on 
my new mailserver:


Apr  7 10:17:27 z9m9z dovecot: dict: Error: mysql(localhost): Connect 
failed to database (postfix): Can't connect to local MySQL server 
through socket '/var/lib/mysql/mysql.sock' (13) - waiting for 25 
seconds before retry


They go away when I setenforce 0.  It is not a timing issue as I 
earlier thought.


So I googled dovecot mysql selinux and the only worthwhile hit was:

http://zszsit.blogspot.com/2012/12/dovecot-mysql-selinux-issue-on-centos6.html 



that provides a /etc/selinux/dovecot2mysql.te and other selinux stuff.

Is there a simpler way, like a setsebool option?

With all the howtos on dovecot with mysql, it is interesting that none 
of them seem to have this problem.  Maybe because they connect to 
mysql through TCP port 3306 which has ITS set of problems (like 
MariaDB defaults to not listening on TCP).


thanks!



SELinux policy to allow Dovecot to connect to Mysql

2017-04-07 Thread Robert Moskowitz
As I have noted in previous messages, I've been getting the following on my 
new mailserver:


Apr  7 10:17:27 z9m9z dovecot: dict: Error: mysql(localhost): Connect 
failed to database (postfix): Can't connect to local MySQL server 
through socket '/var/lib/mysql/mysql.sock' (13) - waiting for 25 seconds 
before retry


They go away when I setenforce 0.  It is not a timing issue as I earlier 
thought.


So I googled dovecot mysql selinux and the only worthwhile hit was:

http://zszsit.blogspot.com/2012/12/dovecot-mysql-selinux-issue-on-centos6.html

that provides a /etc/selinux/dovecot2mysql.te and other selinux stuff.

Is there a simpler way, like a setsebool option?

With all the howtos on dovecot with mysql, it is interesting that none 
of them seem to have this problem.  Maybe because they connect to mysql 
through TCP port 3306 which has ITS set of problems (like MariaDB 
defaults to not listening on TCP).


thanks!
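One commonly used alternative to hand-writing the .te file from that blog post is to generate a module from the actual AVC denials with the standard audit2allow workflow (the module name dovecot2mysql is arbitrary); the generated rules should be reviewed before loading:

```
# Collect the recent AVC denials and turn them into a loadable module
ausearch -m avc -ts recent | audit2allow -M dovecot2mysql

# Inspect dovecot2mysql.te, then install the compiled policy package
semodule -i dovecot2mysql.pp
```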


LAYOUT=fs and subfolders listing

2017-04-07 Thread Dovecot

dovecot-2.2.28-1.el6_31.wing.x86_64

I've created this public shared namespace with

namespace public2 {
 type = public
 separator = .
 prefix = Public2.
 location = maildir:/mail/public2:INDEX=/mail/%u/public:LAYOUT=fs:DIRNAME=.store

 subscriptions = no
}

and I'm using the "imap_client_workarounds = tb-extra-mailbox-sep"

and filesystem structure listed at the end.

The problem is that with this configuration Thunderbird (and 
Roundcube) shows the Public2 namespace, but it is empty, without any folders.

I've tried with another DIRNAME=StoR - same effect

If I don't use DIRNAME at all (and there are no .store directories), TB sees 
the f1 and f2 folders but not the f1/s1 subfolder. It's as if it doesn't list 
any subfolders for subscription.


Within TB I can create a subfolder, say f1/mysub1, but after it's 
unsubscribed it cannot be subscribed again (it's not shown in the listing).


Can you provide any hints? I can live without DIRNAME=, but it has to 
list subfolders for subscription.


BTW: it works well without LAYOUT=fs, but LAYOUT=fs is kind of a requirement here.

/mail/public2/:
drwxrwsr-x 4 nobody havemail 4096 Apr  7 15:07 f1
drwxrwsr-x 3 nobody havemail 4096 Apr  7 15:02 f2

/mail/public2/f1:
-rw-r--r-- 1 root   havemail   42 Apr  7 15:07 dovecot-acl
-r--r--r-- 1 nobody havemail0 Oct 14  2015 dovecot-shared
-rw-rw-r-- 1 nobody havemail0 Oct 15  2015 dovecot-uidlist
drwxrwsr-x 3 nobody havemail 4096 Apr  7 15:02 s1
drwxrws--- 5 nobody havemail 4096 Apr  7 15:02 .store

/mail/public2/f1/s1:
-rw-r--r-- 1 root   havemail   40 Oct 16  2015 dovecot-acl
-r--r--r-- 1 nobody havemail0 Oct 14  2015 dovecot-shared
-rw-rw-r-- 1 nobody havemail0 Oct 15  2015 dovecot-uidlist
drwxrws--- 5 nobody havemail 4096 Apr  7 15:02 .store

/mail/public2/f1/s1/.store:
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 cur
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 new
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 tmp

/mail/public2/f1/.store:
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 cur
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 new
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 tmp

/mail/public2/f2:
-rw-r--r-- 1 root   havemail   40 Oct 16  2015 dovecot-acl
-r--r--r-- 1 nobody havemail0 Oct 14  2015 dovecot-shared
-rw-rw-r-- 1 nobody havemail0 Oct 15  2015 dovecot-uidlist
drwxrws--- 5 nobody havemail 4096 Apr  7 15:02 .store

/mail/public2/f2/.store:
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 cur
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 new
drwxrws--- 2 nobody havemail 4096 Oct 15  2015 tmp
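When debugging listings like this, it can help to ask Dovecot directly what it sees for the namespace, taking the IMAP client and its subscription handling out of the loop (the username below is a placeholder):

```
# List what Dovecot itself sees in the namespace, ignoring subscriptions
doveadm mailbox list -u someuser 'Public2.*'

# Subscribe from the command line, to rule out client-side LSUB problems
doveadm mailbox subscribe -u someuser Public2.f1.s1
```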


Re: Running sievec as user vmail

2017-04-07 Thread Robert Moskowitz

thanks

On 04/07/2017 08:24 AM, Florian Beer | 42dev wrote:

You could give

# su -c MYSIEVESCRIPT vmail

a try.

Also: # man su


Cheers, Florian
_
42dev e. U. - web solutions & hosting services
http://42dev.eu

On 2017-04-07 14:19, Robert Moskowitz wrote:

My sieve problem turned out to be a permissions problem.  I ran sievec as root
and .svbin needs vmail:mail ownership.

I could always just add the chown command to my process, but I wonder
if there is some 'clean' way to run sievec as user vmail while logged
in as root?

thanks




Re: Running sievec as user vmail

2017-04-07 Thread Florian Beer | 42dev

You could give

# su -c MYSIEVESCRIPT vmail

a try.

Also: # man su


Cheers, Florian
_
42dev e. U. - web solutions & hosting services
http://42dev.eu

On 2017-04-07 14:19, Robert Moskowitz wrote:

My sieve problem turned out to be a permissions problem.  I ran sievec as root
and .svbin needs vmail:mail ownership.

I could always just add the chown command to my process, but I wonder
if there is some 'clean' way to run sievec as user vmail while logged
in as root?

thanks


Running sievec as user vmail

2017-04-07 Thread Robert Moskowitz
My sieve problem turned out to be a permissions problem.  I ran sievec as root 
and .svbin needs vmail:mail ownership.


I could always just add the chown command to my process, but I wonder if 
there is some 'clean' way to run sievec as user vmail while logged in as 
root?


thanks
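Since virtual-mail users usually have a nologin shell, the su approach suggested elsewhere in the thread typically needs a forced shell; sudo -u is an equivalent alternative. A sketch with a placeholder script path:

```
# Compile the script as vmail, forcing a usable shell with -s
su -s /bin/sh -c 'sievec /var/mail/sieve/myscript.sieve' vmail

# Equivalent using sudo
sudo -u vmail sievec /var/mail/sieve/myscript.sieve
```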


Re: Doubt with imap/imap-login processes in proxy server

2017-04-07 Thread Angel L. Mateo


imap-login can handle quite a number of connections (thousands), and an imap 
process is not usually spawned on proxies.


Ok. Thank you.

--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información
y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868889150
Fax: 86337


Re: Doubt with imap/imap-login processes in proxy server

2017-04-07 Thread Aki Tuomi

> On April 7, 2017 at 1:26 PM "Angel L. Mateo"  wrote:
> 
> 
> Hi,
> 
>   I have a question that is still unclear to me after reading 
> https://wiki.dovecot.org/LoginProcess.
> 
>   I'm developing a proxy server (without director, because the backend 
> server is specified by an LDAP lookup).
> 
>   My question is: is the number of concurrent imap (or pop3) users that the 
> proxy can handle determined by the imap-login limit or the imap one?
> 
> -- 
> Angel L. Mateo Martínez

imap-login can handle quite a number of connections (thousands), and an imap 
process is not usually spawned on proxies.

Aki
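For reference, the knobs that govern this sit on the imap-login service; the numbers below are illustrative, not recommendations. With service_count = 0 each login process stays resident ("high-performance mode") and serves up to client_limit connections:

```
service imap-login {
  service_count = 0       # process is reused, serving many connections
  client_limit = 4096     # max concurrent connections per login process
  process_min_avail = 4   # e.g. one process per CPU core
}
```

So on a pure proxy the concurrent-user ceiling comes from client_limit times the number of login processes, not from the imap service's limits.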


Doubt with imap/imap-login processes in proxy server

2017-04-07 Thread Angel L. Mateo

Hi,

	I have a question that is still unclear to me after reading 
https://wiki.dovecot.org/LoginProcess.


	I'm developing a proxy server (without director, because the backend 
server is specified by an LDAP lookup).


	My question is: is the number of concurrent imap (or pop3) users that the 
proxy can handle determined by the imap-login limit or the imap one?


--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información
y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868889150
Fax: 86337


Re: [Dovecot-news] v2.2.29.rc1 released

2017-04-07 Thread Aki Tuomi

> On April 7, 2017 at 9:38 AM Timo Sirainen  wrote:
> 
> 
> On 7 Apr 2017, at 2.25, Daniel J. Luke  wrote:
> > 
> > On Apr 6, 2017, at 1:33 PM, Timo Sirainen  wrote:
> >> Planning to release v2.2.29 on Monday. Please find and report any bugs 
> >> before that.
> > 
> > I'm still seeing the assert that started showing up for me with 
> > 2.2.28 (https://www.dovecot.org/pipermail/dovecot/2017-February/107250.html)
> > 
> > Below I generate it using doveadm with dovecot 2.2.29rc1 (output slightly 
> > cleaned up so the backtrace is easier to read)
> > 
> > % sudo doveadm index -A \*
> > doveadm(dluke): Panic: file mailbox-list.c: line 1159 
> > (mailbox_list_try_mkdir_root): assertion failed: (strncmp(root_dir, path, 
> > strlen(root_dir)) == 0)
> 
> This is with mbox? I thought this had been happening already a long time.. Or 
> if not mbox, what's your doveconf -n?

for mbox and lucene there is a workaround, which is to create a directory 
called lucene-indexes under the INDEX directory.

That way the directory creation is not attempted. This is a known bug, but it 
just hasn't been fixed yet.

Aki