Dovecot quota

2018-03-27 Thread David Mehler
Hello,

I'm running Dovecot on a FreeBSD system with Postfix in a virtual user
setup, with MySQL. I'm trying to understand the quota configuration.

I've got a MySQL database with an accounts table that has a quota field.
I've also got two other tables: quota (currently empty) and quota2
(messages and bytes), which has one entry. My goal is to have a different
quota for each user: say one user has a 512MB quota, so I put 512 in the
accounts quota column, while another user might have 256MB, so I put 256
there. These are just examples. I'm assuming the messages value in the
quota2 table tracks how many messages the user has, but is that the inbox
only or all folders in the account? And is bytes the space taken up by the
inbox or by all messages in the account?
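
For context: the usual way to turn such a per-user quota column into an
actual limit is to have the SQL userdb return a quota_rule field. A minimal
sketch, assuming a user_query in dovecot-sql.conf.ext against the accounts
table shown below (names follow that schema; the home path follows the
mail_home setting further down):

# dovecot-sql.conf.ext (sketch)
user_query = SELECT '/home/vmail/mailboxes/%d/%n' AS home, \
  CONCAT('*:storage=', quota, 'M') AS quota_rule \
  FROM accounts WHERE username = '%n' AND domain = '%d' AND enabled = 1

With 512 in the quota column this yields the rule *:storage=512M for that
user.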

I'm also trying to have a separate quota for my public folders, which
is not working.

If anyone could take a look at this configuration, see if it looks good,
and maybe spot where the public folder quota is going wrong, I'd
appreciate it.

Thanks.
Dave.

Configuration:
mysql> describe accounts;
+------------------+------------------+------+-----+---------+----------------+
| Field            | Type             | Null | Key | Default | Extra          |
+------------------+------------------+------+-----+---------+----------------+
| id               | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| name             | varchar(255)     | NO   |     | NULL    |                |
| username         | varchar(64)      | NO   | MUL | NULL    |                |
| domain           | varchar(255)     | NO   | MUL | NULL    |                |
| password         | varchar(255)     | NO   |     | NULL    |                |
| quota            | int(10) unsigned | YES  |     | 0       |                |
| enabled          | tinyint(1)       | YES  |     | 0       |                |
| sendonly         | tinyint(1)       | YES  |     | 0       |                |
| last_login       | int(11)          | YES  |     | NULL    |                |
| last_login_ip    | varchar(16)      | YES  |     | NULL    |                |
| last_login_date  | datetime         | YES  |     | NULL    |                |
| last_login_proto | varchar(16)      | YES  |     | NULL    |                |
+------------------+------------------+------+-----+---------+----------------+
12 rows in set (0.00 sec)

mysql> describe quota;
+----------+--------------+------+-----+---------+-------+
| Field    | Type         | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+-------+
| username | varchar(255) | NO   | PRI | NULL    |       |
| path     | varchar(100) | NO   | PRI | NULL    |       |
| current  | bigint(20)   | NO   |     | 0       |       |
+----------+--------------+------+-----+---------+-------+
3 rows in set (0.00 sec)

mysql> describe quota2;
+----------+--------------+------+-----+---------+-------+
| Field    | Type         | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+-------+
| username | varchar(100) | NO   | PRI | NULL    |       |
| bytes    | bigint(20)   | NO   |     | 0       |       |
| messages | int(11)      | NO   |     | 0       |       |
+----------+--------------+------+-----+---------+-------+
3 rows in set (0.01 sec)

mysql> select * from quota;
Empty set (0.00 sec)

mysql> select * from quota2;
+------------------+-----------+----------+
| username         | bytes     | messages |
+------------------+-----------+----------+
| u...@example.com | 171430625 |    20591 |
+------------------+-----------+----------+
1 row in set (0.00 sec)
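
For reference, a quota2 table with bytes/messages columns like the one
above is normally wired up in dovecot-dict-sql.conf.ext with two maps. A
sketch (the connect line is a placeholder):

# dovecot-dict-sql.conf.ext (sketch, matching the quota2 layout above)
connect = host=localhost dbname=mailserver user=dovecot password=secret

map {
  pattern = priv/quota/storage
  table = quota2
  username_field = username
  value_field = bytes
}
map {
  pattern = priv/quota/messages
  table = quota2
  username_field = username
  value_field = messages
}

With maps like these, bytes and messages cover everything under the quota
root (normally all folders in the account), not just the INBOX.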

doveconf -n
# 2.2.35 (b1cb664): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.23 (b2e41927)
# OS: FreeBSD 11.1-RELEASE-p4 amd64
# Hostname: localhost
auth_cache_size = 24 M
auth_cache_ttl = 18 hours
auth_default_realm = example.com
auth_mechanisms = plain login
auth_realms = example.com
dict {
  acl = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf.ext
  quota = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf.ext
}
first_valid_gid = 999
first_valid_uid = 999
hostname = mail.example.com
imap_idle_notify_interval = 10 mins
last_valid_gid = 999
last_valid_uid = 999
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
lda_original_recipient_header = X-Original-To
listen = 127.0.0.1 xxx.xxx.xxx.xxx
log_path = /var/log/dovecot/dovecot.log
log_timestamp = "%Y-%m-%d %H:%M:%S "
mail_access_groups = vmail
mail_fsync = never
mail_gid = vmail
mail_home = /home/vmail/mailboxes/%d/%n
mail_location = maildir:~/mail:LAYOUT=fs
mail_plugins = acl mail_log notify quota quota_clone trash virtual welcome zlib
mail_privileged_group = vmail
mail_server_admin = mailto:postmas...@example.com
mail_uid = vmail
mailbox_idle_check_interval = 59 secs
mailbox_list_index = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;as

[sieve][pigeonhole] Can't catch stdout for pipe script after upgrade Dovecot 2.2 -> 2.3

2018-03-27 Thread Konstantin Shalygin

Hi.


I use a custom script:


require [ "vnd.dovecot.pipe", "variables" ];

if address :is :all "from" "snip@snap"
{
  pipe "sieve_to_owncloud";
}



sieve_to_owncloud:


DATE=`date +%Y-%m-%d_%H-%M-%S`
PYTHONIOENCODING=utf8 python /opt/sieve-pipe/python-imap-to-owncloud.py \
  --owncloud-host https:// \
  --owncloud-user user \
  --owncloud-password secret \
  --owncloud-path /path/to/here &>/tmp/sieve_logger_py_${DATE}



On Dovecot 2.2, the script left a plaintext log file in /tmp after each
execution. After the upgrade it doesn't.


I also switched the transport to LMTP during the upgrade. Any idea where my stdout went?
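
(One detail worth checking: "&>" is a bash extension; if the pipe script is
now executed by a plain /bin/sh, that line parses as "run in background,
then truncate the file", which would leave empty or missing logs. A
portable sketch of the same logging:)

#!/bin/sh
# Portable equivalent of bash's "&>file": redirect stdout to the file,
# then duplicate stderr onto stdout.
DATE=$(date +%Y-%m-%d_%H-%M-%S)
PYTHONIOENCODING=utf8 python /opt/sieve-pipe/python-imap-to-owncloud.py \
  --owncloud-host https:// \
  --owncloud-user user \
  --owncloud-password secret \
  --owncloud-path /path/to/here >/tmp/sieve_logger_py_"${DATE}" 2>&1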



k



Released Pigeonhole v0.5.1 for Dovecot v2.3.1.

2018-03-27 Thread Stephan Bosch
Hello Dovecot users,

Here's the Pigeonhole release that goes with Dovecot v2.3.1. You will
need this release for Dovecot v2.3.1, because the previous v0.5.0.1
release will not work. Apart from compatibility changes, it only
contains bugfixes.

Changelog v0.5.1:

- Explicitly disallow UTF-8 in localpart in addresses parsed from Sieve
  script.
- editheader extension: Corrected the stream position calculations
  performed while making the modified message available as a stream.
  Pigeonhole Sieve crashed in LMTP with an assertion panic when the
  Sieve editheader extension was used before the message was redirected.
  Experiments indicate that the problem occurred only with LMTP and that
  LDA is not affected.
- fileinto extension: Fix assert panic occurring when fileinto is used
  without being listed in the require line, while the copy extension is
  listed there. This is a very old bug.
- imapsieve plugin: Do not assert crash or log an error for messages
  that disappear concurrently while applying Sieve scripts. This event
  is now logged as a debug message.
- Sieve extprograms plugin: Large output from "execute" command crashed
  delivery. Fixed buffering issue in code that handles output from the
  external program.

The release is available as follows:

https://pigeonhole.dovecot.org/releases/2.3/dovecot-2.3-pigeonhole-0.5.1.tar.gz
https://pigeonhole.dovecot.org/releases/2.3/dovecot-2.3-pigeonhole-0.5.1.tar.gz.sig

Refer to http://pigeonhole.dovecot.org and the Dovecot v2.x wiki for
more information. Have fun testing this release and don't hesitate to
notify me when there are any problems.

Regards,

-- 
Stephan Bosch
step...@rename-it.nl


Release 2.3.1

2018-03-27 Thread aki . tuomi
Hi!

We are releasing v2.3.1, which mostly consists of bug fixes for 2.3.0 plus
a few improvements. Packages are also available via https://repo.dovecot.org.
libsodium support didn't make it into this build due to build environment
issues, but 2.3.2 will contain it.

* Submission server support improvements and bug fixes
  - Lots of bug fixes to submission server
* API CHANGE: array_idx_modifiable will no longer allocate space
  - Particularly affects how you should check the MODULE_CONTEXT result,
    or use REQUIRE_MODULE_CONTEXT.

+ mail_attachment_detection_options setting controls when
  $HasAttachment and $HasNoAttachment keywords are set for mails.
+ imap: Support fetching body snippets using FETCH (SNIPPET) or
  (SNIPPET (LAZY=FUZZY)); see the example after this list.
+ fs-compress: Automatically detect whether input is compressed or not.
  Prefix the compression algorithm with "maybe-" to enable the
  detection, for example: "compress:maybe-gz:6:..."
+ Added settings to change dovecot.index* files' optimization behavior.
  See https://wiki2.dovecot.org/IndexFiles#Settings
+ Auth cache can now utilize auth workers to do password hash
  verification by setting auth_cache_verify_password_with_worker=yes.
+ Added charset_alias plugin. See
  https://wiki2.dovecot.org/Plugins/CharsetAlias
+ imap_logout_format and pop3_logout_format settings now support all of
  the generic variables (e.g. %{rip}, %{session}, etc.)
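
As a quick illustration of the snippet fetch listed above, the client just
adds SNIPPET to its FETCH items (commands only; the untagged reply carries
a short plaintext preview of the message body):

a1 FETCH 1 (SNIPPET)
a2 FETCH 1 (SNIPPET (LAZY=FUZZY))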

--
Aki Tuomi
Dovecot Oy




Re: dovecot logging

2018-03-27 Thread A. Schulze


On 27.03.2018 17:28, Alex JOST wrote:
> Did you try running rsyslog inside the container...

No, I'd like to follow the preferred way to run containers: one process per container.

Andreas


Re: dovecot logging

2018-03-27 Thread Alex JOST

On 27.03.2018 16:06, A. Schulze wrote:
> Hello,
>
> I'm currently playing with a number of Dovecot instances to evaluate my
> "next generation setup". For now I run 6 instances of Dovecot, one per
> Docker container:
>   - 2x redirector
>   - 2x backend #1
>   - 2x backend #2
>
> All Docker containers use syslog, and that's where the problems start.
> Every instance identifies itself as "dovecot". That's not helpful :-/
> I tried to set an instance name, but that changed nothing.
>
> My options now are:
>   - let rsyslogd separate the sources by IP
>   - use log files
> Are there others?
>
> Currently the code uses the fixed string "dovecot":
> https://github.com/dovecot/core/blob/master/src/lib-master/master-service.c#L415
>
> May I ask whether it's possible to use the instance name as the syslog
> identifier, too?
>
> Andreas



Did you try running rsyslog inside the container and forwarding 
everything to the endpoint? You can modify the configuration of rsyslog 
and explicitly set a hostname if needed.
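
For example, a minimal rsyslog.conf along those lines could look like this
(a sketch; the instance name and endpoint are placeholders):

# /etc/rsyslog.conf inside one container (sketch)
# Give this instance an explicit hostname so its logs are distinguishable.
$LocalHostName dovecot-backend1
# Forward everything to the central log endpoint ("@@" = TCP, "@" = UDP).
*.* @@log-endpoint.example.com:514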


--
Alex JOST


Re: Slow imap connection to local dovecot server

2018-03-27 Thread Lothar Paltins

Hi Joseph,

thanks for your answer. I have now downloaded and compiled the "original"
dovecot-2.3.0.1 package, and the slow login issue is gone. So it isn't an
issue with Dovecot or my configuration, but a problem with the openSUSE
RPM package.


Best regards
--
Lothar Paltins
lptm...@arcor.de


Duplicate mails on pop3 expunge with dsync replication on 2.2.35 (2.2.33.2 works)

2018-03-27 Thread Gerald Galster
Hello,

consider the following setup with dovecot 2.2.35:

smtp/587 (subject: test 1535)
        |
        v
mx2a.example.com  -->  dsync/ssh  -->  mx2b.example.com
                                              |
                                              v
                                 pop3 fetch/expunge (uid 23)
                                              |
                                              v
                           !! dsync (copy from INBOX -> uid 24)
                              dsync (expunge uid 23)

The pop3 client deletes the mail from the server, which triggers a copy
from INBOX before it is expunged. On the next pop3 fetch you get the copy
of the mail you thought had been expunged.

This occurs only if mail is received by smtp on mx2a, synced to mx2b
via dsync/ssh and then expunged via pop3 on mx2b. It does not occur
if mail is received and expunged on mx2b.

As a temporary workaround the system has been downgraded to 2.2.33.2.
There are no duplicate emails after expunge with this version.
2.2.34 has not been tested.

Does anyone know if there were changes in the dsync code from 2.2.33.2
to 2.2.35?

Log:

(mail received on mx2a.example.com and delivered via dsync to mx2b.example.com, 
then expunged via pop3 on mx2b.example.com -> copy/duplicate)
Mar 26 15:35:57 mx2b.example.com dovecot[3825]: pop3-login: Login: 
user=, method=PLAIN, rip=91.0.0.1, lip=188.0.0.1, 
mpid=3922, TLS, TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
expunge: box=INBOX, uid=23, 
msgid=, size=1210, 
subject=test 1535
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
Disconnected: Logged out top=0/0, retr=1/1259, del=1/1, size=1242
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: doveadm: Error: 
dsync-remote(popt...@example.com): Info: copy from INBOX: box=INBOX, uid=24, 
msgid=, size=1210, 
subject=test 1535
Mar 26 15:35:58 mx2b.example.com dovecot[3825]: doveadm: Error: 
dsync-remote(popt...@example.com): Info: expunge: box=INBOX, uid=23, 
msgid=, size=1210, 
subject=test 1535

(mail received on mx2b.example.com and expunged via pop3 on mx2b.example.com -> 
no copy/duplicate)
Mar 26 15:36:09 mx2b.example.com dovecot[3825]: pop3-login: Login: 
user=, method=PLAIN, rip=91.0.0.1, lip=188.0.0.1, 
mpid=3927, TLS, TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
Mar 26 15:36:09 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
expunge: box=INBOX, uid=24, 
msgid=, size=1210, 
subject=test 1535
Mar 26 15:36:09 mx2b.example.com dovecot[3825]: pop3(popt...@example.com): 
Disconnected: Logged out top=0/0, retr=1/1259, del=1/1, size=1242
Mar 26 15:36:10 mx2b.example.com dovecot[3825]: doveadm: Error: 
dsync-remote(popt...@example.com): Info: expunge: box=INBOX, uid=24, 
msgid=, size=1210, 
subject=test 1535

Thanks for looking into this
Gerald

dovecot logging

2018-03-27 Thread A. Schulze
Hello,

I'm currently playing with a number of Dovecot instances to evaluate my
"next generation setup". For now I run 6 instances of Dovecot, one per
Docker container:
 - 2x redirector
 - 2x backend #1
 - 2x backend #2

All Docker containers use syslog, and that's where the problems start.
Every instance identifies itself as "dovecot". That's not helpful :-/
I tried to set an instance name, but that changed nothing.

My options now are:
 - let rsyslogd separate the sources by IP
 - use log files
Are there others?

Currently the code uses the fixed string "dovecot":
https://github.com/dovecot/core/blob/master/src/lib-master/master-service.c#L415

May I ask whether it's possible to use the instance name as the syslog
identifier, too?
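
For reference, the identifier in question is simply the string handed to
openlog(3). A minimal standalone sketch of what a per-instance ident would
look like (hypothetical; this is not the actual Dovecot code):

#include <syslog.h>

int main(void)
{
	/* Hypothetical: a per-instance name instead of the fixed "dovecot". */
	const char *ident = "dovecot-backend1";

	openlog(ident, LOG_PID, LOG_MAIL);
	syslog(LOG_INFO, "imap-login: Login: user=<test>");
	closelog();
	return 0;
}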

Andreas


Re: murmurhash3 test failures on big-endian systems

2018-03-27 Thread Josef 'Jeff' Sipek
On Tue, Mar 27, 2018 at 13:15:31 +0300, Apollon Oikonomopoulos wrote:
> On 13:05 Tue 27 Mar , Apollon Oikonomopoulos wrote:
> > On 11:31 Tue 27 Mar , Apollon Oikonomopoulos wrote:
> > It turns out there's a missing byte-inversion when loading the blocks 
> > which should be addressed in getblock{32,64}. Murmurhash treats each 
> > block as an integer expecting little-endian storage. Applying this 
> > additional change fixes the build on s390x (and does not break it on 
> > x86_64):
> > 
> > --- a/src/lib/murmurhash3.c
> > +++ b/src/lib/murmurhash3.c
> > @@ -23,7 +23,7 @@
> > 
> >  static inline uint32_t getblock32(const uint32_t *p, int i)
> >  {
> > -  return p[i];
> > +  return cpu32_to_le(p[i]);
> 
> … or perhaps le32_to_cpu, although it should be the same in the end. 

Right.

I'm going to get the changes reviewed & committed.  I'll ping you when
there is a commit with the "official" fix.

Thanks,

Jeff.

> >  }
> > 
> >  
> > //-----------------------------------------------------------------------------
> > @@ -105,7 +105,7 @@
> > 
> >  static inline uint64_t getblock64(const uint64_t *p, int i)
> >  {
> > -  return p[i];
> > +  return cpu64_to_le(p[i]);
> >  }
> > 
> > Regards,
> > Apollon

-- 
*NOTE: This message is ROT-13 encrypted twice for extra protection*


Re: murmurhash3 test failures on big-endian systems

2018-03-27 Thread Apollon Oikonomopoulos
On 13:05 Tue 27 Mar , Apollon Oikonomopoulos wrote:
> On 11:31 Tue 27 Mar , Apollon Oikonomopoulos wrote:
> It turns out there's a missing byte-inversion when loading the blocks 
> which should be addressed in getblock{32,64}. Murmurhash treats each 
> block as an integer expecting little-endian storage. Applying this 
> additional change fixes the build on s390x (and does not break it on 
> x86_64):
> 
> --- a/src/lib/murmurhash3.c
> +++ b/src/lib/murmurhash3.c
> @@ -23,7 +23,7 @@
> 
>  static inline uint32_t getblock32(const uint32_t *p, int i)
>  {
> -  return p[i];
> +  return cpu32_to_le(p[i]);

… or perhaps le32_to_cpu, although it should be the same in the end. 

>  }
> 
>  
> //-----------------------------------------------------------------------------
> @@ -105,7 +105,7 @@
> 
>  static inline uint64_t getblock64(const uint64_t *p, int i)
>  {
> -  return p[i];
> +  return cpu64_to_le(p[i]);
>  }
> 
> Regards,
> Apollon


Re: murmurhash3 test failures on big-endian systems

2018-03-27 Thread Apollon Oikonomopoulos
On 11:31 Tue 27 Mar , Apollon Oikonomopoulos wrote:
> Hi,
> 
> On 12:55 Mon 26 Mar , Josef 'Jeff' Sipek wrote:
> > On Mon, Mar 26, 2018 at 15:57:01 +0300, Apollon Oikonomopoulos wrote:
> > ...
> > > I'd be happy to test the patch, thanks!
> > 
> > Ok, try the attached patch.  (It is a first pass at the issue, so it may not
> > be the final diff that'll end up getting committed.  It'd be good to know if
> > it actually fixes the issue for you - sadly, I don't have a big endian
> > system to play with.)
> 
> Thanks for the quick response!
> 
> Unfortunately it still fails, although with fewer assertion errors than 
> before:
> 
> test-murmurhash3.c:34: Assert(#8) failed: memcmp(result, vectors[i].result, sizeof(result)) == 0
> test-murmurhash3.c:34: Assert(#11) failed: memcmp(result, vectors[i].result, sizeof(result)) == 0
> test-murmurhash3.c:34: Assert(#12) failed: memcmp(result, vectors[i].result, sizeof(result)) == 0
> murmurhash3 (murmurhash3_32) . : FAILED
> test-murmurhash3.c:34: Assert(#12) failed: memcmp(result, vectors[i].result, sizeof(result)) == 0
> murmurhash3 (murmurhash3_128)  : FAILED

It turns out there's a missing byte-inversion when loading the blocks 
which should be addressed in getblock{32,64}. Murmurhash treats each 
block as an integer expecting little-endian storage. Applying this 
additional change fixes the build on s390x (and does not break it on 
x86_64):

--- a/src/lib/murmurhash3.c
+++ b/src/lib/murmurhash3.c
@@ -23,7 +23,7 @@

 static inline uint32_t getblock32(const uint32_t *p, int i)
 {
-  return p[i];
+  return cpu32_to_le(p[i]);
 }

 //-----------------------------------------------------------------------------
@@ -105,7 +105,7 @@

 static inline uint64_t getblock64(const uint64_t *p, int i)
 {
-  return p[i];
+  return cpu64_to_le(p[i]);
 }
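
(As an aside, a byte-wise load is another endian-safe option if no
byteorder helpers are available; a sketch only, not the proposed fix:)

#include <stdint.h>

/* Assemble a little-endian 32-bit block regardless of host byte order. */
static inline uint32_t getblock32_le(const unsigned char *p, int i)
{
	const unsigned char *b = p + 4 * i;
	return (uint32_t)b[0] |
	       ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) |
	       ((uint32_t)b[3] << 24);
}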

Regards,
Apollon


Re: murmurhash3 test failures on big-endian systems

2018-03-27 Thread Apollon Oikonomopoulos
Hi,

On 12:55 Mon 26 Mar , Josef 'Jeff' Sipek wrote:
> On Mon, Mar 26, 2018 at 15:57:01 +0300, Apollon Oikonomopoulos wrote:
> ...
> > I'd be happy to test the patch, thanks!
> 
> Ok, try the attached patch.  (It is a first pass at the issue, so it may not
> be the final diff that'll end up getting committed.  It'd be good to know if
> it actually fixes the issue for you - sadly, I don't have a big endian
> system to play with.)

Thanks for the quick response!

Unfortunately it still fails, although with fewer assertion errors than 
before:

test-murmurhash3.c:34: Assert(#8) failed: memcmp(result, vectors[i].result, sizeof(result)) == 0
test-murmurhash3.c:34: Assert(#11) failed: memcmp(result, vectors[i].result, sizeof(result)) == 0
test-murmurhash3.c:34: Assert(#12) failed: memcmp(result, vectors[i].result, sizeof(result)) == 0
murmurhash3 (murmurhash3_32) . : FAILED
test-murmurhash3.c:34: Assert(#12) failed: memcmp(result, vectors[i].result, sizeof(result)) == 0
murmurhash3 (murmurhash3_128)  : FAILED

Regards,
Apollon