Re: Possible hack via doveadm

2023-05-14 Thread Daniel Miller via dovecot
I only allow explicit service traffic through - IMAPS, SMTPS, etc. If 
doveadm is communicating via the IMAP(S) ports, then all I can do via the 
firewall is block countries. Which of course I can do, but I'm asking about 
any additional hardening for Dovecot itself.


--
Daniel
On May 13, 2023 6:25:06 PM jeremy ardley via dovecot  
wrote:



On 14/5/23 09:14, Daniel L. Miller via dovecot wrote:


May 12 15:45:58 cloud1 dovecot: doveadm(194.165.16.78): Error: doveadm
client not compatible with this server (mixed old and new binaries?)
May 13 03:44:31 cloud1 dovecot: doveadm(45.227.254.48): Error: doveadm
client not compatible with this server (mixed old and new binaries?)

Since I don't recognize those IPs (the first is out of Panama and the
other is Belize), I assume these are hostile attackers trying to
exploit something. How can I defend against this?


Set up a firewall rule that only allows access from an IP range you
control. For any other source, simply drop the connection.
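As a rough sketch of that approach, assuming the doveadm inet_listener is on
port 10993 and the trusted range is 192.168.0.0/24 (both values taken from the
doveconf -n posted later in this archive; adjust to your own setup), the
iptables rules could look like:

# allow the doveadm TCP listener only from the trusted range, drop the rest
iptables -A INPUT -p tcp --dport 10993 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 10993 -j DROP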

You can get really fancy and use SSH port forwarding to connect from a
remote host while appearing as localhost to the server. This access can be
configured in Dovecot as well as in the firewall.
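A minimal sketch of that SSH approach, with a hypothetical admin account,
hostname and the doveadm port 10993 from the config posted later in this
archive:

# forward local port 10993 to the server's loopback doveadm listener, so
# the connection appears to come from localhost on the server
ssh -N -L 10993:127.0.0.1:10993 admin@mail.example.com
# then point the local doveadm client at localhost:10993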


Jeremy


Re[9]: Replicator bug report

2021-12-11 Thread Daniel Miller
It appears that when I set vsz_limit=0 it works without crashing. So the 
problem only appears when setting an explicit maximum.
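For reference, the workaround described above amounts to something like this
in the config (whether 0 means "unlimited" or "fall back to the default" here
is left to the documentation of the version in use):

service replicator {
  # explicit limits (2G, 5G) triggered the data-stack panic above; 0 avoids it
  vsz_limit = 0
}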


--
Daniel

On 12/7/2021 9:57:29 PM, "Aki Tuomi"  wrote:




On 7 December 2021 23.10.50 UTC, Daniel Miller  wrote:

On 12/7/2021 12:29:49 PM, "Daniel Miller"  wrote:


service replicator {
vsz_limit = 2G
}

Aki


Tried that - got another one.



I just tried setting
service replicator {
   vcsz_limit = 5G
}
and I still get:
Dec  7 15:08:25 bubba dovecot: replicator: Panic: data stack: Out of
memory when allocating 4294967336 bytes
--
Daniel




This looks like a bug. We'll take a look.

Aki






Re[7]: Replicator bug report

2021-12-07 Thread Daniel Miller

On 12/7/2021 12:29:49 PM, "Daniel Miller"  wrote:


service replicator {
   vsz_limit = 2G
}

Aki


Tried that - got another one.



I just tried setting
service replicator {
  vcsz_limit = 5G
}
and I still get:
Dec  7 15:08:25 bubba dovecot: replicator: Panic: data stack: Out of 
memory when allocating 4294967336 bytes

--
Daniel




vsz_limit

2021-12-07 Thread Daniel Miller
I just noticed that when checking "doveconf -a", all vsz_limit 
settings that are not explicitly given have a value of:

 18446744073709551615 B

I have this on two servers, one with an implicit default_vsz_limit=256M, 
the other with an explicit default_vsz_limit=2G.


Is this correct?
--
Daniel

Re[6]: Replicator bug report

2021-12-07 Thread Daniel Miller

service replicator {
   vsz_limit = 2G
}

Aki


Tried that - got another one.

[New LWP 14072]
Core was generated by `dovecot/replicator'.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:51

51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt full
#0  __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:51
set = {__val = {0, 1459, 1460, 94545983793224, 8, 
140396720010827, 153, 140396719819340, 140727510797504, 120, 
206158430224,
140727510797840, 140727510797632, 2471027943189898752, 
94545983793200, 140396719586582}}

pid = 
tid = 
ret = 
#1  0x7fb0a84eb8b1 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 
0x0}, sa_mask = {__val = {94545983791152, 18446744073709551615,
  1073741824, 94545983669264, 140396719562123, 
140396722477344, 140396719540956, 140396722477344, 2471027943189898752,
  140396722477320, 140396719818546, 140727510797840, 
140396722477344, 140727510797840, 140396719818937, 140396722477344}},

  sa_flags = -1466408342, sa_restorer = 0x5}
sigs = {__val = {32, 0 }}
__cnt = 
__set = 
__cnt = 
__set = 
#2  0x7fb0a89949d1 in default_fatal_finish (status=0, 
type=LOG_TYPE_PANIC) at failures.c:459
backtrace = 0x55fd33c76008 
"/usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x42) 
[0x7fb0a8986142] -> /usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x1e) 
[0x7fb0a898625e] -> /usr/lib/dovecot/libdovecot.so.0(+0xf8a1e) 
[0x7fb0a"...

backtrace = 
recursed = 0
#3  fatal_handler_real (ctx=, format=, 
args=) at failures.c:471

status = 0
#4  0x7fb0a8994ac1 in i_internal_fatal_handler (ctx=, 
format=, args=) at failures.c:872

No locals.
#5  0x7fb0a88e14a7 in i_panic (format=format@entry=0x7fb0a89fa2d0 
"data stack: Out of memory when allocating %zu bytes")

at failures.c:524
ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0, 
timestamp_usecs = 0, log_prefix = 0x0, log_prefix_type_pos = 0}
args = {{gp_offset = 16, fp_offset = 48, overflow_arg_area = 
0x7ffdad4a8f60, reg_save_area = 0x7ffdad4a8ea0}}
#6  0x7fb0a898d4e8 in mem_block_alloc 
(min_size=min_size@entry=2147483648) at data-stack.c:386

block = 
prev_size = 
alloc_size = 4294967296
#7  0x7fb0a898dae3 in t_malloc_real (size=size@entry=2147483648, 
permanent=permanent@entry=true) at data-stack.c:492

block = 
ret = 
alloc_size = 2147483648
warn = false
#8  0x7fb0a898dd6a in t_malloc_no0 (size=size@entry=2147483648) at 
data-stack.c:543

No locals.
#9  0x7fb0a89b7f28 in pool_data_stack_realloc (pool=, 
mem=0x7fb052ea2038, old_size=1073741824, new_size=2147483648)

at mempool-datastack.c:173
dpool = 
new_mem = 
pool = 
new_size = 2147483648
mem = 0x7fb052ea2038
old_size = 1073741824
dpool = 
new_mem = 
dpool = 
new_mem = 
#10 0x7fb0a8988aa3 in p_realloc (new_size=2147483648, 
old_size=, mem=, pool=)

at mempool.h:120
No locals.
#11 buffer_alloc (buf=buf@entry=0x55fd33c36f78, size=2147483648) at 
buffer.c:40

__func__ = "buffer_alloc"
#12 0x7fb0a8988fb4 in buffer_check_limits (data_size=32, 
pos=1073741792, buf=0x55fd33c36f78) at buffer.c:85

new_alloc_size = 
new_size = 1073741824
new_size = 
max = 
new_alloc_size = 
#13 buffer_check_append_limits (data_size=32, buf=0x55fd33c36f78) at 
buffer.c:117

No locals.
#14 buffer_append (_buf=0x55fd33c36f78, data=0x55fd33c58410, 
data_size=32) at buffer.c:235

pos = 1073741792
buf = 0x55fd33c36f78
#15 0x55fd33946846 in array_append_i (count=1, data=0x55fd33c58410, 
array=) at ../../../src/lib/array.h:210

No locals.
#16 replicator_queue_handle_sync_lookups (user=0x55fd33c5f460, 
queue=0x55fd33c4a230) at replicator-queue.c:297

lookups = 
i = 0
count = 
success = 255
callbacks = 
lookups = 
callbacks = 
i = 
count = 
success = 
lookups_end = 
#17 replicator_queue_push (queue=0x55fd33c4a230, user=0x55fd33c5f460) at 
replicator-queue.c:315

_data_stack_cur_id = 3
__func__ = "replicator_queue_push"
#18 0x55fd33945f67 in dsync_callback 
(reply=reply@entry=DSYNC_REPLY_OK,
state=state@entry=0x55fd33c36bb0 
"AQAAAPyg1DBh63NeGjoAAJ21rMvbbs1ZAAE", 'A' , 
"cCkXAK4xIV4XMwAAnbWsy+UnHl4FCw", 'A' , 
"UAAABMf78EtTEhXh4zAACdtazL5iceXgAB", 'A' , 

Re[4]: Replicator bug report

2021-12-07 Thread Daniel Miller - CLOUD

Use

gdb /path/to/replicator /path/to/core
bt full

Aki

root@bubba:/var/core# gdb /usr/lib/dovecot/replicator 
/var/core/11199.replicator

GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show 
copying"

and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
.
Find the GDB manual and other documentation resources online at:
.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/lib/dovecot/replicator...Reading symbols from 
/usr/lib/debug/.build-id/63/bc9a0e025f7ecba8e4906abc177b978bf6c2ad.debug...done.

done.
[New LWP 11199]
Core was generated by `dovecot/replicator'.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:51

51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt full
#0  __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:51
set = {__val = {0, 1459, 1460, 94875967713912, 8, 
140599001568843, 153, 140599001377356, 140727459487856, 120, 
206158430224,
140727459488192, 140727459487984, 126291299233366272, 
94875967713888, 140599001144598}}

pid = 
tid = 
ret = 
#1  0x7fdfc13a58b1 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 
0x0}, sa_mask = {__val = {94875967711840, 18446744073709551615,
  1073741824, 94875967583248, 140599001120139, 
140599004035360, 140599001098972, 140599004035360, 126291299233366272,
  140599004035336, 140599001376562, 140727459488192, 
140599004035360, 140727459488192, 140599001376953, 140599004035360}},

  sa_flags = -1048313238, sa_restorer = 0x5}
sigs = {__val = {32, 0 }}
__cnt = 
__set = 
__cnt = 
__set = 
#2  0x7fdfc184e9d1 in default_fatal_finish (status=0, 
type=LOG_TYPE_PANIC) at failures.c:459
backtrace = 0x564a085a6a38 
"/usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x42) 
[0x7fdfc1840142] -> /usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x1e) 
[0x7fdfc184025e] -> /usr/lib/dovecot/libdovecot.so.0(+0xf8a1e) 
[0x7fdfc"...

backtrace = 
recursed = 0
#3  fatal_handler_real (ctx=, format=, 
args=) at failures.c:471

status = 0
#4  0x7fdfc184eac1 in i_internal_fatal_handler (ctx=, 
format=, args=) at failures.c:872

No locals.
#5  0x7fdfc179b4a7 in i_panic (format=format@entry=0x7fdfc18b42d0 
"data stack: Out of memory when allocating %zu bytes")

at failures.c:524
ctx = {type = LOG_TYPE_PANIC, exit_status = 0, timestamp = 0x0, 
timestamp_usecs = 0, log_prefix = 0x0, log_prefix_type_pos = 0}

args = {{gp_offset = 16, fp_offset = 48, overflow_arg_area = 
0x7ffdaa3ba310, reg_save_area = 0x7ffdaa3ba250}}
#6  0x7fdfc18474e8 in mem_block_alloc 
(min_size=min_size@entry=2147483648) at data-stack.c:386

block = 
prev_size = 
alloc_size = 4294967296
#7  0x7fdfc1847ae3 in t_malloc_real (size=size@entry=2147483648, 
permanent=permanent@entry=true) at data-stack.c:492

block = 
ret = 
alloc_size = 2147483648
warn = false
#8  0x7fdfc1847d6a in t_malloc_no0 (size=size@entry=2147483648) at 
data-stack.c:543

No locals.
#9  0x7fdfc1871f28 in pool_data_stack_realloc (pool=, 
mem=0x7fdf6bd5c038, old_size=1073741824, new_size=2147483648)

at mempool-datastack.c:173
dpool = 
new_mem = 
pool = 
new_size = 2147483648
mem = 0x7fdf6bd5c038
old_size = 1073741824
dpool = 
new_mem = 
dpool = 
new_mem = 
#10 0x7fdfc1842aa3 in p_realloc (new_size=2147483648, 
old_size=, mem=, pool=)

at mempool.h:120
No locals.
#11 buffer_alloc (buf=buf@entry=0x564a08567838, size=2147483648) at 
buffer.c:40

__func__ = "buffer_alloc"
#12 0x7fdfc1842fb4 in buffer_check_limits (data_size=32, 
pos=1073741792, buf=0x564a08567838) at buffer.c:85

new_alloc_size = 
new_size = 1073741824
new_size = 
max = 
new_alloc_size = 
#13 buffer_check_append_limits (data_size=32, buf=0x564a08567838) at 
buffer.c:117

No locals.
#14 buffer_append (_buf=0x564a08567838, data=0x564a08587410, 
data_size=32) at buffer.c:235

pos = 1073741792
buf = 0x564a08567838
#15 0x564a07e5a846 in array_append_i (count=1, data=0x564a08587410, 
array=) at 

Re[2]: Replicator bug report

2021-12-07 Thread Daniel Miller

-- Original Message --


Hi!

Can you instead submit gdb bt full output and doveconf -n?

Aki



Certainly - but I need to know how. The problem is during TCP 
replication.


Here is dovecot -n:

# 2.3.17.1 (476cd46418): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.17.1 (a1a0b892)
# OS: Linux 5.4.0-91-generic x86_64 Ubuntu 18.04.6 LTS xfs
# Hostname: bubba.amfes.lan
auth_cache_size = 4 k
auth_master_user_separator = *
auth_mechanisms = plain login
auth_policy_hash_nonce = # hidden, use -P to show it
auth_policy_hash_truncate = 8
auth_policy_server_api_header = Authorization: Basic 
d2ZvcmNlOnVsdHJhLXNlY3JldC1zZWN1cmUtc2FmZQ

auth_verbose = yes
default_login_user = nobody
default_vsz_limit = 2 G
disable_plaintext_auth = no
doveadm_password = # hidden, use -P to show it
doveadm_port = 10993
imap_capability = +SPECIAL-USE
listen = *
login_trusted_networks = 192.168.0.0/24
mail_attachment_detection_options = add-flags
mail_attachment_hash = %{sha512}
mail_attribute_dict = file:/var/mail/attributes
mail_gid = mail
mail_location = sdbox:/var/mail/%d/%n/sdbox
mail_plugins = fts fts_solr acl zlib virtual notify replication 
mailbox_alias

mail_prefetch_count = 10
mail_shared_explicit_inbox = yes
mail_uid = vmail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date index ihave duplicate mime foreverypart 
extracttext

mdbox_rotate_size = 20 M
namespace archives {
  list = children
  location = mdbox:/var/mail/%d/%n/Archives/mdbox
  mailbox Unsorted {
auto = no
special_use = \Archive
  }
  prefix = INBOX/Archives/
  separator = /
  subscriptions = no
  type = private
}
namespace inbox {
  alias_for =
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox "Deleted Messages" {
auto = no
autoexpunge = 30 days
special_use = \Trash
  }
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Trash {
auto = subscribe
autoexpunge = 30 days
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
  type = private
}
namespace lists {
  list = children
  location = mdbox:/var/mail/%d/%n/Lists/mdbox
  prefix = INBOX/Lists/
  separator = /
  subscriptions = no
  type = private
}
namespace subscriptions {
  hidden = yes
  list = no
  location =
  prefix =
  separator = /
  subscriptions = yes
  type = private
}
namespace usershares {
  list = yes
  location = sdbox:/var/mail/%%d/%%n/sdbox:NO-NOSELECT
  prefix = INBOX/shared/%%d/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace virtual {
  list = children
  location = virtual:/var/mail/%d/%n/virtual
  mailbox Flagged {
comment = All my flagged messages
special_use = \Flagged
  }
  prefix = INBOX/virtual/
  separator = /
  subscriptions = no
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/mail/%d/shared-mailboxes
  fts = solr
  fts_autoindex = yes
  fts_autoindex_exclude = \Trash
  fts_autoindex_exclude2 = \Junk
  fts_autoindex_exclude3 = \Spam
  fts_enforced = no
  fts_index_timeout = 20s
  fts_solr = url=http://127.0.0.1:8983/solr/dovecot/ batch_size=2000
  mail_replica = tcp:10.23.1.10
  mailbox_alias_new3 = Deleted Messages
  mailbox_alias_old3 = Trash
  replication_sync_timeout = 2
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
protocols = imap lmtp sieve
replication_dsync_parameters = -d -l 30 -U -n INBOX -n INBOX/Archives -n 
INBOX/Lists -x INBOX/virtual -x INBOX/shared

replication_max_conns = 5
service aggregator {
  fifo_listener replication-notify-fifo {
mode = 0600
user = vmail
  }
  unix_listener replication-notify {
mode = 0600
user = vmail
  }
}
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0660
user = postfix
  }
  unix_listener auth-userdb {
group = mail
mode = 0600
user = vmail
  }
}
service doveadm {
  inet_listener {
port = 10993
  }
  user = vmail
}
service imap-login {
  process_min_avail = 4
}
service imap-postlogin {
  executable = script-login /etc/dovecot/post-login.sh
  user = $default_internal_user
}
service imap {
  executable = imap imap-postlogin
  vsz_limit = 4 G
}
service indexer-worker {
  user = vmail
}
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = mail
mode = 0666
user = vmail
  }
}
service replicator {
  process_min_avail = 1
  unix_listener replicator-doveadm {
mode = 0600
user = vmail
  }
}
ssl_cert = 
protocol lmtp {
  mail_plugins = fts fts_solr acl zlib virtual notify replication mailbox_alias sieve
  postmaster_address = 
}
protocol imap {
  mail_plugins = fts fts_solr acl zlib virtual notify replication 
mailbox_alias imap_acl

}


--
Daniel




Replicator bug report

2021-12-07 Thread Daniel Miller
I've run dovecot-sysreport -o  and generated a file - but it's a 
few gigs in size. Am I generating the core dump incorrectly? Should I do 
something different?
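One way to keep the report small is to capture only the text of the backtrace
rather than the raw core, for example (binary and core paths taken from the
gdb session quoted earlier in this thread):

gdb -batch -ex 'bt full' /usr/lib/dovecot/replicator /var/core/11199.replicator > replicator-bt.txt 2>&1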


--
Daniel

Re[5]: Replication weirdness

2021-12-04 Thread Daniel Miller - CLOUD

Another update.

I dug deeper into the mailboxes - and found the "subscriptions" and 
actual mailboxes weren't correct in all cases. I guess when I shifted to 
the explicit INBOX/ namespace not all the existing boxes migrated 
correctly. So...after manually correcting all the "subscription" files, 
and manually moving the duplicated "Archives" folders to the correct 
locations, nearly all the errors have gone.


I wish I'd gotten more informative error messages, and the processes 
certainly shouldn't have crashed, but since I obviously created the 
problem by manually poking things and improper configuration, I guess 
I can't complain too much.


I still have problems using the "-N" flag for syncing - but things seem 
to be working with the multiple explicit "-n" namespaces. I do still 
have the locking error appearing during long-running syncs - I don't see 
why Dovecot doesn't know that it's already syncing a given user before 
trying to start a second process. Probably something else I set up 
wrong, but I don't know what.


--
Daniel

Re[4]: Replication weirdness

2021-12-03 Thread Daniel Miller

And some more messages...

Dec  3 15:10:58 bubba dovecot: 
doveadm(obfuscated)<1901>: Error: Mailbox Sent 
sync: mailbox_rename failed: Can't rename mailbox while it has aliases
Dec  3 15:10:58 bubba dovecot: 
doveadm(obfuscated)<1900>: Error: Duplicate 
mailbox GUID f4338038839caa613a1a0500b88bfabe for mailboxes INBOX/Sent 
Messages and INBOX/Sent - giving a new GUID 
bcf4f82702a4aa616c079db5accb to INBOX/Sent


--
Daniel

-- Original Message --
From: "Daniel Miller" 
To: "Daniel Miller" ; dovecot@dovecot.org
Sent: 12/3/2021 3:13:55 PM
Subject: Re[3]: Replication weirdness


And...

The user who has both a "Sent" and a "Sent Messages" now has:

drwx--  3 vmail mail  24 Dec  3 09:56 Sent
lrwxrwxrwx  1 vmail mail   4 Nov 30 17:51 'Sent Messages' -> Sent
drwx--  3 vmail mail  24 Dec  3 15:10 'Sent Messages-temp-1'
drwx--  3 vmail mail  24 Dec  3 15:10 'Sent 
Messages-temp-fc30bd0a3a9aaa61c1180500b88bfabe'


and I got the following errors:

Dec  3 15:10:46 cloud1 dovecot: 
doveadm(obfuscated)<336247>: Error: Duplicate 
mailbox GUID 6aae8c39f3a3aa615a079db5accb for mailboxes Sent and 
Sent Messages-temp-1 - giving a new GUID 
63481f29f6a3aa6177210500b88bfabe to Sent
Dec  3 15:10:50 cloud1 dovecot: 
doveadm(obfuscated)<336245>: Panic: file 
dsync-brain-mailbox.c: line 851 (dsync_brain_slave_recv_mailbox): 
assertion failed: (memcmp(dsync_box->mailbox_guid, 
local_dsync_box.mailbox_guid, sizeof(dsync_box->mailbox_guid)) == 0)
Dec  3 15:10:50 cloud1 dovecot: 
doveadm(obfuscated)<336245>: Error: Raw 
backtrace: #0 fatal_handler_real[0x7fde7fd20060] -> #1 
i_internal_fatal_handler[0x7fde7fd20190] -> #2 i_panic[0x7fde7fc731ff] 
-> #3 dsync_brain_slave_recv_mailbox[0x55dde7b22900] -> #4 
dsync_brain_run[0x55dde7b20380] -> #5 
dsync_brain_run_io[0x55dde7b20b50] -> #6 
dsync_ibc_stream_input[0x55dde7b329c0] -> #7 
io_loop_call_io[0x7fde7fd36500] -> #8 
io_loop_handler_run_internal[0x7fde7fd37ac0] -> #9 
io_loop_handler_run[0x7fde7fd365c0] -> #10 io_loop_run[0x7fde7fd36740] 
-> #11 cmd_dsync_server_run[0x55dde7b04f60] -> #12 
doveadm_mail_next_user[0x55dde7b06850] -> #13 
doveadm_cmd_ver2_to_mail_cmd_wrapper[0x55dde7b077e0] -> #14 
doveadm_cmd_run_ver2[0x55dde7b17f00] -> #15 
client_connection_tcp_input[0x55dde7b1c6b0] -> #16 
io_loop_call_io[0x7fde7fd36500] -> #17 
io_loop_handler_run_internal[0x7fde7fd37ac0] -> #18 
io_loop_handler_run[0x7fde7fd365c0] -> #19 io_loop_run[0x7fde7fd36740] 
-> #20 master_service_run[0x7fde7fca87d0] -> #21 main[0x55dde7af7770] 
-> #22 __libc_start_main[0x7fde7f8f9fc0] -> #23 _start[0x55dde7af78d0]
Dec  3 15:10:50 cloud1 dovecot: 
doveadm(obfuscated)<336245>: Fatal: master: 
service(doveadm): child 336245 killed with signal 6 (core dumped)
Dec  3 15:10:52 cloud1 dovecot: 
doveadm(obfuscated)<336253><2VTpM/ujqmF9IQUAuIv6vg>: Error: Duplicate 
mailbox GUID 63481f29f6a3aa6177210500b88bfabe for mailboxes INBOX/Sent 
and INBOX/Sent Messages-temp-1 - giving a new GUID 
cba35507fca3aa617d210500b88bfabe to INBOX/Sent
Dec  3 15:10:58 cloud1 dovecot: 
doveadm(obfuscated)<336258>: Error: Duplicate 
mailbox GUID dc3b4434fba3aa6166079db5accb for mailboxes Sent and 
Sent Messages-temp-1 - giving a new GUID 
60ad190102a4aa6182210500b88bfabe to Sent


--
Daniel

-- Original Message --
From: "Daniel Miller" 
To: "Daniel Miller" ; dovecot@dovecot.org
Sent: 12/3/2021 2:42:12 PM
Subject: Re[2]: Replication weirdness


And...one more.

I'm now seeing (again) messages like:

Dec  3 14:29:14 cloud1 dovecot: 
doveadm(obfuscated)<334017>: Error: Duplicate 
mailbox GUID bcb9ca36ae36aa617f0a9db5accb for mailboxes INBOX/Sent 
Messages and INBOX/Sent - giving a new GUID 
fc30bd0a3a9aaa61c1180500b88bfabe to INBOX/Sent
Dec  3 14:38:59 cloud1 dovecot: 
doveadm(obfuscated)<334394>: Error: Duplicate 
mailbox GUID fc30bd0a3a9aaa61c1180500b88bfabe for mailboxes INBOX/Sent 
Messages and INBOX/Sent - giving a new GUID 
f4338038839caa613a1a0500b88bfabe to INBOX/Sent


Having one message for the initial sync I suppose is reasonable. A 
second...maybe? But I'm getting nervous I'm about to start seeing the 
endless temp folders again.

--
Daniel

-- Original Message --
From: "Daniel Miller" 
To: "Daniel Miller" ; dovecot@dovecot.org
Sent: 12/3/2021 2:39:25 PM
Subject: Re: Replication weirdness


Another item.

Again, it may be a 2.3.13 issue and I'm now on 2.3.17. But...I had a 
problem when using the "-N" parameter for dsync. So - I just have
(had):


replication_dsync_parameters = -d -l 30 -U -x INBOX/virtual -x 
INBOX/shared


Now that things are working - I wanted to have my other namespaces 
sync as well. So I went to:


replication_dsync_parameters = -d -l 30 -U -n INBOX -n INBOX/Archives 
-n INBOX/Lists -x INBOX/virtual -x INBOX/shared

Re[3]: Replication weirdness

2021-12-03 Thread Daniel Miller

And...

The user who has both a "Sent" and a "Sent Messages" now has:

drwx--  3 vmail mail  24 Dec  3 09:56 Sent
lrwxrwxrwx  1 vmail mail   4 Nov 30 17:51 'Sent Messages' -> Sent
drwx--  3 vmail mail  24 Dec  3 15:10 'Sent Messages-temp-1'
drwx--  3 vmail mail  24 Dec  3 15:10 'Sent 
Messages-temp-fc30bd0a3a9aaa61c1180500b88bfabe'


and I got the following errors:

Dec  3 15:10:46 cloud1 dovecot: 
doveadm(obfuscated)<336247>: Error: Duplicate 
mailbox GUID 6aae8c39f3a3aa615a079db5accb for mailboxes Sent and 
Sent Messages-temp-1 - giving a new GUID 
63481f29f6a3aa6177210500b88bfabe to Sent
Dec  3 15:10:50 cloud1 dovecot: 
doveadm(obfuscated)<336245>: Panic: file 
dsync-brain-mailbox.c: line 851 (dsync_brain_slave_recv_mailbox): 
assertion failed: (memcmp(dsync_box->mailbox_guid, 
local_dsync_box.mailbox_guid, sizeof(dsync_box->mailbox_guid)) == 0)
Dec  3 15:10:50 cloud1 dovecot: 
doveadm(obfuscated)<336245>: Error: Raw 
backtrace: #0 fatal_handler_real[0x7fde7fd20060] -> #1 
i_internal_fatal_handler[0x7fde7fd20190] -> #2 i_panic[0x7fde7fc731ff] 
-> #3 dsync_brain_slave_recv_mailbox[0x55dde7b22900] -> #4 
dsync_brain_run[0x55dde7b20380] -> #5 dsync_brain_run_io[0x55dde7b20b50] 
-> #6 dsync_ibc_stream_input[0x55dde7b329c0] -> #7 
io_loop_call_io[0x7fde7fd36500] -> #8 
io_loop_handler_run_internal[0x7fde7fd37ac0] -> #9 
io_loop_handler_run[0x7fde7fd365c0] -> #10 io_loop_run[0x7fde7fd36740] 
-> #11 cmd_dsync_server_run[0x55dde7b04f60] -> #12 
doveadm_mail_next_user[0x55dde7b06850] -> #13 
doveadm_cmd_ver2_to_mail_cmd_wrapper[0x55dde7b077e0] -> #14 
doveadm_cmd_run_ver2[0x55dde7b17f00] -> #15 
client_connection_tcp_input[0x55dde7b1c6b0] -> #16 
io_loop_call_io[0x7fde7fd36500] -> #17 
io_loop_handler_run_internal[0x7fde7fd37ac0] -> #18 
io_loop_handler_run[0x7fde7fd365c0] -> #19 io_loop_run[0x7fde7fd36740] 
-> #20 master_service_run[0x7fde7fca87d0] -> #21 main[0x55dde7af7770] -> 
#22 __libc_start_main[0x7fde7f8f9fc0] -> #23 _start[0x55dde7af78d0]
Dec  3 15:10:50 cloud1 dovecot: 
doveadm(obfuscated)<336245>: Fatal: master: 
service(doveadm): child 336245 killed with signal 6 (core dumped)
Dec  3 15:10:52 cloud1 dovecot: 
doveadm(obfuscated)<336253><2VTpM/ujqmF9IQUAuIv6vg>: Error: Duplicate 
mailbox GUID 63481f29f6a3aa6177210500b88bfabe for mailboxes INBOX/Sent 
and INBOX/Sent Messages-temp-1 - giving a new GUID 
cba35507fca3aa617d210500b88bfabe to INBOX/Sent
Dec  3 15:10:58 cloud1 dovecot: 
doveadm(obfuscated)<336258>: Error: Duplicate 
mailbox GUID dc3b4434fba3aa6166079db5accb for mailboxes Sent and 
Sent Messages-temp-1 - giving a new GUID 
60ad190102a4aa6182210500b88bfabe to Sent


--
Daniel

-- Original Message --
From: "Daniel Miller" 
To: "Daniel Miller" ; dovecot@dovecot.org
Sent: 12/3/2021 2:42:12 PM
Subject: Re[2]: Replication weirdness


And...one more.

I'm now seeing (again) messages like:

Dec  3 14:29:14 cloud1 dovecot: 
doveadm(obfuscated)<334017>: Error: Duplicate 
mailbox GUID bcb9ca36ae36aa617f0a9db5accb for mailboxes INBOX/Sent 
Messages and INBOX/Sent - giving a new GUID 
fc30bd0a3a9aaa61c1180500b88bfabe to INBOX/Sent
Dec  3 14:38:59 cloud1 dovecot: 
doveadm(obfuscated)<334394>: Error: Duplicate 
mailbox GUID fc30bd0a3a9aaa61c1180500b88bfabe for mailboxes INBOX/Sent 
Messages and INBOX/Sent - giving a new GUID 
f4338038839caa613a1a0500b88bfabe to INBOX/Sent


Having one message for the initial sync I suppose is reasonable. A 
second...maybe? But I'm getting nervous I'm about to start seeing the 
endless temp folders again.

--
Daniel

-- Original Message --
From: "Daniel Miller" 
To: "Daniel Miller" ; dovecot@dovecot.org
Sent: 12/3/2021 2:39:25 PM
Subject: Re: Replication weirdness


Another item.

Again, it may be a 2.3.13 issue and I'm now on 2.3.17. But...I had a 
problem when using the "-N" parameter for dsync. So - I just have
(had):


replication_dsync_parameters = -d -l 30 -U -x INBOX/virtual -x 
INBOX/shared


Now that things are working - I wanted to have my other namespaces 
sync as well. So I went to:


replication_dsync_parameters = -d -l 30 -U -n INBOX -n INBOX/Archives 
-n INBOX/Lists -x INBOX/virtual -x INBOX/shared


This appears to be working (the sync is just starting)...but I'm 
seeing lock errors in the logs such as:
Dec  3 14:34:24 bubba dovecot: 
doveadm(dmil...@amfes.com)<31785>: Error: 
Couldn't lock /var/mail/amfes.com/dmiller/.dovecot-sync.lock: 
fcntl(/var/mail/amfes.com/dmiller/.dovecot-sync.lock, write-lock, 
F_SETLKW) locking failed: Timed out after 30 seconds (WRITE lock held 
by pid 31373)


Checking the pid in question I see it's actively syncing a folder in 
my mailbox. So I'm guessing, purely guessing, that by having multiple 
namespaces explicitly directed to sync Dovecot is trying to start a 
sync process for

Re[2]: Replication weirdness

2021-12-03 Thread Daniel Miller

And...one more.

I'm now seeing (again) messages like:

Dec  3 14:29:14 cloud1 dovecot: 
doveadm(obfuscated)<334017>: Error: Duplicate 
mailbox GUID bcb9ca36ae36aa617f0a9db5accb for mailboxes INBOX/Sent 
Messages and INBOX/Sent - giving a new GUID 
fc30bd0a3a9aaa61c1180500b88bfabe to INBOX/Sent
Dec  3 14:38:59 cloud1 dovecot: 
doveadm(obfuscated)<334394>: Error: Duplicate 
mailbox GUID fc30bd0a3a9aaa61c1180500b88bfabe for mailboxes INBOX/Sent 
Messages and INBOX/Sent - giving a new GUID 
f4338038839caa613a1a0500b88bfabe to INBOX/Sent


Having one message for the initial sync I suppose is reasonable. A 
second...maybe? But I'm getting nervous I'm about to start seeing the 
endless temp folders again.

--
Daniel

-- Original Message --
From: "Daniel Miller" 
To: "Daniel Miller" ; dovecot@dovecot.org
Sent: 12/3/2021 2:39:25 PM
Subject: Re: Replication weirdness


Another item.

Again, it may be a 2.3.13 issue and I'm now on 2.3.17. But...I had a 
problem when using the "-N" parameter for dsync. So - I just have
(had):


replication_dsync_parameters = -d -l 30 -U -x INBOX/virtual -x 
INBOX/shared


Now that things are working - I wanted to have my other namespaces sync 
as well. So I went to:


replication_dsync_parameters = -d -l 30 -U -n INBOX -n INBOX/Archives 
-n INBOX/Lists -x INBOX/virtual -x INBOX/shared


This appears to be working (the sync is just starting)...but I'm seeing 
lock errors in the logs such as:
Dec  3 14:34:24 bubba dovecot: 
doveadm(dmil...@amfes.com)<31785>: Error: 
Couldn't lock /var/mail/amfes.com/dmiller/.dovecot-sync.lock: 
fcntl(/var/mail/amfes.com/dmiller/.dovecot-sync.lock, write-lock, 
F_SETLKW) locking failed: Timed out after 30 seconds (WRITE lock held 
by pid 31373)


Checking the pid in question I see it's actively syncing a folder in my 
mailbox. So I'm guessing, purely guessing, that by having multiple 
namespaces explicitly directed to sync Dovecot is trying to start a 
sync process for each of those namespaces - but all of them share a 
common lock and therefore only one operation is allowed at a time.


Am I correct, and whether or not I am - how can I correct these errors? 
Do I dare try going back to just "-N"?


--
Daniel

-- Original Message --
From: "Daniel Miller" 
To: dovecot@dovecot.org
Sent: 12/3/2021 2:16:28 PM
Subject: Replication weirdness

First, I have to say this. After configuring everything correctly - 
and that means *everything* correctly - Dovecot replication Just 
Works. I'm not sure how (yes I do - Timo & Co. Magic) - but it does. 
Real-time new sync is near instantaneous.


Now the problem. Or the background for the problem. My primary server 
uses sdbox for primary storage, mdbox for archival storage, and 
fts-solr. I spun up a second server, using sdbox, mdbox, and 
fts-flatcurve. My namespaces are as defined below. As best I can tell 
(based on diff comparing two 'doveconf -n' outputs) my namespaces are 
the same on both servers.


namespace archives {
  list = children
  location = mdbox:/var/mail/%d/%n/Archives/mdbox
  mailbox Unsorted {
auto = no
special_use = \Archive
  }
  prefix = INBOX/Archives/
  separator = /
  subscriptions = no
  type = private
}
namespace inbox {
  alias_for =
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox "Deleted Messages" {
auto = no
autoexpunge = 30 days
special_use = \Trash
  }
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
 }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Trash {
auto = subscribe
autoexpunge = 30 days
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
  type = private
}
namespace lists {
  list = children
  location = mdbox:/var/mail/%d/%n/Lists/mdbox
  prefix = INBOX/Lists/
  separator = /
  subscriptions = no
  type = private
}
namespace subscriptions {
  hidden = yes
  list = no
  location =
  prefix =
  separator = /
  subscriptions = yes
  type = private
}
namespace usershares {
  list = yes
  location = sdbox:/var/mail/%%d/%%n/sdbox:NO-NOSELECT
  prefix = INBOX/shared/%%d/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace virtual {
  list = children
  location = virtual:/var/mail/%d/%n/virtual
  mailbox Flagged {
comment = All my flagged messages
special_use = \Flagged
  }
  prefix = INBOX/virtual/
  separator = /
  subscriptions = no
}

I also have:
plugin {
  mailbox_alias_new = Sent Messages
  mailbox_alias_new2 = Sent Items
  mailbox_alias_new3 = Deleted Messages
  mailbox_alias_old = Sent
  mailbox_alias_old2 = Sent
  mailbox_alias_old3 = Trash
}

This setup worked fine with my single server. Then I enabled 
replication - just on the primary. Dsync went to work (it seemed to 
take forever for the initial sync but that's what happens with large 
mailboxes and slow internet connections).


The problem came up with certain subfolde

Re: Replication weirdness

2021-12-03 Thread Daniel Miller

Another item.

Again, it may be a 2.3.13 issue and I'm now on 2.3.17. But...I had a 
problem when using the "-N" parameter for dsync. So - I just have (had):


replication_dsync_parameters = -d -l 30 -U -x INBOX/virtual -x 
INBOX/shared


Now that things are working - I wanted to have my other namespaces sync 
as well. So I went to:


replication_dsync_parameters = -d -l 30 -U -n INBOX -n INBOX/Archives -n 
INBOX/Lists -x INBOX/virtual -x INBOX/shared


This appears to be working (the sync is just starting)...but I'm seeing 
lock errors in the logs such as:
Dec  3 14:34:24 bubba dovecot: 
doveadm(dmil...@amfes.com)<31785>: Error: 
Couldn't lock /var/mail/amfes.com/dmiller/.dovecot-sync.lock: 
fcntl(/var/mail/amfes.com/dmiller/.dovecot-sync.lock, write-lock, 
F_SETLKW) locking failed: Timed out after 30 seconds (WRITE lock held by 
pid 31373)


Checking the pid in question I see it's actively syncing a folder in my 
mailbox. So I'm guessing, purely guessing, that by having multiple 
namespaces explicitly directed to sync Dovecot is trying to start a sync 
process for each of those namespaces - but all of them share a common 
lock and therefore only one operation is allowed at a time.


Am I correct, and whether or not I am - how can I correct these errors? 
Do I dare try going back to just "-N"?


--
Daniel

-- Original Message --
From: "Daniel Miller" 
To: dovecot@dovecot.org
Sent: 12/3/2021 2:16:28 PM
Subject: Replication weirdness

First, I have to say this. After configuring everything correctly - and 
that means *everything* correctly - Dovecot replication Just Works. I'm 
not sure how (yes I do - Timo & Co. Magic) - but it does. Real-time new 
sync is near instantaneous.


Now the problem. Or the background for the problem. My primary server 
uses sdbox for primary storage, mdbox for archival storage, and 
fts-solr. I spun up a second server, using sdbox, mdbox, and 
fts-flatcurve. My namespaces are as defined below. As best I can tell 
(based on diff comparing two 'doveconf -n' outputs) my namespaces are 
the same on both servers.


namespace archives {
  list = children
  location = mdbox:/var/mail/%d/%n/Archives/mdbox
  mailbox Unsorted {
auto = no
special_use = \Archive
  }
  prefix = INBOX/Archives/
  separator = /
  subscriptions = no
  type = private
}
namespace inbox {
  alias_for =
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox "Deleted Messages" {
auto = no
autoexpunge = 30 days
special_use = \Trash
  }
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
 }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Trash {
auto = subscribe
autoexpunge = 30 days
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
  type = private
}
namespace lists {
  list = children
  location = mdbox:/var/mail/%d/%n/Lists/mdbox
  prefix = INBOX/Lists/
  separator = /
  subscriptions = no
  type = private
}
namespace subscriptions {
  hidden = yes
  list = no
  location =
  prefix =
  separator = /
  subscriptions = yes
  type = private
}
namespace usershares {
  list = yes
  location = sdbox:/var/mail/%%d/%%n/sdbox:NO-NOSELECT
  prefix = INBOX/shared/%%d/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace virtual {
  list = children
  location = virtual:/var/mail/%d/%n/virtual
  mailbox Flagged {
comment = All my flagged messages
special_use = \Flagged
  }
  prefix = INBOX/virtual/
  separator = /
  subscriptions = no
}

I also have:
plugin {
  mailbox_alias_new = Sent Messages
  mailbox_alias_new2 = Sent Items
  mailbox_alias_new3 = Deleted Messages
  mailbox_alias_old = Sent
  mailbox_alias_old2 = Sent
  mailbox_alias_old3 = Trash
}

This setup worked fine with my single server. Then I enabled 
replication - just on the primary. Dsync went to work (it seemed to 
take forever for the initial sync but that's what happens with large 
mailboxes and slow internet connections).


The problem came up with certain subfolders. And I believe it only 
happens with subfolders that have spaces in their names. I had two 
users' mailboxes (under Sent), one of which had a "Sent Messages" 
symlink alias for "Sent", that started generating tens or hundreds of 
duplicates during sync. Fortunately those subfolders only had a few 
mails in them. But I had trees looking like:


[...] (below is under /var/mail/domain/user/sdbox/mailboxes/Sent/)
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-1-temp-f80b1a00ce9aa961a86-temp-2
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-1-temp-f80b1a00ce9aa961a86-temp-3
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-023fa4271c9ca9611ade0400b88bfabe
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-023fa4271c9ca9611ad-temp-1

Proposal Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-1
Proposal Requests-temp-

Re[2]: No source packages in APT repo?

2021-12-03 Thread Daniel Miller

+1.

I *was* using the repo's packages for Ubuntu - but when I wanted to try 
fts-flatcurve I needed something to compile against. I'd rather do so 
against the repo sources if possible.


--
Daniel

-- Original Message --
From: "Shawn Heisey" 
To: dovecot@dovecot.org
Sent: 12/3/2021 1:45:26 PM
Subject: Re: No source packages in APT repo?


On 12/3/21 1:28 PM, Aki Tuomi wrote:

Is there a particular reason you need to build source packages?



There are sometimes moments when I want to test out a code change in a program 
I'm running on a server.  If a source package is available, I can make the 
change and build a new package that includes all of the Ubuntu customizations 
for that program plus the change I am testing.  To test, I just manually 
install the package.  To revert, I reinstall the original package with apt.  
Without a source package, testing might require complex steps with the upstream 
source and customization information that I may not have access to.
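As a sketch of the workflow Shawn describes, on Ubuntu (this assumes a
deb-src entry exists for the repository providing the package, which is
exactly what the thread says the Dovecot repo currently lacks):

sudo apt-get build-dep dovecot-core     # pull in the build dependencies
apt-get source dovecot-core             # fetch and unpack the source package
cd dovecot-*/
# ...apply the change under test...
dpkg-buildpackage -us -uc -b            # rebuild unsigned binary packages
sudo dpkg -i ../dovecot-core_*.deb      # install the test build; revert later with apt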

Thanks,
Shawn








Replication weirdness

2021-12-03 Thread Daniel Miller
First, I have to say this. After configuring everything correctly - and 
that means *everything* correctly - Dovecot replication Just Works. I'm 
not sure how (yes I do - Timo & Co. Magic) - but it does. Real-time new 
sync is near instantaneous.


Now the problem. Or the background for the problem. My primary server 
uses sdbox for primary storage, mdbox for archival storage, and 
fts-solr. I spun up a second server, using sdbox, mdbox, and 
fts-flatcurve. My namespaces are as defined below. As best I can tell 
(based on diff comparing two 'doveconf -n' outputs) my namespaces are 
the same on both servers.


namespace archives {
  list = children
  location = mdbox:/var/mail/%d/%n/Archives/mdbox
  mailbox Unsorted {
auto = no
special_use = \Archive
  }
  prefix = INBOX/Archives/
  separator = /
  subscriptions = no
  type = private
}
namespace inbox {
  alias_for =
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox "Deleted Messages" {
auto = no
autoexpunge = 30 days
special_use = \Trash
  }
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
 }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox Trash {
auto = subscribe
autoexpunge = 30 days
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
  type = private
}
namespace lists {
  list = children
  location = mdbox:/var/mail/%d/%n/Lists/mdbox
  prefix = INBOX/Lists/
  separator = /
  subscriptions = no
  type = private
}
namespace subscriptions {
  hidden = yes
  list = no
  location =
  prefix =
  separator = /
  subscriptions = yes
  type = private
}
namespace usershares {
  list = yes
  location = sdbox:/var/mail/%%d/%%n/sdbox:NO-NOSELECT
  prefix = INBOX/shared/%%d/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace virtual {
  list = children
  location = virtual:/var/mail/%d/%n/virtual
  mailbox Flagged {
comment = All my flagged messages
special_use = \Flagged
  }
  prefix = INBOX/virtual/
  separator = /
  subscriptions = no
}

I also have:
plugin {
  mailbox_alias_new = Sent Messages
  mailbox_alias_new2 = Sent Items
  mailbox_alias_new3 = Deleted Messages
  mailbox_alias_old = Sent
  mailbox_alias_old2 = Sent
  mailbox_alias_old3 = Trash
}

This setup worked fine with my single server. Then I enabled replication 
- just on the primary. Dsync went to work (it seemed to take forever for 
the initial sync but that's what happens with large mailboxes and slow 
internet connections).
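For reference, the replication-specific pieces of the doveconf -n posted in
the "Re[2]: Replicator bug report" message in this archive boil down to
roughly the following (the notify and replication plugins also have to be
listed in mail_plugins, and doveadm_password must be set, as in that output):

doveadm_port = 10993
plugin {
  mail_replica = tcp:10.23.1.10
  replication_sync_timeout = 2
}
replication_dsync_parameters = -d -l 30 -U -n INBOX -n INBOX/Archives -n INBOX/Lists -x INBOX/virtual -x INBOX/shared
replication_max_conns = 5
service aggregator {
  fifo_listener replication-notify-fifo {
    mode = 0600
    user = vmail
  }
  unix_listener replication-notify {
    mode = 0600
    user = vmail
  }
}
service replicator {
  process_min_avail = 1
  unix_listener replicator-doveadm {
    mode = 0600
    user = vmail
  }
}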


The problem came up with certain subfolders. And I believe it only 
happens with subfolders that have spaces in their names. I had two 
users' mailboxes (under Sent), one of which had a "Sent Messages" 
symlink alias for "Sent", that started generating tens or hundreds of 
duplicates during sync. Fortunately those subfolders only had a few 
mails in them. But I had trees looking like:


[...] (below is under /var/mail/domain/user/sdbox/mailboxes/Sent/)
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-1-temp-f80b1a00ce9aa961a86-temp-2
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-1-temp-f80b1a00ce9aa961a86-temp-3
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-023fa4271c9ca9611ade0400b88bfabe
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-023fa4271c9ca9611ad-temp-1

Proposal Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-1
Proposal Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-2
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-2-temp-1-temp-1

Proposal Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-3
Proposal Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-4
Proposal Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-5
Proposal 
Requests-temp-c6e003375e64a961c93d9db5accb-temp-2-temp-e2aa0f35c99ba96135659db5accb

[...]

I kept stopping, cleaning up the folders, and re-starting - and they 
kept regenerating. I tried renaming the folders to eliminate the spaces 
and I think that helped in one case - for the others I just moved the 
folders outside of the mail area completely to let the sync finish.


Now that it's been stable for a day or two - I enabled sync in the other 
direction. And after setting *all* the required parameters instead of 
just most of them...it's working. But...I'm nervous about moving the 
problem folders back over. I will say, if it makes any difference, my 
primary server *was* running version 2.3.13 and I just updated it to 
2.3.17. The remote is also 2.3.17.


--
Daniel

Re: Shared mailboxes setups and dictionaries

2020-09-15 Thread Daniel Miller

On 9/15/2020 10:07 AM, Matej Tyc wrote:

On 14. 09. 20 22:46, Daniel Miller wrote:

On 9/14/2020 1:19 PM, Matej Tyc wrote:

...

When learning about how ACL work in e.g. 
[...] so I can't use 
it to reverse-engineer the correct syntax.




The global ACLs are...global. They apply to all matching mailboxes 
system-wide. So to answer your question, yes "* user=foo lrw" means 
all mailboxes of all accounts are shared to the user foo. But...


Great, what about the format itself? Is it 
//? The documentation brings up, i.e. 
/j...@example.com/* shares all mailboxes of John from the example.com 
domain? Or have I overlooked a documentation page where the syntax is 
introduced?


No. You need to read the docs again:
   https://doc.dovecot.org/settings/plugin/acl/

Global ACLs live in their own little space - either filesystem based or 
file based. You specify who is *granted* global access - and the level 
of that global access applies system-wide. So if you grant 
"j...@example.com" global read/write access to all Inboxes - John will 
be able to access every Inbox of every user (however, he might not know 
that a given inbox exists - without explicit configuration or explicit 
sharing which updates the dictionary).
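A small sketch of what such a global grant can look like with a global ACL
file (paths and addresses are illustrative; the line format follows the ACL
documentation page linked above):

plugin {
  # global ACL file instead of per-mailbox dovecot-acl files
  acl = vfile:/etc/dovecot/dovecot-acl
}

# /etc/dovecot/dovecot-acl: one "<mailbox pattern> <identifier> <rights>" per line
INBOX  user=john@example.com  lr
*      user=foo               lrw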


Next what https://wiki.dovecot.org/SharedMailboxes/Shared and 
https://wiki.dovecot.org/Dictionary describe is a possibility to 
reference LDAP data to define an ACL dictionary. Do I understand it 
correctly that if a LDAP database is the single source of truth, then 
I don't have to worry about updating dictionaries as long as LDAP 
itself is up-to-date, but I have to keep ACLs and LDAP in sync 
manually (or using an application)?
Again, a dictionary is a list of shared mailboxes - not ACL's. You can 
use any dictionary source Dovecot can read from - but if the 
dictionary also supports writing then any manipulation of ACLs will 
automatically update the dictionary.


What the above implies, and I will now state explicitly, is that while 
global ACLs provide *access* they do not *publish* that access. A 
dictionary must be manually updated to list those mailboxes.


What I understand is that ACLs are purely filesystem-based, i.e. no LDAP 
backend, and one has to sync LDAP to respective ACLs "manually".


If I follow what you have said, one could have an equal result with a 
database, syncing ACLs "manually" from LDAP, and doveadm will make sure 
that the database backend will be up-to-date.


First, I provide the disclaimer that I don't use LDAP. I had it years 
ago and I'm quite happy to leave it behind. So I can't give you current 
LDAP/Dovecot experience. However, a quick read of the page you reference 
shows LDAP is read-only. Which means while you could theoretically use 
LDAP for a global ACL source - trying to use it for per-user shares 
would require quite a bit of manual effort for every change. I believe 
the technical term for such a setup is "masochistic".


I totally understand the desire to have a single database for general 
config purposes - however I think you're trying to use a power drill as 
a hammer. Leave your authentication database, i.e. LDAP, alone and let 
your mail server do its thing. Consider the mail store as a whole - not 
just the messages, but the format, the folder structure, and the ACLs - a 
"black box", and I think you'll save yourself a lot of frustration. 
Dovecot (in my own uninformed opinion) is designed to be (mostly) 
autonomous and file-based - any database support is just for 
users/passwords, so leave it at that.


If you want per-user shares just use the example at the top of the wiki 
page. From my own config:


plugin {
acl = vfile
acl_shared_dict = file:/var/mail/%d/shared-mailboxes
}

based on a mail_location of "sdbox:/var/mail/%d/%n/sdbox".

--
Daniel



Re: Shared mailboxes setups and dictionaries

2020-09-14 Thread Daniel Miller

On 9/14/2020 1:19 PM, Matej Tyc wrote:

Hello,

I am relatively new to the world of MTAs and MDAs, and I try to set up 
shared mailboxes.


So far I have somehow succeeded - I have defined a shared namespace and 
I have managed to create per-mailbox ACL files thanks to the doveadm 
command.


However, I have been following these resources and there were bits that 
have puzzled me:


When learning about how ACL work in e.g. 
https://doc.dovecot.org/settings/plugin/acl/ - when one wishes to use 
the Global ACL file, how does one link it to a particular user's 
mailboxes? Examples that are listed in the documentation are far too 
generic. For example does "* user=foo lrw" imply that all mailboxes of 
all accounts are shared to the user foo? The doveadm command works only 
if dovecot is set up with per-mailbox ACL files, so I can't use it to 
reverse-engineer the correct syntax.




The global ACLs are...global. They apply to all matching mailboxes 
system-wide. So to answer your question, yes "* user=foo lrw" means all 
mailboxes of all accounts are shared to the user foo. But...


An interesting aspect of ACLs is dictionaries. I understood them as some 
kind of cache - if there is no dictionary or it is empty, then shared 
mailboxes don't work. Conversely, the dictionary itself is not enough; one 
needs actual ACLs set up correctly. Is this a correct understanding?


The ACLs grant/deny access to a specific mailbox - when that mailbox is 
known to the client. But ACLs are never scanned or iterated over to 
generate a list of available mailboxes - that's where the dictionary 
comes in. The dictionary is a list of shared mailboxes - but that's all 
it is. So when a client queries the server for a list of available 
mailboxes the dictionary is consulted. The ACLs are then applied for 
each transaction whenever a client tries to read/write/access/whatever a 
specific mailbox. So theoretically, if you can manually specify the 
shared mailbox correctly, no dictionary is required for access.
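As an illustration of that interplay (addresses and mailbox name are
hypothetical), a per-user share created with doveadm sets the ACL and, when
acl_shared_dict is writable, updates the dictionary at the same time:

# run for the mailbox owner; grants foo visibility and read/write access
doveadm acl set -u owner@example.com INBOX/ProjectX user=foo@example.com lookup read write write-seen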




Next what https://wiki.dovecot.org/SharedMailboxes/Shared and 
https://wiki.dovecot.org/Dictionary describe is a possibility to 
reference LDAP data to define an ACL dictionary. Do I understand it 
correctly that if a LDAP database is the single source of truth, then I 
don't have to worry about updating dictionaries as long as LDAP itself 
is up-to-date, but I have to keep ACLs and LDAP in sync manually (or 
using an application)?


Again, a dictionary is a list of shared mailboxes - not ACL's. You can 
use any dictionary source Dovecot can read from - but if the dictionary 
also supports writing then any manipulation of ACLs will automatically 
update the dictionary.


What the above implies, and I will now state explicitly, is that while 
global ACLs provide *access* they do not *publish* that access. A 
dictionary must be manually updated to list those mailboxes.


--
Daniel



dbox alternate storage and archived namespace

2020-09-09 Thread Daniel Miller
This may (and probably does) come under the heading of "really dumb 
ideas", but...


Before I develop this further I need to ask - is it possible to have a 
"primary" mail_location using single-dbox with an alternate storage 
using multi-dbox? This is *not* the same as different storages for 
different namespaces (already have that).

--
Daniel



Re[2]: [EXT] Re: Support for MULTISEARCH

2020-05-11 Thread Daniel Miller
What client(s) use this and how? I've used virtual folders - by 
explicitly "subscribing" to them and then performing a search within 
them. By hiding the virtual folders, how do you use them?


---
Daniel

-- Original Message --
From: "Joe Wong" 
To: "Aki Tuomi" 
Cc: dovecot@dovecot.org; "Peter" ; "Sami Ketola" 


Sent: 5/11/2020 4:51:14 AM
Subject: Re: [EXT] Re: Support for MULTISEARCH


On Mon, May 11, 2020 at 7:18 PM Aki Tuomi 
wrote:



 > On 11/05/2020 14:09 Joe Wong  wrote:
 >
 >
 >
 >
 >
 > On Mon, May 11, 2020 at 5:16 PM Aki Tuomi 
 wrote:
 > >
 > >  > On 11/05/2020 12:12 Joe Wong  wrote:
 > >  >
 > >  >
 > >  >
 > >  >
 > >  >
 > >  > On Sun, May 10, 2020 at 3:54 PM Sami Ketola 
 wrote:
 > >  > >
 > >  > >
 > >  > > > On 10. May 2020, at 1.51, Peter  wrote:
 > >  > > >
 > >  > > > Am 10.05.20 um 00:22 schrieb Daniel Miller:
 > >  > > >> Thank you - I'm aware of the virtual folder option and do use
 it. My interest is for a Windows client, EM Client, which I otherwise
 really enjoy. Unfortunately, they've implemented server-side searching only
 via MULTISEARCH - for reasons passing my understanding. So I was hoping to
 hear Dovecot either already had support or there were plans to implement it.
 > >  > > >
 > >  > > > Virtual folder does not scale. Thank you for naming a client
 that does multisearch!
 > >  > >
 > >  > > Virtual folder scales just fine. What makes you think it does not?
 > >  > >
 > >  > > We have customers that have users with thousand folders and
 millions of emails and still virtual folder scales.
 > >  > >
 > >  > > Sami
 > >  > >
 > >  >
 > >  > is this possible to *hide* the virtual folder from listing but make
 it SELECTable / EXAMINEable from IMAP?
 > >  >
 > >  >
 > >
 > >  namespace {
 > >  location = virtual:...
 > >  hidden = yes
 > >  }
 > >
 > >  Aki
 >
 >
 > * NAMESPACE (("" "/")) NIL NIL
 > a OK Namespace completed (0.001 + 0.000 secs).
 > a list "" "*"* LIST (\HasNoChildren \UnMarked) "/" FromL3
 > * LIST (\HasNoChildren \UnMarked) "/" Apple
 > * LIST (\HasNoChildren \UnMarked) "/" JunkMail
 > * LIST (\HasNoChildren \Marked \Trash) "/" Trash
 > * LIST (\HasNoChildren \UnMarked \Drafts) "/" Drafts
 > * LIST (\HasNoChildren \Marked) "/" SENT
 > * LIST (\HasNoChildren) "/" virtual* LIST (\HasNoChildren) "/" INBOX
 > a OK List completed (0.003 + 0.000 + 0.003 secs).
 >
 > It's now hidden in the namespace but I can still see it in the folder
 list, is this expected?
 >

 Sorry, forgot to say

 hidden=yes
 list=no

 Aki



Thanks, it is working now.
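Putting the thread together, a namespace along these lines (prefix and
location are illustrative, the location path borrowed from the config
elsewhere in this archive) keeps a virtual folder SELECTable/EXAMINEable
while hiding it from NAMESPACE and LIST:

namespace virtual {
  prefix = virtual/
  separator = /
  location = virtual:/var/mail/%d/%n/virtual
  hidden = yes
  list = no
}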

Re[2]: Support for MULTISEARCH

2020-05-09 Thread Daniel Miller
Thank you - I'm aware of the virtual folder option and do use it. My 
interest is for a Windows client, EM Client, which I otherwise really 
enjoy. Unfortunately, they've implemented server-side searching only via 
MULTISEARCH - for reasons passing my understanding. So I was hoping to 
hear Dovecot either already had support or there were plans to implement 
it.


---
Daniel

-- Original Message --
From: "Teemu Huovila" 
To: dovecot@dovecot.org
Sent: 5/8/2020 5:49:34 AM
Subject: Re: Support for MULTISEARCH



On 6.5.2020 3.57, Daniel Miller wrote:

Does Dovecot presently support the MULTISEARCH command, or are there plans to 
do so?

If you mean RFC7377, that is not supported.  ref. https://www.imapwiki.org/Specs


I would suggest evaluating if searching a single virtual folder could work for 
your use case. ref. https://doc.dovecot.org/configuration_manual/virtual_plugin/

br,
Teemu


---
Daniel

Support for MULTISEARCH

2020-05-05 Thread Daniel Miller
Does Dovecot presently support the MULTISEARCH command, or are there 
plans to do so?


---
Daniel

Re: Headsup on feature removal

2020-03-29 Thread Daniel Miller

-- Original Message --

[...]
To start, the following features are likely to be removed in the next few 
releases of Dovecot.
[...]
 - mailbox alias plugin

Like autocreate, autosubscribe, and expire - is there a built-in feature 
that makes this plugin obsolete?


---
Daniel




Re[2]: Namespace problem? duplicated folders...

2020-03-29 Thread Daniel Miller

Try this:

namespace inbox {
  hidden = no
  inbox = yes
  list = yes
  mailbox Drafts {
auto = no
special_use = \Drafts
  }
  mailbox Junk {
auto = no
special_use = \Junk
  }
  mailbox Sent {
auto = no
special_use = \Sent
  }
  mailbox "Sent Messages" {
auto = no
special_use = \Sent
  }
  mailbox Trash {
auto = no
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
  type = private
}
namespace subscriptions {
  hidden = yes
  list = no
  subscriptions = yes
  type = private
}

---
Daniel


-- Original Message --
From: "Gregory Sloop" 
To: "Dovecot Mailing List" 
Sent: 1/23/2020 12:46:54 PM
Subject: Re: Namespace problem? duplicated folders...


Anyone?
A tip on where/what to look at, even?
I've read the docs on namespaces and I'm not sure how I could do this 
[or not do it, in this case] from the docs.


I realize I didn't include the version I'm running in the prior post, too.

It is 2.2.22.




I'm sure this is related to namespaces, but for some reason [brain 
damage perhaps? :)] I can't seem to figure out how to fix it.

In TBird, for example, I have a folder tree that looks like this

Inbox -|
Folder-A
Folder-B
Folder-C
Folder-A
Folder-B
Folder-C

And  Folder-A, Folder-B, Folder-C are the same folders, just shown 
twice. [In two different hierarchies.]


I'm using mbox files to store mail. [If that matters.]

Here's what I have in my conf files for namespace defs

namespace {
inbox = yes
#hidden = yes
prefix = INBOX/
separator = /
}

namespace inbox {
mailbox Drafts {
  special_use = \Drafts
}
mailbox Junk {
  special_use = \Junk
}
mailbox Trash {
  special_use = \Trash
}
mailbox Sent {
  special_use = \Sent
}
mailbox "Sent Messages" {
  special_use = \Sent
}
}


I've had this problem for literally years, and it's not been that big 
of a deal (mostly irritating) - and I've tried to "fix" it before 
without any success. But now it's causing some issues and I'd really 
like to get this nagging issue gone. Since it's been so long since I 
last tried to fix it, I really can't recall any of the settings changes I 
made in an attempt to fix it. I'd guess this is a trivial issue, but if 
so, can someone point me in the right direction?


Thanks in advance!

-Greg


--
Gregory Sloop, Principal: Sloop Network & Computer Consulting
Voice: 503.251.0452 x121
EMail: gr...@sloop.net
http://www.sloop.net
---

Re: Shared Mailboxes with Multiple Domains

2020-02-17 Thread Daniel Miller

Any thoughts on this?

---
Daniel

-- Original Message --
From: "Daniel Miller" 
To: "Dovecot Mailing List" 
Sent: 2/12/2020 6:16:05 PM
Subject: Shared Mailboxes with Multiple Domains


Trying to track down a problem I've been dealing with for a while. Everything 
else works fine - the problem is with shared mailboxes.

My present, and desired, prefix for the shared namespace is:
  prefix = INBOX/shared/%%d/%%n/

Some mail clients, particularly Thunderbird and Android's AquaMail, have no problem with this. But other 
(presumably broken) clients don't show the shared mailboxes. This includes EM Client and Webmail Lite. 
Actually, Webmail Lite lists the mailboxes in the subscription window, but then the "live" folder 
list shows "shared" and "shared/domain" but none of the shared mailboxes below the domain.

Changing to:
  prefix = INBOX/shared/%%u/

Works across all clients - but I'd rather have the domain separation. Testing with telnet 
". LIST '' '*'" yields the full list with either config.

The files /var/mail/%d/shared-mailboxes contain entries like:
shared/shared-boxes/group/allshared/u...@domain.com
  1


Below is "doveconf -n" output.

# 2.3.9.3 (9f41b88fa): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.9 (db4e9a2f)
# OS: Linux 5.3.0-28-generic x86_64 Ubuntu 18.04.4 LTS
# Hostname: bubba.amfes.lan
auth_cache_size = 4 k
auth_master_user_separator = *
auth_mechanisms = plain login
auth_policy_hash_nonce = # hidden, use -P to show it
auth_policy_hash_truncate = 8
auth_policy_server_api_header = Authorization: Basic 
d2ZvcmNlOnVsdHJhLXNlY3JldC1zZWN1cmUtc2FmZQ
default_login_user = nobody
default_vsz_limit = 2 G
disable_plaintext_auth = no
imap_client_workarounds = tb-extra-mailbox-sep
imap_idle_notify_interval = 29 mins
listen = *
login_trusted_networks = 192.168.0.0/24
mail_attachment_hash = %{sha512}
mail_plugins = fts fts_solr acl zlib virtual
mail_prefetch_count = 10
mail_shared_explicit_inbox = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy 
include variables body enotify environment mailbox date index ihave duplicate 
mime foreverypart extracttext
mdbox_rotate_size = 20 M
namespace archives {
  list = children
  location = mdbox:/var/mail/%d/%n/Archives/mdbox
  mailbox Unsorted {
auto = no
special_use = \Archive
  }
  prefix = INBOX/Archives/
  separator = /
  subscriptions = no
  type = private
}
namespace inbox {
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox "Deleted Messages" {
auto = no
autoexpunge = 30 days
special_use = \Trash
  }
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox "Sent Items" {
auto = no
special_use = \Sent
  }
  mailbox "Sent Messages" {
auto = no
special_use = \Sent
  }
  mailbox Trash {
auto = subscribe
autoexpunge = 30 days
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
  type = private
}
namespace lists {
  list = children
  location = mdbox:/var/mail/%d/%n/Lists/mdbox
  prefix = INBOX/Lists/
  separator = /
  subscriptions = no
  type = private
}
namespace subscriptions {
  hidden = yes
  list = no
  location =
  prefix =
  subscriptions = yes
}
namespace usershares {
  list = children
  location = sdbox:/var/mail/%%d/%%n/sdbox:NO-NOSELECT
  prefix = INBOX/shared/%%d/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace virtual {
  list = children
  location = virtual:/var/mail/%d/%n/virtual
  mailbox Flagged {
comment = All my flagged messages
special_use = \Flagged
  }
  prefix = INBOX/virtual/
  separator = /
  subscriptions = no
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/mail/%d/shared-mailboxes
  fts = solr
  fts_autoindex = yes
  fts_autoindex_exclude = \Trash
  fts_autoindex_exclude2 = \Junk
  fts_autoindex_exclude3 = \Spam
  fts_enforced = no
  fts_index_timeout = 20s
  fts_solr = url=http://127.0.0.1:8983/solr/dovecot/
  mailbox_alias_new = Sent Messages
  mailbox_alias_new2 = Sent Items
  mailbox_alias_new3 = Deleted Messages
  mailbox_alias_old = Sent
  mailbox_alias_old2 = Sent
  mailbox_alias_old3 = Trash
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
protocols = imap lmtp sieve
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0660
user = postfix
  }
  unix_listener auth-userdb {
group = mail
mode = 0600
user = vmail
  }
}
service dict {
  unix_listener dict {
group = mail
mode = 0660
user = vmail
  }
}
service imap-login {
  process_min_avail = 10
  service_count = 1
}
service imap-postlogin {
  executabl

Shared Mailboxes with Multiple Domains

2020-02-12 Thread Daniel Miller
Trying to track down a problem I've been dealing with for a while. 
Everything else works fine - the problem is with shared mailboxes.


My present, and desired, prefix for the shared namespace is:
  prefix = INBOX/shared/%%d/%%n/

Some mail clients, particularly Thunderbird and Android's AquaMail, have 
no problem with this. But other (presumably broken) clients don't show 
the shared mailboxes. This includes EM Client and Webmail Lite. 
Actually, Webmail Lite lists the mailboxes in the subscription window, 
but then the "live" folder list shows "shared" and "shared/domain" but 
none of the shared mailboxes below the domain.


Changing to:
  prefix = INBOX/shared/%%u/

Works across all clients - but I'd rather have the domain separation. 
Testing with telnet ". LIST '' '*'" yields the full list with either config.


The files /var/mail/%d/shared-mailboxes contain entries like:
  shared/shared-boxes/group/allshared/u...@domain.com
  1


Below is "doveconf -n" output.

# 2.3.9.3 (9f41b88fa): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.9 (db4e9a2f)
# OS: Linux 5.3.0-28-generic x86_64 Ubuntu 18.04.4 LTS
# Hostname: bubba.amfes.lan
auth_cache_size = 4 k
auth_master_user_separator = *
auth_mechanisms = plain login
auth_policy_hash_nonce = # hidden, use -P to show it
auth_policy_hash_truncate = 8
auth_policy_server_api_header = Authorization: Basic 
d2ZvcmNlOnVsdHJhLXNlY3JldC1zZWN1cmUtc2FmZQ

default_login_user = nobody
default_vsz_limit = 2 G
disable_plaintext_auth = no
imap_client_workarounds = tb-extra-mailbox-sep
imap_idle_notify_interval = 29 mins
listen = *
login_trusted_networks = 192.168.0.0/24
mail_attachment_hash = %{sha512}
mail_plugins = fts fts_solr acl zlib virtual
mail_prefetch_count = 10
mail_shared_explicit_inbox = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date index ihave duplicate mime foreverypart extracttext

mdbox_rotate_size = 20 M
namespace archives {
  list = children
  location = mdbox:/var/mail/%d/%n/Archives/mdbox
  mailbox Unsorted {
auto = no
special_use = \Archive
  }
  prefix = INBOX/Archives/
  separator = /
  subscriptions = no
  type = private
}
namespace inbox {
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox "Deleted Messages" {
auto = no
autoexpunge = 30 days
special_use = \Trash
  }
  mailbox Drafts {
auto = subscribe
special_use = \Drafts
  }
  mailbox Sent {
auto = subscribe
special_use = \Sent
  }
  mailbox "Sent Items" {
auto = no
special_use = \Sent
  }
  mailbox "Sent Messages" {
auto = no
special_use = \Sent
  }
  mailbox Trash {
auto = subscribe
autoexpunge = 30 days
special_use = \Trash
  }
  prefix = INBOX/
  separator = /
  subscriptions = no
  type = private
}
namespace lists {
  list = children
  location = mdbox:/var/mail/%d/%n/Lists/mdbox
  prefix = INBOX/Lists/
  separator = /
  subscriptions = no
  type = private
}
namespace subscriptions {
  hidden = yes
  list = no
  location =
  prefix =
  subscriptions = yes
}
namespace usershares {
  list = children
  location = sdbox:/var/mail/%%d/%%n/sdbox:NO-NOSELECT
  prefix = INBOX/shared/%%d/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace virtual {
  list = children
  location = virtual:/var/mail/%d/%n/virtual
  mailbox Flagged {
comment = All my flagged messages
special_use = \Flagged
  }
  prefix = INBOX/virtual/
  separator = /
  subscriptions = no
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  acl = vfile
  acl_shared_dict = file:/var/mail/%d/shared-mailboxes
  fts = solr
  fts_autoindex = yes
  fts_autoindex_exclude = \Trash
  fts_autoindex_exclude2 = \Junk
  fts_autoindex_exclude3 = \Spam
  fts_enforced = no
  fts_index_timeout = 20s
  fts_solr = url=http://127.0.0.1:8983/solr/dovecot/
  mailbox_alias_new = Sent Messages
  mailbox_alias_new2 = Sent Items
  mailbox_alias_new3 = Deleted Messages
  mailbox_alias_old = Sent
  mailbox_alias_old2 = Sent
  mailbox_alias_old3 = Trash
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
protocols = imap lmtp sieve
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0660
user = postfix
  }
  unix_listener auth-userdb {
group = mail
mode = 0600
user = vmail
  }
}
service dict {
  unix_listener dict {
group = mail
mode = 0660
user = vmail
  }
}
service imap-login {
  process_min_avail = 10
  service_count = 1
}
service imap-postlogin {
  executable = script-login /usr/local/etc/dovecot/post-login.sh
  user = $default_internal_user
}
service imap {
  executable = imap imap-postlogin
  vsz_limit = 4 G
}
service indexer-worker {
  process_limit = 3
}
service lmtp {
  process_min_avail = 5
  unix_listener 

Split storage & solr

2020-01-29 Thread Daniel Miller

Is it possible to setup the following:

1. Primary mailserver with SMTP solutions and Dovecot.
2. Primary server will store recent mails.
3. Secondary server for archival storage and Solr.

So I'm not looking for a distributed cluster - simply a splitting of 
designated functions. Clients would only be connecting to the primary 
server. I know Dovecot supports an alternate storage location - that's 
what made me think this was possible. What I'm more uncertain of is 
Solr.


My (probably flawed) thinking is to take advantage of a low-cost 
high-availability cloud server to provide primary email services. 
Dovecot, Postfix, and even ASSP don't have significant memory or 
processor requirements by themselves - especially for my smaller user 
base.  However - storage in the cloud can be at a premium. Therefore I'm 
thinking of continuing to self-host the archives.  And my own server has 
the raw power & memory to handle Solr easily.  What's triggering this is 
our ISP's quality has been deteriorating - and the alternates don't 
appear much better.


Initial visualization would have a VPN/SSH connection between the 
servers, and NFS mounting my storage to the cloud server for archives. 
If our connection drops - in theory "current" mail handling is 
unaffected.


My concerns/questions:
1. If Dovecot is unable to reach the remote Solr - upon re-connection 
will Solr be told about the new messages to index? Or do I need to setup 
a periodic re-scan?
2. Is there a "better" method of accessing the archive storage area than 
NFS? Either a different network file system or is there a way to do it 
with Dovecot directly?
3. What am I not taking into consideration in this setup that you think 
will be a problem?


---
Daniel

Re: Namespace problem? duplicated folders...

2020-01-23 Thread Daniel Miller
And I should have seen that from your first post but output from 
"doveconf" is definitive.


--
Daniel

On 1/23/2020 6:57 PM, Daniel Miller wrote:

Keeping in mind I'm 0% Dovecot certified and 100% certifiable...

I'm guessing you have your namespace definition separate from your 
special-use mailbox declarations. Possibly in 10-mail.conf and 
15-mailboxes.conf? Not that it matters.


First suggestion - that first namespace definition. Change it from:

 namespace {

to

 namespace inbox {

and see what happens.

--
Daniel


On 1/23/2020 4:40 PM, Gregory Sloop wrote:



DM> Interesting.

DM> While I don't use mbox (dbox or maildir) I wouldn't think that would
DM> matter. Your namespace layout looks nearly the same as my own. But to
DM> verify what's going on - post the output of "doveconf namespace".

DM> Possibly just reviewing that output may tell you enough - otherwise
DM> share it (fully) here.

DM> --
DM> Daniel

namespace {
  disabled = no
  hidden = no
  ignore_on_failure = no
  inbox = yes
  list = yes
  location =
  order = 0
  prefix = INBOX/
  separator = /
  subscriptions = yes
  type = private
}
namespace inbox {
  disabled = no
  hidden = no
  ignore_on_failure = no
  inbox = no
  list = yes
  location =
  mailbox Drafts {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Drafts
  }
  mailbox Junk {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Junk
  }
  mailbox Sent {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Sent
  }
  mailbox Trash {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Trash
  }
  order = 0
  prefix =
  separator =
  subscriptions = yes
  type = private
}








Re: Namespace problem? duplicated folders...

2020-01-23 Thread Daniel Miller

Keeping in mind I'm 0% Dovecot certified and 100% certifiable...

I'm guessing you have your namespace definition separate from your 
special-use mailbox declarations. Possibly in 10-mail.conf and 
15-mailboxes.conf? Not that it matters.


First suggestion - that first namespace definition. Change it from:

namespace {

to

namespace inbox {

and see what happens.

--
Daniel


On 1/23/2020 4:40 PM, Gregory Sloop wrote:



DM> Interesting.

DM> While I don't use mbox (dbox or maildir) I wouldn't think that would
DM> matter. Your namespace layout looks nearly the same as my own. But to
DM> verify what's going on - post the output of "doveconf namespace".

DM> Possibly just reviewing that output may tell you enough - otherwise
DM> share it (fully) here.

DM> --
DM> Daniel

namespace {
  disabled = no
  hidden = no
  ignore_on_failure = no
  inbox = yes
  list = yes
  location =
  order = 0
  prefix = INBOX/
  separator = /
  subscriptions = yes
  type = private
}
namespace inbox {
  disabled = no
  hidden = no
  ignore_on_failure = no
  inbox = no
  list = yes
  location =
  mailbox Drafts {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Drafts
  }
  mailbox Junk {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Junk
  }
  mailbox Sent {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Sent
  }
  mailbox Trash {
    auto = no
    autoexpunge = 0
    comment =
    driver =
    special_use = \Trash
  }
  order = 0
  prefix =
  separator =
  subscriptions = yes
  type = private
}





Re: Namespace problem? duplicated folders...

2020-01-23 Thread Daniel Miller

Interesting.

While I don't use mbox (dbox or maildir) I wouldn't think that would 
matter. Your namespace layout looks nearly the same as my own. But to 
verify what's going on - post the output of "doveconf namespace".


Possibly just reviewing that output may tell you enough - otherwise 
share it (fully) here.


--
Daniel

On 1/23/2020 12:46 PM, Gregory Sloop wrote:

Anyone?
A tip on where/what to look at, even?
I've read the docs on namespaces and I'm not sure how I could do this 
[or not do it, in this case] from the docs.


I realize I didn't include the version I'm running in the prior post too.
It is; 2.2.22




	I'm sure this is related to name-spaces, but for some reason [brain 
damage perhaps? :) ] I can't seem to figure out how to fix it.


In TBird, for example, I have a folder tree that looks like this

Inbox -|
Folder-A
Folder-B
Folder-C
Folder-A
Folder-B
Folder-C

And  Folder-A, Folder-B, Folder-C are the same folders, just shown 
twice. [In two different hierarchies.]


I'm using mbox files to store mail. [If that matters.]

Here's what I have in my conf files for namespace defs

namespace {
inbox = yes
#hidden = yes
prefix = INBOX/
separator = /
}

namespace inbox {
mailbox Drafts {
   special_use = \Drafts
}
mailbox Junk {
   special_use = \Junk
}
mailbox Trash {
   special_use = \Trash
}
mailbox Sent {
   special_use = \Sent
}
mailbox "Sent Messages" {
   special_use = \Sent
}
}


I've had this problem for literally years, and it's not been that big of 
a deal (mostly irritating) - and I've tried to "fix" it before without 
any success. But now it's causing some issues and I'd really like to get 
this nagging issue gone. Since it's been so long since I last tried to 
fix it, I really can't recall any of settings changes I made in an 
attempt to fix it. I'd guess this is a trivial issue, but if so, can 
someone point me in the right direction.


Thanks in advance!

-Greg



--
Gregory Sloop, Principal: Sloop Network & Computer Consulting
Voice: 503.251.0452 x121
EMail: gr...@sloop.net
http://www.sloop.net
---




Re: http API for IMAP

2019-11-15 Thread Daniel Miller via dovecot

On 11/13/2019 11:59 PM, Thomas Güttler via dovecot wrote:



Am 13.11.19 um 17:21 schrieb Ralph Seichter via dovecot:

* Thomas Güttler via dovecot:


AFAIK you can't send a link/URL to a mail in a shared folder to a friend.
Like "Hi  bob, she loves me. See this message from here https:/./"

Regards,
   Thomas Güttler



Actually - why not? It doesn't seem that difficult (at an abstract 
level) to implement such with available tools. PHP has built-in support 
for IMAP - so creating an interface that maps HTTP URIs to IMAP 
commands doesn't look too bad.


I might even suggest leveraging existing platforms like Nextcloud - 
instead of creating a whole new authentication, authorization, 
processing, and presentation framework you'd "simply" write a Nextcloud 
add-on that publishes IMAP folders/messages in whatever manner you 
prefer. Nextcloud already provides for file-sharing - so I see this as a 
good fit.


Daniel



Re: MariaDB database for users and passwords?

2019-11-09 Thread Daniel Miller via dovecot

There is some ambiguity in the setting names, however:

In the "upper" authentication config file (possibly 
conf.d/auth-sql.conf.ext) you define which "internal" driver the 
authentication system will use. These are...more of a top-level engine 
selection if you will - perhaps not what you'd consider a "true" driver.


In the "lower" authentication config file (like dovecot-sql.conf.ext), 
which is referenced by the 'args' setting in the userdb & passdb 
sections of the "upper" file, is where you explicitly specify the 
"true" driver, the actual database, and any field mappings.


If you're just getting things setup I suggest you check out:

http://postfixadmin.sourceforge.net/

Very clean & simple admin GUI for mail services. It includes 
documentation for setting up Dovecot.


Daniel


On 11/8/2019 11:12 PM, Aki Tuomi via dovecot wrote:



On 09/11/2019 05:44 Ken Wright via dovecot  wrote:

  
On 11/8/19 3:40 PM, Alexander Dalloz via dovecot wrote:

Am 08.11.2019 um 21:23 schrieb Ken Wright via dovecot:


On 11/8/19 3:14 PM, @lbutlr via dovecot wrote:

On 08 Nov 2019, at 11:56, Ken Wright  wrote:

Nov  8 13:28:53 grace dovecot: auth: Fatal: Unknown passdb driver ‘

You do not have Dovecot compiled with support for mysql'


But the dovecot-mysql package is installed!  Why can't it see that?



The driver is called "sql". See

https://doc.dovecot.org/configuration_manual/authentication/sql/

Alexander


Are you sure?  I looked at that page, and it says there are different
drivers for MySQL and PostgreSQL:  mysql and pgsql respectively.  I also
checked dovecot.conf, and there the driver is called "sql."

Ken


SQL is the **authentication** database, which has mysql **driver**. So in 
dovecot.conf you use sql, and in the config file for the sql authentication, 
you specify the driver. See 
https://github.com/dovecot/core/blob/master/doc/example-config/dovecot-sql.conf.ext#L32

Aki





SQL iterate_query

2019-10-24 Thread Daniel Miller via dovecot

I've been hunting some ghost mailboxes - and I *think* I found the source.

I use the complete email address as the username, and store such in a 
database. The storage structure is location=/var/mail/%d/%n. Not unusual 
I think.


So all I *should* see from "ls /var/mail" would be a list of domains. 
But I keep seeing empty mailboxes being created at this level. Having 
corrected a few other errors I *hope* I've found the last one - but if 
I'm right I believe the docs need updating:


The examples given for SQL userdb's include:
iterate_query = SELECT userid AS username, domain FROM users

So this means the username is returned for *both* the username and 
domain. Even if I'm wrong as to the cause of my own troubles this can't 
be right. It just can't. Or am I mistaken?


So, given that the complete address is used as the username I now use:
iterate_query = SELECT username FROM mailbox
(I'm using postfixadmin to administer this - and "mailbox" is the 
default user table name)


I believe the alternative would be an explicit:
iterate_query = SELECT username, domain AS username, domain FROM users

I don't *think* that would make any security difference for my use case 
so why add the extra processing?


I believe the documentation should be updated, or at least clarified, on 
this issue.

--
Daniel



subscription namespace

2019-10-24 Thread Daniel Miller via dovecot
The current documentation makes mention of a "special" subscription 
namespace. The example given:


namespace subscriptions {
  subscriptions = yes
  prefix = ""
  list = no
  hidden = yes
}

namespace inbox {
  inbox = yes
  location =
  subscriptions = no
[...]

results in a startup error as both namespaces have the same prefix. Was 
the intent for the "inbox" namespace to have an explicit "INBOX/" prefix?


If this is configured for an existing server that previously had no such 
"INBOX/" prefix namespace - will clients need to be reconfigured?


--
Daniel



Re: Still trying to get past authorization problems

2019-10-24 Thread Daniel Miller via dovecot

In conf.d/10-logging.conf, set:

auth_debug_passwords = yes
mail_debug = yes
verbose_ssl = yes

You might try setting them one-by-one as having all three will give a 
ton of info, and auth_debug_passwords will expose all passwords used 
while set, but those settings should show you what the problem is.
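
After changing them, a reload is enough - roughly:

doveadm reload
doveadm log find            # confirms which log files Dovecot is writing to
tail -f /var/log/mail.log   # path is an example; use whatever "log find" reports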


Daniel

On 10/24/2019 6:23 AM, Steve Matzura via dovecot wrote:

That's already in conf.d/10-auth.conf.


On 10/24/2019 1:31 AM, Aki Tuomi via dovecot wrote:

On 24.10.2019 6.18, Steve Matzura via dovecot wrote:

Got all the Postfix errors fixed but maybe one, so I don't think
that's involved in this mix any more.


I had a domain definition problem, got that sorted.


The accounts' logins are correct. I tried several from the shell, and
they let me in.


Here's the minus-n output, not very different from the first time I
posted it:


Try adding


auth_mechanisms = PLAIN LOGIN


and do not use [x] secure password in your MUA.

Aki







Re: Password issue

2019-10-10 Thread Daniel Miller via dovecot

On 10/9/2019 6:58 PM, @lbutlr via dovecot wrote:

On Oct 9, 2019, at 5:23 PM, @lbutlr  wrote:

Postfix logs "Client host rejected: Access denied” but as I said, other 
accounts can submit and there’s nothing special in the submission service in 
master.cf.


submission inet  n   -   n   -   -   smtpd
-o smtpd_tls_security_level=encrypt
-o smtpd_sasl_auth_enable=yes
-o smtpd_sasl_type=dovecot
-o smtpd_sasl_security_options=noanonymous
-o smtpd_sasl_path=private/auth
-o smtpd_milters=
-o milter_connect_macros=
-o milter_macro_daemon_name=ORIGINATING
-o syslog_name=postfix/submit
-o smtpd_client_restrictions=permit_sasl_authenticated,reject
-o smtpd_data_restrictions=
-o smtpd_relay_restrictions=permit_sasl_authenticated,reject
-o smtpd_helo_restrictions=
-o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
-o smtpd_recipient_restrictions=permit_sasl_authenticated,reject





I suggest you re-post this to the Postfix list, as this is a Postfix issue. 
However, before doing so, reference

http://www.postfix.org/DEBUG_README.html

To begin with, I'd suggest adding a "-v" to the smtpd command above, 
followed by a Postfix reload, and test sending again. If that doesn't 
reveal your issue re-post to the Postfix list, and include the output of 
"postconf -n". BTW - I'm assuming the duplicate 
smtpd_recipient_restrictions line at the end is an email artifact.
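
In other words, the first line of the quoted submission service in master.cf 
simply gains a trailing -v (nothing else changes):

submission inet  n   -   n   -   -   smtpd -v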


--
Daniel



Re: File manager or browser for IMAP?

2019-09-24 Thread Daniel Miller via dovecot
Not defending Thunderbird - but I don't understand your "taking hours to 
load my Dovecot IMAP". I suppose if you have sync enabled then the first 
time you connect to a large mailstore there would be an initial 
download. But...I always disable sync immediately upon setting up 
accounts in Thunderbird so that's never been an issue for me.


Being unable to prevent downloads or utilize server-side searches is why 
some other clients have been disappointing for me - like EM Client and 
Mailbird.


Daniel

On 9/23/2019 5:36 PM, Steve Litt via dovecot wrote:

Thunderbird is an absolute pig, taking hours to load my Dovecot IMAP.
Claws-mail is good, but I have some problems with it. Alpine appears
not to be ready for prime time to act as a window into IMAP. Same with
the rest I've tried.

SteveT

On Tue, 24 Sep 2019 00:21:33 +0200
Ionel Spanachi  wrote:


Why not use thunderbird (or any other IMAP talking client)? :-)


Ionel

On 24.09.19 00:14, Steve Litt via dovecot wrote:

Hi all,

I could really use a file manager or browser to browse my Dovecot
IMAP. Ideally it would have hotkeys to move, copy, delete and send.
The send part needn't be coded: Just a call to a shellscript which
can handle the send the way it's locally the most convenient.

Anyone know of such a file manager or browser for IMAP?

SteveT


Steve Litt
Author: The Key to Everyday Excellence
http://www.troubleshooters.com/key
Twitter: http://www.twitter.com/stevelitt
  






Re: fts_solr: Error: fts_solr: received invalid uid '0'

2019-09-19 Thread Daniel Miller via dovecot

On 9/19/2019 6:28 AM, Fabian via dovecot wrote:


Thanks for your response! No, we are not limiting Solr's memory usage. After your tip, we've 
also upgraded the memory to 32GB. But the behavior remains the same. I have also already 
considered that Dovecot may index the UID incorrectly. But if I search the index directly, I 
don't find any entries with UID = 0, so I have no idea where this "fts_solr: received 
invalid uid '0'" message might come from.

In our test environment we actually indexed only one user. The user's mailbox 
contains about 100.000 mails. This means that there is not really much data in 
the index.

Are there any other hints or tips regarding this „invalid uid ‚0‘"-message?

Logfile:



Sep 16 08:35:27 mailservertest dovecot: imap(user01)<30204><+IjNzqWS2s2sEQoK>: 
Debug: http-client[1]: peer 172.17.10.12:8983: Creating 1 new connections to handle 
requests (already 0 usable, connecting t$


Your post has truncated the lines (right margin). Re-post with the full 
lines.


--
Daniel



Re: Imaptest stall

2019-09-17 Thread Daniel Miller via dovecot
If you're just speed testing for writing probably sdbox or maildir would 
be the fastest.


Daniel

On 9/17/2019 1:09 PM, Marc Roos via dovecot wrote:


Yes, dovecot is showing the inserted messages until the stall. Looks like
it is an issue with imaptest because I am able to empty the mailbox
again via thunderbird. I am comparing write tests to different backends.



-Original Message-
From: Daniel Miller [mailto:dmil...@amfes.com]
Sent: dinsdag 17 september 2019 22:06
To: Marc Roos; dovecot
Subject: Re: Imaptest stall

On 9/17/2019 12:58 AM, Marc Roos via dovecot wrote:


I have been testing with imaptest and getting 'stalls', I tried even
building from source and static. Even running it on the same host.
Anyone know what I could be doing wrong?

[@~]# ./imaptest - append=100,0 logout=0 host=192.168.10.44 port=143
user=test2 pass= seed=100 secs=240 clients=1 mbox=64kb.mbox
box=INBOX/test


What are you trying to test? Do the Dovecot logs show any connections?


--
Daniel








Re: Imaptest stall

2019-09-17 Thread Daniel Miller via dovecot

On 9/17/2019 12:58 AM, Marc Roos via dovecot wrote:


I have been testing with imaptest and getting 'stalls', I tried even
building from source and static. Even running it on the same host.
Anyone know what I could be doing wrong?

[@~]# ./imaptest - append=100,0 logout=0 host=192.168.10.44 port=143
user=test2 pass= seed=100 secs=240 clients=1 mbox=64kb.mbox
box=INBOX/test


What are you trying to test? Do the Dovecot logs show any connections?


--
Daniel



Namespace overlap

2019-09-17 Thread Daniel Miller via dovecot

Given an existing default namespace:

namespace inbox {
  type = private
  separator = /
  prefix =
  location = sdbox:/var/mail/%d/%n/sdbox
  inbox = yes
  hidden = no
  list = yes
  subscriptions = yes
}

And mailboxes like:
INBOX
INBOX/Archives
INBOX/Archives/2018

if I then define a new namespace:

namespace archives {
  type = private
  separator = /
  prefix = Archives/
  location = mdbox:/var/mail/%d/%n/Archives/mdbox
  subscriptions = no
  list = children
}

What will happen to the previous existing mailboxes & mails? Will they 
simply be "masked" by the new namespace and remain pending other 
operations? Or would they be moved/deleted?


If they remain - is it possible to refer to the old mailboxes via either 
IMAP or doveadm?


--
Daniel



Re: fts_solr: Error: fts_solr: received invalid uid '0'

2019-09-16 Thread Daniel Miller via dovecot

On 9/13/2019 1:21 AM, Fabian via dovecot wrote:

Hi,

we are trying to add full text search functionality with Solr to our Dovecot 
setup. Our versions:
OS: Debian 9
Tried versions:
- Dovecot 2.2.7 with Solr 3.6
- Dovecot 2.3.4 with Solr 8.2
(2.2.7 from the official Debian repository, 2.3.4 from backports)

Search is working perfectly smoothly most of the time. But sometimes the following 
message appears in mail.err:
dovecot: imap(username)<16189><UxYWLVuSYMasEQoK>: Error: fts_solr: 
received invalid uid '0'

If this error occurs, our webmail frontend usually delivers a timeout. 
Sometimes the search just takes really long.

Are  there any ideas why this error occurs? We are not able to reproduce the 
error in such a way that it would always be reproducible. However, we can 
reproduce the behavior in some form over and over again - but we do not know 
exactly what is decisive.



Are you limiting Solr's memory usage? How much available memory is on 
your server?


To shortcut the conversation - if you don't have at least 16G of *free* 
RAM it's time to upgrade. My own server has 32G installed - I used to 
have 16G. My own Solr problems basically disappeared after adding RAM. 
And I only serve a few users - my own mailstore is the largest as I keep 
most of my mails. If you're serving 20+ users you'd probably benefit 
from doubling to at least 64G.
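
If Solr is memory-starved in the meantime, at least give it an explicit 
heap - a sketch assuming a tarball install whose include file is 
/opt/solr/bin/solr.in.sh (path and size are examples, not a recommendation):

SOLR_HEAP="8g"   # sets both -Xms and -Xmx for the Solr JVM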


--
Daniel



doveadm mailbox list

2019-09-08 Thread Daniel Miller via dovecot

It's quite likely I'm doing it wrong, but...

Given a valid mailbox...

doveadm mailbox list -u  realmb
returns "realmb"

doveadm mailbox list -u  real*
returns "realmb"

Seems reasonable. Now, with a non-existent mailbox...
doveadm mailbox list -u  bogus
returns "bogus"

doveadm mailbox list -u  bogus*
returns ""

Is this a bug or correct behavior?

--
Daniel



Namespace structure

2019-09-05 Thread Daniel Miller via dovecot
Is the following "legal" for Dovecot? And...is this separation 
recommended or a bad idea? Particularly I'm asking about the "archives" 
namespace - I haven't actually implemented this yet and I'm checking 
before I break something.


10-mail.conf

# Primary private namespace
# Using sdbox for storage
namespace inbox {
  type = private
  separator = /
  prefix =
  location = sdbox:/var/mail/%d/%n/sdbox
  inbox = yes
  hidden = no
  list = yes
  subscriptions = yes
}

# For long-term archival
namespace archives {
  type = private
  separator = /
  prefix = Archives/
  location = mdbox:/var/mail/%d/%n/Archives/mdbox
  subscriptions = no
  list = children
}

# Shared mailboxes
mail_shared_explicit_inbox = yes
namespace usershares {
  type = shared
  separator = /
  prefix = shared/%%n/
  location = sdbox:/var/mail/%%d/%%n/sdbox
  subscriptions = no
  list = children
}

# Virtual mailboxes - for server-side searches
namespace virtual {
  prefix = virtual/
  separator = /
  location = virtual:/var/mail/%d/%n/virtual
  subscriptions = no
  list = children
}

--
Daniel



Re: Some questions

2019-07-09 Thread Daniel Miller via dovecot



On 7/9/2019 6:17 AM, Jérôme Bardot via dovecot wrote:

Hello,

This is my first email here.
I want to understand well how dovecot integrates with ldap in a
postfix/dovecot/ldap setup.
I use a debian server.


Perfectly!



More specifically, what does dovecot need in ldap to work?
I saw we can use several "modes" related to virtual domains, etc. To
start, I only need one domain with several addresses.
I currently use fusiondirectory to manage my ldap users. I guess I
can use that schema to auto-create users' email addresses
(name.firstn...@domain.tld for instance)?
I also want to set up some aliases and shared directories based on ldap
groups/roles - can I do it?

Another question: can we have two domain names, imap.domain.tld
&& smtp.domain.tld?


Yes.

Dovecot & Postfix have no "hard" schema, or database definition, or 
particular fields. You need to create map files which tell each server 
how to use the information from LDAP (or any other database). Each 
server (Postfix & Dovecot) has its own configuration, separate from the 
other's. So you need to start with one or the other. 
Postfix questions should be asked on the Postfix list.
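
On the Dovecot side the map file is typically dovecot-ldap.conf.ext, 
referenced from the passdb/userdb blocks. A minimal sketch - the host, base 
DN, object class, and attribute names are assumptions you will adapt to your 
own tree:

passdb {
  driver = ldap
  args = /etc/dovecot/dovecot-ldap.conf.ext
}

# dovecot-ldap.conf.ext
hosts = ldap.example.com
auth_bind = yes
base = ou=people,dc=example,dc=tld
user_filter = (&(objectClass=inetOrgPerson)(mail=%u))
pass_filter = (&(objectClass=inetOrgPerson)(mail=%u))
user_attrs = =home=/var/mail/%d/%n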


Everything you asked for above is easily doable - just start with one 
step at a time. Ask specific questions when you get stuck.


--
Daniel



Re: FTS Xapian

2019-06-09 Thread Daniel Miller via dovecot

Yes, latest git version.

The logs show (as I read them) returned results - yet nothing shows in the 
client. The logs look the same (with different numbers) when querying 
"regular" folders - but results are shown in clients.





--
Daniel
On June 6, 2019 12:16:08 AM Joan Moreau  wrote:

Hi
Are you using the latest git version ?
Which part exactly of your logs relates to "virtual folders do not work"?



On 2019-06-05 13:08, Daniel Miller via dovecot wrote:

Logs:

Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_f2857830c70c844e2f1d3bc41c5f
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 0 results in 1 ms
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_78544714f3f1ae5b9b0d3bda95b5
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 53 results in 
40 ms
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_bdcb8e2172fadf4db50b3bc41c5f
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 0 results in 
12 ms
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_be25c00241fedf4de00b3bc41c5f
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 3 results in 
32 ms
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_a7e75820d9fadf4dd90b3bc41c5f
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 0 results in 
11 ms
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_6fa78f2738cbdf4d007b3bc41c5f
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 0 results in 
21 ms
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_6ea78f2738cbdf4d007b3bc41c5f
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:25 bubba dovecot: 
imap(dmil...@amfes

Re: FTS Xapian

2019-06-05 Thread Daniel Miller via dovecot
ot;dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 15 results in 
39 ms
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_d5359c092c8b584ee25d3bc41c5f
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 9 results in 
37 ms
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_d6359c092c8b584ee25d3bc41c5f
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 49 results in 
35 ms
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_e84b2f0bed746259565f3bda95b5
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 11 results in 
18 ms
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_e8a36d2782404c56de4b9db5accb
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 80 results in 
54 ms
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_701a2a2d6848815c750e9db5accb
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: FLAG=AND
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= 
(subject:"dovecot" OR from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR 
bcc:"dovecot" OR message-id:"dovecot" OR body:"dovecot")
Jun 5 06:02:29 bubba dovecot: 
imap(dmil...@amfes.com)<25877>: FTS Xapian: 54 results in 
43 ms


doveconf -n:
# 2.3.6 (7eab80676): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.6 (92dc263a)
# OS: Linux 4.15.0-50-generic x86_64 Ubuntu 18.04.2 LTS
# Hostname: bubba.amfes.lan
auth_cache_size = 4 k
auth_master_user_separator = *
auth_mechanisms = plain login
default_login_user = nobody
default_vsz_limit = 2 G
dict {
 acl = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf.ext
}
disable_plaintext_auth = no
imap_client_workarounds = tb-extra-mailbox-sep
imap_idle_notify_interval = 29 mins
listen = *
login_trusted_networks = 192.168.0.0/24
mail_attachment_hash = %{sha512}
mail_plugins = fts fts_xapian acl zlib virtual
mail_prefetch_count = 10
mail_shared_explicit_inbox = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character 
vacation subaddress comparator-i;ascii-numeric relational regex imap4flags 
copy include variables body enotify environment mailbox date index ihave 
duplicate mime foreverypart extracttext

namespace inbox {
 hidden = no
 inbox = yes
 list = yes
 location =
 mailbox "Deleted Messages" {
   auto = no
   autoexpunge = 30 days
   special_use = \Trash
 }
 mailbox Drafts {
   auto = subscribe
   special_use = \Drafts
 }
 mailbox INBOX/Archives {
   auto = no
   special_use = \Archive
 }
 mailbox Sent {
   auto = subscribe
   special_use = \Sent
 }
 mailbox "Sent Items" {
   auto = no
   special_use = \Sent
 }
 mailbox "Sent Messages" {
   auto = no
   special_use = \Sent
 }
 mailbox Trash {
   auto = subscribe
   autoexpunge = 30 days
   special_use = \Trash
 }
 mailbox virtual/Flagged {
   comment = All my flagged messages
   special_use = \Flagged
 }
 prefix =
 separator = /
 subscriptions = yes
 type = private
}
namespace usershares {
 list = children
 location = sdbox:/var/mail/%%d/%%n/sdbox
 prefix = shared/%%n/
 separator = /
 subscriptions = no
 type = shared
}
namespace virtual {
 list = children
 location = virtual:/var/mail/%d/%n/virtual
 prefix = virtual/
 separator = /
 subscriptions = no
}
passdb {
 args = /usr/local/etc/dovecot/master-users
 driver = passwd-file
 master = yes
}
passdb {
 args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
 driver = sql
}
plugin {
 acl = vfile:/usr/local/etc/dovecot/global-acls:cache_secs=300
 acl_shared_dict = proxy::acl
 fts = xapian
 fts_autoindex = yes
 fts_autoindex_exclude = \Trash
 fts_autoindex_exclude2 = \Junk
 fts_autoindex_exclude3 = \Spam
 fts_enforced = no
 fts_index_timeout = 20
 fts_xapian = partial=2 full=20
 mailbox_alias_new = Sent Messages
 mailbox_alias_new2 = Sent Items
 mailbox_alias_new3 = Deleted Messages
 mailbox_alias_old = Sent
 mailbox_alias_old2 = Sent
 mailbox_alias_old3 = Trash
 sieve = file:~/sieve;active=~/.dovecot.sieve
 vsz_limit = 4G
}
protocols = imap lmtp sieve
service auth {
 unix_listener /var/spool/postfix/private/auth {
   group = postfix
   mode = 0660
   user = postfix
 }
 unix_listener auth-userdb {
   group = mail
   mode = 0600
   user = vmail
 }
}
service dict {
 unix_listener dict {
   group = mail
   mode = 0660
   user = vmail
 }
}
service imap-login {
 process_min_avail = 10
 service_count = 1
}
service imap-postlogin {
 executable = script-login /usr/local/etc/dovecot/post-login.sh
 user = $default_internal_user
}
service imap {
 executable = imap imap-postlogin
}
service indexer-worker {
 process_limit = 3
}
service lmtp {
 process_min_avail = 5
 unix_listener /var/spool/postfix/private/dovecot-lmtp {
   group = mail
   mode = 0666
   user = vmail
 }
}
service managesieve-login {
 inet_listener sieve {
   port = 4190
 }
 inet_listener sieve_deprecated {
   port = 2000
 }
 process_min_avail = 0
 service_count = 1
}
ssl_cert = 
protocol lmtp {
  mail_plugins = fts fts_xapian acl zlib virtual sieve
  postmaster_address = postmas...@amfes.com
}
protocol lda {
 mail_plugins = fts fts_xapian acl zlib virtual sieve}
protocol imap {
 mail_max_userip_connections = 50
 mail_plugins = fts fts_xapian acl zlib virtual imap_acl imap_zlib 
mailbox_alias

}
local 192.168.0.2 {
  protocol imap {
    ssl_cert = 
  }
}

On June 4, 2019 10:03:47 PM Joan Moreau via dovecot  wrote:
wrote:



Hi

Can you post your dovecot conf file and the subset of the log files related
to the issue ?

thanks


On June 5, 2019 9:29:13 AM Daniel Miller via dovecot 
wrote:


For my primary namespace this is working fine - thanks to the developers!


It also appears to work great for shared folders as well.


But my virtual folders aren't returning results - at least not to the
client. The logs show FTS Xapian opening several DB files and getting
results - but nothing is being returned to client. Is this a config
issue on my side or is this a current limitation of the plugin?
--
Daniel






FTS Xapian

2019-06-04 Thread Daniel Miller via dovecot

For my primary namespace this is working fine - thanks to the developers!

It also appears to work great for shared folders as well.

But my virtual folders aren't returning results - at least not to the 
client. The logs show FTS Xapian opening several DB files and getting 
results - but nothing is being returned to client. Is this a config 
issue on my side or is this a current limitation of the plugin?

--
Daniel


Re: dynamic virtual mailboxes?

2019-05-06 Thread Daniel Miller via dovecot

On 5/5/2019 10:50 PM, MRob via dovecot wrote:

Thank you for helping but-

Again, Dovecot terminology here, mailbox means 'folder' not the whole 
account


Dynamic-
https://www.mail-archive.com/dovecot@dovecot.org/msg71091.html


Ahh...I understand what you want now. Yes it would be nice - no that 
ability does not exist.


--
Daniel


Re: dynamic virtual mailboxes?

2019-05-04 Thread Daniel Miller via dovecot
On 5/3/2019 11:22 AM, MRob via dovecot wrote:

That is not dynamically generated and it isn't limited to just one
mailbox (dovecot terminology here is confusing, normally a mailbox is a
mail account (user), but in this context "mailbox" I guess means "folder",
which is how I am using it, as it is used in 15-mailboxes.conf)


What does "dynamically generated" mean? Are you asking to create virtual 
mailboxes via your mail client? If so - then there's no native method.


The examples I gave will indeed give you virtual "folders" which 
respectively contain ALL your flagged (regardless of seen status), 
unseen (regardless of flagged status), or only unseen flagged messages 
from all folders. And they will auto-update.


Virtual mailboxes are defined per user - so indeed the examples I gave 
will only exist for the user(s) that have such files and will only apply 
to their folders.


--
Daniel


Re: Understanding virtual mailboxes (examples in 15-mailboxes.conf)

2019-05-04 Thread Daniel Miller via dovecot

On 5/3/2019 11:18 AM, MRob via dovecot wrote:

Thank you, but the question is about the example mailbox settings in 
15-mailboxes.conf.
I found I can put those mailbox definitions in the new virtual 
namespace; I'm still not sure if they would work if I kept them in the inbox 
namespace. Maybe the documentation in the example file can include 
clarification.


No - you need to keep them in a separate namespace.

--
Daniel


Re: dynamic virtual mailboxes?

2019-05-03 Thread Daniel Miller via dovecot

On 5/2/2019 12:47 PM, MRob via dovecot wrote:
hi, I spent time learning about virtual mailboxes. Is there some way to 
create dynamic virtual mailboxes? I mean, when I look at a mailbox, I 
want to see only unread messages or flagged messages in that mailbox.


contents of /var/mail/mydomain/myuser/virtual/Flagged/dovecot-virtual:
*
   flagged


contents of /var/mail/mydomain/myuser/virtual/Unread/dovecot-virtual:
*
   unseen


contents of 
/var/mail/mydomain/myuser/virtual/Unread-Flagged/dovecot-virtual:

*
   unseen flagged


--
Daniel


Re: Understanding virtual mailboxes (examples in 15-mailboxes.conf)

2019-05-01 Thread Daniel Miller via dovecot

On 4/30/2019 11:13 PM, MRob via dovecot wrote:

The examples in 15-mailboxes.conf

# If you have a virtual "All messages" mailbox:
   #mailbox virtual/All {
   #  special_use = \All
   #  comment = All my messages
   #}

   # If you have a virtual "Flagged" mailbox:
   #mailbox virtual/Flagged {
   #  special_use = \Flagged
   #  comment = All my flagged messages
#}

They seem to reference some kind of virtual mailbox setup that doesn't 
compare to the docs for the "virtual" plugin. That plugin says we should 
create a separate namespace instead, like "namespace virtual" and put 
files representing the virtual folders into user maildirs. What if we 
use mdbox? Add the files to the user/mailboxes directory, I would guess.


Is there a way to use the mailbox examples in the inbox namespace in the 
default config? Does it use some other method different from the virtual 
plugin? maybe more config hints for those examples would be helpful.


Thank you.



You will indeed need to setup a virtual namespace. The virtual mailboxes 
will exist in a folder alongside but separate from your primary 
mailstore. If your default namespace is:


namespace inbox {
  type = private
  separator = /
  prefix =
  location = maildir:/var/mail/%d/%n/Maildir
  inbox = yes
  hidden = no
  list = yes
  subscriptions = yes
}

then add

namespace virtual {
  prefix = virtual/
  separator = /
  location = virtual:/var/mail/%d/%n/virtual
  subscriptions = no
  list = children
}

So for user dan...@somedomain.org there will exist:
  /var/mail/somedomain.org/daniel/Maildir
  /var/mail/somedomain.org/daniel/virtual

And then you'll need to create the virtual definition files for each 
user's mailbox as needed.
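
For instance, a minimal "all flagged messages" definition (the folder name is 
just an example) would live at 
/var/mail/somedomain.org/daniel/virtual/Flagged/dovecot-virtual and contain:

*
  flagged

The first line lists the source mailboxes ("*" means all of them) and the 
indented line is the IMAP SEARCH expression that selects the messages.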


--
Daniel


Re: Sis to deduplicate attachments does not work?

2019-04-24 Thread Daniel Miller via dovecot

On April 23, 2019 10:54:38 PM luckydog xf  wrote:
Is it worthwhile to use dbox? Seeing from 
http://www.linuxmail.info/mbox-maildir-mail-storage-formats/ it may cause 
file locking issues and be easy to corrupt.
As with everything - it depends. You're asking me so these are *my* 
opinions - and I do not claim to be anything more than a hobbyist/tinkerer 
when the comes to this.


mbox has potential use for long term read-only archives - I see no reason 
to use it for live mailboxes.


maildir is undoubtedly the least susceptible to corruption. It's also the 
slowest format for reading. How slow is "slow" depends on your hardware - 
it may be imperceptible with enough RAM and SSD's - or it may result in 
user complaints with large mailboxes.


dbox is Dovecot's preferred format. I know Timo has put a lot of effort 
into it. sdbox is similar to maildir in that each mail is a separate file. 
mdbox significantly reduces the number of files which can make file-based 
backups faster. Both dbox formats are dependent on their index files.


If you've got good hardware, including a proper UPS, I'd recommend dbox (my 
server is presently using sdbox). With large mailboxes and file-based 
backups you'll benefit from mdbox. When reliability is the #1 concern above 
anything else - use maildir. Depending on your use SIS can have significant 
impact on storage requirements - but storage these days is relatively cheap.
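
For reference, SIS on a dbox store is switched on with the attachment 
settings - a sketch, with example paths and threshold:

mail_attachment_dir = /var/mail/attachments
mail_attachment_fs = sis posix
mail_attachment_min_size = 64k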


I haven't seen much feedback from users actively using SIS - I'd love to 
hear from high traffic sites with SIS experience to know if the corruption 
issues have been resolved. In my case there was at least a 30% reduction in 
space but I had too many errors - admittedly it's been a couple years since 
I last tried it.


--
Daniel


Re: Sis to deduplicate attachments does not work?

2019-04-23 Thread Daniel Miller via dovecot

On 4/23/2019 1:53 AM, luckydog xf via dovecot wrote:

Hi, I use sis to deduplicate attachments, here is my `doveconf -n`

[...]
mail_location = maildir:/var/mail/%n/Maildir
[...]


SIS is a function of dbox.  You're using Maildir.

--
Daniel



Re: Dovecot and FTS experiment

2019-01-29 Thread Daniel Miller via dovecot

On 1/29/2019 9:15 AM, Tomasz Nowak wrote:

Hello,

I'm trying to experiment with Dovecot and Solr server.
I have >30k email addresses that I want to index to speed up searching 
and save IOPS on mail servers.
For now - I'm doing some experiments and I'm testing how it is 
working. I'm thinking about adding one additional server with Solr and
configuring all mail servers to use that server.

I have some questions.
1. I have 15 mail servers. Will it be good if I add a new server with 
Solr and use it on all Dovecot servers? Or maybe I should install Solr 
on all mail servers?


You need to start somewhere. If you've never played with Solr before I 
suggest you start with one and get it working before you explore 
"sharding". When you're ready for that you should consult the solr 
mailing list. The importance of enough RAM for Solr cannot be overstated.



2. I notice - I have a mail account with 3GB of mail. The index files in 
its maildir take 5MB. After indexing the mailbox in Solr, the index files take 
15MB. What changes in those files? FTS indexing adds something to those 
files - but what?


What mail storage format are you using?  dbox?

Thinking...I believe that Dovecot records which mails have been reported 
to the FTS.  That may help account for the increased size.



--
Daniel



Re: Rsync to backup dbox with SIS

2019-01-25 Thread Daniel Miller via dovecot


On 1/25/2019 1:33 PM, ash-dove...@comtek.co.uk wrote:


We will be deploying a replacement Dovecot server soon, and we are 
planning to use maildir for the primary storage, but with an archive 
namespace using mdbox (or perhaps sdbox), and SIS.


Our backup servers and (luke)warm spare server need to obtain full 
copies of the mail store. For the maildirs I know I can simply use 
rsync (we already use it here).


I'm a little wary of using rsync with mdbox and SIS though.


Significantly limited knowledge opinions below:

Probably not the answer you want - but I would strongly suggest using 
Dovecot replication.  Dovecot replication Just Works - so don't reinvent 
the wheel when Timo provided such a polished tool already.  And based on 
my previous SIS experience - while dbox is nice I would suggest avoiding 
SIS until there are reports of more development.  Sdbox will be solid - 
which is what you want for an archive - though maildir would be the 
safest.  Archives don't need to be rapid-access - they need to be 
dependable.  SIS is wonderful for space saving - but until there's more 
safety checks built-in I'd suggest avoiding it for production backups.  
Drives are cheap - lost data, lost time, lost hair, lost sanity...is not.
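
For reference, the ssh flavour of replication really does boil down to a 
couple of settings on each side (user and hostname below are placeholders; 
the wiki page covers the aggregator/replicator services):

mail_plugins = $mail_plugins notify replication
plugin {
  mail_replica = remote:vmail@backupbox.example.com
}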


I'm aware of "doveadm backup", and (although it currently throws up a 
few errors) it seems like it might be a valid solution for our warm 
spare server. Our backup servers, on the other hand, aren't supposed 
to be visible to the production machines, with the exception of the 
backup machine sshing in to do an rsync each night. We can't install 
dovecot on them.


The backups don't have to be "visible" to other machines - don't even 
have to be running IMAP/POP services (I think).  And the replication 
command is run - via ssh (see https://wiki.dovecot.org/Replication) - so 
what's the problem?
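
For scale, the core of a replication setup is only a handful of lines - a 
sketch, with a made-up backup host and the vmail user standing in for 
whatever your setup uses:

mail_plugins = $mail_plugins notify replication
service replicator {
  process_min_avail = 1
}
plugin {
  mail_replica = remote:vmail@backup.example.com
}

The backup box just needs Dovecot installed and doveadm reachable over ssh 
for that user.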


--

Daniel



Re: Solr

2019-01-05 Thread Daniel Miller via dovecot

On 1/5/2019 9:58 AM, Tanstaafl wrote:

Thanks Daniel...

So, as one who has no experience of the benefit of either...

How does this compare with Squat? Meaning, Is it exponentially faster?
Twice as fast?


It's been many years since I last had a Squat setup - but that's my memory.

--
Daniel



Re: Solr

2019-01-03 Thread Daniel Miller via dovecot

On 1/3/2019 10:56 AM, Tanstaafl wrote:

On 12/21/2018, 11:19:42 AM, Daniel Miller via dovecot
 wrote:

There is a *huge* difference between a functional Solr setup & squat

Interesting. Care to elaborate?


This is one of those things that has to be experienced to be 
understood.  When you can perform an FTS search across (pause while I 
check current stats...):


du -c -h /var/mail        136G

Solr numDocs:        520102

and using any IMAP client that supports server-side searches (like 
Thunderbird & AquaMail) the results are basically instantaneous...it's 
worth the effort.  And that's searching a Dovecot virtual folder defined 
as "* all", including all my archives, all my list subscriptions, and 
all the shared Inbox/Sent folders from my other users.
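
If you're curious what that looks like at the protocol level, it's a plain 
server-side SEARCH - the folder name and result UIDs below are made up, 
but the shape is right:

a SELECT virtual/All
b SEARCH TEXT "quarterly invoice"
* SEARCH 1022 4817 150993
b OK Search completed

Dovecot hands the TEXT criteria to Solr, which is why the answer comes 
back in a blink even across half a million messages.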


But I certainly wish it was easier to setup.

--
Daniel



Re: Solr

2019-01-03 Thread Daniel Miller via dovecot

On 1/1/2019 3:49 PM, Joan Moreau via dovecot wrote:


Hi

Solr is a standard package in ArchLinux. ("pacman -S solr") . the 
systemd installation script is included (and it is launching 
/opt/solr/bin/solr.in.sh)


Instance : sudo -u solr /opt/solr/bin/solr create -c dovecot -> this 
creates a separate folder with default solrconfig.xml, schema.xml, etc..


I made a symlink of the data folder to a second drive (ext4) much bigger

I'm using that nasty word *should*...in that the above installation 
*should* yield working results.  But...since I don't use Arch and have 
no insight into it I suggest downloading a binary tarball from the Solr 
site and do a clean install.  It may behave identically...or maybe 
something will be different.


--
Daniel



Re: Solr

2019-01-03 Thread Daniel Miller via dovecot

On 1/2/2019 12:59 AM, M. Balridge wrote:

So, without rancour or antipathy, I ask the entire list: has ANYONE gotten a
Dovecot/solr-fts-plugin setup to work that provides as a BASELINE, all of the
following functionality:

1) The ability to search for a string within any of the structured fields
(from/subject) that returns correct results?


Yes.




2) The ability to search for any string within the BODY of emails, including
the MIME attachment boundaries?


Yes.




3) The ability to do "ranging" searches for structures within emails that
decompose to "dates" or other simple-numeric data?


Dunno - I don't think I've needed that and I'm not sure how to do it.  
My mail clients are Thunderbird and AquaMail (on Android). If you'll 
give me either the desired Thunderbird steps or telnet-based IMAP 
command I'm happy to test.
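
My guess - untested, which is why I'm offering - is that it boils down to 
the standard IMAP date criteria, something like:

a SELECT INBOX
b SEARCH SINCE 1-Jan-2018 BEFORE 1-Feb-2018 SUBJECT "invoice"

Whether fts-solr services the date range itself or leaves it to Dovecot's 
own indexes is exactly what I'd want to test.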





OPTIONALLY, and this is probably way outside of the scope of the above,
despite the fact that it's listed as a "selling point" of SOLR versus other
full text search engines:

4) The ability to do searches against any attachments that are able to be
post-processed and hyper-indexed by SOLR+Tika?


Haven't tried.



SOLR seems to have "brand cachet", so presumably it actually works (for 
somebody).


It works - just sometimes needs more effort to setup than it should.



Dovecot has not a little "brand cachet", and for me, I have innate faith and
trust in Timo and his software.


I think we're all in agreement here.



But please, level with us faithful users.  Does this morass of Java B.S.
actually work, and if not, please just deprecate and remove this moribund
software, and stop trying to bury the only FTS plugin many of us HAVE actually
gotten to work.  (Pretty please?)

I respect that Messr. Moreau has made an earnest effort to get this JAVA B.S.
to actually work, as I have.

He persevered where I'd given up. He's vocal about it, and now I'm chiming in
that this ornate collection of switchblades only cuts those who try to use them.


Short answer - it actually works.  Longer answer - I've gone through a 
hate/love/hate/like relationship with Solr myself.  The transition from 
v3 to v4 was a major headache - and I gave up for a while.  But versions 
6 & 7 have been pretty good for me.  I'm neither a Dovecot nor a Solr 
developer - just enough of a fiddler to get them working to fulfill my 
own needs.


If my unreliable memory serves I believe the Dovecot fts-solr plugin 
hasn't needed to change much (I recall one significant change required 
when Solr changed its protocol - I think an XML/JSON thing).  So having 
a stable interface let's Timo & Co. forget about on-going FTS 
development and continue focusing on things not provided by other 
tools.  Hopefully they'll revisit SIS...


I recall reading something about the Lucene library (which Squat & Solr 
are based on) and again my memory is the C version(s) weren't getting 
maintained as well as might be desired.  I think having the Solr/Lucene 
team focusing on Java development was another consideration in moving away 
from Dovecot's squat - but I could be totally off here.


Based on the errors reported by Joan I believe that system's problems 
are due to configuration - either Solr, Dovecot, or both.  They don't 
sound like Java related issues (which are a *major* pain to deal 
with!).  I've provided a copy of what is a working configuration *for 
me*.  I'm happy to continue helping as best I can - and if Joan, you, or 
anyone else would like my aid I'll do my best.  If you're crazy 
I-mean-trusting enough to have me SSH or remote view to your system I'm 
willing to take a look.  I've had enough people help me over the years 
for various packages that I'd like to pay it forward where I can.


--
Daniel



Re: Solr

2019-01-03 Thread Daniel Miller via dovecot
I'm running 7.5.0.  The solrconfig.xml file is what I've modified over 
time - I haven't started one from scratch for a while but perhaps I'll try.


Have you tried using the complete config that I sent you?  With *all* 
the files I included - and *none* of yours?


--

Daniel

On 1/1/2019 4:12 PM, Joan Moreau wrote:


The real main difference seems to come from "diffconfig.xml"

When I put yours, Solr deletes (!) schema.xml and creates a 
"managed-schema", and starts complaining about useless types (tdates, 
booleans, etc.) that are not needed for mail fields


When I put mine (from standard distribution of Arch), it keeps things 
as they are (yeah!), does not complain about those useless types, and 
starts up properly.


I attach my diffconfig


But these are the configurations that one should adjust as per his/her 
own use.


The main problem is: after some time of indexing from Dovecot, 
Dovecot returns errors (invalid SID, etc.) and Solr returns "out of 
range indexes" errors




On 2019-01-02 07:49, Joan Moreau wrote:


Hi

Solr is a standard package in ArchLinux. ("pacman -S solr") . the 
systemd installation script is included (and it is launching 
/opt/solr/bin/solr.in.sh)


Instance : sudo -u solr /opt/solr/bin/solr create -c dovecot -> this 
creates a separate folder with default solrconfig.xml, schema.xml, etc..


I made a symlink of the data folder to a second drive (ext4) much bigger





On 2018-12-31 14:09, Daniel Miller wrote:

On 12/29/2018 4:49 PM, Joan Moreau wrote:


Also :

- Java is 10.0.2

Same as me.


- If i delete schema.xml but create only managed-schema, the
solr refuses to start with a java error "schema.xml missing"

Ok...so we need to do some more digging.

How did you install Solr? (I downloaded a "binary" installation
and unpacked it)

How did you create the dovecot instance?  (I've provided explicit
instructions for how I did it - did you follow those exactly or
something different)?

How are you starting Solr?  (I use the provided "solr/bin/solr
start" command, wrapped inside a systemd service).

--
Daniel



--
--
Daniel



Re: Solr

2018-12-30 Thread Daniel Miller via dovecot

On 12/29/2018 4:49 PM, Joan Moreau wrote:


Also :

- Java is 10.0.2


Same as me.


- If i delete schema.xml but create only managed-schema, the solr 
refuses to start with a java error "schema.xml missing"



Ok...so we need to do some more digging.

How did you install Solr? (I downloaded a "binary" installation and 
unpacked it)


How did you create the dovecot instance?  (I've provided explicit 
instructions for how I did it - did you follow those exactly or 
something different)?


How are you starting Solr?  (I use the provided "solr/bin/solr start" 
command, wrapped inside a systemd service).


--
Daniel



Re: Solr

2018-12-30 Thread Daniel Miller via dovecot

On 12/29/2018 4:46 PM, Joan Moreau wrote:


Hi Daniel,

I am on Archlinux. Anyway, I adapted the scripts.

2 questions:

1 - It looks like we are not on the same version . I am on 7.5.0. 
Which version are you running ?



Solr 7.5.0.


2 - Your conf shows that you let managed-schema but deleted 
schema.xml. What is the meaning of each ?


schema.xml is the legacy configuration file.  managed-schema is the 
config file used by current Solr versions.


--
Daniel


Re: Segfault report

2018-12-26 Thread Daniel Miller via dovecot

On 12/26/2018 1:32 AM, Aki Tuomi wrote:

On 26 December 2018 at 11:26 Daniel Miller via dovecot  
wrote:


Ubuntu 18.04, AMD Opteron, Dovecot Version 2.3.3, local file storage.  I
believe it's one of my users checking mail remotely via mobile - don't
remember if it's an iPhone or Android.



I believe this is fixed with 
https://github.com/dovecot/core/commit/4fcd4e8fad45dcaa637e4cb36a9f99204d69badf.patch
 on v2.3.4.

Aki


Just to be clear - fixed with v2.3.4, or need to apply a patch on top of 
it (that will be included in next point release)?



Daniel



Segfault report

2018-12-26 Thread Daniel Miller via dovecot
Ubuntu 18.04, AMD Opteron, Dovecot Version 2.3.3, local file storage.  I 
believe it's one of my users checking mail remotely via mobile - don't 
remember if it's an iPhone or Android.


gdb backtrace:
Reading symbols from /usr/local/libexec/dovecot/imap...done.
[New LWP 13852]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `dovecot/imap [kkhany@amfes.c'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  event_want_debug_log (event=event@entry=0x0, 
source_filename=source_filename@entry=0x7efd84178aa3 "mail-storage.c",

    source_linenum=source_linenum@entry=1261) at event-log.c:120
120 if (event->forced_debug)
(gdb) bt full
#0  event_want_debug_log (event=event@entry=0x0, 
source_filename=source_filename@entry=0x7efd84178aa3 "mail-storage.c",

    source_linenum=source_linenum@entry=1261) at event-log.c:120
    ctx = {type = LOG_TYPE_DEBUG, exit_status = 0, timestamp = 0x0, 
timestamp_usecs = 0, log_prefix = 0x0,

  log_prefix_type_pos = 0}
#1  0x7efd83dc0986 in event_want_debug (event=event@entry=0x0,
    source_filename=source_filename@entry=0x7efd84178aa3 
"mail-storage.c", source_linenum=source_linenum@entry=1261)

    at event-log.c:140
No locals.
#2  0x7efd840bf270 in mailbox_open_full 
(box=box@entry=0x55704dc81058, input=input@entry=0x0) at mail-storage.c:1259

    _tmp_event = 0x0
    ret = 
#3  0x7efd840bf57a in mailbox_open_full (input=0x0, 
box=0x55704dc81058) at mail-storage.c:1368

    ret = 
    ret = 
    _tmp_event = 
    _data_stack_cur_id = 
    _data_stack_cur_id = 
#4  mailbox_open (box=0x55704dc81058) at mail-storage.c:1349
No locals.
#5  0x55704c36a31b in select_open (readonly=false, 
mailbox=, ctx=0x55704dc13bc8) at cmd-select.c:288

    client = 0x55704dc11de8
    status = {messages = 1830951344, recent = 32766, unseen = 
2391910144, uidvalidity = 1475818629, uidnext = 1830951424,
  first_unseen_seq = 32766, first_recent_uid = 1832402502, 
last_cached_seq = 32766, highest_modseq = 0,
  highest_pvt_modseq = 4294967296, keywords = 0x55704dbf1380, 
permanent_flags = 1280910144, flags = 21872,
  permanent_keywords = false, allow_new_keywords = false, 
nonpermanent_modseqs = false, no_modseq_tracking = false,
  have_guids = false, have_save_guids = true, have_only_guid128 
= false}

    flags = 
---Type  to continue, or q  to quit---
    ret = 0
    client = 
    status = 
    flags = 
    ret = 
#6  cmd_select_full (cmd=, readonly=) at 
cmd-select.c:417

    client = 0x55704dc11de8
    ctx = 
    args = 0x55704dbef690
    list_args = 0x5d006e
    mailbox = 0x55704dbe1540 "shared"
    error = 0x55704dc11de8 ""
    ret = 
    __func__ = "cmd_select_full"
#7  0x55704c371e30 in command_exec (cmd=cmd@entry=0x55704dc13a38) at 
imap-commands.c:201

    hook = 0x55704dbeb0f0
    finished = 
    __func__ = "command_exec"
#8  0x55704c3701d2 in client_command_input (cmd=, 
cmd@entry=0x55704dc13a38) at imap-client.c:1152

    client = 0x55704dc11de8
    command = 
    __func__ = "client_command_input"
#9  0x55704c370274 in client_command_input (cmd=) at 
imap-client.c:1215

    client = 0x55704dc11de8
    command = 
    __func__ = "client_command_input"
#10 0x55704c370675 in client_handle_next_command 
(remove_io_r=, client=0x55704dc11de8) at 
imap-client.c:1257

---Type  to continue, or q  to quit---
No locals.
#11 client_handle_input (client=0x55704dc11de8) at imap-client.c:1271
    _data_stack_cur_id = 3
    ret = 
    remove_io = false
    ret = 
    remove_io = 
    client = 0x55704dc11de8
    handled_commands = 
    _data_stack_cur_id = 
    ret = 
    remove_io = 
    _data_stack_cur_id = 
#12 0x55704c370ccc in client_input (client=0x55704dc11de8) at 
imap-client.c:1317

    cmd = 0x55704dc0bcb0
    output = 0x55704dc2d150
    bytes = 17
    __func__ = "client_input"
#13 0x7efd83ddae0f in io_loop_call_io (io=0x55704dc13910) at 
ioloop.c:698

    ioloop = 0x55704dbe9ee0
    t_id = 2
    __func__ = "io_loop_call_io"
#14 0x7efd83ddc7c6 in io_loop_handler_run_internal 
(ioloop=ioloop@entry=0x55704dbe9ee0) at ioloop-epoll.c:221

    ctx = 0x55704dbedc00
    events = 
    event = 
    list = 0x55704dc13970
---Type  to continue, or q  to quit---
    io = 
    tv = {tv_sec = 1799, tv_usec = 999365}
    events_count = 
    msecs = 
    ret = 
    i = 0
    j = 
    call = 
    __func__ = "io_loop_handler_run_internal"
#15 0x7efd83ddaf1c in io_loop_handler_run (ioloop=) 
at ioloop.c:750

No locals.
#16 0x7efd83ddb138 in io_loop_run (ioloop=0x55704dbe9ee0) at 
ioloop.c:723

    __func__ = "io_loop_run"
#17 0x7efd83d50873 in 

Re: Solr

2018-12-21 Thread Daniel Miller via dovecot

Joan,

The reason for dropping squat, I'm assuming, is that Lucene and Solr 
potentially provide superior features & performance and as they are 
3rd-party libraries & apps it reduces the maintenance responsibilities 
and let's the Dovecot team focus on mail server specific stuff - and let 
others focus on FTS.  There is a *huge* difference between a functional 
Solr setup & squat - and if I'm able to get it working we should be able 
to get you there as well.


I don't recall what OS you're running - I'm on Ubuntu 18.04.  My Java 
version is OpenJDK 10.0.2.  Attached is my complete Solr config.  Try 
one more time - stop the server, delete the data folder, unpack the 
attached into the conf folder - and restart.  I also have



/etc/default/solr.in.sh:
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=3000"
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoCommit.maxTime=6"
SOLR_PID_DIR=/run/solr
SOLR_HOME=/usr/local/lib

Adjust the above folders as appropriate - or don't use them at all if 
you're using the defaults.



/etc/systemd/system/solr.service:
# put this file in /etc/systemd/system/ as root
# below paths assume solr installed in /opt/solr, SOLR_PID_DIR is /data
# and that all configuration exists in /etc/default/solr.in.sh which is 
the case if previously installed as an init.d service

# change port in pid file if differs
# note that it is configured to auto restart solr if it fails 
(Restart=on-failure) and that's the motivation indeed :)
# to switch from systemv (init.d) to systemd, do the following after 
creating this file:

# sudo systemctl daemon-reload
# sudo service solr stop # if already running
# sudo systemctl enable solr
# systemctl start solr
# this was inspired by 
https://confluence.t5.fi/display/~stefan.roos/2015/04/01/Creating+systemd+unit+(service)+for+Apache+Solr

[Unit]
Description=Apache SOLR 7.5.0
After=syslog.target network.target remote-fs.target nss-lookup.target 
systemd-journald-dev-log.socket

Before=multi-user.target graphical.target nginx.service dovecot.service
Conflicts=shutdown.target
[Service]
LimitNOFILE=65000
User=vmail
Group=mail
ExecStartPre=/bin/mkdir -p /run/solr
ExecStartPre=/bin/chown -R vmail.mail /run/solr
PermissionsStartOnly=true
PIDFile=/run/solr/solr-8983.pid
Environment=SOLR_INCLUDE=/etc/default/solr.in.sh
ExecStart=/opt/solr/bin/solr start
ExecStop=/opt/solr/bin/solr stop
Restart=on-failure
RestartSec=15s
TimeoutStopSec=30s
[Install]
WantedBy=multi-user.target graphical.target dovecot.service

If you don't use systemd disregard - but see if any of the above applies 
for your setup.


Let me know what happens.  I agree this can be a mortal pain to setup - 
but it's worth it.


Daniel

On 12/21/2018 4:33 AM, Joan Moreau wrote:


Dear Daniel.

Thank you for your kind reply.

Regarding NFS, no, there is nothing like this in my setup.

Deleting SOLR and recreating it, I did it so many times already.

I started with *your* setup in the first place, as FTS_squat (which 
actually works very well and very straightforward, I have no clue why 
going for Solr which is just a pain and not maintaining squat), and it 
leads to totally funny results (for instance, I type "emirates" in my 
"Air Companies" subfolder and get a lot of results .. but of competing 
companies :D )


I added the fts_enforce following AKi advice.

I removed fts_decoder for the time being.

I don't know where to go now. Dovecot is still returning errors and Solr is 
still complaining with "Out of range" and other Java errors.


Bottom line, I am back to squat, but as it is not maintained it also 
crashes from time to time.



I think we should discuss:

(1) Why the damn choice of Solr has been made. As you emphasised, 
maintaining so many independent pieces of software is a pain


(2) If there is a real reason for going with Solr, how to have a 
working (i.e. getting the right results to the end user) setup?


(3) If there are no tangible reasons, what about maintaining fts_squat, 
which did the job nicely for years with no complaints about it.






On 2018-12-16 08:51, Daniel Miller via dovecot wrote:


Joan,

I understand and sympathize with your frustration - trying to get 
multiple applications to work together, particularly given the lack 
of documentation for some of them, can be extremely challenging.  
That said, I suggest you consider an alternative viewpoint.  
Frequently being misunderstood myself I apologize in advance if I'm 
reading you wrong - but it appears your view towards the situation is 
there is a bug in Dovecot related to this problem.  That may well be 
- but I generally approach these matters from the assumption that *I* 
made the error in configuration and go from there.  I'm not an 
official rep for any product nor claim to be any form of expert in 
these matters - but I do have a working setup and I'd like to help 
you if I can.  If you're willing to - take a deep breath and let's 
try starting over.


Looking back through y

SIS feature request

2018-12-20 Thread Daniel Miller via dovecot
I tried SIS a couple years ago - I was very excited with the resulting 
decrease in storage requirements but the undiagnosed intermittent issues 
became too significant to ignore so I switched away.  Recently I was 
thinking about it again.


The primary issue with SIS seemed to be links would be deleted even 
though the source attachment files and related mails still existed.  It 
was possible to either manually re-build the links or have a script scan 
the mail error log and perform such.


I haven't looked at the code - but a thought for a possible "temporary" fix:

    1.  Whatever function in dbox code that performs the deletion of 
links - prior to actually deleting call a new function that will verify 
if any mails exist that reference it.  A new function, without modifying 
existing code, may catch something the existing functions don't - and if 
it logs the fact that it was called and found something...perhaps we can 
find the flaw in the original algorithm.  Just a thought.


    2.  In the mail retrieval function, if the attachment link doesn't 
exist - perform the relevant scan through the attachment database and if 
found re-create the link automatically.  This should log an error but 
indicate the recovery.


--
Daniel



Possible attack?

2018-12-17 Thread Daniel Miller via dovecot

I found an error in my log today...

Dec 17 12:03:30 bubba dovecot: 
imap(us...@amfes.com)<23017>: Error: fts_solr: 
received invalid uid '0'
Dec 17 12:04:44 bubba dovecot: 
imap(us...@amfes.com)<25004>: Fatal: master: 
service(imap): child 25004 killed with signal 11 (core dumps disabled - 
https://dovecot.org/bugreport.html#coredumps)


I've now enabled core dumps (I think) and restarted - if it comes back 
hopefully I can get a backtrace.  But reading that fts_solr message, and 
some other comments, leads me to wonder - could this be caused by 
someone/thing trying to authenticate as root?


On that theory - I tried doing so via telnet - and received:

Dec 17 15:06:02 bubba dovecot: auth: Error: 
plain(ultradeitytypeper...@amfes.com,127.0.0.1,<4kQr0z99UMZ/AAAB>): user 
not found from any userdbs
Dec 17 15:06:02 bubba dovecot: imap: Error: Authenticated user not found 
from userdb, auth lookup id=3522297857 (auth connected 1 msecs ago, 
handshake 0 msecs ago, request took 1 msecs, client-pid=29572 client-id=1)


I have root's email aliased to a valid user's email.  I'm not sure how 
I'm able to authenticate as root - there isn't a root user defined in my 
LDAP database and that should be the only auth backend enabled for 
Dovecot.  Or do I need to explicitly block local users from /etc/passwd 
on the server?  The only auth databases shown in doveconf -n:


userdb {
  driver = prefetch
}
userdb {
  args = /usr/local/etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
passdb {
  args = /usr/local/etc/dovecot/master-users
  driver = passwd-file
  master = yes
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}

and "master-users" doesn't list root either.

--
Daniel



ssh_dh?

2018-12-16 Thread Daniel Miller via dovecot
Don't know if this was corrected in 2.3.4 (haven't upgraded yet but 
didn't see it in the notes) - but in 2.3.3 I see this in my log:


imap-login: Error: Diffie-Hellman key exchange requested, but no DH 
parameters provided. Set ssh_dh=

So...either there's an undocumented feature of SSH-over-IMAP (that's 
Dovecot - always on the cutting edge!) or someone had a coffee shortage 
during a coding session...



--
Daniel



Re: Upgrade to 2.3.1 has failed

2018-12-16 Thread Daniel Miller via dovecot

As a LetsEncrypt user myself, I have:

ssl_cert = So nothing further should be required.  You say Dovecot fails to start - 
have you tried simply executing "dovecot -F"?


Daniel

On 12/16/2018 6:19 AM, C. Andrews Lavarre wrote:

Phil hi.

Thank you for explaining what the symbol does... so it is like the 
BASH *from* symbol. OK.That is new information.


So without it dovecot reads the *path/to/file* as if it were a hashed 
cert, which of course doesn't work. So *with* the symbol dovecot tries 
to follow the path to read the cert but for some reason cannot read 
it. Now, that is curious, since I can *cat* the path/to/file and read 
the cert or key...


Now, while the /path/to/file permission is presently *root:root 0777 
*(yes, I know 0777 is not good, but I was trying to eliminate any 
prevention to reading it)**it is actually a soft link to yet another 
file. Let'sEncrypt has to be renewed every so often so the cert engine 
(*certbot*) recreates the softlink to the new cert so that we don't 
need to edit *10-ssl.conf*.


So I have entered the actual full path/to/file for the cert and key 
(not the softlinks) to eliminate that possibility, but it didn't 
help. So it's something else.


As you say, focus on the problem: Simply put, why can 2.3.1 not read a 
file while we can list and print out (*ls, cat*) the file? What 
changed in that regard from 2.2.x to 2.3.1?




I'm very grateful for the time folks have spent on this, including my 
own time. I'm not being rude, just factual. This is what is happening.


But "something is wrong with your configuration",  while equally 
factual, is also equally ineffective.


OTOH, in my experience factually describing an anomaly can lead to 
someone wondering why it might be, and if they are more knowledgeable 
of the inner workings of the system be better able to understand why 
that might be.


For example, I didn't know anything about AppArmor before, now I do, 
have gone down that rabbit hole, and seem to be able to say, nope, 
that's not the problem. So now I can move on to checking out something 
else.


Similarly, under BASH the path/to/files are all correct and I can read 
them from the command line. And 2.2.x didn't have any problem with 
them. So why might 2.3.1 not be able to read them?


So we all need to leave this alone, for now. I'll work along, and 
when/if I figure it out shall return to report. I'm sure it's 
something simple: Easy when you know how. :-)


Thanks again.

Andy

On Sun, 2018-12-16 at 07:41 -0500, Phil Turmel wrote:

Andy,

This is just rude.  You have been told multiple times that the less-than
symbol is required to read the certificate from the file.  Otherwise,
the filename is parsed as if it is the certificate itself.  Which yields
garbage.

If dovecot can't read that file, it is *not* dovecot's fault.  You are
simply not going to succeed until *you* figure out what security
differences you have in your new installation.  So dovecot can read the
files.  Every single attempt to connect via openssh depends on dovecot
reading your certificate and key files.  They are pointless exercises
until dovecot actually loads your files.  Focus on the real problem if
you wish to fix your service.

On 12/15/18 5:12 PM, C. Andrews Lavarre wrote:
Alexander, Thanks, as described before, if I include the "<" then 
Dovecot fails to start at all. Thank you again for your time. I have 
forwarded my latest to Aki to the group. 




Regards,

Phil


Re: Solr

2018-12-15 Thread Daniel Miller via dovecot

Joan,

I understand and sympathize with your frustration - trying to get 
multiple applications to work together, particularly given the lack of 
documentation for some of them, can be extremely challenging.  That 
said, I suggest you consider an alternative viewpoint.  Frequently being 
misunderstood myself I apologize in advance if I'm reading you wrong - 
but it appears your view towards the situation is there is a bug in 
Dovecot related to this problem.  That may well be - but I generally 
approach these matters from the assumption that *I* made the error in 
configuration and go from there.  I'm not an official rep for any 
product nor claim to be any form of expert in these matters - but I do 
have a working setup and I'd like to help you if I can.  If you're 
willing to - take a deep breath and let's try starting over.


Looking back through your emails there were two items that stood out - 
your Dovecot config has two settings I don't use: "fts_decoder" and 
"fts_enforced".  I also asked you earlier whether or not NFS is involved 
here and I didn't see an answer - please clarify.


I suggest you try once more: delete Solr completely.  Re-install per the 
directions and use *my* managed-schema.  Also comment out the Dovecot 
directives for "fts_decoder" and "fts_enforced" so you're closer to my 
setup.  Try running again and then post back - I'll do what I can.  
Based on the fact that Dovecot+Solr 7.5+my schema is working for me 
leads me to believe we can get it working for you as well.


Daniel

On 12/15/2018 2:42 PM, Joan Moreau wrote:


here is my latest schema.xml (I removed the "long" type, which seems to be 
thoroughly deprecated in 7.x)

[the schema.xml was pasted here, but the archive stripped the XML element 
tags; only scattered attribute fragments (ignoreCase, generateWordParts, 
maxGramSize, protwords.txt, synonyms.txt, stored="true", ...) survive]






On 2018-12-15 20:54, Joan Moreau wrote:


Daniel,
I have done that so many times (deleting the data folders, recreating 
the instance, restarting etc...)

But this is really not the issue
The issue is
1 - fts_solr reports errors in the log file (this is a pure dovecot 
issue): how to get much more detail on what fts_solr sends to the Solr 
server and what it returns?
2 - Solr returns properly for a few hours, then starts crashing or 
responding with nonsense after some time
Additionally, is there a doc of fts-squat in order to adjust the code 
to new releases of Dovecot?


On December 12, 2018 4:44:10 PM Daniel Miller via dovecot 
 wrote:


On 12/11/2018 4:46 AM, Joan Moreau via dovecot wrote:


I shared the errors already so many times (check this mailinling
for "solr" in teh title)

Contrary to what you say, with SOlr 7.5 and Dovecot git,  I had
to remove the "managed-schema" to make solr respond a bit
properly. It relies on schema.xml

In order to create the instance, no, it copies the default
config in the dovecot instance.


I'm not a Solr expert by any means but I believe you are incorrect.

As of Solr 5.x the managed-schema file is the primary method for
configuration.  The method I detailed previously for setting up a
config helps automate creating new Solr instances - but as I
stated you can either setup a Solr template and then create the
instance from that or create an instance using the default
template and then adjust it.

The part that you *must* do after creating from the default
template is stop the server, delete the entire
"/solr/dovecot/data" folder, then install the correct
managed-schema file, then restart the server.  The server will
not function with mismatched schema/data.

If you'll try that - explicitly "rm -rf
/solr/dovecot/data", copy the managed-schema file into
the conf folder, and restart - things will either work or there's
something else that needs correction.

--
Daniel



Re: Solr

2018-12-12 Thread Daniel Miller via dovecot

On 12/10/2018 10:02 PM, Joan Moreau wrote:


Additionally, here the errors I get in logs:

Dovecot:

Dec 09 09:21:09 imap(j...@grosjo.net)<3349>: Error: 
fts_solr: received invalid uid '0'
Dec 09 09:21:10 imap(j...@grosjo.net)<3349>: Error: 
fts_solr: received invalid uid '0'


or

11 03:36:03 indexer-worker(j...@grosjo.net)<2093>: Error: fts_solr: Indexing failed: 500 Server Error




This looks like a permissions issue.  Are you using NFS?

--
Daniel




Re: Solr

2018-12-10 Thread Daniel Miller via dovecot
The one on the Wiki is mine...which I'm using now.  So it certainly does 
work - but perhaps there's a setting you have differently from me.


Performing a "create -c dovecot" creates a Solr instance *named* dovecot 
- that does *not* initialize it with the necessary schema.  You need to 
specify "-d dovecot", with a dovecot configset already setup, to do that.


The other choice is to create the instance as you show, ensure Solr is 
stopped, delete the "/solr/dovecot/data" folder, and copy the 
managed-schema file to "/solr/dovecot/conf".  Again, the 
filename saved in the /conf folder needs to be "managed-schema" - no 
".xml" suffix.


If that doesn't work for you - please share the errors.

Daniel

On 12/10/2018 11:40 AM, Joan Moreau wrote:


Hi Daniel,

THere is no need of all this, just the command (on Solr 7.5) "create 
-c dovecot " is enough


The chema.xml provided on the wiki basically does not work on 7.5


Here is the latest one I am working on, but nothing works properly (bad 
search results, errors in fts_solr, etc.)

[the schema.xml was pasted here, but the archive stripped the XML element 
tags; only scattered attribute fragments remain]






On 2018-12-10 21:17, Daniel Miller via dovecot wrote:


On 12/4/2018 10:40 AM, Joan Moreau via dovecot wrote:


In the Wiki, ( https://wiki.dovecot.org/Plugins/FTS/Solr ), it would 
nice to stipulate to the reader  to type the command :


sudo -u solr /opt/solr/bin/solr create -c dovecot # to create the 
dovecot instance


before updating the schema.xml .

Also,  schema.xml is in /opt/solr/server/solr/dovecot/conf for 
archlinux users


Additionally, the url is http://(solr_server):8983/solr/dovecot/ (error in wiki)


After installing Solr, wherever the installation sets up there should 
a folder similar to:


/solr/server/solr/configsets

If you look there, you'll probably see folders like '_default' and 
'sample_techproducts_configs'.  I haven't played with the 
'techproducts' sample.  Copy the '_default' folder, with all its 
contents, to a 'dovecot' folder.  In the new dovecot folder, replace 
the 'managed-schema' file with the file from the Dovecot Wiki


https://wiki.dovecot.org/Plugins/FTS/Solr?action=AttachFile=view=solr-7.x-schema.xml

after that, you should be able to run 'sudo -u solr /opt/solr/bin/solr 
create -c dovecot' to create the instance. If things still don't work let 
us know.


The schema is one I've tweaked and updated during my own migrations 
since Solr 3.3.  It's possible there's something else in my config 
that needs documenting - but having experienced Solr search against 
my mailstore I never want to be without it.


Daniel



Re: Solr

2018-12-10 Thread Daniel Miller via dovecot

On 12/4/2018 10:40 AM, Joan Moreau via dovecot wrote:


In the Wiki, ( https://wiki.dovecot.org/Plugins/FTS/Solr ), it would 
nice to stipulate to the reader  to type the command :


sudo -u solr /opt/solr/bin/solr create -c dovecot # to create the 
dovecot instance


before updating the schema.xml .

Also,  schema.xml is in /opt/solr/server/solr/dovecot/conf for 
archlinux users


Additionally, the url is http://(solr_server):8983/solr/dovecot/ (error in wiki)


After installing Solr, wherever the installation sets up there should a 
folder similar to:


/solr/server/solr/configsets

If you look there, you'll probably see folders like '_default' and 
'sample_techproducts_configs'.  I haven't played with the 'techproducts' 
sample.  Copy the '_default' folder, with all its contents, to a 
'dovecot' folder.  In the new dovecot folder, replace the 
'managed-schema' file with the file from the Dovecot Wiki


https://wiki.dovecot.org/Plugins/FTS/Solr?action=AttachFile=view=solr-7.x-schema.xml

after that, you should be able to run 'sudo -u solr /opt/solr/bin/solr 
create -c dovecot' to create the instance.  If things still don't work let 
us know.
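
In command form, and again assuming the stock /opt/solr layout (adjust to 
taste), that sequence is roughly:

cd /opt/solr/server/solr/configsets
cp -r _default dovecot
cp solr-7.x-schema.xml dovecot/conf/managed-schema
sudo -u solr /opt/solr/bin/solr create -c dovecot -d dovecot

with -d pointing "create" at the prepared configset so the new instance 
starts out with the right schema.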


The schema is one I've tweaked and updated during my own migrations 
since Solr 3.3.  It's possible there's something else in my config that 
needs documenting - but having experienced Solr search against my 
mailstore I never want to be without it.


Daniel



Multi-server but small scale

2018-11-19 Thread Daniel Miller
I have a small but critical server that supports our group.  As a single 
server - it's obviously a single-point-of-failure for lots of things.  
As I just experienced...again.  It was a lot more fun building systems 
from components when I was younger...


Previously 3rd-party hosted solutions didn't look attractive for several 
reasons...but I'm seeing prices now for cloud virtual machines that are 
stupid cheap.  Even if they wind up being limited speed & availability - 
it would seem they'd be a lot better than nothing!


So I'm considering having at least one backup server for various 
services - obviously that includes mail.  So now I have to wonder about 
the backend.  And while I think I'm reasonably current with networked 
file systems (not distributed or cluster) I haven't played with 
replication for a quite a while.


For this particular usage (I'm envisioning two servers total) - is there 
a need/reason to use any form of networked/distributed/cluster file 
storage?  Or would this be accomplished via "pure" Dovecot - dsync 
replication would keep things updated between the servers and director 
would handle the connections?  So with identically configured SMTP 
servers, passing to the local LMTP agents, the file system would be 
"purely local" with no NFS or other interconnection?


--
Daniel



Update virtual folders via doveadm?

2018-11-19 Thread Daniel Miller
I'm trying to have my server maintain it's FTS indexes reasonably 
current at all times.  From prior threads, and based on my namespace 
configuration, I have the following hourly cronjob:


doveadm index -A -q '*'
doveadm index -A -q 'shared/*'
doveadm index -A -q 'virtual/*'

This seems to work just fine to maintain the FTS (Solr).  However - as I 
understand it Dovecot's virtual folders are not updated with new mails 
until they are accessed by a client.  Therefore the above command 
sequence would work fine for my default namespace and the shared - but 
the virtual namespace won't actually be current.  So...


Would a command such as:

doveadm mailbox status -A messages virtual/*

Be appropriate to run prior to the virtual/* FTS indexing?
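
Put together, the hourly job would then look something like this (a 
sketch - I haven't actually run the status pass yet):

doveadm mailbox status -A messages 'virtual/*' > /dev/null
doveadm index -A -q '*'
doveadm index -A -q 'shared/*'
doveadm index -A -q 'virtual/*'

with the status pass there purely to force each virtual mailbox to refresh 
before the indexer touches it.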

--
Daniel



Solr 7.5.0 managed-schema

2018-11-18 Thread Daniel Miller
The attached is the Dovecot schema I am now using for Solr 7.5.0 - no 
"deprecated" warnings!


--
Daniel



managed-schema
Description: Binary data


Re: What causes folders to be reported as noselect?

2018-09-26 Thread Daniel Miller

On 2018-09-26 10:14, Aki Tuomi wrote:

On 26 September 2018 at 18:42 Daniel Miller  wrote:


As the subject says.  This may be a bit open-ended - but it would 
really

help troubleshooting some obscure folder issues.

In my case, I happen to have both some "real" folders and also some
"virtual" folders that respond to IMAP LIST commands with the
"\NoSelect" flag - and I don't know why.  Via telnet, I can manually
issue SELECT, SEARCH, and FETCH for such folders without errors.

--

Daniel



\NoSelect folders are usually namespace boundaries and non-existing
folders, such as parents for children in systems where the parents do
not need to exist for real.

You should not be able to SELECT a \NoSelect folder.

Aki


At the moment, the folders in question:

My primary namespace "inbox", with no prefix, has a folder INBOX, with a 
child folder "Other", which in turn has two children.  "INBOX/Other" 
shows as \NoSelect - the two children are normal.


In my "virtual" namespace, I had a virtual folder defined as "Archives". 
 I created a new folder "Archive-Search" and copied the dovecot-virtual 
file over - and it works fine.


I don't see anything wrong via filesystem permissions or ownership - so 
I'm assuming either there are reserved words I'm not allowed to use with 
IMAP folders (but I can't find any documented), or something in my 
namespace or folder setup is applying some kind of mask (or something is 
corrupted...more on this below), or...there's a bug.  But I'm willing to 
assume the flaw lies with me.  Or at least my ever wonderful server - 
which continues to keep me entertained instead of simply operating 
quietly and consistently without endearing quirks...


As far as selectability...

I was going to post a telnet session to prove I could...but when I 
tested previously I was using the "virtual/Archives" folder and it 
worked manually - before I created the "virtual/Archive-Search" folder 
and deleted the other.  So I tried the "INBOX/Other" folder - and I do 
get the expected "NO Mailbox doesn't exist: INBOX/Other".  So...


Just for fun...I created "virtual/Archives" again, copied the 
dovecot-virtual, set the permissions...and it works fine! And just in 
case...I also tried "virtual/Archive" - also now selectable.  And to be 
clear - I create these folders directly in the filesystem, manually copy 
the dovecot-virtual file, and set the owner/permission.


Let's try another experiment...other email>


Ok...moving on.  "INBOX/Other" isn't selectable.  Let's experiment a 
little more carefully.  Using RoundCubeMail, view the folder list, 
rename "INBOX/Other" to "INBOX/Other-Old".  Same conditions.  Using 
RoundCube - create a new folder "INBOX/Other" - this is now selectable!  
Using RoundCube - move the first child of "INBOX/Other-Old" to 
"INBOX/Other".


Now it's weird.  INBOX/Other is present and selectable, 
INBOX/Other/Child1 is present and selectable - INBOX/Other-Old has 
disappeared and the former INBOX/Other-Old/Child2 is now at 
INBOX/Child2.  Move that to INBOX/Other/Child2...now everything is 
selectable as expected.


Which leaves me wondering...what the  was broken - and was there 
any other way to see it?  The on-disk structure looked right and the 
IMAP folder lists looked right other than the non-selectability.

--
Daniel


Possible bug - otherwise a public admission of oops

2018-09-26 Thread Daniel Miller
While trying to identify possible causes of wrong mail folder creation I 
did something...bad.


Normally, I would recognize that deleting a mail folder would naturally 
delete all the contained mails.  However...somehow my imaginative self 
decided that deleting a virtual folder via IMAP would only delete the 
virtual folder...and not proceed to delete every referenced email via 
the virtual mapping.


So note to self, and possible reminder to others, deleting a virtual 
folder via a filesystem command is just tiny bit different than via 
IMAP...


Fortunately I keep my feathers numbered for just such an emergency...

--
Daniel


What causes folders to be reported as noselect?

2018-09-26 Thread Daniel Miller
As the subject says.  This may be a bit open-ended - but it would really 
help troubleshooting some obscure folder issues.


In my case, I happen to have both some "real" folders and also some 
"virtual" folders that respond to IMAP LIST commands with the 
"\NoSelect" flag - and I don't know why.  Via telnet, I can manually 
issue SELECT, SEARCH, and FETCH for such folders without errors.


--

Daniel



doveadm variables

2018-09-03 Thread Daniel Miller
Are variables such as %d and %n available to doveadm when executed from 
the command line?  Either when explicitly declaring the user via -u or 
when using all users with -A?


--
Daniel



Re: online conversion using replication?

2018-09-02 Thread Daniel Miller
That works for a one-time migration, or perhaps via a cron-job, but what 
I want is basically a constant one-way backup and it seems replication 
could do it more elegantly & efficiently.


--
Daniel

On 9/1/2018 11:14 PM, Aki Tuomi wrote:
You don't need to setup replication for that. See 
https://wiki2.dovecot.org/Migration/MailFormat


---
Aki Tuomi
Dovecot oy

 Original message 
From: Daniel Miller 
Date: 02/09/2018 04:14 (GMT+02:00)
To: Dovecot Mailing List 
Subject: online conversion using replication?

With a single server - and no intent to have a second server online at
this time - is it possible to use the replication service to keep a
"live" backup? Or otherwise perform a storage format conversion?

I'm presently using sdbox - and considering going back to mdbox though
without SIS.  My intent now is ONLY a 1-way backup, to be kept current,
and no clients will utilize the converted storage (unless/until I change).

I've successfully executed:

doveadm backup -u  -n inbox 
mdbox:/var/mail///mdbox


For a few users - but reading the docs leads me to believe I can
automate this.  But there's no explicit example for this - so I'm not
sure what to set.

Initially I'm thinking of:

dsync_remote_cmd = doveadm backup -u %u -n inbox 
mdbox:/var/mail/%d/%n/mdbox


and if that's right - which other services/listeners do I need to setup?

--
Daniel





remove non-standard flags

2018-09-01 Thread Daniel Miller
Having relocated my virtual folders to a new namespace - I'm having some 
subscription challenges.  I'm able to reach them via telnet without 
issue - and some clients have better luck than others.  Some folders 
have no problems at all - some seem different.


In trying to identify what might be the issue - at the moment the only 
thing I'm seeing is the ones that are completely without issue don't 
have any "weird" flags.


Example: an apparently "clean" virtual folder:

. select virtual/Flagged
* OK [CLOSED] Previous mailbox closed.
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft $Forwarded)
* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft 
$Forwarded \*)] Flags permitted.

* 12 EXISTS
* 0 RECENT
* OK [UNSEEN 7] First unseen.
* OK [UIDVALIDITY 1535841027] UIDs valid
* OK [UIDNEXT 13] Predicted next UID
* OK [HIGHESTMODSEQ 3] Highest
. OK [READ-WRITE] Select completed (0.316 + 0.000 + 0.315 secs).

This folder is recognized by all clients.  But this one:

. select virtual/INBOX
* OK [CLOSED] Previous mailbox closed.
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft $Forwarded junk $MDNSent)
* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft 
$Forwarded junk $MDNSent \*)] Flags permitted.

* 9540 EXISTS
* 0 RECENT
* OK [UNSEEN 85] First unseen.
* OK [UIDVALIDITY 1535841672] UIDs valid
* OK [UIDNEXT 9541] Predicted next UID
. OK [READ-WRITE] Select completed (44.266 + 0.000 + 44.265 secs).

Is a little more troublesome.  And then...

. select virtual/Archives
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft $MDNSent unknown-0 
unknown-2 unknown-3 unknown-5 unknown-9 unknown-4 $Forwarded unknown-7 
unknown-8 junk $NotJunk $label5)
* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft $MDNSent 
unknown-0 unknown-2 unknown-3 unknown-5 unknown-9 unknown-4 $Forwarded 
unknown-7 unknown-8 junk $NotJunk $label5 \*)] Flags permitted.

* 142605 EXISTS
* 0 RECENT
* OK [UNSEEN 82998] First unseen.
* OK [UIDVALIDITY 1535841824] UIDs valid
* OK [UIDNEXT 142606] Predicted next UID
. OK [READ-WRITE] Select completed (10.217 + 0.000 + 10.217 secs).

I can't get Thunderbird to recognize it all (though RoundCube web client 
works).


There very well may be something totally separate from the flags that's 
causing the problem, but, is there a way to strip the flags off?  I tried:


doveadm flags remove -u  '$MDNSent unknown-0 unknown-2 
unknown-3 unknown-5 unknown-9 unknown-4 unknown-7 unknown-8 junk 
$NotJunk $label5' mailbox 'inbox/Archive/2004'


since that's one of the source folders of the virtual folder - no error, 
but no apparent change.  Same for operating on the virtual folder directly.


--
Daniel



online conversion using replication?

2018-09-01 Thread Daniel Miller
With a single server - and no intent to have a second server online at 
this time - is it possible to use the replication service to keep a 
"live" backup? Or otherwise perform a storage format conversion?


I'm presently using sdbox - and considering going back to mdbox though 
without SIS.  My intent now is ONLY a 1-way backup, to be kept current, 
and no clients will utilize the converted storage (unless/until I change).


I've successfully executed:

doveadm backup -u  -n inbox mdbox:/var/mail///mdbox

For a few users - but reading the docs leads me to believe I can 
automate this.  But there's no explicit example for this - so I'm not 
sure what to set.


Initially I'm thinking of:

dsync_remote_cmd = doveadm backup -u %u -n inbox mdbox:/var/mail/%d/%n/mdbox

and if that's right - which other services/listeners do I need to setup?
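
Failing that, the brute-force fallback would be a nightly loop over every 
account - a sketch only, assuming the userdb can iterate users and with an 
invented destination path:

doveadm user '*' | while read u; do
  doveadm backup -u "$u" "mdbox:/var/mail/backup/$u/mdbox"
done

which skips the replication service entirely at the cost of rescanning 
everything each run.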

--
Daniel



Update both virtual indexes & FTS indexes

2018-08-28 Thread Daniel Miller

Will the following command:

doveadm index -A '*'

Ensure all Dovecot indexes are current (including virtual mailboxes) and 
also update FTS?  Other than the time/resources needed to parse all 
users/mailboxes - is there a reason not to schedule this to run on a 
regular (hourly?) basis?
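
For context, the scheduling I have in mind is nothing fancier than a 
crontab line for the mail user (illustrative only):

@hourly doveadm index -A -q '*'

with -q handing the work to the indexer service so the cron job itself 
returns immediately.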


My goal is to have my server constantly working...so my client isn't 
waiting for either virtual mailbox or Solr FTS updates.


--
Daniel



Re: sdbox filesystem backup potential excludes

2018-08-16 Thread Daniel Miller

On 8/14/2018 10:41 PM, Aki Tuomi wrote:


On 14.08.2018 21:23, Daniel Miller wrote:

On 8/14/2018 12:55 AM, Aki Tuomi wrote:

On 13.08.2018 19:51, Daniel Miller wrote:

When doing a filesystem backup of a moderate sdbox mailstore (300GB)
- are there any files that can be safely excluded from the backup?
Like *.log or *.backup?  Or are they all "vital" for recovery?

I'm already excluding the sdbox/virtual folders as it looks like they
get created and updated as needed.


Hi, can you provide 'doveconf -n'. sdbox/virtual folders are not part of
sdbox mail format. In general, if you want to avoid data loss, you can
only omit dovecot.index.cache files, but omitting these can come with
high impact when they are regenerated. Omitting dovecot.index.log or
dovecot.index will cause loss of flags.

Aki

I have virtual folders enabled & defined.

What does "high impact" mean?  A few seconds to a few minutes on
initial mailbox opening for regeneration?  I can live with that as
this is for emergency backup/restore purposes.

It means there will be nothing cached, it depends on your users and what
they do. If they only open first 20 mails it will not be that bad.

Ok...so I'm *probably* ok excluding cache files then...
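
Concretely, the exclusion I'm leaning towards is no more than (rsync 
syntax, destination invented, untested):

rsync -a --exclude 'dovecot.index.cache' /var/mail/ /backup/mail/

leaving every other index and log file in place so flags and mailbox state 
survive a restore.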


mmap_disable = yes

If you are not using NFS, don't disable mmap.
I'm using NFS.  Mail server is running in VirtualBox guest, mounting the 
host's native storage via NFS.



   mailbox virtual/Flagged {
     comment = All my flagged messages
     special_use = \Flagged
   }

You can't "alias" folders like this.

Maybe not...but it works.



namespace virtual {
   list = children
   location = virtual:/var/mail/%d/%n/sdbox/virtual
   prefix = virtual/
   separator = /
   subscriptions = no
}
  
You should really not put the virtual indexes inside sdbox directory,

this can confuse sdbox. You should put this under
/var/mail/%d/%n/virtual instead.

I haven't had any errors show up...but if it's incorrect then I'll try 
changing it.


Daniel
 



Recommendations for backup methods

2018-08-14 Thread Daniel Miller
I've been re-thinking my backup strategy - I wanted to see what input 
others have.  At this time - I'm using sdbox as the primary storage 
format and running on a single server.


Previously, all my backups were simple filesystem backups. Either 
inotify-based or cron-based.  The whole mail folder structure would then 
be copied to a remote storage site.  I've changed services - I now use 
OpenDrive for the remote storage and can access the remote drive as a 
mounted folder via WebDAV.  I also now use a cron-based series of restic 
jobs for the backup processing.


Because I have some large folders - both personal & business letters 
going back quite some time as well as mailing list archives - some mail 
folders can be quite sizable.  Now I'm wondering, between sdbox's 
support for alternate locations and the ability to use different storage 
formats via namespaces - perhaps I should setup archive folders that are 
stored differently to improve backup performance.  Maybe maildir or even 
mbox for the archive folders - I create one per year?
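
The sort of thing I'm picturing is a separate namespace pointed at a 
different format - a rough sketch, with the prefix and paths invented for 
illustration:

namespace archive {
  prefix = Archive/
  separator = /
  location = maildir:/var/mail/%d/%n/archive
  list = children
  subscriptions = no
}

so the yearly folders live under Archive/ and the backup jobs can treat 
that tree on their own schedule.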


Any comments?  Experiences?

--
Daniel



Re: sdbox filesystem backup potential excludes

2018-08-14 Thread Daniel Miller

On 8/14/2018 12:55 AM, Aki Tuomi wrote:


On 13.08.2018 19:51, Daniel Miller wrote:

When doing a filesystem backup of a moderate sdbox mailstore (300GB)
- are there any files that can be safely excluded from the backup?
Like *.log or *.backup?  Or are they all "vital" for recovery?

I'm already excluding the sdbox/virtual folders as it looks like they
get created and updated as needed.


Hi, can you provide 'doveconf -n'. sdbox/virtual folders are not part of
sdbox mail format. In general, if you want to avoid data loss, you can
only omit dovecot.index.cache files, but omitting these can come with
high impact when they are regenerated. Omitting dovecot.index.log or
dovecot.index will cause loss of flags.

Aki


I have virtual folders enabled & defined.

What does "high impact" mean?  A few seconds to a few minutes on initial 
mailbox opening for regeneration?  I can live with that as this is for 
emergency backup/restore purposes.


doveconf -n
# 2.2.31 (65cde28): /usr/local/etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.17 (e179378)
# OS: Linux 4.4.0-131-generic x86_64 Ubuntu 16.04.5 LTS
auth_cache_size = 4 k
auth_master_user_separator = *
auth_mechanisms = plain login
default_login_user = nobody
default_vsz_limit = 1 G
dict {
  acl = mysql:/usr/local/etc/dovecot/dovecot-dict-sql.conf.ext
}
disable_plaintext_auth = no
imap_client_workarounds = tb-extra-mailbox-sep
imap_idle_notify_interval = 29 mins
listen = *
mail_attachment_hash = %{sha512}
mail_plugins = fts fts_solr acl zlib virtual
mail_prefetch_count = 10
mail_shared_explicit_inbox = yes
mailbox_list_index = yes
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope 
encoded-character vacation subaddress comparator-i;ascii-numeric 
relational regex imap4flags copy include variables body enotify 
environment mailbox date index ihave duplicate mime foreverypart extracttext

mmap_disable = yes
namespace inbox {
  hidden = no
  inbox = yes
  list = yes
  location =
  mailbox "Deleted Messages" {
    auto = no
    autoexpunge = 30 days
    special_use = \Trash
  }
  mailbox Drafts {
    auto = subscribe
    special_use = \Drafts
  }
  mailbox INBOX/Archives {
    auto = no
    special_use = \Archive
  }
  mailbox Sent {
    auto = subscribe
    special_use = \Sent
  }
  mailbox "Sent Items" {
    auto = no
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    auto = no
    special_use = \Sent
  }
  mailbox Trash {
    auto = subscribe
    autoexpunge = 30 days
    special_use = \Trash
  }
  mailbox virtual/Flagged {
    comment = All my flagged messages
    special_use = \Flagged
  }
  prefix =
  separator = /
  subscriptions = yes
  type = private
}
namespace usershares {
  list = children
  location = sdbox:/var/mail/%%d/%%n/sdbox
  prefix = shared/%%n/
  separator = /
  subscriptions = no
  type = shared
}
namespace virtual {
  list = children
  location = virtual:/var/mail/%d/%n/sdbox/virtual
  prefix = virtual/
  separator = /
  subscriptions = no
}
passdb {
  args = /usr/local/etc/dovecot/master-users
  driver = passwd-file
  master = yes
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}
plugin {
  acl = vfile:/usr/local/etc/dovecot/global-acls:cache_secs=300
  acl_shared_dict = proxy::acl
  fts = solr
  fts_autoindex = yes
  fts_solr = break-imap-search url=http://127.0.0.1:8983/solr/dovecot/
  mailbox_alias_new = Sent Messages
  mailbox_alias_new2 = Sent Items
  mailbox_alias_new3 = Deleted Messages
  mailbox_alias_old = Sent
  mailbox_alias_old2 = Sent
  mailbox_alias_old3 = Trash
  sieve = file:~/sieve;active=~/.dovecot.sieve
}
protocols = imap lmtp sieve
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
  unix_listener auth-userdb {
    group = mail
    mode = 0600
    user = vmail
  }
}
service config {
  unix_listener config {
    user = vmail
  }
}
service dict {
  unix_listener dict {
    group = mail
    mode = 0660
    user = vmail
  }
}
service doveadm {
  user = vmail
}
service imap-login {
  process_min_avail = 10
  service_count = 1
}
service imap-postlogin {
  executable = script-login /usr/local/etc/dovecot/post-login.sh
  user = $default_internal_user
}
service imap {
  executable = imap imap-postlogin
}
service lmtp {
  process_min_avail = 5
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    group = mail
    mode = 0666
    user = vmail
  }
}
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
  inet_listener sieve_deprecated {
    port = 2000
  }
  process_min_avail = 0
  service_count = 1
}
ssl_cert = <[path stripped by the list archive]
protocol imap {
  mail_plugins = fts fts_solr acl zlib virtual imap_acl imap_zlib mailbox_alias
}
local 192.168.0.4 {
  protocol imap {
    ssl_cert = <[path stripped by the list archive]

sdbox filesystem backup potential excludes

2018-08-13 Thread Daniel Miller
When doing a filesystem backup of a moderate sdbox mailstore (300GB) - 
are there any files that can be safely excluded from the backup?  Like 
*.log or *.backup?  Or are they all "vital" for recovery?


I'm already excluding the sdbox/virtual folders as it looks like they 
get created and updated as needed.
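For reference, this is roughly what the exclude looks like in my rsync
invocation (paths match my layout, other flags trimmed):

# skip the per-user virtual folder trees; everything else is copied
rsync -a --exclude='sdbox/virtual/' /var/mail/ backuphost:/backup/mail/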


--
Daniel



DMARC mailing list rejections

2018-01-15 Thread Daniel Miller
I get about a half dozen rejection messages from various servers when I 
post to this list. Is there something I need to configure differently in my 
DMARC record to make it more compliant?
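(For reference, the record in question - domain substituted:

dig +short TXT _dmarc.example.com
)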


Daniel




Re: Submission/SMTP proxy server

2018-01-15 Thread Daniel Miller

On 1/14/2018 6:18 PM, Stephan Bosch wrote:

Op 1/12/2018 om 8:18 PM schreef Daniel Miller:

Sorry if this seems elementary - but a question on
implementation/usage/purpose of this.  My understanding is at this
time the SMTP proxy server is only that - it does not implement any
further functionality.  So its availability now is purely for testing
purposes.  Is that accurate?

No. This is a proxy that adds functionality that is normally either
rather difficult to achieve or not implemented for common SMTP software
(e.g. BURL).
My question was probably poorly phrased.  Based on the thread "New 
Dovecot service: SMTP Submission (RFC6409)" of last month it appears 
that BURL & URLAUTH are implemented in this proxy - but no clients 
presently support them?  And the particular use case of directly placing 
the mail into a "Sent" folder is not presently available (though 
hopefully soon!)?  So again, at this time, what would I use this service 
for besides testing it in advance of future development?



I secondly assume that this is intended for trusted clients only - so
this is not intended for processing email submitted via port 25.

It is a submission service. Port 25 is for mail transport. Read
https://tools.ietf.org/html/rfc6409 for more details about the
difference between the two.

Understood.  Just wanted to verify.



And thirdly - if a separate firewall/anti-spam/virus/authentication
service is run outside of the MTA (like ASSP) then the Dovecot proxy
should be inserted between that and the final MTA?

Dovecot submission is meant to be talking to the client directly, so it
would be in front of it all. So, I'd expect Dovecot<->ASSP<->MTA.
Dovecot would in that case take care of the authentication.
That would work with trusted networks - but when various services 
(including ASSP) are used to limit connections by IP (particularly to 
combat brute-force attacks), I would think Dovecot should sit behind 
that protection rather than be directly exposed.  Or are there other 
built-in security features I'm not aware of?
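For my own notes, my understanding of the Dovecot side of that chain is
roughly the following (setting names as I read them in the submission
docs; the relay host/port are placeholders for wherever ASSP listens):

protocols = $protocols submission

# hand authenticated submissions to the next hop (ASSP here)
submission_relay_host = 127.0.0.1
submission_relay_port = 10025
submission_relay_trusted = yes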


--
Daniel


Submission/SMTP proxy server

2018-01-12 Thread Daniel Miller
Sorry if this seems elementary - but a question on 
implementation/usage/purpose of this.  My understanding is at this time 
the SMTP proxy server is only that - it does not implement any further 
functionality.  So its availability now is purely for testing purposes.  
Is that accurate?


I secondly assume that this is intended for trusted clients only - so this 
is not intended for processing email submitted via port 25.


And thirdly - if a separate firewall/anti-spam/virus/authentication 
service is run outside of the MTA (like ASSP) then the Dovecot proxy 
should be inserted between that and the final MTA?


--
Daniel



Re: Dovecot and Letsencrypt certs

2017-09-12 Thread Daniel Miller
And remove that "postfix reload" command - Postfix doesn't require 
explicit reloading. It'll pickup the changed cert automagically.


Daniel

On 9/12/2017 9:26 AM, Daniel Miller wrote:

What's wrong with using a certbot "post-hook" script such as:

#!/bin/bash
echo "Letsencrypt renewal hook running..."
echo "RENEWED_DOMAINS=$RENEWED_DOMAINS"
echo "RENEWED_LINEAGE=$RENEWED_LINEAGE"

if grep --quiet "your.email.domain" <<< "$RENEWED_DOMAINS"; then
    /usr/local/sbin/dovecot reload
    /usr/sbin/postfix reload
fi

Daniel

On 9/11/2017 1:57 PM, Joseph Tam wrote:

<mas...@remort.net> writes:


"writing a script to check the certs" - there is no need to write any
scripts. As one mentioned, it's done by a hook to certbot. Please read
the manuals for LE or certbot. The issue you have is quite common and
of course certbot designed to do it for you.


Won't work, of course, if you employ the least-privilege security principle
and run the certbot as a non-privileged user.  You'll need a script with
administrator privileges to detect cert renewals and restart the service.


I can't willy-nilly restart dovecot to pick up renewed certs without
webmail disruptions.  (My webmail uses persistent IMAP sessions.)
All users get dumped and need to re-authenticate.  If a user happens to
be drafting a message that took 2 hours to compose, I will surely hear
about it.  I should probably install an IMAP proxy to isolate the effects
of restarts.  Most mail readers cope with restarts just fine, though.

Joseph Tam <jtam.h...@gmail.com>


Re: Dovecot and Letsencrypt certs

2017-09-12 Thread Daniel Miller

What's wrong with using a certbot "post-hook" script such as:

#!/bin/bash
echo "Letsencrypt renewal hook running..."
echo "RENEWED_DOMAINS=$RENEWED_DOMAINS"
echo "RENEWED_LINEAGE=$RENEWED_LINEAGE"

if grep --quiet "your.email.domain" <<< "$RENEWED_DOMAINS"; then
    /usr/local/sbin/dovecot reload
    /usr/sbin/postfix reload
fi
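
(For completeness, I just let certbot run it on renewal - assuming a
recent enough certbot, either of these works; "le-hook.sh" is whatever
you named the script above:

# certbot runs everything in this directory after each successful renewal
install -m 755 le-hook.sh /etc/letsencrypt/renewal-hooks/deploy/

# or name it explicitly
certbot renew --deploy-hook /usr/local/sbin/le-hook.sh

If memory serves, the deploy hook is also where $RENEWED_DOMAINS and
$RENEWED_LINEAGE get populated.)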

Daniel

On 9/11/2017 1:57 PM, Joseph Tam wrote:

 writes:


"writing a script to check the certs" - there is no need to write any
scripts. As one mentioned, it's done by a hook to certbot. Please read
the manuals for LE or certbot. The issue you have is quite common and
of course certbot designed to do it for you.


Won't work, of course, if you employ the least-privilege security principle
and run the certbot as a non-privileged user.  You'll need a script with
administrator privileges to detect cert renewals and restart the service.

I can't willy-nilly restart dovecot to pick up renewed certs without
webmail disruptions.  (My webmail uses persistent IMAP sessions.)
All users get dumped and need to re-authenticate.  If a user happens to
be drafting a message that took 2 hours to compose, I will surely hear
about it.  I should probably install an IMAP proxy to isolate the effects
of restarts.  Most mail readers cope with restarts just fine, though.

Joseph Tam 


Re: Auth Policy Server/wforce/weakforced

2017-08-04 Thread Daniel Miller

On 8/4/2017 12:48 PM, Daniel Miller wrote:

On 8/3/2017 6:11 AM, Teemu Huovila wrote:


On 02.08.2017 23:35, Daniel Miller wrote:
Is there explicit documentation available for the (probably trivial) 
configuration needed for Dovecot and Wforce?  I'm probably missing 
something that should be perfectly obvious...


Wforce appears to start without errors.  I added a file to dovecot's 
conf.d:


95-policy.conf:
auth_policy_server_url = http://localhost:8084/
auth_policy_hash_nonce = this_is_my_super_secret_something

Looking at the Wforce console I see:

WforceWebserver: HTTP Request "/" from 127.0.0.1:45108: Web 
Authentication failed


In wforce.conf I have the (default):

webserver("0.0.0.0:8084", "--WEBPWD")

Do I need to change the "--WEBPWD"?  Do I need to specify something 
in the Dovecot config?
You could try putting an actual password, in plain text, where 
--WEBPWD is. Then add that base64 encoded to dovecot setting 
auth_policy_server_api_header.


I knew it would be something like that.  I've made some changes but 
I'm still not there.  I presently have:


webserver("0.0.0.0:8084", "--WEBPWD ultra-secret-secure-safe")
in wforce.conf (and I've tried with and without the --WEBPWD)

and

auth_policy_server_api_header = Authorization: Basic 
dWx0cmEtc2VjcmV0LXNlY3VyZS1zYWZl

in 95-policy.conf for dovecot

Obviously I'm still formatting something wrong.


I think I've got something working a little better.  I'm using:
webserver("0.0.0.0:8084", "ultra-secret-secure-safe")
(so I removed the --WEBPWD - that's a placeholder, not an argument 
declaration)


and for dovecot, the base64 encoding needs to be "wforce:password" 
instead of just the password.
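
In other words, something like this (using the example password from
above):

# the header value is base64 of "wforce:<password>", not the password alone
echo -n 'wforce:ultra-secret-secure-safe' | base64

# sanity check - the "Web Authentication failed" errors should stop
curl -si -u wforce:ultra-secret-secure-safe http://127.0.0.1:8084/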


Now I have to see what else needs to be tweaked.

Daniel


Re: Auth Policy Server/wforce/weakforced

2017-08-04 Thread Daniel Miller

On 8/3/2017 6:11 AM, Teemu Huovila wrote:


On 02.08.2017 23:35, Daniel Miller wrote:

Is there explicit documentation available for the (probably trivial) 
configuration needed for Dovecot and Wforce?  I'm probably missing something 
that should be perfectly obvious...

Wforce appears to start without errors.  I added a file to dovecot's conf.d:

95-policy.conf:
auth_policy_server_url = http://localhost:8084/
auth_policy_hash_nonce = this_is_my_super_secret_something

Looking at the Wforce console I see:

WforceWebserver: HTTP Request "/" from 127.0.0.1:45108: Web Authentication 
failed

In wforce.conf I have the (default):

webserver("0.0.0.0:8084", "--WEBPWD")

Do I need to change the "--WEBPWD"?  Do I need to specify something in the 
Dovecot config?

You could try putting an actual password, in plain text, where --WEBPWD is. 
Then add that base64 encoded to dovecot setting auth_policy_server_api_header.

I knew it would be something like that.  I've made some changes but I'm 
still not there.  I presently have:


webserver("0.0.0.0:8084", "--WEBPWD ultra-secret-secure-safe")
in wforce.conf (and I've tried with and without the --WEBPWD)

and

auth_policy_server_api_header = Authorization: Basic 
dWx0cmEtc2VjcmV0LXNlY3VyZS1zYWZl

in 95-policy.conf for dovecot

Obviously I'm still formatting something wrong.

Daniel


Auth Policy Server/wforce/weakforced

2017-08-02 Thread Daniel Miller
Is there explicit documentation available for the (probably trivial) 
configuration needed for Dovecot and Wforce?  I'm probably missing 
something that should be perfectly obvious...


Wforce appears to start without errors.  I added a file to dovecot's conf.d:

95-policy.conf:
auth_policy_server_url = http://localhost:8084/
auth_policy_hash_nonce = this_is_my_super_secret_something

Looking at the Wforce console I see:

WforceWebserver: HTTP Request "/" from 127.0.0.1:45108: Web 
Authentication failed


In wforce.conf I have the (default):

webserver("0.0.0.0:8084", "--WEBPWD")

Do I need to change the "--WEBPWD"?  Do I need to specify something in 
the Dovecot config?


--
Daniel


Re: Auth Policy Server

2017-06-30 Thread Daniel Miller

On 6/30/2017 12:05 PM, Aki Tuomi wrote:

On June 30, 2017 at 9:49 PM Daniel Miller <dmil...@amfes.com> wrote:


I've made a preliminary auth policy server in Perl - and it sort of
works (mostly) - but I've got some questions on "proper" implementation.



Hi!

First of all, which version are you running, and can you get a bt full 
backtrace of the crash?

Secondly, the endpoint does not need to be a proper web server, you can compare 
with https://github.com/PowerDNS/weakforced which is another implementation of 
auth policy server.

Aki


That link helped a lot - among other things forcing me to read.  I 
actually broke my policy server trying to "improve" it - I implemented a 
30-second auth delay on valid logins!  Setting that back to 0 seems to 
do the trick...


I'm running Dovecot 2.2.28.  For the bt - I'll be happy to if still 
desired, but you'll have to give me instructions as I don't know how.
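(If it's the usual gdb routine I can guess at it - something like the
following against the auth binary, with the core path depending on where
cores land on my system:

gdb /usr/local/libexec/dovecot/auth /path/to/core
(gdb) bt full

but corrections welcome.)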


As I continue tweaking this, if there's any interest I'll see about 
sharing it.  For my own needs I wanted a GeoIP-based policy.  My 
thinking, skewed as it is, is that while SMTP needs to be relatively 
open - as I have friends & business contacts in other countries - the 
only people who access my IMAP server are somewhere in my country.  
Therefore, simply restricting login attempts to IPs in my country will 
block the majority of botnets (at least, that's what I think I'm 
seeing from my logs).
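
The country check itself is the easy part - roughly this, assuming the
legacy geoip-bin geoiplookup tool; my policy server then just turns the
result into the JSON status reply (0 = allow, negative = reject, as I
understand the protocol):

#!/bin/sh
# succeed if the connecting IP geolocates to the allowed country
ALLOW="US"
ip="$1"
if geoiplookup "$ip" | grep -q ": ${ALLOW},"; then
    exit 0    # -> {"status": 0}
else
    exit 1    # -> {"status": -1}
fi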


Daniel


Auth Policy Server

2017-06-30 Thread Daniel Miller
I've made a preliminary auth policy server in Perl - and it sort of 
works (mostly) - but I've got some questions on "proper" implementation.


It appears the communication is HTTP based - is the intent to talk to a 
"proper" webserver, or is a simple dedicated daemon appropriate (which 
is what I made)?


Should connections be maintained, or terminated after each response 
(which is my current setup)?


If my implementation is correct, I may have found a bug, as I have some 
log entries like:


Jun 30 08:24:20 bubba dovecot: imap-login: Warning: Auth connection 
closed with 1 pending requests (max 31 secs, pid=10253, EOF)
Jun 30 08:24:20 bubba dovecot: auth: Fatal: master: service(auth): child 
31631 killed with signal 11 (core dumped)


Guidance would be appreciated.

--
Daniel


Re: localhost logins

2017-06-27 Thread Daniel Miller

On 6/27/2017 1:33 AM, Daniel Miller wrote:

On 6/27/2017 12:42 AM, Fabian Schmidt wrote:


Am 26.06.17 schrieb Daniel Miller:


On 2017-06-23 15:09, Marcus Rueckert wrote:

On Fri, 23 Jun 2017 11:38:28 -0700
Daniel Miller <dmil...@amfes.com> wrote:


While auditing my logs after an account was compromised, I see a
number of entries like:

Jun 23 11:32:18 bubba dovecot: auth:
ldap("one-of-my-accounts",127.0.0.1): invalid credentials


webmail?


Nagios or someone else monitoring dovecot?


Not running such - and they wouldn't be hitting multiple accounts.

Now I'm more confused.  I changed Dovecot to listen only on a specific 
IP address - and I still see localhost log lines:


Jun 27 12:03:27 bubba dovecot: auth: 
ldap(someu...@mydomain.com,127.0.0.1): invalid credentials


The only other thing I can think of - Postfix runs on this server and 
uses Dovecot SASL.  Is it possible the Dovecot auth log line is caused 
by a Postfix connection attempt?
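
If it helps, I'll try correlating the two in the logs - something along
these lines (log path assumed):

# do the Dovecot auth failures line up with Postfix SASL failures
# at the same timestamps?
grep 'invalid credentials' /var/log/mail.log
grep 'authentication failed' /var/log/mail.log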


Daniel

