CA certs for Dovecot-as-client (proxy)

2021-04-21 Thread Peter Mogensen
Hi,

When using proxy=y, ssl=yes (Dovecot 2.3.13) I consistently get this
logged when trying to validate the remote server cert.

"Disconnected by server: Connection closed: Received invalid SSL
certificate: unable to get local issuer certificate: /C=BE/O=GlobalSign
nv-sa/CN=AlphaSSL CA - SHA256 - G2 (check ssl_client_ca_* settings?)"

As I read the 2.3.x documentation (and the error logged) Dovecot needs
to have the trusted CA cert with ssl_client_ca_file or ssl_client_ca_dir.

So I've tried every combination: putting the cert (and the GlobalSign
root CA signing it) in ssl_client_ca_dir, and both individually and as a
bundle in ssl_client_ca_file - without luck.
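
Roughly, the variants I've tried look like this (the paths here are just
examples, not my real ones):

ssl_client_ca_file = /etc/dovecot/alphassl-chain.pem
# or:
ssl_client_ca_dir = /etc/ssl/certs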

But even though I can verify the cert with "openssl s_client -connect"
and with "openssl verify", no matter what I put in the ssl_client_ca_*
settings it seems Dovecot just ignores it.

It does complain though, if I point it to a non-existent file, but not
if I just fill the file with invalid cert data which can't be parsed.

I'm starting to doubt whether it consults the cert data at all.

I'm a bit at a loss on how to debug this further, short of running it in
gdb. "verbose_ssl" doesn't really say anything about the process of
finding a CA cert to check against.

Have I misunderstood the config?

/Peter


Leaked files in maildir "tmp" after vsz_limit crashes

2020-09-30 Thread Peter Mogensen
Hi,

Lately I've seen a few examples of users hitting the vsz_limit (usually
while trying to "delete" mails in Spam/Junk by moving them to Trash, with a
large dovecot.index.cache) - which resulted in mails left/leaked in the
tmp directory of Trash.

Sometimes it seems the client gets into a state where it repeatedly tries
to sync the client and server state, so it does it again and again,
building up the number of files/links in tmp.

It seems the default 1 week interval for "unlink_old_files()" is not
enough to prevent this from blowing up inode-wise.
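
A rough way to see how bad it is would be to just count the leftover
files, something like this (the path layout is only an example):

find /srv/vmail/*/Maildir/.Trash/tmp -type f | wc -l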

However... lowering it - or increasing vsz_limit - feels a bit like
kicking the can down the road.

PS: This is on dovecot 2.2.36

/Peter


Re: dsync and altpath on shared storage.

2019-09-05 Thread Peter Mogensen via dovecot



On 9/4/19 2:12 PM, Peter Mogensen wrote:
> 
> So... I've done some testing.
> 
> One method which seemed to work - at least for primitive cases - was to:
> 
> * Mount the ALT storage on the destination.
> * Run "doveadm force-resync \*" on the destination.
>   (putting all the mails in ALT storage into the dovecot.map.index)
> * Run dsync from source to destination.
> 
> Of course... if there was some way to avoid step 2...

So ... I have an idea.

Assuming the user's mail_location is:

mdbox:~/mdbox:ALT=/alt:INDEX=~/idx

And /alt is a shared storage mount.

Then I suspect the following steps would make dsync avoid transferring
mails on shared storage:

1) Create a rudimentary mdbox on the target side (just containing the
dbox-alt-root link)

2) Mount /alt on the target host

3) Copy all dovecot.index and dovecot.map.index files in ~/idx from source
to target. That is: not the transaction (*.log) files or cache files.
I suppose this needs to be done under appropriate read locking (a rough
copy sketch follows after this list).

4) doveadm sync -u source doveadm dsync-server -u target
  ... to get the rest of the mails in primary storage and all updates
since the index files were snapshotted.
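
For step 3, I'm thinking of something along these lines (untested sketch;
the read locking is left out and the paths are placeholders following the
example above):

rsync -a -m \
  --include='*/' --include='dovecot.index' --include='dovecot.map.index' \
  --exclude='*' \
  source-host:/path/to/user/idx/ /path/to/user/idx/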



It would be nice if there was a way to force the dovecot*index.log files
to be snapshotted into the index files.

If the aim is not to sync two different accounts but to simply move one
account from one host to a new host where it doesn't exist in advance,
are there any caveats with this?

... apart from a few missing tools.

/Peter


Re: dsync and altpath on shared storage.

2019-09-04 Thread Peter Mogensen via dovecot


So... I've done some testing.

One method which seemed to work - at least for primitive cases - was to:

* Mount the ALT storage on the destination.
* Run "doveadm force-resync \*" on the destination.
  (putting all the mails in ALT storage into the dovecot.map.index)
* Run dsync from source to destination.

Of course... if there was some way to avoid step 2...

/Peter


Re: dsync and altpath on shared storage.

2019-09-03 Thread Peter Mogensen via dovecot



On 9/3/19 2:38 PM, Sami Ketola wrote:
> 
> 
>> On 3 Sep 2019, at 15.34, Peter Mogensen via dovecot  
>> wrote:
>>
>>
>>
>> On 9/2/19 3:03 PM, Sami Ketola wrote:
>>>> On 2 Sep 2019, at 15.25, Peter Mogensen via dovecot  
>>>> wrote:
>> ...
>>>> Is there anyway for dsync to avoid moving Gigabytes of data for could
>>>> just be "moved" by moving the mount?
>>>
>>>
>>> Not tested but you can probably do something like this in the target server:
>>>
>>> doveadm backup -u victim -R ssh sudouser@old-server "sudo doveadm 
>>> dsync-server -o mail_location=sdbox:/location-to-your-sdbox/ -u victim"
>>>
>>> just leave ALT storage path from the setting.
>>
>>
>> I'll have to test this... but my initial guess would be that doveadm
>> would then think the mails has disappeared. Would it then copy the index
>> metadata for those mails to the target host anyway?
> 
> 
> Hmm. That is true. It will probably not work after all then. 
> 
> Now I'm out of ideas how to do this efficiently.

I assume it won't even work to just premount the shared storage
read-only on the target side, so the mails are already there.
... since I suppose the receiving dsync reserves the right to re-pack
the m.* storage files?

/Peter



Re: dsync and altpath on shared storage.

2019-09-03 Thread Peter Mogensen via dovecot



On 9/2/19 3:03 PM, Sami Ketola wrote:
>> On 2 Sep 2019, at 15.25, Peter Mogensen via dovecot  
>> wrote:
...
>> Is there anyway for dsync to avoid moving Gigabytes of data for could
>> just be "moved" by moving the mount?
> 
> 
> Not tested but you can probably do something like this in the target server:
> 
> doveadm backup -u victim -R ssh sudouser@old-server "sudo doveadm 
> dsync-server -o mail_location=sdbox:/location-to-your-sdbox/ -u victim"
> 
> just leave ALT storage path from the setting.


I'll have to test this... but my initial guess would be that doveadm
would then think the mails have disappeared. Would it then copy the index
metadata for those mails to the target host anyway?

/Peter


dsync and altpath on shared storage.

2019-09-02 Thread Peter Mogensen via dovecot
Hi,

I was wondering...

If one had the mdbox ALT path set to a shared storage mount (say, on NFS)
and one wanted to move a mailbox to a different host... I guess in
principle it wouldn't be necessary to copy all the ALT storage through
dsync, when the volume could just be mounted on the new host.
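
For concreteness, think of a layout like this (the paths are just an example):

mail_location = mdbox:~/mdbox:ALT=/nfs/alt/%u

... where /nfs/alt is the shared mount.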

Is there any way for dsync to avoid moving gigabytes of data which could
just be "moved" by moving the mount?

/Peter


Auto rebuilding of Solr indexes on settings change?

2019-04-25 Thread Peter Mogensen via dovecot
Hi,

Looking at the source, it doesn't seem like fts-solr checks for settings
changes using fts_index_have_compatible_settings() like fts-lucene does.

Is there any special reason why fts-solr shouldn't also rebuild its
indexes if the settings have changed?

/Peter


Re: Solr connection timeout hardwired to 60s

2019-04-14 Thread Peter Mogensen via dovecot


Sorry... I got distracted halfway through and forgot to put a meaningful
subject so the archive could figure out the thread - resending.

On 4/14/19 4:04 PM, dovecot-requ...@dovecot.org wrote:

>> Solr ships with autoCommit set to 15 seconds and openSearcher set to
>> false on the autoCommit. The autoSoftCommit setting is not enabled by
>> default, but depending on how the index was created, Solr might try to
>> set autoSoftCommit to 3 seconds ... which is WAY too short.

I just run with the defaults: 15s autoCommit and no autoSoftCommit.

>> This thread says that dovecot is sending explicit commits.

I see explicit /update requests with softCommit and waitSearcher=true in a
tcpdump.

>> One thing
>> that might be happening to exceed 60 seconds is an extremely long
>> commit, which is usually caused by excessive cache autowarming, but
>> might be related to insufficient memory. The max heap setting on an
>> out-of-the-box Solr install (5.0 and later) is 512MB. That's VERY
>> small, and it doesn't take much index data before a much larger heap
>> is required.

I run with

SOLR_JAVA_MEM="-Xmx8g -Xms2g"

> I looked into the code (version 2.3.5.1):

This is 2.2.35. I haven't checked the difference against the 2.3.x source,
I must admit.

> I imagine that one of the reasons dovecot sends softCommits is because
> without autoindex active, and even if mailboxes are periodically indexed
> from cron, the last emails received will be indexed at the moment of the
> search.

I expect that dovecot has to, because of its default behavior of only
bringing the index up to date just before a search. So it has to wait for
the index result to be available if any new mails have been indexed.

> 1) a configurable batch size would enable to tune the number of emails
> per request and help stay under the 60 seconds hard coded http request
> timeout. A configurable http timeout would be less useful, since this
> will potentially run into other timeouts on solr side.

Being able to configure it is great.
But I don't think it solves much. I recompiled with 100 as the batch size
and it still ended in timeouts.
Then I recompiled with a 10 min timeout, and now I see all the batches
completing, with their processing time mostly between 1 and 2 minutes
(so with the 60s timeout they would all have failed).
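
For reference, the timeout change was just a bump of the hardcoded value
in solr-connection.c quoted in the original post - roughly:

-   http_set.request_timeout_msecs = 60*1000;
+   http_set.request_timeout_msecs = 10*60*1000;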

To me it looks like Solr simply takes too long to index. This is no
small machine - it's a 20-core Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
and for this test it's not doing anything else, so I'm a bit surprised
that even with only a few users this takes so long.

/Peter




Re: Solr connection timeout hardwired to 60s

2019-04-12 Thread Peter Mogensen via dovecot


Looking further at tcpdumps of the Dovecot->Solr traffic and the Solr
metrics, it doesn't seem like there's anything suspicious apart from the
TCP windows running full and Dovecot backing off ... until it times out
and closes the connection.

From my understanding of how Dovecot operates towards Solr, it will flush
~1000 documents towards Solr per /update request until it has traversed
the mailbox (let's say 20,000 mails), doing softCommits after each.

But is it really reasonable for Dovecot to expect that no request will
take more than 60s for Solr to process?
It doesn't seem like my Solr can handle that, although it does process
documents, and it does clear pending documents reasonably fast after
Dovecot closes the connection.

On the surface it looks like Dovecot is too impatient.

/Peter

On 4/10/19 6:25 PM, Peter Mogensen wrote:
> 
> 
> On 4/4/19 6:57 PM, Peter Mogensen wrote:
>>
>>
>> On 4/4/19 6:47 PM, dovecot-requ...@dovecot.org wrote:
>>> For a typical Solr index, 60 seconds is an eternity.  Most people aim
>>> for query times of 100 milliseconds or less, and they often achieve
>>> that goal.
>>
>> I'm pretty sure I get these while indexing, not querying.
>>
>> Apr 04 16:44:50 host dovecot[114690]: indexer-worker(m...@example.com):
>> Error: fts_solr: Indexing failed: Request timed out (Request queued
>> 66.015 secs ago, 1 attempts in 66.005 secs, 63.146 in http ioloop, 0.000
>> in other ioloops, connected 94.903 secs ago)
> 
> Doing a TCP dump on indexing operations which consistently fail, I see
> that there's a lot of softCommits which never get an HTTP answer:
> 
> ==
> POST /solr/dovebody/update HTTP/1.1
> Host: localhost:8983
> Date: Wed, 10 Apr 2019 14:22:29 GMT
> Expect: 100-continue
> Content-Length: 47
> Connection: Keep-Alive
> Content-Type: text/xml
> 
> HTTP/1.1 100 Continue
> 
> 
> 





Re: Solr connection timeout hardwired to 60s

2019-04-10 Thread Peter Mogensen via dovecot



On 4/4/19 6:57 PM, Peter Mogensen wrote:
> 
> 
> On 4/4/19 6:47 PM, dovecot-requ...@dovecot.org wrote:
>> For a typical Solr index, 60 seconds is an eternity.  Most people aim
>> for query times of 100 milliseconds or less, and they often achieve
>> that goal.
> 
> I'm pretty sure I get these while indexing, not querying.
> 
> Apr 04 16:44:50 host dovecot[114690]: indexer-worker(m...@example.com):
> Error: fts_solr: Indexing failed: Request timed out (Request queued
> 66.015 secs ago, 1 attempts in 66.005 secs, 63.146 in http ioloop, 0.000
> in other ioloops, connected 94.903 secs ago)

Doing a TCP dump on indexing operations which consistently fail, I see
that there's a lot of softCommits which never get an HTTP answer:

==
POST /solr/dovebody/update HTTP/1.1
Host: localhost:8983
Date: Wed, 10 Apr 2019 14:22:29 GMT
Expect: 100-continue
Content-Length: 47
Connection: Keep-Alive
Content-Type: text/xml

HTTP/1.1 100 Continue





... in contrast to the first softCommit on the connection:


POST /solr/dovebody/update HTTP/1.1
Host: localhost:8983
Date: Wed, 10 Apr 2019 14:20:53 GMT
Expect: 100-continue
Content-Length: 47
Connection: Keep-Alive
Content-Type: text/xml

HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Content-Type: application/xml; charset=UTF-8
Content-Length: 156





  0
  37


==

The missing softCommit responses seem to start right after the last
added document:
==

0

HTTP/1.1 200 OK
Content-Type: application/xml; charset=UTF-8
Content-Length: 156





  0
  12


POST /solr/dovebody/update HTTP/1.1
Host: localhost:8983
Date: Wed, 10 Apr 2019 14:22:29 GMT
Expect: 100-continue
Content-Length: 47
Connection: Keep-Alive
Content-Type: text/xml

HTTP/1.1 100 Continue


===

... and for the rest of the TCP dump the softCommit POSTs get no
responses.

/Peter


Re: Solr connection timeout hardwired to 60s

2019-04-04 Thread Peter Mogensen via dovecot



On 4/4/19 6:47 PM, dovecot-requ...@dovecot.org wrote:
> For a typical Solr index, 60 seconds is an eternity.  Most people aim
> for query times of 100 milliseconds or less, and they often achieve
> that goal.

I'm pretty sure I get these while indexing, not querying.

Apr 04 16:44:50 host dovecot[114690]: indexer-worker(m...@example.com):
Error: fts_solr: Indexing failed: Request timed out (Request queued
66.015 secs ago, 1 attempts in 66.005 secs, 63.146 in http ioloop, 0.000
in other ioloops, connected 94.903 secs ago)

/Peter


Solr connection timeout hardwired to 60s

2019-04-04 Thread Peter Mogensen via dovecot
Hi,

What's the recommended way to handle timeouts on large mailboxes, given
the hardwired request timeout of 60s in solr-connection.c:

   http_set.request_timeout_msecs = 60*1000;


/Peter




Way to remove FTS indexes

2019-03-19 Thread Peter Mogensen via dovecot
Hi,

I was wondering if there is any way to remove FTS indexes in order to
have them rebuilt on the next BODY search?

All the doveadm commands I can find seem to result in fully built
indexes (which is nice if that's what you want).
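
By that I mean commands along the lines of this (the address is just an
example), which build the index up front instead of removing it:

doveadm index -u user@example.com '*'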

/Peter


Different listeners with different config

2018-12-07 Thread Peter Mogensen
Hi,

I was wondering about the status of being able to create a dedicated
listener in Dovecot with - say - extra features enabled.

As an example: say I wanted to have Dovecot listening on port 144 with
a slightly different set of auth mechanisms enabled.
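
Adding the listener itself is simple enough - a minimal sketch:

service imap-login {
  inet_listener imap-alt {
    port = 144
  }
}

It's attaching different settings (like auth_mechanisms) to just that
listener that I'm wondering about.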

/Peter


Re: auth_policy in a non-authenticating proxy chain

2018-09-15 Thread Peter Mogensen



On 09/15/2018 10:41 AM, Aki Tuomi wrote:
> Point of sending the success ones is to maintain whitelist as well as
> blacklist so you know which ones you should not tarpit anymore. We
> know it does scale as we have very large deployments using the whole
> three request per login model.
>
>

"Success" in a proxy which is not itself authenticating only means that
it knows where to proxy the requested username to.
I'm not sure whether this would be useful input to a whitelist.

I'm not doubting that 3 req/login scales.

/Peter



Re: auth_policy in a non-authenticating proxy chain

2018-09-15 Thread Peter Mogensen
Hi ...

After the thread below, I wrote a patch to select, on a node-by-node
basis, which auth-policy requests should be done from that node.

To my surprise, the exact same functionality then turned up in 2.2.34
with just slightly different option names:

*auth_policy_check_before_auth*: Whether to do policy lookup before
authentication is started

*auth_policy_check_after_auth*: Whether to do policy lookup after
authentication is completed

*auth_policy_report_after_auth*: Whether to report the authentication result


This is great.
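
On the proxy, I take it this maps to something like the following (the
policy server URL is just a placeholder):

auth_policy_server_url = http://policy.example.com:4001/
auth_policy_check_before_auth = yes
auth_policy_check_after_auth = no
auth_policy_report_after_auth = yes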

However... in the setup where you have a proxy in front of a backend and
the backend does all authentication, it would be nice to have an option to
only do report requests in case of authentication failure.

The point being that, if the proxy authentication "fails", it does not
proxy, and the backend will never see the request or do any actual
authentication or reporting.

However, you would probably still want to know that there has been a
failed auth attempt.
Enabling "report_after_auth" on the proxy would, however, send a lot of
meaningless traffic about "successful" proxy auth events which would
tell you basically nothing the backend wouldn't later also report.

And the ratio between successes and failures in the proxy is probably
very high.


regards,

Peter Mogensen


On 12/14/2017 08:30 AM, Peter Mogensen wrote:
> Hi,
>
> I was looking into the new Authentication Policy feature:
> https://wiki2.dovecot.org/Authentication/Policy
>
> I had kinda hoped that I would be able to enfore this in a proxy running
> in front of several backends. This proxy does not authenticate. It use
> "nopassword".
>
>
> But I realize that the "succes" reported in the final authpolicy req.
> (command=report) is not what is actaully happening on the IMAP protocol
> level, but rather the result of the passdb chain in the proxy.
> (I should probably have predicted this, it's kinda reasonable).
>
> However... since the proxy use "nopassword", ALL passdb lookups result
> in "success", so the proxy will never report an authentication failure
> to the authpolicy server.
>
> This, of course, forces me to do the authpolicy check on the backend
> with a shared state, but It would still have been nice to have the proxy
> being able to do the first "command=allow" req. and reject attemps
> already there even though the backend does "command=report".
>
> /Peter



Re: auth_policy in a non-authenticating proxy chain

2017-12-14 Thread Peter Mogensen


On 2017-12-14 10:31, Sami Ketola wrote:
> 
>> On 14 Dec 2017, at 8.30, Peter Mogensen <a...@one.com> wrote:
>> However... since the proxy use "nopassword", ALL passdb lookups result
>> in "success", so the proxy will never report an authentication failure
>> to the authpolicy server.
> 
> 
> Why not authenticate the sessions at the proxy level already? Is there any
> reason not to do that?

Yes. Several.
This is not a new setup. It's an already well established setup and it's
unlikely that authentication can be moved to the proxy.

/Peter



auth_policy in a non-authenticating proxy chain

2017-12-13 Thread Peter Mogensen
Hi,

I was looking into the new Authentication Policy feature:
https://wiki2.dovecot.org/Authentication/Policy

I had kinda hoped that I would be able to enforce this in a proxy running
in front of several backends. This proxy does not authenticate. It uses
"nopassword".


But I realize that the "success" reported in the final authpolicy request
(command=report) is not what is actually happening on the IMAP protocol
level, but rather the result of the passdb chain in the proxy.
(I should probably have predicted this; it's kinda reasonable.)
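
For context, the proxy passdb is roughly this shape (the host here is just
a placeholder - in reality it comes from a per-user lookup):

passdb {
  driver = static
  args = proxy=y nopassword=y host=10.1.2.3
}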

However... since the proxy uses "nopassword", ALL passdb lookups result
in "success", so the proxy will never report an authentication failure
to the authpolicy server.

This, of course, forces me to do the authpolicy check on the backend
with a shared state, but it would still have been nice to have the proxy
be able to do the first "command=allow" request and reject attempts
already there, even though the backend does "command=report".

/Peter


dict client auth-worker service count not obeyed?

2017-08-10 Thread Peter Mogensen
Hi,

I've noticed that in recent dovecot versions (at least since 2.2.29, and
not in 2.2.12) a dovecot auth-worker will happily issue two
Lshared/passdb... queries on the same dict socket. Not always, but
sometimes.

It used to be that the dict client always closed the socket (AFAIK)
after 1 query. But now I see 2 queries issued on the same connection.

How does this work wrt. the service_count limit of the auth worker
process, which defaults to 1? Is it not obeyed?
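
To be explicit, I mean the stock setting, i.e. effectively:

service auth-worker {
  service_count = 1
}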

/Peter


When will passdb callback to mechanism yield PASSDB_RESULT_NEXT?

2017-05-28 Thread Peter Mogensen

Hi,

code question...

I've been trying to figure out the implications of the new 
"noauthenticate" passdb field.


Internally it causes a passdb to result in PASSDB_RESULT_NEXT.

When a SASL mechanism calls 
auth_request_lookup_credentials(...,callback) the passdb result is 
passed to the callback.


But I can't really figure out when that result will ever be 
PASSDB_RESULT_NEXT. It seems the passdb fallthrough resolver will always 
replace it with PASSDB_RESULT_INTERNAL_FAILURE if it ends up being the 
last result.


Can it ever leak into the callback, or is it an internal intermediate 
value of the passdb resolver?


/Peter


Re: LDA doing passdb queries ?

2016-08-22 Thread Peter Mogensen


On 2016-08-22 13:21, Peter Mogensen wrote:

===



protocol lda {
#  passdb {
#driver = static
#  }

  userdb {
args = /etc/dovecot/dovecot-dict-auth.conf.ext
driver = dict
result_success = continue-ok
result_failure = return-fail
  }
  userdb {
driver = static
args = uid=vmail gid=vmail home=/srv/vmail/%u mail=maildir:~
  }
}
==


I realized that the passdb is needed when using the static driver to 
find out which users actually exist. And that you have to use 
args=allow_all_users=yes.


But it seems the logic that detects that a passdb is needed doesn't 
discover that I have a dict userdb before the static one?!


Anyway ... I think I got what I wanted by not trying to change the user 
in a userdb, but doing it in a passdb:


==
protocol !lmtp {
  passdb {
driver = passwd-file
args = /etc/dovecot/accounts
  }
}
protocol lmtp {
  passdb {
args = /etc/dovecot/dovecot-dict-auth.conf.ext
driver = dict
  }
}

userdb {
  driver = static
  args = uid=vmail gid=vmail home=/srv/imip/vmail mail=maildir:~
}

==

Where the dict passdb returns something like:
O{"nopassword":"yes", "user": "static-user"}


This leaves me with 1 question though:
Shouldn't you be able to do this with a userdb rewriting "user" on 
delivery (LMTP RCPT) and no passdb?



/Peter


Re: LDA doing passdb queries ?

2016-08-22 Thread Peter Mogensen



Sorry... I meant LDA - not LMTP.

More specifically ... the delivery happening during an LMTP session.

I'm trying something like this:

===
protocol !lda {
  passdb {
driver = passwd-file
args = /etc/dovecot/accounts
  }

  userdb {
driver = static
args = uid=vmail gid=vmail home=/srv/vmail/%u mail=maildir:~
  }
}

protocol lda {
#  passdb {
#driver = static
#  }

  userdb {
args = /etc/dovecot/dovecot-dict-auth.conf.ext
driver = dict
result_success = continue-ok
result_failure = return-fail
  }
  userdb {
driver = static
args = uid=vmail gid=vmail home=/srv/vmail/%u mail=maildir:~
  }
}
==


The point being that delivery is done to an address which needs an 
external userdb to rewrite the "user" value.

All other access (IMAP...) uses the defined accounts.

The above config won't do, since dovecot complains about a missing 
passdb database (and that PLAIN needs one) ... even if there's no actual 
authentication done during delivery.


It doesn't seem to work, since trying to do delivery via LMTP still 
consults /etc/dovecot/accounts


/Peter


LMTP doing passdb queries ?

2016-08-22 Thread Peter Mogensen

Hi,

I can see dovecot is doing a passdb query when handling the LMTP RCPT 
command.


That's kinda unexpected for me. I would have thought it only did a 
userdb lookup.


I have disabled lmtp_proxy to be sure it didn't do a passdb lookup to 
check the proxy field.


Is this expected? Doesn't the LDA only do userdb lookups?

/Peter


Suggestion: Split login_trusted_networks

2016-06-27 Thread Peter Mogensen

Hi,

For the upcoming 2.3 development, I'd like to re-suggest this:

It seems the use of login_trusted_networks is overloaded.

Example:
* It's used for indicating which hosts you trust to provide XCLIENT 
remote IP's. (like a proxy)
* It's used for indicating from which hosts you trust logins enough to 
disable auth penalty. (like in a webmail)


Often these two use cases involve different sets of hosts.

So you can't have one set of hosts which you trust for XCLIENT and 
another set of hosts you trust for not being the origin of brute force 
attacks.
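
Purely to illustrate the split - these setting names are invented and
don't exist in Dovecot:

# hypothetical:
xclient_trusted_networks = 10.0.1.0/24   # proxies allowed to send XCLIENT
auth_trusted_networks    = 10.0.2.0/24   # hosts exempt from auth penalty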


/Peter


Re: Proxying of non-plain SASL mechanisms.

2015-03-18 Thread Peter Mogensen


On 2015-03-18 00:47, Timo Sirainen wrote:
- If auth proxying is enabled, perform passdb lookup on non-plaintext 
auth on the initial SASL response. Return finished to the auth 
client with some mech-proxy=y extra field, so it knows to start 
proxying the SASL session to the destination server.


This is actually the tricky part.
To perform a proper passdb lookup, the proxy will have to be able to 
decode the user from the SASL IR even though it might not be able to 
authenticate. This requires knowledge of the SASL IR format (like 
extracting the authz-id/authn-id from the PLAIN argument).
That might not be possible for all SASL mechanisms. With GS2-KRB5 you 
can always get the authz-id. On the other hand, mechanisms like GSSAPI 
(which would work for other reasons) require actually performing the 
authentication before the authz-id can be known.
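
For PLAIN, for example, the decoding is trivial, since the initial response
is just base64 of authzid NUL authcid NUL password - something like this
(values made up):

printf 'masteruser\0peter\0secret' | base64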


So ... it might be a bit difficult to precisely define which mechanisms 
such a feature covers and which it doesn't.


/Peter


Proxying of non-plain SASL mechanisms.

2015-02-25 Thread Peter Mogensen
Hi,

I understand from earlier discussions that the reason dovecot doesn't
support proxying of SASL mechanisms other than those which supply the
plaintext password is that in general it would not be possible to proxy any
SASL mechanism, since it might protect against man-in-the-middle attacks
(which would prevent proxying).

However, that has led to a choice between letting users use PLAIN (or
equivalent), or having the proxy access the target hosts by master
password.
Of course, having the plaintext password, the proxy could in principle do
other challenge/response SASL handshakes with the target backend, but
right now only LOGIN and PLAIN are implemented.

So I wondered about the rationale for not just forwarding the SASL
handshake.
- First, blindly forwarding will not do, since the mech data has to be
decoded anyway to do any per-user passdb lookup (to, say, find the
target host). But you don't need the authentication to actually succeed to
do that. You only need the AuthZ-id or AuthN-id.

- Secondly, the design of the interaction between imap-login processes
and the auth service in general prevents forwarding of
multi-handshake SASL mechanisms, since the authentication must be done
before the proxying can be started. But it doesn't prevent forwarding of
single-handshake SASL mechanisms which use SASL-IR.

- Thirdly, while it's correct that some SASL mechanisms protect against
man-in-the-middle attacks, that doesn't apply to most single-handshake
SASL-IR mechanisms unless they do some kind of channel binding (like
SASL EXTERNAL).
For example, the GS2-KRB5 SASL mech would forward perfectly as long as
the Kerberos ticket doesn't put restrictions on the client IP address.

So, why not just extend the support for proxy authentication forwarding
to any single-handshake SASL-IR mechanism which doesn't use
channel binding? (Which includes PLAIN, but also GS2-KRB5, and possibly
others.)

/Peter


SPECIAL-USE again

2014-12-29 Thread Peter Mogensen

Hi,

Great to see Thunderbird support SPECIAL-USE now.

I would like to hear the list about the intended use of SPECIAL-USE.

I get the impression from several earlier mails here that the intention 
is for the server to globally decide what the folder-name of a specific 
SPECIAL-USE folder is for all users.
That's the way the documentation exemplifies it and what I get from 
posts like this:

http://www.dovecot.org/list/dovecot/2013-February/088129.html

I get the point that if *all* clients ignored the real folder name and 
only obeyed SPECIAL-USE, the clients could locally in the GUI decide the 
language and name of the \Sent, \Drafts and \Trash folders.

And the real folder name would become just an opaque identifier.

However, that's not how the world works. There are plenty of clients 
ignoring SPECIAL-USE and placing meaning in the actual folder name, in a 
language of their own choice.


It seems natural to me to let the user configure their own individual 
SPECIAL-USE tagging according to their language and/or mix of IMAP clients
 - either by setting IMAP METADATA (RFC5464) or by having the userdb 
return entries like: namespace/inbox/Papperskorg/specialuse=\Trash

(for a swede)

/Peter

PS: Also... isn't there a need for a Sieve extension to allow fileinto 
to target a folder based on SPECIAL-USE?


Re: SPECIAL-USE again

2014-12-29 Thread Peter Mogensen

On 2014-12-29 20:45, Stephan Bosch wrote:

For creating a special use mailbox there is the CREATE-SPECIAL-USE
capability (https://tools.ietf.org/html/rfc6154, Section 3). As you
suggested, the special use attributes can also be changed using the
METADATA capability (https://tools.ietf.org/html/rfc6154, Section 4).
Unfortunately, both of these features are not yet supported by Dovecot.


They are also basically two sides of the same feature.
For Dovecot to support CREATE-SPECIAL-USE it has to store that state 
somewhere anyway... and that would probably be in a METADATA dict.



I think it is already possible to return special use attributes from
userdb, although I haven't verified that.


Neither have I, but I see no reason why it shouldn't work. That would 
probably be the easiest way to support per-user SPECIAL-USE (which I 
think makes more sense than a global hardwired setting).


But to make it really useful, it would require Sieve support. Like:
http://www.ietf.org/mail-archive/web/sieve/current/msg05171.html

/Peter


Sieve counterpart of IMAP SPECIAL-USE

2014-11-26 Thread Peter Mogensen

Hi,

It would be useful to allow Sieve scripts to fileinto based on 
SPECIAL-USE flags.
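
Something along these lines is what I have in mind - the syntax is purely
illustrative, no such extension exists in Dovecot today:

require ["fileinto", "special-use"];
fileinto :specialuse "\\Junk" "Spam";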


But all I've been able to find about it is this:

http://www.ietf.org/mail-archive/web/sieve/current/msg05171.html

Has there been any progress since?

/Peter


Re: 2.2.14rc1 - dsync in backup mode still changes source permissions

2014-10-12 Thread Peter Mogensen

On 2014-10-11 08:51, Peter Mogensen wrote:

the docs says (or rather said) explicitly:

No changes are ever done to the source location.

...


Is the documentation intentionally changed to not make that promise
anymore?


I also notice that the -o for overriding userdb settings has been 
removed from the documentation.


Is that intentional?

/Peter


Re: 2.2.14rc1 - dsync in backup mode still changes source permissions

2014-10-11 Thread Peter Mogensen

On 2014-10-10 23:52, Timo Sirainen wrote:

It's not doing any changes to mailbox contents, but it's still updating the 
index/uidlist files as part of its normal operation.


It doesn't actually seem to change the content of the files, only 
permissions. But given that the docs say (or rather said) explicitly:


No changes are ever done to the source location.

I would expect operations on the source to be strictly read only - 
including permissions.


Is the documentation intentionally changed to not make that promise anymore?


# dsync -R -o mail_home=/users/user/maildir backup ssh -c arcfour -o 
StrictHostKeyChecking=no -i /root/.ssh/id-rsa-dsync source-host dsync -o 
mail_home=/users/user/maildir


You should use -u user@domain parameter in both sides so it drops root 
privileges.


Yes... but the problem here is that the current userdb has accounts 
which can be activated/de-activated, and de-activating an account makes 
the userdb act as if it doesn't exist.

... which makes dsync skip it.

I realize that's a broken userdb, but the possible work-around was to 
not do userdb lookups with dsync.


/Peter


2.2.14rc1 - dsync in backup mode still changes source permissions

2014-10-10 Thread Peter Mogensen

Hi,

It seems we are still able to reproduce this:
http://www.dovecot.org/list/dovecot/2014-May/096367.html

However... there are no longer any error messages. It just silently 
changes permissions on some dovecot files in the source maildir (most 
often dovecot-uidlist).


We're running dsync as root, with hardwired userdb values, for other 
reasons. So it has the OS permissions to change the source. But still, 
running in backup mode shouldn't ever change the source, should it?


The command line is of this format - running on destination-host:


# dsync -R -o mail_home=/users/user/maildir backup ssh -c arcfour -o 
StrictHostKeyChecking=no -i /root/.ssh/id-rsa-dsync source-host dsync 
-o mail_home=/users/user/maildir


/Peter


Suggestion: Split login_trusted_networks

2014-06-20 Thread Peter Mogensen

Hi,

It seems the use of login_trusted_networks is overloaded.

Example:
* It's used for indicating which hosts you trust to provide XCLIENT 
remote IP's.
* It's used for indicating from which hosts you trust logins enough to 
disable auth penalty. (like in a webmail)


However... trustwise, this is trusting two different entities.
The first case you put trust in the host.
In the second case, you actually put trust in the user which uses the 
webmail (unless of course the webmail it self implements auth-penalties).


So you can't have one set of hosts which you trust for XCLIENT and 
another set of hosts you trust for not being the origin of brute force 
attacks.


/Peter


[Dovecot] dsync changing source permission to root in backup mode

2014-05-27 Thread Peter Mogensen

Hi,

We have dsync failing once in a while when running in backup mode.
What's strange is that the result is that files on the *source* machine 
end up with the wrong permissions (set to uid 0).


Even though the dsync manual clearly says:
Backup mails from default mail location to location2 (or vice versa, if 
-R parameter is given). No changes are ever done to the source location. 
Any changes done in destination are discarded.


Running: 'dsync -R -o mail_home=/users/maildir backup ssh -c arcfour 
src-host dsync -o mail_home=/users/maildir'


I know it's running as root, but even then... it shouldn't modify the 
source in backup mode, should it?


The error message from dsync when failing is:

dsync-remote(root): Error: Cached message size larger than expected 
(5292 > 5289)
dsync-remote(root): Error: Maildir filename has wrong S value, renamed 
the file from 
/users/maildir/.Sent/cur/1381224782.M959810P3574.mail,S=5292,W=5411:2,S 
to /users/maildir/.Sent/cur/1381224782.M959810P3574.mail,S=5289:2,S
dsync-remote(root): Error: Corrupted index cache file 
/users/maildir/.Sent/dovecot.index.cache: Broken physical size for mail 
UID 1040
dsync-remote(root): Error: dsync(dst-host): 
read(/users/maildir/.Sent/cur/1381224782.M959810P3574.mail,S=5292,W=5411:2,S) 
failed: Cached message size larger than expected (5292 > 5289)



/Peter


Re: [Dovecot] dsync changing source permission to root in backup mode

2014-05-27 Thread Peter Mogensen

Oh ... sorry... I forgot the last log-line. (see below)

btw... tested with these versions:
between 2.2.12 at both ends, and
between dst=2.2.12, src=2.2.13


On 2014-05-27 15:03, Peter Mogensen wrote:

The error message from dsync when failing is:

dsync-remote(root): Error: Cached message size larger than expected
(5292 > 5289)
dsync-remote(root): Error: Maildir filename has wrong S value, renamed
the file from
/users/maildir/.Sent/cur/1381224782.M959810P3574.mail,S=5292,W=5411:2,S
to /users/maildir/.Sent/cur/1381224782.M959810P3574.mail,S=5289:2,S
dsync-remote(root): Error: Corrupted index cache file
/users/maildir/.Sent/dovecot.index.cache: Broken physical size for mail
UID 1040
dsync-remote(root): Error: dsync(dst-host):
read(/users/maildir/.Sent/cur/1381224782.M959810P3574.mail,S=5292,W=5411:2,S)
failed: Cached message size larger than expected (5292 > 5289)


dsync-local(root): Error: dsync(src-host): read() failed: read((fd)) 
failed: dot-input stream ends without '.' line


[Dovecot] The submission server

2014-02-17 Thread Peter Mogensen

Hi,

Like many others, I'm looking forward to the submission server.
But I have a question:

A use case with authenticated SMTP is to have the server restrict the 
From/Sender headers based on the authenticated user (and add the 
actual authenticated user to the headers).
Postfix supports this (AFAICS), and I can't imagine Exim doesn't either, 
with its elaborate config possibilities.
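
For comparison, the envelope-sender half of this in Postfix is the usual
sender/login map - roughly (the map path is just an example):

smtpd_sender_login_maps = hash:/etc/postfix/sender_logins
smtpd_sender_restrictions = reject_sender_login_mismatch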


But will that be possible with the Dovecot submission server?

/Peter


Re: [Dovecot] The submission server

2014-02-17 Thread Peter Mogensen

On 2014-02-17 21:06, Stephan Bosch wrote:

One piece of the puzzle is
important though: a method to convey the authenticated username to the
backend.


yeah... I figured that would be the crucial part.

Does the dovecot proxy send the authentication name, or the SASL 
authorization name?


/Peter


Re: [Dovecot] master user and ACL's

2014-02-13 Thread Peter Mogensen

On 2014-02-14 05:49, Timo Sirainen wrote:


Sounds like you don't want the master user to be special in any way now or in 
future. In that case setting master_user=%u would do exactly that now and 
always. (There might be some other features besides ACLs that could work 
differently for master user logins in future.)



It's not that I can't think of a need for a master user, but I think 
of SASL authz-id in more general terms - not as something only used for 
master users.

And actually... the GSSAPI mech in Dovecot already works that way.
The authz-id is looked up in the passdb and the authn-id (the principal) 
is matched against the k5principals (*) extra-field - not against the 
master user database.


A more general way would be to generalize the whole userok() check 
into a pluggable step between passdb lookup and userdb lookup, which 
tested whether the SASL authz-id request was ok (and maybe whether it was 
ok because it was a master user, or just because local authorization 
allowed it).


/Peter

*: Btw... k5principals is misspelled in the wiki docs as 
k5credentials, but I haven't been able to change it.


Re: [Dovecot] master user and ACL's

2014-02-12 Thread Peter Mogensen

On 2014-02-13 04:40, Timo Sirainen wrote:

On 9.2.2014, at 17.36, Peter Mogensen a...@one.com wrote:

But why is the master_user authn-id used in the ACLs and not the authz-id 
(requested-login-user) ?

Isn't the whole point of SASL authz-id semantics to have authorization resolved 
based on the authz-id?


Some people are using master user logins to do other types of things, such as 
allowing voicemail software to access only the Voicemail folder of everyone. Or 
spam software access only to the Spam folder.


But wouldn't the correct way to handle these use cases be to share the 
individual folders with the voicemail/spam user, with the needed ACLs - 
not to log in as the user?



Or an alternative read-only username+password for all users that can access the 
same user's mails only read-only.



This one is more tricky, since it mixes authentication and authorization 
more... which always needs careful thinking in a protocol like IMAP, where 
the resource accessed is tied to the user (as opposed to HTTP).


Intuitively, if I were to set this up, I would probably try having 2 
userdb entries pointing to the same mail_location, but with different 
acl_groups userdb fields.

... or something to that effect.
In other words ... not determine it based on authentication-ID, but 
based on authorization-ID.


My own use case is to have 1 authentication-ID be able to access 
several userdb accounts - with the same credentials - based on checking 
whether the given SASL authz-id is OK for that user. But from then on, 
just be that user.


Is specifying master_user=%u the official way to switch between these 
behaviours of which SASL ID the ACLs are checked against, or is there an 
enhancement of the dovecot functionality to consider, to handle SASL 
authz-id/authn-id in a more general way?


/Peter


[Dovecot] master user and ACL's

2014-02-09 Thread Peter Mogensen

Hi,

Quick question...I read in the docs that:
Master user is still subject to ACLs just like any other user, which 
means that by default the master user has no access to any mailboxes of 
the user.
... and that the standard workaround is to return master_user=%u from 
the userdb.
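
As far as I understand, that workaround amounts to something like this
(the driver here is just an example):

userdb {
  driver = passwd
  override_fields = master_user=%u
}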


But why is the master_user authn-id used in the ACLs and not the 
authz-id (requested-login-user) ?


Isn't the whole point of SASL authz-id semantics to have authorization 
resolved based on the authz-id?



/Peter


Re: [Dovecot] Dovecot MTA

2013-11-11 Thread Peter Mogensen

Timo Sirainen wrote:

 And Dovecot roadmap is slowly shrinking .. there aren’t all that many
 big features left anymore. Soon it’s mainly going to be improvements
 to reliability and performance. So I need to find some new things to
 do in any case. :)

True ...
If I try to make a wish list of features, many of them require fixing 
the IMAP protocol itself.

(Like not having the folder display name be the unique identifier.)

Which reminds me that the IMAP5 process (if there ever was one) seems to 
have slowed to a halt.

Now, there's a task for a developer looking for something to do ;-)

/Peter


[Dovecot] server side private/public key

2013-11-11 Thread Peter Mogensen

Christian Felsing wrote:
 Please consider to add server side private/public key encryption for 
incoming mails.
 If client logs on, the password is used to unlock users server side 
private key.
 If mail arrives from MTA or any other source, mail is encrypted with 
users public key.
 Key pair should be located in LDAP or SQL server. PGP and S/MIME 
should be supported.



This is for the situation if NSA or other organizations asks admin for
users mail insistently,


So ... exactly which security threat are you thinking about preventing here?

This won't protect against:
* NSA listening in on the mails when they arrive.
* NSA taking a backup of your mails and waiting for your first attempt to read 
them - at which time they'll have your private key in plain text.

It seems like a much wider protection to just keep your private key to 
yourself.

/Peter



Re: [Dovecot] Prevent Download messages from server

2012-09-20 Thread Peter Mogensen

 we have no problem, just i want to learn how can i do that. i think
 it's clear .

Well... I'm pretty sure most others don't.

But anyway. As in ALL Internet protocols (IMAP being no exception), 
letting the client read data on the server requires it to download the data.

Preventing download will prevent reading the mail. Period.

So if you're fine with that and just want to learn how to do it, then 
just disable the account in the user database.


/Peter



[Dovecot] 2.0/2.1 - different behavior for LIST-EXTENDED

2012-04-10 Thread Peter Mogensen

Hi Timo,

We are sitting here wondering if this difference in behaviour between 
dovecot 2.0.17 and 2.1.3 is intended.


When you create a folder, subscribe to it and rename it (without 
changing the subscription) these are the behaviours:


For 2.0.17:
. list (SUBSCRIBED) "" * RETURN (STATUS (MESSAGES))
* LIST (\Subscribed \NonExistent) . INBOX.test

For 2.1.3:
. list (SUBSCRIBED) "" * RETURN (STATUS (MESSAGES))
* LIST (\Subscribed) . INBOX.test
* NO Mailbox doesn't exist: test

If you don't use rfc5819 the folder will just get silently ignored by 
dovecot 2.1.x, but if you actually try to get the number of messages 
you'll get the error.


It seems to me from reading rfc5258 that the 2.0.x behaviour is the 
correct one?


/Peter



[Dovecot] \NoSelect on missing folders in LIST

2012-03-05 Thread Peter Mogensen

Hi,

I noticed a difference between courier and dovecot, and I'm not sure 
which of them is wrong wrt. RFC3501 - if any.


I have a Maildir which has been accessed by an Apple Mail client, so it 
got folders like:


INBOX
INBOX.Trash
INBOX.INBOX.folder
INBOX.INBOX.folder.a
INBOX.INBOX.folder.b

The INBOX.INBOX folder does not exist on disk and is not subscribed.

Courier responds to:
. list "" *
with
* LIST (\Noselect \HasChildren) . INBOX.INBOX

But dovecot does not list that folder using *.

However, if you issue:
. list "" INBOX.%

Dovecot answers:
* LIST (\Noselect \HasChildren) . INBOX.INBOX

This makes some clients using * to get the folder list ignore the 
folders below INBOX.INBOX.
I know the recommended client way is to use %, but I'm still curious 
about which is the correct behaviour.


/Peter



Re: [Dovecot] \NoSelect on missing folders in LIST

2012-03-05 Thread Peter Mogensen

On 2012-03-05 15:45, Timo Sirainen wrote:

* LIST (\Noselect \HasChildren) . INBOX.INBOX


I'm surprised Courier would return this.


But dovecot does not list that folder using *.


But it returns all of the mailboxes under INBOX.INBOX, right?


Yes. And they exist on disk and are subscribed to.


However, if you issue:
. list "" INBOX.%

Dovecot answers:
* LIST (\Noselect \HasChildren) . INBOX.INBOX


Yes, because if it didn't the client wouldn't know that there are mailboxes 
under INBOX.INBOX.


Seems reasonable.


This makes some clients using * to get the folder list ignore the folderes below 
INBOX.INBOX.


What clients? I haven't heard of this being a problem before. I think Cyrus has 
similar behavior as Dovecot.


Well... mostly perl scripts :) - which could probably be changed to use 
% for wildcard, but since they always need to get the entire folder 
tree it would result in more IMAP traffic.


/Peter


Re: [Dovecot] \NoSelect on missing folders in LIST

2012-03-05 Thread Peter Mogensen

On 2012-03-05 16:36, Timo Sirainen wrote:

Still curious about if Courier is doing something wrong which the scripts just 
happened to take advantage of.


Neither behavior is wrong, just different. :)


Ok... I was in doubt whether I had missed something in the RFC.
However... for testing, I tried to create INBOX.INBOX on dovecot.
But then dovecot answers NO and complains that the folder already 
exists, though it's still not on disk and dovecot still doesn't list it 
with *.


/Peter




[Dovecot] POP3 UIDLs with virtual INBOX and migration from maildir-mdbox

2012-02-09 Thread Peter Mogensen

Hi,

Consider the scenario where you have some old accounts with a 
different POP3 UIDL format and you migrate them to dovecot.


So these old UIDLs would be saved to dovecot-uidlist.

At some later time you want to introduce a virtual POP3 INBOX like 
described on:

http://wiki.dovecot.org/Plugins/Virtual

So you decide to make the new UIDL format %f - to make them unique 
across folders.


So far so good.

But then you decide to migrate to mdbox with all your old UIDLs.
The docs say that saving old UIDLs is only supported in Maildir and 
that %f is only supported in Maildir.


So is this at all possible?

Would pop3_uidl_format = %g solve this (except for the old legacy UIDLs)?

/Peter





Re: [Dovecot] IMAP SPECIAL-USE extension

2011-12-06 Thread Peter Mogensen

On 2011-12-02 22:22, dovecot-requ...@dovecot.org wrote:
 It's implemented now in dovecot-2.1 hg. It also deprecates autocreate
 plugin (but it still works the old way). The idea is that you can now
 do e.g.:

 mailbox Trash {
   auto = no
   special_use = \Trash
 }
 ...

This is great, Timo.
But for solving the localization problem for special-use folders, it only 
goes half the way.


Are there any plans to support RFC5464 SETMETADATA, so individual users 
can name their \Trash folder "Skraldespand" in Danish or whatever they 
prefer?


/Peter


Re: [Dovecot] Corrupted transaction log file

2011-11-09 Thread Peter Mogensen

On 2011-11-04 22:26, Timo Sirainen wrote:

Nov  4 15:10:42 mail dovecot: imap (t...@aaaone.net): Error: Corrupted
transaction log file /mail/3340444/.TestMails/dovecot.index.log seq 2:
indexid changed 1320419300 -> 1320419441 (sync_offset=0)


Session A had TestMails open and created with index file whose ID was
1320419300 (that's also UNIX timestamp of its creation time, Fri Nov  4
17:08:20 EET 2011).

Session B came and recreated the index files 141 seconds later with ID
1320419441. Either it didn't see A's original index files for some
reason or it simply decided to recreate them for some reason. Either way
this shouldn't have happened.


Turns out this is expected to confuse Session A.
The client in question sometimes starts the session (B) with this command 
sequence:

DELETE folder
CREATE folder
APPEND...

Any Session A having opened "folder" would of course be surprised that 
there's a new index file (makes me wish for an IMAP5 where 
folderID != displayname).


This can be reproduced by hand speaking IMAP with two telnets.

The only question left is why Dovecot ends the log sequence by saying:

Disconnected: IMAP session state is inconsistent, please relogin.

 ... when it is capable of detecting this, returning "BYE folder 
deleted under us" and logging the same.


/Peter



Re: [Dovecot] Corrupted transaction log file

2011-11-05 Thread Peter Mogensen

On 2011-11-04 22:26, Timo Sirainen wrote:

Nov  4 15:10:42 mail dovecot: imap (t...@aaaone.net): Error: Corrupted
transaction log file /mail/3340444/.TestMails/dovecot.index.log seq 2:
indexid changed 1320419300 -> 1320419441 (sync_offset=0)


Session A had TestMails open and created with index file whose ID was
1320419300 (that's also UNIX timestamp of its creation time, Fri Nov  4
17:08:20 EET 2011).

Session B came and recreated the index files 141 seconds later with ID
1320419441. Either it didn't see A's original index files for some
reason or it simply decided to recreate them for some reason. Either way
this shouldn't have happened.


 Session A then notices that the indexes were recreated, and logs an
 error.


Oh... wait a minute...

The log timestamp is UTC, so 17:08:20 EET is about 2:22 before the log line.
2:22 is 142 seconds.
So... given that the errors don't appear every time the client runs 
the series of APPEND requests - but (now that I come to think of it) 
probably never the first time he runs it, only the second time - and that 
he did run the script a few minutes before this log line without errors,
then... the problem might be that the first run of the script 
doesn't finish correctly. If session A is the first run of the script, 
then it should have finished and logged out long before session B.

But maybe the problem is the first run not finishing properly.

/Peter


Re: [Dovecot] Blocking auth services

2011-08-15 Thread Peter Mogensen

On 2011-08-14 22:56, Timo Sirainen wrote:

On Mon, 2011-08-08 at 14:04 +0200, Peter Mogensen wrote:


I'm writing an passdb/userdb plugin to authenticate against an external
daemon listening on a UNIX socket.

The connection to the daemon is 1 request at a time and thus blocking
(unlike passdb-ldap), but the daemon is preforking, so it can handle
more connections at a time.


You're talking to it via UNIX socket, so you can talk to it with
non-blocking sockets.


Yes... but a single connection can still only handle one request at a 
time. It's not the socket which is blocking - it's the server end of 
the connection.



But I also have the option, to let the passdb/userdb plugin maintain a
pools of used/idle connections to the daemon and just pick a idle
connection and moving it to the used pool on each auth_request.
Which would save me the auth worker processes.


This would be more efficient. (I wonder if you could make your external
daemon talk auth-worker protocol and Dovecot would do this pooling
automatically by thinking it's talking to its own workers?)


We actually considered replacing the entire dovecot-auth process with a 
rewrite of the daemon, as we had done with courier. But the 
courier-auth process is simpler, so we decided to go for a plugin for 
dovecot-auth.


/Peter


[Dovecot] Blocking auth services

2011-08-08 Thread Peter Mogensen

Hi,

I'm writing a passdb/userdb plugin to authenticate against an external 
daemon listening on a UNIX socket.


The connection to the daemon is 1 request at a time and thus blocking 
(unlike passdb-ldap), but the daemon is preforking, so it can handle 
more connections at a time.


I read from the Wiki:
http://wiki2.dovecot.org/Design/AuthProcess

* The authentication may begin new authentication requests even before 
the existing ones are finished, and


* If the passdb uses connections to external services, it's preferred 
that they use non-blocking connections. Dovecot does this whenever 
possible (PostgreSQL and LDAP for example). If it's not possible, set 
blocking = TRUE. 


... which tells me to set the module as blocking and let more auth 
worker processes do the work - creating 1 daemon process for each auth 
worker process, I guess.


But I also have the option to let the passdb/userdb plugin maintain a 
pool of used/idle connections to the daemon and just pick an idle 
connection and move it to the used pool on each auth_request.

Which would save me the auth worker processes.

Is there a preferred dovecot way?

/Peter


[Dovecot] Question about memory management in plugins

2011-08-04 Thread Peter Mogensen

Hi,

I'm writing a passdb/userdb plugin (see my previous question about a 
plugin authenticating via a UNIX socket protocol).


Now... the protocol spoken over this socket is JSON-based and I'm using 
a SAX-like event based parser which maintains a parse context between 
callbacks.


Now... I'm a little bit in doubt about which dovecot memory management 
method would be best for data in this parser context.


Alloc-only pools seem wrong, because the parser object is used as long as 
the connection is open, and many auth requests might run over the 
connection before it's freed, making the pool grow for a long time.


Data stack allocation won't work either, since with all this async 
networking and callbacks, there's really nowhere to place the stack frame.


So I end up using i_* and i_free for all data during the lifetime of the 
connection.


Is there a better way?

If only I could free my pool-allocated data, but I can't, since the data 
I want to free is almost never the last allocated.


/Peter


Re: [Dovecot] Question about memory management in plugins

2011-08-04 Thread Peter Mogensen

On 2011-08-04 22:11, Peter Mogensen wrote:

Is there a better way?


Maybe I can answer my own question...
It dawns on me that auth_request comes with its own pool, which probably 
should be used for allocations temporary to one passdb/userdb 
lookup.


/Peter


[Dovecot] passdb/userdb via UNIX socket?

2011-07-07 Thread Peter Mogensen

Hi,

I've been running some performance tests - especially delivery (LDA and 
LMTP) and it seems there's room for improvement.


At least it would be nice to get rid of the fork() and pipe to the deliver 
LDA, and of the fork of the checkpassword script for the userdb lookup.
I've tried LMTP to avoid forking deliver (*), but checkpassword still takes 
time (ok, maybe because it's written in Perl as of now).


But I was wondering why there's no passdb/userdb plugin for talking to 
a local authentication daemon over a UNIX socket? Have I missed 
something? Is there a third-party patch for this?


/Peter

*: Using Postfix's virtual LDA seems much faster than asking Postfix to 
pipe data to deliver... but then, of course, I get no dovecot 
indexing by the LDA.




Re: [Dovecot] LMTP returncode 450?

2011-06-28 Thread Peter Mogensen

On 2011-06-28 01:58, Timo Sirainen wrote:

On Mon, 2011-06-27 at 14:55 +0200, Peter Mogensen wrote:


How do I get the LMTP-server to know which mailbox's are locally hosted
and return SMTP code 450 if delivery is attempted to a non local user?


You can't, at least that way. Why are you trying to deliver mails to a
non-local mailbox? You could anyway use Dovecot as LMTP proxy to the
remote LMTP server and it would deliver the mail there without an error.


I was wondering if I could skip running a Postfix or other MTA along 
with dovecot and just let mail be delivered directly to the final host 
by LMTP.
It's no problem to have Postfix do a virtual_mailbox_domains lookup 
before handing it to local LMTP, but it would be simpler with only Dovecot.



I can see that a lookup in the userdb is done, but now matter what I
return (1/111) from my checkpassword script I just get:


Set lmtp_proxy=yes and have passdb lookup return proxy=y and
host=1.2.3.4.


But how does the LMTP proxy deal with temporary errors? It has no queue 
like the SMTP server does.


/Peter


[Dovecot] LMTP returncode 450?

2011-06-27 Thread Peter Mogensen

Hi,

How do I get the LMTP server to know which mailboxes are locally hosted 
and return SMTP code 450 if delivery is attempted for a non-local user?


I can see that a lookup in the userdb is done, but no matter what I 
return (1/111) from my checkpassword script I just get:


451 4.3.0 l...@domain.tld Internal error occurred. Refer to server log 
for more information.


/Peter


[Dovecot] URLAUTH-patch, BSD specific?

2011-06-15 Thread Peter Mogensen

Hi,

I notice that the Apple-patched branch of Dovecot 2.0 with URLAUTH fails 
to compile on Linux.


The file src/plugins/urlauth/urlauth-keys.c uses open(2) with O_EXLOCK, 
which to my knowledge is BSD-specific.


Is that a known problem?

/Peter


[Dovecot] Spelling error in #define ?

2011-05-03 Thread Peter Mogensen

Hi,
I stumbled over this define in lazy-expunge-plugin.h:

#ifndef LAZY_EXPUNGE_PLUGIN_H
#define TLAZY_EXPUNGE_PLUGIN_H

Isn't there one T too many?

http://hg.dovecot.org/dovecot-2.0/file/036260ae0261/src/plugins/lazy-expunge/lazy-expunge-plugin.h

/Peter



[Dovecot] UIDPLUS in the wiki

2011-02-02 Thread Peter Mogensen

Hi,

Isn't the stuff in the wiki about UIDPLUS being disabled because of 
maildir outdated?


http://wiki.dovecot.org/FeatUIDPLUS
http://wiki2.dovecot.org/FeatUIDPLUS

/Peter



[Dovecot] Different INBOX for IMAP/POP with checkpassword passdb

2011-01-27 Thread Peter Mogensen

Hi,

I'm trying to do a setup where IMAP and POP users see different INBOXes,
like described on the virtual folder wiki page:
http://wiki.dovecot.org/Plugins/Virtual


However, for now, I'm stuck with the checkpassword passdb and prefetch 
userdb.

So I can't parameterize the result on %s like the example with MySQL does.

So I thought of having two different checkpassword scripts:
  passdb checkpassword {
args = /usr/bin/checkpassword-%s
  }

However, Dovecot (1.2.15) doesn't seem to expand %s, even though the Wiki
says it should be available everywhere:
http://wiki.dovecot.org/Variables

Is this a bug, or am I missing something?

/Peter



Re: [Dovecot] Different INBOX for IMAP/POP with checkpassword passdb

2011-01-27 Thread Peter Mogensen

On 2011-01-27 14:04, Peter Mogensen wrote:

So I thought of having to different checkpassword scripts:
  passdb checkpassword {
args = /usr/bin/checkpassword-%s
  }


Argh... sorry.
I missed the SERVICE env variable.

/Peter



[Dovecot] email addresses as usernames with Kerberos

2011-01-14 Thread Peter Mogensen

Hi,

I was trying out Kerberos authentication with some sample users for 
Dovecot and stumbled into this problem:


The user names are of the form local-part@domain, so the Kerberos 
principal becomes local-part\@domain@REALM.


But it seems Dovecot (1.2.9) doesn't understand that syntax.
Looking at the 2.0.8 sources I guess it's not supported.

But isn't this a valid Kerberos principal?
Is it a bug, a missing feature, or is there a special reason for not 
supporting it?


kind regards,
Peter