Re: haproxy ssl support

2017-10-26 Thread KT Walrus
When is 2.3 scheduled to be released?

Kevin

> On Oct 26, 2017, at 7:57 AM, Aki Tuomi  wrote:
> 
> Hi!
> 
> There is support for haproxy SSL TLVs in 2.3. See
> 
> https://github.com/dovecot/core/compare/f43567aa%5E...b6fbc235.patch
> 
> Aki
> 
>> On October 26, 2017 at 12:25 PM Rok Potočnik  wrote:
>> 
>> 
>> Even though it seems dovecot (using 2.2.33.1) supports haproxy's 
>> send-proxy-v2, it seems to lack send-proxy-v2-ssl (which also sends the 
>> client's SSL state). It would be a nice feature for the backend server 
>> to identify clients, so one wouldn't have to use disable_plaintext_auth 
>> in a production environment.
>> 
>> --- haproxy.cfg
>> frontend pop3
>> bind [::]:110 v4v6
>> bind [::]:995 v4v6 ssl crt /etc/pki/tls/private/haproxy.pem
>> mode tcp
>> default_backend pop3
>> backend pop3
>> mode tcp
>> balance leastconn
>> stick store-request src
>> stick-table type ip size 200k expire 30m
>> timeout connect 5000
>> timeout server  5
>> server proxy1 [2001:db8::11]:10110 send-proxy-v2-ssl
>> server proxy2 [2001:db8::22]:10110 send-proxy-v2-ssl
>> ---
>> 
>> --- dovecot.conf
>> haproxy_trusted_networks = [2001:db8::]/64
>> service pop3-login {
>>   inet_listener pop3_haproxy {
>> port = 10110
>> haproxy = yes
>>   }
>> }
>> ---
>> 
>> It would also be nice if haproxy would support STARTTLS offloading but 
>> that's a subject for a different mailing list ;)
>> 
>> -- 
>> BR, Rok


Re: is a self signed certificate always invalid the first time

2017-08-20 Thread KT Walrus

> On Aug 20, 2017, at 1:32 PM, Stephan von Krawczynski <sk...@ithnet.com> wrote:
> 
> On Sun, 20 Aug 2017 12:29:49 -0400
> KT Walrus <ke...@my.walr.us> wrote:
> 
>>> On Aug 20, 2017, at 11:52 AM, Stephan von Krawczynski <sk...@ithnet.com>
>>> wrote:
>>> 
>>> On Sat, 19 Aug 2017 21:39:18 -0400
>>> KT Walrus <ke...@my.walr.us> wrote:
>>> 
>>>>> On Aug 18, 2017, at 4:05 AM, Stephan von Krawczynski <sk...@ithnet.com>
>>>>> wrote:
>>>>> 
>>>>> On Fri, 18 Aug 2017 00:24:39 -0700 (PDT)
>>>>> Joseph Tam <jtam.h...@gmail.com> wrote:
>>>>> 
>>>>>> Michael Felt <mich...@felt.demon.nl> writes:
>>>>>> 
>>>>>>>> I use acme.sh for all of my LetsEncrypt certs (web & mail), it is
>>>>>>>> written in pure shell script, so no python dependencies.
>>>>>>>> https://github.com/Neilpang/acme.sh  
>>>>>>> 
>>>>>>> Thanks - I might look at that, but as Ralph mentions in his reply -
>>>>>>> Let's encrypt certs are only for three months - never ending
>>>>>>> circus.  
>>>>>> 
>>>>>> I wouldn't characterize it as a circus.  Once you bootstrap your first
>>>>>> certificate and install the cert-renew cron script, it's not something
>>>>>> you have to pay a lot of attention to.  I have a few LE certs in use,
>>>>>> and I don't think about it anymore: it just works.
>>>>>> 
>>>>>> The shorter cert lifetime also helps limit damage if your certificate
>>>>>> gets compromised.
>>>>>> 
>>>>>> Joseph Tam <jtam.h...@gmail.com>
>>>>> 
>>>>> Obviously you do not use clustered environments with more than one node
>>>>> per service.
>>>>> Else you would not call it "it just works", because in fact the renewal
>>>>> is quite big bs as one node must do the job while all the others must be
>>>>> _offline_.
>>>>> 
>>>>> -- 
>>>>> Regards,
>>>>> Stephan
>>>> 
>>>> I use DNS verification for LE certs. Much better since generating certs
>>>> only depends on access to DNS and not your HTTP servers. Cert generation
>>>> is automatic (on a cron job that runs every night looking for certs that
>>>> are within 30 days of expiration). Once set up, it is pretty much
>>>> automatic. I do use Docker to deploy all services for my website which
>>>> also makes things pretty easy to manage.
>>>> 
>>>> Kevin
>>>> 
>>> 
>>> DNS verification sounds nice only on first glimpse.
>>> If you have a lot of domains and ought to reload your DNS for every
>>> verification of every single domain that does not look like a method with a
>>> small footprint or particularly elegant.  
>> 
>> I don’t understand what you are trying to say. I have over 170 domains that
>> I generate certs for automatically using the acme.sh script. It is all
>> automatic and requires no “reload your DNS” by me. The script just updates
>> the DNS with a record that Let’s Encrypt checks before issuing the
>> certificate. After Let’s Encrypt verifies that you can update the DNS for
>> your domain with the record, the script removes the record.
>> 
>> This actually works much better than HTTP especially for domains like for
>> email servers that don’t have an HTTP server deployed for them.
>> 
>> Kevin
> 

> You can't update a record without reloading configs in bind. I guess you are
> using some other DNS service...

I use Cloudflare (free DNS) and DNS Made Easy (paid DNS). I would never run my 
own DNS service except for communicating between my Docker services internally 
(Docker has its own internal DNS for this, and there are many pre-built Docker 
images that provide a public DNS service, if required). But Let's Encrypt 
requires you to update the public DNS used by the domains you are generating 
certs for. If you do run your own public DNS service (for your Dovecot domains), 
you should pick one that has an API for updating DNS records from a script like 
acme.sh, or simply write your own custom hook for acme.sh to use.
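
For example, issuing a cert for a mail-only domain with acme.sh's Cloudflare 
hook looks roughly like this (a sketch only; the domain, paths and reload 
command are placeholders for your own setup):

# Cloudflare API credentials used by the dns_cf hook
export CF_Email="you@example.com"
export CF_Key="your-cloudflare-api-key"

# Issue (or renew) via the DNS-01 challenge; acme.sh adds and then removes
# the _acme-challenge TXT record through the Cloudflare API.
acme.sh --issue --dns dns_cf -d mail.example.com

# Install the resulting files where Dovecot expects them and reload it.
acme.sh --install-cert -d mail.example.com \
  --key-file       /etc/dovecot/ssl/mail.example.com.key \
  --fullchain-file /etc/dovecot/ssl/mail.example.com.pem \
  --reloadcmd      "doveadm reload"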

See this page for all the DNS services that acme.sh supports: 

https://github.com/Neilpang/acme.sh/tree/master/dnsapi

Kevin


Re: is a self signed certificate always invalid the first time

2017-08-20 Thread KT Walrus

> On Aug 20, 2017, at 11:52 AM, Stephan von Krawczynski <sk...@ithnet.com> 
> wrote:
> 
> On Sat, 19 Aug 2017 21:39:18 -0400
> KT Walrus <ke...@my.walr.us> wrote:
> 
>>> On Aug 18, 2017, at 4:05 AM, Stephan von Krawczynski <sk...@ithnet.com>
>>> wrote:
>>> 
>>> On Fri, 18 Aug 2017 00:24:39 -0700 (PDT)
>>> Joseph Tam <jtam.h...@gmail.com> wrote:
>>> 
>>>> Michael Felt <mich...@felt.demon.nl> writes:
>>>> 
>>>>>> I use acme.sh for all of my LetsEncrypt certs (web & mail), it is
>>>>>> written in pure shell script, so no python dependencies.
>>>>>> https://github.com/Neilpang/acme.sh
>>>>> 
>>>>> Thanks - I might look at that, but as Ralph mentions in his reply -
>>>>> Let's encrypt certs are only for three months - never ending circus.
>>>> 
>>>> I wouldn't characterize it as a circus.  Once you bootstrap your first
>>>> certificate and install the cert-renew cron script, it's not something
>>>> you have to pay a lot of attention to.  I have a few LE certs in use,
>>>> and I don't think about it anymore: it just works.
>>>> 
>>>> The shorter cert lifetime also helps limit damage if your certificate
>>>> gets compromised.
>>>> 
>>>> Joseph Tam <jtam.h...@gmail.com>  
>>> 
>>> Obviously you do not use clustered environments with more than one node per
>>> service.
>>> Else you would not call it "it just works", because in fact the renewal is
>>> quite big bs as one node must do the job while all the others must be
>>> _offline_.
>>> 
>>> -- 
>>> Regards,
>>> Stephan  
>> 
>> I use DNS verification for LE certs. Much better since generating certs only
>> depends on access to DNS and not your HTTP servers. Cert generation is
>> automatic (on a cron job that runs every night looking for certs that are
>> within 30 days of expiration). Once set up, it is pretty much automatic. I
>> do use Docker to deploy all services for my website which also makes things
>> pretty easy to manage.
>> 
>> Kevin
>> 
> 
> DNS verification sounds nice only on first glimpse.
> If you have a lot of domains and ought to reload your DNS for every
> verification of every single domain that does not look like a method with a
> small footprint or particularly elegant.

I don’t understand what you are trying to say. I have over 170 domains that I 
generate certs for automatically using the acme.sh script. It is all automatic 
and requires no “reload your DNS” by me. The script just updates the DNS with a 
record that Let’s Encrypt checks before issuing the certificate. After Let’s 
Encrypt verifies that you can update the DNS for your domain with the record, 
the script removes the record.

This actually works much better than HTTP especially for domains like for email 
servers that don’t have an HTTP server deployed for them.

Kevin

Re: is a self signed certificate always invalid the first time

2017-08-20 Thread KT Walrus

> On Aug 20, 2017, at 3:20 AM, Felix Zielcke <fziel...@z-51.de> wrote:
> 
> Am Samstag, den 19.08.2017, 21:39 -0400 schrieb KT Walrus:
>> 
>> I use DNS verification for LE certs. Much better since generating
>> certs only depends on access to DNS and not your HTTP servers. Cert
>> generation is automatic (on a cron job that runs every night looking
>> for certs that are within 30 days of expiration). Once set up, it is
>> pretty much automatic. I do use Docker to deploy all services for my
>> website which also makes things pretty easy to manage.
>> 
>> Kevin
> 
> Hi Kevin,
> 
> what software do you use for DNS based verification? I read with the
> official certbot from LE it's not possible to do this fully automated.
> Currently I use the http based method, but would like to switch to DNS
> based.
> 
> Greetings
> Felix

I use the acme.sh script: https://github.com/Neilpang/acme.sh

The author supports running this script automatically with the docker image 
neilpang/acme.sh.
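
For a one-shot run with that image, the invocation is roughly as follows (the 
volume path and Cloudflare credentials are placeholders; check the image's 
README for the exact entrypoint and daemon-mode details):

docker run --rm -it \
  -v "$PWD/acme-data":/acme.sh \
  -e CF_Email="you@example.com" \
  -e CF_Key="your-cloudflare-api-key" \
  neilpang/acme.sh --issue --dns dns_cf -d mail.example.com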

Kevin


Re: is a self signed certificate always invalid the first time

2017-08-19 Thread KT Walrus

> On Aug 18, 2017, at 4:05 AM, Stephan von Krawczynski  wrote:
> 
> On Fri, 18 Aug 2017 00:24:39 -0700 (PDT)
> Joseph Tam  wrote:
> 
>> Michael Felt  writes:
>> 
 I use acme.sh for all of my LetsEncrypt certs (web & mail), it is
 written in pure shell script, so no python dependencies.
 https://github.com/Neilpang/acme.sh  
>>> 
>>> Thanks - I might look at that, but as Ralph mentions in his reply -
>>> Let's encrypt certs are only for three months - never ending circus.  
>> 
>> I wouldn't characterize it as a circus.  Once you bootstrap your first
>> certificate and install the cert-renew cron script, it's not something
>> you have to pay a lot of attention to.  I have a few LE certs in use,
>> and I don't think about it anymore: it just works.
>> 
>> The shorter cert lifetime also helps limit damage if your certificate
>> gets compromised.
>> 
>> Joseph Tam 
> 
> Obviously you do not use clustered environments with more than one node per
> service.
> Else you would not call it "it just works", because in fact the renewal is
> quite big bs as one node must do the job while all the others must be
> _offline_.
> 
> -- 
> Regards,
> Stephan

I use DNS verification for LE certs. Much better since generating certs only 
depends on access to DNS and not your HTTP servers. Cert generation is 
automatic (on a cron job that runs every night looking for certs that are 
within 30 days of expiration). Once set up, it is pretty much automatic. I do 
use Docker to deploy all services for my website which also makes things pretty 
easy to manage.

Kevin


Re: Example for doveadm-save using Doveadm HTTP API

2017-05-10 Thread KT Walrus

> On May 10, 2017, at 5:16 PM, Sami Ketola <sami.ket...@dovecot.fi> wrote:
> 
> 
>> On 10 May 2017, at 16.26, KT Walrus <ke...@my.walr.us> wrote:
>>> 
>>> # curl -v -X POST -u doveadm:hellodoveadm -H "Content-Type: 
>>> application/json" -d 
>>> '[["save",{"user":"samik","mailbox":"INBOX/myfoldertoo","file":"From: Joulu 
>>> Pukki <joulu.pu...@korvatunturi.fi>\nSubject: plaa\n\nmail body\n"},"bb"]]' 
>>> http://localhost:8080/doveadm/v1
>> 
>> Thanks. I worry that by inlining the entire message in the curl command, the 
>> message might exceed some limits on how long a command can be. Some of my 
>> messages are up to 20MBs with the attachments and 1MB messages are very 
>> common. I also worry about the raw message having unescaped quotes in the 
>> message messing up the actual storage of the message in the INBOX. Are HTML 
>> mail messages encoded to be safe to enclose in quotations? Or, should I 
>> encode the entire mail message and trust that Dovecot can handle decoding 
>> the message in the back end?
> 
> 
> The question is: why do you want to deliver 20MB messages with doveadm http 
> api? I would not replace LMTP with that.

I could certainly end up using SMTP/LMTP, but in my case I need complete 
control over when, where, and how messages are delivered. The IMAP interface 
gives me that control, but the Doveadm HTTP API seems easier and I can do 
operations that span all users with this API. So, I don’t need to worry as much 
about scaling (since I think IMAP is limited to one user at a time per 
connection).

If the Doveadm HTTP API isn’t mature enough or doesn’t have a PHP interface 
(like Roundcube gives me for IMAP), I may have to just go with IMAP or SMTP. Of 
my 3 options, SMTP is by far the easiest from PHP, but it only handles new 
message delivery and not the other admin actions I need to do. Eventually, I 
want to hire some programmers to code my admin app in Go and move to a 
micro-services architecture, but to start out, when scaling isn’t that 
important, I’m coding the mail admin app in PHP (since the Roundcube Framework 
gives me a lot of mail handling classes that are mature and well tested).

Kevin


Re: Example for doveadm-save using Doveadm HTTP API

2017-05-10 Thread KT Walrus

> On May 10, 2017, at 11:06 AM, Sami Ketola <sami.ket...@dovecot.fi> wrote:
> 
> 
>> On 10 May 2017, at 14.57, KT Walrus <ke...@my.walr.us> wrote:
>> 
>> I could use an example of how to use curl to save a new message to a user’s 
>> INBOX using the Doveadm HTTP API.
>> 
> 
> Here you go:
> 
> doveadm mailbox save 
> 
> parameters:
> 
> {
>"command": "save",
>"parameters": [
>{
>"name": "allUsers",
>"type": "boolean"
>},
>{
>"name": "socketPath",
>"type": "string"
>},
>{
>"name": "user",
>"type": "string"
>},
>{
>"name": "userFile",
>"type": "string"
>},
>{
>"name": "mailbox",
>"type": "string"
>},
>{
>"name": "file",
>"type": "string"
>}
>]
> }
> 
> example:
> 
> [
>[
>"save",
>{
>"file": "From: Joulu Pukki <joulu.pu...@korvatunturi.fi>\nSubject: 
> plaa\n\nmail body\n",
>"mailbox": "INBOX/myfoldertoo",
>"user": "samik"
>},
>"bb"
>]
> ]
> 
> # curl -v -X POST -u doveadm:hellodoveadm -H "Content-Type: application/json" 
> -d '[["save",{"user":"samik","mailbox":"INBOX/myfoldertoo","file":"From: 
> Joulu Pukki <joulu.pu...@korvatunturi.fi>\nSubject: plaa\n\nmail 
> body\n"},"bb"]]' http://localhost:8080/doveadm/v1

Thanks. I worry that by inlining the entire message in the curl command, the 
message might exceed some limits on how long a command can be. Some of my 
messages are up to 20MBs with the attachments and 1MB messages are very common. 
I also worry about the raw message having unescaped quotes in the message 
messing up the actual storage of the message in the INBOX. Are HTML mail 
messages encoded to be safe to enclose in quotations? Or, should I encode the 
entire mail message and trust that Dovecot can handle decoding the message in 
the back end?

I figure that it would be better to put the message in a file and include it 
some way as part of the HTTP request data. But, does the doveadm HTTP server 
handle 20MB requests in a single HTTP request? Probably, it does, but I know I 
had to configure MySQL to take large SQL queries and they really recommend that 
large files be broken up into chunks and stored with multiple queries 
(especially for replication).
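
One way around both the shell-quoting and the command-length worries is to let 
a tool do the JSON escaping and have curl read the request body from a file. A 
sketch, assuming jq is available (the user, mailbox and tag values just mirror 
the earlier example):

# Wrap the raw message in the doveadm "save" request; jq escapes quotes,
# newlines and any other special characters in the message body.
jq -Rs '[["save",{"user":"samik","mailbox":"INBOX/myfoldertoo","file":.},"bb"]]' \
    message.eml > request.json

# POST the request from the file instead of inlining it on the command line.
curl -v -X POST -u doveadm:hellodoveadm \
    -H "Content-Type: application/json" \
    --data-binary @request.json \
    http://localhost:8080/doveadm/v1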

I’ll probably implement message delivery in PHP using a class that can safely 
post a large file in an HTTP request, so I won’t really be using curl directly 
at a bash command line.

Do you know of any PHP class for the Doveadm HTTP API that I might use? 

Kevin

Re: No doveadm-save in wiki2?

2017-05-10 Thread KT Walrus

> On May 10, 2017, at 10:18 AM, Sami Ketola <sami.ket...@dovecot.fi> wrote:
> 
>> 
>> On 10 May 2017, at 15.06, KT Walrus <ke...@my.walr.us> wrote:
>> 
>> 
>>> On May 10, 2017, at 9:50 AM, Sami Ketola <sami.ket...@dovecot.fi> wrote:
>>> 
>>> 
>>>> On 9 May 2017, at 19.26, KT Walrus <ke...@my.walr.us> wrote:
>>>> 
>>>> Is “doveadm save” an undocumented feature? Or, just well-hidden?
>>>> 
>>>> https://wiki2.dovecot.org/Tools/Doveadm
>>> 
>>> That wikipage is autogenerated from the doveadm manpage… which 
>>> unfortunately lags behind on the features. We’ll try to update the manpage 
>>> eventually some day.
>> 
>> How long does this usually take? Googling the topic seems to indicate that 
>> this feature was implemented several years ago.
> 
> Seems that we have been quite busy on working on more important issues. Also 
> it’s possible that we just forgot to update the manpage.

Thanks. I don’t mean to have you change your priorities, but it is difficult to 
really understand how to set up and use Dovecot with incomplete user 
documentation. For example, I had to search for a half hour to figure out how 
to set an API key to use with the Doveadm HTTP API. I finally noticed the 
single reference to ‘doveadm_api_key’ in the Design.DoveadmProtocol.HTTP.txt 
file. I should have noticed it much sooner, but I was looking in the 
example-config files for how to configure doveadm and couldn’t find its 
settings there. In fact, there should be a “How to configure doveadm” page.
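
For anyone else searching, the configuration I ended up with looks roughly like 
this (the key, password and port are placeholders; as I understand it, the curl 
examples elsewhere in this thread use HTTP Basic auth against doveadm_password, 
while doveadm_api_key enables the alternative X-Dovecot-API authorization 
header):

--- dovecot.conf
doveadm_password = hellodoveadm
doveadm_api_key = secret-api-key

service doveadm {
  inet_listener http {
    port = 8080
    #ssl = yes
  }
}
---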

I also have plans to deploy using 3-server clusters. Maybe it isn’t a good 
idea, but I haven’t really found much on how to set up a 3-server cluster that 
keeps the local storage sync’d between all 3 servers. Maybe this is obvious to 
most, but the documentation only really goes into depth on how to dsync 2 
servers. I guess it is natural to expect to just set up 3 servers, with each 
server dsync’ing to the 2 other servers in the cluster. But I worry that this 
might not work well in production, and that I should instead look into 
converged storage, where every server in the cluster has read/write access to 
the shared storage and I make sure that no 2 servers access the same mailbox at 
the same time.

I want to deploy in 3-server clusters since this is the way we deploy MySQL 
database clusters and it works well in production. But maybe, for Dovecot, 
2-server clusters are enough for production and going to 3 servers is just a 
waste of money. We do not use RAID storage on our local servers, preferring to 
use replication to 3 separate servers in 3 separate racks to take care of the 
occasional hardware failures. So, the general rule is that all persistent data 
is replicated 3 times. Maybe, for Dovecot, we should deploy 2-server clusters 
with btrfs/rsync backup to a third backup-only server.

Anyway, I’m hijacking my own thread by discussing these production issues, 
but maybe you and your team could consider bumping up the priority on 
documentation just a bit in the future… 

Thanks again,

Kevin

Re: No doveadm-save in wiki2?

2017-05-10 Thread KT Walrus

> On May 10, 2017, at 9:50 AM, Sami Ketola <sami.ket...@dovecot.fi> wrote:
> 
> 
>> On 9 May 2017, at 19.26, KT Walrus <ke...@my.walr.us> wrote:
>> 
>> Is “doveadm save” an undocumented feature? Or, just well-hidden?
>> 
>> https://wiki2.dovecot.org/Tools/Doveadm
> 
> That wikipage is autogenerated from the doveadm manpage… which unfortunately 
> lags behind on the features. We’ll try to update the manpage eventually some 
> day.

How long does this usually take? Googling the topic seems to indicate that this 
feature was implemented several years ago.

I’m looking to use the Doveadm HTTP API for new message delivery (thus needing 
to use the “doveadm save” feature). Is the Doveadm HTTP API still experimental? 
The wikipage doesn’t seem to have adequate documentation on using this API; 
there is just a single “fetch” example using curl.

Kevin


Example for doveadm-save using Doveadm HTTP API

2017-05-10 Thread KT Walrus
I could use an example of how to use curl to save a new message to a user’s 
INBOX using the Doveadm HTTP API.

https://wiki2.dovecot.org/Design/DoveadmProtocol/HTTP 


Do I really use the -d option and inline the entire new message in the 
command-line? Or, should I create a temporary .json file with the message 
wrapped in JSON and pass this filename to the -d option?

Does anyone have a PHP class that abstracts the HTTP API commands?

I’ve been using the Roundcube Framework to deliver new messages by IMAP, but 
I’m experimenting with whether the Doveadm HTTP API might be a better solution. 
I eventually want my PHP code to do full administration of my users’ mailboxes. 
I was planning on using Sieve to pipe to PHP scripts in conjunction with IMAP 
to do full admin on mailboxes (as they receive new messages or a flag changes, 
I would trigger maintenance on the mailbox, such as deleting old messages or 
moving them to the trash, or filing a copy of a new message in a folder).

Finally, is the Doveadm HTTP API stable enough to use in production? It seems 
like the documentation is rather minimal. Can this interface really be used for 
new message delivery instead of LMTP in production? Or, is this a bad idea to 
combine mailbox administration with new message delivery?

Does anyone here use the Doveadm HTTP API in production?

Kevin


No doveadm-save in wiki2?

2017-05-09 Thread KT Walrus
Is “doveadm save” an undocumented feature? Or, just well-hidden?

https://wiki2.dovecot.org/Tools/Doveadm 


Kevin


error trying to access the doveadm http api

2017-05-09 Thread KT Walrus
I’m trying out the doveadm http api for the first time. When I do:

$ curl -H "Authorization: Basic " 
http://localhost:
curl: (52) Empty reply from server

In the logs, I see the following error:

May 09 11:58:05 doveadm(10.0.0.17): Error: doveadm client not compatible with 
this server (mixed old and new binaries?)

I am running Dovecot 2.2.29.1 in Docker containers.

What am I doing wrong?

Kevin


Re: building Dovecot in Debian 9

2017-04-25 Thread KT Walrus

> On Apr 25, 2017, at 7:54 PM, Peter van der Does  
> wrote:
> 
> Kevin,
> 
> Regarding the configuration error, your missing a package:
> zlib1g-dev

Thanks! I guess default-libmysqlclient-dev drags in zlib1g-dev, which the 
Oracle package for some reason doesn’t. Everything builds with the Oracle 
libmysqlclient now, so I’m good to go. Thanks for your help.
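
For anyone else hitting this, the relevant part of the build now looks roughly 
like this (the package list is approximate and assumes the Oracle MySQL APT 
repo is already configured):

apt-get update && apt-get install -y \
    build-essential pkg-config zlib1g-dev libssl-dev libpam0g-dev \
    libmysqlclient-dev

./configure --prefix=/usr --sysconfdir=/etc --with-mysql
make -j"$(nproc)"
make install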

> 
> As far as the deprecation warning, it's a bit more complicated. The
> source of Dovecot needs to be patched to check for the OpenSSL version
> and depending on the version use a different DH_generate_numbers function.

Okay. Should I just ignore this then? I’m not actually going to do much with 
this build until Debian 9 is released and all the packages that I use have had 
time to be production hardened on Debian 9. I’m building against Ubuntu 16.04 
for my actual work. I really want to use Debian 9 in production since it comes 
with OpenSSL 1.1.0e and I want to support the ChaCha20-Poly1305 ciphers for 
NGINX sessions (and maybe Dovecot too).

https://github.com/openssl/openssl/issues/304 


Any other comments on my Dockerfile? Since everything builds, I assume it uses 
sane options to build Dovecot. I couldn’t find any other example Dockerfiles 
for building Dovecot, so I just made this one up from other Dockerfiles that I 
use to build other images.

I’d really like to see an official Dovecot image in the Docker Hub and base my 
containers off that. I actually use s6-overlay and other extras in my real 
Dovecot image, but it would be nice to see a Dockerfile that is based on Alpine 
Linux too. Alpine seems to be the preferred distro for official Docker Hub 
images, as I understand it.

Kevin


Re: building Dovecot in Debian 9

2017-04-25 Thread KT Walrus

> On Apr 25, 2017, at 5:37 PM, KT Walrus <ke...@my.walr.us> wrote:
> 
> Also, I spotted a deprecation warning that you might want to look into since 
> it has to do with building against OpenSSL 1.1 (which is the default version 
> for Debian 9).

Oops!!!

Forgot to attach the warning:

libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I../.. -I../../src/lib 
-I../../src/lib-test -DMODULE_DIR=\"/usr/lib/dovecot\" -std=gnu99 -g -O2 -Wall 
-W -Wmissing-prototypes -Wmissing-declarations -Wpointer-arith 
-Wchar-subscripts -Wformat=2 -Wbad-function-cast -fno-builtin-strftime 
-Wstrict-aliasing=2 -MT iostream-openssl-params.lo -MD -MP -MF 
.deps/iostream-openssl-params.Tpo -c iostream-openssl-params.c  -fPIC -DPIC -o 
.libs/iostream-openssl-params.o
iostream-openssl-params.c: In function 'generate_dh_parameters':
iostream-openssl-params.c:18:2: warning: 'DH_generate_parameters' is 
deprecated [-Wdeprecated-declarations]
  dh = DH_generate_parameters(bitsize, DH_GENERATOR, NULL, NULL);
  ^~
In file included from /usr/include/openssl/dh.h:13:0,
 from /usr/include/openssl/dsa.h:31,
 from /usr/include/openssl/x509.h:32,
 from /usr/include/openssl/ssl.h:50,
 from iostream-openssl.h:6,
 from iostream-openssl-params.c:5:
/usr/include/openssl/dh.h:118:1: note: declared here
 DEPRECATEDIN_0_9_8(DH *DH_generate_parameters(int prime_len, int generator,
 ^


Re: building Dovecot in Debian 9

2017-04-25 Thread KT Walrus


Dockerfile.debian9
Description: Binary data


Re: building Dovecot in Debian 9

2017-04-25 Thread KT Walrus

> On Apr 25, 2017, at 2:16 PM, Peter van der Does <pe...@avirtualhome.com> 
> wrote:
> 
> You might have to install the package default-libmysqlclient-dev from
> the Debian repo.

Isn’t that the MariaDB package? I don’t really want to mix MariaDB with MySQL 
(they are probably still compatible, but they are diverging as time passes).

Dovecot does build with default-libmysqlclient-dev, but maybe ./configure needs 
to be updated by the Dovecot devs to build against the libmysqlclient package 
that Oracle built for Debian Stretch?

Since I’m only testing today to get ready for Debian 9, I don’t really need 
this fixed now. But, when Debian 9 is released, it would be nice to be able to 
do a production build of Dovecot using the Oracle MySQL packages and Debian 9.

Kevin

> 
> Peter
> 
> On 4/25/17 1:37 PM, KT Walrus wrote:
>> I’m trying to build Dovecot 2.2.29.1 in a Docker container today and have 
>> the following error in ./configure:
>> 
>> checking for shadow.h... yes
>> checking for pam_start in -lpam... no
>> checking for auth_userokay... no
>> checking for mysql_config... mysql_config
>> checking for mysql_init in -lmysqlclient... no
>> configure: error: Can't build with MySQL support: libmysqlclient not found
>> 
>> #> find / -name libmysqlclient\*
>> /usr/share/doc/libmysqlclient20
>> /usr/share/lintian/overrides/libmysqlclient20
>> /usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.5
>> /usr/lib/x86_64-linux-gnu/libmysqlclient.so.20
>> /var/lib/dpkg/info/libmysqlclient20:amd64.triggers
>> /var/lib/dpkg/info/libmysqlclient20:amd64.shlibs
>> /var/lib/dpkg/info/libmysqlclient20:amd64.list
>> /var/lib/dpkg/info/libmysqlclient20:amd64.md5sums
>> 
>> I have installed MySQL 5.7.18 Debian 9 packages (including the 
>> libmysqlclient-dev package) from the MySQL repo.
>> 
>> I’m not an expert, but is there a bug in the "./configure --prefix=/usr 
>> --sysconfdir=/etc --with-mysql”?
>> 
>> I’ve been building Dovecot with this Dockerfile using Ubuntu 16.04 for a 
>> while now without issue. Do I need some extra ./configure option to get it 
>> to find libmysqlclient.so.20?
>> 
>> Kevin
>> 
> 


building Dovecot in Debian 9

2017-04-25 Thread KT Walrus
I’m trying to build Dovecot 2.2.29.1 in a Docker container today and have the 
following error in ./configure:

checking for shadow.h... yes
checking for pam_start in -lpam... no
checking for auth_userokay... no
checking for mysql_config... mysql_config
checking for mysql_init in -lmysqlclient... no
configure: error: Can't build with MySQL support: libmysqlclient not found

#> find / -name libmysqlclient\*
/usr/share/doc/libmysqlclient20
/usr/share/lintian/overrides/libmysqlclient20
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.5
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20
/var/lib/dpkg/info/libmysqlclient20:amd64.triggers
/var/lib/dpkg/info/libmysqlclient20:amd64.shlibs
/var/lib/dpkg/info/libmysqlclient20:amd64.list
/var/lib/dpkg/info/libmysqlclient20:amd64.md5sums

I have installed MySQL 5.7.18 Debian 9 packages (including the 
libmysqlclient-dev package) from the MySQL repo.

I’m not an expert, but is there a bug in the "./configure --prefix=/usr 
--sysconfdir=/etc --with-mysql"?

I’ve been building Dovecot with this Dockerfile using Ubuntu 16.04 for a while 
now without issue. Do I need some extra ./configure option to get it to find 
libmysqlclient.so.20?

Kevin

Re: System load spike on dovecot reload

2017-04-21 Thread KT Walrus

> On Apr 21, 2017, at 4:43 AM, d...@evilcigi.eu wrote:
> 
> Hi everyone,
> 
> I'm running dovecot with quite a lot of users and lots of active imap 
> connections (like 20'000). I'm using different user IDs for users, so I need 
> to have imap {service_count=1} - i.e. I have a lot of imap processes running.
> 
> Everything works fine until I reload the dovecot configuration. When that 
> happens, every client is forced to relogin at the same time, and that causes a 
> huge system load spike (2-3000 5 min load).
> 
> I was thinking that it would be great if dovecot wouldn't kick all the users 
> at the same time during reload, but somehow gradually, over a specified 
> interval. I'm aware of the shutdown_clients directive that could help, but I 
> don't like it - I do want the clients to get disconnected on dovecot shutdown, 
> and I also want them to relogin within a reasonably short time after reload.

You could run a Dovecot IMAP proxy in a Docker container on your server and run 
a separate Dovecot IMAP server in another container. Once both containers are 
up and running, enable the Dovecot IMAP proxy to start sending IMAP sessions to 
the IMAP server. When the time comes to change the Dovecot configuration, 
deploy another instance of Dovecot IMAP server with the new configuration. Once 
the new container is up and running, configure Dovecot IMAP proxy to direct a 
few specific test users to the new Dovecot IMAP server. When satisfied that the 
new server can handle new user sessions, configure Dovecot IMAP proxy to direct 
all new sessions to the new instance. After everything seems to be working fine 
for a period of time, start kicking users on the old Dovecot IMAP server off 
(at a comfortable pace) so they will reconnect to the new Dovecot IMAP server. 
When the old Dovecot IMAP server is no longer managing any sessions, it can be 
removed from the server (that is, the Docker container stopped and eventually 
removed completely).

Since all containers are running on the same host server, the old and new 
Dovecot containers will be configured to access the same Dovecot mail storage 
by mounting the host storage to both containers.
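
As a rough sketch of the container side of this (image names, tags and paths 
are hypothetical, and the proxy re-pointing happens in the Dovecot proxy's 
passdb, which is left out here):

# Old configuration, already serving sessions.
docker run -d --name dovecot-old \
  -v /srv/vmail:/srv/vmail -v /etc/dovecot-old:/etc/dovecot \
  example/dovecot:current

# Bring up the new configuration alongside it, sharing the same mail storage.
docker run -d --name dovecot-new \
  -v /srv/vmail:/srv/vmail -v /etc/dovecot-new:/etc/dovecot \
  example/dovecot:next

# After the proxy has drained all sessions from the old instance, retire it.
docker stop dovecot-old && docker rm dovecot-old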

I think Docker containers are the easiest way to manage Dovecot in production.

Kevin


Pigeonhole Sieve Extprograms Plugin

2017-04-08 Thread KT Walrus
Just discovered the Pigeonhole Sieve Extprograms plugin 
(https://wiki2.dovecot.org/Pigeonhole/Sieve/Plugins/Extprograms)…

I currently have Postfix configured to run a script that stores all incoming 
messages in a MySQL DB, and a batch job that retrieves the messages from the 
DB, processes them and delivers them to the recipients' Dovecot servers.

Seems to me it might be better to have Postfix deliver all incoming messages 
directly to a single Dovecot mailbox and use this Sieve plugin to do the whole 
job of updating the DB, processing the message, and delivering to the 
recipients.

Also, I need to update the DB when the recipient reads a message for the first 
time (setting the \Seen flag). Would it work best to execute a script using 
this Sieve plugin from an IMAP Sieve script that watches for \Seen flag 
changes?

This might be a whole lot better than the way I was thinking of doing it before.

I’m basically implementing mailing lists where the senders receive a 
notification reply with a link to a web page where the message may be 
edited/deleted before it is sent to recipients (after a fixed amount of time 
has elapsed). This gives the sender the opportunity to change his mind before 
it is too late or to send to recipients immediately (by marking the message as 
Urgent). Also, even after the message has been sent out, I want to allow the 
sender to be able to delete it in the recipients mailboxes if they have not 
read it yet (downloaded it to their email client) or to see which recipients 
have read the message (and when they read/downloaded it for the first time). 
Both the sender and the recipients will have Dovecot mailboxes so I want to 
give more control over the messages for their entire lifetimes (both for 
senders and for recipients) than normal email does.

Any thoughts?

Kevin


Updating mysql db when messages first read by IMAP user

2017-04-08 Thread KT Walrus
I need to implement updating a mysql db when the \SEEN flag is set by IMAP.

New message delivery is done by having Postfix store all new inbound messages 
in a MySQL DB; a batch job watches the DB for new messages and delivers them to 
the correct Dovecot server for each recipient. The batch job leaves a log entry 
in the DB (for each recipient) with the current timestamp and adds a header 
with the log id to the outgoing message that is sent to Dovecot. The log table 
has a column for the first-read timestamp, which is set to 0 initially.

I plan on implementing an IMAP Sieve script that watches for changes to the 
\SEEN flag and sends a mailto: notify message, containing the log id (from the 
message header) and the current date, to a mail address dedicated to keeping 
track of \SEEN flag changes. A batch job watches that mail address and, for the 
log id in each notify message, updates the DB with the current date if the row 
in the log table still has a first-read timestamp of 0.
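
The Dovecot side of that plan would look roughly like this (a sketch only; the 
script path is a placeholder, and the Sieve script itself would use the enotify 
extension to send the mailto: notification described above):

--- dovecot.conf
protocol imap {
  mail_plugins = $mail_plugins imap_sieve
}

plugin {
  sieve_plugins = sieve_imapsieve

  # Run a script after flag changes in any mailbox.
  imapsieve_mailbox1_name = *
  imapsieve_mailbox1_causes = FLAG
  imapsieve_mailbox1_after = file:/etc/dovecot/sieve/report-seen.sieve
}
---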

I plan on providing a web page that the sender can use to see whether the 
messages they have sent have been seen by the recipients. The web page will 
show other info too, but the part that I need to implement next is updating the 
DB when a sent message has been read by its recipients.

Is this a good way to do this?

Or, is there a better way than using an IMAP Sieve script to send notify 
messages?

Is there any other way to have a Sieve script update a mysql database?

Or, is there some other mechanism in Dovecot to use instead of an IMAP Sieve 
script?

Kevin

Re: Scaling to 10 Million IMAP sessions on a single server

2017-02-23 Thread KT Walrus

> On Feb 23, 2017, at 4:21 PM, Timo Sirainen  wrote:
> 
> On 23 Feb 2017, at 23.00, Timo Sirainen  wrote:
>> 
>> I mainly see such external databases as additional reasons for things to 
>> break. And even if not, additional extra layers of latency.
> 
> Oh, just thought that I should clarify this and I guess other things I said. 
> I think there are two separate things we're possibly talking about in here:
> 
> 1) Temporary state: This is what I was mainly talking about. State related to 
> a specific IMAP session. This doesn't take much space and can be stored in 
> the proxy's memory since it's specific to the TCP session anyway.

Moving the IMAP session state to the proxy so the backend can just have a fixed 
pool of worker processes is really what I think is necessary for scaling to 
millions of IMAP sessions. I still think it would be best to store this state 
in a way that you could at least “remember” the backend server that is 
implementing the IMAP session and the auth data. To me, that would be to use 
Redis for session state. Redis is a very efficient in-memory database whose 
data can be persisted and replicated. And it is popular enough to be well 
tested and easy to use (the API is very simple).

I use HAProxy for my web servers and HAProxy supports “stick” tables to map a 
client IP to the same backend server that was selected when the session was 
first established. HAProxy then supports proxy “peers” where the “stick” tables 
are shared between multiple proxies. That way, if a proxy fails, I can move the 
VIP over (or let DNS round-robin) to another proxy and still get the same 
backend (which has session state) without having the proxy pick some other 
backend (losing the backend session state). It might be fairly complex for 
HAProxy to share these “stick” tables across a cluster of proxies, but I would 
think it would be easy to use Redis to cache this data so all proxies could 
access this shared data.
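
For reference, the HAProxy side of that looks roughly like this (names and 
addresses are placeholders):

--- haproxy.cfg
peers imap_peers
peer proxy-a 10.0.0.1:1024
peer proxy-b 10.0.0.2:1024

backend imap
mode tcp
balance leastconn
stick store-request src
# The stick-table is replicated to the other proxy via the peers section.
stick-table type ip size 1m expire 30m peers imap_peers
server backend1 10.0.1.11:143 check
server backend2 10.0.1.12:143 check
---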

I’m not sure if Dovecot proxies would benefit from “sticks and peers” for IMAP 
protocol, but it would be nice if Dovecot proxies could maintain the IMAP 
session if the connections needed to be moved to another proxy (for failover). 
Maybe it isn’t so bad if a dovecot proxy all of a sudden “kicked” 10 Million 
IMAP sessions, but this might lead to a “login” flood for the remaining 
proxies. So, at least the authorization data (the passdb queries) should be 
shared between proxies using Redis.

> 
> 2) Permanent state: This is mainly about the storage. A lot of people use 
> Dovecot with NFS. So one possibility for storing the permanent state is NFS. 
> Another possibility with Dovecot Pro is to store it to object storage as 
> blobs and keep a local cache of the state. A 3rd possibility might be to use 
> some kind of a database for storing the permanent state. I'm fine with the 
> first two, but with 3rd I see a lot of problems and not a whole lot of 
> benefit. But if you think of the databases (or even NFS) as blob storage, you 
> can think of them the same as any object storage and use the same obox format 
> with them. What I'm mainly against is attempting to create some kind of a 
> database that has structured format like (imap_uid, flags, ...) - I'm sure 
> that can be useful for various purposes but performance or scalability isn't 
> one of them.

I would separate the permanent state into two parts: the indexes and the 
message data. As I understand it, the indexes are the metadata about the 
message data. I believe that, to scale, the indexes need fast read access, 
which means storing them on local NVMe SSD storage. But I want the indexes to 
be reliably shared between all backend servers in a Dovecot cluster. Again, 
this means to me that you need some fast in-memory database like Redis to be 
the “source of truth” for the indexes. I think read requests to Redis are very 
fast, so you might not have to keep a cache of the index on local NVMe SSD 
storage, but maybe I’m wrong.

As for the message data, I would really like the option of storing this data in 
some external database like MongoDB. MongoDB stores documents as JSON (actually 
BSON) data which seems perfect for email storage since emails are all text 
files. This would allow me to manage storage using the tools/techniques that an 
external database uses. MongoDB is designed to be hugely scalable and supports 
High Availability. I would rather manage a cluster of MongoDB instances 
containing a petabyte of data than trying to distribute the data among many 
Dovecot IMAP servers. The IMAP servers would then only be responsible for 
implementing IMAP and not be loaded down with all sorts of I/O so might be able 
to scale to 10 Million IMAP sessions per server.

If a MongoDB option wasn’t available, using cloud object storage would be a 
reasonable second choice. Unfortunately, the “obox” support you mentioned 
doesn’t seem to be open source. So, I am stuck using local disks (hopefully 

Re: Problem with Let's Encrypt Certificate

2017-02-23 Thread KT Walrus

> On Feb 20, 2017, at 4:01 PM, Joseph Tam  wrote:
> 
> yacinechaou...@yahoo.com writes:
> 
>> Interesting.  Is there any particular benefit in having only one file
>> for both certificate and private key ? I find that putting private key
>> in a separate file feels more secure.
> 
> It's convenient to have key and cert in one place if you don't need
> the certificate to be publically readable.  Keeping it in separate
> files would add slightly more security (defense in depth), that would
> protect from, for example, an admin fumble or bug in the SSL library.
> 
> "Michael A. Peters"  writes:
> 
>>> I use dehydrated (with Cloudflare DNS challenges) and as far as I know,
>>> it seems to generate a new private key every time.
>> 
>> Yeah that would be a problem for me because I implement DANE.
> 
> It's on my to-do list, but I think you can use dehydrated in signing
> mode.
> 
>   --signcsr (-s) path/to/csr.pem   Sign a given CSR, output CRT on stdout 
> (advanced usage)
> 
> In this way, you can reuse private key, as well as making it more
> secure by removing a privileged operations (private key acces) allowing
> dehydrated to be run as a non-privilged/separate user.

You might want to check out this blog:

http://www.internetsociety.org/deploy360/blog/2016/03/lets-encrypt-certificates-for-mail-servers-and-dane-part-2-of-2/
 


The author outlines a procedure for using DANE and Let’s Encrypt automatically 
generated certs in production.

I don’t really know much about DANE, but those wanting to implement it with 
free certs might want to check out this blog.

Kevin


Re: Scaling to 10 Million IMAP sessions on a single server

2017-02-22 Thread KT Walrus
> On Feb 22, 2017, at 2:44 PM, Timo Sirainen  wrote:
> 
> I guess mainly the message sequence numbers in IMAP protocol makes this more 
> difficult, but it's not an impossible problem to solve.

Any thoughts on the wisdom of supporting an external database for session state 
or even mailbox state (like using Redis or even MySQL)?

Also, would it help reliability or scalability to store a copy of the index 
data in an external database?

I want to use mdbox format but I have heard that these index files do get 
corrupted occasionally and have to be rebuilt (possibly using an older version 
of the index file to construct a new one). I worry that using mdbox might cause 
my users to see the IMAP flags suddenly reset back to a previous state (like 
seeing previously read messages becoming unread in their mail clients).

If a copy of the index data were stored in an external database, such problems 
of duplicate messages occurring in a dovecot cluster could be handled by having 
the cluster “lookup” the index data using the external database instead of the 
local copy stored on the server. An external database could easily implement 
unique serial numbers cluster-wide. In the site I’m working on building, I even 
use Redis to implement “message queues” between Postfix and Dovecot (via redis 
push/pop feature). Currently, I am only delivering new messages via IMAP 
instead of LMTP (no LMTP will be available to my backend mail servers, only 
IMAP).

If you stored the MD5 checksum of the index files (and even the message files) 
in the external database, you could also run a background process that would 
periodically check for corruption of the local index files using the checksums 
from the database, making mdbox format even more bulletproof.

And, the best thing about using an external database is that making the 
external database highly available is not a problem (as most sites already do 
that). The index data stored in the database would become the “source of truth” 
with the local index files/session data being an efficient cache for the 
mailstore. And, re-caching could occur as needed to make the whole cluster more 
reliable.

Kevin


Re: Scaling to 10 Million IMAP sessions on a single server

2017-02-22 Thread KT Walrus

> On Feb 21, 2017, at 11:12 PM, Christian Balzer  wrote:

> But even if you were to implement something that can handle 1 million or
> more sessions per server, would you want to?
> As in, if that server goes down, the resulting packet/authentication
> storm will be huge and most likely result in a proverbial shit storm later.
> Having more than 10% or so of your customers on one machine and thus
> involved in an outage that you KNOW will hit you eventually strikes me as
> a bad idea.

The idea would be to store session state in an external database like Redis. I 
use Redis for PHP session data on the web servers and Redis is implemented as a 
high-availability cluster (using Redis Sentinels). If the IMAP session state is 
maintained externally in a high-availability datastore, then rebooting a mail 
server or having it go down unexpectedly should not mean that all existing 
sessions are “kicked” and the clients would need to log in again. Rather, a 
backup mail server or servers could take the load and just use the 
high-availability datastore to manage the sessions that were on the old server.

One potential problem, if not using shared storage for the mailboxes, is that 
dovecot replication is asynchronous, so a small number of IMAP sessions might 
be out of date with respect to the data on the replacement server, and some of 
the data in Redis might need to be re-cached to reflect the state of the backup 
mailstore. Other than that, I don’t think there would be much of a "proverbial 
shit storm” caused by the failure of one mail server, even if that server were 
handling 1 million or more sessions. The remaining mail servers in the cluster 
would need to be able to absorb the load (maybe 3-server clusters would be the 
norm, so each remaining server would only have to take 50% of the sessions from 
the failed server while it is unavailable).

Kevin


Re: Scaling to 10 Million IMAP sessions on a single server

2017-02-22 Thread KT Walrus

> On Feb 21, 2017, at 11:12 PM, Christian Balzer <ch...@gol.com> wrote:
> 
> On Tue, 21 Feb 2017 09:49:39 -0500 KT Walrus wrote:
> 
>> I just read this blog: 
>> https://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/
>> about scaling to 12 Million Concurrent Connections on a single server and 
>> it got me thinking.
>> 
> 
> While that's a nice article, nothing in it was news to me or particular
> complex when one does large scale stuff, like Ceph for example. 
> 
>> Would it be possible to scale Dovecot IMAP server to 10 Million IMAP 
>> sessions on a single server?
>> 
> I'm sure Timo's answer will (or would, if he could be bothered) be along
> the lines of: 
> "Sure, if you give me all your gold and then some for a complete rewrite
> of, well, everything”.

It will be a long time before I would need to scale to 10 Million users and I 
will be happy to pay for the rewrite of the IMAP plugin when the time comes, if 
not done before then by someone else.

I have seen proposals for a new client protocol called JMAP that seem to be all 
about running a mail server at scale, the way an NGINX https web server can 
scale. That got me thinking about whether there is anything fundamental about 
IMAP that makes it difficult to scale. After looking into Dovecot’s current 
IMAP implementation, I think the approach taken would fundamentally have 
scaling issues (as in, one backend process per IMAP session). I see that a 
couple of years ago, work was done to “migrate” idling IMAP sessions to a 
single process that “remembers” the state of the IMAP session and can restore 
it back to a backend process when the idling is done.

But the only estimate that I have read about this idle migration is that you 
are likely to see only a 20% reduction in the number of concurrent processes 
you need if you are running at 50,000 IMAP sessions per mail server. A 20% 
reduction is not nearly enough of a benefit for scale. I would need to see at 
least an order of magnitude improvement (and hopefully several orders of 
magnitude).

So, in my mind, since these IMAP sessions are long lived with infrequent bursts 
of activity, a better approach would be to manage the session data in memory or 
in an external datastore and only process using the session data when there is 
activity. Much like Web Sockets and even HTTPS requests are handled today for 
installations that need to scale to support millions of active users.

As for Dovecot, I would think the work done to “migrate” idling IMAP sessions 
would be a good start to implementing managing a large number of sessions with 
a fixed pool of worker processes like other web servers do.

So, my question really is:

Is there anything about the IMAP protocol that would prevent an implementation 
from scaling to 10 Million users per server? Or, do we need to push for a new 
protocol like JMAP that has been designed to scale better (by being stateless 
with the server requests)?

Kevin


Scaling to 10 Million IMAP sessions on a single server

2017-02-21 Thread KT Walrus
I just read this blog: 
https://mrotaru.wordpress.com/2013/10/10/scaling-to-12-million-concurrent-connections-how-migratorydata-did-it/
 

 about scaling to 12 Million Concurrent Connections on a single server and it 
got me thinking.

Would it be possible to scale Dovecot IMAP server to 10 Million IMAP sessions 
on a single server?

I think the current implementation of having a separate process manage each 
active IMAP session (with the possibility of moving idling sessions to a single 
hibernate process) will never allow a single server to manage 10 Million IMAP 
sessions.

But, would it be possible to implement a new IMAP server plugin that uses a 
fixed configurable pool of “worker” processes, much like NGINX or PHP-FPM does. 
These servers can probably scale to 10 Million TCP connections, if the server 
is carefully tuned and has enough cores/memory to support that many active 
sessions.

I’m thinking that the new IMAP server could use some external database (e.g., 
Redis or Memcached) to save all the sessions state and have the “worker” 
processes poll the TCP sockets for new IMAP commands to process (fetching the 
session state from the external database when it has a command that is waiting 
on a response). The Dovecot IMAP proxies could even queue incoming commands to 
proxy many incoming requests to a smaller number of backend connections (like 
ProxySQL does for MySQL requests). That might allow each Dovecot proxy to 
support 10 Million IMAP sessions and a single backend could support multiple 
front end Dovecot proxies (to scale to 100 Million concurrent IMAP connections 
using 10 proxies for 100 Million connections and 1 backend server for 10 
Million connections).

Of course, the backend server may need to be beefy and have very fast NVMe SSDs 
for local storage, but changing the IMAP server to manage a pool of workers 
instead of requiring a process per active session, would allow bigger scale up 
and could save large sites a lot of money.

Is this a good idea? Or, am I missing something?

Kevin

Re: Problem with Let's Encrypt Certificate

2017-02-19 Thread KT Walrus
> That's one of the reasons I don't like Let's Encrypt, with one year certs it 
> is easier to look at the certs and see what is going to expire in the coming 
> month needing a new private key.

I use dehydrated (with Cloudflare DNS challenges) and as far as I know, it 
seems to generate a new private key every time. All newly generated certs are 
generated with the timestamp in the filenames and the soft links updated to 
point to the latest timestamped files. I have 4 domains each with an average of 
70 alt names, so Let’s Encrypt is saving me money. I simply run the dehydrated 
script every week in a cron job to regenerate the certs (if there is less than 
30 days until the current cert is set to expire) and rotate in any new certs.
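
The weekly run is just a crontab entry along these lines (the path is a 
placeholder; dehydrated's --cron mode only renews certs that are close to 
expiry):

# Sunday 03:00: renew anything within the renewal window and log the output.
0 3 * * 0  /usr/local/bin/dehydrated --cron >> /var/log/dehydrated.log 2>&1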

Of course, I run my sites using Docker and it is very easy to automate renewing 
certs. Note that I have had the dehydrated script fail occasionally (mostly 
with 500 Server Busy errors from the Let’s Encrypt ACME server), which 
sometimes means I have to wait a week before the script succeeds.

Automating cert renewal and cert rotation into production using Let’s Encrypt 
and Docker is a huge win for me, and has taken the pain out of manually doing 
this once a year for each domain (and paying high fees for the privilege). And 
using the DNS-01 challenge type means that I can easily generate certs for my 
mail domain (that doesn’t have a web server). In fact, using Cloudflare DNS is 
free so even DNS for my mail domain doesn’t cost anything.

Kevin

> On Feb 19, 2017, at 2:00 AM, Michael A. Peters  wrote:
> 
> On 02/18/2017 10:24 PM, Robert L Mathews wrote:
>> On 2/17/17 1:38 PM, chaouche yacine wrote:
>> 
>>> Seems wrong to me too, Robert. If you put your private key inside
>>> your certificate, won't it be sent to the client along with it ?
>> 
>> No; any SSL software that uses the file will extract the parts it needs
>> from it and convert them to its internal format for future use. It never
>> literally sends the file contents anywhere.
>> 
>> It's common and often recommended for a PEM file to contain everything
>> needed; see, for example, the bottom section of:
>> 
>> https://www.digicert.com/ssl-support/pem-ssl-creation.htm
>> 
>> Doing this avoids the key and certificate files getting out of sync later.
>> 
> 
> I don't use Let's Encrypt but to avoid them getting out of sync, I simply put 
> a time stamp in the filename, e.g.
> 
> /etc/pki/tls/private/deviant.email-20160427.key
> /etc/pki/tls/certs/deviant.email-20160427.crt
> 
> I never re-use a private key, when a cert expires I always generate a new 
> private key with a new CSR.
> 
> That's one of the reasons I don't like Let's Encrypt, with one year certs it 
> is easier to look at the certs and see what is going to expire in the coming 
> month needing a new private key.
> 
> Let's Encrypt does 3 month certs and re-uses the private key when it 
> generates a new cert.
> 
> I'm sure it probably could be scripted to use a new private key every time 
> but then I have to have to update the TLSA record frequently (and you have to 
> have the new fingerprint TLSA record in DNS before you start using it) and 
> that would be a hassle.
> 
> I'm sure it probably could also be scripted to use a new private key every 
> fourth time, too.
> 
> But for me its just easier to have certs that last a year and I can easily 
> visually see what is going to need my action.


Re: dovecot config for 1500 simultaneous connection

2017-02-14 Thread KT Walrus

> On Feb 14, 2017, at 5:50 PM, Joseph Tam  wrote:
> 
> Another related security situation I've encountered is when a fraudster
> has phished a user's password.  A user/admin changes the password,
> but forgets to invalidate dovecot's cached entry, allowing the fraudster
> continuing access to the mail account until the TTL expires or the user logs
> in with new credentials.  I've been burnt by this one.

I’m no expert, but should the code that updates the password hash in the 
database also immediately try to log into dovecot as that user with a fake 
password?

Authentication would fail, but would the cache be updated?

Or, doesn’t Dovecot expire the cached entry on failed authentication?
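
In case it helps, there is also a direct way to drop the cached credentials 
from the password-change code path instead of relying on a failed login (the 
user name is just an example):

# Flush the auth cache entry for one user right after the password change...
doveadm auth cache flush user@example.com

# ...or flush the entire auth cache.
doveadm auth cache flush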

Re: dovecot config for 1500 simultaneous connection

2017-02-12 Thread KT Walrus
Thanks for the info. I do have one further question for you. On your servers 
that are currently handling 50k IMAP sessions, how many users does that 
correspond to? Since many users will have multiple IMAP sessions on multiple 
devices, I’d like to hear about some real-world numbers that could be used for 
budgeting a new project like mine.

Also, do you use Dovecot IMAP proxies in front of your backend servers? If so, 
how many IMAP sessions can one proxy server handle (assuming the proxy does 
authorization using MySQL running on a separate server)? And, could the proxy 
server be tuned to help in optimizing mostly IDLE backend sessions?

> On Feb 12, 2017, at 1:58 AM, Christian Balzer <ch...@gol.com> wrote:
> 
> 
> Hello,
> 
> On Fri, 10 Feb 2017 14:50:03 -0500 KT Walrus wrote:
> 
>>> 1. 256GB of real RAM, swap is for chums.  
>> 
>> Are you sure that 100,000 IMAP sessions wouldn’t work well with SWAP, 
>> especially with fast SSD storage (which is a lot cheaper than RAM)?
>> 
> 
> I'm sure about tax and death, not much else.
> 
> But as a rule of thumb I'd avoid swapping out stuff on production servers,
> even if it were to SSDs.
> Incidentally the servers I'm talking about here have their OS and swap on
> Intel DC S3710s (200GB) and the actual storage on plenty of 1.6TB DC
> S3610s.
> 
> Relying on the kernel to make swap decisions is likely to result in much
> reduced performance even with fast SWAP when you're overcommitting things
> on that scale.
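
If swap has to stay enabled at all, the usual knob for discouraging the kernel 
from paging out long-lived imap processes is vm.swappiness (a sketch, not a 
recommendation for any particular value):

$ sysctl -w vm.swappiness=1
$ echo 'vm.swappiness = 1' > /etc/sysctl.d/99-mail.conf   # persist across reboots; path is an assumption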
> 
> 
> But read on.
> 
>> Seems that these IMAP processes are long lived processes (idling most of the 
>> time) that don’t need that much of the contents of real memory available for 
>> much of the life of the process. I use a database proxy in front of MySQL 
>> (for my web apps) so that there can be a large number of TCP connections to 
>> the proxy where the frontend requests are queued for execution using a small 
>> number of backend connections.
>> 
>> Could Dovecot IMAP be re-written to be more efficient so it works more like 
>> MySQL (or other scalable data servers) that could handle a million or more 
>> IMAP sessions on a server with 32GBs or less of RAM? Those IMAP sessions 
>> aren’t doing much most of the time and shouldn’t really average 2MB of 
>> active data per session that needs to be resident in main memory at all 
>> times.
>> 
> See IMAP hibernation:
> https://www.mail-archive.com/dovecot@dovecot.org/msg63429.html 
> <https://www.mail-archive.com/dovecot@dovecot.org/msg63429.html>
> 
> I'm going to deploy/test this in production in about 2 months from now,
> but if you look at the link and the consequent changelog entries you'll see
> that it has certain shortcomings and bug fixes in pretty much each release
> after it was introduced.
> 
> But this is the correct way to tackle things, not SWAP.
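
Enabling it is mostly a one-line setting (a minimal sketch; the timeout value 
is just an example):

---
imap_hibernate_timeout = 30s   # example value; the default of 0 disables hibernation

# Depending on how mail processes run, the imap-hibernate and imap-master
# unix_listener permissions may also need adjusting so sessions can be handed
# back and forth between the imap and imap-hibernate services.
---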
> 
> Alas I'm not expecting miracles and if more than 20% of the IMAP sessions
> here will be hibernated at any given time I'd be pleasantly surprised. 
> 
> Because between:
> 
> 1. Finding a sensible imap_hibernate_timeout. 
> 
> 2. Having well behaved clients that keep idling instead of restarting the
> sequence (https://joshdata.wordpress.com/2014/08/09/how-bad-is-imap-idle/ 
> <https://joshdata.wordpress.com/2014/08/09/how-bad-is-imap-idle/>)
> 
> 3. Having lots of mobile clients who either get disconnected (invisible to
> Dovecot) or have aggressive IDLE timers to overcome carrier NAT timeouts
> (a large mobile carrier here times out idle TCP sessions after 2 minutes,
> forcing people to use 1 minute IDLE renewals, making 1. up there a
> nightmare).
> 
> 4. Having really broken clients (don't ask, I can't tell) which open IMAP
> sessions, don't put them into IDLE and thus let them expire after 30
> minutes.
> 
> the pool of eligible IDLE sessions isn't as big as it could be, in my case
> at least.
> 
>> My mail server isn’t that large yet as I haven’t fully deployed Dovecot 
>> outside my own small group yet, but it would be nice if scaling Dovecot IMAP 
>> to millions of users wasn’t limited to 50,000 IMAP sessions on a server...
>> 
> 
> Scaling up is nice and desirable from a cost (rack space, HW) perspective,
> but the scalability of things OTHER than Dovecot, as I pointed out, plus
> that little detail of failure domains (do you really want half of your
> eggs in one basket?) argues for scaling out after a certain density. 
> 
> I'm feeling my way there at this time, but expect more than 100k sessions
> per server to be tricky.
> 
> Lastly, when I asked about 500k sessions per server here not so long ago,
> ( http://www.dovecot.org/list/dovecot/2016-November/106284.htm

Re: dovecot config for 1500 simultaneous connection

2017-02-10 Thread KT Walrus
> 1. 256GB of real RAM, swap is for chums.

Are you sure that 100,000 IMAP sessions wouldn’t work well with SWAP, 
especially with fast SSD storage (which is a lot cheaper than RAM)?

Seems that these IMAP processes are long lived processes (idling most of the 
time) that don’t need that much of the contents of real memory available for 
much of the life of the process. I use a database proxy in front of MySQL (for 
my web apps) so that there can be a large number of TCP connections to the 
proxy where the frontend requests are queued for execution using a small number 
of backend connections.

Could Dovecot IMAP be re-written to be more efficient so it works more like 
MySQL (or other scalable data servers) that could handle a million or more IMAP 
sessions on a server with 32GBs or less of RAM? Those IMAP sessions aren’t 
doing much most of the time and shouldn’t really average 2MB of active data per 
session that needs to be resident in main memory at all times.

My mail server isn’t that large yet as I haven’t fully deployed Dovecot outside 
my own small group yet, but it would be nice if scaling Dovecot IMAP to 
millions of users wasn’t limited to 50,000 IMAP sessions on a server...

> On Feb 10, 2017, at 11:07 AM, Christian Balzer <ch...@gol.com> wrote:
> 
> On Fri, 10 Feb 2017 07:59:52 -0500 KT Walrus wrote:
> 
>>> 1500 IMAP sessions will eat up about 3GB alone.  
>> 
>> Are you saying that Dovecot needs 2MB of physical memory per IMAP session?
>> 
> That depends on the IMAP session, read the mailbox size and index size,
> etc.
> Some are significantly larger:
> ---
>     PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
> 1033864 mail  20   0 97600  67m  54m S     0  0.1  0:01.15 imap
> 
> ---
> 
> But yes, as somebody who has mailbox servers with 55k+ sessions the average
> is around 1.6MB. 
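
A quick way to sanity-check that number on a running box is to average the 
resident set size of the imap processes (a sketch; assumes Linux procps ps, and 
note that RSS counts shared pages, so it overstates per-process unique memory):

$ ps -C imap -o rss= | awk '{ sum += $1; n++ } END { if (n) printf "%d procs, %.1f MB avg RSS\n", n, sum/n/1024 }'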
> 
>> If I want to support a max 100,000 IMAP sessions per server, I should 
>> configure the server to have at least 200GBs of SWAP?
>> 
> You will want:
> 1. 256GB of real RAM, swap is for chums.
> 2. Understanding how to tune Dovecot and more importantly the overall
> system to such a task (see that PID up there?).
> 3. Be willing to deal with stuff like top and ps taking ages to start/run
> and others like atop actually killing dovecot (performance wise, not
> literally) when doing their obviously flawed cleanup on exit. Some things
> clearly do NOT scale well.
> 
> My current goal is to have 100k capable servers that work well, 200k in a
> failover scenario, but that won't be particularly enjoyable.
> 
> Christian
> 
>>> On Feb 10, 2017, at 3:58 AM, Christian Balzer <ch...@gol.com> wrote:
>>> 
>>> On Fri, 10 Feb 2017 01:13:20 +0530 Rajesh M wrote:
>>> 
>>>> hello
>>>> 
>>>> could somebody with experience let me know the dovecot config file 
>>>> settings to handle around 1500 simultaneous connections over pop3 and 1500 
>>>> connection over imap simultaneously.
>>>> 
>>> 
>>> Be very precise here, you expect to see 1500 as the result of 
>>> "doveadm who |grep pop3 |wc -l"?
>>> 
>>> Because that implies an ungodly number of POP3 connects per second, given
>>> the typically short duration of these.
>>> 
>>> 1500 IMAP connections (note that frequently a client will have more than
>>> the INBOX open and thus have more than one session and thus process on the
>>> server) are a much easier proposition, provided they are of the typical
>>> long lasting type.
>>> 
>>> So can you put a number to your expected logins per second (both protocols)?
>>> 
>>>> my server
>>>> 
>>>> server configuration
>>>> hex core processor, 16 GB RAM, 1 x 600 GB 15k rpm for the main drive and 
>>>> 2 x 2000 GB HDD for data (no RAID)
>>>> 
>>> No RAID and no other replication like DRBD?
>>> Why would you even bother?
>>> 
>>> How many users/mailboxes in total with what quota? 
>>> 
>>> 1500 IMAP sessions will eat up about 3GB alone.
>>> You will want more memory, simply to keep all relevant SLAB bits (inodes,
>>> dentries) in RAM. 
>>> 
>>> If you really have several hundred logins/s, you're facing several
>>> bottlenecks:
>>> 1. Login processes themselves (easily fixed by high performance mode)
>>> 2. Auth processes (that will depend on your backends, method mostly)
>>> 3. Dovecot master process (spawning mail processes)
>>> 
>>> The latter is a single-threaded process, so it will benefit from a faster CPU core.

Re: dovecot config for 1500 simultaneous connection

2017-02-10 Thread KT Walrus
> 1500 IMAP sessions will eat up about 3GB alone.

Are you saying that Dovecot needs 2MB of physical memory per IMAP session?

If I want to support a max 100,000 IMAP sessions per server, I should configure 
the server to have at least 200GBs of SWAP?

> On Feb 10, 2017, at 3:58 AM, Christian Balzer  wrote:
> 
> On Fri, 10 Feb 2017 01:13:20 +0530 Rajesh M wrote:
> 
>> hello
>> 
>> could somebody with experience let me know the dovecot config file settings 
>> to handle around 1500 simultaneous connections over pop3 and 1500 connection 
>> over imap simultaneously.
>> 
> 
> Be very precise here, you expect to see 1500 as the result of 
> "doveadm who |grep pop3 |wc -l"?
> 
> Because that implies an ungodly number of POP3 connects per second, given
> the typically short duration of these.
> 
> 1500 IMAP connections (note that frequently a client will have more than
> the INBOX open and thus have more than one session and thus process on the
> server) are a much easier proposition, provided they are of the typical
> long lasting type.
> 
> So can you put a number to your expected logins per second (both protocols)?
> 
>> my server
>> 
>> server configuration
>> hex core processor, 16 GB RAM, 1 x 600 GB 15k rpm for the main drive and 
>> 2 x 2000 GB HDD for data (no RAID)
>> 
> No RAID and no other replication like DRBD?
> Why would you even bother?
> 
> How many users/mailboxes in total with what quota? 
> 
> 1500 IMAP sessions will eat up about 3GB alone.
> You will want more memory, simply to keep all relevant SLAB bits (inodes,
> dentries) in RAM. 
> 
> If you really have several hundred logins/s, you're facing several
> bottlenecks:
> 1. Login processes themselves (easily fixed by high performance mode)
> 2. Auth processes (that will depend on your backends, method mostly)
> 3. Dovecot master process (spawning mail processes)
> 
> The latter is a single-threaded process, so it will benefit from a faster
> CPU core.
> It can be dramatically improved by enabling process reuse, see:
> http://wiki.dovecot.org/PerformanceTuning
> 
> However that also means more memory usage.
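
The "high performance mode" referenced there boils down to reusing login 
processes instead of forking one per connection (a sketch; the numbers are 
examples, not recommendations):

---
service imap-login {
  service_count = 0        # reuse processes instead of one process per connection
  process_min_avail = 4    # e.g. roughly one per CPU core
  vsz_limit = 1G           # long-lived login processes serving many connections need more headroom
}
---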
> 
> 
> 
> Christian
> 
>> 
>> thanks
>> rajesh
>> 
> 
> [snip]
> -- 
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com Global OnLine Japan/Rakuten Communications
> http://www.gol.com/


Re: use IMAPSIEVE to update database with last_read date

2016-11-30 Thread KT Walrus
> if you're instead interested in the date that the user *first* read the 
> message, you could capture the STORE \Seen event.

Yes. That is what I intend to do. That is, the sieve script will run on change 
of FLAGs. I really just want to verify that the user is reading certain emails 
that I send. I don’t need to track every time the user reads the message (which 
the mail server would never see anyway since the message is fetched on first 
read and then stored locally in the client).
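
For reference, a minimal sketch of wiring that up with Pigeonhole’s IMAPSieve 
support; the script name, paths and the reporting command are assumptions, and 
the flag-change trigger is the part that matters:

---
protocol imap {
  mail_plugins = $mail_plugins imap_sieve
}
plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms
  # run a script after flag changes in any mailbox
  imapsieve_mailbox1_name = *
  imapsieve_mailbox1_causes = FLAG
  imapsieve_mailbox1_after = file:/etc/dovecot/sieve/report-seen.sieve
  sieve_pipe_bin_dir = /etc/dovecot/sieve-pipe
}
---

and report-seen.sieve, which hands the message to a site-specific script 
(update-last-read.sh is hypothetical):

---
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.user" "*" {
  set "user" "${1}";
}

# pass the mailbox owner to the script; the message itself arrives on stdin
pipe :copy "update-last-read.sh" ["${user}"];
---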

I’m also planning on delivering most messages by IMAP to a “next day” mailstore 
and use doveadm sync (during the early morning) of the “next day” mailstore to 
the “current day” mailstore that the clients connect to. Some messages will be 
delivered directly to the “current day” mailstore via Postfix/LMTP which should 
be copied into the “next day” mailstore during the morning sync.

I’m hoping that doveadm sync is really bullet-proof and won’t add to my 
administration burden. Using IMAPSIEVE to track users’ modifications of their 
mailboxes will really help keep the website’s MySQL database up to date with 
changes on the Dovecot side of the site.

Kevin


> On Nov 30, 2016, at 7:07 AM, Stephan Bosch <step...@rename-it.nl> wrote:
> 
> 
> 
> Op 30-11-2016 om 11:37 schreef Stephan Bosch:
>> 
>> 
>> Op 29-11-2016 om 19:29 schreef KT Walrus:
>>> Just noticed that Dovecot supports the IMAPSIEVE extension…
>>> 
>>> Could I use this extension to update an external database with the date 
>>> that the user last read the message?
>> 
>> No, IMAPSieve is only triggered by modifications: APPEND, COPY, MOVE and 
>> STORE.
>> 
> 
> BTW,
> if you're instead interested in the date that the user *first* read the 
> message, you could capture the STORE \Seen event.
> 
> Regards,
> 
> Stephan.


use IMAPSIEVE to update database with last_read date

2016-11-29 Thread KT Walrus
Just noticed that Dovecot supports the IMAPSIEVE extension…

Could I use this extension to update an external database with the date that 
the user last read the message?

My app sends certain “notification” messages to the user’s dovecot mail 
address. The user reads the messages in their dovecot mailboxes only using 
IMAP. I want to update my app’s database to record this read time for all 
“notification” messages sent by the app.

Seems to me I could write a short sieve script to send a “notify” message to an 
app specific address that my app “watches” and updates the appropriate database 
record with the last_read time.

Is this workable for production deploy?

Or, is there a better way for a sender to be notified when the recipient 
actually reads the message? The sender will be my app and the recipient is a 
dovecot mailbox accessed by IMAP.

Kevin

Re: logging TLS SNI hostname

2016-10-17 Thread KT Walrus

> On Oct 17, 2016, at 2:41 AM, Arkadiusz Miśkiewicz  wrote:
> 
> On Monday 30 of May 2016, Arkadiusz Miśkiewicz wrote:
>> Is there a way to log SNI hostname used in TLS session? Info is there in
>> SSL_CTX_set_tlsext_servername_callback, dovecot copies it to
>> ssl_io->host.
>> 
>> Unfortunately I don't see it expanded to any variables (
>> http://wiki.dovecot.org/Variables ). Please consider this to be a feature
>> request.
>> 
>> The goal is to be able to see which hostname client used like:
>> 
>> May 30 08:21:19 xxx dovecot: pop3-login: Login: user=, method=PLAIN,
>> rip=1.1.1.1, lip=2.2.2.2, mpid=17135, TLS, SNI=pop3.somehost.org,
>> session=
> 
> Dear dovecot team, would it be possible to add such a variable ^ ?
> 
> That would be a neat feature because the server operator would know what hostname 
> the client uses to connect to the server (which is really useful in the case of 
> many hostnames pointing to a single IP).

I’d love to be able to use this SNI domain name in the Dovecot IMAP proxy for 
use in the SQL password_query. This would allow the proxy to support multiple 
IMAP server domains each with their own set of users. And, it would save me 
money by using only the IP of the proxy for all the IMAP server domains instead 
of giving each domain a unique IP. 
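
Until such a variable exists, the closest thing in stock Dovecot is the 
local_name block, which matches on the TLS SNI name sent by the client, though 
it only selects per-hostname certificates/settings and doesn’t by itself reach 
the passdb query (a sketch; names and paths are placeholders):

---
local_name g23.example.com {
  ssl_cert = </etc/ssl/certs/g23.example.com.pem
  ssl_key  = </etc/ssl/private/g23.example.com.key
}
---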

Kevin

Re: Detect IMAP server domain name in Dovecot IMAP proxy

2016-10-12 Thread KT Walrus

> On Oct 12, 2016, at 2:07 PM, Rick Romero <ad...@vfemail.net> wrote:
> 
> Quoting KT Walrus <ke...@my.walr.us>:
> 
>> I’m in the process of setting up a Dovecot IMAP proxy to handle a
>> number
>> of IMAP server domains. At the current time, I have my users divided
>> into 70 different groups of users (call them G1 to G70). I want each
>> group to configure their email client to access their mailboxes at a
>> domain name based on the group they belong to (e.g., g1.example.com
>> <http://g1.example.com/>, g2.example.com <http://g2.example.com/>, …,
>> g70.example.com <http://g70.example.com/>). I will only support TLS
>> encrypted IMAP connections to the Dovecot IMAP proxy (‘ssl=yes’ in
> the
>> inet_listener). My SSL cert has alternate names for all 70 group domain
>> names.
>> 
>> I want the group domain to only support users that have been assigned to
>> the group the domain name represents. That is, a user assigned to G23
>> would only be allowed to configure their email client for the IMAP
>> server named g23.example.com <http://g23.example.com/>.
>> 
>> My solution during testing has been to have the Dovecot IMAP proxy
>> listen on different ports: 9930-. I plan to purchase 70 IPs, one for
>> each group, and redirect traffic on port 993 to the appropriate Dovecot
>> IMAP proxy port based on the IP I assign to the group domain name in the
>> site’s DNS. The SQL for handling the IMAP login uses the port number of
>> the inet_listener
>> 
>> I think this could work in production, but it will cost me extra to rent
>> the 70 IPs and might be a pain to manage. Eventually, I would like to
>> have over 5,000 groups so requiring an IP per group is less than ideal.
>> I also think having Dovecot IMAP proxy have 5,000 inet_listeners might
>> not work so well or might create too many threads/processes/ports to fit
>> on a small proxy server.
>> 
>> I would rather have 1 public IP for each Dovecot IMAP proxy and somehow
>> communicate to the userdb which group domain name was configured in the
>> email client so only the users assigned to this group can login with
>> that username.
>> 
>> Anyone have any ideas?
>>  
> 
> Do you have a SQL userdb?
> Create a table or a 'host' field for the user.
> 
> user_query = SELECT CONCAT(pw_name, '@', pw_domain) AS user, "89" as uid,
> "89" as gid, host, 'Y' AS proxy_maybe, pw_dir as home, pw_dir as mail_home,
> CONCAT('maildir:', pw_dir , '/Maildir/' ) as mail_location FROM vpopmail
> WHERE pw_name = '%n' AND pw_domain = '%d'
> 
> (mine is based on qmail/vpopmail)
> 
> Then populate 'host' for each user if you don't have any other way of
> programmatically determining the host.
> 

This doesn’t solve my problem. Indeed, I am doing this already:

password_query = SELECT password, 'Y' as proxy, CONCAT_WS('@',username,domain) 
AS destuser, pms AS host, 'secretmaster' AS master, 'secretpass' AS pass FROM 
users WHERE username='%n' and domain='%d' and (group_id=%{lport}-9930 or 
%{lport}=143 or '%s'='lmtp') and mailbox_status='active';

This is the password_query I am using on the Dovecot IMAP proxy. This proxy 
doesn’t use a user_query (only the real backend Dovecot servers do). I allow 
authentication on port 143 only for Postfix. Port 143 isn’t exposed to the 
email clients (only 993 is used by email clients).

Anyway, checking %{lport} ensures that only IMAP logins using the proper domain 
name (its IP or port) can log the user in.

I’m looking to find out the IMAP server name that the user configured their 
email client with and make sure I only allow users to access their mailboxes 
using their assigned IMAP server name.

Note that the problem I am trying to solve is this: if the user configures their 
email client with the wrong IMAP server name (e.g. using g2.example.com instead 
of g23.example.com) and I later move G23 to another datacenter while leaving G2 
in the current datacenter, they will not be able to access their emails, since 
the G2 datacenter no longer has their mailboxes and the mailboxes for G23 are 
only in the G23 datacenter. My users aren’t email experts and I don’t want them 
to have to discover that they made a typo in the original setup long after they 
have forgotten how they set up the client in the first place.

To start with, the mailboxes will all be in the same datacenter, but I want to 
be able to move some of the mailboxes to be geographically closer to the users 
of those mailboxes (like Western users using Western servers while Eastern 
users use a datacenter closer to the East coast).

Kevin


Detect IMAP server domain name in Dovecot IMAP proxy

2016-10-12 Thread KT Walrus
I’m in the process of setting up a Dovecot IMAP proxy to handle a number of 
IMAP server domains. At the current time, I have my users divided into 70 
different groups of users (call them G1 to G70). I want each group to configure 
their email client to access their mailboxes at a domain name based on the 
group they belong to (e.g., g1.example.com , 
g2.example.com , …, g70.example.com 
). I will only support TLS encrypted IMAP connections 
to the Dovecot IMAP proxy (‘ssl=yes’ in the inet_listener). My SSL cert has 
alternate names for all 70 group domain names.

I want the group domain to only support users that have been assigned to the 
group the domain name represents. That is, a user assigned to G23 would only be 
allowed to configure their email client for the IMAP server named 
g23.example.com. 

My solution during testing has been to have the Dovecot IMAP proxy listen on 
different ports: 9930-. I plan to purchase 70 IPs, one for each group, and 
redirect traffic on port 993 to the appropriate Dovecot IMAP proxy port based 
on the IP I assign to the group domain name in the site’s DNS. The SQL for 
handling the IMAP login uses the port number of the inet_listener 

I think this could work in production, but it will cost me extra to rent the 70 
IPs and might be a pain to manage. Eventually, I would like to have over 5,000 
groups so requiring an IP per group is less than ideal. I also think having 
Dovecot IMAP proxy have 5,000 inet_listeners might not work so well or might 
create too many threads/processes/ports to fit on a small proxy server.

I would rather have 1 public IP for each Dovecot IMAP proxy and somehow 
communicate to the userdb which group domain name was configured in the email 
client so only the users assigned to this group can login with that username.

Anyone have any ideas?

For HTTP traffic, it is easy to query the host in the HTTP Request, but I don’t 
think IMAP traffic has such host info in it. Does the Dovecot IMAP proxy 
receive the hostname from the email client when exchanging SSL certs (like SNI 
for HTTPS)?

Or, maybe I should have the group domain in the username used to log in with 
(e.g., username+...@example.com or usern...@g23.example.com). I don’t like to 
make the user configure their email client to log in with a name that is 
different than their mailbox address. It is simpler to just have them configure 
their email client with usern...@example.com for both authentication and for 
the from/sender headers in the messages. 

Anyway, any ideas of how to set this up in production?

Re: any news Enterprise Repository Access?

2016-07-27 Thread KT Walrus
> That Dovecot still offers the EE build for free is great, but a road map of
> what the future subscription plans are would be nice; e.g. a low-cost fee
> for just the repos, higher fees for support, etc. That's what I missed.

I’d like to see Dovecot distributed in the Docker Store (coming soon) or the 
Docker Hub. Most enterprises are moving to deploying their apps in containers 
and these containers can run on your laptop the same as they run in production. 
Most modern Linux distributions support Docker these days.

I build and run Dovecot in Docker now (built from latest released sources 
against Ubuntu 16.04), and while I am still in development, I’m sure Docker is 
the way to run my apps and will run great for deployment and maintenance.

Kevin

> On Jul 27, 2016, at 2:31 AM, Götz Reinicke - IT Koordinator 
>  wrote:
> 
> Am 26.07.16 um 21:12 schrieb Alexander Dalloz:
>> Am 26.07.2016 um 14:41 schrieb Sami Ketola:
>>> 
 On 26 Jul 2016, at 09:18, Götz Reinicke - IT Koordinator
  wrote:
 
 Hi,
 we had access to the repository and it was working fine. But as we can't
 get the 2.2.25 update, I was looking into the repo folders and there are
 RPMs "just" for RHEL 6// but not 5 any more.
 
 My be I missed the latest discussions or announcements? Could you give
 me an update on information and may be the RHEL 5 RPMs too?
 
 Thanks a lot and regards . Götz
>>> 
>>> 
>>> Dovecot EE build support for RHEL 5 / CentOS 5 is going away soon
>>> even if we still made one more build for CentOS 5. Please upgrade
>>> your system.
>>> 
>>> Sami
>> 
>> Not only because of dovecot
>> 
>> [21:09:27 CEST]  CentOS 5 will go EOL on 31 March, 2017 -- in
>> 35 weeks, 2 days, 4 hours, 50 minutes, and 47 seconds but be aware
>> that it is now in production phase 3 and only receives critical updates
>> 
>> Alexander
> Thanks for both your feedback, and yes, it is EOL, but as you mentioned, in
> 35+ weeks. O.K. Red Hat never did update dovecot to the current version,
> and like a lot of customers we think the update policy for some software
> should be changed to support more modern versions of "core" server
> services. But that's not a dovecot topic ;)
> 
> That Dovecot still offers the EE build for free is great, but a road map of
> what the future subscription plans are would be nice; e.g. a low-cost fee
> for just the repos, higher fees for support, etc. That's what I missed.
> 
>Regards . Götz
> 
> 
> 
> 
> 


Re: archive all saved IMAP messages

2016-06-18 Thread KT Walrus
One more question:

Does “doveadm sync” replicate messages with refcount=0 at the time of sync’ing?

The reason I ask is that, in my case of sync’ing with a “next-day” mail server 
overnight, all messages that might have been saved by IMAP but deleted shortly 
thereafter, should still be in the user’s mailbox with refcount=0. Correct?

If “doveadm sync” does replicate all messages, whether expunged or not, to the 
“next-day” server, I can run “doveadm search -A” to find all messages saved by 
IMAP and archive them. This would not put any extra load on my mail servers and 
allow me to run this extra processing at night on a dedicated “next-day” mail 
server.

Kevin

> On Jun 18, 2016, at 11:20 AM, KT Walrus <ke...@my.walr.us> wrote:
> 
>> How do I configure Dovecot IMAP for this use case?
> 
> I just thought of one possible way to do this:
> 
> 1. Backup all mail servers every hour or so using “doveadm sync -1” to a mail 
> server that uses Maildir (my source mail servers are using mdbox)
> 2. Run a script on the backup server that uses the “find” command to identify 
> all new messages in the Maildir folders that have appeared since the last 
> backup
> 3. Send these new messages to my archive server if the headers in the 
> messages indicate that they were stored by the user directly and not 
> “Received” from outside the mail server
> 
> This will pick up messages that have been saved by APPEND and not deleted 
> before the backup command runs.
> 
> As I type this, maybe I should just do this hourly check on the source mail 
> servers using the “doveadm search -A” command combined with the “doveadm 
> fetch” command to extract the newly saved messages and check their headers to 
> determine whether the message was saved by IMAP. Newly saved IMAP messages 
> could then be sent to the archive mail server (or queued in Redis for further 
> processing).
> 
>> On Jun 18, 2016, at 10:46 AM, KT Walrus <ke...@my.walr.us> wrote:
>> 
>> 
>>> On Jun 17, 2016, at 8:01 PM, chaouche yacine <yacinechaou...@yahoo.com> 
>>> wrote:
>>> 
>>> I'm also interested in learning how to do this best. Last time I thought 
>>> about it is if users have a different e-mail address on the archive server, 
>>> you can setup a BCC map in postfix that matches the pair of emails (primary 
>>> email -  archive email), this will automatically send all sent messages in 
>>> the inbox of the archived email account. In that archive server you can 
>>> setup sieve rules to move the emails to the sent folder. But that's rather 
>>> a complicated solution; besides, it doesn't work for the Drafts folder.
>> 
>> Yes. I already have Postfix set up to send a copy of all incoming messages 
>> to an archival Dovecot mail server. This was rather easy to do since I have 
>> Postfix deliver all inbound messages to a shell script that queues the 
>> message in a Redis queue and then sends the message to the archive.
>> 
>> But, my issue is capturing all IMAP saved messages (via IMAP APPEND 
>> command). Is there any way to “hook” into the APPEND action to send a copy 
>> of the message to the archival Dovecot mail server? I’d really like to just 
>> post-process the APPENDED messages with a shell script that is similar to my 
>> Postfix shell script (that queues to Redis and sends to the archive).
>> 
>> How do I configure Dovecot IMAP for this use case?
>> 
>> Kevin
>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> From: KT Walrus <ke...@my.walr.us>
>>> To: Dovecot Mailing List <dovecot@dovecot.org> 
>>> Sent: Friday, June 17, 2016 9:18 PM
>>> Subject: archive all saved IMAP messages
>>> 
>>> 
>>> I need to archive (i.e., send to another mail server) all messages saved on 
>>> my mail servers. I’ve implemented this for SMTP submission, but haven’t figured 
>>> out how to archive messages saved by IMAP (like to Drafts, Sent, etc.).
>>> 
>>> How would I best implement this? Can I enable Sieve plugin for IMAP? Or, 
>>> some other method? Like one way backup to archive server?
>>> 
>>> I really only need to archive the messages sent/saved by a user and not the 
>>> messages received from other users.
>>> 
>>> Kevin
>> 
> 


Re: archive all saved IMAP messages

2016-06-18 Thread KT Walrus
> How do I configure Dovecot IMAP for this use case?

I just thought of one possible way to do this:

1. Backup all mail servers every hour or so using “doveadm sync -1” to a mail 
server that uses Maildir (my source mail servers are using mdbox)
2. Run a script on the backup server that uses the “find” command to identify 
all new messages in the Maildir folders that have appeared since the last backup
3. Send these new messages to my archive server if the headers in the messages 
indicate that they were stored by the user directly and not “Received” from 
outside the mail server

This will pick up messages that have been saved by APPEND and not deleted 
before the backup command runs.

As I type this, maybe I should just do this hourly check on the source mail 
servers using the “doveadm search -A” command combined with the “doveadm fetch” 
command to extract the newly saved messages and check their headers to 
determine whether the message was saved by IMAP. Newly saved IMAP messages 
could then be sent to the archive mail server (or queued in Redis for further 
processing).
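
A rough sketch of that hourly check as a shell loop; the Received-header test 
and the queue_to_redis helper are assumptions about my own setup, not anything 
Dovecot provides, and the output format of “doveadm search -A” is assumed to be 
“<user> <mailbox-guid> <uid>”:

#!/bin/sh
# messages saved in the last hour, across all users
# (interval syntax; use e.g. 1d if hours aren't accepted by your version)
doveadm search -A savedsince 1h | while read -r user guid uid; do
  # IMAP-appended messages (drafts, sent copies) generally lack an MTA Received: header
  if ! doveadm fetch -u "$user" hdr mailbox-guid "$guid" uid "$uid" | grep -qi '^Received:'; then
    queue_to_redis "$user" "$guid" "$uid"    # hypothetical helper
  fi
done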

> On Jun 18, 2016, at 10:46 AM, KT Walrus <ke...@my.walr.us> wrote:
> 
> 
>> On Jun 17, 2016, at 8:01 PM, chaouche yacine <yacinechaou...@yahoo.com> 
>> wrote:
>> 
>> I'm also interested in learning how to do this best. Last time I thought 
>> about it is if users have a different e-mail address on the archive server, 
>> you can setup a BCC map in postfix that matches the pair of emails (primary 
>> email -  archive email), this will automatically send all sent messages in 
>> the inbox of the archived email account. In that archive server you can 
>> setup sieve rules to move the emails to the sent folder. But that's rather a 
>> complicated solution; besides, it doesn't work for the Drafts folder.
> 
> Yes. I already have Postfix set up to send a copy of all incoming messages to 
> an archival Dovecot mail server. This was rather easy to do since I have 
> Postfix deliver all inbound messages to a shell script that queues the 
> message in a Redis queue and then sends the message to the archive.
> 
> But, my issue is capturing all IMAP saved messages (via IMAP APPEND command). 
> Is there any way to “hook” into the APPEND action to send a copy of the 
> message to the archival Dovecot mail server? I’d really like to just 
> post-process the APPENDED messages with a shell script that is similar to my 
> Postfix shell script (that queues to Redis and sends to the archive).
> 
> How do I configure Dovecot IMAP for this use case?
> 
> Kevin
> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> From: KT Walrus <ke...@my.walr.us>
>> To: Dovecot Mailing List <dovecot@dovecot.org> 
>> Sent: Friday, June 17, 2016 9:18 PM
>> Subject: archive all saved IMAP messages
>> 
>> 
>> I need to archive (i.e., send to another mail server) all messages saved on 
>> my mail servers. I’ve implemented this for SMTP submission, but haven’t figured 
>> out how to archive messages saved by IMAP (like to Drafts, Sent, etc.).
>> 
>> How would I best implement this? Can I enable Sieve plugin for IMAP? Or, 
>> some other method? Like one way backup to archive server?
>> 
>> I really only need to archive the messages sent/saved by a user and not the 
>> messages received from other users.
>> 
>> Kevin
> 


Re: archive all saved IMAP messages

2016-06-18 Thread KT Walrus

> On Jun 17, 2016, at 8:01 PM, chaouche yacine <yacinechaou...@yahoo.com> wrote:
> 
> I'm also interested in learning how to do this best. Last time I thought 
> about it is if users have a different e-mail address on the archive server, 
> you can setup a BCC map in postfix that matches the pair of emails (primary 
> email -  archive email), this will automatically send all sent messages in 
> the inbox of the archived email account. In that archive server you can setup 
> sieve rules to move the emails to the sent folder. But that's rather a 
> complicated solution; besides, it doesn't work for the Drafts folder.

Yes. I already have Postfix set up to send a copy of all incoming messages to 
an archival Dovecot mail server. This was rather easy to do since I have 
Postfix deliver all inbound messages to a shell script that queues the message 
in a Redis queue and then sends the message to the archive.

But, my issue is capturing all IMAP saved messages (via IMAP APPEND command). 
Is there any way to “hook” into the APPEND action to send a copy of the message 
to the archival Dovecot mail server? I’d really like to just post-process the 
APPENDED messages with a shell script that is similar to my Postfix shell 
script (that queues to Redis and sends to the archive).

How do I configure Dovecot IMAP for this use case?

Kevin

> 
> 
> 
> 
> 
> 
> 
> 
> From: KT Walrus <ke...@my.walr.us>
> To: Dovecot Mailing List <dovecot@dovecot.org> 
> Sent: Friday, June 17, 2016 9:18 PM
> Subject: archive all saved IMAP messages
> 
> 
> I need to archive (i.e., send to another mail server) all messages saved on 
> my mail servers. I’ve implemented this for SMTP submission, but haven’t figured 
> out how to archive messages saved by IMAP (like to Drafts, Sent, etc.).
> 
> How would I best implement this? Can I enable Sieve plugin for IMAP? Or, some 
> other method? Like one way backup to archive server?
> 
> I really only need to archive the messages sent/saved by a user and not the 
> messages received from other users.
> 
> Kevin


archive all saved IMAP messages

2016-06-17 Thread KT Walrus
I need to archive (i.e., send to another mail server) all messages saved on my 
mail servers. I’ve implemented this for SMTP submission, but haven’t figured out how 
to archive messages saved by IMAP (like to Drafts, Sent, etc.).

How would I best implement this? Can I enable Sieve plugin for IMAP? Or, some 
other method? Like one way backup to archive server?

I really only need to archive the messages sent/saved by a user and not the 
messages received from other users.

Kevin


Advice needed: SMTP/LMTP or IMAP for internal message delivery

2016-06-17 Thread KT Walrus
I’ve implemented “next day” delivery this week by taking messages submitted 
through Postfix, queuing the messages in a MySQL database, and sending them out 
for delivery through another Postfix instance/Dovecot LMTP proxy with final 
delivery using the destination mail server’s LMTP service. Delivery is sent to a 
“next day” mail server which is sync’d to the recipient’s mail server using 
“doveadm sync -A tcp:next-day-mail:12345” on each mail server in the early 
morning.

This all appears to be working well. I do have a need to deliver these messages 
into specific mail folders in the recipient’s mailbox and I was planning on 
using global Sieve scripts running on the “next day” mail server to place the 
messages in the proper folder (not all messages are delivered to the INBOX 
folder).
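
The global script itself could stay tiny; a sketch of the folder-routing part 
(the header name and folder are assumptions about my own messages):

---
require ["fileinto"];

# site notifications go to their own folder; everything else falls through to INBOX
if header :is "X-Site-Category" "notification" {
  fileinto "Notifications";
}
---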

I am using a PHP script to move the messages from Postfix to the MySQL database 
and then later sending them via SMTP to the internal Postfix instance for 
delivery to the “next day” mail server. I started investing how to code the 
Sieve scripts today and it occurred to me that I could greatly simplify message 
delivery by using IMAP to deliver the messages to the “next day” mail server. 
The PHP script would be able to deliver the exact message that should be stored 
to the proper folders setting the \Seen flags appropriately. 

My backend “next day” mail server already supports Master password login so I 
figure the PHP script should be able to log in via IMAP to access the folders in 
the recipient mailboxes. Also, the PHP script could do other mailbox 
maintenance tasks when it connects to the user’s mailbox such as purge folders 
by age, message count, etc. The PHP script could also retrieve info about the 
current state of the user’s mailbox folders (like the date of the oldest unread 
message, how many messages have been read in the last week or so, etc) and 
store this data in the MySQL DB.

I’m looking for any advice on whether to scrap the current plan of deploying 
internal Postfix SMTP/Dovecot LMTP proxy/Dovecot LMTP/Sieve script for “next 
day” mail delivery and just write PHP script to access Dovecot IMAP (direct to 
the “next day” mail server or to the user’s mail server for “immediate” 
delivery).

This would allow me to drop using Dovecot LMTP at all. Postfix SMTP would only 
be configured to invoke a PHP script to deliver messages to the database (which 
is already implemented). Postfix would still use Dovecot IMAP authentication, 
but only Dovecot IMAP service would need to be “highly available”.

Any opinions? Should I dump LMTP?

Kevin

Re: Scalability of Dovecot in the Cloud

2016-06-11 Thread KT Walrus
> Anyway, if it's mostly IDLE connections, I'd expect 100k mailboxes/VM to be 
> fine. Generally I'd expect about 10k active (non-hibernated) IMAP 
> connections/VM for 32 GB of memory, but this depends a lot on the mailbox 
> sizes.

That is great news. 100k mailboxes/VM is a great number. I do expect most IMAP 
clients will be IDLEing. Do almost all email clients in use today do the IDLE 
command? Do most email clients open many connections per mailbox? Perhaps 
IDLEing on multiple namespaces/folders per mailbox? Would this affect your 100k 
mailboxes/VM estimate?

The cloud VM at 4 vCores, 30GB RAM, and local SSD storage is just $40/month 
(OVH Public Cloud). I had expected a cost of 10 cents per mailbox per month 
(with redundancies raising that cost to 25 cents per mailbox per month). But 
100k mailboxes/VM would give me a total operating cost of less than 1 cent per 
month per mailbox, at scale. Maybe even 10 cents per year per mailbox for 
Public Cloud hosting fees?

Does anyone on this list run a large number of mailboxes per server in 
production? What is the largest number of Dovecot mailboxes/client connections 
you supported on a single server before you had to upgrade to multiple Dovecot 
servers?

Kevin

> On Jun 11, 2016, at 6:37 PM, Timo Sirainen <t...@iki.fi> wrote:
> 
> On 04 Jun 2016, at 21:28, KT Walrus <ke...@my.walr.us 
> <mailto:ke...@my.walr.us>> wrote:
>> 
>> Does anyone have any idea of how many IMAP connections a single cloud VM (4 
>> vCores at 2.4GHz, 30GB RAM, local SSD storage - non-RAID) can be expected to 
>> handle in production. The mailboxes are fairly small (average 5MB total - 
>> 50MB max, as I don’t store attachments in Dovecot except those saved through 
>> IMAP in the Sent/Drafts folders) and each user will probably have an average 
>> of 2 devices that have the mail clients configured to access each mailbox.
>> 
>> Can such a server handle 100,000 mailboxes (200,000 devices/clients)? Or is 
>> it more like 10,000? Or, even smaller?
>> 
>> I can scale the cloud VM up to 32 vCores and 240GB RAM (at 8 times the 
>> price) or split the mailboxes onto multiple VMs. The VM will also be running 
>> LMTP and other Dovecot services (I don’t plan on supporting POP3 at this 
>> time). The mailboxes will be sync’d to a backup VM running Dovecot for high 
>> availability so has some load from this background activity. LMTP will not 
>> be that high a load, I think, since most messages will be delivered by at 
>> night. But, clients will have IMAP connections 24/7.
>> 
>> Just trying to get an idea of the cost of running a potentially huge/growing 
>> mail service in the cloud… I’m going to have to support around a million 
>> mailboxes before the site will generate significant revenue to support 
>> operations.
> 
> Do you mean most of the IMAP clients will be IDLEing waiting for new mails, 
> which mostly won't arrive until the next night? imap-hibernate feature will 
> be very helpful there then.
> 
> Bottlenecks are commonly either the disk IO or the memory usage. With SSD 
> you're probably less likely to run bottleneck in disk IO. Memory usage mainly 
> depends on the number of active (non-hibernated) concurrent connections and 
> also the mailbox sizes of the users.
> 
> I'd limit a single Dovecot VM to 64 GB of memory. Maybe more would work, but 
> it might run into bottlenecks on the CPU usage side for services that are 
> limited to a single process per instance.
> 
> Replication with dsync is going to increase the load and I'm not sure how big 
> of an issue that is.
> 
> Anyway, if it's mostly IDLE connections, I'd expect 100k mailboxes/VM to be 
> fine. Generally I'd expect about 10k active (non-hibernated) IMAP 
> connections/VM for 32 GB of memory, but this depends a lot on the mailbox 
> sizes.


Advice on once a day message delivery setup

2016-06-08 Thread KT Walrus
I’m adding once a day mail delivery to my site. Messages are marked by the 
sender as “overnight” or “once a week” delivery. 

The way I’m planning on implementing this is to queue messages until midnight 
in a MySQL database. Each mailbox will be kept in two Dovecot mailstores. The 
first mailstore will give the users IMAP access to their mailbox. A second 
mailstore will hold the next day’s new messages. At midnight, a cron job runs 
to send messages in the MySQL database out to the second mailstore. Then, at 
6am, a second cron job will run to sync the two mailstores using doveadm sync.
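
In cron terms the plan looks roughly like this (paths, the user column and the 
PHP script name are assumptions; the doveadm target matches the one I use 
elsewhere):

---
# /etc/cron.d/next-day-delivery (sketch)
0 0 * * *  mail  /usr/local/bin/send-queued-messages.php   # hypothetical: drain the MySQL queue to the second mailstore
0 6 * * *  mail  doveadm sync -A tcp:next-day-mail:12345   # pull the night's mail into the live mailstore
---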

What I am expecting to happen is that during the day, notification messages 
(from the site) may be delivered to the first mailstore (the one providing IMAP 
access to the user) but no messages from other users sent during the day will 
come until after 6am the next day. Each time the user submits a new message, a 
notification message is sent back to the sender with a link for editing the 
queued message in the MySQL database and an indication of when it is scheduled 
for delivery.

I am kind of assuming that the morning sync process is fast enough so it can 
easily complete before noon (in 6 hours) even if I end up having lots of 
mailboxes on each fully loaded Dovecot server.

Is the doveadm sync process reliable and efficient enough for this type of 
“once a day” morning new message mail delivery?

Or, should I just start delivering messages after midnight and not bother with 
the second mailstore and subsequent sync?

Just looking for any advice… I kind of like the idea of modeling my mail 
service after the US Post Office where the mailman delivers new mail once a day 
rather than like Twitter/Facebook where messages are posted in real time to 
encourage users to monitor their boxes throughout the day.

Kevin


Re: password expire warning for dovecot users in IMAP/POP login

2016-06-08 Thread KT Walrus
> I think the easiest solution is to send a mail to the user saying that the password 
> will expire. A cron job and a shell script should do the work.
> I don't know any mechanism to send this kind of message via POP.

I agree with you. Don’t bother trying to alert the user when he logs in (where 
there is no universal client support for such alerts). But, simply send a 
notification message from a cron script to their mailbox (a couple days before 
expiration). You could mark the message as high priority/urgent just in case 
their client displays such messages more prominently than normal inbox new 
messages. IMAP or POP login is usually done by the email client in the 
background and the user isn’t necessarily even around to handle the alert. But, 
clients are used to alerting the user that they have new mail.

So, simply sending a notification message, from a cron job, to their INBOX is 
definitely the way I would go.
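
A bare-bones sketch of that cron approach (get_expiring_users is a hypothetical 
site-specific lookup against LDAP/SQL that prints one address per line, and a 
working mail/sendmail command is assumed):

#!/bin/sh
# warn-expiring-passwords.sh -- run daily from cron
get_expiring_users | while read -r addr; do
  printf 'Your password expires within the next few days. Please change it now.\n' \
    | mail -s "Password expiration notice" "$addr"
done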

Kevin

> On Jun 8, 2016, at 9:31 AM, Juan Bernhard  wrote:
> 
> 
> On 08/06/2016 at 03:37 a.m., mkaw...@redhat.com wrote:
>> Dear list,
>> 
>> Is it possible to give a notification about a password expire warning to
>> users authenticated by OpenLDAP when the users log in via dovecot using
>> IMAP or POP? For example, when you ssh to a server and/or run
>> ldapsearch, you can be warned with password expire warning like below:
>> 
>> # ssh testuser@localhost
>> testuser@localhost's password:
>> Your password will expire in 31 minute(s).<==
>> Last login: Wed Jun  8 12:22:08 2016 from localhost.localdomain
>> 
>> ]$ ldapsearch -LLL -D uid=testuser,ou=People,dc=example,dc=com -w
>> redhat  "cn=testuser" -e ppolicy
>> ldap_bind: Success (0) (Password expires in 1808 seconds)<==
>> dn: uid=testuser,ou=People,dc=example,dc=com
>> 
>> Does the same can be done for dovecot users authenticated by OpenLDAP in
>> IMAP/POP?
>> 
>> 
>> Thanks,
>> 
> I think the easiest solution is to send a mail to the user saying that the password 
> will expire. A cron job and a shell script should do the work.
> I don't know any mechanism to send this kind of message via POP.
> 
> Regards, Juan.


Re: userdb for imap proxy

2016-06-08 Thread KT Walrus
> In proxy and director configuration you can configure only the passdb lookup.

Thanks. I got my installation working yesterday. I have proxies for LMTP and 
IMAP (no POP3) backed by a farm of Dovecot servers. The IMAP proxy listens on 
70 different IPs/ports and does passdb lookups to authenticate the users based 
on the incoming IP/port. The passdb lookups select the particular backend 
server containing the user’s mailbox. SMTP (Postfix) does authentication 
through the IMAP proxy and mail delivery through the LMTP proxy. I haven’t 
bothered to set up an SMTP proxy yet, since my SMTP server will only handle 
submission and not relay. Submitted messages are queued to a Redis queue for 
importation into a MySQL database where the messages are held, giving the sender 
the ability to edit/delete their messages before midnight. Messages are sent 
out to the recipient mailboxes in the early morning through another internal 
SMTP server talking to the LMTP proxy. 

For my site, I only want to deliver new messages once a day (in the early 
morning), with the sender/mailbox admin having the opportunity to edit/delete 
the messages on the day they are sent.

All appears to be working well, but I’m currently only doing SSL/TLS on the 
edge (in SMTP/IMAP) and haven’t figured out how to do SSL from end to end. I’m 
not sure if end to end SSL is important for my site, but it seems to be a trend 
that should not be ignored.
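
For the proxy-to-backend leg, the usual approach is the proxy’s ssl/starttls 
passdb extra field, which can be returned straight from the password_query; a 
sketch with only the relevant columns (the other proxy fields from my earlier 
query are omitted, and 'any-cert' encrypts without verifying the backend cert):

---
password_query = \
  SELECT password, 'Y' AS proxy, pms AS host, \
         'any-cert' AS ssl \
  FROM users WHERE username = '%n' AND domain = '%d'
---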

Kevin

> On Jun 8, 2016, at 3:49 AM, Alessio Cecchi <ales...@skye.it> wrote:
> 
> 
> 
> On 07/06/2016 at 17:42, KT Walrus wrote:
>> If I’m running only imap-login service in my dovecot imap proxy, do I need 
>> to configure userdb or only passdb?
>> 
> 
> In proxy and director configuration you can configure only the passdb lookup.
> -- 
> Alessio Cecchi
> Postmaster @ http://www.qboxmail.it
> https://www.linkedin.com/in/alessice


userdb for imap proxy

2016-06-07 Thread KT Walrus
If I’m running only imap-login service in my dovecot imap proxy, do I need to 
configure userdb or only passdb?

Re: Blowfish hashed passwords

2016-06-06 Thread KT Walrus
I don’t understand your reply. I am running Ubuntu 14.04 in a Docker image now, 
but there is no support for BLF-CRYPT in 14.04.

As for openbsd, Docker images can be based on any Linux distro that is 
available in the Docker Hub. OpenBSD is not a Linux distro and I would have to 
run it inside a VM which isn’t acceptable.

See https://hub.docker.com/explore/ <https://hub.docker.com/explore/> for a 
list of Official Repos that are suitable to use as base images for building 
Dovecot such as ubuntu, debian, centos, alpine, oraclelinux, opensuse, etc.

I suspect that most glibc crypt() implementations don’t support BLF-CRYPT and 
that is one reason that PHP includes a fallback BLF-CRYPT function so PHP users 
can generate Blowfish password hashes without worrying whether PHP is running 
on Linux or not.

Kevin

> On Jun 6, 2016, at 7:17 PM, Peter Chiochetti <p...@myzel.net> wrote:
> 
> On 2016-06-06 at 15:36, KT Walrus wrote:
>> 
>> Since I’m using Docker, the easiest solution for me is to find a linux 
>> distro that can run Dovecot well and supports BLF-CRYPT as well.
>> 
>> What Linux distros support BLF-CRYPT and are well tested and secure?
>> 
> 
> As you are running Ubuntu 14.04 now - I suppose most Linux distros are as 
> well tested as this.
> 
> For both tested and secure, you may choose openbsd? Dont know if Docker does 
> this though -- nevertheless, I guess docker probably rules out anything 
> secure...
> 
> -- 
> peter


Re: Blowfish hashed passwords

2016-06-06 Thread KT Walrus
> Changing your php app will probably be the easiest solution.

Since I’m using Docker, the easiest solution for me is to find a linux distro 
that can run Dovecot well and supports BLF-CRYPT as well.

What Linux distros support BLF-CRYPT and are well tested and secure?

> On Jun 5, 2016, at 8:54 PM, Edgar Pettijohn <ed...@pettijohn-web.com> wrote:
> 
> On 16-06-05 20:36:35, KT Walrus wrote:
>>>> Maybe, Dovecot could just add support for BLF-CRYPT by using the open 
>>>> source implementation of Blowfish hashing found in 
>>>> https://github.com/php/php-src/tree/master/ext/standard 
>>>> <https://github.com/php/php-src/tree/master/ext/standard>. The 
>>>> implementation looks like a single function to generate the hash. I’m 
>>>> not much of a programmer, but it would seem to me that these .c/.h files 
>>>> could be added to Dovecot for doing BLF-CRYPT hashing. 
>>>> 
>>> It already does. As previously stated.
>> 
>> It doesn’t for me. I’m building Dovecot from source (v2.2.24) in a 
>> Docker container using Ubuntu 14.04.
>> 
>> Does BLF-CRYPT work for you?
> 
> Yes, but I don't use ubuntu.
> 
>> 
>> Maybe I’m not building Dovecot correctly. I install libssl-dev and 
>> libmysqlclient-dev and do:
>> 
>> $ ./configure --prefix=/usr --sysconfdir=/etc --with-mysql
>> $ make
>> $ make install
>> 
>> Am I missing some library/switch to enable BLF-CRYPT?
> 
> Does your libc support it?
> 
> $ man crypt || $ man bcrypt 
> 
>> 
>> I just did a quick Google search, and it appears that Ubuntu 14.04 doesn’t 
>> have support for BLF-CRYPT according to this issue:
>> 
>> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1349252 
>> <https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1349252> 
>> <https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1349252 
>> <https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1349252>>
>> 
>> Actually, now that I’ve researched this a bit more, it was a mistake for 
>> my PHP app to make BLF-CRYPT password hashes since SHA512-CRYPT with a high 
>> number of rounds should be just as good. If Ubuntu 16.04 didn't add support 
>> for BLF-CRYPT, I guess I will have to implement a Checkpassword script for 
>> Dovecot that might generate SHA512-CRYPT replacement hashes after 
>> successfully checking against the BLF-CRYPT hashes. I’m no Dovecot expert, 
>> but I think I can have multiple passdbs so the first passdb mysql lookup 
>> will be set to fail if it finds a BLF-CRYPT hash so the Checkpassword script 
>> would only be run once per failed mysql lookup.
>> 
> 
> Changing your php app will probably be the easiest solution.
> 
>> Hopefully, I just missed some ./configure switch to enable BLF-CRYPT and 
>> don’t have to deal with converting BLF-CRYPT to SHA512-CRYPT just for 
>> Dovecot.
>> 
>> Kevin
>> 
>> 
>>> On Jun 5, 2016, at 7:43 PM, Edgar Pettijohn <ed...@pettijohn-web.com> wrote:
>>> 
>>> 
>>> 
>>> Sent from my iPhone
>>> 
>>> On Jun 5, 2016, at 6:16 PM, KT Walrus <ke...@my.walr.us> wrote:
>>> 
>>>>> I would love to know why your ubuntu 14.04 system doesn't support 
>>>>> sha512-crypt.
>>>> 
>>>> I just tried SHA512-CRYPT and it is supported on Ubuntu 14.04. I think I 
>>>> was thinking about DBMail instead of Dovecot.
>>>> 
>>>> I could really use support for BLF-CRYPT since my current password hashes 
>>>> generated by PHP are using Blowfish encryption.
>>>> 
>>>> Maybe, Dovecot could just add support for BLF-CRYPT by using the open 
>>>> source implementation of Blowfish hashing found in 
>>>> https://github.com/php/php-src/tree/master/ext/standard 
>>>> <https://github.com/php/php-src/tree/master/ext/standard>. The 
>>>> implementation looks like a single function to generate the hash. I’m 
>>>> not much of a programmer, but it would seem to me that these .c/.h files 
>>>> could be added to Dovecot for doing BLF-CRYPT hashing. 
>>>> 
>>> It already does. As previously stated.
>>> 
>>> 
>>>> This would mean all installations of Dovecot going forward would support 
>>>> BLF-CRYPT regardless of whether the crypt libraries have Blowfish built in.
>>>> 
>>>> Kevin
>>>> 
>>>>> On Jun 4, 2016, at 9:53 AM, Patrick Domack <patric...@patrickdk.com>

Re: Blowfish hashed passwords

2016-06-05 Thread KT Walrus
>> Maybe, Dovecot could just add support for BLF-CRYPT by using the open source 
>> implementation of Blowfish hashing found in 
>> https://github.com/php/php-src/tree/master/ext/standard 
>> <https://github.com/php/php-src/tree/master/ext/standard>. The 
>> implementation looks like a single function to generate the hash. I’m not 
>> much of a programmer, but it would seem to me that these .c/.h files could 
>> be added to Dovecot for doing BLF-CRYPT hashing. 
>> 
> It already does. As previously stated.

It doesn’t for me. I’m building Dovecot from source (v2.2.24) in a Docker 
container using Ubuntu 14.04.

Does BLF-CRYPT work for you?

Maybe I’m not building Dovecot correctly. I install libssl-dev and 
libmysqlclient-dev and do:

$ ./configure --prefix=/usr --sysconfdir=/etc --with-mysql
$ make
$ make install

Am I missing some library/switch to enable BLF-CRYPT?

I just did a quick Google search, and it appears that Ubuntu 14.04 doesn’t have 
support for BLF-CRYPT according to this issue:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1349252 
<https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1349252>

Actually, now that I’ve researched this a bit more, it was a mistake for my PHP 
app to make BLF-CRYPT password hashes since SHA512-CRYPT with a high number of 
rounds should be just as good. If Ubuntu 16.04 didn't add support for 
BLF-CRYPT, I guess I will have to implement a Checkpassword script for Dovecot 
that might generate SHA512-CRYPT replacement hashes after successfully checking 
against the BLF-CRYPT hashes. I’m no Dovecot expert, but I think I can have 
multiple passdbs so the first passdb mysql lookup will be set to fail if it 
finds a BLF-CRYPT hash so the Checkpassword script would only be run once per 
failed mysql lookup.
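
A sketch of that two-passdb arrangement (the checkpassword script path is a 
placeholder; the SQL passdb is told to continue on failure so the script only 
runs for the Blowfish hashes the first lookup rejects):

---
passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
  result_failure = continue
}
passdb {
  driver = checkpassword
  args = /usr/local/bin/check-blowfish.sh   # hypothetical: verifies BLF-CRYPT, could also rewrite the stored hash
}
---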

Hopefully, I just missed some ./configure switch to enable BLF-CRYPT and don’t 
have to deal with converting BLF-CRYPT to SHA512-CRYPT just for Dovecot.

Kevin


> On Jun 5, 2016, at 7:43 PM, Edgar Pettijohn <ed...@pettijohn-web.com> wrote:
> 
> 
> 
> Sent from my iPhone
> 
> On Jun 5, 2016, at 6:16 PM, KT Walrus <ke...@my.walr.us> wrote:
> 
>>> I would love to know why your ubuntu 14.04 system doesn't support 
>>> sha512-crypt.
>> 
>> I just tried SHA512-CRYPT and it is supported on Ubuntu 14.04. I think I was 
>> thinking about DBMail instead of Dovecot.
>> 
>> I could really use support for BLF-CRYPT since my current password hashes 
>> generated by PHP are using Blowfish encryption.
>> 
>> Maybe, Dovecot could just add support for BLF-CRYPT by using the open source 
>> implementation of Blowfish hashing found in 
>> https://github.com/php/php-src/tree/master/ext/standard 
>> <https://github.com/php/php-src/tree/master/ext/standard>. The 
>> implementation looks like a single function to generate the hash. I’m not 
>> much of a programmer, but it would seem to me that these .c/.h files could 
>> be added to Dovecot for doing BLF-CRYPT hashing. 
>> 
> It already does. As previously stated.
> 
> 
>> This would mean all installations of Dovecot going forward would support 
>> BLF-CRYPT regardless of whether the crypt libraries have Blowfish built in.
>> 
>> Kevin
>> 
>>> On Jun 4, 2016, at 9:53 AM, Patrick Domack <patric...@patrickdk.com> wrote:
>>> 
>>> 
>>> Quoting KT Walrus <ke...@my.walr.us <mailto:ke...@my.walr.us>>:
>>> 
>>>> (I subscribed to a daily digest for this list and can’t figure out how to 
>>>> reply to a reply.)
>>>> 
>>>> Anyway, Aki Tuomi replied to my feature request saying:
>>>> 
>>>>> We support in latest 2.2 release
>>>>> 
>>>>> MD5 MD5-CRYPT SHA SHA1 SHA256 SHA512 SMD5 SSHA SSHA256 SSHA512 PLAIN
>>>>> CLEAR CLEARTEXT PLAIN-TRUNC CRAM-MD5 SCRAM-SHA-1 HMAC-MD5 DIGEST-MD5
>>>>> PLAIN-MD4 PLAIN-MD5 LDAP-MD5 LANMAN NTLM OTP SKEY RPA CRYPT SHA256-CRYPT
>>>>> SHA512-CRYPT
>>>>> 
>>>>> There is also blowfish support as BLF-CRYPT, but that requires that your
>>>>> system supports it. CRYPT supports whatever your crypt() supports.
>>>> 
>>>> The reason I suggest building in fallback hash type support is that my 
>>>> install of Dovecot on Ubuntu 14.04 didn’t support SHA512-CRYPT or 
>>>> BLF-CRYPT.
>>>> 
>>>> If Dovecot just included the PHP .c files to make sure it can process 
>>>> Blowfish/SHA512 password hashes on all installs, it would greatly simplify 
>>>> adding Dovecot as a service for my existing user accounts (without forcing 
>>

Re: Blowfish hashed passwords

2016-06-05 Thread KT Walrus
> I would love to know why your ubuntu 14.04 system doesn't support 
> sha512-crypt.

I just tried SHA512-CRYPT and it is supported on Ubuntu 14.04. I think I was 
thinking about DBMail instead of Dovecot.

I could really use support for BLF-CRYPT since my current password hashes 
generated by PHP are using Blowfish encryption.

Maybe, Dovecot could just add support for BLF-CRYPT by using the open source 
implementation of Blowfish hashing found in 
https://github.com/php/php-src/tree/master/ext/standard. The implementation 
looks like a single function to generate the hash. I’m not much of a 
programmer, but it would seem to me that these .c/.h files could be added to 
Dovecot for doing BLF-CRYPT hashing. 

This would mean all installations of Dovecot going forward would support 
BLF-CRYPT regardless of whether the crypt libraries have Blowfish built in.

Kevin

> On Jun 4, 2016, at 9:53 AM, Patrick Domack <patric...@patrickdk.com> wrote:
> 
> 
> Quoting KT Walrus <ke...@my.walr.us>:
> 
>> (I subscribed to a daily digest for this list and can’t figure out how to 
>> reply to a reply.)
>> 
>> Anyway, Aki Tuomi replied to my feature request saying:
>> 
>>> We support in latest 2.2 release
>>> 
>>> MD5 MD5-CRYPT SHA SHA1 SHA256 SHA512 SMD5 SSHA SSHA256 SSHA512 PLAIN
>>> CLEAR CLEARTEXT PLAIN-TRUNC CRAM-MD5 SCRAM-SHA-1 HMAC-MD5 DIGEST-MD5
>>> PLAIN-MD4 PLAIN-MD5 LDAP-MD5 LANMAN NTLM OTP SKEY RPA CRYPT SHA256-CRYPT
>>> SHA512-CRYPT
>>> 
>>> There is also blowfish support as BLF-CRYPT, but that requires that your
>>> system supports it. CRYPT supports whatever your crypt() supports.
>>> 
>> 
>> The reason I suggest building in fallback hash type support is that my 
>> install of Dovecot on Ubuntu 14.04 didn’t support SHA512-CRYPT or BLF-CRYPT.
>> 
>> If Dovecot just included the PHP .c files to make sure it can process 
>> Blowfish/SHA512 password hashes on all installs, it would greatly simplify 
>> adding Dovecot as a service for my existing user accounts (without forcing 
>> them to give their password for the site so I can generate new hashes in a 
>> form that Dovecot supports). SHA256-CRYPT is probably my best option for 
>> password hashing since it supports ROUNDS to make hash generation slower. 
>> But, I would rather use BLF-CRYPT so I can re-use my existing hashes for my 
>> user accounts.
> 
> I would love to know why your ubuntu 14.04 system doesn't support 
> sha512-crypt.
> 
> My dovecot installs have only ever used sha512-crypt since 2008. Been using 
> ubuntu since 7.04 with sha512-crypt, and my current systems running 14.04 and 
> 16.04 both use sha512-crypt.
> 
> The default password hash for system user accounts in ubuntu has been 
> sha512-crypt for a very long time now.


Scalability of Dovecot in the Cloud

2016-06-04 Thread KT Walrus
Does anyone have any idea how many IMAP connections a single cloud VM (4 
vCores at 2.4GHz, 30GB RAM, local SSD storage, non-RAID) can be expected to 
handle in production? The mailboxes are fairly small (average 5MB total, 50MB 
max, as I don’t store attachments in Dovecot except those saved through IMAP in 
the Sent/Drafts folders) and each user will probably have an average of 2 
devices that have the mail clients configured to access each mailbox.

Can such a server handle 100,000 mailboxes (200,000 devices/clients)? Or is it 
more like 10,000? Or, even smaller?

I can scale the cloud VM up to 32 vCores and 240GB RAM (at 8 times the price) 
or split the mailboxes onto multiple VMs. The VM will also be running LMTP and 
other Dovecot services (I don’t plan on supporting POP3 at this time). The 
mailboxes will be sync’d to a backup VM running Dovecot for high availability 
so there is some load from this background activity too. LMTP will not be that high a 
load, I think, since most messages will be delivered at night. But clients 
will have IMAP connections 24/7.

Just trying to get an idea of the cost of running a potentially huge/growing 
mail service in the cloud… I’m going to have to support around a million 
mailboxes before the site will generate significant revenue to support 
operations.

Kevin

Re: nginx proxy to dovecot servers

2016-06-03 Thread KT Walrus
> Dovecot supports real IP forwarding with HAproxy.

Yes. I was aware of this, but that doesn’t answer my question of how to 
configure a Dovecot proxy to listen on many IPs/ports and do authentication 
based on the incoming IP/port. If I could do this without having to run 50 
Dovecot proxies (one for each incoming IP/port), I would probably use the 
HAProxy/Dovecot Proxy solution.

Or is Dovecot proxy light-weight enough to run 100 instances or more on a 
single cloud VM (limited cores/memory) with an HAProxy front-end?

> On Jun 3, 2016, at 9:14 AM, Aki Tuomi <aki.tu...@dovecot.fi> wrote:
> 
> 
> 
> On 03.06.2016 16:00, KT Walrus wrote:
>>> btw, what is the reason for NGINX proxy anyway? Since dovecot proxy can do 
>>> this for you too.
>> I want to do authentication using the IP that the IMAP client used to 
>> connect to the IMAP server. That is, I have 50 IPs, one for each state my 
>> users live in, so the users can only connect to the IMAP server using the 
>> domain name where their account is hosted (e.g., va.example.com 
>> for accounts in Virginia or ca.example.com 
>> for accounts in California). I figured it was 
>> fairly simple to have NGINX listen on the different IPs for the different 
>> IMAP servers and do the authentication based on the server IP that was used 
>> by the IMAP client and then route the request to the proper Dovecot backend.
>> 
>> I actually plan on using HAProxy to listen on each of the IPs and then proxy 
>> to an NGINX mail proxy listening on different ports (one for each proxied 
>> IP). NGINX would then have mail server sections for each port that invokes a 
>> PHP script passing in the domain name associated with the port (e.g., 
>> va.example.com). The PHP script would then use this 
>> domain name along with the user/password supplied by the mail client to do 
>> the auth check and backend dovecot server selection.
>> 
>> The only problem I see with using HAProxy and NGINX mail proxy is I think I 
>> will lose the client IP so the Dovecot logs won’t show this IP.
>> 
> Dovecot supports real IP forwarding with HAproxy.
> 
> http://wiki2.dovecot.org/HAProxy
> 
> Aki


Re: nginx proxy to dovecot servers

2016-06-03 Thread KT Walrus
> btw, what is the reason for NGINX proxy anyway? Since dovecot proxy can do 
> this for you too.

I want to do authentication using the IP that the IMAP client used to connect 
to the IMAP server. That is, I have 50 IPs, one for each state my users live 
in, so the users can only connect to the IMAP server using the domain name 
where their account is hosted (e.g., va.example.com 
for accounts in Virginia or ca.example.com for 
accounts in California). I figured it was fairly simple to have NGINX listen on 
the different IPs for the different IMAP servers and do the authentication 
based on the server IP that was used by the IMAP client and then route the 
request to the proper Dovecot backend.

I actually plan on using HAProxy to listen on each of the IPs and then proxy to 
an NGINX mail proxy listening on different ports (one for each proxied IP). 
NGINX would then have mail server sections for each port that invokes a PHP 
script passing in the domain name associated with the port (e.g., 
va.example.com). The PHP script would then use this 
domain name along with the user/password supplied by the mail client to do the 
auth check and backend dovecot server selection.

The only problem I see with using HAProxy and NGINX mail proxy is I think I 
will lose the client IP so the Dovecot logs won’t show this IP.

Can I use Dovecot Proxy to do the same thing? Will it use 50 threads to listen 
on the different IPs/ports or will it only have a small set of workers to do 
the proxying (like NGINX)?

Basically, I couldn’t figure out how to make Dovecot Proxy do authentication 
based on the incoming IP/port; if I could, I would use it, since Dovecot Proxy 
preserves the client IPs in the logs.

I’m starting with 50 state-based IPs (and would rather not run 50 separate 
Dovecot proxies to cover them), and I will eventually have over 100 region-based 
IPs, so I need the mail service to scale easily, starting with only 1 or 2 backend 
mail servers and growing gradually to many hundreds of servers.

Any thoughts on how to do this with Dovecot Proxy?
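
The closest I’ve come up with is a single proxy passdb whose SQL query keys off 
the local IP the client connected to (%l), but I don’t know whether this actually 
works; an untested sketch, with a made-up schema:

--- dovecot-sql.conf.ext (on the proxy)
driver = mysql
connect = host=127.0.0.1 dbname=mail user=mailauth password=secret
password_query = \
  SELECT u.password, 'Y' AS proxy, r.backend_host AS host \
  FROM users u JOIN regions r ON u.region_id = r.id \
  WHERE u.email = '%u' AND r.listen_ip = '%l'
---

If something like that is valid, one Dovecot proxy could cover all 50 (and 
eventually 100+) listener IPs.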

Kevin

> On Jun 3, 2016, at 4:27 AM, Sami Ketola <sami.ket...@dovecot.fi> wrote:
> 
>> 
>> On 02 Jun 2016, at 23:07, KT Walrus <ke...@my.walr.us> wrote:
>> 
>> I’m trying to understand how the nginx mail proxy and dovecot work. 
>> 
>> As I understand it, nginx can listen on an IP:port for IMAP connections. 
>> NGINX then can invoke a PHP script to do authorization and backend server 
>> selection.
>> 
>> Does NGINX then proxy to the backend dovecot IMAP server all subsequent IMAP 
>> commands that the user’s mail client requests?
>> 
>> Does the backend dovecot IMAP server do its own authentication with another 
>> MySQL password lookup? Or, since NGINX has done the authentication, the 
>> password_query lookup is skipped on the dovecot server? I assume the dovecot 
>> IMAP server still needs to do a MySQL user_query lookup (to find the 
>> location of the user’s mailbox on the server), but I am wondering whether 
>> the password will be checked twice, once by NGINX and a second time by 
>> dovecot IMAP.
> 
> Hi,
> 
> you can always skip password check on dovecot side with static passdb that 
> accepts all passwords if you are absolutely sure that the session has been 
> authenticated earlier. Also you could switch the session from using user 
> password to using a master password at the proxy if NGINX supports this. 
> 
> btw, what is the reason for NGINX proxy anyway? Since dovecot proxy can do 
> this for you too.
> 
> Sami
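
For reference, the static passdb Sami mentions would presumably look something 
like this on the backend (only safe if the backend is reachable solely from the 
trusted proxy):

--- backend dovecot.conf
passdb {
  driver = static
  args = nopassword=y
}
---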


Re: Blowfish hashed passwords

2016-06-03 Thread KT Walrus
(I subscribed to a daily digest for this list and can’t figure out how to reply 
to a reply.)

Anyway, Aki Tuomi replied to my feature request saying:

> We support in latest 2.2 release
> 
> MD5 MD5-CRYPT SHA SHA1 SHA256 SHA512 SMD5 SSHA SSHA256 SSHA512 PLAIN 
> CLEAR CLEARTEXT PLAIN-TRUNC CRAM-MD5 SCRAM-SHA-1 HMAC-MD5 DIGEST-MD5 
> PLAIN-MD4 PLAIN-MD5 LDAP-MD5 LANMAN NTLM OTP SKEY RPA CRYPT SHA256-CRYPT 
> SHA512-CRYPT
> 
> There is also blowfish support as BLF-CRYPT, but that requires that your 
> system supports it. CRYPT supports whatever your crypt() supports.
> 

The reason I suggest building in fallback hash type support is that my install 
of Dovecot on Ubuntu 14.04 didn’t support SHA512-CRYPT or BLF-CRYPT.

If Dovecot just included the PHP .c files to make sure it can process 
Blowfish/SHA512 password hashes on all installs, it would greatly simplify 
adding Dovecot as a service for my existing user accounts (without forcing them 
to give their password for the site so I can generate new hashes in a form that 
Dovecot supports). SHA256-CRYPT is probably my best option for password hashing 
since it supports ROUNDS to make hash generation slower. But, I would rather 
use BLF-CRYPT so I can re-use my existing hashes for my user accounts.
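
For example, something like this should produce a SHA256-CRYPT hash with an 
explicit rounds count (the rounds value here is just an illustration):

$ doveadm pw -s SHA256-CRYPT -r 50000 -p 'secret'

Since the rounds count ends up inside the $5$rounds=...$ prefix of the hash itself, 
no extra Dovecot setting should be needed to verify it.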

Kevin


Blowfish hashed passwords

2016-06-02 Thread KT Walrus
The PHP app I’m using on my website uses PHP to generate password hashes to be 
stored in the user database. These password hashes use the Blowfish-based bcrypt 
scheme ("$2y$"). In fact, since PHP 5.3.0, PHP contains its own implementation of the 
hash types it supports including:

- CRYPT_STD_DES
- CRYPT_EXT_DES
- CRYPT_MD5
- CRYPT_BLOWFISH
- CRYPT_SHA256
- CRYPT_SHA512

The C code for these hash types is in 
https://github.com/php/php-src/tree/master/ext/standard 


I’m working on adding Dovecot to my site, but Dovecot doesn’t seem to support 
Blowfish password hashes (at least on Ubuntu 14.04).

Would you consider adding built-in “fallback” support for Blowfish and SHA512 
(which doesn’t seem to be supported either on Ubuntu 14.04 last time I checked) 
to an upcoming Dovecot release?

You could probably take the source code from the GitHub PHP repo to incorporate 
support for these hash types in Dovecot. That way, Dovecot could easily use the 
same hash types that PHP supports regardless of what hash types are installed 
in the OS running Dovecot. 

And, I wouldn’t have to deal with a second set of hashes for Dovecot passdb for 
my existing user accounts.
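
With built-in BLF-CRYPT, pointing the SQL passdb at the existing password column 
should be all that’s needed, roughly like this (table and column names made up):

--- dovecot-sql.conf.ext
default_pass_scheme = BLF-CRYPT
password_query = SELECT password FROM users WHERE email = '%u'
---

(Alternatively, the hashes could be stored with a {BLF-CRYPT} prefix instead of 
setting default_pass_scheme.)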

See PHP manual for crypt function: http://php.net/manual/en/function.crypt.php

Kevin

nginx proxy to dovecot servers

2016-06-02 Thread KT Walrus
I’m trying to understand how the nginx mail proxy and dovecot work. 

As I understand it, nginx can listen on an IP:port for IMAP connections. NGINX 
then can invoke a PHP script to do authorization and backend server selection.

Does NGINX then proxy to the backend dovecot IMAP server all subsequent IMAP 
commands that the user’s mail client requests?

Does the backend dovecot IMAP server do its own authentication with another 
MySQL password lookup? Or, since NGINX has done the authentication, the 
password_query lookup is skipped on the dovecot server? I assume the dovecot 
IMAP server still needs to do a MySQL user_query lookup (to find the location 
of the user’s mailbox on the server), but I am wondering whether the password 
will be checked twice, once by NGINX and a second time by dovecot IMAP.
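
For context, my (possibly wrong) reading of the nginx docs is that the mail proxy 
side looks roughly like this, with auth_http pointing at the PHP script that picks 
the backend (paths and ports here are placeholders):

--- nginx.conf (mail section)
mail {
    auth_http 127.0.0.1:8080/mailauth.php;
    server {
        listen 143;
        protocol imap;
    }
}
---

As I read it, the auth script answers with Auth-Status/Auth-Server/Auth-Port 
headers and nginx then proxies the rest of the IMAP session to that backend.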

Kevin


Dockerfile for building latest github version

2016-01-16 Thread KT Walrus
Is there a Dockerfile for building/installing the latest Dovecot version in a 
Docker Image?

I don't see Dovecot in the Official Repositories on the Docker Hub like almost 
all the other popular open source software packages that I use.

Why no Official Docker repo for Dovecot?

All the user-contributed repos containing Dovecot on the Docker Hub seem to 
just install from a distribution’s packages (like Ubuntu 14.04’s), which seem to 
lag well behind the most recent version of Dovecot that I see on GitHub.
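
In the meantime, a minimal Dockerfile that builds a release tarball from source 
would presumably look something like this (untested sketch; the version, URL and 
configure flags are placeholders, and a git checkout would additionally need 
autotools plus ./autogen.sh):

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y \
    build-essential wget ca-certificates libssl-dev libmysqlclient-dev
# fetch and build a released tarball (version is a placeholder)
RUN wget -q https://dovecot.org/releases/2.2/dovecot-2.2.24.tar.gz \
 && tar xzf dovecot-2.2.24.tar.gz \
 && cd dovecot-2.2.24 \
 && ./configure --prefix=/usr --sysconfdir=/etc --with-mysql \
 && make && make install
# a real image would also COPY in the /etc/dovecot configuration
CMD ["dovecot", "-F"]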

Kevin

[Dovecot] BLF-CRYPT passwords

2014-02-04 Thread KT Walrus
I’m using the Dovecot Enterprise Edition on CentOS 6.5, but Blowfish password 
hashes don’t seem to work.  What can I do to enable Blowfish hashes for 
passwords?  Maybe I don’t have my installation configured properly?

Note that I really want to use the existing Blowfish hashes in my MySQL 
database for Dovecot authentication.  The hashes are generated by PHP’s crypt(), 
which has Blowfish support built in.  I looked at PHP’s sources, and PHP uses 
crypt_blowfish.c from http://www.openwall.com/crypt/.  This code is in the 
public domain and could easily be used by Dovecot to support Blowfish passwords 
on all platforms (if Dovecot doesn’t already support Blowfish on all platforms).

Kevin