Re: Upgrading to v2.3.X breaks ssl san?

2019-08-07 Thread Joseph Tam via dovecot

On Wed, 7 Aug 2019, Aki Tuomi wrote:


> (Maybe this config variable should be renamed "ssl_client_ca".)

...  except there already are ssl_client_ca_* settings, used to validate
connections from dovecot.


So there is.  Maybe "ssl_usercert_ca" then.  A low-priority suggestion
to avoid confusing newbies who don't read the docs closely enough.  For the
longest time, I held the same mistaken belief about the purpose of "ssl_ca".

Joseph Tam 


Re: Upgrading to v2.3.X breaks ssl san?

2019-08-07 Thread Aki Tuomi via dovecot


> On 07/08/2019 14:28 telsch  wrote:
> 
>  
> with v2.2.34 i can use:
> 
> ssl_ca = <ca-bundle.pem
> ssl_cert = <ssl-imap.pem
> 
> after upgrade to v2.3.X it doesn't work like before.
> 
> it's working if i manually cat ca-bundle.pem and ssl-imap.pem into one
> file and use only:
> 
> ssl_cert = 
> 
> i thought ssl_ca is where to put the intermediate cert?

(Sorry for duplicate mail, keyboard acted up...)

No, that has always been a mistake and it was fixed in 2.3. Our SSL pages in 
documentation & wiki have always recommended concatenating the intermediates 
with the cert.
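
For reference, a minimal sketch of that setup, reusing the file names from
the quoted post (the combined file name is a placeholder):

# server certificate first, then the intermediate chain
cat ssl-imap.pem ca-bundle.pem > ssl-imap-chain.pem

and in the config:

ssl_cert = <ssl-imap-chain.pem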

Aki


Re: Problem Solr and centos 7

2019-08-07 Thread Shawn Heisey via dovecot

On 8/7/2019 4:23 AM, HTMLServices.it via dovecot wrote:

Thanks Shawn for your reply.
I tried to raise the heap size to 5 GB as you suggested, but the problem 
was not solved.


That machine only has 4 GB of total memory, so setting the heap to 5 GB 
will eventually be problematic and lead to major performance issues.  If 
I were doing this with that hardware, I would set the heap to 1 GB to 
start and maybe go as high as 2 GB.  With only 28000 documents, this 
*should* be enough.
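
With the service-installer layout, the heap is set via SOLR_HEAP in the 
include script; a sketch, assuming the usual path for that script:

# /etc/default/solr.in.sh
SOLR_HEAP="1g"

Restart Solr after changing it.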


"/If you have configured fts_solr with a URL that contains a # 
character, it's never going to work./"
I'm not sure how to configure this but in the 90-plugins.conf file I 
configured this:


plugin {
   #setting_name = value
   fts = solr
   fts_solr = url=http://5.39.2.59:8987/solr/dovecot/
}


That URL looks good.  It doesn't have the # character, and from what I 
can see, should be completely correct.


Note that you should restrict access to the Solr server, not allow the 
Internet to reach it.  With it publicly accessible, anyone can delete 
the entire index or issue queries that cause denial of service.  I was 
able to connect to it, and I can see that your Solr index has no data in it:


https://www.dropbox.com/s/9bsx2vfu3kab19j/dovecot-solr-user-issue-screenshot.png?dl=0

I don't know if it's empty because nothing ever got indexed, or if it's 
empty because somebody found the URL for your server on this mailing 
list and decided to delete your index.
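
Since this is a CentOS 7 box, a firewalld rich rule that admits only your 
mail server would be one way to lock it down (the source address is a 
placeholder):

firewall-cmd --permanent --remove-port=8987/tcp
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.10" port port="8987" protocol="tcp" accept'
firewall-cmd --reload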


If I attempt a query on your index, it fails, complaining that there is 
no _text_ field:


http://5.39.2.59:8987/solr/dovecot/select?q=hello

The solrconfig.xml file that is in the dovecot source should be setting 
the df parameter to a field that exists, perhaps "body".  The field that 
it is using does not exist in the schema that is also in the dovecot source.
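
If the schema does have a "body" field, pointing the default-field 
parameter at it in solrconfig.xml would make bare queries like the one 
above work; a sketch, with the field name assumed:

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="df">body</str>
  </lst>
</requestHandler>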


It might be that when fts_solr issues queries, it qualifies everything 
to search specific fields.  If that's what it does, then this 
configuration issue would not break fts_solr in the wild.


Thanks,
Shawn


Re: auth-policy crashing

2019-08-07 Thread James via dovecot

On 07/08/2019 11:19, James via dovecot wrote:


My simpler policy does not need both.  I perform whitelist,
blacklist, geo and greylist


...and DNSBL, which is where I started with the policy server: "Can 
dovecot do DNSBL?"  Only indirectly, via a policy server.  This is 
better, as most connections pass the whitelist or fail the geo checks 
before the external DNS lookup is done.


Re: Upgrading to v2.3.X breaks ssl san?

2019-08-07 Thread telsch via dovecot

with v2.2.34 i can use:

ssl_ca = <ca-bundle.pem
ssl_cert = <ssl-imap.pem

after upgrade to v2.3.X it doesn't work like before.

it's working if i manually cat ca-bundle.pem and ssl-imap.pem into one
file and use only:

ssl_cert = 

i thought ssl_ca is where to put the intermediate cert?

Re: Problem Solr and centos 7

2019-08-07 Thread HTMLServices.it via dovecot

Thanks Shawn for your reply.
I tried to raise the heap size to 5 GB as you suggested, but the problem 
was not solved.
"/If you have configured fts_solr with a URL that contains a # 
character, it's never going to work./"
I'm not sure how to configure this but in the 90-plugins.conf file I 
configured this:


plugin {
  #setting_name = value
  fts = solr
  fts_solr = url=http://5.39.2.59:8987/solr/dovecot/
}

In the 10-mail.conf file I added this, as per the guide:

# Space separated list of plugins to load for all services. Plugins
# specific to IMAP, LDA, etc. are added to this list in their own .conf files.
mail_plugins = $mail_plugins fts fts_solr
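
Once the plugin is loaded, indexing of a mailbox can also be forced by 
hand to test it (the username is a placeholder):

doveadm index -u user@example.com INBOX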


If I run either of these two commands from the guide:
curl http://5.39.2.59:8987/solr/dovecot/update?optimize=true
curl http://5.39.2.59:8987/solr/dovecot/update?commit=true
I get




<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">2</int>
  </lst>
</response>



Is this right?  Have I forgotten something, or am I doing something wrong?
If you have time to look or to try any queries, the server at 
http://5.39.2.59:8987/solr/#/ is accessible without a password.
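
One quick way to check whether anything was indexed at all is a match-all 
count query against the core configured above:

curl 'http://5.39.2.59:8987/solr/dovecot/select?q=*:*&rows=0'

numFound in the response should be greater than zero once indexing has run.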

thanks for your time!



On 06/08/2019 23:30, Shawn Heisey via dovecot wrote:

On 8/5/2019 12:02 PM, HTMLServices.it via dovecot wrote:
Given that I am not an expert, I have been doing tests with Solr.  I 
installed it following the guide, but I see no benefit on searches: a 
body search across 28000 mails takes a few minutes and then times out.


If the problems you're having are with Solr itself and not fts_solr, 
then the Solr mailing list or IRC channel is probably a better place 
to get help.


https://lucene.apache.org/solr/community.html#mailing-lists-irc

The following info, combined with your document count of 28000, will 
be very useful:


https://cwiki.apache.org/confluence/display/solr/SolrPerformanceProblems#SolrPerformanceProblems-Askingforhelponamemory/performanceissue 



When gathering the screenshot, be sure that the process listing is 
sorted as described.


With no other info to go on, I suspect that maybe your Solr install is 
still configured with a 512MB heap and that the heap size needs to be 
increased to handle the index you've built.


I did several tests but I can't get it to work; this is the test 
server link: http://5.39.2.59:8987/solr/#/


If you have configured fts_solr with a URL that contains a # 
character, it's never going to work.  URLs containing # are only 
usable in a browser and will not function correctly anywhere else.


Thanks,
Shawn




Re: auth-policy crashing

2019-08-07 Thread James via dovecot

On 07/08/2019 11:02, Aki Tuomi via dovecot wrote:


Why use both check before and after auth?  roundcube webmail reports an 
error with only auth_policy_check_before_auth.  I cannot see why.  The 
simple and lazy solution is to use double auth_policy_check_!

...


The double-check is for places which want to implement something like
COS or want to perform validations in the policy server *after* we know
the user identity.  The first check is done before we even know whether
the user or the credential(s) are valid.


I can see why both before and after are options.  My simpler policy 
does not need both: I perform whitelist, blacklist, geo and greylist 
checks and do not cross-reference these with the user.  I can't see why 
roundcubemail fails without both; the IMAP exchange with roundcubemail 
should not be aware of the policy server.  I was spending [wasting] too 
much time looking for an answer and gave up.
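
(For reference, the two checks are controlled by these settings; the URL 
is a placeholder:)

auth_policy_server_url = http://policy.example.com:4001/
auth_policy_check_before_auth = yes
auth_policy_check_after_auth = yes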


Re: auth-policy crashing

2019-08-07 Thread Aki Tuomi via dovecot


On 7.8.2019 11.51, James via dovecot wrote:
> On 06/08/2019 06:46, Aki Tuomi via dovecot wrote:
>>
>> On 2.8.2019 13.45, James via dovecot wrote:
>>> My auth process is dumping core.  This happens several times per day
> ...
>
>> There is an easy fix for this, attached.
>
> Patch applied; no core dump in 24 hours.
>
> This appears to have fixed the problem.  I found that it crashed when
> the policy server responded too quickly.  As the before- and after-auth
> command=allow requests are the same, I cache the first, leading to a
> fast second response.  Removing the cache (nginx proxy_cache ...) must
> have changed the timings and circumvented the crash.  Why use both check
> before and after auth?  roundcube webmail reports an error with only
> auth_policy_check_before_auth.  I cannot see why.  The simple and lazy
> solution is to use double auth_policy_check_!
>
> Thank you Aki for looking at this and finding a solution so quickly.


The double-check is for places which want to implement something like
COS or want to perform validations in the policy server *after* we know
the user identity.  The first check is done before we even know whether
the user or the credential(s) are valid.

Aki



Re: auth-policy crashing

2019-08-07 Thread James via dovecot

On 06/08/2019 06:46, Aki Tuomi via dovecot wrote:


On 2.8.2019 13.45, James via dovecot wrote:

My auth process is dumping core.  This happens several times per day

...


There is an easy fix for this, attached.


Patch applied; no core dump in 24 hours.

This appears to have fixed the problem.  I found that it crashed when 
the policy server responded too quickly.  As the before- and after-auth 
command=allow requests are the same, I cache the first, leading to a fast 
second response.  Removing the cache (nginx proxy_cache ...) must have 
changed the timings and circumvented the crash.  Why use both check before and 
after auth?  roundcube webmail reports an error with only 
auth_policy_check_before_auth.  I cannot see why.  The simple and lazy 
solution is to use double auth_policy_check_!


Thank you Aki for looking at this and finding a solution so quickly.


Re: [BUG?] Double quota calculation when special folder is present

2019-08-07 Thread Timo Sirainen via dovecot

> On 6 Aug 2019, at 21.08, Mark Moseley via dovecot  wrote:
> 
>> 
>> I've bisected this down to this commit: 
>> 
>> git diff 
>> 7620195ceeea805137cbd1bae104e385eee474a9..97473a513feb2bbd763051869c8b7b83e24b37fa
>> 
>> Prior to this commit, anything updating the quota would do the right thing 
>> for any .INBOX. folders (i.e. not double count the contents of 
>> "INBOX" against the quota). After this commit, anything updating quota (new 
>> mail, quota recalc, etc) does the double counting of INBOX.
> 
> Thank you for the bisect! We'll look into this.
> 
> Hi. I was curious if there were any fixes for this? We're still affected by 
> this (and I imagine others are too but don't realize it). Thanks!

Looks like this happens only with Maildir++ quota.  As a workaround you could 
switch to dict-file or "count" quota.  Anyway, added to internal tracking as 
DOP-1336.
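
A count quota setup would look something like this (the limit is a 
placeholder):

plugin {
  quota = count:User quota
  quota_rule = *:storage=1G
}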



Re: Dovecot replication and userdb "noreplicate".

2019-08-07 Thread Reio Remma via dovecot

On 07/08/2019 09:29, Sami Ketola wrote:



On 6 Aug 2019, at 23.52, Reio Remma via dovecot  wrote:

service doveadm {
 user = vmail
}

This seems to have fixed it. Here's hoping for no unforeseen side-effects. :)

I still need allow dovecot_t ssh_exec_t:file { execute execute_no_trans open 
read }; for selinux, but there are no more errors in maillog and it can read 
both the key and known_hosts (from either /home/vmail/.ssh/known_hosts or 
/etc/ssh/ssh_known_hosts).

There might be. What we usually do is just allow the dsync user to sudo 
doveadm dsync-server and then add sudo to the dsync remote command.

Sami



Thanks! I'll keep it in mind in case I run into problems with doveadm as 
vmail. So far so good.
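
(For reference, that allow rule can be generated and loaded as a local 
policy module; a sketch, assuming the denials are in the audit log:)

grep dovecot_t /var/log/audit/audit.log | audit2allow -M dovecot_ssh
semodule -i dovecot_ssh.pp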


Thanks again!
Reio


Re: Dovecot replication and userdb "noreplicate".

2019-08-07 Thread Sami Ketola via dovecot



> On 6 Aug 2019, at 23.52, Reio Remma via dovecot  wrote:
> 
> service doveadm {
> user = vmail
> }
> 
> This seems to have fixed it. Here's hoping for no unforeseen side-effects. :)
> 
> I still need allow dovecot_t ssh_exec_t:file { execute execute_no_trans open 
> read }; for selinux, but there are no more errors in maillog and it can read 
> both the key and known_hosts (from either /home/vmail/.ssh/known_hosts or 
> /etc/ssh/ssh_known_hosts).

There might be. What we usually do is just allow the dsync user to sudo 
doveadm dsync-server and then add sudo to the dsync remote command.
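
A sketch of that arrangement (host, user and paths are assumptions, not 
from this thread):

# on the replica, /etc/sudoers.d/dsync:
vmail ALL=(root) NOPASSWD: /usr/bin/doveadm

# invoking the remote side through sudo:
doveadm sync -u user@example.com ssh vmail@replica.example.com sudo doveadm dsync-server -u user@example.com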

Sami