On 9/6/2021 12:58 AM, Vincent Brillault wrote:
Hi Alessio,
Does this optimization also produce lower RAM requirements on the Solr server?
Unfortunately we didn't measure this before/after the change. Since we
are removing features (position information), I wouldn't expect the
memory requirement to increase, but I'm no expert.
To be honest, I'v
Hi Vincent,
thanks for your investigations!
On 01/09/21 11:27, Vincent Brillault wrote:
Dear all,
Just a status update, in case this can help others.
We went ahead, disabled position information indexing, and re-indexed our
mail data (over a couple of days to avoid overloading the systems). Before
the re-indexing we had 1.33 TiB in our Solr indexes. After re-index
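For anyone wanting to try the same thing, the change itself is a
one-attribute tweak in the Solr schema. A minimal sketch, assuming the
field names of the stock dovecot solr-schema.xml (adjust to your own
schema before re-indexing):

```xml
<!-- Sketch against the stock dovecot solr-schema.xml; field names are
     assumed, check your own schema. omitTermFreqAndPositions="true"
     drops position (and term-frequency) data from the inverted index:
     smaller index files, but phrase queries on these fields stop
     working. Solr also offers omitPositions="true" if you want to keep
     term frequencies. A full re-index is required after the change. -->
<field name="body"    type="text" indexed="true" stored="false"
       omitTermFreqAndPositions="true"/>
<field name="subject" type="text" indexed="true" stored="false"
       omitTermFreqAndPositions="true"/>
```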
On 8/5/2021 1:00 AM, Vincent Brillault wrote:
Indeed, I should have. I'm using Solr 8.6, which is clearly not the same
as Solr 6.2.0, but looking at more recent versions of the documentation,
I found no information about the use of each file. That's why I was
mentioning it was slightly outd
Dear Shawn,
Thanks for your very complete answer!
> This is completely off-topic for the dovecot list. I am involved with
> the Solr project, so I can discuss it. My message will also be off
> topic here.
Sorry, maybe I didn't explain myself properly. I asked on the dovecot
mailing list as I'm
On 8/4/2021 1:24 AM, Vincent Brillault wrote:
Dear all,
On a local dovecot cluster currently hosting roughly 2.1TB of data,
using Solr as its FTS backend, we now have 256GB of data in Solr, split
into 12 shards (to which replication adds 256GB of data through 12
additional cores).
I'm now trying to see if we can optimize that data. Looking at o