In terms of SOLR-16879 - any chance you're willing to work on a fix Pierre?
Best,

Jason
The regression was introduced in 9.4.
On Mon, Feb 5, 2024 at 18:31, Pierre Salagnac wrote:
Hi Jason,
A regression was introduced in backup/restore for large collections. This
was reported in a comment of SOLR-16879[1].
Should this be considered as a blocker for 9.5 ?
[1]
We recently had a couple of issues with production clusters because of race
conditions in shard leader election. By race condition here, I mean within a
single node: I'm not discussing how leader election is distributed across
multiple Solr nodes, but how multiple threads in a single Solr node
On 2 Oct 2023 at 22:22, Pierre Salagnac wrote:
Hi Ishan,
Sorry for the late chime in.
Some time ago I filed a Jira for a Solr 8-specific bug:
https://issues.apache.org/jira/browse/SOLR-16843
At that time, I wasn't expecting any more 8.x releases, so I did not open a
PR for it.
I can work on a fix if we have a few more days before the release.
I opened a pull request[1] that fixes the reported case. The issue was with
subqueries using grouped fields like "field:(term1 term2 term3)": only the
first term was skipped when generating the boost query from the fields
specified in the pf parameter.
Unfortunately, this pre-parsing (method splitIntoClauses())
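Illustrative only (edismax's real splitIntoClauses() operates on parsed query clauses; the class and method below are invented): the expected behavior is that every term inside a field-qualified group is excluded from the pf boost phrase, not just the first one.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the intended skipping behavior.
public class PfTermFilter {
    // Returns the terms that should contribute to the pf boost phrase,
    // dropping all terms inside explicit field-qualified groups such as
    // "title:(a b c)" -- the bug described above dropped only the first.
    public static List<String> termsForPfBoost(String userQuery) {
        String stripped = userQuery.replaceAll("\\w+:\\([^)]*\\)", " ");
        List<String> terms = new ArrayList<>();
        for (String t : stripped.trim().split("\\s+")) {
            if (!t.isEmpty()) terms.add(t);
        }
        return terms;
    }
}
```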
It seems there was some pushback due to the complexity and the impact on
the overseer code, so this PR is now probably stale.
I opened a simpler version that introduces a dedicated thread pool for
"expensive" operations. End behavior is the same: we don't execute more
than 5 concurrent expensive
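A sketch of that idea, assuming invented names and mirroring the cap of 5 mentioned above (this is not the actual PR): a dedicated fixed-size pool bounds how many "expensive" operations ever run at once.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: cap concurrency of expensive operations at 5.
public class ExpensiveOpPool {
    public static final int MAX_CONCURRENT = 5;
    private final ExecutorService pool = Executors.newFixedThreadPool(MAX_CONCURRENT);
    private final AtomicInteger running = new AtomicInteger();
    private final AtomicInteger peak = new AtomicInteger();

    public void submit(Runnable expensiveOp) {
        pool.submit(() -> {
            // Track observed concurrency so the cap can be verified.
            int now = running.incrementAndGet();
            peak.accumulateAndGet(now, Math::max);
            try {
                expensiveOp.run();
            } finally {
                running.decrementAndGet();
            }
        });
    }

    public int peakConcurrency() { return peak.get(); }

    public void shutdownAndWait() {
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Operations submitted beyond the cap simply queue inside the executor, so callers never see more than 5 in flight.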
what you think
Thanks
On Thu, Jun 29, 2023 at 15:37, Pierre Salagnac wrote:
Hi Jan,
As far as I know, Solr only supports circuit breakers for queries at the
moment.
We have a custom integration of circuit breakers for indexing (in Solr 8,
so it's not fully aligned with what's in Solr 9) with a custom
UpdateRequestProcessor. Basically, a new instance of every update
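A minimal sketch of the gating logic only, with invented class and parameter names (the real integration wraps something like this in a custom UpdateRequestProcessor, which needs the Solr APIs and is omitted here): a load supplier is consulted per update, and overloaded nodes reject the request so clients can back off.

```java
import java.util.function.DoubleSupplier;

// Hypothetical sketch of an indexing circuit-breaker check.
public class IndexingLoadGate {
    private final DoubleSupplier loadSupplier; // e.g. system load average
    private final double maxLoad;

    public IndexingLoadGate(DoubleSupplier loadSupplier, double maxLoad) {
        this.loadSupplier = loadSupplier;
        this.maxLoad = maxLoad;
    }

    // Called for each update; false means the node is overloaded and the
    // update should be rejected (e.g. with a 503 so clients can retry).
    public boolean permitUpdate() {
        return loadSupplier.getAsDouble() <= maxLoad;
    }
}
```

In a real UpdateRequestProcessor, processAdd() would consult this gate and throw before the document reaches the update chain.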
Jason, I haven't done much scalability testing, so it's hard to give
accurate numbers on when we start having issues.
For the environment I looked at in detail, we run a 16-node cluster, and the
collection I wasn't able to back up has about 1500 shards, ~1.5 GB each.
Core backups/restores are
Thanks for starting this thread David.
I've been working on this internally, since we have issues (query failures)
during backups of big collections because of IO saturation.
I see two different approaches to solve this:
1. Throttle at the IO level, like David mentioned.
2. Limit the number of
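The second approach can be sketched with a plain semaphore, assuming invented names (not a real Solr API): bounding how many core backups copy files at once leaves disk bandwidth for queries.

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch: limit concurrent core backups per node.
public class BackupThrottle {
    private final Semaphore permits;

    public BackupThrottle(int maxConcurrentCoreBackups) {
        this.permits = new Semaphore(maxConcurrentCoreBackups);
    }

    // Blocks until a slot is free, runs the index-file copy, then frees it.
    public void runCoreBackup(Runnable copyFiles) {
        permits.acquireUninterruptibly();
        try {
            copyFiles.run();
        } finally {
            permits.release();
        }
    }

    public int availablePermits() { return permits.availablePermits(); }
}
```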
On Thu, Jun 1, 2023 at 10:29 AM Pierre Salagnac wrote:
I know the autoscaling framework no longer exists in Solr 9+, but I
wanted to share here a bug we found in it.
There are probably still plenty of Solr 8 users relying on this framework.
The triggers use timestamps returned by the JVM call System.nanoTime(), but
according to the
Hello everyone,
I'm investigating issues where a replica ends up with no leader, and I
wonder whether the specific cases I hit were already discussed somewhere.
More specifically in the code, I (with the help of my colleagues)
identified two gaps where we exit the leadership process without going
I discussed this issue offline with David, and I'm now working on a code
change to make the preferredLeader replica become the leader when we
register a replica.
The idea is: when we register a replica from ZooKeeper, we check whether it
has the preferred leader flag. When true, we tell the current
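The registration-time check described above reduces to a small predicate; this is a hypothetical sketch (the real change would live in Solr's replica registration path and ask the current leader to step down via the collections API).

```java
// Hypothetical sketch of the check run when a replica registers.
public class PreferredLeaderCheck {
    // Trigger a leadership rebalance only when the registering replica
    // carries the preferredLeader flag but is not leader yet.
    public static boolean shouldRebalanceOnRegister(boolean preferredLeader,
                                                    boolean isCurrentLeader) {
        return preferredLeader && !isCurrentLeader;
    }
}
```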