Multiple replicas of the same shard will execute their autocommits at different wall-clock times. Thus there may be a _temporary_ window when a newly indexed document is found by a query that happens to be served by replica1 but not by one served by replica2. If you have a timestamp field in the doc, and a soft commit interval of, say, 1 minute, you can test whether this is the case by adding &fq=timestamp:[* TO NOW-2MINUTE], which excludes anything indexed in the last two minutes. With that filter in place you should see identical results from both replicas.
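A small sketch of that check, building the same query with and without the extra fq. The field name "timestamp" and the collection name are assumptions from this thread; substitute whatever date field your schema actually has:

```python
from urllib.parse import urlencode

# Hypothetical example: build the query twice, once with an fq that excludes
# documents indexed in the last two minutes (i.e. anything that may not yet
# be soft-committed on every replica). If both replicas agree on numFound
# once the fq is applied, the mismatch is just autocommit timing.
base_params = {"q": "category_id:5a0aeaeea6bc7239cc21ee39", "wt": "json"}

raw_query = urlencode(base_params)
filtered_query = urlencode({**base_params,
                            "fq": "timestamp:[* TO NOW-2MINUTES]"})

print("/solr/paymetryproducts/select?" + raw_query)
print("/solr/paymetryproducts/select?" + filtered_query)
```

Run the filtered form against each node in turn; identical counts point at commit timing rather than a real inconsistency.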
Best,
Erick

On Thu, Sep 19, 2019 at 1:20 AM Jayadevan Maymala <jayade...@ftltechsys.com> wrote:
>
> Hi all,
>
> There is something "strange" happening in our Solr cluster. If I execute a
> query from the server, via the solarium client, I get one result. If I execute
> the same or a similar query from the Admin Panel, I get another result. If I go
> to Admin Panel - Collections - Select Collection and click "Reload", and
> then repeat the query, the result I get is consistent with the one I get
> from the server via the solarium client. So I picked the query that was getting
> executed from the Solr logs. Evidently, the query was going to different nodes.
>
> The query that went from the Admin Panel went to node 4 and fetched 0 documents:
>
> 2019-09-19 05:02:04.549 INFO (qtp434091818-205178)
> [c:paymetryproducts s:shard1 r:*core_node4*
> x:paymetryproducts_shard1_replica_n2] o.a.s.c.S.Request
> [paymetryproducts_shard1_replica_n2] webapp=/solr path=/select
> params={q=category_id:5a0aeaeea6bc7239cc21ee39&_=1568868718031} *hits=0*
> status=0 QTime=0
>
> The query that went from the solarium client running on a server went to node 3
> and fetched 4 documents:
>
> 2019-09-19 05:06:41.511 INFO (qtp434091818-17)
> [c:paymetryproducts s:shard1 r:*core_node3*
> x:paymetryproducts_shard1_replica_n1] o.a.s.c.S.Request
> [paymetryproducts_shard1_replica_n1] webapp=/solr path=/select
> params={q=category_id:5a0aeaeea6bc7239cc21ee39&json.nl=flat&omitHeader=true&fl=ID&start=0&rows=900000&wt=json}
> *hits=4* status=0 QTime=104
>
> What could be causing this strange behaviour? How can I fix this?
>
> Solr Version: 7.3
> Shard count: 1
> replicationFactor: 2
> maxShardsPerNode: 1
>
> Regards,
> Jayadevan
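One more diagnostic, assuming the replica and core names from the logs above: you can query each core directly with distrib=false so Solr answers from that core alone instead of routing the request. A sketch (the host names are placeholders, and this only builds the URLs rather than hitting a live cluster):

```python
from urllib.parse import urlencode

# distrib=false restricts the request to the core it is sent to, so the two
# replicas' numFound values can be compared directly. Hosts/ports below are
# placeholders for the nodes carrying each replica.
params = urlencode({
    "q": "category_id:5a0aeaeea6bc7239cc21ee39",
    "distrib": "false",  # answer from this core only, no routing
    "rows": "0",         # we only need numFound, not the documents
    "wt": "json",
})

for core in ("paymetryproducts_shard1_replica_n1",
             "paymetryproducts_shard1_replica_n2"):
    print(f"http://<node-host>:8983/solr/{core}/select?{params}")
```

If the counts still differ after the replicas have had time to soft-commit, that points at a genuine replica inconsistency rather than commit timing.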