Yes, the requirement (for now) is not to return any results. I think
they may change the requirements, pending their return from the holidays.
If so, then check for those words in the query before sending it to Solr.
That is what I think too.
Thinking further, using stopwords for this, the
Hi Alex
The business requirement (for now) is not to return any results when the
search keywords are cigarette-related. The business user team will
provide the list of cigarette-related keywords.
I will digest, explore, and research your suggestions. Thank you.
On 30/9/2020 10:56 am, Alex
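A minimal sketch of that kind of pre-query check, assuming the business team's
list ends up as a simple in-memory set (the keywords, names and client
interface below are placeholders, not anything from this thread):

# Hypothetical pre-query filter: if any restricted keyword appears in the
# user's query, skip Solr entirely and return an empty result set.
RESTRICTED_KEYWORDS = {"cigarette", "cigar", "tobacco"}  # list to come from the business team

def search(query_text, solr_client):
    terms = query_text.lower().split()
    if any(term in RESTRICTED_KEYWORDS for term in terms):
        return []  # business rule: no results for these searches
    return solr_client.search(query_text)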
raj.yadav wrote:
> In cases for which we are getting this warning, I'm not able to extract
> the `exact solr query`. Instead, the logger logs the `parsedquery` for
> such cases.
> Here is one example:
>
>
> 2020-09-29 13:09:41.279 WARN (qtp926837661-82461) [c:mycollection
> s:shard1_0 r:core_
Hello all
We are using Apache Solr 7.7 on the Windows platform. The data is synced to
Solr in batches using Solr.Net commits. The documents are very large (~0.5 GB
on average) and Solr indexing is taking a long time. The total document size is
~200 GB. As the Solr commit is
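The poster's setup is Solr.Net, but the batching pattern being described looks
roughly like this pysolr sketch (URL, batch size and commitWithin value are
assumptions, not from the thread):

import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/mycollection", timeout=300)

def index_in_batches(docs, batch_size=100):
    # Send documents in batches and avoid a hard commit on every request;
    # commitWithin lets Solr fold commits together.
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) >= batch_size:
            solr.add(batch, commit=False, commitWithin=60000)
            batch = []
    if batch:
        solr.add(batch, commit=False, commitWithin=60000)
    solr.commit()  # one explicit commit at the end of the run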
Can someone help troubleshoot some issues that are happening with DIH?
Solr version: 8.2; ZooKeeper 3.4
SolrCloud with 4 nodes and 3 ZooKeepers
1. Configured DIH for MS SQL with the MSSQL JDBC driver, and when trying to pull
the data from MS SQL it's connecting and fetching records, but we do see
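For reference, a minimal data-config.xml of the kind described; the host,
credentials, table and field names below are placeholders, not details from
this thread:

<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://dbhost:1433;databaseName=mydb"
              user="solr_reader" password="********"/>
  <document>
    <!-- placeholder entity: pulls rows and maps columns to Solr fields -->
    <entity name="item" query="SELECT id, title FROM items">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>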
We do this sort of thing outside of Solr. The indexing process includes creating
a feed file with one JSON object per line. The feed files are stored in S3 with
names that are ISO 8601 timestamps. Those files are picked up and loaded into
Solr. Because S3 is cross-region in AWS, those files are als
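A small sketch of that feed-writing step, assuming boto3 and a placeholder
bucket name (the loader that later reads the file back out of S3 and posts it
to Solr is a separate process):

import json
from datetime import datetime, timezone

import boto3

def write_feed_file(docs, bucket="my-feed-bucket"):
    # One JSON object per line, uploaded under an ISO 8601 timestamp key.
    key = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ") + ".jsonl"
    body = "\n".join(json.dumps(doc) for doc in docs)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))
    return key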
>whether we should expect Master/Slave replication also to be deprecated
It had better not ever be deprecated. It has been the most reliable mechanism
for its purpose. SolrCloud isn't going to replace standalone; if it does,
that's when I guess I stop upgrading or move to Elastic.
On Wed, Sep 30, 202
Based on the thread below (reading "legacy" as meaning "likely to be deprecated
in later versions"), we have been working to extract ourselves from
Master/Slave replication.
Most of our collections need to be in two data centers (a read/write copy in
one local data center: the disaster-recovery-
I’m not clear on the requirements. It sounds like the query “cigar” or “cuban
cigar”
should return zero results. Is that right?
If so, then check for those words in the query before sending it to Solr.
But the stopwords approach makes it sound like the requirement is different. Could
you give
some examp
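To illustrate the difference: a stopwords approach usually means a
StopFilterFactory in the analysis chain, which quietly drops the restricted
terms at query time, so "cuban cigar" would still search (and match) on
"cuban" rather than returning zero results. A sketch, with a made-up words
file name:

<fieldType name="text_filtered" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- restricted terms are removed from the token stream, not blocked -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="restricted_terms.txt"/>
  </analyzer>
</fieldType>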
You may also want to look at something like: https://docs.querqy.org/index.html
ApacheCon had (is having..) a presentation on it that seemed quite
relevant to your needs. The videos should be live in a week or so.
Regards,
Alex.
On Tue, 29 Sep 2020 at 22:56, Alexandre Rafalovitch wrote:
>
>
Hi,
I went through the other queries for which we are getting the `The request took
too long to iterate over doc values` warning. As pointed out by Erick, I have
cross-checked all the fields used in the query, and there is no field we are
searching against that has indexed=false and docValues=true.
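For reference, the kind of field definition Erick's suggestion points at looks
like the sketch below (field and type names are only examples). Searching
against a field that is not indexed but has docValues enabled forces Solr to
iterate doc values instead of using the inverted index:

<!-- not indexed, but docValues on: queries against it walk doc values -->
<field name="price" type="plong" indexed="false" stored="false" docValues="true"/>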
Increasing the number of rows should not have this kind of impact in either
version of Solr, so I think there’s something fundamentally strange in your
setup.
Whether returning 10 or 300 documents, every document has to be scored. There
are two differences between 10 and 300 rows:
1> when retu
On 30/09/2020 05:14, Rahul Goswami wrote:
> Thanks for sharing this Anshum. Day 1 had some really interesting sessions.
> Missed out on a couple that I would have liked to listen to. Are the
> recordings of these sessions available anywhere?
The ASF will be uploading the recordings of all sessions